Apple has officially announced that its annual Worldwide Developers Conference (WWDC) 2026 will take place from June 8 to June 12, continuing its now-established hybrid format. The keynote, which typically previews the company’s software roadmap for the year, will be held at Apple Park in Cupertino, California, and streamed globally. The tech titan is expected to use the event to reveal a more advanced AI-powered Siri and to expand its AI capabilities across devices.
Notably, WWDC remains Apple’s most important software-focused event of the year, bringing together developers from around the world to preview upcoming tools, frameworks, and operating system updates. A major highlight this year will be the unveiling of iOS 27, the next version of Apple’s iPhone operating system. Unlike some earlier updates that centered on visual redesigns, iOS 27 is expected to emphasize performance improvements, better battery efficiency, and overall system stability. The tech giant is likely refining the design language introduced in recent versions while optimizing the experience across both new and older iPhone models.
Along with iOS, the company will introduce updates to macOS, iPadOS, watchOS, tvOS, and visionOS. These platforms are likely to receive shared improvements, particularly in AI integration and cross-device functionality. At the same time, Apple is expected to continue its transition away from Intel-based Macs, with WWDC 2026 possibly signaling reduced or ended support for older Intel machines as the lineup fully moves to Apple Silicon. While the event focuses on software, updates to devices like the Mac mini and Mac Studio, along with AI-driven or smart home hardware, could also be announced.
However, the most prominent shift is expected to come from Siri. The Tim Cook-led firm is preparing a major overhaul that could fundamentally change how the assistant works, including transforming it into a chatbot-style system, internally codenamed ‘Campos’. Reports suggest this new version will support both voice and text interactions, allowing users to have more natural, back-and-forth conversations rather than issuing one-off commands. It is also being designed to handle complex, multi-step tasks across apps, bringing its capabilities closer to modern AI assistants like ChatGPT.
A key part of this transformation is Apple’s partnership with Google to use Google’s Gemini AI models as the foundation for the next-generation Siri. Apple is reportedly paying around $1 billion annually for access to these models, some of which are built with over a trillion parameters. The models are expected to power features like advanced language understanding, summarization, and task planning, while Apple continues to integrate them with its own on-device intelligence.
Importantly, the iPhone maker is maintaining control over how responses are generated and delivered, ensuring that user data remains within its ecosystem and is processed either on-device or through its Private Cloud Compute infrastructure. This upgrade is part of Apple’s broader ‘Apple Intelligence’ strategy, which has been in development since 2024 but faced delays due to technical challenges. Initially, the company explored building its own large-scale AI models and even evaluated alternatives from OpenAI and Anthropic before finalizing the Gemini partnership.
The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting.