
Interfaces for AI or AI as an interface?
Apple has often been the de facto trendsetter in the tech world. With AI, they seem to be going against the grain. What’s going on?
by Edoardo Maggio
Whether we are at the beginning of a so-called “intelligence explosion” is a matter for tomorrow’s history books. However, there are clear signs that the field is heating up beyond the waves of hype, which oddly tend to overemphasize the wrong aspects even as they unwittingly, and correctly, underscore the phenomenon’s magnitude. AI seems to be exactly such a case, with Apple’s formal baptism sanctioning its relevance to the masses (the kind of treatment that other long-buzzed-about, tech-adjacent trends like crypto or the metaverse never got).
Apple’s position tends to become an industry cornerstone, per the famous adage that Cupertino ships something only when it’s ready, mature, and significant. They’ve mostly gotten it right for the past two decades or so (some might argue even longer), but, unlike previous introductions, their flavor of AI, aptly named Apple Intelligence, feels stuck between a rock and a hard place. The gist of Apple’s implementation is that AI is not the all-encompassing, infrastructural technology destined to underpin all future endeavors, which is the position of fellow giants Google, Microsoft, and (somewhat reluctantly) Meta.
AI is but another tool to add to the company’s sprawling-yet-polished Swiss Army knife of hardware products, software suites, and services. A feature baked into the familiar interface, designed to address specific issues in specific contexts, barely dipping its toes into the most basic LLM (Large Language Model) capabilities, like summarization and smart autocomplete. For better or worse, Apple doesn’t seem to want to show it’s playing the game so much as to suggest that the game’s importance has been inflated. There is a clear hierarchy, and the company’s long-term strategies haven’t been challenged. Odd, but when it’s Apple speaking, where’s the narrative violation?
Nobody has a crystal ball; the house of cards could fall, and Apple could nimbly pivot to wherever the next-next-big-thing goes. However, one decision casts some doubt on the iPhone company’s playbook: when called upon to answer complex questions, Apple Intelligence will defer to none other than OpenAI’s ChatGPT, with very explicit branding. Why is this? Was Apple unable to build a competitive product, or was it simply cautious and unwilling to do so?
OpenAI, Big Tech, and indeed a fleet of startups in Silicon Valley and virtually everywhere else in the world are working under the assumption that the impact of AI will be no smaller than that of the iPhone — and, for some, much greater. The explicit goal of building an artificial general intelligence (AGI) is so radical that it would completely upend how we interact with technology, each other, and the world, making the idea of AI as a smart tool to be integrated into today’s interfaces seem quaint.

Despite their substantial limitations, mostly due to a lack of proper agency (i.e., the inability to act in the world by chaining software together), current AI systems don’t venture beyond the chatbot interface they were designed for. And, of course, it’s not at all clear that a text-based chat will become a universal interface for software in the way we think of apps, touchscreens, and gestures today. There may one day be a combination of text and voice that closely matches the primary interfaces of human communication, but we are still a long way from that.
Yet even a rudimentary AGI (if such a thing can exist) would, in principle, render all other interfaces useless. Since we can’t conjure outputs out of thin air, we’ve created tools to produce them, and interfaces (natural or designed) to handle those tools. Sometimes, playing with the interface can become a valuable exercise in itself, and its higher forms can rise to artistry (when someone paints, the act of painting can be more valuable than the final product). But for all the things that are necessary, and for which we have created tools and interfaces to accomplish them more quickly, an AI may truly be the ultimate interface. Indeed, the job of an AI is to understand a user’s needs and intentions and come up with a solution, an output, that collapses all the necessary but tedious, complex steps in between. A tool to will something into existence.
It may sound overly grandiose, but it’s exactly the kind of scope the big AI labs are explicitly working toward. Not a tool inside a computer, but something much further up the ladder of abstraction: a mind that can use a computer; soon, perhaps, with a physical embodiment capable of interacting with the world itself. And do stuff, hopefully at our command. As noted above, today’s LLMs are not yet powerful enough, but they already operate under a radically different assumption: they are probabilistic rather than deterministic systems, and their mode of interaction is not so much the chatbox they are confined to as language itself.
This radical rethinking of a major substrate of society would have such monumental implications that trying to picture a world where an AGI actually exists and operates is close to impossible. This makes Apple’s position seem untenable, albeit fascinating: do they genuinely believe that AI will simply flow into the current state of things and nothing more? Or are they secretly panicking? And if neither, are all the other companies putting far too many of their eggs in the AI basket?
The dissonance between Apple’s vision and that of the AGI pioneers isn’t merely a technological disagreement; it’s a philosophical schism about the future of human-computer interaction, and then some. While Apple polishes the familiar, integrating AI as just another feature, the AI vanguard dreams of obliterating interfaces altogether, envisioning a world where the boundary between intention and action blurs into insignificance. That’s radical.
This is the crossroads we stand at: a refinement of the known, or a leap into the radically unknown. Tomorrow’s interface is being conceptualized today, and the stakes are high. Will “designing for the AI age” cease to make sense, and will the very notion of “using a computer” become as obsolete as the rotary telephone? It’s hard to imagine, but that’s where we’re headed. And even short of superintelligent machines, we humans are pretty good at achieving our goals. We may once again get what we want.