Emergent capabilities, uncontrollable realities

Tech companies have entered the AI race in pursuit of an unprecedented evolutionary step. So when something smarter than us is born, can we keep it in check?

by Edoardo Maggio


Michael Dziedzic / Unsplash

17 June 2024

The history of technology is laden with tools. From the wheel onwards, most of what we would consider technology has been about using our brains to enhance our capabilities: where our bodies couldn’t reach, a tool would fill the gap. Thousands of years of history have played out this way, making civilizations across the globe that much smarter with each iterative step. Some inventions were tools outright, while others, like electricity, harnessed a power that was already there. The latest milestone in our evolutionary journey is undoubtedly the computer: it’s not just a tool but a collection of tools and a platform on which other tools can be built.

The computer shifted the paradigm from atoms to bits and redefined how humans could “get things done.” It effectively accelerated technological progress along an unprecedented axis, enabling mind-bogglingly fast advancements in virtually every field. Computers supercharged human evolution, ultimately changing everything at such a disruptive, breakneck pace that parts of modern society are still coming to terms with it. As Steve Jobs famously put it, computers are like a bicycle for the mind.

Indeed, for the past fifty years or so, humanity has grabbed the bike and run with it: first came mainframes, then personal computers, then smartphones. In 2011, Marc Andreessen — who co-created Mosaic, one of the first web browsers — aptly pointed out that software was “eating the world.” And it did: everything became software, and software became everything. You’d be hard-pressed to find any aspect of our interconnected reality that remains entirely untouched by software, whatever the layer of the stack. As a result, the technology industry itself — those who design, create, maintain, and deploy the underlying framework — has been flooded with capital, both human and financial. These resources quickly fed into a (mostly) virtuous cycle that reshaped huge chunks of our economy and social sphere, bringing both real change and its share of misguided disruption.

We could rightly identify tech with Silicon Valley, and although the ecosystem extends far beyond it (notably into Asia), the Californian hub is still where the shots are called — for better or worse. When technology married venture capital, it became clear that innovation couldn’t be an only child. Its more nefarious sibling, hype, soon followed, and some are still reeling from its offspring: Juicero, Theranos, Parler, assorted social media shenanigans, Google Glass, Amazon’s Fire Phone, and, more recently, the web3/crypto and metaverse fads.

Even when some of the underlying technologies have value in principle, Silicon Valley’s urge to flood the market with half-baked products — often fueled by venture capital firms like Marc Andreessen’s Andreessen Horowitz (a16z) — overrides market discipline and sometimes defies logic outright.

After the insane (and very real) success of the smartphone as a universal platform, all sorts of stakeholders — technologists and investors, developers and journalists, VCs and analysts — have been wondering what “the next big thing” will be. The interesting ideas behind the crypto world and the metaverse, in particular, died prematurely precisely because of the hype that artificially inflated their value, and because they delivered nothing concrete or actionable — solutions in search of a problem more than anything truly disruptive. However, something potentially far more impactful and profound has been brewing in the industry’s amniotic fluid and is only now breaking the surface: artificial intelligence (AI). In the 2010s, the “public face” of AI was machine learning, a subfield whose algorithms teach computers to sort through unimaginable amounts of data, recognize patterns, and — with the latest wave, aptly named “Generative” Artificial Intelligence (GAI) — create new content. Much like web3/crypto and NFTs two years ago and the metaverse in 2022, GAI has exploded, seemingly becoming the hot new thing overnight.

The defining moment can be pinpointed to the launch of ChatGPT, OpenAI’s chatbot built on a Large Language Model (LLM), which applies GAI principles to text and wraps it all behind a simple chat interface. Ask the machine something or give it a prompt, and a string of legible, sensible text comes out the other end.
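
For a sense of how thin that interface really is, here is a minimal sketch of the same prompt-in, text-out exchange as a developer sees it. It assumes the OpenAI Python SDK and an illustrative model name; the specifics are assumptions for illustration, not a recipe.

```python
# A minimal sketch of "prompt in, text out", assuming the OpenAI Python SDK
# (pip install openai) and an API key in the OPENAI_API_KEY environment variable.
# The model name is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[{"role": "user", "content": "Explain, in one sentence, why the sky is blue."}],
)

# The reply is simply a string of legible, sensible text.
print(response.choices[0].message.content)
```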

Since its launch, ChatGPT has become the fastest consumer application in history to reach 100 million users. Of course, accessing a website is much easier than downloading an app, but that’s part of the point: unlike the vagueness of the metaverse (really a future iteration of the internet that may be a decade or two away) and crypto (whose usefulness beyond currencies — since they are more like unregulated securities than stores of value — is unproven), GAI applications actually exist. They’re being built, deployed, and updated daily, and they can do things that simply couldn’t have been done before — cheaper, faster, and at scale.


Illustration by Francesca Ragazzi

Silicon Valley’s new golden goose is laying gold, and the money pouring in is so loud it drowns out the screams of web3 and metaverse infants being mercilessly unplugged in their digital cradles. So, are LLMs and GAI (which has already colonized image generation and will soon set its sights on music, video, and everything else) real and genuinely disruptive? Is this the new “iPhone moment” we’ve been waiting for? Shouldn’t we at least wait for Apple’s much-anticipated augmented reality headset to make its grand entrance?

I don’t think so. I believe we have every right to say that yes, this is it. After many years of trial and tribulation, the AI genie is out of the bottle — and it will change everything. In fact, it already is, and we’re just a few months into this wave. Does that mean that nothing AI-related will fall victim to the hype? Of course not. But the nature of these systems is so radically different that hyping things up might be entirely unnecessary. AI systems like LLMs are trained to be generalists, not optimized for one specific goal. Their function is as simple as “predict the next word” — and that can be almost anything. Indeed, much of their power lies in their emergent capabilities: things they weren’t explicitly programmed to do, but can do anyway. As these systems grow larger, more (and more complex and robust) emergent capabilities will show up.
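
To make “predict the next word” concrete, here is a deliberately toy sketch: a bigram model counted from a few sentences of text. It is nothing like the deep neural networks behind ChatGPT, and the corpus and function names are invented for illustration, but the basic loop is the same: given what has been written so far, pick a plausible next word, append it, and repeat.

```python
# A toy "predict the next word" loop: a bigram model counted from a tiny corpus.
# Real LLMs use deep neural networks trained on vast datasets, but the loop is
# the same: predict the next token, append it, feed the longer sequence back in.
from collections import Counter, defaultdict

corpus = (
    "the computer is a tool for the mind "
    "the computer is a platform for other tools "
    "the mind builds the tools of the future"
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        # Greedy choice: always take the most frequent continuation.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))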

AI researchers and scientists have wondered what kind of yet-to-be-invented paradigm would make current systems “leap” from being so-called “stochastic parrots” — systems that are ultimately only capable of mixing, matching, and regurgitating the information they are given — into truly “intelligent” general systems, capable of creating things from scratch across a broad range of domains. This kind of AI, called Artificial General Intelligence (AGI), seems to have escaped the realm of science fiction and entered reality when the latest GAI programs like ChatGPT became widely available. And, well, OpenAI’s CEO, Sam Altman, has said that yes, “gradient descent can [deliver AGI]” (gradient descent being the optimization algorithm used to train LLMs).
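
Gradient descent itself is surprisingly simple to state. The sketch below shows the idea on a one-parameter toy problem (the loss function and learning rate are made up for illustration): repeatedly nudge the parameter in the direction that reduces the error. Training an LLM applies the same update rule to billions of parameters at once, with a loss built around predicting the next word.

```python
# A one-dimensional illustration of gradient descent: find the x that minimizes
# the loss (x - 3)^2. LLM training does this over billions of parameters with a
# far more elaborate loss, but the update rule is the same.
def loss(x: float) -> float:
    return (x - 3.0) ** 2

def grad(x: float) -> float:
    return 2.0 * (x - 3.0)  # derivative of the loss with respect to x

x = 0.0            # arbitrary starting point
learning_rate = 0.1

for step in range(50):
    x -= learning_rate * grad(x)  # move against the gradient, downhill

print(f"x = {x:.4f}, loss = {loss(x):.6f}")  # x approaches 3, loss approaches 0
```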

So what should we make of it? I’ve come to believe that this paradigm shift may be more disruptive than even our wildest imagination might suggest. The history of technology, as mentioned earlier, is full of tools; tools we’ve used to speed up or automate processes and increase productivity. Many, perhaps naively, equate AI with a new kind of accelerator. Some things will be replaced, and new ones will sprout up. Because “that’s the way it’s always been.” Except no. It hasn’t. AI is fundamentally different. And not just from any other tool or system since the first industrial revolution, but from everything human civilization has built in its entire lifespan.

This lengthy introduction was necessary to illustrate how, throughout history, we humans have always built “the thing.” But this is the first (and, arguably, the last) time we’re building the thing behind the thing. Not a bicycle for the mind, and not even a rocket for it. We’re building the mind itself — a mind that, by design, is better than ours — like an ant giving birth to a human being. However apt (or technically accurate) the metaphor, the things the artificial mind builds are already better, at least in principle: faster, cheaper, more scalable, and capable of making sense of amounts of data that are simply unfathomable for a human mind. And if they are not yet better today, they will be tomorrow, because AIs are also faster at improving, retraining, and repurposing.

Indeed, generalization is the only thing that keeps them from being superintelligent relative to us (meaning better than all of humanity combined, at any cognitive task), and we’re already training these systems to act autonomously and independently. Give them sufficient agency and capability, and the idea of superintelligent beings spreading among us suddenly becomes not so far-fetched. Our inner, inscrutable desire to play God seems to be driving us down a road that is putting even the most techno-optimistic people on high alert. For some, the creation of superhuman AI systems is just a natural evolution of our inclusive-genetic-fitness-driven goal of “making more copies of ourselves” — perhaps a bit out of distribution, but nonetheless aligned with our desire to reach further and go higher. What we don’t know, however, is how these machines work. We marvel at their inhuman capabilities and have figured out a way — gradient descent — to develop them into something borderline magical. But we have no idea what emergent capabilities will arise next, or what will happen after that. By the time this article is published, things will have changed again, because our drive to develop these systems and figure out where the limits are — and then push them further — is too strong. We have fended off the hype haze and realized that we can steamroll ahead. That’s our desire.

What will the machine desire?