A brief history of interfaces
Exploring the evolution of interfaces and their impact on human interaction from ancient tools to modern digital systems
by Alice Azzolini
Artwork by MAIZE / Wikimedia Commons, iStock, Unsplash
If you think interfaces are all about technology, you’re not alone, but you’re not entirely right either: humans have been “interfacing” since the dawn of time. But what is an interface, exactly?
According to the Online Etymology Dictionary, the word was first used in 1874 as “a plane surface regarded as the common boundary of two bodies.” In 1962, Canadian media theorist Marshall McLuhan used it in the sense of “place of interaction between two systems.” The meaning of interface as we know it emerged just a couple of years later, as “an apparatus to connect two devices.”
In its current use, Wikipedia defines an interface as “a shared boundary across which two or more separate components of a computer system exchange information. The exchange can be between software, computer hardware, peripheral devices, humans, and combinations of these.”
Thus, a computer mouse, an eye, Amazon’s Alexa, an abacus, a washing machine, a telegraph, a woodblock print, a cave painting, and ChatGPT are all different kinds of interfaces. But what’s their purpose?
Why do we interface?
As product designer Ehsan Noursalehi states in his enlightening essay, “Why Do We Interface?”, all interfaces are meant to “retrieve, encode, decode, modify, and/or distribute information.” This is why the history and evolution of interfaces have always been intrinsically intertwined with the history of information technology.
Noursalehi continues, “Throughout history, each form of information technology, such as the alphabet, has corresponded with a different interface — or mechanism — to retrieve, decode, modify, and/or distribute that information. Each of these evolutions in information has, in turn, corresponded with changes in human behavior, each unlocking a more effective way for more humans to maximize the utility of information.”
Let’s then take a look at the history of interfaces and the changes in human behavior that came with their evolution.
The industrial revolution
The first examples of interfaces are cave paintings and abacuses, followed — thanks to the development of the first writing systems — by manuscripts and woodblock prints.
Millennia went by, and interfaces stayed pretty much the same for a long, long time. In the 15th century, the invention of the printing press sparked mass communication: the quantity of information humans could exchange increased significantly, but the nature of interfaces didn’t change; they still didn’t require any interaction or data manipulation. And all actors in the process were human beings.
The first human-machine interfaces appeared around the turn of the 19th century. The machines of the Industrial Revolution, from the Jacquard loom to the telegraph, generated a new need: humans somehow had to interact with them. This is when punch cards, pieces of cardstock that stored data as patterns of punched holes, entered the game. That’s when interfaces became mechanical.
Artwork by MAIZE / Wikimedia Commons, Unsplash
Let’s get digital
In the mid-1960s, an alternative to punch cards emerged: command-line interfaces (CLIs), which allowed users to interact with computers by typing textual commands. Command-line interfaces were not very user-friendly but marked the transition from mechanical to electronic interaction.
The next major revolution came with graphical user interfaces (GUIs), pioneered in 1973 by the Xerox Alto and brought to the mass market by the Apple Lisa (1983), the Macintosh (1984), and Microsoft Windows (1985). But what’s so special about them? “Instead of encoding information via language alone,” states Noursalehi, “the GUI uses a much richer multilayered approach to simultaneously encode information via language, symbolism, hierarchy, and context to produce an interactive interface.”
GUIs are more complicated to design but much more accessible for the average user. They rely on visual metaphors such as desktops, files, and folders, which make them much easier to understand than CLIs.
Look mum, no hands
The simplification of GUIs in the 2000s, driven by touchscreen technology and the rise of smartphones and tablets, led to the boom of social media and messaging apps like Telegram, Instagram, and WhatsApp. From that point on, it’s been hard to keep up with the evolution of interfaces.
In the 2010s, voice user interfaces such as Alexa and Siri made things even more natural: thanks to speech recognition, users could interact with machines just by talking. Advancements in gesture recognition during the same decade led to the boom of AR and VR, which had first been conceived in the mid-20th century. Brain-Computer Interfaces (BCIs) have also made significant progress: implanted BCIs have been tested in humans since the mid-2000s, and in 2024 Neuralink implanted its first brain chip in a human patient, fueling promising advances in neuroprosthetics and the treatment of neurological disorders.
AI-powered interfaces
The next revolution is already here, spearheaded by Artificial Intelligence. AI is dramatically changing the way we interact with technology, and conversational interfaces such as Gemini and ChatGPT are leading us towards ever more frictionless interaction with machines: soon, you may not even notice it’s a machine you’re talking to.
AI will radically change the interaction between humans and technology, and there are many challenges we shouldn’t overlook: persistent gender and racial bias, ongoing accessibility gaps in the interfaces around us, and ethical questions about privacy, accountability, and transparency. The path forward demands more vigilance than ever if we don’t want technology to amplify social, racial, and economic inequalities.