Our future, friendly, robot overlords
The truest manifestation of intelligence in robots would be their ability to fit into our society. We want them to be helpful, without even having to tell them what we need.
In conversation with Giuseppe Averta by Francesca Alloatti and Simone Cesano
Photos by David Vintiner
We often characterize artificial intelligence as software: a system capable of thought within the digital realm, one that can generate text and images immaterially. But what happens when we venture out into the physical world? Where does robotics stand in the quest to give machines human-like intelligence? We put these and other questions to Giuseppe Averta: a roboticist, researcher, and professor at the Polytechnic University of Turin.
Simone Cesano and Francesca Alloatti: Let’s begin at the very core: robotics. What precisely is your area of expertise? And how did you embark on this journey?
Giuseppe Averta: I’m captivated by developing an intelligence for robots that draws inspiration from nature and humans. It’s an abstraction that comes from bioengineering: I used to study the human body and how it works, so the connection was natural when I moved into robotics. The difference is that humans are made of flesh and bone, while machines are crafted from steel. Still, there are a lot of parallels, and I’m intrigued by trying to understand how nature has already solved the challenges we face in robotics and how we can emulate its ingenious solutions. Understanding how nature has adapted to solve the problems of the human body is my most valuable source of knowledge. This approach deviates from the conventional research paradigm.
I want to believe by David Vintiner — Transhumanism is the belief that human beings are destined to transcend their mortal flesh through technology. Transhumanists believe our biology constrains our experience of reality, and they refuse to accept what nature has given us. From bionic limbs and eyes to designing new senses and extending life expectancy, these individuals are redefining what it means to be human. Although these ideas have long lived on the pages of comic books and sci-fi novels, the movement — now a reality — is starting to disrupt industries and individuals in meaningful ways. With technology evolving at an unprecedented rate, further change is imminent. This project documents a critical moment in time, as we enter the next chapter in human evolution. Transhumanist ideas raise some important questions for us all. While we love the efficiency and entertainment technology provides, can we embrace a future where it goes beyond our environment and enters our minds and bodies? Could we reach a point where we gift friends and family cognitive implants and new senses? If we are able to defy death, what are the implications for the meaning of life? And, most importantly, will this evolution divide or unite us? Humans are now Gods. We can now create and design our personal evolution — but do we have the foresight to do it the right way?
How does it deviate?
The typical research approach goes like this: I have a problem, and I try to solve it. The effort goes into seeking a solution. Instead, I start by looking at which crucial problems nature has already solved, because if nature has solved them, it implies that they are really important. And perhaps these problems are still obstacles in robotics. Especially in artificial intelligence, we devise “artificial problems” to test algorithms and establish benchmarks. But that is not very interesting; it is much more interesting to understand the fundamental problems in order to bring this technology into the real world.
And what are those fundamental problems? Manual dexterity comes to mind. So far, teaching a robot to build a wall has proven challenging because the full range of human senses is still required to figure out how much mortar to spread, where to put the brick, and how hard to press down on it. Are you working on teaching robots to replicate certain human manual skills, or is your focus more on intellectual capabilities?
It is not necessarily about replication; it is more about enabling skills of interaction and coexistence in our society. We humans have built a society on our scale; the tools we have created are made for our hands, so machines need to be able to interact with the tools we have created for ourselves. This is the challenge for robots: to fit into our society, which is made for us humans (and a few animals). The goal is not to replace or supplant human abilities but to be able to coexist and perhaps even support them.
We know that intelligence is not necessarily all in our brains. For example, we don’t learn to pull our hand out of a fire; we simply know to do so the moment we feel it burning. How do you approach this problem with robots?
It has always amazed me that our brains do not really control our bodies precisely. Instead, the brain imagines and initiates a movement, and the movement then unfolds according to the body’s dynamics, without the brain constantly monitoring the limb’s position. The scientific community calls this phenomenon distributed intelligence. Consider the sense of touch: in robots, touch can be processed at the sensor level, distributing intelligence to the mechanical level and offloading computation from the central hardware. We are accustomed to thinking this way as individuals and as a society: a research team functions more effectively as a whole than as individuals because the effort is distributed, and a “distributed” body works better than a centralized one because the computation is distributed.
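To make the idea concrete, here is a minimal Python sketch (all names, thresholds, and callbacks are hypothetical, not any real robot’s API): a tactile sensor that fires a reflex locally, so the central planner never has to poll for danger.

    # Hypothetical sketch: a reflex handled at the sensor level, so the
    # central planner never monitors for danger (distributed intelligence).

    class TactileSensor:
        """Local intelligence: reacts to heat before the 'brain' is involved."""

        def __init__(self, pain_threshold: float, reflex):
            self.pain_threshold = pain_threshold
            self.reflex = reflex  # callback wired directly to the actuator

        def read(self, temperature: float) -> float:
            if temperature > self.pain_threshold:
                self.reflex()  # fires locally; no round trip to the planner
            return temperature

    class CentralPlanner:
        """The 'brain': initiates movements but does not micro-manage them."""

        def plan(self, goal: str) -> list[str]:
            return [f"reach toward {goal}", "grasp", "retract"]

    def retract_arm():
        print("reflex: arm retracted")

    sensor = TactileSensor(pain_threshold=60.0, reflex=retract_arm)
    planner = CentralPlanner()
    print(planner.plan("mug"))  # slow, deliberate loop
    sensor.read(85.0)           # fast local loop triggers the reflex

The design point is that the fast, protective loop lives next to the sensor, while the slow, deliberate loop lives in the planner.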
Plato argued that our bodies have stomachs and hearts, but it is the brain that controls thought and decision-making. Western education upholds this scale of values, placing abstract, purely mental skills, like mathematics, at the top and physical skills, such as dance and drama, at the bottom. Yet the latter are really just another way of expressing intelligence. Does any of this difference come through in your work with robots? Is it part of your work to give machines an emotional and expressive facet, or do you aim for pure mechanical efficiency?
There is definitely a hierarchy of intelligence in what we are developing. At the bottom there is mechanical intelligence; above it sits the controller through which the hardware moves the hand, which in turn moves by moving the arm. We can go further up this hierarchy of intelligences: how I plan my movement (in research, this is called “task and motion planning”), how I actively control it on the robot, which robot I do it with, and so on. We can even go further into abstraction until we get into psychology. There is a tradition of cognitive robotics that specifically aims to understand what happens when a human interacts with a robot (or an intelligent machine in general) during an emotionally charged task, for example. This is really the uncanny valley theory: what do I feel emotionally when I interact with a machine? We have to be aware that this technology that we engineers are developing also has an impact beyond the rational level.
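One way to picture that hierarchy is as layers of software, each refining the output of the one above it. The following Python sketch is purely illustrative (the functions and waypoints are invented, not any specific robotics stack):

    # Hypothetical sketch of the hierarchy: task plan -> motion plan -> control.

    def task_plan(goal: str) -> list[str]:
        """Top layer: decide *what* to do."""
        return [f"approach {goal}", f"grasp {goal}", f"lift {goal}"]

    def motion_plan(action: str) -> list[tuple[float, float, float]]:
        """Middle layer: decide *how* the arm moves (toy waypoints here)."""
        return [(0.0, 0.0, 0.30), (0.2, 0.1, 0.15), (0.2, 0.1, 0.05)]

    def control(waypoint: tuple[float, float, float]) -> None:
        """Bottom layer: the controller that actually drives the hardware."""
        print(f"  moving joints toward {waypoint}")

    for action in task_plan("cup"):
        print(action)
        for waypoint in motion_plan(action):
            control(waypoint)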
You said earlier that your research tries to solve concrete problems and not just purely academic ones. Tell us about these concrete problems that you are working on, or that should definitely be worked on.
A major challenge in robotics research is learning how to use tools designed for humans. For example, a “classical” robot today grasps objects as stably as possible. We humans, instead, grasp functionally: I grasp a drill by its handle when I want to use it; I grasp it by its head when I want to pass it. This ability to change strategy depending on what you want to do with an object is not always reflected in the literature. The literature says: “What I want to do is grasp the object, and I want to do it as well as I can.” But if I want to use that object, how do I do that?
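A hedged sketch of what task-conditioned grasping might look like in code (the object, its parts, and the stability scores are invented for illustration): the intended task, not stability alone, selects the grasp.

    # Hypothetical sketch: choosing a grasp by intended use, not only stability.

    GRASPS = {
        "drill": {
            "use":       {"part": "handle", "stability": 0.7},
            "hand_over": {"part": "head",   "stability": 0.9},
        },
    }

    def select_grasp(obj: str, task: str) -> dict:
        """A 'classical' planner would simply maximize stability; here the
        task decides which part of the object we take hold of."""
        return GRASPS[obj][task]

    print(select_grasp("drill", "use"))        # grab the handle to drill
    print(select_grasp("drill", "hand_over"))  # grab the head to pass it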
Another thing we are working on a lot is giving robots the ability to understand what a person is doing without being told explicitly. For example, I’m in my kitchen, cooking dinner, and a robot has to figure out on its own what I’m doing: what I’m cooking, what recipe I’m making, and where I am in the recipe. The fact that we understand without having to ask allows us to help one another better. In the jargon, tools are said to have affordances: every object suggests actions we can take. If we see a concave object, we can pour water into it and carry it somewhere. A long tool, on the other hand, may suggest use as a hammer or a knife, but not as a bowl. We need to rediscover these characteristics of everyday objects that suggest activities to us as humans. Robots, in turn, need to discover activities from the objects made for us.
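As a toy illustration of affordances (again a hypothetical sketch, not a real perception pipeline), coarse geometric cues can be mapped to the actions an object suggests:

    # Hypothetical sketch: reading affordances off coarse object geometry.

    def affordances(shape: dict) -> list[str]:
        """Map simple geometric cues to the actions an object suggests."""
        actions = []
        if shape.get("concave"):
            actions += ["pour-into", "carry-contents"]  # bowl-like uses
        if shape.get("elongated"):
            actions += ["strike", "cut"]                # hammer- or knife-like uses
        if shape.get("has_handle"):
            actions.append("grasp-by-handle")
        return actions

    print(affordances({"concave": True}))                        # a bowl
    print(affordances({"elongated": True, "has_handle": True}))  # a tool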
Are there perspectives or avenues of research that are not part of your work and that you would like to connect with? Who would you ideally like to talk to, and about what?
I would have loved to talk to Kasparov the day after a machine beat him for the first time. I think it’s important to understand our relationship with machines, that is, whether we feel smarter or dumber than the machine: did Kasparov feel dumber than the machine when he lost? Is excelling at a particular skill (like playing chess) what constitutes intelligence, or is the machine just a computer that is very good at one particular task?
How much has your work in analyzing and developing robotic solutions that interact with humans changed you? Has working with robots changed you as a person?
I feel it has changed me much more than I would like to admit. When people start doing research, they are extremely proud of themselves and think they are the best in the world. Instead, understanding how nature distributes tasks, and also distributes rewards, has made me realize that within a lab or a community I am just one part of the machinery. I don’t necessarily need to be in the spotlight all the time; I need the right relationships at the right times. In my daily life, that means oiling the machinery of a laboratory made up of many souls and many minds that can make decisions regardless of what I think. You have to give everyone the role they deserve. Distributing energy and rewards helps create a more successful lab and research team. Respecting and supporting the research interests of the people around you allows everyone to excel in the synergy we want to create.
I am thinking of the world of TV quizzes, which is the most popular way to define intelligence. Do you think that even there the concept of human intelligence should be redefined because, for example, some artificial intelligences are much better at mnemonic and verbal association tasks? Or, instead, could the very fact that machines are better at that task make that human ability special? For example, cars go faster than humans, but we are interested in Usain Bolts, humans who run faster than other humans.
I think it will change what we consider really difficult. We may think that playing chess is hard. But maybe the really complex activities are the manual ones. It is much more complicated to write software that makes a robot do manual tasks than it is to write software that makes a robot play chess. I expect a reappraisal of manual work, especially when it involves physical interaction with objects and people. And that will change the perception of what is a complicated activity for us versus a simple activity, depending on what we can do with a machine.
Perhaps what happened to painting with the advent of photography will occur again: suddenly, it will no longer be essential to be able to paint realistically, but rather to apply a different level of conceptualization. This can also be seen in mass-produced objects to which a design has been applied. If an object has been designed by a specific person, it takes on a distinct value because there is a perception that a person stands behind it rather than a chain of anonymous events that merely produced a piece of merchandise. There will be value in the genuinely human factor as opposed to artificially generated content.
Hopefully, we will “use” people for more intellectually elevated, higher value-added activities, such as human interaction. In all of this, we will also need to develop a sensitivity to the impact of the technology we develop, not only on society but also on our planet. It has always struck me that computer science, an intangible science, is extremely energy-intensive. We build deep-learning models that are good at distinguishing cows from dogs, but we don’t measure how much those models consume and, therefore, their impact on our planet. Some studies show that models like GPT pollute as much as four, five, ten, twenty, or even fifty small American towns. The big companies that do deep learning, like Google or Amazon, don’t worry about that at all. If I have the cluster with the biggest model, the most data, and the most information, and it works best, then I’m doing the best thing, right? That is the view of 95% of the people in this field. Instead, we try to simplify the models and make them more efficient, lighter, and greener. “Green” is a very trendy word, but there is a reason behind it that goes beyond trends: if I am running an energy-intensive technology, I must minimize its impact on the climate. On the other hand, our intelligence works the same way: it is efficient.
We don’t have enormous brains, but we can do complex reasoning with limited resources. It must be possible to do the same thing with machines.
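One simple lever in that direction, sketched here with NumPy on a toy dense layer (not a real GPT-scale model), is magnitude pruning: zeroing the smallest weights so the model needs far fewer multiplications.

    # Hypothetical sketch: magnitude pruning, one simple way to make a
    # model lighter, and therefore cheaper to run, without changing its design.
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(512, 512))  # a toy dense layer

    def prune(w: np.ndarray, keep_fraction: float) -> np.ndarray:
        """Zero out the smallest-magnitude weights, keeping only `keep_fraction`."""
        threshold = np.quantile(np.abs(w), 1.0 - keep_fraction)
        return np.where(np.abs(w) >= threshold, w, 0.0)

    sparse = prune(weights, keep_fraction=0.2)
    print(f"nonzero weights before: {np.count_nonzero(weights)}")
    print(f"nonzero weights after:  {np.count_nonzero(sparse)}")  # ~80% fewer

In practice, techniques like pruning, quantization, and distillation trade a little accuracy for large savings in computation and energy.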