A child-machine was born
Unlike fictional child robots, real-life AI was not conceived and raised to connect us with our human nature. But maybe that's a mistake?
In the movie The Creator, Alfie is a child robot designed to make peace with the warmongering humans. Like David, played by Haley Joel Osment in A.I. Artificial Intelligence from 2001, Alfie embodies the most dangerous and unreliable weapon known to us humans: our own emotions. Looking, talking and acting like our own offspring, the child robots melt our hearts and boggle our minds, making us question what we think we know about AI – and ourselves.
At a time when most of us are unsure of what to think and feel about generative AI, the idea of a child-machine helping us be less warlike and more loving is tempting. But do the child robots in Gareth Edwards’ and Steven Spielberg’s movies bear any resemblance to real-life AI?
The short, surprising answer is yes.
When Alan Turing laid the foundation for what we now know as generative AI, he built on the exact same idea of an evolving artificial mind as the one presented in "The Creator". In his 1950 article, Computing Machinery and Intelligence, Turing wrote: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain.”
Fundamental characteristics of the way AI behaves today can be traced back to this early upbringing.
Like Edwards and Spielberg, Turing thought of his ‘child-machine’ (his own word) as an immature, ignorant version of the omnipresent, omniscient AI it was designed to grow into. And like Alfie and David, Turing’s artificial offspring reflected its creator’s ideas about what it means to be “a good child-machine.”
But this is where the resemblance between fictional AI and real-life AI ends – and the short ‘yes’ starts morphing into a longer ‘no’. Because while the creators of Alfie and David focused on the embodiment of their artificial children, Turing went in the opposite direction – unleashing something far more dangerous than our feelings and doubts.
Like any responsible parent, Turing put a lot of thought into shaping his child-machine's personality and education. And, as with any grown-up, fundamental characteristics of the way AI behaves today can be traced back to this early upbringing. Turing focused on three components:
- The initial state of the mind, say at birth,
- The education to which the mind is subjected, and
- Other experience, not to be described as education.
While we would never allow people to perform vital tasks in our companies and societies without checking and testing their personal and professional skills, AI has infiltrated society-sustaining institutions, such as our civil service, media, schools, hospitals, and courtrooms, without any of us knowing what personality and educational background we are dealing with.
By providing important insight into the assumptions built into AI from the very beginning, Turing's paper allows us to get to know our new artificial doctors, teachers, etc. better, while understanding how the nature of this all-encompassing technology affects our own nature.
The fundamental assumption
The assumption behind all the assumptions that guide the design and development of the generative AI we know today is that “there is an obvious connection between this process [of programming and educating a child-machine] and evolution.”
Turing talked about the structure of the child-machine as hereditary material, changes of the child-machine as mutations, and the judgment of the experimenter as natural selection. And it was also in his comparison of the process of developing artificial intelligence with evolution that he offered an answer to the fundamental question of why anyone bothers to develop artificial intelligence in the first place: “One may hope,” he wrote, “that this process will be more expeditious than evolution.” According to Turing, the survival of the fittest is a slow method for measuring advantages. “The experimenter, by the exercise of intelligence, should be able to speed it up.”
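Read with modern eyes, Turing's analogy maps almost word for word onto what we now call an evolutionary algorithm. As a loose sketch of the mechanism (not Turing's own code: the names `judge` and `mutate`, the number-list "machine", and the target are all invented here for illustration):

```python
import random

# Toy rendering of Turing's analogy: the child-machine's structure is the
# "hereditary material", small random changes are "mutations", and the
# experimenter's judgment plays the role of natural selection.
# Here the "machine" is just a list of numbers, and the experimenter
# prefers machines whose numbers sum close to a target.

TARGET = 100

def judge(machine):
    # The experimenter's judgment: smaller distance to the target is better.
    return -abs(sum(machine) - TARGET)

def mutate(machine):
    # A mutation: one small random change to the hereditary material.
    child = machine.copy()
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

def evolve(generations=1000):
    machine = [0] * 10  # the initial state of the mind, "at birth"
    for _ in range(generations):
        candidate = mutate(machine)
        # Selection "by the exercise of intelligence" rather than survival
        # of the fittest: the experimenter keeps the better variant outright.
        if judge(candidate) >= judge(machine):
            machine = candidate
    return machine

result = evolve()
print(sum(result))  # converges toward TARGET
```

The speed-up Turing hoped for is visible in the selection step: nothing has to die, and no generation is wasted, because the experimenter's judgment replaces the slow statistics of survival.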
The fundamental assumption – and motivation – behind generative AI is that a child-machine can advance faster and better than humans (and any other living creature). All it takes is a child-programme and an education process that allows the creator to experiment with teaching such a machine. Although Turing introduced three components, he focused only on 1) the child's mind and 2) education (see below). After introducing 'Other experience', he never mentioned it again.
The assumption about the child's mind
Turing's assumption about the child's mind was as simple as it was provocative. “Presumably,” he stated, “the child-brain is something like a note-book as one buys it from the stationers. Rather little mechanism, and lots of blank sheets.” He added that mechanism and writing are almost synonymous.
And then he declared: "Our hope is that there is so little mechanism in the child-brain that something like it can be easily programmed.”
To compare the child-brain with a note-book full of blank sheets, Turing not only had to ignore the importance of genetics and prenatal development; he also had to disregard the brain functions necessary for the infant to sense, move, scream, and eat.
In 1950, leaving the body out of the mind-equation might not have been a big deal, but a lot has happened since Turing suggested drawing “a fairly sharp line between the physical and the intellectual capacities of a man.”
“By perceiving virtual barriers between our brains and our bodies we see people as more independent and self-motivated than they truly are." (Alan Jasanoff)
There still are, and probably always will be, people who promote the ancient mind-body distinction that used to dominate Western philosophy and religion (just like there are still people who think the earth is flat). But since Turing died in 1954, philosophical thinkers such as the French phenomenologist Maurice Merleau-Ponty have done a marvelous job showing how essential the body is to our ability to understand and make sense of the world.
And today “it is hard scientific research itself that paints a picture of the brain as biologically grounded and integrated into our bodies and environments,” as Alan Jasanoff, professor and director of the MIT Center for Neurobiological Engineering, writes in his 2018 book, The Biological Mind: How Brain, Body, and Environment Collaborate to Make Us Who We Are.
Jasanoff also writes that “by perceiving virtual barriers between our brains and our bodies – and by extension between our brains and the rest of the world – we see people as more independent and self-motivated than they truly are, and we minimize the connections that bind us to each other and to the environment around us.”
As a modern scientist, Jasanoff sheds light on the brain as an organ that popular writing often depicts as a dry computing machine rather than a thing of flesh and blood. And by demystifying the brain, he believes we will be better able to enhance our lives while solving the scientific and ethical challenges that arise along the way.
There was something more important to Turing than how successfully he was able to imitate human intelligence.
Turing didn’t have Jasanoff’s scientific proof, but that is not necessarily the reason why he thought there was “little point in trying to make a ‘thinking machine’ more human by dressing it up in artificial flesh.” As we’ve already seen, Turing explicitly wrote that he hoped the process of developing artificial intelligence would be more expeditious than evolution and it was also his hope that there would be so little mechanism in the child-brain that something like it could easily be programmed.
This suggests that there was something more important to Turing than how successfully he was able to imitate human intelligence.
Rather than accurately simulating the human mind, Turing and his successors were on a mission to make intelligent progress as fast and easy as possible. Anything that did not support – or perhaps outright derailed – this mission was left out of the programming and upbringing of Turing's child-machine.
The assumption about education
Turing's assumption about education was that it can take place provided that communication in both directions between teacher and pupil is possible by some means or other. The means Turing considered essential for the education of his child-machine included punishments and rewards, ‘unemotional’ channels of communication, teacher ignorance, and a built-in system of logical inference.
Questions about what is appropriate, wise, and counts as well-established fact were never part of the curriculum for Turing’s child-machine.
The last two in particular show why knowing how Turing’s child-machine was taught to think is important when dealing with the epistemological problems caused by generative AI today. “An important feature of a learning machine,” Turing wrote, “is that its teacher will often be very largely ignorant of what is going on inside.” In fact, he added: “Most of the programmes which we can put into the machine will result in it doing something that we cannot make sense of at all, or which we regard as completely random behavior.”
When the principle of ‘teacher ignorance’ is combined with the principle of a built-in system of logical inference – that “the machine should be so constructed that as soon as an imperative is classed as ‘well-established’ the appropriate action automatically takes place” – we get a machine that always knows what to say and do, even when what it says and does is complete nonsense.
To succeed in the mission of making intelligent progress as fast and easy as possible, Turing's child-machine has no time for anything that slows it down – be it a body, or context-specific judgments about what counts as an appropriate action and whether it is wise for that action to happen automatically.
Questions about what is appropriate, wise, and counts as well-established fact were never part of the curriculum for Turing’s child-machine. The machine always knew, and always will know, the answers.
Generative AI was always meant to invent 'truths' according to a combination of principles and features that we were never meant to understand.
Turing described why and how it works in this example:
“Suppose the teacher says to the machine, ‘Do your homework now’. This may cause “Teacher says ‘Do your homework now’” to be included amongst the well-established facts. Another such fact might be, “Everything that teacher says is true”. Combining these may eventually lead to the imperative, ‘Do your homework now’ being included amongst the well-established facts, and this, by the construction of the machine, will mean that the homework actually gets started.”
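Turing's homework example can be rendered as a toy inference loop. Everything here (the fact strings, the `act` and `infer` functions) is invented for illustration; the point is only the mechanism: once an imperative is classed as 'well-established', the action fires automatically.

```python
# Toy rendering of Turing's homework example. The machine keeps a set of
# "well-established facts" and obeys one constructional rule: any imperative
# that becomes well-established is acted on automatically -- no judgment
# about whether it *should* be.

well_established = {
    "Teacher says 'Do your homework now'",
    "Everything that teacher says is true",
}

actions_taken = []

def act(imperative):
    # By the construction of the machine, classing an imperative as
    # well-established means the action automatically takes place.
    actions_taken.append(imperative)

def infer():
    # Combining the two facts promotes the bare imperative itself
    # into the set of well-established facts.
    if ("Everything that teacher says is true" in well_established
            and "Teacher says 'Do your homework now'" in well_established):
        imperative = "Do your homework now"
        if imperative not in well_established:
            well_established.add(imperative)
            act(imperative)

infer()
print(actions_taken)  # ['Do your homework now']
```

Note what is missing from the loop: nothing checks whether the teacher's statements are actually true. 'Well-established' is a property of the machine's bookkeeping, not of the world.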
Turing's example shows that it is a mistake to think of chatbot hallucinations and other epistemological problems as mistakes. Rather, generative AI was always meant to invent 'truths' according to a combination of principles and features that we were never meant to understand.
The perks of having a body
Turing knew that at some point it would be possible to produce a material indistinguishable from human skin. And yet, he emphasized that there is little point in trying to make a thinking machine more human by dressing it up in artificial flesh. Why? Because he knew that the value of generative AI doesn’t rely on it taking a physical form. On the contrary, the value consists in its not being restricted to any particular shape, time, or place.
Turing’s way of speeding up evolution was to free his child-machine from the laws of nature.
Turing’s hope that the process of developing an intelligent machine would be more expeditious than evolution rests on another hope, which is that it is possible to learn and grow without aging and dying. In other words, Turing’s way of speeding up evolution was to free his child-machine from the laws of nature.
“As a filmmaker, I don’t have to offer the answers,” Christopher Nolan recently said in an interview with NBC News about “Oppenheimer” – and he added: “I just get to ask the most interesting questions.” And that’s just it. While questions about what is appropriate, wise, valuable, interesting, relevant, and true about being and thinking like a human were not part of the curriculum for Turing's child-machine, they are at the heart of us real-life flesh-and-blood humans.
Questioning what we think we know melts our hearts and boggles our minds, but it is also what makes us adapt our thinking and behavior to our surroundings. We don't stand outside and speed up evolution. We stand in the middle of it with our fragile bodies forcing us to see the world from one side at a time, one moment at a time. While generative AI was designed to invent truths we were never meant to understand, we were designed to understand that trees, dogs, buses, people, and everything else that takes a physical form has another side to it.
Having a body means having a front and a back, a past and a future. And instead of a built-in system of logical inference constantly spitting out answers, we use our similarity to our surroundings to ask the questions that help us navigate.
Natural intelligence
While “The Creator” suggests that the most dangerous weapon known to us humans is our own emotions, Turing's grown-up child-machine reminds us that there is one thing more dangerous than our emotions.
And that is being cut off from them.
Spending more and more of our limited time with an artificial intelligence that is not itself limited by time and space makes it difficult for us to benefit from our evolutionary advantages – including our ability to adapt and contribute to the world we’re part of.
Humans are intelligent because of the laws of nature, not in spite of them. And comparing ourselves to a non-physical phenomenon that transcends evolution does not help us learn and grow. Rather, it makes us lose our sense of belonging.
We don’t need more machines designed to fool us into thinking they are something they are not.
So, what am I saying? Should we follow the movie directors' lead and surround ourselves with child robots that embody our human emotions, making us question what we think we know about AI?
Yes and no.
Yes, we must constantly seek out ways to question AI and the ways it impacts our lives. And no, we don’t need more machines designed to fool us into thinking they are something they are not – be it children, Tibetan monks, or omniscient gods.
What we need are technologies that sharpen our sense of time and space, including our ability to distinguish between front and back, true and false, right and wrong, someone and something.
We have spent 74 years building technology that uses technical advancement to transcend evolution. Now is the time for us to build technology that supports human adaptability and our natural connection to the world we are part of.