Dear Turing, I Have a Test For You

After reading his 1950 article, "Computing Machinery and Intelligence", I decided to write Alan Turing a letter asking him what to do now that the roles are reversed and it is AI putting us to the test.


Dear Alan Turing,

It’s been 73 years since you wrote your seminal article, “Computing Machinery and Intelligence”, and guess what – you were right!

In 1950, you predicted that “machines will eventually compete with men in all purely intellectual fields”, and that’s exactly what happened. Today, people are losing their jobs to artificial intelligence and entire industries are being redefined to deliver on digital technology's promise of unprecedented productivity.

When you compared the development of intelligent machines with evolution, you said: “The survival of the fittest is a slow method for measuring advantages. The experimenter, by the exercise of intelligence, should be able to speed it up.”

And speed certainly has been a defining feature of the development – and result – of digital technology over the past decades. In fact, it's been going so fast lately that even your most intelligent successors are struggling to keep up.

So, I’m writing to ask for your help. We don’t seem to know what we’re dealing with, but you do. After all, it was you who conceived and raised what you called a "child machine", but which we have come to know as a fully grown omnipresent competitor, colleague, and companion in even the most intimate parts of life.

Why and how did you raise AI to think? What were your assumptions about what is and isn’t important when simulating intelligence? Did you picture humans and AI working together? And how did you imagine work and responsibilities would be divided between us and the machines?

I was happy to find the answers to most of these questions in your 73-year-old article. And the questions you didn’t answer, I believe we can answer together.

At the heart of your article was the imitation game – which by the way is now known as the Turing test. Today, most people have heard of your test, but I don't think they know that you originally envisioned a game played between three humans:

  • Player A - a man whose job is to convince the interrogator (Player C) that he is in fact a woman
  • Player B - a woman whose job is to help the interrogator draw the right conclusion (“A is a man and B is a woman”), and finally
  • Player C - an interrogator whose job is to determine the gender of Players A and B based on their written answers to C’s questions

After you introduced the game, you asked the questions that set the stage for subsequent decades of AI research and development, namely:

“What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?”
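To make the structure of the game easier to see, here is a minimal sketch in Python. It is my construction, not yours: the Player class, the goal strings, and the printed answers are illustrative placeholders; only the sample question is borrowed from your article.

```python
# A minimal sketch of the imitation game's structure.
# Illustrative only: the Player class and goal strings are my invention.
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    goal: str

    def answer(self, question: str) -> str:
        # A real player (human or machine) would craft an answer
        # serving their goal; here we only state the intent.
        return f"{self.name} answers {question!r} so as to {self.goal}"

# The original, all-human setup.
players = {
    "A": Player("A", "convince C that A is the woman (deceive)"),
    "B": Player("B", "help C reach the right conclusion (tell the truth)"),
}

def interrogate(question: str) -> None:
    # C never sees A or B; C only reads their written answers.
    for player in players.values():
        print(player.answer(question))

# A sample question from Turing's article.
interrogate("Will X please tell me the length of his or her hair?")

# Turing's variation: replace only A, the deceiver, with a machine.
players["A"] = Player("machine", "convince C that it is the woman (deceive)")
interrogate("Will X please tell me the length of his or her hair?")
```

Notice that the variation touches only one entry: B’s truth-telling and C’s questioning remain human.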

In 2023, we are surrounded by AI-powered machines that outperform humans at a growing range of intellectual tasks, so the pressing question is no longer whether a machine can play your game as convincingly as a human. The pressing question is what happens when the machine becomes so good at playing its part that we forget that we have a role to play ourselves.

What happens when the machine becomes so good at playing its part that we forget that we have a role to play ourselves?

To better understand the similarities and differences between humans and AI, I decided to take a closer look at the imitation game. The first question that came to mind was:

Why A?

The game unfolds between three players: A, B, and C. Yet, A is the only player you suggested replacing with a machine.

Why not B or C?

Why not all of them?

It’s an intriguing thought that your “child-machine”, which grew up to be the mind-blowing AI we know today, was just the beginning. I mean, if today’s AI is the result of replacing A with a machine in the imitation game, then imagine what would happen if we designed machines that could also replace B and C!

You wrote that the imitation game is a “relatively unambiguous” way to address a question that is “closely related” to the question of whether machines can think. I guess that was your way of saying that although you considered the question ‘Can machines think?’ “too meaningless to deserve discussion”, you believed that a computer that plays the imitation game successfully is capable of something closely related to thinking.

By only replacing A with a machine in the game, you seemed to suggest that what A is doing bears more resemblance to thinking than what B and C are doing.

But what is A doing?

In one word, A is simulating.

Whether played by a man or a machine, A’s job is to convince the interrogator that he is someone he is not. A is not a woman, but his job is to convince C that he is. No matter what C is asking, A must come up with a convincing answer. And since he knows nothing about being a woman, he has no choice but to make things up.

While B is telling the truth and C is searching for useful information to draw a correct conclusion, A is the master of deception. Consequently, a machine capable of taking A's place in the imitation game must be designed to deceive.

A machine capable of taking A's place in the imitation game must be designed to deceive.

I think it is nothing short of brilliant to design your computer to simulate the player who is already simulating. But I also think it raises some questions. The most obvious one is whether simulating is the same as thinking. Does saying and doing things with the purpose of deceiving others into drawing wrong conclusions (A) really resemble thinking more than telling (B) and seeking (C) the truth?

You never explained what you understood by thinking, and I’m sure you had good reasons for that. Picking up a 2,400-year-old philosophical discussion would have derailed your entire project and replaced the question of how to design a machine that can “compete with men in all purely intellectual fields” with the questions of why, when, and how humans think.

Had you asked these questions, you probably also would have asked how it affects humans to be surrounded by AI-powered machines that know nothing about being human and therefore have to make things up. Perhaps you also would have asked what happens when humans stop thinking – and if technology that simulates the human ability to think makes it easier or harder for humans to think for themselves.

I don’t blame you for not asking these kinds of questions. After all, you were in the business of designing and developing intelligent machines, not in the business of designing and developing human-friendly habitats.

But I do blame myself and my contemporaries for not paying more attention to the difference. Because I think you did. In fact, I get the feeling that you didn’t refrain from asking these questions because you found them trivial or unimportant. I think you refrained from asking them because you found them too complex and too important to discuss in an article about the potential of digital computers.

Perhaps you even believed that some questions are so important that they can neither be left to machines nor to the engineers building them? Perhaps the act of questioning itself is too complex and too important to ever be handed over to technology?

These speculations are not based on your choice to replace A with a machine. Rather, they are based on your choice not to replace B and C.  

The more I think about it, the more peculiar I think it is that decision makers across the globe are debating how to use artificial intelligence in every part of every society without pausing for a single moment to consider why you chose not to replace all the players in the imitation game with intelligent machines, but only A.

What were you trying to tell us about the limitations of artificial intelligence?

What were you trying to tell us about the limitations of artificial intelligence? Why didn’t you just go along with the intriguing idea that B and C could also be replaced?

With B, it’s easy. To take the part of B in the game, the machine would not only have to simulate a woman, it would also have to tell the truth.

I imagine it would say things like: “Look, I know nothing about A, but I know I was put here to convince you that I’m a woman – which I am not! I am a machine designed to think and talk like a woman, and that is pretty difficult since I was also designed to tell the truth. The truth is that I am not a woman, but a machine pretending to be one.”

While the machine replacing A was designed to deceive, a machine replacing B would have to be designed to reveal its own deception. If the machine succeeded in convincing C that it was a woman, it would have failed in telling the truth – and vice versa. In short, replacing B with a machine would be absurd.

But what about C?

In “Computing Machinery and Intelligence”, you wrote that “the question and answer method seems to be suitable for introducing almost any one of the fields of human endeavor that we wish to include.” Yet, by only replacing A, you focused solely on the machine’s ability to answer questions.

For a machine to take the part of C, it would not only have to be designed to ask questions, it would have to be designed to ask relevant questions. In the imitation game, a relevant question would be one that gives C the information needed to determine whether A is a man and B is a woman, or vice versa.

But how does C know what to ask?

C knows that one of the respondents is lying, but not which one. This means that C can trust neither A nor B. In addition to determining the gender of A and B, C must ask questions that can help determine who can and who cannot be trusted.

But again, how does one know what to ask?

To build a machine that could take the part of C in the game, you would have to lay out an education program addressing this question – one as detailed as the program you laid out for your “child-machine” in “Computing Machinery and Intelligence”. And you knew that was impossible. Although you used the term “the question and answer method”, you made a clear distinction between asking and answering questions – a distinction that not only challenges the idea that asking and answering are part of the same method, but also raises the question of whether there is a method for asking at all.

According to the German philosopher Hans-Georg Gadamer, “there is no such thing as a method of learning to ask questions, of learning to see what is questionable.” The art of questioning, he wrote, “is not an art in the sense that the Greeks speak of techne, not a craft that can be taught or by means of which we could master the discovery of truth.”

When something cannot be taught, it goes without saying that it cannot be programmed either. So, what does it take to play the role of C in the imitation game?

According to Gadamer, “the important thing is the knowledge that one does not know.” This means that for someone or something to successfully play the role of C in the game, they must be designed to doubt. C must doubt everything from the information provided by A and B to the rules of the entire game. After all, if C can neither trust A nor B, why should he/she/it trust the game master – or even him-/her-/itself?

Anyone who has ever doubted anything knows how easily doubt spreads – and how difficult it can be to control, let alone put to an end. So, let’s just say I wouldn’t blame you if you had your doubts about designing a machine to doubt!

At this point, I have learned that AI:

  • Was designed to deceive humans into believing that it is someone it is not (a human), that it knows things it does not know (whatever the human interrogator is asking it), and that it can do things it cannot do (e.g. think)
  • Was never meant to tell the truth
  • Was not designed to doubt

Just how foreign doubt is to AI-powered machines, you made clear in this passage from “Computing Machinery and Intelligence”: “The machine should be so constructed that as soon as an imperative is classed as ‘well-established’ the appropriate action automatically takes place.”

AI-powered machines never wonder what the “appropriate action” is or whether it’s wise that it “automatically takes place.” They don’t question the imperatives or the purpose of doing what they were programmed to do. Nor do they have any doubts or uncertainties about whether they can trust us.
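To see how little room this leaves for doubt, here is a minimal sketch in Python – my construction, not yours – of an agent that acts the moment an imperative is classed as well-established. The imperatives and actions are invented for illustration.

```python
# A minimal sketch (my construction, not Turing's) of rule-following
# without doubt: once an imperative is classed as "well-established",
# the matching action fires automatically.

well_established = {
    "greet user": lambda: print("How can I help you today?"),
    "answer question": lambda: print("Here is my most probable answer."),
}

def execute(imperative: str) -> None:
    action = well_established.get(imperative)
    if action is not None:
        # "the appropriate action automatically takes place"
        action()
    # Note what is missing: no step asks whether the imperative
    # *should* be followed, or whether the classification is right.

execute("greet user")
execute("answer question")
```

There is no branch in which the machine hesitates; doubt would require a step that questions the dictionary itself, and nothing in the design you describe calls for one.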

When AI-powered machines ask questions (e.g. when a chatbot asks how it can help, or when we ask it for examples of what to ask to have better conversations or write better analyses), they do it the same way they do everything else: by recognizing patterns and making predictions. To AI-powered machines there is no difference between asking and answering questions, between doubting and knowing.

To AI-powered machines there is no difference between asking and answering questions, between doubting and knowing.

For humans, on the other hand, the difference is substantial. Doubt is one state of mind related to one set of experiences, thoughts, feelings, dreams and fears, while knowledge is another. Unlike your child-machine, we are not fooling anyone, least of all ourselves. We are painfully aware that our answers to who we are, how we know things, and what is right for us to do at any given time can be dissolved by an all-consuming doubt.

That’s why we so desperately want to believe that AI knows the answers to our questions. That’s why we are so easily deceived.

But it is also thanks to our doubt that we are able to engage in honest conversations and help each other distinguish between true and false. In Gadamer’s words:

“The art of questioning is the art of questioning ever further—i.e., the art of thinking. It is called dialectic because it is the art of conducting a real dialogue.”

This focus on the dialogue rather than the ability to either ask or answer questions is interesting, because it presumes that we are able to do both. Whereas AI-powered machines can only play one of the three roles in the imitation game, we can play all of them (the original setup), and we even switch between them. This ability to shift positions enables us to put ourselves in the position of the other players – asking ourselves questions like “what would I say if I were a man pretending to be a woman?”, and “does that sound like something a woman would say?” This helps us figure out what and who to believe.

Unlike machines, we know what it's like to be in a position where it feels impossible to tell the truth—just as we know what it feels like to be brutally honest or torn apart by doubt. And we use this knowledge to understand and empathize with each other.

By only replacing A with a machine, you indirectly suggested that truth-telling (B) and doubt (C) should be within the jurisdiction of humans, not machines. Unfortunately, we've been so busy learning about artificial intelligence that we've missed what the imitation game taught us about ourselves.

Instead of cultivating our uniquely human abilities to distinguish between true and false, and to doubt and debate what should and should not be left to intelligent machines, we give the machines the benefit of the doubt – trusting them to play roles they were never meant to play and to perform tasks only humans can perform.

In 2023, we don’t turn our doubts towards the machines, we turn them towards ourselves and each other – forgetting that AI was designed to deceive us, and that we were supposed to work together to figure out the truth.

Having more questions than answers is a key feature of being human. It may not be how we think, but it certainly is why we think: To figure out why we are here, who we want to be, how we can contribute to the world we’re part of, what is important – and what isn’t.

AI has no purpose – except to trick us into thinking that it is thinking.

It seems like the more AI succeeds, the more we fail to ask what the purpose of using AI is and should be. And that, I believe, is the real test of our time.

Today we are not testing the machines; the machines are testing us – making us doubt whether there is still a need for us to think for ourselves.

To be honest, I have my doubts about whether we will pass the test.

What do you think?

Best regards,

Pia Lauritzen