Today in 1991, the first “Turing Test” (a.k.a. the Loebner Prize Competition) was held at the Boston Computer Museum.
Joseph Weintraub, president of Thinking Software, Inc., won the competition by fooling 5 of the 10 judges into thinking his software, “programmed to make whimsical conversations,” was human.
Mark Halpern in “The Trouble with the Turing Test”:
Remarkably, five judges found T5 [Weintraub’s software program] to be human, including two who also made the converse error of taking humans for computers. Overall, the performance of the judges leaves us to draw some sad conclusions about their inability to engage in sustained conversation, their lack of knowledge on general human subjects, and their need to share their personal concerns even with entities that contribute little more to the ‘conversation’ than a mirror does to the relief of loneliness.
When Alan Turing proposed the “Imitation Game” (which came to be known as the “Turing Test”) in his 1950 paper “Computing Machinery and Intelligence,” he devoted one section to “Lady Lovelace's Objection”:
[Lady Lovelace] states, "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform” (her italics)….
A better variant of the objection says that a machine can never "take us by surprise." This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks. Perhaps I say to myself, "I suppose the voltage here ought to be the same as there: anyway let's assume it is." Naturally I am often wrong, and the result is a surprise for me, for by the time the experiment is done these assumptions have been forgotten.
In “Creativity, the Turing Test, and the (Better) Lovelace Test,” Bringsjord, Bello and Ferrucci wrote:
Unfortunately, attempts to build computational systems able to pass [the Turing Test]… have devolved into shallow symbol manipulation designed, by hook or by crook, to trick. The human creators of such systems know all too well that they have merely tried to fool those people who interact with their system into believing that these systems really have minds. And the problem is fundamental: The structure of the Turing Test is such as to cultivate tricksters.
The “Turing Test” ran for the last time in 2019. Today, we can simply ask AI whether it is intelligent. Or we can just hallucinate about it.
Responding to Microsoft researchers’ claim to have discovered an “early version of Artificial General Intelligence (AGI)” in GPT-4, I wrote in “Artificial General Intelligence (AGI) Is A Very Human Hallucination”:
Human intelligence is comfortable with vague, circular, dissimilar, even contradictory definitions. Human intelligence indulges in hallucinations and has been indulging in them since the rise of modern science and technology, especially in hallucinations about man being a God-like creator. Recently, these hallucinations have been upgraded to envision Man as even better than God, since the men and women of AI will no doubt create an intelligent machine that is smarter, more moral, and less biased than human beings.