Tuesday, June 30, 2009

A New Turing Test

Anyone familiar with this blog will have figured out by now that I am a stickler for elegant and correct prose, even if my writing rarely measures up to my own exacting standards. Until a little over a year ago, I taught philosophy in a large university. In that capacity, I would often receive e-mails from my students. Reading students’ e-mails, as well as their term papers and exams, was always exquisite torture for a man of my rather rarefied literary tastes. Reflecting on this torture recently led me to think about Turing’s Test. You’re scratching your head. Don’t worry. I’ll explain the connection.

Alan Turing (1912-1954), the great father of computer science, first proposed his test in a paper entitled “Computing Machinery and Intelligence” (Mind 59 (1950), 433-460). He wanted to address the question, “Can machines think?” He seems to have thought that if a computer could fool a human interrogator into thinking it was human, then the question could be answered in the affirmative.

There are many versions of Turing’s test, but the basic idea is this. Imagine yourself (the interrogator) placed in front of an interface, say a computer monitor, connected to an agent. You pose questions and, once the answers you receive satisfy you enough to make a judgment, you guess whether your interlocutor is a human or a computer. This is one trial. After repeated trials, the computer’s score is the percentage of trials in which you wrongly guessed that the computer was a real person.
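For the programmatically inclined, here is a minimal sketch of that scoring scheme in Python. It is only an illustration, not anything Turing himself wrote: the interrogate callable and the coin-flipping interrogator are my own inventions, standing in for a real interrogation. The point is simply how the percentage is tallied over repeated trials.

    import random

    def run_trials(interrogate, num_trials=100):
        """Score a machine on Turing's test: the percentage of trials in
        which the interrogator wrongly judges the machine to be human.
        `interrogate` is any callable that questions the hidden agent
        and returns a verdict, "human" or "computer"."""
        fooled = sum(1 for _ in range(num_trials)
                     if interrogate() == "human")  # a wrong guess: the machine passed
        return 100.0 * fooled / num_trials

    # A stand-in interrogator who, unable to tell the difference,
    # simply guesses at random; the machine would score about 50%.
    coin_flip = lambda: random.choice(["human", "computer"])

    print("Score: %.1f%%" % run_trials(coin_flip))

A machine that never fooled anyone would score 0%; one indistinguishable from a person would approach whatever a human control scores.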

Turing thought that if a computer passed this test, it was for all intents and purposes a human mind, and that, therefore, the machine could think. He further predicted that by about the year 2000 a machine would play the game well enough to fool an average interrogator at least 30 per cent of the time after five minutes of questioning. Despite many attempts, even that modest prediction has so far proved wrong, though such a machine may yet be built.

There have been many criticisms of the Turing Test. For one thing, it seems to confound intelligence with simulated intelligence. This is, perhaps, a result of the crude psychological behaviourism that was fashionable at the time that Turing wrote. In any case, I do not intend to go over the finer philosophical implications of the test.

Instead, reflecting on my former students’ communications brings me to a different implication of the Turing Test. Turing expected computing technology to rise to our standards of rationality; what he neglected to consider is the opposite possibility, that human discourse might someday sink to a level indistinguishable from crude technology. I am thinking here of those e-mails I used to receive from my students, begging for an extension or a grade reappraisal, which I was unable to decipher because they were written in an inchoate pidgin: an impoverished amalgam of bizarre grammar, erratic orthography, and text-messaging contractions.

Given that I could not understand many of these e-mails (nor, in many instances, their term papers), I see no reason why a machine need fear failing to “measure up” (if that is even the right phrase) to such a low standard of discourse. Indeed, if things get any worse, a computer that generates random strings of characters and spaces could not fail Turing’s Test. I myself sometimes wondered whether the e-mail I was reading had been generated by a human or by a virus-infected computer.
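Lest that sound like pure hyperbole, such a correspondent takes only a few lines of Python to build. The alphabet and the length of the message below are, of course, arbitrary choices of mine:

    import random
    import string

    # An "author" that emits random lowercase letters and spaces: by the
    # standard lamented above, perhaps indistinguishable from the real thing.
    def gibberish_email(length=120):
        alphabet = string.ascii_lowercase + " "
        return "".join(random.choice(alphabet) for _ in range(length))

    print(gibberish_email())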

2 comments:

  1. I really hope that you are using this as an example to be sarcastic, right?

  2. Of course, at least with regard to the Turing Test. As for the student e-mails, I'm unfortunately not exaggerating.
