Like
I’ve recently engaged in book-club discussions on the following classic papers in the field of “artificial intelligence”:
- Geoffrey Jefferson (1949), “The Mind of Mechanical Man”
- Alan Turing (1950), “Computing Machinery and Intelligence”: introduced the “Imitation Game”
- Thomas Nagel (1974), “What Is It Like to Be a Bat?”
- John Searle (1980), “Minds, Brains, and Programs”: introduced the “Chinese Room”
Some notes.
Turing and Jefferson
Turing’s essay (says Bernardo Gonçalves, and I think he’s right) was basically a response to Jefferson’s essay (as well as to Douglas Hartree’s Calculating Machines (1947) and to Turing’s discussions with Michael Polanyi). I highly recommend Jefferson’s essay. Not only did I find it extremely coherent and focused, but knowing that Turing read it goes a long way toward explaining some features of Turing’s own essay: the focus on a non-mathematical “game,” the suggestion of sonnet-writing, the otherwise tangential theme of sex difference. Jefferson:
[N]either animals nor men can be explained by studying nervous mechanics in isolation, so complicated are they by endocrines, so coloured is thought by emotion. Sex hormones introduce peculiarities of behaviour often as inexplicable as they are impressive (as in migratory fish). We should not have any real idea how to make a model electronic salmon however simple relatively its nervous system is […]
Nagel and Searle
Nagel and Searle both base their conclusions more on intuition than on rigor. Nagel claims, intuitively, that there are things humans can never know, such as what it is like to be a bat — that is, a bat’s subjective experience, as opposed to merely assenting to a list of objective propositions such as “A bat hunts by echolocation” which we might say tell us what being a bat is, without giving us any whiff at all of what being a bat is like.
John Locke (1690), An Essay Concerning Human Understanding III.iv:
Simple ideas […] are only to be got by those impressions objects themselves make on our minds, by the proper inlets appointed to each sort. If they are not received this way, all the words in the world, made use of to explain or define any of their names, will never be able to produce in us the idea it stands for. […]
A studious blind man, who had mightily beat his head about visible objects, and made use of the explication of his books and friends, […] bragged one day that he now understood what “scarlet” signified. Upon which, his friend demanding what scarlet was? the blind man answered, it was like the sound of a trumpet. Just such an understanding of the name of any other simple idea will he have who hopes to get it only from a definition, or other words made use of to explain it.
Searle claims, intuitively, that syntax is distinct from semantics. No amount of piling syntax upon syntax will ever get you all the way to semantics. No formal algorithm, however lengthy, for transforming squiggles into squoggles can ever be said itself to engage in the understanding of the Chinese language. (As a computer programmer, I’m very sympathetic to this view. We often “express” a semantic idea by inventing new syntax for it and telling people to use that syntax whenever they mean to express that idea. But the compiler itself doesn’t understand the semantics, or at least doesn’t get those semantics from your code; it’s merely reacting to the syntax of your code.)
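To make that concrete, here’s a toy C++ sketch of my own (the Meters/Feet names are just an invented illustration, not anything from Searle): we encode a semantic distinction as a purely syntactic one, and the compiler then enforces it without grasping it.

```c++
// Toy illustration: the *semantic* distinction between meters and feet
// is "expressed" entirely by inventing two distinct type names.
struct Meters { double value; };
struct Feet   { double value; };

// We decree that only like units may be added.
Meters operator+(Meters a, Meters b) { return Meters{a.value + b.value}; }

int main() {
    Meters m{3.0};
    Feet   f{10.0};
    Meters ok = m + m;     // accepted: the squiggles line up
    // Meters bad = m + f; // rejected: no operator+(Meters, Feet) -- the
                           // compiler enforces our convention while
                           // "understanding" nothing about distance
    (void)ok; (void)f;
}
```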
Searle considers an “English room,” containing a person such as himself, who receives questions written on slips of paper through one slot and writes out answers through another slot. In the English room system, an “understanding” of English (the subjective experience of what it is like to be an English speaker) clearly resides in the occupant of the room. Then, Searle considers a similar “Chinese room,” containing a person such as himself who does not understand Chinese, but has access to a formal algorithm (expressed as a gigantic book of instructions, say) by which the uncomprehended squiggles on each input slip of paper are mapped to another set of uncomprehended squoggles on an output slip. In the Chinese room system, it seems there is nowhere for an “understanding” of the Chinese language to hide: it’s certainly not in the occupant, and intuitively it seems impossible for “understanding” (that is, the subjective experience of what it is like to be a Chinese speaker) to reside in an inanimate instruction-book, however gigantic.
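Here is a deliberately trivial sketch of my own (a made-up two-entry “book”; Searle’s imagined rulebook would be incomparably more elaborate) of what the room reduces to computationally: a rote mapping from input squiggles to output squoggles, with no understanding anywhere in the loop.

```c++
#include <iostream>
#include <map>
#include <string>

int main() {
    // The "instruction book": a rote mapping from uncomprehended input
    // strings to uncomprehended output strings. (Two invented entries;
    // the real book would have to cover any possible conversation.)
    const std::map<std::string, std::string> instructionBook = {
        {"你好吗？", "我很好，谢谢。"},   // "How are you?" / "Fine, thanks."
        {"你会说中文吗？", "当然会。"},    // "Do you speak Chinese?" / "Of course."
    };

    std::string slip = "你好吗？";        // slip of paper through the input slot
    auto it = instructionBook.find(slip);
    if (it != instructionBook.end()) {
        std::cout << it->second << "\n"; // answer pushed out the output slot
    }
}
```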
The obvious rebuttal (which Searle acknowledges, rather dismissively, under the name “the systems reply”) is that there must indeed be “understanding” taking place in the whole Chinese room system itself (man, book, slots, and all). That the human occupant doesn’t partake in the subjective experience of speaking Chinese is no more surprising than that my own tongue, say, fails to partake in my own subjective experience of speaking English.
Now, suppose we disagree with Searle and grant that the Chinese room (or a formal algorithm in general) can have the same kinds of subjective experiences as a human. Well, says Nagel, surely a computer couldn’t have the same kinds of subjective experiences as a bat! Which seems a bit like a reductio ad absurdum, doesn’t it? …Or perhaps it’s totally unsurprising that a computer cannot tell us what it’s like to be a bat, cannot tell us what it’s like to be a human — but perhaps a computer could tell us what it’s like to be a computer.
Wittgenstein (1949-ish), Philosophical Investigations: “If a lion could speak, we could not understand him.”
Like
Can a computer be conscious? Nagel equates consciousness with subjective experience: “fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism.” As above: not a simple definition of what it is to be that organism, but something that it is like to be it.
Turing offers that a computer that can pass the imitation game ought to be able to feel enjoyment. “[The claimed inability of any machine] to enjoy strawberries and cream may have struck the reader as frivolous. […] What is important about this disability is that it contributes to some of the other disabilities, e.g., to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man.” (In these enlightened times we might say “between man and man, or between bat and bat.”) That is, one important criterion for a Turing-style thinking machine is that it be able to like things.
We might say: “Machines like people like people.”
Paul Claudel (1954), quoted in Antiqua et Nova (2025): “Intelligence is nothing without delight.” Mind you, I feel like the Vatican authors were taking that quote a little out of context; but it’s a nice quote anyway.
Douglas Hofstadter, of course, has written volumes and volumes on his ideas about computer (and other) intelligence; to him the paramount quality of human-type intelligence is what he has called “analogical awareness.” Hofstadter quotes Heinz Pagels quoting Stanisław Ulam:
What is it that you see when you see? You see an object as a key, you see a man in a car as a passenger, you see some sheets of paper as a book. It is the word “as” that must be mathematically formalized … Until you do that, you will not get very far with your AI problem.
Hofstadter’s Fluid Analogies Research Group has worked on analogical reasoning in such guises as Bongard problems and the Copycat program; see Hofstadter’s Fluid Concepts and Creative Analogies (1996). More recently, the “ARC-AGI Prize” evaluates LLMs on their ability to solve analogy-based pictorial tasks; for examples of such tasks see section 3 of “On the Measure of Intelligence” (François Chollet, 2019).
That is, Hofstadter’s prime criterion for a thinking machine is that it intuit how one thing is like another.
It is probably an utterly meaningless coincidence that the same English word appears in all three of these thinkers’ criteria for a “thinking machine” — that there is something it is like to be it, that it can actually like something, that it can see one thing is like another. These criteria seem disjointed — unrelated — maybe because the concept of a “thinking machine” itself is vague and nebulous. And it is most likely coincidental that when we must speak about the nebulous, we often speak vaguely and disjointedly — falling back on one utterly meaningless interjection in particular — and we’re like, “Like…”