The Chinese Room Argument
Thinking about what “Strong AI” actually is and means.
I took a bunch of half days last week, because goodness but I was tired. Too long running at full throttle, and I’d been running out of steam as a result. And what did I do instead that ended up being so effective in recharging? Well, mostly… read literature reviews on interesting topics in philosophy, at least for the first few days. Dear reader, I am a nerd. But I thought I’d share a few of the thoughts I jotted down in my notebook from that reading.1
“The Chinese Room Argument”
This was an argument whose influence I’ve certainly encountered, but the actual content of which I was totally unfamiliar with.2
The argument, in exceedingly brief summary, is that a person locked in a room, following a program of purely formal rules for manipulating Chinese symbols, could produce responses indistinguishable from those of a native speaker without understanding a word of Chinese. If that is right, then running a program, however sophisticated, cannot by itself constitute understanding, and so cannot constitute a mind.
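A deliberately crude sketch for the programmers following along: the “rulebook” below is a toy of my own invention, not anything from Searle, but it captures the shape of the intuition.

```python
# A toy of my own invention, nothing like the exhaustive rulebook Searle
# imagines, but every line of it is the same kind of thing: pure, formal
# symbol manipulation, with no understanding anywhere in the system.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",   # "What's your name?" -> "I have no name."
}

def room(symbols: str) -> str:
    """Look up the input shapes and return whatever shapes the rules dictate."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你好吗？"))  # Plausible Chinese out; no comprehension inside.
```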
The rejoinders to this are varied, of course, and I encourage you simply to follow the link above and read the summary — it’s good.
There were two particularly interesting points to me in reading this summary: the Churchland response, and the Other Minds response. To these I’ll add a quick note of my own.
1: The Churchland response
Searle’s argument specifically addressed an approach to AI (and especially so-called “Strong AI”) built on symbolic computation: programs that manipulate formal symbols according to explicit rules. The Churchlands countered that the brain itself does not appear to work that way at all; it behaves like a massively parallel network, far closer in kind to artificial neural networks than to a rule-following symbol processor.
The main point of interest here is not so much whether the Churchlands were correct in their description of the brain’s behavior, but in their point that any hypothesis about neural networks is not defeated by Searle’s thought experiment. Why not? Because neural networks are not performing symbolic computation.
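By way of contrast, here is an equally crude sketch, again my own toy rather than anything from the Churchlands. Compare it with the rulebook above: there is no rule anywhere that maps one symbol to another, only arithmetic over weights.

```python
# A toy forward pass through one dense layer, with made-up weights. Nothing
# here is a rule of the form "if symbol X, emit symbol Y"; the computation
# is arithmetic over continuous values, which is why the Churchlands argued
# Searle's symbol-shuffling room doesn't straightforwardly apply.

import math

def forward(inputs: list[float], weights: list[list[float]]) -> list[float]:
    """One layer: weighted sums pushed through a sigmoid. No symbols, no rules."""
    return [
        1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
        for row in weights
    ]

print(forward([0.2, 0.7], [[0.5, -1.3], [2.0, 0.1]]))
# -> roughly [0.31, 0.62]: continuous activations, not discrete symbols.
```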
2: The Other Minds response
The other, and perhaps the most challenging, response to Searle’s argument is the “Other Minds” reply: the only evidence we have that anyone understands anything is their behavior. If behavioral evidence is enough to credit other people with understanding, it is not obvious on what principled grounds we can deny it to a machine that behaves identically.
But the reply arguably trades epistemology for ontology. Searle’s response, roughly, was that the question is not how we know whether something understands but what understanding actually is, and the thought experiment purports to show what it is not.
And this gets again at the difficulty of using thought experiments to reason to truth. What a thought experiment can genuinely be said to show is complicated at best. Yet their utility — at least in raising problems, but also in making genuine advances in understanding the world — seems clear.
Lowered standards
The other thing I think is worth noting in all these discussions is a point I first saw Alan Jacobs raise a few years ago, but which was only alluded to in this literature review. Jacobs cites Jaron Lanier’s You Are Not A Gadget. (I don’t have a copy of the book, so I’ll reproduce Jacobs’ quotation here.)
But the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?
This is one of the essential points often left aside. Is the test itself useful? Is a test of whether a machine can pass for a person measuring the machine’s intelligence at all, or only our own willingness to lower the bar for personhood?
1. For those of you following along at home: I wrote all but the last 100 or so words of this a week ago and just hadn’t gotten around to publishing it. It’s not as absurd a contradiction of yesterday’s post on writing plans as it seems. Really. I promise.
2. It’s occasionally frustrating to find that there is so much I’m unfamiliar with despite attempting to read broadly and, as best I can, deeply on subjects relevant to the things I’m talking about on Winning Slowly, in programming, etc. One of the great humility-drivers of the last few years is finding that, my best efforts to self-educate notwithstanding, I know very little even in the fields I care most about.