June 27, 2003
Searle and consciousness

UPDATE: Weinberger's and Searle's arguments are simply and trivially false. Read the final word (I hope) on the matter.

After a lengthy discussion of Searle's Chinese Room argument on David Weinberger's weblog, I think I finally understand what the Searleists are getting at, even if I don't think they are making their case against strong AI.
The discussion is rather involved, but my summary of it goes like this:
Weinberger argues that the common position - that we will have built intelligent, conscious machines if we can build a simulation of an actual conscious intelligence - is wrong, since the simulation relies on our interpretation of it (as something symbolically equivalent to a conscious being) in order to be understood as conscious.
UPDATE: Here Weinberger confuses the contrary opposite with the logical opposite. Read the final word on the matter.

There's a point there, even if one might argue that interpretation is all we can do - inasmuch as language about 'stuff' is always an interpretation of said stuff.
The important thing to notice, however, is that Weinberger (and Searle) are actually not saying anything about the consciousness of the physical system performing the simulation, but are only dismissing the argument that it is conscious by virtue of being a simulation of a conscious system.
In short, the objection made to strong AI is what Lakatos calls 'local' - an objection to part of an argument in defense of a thesis - not 'global' - an objection to the proposed thesis itself. I think this is an important objection to the Chinese Room argument, and I don't recall having seen it before - but then again I am not that well read on the issue.
UPDATE: As mentioned, I think the objection is just plain wrong.

While I am not sure where this leaves us with respect to reasoning about consciousness at all, maintaining the position that consciousness can only be understood as a quality of something real (insert longish blurb on intentionality here) does provide a good explanation of some of the conundrums posed by the mind-as-pattern explanation.
For example, it offers an immediate answer to the question of whether a copied consciousness is the same consciousness as the original. It is not, since it is no longer the same real object.
UPDATE: The point on the reality of consciousness is well made, but not in opposition to claims made by strong AI.

The Weinberger log entry has good links to Kurzweil's website on the matter (pun intended).
And I grudgingly have to admit I didn't get Weinberger's point in previous posts.
UPDATE: And then to understanding them and being fooled by them. Not my proudest moment.

Posted by Claus at June 27, 2003 09:49 PM
Comments

Claus, thanks for this.

I don't fully understand why you think I'm not arguing against strong AI. But I'm getting there.

As for your being embarrassed that you didn't understand my point initially: Jeez, that was my fault! You've been very helpful in helping me clarify both what I mean and how I say it.

Posted by: David Weinberger on June 27, 2003 10:31 PM