The Assumptions of Searle’s Chinese Room Thought Experiment

TrlstanC
Flexo
Posts: 365
Joined: Thu Oct 08, 2009 5:16 pm UTC

The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Thu Nov 09, 2017 8:37 pm UTC

I recently stumbled on this Talk at Google where Searle discusses the Chinese Room at length. It's a great review of the argument, and a number of audience members raise good, if typical, questions at the end.
Here's the Wikipedia page for Searle's Chinese Room thought experiment. The experiment is part of the following argument:

A. Programs use syntax to manipulate symbols
B. Minds have semantics
C. Syntax by itself is not sufficient for semantics
Therefore programs are not minds.

The more I think about the topic, the more I agree with Searle, although I do have a problem with a couple of assumptions he seems to make but doesn't state clearly, for example when he talks about different ways of studying the brain, and about different animals that may or may not have consciousness. He seems to assume that consciousness is something created by a relatively large and complex structure. In the best example we have, humans need a relatively large and complex brain with billions of neurons to be conscious.

But this seems problematic, because if a brain can't become conscious by running a program, then consciousness must be some physical characteristic, and that physical characteristic must be shared by the component parts. For example, if we use the analogy of an electromagnet, we wouldn't expect that running a simulation of an electromagnet would create actual magnetism. And while we need a relatively large and correctly wired piece of equipment to create electromagnetism that way, we also know that the magnetism is the result of the characteristics of the individual particles.

Or to put it another way, we might not need a structure like a brain to create consciousness; it's just that we need something relatively large and complex to harness the physical characteristics of its component parts in a useful way. Taking this idea to the extreme we end up with hylopathism, and if we combine that idea with the fact that a program must be run on something, then while programs might not be conscious, anything a program runs on probably is. And the running of the program might create experiences in much the same way that the wiring of our brain creates our experiences. For example, maybe our brain uses electron tunneling to create the sensation of red, and the same mechanism in modern CPUs means the computer is having an experience of "twinkly red" whenever any program is running?

Pfhorrest
Posts: 3898
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Thu Nov 09, 2017 10:44 pm UTC

Your conclusion there is basically my point of view as well. Anti-emergentist reasoning like yours successfully (IMO) concludes that any kind of phenomenal consciousness (as distinct from mere access consciousness) that may exist in human brains must have precursors in the constituent components of those brains. So either there is no such thing as phenomenal consciousness at all, or else everything has some degree of it, in a kind of pan-proto-psychism or hylopathism, as you say. Thought experiments like Mary's Room show that there is some kind of phenomenal consciousness, inasmuch as first-person experience of being a thing undergoing a process (e.g. being a brain perceiving redness) is not imparted by any amount of third-person knowledge about things undergoing such processes (e.g. studying what brains do in response to their eyes' response to red light); or as I like to say for a more visceral example, no amount of studying sexology will teach you what it's like to have sex yourself.

So the conclusion to draw is that there is a first-person, phenomenally "conscious" experience for everything, about which there isn't really much more to say, per se. The interesting distinction to draw between e.g. humans and rocks is the nature of the kind of thing, which is given by the function of the thing, which determines its experience every bit as much as it determines its behavior: experience is the input to the function that defines a thing, behavior is the output from it. (And every behavior is something else's experience, and vice versa: they're all just interactions, seen from either the first or third person, as the subject or as the object.) Functionality is what makes human consciousness notable and interesting: access consciousness is where all the interesting questions are. Saying that everything has a first-person experience as a subject isn't really any more interesting or substantive a statement than saying everything has a third-person behavior as an object: okay, but what is its behavior, what is its experience, in short what is its function? That's what really matters. Functionalism.

The Chinese Room argument is often put forth to dispute functionalism, but I think it fails at that in an important way. It successfully (IMO) proves that syntax is not sufficient for semantics, but it doesn't disprove functionalism because the supposedly (and I'd agree) not-conscious room is not functionally equivalent to an actual Chinese speaker. You can hand an actual Chinese speaker a picture of a duck on a lake and ask him (in Chinese) "What kind of bird is on the water?" and he can answer; the room cannot, because while it contains a person with eyes that can see the picture, and that person (with his books) could tell you (the Chinese equivalent of) that ducks are a kind of bird and a lake is a body of water, he cannot connect the words for "duck", "bird", "lake", or "water" to the images in the picture. That connection, knowing that a symbol signifies some experiential phenomenon, is where the semantics come from.

But we can in principle build programs that can do that, even though those programs still at their base only manipulate symbols, by translating experiential phenomena into huge arrays of symbols. A digital photo is a visual image translated into a bunch of numbers, and identifying patterns in such numbers and connecting them to more abstract symbols is what machine vision is all about. I feel a little unsure of how to translate this into a direct contradiction of any of Searle's premises, but it's like saying "Computers only do logical operations on boolean values. You can't do division using only logical operations on boolean values. Arithmetic involves doing division. Therefore computers can't do arithmetic." It looks on the surface like all the premises are plausible and the inferences valid, but the conclusion is clearly false, so something somewhere in there is wrong, and whatever it is, that's the same thing wrong with Searle's formal argument.
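To make the parody concrete, here is a minimal sketch in Python (my own construction, nothing from Searle or this thread) of unsigned integer division built from nothing but logical operations on boolean values; the int/bit conversions at the edges are only there for readable input and output:

[code]
# Division from pure boolean logic: the parody's "you can't do division using
# only logical operations on boolean values" premise is the one that fails.

def full_add(a, b, c):
    # a one-bit full adder built from XOR, AND and OR on booleans
    return (a ^ b ^ c), ((a and b) or (a and c) or (b and c))

def add(x, y, carry=False):
    # ripple-carry addition of equal-length little-endian boolean lists
    out = []
    for a, b in zip(x, y):
        s, carry = full_add(a, b, carry)
        out.append(s)
    return out, carry

def sub(x, y):
    # x - y as x + ~y + 1 (two's complement); carry-out True means x >= y
    return add(x, [not b for b in y], True)

def divide(n, m, width=8):
    # schoolbook division by repeated subtraction, using only the gates above
    bits = lambda v: [bool((v >> i) & 1) for i in range(width)]
    val = lambda bs: sum(1 << i for i, b in enumerate(bs) if b)
    x, y, q, one = bits(n), bits(m), bits(0), bits(1)
    while True:
        d, x_at_least_y = sub(x, y)
        if not x_at_least_y:        # x < y: quotient and remainder are done
            return val(q), val(x)
        x = d
        q, _ = add(q, one)

print(divide(29, 5))                # -> (5, 4): 29 // 5 == 5, 29 % 5 == 4
[/code]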
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

TrlstanC
Flexo
Posts: 365
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Fri Nov 10, 2017 2:27 am UTC

Pfhorrest wrote:Saying that everything has a first-person experience as a subject isn't really any more interesting or substantive a statement than saying everything has a third-person behavior as an object: okay, but what is its behavior, what is its experience, in short what is its function? That's what really matters. Functionalism.


There was a part in the talk, about the difference between something being subjective (or objective) in an ontological versus an epistemic sense, that I hadn't heard explicitly stated before and thought was really interesting. And when we consider the possibility that everything could have a subjective experience, the question is: why are people different? From a behavioral perspective our consciousness obviously can have a very big effect, whereas even if rocks or ants had conscious experiences they wouldn't be able to "do" anything with them. The experience would be there, but they wouldn't be wired up or made up in a way that lets those subjective experiences cause differences in behavior.

Of course rocks and minerals do have lots of characteristics that are objectively true and do affect their "behavior", such as it is. So the question I have is: why does consciousness need to be another characteristic of these things? Let's say we have iron atoms; they have all kinds of objective characteristics: their mass and density, the way they interact with electrons and magnetism, etc. Now let's say they also have conscious experiences, and maybe even that human brains use this characteristic of the iron in their blood to create our human experience of consciousness. Why would this conscious characteristic have to be new or additional? Why couldn't it be the subjective experience that corresponds with one of the objective characteristics, say magnetism, for example?

When an atom of iron interacts with a magnetic field we can observe its objective behavior, but that same field could be causing a subjective experience of consciousness as well. And in fact, if we think that our conscious experiences have some effect on our behavior, then we'd want consciousness to have an objective effect of some sort on the world, right? If the neurons in our brain are experiencing consciousness and that subjective experience is causing a difference in the way they fire or interact, that's an objective change. And unless we've completely missed some other kind of physical force that acts on the neurons in our brains, that would mean that consciousness interacts via an existing force we already know of. Or to put it another way, that it's the subjective experience of an objective force we already recognize.

Personally, once I accept the Chinese Room argument, that programs can't be conscious, I can only think of two eventual conclusions: either consciousness is the subjective experience of a known physical force, or it's the subjective experience of a physical force we haven't discovered yet.

Pfhorrest
Posts: 3898
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Fri Nov 10, 2017 3:40 am UTC

I wouldn't say that (phenomenal) consciousness is the subjective experience of a physical force; it's not like, as in your example, magnetism is consciousness, or anything like that. Rather, it's the subjective experience of all the physical stuff happening to the physical thing having that experience, including in some cases (like human brains) physical stuff the physical thing is doing to itself (which is where it starts to get interesting).

I would actually prefer not to use the term "consciousness" for what is called "phenomenal consciousness" at all, and reserve that for the functional characteristic called "access consciousness", which is a form of reflexive function (the brain doing stuff to itself, experiencing itself, and experiencing being done-unto by itself). Calling the-having-of-subjective-experiences "consciousness" feels a little bit like calling the quantum nondeterminism of an electron its "free will". (In that case too, I would say that free will properly speaking is just a functional characteristic, nothing really metaphysical at all; a function highly analogous to access consciousness, in fact). So when you ask "why does consciousness need to be another characteristic of these things?", I'd say it's just not -- access consciousness is just a complex function built up from perfectly ordinary physical functions, and phenomenal consciousness is just the subjective experience of being done-unto in ordinary physical ways.

I find it especially interesting to combine this with Whitehead's ontology of "occasions of experience". Given the foregoing model of both behavior and experience being just different perspectives on interactions (the perspective of the subject and the perspective of the object), those being just ordinary physical interactions barring any reason to think there's any other kind, all of which boil down ultimately to exchanges of gauge bosons, I think it's justified to literally identify Whiteheadian "occasions of experience" with those bosons. Occasions of visual experience? The literal photons hitting your eye. Occasions of auditory or tactile experience? The literal photons mediating electrostatic repulsion between you and the air / whatever you're touching. Occasions of olfactory or gustatory experience? The literal photons mediating the chemical interactions between your taste buds / olfactory bulbs and whatever you're tasting or smelling. This unifies "materialism" and "idealism" into a single physicalist phenomenalism, quite parallel to the unification of functionalism and panpsychism we already seem to agree on.

All we are directly aware of, the fundamental building blocks of the reality we know, are the occasions of subjective experience we are subject to, but those are identical to the physical particles we are interacting with, and all the rest of physics as necessary to explain why those patterns of particles interact with us that way is implied by the experiencing of those patterns of experience. But we ourselves are not some kind of special entity beyond the world we experience, but just another arrangement of the same kind of stuff that we're experiencing, all of which in turn has some kind of experience itself.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

TrlstanC
Flexo
Posts: 365
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Fri Nov 10, 2017 2:41 pm UTC

Pfhorrest wrote:I wouldn't say that (phenomenal) consciousness is the subjective experience of a physical force; it's not like, as in your example, magnetism is consciousness, or anything like that.


Maybe it is, maybe it isn't? The fact that we have no idea what causes conscious experiences makes it hard to say either way. I just thought that Searle's talk about the difference between ontological and epistemic meaning was an interesting way to look at the question.

Pfhorrest wrote:The literal photons hitting your eye. Occasions of auditory or tactile experience? The literal photons mediating electrostatic repulsion between you and the air / whatever you're touching. Occasions of olfactory or gustatory experience? The literal photons mediating the chemical interactions between your taste buds / olfactory bulbs and whatever you're tasting or smelling. This unifies "materialism" and "idealism" into a single physicalist phenomenalism, quite parallel to the unification of functionalism and panpsychism we already seem to agree on.


Are you saying that consciousness happens when photons actually hit the eye? That seems unlikely to be true, given that there's a lot of information in our visual field that we're not conscious of and/or that we actually get wrong. For example, all kinds of optical illusions are caused by the way the neurons in the eye and nervous system process information, but we're not consciously aware of all those steps; we're only aware of the experience at some later step.

Pfhorrest
Posts: 3898
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Fri Nov 10, 2017 5:18 pm UTC

I think you are overlooking the distinction I keep emphasizing between phenomenal and access consciousness. Photons hitting your eye are not access consciousness, which is the real, important sense of consciousness, the one that meshes with our everyday use of the term as you just used it ("consciously aware"). But so-called phenomenal consciousness (which is what Searle et al. seem interested in) is just (on my account) the experience of being a thing interacting with other things, and gauge bosons like photons are the constituent elements of all such interactions, and so the "occasions of experience" in the sense Whitehead uses that phrase.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

SuicideJunkie
Posts: 152
Joined: Sun Feb 22, 2015 2:40 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby SuicideJunkie » Fri Nov 10, 2017 6:28 pm UTC

Pfhorrest wrote:You can hand an actual Chinese speaker a picture of a duck on a lake and ask him (in Chinese) "What kind of bird is on the water?" and he can answer; the room cannot, because while it contains a person with eyes that can see the picture, and that person (with his books) could tell you (the Chinese equivalent of) that ducks are a kind of bird and a lake is a body of water, he cannot connect the words for "duck", "bird", "lake", or "water" to the images in the picture.
I imagine the answer from the Chinese Room would be "Sorry, I've been blind since birth. Can you describe it for me?"
You'd get the same problems from an actual Chinese speaker who happened to be as blind as the Room. (While it does contain physical eyes in most constructions, they're a mechanism for thinking, not for seeing the environment)

In order to properly add vision to the Chinese Room, you'd have to put vastly more work into the instructions, and probably pass the image in as a bitmap to correspond with the existing speech-as-text input channel.
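Something like this toy sketch, which is entirely my own invention (the hanzi placeholders and rulebook entry are made up), shows the idea: the bitmap arrives as just another stream of symbols, keyed into the same rulebook as the text channel.

[code]
# Toy "Chinese Room with a bitmap channel"; all entries are invented placeholders.
# The point: an image input is just more symbols for the operator to match
# against the instructions; nothing in the lookup amounts to "seeing".

DUCK_BITMAP = ((0, 1, 1, 0),
               (1, 1, 1, 1))        # stand-in for a camera bitmap

RULEBOOK = {
    ("<hanzi: what bird is on the water?>", DUCK_BITMAP): "<hanzi: it is a duck>",
}

def room(utterance, image=None):
    # the operator matches the (text, pixels) pair against the book
    return RULEBOOK.get((utterance, image), "<hanzi: please say that again>")

print(room("<hanzi: what bird is on the water?>", DUCK_BITMAP))
# -> <hanzi: it is a duck>
[/code]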

TrlstanC
Flexo
Posts: 365
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Fri Nov 10, 2017 8:07 pm UTC

Pfhorrest wrote:I think you are overlooking the distinction I keep emphasizing between phenomenal and access consciousness.


I've just never seen a good argument for why it should be called "access consciousness" as opposed to "access to consciousness". I think everyone would agree that phenomenal consciousness is consciousness, or more accurately might just say that it's the only kind of consciousness.

Take the experiment discussed here that looks at the difference between the sharpness of experience vs. the ability to name and report the parts of that experience:

Block's definitions of these two types of consciousness leads us to the conclusion that a non-computational process can present us with phenomenal consciousness of the forms of the letters, while we can imagine an additional computational algorithm for extracting the names of the letters from their form (this is why computer programs can perform character recognition). The ability of a computer to perform character recognition does not imply that it has phenomenal consciousness or that it need share our ability to be consciously aware of the forms of letters that it can algorithmically match to their names.


I understand it as saying that the "additional computational algorithm for extracting the names" is what would be called "access consciousness"? But that doesn't sound like consciousness to me. I might be consciously aware of the output of that algorithm, but I'm not aware of how it works, i.e. it's not part of my consciousness.
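For what it's worth, here is a minimal sketch (my own, with made-up three-by-three glyphs) of the kind of name-extracting algorithm the quoted passage describes: character recognition by naive template matching, which outputs the name of a letter without any claim to being aware of its form.

[code]
# Naive template-matching "character recognition": map a glyph to a letter name.
TEMPLATES = {
    "T": ("###",
          ".#.",
          ".#."),
    "L": ("#..",
          "#..",
          "###"),
}

def name_letter(glyph):
    # count matching cells against each stored template; return the best-fitting name
    def score(template):
        return sum(a == b
                   for g_row, t_row in zip(glyph, template)
                   for a, b in zip(g_row, t_row))
    return max(TEMPLATES, key=lambda name: score(TEMPLATES[name]))

print(name_letter(("###",
                   ".#.",
                   ".#.")))        # -> T
[/code]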

Pfhorrest
Posts: 3898
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Fri Nov 10, 2017 9:04 pm UTC

Regardless of the aptness of the names, the meaning of what I said before hinges on distinguishing between the things they named. Let's call them something else and just leave the word "consciousness" out of it to avoid confusion. There are two concepts: one of them is the what-it's-like experience of being a thing of some kind (call that "P"), and the other is a kind of reflexive functionality that gives a thing access to information about itself (call that "A"). These can, of course, both apply at once, and in the case of humans almost uncontroversially do: there is a what-it's-like-to-be-a-brain-with-reflexive-functionality-like-access-to-information-about-itself experience.

What I was saying before was that the photons, constituting as they do the elementary components of all the interactions anything is undergoing, likewise constitute the elementary components of the what-it's-like experience of being something undergoing that. But that doesn't mean that we have reflexive access to the information about every interaction with every photon we're undergoing.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

TrlstanC
Flexo
Posts: 365
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Fri Nov 10, 2017 10:02 pm UTC

Pfhorrest wrote:But that doesn't mean that we have reflexive access to the information about every interaction with every photon we're undergoing.

Ahhh, ok! I get what you're saying; you're right, I was getting hung up on the name. And I think that's a really interesting point, because it raises some good questions about consciousness. If we start with a bit of a recap:

  • Consciousness is a physical process that happens in the brain
  • Since the brain is made up of neurons, their structure undoubtedly plays a role in either how we experience consciousness or what we experience
  • Neurons are made up of molecules, which are made up of atoms, which are made up of subatomic particles, etc.
  • Unless consciousness is a physical characteristic that's completely unlike any other physical thing, there's something about the nature of all those particles that creates consciousness
  • But there's nothing unique about the particles in my brain, they're the same kinds of particles that make up everything else
  • Question One: where do the particles that create my mind/conscious experience end and the non-my-consciousness particles that make up the rest of the world start?
  • Even if we assume that some particular large scale structure of the brain is required for consciousness (as Searle seems to do), we can just rearrange the question a bit
  • We don't have conscious access to information about every photon our brains are interacting with, or even every neuron, and there are even large parts of our brain doing things that we don't have access to
  • But a defining characteristic of our consciousness is that it's unified. We don't experience separate consciousnesses happening in different parts of our brain, or even a graininess or "pixelation" of a consciousness made up of many smaller parts; we also don't experience an edge or limitation to our consciousness, we can experience as much information as can get in.
  • Question Two: At some point there's a neuron that's part of the system creating my consciousness, which is connected to a neuron that's not. Where is this boundary, and why can't we find it?
  • Question Two Rephrased: What keeps my consciousness, which is a unified creation of a bunch of neurons, from "leaking" into other nearby neurons? Or at a more basic level, how can I have access to the consciousness created by the atoms in my brain, but not the atoms in my skull or eyes that are interacting with them?

The easy way to ignore these questions is to reject the conclusion of the Chinese Room, but I can't find any reasonable way to do that that doesn't raise even more difficult questions. If minds can't be (matter-independent) programs, then minds have to be dependent on the characteristics of the matter they're made of, but they also seem to work in ways unlike anything else we've observed.

Pfhorrest
Posts: 3898
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Fri Nov 10, 2017 10:33 pm UTC

If I understand you correctly, I think both of those questions are essentially questions about the nitty-gritty details of neurology. I don't have access to information about the state of a random calcium ion in my left femur because there is no structure that conveys that information to the part of my brain that I am having the experience of being. The boundary between that part of my brain and the rest of my brain or the rest of my body beyond that is just defined by whatever the physical neurology happens to be.

Kind of taking a step back, an image I hold in my mind to help understand this model goes something like this. Draw a diagram of every interaction between everything, represented by arrows flowing between points. The arrows flowing out of a point are the behaviors of that thing, which constitute all of its objective properties: a thing just is what it does. The arrows flowing into a point are the experiences of that thing, the qualia or the like. The arrows are essentially describing the flow of information, and the map of in-arrows to out-arrows of a point describes that thing's function: how it behaves in response to different experiences. Every kind of thing can be mapped this way, but most things have non-reflexive functions: something does something to them, they experience that and do something to something else in response.

Where it gets interesting is when a thing's function becomes reflexive, especially in particular ways. A human brain's function is highly reflexive, to the point that most of the arrows coming in or out of it bend around back to itself, and so our experience is not just of the world, but largely of ourselves, including how we're experiencing and reacting to the world; and likewise much of our (mental) behavior is not upon the world directly, but upon ourselves, changing our own state so that we react differently to our experience of the world. That's the interesting thing about human consciousness. If you stripped away that self-awareness and self-control and simplified the diagram down to just input from experience of the world leading straight to behavioral outputs to the world, you'd strip away everything that makes us "conscious" in an interesting way, even though there would still be some technical experience of the world, which we would be completely unaware we were having.
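As a toy rendering of that diagram (my own construction, not anything from the post): a thing is just a function from in-arrows ("experience") to out-arrows ("behavior"), and reflexivity means some of the output is routed back into the thing's own state.

[code]
def rock(experience):
    # non-reflexive: an in-arrow maps straight to an out-arrow
    return ("radiates heat",) if experience == "sunlight" else ()

def brain(experience, state):
    # reflexive: part of the output acts on the thing's own state, so its
    # later experience includes the effects of its own earlier behavior
    if experience == "loud noise":
        state["mood"] = "wary"
    outward = ("flinch",) if state["mood"] == "wary" else ()
    return outward, state

state = {"mood": "calm"}
print(rock("sunlight"))                # ('radiates heat',)
print(brain("loud noise", state))      # (('flinch',), {'mood': 'wary'})
print(brain("sunlight", state))        # still wary, from its own prior state change
[/code]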
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

ucim
Posts: 5564
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby ucim » Fri Nov 10, 2017 11:09 pm UTC

TrlstanC wrote:Consciousness is a physical process that happens in the brain
No. Consciousness is the result of a physical process that happens in the brain. Important difference. Consciousness is not a process.

TrlstanC wrote:Unless consciousness is a physical characteristic that's completely unlike any other physical thing, there's something about the nature of all those particles that creates consciousness
Consciousness doesn't come from the particles, it comes from the relationship between the particles. As a (simple) analogy, an orbit can't exist without two particles to be in orbit around each other. But the orbit is not embodied in the particles; it is a relationship between two (specific) particles. All particles have the capability of being in such a relationship, but not all of them are, and there's nothing intrinsically different about the particles that are not.

TrlstanC wrote:Question Two: At some point there's a neuron that's part of the system creating my consciousness, which is connected to a neuron that's not. Where is this boundary, and why can't we find it?
Neurons don't create consciousness. The relationship between neurons is what does it. The boundary, if there is one, would be in whether these neurons are in that relationship with those neurons. This is a subject of neurology that is being explored, but we don't have much yet to go on.

As to the Chinese room, the problem with it is that consciousness embodies experience, and the Chinese room does not have the experience of the things it's conversing about. It might know the Chinese character for baseball, but it has never tasted a hot dog, never heard the roar of the crowd, and has never played ball with anybody. It doesn't grok baseball. Figure out how to get the Chinese room to experience baseball and you'll be well on your way towards convincing me there's no difference. But until then, that's a world of difference right there.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

TrlstanC
Flexo
Posts: 365
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Sat Nov 11, 2017 1:59 am UTC

I don't think I'd be confident drawing any conclusions about what consciousness is or isn't, given how little we know about it; from an objectively epistemic perspective we know essentially nothing.

ucim wrote:No. Consciousness is the result of a physical process that happens in the brain. Important difference. Consciousness is not a process.


"Process" is a pretty vague category of things, virtually anything that changes and has some result could be a process of some sort.

ucim wrote:Consciousness doesn't come from the particles, it comes from the relationship between the particles.


Maybe? But then the orbit example sounds a lot like a process to me. And of course, a lot of things are the result of a relationship between particles. Or, from another perspective, nothing exists by itself: what qualities does any particle have that aren't defined by its interaction or relationship with something else?

ucim wrote:Neurons don't create consciousness. The relationship between neurons is what does it.


Again, how can we be sure? Maybe there's a single consciousness-causing neuron in all of us, and everything else just feeds it information? That's a testable hypothesis, but we can't say for sure whether it's true or not, because we don't know enough about consciousness.

ucim wrote:Figure out how to get the Chinese room to experience baseball and you'll be well on your way towards convincing me there's no difference.


Maybe someone wants to try and convince you there's no difference, but the point Searle was making (and I agree with) is that there is a difference.

Pfhorrest
Posts: 3898
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Sat Nov 11, 2017 2:39 am UTC

FWIW my position, that I thought Tristan agreed with, is that the Room WOULD be no different from a conscious being IF IT WERE functionally the same (which it’s not), because anything beyond mere functionality that is needed for consciousness is something shared by all things, not something special about human brains.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

TrlstanC
Flexo
Posts: 365
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Sat Nov 11, 2017 1:05 pm UTC

I would say that even if the Chinese Room were functionally equivalent to a person, it wouldn't be the same as a human, for two reasons:

  • Defining what counts as the room is somewhat arbitrary. If we can't even figure out what parts of a human are necessary for consciousness, then I don't think we could say what parts of a room were either.
  • The Chinese Room argument concludes that programs are not minds, and while consciousness is required for a mind, I'm not sure it's sufficient. I could imagine something that accomplished the same thing as a mind, and was also conscious, but wasn't equivalent.

doogly
Dr. The Juggernaut of Touching Himself
Posts: 5212
Joined: Mon Oct 23, 2006 2:31 am UTC
Location: Somerville, MA

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby doogly » Fri Nov 17, 2017 3:31 pm UTC

The Chinese room argument is the purest form of begging the question.
LE4dGOLEM: What's a Doug?
Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood.

Keep waggling your butt brows Brothers.
Or; Is that your eye butthairs?

Zamfir
I built a novelty castle, the irony was lost on some.
Posts: 7302
Joined: Wed Aug 27, 2008 2:43 pm UTC
Location: Nederland

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Zamfir » Fri Nov 17, 2017 7:56 pm UTC

I have never understood how the guy is supposed to change anything about the argument. He's clearly crucial: if we imagine a silicon CPU in the room instead of the guy, it's not The Chinese Room Argument anymore, it's another argument. But if the guy were only pedalling to provide power to the room, then that surely is the same non-Chinese-Room argument, with the guy as superfluous decoration. What if he's replacing a single connection in the CPU, by pushing a button whenever a light comes on? Does it matter whether the button pusher speaks Chinese?

Supposedly, there is some point where he starts to matter, where his presence makes a difference compared to a thought experiment about a generic silicon machine that somehow passes a Turing test. But I don't see it, at all. He just seems a superfluous distraction all the way.

Sizik
Posts: 1156
Joined: Wed Aug 27, 2008 3:48 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Sizik » Sat Nov 18, 2017 1:46 am UTC

Taking the thought experiment at face value, it only demonstrates that CPU chips don't "understand" Chinese, even if they're running a Chinese-speaking strong AI program that passes the Turing test.
gmalivuk wrote:
King Author wrote:If space (rather, distance) is an illusion, it'd be possible for one meta-me to experience both body's sensory inputs.
Yes. And if wishes were horses, wishing wells would fill up very quickly with drowned horses.

ucim
Posts: 5564
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby ucim » Sat Nov 18, 2017 3:42 am UTC

Neurons don't understand Chinese either, even when they are part of a brain belonging to a native Chinese speaker.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

Pfhorrest
Posts: 3898
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Sat Nov 18, 2017 3:53 am UTC

I think a more illustrative modification to the thought experiment is to imagine that the guy in the room just memorizes his rulebooks, and then you let him out of the room. I at least would say that the guy still does not speak Chinese, because although he knows the relations between a bunch of hanzi, he doesn't know what any single hanzi means, in terms of the phenomenal world. They are all just empty symbols to him. That is why the room also does not speak Chinese. Not because of anything metaphysically wrong with the substrate running the program, but because the program is itself deficient. Those rule books wouldn't teach a human Chinese, so why would we expect them to teach it to a (manually executed) computer?
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

ucim
Posts: 5564
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby ucim » Sat Nov 18, 2017 4:42 am UTC

If the guy "memorized the rulebooks" and wanted a hot dog, could he order one in Chinese? If the waiter said that they didn't have any, would the person be able to "apply the rulebooks" and figure out what he said?

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

Pfhorrest
Posts: 3898
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Sat Nov 18, 2017 4:59 am UTC

ucim wrote:If the guy "memorized the rulebooks" and wanted a hot dog, could he order one in Chinese?

As I recall Searle's setup, nope, because the guy has no means by which to connect his feeling of want for a hot dog to any hanzi (that's Chinese characters in case not everyone knows that). He only knows that he's supposed to reply to certain strings of hanzi with certain other strings of hanzi according to certain rules, but those hanzi don't connect to anything else besides other hanzi. And that's why I think Searle's thought experiment only proves something trivial (syntax is not semantics) and not the substantial thing he wanted to prove (computers can't think). If the books the guy had did include means to connect hanzi to other things, to give referents to the symbols -- like if he had picture books and speak-and-spell books and scratch-and-sniff books and whatever -- then I would say that to memorize those books just would be to have learned Chinese, and the Room (with the guy in it having to look a bunch of things up in his non-memorized books) as a whole does understand Chinese, and so would a computer (with appropriate artificial senses available to it) running the same program as laid out in the books as well.
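To caricature that contrast in code (entirely my own sketch; every entry is an invented placeholder): in Searle's rulebooks, symbols point only at other symbols, while the imagined picture and speak-and-spell books let symbols point at non-symbolic referents.

[code]
DUCK_PIXELS = ((0, 1, 1, 0), (1, 1, 1, 1))   # stand-in for camera input
QUACK_AUDIO = (0.1, -0.3, 0.2)               # stand-in for microphone samples

# Searle's rulebook: hanzi map only to other hanzi (pure syntax)
syntax_only = {"<hanzi A>": "<hanzi B>"}

# the picture/speak-and-spell books: hanzi map to non-symbols as well
grounded = {"<hanzi: duck>": {"image": DUCK_PIXELS, "sound": QUACK_AUDIO}}

def has_referents(book):
    # on this account, semantics begins where a symbol connects to
    # something that is not just another symbol in the same book
    return any(not isinstance(v, str) for v in book.values())

print(has_referents(syntax_only))   # False: symbols all the way down
print(has_referents(grounded))      # True: symbols tied to sensory referents
[/code]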

ucim wrote:If the waiter said that they didn't have any, would the person be able to "apply the rulebooks" and figure out what he said?

He would probably understand that the waiter was saying the negation of the question he had asked, since that's a purely syntactic relation, but he wouldn't know what the question he had asked meant (and so, I guess, wouldn't have been able to ask it, since he wouldn't know what to say to convey his desire for a hot dog).
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

morriswalters
Posts: 6899
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby morriswalters » Sat Nov 18, 2017 12:11 pm UTC

If the guy "memorized the rulebooks" and wanted a hot dog, could he order one in Chinese? If the waiter said that they didn't have any, would the person be able to "apply the rulebooks" and figure out what he said?
Can your smart phone learn Chinese by using Google Translate?

Zamfir
I built a novelty castle, the irony was lost on some.
Posts: 7302
Joined: Wed Aug 27, 2008 2:43 pm UTC
Location: Nederland

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Zamfir » Sat Nov 18, 2017 1:21 pm UTC

Pfhorrest wrote:I think a more illustrative modification to the thought experiment is to imagine that the guy in the room just memorizes his rulebooks, and then you let him out of the room.

Human beings just cannot memorize large (computer-sized) amounts of precise, context-free data. Let alone perform exact operations on them in their head. Phone numbers already strain our capacity in this regard.

The Chinese Room is at least conceivable, apart from the glacial speed, if "glacial" is the right word: the room might, perhaps, manage something like an operation per minute. That's a million years for every second of a desktop computer. Our hypothetical Turing-test AI program could be much more demanding than that desktop can handle. The remaining lifetime of the sun might well be too short for the Chinese Room to formulate a single answer.

Such considerations are more than technicalities, I think. The guy and the room and the "rulebooks" are appeals to our intuition, but we do not have much intuition here. We find it misleadingly easy to imagine someone who learns a book by heart, and then has conversations he doesn't understand by applying rules from the book.
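As a back-of-envelope check of that timescale (both rates below are my assumptions, not figures from the thread): at one operation per minute, matching a desktop that peaks somewhere around 5×10^11 operations per second does come out to roughly a million years of room time per desktop second.

[code]
SECONDS_PER_YEAR = 3.15e7
room_ops_per_sec = 1 / 60       # assumed: "an operation per minute"
desktop_ops_per_sec = 5e11      # assumed peak throughput, all cores and SIMD

# room time needed to match a single second of desktop work
room_seconds = desktop_ops_per_sec / room_ops_per_sec
print(room_seconds / SECONDS_PER_YEAR)   # ~9.5e5, about a million years
[/code]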

