Can a machine be conscious?

Please compose all posts in Emacs.

Moderators: phlip, Moderators General, Prelates

User avatar
phillipsjk
Posts: 1213
Joined: Wed Nov 05, 2008 4:09 pm UTC
Location: Edmonton AB Canada
Contact:

Re: Can a machine be conscious?

Postby phillipsjk » Mon Jul 06, 2009 5:07 pm UTC

How about this for a working definition:
An entity is conscious if it has a drive for self-preservation.


Of course, such a definition would make your computer conscious when it shuts down to prevent overheating due to a fan failure.
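
(To show how cheap such a "drive" is to fake, here is a toy Python sketch. The temperature readings are simulated and the threshold is invented; on Linux a real read might parse /sys/class/thermal instead.)

Code:
import itertools
import time

SHUTDOWN_TEMP_C = 95.0  # invented critical threshold
simulated_temps = itertools.count(start=40.0, step=15.0)  # the fan has failed

def thermal_watchdog():
    # The entire "drive for self-preservation" is this loop.
    for temp in simulated_temps:
        if temp >= SHUTDOWN_TEMP_C:
            print("Overheating: shutting down to save myself.")
            return  # a real machine would power off here
        time.sleep(0.1)

thermal_watchdog()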

Asimov's third law of robotics would make them conscious as well.

Edit: by this definition, plants would be conscious as well (but not intelligent). For example, trees grow toward the sun and drop their leaves in the winter. Both actions probably rely on "simple" triggers that require the tree to take action to save itself. There is little evidence that plants are capable of complex thought.
Last edited by phillipsjk on Fri Jul 10, 2009 2:22 am UTC, edited 1 time in total.
Did you get the number on that truck?

McLiarpants
Posts: 9
Joined: Fri Jun 05, 2009 6:15 pm UTC

Re: Can a machine be conscious?

Postby McLiarpants » Tue Jul 07, 2009 3:29 pm UTC

I have quite enjoyed the discussion on consciousness. 0xBADFEED makes some very good points. I'm still formulating my own thoughts on the subject, but I wonder if somewhere in your definition of consciousness we should add the ability to perform C-Abilities independently of outside influence? That is, the ability to move through 0xBADFEED's list of C-Abilities without another, similarly-able entity giving or forcing the next step.

I'm taking this somewhat from a discussion I once heard about AI involving Einstein. I searched but could not reproduce the argument Einstein used, but the case against the possibility of machine consciousness was based on the fact that someone must always feed a machine rules. All software is, at its core, if-then statements (or more accurately, can be re-formulated into if-then statements; we use other forms for efficiency or elegance). So Einstein asks whether something that is always told what to do can actually be conscious (or artificially intelligent. Consciousness and intelligence are somewhat related, in my mind. See below).
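
(To illustrate the "everything reduces to if-then" claim, here is a toy re-formulation in Python - a sketch, not a proof; recursion stands in for the jump.)

Code:
def countdown_loop(n):
    while n > 0:   # the convenient "loop" form
        print(n)
        n -= 1

def countdown_ifs(n):
    if n > 0:      # the same logic as a bare if-then plus a jump
        print(n)
        countdown_ifs(n - 1)

countdown_loop(3)   # prints 3 2 1
countdown_ifs(3)    # prints 3 2 1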

I see it like this. A machine could potentially move through the steps of 0xBADFEED's C-Abilities. However, to do so, someone (the programmer) must tell it how to learn. The computer must be told to save an experience in memory. It must be told to go look for memories when its sensory equipment detects a new experience. Even if we give the computer a lot of rules, it will eventually encounter something it doesn't have a rule for. In this case, if the computer is not told to go analyse a bunch of other inputs, experiences, and rules and come up with an answer, it will not know what to do. I would argue that conscious entities do this automatically.
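
(Here is a minimal sketch of the kind of rule-follower I mean - every step, including the "remembering" and the fallback, had to be spelled out by the programmer. All names invented.)

Code:
class RuleFollower:
    def __init__(self, rules):
        self.rules = rules   # programmer-supplied stimulus -> action table
        self.memory = []     # it only remembers because we told it to

    def perceive(self, stimulus):
        self.memory.append(stimulus)      # "save the experience"
        if stimulus in self.rules:
            return self.rules[stimulus]   # follow the given rule
        return "freeze"                   # no rule: nobody told it what to do

agent = RuleFollower({"hot": "withdraw", "food": "eat"})
print(agent.perceive("hot"))    # -> withdraw
print(agent.perceive("loud"))   # -> freeze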

But perhaps I'm confusing intelligence with consciousness. However, I don't think the two are as separate as we might think. Sure, an entity can be conscious without a great deal of intelligence. But I would say that intelligence is a prerequisite for consciousness. If this is the case, when asking about the consciousness potential of machines, we must ask if they can be made intelligent (btw, I would not define intelligence as the ability to do complicated tasks quickly. I hesitate to suggest a definition without more thought on my part, but I might say intelligence is the potential to learn. Therefore, computers would be smart, but not intelligent).

In conclusion, I would argue that because computers (or machines) must always be given rules they cannot achieve intelligence. Since they cannot achieve intelligence, they cannot become conscious.

0xBADFEED
Posts: 687
Joined: Mon May 05, 2008 2:14 am UTC

Re: Can a machine be conscious?

Postby 0xBADFEED » Wed Jul 08, 2009 4:00 am UTC

McLiarpants wrote:But perhaps I'm confusing intelligence with consciousness. However, I don't think the two are as separate as we might think. Sure, an entity can be conscious without a great deal of intelligence. But I would say that intelligence is a prerequisite for consciousness. If this is the case, when asking about the consciousness potential of machines, we must ask if they can be made intelligent (btw, I would not define intelligence as the ability to do complicated tasks quickly. I hesitate to suggest a definition without more thought on my part, but I might say intelligence is the potential to learn. Therefore, computers would be smart, but not intelligent).

It seems like several people are talking about "intelligence" and trying to draw a distinction between "intelligence" and "consciousness". While I agree that "consciousness" and "intelligence" are two distinct concepts, it's not clear to me that they're really qualitatively different.

When we say that something is "conscious" or "sentient" we are making a statement about its "intelligence". We are saying that it exhibits a level of intelligence that is above some very vaguely defined threshold. Right now, lacking any clear definition, it's very much an "I know it when I see it" threshold. Systems above the threshold we call "conscious" and those below it we don't.

We can distinguish between things that have different levels of "intelligence", say state of the art AI software vs. a common finch. I would say the finch is orders of magnitude more intelligent than today's most impressive AI systems, given the variety and complexity of situations that it can handle. I would even say it sits somewhere slightly above the "consciousness" threshold. I'm not convinced though that systems above and below the threshold are really so dissimilar. If we follow the "consciousness" progression down from the finch, to a goldfish, to a beetle, to a fluke, etc. eventually we hit a point where we are no longer comfortable calling the system conscious but it still exhibits many indicators of intelligence. The main point I'm trying to make is that it's not a question of qualitative, but quantitative difference. It's merely a matter of degree.
McLiarpants wrote:I see it like this. A machine could potentially move through the steps of 0xBADFEED's C-Abilities. However, to do so, someone (the programmer) must tell it how to learn. The computer must be told to save an experience in memory. It must be told to go look for memories when its sensory equipment detects a new experience. Even if we give the computer a lot of rules, it will eventually encounter something it doesn't have a rule for. In this case, if the computer is not told to go analyse a bunch of other inputs, experiences, and rules and come up with an answer, it will not know what to do.

Who's to say our behavior isn't also completely programmed? I don't see any real difference between a system that has been programmed by millions of years of evolution, versus a system that has been programmed by a computer scientist. Human brains also have big problems when they encounter a truly new or unexpected situation. Usually we know it as "panic". And one of the most common responses to panic is to just freeze, especially if the situation is life-threatening.
McLiarpants wrote:I would argue that conscious entities do this automatically.

It's not entirely clear that humans aren't also incredibly complex automatons. The mind, being a physical system, is subject to the laws of physics like anything else. In a purely reductionist approach it's difficult to see how and where free will is introduced. I'm not sure what I believe on this front.

somebody already took it
Posts: 310
Joined: Wed Jul 01, 2009 3:03 am UTC

Re: Can a machine be conscious?

Postby somebody already took it » Wed Jul 08, 2009 6:16 am UTC

0xBADFEED wrote:
somebody already took it wrote:I don't think there is a way to establish that it is true without knowing what feelings are.
How would you argue against a theory where feelings are considered other-worldly?
For instance, what if the physical world is tapped into or even constructed by feelings?
Feelings might not be willing to inhabit/create nonbiological brains.

You keep referring to "feelings". I'm not sure what you mean by this. I personally don't find anything special about "feelings" that requires or even points to an other-worldly or magical explanation. Which of these "feelings" seem "other-worldly" or magical to you?
* I am hungry.
* I am tired.
* I am horny.
* I am sad.
* I am confused.
* I am in love.
Or can you give me an example so I might know where you're coming from?

What I'm referring to as feelings are sorta along the lines of qualia (http://en.wikipedia.org/wiki/Qualia). I'm not saying that feelings are necessarily other-worldly, but only that I don't see any reason to assume they are of this world (Is that the same as not existing?).
Also, I think in this case "other-worldliness" might be better expressed as "being outside the realm of human reasoning."
To answer your last question, I think all of those feelings have the possibility of being other-worldly/outside the realm of human reasoning.

User avatar
LuNatic
Posts: 973
Joined: Thu Nov 20, 2008 4:21 am UTC
Location: The land of Aus

Re: Can a machine be conscious?

Postby LuNatic » Wed Jul 08, 2009 10:12 am UTC

0xBADFEED wrote:The brain, being a physical system, is subject to the laws of physics like anything else. In a purely reductionist approach it's difficult to see how and where free will is introduced. I'm not sure what I believe on this front.


FTFY. The brain is just a chemical computer, a complex series of reactions that ultimately just boil down to cause and effect. I cannot reconcile that the mind and brain are one and the same, however. There are thousands of functions that my brain takes care of that my mind does not perceive. I don't know how to make my heart beat. I don't know how to control the chemical balance of my body. My brain handles all these things, yet my mind is aware of none.
Cynical Idealist wrote:
Velict wrote:Good Jehova, there are cheesegraters on the blagotube!

This is, for some reason, one of the funniest things I've read today.

User avatar
OOPMan
Posts: 314
Joined: Mon Oct 15, 2007 10:20 am UTC
Location: Cape Town, South Africa

Re: Can a machine be conscious?

Postby OOPMan » Wed Jul 08, 2009 11:00 am UTC

LuNatic wrote:
0xBADFEED wrote:The brain, being a physical system, is subject to the laws of physics like anything else. In a purely reductionist approach it's difficult to see how and where free will is introduced. I'm not sure what I believe on this front.


FTFY. The brain is just a chemical computer, a complex series of reactions that ultimately just boil down to cause and effect. I cannot reconcile that the mind and brain are one and the same, however. There are thousands of functions that my brain takes care of that my mind does not perceive. I don't know how to make my heart beat. I don't know how to control the chemical balance of my body. My brain handles all these things, yet my mind is aware of none.


Erm, who's saying Mind === Brain?

Mind arises from Brain, sure.

Mind === Brain? False

User avatar
LuNatic
Posts: 973
Joined: Thu Nov 20, 2008 4:21 am UTC
Location: The land of Aus

Re: Can a machine be conscious?

Postby LuNatic » Wed Jul 08, 2009 1:16 pm UTC

OOPMan wrote:
LuNatic wrote:
0xBADFEED wrote:The brain, being a physical system, is subject to the laws of physics like anything else. In a purely reductionist approach it's difficult to see how and where free will is introduced. I'm not sure what I believe on this front.


FTFY. The brain is just a chemical computer, a complex series of reactions that ultimately just boil down to cause and effect. I cannot reconcile that the mind and brain are one and the same, however. There are thousands of functions that my brain takes care of that my mind does not perceive. I don't know how to make my heart beat. I don't know how to control the chemical balance of my body. My brain handles all these things, yet my mind is aware of none.


Erm, who's saying Mind === Brain?

Mind arises from Brain, sure.

Mind === Brain? False


Refer to what I quoted. He said the mind is a physical system. I can only assume he was talking about the brain, as there is currently no scientific definition of the mind.
Cynical Idealist wrote:
Velict wrote:Good Jehova, there are cheesegraters on the blagotube!

This is, for some reason, one of the funniest things I've read today.

0xBADFEED
Posts: 687
Joined: Mon May 05, 2008 2:14 am UTC

Re: Can a machine be conscious?

Postby 0xBADFEED » Wed Jul 08, 2009 1:41 pm UTC

LuNatic wrote:Refer to what I quoted. He said the mind is a physical system. I can only assume he was talking about the brain, as there is currently no scientific definition of the mind.

Yes, brain is what I meant and I'm aware of the distinction. It was late and I misspoke.

User avatar
LuNatic
Posts: 973
Joined: Thu Nov 20, 2008 4:21 am UTC
Location: The land of Aus

Re: Can a machine be conscious?

Postby LuNatic » Thu Jul 09, 2009 6:05 am UTC

My bad then. Disregard above :D
Cynical Idealist wrote:
Velict wrote:Good Jehova, there are cheesegraters on the blagotube!

This is, for some reason, one of the funniest things I've read today.

0xBADFEED
Posts: 687
Joined: Mon May 05, 2008 2:14 am UTC

Re: Can a machine be conscious?

Postby 0xBADFEED » Fri Jul 10, 2009 12:08 am UTC

LuNatic wrote:My bad then. Disregard above

No problem. It's important to be as specific and precise as possible when talking about these very nebulous ideas and when using heavily overloaded terminology like "mind" and "consciousness".
somebody already took it wrote:What I'm referring to as feelings are sorta along the lines of qualia (http://en.wikipedia.org/wiki/Qualia). I'm not saying that feelings are necessarily other-worldly, but only that I don't see any reason to assume they are of this world (Is that the same as not existing?).
Also, I think in this case "other-worldliness" might be better expressed as "being outside the realm of human reasoning."
To answer your last question, I think all of those feelings have the possibility of being other-worldly/outside the realm of human reasoning.

Just so we're clear, when I say "other-worldly" I mean something in the realm of the supernatural. That is, something that is not bound by physical law.

I don't really subscribe to the idea of "qualia", or at least the part of the idea that points towards a supernatural definition of the mind. That something like the "feeling" of "redness" or "burning" cannot be suitably conveyed from one person to another doesn't suggest to me any supernatural basis for "feelings". It only suggests that our rather primitive methods of communication are insufficient to describe them.

Imagine we had an AI that displayed C-Abilities equal to those of humans. We'll assume somewhere in the depths of its programming the AI software has some kind of internal ability to detect buffer overflows, and that the perception percolates up to the higher level such that it is aware of them. I imagine it would be analogous to a human pricking his finger.

What if you were to ask it "What does a buffer overflow 'feel' like?"

Would the AI's inability to communicate or fully explain this "feeling" indicate that there is something supernatural about buffer overflows or the AI's ability to detect them?
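
(In case a concrete picture helps, here's a toy sketch of what I mean by "percolates up" - all names invented, and no claim about how a real AI would actually be built.)

Code:
import queue

percepts = queue.Queue()  # channel from low-level machinery to "awareness"

def low_level_write(buf, data, capacity):
    # Low-level layer: detects the fault and reports it upward,
    # roughly analogous to the pain signal from a pricked finger.
    if len(buf) + len(data) > capacity:
        percepts.put("buffer overflow")
        return buf
    return buf + data

def high_level_loop():
    # High-level layer: "aware" of the event, but all it can
    # communicate is a label, not what the event is "like".
    while not percepts.empty():
        print("I felt a", percepts.get())

low_level_write([0] * 7, [1, 2], capacity=8)
high_level_loop()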

McLiarpants
Posts: 9
Joined: Fri Jun 05, 2009 6:15 pm UTC

Re: Can a machine be conscious?

Postby McLiarpants » Fri Jul 10, 2009 3:11 pm UTC

0xBADFEED wrote:Imagine we had an AI that displayed C-Abilities equal to those of humans. We'll assume somewhere in the depths of its programming the AI software has some kind of internal ability to detect buffer overflows, and that the perception percolates up to the higher level such that it is aware of them. I imagine it would be analogous to a human pricking his finger.

What if you were to ask it "What does a buffer overflow 'feel' like?"

Would the AI's inability to communicate or fully explain this "feeling" indicate that there is something supernatural about buffer overflows or the AI's ability to detect them?


Aren't we talking about two different kinds of feeling? I was understanding the discussion to refer to emotions rather than sensory input. I may have misunderstood.

So I see this AI with C-Abilities sensing buffer overflows and describing how the overflow "feels" as a somewhat misleading example for the discussion. The ability to communicate tactile sensory inputs is a matter of selecting the proper word. So if I prick my finger and am asked how it feels, I need only find the right word. However, if I am asked how I feel about pricking my finger, well, that's altogether different.

I still feel like we need additional parameters in our definition of consciousness. Can we find evidence that entities displaying C-Abilities also display emotions? Fear, love, anger, etc? If all do, can we add the ability to display emotions to our definition? I'm trying to avoid saying "feel emotions." I think displaying them is a scientifically testable thing; feeling them may not be.

0xBADFEED
Posts: 687
Joined: Mon May 05, 2008 2:14 am UTC

Re: Can a machine be conscious?

Postby 0xBADFEED » Fri Jul 10, 2009 7:45 pm UTC

McLiarpants wrote:Aren't we talking about two different kinds of feeling? I was understanding the discussion to refer to emotions rather than sensory input. I may have misunderstood.
<...snip...>

I was specifically responding to the notion of "qualia". This notion doesn't make any distinction between "feeling" sensory input or "feeling" emotions, thus a distinction is not necessary. My main point with the AI example was to move the locus of perception from the human mind to an AI and to ask what ramifications the AI's ability to "feel" a buffer overflow would have for the notion of qualia.
McLiarpants wrote:I still feel like we need additional parameters in our definition of consciousness. Can we find evidence that entities displaying C-Abilities also display emotions? Fear, love, anger, etc? If all do, can we add the ability to display emotions to our definition? I'm trying to avoid saying "feel emotions." I think displaying them is a scientifically testable thing; feeling them may not be.

Suppose we had a human that for some reason did not have emotions. He can feel physical pain and other sensory input, but is never sad, happy, lustful, etc. Would he not be conscious? You might argue that his quality of life is less than that of a normal human, but I wouldn't feel comfortable calling him not-conscious. Or what about if he only had one emotional range, say a lust-spectrum. Does that make him more conscious? If emotions are necessary, then which emotions, how many, and to what degree are necessary for consciousness?

A_pathetic_lizardmnan
Posts: 69
Joined: Wed Jul 08, 2009 2:37 am UTC

Re: Can a machine be conscious?

Postby A_pathetic_lizardmnan » Sat Jul 11, 2009 5:15 am UTC

In this discussion, it has come up that no computer has declared itself conscious without being prompted. What would its motivation be to do so?

Also, does this work as a passable definition of consciousness?

Any self-contained system that has objectives, can take in and process information from its surroundings, and can work out the course of action most likely to accomplish those objectives.

For example, the thermostat example "wants" the room to be 26C.
It takes in information (the temperature)
It determines whether the temperature is above or below desired (processing)
It determines what will likely have the best effect (if the room is 30C, it determines that air conditioning will likely lower that)

Some may choose to add an aspect of learning behavior, though I believe that learning is not strictly necessary, as the learning subroutines (AKA neural connections forming and dying) are preprogrammed into brains and are therefore merely another level of complexity.
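
(The thermostat example above fits in a few lines of Python - the 26C target is as stated; the deadband is invented.)

Code:
def thermostat_step(current_temp, target=26.0):
    # Objective: it "wants" the room at the target temperature.
    error = current_temp - target   # take in and process information
    if error > 0.5:
        return "cool"   # the action most likely to reach the objective
    if error < -0.5:
        return "heat"
    return "idle"

print(thermostat_step(30.0))   # -> cool (the 30C case above)
print(thermostat_step(26.2))   # -> idle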

hammerkrieg
Posts: 16
Joined: Thu Jun 25, 2009 11:48 am UTC

Re: Can a machine be conscious?

Postby hammerkrieg » Sat Jul 11, 2009 1:45 pm UTC

McLiarpants wrote:
Imagine we had an AI that displayed C-Abilities equal to those of humans. We'll assume somewhere in the depths of its programming the AI software has some kind of internal ability to detect buffer overflows, and that the perception percolates up to the higher level such that it is aware of them. I imagine it would be analogous to a human pricking his finger.

What if you were to ask it "What does a buffer overflow 'feel' like?"


A strong AI's architecture would presumably be entirely neurally-based and not experience "buffer overflow".

I see a whole bunch of comments here working under the presumption that traditional programmatic approaches are adequate for genuine intelligence, when there is little evidence for that claim, and a lot against it. That is sooooo 70's.

In any case, it would be difficult to imagine how an agent could be said to exhibit rationality without emotion. Not only is machine emotion possible, it is necessary, I believe.

0xBADFEED
Posts: 687
Joined: Mon May 05, 2008 2:14 am UTC

Re: Can a machine be conscious?

Postby 0xBADFEED » Sat Jul 11, 2009 4:25 pm UTC

hammerkrieg wrote:A strong AI's architecture would presumably be entirely neurally-based and not experience "buffer overflow".

You're missing the point.

I'm aware that a strong AI would likely have a very different architecture from modern hardware/software (though, I'm not sure we can assume it will be neural).

Surely, an AI's internal experience is vastly different from that of a human. I was just trying to find something tangible that an AI might perceive but would have no hope of communicating "what it's like" to a human. Whether or not some future AI is actually capable of buffer overflows is beside the point. If you don't like the "buffer overflow" example then replace the "buffer overflow" with any other event that would fit these criteria, say a non-catastrophic hardware failure, whatever.

Again, this example was specific to the notion of qualia and whether or not it points to an other-worldly explanation of consciousness. Much of the notion of qualia rests on the fundamental problem that "feelings" are not communicable. I was merely asking what this situation would imply.

hammerkrieg wrote:I see a whole bunch of comments here working under the presumption that traditional programmatic approaches are adequate for genuine intelligence, when there is little evidence for that claim, and a lot against it. That is sooooo 70's.

I don't see where anyone is assuming this. The only thing I see people assuming is that the fundamental principles of computability will not change drastically, which I think is a rather safe assumption at the moment.

hammerkrieg wrote:In any case, it would be difficult to imagine how an agent could be said to exhibit rationality without emotion. Not only is machine emotion possible, it is necessary, I believe.

It's hard to respond to this without some kind of definition for the rather nebulous word "emotion". Many facets of "emotion" are inherently irrational. I see no indication that "emotions" are necessary for rationality, unless you're using "emotions" to just mean "desire" or "directives".

What about a being like Data (from Star Trek:TNG)? Would you consider him conscious? I know I certainly would.

If what you really mean is that "emotions" arise naturally from consciousness then I tend to agree but you've stated it the wrong way round.

A_pathetic_lizardmnan
Posts: 69
Joined: Wed Jul 08, 2009 2:37 am UTC

Re: Can a machine be conscious?

Postby A_pathetic_lizardmnan » Sun Jul 12, 2009 4:09 am UTC

I think what 0xBADFEED is saying is that you are not only defining consciousness as necessitating emotion, you are stating that consciousness requires human emotions. Moreover, while machine emotion may occur, there is no reason that it should be analogous to human emotion and many reasons it should not. The following emotions
* I am hungry.
* I am tired.
* I am horny.
* I am sad.
* I am confused.
* I am in love.
All have their basis in human "programming" for survival and reproduction. One could say that a human being is a program with roughly 6.4 billion base pairs (on the order of 1.6 GB) of instruction in its DNA, far longer than typical computer programs. Moreover, this codes for about 100 trillion neural connections of "writable memory." So basically, one could argue that a human is just an extremely large computer that has been programmed over many millions of years to survive and reproduce, and is far more tested than any AI software we currently have.

When defining consciousness, most humans unconsciously look at one thing: would I have to change my behavior toward this if it were conscious, thus making my life harder? This is obviously a fallacy, but since there is no clear-cut line, that is what people do. If you doubt it, look at the plots of books and movies such as I, Robot or many other plotlines involving robots, sentience, and rights.

hammerkrieg
Posts: 16
Joined: Thu Jun 25, 2009 11:48 am UTC

Re: Can a machine be conscious?

Postby hammerkrieg » Sun Jul 12, 2009 8:52 am UTC

0xBADFEED wrote:It's hard to respond to this without some kind of definition for the rather nebulous word "emotion". Many facets of "emotion" are inherently irrational. I see no indication that "emotions" are necessary for rationality, unless you're using "emotions" to just mean "desire" or "directives".


Yeah, like a utility function, at the very least.

(I imagine posthumans will want to experience more than a one-dimensional emotional spectrum, though.)

User avatar
Newbreed
Posts: 1
Joined: Sun Jul 12, 2009 5:08 pm UTC

Re: Can a machine be conscious?

Postby Newbreed » Sun Jul 12, 2009 5:33 pm UTC

Is a machine conscious of its surroundings? Some cars turn on their windshield wipers when it begins to rain, so the car is reacting to external stimuli. Does it know why it turns on its wipers, or is it simply a controlled response? If a machine such as the car mentioned above were conscious, wouldn't it be able to decide for itself whether or not to turn on the wipers? I am still trying to decide for myself, but this was just a thought that came to mind.

A_pathetic_lizardmnan
Posts: 69
Joined: Wed Jul 08, 2009 2:37 am UTC

Re: Can a machine be conscious?

Postby A_pathetic_lizardmnan » Mon Jul 13, 2009 6:37 am UTC

Newbreed wrote:Is a machine conscious of its surroundings? Some cars turn on their windshield wipers when it begins to rain, so the car is reacting to external stimuli. Does it know why it turns on its wipers, or is it simply a controlled response? If a machine such as the car mentioned above were conscious, wouldn't it be able to decide for itself whether or not to turn on the wipers? I am still trying to decide for myself, but this was just a thought that came to mind.


In a sense, it does. If only two drops of water land on the windshield, it will not turn on the wipers. It is just calibrated to respond more consistently than a human. A human does everything based on generalities because it does not have perfect measurements, but a machine has more information and therefore an ability to make more logical decisions.
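
(In code it is just a calibrated threshold - numbers invented:)

Code:
def wiper_decision(drops_per_second, threshold=5):
    # Two stray drops won't trip it; steady rain will.
    return "wipe" if drops_per_second >= threshold else "off"

print(wiper_decision(2))    # -> off
print(wiper_decision(40))   # -> wipe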

The basic idea of intelligence tends to involve the ability to infer based on incomplete information and react based on guesswork or "free choice", but so far this thread has not really come to the conclusion that intelligence = consciousness. Likewise, free choice has not fully been established, and in fact to something at the wipers' level of intelligence, the decision to turn on the wipers WOULD be free choice. Likewise, to something as far above humans in complexity as we are above the wipers, humans would appear not to have free choice but rather to respond in specific ways to specific stimuli.

somebody already took it
Posts: 310
Joined: Wed Jul 01, 2009 3:03 am UTC

Re: Can a machine be conscious?

Postby somebody already took it » Tue Jul 14, 2009 1:34 am UTC

0xBADFEED wrote:Just so we're clear, when I say "other-worldly" I mean something in the realm of the supernatural. That is, something that is not bound by physical law.


I don't like involving physical law in this definition because it introduces a lot of additional complications.
It brings into play questions like:

  • Do you see math as emerging from physics or physics as emerging from math?
  • Will our physical model of the universe ever be a true representation of the universe?
  • To what extent do ideas/intelligence physically exist?
  • What is reality? An entity's experience of the physical world, the physical world, or maybe something else (the matrix)? (And how can it be verified?)
  • If a tree falls in the woods and nobody is around to hear it does it make a sound?

I think they are interesting things to contemplate but might be getting off topic as far as consciousness is concerned.

Anyways, since you believe that human reasoning has a physical basis, wouldn't you say that not being bound by physical law implies being outside the realm of human reasoning? If so, demonstrating qualia are within the realm of human reasoning is the same as demonstrating they are bound by physical law.

And besides that I don't think physical law is going to be a very helpful tool in examining the emergent behaviour of such a complicated physical system as the human brain. To me, that seems like trying to understand a computer program by reading its binary and looking at a spec for the CPU it was compiled for.

0xBADFEED wrote:I don't really subscribe to the idea of "qualia", or at least the part of the idea that points towards a supernatural definition of the mind. That something like the "feeling" of "redness" or "burning" cannot be suitably conveyed from one person to another doesn't suggest to me any supernatural basis for "feelings". It only suggests that our rather primitive methods of communication are insufficient to describe them.


The idea of communicating different qualia is intriguing. Although I'm not convinced it is possible.
One of Daniel Dennett's four properties of qualia suggests that they are...
ineffable; that is, they cannot be communicated, or apprehended by any other means than direct experience.

I'm not convinced that qualia cannot be communicated either, but you seem to have adopted an ideology that science will eventually conquer all, which is why I think you should take notice of the explanatory gap argument. (http://en.wikipedia.org/wiki/Explanatory_gap)
This Joseph Levine quote taken from it I think is particularly of interest:
The explanatory gap argument doesn't demonstrate a gap in nature, but a gap in our understanding of nature. Of course a plausible explanation for there being a gap in our understanding of nature is that there is a genuine gap in nature. But so long as we have countervailing reasons for doubting the latter, we have to look elsewhere for an explanation of the former.

I think one of the most intriguing questions here is this: If there is a "gap in nature," how is it possible to know that the gap exists?
Certainly, if enough time passes where we make our best efforts to determine something and fail, we will have good reason to suspect a gap is there, but proving it is impossible in some cases (see the halting problem).

If you wish to claim that qualia are communicable, then I think you should at least explore what it would entail.
For your consideration, I have prepared the following questions about communicating qualia:

  • Are qualia not implementation-specific? (To elaborate: can a man send/receive qualia with a goat?)
  • When communicating qualia, will it be possible to differentiate one's own qualia from someone else's?
  • Through what medium will qualia be communicated? (Will there need to be some kind of direct connection between minds?)

Also, this is a little bit nit-picky, but I was bothered by your use of the word primitive.
Primitive is at best a comparative measurement, and at worst a highly biased and subjective one.
What could our methods of communication be primitive with respect to beside hypothetical communication methods of the future?

0xBADFEED wrote:Imagine we had an AI that displayed C-Abilities equal to those of humans. We'll assume somewhere in the depths of its programming the AI software has some kind of internal ability to detect buffer overflows, and that the perception percolates up to the higher level such that it is aware of them. I imagine it would be analogous to a human pricking his finger.

What if you were to ask it "What does a buffer overflow 'feel' like?"


As hammerkrieg pointed out, an AI having any coherent experience of a buffer overflow is unlikely. It would be like the AI equivalent of a seizure.
So, for simplicity's sake let's just say the question is "What is quale X like?"

These are the possible outcomes I can think of:

    The computer does not experience qualia and...
    • The computer says it does not experience qualia.
    • The computer lies and says it does experience qualia but can not articulate them.

    The computer does experience qualia and...
    • The computer lies and says it does not experience qualia.
    • The computer says it does experience qualia but can not articulate them.
    • The computer successfully explains what the designated qualia feels like.

0xBADFEED wrote:Would the AI's inability to communicate or fully explain this "feeling" indicate that there is something supernatural about buffer overflows or the AI's ability to detect them?


This question only seems to address one of the aforementioned outcomes: The computer does experience qualia and says it experiences them but can not articulate them. And the answer is a definite maybe.

In that case either the computer is specifically programmed to experience qualia (which indicates an understanding of qualia by its programmer, and thus that they are within the realm of human reasoning), or qualia have emerged on their own somehow (which indicates a good chance of them being outside of human reasoning). As far as qualia emerging on their own, I see two important cases:

  • Qualia emerging as a by-product of some kind of programming. (For example, the computer being programmed to pretend it experiences qualia and, through pretending, actually experiencing qualia.)
  • Everything experiencing qualia, and the articulation of them being irrelevant to the conscious experience of them.

User avatar
Newbreed
Posts: 1
Joined: Sun Jul 12, 2009 5:08 pm UTC

Re: Can a machine be conscious?

Postby Newbreed » Tue Jul 14, 2009 8:58 pm UTC

If something is conscious, it must be able to, in some manner, protect itself. If you constantly eject and re-insert the cd-rom, it will eventually (it might take a long time) wear out. Would a computer eventually stop letting you open its cd-rom because it knows it is wearing out? Or would it just continue with its programming and wear out the cd-rom?

User avatar
headprogrammingczar
Posts: 3072
Joined: Mon Oct 22, 2007 5:28 pm UTC
Location: Beaming you up

Re: Can a machine be conscious?

Postby headprogrammingczar » Wed Jul 15, 2009 11:48 am UTC

Newbreed wrote:If something is conscious, it must be able to, in some manner, protect itself. If you constantly eject and re-insert the cd-rom, it will eventually (it might take a long time) wear out. Would a computer eventually stop letting you open its cd-rom because it knows it is wearing out? Or would it just continue with its programming and wear out the cd-rom?

People do a shitty job protecting themselves. If a person attempts to commit suicide, is that person not conscious during the act? Before? After? Why then would a computer have to protect itself to be conscious?
<quintopia> You're not crazy. you're the goddamn headprogrammingspock!
<Weeks> You're the goddamn headprogrammingspock!
<Cheese> I love you

0xBADFEED
Posts: 687
Joined: Mon May 05, 2008 2:14 am UTC

Re: Can a machine be conscious?

Postby 0xBADFEED » Wed Jul 15, 2009 2:18 pm UTC

somebody already took it wrote:I don't like involving physical law in this definition because it introduces a lot of additional complications.
It brings into play questions like:

  • Do you see math as emerging from physics or physics as emerging from math?
  • Will our physical model of the universe ever be a true representation of the universe?
  • To what extent do ideas/intelligence physically exist?
  • What is reality? An entity's experience of the physical world, the physical world, or maybe something else (the matrix)? (And how can it be verified?)
  • If a tree falls in the woods and nobody is around to hear it does it make a sound?

I'm not trying to inject physical law into the definition. I was just telling you that I'm basically a physicalist and I don't think there is any magical component to consciousness.
Anyways, since you believe that human reasoning has a physical basis, wouldn't you say that not being bound by physical law implies being outside the realm of human reasoning? If so, demonstrating qualia are within the realm of human reasoning is the same as demonstrating they are bound by physical law.

I believe everything has a physical basis so I don't really entertain the notion that anything is not bounded by or based in the physical. Even if something were able to exist by 'magic' I'm not really sure if you can make the assumption that it would be outside our understanding.
And besides that I don't think physical law is going to be a very helpful tool in examining the emergent behaviour of such a complicated physical system as the human brain. To me, that seems like trying to understand a computer program by reading its binary and looking at a spec for the CPU it was compiled for.

I'm not saying it will be a particularly useful tool for understanding it. I'm saying it forms the basis and limits for the emergent system. It's more like saying the hardware of a computer forms the basis and limits for the software that can be reliably run on the machine. You wouldn't study the hardware of the computer to figure out how some software system works. But the physical hardware still forms the basis for what is practically possible in the software.
The idea of communicating different qualia is intriguing. Although I'm not convinced it is possible.

I'm not convinced it is either. I'm not convinced qualia even exist except as maybe a shorthand way of referring to a particular brain state. I was saying that most of the explanations involving qualia occur because "feeling" is inherently difficult to communicate. Take the "Mary the Color Scientist" or "Inverted Spectrum" (which I think is a load of BS by the way) arguments. Mary is unable to have full knowledge of "redness" because there is no way to communicate the brain state of perceiving "red". The "full-knowledge" portion of the argument is null and void. The argument draws an arbitrary distinction between factual knowledge and experiential knowledge. You only do this if you already subscribe to a dualist view of the world. There's no indication to me that these two types of knowledge are particularly different. The argument basically boils down to "Qualia exist because Mary can't know what 'red' looks like without seeing 'red'". Which, while true (until they come up with some sort of brain state inducer), still (to me) doesn't point to a supernatural explanation of qualia.

I'm not convinced that qualia cannot be communicated either, but you seem to have adopted an ideology that science will eventually conquer all which is why I think you should take notice of the explanatory gap argument.

Not that science will conquer all, just that nothing is magic.
I think one of the most intriguing questions here is this: If there is a "gap in nature," how is it possible to know that the gap exists?
Certainly, if enough time passes where we make our best efforts to determine something and fail, we will have good reason to suspect a gap is there, but to prove it is not possible in some cases (see the halting problem).

What constitutes "enough time"? 100 years? 1000 years? 1000000000000 years?
If you wish to claim that qualia are communicable, then I think you should at least explore what it would entail.

I'm not making any such claim.
Also, this is a little bit nit-picky, but I was bothered by your use of the word primitive.
Primitive is at best a comparative measurement, and at worst a highly biased and subjective one.
What could our methods of communication be primitive with respect to beside hypothetical communication methods of the future?

Primitive meaning "unsophisticated or crude". Primitive in that the medium of communication is of very low fidelity. Our interior experience is much richer than what we can communicate. The complexity of interior experience must be broken down into a more digestible form which is inherently lossy. I wasn't saying that humans' communication with each other is primitive but the communicated form of experience is primitive compared to its original form. My original statement didn't make this particularly clear, sorry.
As hammerkrieg pointed out, an AI having any sensical experience of a buffer overflow is unlikely.

Yes, and I responded to why this is missing the point of the example.
It would be like the AI equivalent of a seizure.

Not really. Buffer overflows are no big deal if you have a system designed to accommodate and handle them when they occur. An unhandled buffer overflow might be a seizure. But whatever.
(Even when I wrote the example I knew I would get several of these "buffer overflows aren't realistic" responses. Should have just gone with non-catastrophic hardware failure.)
So, for simplicity's sake let's just say the question is "What is quale X like?"
These are the possible outcomes I can think of:
    The computer does not experience qualia and...
    • The computer says it does not experience qualia.
    • The computer lies and says it does experience qualia but can not articulate them.
    The computer does experience qualia and...
    • The computer lies and says it does not experience qualia.
    • The computer says it does experience qualia but can not articulate them.
    • The computer successfully explains what the designated qualia feels like.

This question only seems to address one of the aforementioned outcomes: The computer does experience qualia and says it experiences them but can not articulate them. And the answer is a definite maybe.

In that case either the computer is specifically programmed to experience qualia (which indicates an understanding of qualia by its programmer, and thus that they are within the realm of human reasoning), or qualia have emerged on their own somehow (which indicates a good chance of them being outside of human reasoning).

As far as qualia emerging on their own, I see two important cases:

  • Qualia emerging as a by-product of some kind of programming. (For example, the computer being programmed to pretend it experiences qualia and, through pretending, actually experiencing qualia.)
  • Everything experiencing qualia, and the articulation of them being irrelevant to the conscious experience of them.

It's a little hard to respond to this without further definition of what you mean when you say "qualia". "Qualia" is a term that's so overloaded that to use the broadest definition it basically reduces to a synonym for "perception". Different meanings are more or less loaded with increasing tendencies towards magic and the metaphysical. Using the broadest definition it's hard to deny they exist. Using other, more magical, definitions I'd definitely have to dispute their existence.

somebody already took it
Posts: 310
Joined: Wed Jul 01, 2009 3:03 am UTC

Re: Can a machine be conscious?

Postby somebody already took it » Sat Jul 18, 2009 12:55 am UTC

0xBADFEED wrote:I'm not trying to inject physical law into the definition.

Is it or is it not your intention for other-worldly to mean not bound by physical law?
0xBADFEED wrote:The argument basically boils down to "Qualia exist because Mary can't know what 'red' looks like without seeing 'red'". Which, while true (until they come up with some sort of brain state inducer), still (to me) doesn't point to a supernatural explanation of qualia.

Then would you consider qualia or rather "knowing what red looks like" to be ineffable?
(In case it's not clear, I'm referring to the definition of ineffable taken from the qualia wikipedia article.)
0xBADFEED wrote:
I think one of the most intriguing questions here is this: If there is a "gap in nature," how is it possible to know that the gap exists?
Certainly, if enough time passes where we make our best efforts to determine something and fail, we will have good reason to suspect a gap is there, but to prove it is not possible in some cases (see the halting problem).

What constitutes "enough time"? 100 years? 1000 years? 1000000000000 years?

It's a lot like deciding when to halt a computer program because it might be stuck in an infinite loop.
It depends on what is being determined and what resources are available, and it's just a guessing game.
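
(Concretely, the standard dodge is a watchdog timeout - a Python sketch, where the time budget is pure guesswork, which is exactly the point:)

Code:
import multiprocessing
import time

def maybe_infinite():
    while True:        # we cannot know in advance whether this halts
        time.sleep(1)

if __name__ == "__main__":
    worker = multiprocessing.Process(target=maybe_infinite)
    worker.start()
    worker.join(timeout=5)   # "enough time" = an arbitrary 5 seconds
    if worker.is_alive():
        worker.terminate()   # give up; we suspect a gap but cannot prove one
        print("No answer within budget; suspected non-halting.")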
0xBADFEED wrote:
As hammerkrieg pointed out, an AI having any sensical experience of a buffer overflow is unlikely.

Yes, and I responded to why this is missing the point of the example.

I didn't mean for that to sound snarky, it was just there to summarize why I was changing your example.
0xBADFEED wrote:It's a little hard to respond to this without further definition of what you mean when you say "qualia". "Qualia" is a term that's so overloaded that to use the broadest definition it basically reduces to a synonym for "perception". Different meanings are more or less loaded with increasing tendencies towards magic and the metaphysical. Using the broadest definition it's hard to deny they exist. Using other, more magical, definitions I'd definitely have to dispute their existence.

Then can I ask you, what do you see as the major distinctions between different definitions of qualia?
And where do you draw the line for where you would dispute their existence?
And why are you putting qualia, perception and feelings in quotes?

User avatar
Mr. N
Posts: 418
Joined: Wed Jul 08, 2009 7:37 pm UTC

Re: Can a machine be conscious?

Postby Mr. N » Sat Jul 18, 2009 1:39 am UTC

I threw my laptop about twenty feet across the room and it shattered when it hit a stainless steel fridge. It is now unconscious. So by definition it was conscious two minutes ago. What do you think of THAT logic?
Your momma eats Pop Tarts!

User avatar
phillipsjk
Posts: 1213
Joined: Wed Nov 05, 2008 4:09 pm UTC
Location: Edmonton AB Canada
Contact:

Re: Can a machine be conscious?

Postby phillipsjk » Sat Jul 18, 2009 11:13 am UTC

I don't know.

My "trees are conscious" (dead trees don't drop leaves) post sort of ignores the "unconscious" state. People are considered unconscious while they are sleeping, but their brain (stem) still monitors (and controls):
  • Breathing
  • Temperature
  • Heartbeat
  • Digestion
  • Pain

In addition to that, most people are paralyzed during REM sleep to avoid acting out their dreams: a potentially fatal activity in the forest.

According to my "working definition," people are conscious while they are unconscious!
Last edited by phillipsjk on Sun Jul 19, 2009 6:35 pm UTC, edited 1 time in total.
Did you get the number on that truck?

0xBADFEED
Posts: 687
Joined: Mon May 05, 2008 2:14 am UTC

Re: Can a machine be conscious?

Postby 0xBADFEED » Sat Jul 18, 2009 2:20 pm UTC

somebody already took it wrote:Is it or is it not your intention for other-worldly to mean not bound by physical law?

That is my intention.
But in my personal definition of consciousness I don't require that something have "feelings". In my personal definition something is conscious if it exhibits sufficiently developed C-Abilities. So, I would still consider an AI zombie ("zombie" in the sense that it is used when referring to qualia) conscious if it exhibits the requisite abilities. This makes no mention of the mechanism of the C-Abilities and allows for either a physical or non-physical explanation. Now, it is my personal belief that the mechanism is completely physical, but that is not required by the definition.
What we've mostly been talking about is the mechanism or explanation of consciousness. Not everyone draws this distinction and I apologize if it was unclear.
Then would you consider qualia or rather "knowing what red looks like" to be ineffable?

I would. But as I've said before, to me, this points more to a limitation in the utility of spoken language in such matters than anything else.
It's a lot like deciding when to halt a computer program because it might be stuck in an infinite loop.
It depends on what is being determined and what resources are available, and it's just a guessing game.

Right, that was my point. Any decision you make about "enough time" is going to be arbitrary. There's no reason to ever believe that a gap in nature exists because it is (A) undetectable and (B) undecidable. We haven't yet had any indication that a gap exists. It's an interesting proposition to think about but realistically believing that a gap exists doesn't get you anywhere. If a gap truly does exist then it is a sad state of affairs indeed.
And why are you putting qualia, perception and feelings in quotes?

In some instances I was putting "feelings", "perception", and "qualia" in quotes to draw attention to the fact that these are very nebulous words loaded with hidden meanings and associations. Basically, just saying "I'm using this word but I understand that it is very imprecise and there may be subtle but important differences in your usage."
In other instances I put them in quotes because I was invoking the word, not the meaning. For instance:
A) Qualia is a word. (false) -> Here invokes the meaning of the word
B) "Qualia" is a word. (true) -> Here invokes the word itself.
Then can I ask you, what do you see as the major distinctions between different definitions of qualia?

Here's a decent site that gives a short description of the different uses and meanings of the word "qualia". You were the one who brought up qualia so I assumed you had a particular definition in mind that you wanted to use. I can pick a definition but it will probably be somewhat arbitrary (and biased towards my beliefs).
And where do you draw the line for where you would dispute their existence?

It's somewhat difficult to draw a sharp line. I would dispute their existence if the definition implies or requires an other-worldly explanation for consciousness.

Phaust
Posts: 0
Joined: Thu Jul 23, 2009 6:46 am UTC

Re: Can a machine be conscious?

Postby Phaust » Thu Jul 23, 2009 7:26 am UTC

Gaydar2000SE wrote:
A: The laws of physics as they currently stand offer no reason to believe that things can just be 'conscious'; in fact, they have no concept of it. It's like some sort of magic to them.


Something I've noticed (and not just in the piece of the post quoted above) is that everyone appears to be clinging to classical physics for an explanation that, as Gaydar has aptly pointed out, is unavailable, because classical physics cannot deal with such an idea. Thus, if science is still to play a role in the definition of consciousness (and what else could do so consistently and coherently, in a manner that is almost indisputable, given that all laws are still considered theories with enough evidence to make them accepted until something with more evidence shows up), it might be wise to look at the question from the point of view of quantum mechanics.

Quantum mechanics points out that an electron has four quantum numbers (correct me if I'm wrong), one of which is called the "spin" of the electron. Now, the spin of an electron is incredibly difficult to comprehend (and I don't presume to say that I know what it is exactly) and utterly incomparable to the macroscopic concept described in classical physics by the verb "gyrate", but quantum physicists know what it does, and know the numerical values for it.

Applied (finally) to the question of consciousness: within the confines of this thread, the question of what consciousness consists of at a structural level has been more or less torn between the capabilities of a system (medium being unimportant) as they exist in reference to us, and behavioral systems as they exist for us. I am of the opinion that it is impossible to accurately describe what consciousness is by classical means (just as with an electron's spin), and that rather than look for a unit of consciousness we ought to tie down what an example of consciousness does (similar to the way we define what is alive); once that is established, if a machine can do that spontaneously (that is, without stimuli), then I would say it is conscious. It's kinda like working backwards, I guess.
Just to provide more material for debate, I post my belief about how to decide what is conscious:
- That which is alive.
- That which harbors the ability to make a decision for itself.
- That which displays and uses the ability to make decisions for itself, excluding simple actions like respiration and other involuntary actions.
- That which is able to distinguish what it is (no matter how wrong) independent of stimulus, i.e. spontaneously. (A computer, for example, only "knows" it is different from something when it interacts with, say, another computer on the same server, because the distinction is a matter of convenience.)
- That which distinguishes the nature of other things (like how a photon acts as a wave when we measure for the presence of a wave, and as a particle when we look for it as a particle).

joejoefine
Posts: 5
Joined: Sun Nov 16, 2008 2:52 am UTC

Re: Can a machine be conscious?

Postby joejoefine » Fri Jul 24, 2009 6:25 pm UTC

0xBADFEED wrote:
McLiarpants wrote:I see it like this. A machine could potentially move through the steps of 0xBADFEED's C-Abilities. However, to do so, someone (the programmer) must tell it how to learn. The computer must be told to save an experience in memory. It must be told to go look for memories when its sensory equipment detects a new experience. Even if we give the computer a lot of rules, it will eventually encounter something it doesn't have a rule for. In this case, if the computer is not told to go analyse a bunch of other inputs, experiences, and rules and come up with an answer, it will not know what to do.

Who's to say our behavior isn't also completely programmed? I don't see any real difference between a system that has been programmed by millions of years of evolution, versus a system that has been programmed by a computer scientist. Human brains also have big problems when they encounter a truly new or unexpected situation. Usually we know it as "panic". And one of the most common responses to panic is to just freeze, especially if the situation is life-threatening.


I think an important element in helping to set apart "true" human consciousness (as well as free will) from a very complicated set of programs that could mimic human behaviour closely is the ability of the machine to modify its own code ("beliefs") in accordance with changes in the environment. I like to use this definition of consciousness as given by 0xBADFEED:

0xBADFEED wrote:If we define (in a very loose and off-the-cuff sense) consciousness as the ability to:
* Perceive one's environment
* Make decisions based on those perceptions (whether by "free will" or by programming)
* Remember cause and effect relationships based on the perception->decision (i.e. learn)
* Abstract these relationships to create general propositions about our environment
* Maintain an internal model to predict future cause and effect relationships


So far the "learning" process is limited to applying received sensory inputs to pre-programmed code, which means the machine can only exhibit conscious behaviour up to a certain extent. "Learning" (requirement 3) should also include the ability to adapt to situations and grow beyond the original programming parameters (i.e. remember Data from Star Trek trying to discover emotions - trying to grow beyond his original programming).

The difference between a machine with a pre-programmed code and a human being is that, while both can be placed into a new situation and experience shock or panic (ignoring the whole argument of whether a machine is conscious of the feeling in the same way that a human is), the human will gradually acclimatize to its new environment and create new rules with which to make progress. The machine, not being programmed with a set of code that deals with the new experience, would never be able to learn how to deal with it, or to even know what a preferred resolution should be.

Just thinking about some possibilities, I imagine creating code that works in the same way as evolution: it keeps pumping out new instructions until it finds something that "works", or that is aligned with enhancing or maintaining the original programming. I suppose that's how early cavemen did it, learning that fire isn't a nice thing to touch, trying to put a piece of wood in it, observing that it could be used as a torch (etc.). But the problem is you need to have some pre-programmed way to analyze the new situation in order to make any sense out of it. What "works"? What will determine whether the new experience helps or hinders the machine in its pre-programmed directives? To be able to analyze the billions of new situations we deal with every day, a machine would need a pre-programmed "analysis code" for each of these situations. This means it has to store a TON of data and instructional code for each new situation before it encounters it, which doesn't make sense and would probably require far too much storage space. If these analysis codes aren't pre-programmed, then it circles back to the original problem: how can a machine create its own code such that it is meaningful in terms of advancing or maintaining its original programming (i.e. personality)?
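
For what it's worth, here is a minimal Python sketch of that evolution-style loop (a hill-climber over a string, with a completely made-up goal and fitness function). Notice that fitness() is exactly the pre-programmed "analysis code" I'm worried about: someone still has to encode what "works" means before the loop can find anything:

import random

GOAL = "stay warm without getting burned"

def fitness(candidate):
    # Hypothetical scoring: reward characters that match the goal.
    # Deciding what to reward is the pre-programmed part.
    return sum(a == b for a, b in zip(candidate, GOAL))

def mutate(candidate):
    # Pump out a new instruction: change one random character.
    chars = list(candidate)
    i = random.randrange(len(chars))
    chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz ")
    return "".join(chars)

best = "x" * len(GOAL)
while fitness(best) < len(GOAL):
    trial = mutate(best)
    if fitness(trial) >= fitness(best):  # keep anything that "works" at least as well
        best = trial
print(best)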

User avatar
Area Man
Posts: 256
Joined: Thu Dec 25, 2008 8:08 pm UTC
Location: Local

Re: Can a machine be conscious?

Postby Area Man » Fri Jul 24, 2009 8:21 pm UTC

How about if something can make a decision which goes against its "programming", because of developed feelings or an advanced ability to learn?

A human can decide to go against his own deep instincts; a dog can be trained not to eat its favorite food placed in front of it until you give permission. A wild animal that cannot be taught anything and only acts according to its hard-wired reactions we would consider a less conscious, "dumb" animal, like a machine or a thermostat.

Social animals are generally considered more conscious and intelligent; they can interact with and discern their own kind, and can recognise themselves when looking in a mirror.

Incidentally, if you're trying to determine if something is conscious, it has to also recognise *you* as being conscious; you have to convince it that you're worthy of attention and investigation. How would you do that? Understanding requires sympathy and empathy: "caring".

The 'magic' of consciousness is a complexity (of intelligence and knowledge, self-awareness and perception, and ability to communicate such) which is beyond our current computational/programming ability. Then, you might have to prove to the machine that you created it.
Bisquick boxes are a dead medium.

User avatar
Area Man
Posts: 256
Joined: Thu Dec 25, 2008 8:08 pm UTC
Location: Local

Re: Can a machine be conscious?

Postby Area Man » Fri Jul 24, 2009 11:40 pm UTC

And how do you tell the difference between consciousness and emulated consciousness? Hire a Blade Runner?
That's what this thread is about.

Meteorswarm wrote:Out of curiosity, do you program? It's not like you can just give a robot a directive like "destroy all humans" and have it work.
Area Man wrote:The 'magic' of consciousness is a complexity [...] which is beyond our current computational/programming ability.

...
Bisquick boxes are a dead medium.

User avatar
Area Man
Posts: 256
Joined: Thu Dec 25, 2008 8:08 pm UTC
Location: Local

Re: Can a machine be conscious?

Postby Area Man » Mon Jul 27, 2009 6:25 am UTC

Could be. And I wanted your thoughts on it. I'm not saying consciousness can't be programmed. On the contrary (though it's unlikely in my lifetime). The real question is how to define what constitutes "consciousness", and how you detect and produce it.

My point was that a more conscious entity can modify its behavior based on more subtle, abstract feedback, even to a point where the "natural" instinct is completely eliminated or even reversed; humans can decide to change based on their own inner thoughts.

If you do want to talk programming: at this point self-modifying code is generally frowned upon, and mostly forbidden in production. The research tools are rudimentary, and without very sophisticated tools we are incapable of creating anything close to our consciousness (and the intelligence which would accompany it).
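
To show what I mean by rudimentary (a deliberately crude Python toy, not a recommended practice): a program can generate new source text for one of its own functions and swap it in at runtime, and nothing about this scales to anything like "modifying your own beliefs":

def greet():
    return "hello"

# The program writes replacement source for its own function...
new_source = 'def greet():\n    return "hello, I rewrote myself"'

print(greet())               # hello
exec(new_source, globals())  # ...and swaps it in at runtime
print(greet())               # hello, I rewrote myself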

Self-awareness goes beyond sensing the hardware. I don't know how much memory I have, nor can I examine most of it. I do have an idea of what I am, though.

And again, how would emulated consciousness be different? Would simulated consciousness not effectively *be* consciousness?
Bisquick boxes are a dead medium.

User avatar
SpazzyMcGee
Posts: 191
Joined: Tue Nov 18, 2008 5:36 am UTC

Re: Can a machine be conscious?

Postby SpazzyMcGee » Sun Aug 09, 2009 7:21 am UTC

I think too many people put the human mind on a pedestal.

Bacteria are aware of their surroundings in the sense that they can react to them. Humans just happen to have a much more diverse range of reactions to the environment than bacteria. There is no clear line that can be crossed between consciousness and unconsciousness.

I see no reason why a machine (though we are machines ourselves) couldn't one day become "conscious". All it takes is enough programming. The real question is whether we will be able to make it act human; that is when the normal person will call it conscious (the aforementioned ambiguity of consciousness aside). Once we get a computer to act human in nearly every way, there is no reason not to call it conscious.

Mindor
Posts: 28
Joined: Sat Aug 15, 2009 7:53 pm UTC
Location: Milwaukee, WI

Re: Can a machine be conscious?

Postby Mindor » Sat Aug 15, 2009 10:59 pm UTC

VorpalSword wrote:
somebody already took it wrote:
VorpalSword wrote:"I believe I am conscious" can be proven internally with the argument "cogito ergo sum".

How do you arrive at the conclusion that you believe you are conscious from cogito ergo sum? I think all that it says is that thinking implies existence (perhaps there is more to it, I only read the Wikipedia summary). Furthermore, you should take a look at some of the critiques of the argument, in particular Kierkegaard's critique, which states that the existence of "I" is presupposed by it.


Kierkegaard's critique seems to me to be more that the argument assumes the thinking is done by an "I". He doesn't dispute that thinking occurs, but objects that "I" is already presupposed, so the argument only demonstrates that "I" is thinking. I think this misses the point that clearly something exists to do the thinking; "I" is just a convenient label for whatever this thinker is.

I did jump from existence to consciousness. I assumed that what is being proved is that my mind exists on some level, and that (because of the definition of a mind, and because we already know it can think) such an "I" would qualify as conscious.


I've not read Kierkegaard's critique, but it seems sufficiently similar to the response to cogito ergo sum that (according to my understanding of them) would be posed by some Eastern philosophers. The response to the claim of "I think, therefore I am." would be "I think, therefore thinking occurs."
To fully grasp this, you have to lose the conceptual distinction between objects and actions, and cause and effect...
To pull the cause and effect process to its logical end: if you are doing the thinking, then what is doing the you? A step further: what is doing the thing(s) doing the you?

Anyway back to the original...
I would give the requirement of self-consciousness as the ability to distinguish the self from the environment: what is internal from what is external.

Simply being able to process data about one's environment and react to it would not be sufficient to show consciousness.
You get the 'Best Newbie (Nearly) Ever" Award. -Az
Yay me.

monroetransfer
Posts: 26
Joined: Wed Aug 27, 2008 1:03 am UTC

Re: Can a machine be conscious?

Postby monroetransfer » Mon Sep 21, 2009 6:06 pm UTC

I don't know. Are you?

User avatar
insom
Posts: 40
Joined: Mon Feb 25, 2008 11:29 am UTC

Re: Can a machine be conscious?

Postby insom » Tue Sep 22, 2009 1:38 pm UTC

I should probably not try to define consciousness; I don't think I know the terminology to do so. As I see it, consciousness is metaphysical and cannot be measured, and thus it is not possible to distinguish a conscious machine from a sufficiently advanced unconscious one. In other words, I cannot prove the consciousness of even fellow humans; they might be biological robots that just happen to pass the Turing test.
With this in mind, as I cannot know what is and isn't conscious, it is entirely possible that a small pebble has a rich inner life, as well as computers and anything else.
Normal cynics think they are realists. Hardcore cynics know they are optimists.
Woo I draw stuff - how incredibly awesome

User avatar
Yakk
Poster with most posts but no title.
Posts: 11082
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Can a machine be conscious?

Postby Yakk » Tue Sep 22, 2009 2:35 pm UTC

Does it matter?

In particular, I am not conscious. There is nothing in me besides a "Chinese room" of physics and chemistry. I have no awareness of self, yet can function in society. I can even express thoughts about what I feel like, because it is conventional for others to do so. And I'm a decent mimic.

Should I be treated worse than anyone else because I am not conscious? And just as you know you are conscious, I know I am not. I am expressing my lack of consciousness to you, just as you are expressing your consciousness to me. Is it acceptable to treat me differently than the beings who claim they are conscious?
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

User avatar
headprogrammingczar
Posts: 3072
Joined: Mon Oct 22, 2007 5:28 pm UTC
Location: Beaming you up

Re: Can a machine be conscious?

Postby headprogrammingczar » Thu Sep 24, 2009 1:04 am UTC

Those conscious guys can be such snobs sometimes.
<quintopia> You're not crazy. you're the goddamn headprogrammingspock!
<Weeks> You're the goddamn headprogrammingspock!
<Cheese> I love you

User avatar
neoliminal
Posts: 626
Joined: Wed Feb 18, 2009 6:39 pm UTC

Re: Can a machine be conscious?

Postby neoliminal » Thu Sep 24, 2009 2:32 pm UTC

  • Assume humans are conscious.
  • Assume eventually the human brain can be modelled to the molecular level by machine.
  • Assume there are no currently unmeasurable forces acting upon the human brain.

Machine is conscious.

The problem for most people is the third point. Almost everyone will agree that we could eventually do the first two. However, the majority of the population believes in currently unmeasurable forces like souls or free will, and that is a deal-breaker, because such forces are inextricably linked to conscious thought for these people.
http://www.amazon.com/dp/B0073YYXRC
Read My Book. Cost less than coffee. Will probably keep you awake longer.
[hint, scary!]

Goplat
Posts: 490
Joined: Sun Mar 04, 2007 11:41 pm UTC

Re: Can a machine be conscious?

Postby Goplat » Fri Sep 25, 2009 4:15 am UTC

A simulation of a human wouldn't be actually conscious, any more than a simulation of a black hole would make the computer swallow up the Earth. Please, let's dispense with the delusion that a mere representation of something automatically gets some real-world properties of that thing - it's called magical thinking, and has been pretty well discredited by science.

