do errors 'hurt' my PC?

A place to discuss the implementation and style of computer programs.

Moderators: phlip, Moderators General, Prelates

User avatar
bitsplit
Posts: 57
Joined: Thu May 13, 2010 12:40 pm UTC

Re: do errors 'hurt' my PC?

Postby bitsplit » Mon Jun 21, 2010 12:22 pm UTC

Well, if we are talking about whether computers as they exist right now really do experience pain, then they do not. It is that simple. Computers, as they currently exist, are not sufficiently self-aware, and even the most advanced AI out there was artificially created, not evolved. Pain and suffering evolved as mechanisms to promote the survival of the individual and therefore, hopefully, the species. Suffering can be a side effect of advanced mental capacity co-developing with pain, or it could exist for several other reasons.

In any case, to determine whether or not computers experience pain and suffering, two things must happen:

1) We must have a computer capable of satisfying the Turing test sufficiently well.
2) We must be able to determine, from interaction with said computer, whether it is truly experiencing pain and suffering, or merely displaying advanced forms of mimicry.

Requirement #2 is the hard one, since it is difficult to satisfy even with humans. We know that other humans can experience pain because we ourselves experience pain. But a very good Hollywood movie, with all its special effects and makeup, convinces us that the character portrayed by the actor is in extreme pain when the actor is, in fact, not in pain at all.

Computers at this point in time cannot experience pain. Whether or not they will be capable of experiencing pain in the future, we may never know, because we ourselves are not computers, at least not in the sense we are talking about at the moment.

User avatar
WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: do errors 'hurt' my PC?

Postby WarDaft » Mon Jun 21, 2010 6:00 pm UTC

Axidos wrote:Yes except pain does neither of these things. Your brain has a knee-jerk reaction to it (which varies between people between very strong and nonexistent) but that's your brain and the way it's configured, not the pain.
Pain is that knee-jerk reaction, or at least the majority of it. It's the motivation to fix the problem. There are some disorders that cause pain to not motivate, and then there is no knee-jerk reaction to avoid it.
Axidos wrote:
WarDaft wrote:It is pain because the system does not want to be in that state.

Two assumptions here:
1. That computers can want when they cannot. They just execute instructions, none of which involve wanting. Bluescreens? A set of instructions. Segfaults? A set of instructions signalling that what it just did wasn't good for anything.
2. That if the system is in an unpleasant state it must be pain. There you go again.
And later in your Turing Machine description:
3. You also assume they "try" to be in states, or "avoid" states, when all along they're just following instructions, and those man-made instructions prevent it from blindly wandering into those "avoid" states and keep it in the general direction of those "try" states (unless you read TDWTF).
Things like "want" and "try" are just aspects of a chemical computer following instructions. Barring some incomputable physical process playing a key role in the functioning of neurons, the brain must have a Turing machine equivalent. In truth, the equivalent will come from understanding the processes of the human brain in terms of software, with things like "wanting" to be in a state (like wanting to be happy) instead being seen as a general tendency of the brain to perform operations that transition it toward the set of states in question, and away from other sets. (When you have billions or trillions of states, there is enough of a gradient to consider motion through a medium of states, particularly if the machine does not tend to jump wildly from one state to any other state.)

Axidos wrote:
WarDaft wrote:If that is feeling, is it so easy to say that a reduced complexity system - but with the same response types - is incapable of feeling pain?

Yes. Yes, it's very easy. This is a forum of coders; we're intimately familiar with what our computer does and I'm sure you are too. At a high level it carries out instructions. At a lower level it flips bits. At a much lower level it sends electrical signals through wires. There is no capacity for pain or any human sensations on any of these levels. It is not intelligent.
That's what the brain does too, the wires are just wet and squishy. There's currently nothing to suggest that gives it more computational power.

bitsplit wrote:Computers at this point in time cannot experience pain. Whether or not computers will or will not be capable of experiencing pain in the future, we may never know if computers experience pain, because we ourselves are not computers, at least not in the sense we are talking about at the moment.
They do not tell us that they feel pain, and as we are in near complete control of their actions, we could design them to feel pain and never complain about it.

A cat can clearly feel pain, and there are computers more complex than a cat's brain (in fact there was an approximate simulation of one done).
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

User avatar
bitsplit
Posts: 57
Joined: Thu May 13, 2010 12:40 pm UTC

Re: do errors 'hurt' my PC?

Postby bitsplit » Mon Jun 21, 2010 8:00 pm UTC

WarDaft wrote:They do not tell us that they feel pain, and as we are in near complete control of their actions, we could design them to feel pain and never complain about it.

A cat can clearly feel pain, and there are computers more complex than a cat's brain (in fact there was an approximate simulation of one done).


We could call it pain if we wanted, but that wouldn't make it painful for the computer; pain is an evolved mental state. It might produce the same result, yes, but just because a black box takes in the same input and produces the same output as another, it doesn't mean the inner workings of those boxes are the same. Cats and humans have similar physiology, and so we can infer that cats feel pain, because we share similar central nervous systems and we are both mammals. The same holds for most other vertebrates, and likely for other animals.

However, the concept of pain doesn't necessarily carry over to everything. The pain comes from the interpretation of a signal by the CNS. The signal helps you survive, and the brain expands on the information conveyed by that signal. Other organisms might interpret the signal differently, and come up with different ways of coping with that signal that are not necessarily pain. A computer that evolves or is designed under a different paradigm from our wetware will not necessarily experience pain. If you argue you could design a neural network that can experience pleasure/pain, go ahead. Of course, that is a meta machine, since it is not the computer itself, but a computer inside the computer that is experiencing pain, and the computer itself is the universe that machine exists in. That, and it's not as easy as it sounds.

Stating that there are computers more powerful than a cat's brain is overreaching. The studies in which cat brains have supposedly been simulated have been disputed by other neuroscientists: the brains were not simulated with a sufficient degree of accuracy, and what was actually simulated was not a cat brain but a "brain" with computing power similar to a cat's (and even that has been the subject of debate). Just having the same number of neurons in a net doesn't make one network work at the same computing level as another. The way the connections form evolves over generations and has a lot to do with how the processing is done, and how powerful it is. You can't just rewire the neurons in a brain and expect it to work the same.

To top all of that off, there is a big difference between a specialized computer like the brain, which is designed to run the animal body, help it survive, and entice it to procreate, and a general-purpose computer that is designed to carry out instructions for whatever reason there is. Sure, humans have the capability to do general-purpose computing, but it is a layer over all the survival stuff. Put a hungry lion in a room full of very smart thinkers and see if they compute their way out of the situation, or if they go into the fight-or-flight response modes of their primitive brain. Cognitive and evolutionary psychologists side with the latter.

Mavrisa
Posts: 340
Joined: Mon Dec 22, 2008 8:49 pm UTC
Location: Ontario

Re: do errors 'hurt' my PC?

Postby Mavrisa » Tue Jun 22, 2010 6:48 am UTC

I'd like to point out that pain is actually a sense, just like vision, smell, proprioception, etc. It's a result of tissue damage, or the threat of tissue damage, and a signal is sent to our brain. In this sense, a computer measuring its batteries' remaining voltage is doing the same sort of thing.
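The battery analogy can be made concrete with a tiny sketch (the threshold, function name, and message are invented for illustration; real power management is far more involved):

```python
# A "damage signal" in the computer's sense is just a measured value
# compared against a threshold -- no awareness required.

LOW_VOLTAGE_THRESHOLD = 3.3  # volts; hypothetical cutoff

def check_battery(voltage):
    """Return a warning string if the reading crosses the threshold."""
    if voltage < LOW_VOLTAGE_THRESHOLD:
        return "LOW BATTERY: connect power"
    return "OK"

print(check_battery(3.1))  # the "pain signal" fires
print(check_battery(3.9))  # no signal
```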

I'll be damned, though, if anyone can show me my brain's instruction set, or describe it to me in a way that, given a set of inputs equal to real life, they could predict exactly what I would do in a situation, then throw me into the situation and verify that they are correct. We don't know how the thing works, so how can anyone compare a computer to our brain? (Relatively speaking, microchips are quite easy to understand if you have a basic understanding of the way the components work. And yes, we have modelled a cat's brain. Did it behave like a cat? Not at all. In the article I read on the project, it was just a fancy, painfully slow simplification of the way we think the brain might work.) We have looked at neurons and we think we have a pretty good picture of how they work, yet nobody can explain how the brain does what it does. There are guesses as to how specific functions are carried out, and they seem to look approximately right, but the brain as a whole is a mystery. There is a worm with an average of 300 brain cells. We don't know how THAT works yet.

With a battery being low, a few things happen which lead up to the point of the processor running a quick bit of code, displaying a message. This happens in the same way every time.
If someone pricks my finger with a needle, a signal is sent to my brain where it is interpreted and then a shitload of stuff happens all at once:
--my senses are checked over to see if there's any information on the cause, other than it being there
--my memory is consulted to see if any events have happened like that in the past, but also to write this experience to short term memory and to determine if I remember anything which may be causing my immediate experience
--the part of my brain which controls motion may be engaged if the pain is unexpected and the reasoning part of my brain isn't telling me not to move: there will likely be a move away from the source
and probably a lot of other things that I don't know about because I know very little about brains. All of this happens nearly simultaneously and often randomly (i.e. some pathways may be activated some of the time, but not always).

It could be that the brain actually uses quantum phenomena to accomplish the things it does. We don't know. But a computer does not. It follows a set of instructions, which makes everything it does COMPLETELY predictable from its input.
Someone looking at a section of code could tell you if there was an error in it, and how the processor in question will handle that error when it gets to it before it happens. The processor will handle the same input the same way every single time.
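A toy illustration of that determinism, with a made-up "error handler" standing in for the processor:

```python
# Sketch only: a routine that hits a divide-by-zero handles it the same
# way on every run -- same input, same handling, every single time.

def run(dividend, divisor):
    try:
        return dividend / divisor
    except ZeroDivisionError:
        return "error: divide by zero"

results = [run(10, 0) for _ in range(5)]
print(results)  # five identical error strings, no variation between runs
```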

styrofoam wrote:the laptop has no awareness of why the warning should bring power, or even that the warning is meant to bring power, or that it even has anything to do with power ... or even understand that there's a link between the power it uses and the meter provided by the battery module.
This. It has no awareness at all, electrons just happen to flow because we designed the components to make that happen.

Someone looking at a person will not be able to tell you what that person will do if they are poked with a pin. Different people will do different things. They also won't have the same reaction twice.

When my calculator says Ma ErrOr, it never tries to prevent the error from happening again: it took the input and produced the exact output that its instructions specified. If it had feelings, it would not feel agony, shame or disappointment. It would feel joy in a job well done. If it could speak it might even ask for a cookie.
If it's a person... well it depends on the person.
"I think nature's imagination is so much greater than man's, she's never gonna let us relax."

bytbox
Posts: 56
Joined: Wed Aug 19, 2009 5:43 am UTC

Re: do errors 'hurt' my PC?

Postby bytbox » Tue Jun 22, 2010 7:22 am UTC

bitsplit wrote:Computers, as they currently exist, are not sufficiently self aware...


Woah! Five or ten years ago we could say this, but now it seems to me that computers are self-aware for pretty much any definition of self-aware that doesn't require the computational power necessary to pass the Turing test. Dumb example: virus scanners. Better example: advanced firewalls that monitor incoming traffic and executed instructions, try to find a correlation, and act appropriately on that correlation (usually by sending panicky emails and shutting stuff down, but still).

Is there any definition of self-aware such that a) no computers today are self-aware and b) massive computational power or other engineering feats are not an obvious prerequisite? (So language processing and other things typical of the Turing test cannot be involved.)

Mavrisa wrote:It follows a set of instructions which makes everything it does based on the input and output COMPLETELY predictable.

Provided you know the entire input - which we theoretically can for your computer (assuming no hardware errors and we can peek at the entropy file), but we can't for you. Comparing computers and humans isn't really fair.

User avatar
bitsplit
Posts: 57
Joined: Thu May 13, 2010 12:40 pm UTC

Re: do errors 'hurt' my PC?

Postby bitsplit » Tue Jun 22, 2010 12:16 pm UTC

A computer is self-aware when it has a sufficient cognitive, if intuitive, ontological model of the world that includes itself and the type of object it represents. It can also differentiate between itself and others within this model. This model needs to be the driving system of the computer, and not merely a layer on top of the operating system. If this system drives the machine, then I would agree the machine is sufficiently self-aware.

A cat, for example, might not know how to explain what a cat is, but it intuitively knows the difference between cat and not-cat. It also knows the difference between itself and other cats. It is, therefore, sufficiently self-aware. Self-awareness is obviously a fuzzy subject, and lends itself to different interpretations depending on the definition given.

Artificially coded AI is still artificially coded, and does not have true understanding; it merely mimics true behavior. I do not, however, disregard the possibility that computers will be capable of self-awareness; to the contrary, I believe that computers will be capable of true self-awareness. But in order for this to happen, the computer must be driven by an ontological system on some level. The operating system could provide a layer on top of which the ontological system, which could be intuitive such as the cat's which I mentioned above, operates and interacts with the operating system to control the entire computer.

Also, I did not say computers are not self-aware; I said they are not sufficiently self-aware. There may be studies in artificial intelligence to which I am not privy that are experimenting with the topics I am describing here. If so, there may be computers that are starting to show self-awareness. However, there is a big step between self-awareness and being able to recognize pain, since that would require a similarity to the animal model of "understanding", and we have no guarantee that the model that emerges for computers will be the same as that for animals, since they have a different set of evolutionary constraints.

squareroot
Posts: 548
Joined: Tue Jan 12, 2010 1:04 am UTC
Contact:

Re: do errors 'hurt' my PC?

Postby squareroot » Tue Jun 22, 2010 2:58 pm UTC

Mavrisa wrote:It could be that the brain actually uses quantum phenomenon to accomplish the things it does. We don't know. But a computer does not. It follows a set of instructions which makes everything it does based on the input and output COMPLETELY predictable.


Firstly, computers are not completely predictable. Given the *exact* same state and input, yes, they may act the same, but very often their behaviour still has roots in very chaotic phenomena. Every time I run Apophysis (an IFS creation & rendering program), the exact same parameters on a low quality setting will give me different pictures. For all I know, there may be some small basin which points would otherwise never map to, and if one of the initial points lands there, it will cause great changes to everything else in the image. I think this is somewhat comparable to a human making tea who suddenly begins to display their schizophrenia symptoms. Earlier it was mild, they ignored it as normal paranoia, but this one time it's different, it's real. The tea will probably never be made if it's a very strong attack, or it may turn out too strong if they leave the tea bag in. You get the idea.
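The sensitivity being described can be sketched in a few lines. The logistic map below is a stand-in for illustration (Apophysis actually iterates function systems): two nearly identical starting points diverge until they are effectively unrelated.

```python
# Chaotic sensitivity sketch: iterate the logistic map from two starting
# points that differ by one part in 10^10 and watch the gap explode.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)  # many orders of magnitude larger than the initial 1e-10
```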

Another example: genetic algorithms for solving complex problems are becoming increasingly common. Maybe you could run the algorithm twenty times and always end up with solutions tending towards the same spot. Then the twenty-first time (if pseudo-chance has it), they might just happen to evolve in such a manner that the solutions collect in a "valley", cut off from the other, more common solution. This "valley" might be much better, or much worse.
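The "valley" picture can be sketched with a toy stochastic search rather than a full genetic algorithm (the objective function, step size, and iteration count here are all invented for illustration): the seed alone decides which optimum a run settles in.

```python
# Stochastic hill climbing on cos(x) over [0, 2*pi]: two peaks of equal
# height sit at x = 0 and x = 2*pi, and the pseudo-random seed determines
# which basin the search ends up in.
import math
import random

def hill_climb(seed, steps=2000, step=0.05):
    rng = random.Random(seed)
    x = rng.uniform(0.0, 2 * math.pi)      # pseudo-random starting point
    for _ in range(steps):
        cand = x + rng.uniform(-step, step)
        # accept only in-bounds moves that improve the objective
        if 0.0 <= cand <= 2 * math.pi and math.cos(cand) > math.cos(x):
            x = cand
    return x

print(hill_climb(seed=1), hill_climb(seed=2))  # may land on different peaks
```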

Secondly, computers will probably be using quantum techniques by 2020 or so. There are already algorithms written for some common tasks that could take great advantage of quantum computing; many (maybe all?) plants already do it during photosynthesis to optimize transport - I don't see why computers couldn't.

Finally, you talk about how "all sorts of things" happen when someone pricks your finger. One word: Multi-threaded?
<signature content="" style="tag:html;" overused meta />
Good fucking job Will Yu, you found me - __ -

Mavrisa
Posts: 340
Joined: Mon Dec 22, 2008 8:49 pm UTC
Location: Ontario

Re: do errors 'hurt' my PC?

Postby Mavrisa » Tue Jun 22, 2010 6:00 pm UTC

squareroot wrote:Firstly, computers are not completely predictable. Given the *exact* same state and input, yes it may act the same, but very often it will still have some roots in very chaotic phenomena - every time I run Apophysis (an IFS creation & rendering program), the exact same parameters on a low quality setting will give me different pictures. For all I know, there may be some small basin which points would otherwise never map too - and if one of the initial points is there, it will cause great changes to everything else in the image.

I love Apophysis! Especially showing it to people who don't like math, hearing them say it's beautiful, and then telling them it's math. But it's just the result of an equation (or a number of them I suppose) given pseudo-random numbers over and over again. A higher quality image will give you nearly the same thing every time because over time, the data smooths out.

squareroot wrote:Another example: Genetic algorithms for solving complex problems are also becoming increasingly common. Maybe you could run the algorithm twenty times, and always end up with solutions tending towards the same spot. Then the twenty first time, (if pseudo-chance has it) they might just happen to evolve in such a manner that the solution collect in a "valley", cut off from the other, more common solution. This "valley" might be much better, or much worse.

Again, this is just pseudo-random number generators at work. They do the best they can (which is often a very good job), but if you know the state of the generator at the time, you can determine which number will pop out, at least for the ones that seed from the processor's clock, which is most of them nowadays. Yes, there are real random number generators available, but it'd be a safe bet that they aren't too common (and yes, they rely on such things as spin and polarization).
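The reproducibility claim is easy to demonstrate: a pseudo-random generator with a known seed (i.e. a known internal state) produces a fully determined sequence.

```python
# Two generators seeded identically produce identical "random" sequences.
import random

gen1 = random.Random(42)
gen2 = random.Random(42)          # same seed, same internal state

seq1 = [gen1.random() for _ in range(5)]
seq2 = [gen2.random() for _ in range(5)]

print(seq1 == seq2)  # True: knowing the state determines every number
```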

squareroot wrote:Secondly, computers will probably be using quantum techniques by 2020 or so. There are already algorithms written for some common tasks that could take great advantage of quantum computing; many (maybe all?) plants already do it during photosynthesis to optimize transport - I don't see why computers couldn't.

Off topic: Whoa! I did not know that about photosynthesis. For anyone interested.
On topic: I'm a little less optimistic about their release date, but still excited for it. I would like to think that this is what it would take for a computer to a) gain consciousness or at least b) be self aware. I like bitsplit's definition of self awareness.

squareroot wrote:Finally, you talk about how "all sorts of things" happen when someone pricks your finger. One word: Multi-threaded?

Let's just say the brain has billions (trillions?) of possible threads, which can be embarrassingly parallel but can also work together fantastically well at the same time. Supercomputers are sort of on the right track, but again, there's nothing in them that allows them to be self-aware.
"I think nature's imagination is so much greater than man's, she's never gonna let us relax."

User avatar
Robert'); DROP TABLE *;
Posts: 730
Joined: Mon Sep 08, 2008 6:46 pm UTC
Location: in ur fieldz

Re: do errors 'hurt' my PC?

Postby Robert'); DROP TABLE *; » Tue Jun 22, 2010 7:31 pm UTC

bitsplit wrote:it merely mimics true behavior.

If it looks like a duck and quacks like a duck, how do you say with any conviction that it isn't a duck?

Mavrisa wrote:Yes, there are real random number generators available, but it'd be a safe bet that they aren't too common

AFAIK, most variations of Linux feed /dev/(u)random from genuinely unpredictable sources such as disk and interrupt timing noise (strictly, /dev/urandom is a PRNG continually reseeded from that entropy pool).

Mavrisa wrote:Supercomputers are sort of on the right track, but again, there's nothing in them that allows them to be self-aware.

Despite the fact that the human brain and an arbitrary supercomputer can both be reduced to Turing machines? (Very esoteric and complex machines, but Turing machines nonetheless.)
...And that is how we know the Earth to be banana-shaped.

User avatar
joshz
Posts: 1466
Joined: Tue Nov 11, 2008 2:51 am UTC
Location: Pittsburgh, PA

Re: do errors 'hurt' my PC?

Postby joshz » Tue Jun 22, 2010 7:43 pm UTC

Yeah, except how do you know that the brain can be reduced to a Turing machine?
You, sir, name? wrote:If you have over 26 levels of nesting, you've got bigger problems ... than variable naming.
suffer-cait wrote:it might also be interesting to note here that i don't like 5 fingers. they feel too bulky.

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: do errors 'hurt' my PC?

Postby Xanthir » Tue Jun 22, 2010 8:10 pm UTC

Because the brain doesn't, to the best of our considerable knowledge, contain pixie dust of any sort.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
joshz
Posts: 1466
Joined: Tue Nov 11, 2008 2:51 am UTC
Location: Pittsburgh, PA

Re: do errors 'hurt' my PC?

Postby joshz » Tue Jun 22, 2010 8:30 pm UTC

Then how do you simplify QM to a Turing machine (if that is, in fact, what the brain uses)? Fact is, we don't know for sure how the brain works. We do know for sure how computers work, and, given the same input, a computer program will always give the same output (assuming any random numbers are counted as input).
You, sir, name? wrote:If you have over 26 levels of nesting, you've got bigger problems ... than variable naming.
suffer-cait wrote:it might also be interesting to note here that i don't like 5 fingers. they feel too bulky.

User avatar
TheChewanater
Posts: 1279
Joined: Sat Aug 08, 2009 5:24 am UTC
Location: lol why am I still wearing a Santa suit?

Re: do errors 'hurt' my PC?

Postby TheChewanater » Tue Jun 22, 2010 8:48 pm UTC

joshz wrote:Yeah, except how do you know that the brain can be reduced to a Turing machine?

I'm sure it could pass the Turing Test. I'm not sure if that means anything.
http://internetometer.com/give/4279
No one can agree how to count how many types of people there are. You could ask two people and get 10 different answers.

User avatar
joshz
Posts: 1466
Joined: Tue Nov 11, 2008 2:51 am UTC
Location: Pittsburgh, PA

Re: do errors 'hurt' my PC?

Postby joshz » Tue Jun 22, 2010 9:16 pm UTC

...please tell me you're joking.
You, sir, name? wrote:If you have over 26 levels of nesting, you've got bigger problems ... than variable naming.
suffer-cait wrote:it might also be interesting to note here that i don't like 5 fingers. they feel too bulky.

User avatar
TheChewanater
Posts: 1279
Joined: Sat Aug 08, 2009 5:24 am UTC
Location: lol why am I still wearing a Santa suit?

Re: do errors 'hurt' my PC?

Postby TheChewanater » Tue Jun 22, 2010 10:27 pm UTC

No, just ignorant.
http://internetometer.com/give/4279
No one can agree how to count how many types of people there are. You could ask two people and get 10 different answers.

User avatar
joshz
Posts: 1466
Joined: Tue Nov 11, 2008 2:51 am UTC
Location: Pittsburgh, PA

Re: do errors 'hurt' my PC?

Postby joshz » Wed Jun 23, 2010 12:34 am UTC

The only thing they have in common is that the same man (Alan Turing) thought of them.
http://en.wikipedia.org/wiki/Turing_machine
http://en.wikipedia.org/wiki/Turing_test
You, sir, name? wrote:If you have over 26 levels of nesting, you've got bigger problems ... than variable naming.
suffer-cait wrote:it might also be interesting to note here that i don't like 5 fingers. they feel too bulky.

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: do errors 'hurt' my PC?

Postby Xanthir » Wed Jun 23, 2010 12:44 am UTC

joshz wrote:Then how do you simplify QM to a turing machine (if that is, in fact, what the brain uses). Fact is, we don't know for sure how the brain works. We do know for sure how computers work, and, given the same input, a computer program will always give the same output. (assuming any random numbers are counted as input).

Quantum computers are Turing-equivalent. They can just obtain an exponential speedup over traditional computers for certain classes of problems - which is irrelevant for computability purposes, just as the speedup traditional computers enjoy over bare Turing machines is. Any problem that defeats a Turing machine still defeats a quantum computer.

We don't know precisely how the brain produces high-level consciousness. We do know how neurons work. It doesn't involve pixie dust.

So, since we know the brain is *at least* as powerful as a Turing machine, and we also know that, so far, we haven't discovered any physical process in the entire universe that can even theoretically allow super-Turing computation (note - there are theoretical physical processes that can, just no *discovered* ones), the brain is thus Turing-equivalent until proven otherwise.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
joshz
Posts: 1466
Joined: Tue Nov 11, 2008 2:51 am UTC
Location: Pittsburgh, PA

Re: do errors 'hurt' my PC?

Postby joshz » Wed Jun 23, 2010 1:17 am UTC

Uh...nothing's a true Turing Machine. Turing Machines mandate infinite storage space.

I'm also not saying the brain has any 'magic pixie dust'-just that no computer currently extant can claim to feel emotion.
You, sir, name? wrote:If you have over 26 levels of nesting, you've got bigger problems ... than variable naming.
suffer-cait wrote:it might also be interesting to note here that i don't like 5 fingers. they feel too bulky.

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: do errors 'hurt' my PC?

Postby Xanthir » Wed Jun 23, 2010 5:00 am UTC

joshz wrote:Uh...nothing's a true Turing Machine. Turing Machines mandate infinite storage space.

Turing equivalency glosses over that fact. We just pretend that we can add arbitrary storage space to the machine as needed. Of course that's not quite true, and there are problems that are perfectly solvable in theory but couldn't be run using all the matter in the universe, but we ignore that; it's irrelevant for practical purposes.
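The "add storage as needed" hand-wave is easy to make concrete. Below is a minimal, hypothetical Turing machine simulator (states, rules, and naming are invented for illustration) whose tape is a dictionary, so new cells come into existence only when the head visits them:

```python
# Minimal Turing machine sketch: the tape grows on demand, approximating
# the infinite tape of the formal model. Rules map (state, symbol) to
# (symbol to write, head move, next state).

def run_tm(rules, state="start", halt="halt"):
    tape, head = {}, 0
    while state != halt:
        symbol = tape.get(head, 0)           # unvisited cells read as 0
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1     # tape extends either direction
    return tape

# A toy machine that writes three 1s and halts:
rules = {
    ("start", 0): (1, "R", "s1"),
    ("s1",    0): (1, "R", "s2"),
    ("s2",    0): (1, "R", "halt"),
}
print(run_tm(rules))  # {0: 1, 1: 1, 2: 1}
```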

joshz wrote:I'm also not saying the brain has any 'magic pixie dust'-just that no computer currently extant can claim to feel emotion.

The claims you're making are stronger than that. You challenged the idea that the brain was turing-equivalent, which is basically a proxy for "brains are special and can do things that computers will never be able to".

I agree that no extant computer experiences anything strongly analogous to what we call emotion.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

Mavrisa
Posts: 340
Joined: Mon Dec 22, 2008 8:49 pm UTC
Location: Ontario

Re: do errors 'hurt' my PC?

Postby Mavrisa » Wed Jun 23, 2010 6:54 am UTC

I think in order for us to ever agree, we need definitions of various terms like emotion, self-awareness, consciousness and feeling (among others). And we're probably never going to agree on those.

Robert'); DROP TABLE *; wrote:If it looks like a duck and quacks like a duck, how do you say with any conviction that it isn't a duck?
Look at it a little more closely. If someone makes a really convincing duck-robot, it will look like a duck and quack like a duck, but it will not be a duck. We have our ways...

Robert'); DROP TABLE *; wrote:AFAIK, most variations of Linux implement /dev/(u)random as truly random, using disk noise.
Okay, fair point. I was referring to most mac and windows generation methods (which often depend on the processor cycle).

TheChewanater wrote:I'm sure it could pass the Turing Test. I'm not sure if that means anything.

I lol'd. But it's okay because I didn't really know what the test exactly was until a couple of days ago

Xanthir wrote:Quantum computers are turing-equivalent. They can just obtain an exponential speedup over traditional computers for certain classes of problems (which is irrelevant for power purposes, as traditional computers gain an exponential speedup over turing machines on nearly all problems). Any problems that defeat a turing machine still defeat quantum computers.
I thought Turing machines gave definite answers (based on this state and this character, do this and switch to this state), whereas quantum computers give probabilities rather than definite answers. Also, there are non-deterministic variations of Turing machines, which can efficiently solve problems that quantum computers are not known to.

Xanthir wrote:We don't know precisely how the brain produces high-level consciousness. We do know how neurons work. It doesn't involve pixie dust.
Well, the Quantum Brain Theory can apparently explain quite a few things fairly well (though there are still many areas in which it fails). I just can't understand any other way for an ant (10 000 neurons) or fruit fly (100 000) to have such complex, adaptive behaviour (relatively speaking) without somehow involving quantum phenomena. I know I shouldn't think like that, but it seems the most obvious explanation. If plants evolved to take advantage of it, why wouldn't animals have as well?
"I think nature's imagination is so much greater than man's, she's never gonna let us relax."

User avatar
WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: do errors 'hurt' my PC?

Postby WarDaft » Wed Jun 23, 2010 8:42 am UTC

bitsplit wrote:This model needs to be the driving system of the computer, and not merely a layer on top of the operating system. If this system drives the machine, then I would agree the machine is sufficiently self-aware.

That's a much more stringent requirement than we place on humans. No electron in the brain is self aware. No given neuron, nor the chemical markers or synapses it has built up, is self aware. The (possibly quantum) cellular automaton running on them might not even be self aware either. It is the whole thing acting in concert that results in consciousness. We can no more draw the line between automaton and the beginnings of consciousness than an 18th-century philosopher could understand - let alone tell apart - machine code, byte code, source code, or an audio stream just by looking at the 1s and 0s. We understand it because we put it together in the first place, but even so, we need tool assistance to easily distinguish between them.

There is a level of abstraction the brain operates on which is (barring pixie dust) no different from a level of abstraction we can run on a computer. The options are:

1) Pixie dust.
2) Consciousness arises purely from information theory.

In case 1, we just need to find it and sprinkle some on the chip - BAM, we have ourselves an AI. In case 2, either there is a minimal automaton for consciousness - a binary flag of sorts - or there is a sliding scale of consciousness and every Turing machine is conscious to some extent (the extent may obviously be 0 for some).
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

User avatar
bitsplit
Posts: 57
Joined: Thu May 13, 2010 12:40 pm UTC

Re: do errors 'hurt' my PC?

Postby bitsplit » Wed Jun 23, 2010 12:37 pm UTC

WarDaft wrote:
bitsplit wrote:This model needs to be the driving system of the computer, and not merely a layer on top of the operating system. If this system drives the machine, then I would agree the machine is sufficiently self-aware.

That's a much more stringent requirement than we place on humans. No electron in the brain is self aware. No given neuron, nor the chemical markers or synapses it has built up, is self aware. The (possibly quantum) cellular automaton running on them might not even be self aware either. It is the whole thing acting in concert that results in consciousness. We can no more draw the line between automaton and the beginnings of consciousness than an 18th-century philosopher could understand - let alone tell apart - machine code, byte code, source code, or an audio stream just by looking at the 1s and 0s. We understand it because we put it together in the first place, but even so, we need tool assistance to easily distinguish between them.

There is a level of abstraction the brain operates on which is (barring pixie dust) no different from a level of abstraction we can run on a computer. The options are:

1) Pixie dust.
2) Consciousness arises purely from information theory.

In case 1, we just need to find it and sprinkle some on the chip - BAM, we have ourselves an AI. In case 2, either there is a minimal automaton for consciousness - a binary flag of sorts - or there is a sliding scale of consciousness and every Turing machine is conscious to some extent (the extent may obviously be 0 for some).


The word model here referred to the ontological model I was discussing. Ontological models are very complicated, not at all simplistic, and could never be confined to a single neuron or a single electron. What I was saying is that the model runs the body on which it itself runs, much like the brain controls the body in which it lives. It gives the abstract mind (the ontological model) living in the brain (operating system, CPU, RAM, and storage) the sense that it is both the body (communications and other peripherals, along with the OS, CPU, RAM, and storage) and the mind. This knowledge can be rational or intuitive, and can exist at different levels of awareness.

I agree we can't just sprinkle self-awareness on devices. The quote there is taken out of context. The system there refers to the system described above the sentences quoted, which talks about a full ontological model that drives the whole machine. These sentences by themselves, taken out of context, are false. Perhaps I should edit my previous post to make it more obvious that the model in that sentence is the same as the ontological model I was referring to in previous parts of the post.

edit: After rereading my previous post, I believe it is evident that the model I refer to in the quoted sentences, being part of the same paragraph, refers to the ontological model which is the topic of the paragraph.

User avatar
joshz
Posts: 1466
Joined: Tue Nov 11, 2008 2:51 am UTC
Location: Pittsburgh, PA

Re: do errors 'hurt' my PC?

Postby joshz » Wed Jun 23, 2010 1:36 pm UTC

xanthir: if I'm grokking wikipedia right, the brain isn't turing equivalent, since no computer can model it.
You, sir, name? wrote:If you have over 26 levels of nesting, you've got bigger problems ... than variable naming.
suffer-cait wrote:it might also be interesting to note here that i don't like 5 fingers. they feel too bulky.

User avatar
WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: do errors 'hurt' my PC?

Postby WarDaft » Wed Jun 23, 2010 1:42 pm UTC

bitsplit wrote:The word model here referred to the ontological model I was discussing. Ontological models are very complicated, not at all simplistic, and could never be confined to a single neuron or a single electron. What I was saying is that the model runs the body on which it itself runs, much like the brain controls the body in which it lives. It gives the abstract mind (the ontological model) living in the brain (operating system, CPU, RAM, and storage) the sense that it is both the body (communications and other peripherals, along with the OS, CPU, RAM, and storage) and the mind. This knowledge can be rational or intuitive, and can exist at different levels of awareness.

I agree we can't just sprinkle self-awareness on devices. The quote there is taken out of context. The system there refers to the system described above the sentences quoted, which talks about a full ontological model that drives the whole machine. These sentences by themselves, taken out of context, are false. Perhaps I should edit my previous post to make it more obvious that the model in that sentence is the same as the ontological model I was referring to in previous parts of the post.

edit: After rereading my previous post, I believe it is evident that the model I refer to in the quoted sentences, being part of the same paragraph, refers to the ontological model which is the topic of the paragraph.
I realize it's the model you were referring to. My point is that you don't know that consciousness is not something running on top of some evolved organic operating system. In fact, it almost assuredly is, because I don't have to think about which neurons I need to fire to type these messages; I just want it to happen and my hands type it.

If you did not mean the sentence as I read it, then why draw a distinction between relying on underlying complexity to control a hardware system and directly controlling a hardware system? A consciousness not in full control of its hardware system is quite present in humans, as all sorts of mental and physical disabilities and disorders... or just plain dreaming. Heck, why even talk about a machine being self aware? It's the software that's self aware; the hardware is just a puppet, even for humans.

xanthir: if I'm grokking wikipedia right, the brain isn't turing equivalent, since no computer can model it.
The lack of an existing program that is equivalent to the brain is not evidence that it is incomputable, and being incomputable doesn't prevent something from containing a Turing equivalent subset. In truth, for the brain to be incomputable by a Turing machine would require that there are fundamental laws of the universe that are not computable and yet vital to the operation of the human brain. There is currently no evidence of that.
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

User avatar
bitsplit
Posts: 57
Joined: Thu May 13, 2010 12:40 pm UTC

Re: do errors 'hurt' my PC?

Postby bitsplit » Wed Jun 23, 2010 2:49 pm UTC

WarDaft wrote:
bitsplit wrote:The word model here referred to the ontological model I was discussing. Ontological models are very complicated, not at all simplistic, and could never be confined to a single neuron or a single electron. What I was saying is that the model runs the body on which it itself runs, much like the brain controls the body in which it lives. It gives the abstract mind (the ontological model) living in the brain (operating system, CPU, RAM, and storage) the sense that it is both the body (communications and other peripherals, along with the OS, CPU, RAM, and storage) and the mind. This knowledge can be rational or intuitive, and can exist at different levels of awareness.

I agree we can't just sprinkle self-awareness on devices. The quote there is taken out of context. The system there refers to the system described above the sentences quoted, which talks about a full ontological model that drives the whole machine. These sentences by themselves, taken out of context, are false. Perhaps I should edit my previous post to make it more obvious that the model in that sentence is the same as the ontological model I was referring to in previous parts of the post.

edit: After rereading my previous post, I believe it is evident that the model I refer to in the quoted sentences, being part of the same paragraph, refers to the ontological model which is the topic of the paragraph.
I realize it's the model you were referring to. My point is that you don't know that consciousness is not something running on top of some evolved organic operating system. In fact, it almost assuredly is, because I don't have to think about which neurons I need to fire to type these messages; I just want it to happen and my hands type it.

If you did not mean the sentence as I read it, then why draw a distinction between relying on underlying complexity to control a hardware system and directly controlling a hardware system? A consciousness not in full control of its hardware system is quite present in humans, as all sorts of mental and physical disabilities and disorders... or just plain dreaming. Heck, why even talk about a machine being self aware? It's the software that's self aware; the hardware is just a puppet, even for humans.

xanthir: if I'm grokking wikipedia right, the brain isn't turing equivalent, since no computer can model it.
The lack of an existing program that is equivalent to the brain is not evidence that it is incomputable, and being incomputable doesn't prevent something from containing a Turing equivalent subset. In truth, for the brain to be incomputable by a Turing machine would require that there are fundamental laws of the universe that are not computable and yet vital to the operation of the human brain. There is currently no evidence of that.


I draw the distinction because, from an abstract point of view, if the model is not running the computer, then the model exists within the computer and is not the computer itself. Therefore, the computer is not self-aware. The computer is, in a sense, an environment in which a virtual self-aware entity exists, but the computer itself is not self-aware. The computer is the universe in which the entity exists. If the entity controls the computer, then it is the computer that is self-aware, provided the entity identifies itself as the computer. Notice that I mentioned it doesn't have to be a rational ontological model, but can be an intuitive one. You don't have to think about how to move your hand in order to do so, because the model is intuitive. You just have to have some intuitive notion of an it, call it a hand, and an intuitive sense of how to move it.

A being that merely acts on reflex is not self aware, but one that acts on instinct might be, if that instinct is a representation of an evolved ontological model. We, as humans, could study that model, and we often do. We study the ability of parrots to count, for example. Parrots do not study the ability of parrots to count, but they instinctively and intuitively know how to do so, to a certain extent.

It is this intuitive model that enables self-awareness. In humans, we have a meta-model: a rational model built on top of that intuitive one. We have rational concepts of lions; even if we did not, we would know to run from them if they were chasing us. The former concept belongs to the rational ontology, whereas the latter belongs to the intuitive ontology. It is the latter type of model which truly enables self-awareness. It opens the door to distinguishing me from not me, like me from not like me, and of a kind from not of a kind. Once enough intuitive concepts exist in the however primitive mind to distinguish me from not me and like me from not like me, I believe the door is opened to self-awareness. It is not necessarily the advanced human form, or the instinctively protective self-awareness of animals, but it can be self-awareness of a kind. In a sense, it knows the self, and what kind of self it is, even if only intuitively.

If the self the model identifies is not the computer, then the computer is not self-aware. I concede that the self could, conceptually, perceive itself to be the computer if it can perceive sufficient amounts of what the computer perceives, even if it cannot act on it. This is similar to a person who has had a spinal injury and is no longer able to control his limbs. He is still a person and is still self-aware. So, after rethinking it, it is the perception, or at least partial perception, that would make the computer self-aware. Actually, the computer could have several self-aware selves. However, I doubt that such an AI would emerge given its lack of possible interaction with the environment and control over the computer. But this is speculative, so I cannot argue it as fact, merely belief. It is, however, a strong belief on my part. I believe it would be difficult for the AI to emerge as something that identified the computer as "itself" without a need to do so, and for that it would have to be able to interact with an environment outside the computer and have at least some control over the computer. There is also the possibility that a self-aware being can exist within a computer without identifying itself as the computer, perceiving everything through the computer. In a sense, the computer is its universe, and its senses are somewhat self-defined by its code.

However, the OP was questioning whether a computer could experience pain. I believe it is theoretically possible, but is not so given the current state of computing as it is.

edit: My bad, the OP's original question was whether a computer could suffer physical damage from coding errors. Yeah, horribly off-topic by now.
Last edited by bitsplit on Thu Jun 24, 2010 2:07 pm UTC, edited 1 time in total.

User avatar
WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: do errors 'hurt' my PC?

Postby WarDaft » Wed Jun 23, 2010 3:31 pm UTC

Then you do indeed feel it necessary to state that humans are probably not self aware - just hosts for virtual clients that themselves are. We can have a sense of self completely separated from our physical bodies via dreaming, particularly lucid dreaming. The only way to distinguish a lucid dream from waking life is logical inconsistencies in the world apparently around you; they can feel perfectly real otherwise, which rather means the conscious mind is not fundamentally bound to the body, and any appearance otherwise is a consistent illusion.

We can't even presume to draw a distinction between realizing the self and realizing you are a virtualized self on a machine, because you could be virtualized in a way that provides no possible concept of being virtualized. Consider that many people will never even stop to think about how they are really just ordering a construct around rather than their "self" actually doing the things they observe their bodies doing, let alone actually perceive the world that way.
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

User avatar
bitsplit
Posts: 57
Joined: Thu May 13, 2010 12:40 pm UTC

Re: do errors 'hurt' my PC?

Postby bitsplit » Wed Jun 23, 2010 4:36 pm UTC

WarDaft wrote:Then you do indeed feel it necessary to state that humans are probably not self aware - just hosts for virtual clients that themselves are. We can have a sense of self completely separated from our physical bodies via dreaming, particularly lucid dreaming. The only way to distinguish a lucid dream from waking life is logical inconsistencies in the world apparently around you; they can feel perfectly real otherwise, which rather means the conscious mind is not fundamentally bound to the body, and any appearance otherwise is a consistent illusion.

We can't even presume to draw a distinction between realizing the self and realizing you are a virtualized self on a machine, because you could be virtualized in a way that provides no possible concept of being virtualized. Consider that many people will never even stop to think about how they are really just ordering a construct around rather than their "self" actually doing the things they observe their bodies doing, let alone actually perceive the world that way.


Actually, I believe humans are self-aware. The human mind would not have evolved to what it is if it had not been bound by the constraints of the body which is tied to it, and the environment with which that body interacts. That being said, that self-awareness resides in the mind, not the body. So while the self-awareness resides in the mind, and the mind is separate from the body in so far as it is the result of information being processed (my belief), the mind would not exist as it is had it not been for the evolutionary process of the physical world, as opposed to the virtual world in which it resides. So while it is theoretically possible to take a human mind as is and plug it into a machine that is an artificial equivalent of a brain, given technology that is adequate to the task, the mind itself would not have arrived to the state it was in before transfer without the process previously stated, in which the human mind was bound to the human body.

So humans are self-aware, but self-awareness resides in the mind. A human body that is in a coma is alive, and cannot be self aware. A human with certain kinds of brain damage is, in the case of certain lobotomies for example, likely not self aware as well. In such cases, some humans are not self-aware and yet human in the usual sense. Even so, their brains have suffered such damage or are so malformed that they cannot host a "human mind" and are therefore incapable of self-awareness. It is likely that this is the case of other mammals as well, although I have not heard of any studies related to the lobotomies or brain damage of mammals or other animals and the corresponding (if any) effect on their cognitive capabilities.

So in a sense I do believe that mind is separate from the body, and that self-awareness resides in the mind. However, the mind needs a brain (or equivalent technology which does not exist at this point in time) in order to exist. The mind intuitively believes that it is the human, both body and mind. Most humans would not know the difference if it were not for education, sufficient rational thought, philosophy, or science. But they are still self-aware. However, I think we should make a distinction between the human, the human body, the human consciousness, and the human mind. The human consciousness is self aware and resides in the human mind. The human body is not. But the human, as a combination of all three things, is self-aware, by nature of it forming together as one integral entity, since neither of the three, except maybe the body, can usually exist without the other two.

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: do errors 'hurt' my PC?

Postby Xanthir » Wed Jun 23, 2010 6:51 pm UTC

Mavrisa wrote:
Xanthir wrote:Quantum computers are turing-equivalent. They can just obtain an exponential speedup over traditional computers for certain classes of problems (which is irrelevant for power purposes, as traditional computers gain an exponential speedup over turing machines on nearly all problems). Any problems that defeat a turing machine still defeat quantum computers.
I thought turing machines gave definite answers (Based on this state and this character, do this and switch to this state) ... quantum computers give probabilities rather than definite answers.

No, they give definite answers, just with a probability of being wrong. You can determine the exact probability of this, and then rerun the algo enough times to get the error below whatever threshold you want. The error of the collection of runs tends to decrease very fast, so the exponential speedup of the algorithm isn't threatened by being forced to run multiple times.
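The amplification described above is easy to see with a toy classical stand-in. This sketch just assumes each run of some bounded-error algorithm is independently correct with probability 0.75 - no actual quantum computation involved:

```python
import random

def one_run(truth, p_correct=0.75):
    """Toy stand-in for a bounded-error algorithm: right 75% of the time."""
    return truth if random.random() < p_correct else not truth

def majority_of(runs, truth):
    """Rerun the algorithm and take a majority vote over the answers."""
    votes = sum(one_run(truth) for _ in range(runs))
    return votes * 2 > runs

random.seed(1)
trials = 2000
single_errors = sum(one_run(True) != True for _ in range(trials))
voted_errors = sum(majority_of(51, True) != True for _ in range(trials))
print(single_errors / trials)  # roughly 0.25
print(voted_errors / trials)   # near 0: error shrinks exponentially in runs
```

51 reruns is a constant factor, so the exponential speedup survives the repetition, as Xanthir says.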

Also, while there are non-deterministic variations of turing machines, they can still do various problems that quantum computers cannot.

No they can't. Adding non-determinism to a turing machine adds precisely 0 additional power. Take a Theory of Automata class sometime - it's extremely interesting what sort of manipulations have an effect on a machine's power.

Xanthir wrote:We don't know precisely how the brain produces high-level consciousness. We do know how neurons work. It doesn't involve pixie dust.
Well the Quantum Brain Theory can apparently explain quite a few things fairly well (though there are still many areas in which it fails).

It actually can't. There's just a bunch of fluffed up popsci surrounding it because to the general public "quantum" equals "magic", and we like to think that our brains are magical. (To be precise, we like to think that our brains aren't deterministic. This is equivalent to magical.)

I just can't understand any other way for an ant (10 000 neurons) or fruit fly (100 000) to have such complex, adaptive behaviour (relatively speaking) without somehow involving quantum phenomena. I know I shouldn't think like that, but it seems the most obvious explanation. If plants evolved to take advantage of it, why wouldn't animals have done so as well?

You're arguing from incredulity. ^_^ Quantum phenomena are not at all required for that sort of thing. You can get absolutely astonishing behavior from neural networks of 100 nodes. A 10k or 100k node network adapted through a billion-year genetic algorithm can very plausibly explain ant or fruit-fly behavior.
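For a sense of how little machinery nonlinear behaviour needs: here is a hypothetical three-neuron threshold network computing XOR with hand-picked weights - no quantum effects, and nothing a single neuron could do alone:

```python
def neuron(inputs, weights, bias):
    """McCulloch-Pitts threshold unit: fire iff the weighted sum plus bias is positive."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def xor_net(x, y):
    h_or  = neuron([x, y], [1, 1], -0.5)         # fires when x OR y
    h_and = neuron([x, y], [1, 1], -1.5)         # fires when x AND y
    return neuron([h_or, h_and], [1, -1], -0.5)  # OR and not AND = XOR

for x in (0, 1):
    for y in (0, 1):
        print(x, y, xor_net(x, y))  # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```

Scale that to 10k thresholds with weights tuned by evolution and a lot of adaptive behaviour comes for free.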

Plants do take advantage of quantum phenomena in photosynthesis, but that's different from quantum brain. For one, quantum photosynthesis actually has proof that it exists. ^_^ For two, we actually had problems explaining photosynthesis with classical physics - it's much more efficient than what we can get out of solar cells. We haven't hit such a problem with the brain - we haven't yet encountered anything that can't in principle be explained by ordinary classical physics and chemistry.

joshz wrote:xanthir: if I'm grokking wikipedia right, the brain isn't turing equivalent, since no computer can model it.

Sorry, but you're grokking wikipedia wrong, or else there's a false statement in wikipedia. Can you point to which article you're reading that suggests that?

Let me elaborate on why that's wrong:

You can split real and hypothetical machines into 3 natural categories: turing-equivalent, sub-turing-equivalent, and super-turing-equivalent. This is a natural categorization because almost every single non-trivial system in the entire world is turing-equivalent (that is, assuming somehow that it was allowed arbitrary memory, it could compute anything that a turing machine could).

You have to try rather hard to formulate something non-trivial that's weaker than a turing machine. It's really astonishing how simple you can make a computing system and still have it be exactly as powerful as a modern supercomputer (in terms of what types of programs it can theoretically run in finite time given arbitrary memory). It's also astonishing what kinds of powers you can give a machine while having it remain turing-equivalent.

Example:
Spoiler:
For example, one of the simplest types of interesting machines is a DFA (deterministic finite automaton). It accepts or rejects a string by simply eating one character at a time from the start of the string and moving from memory state to memory state based only on that one character. Eventually it's eaten the entire string, and is in an accept or reject state. The DFA is sub-turing - there are a bunch of problems that are trivial for a real computer that a DFA can't solve, such as accepting all and only binary strings with the same number of 1s and 0s. They're still quite useful, though - regexes, for example, are equivalent to DFAs (though, in practice, perl-compatible regexes contain a number of additional abilities that boost them above the power of DFAs).

Now, let's give the DFA a little bit more power. We'll let it carry around a stack for memory, and then it can transition from state to state based on either looking at the top element in its memory, the first element of the string, or both. Whenever it transitions, it can also either push or pop something onto its memory. This is called a pushdown automaton, or PDA. PDAs are significantly more powerful. Most parser generators are built around context-free grammars, which are equivalent to PDAs. They're still sub-turing, though.

Now, let's do a tiny change. Let's give the PDA *two* stacks, and let it read/write from both of them. Suddenly, the machine is turing-equivalent. That tiny little change is enough to catapult the PDA which, though useful, is still rather weak in terms of what kinds of problems it can solve, all the way to full power.

Let's do a different change. Let's keep the PDA with a single memory device, but let's make it a queue instead of a stack. Bam, turing-equivalent again.

Now, let's take a different tack. Non-determinism is very useful sometimes. In essence, it means that a machine, rather than taking a single definite path, can take multiple paths at the same time, and decide whether to accept or reject based on the state of *all* the paths at the end.

If we make a DFA non-deterministic, surprisingly, we gain no additional power. DFAs are so simple that following every path at once doesn't actually add any power, it just makes the machine smaller - turning an NFA into a DFA generally requires an exponential increase in the number of states.

If we make a PDA non-deterministic, though, it *does* make a big difference. A deterministic and a non-deterministic PDA are substantially different in power. The non-deterministic one is generally considered more "natural" - context-free grammars are equivalent to non-deterministic PDAs; the equivalent class of grammars for deterministic PDAs has several additional restrictions that make them harder to work with.

In general, everything more powerful than a DFA gains additional power when we make it non-deterministic. This is unsurprising. However, a Turing machine doesn't! Turing machines are *so* powerful that the extra abilities afforded by non-determinism don't actually grant any additional power. Similarly, giving a turing machine extra tapes to work with (similar to giving a PDA extra memory stacks) doesn't add any power.

As it turns out, *nothing* that we can physically give to a real turing machine increases its power, and quite a few things that we can only do in theory don't increase its power either. You usually have to get *really* magical to allow a turing machine to finally solve new classes of problems that it couldn't solve before.
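The DFA described above is small enough to write out in full. A sketch in Python (the machine and input strings are made up for illustration - it accepts binary strings with an even number of 1s, which a finite set of states can track, unlike an unbounded count):

```python
def run_dfa(transitions, start, accepting, s):
    """Eat one character at a time, tracking nothing but the current state."""
    state = start
    for ch in s:
        state = transitions[(state, ch)]
    return state in accepting

# Hypothetical two-state DFA: have we seen an even or odd number of 1s?
even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}

print(run_dfa(even_ones, "even", {"even"}, "1001"))  # True  (two 1s)
print(run_dfa(even_ones, "even", {"even"}, "111"))   # False (three 1s)
```

Parity fits in finitely many states; "same number of 1s as 0s" does not, which is why that job needs a stack (PDA), or two stacks for full turing power.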


As far as we know, there's nothing in the brain that can't be modeled to an arbitrary precision by existing computers. We may need more computing power than we readily have available right now, but that's irrelevant to the question. The speed of a computer doesn't affect its turing-equivalence. That's what I mean by "the brain doesn't contain pixie dust" - pixie dust is "something that can't be modeled by a traditional computer; something that allows computations to be performed that can't be done on a turing machine running for a finite amount of time".
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
joshz
Posts: 1466
Joined: Tue Nov 11, 2008 2:51 am UTC
Location: Pittsburgh, PA

Re: do errors 'hurt' my PC?

Postby joshz » Wed Jun 23, 2010 6:58 pm UTC

Wiki says: "Turing equivalence --- Two computers P and Q are called Turing equivalent if P can simulate Q and Q can simulate P." Since no computer can simulate the brain, the brain isn't turing equivalent with any computer, is it?
You, sir, name? wrote:If you have over 26 levels of nesting, you've got bigger problems ... than variable naming.
suffer-cait wrote:it might also be interesting to note here that i don't like 5 fingers. they feel too bulky.

User avatar
bitsplit
Posts: 57
Joined: Thu May 13, 2010 12:40 pm UTC

Re: do errors 'hurt' my PC?

Postby bitsplit » Wed Jun 23, 2010 7:19 pm UTC

joshz wrote:Wiki says: "Turing equivalence --- Two computers P and Q are called Turing equivalent if P can simulate Q and Q can simulate P." Since no computer can simulate the brain, the brain isn't turing equivalent with any computer, is it?

I think it is possible for a computer to simulate a brain, given enough memory. It is just an intractable problem: by the time the simulation is done, the people interested in the result will be long gone. This doesn't mean the machine isn't capable; it just means the solution isn't practical. Try using a simulated Turing machine to play chess as well as GNU Chess can, and you will find that problem impractical as well; yet GNU Chess runs on a machine that is Turing equivalent (to an extent, since it is limited to finite RAM).

Also, in your P and Q statement, one of the machines must be either a Turing machine or a machine that has already been proven to be Turing equivalent.

As a matter of fact, Alan Turing modeled the Turing machine on the concept of a human with an unlimited supply of pencil and paper, working on a finite piece of information at a time. After all, a computer used to be a person sitting at a desk computing numbers with pencil and paper, procuring more pencils and more paper as needed to help in the calculation. The brain is a finite, albeit fast and quite complex, state machine. The unlimited paper gave rise to the infinite tape; the human at the desk, working on one page at a time, was the finite state machine that read and wrote a symbol on the page and picked a new page to read or write from when he ran out of space to work with.
Last edited by bitsplit on Wed Jun 23, 2010 7:59 pm UTC, edited 2 times in total.

User avatar
WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: do errors 'hurt' my PC?

Postby WarDaft » Wed Jun 23, 2010 7:36 pm UTC

joshz wrote:Wiki says: "Turing equivalence --- Two computers P and Q are called Turing equivalent if P can simulate Q and Q can simulate P." Since no computer can simulate the brain, the brain isn't turing equivalent with any computer, is it?

You're confusing "no computer has emulated the human brain" with "no computer can emulate the human brain".

There is no evidence that anything going on in the human brain is beyond the bounds of computable functions. The brain is simply a vast program definition: think trillions of pages of redundant code and input data, and even more data states. We haven't emulated it yet, not because it can't be done, but because it is a tremendous amount of work.


If anything, Turing machines are vastly more powerful than the brain, because the brain does not have unbounded memory storage; after all, it exists in a finite universe. There are plenty of astonishingly small Turing machines that we will simply never be able to determine much about, because they require more resources than are available in the universe to actually calculate in full. The busy beaver max-steps function goes 1, 6, 21, 107, >47176870, >3.8 × 10^21132, and the machine establishing the lower bound on the 6th term is trivially definable in 41 characters from an alphabet of 10 symbols, one used strictly for delimitation, another only used once. In fact, here it is: BR2AL4:BR3AR6:BL3BL1:AL5BLH:BL1AR2:AR3AR5. It seems trivially small, doesn't it? But we don't know the upper bound on the halting time for Turing machines even that size, and we may never. We will certainly never know the 7th term for sure unless we magic ourselves up a Zeno machine or something.
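For a feel of how small these machines are, here is a minimal sketch in Python (an editorial illustration with a made-up table format, not WarDaft's 41-character encoding) running the 2-state busy beaver, whose behaviour is fully known: it halts after 6 steps with 4 ones on the tape, matching the second term of the sequence above.

```python
# Minimal Turing machine stepper. The transition table is the 2-state
# busy beaver: (state, symbol read) -> (symbol written, head move, next state).
TABLE = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),  # 'H' = halt
}

def run(table, max_steps=1000):
    """Run the machine on an all-zero tape; return (steps taken, ones written)."""
    tape, pos, state, steps = {}, 0, 'A', 0
    while state != 'H' and steps < max_steps:
        write, move, state = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

print(run(TABLE))  # (6, 4): BB(2) = 6 steps
```

Already at 5 or 6 states, deciding which machines halt at all is what makes the later terms of the sequence so hard to pin down.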
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: do errors 'hurt' my PC?

Postby Xanthir » Wed Jun 23, 2010 7:40 pm UTC

joshz wrote:Wiki says: "Turing equivalence --- Two computers P and Q are called Turing equivalent if P can simulate Q and Q can simulate P." Since no computer can simulate the brain, the brain isn't turing equivalent with any computer, is it?

"no computer can simulate the brain" is incorrect. As far as we know, everything in the brain can be simulated by computers just fine. The only problem is one of complexity - human brains are big and complicated enough that we can't *yet* fully simulate one on current hardware at reasonable speeds. (Requiring days of computation to simulate a millisecond of brain activity isn't useful.)

Again, speed limitations have zero effect on turing equivalence. Any problem which can be solved by saying "let's add more RAM" isn't a problem, as far as computer science is concerned.

To illustrate this more plainly, the computer you are using to view this post cannot be simulated on an actual physical turing machine in a reasonable amount of time. Turing machines are exponentially slower than electronic computers for most problems. However, a Turing machine can simulate it in a *finite* amount of time (finite but very large). Thus the speed difference is irrelevant. The only problems occur when P can't simulate Q without either requiring infinite memory at once or infinite running time. (Turing machines are theoretically allowed an infinite tape, but they can only use a finite portion of it at any time. It's equivalent to say that turing machines have an *unbounded* tape - it's finite, but always large enough for the problem at hand.)

Spoiler:
That's why a DFA is less powerful than a turing machine. To solve the problem of "accept all binary strings with an equal number of 0s and 1s, and reject everything else", a Turing machine only requires a handful of states, but a DFA requires an infinite number of states. This corresponds to an infinite amount of memory, though the running time for the DFA is finite. Thus the DFA can't be said to be "simulating" the turing machine, and the two are not equivalent in power.
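To make the spoiler concrete, a short Python sketch (an editorial illustration): a single unbounded integer counter, standing in for the Turing machine's tape, recognises the language, and it is exactly this unbounded memory that no fixed set of DFA states can supply.

```python
def equal_zeros_ones(s):
    """Accept binary strings with as many '0's as '1's.
    The integer `balance` is unbounded -- this is the memory
    a DFA's finite state set cannot provide."""
    balance = 0
    for ch in s:
        balance += 1 if ch == '0' else -1
    return balance == 0

print(equal_zeros_ones("0101"))   # True
print(equal_zeros_ones("00011"))  # False
```

A DFA with n states must treat two of the prefixes "0", "00", …, "0"*(n+1) identically, so it misclassifies one of the corresponding completed strings; the counter never has that problem.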
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
joshz
Posts: 1466
Joined: Tue Nov 11, 2008 2:51 am UTC
Location: Pittsburgh, PA

Re: do errors 'hurt' my PC?

Postby joshz » Wed Jun 23, 2010 7:46 pm UTC

I see, that makes sense.
You, sir, name? wrote:If you have over 26 levels of nesting, you've got bigger problems ... than variable naming.
suffer-cait wrote:it might also be interesting to note here that i don't like 5 fingers. they feel too bulky.

Mavrisa
Posts: 340
Joined: Mon Dec 22, 2008 8:49 pm UTC
Location: Ontario

Re: do errors 'hurt' my PC?

Postby Mavrisa » Wed Jun 23, 2010 11:37 pm UTC

What's the simplest brain that we could simulate with a computer which we would consider to be conscious, self aware, and relatively similar to humans? I think we should simulate a couple of seconds and see if we get any comparable behaviour...

Xanthir wrote:No, they give definite answers, just with a probability of being wrong. You can determine the exact probability of this, and then rerun the algo enough times to get the error below whatever threshold you want. The error of the collection of runs tends to decrease very fast, so the exponential speedup of the algorithm isn't threatened by being forced to run multiple times.

Makes sense.
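To put rough numbers on the quoted point, a sketch (an editorial illustration, with an assumed per-run error of 1/3): the chance that a majority vote over k independent runs is wrong is a binomial tail, which shrinks exponentially in k.

```python
from math import comb

def majority_error(p, k):
    """Probability that a strict majority of k independent runs err,
    given per-run error probability p < 1/2."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

# Per-run error 1/3 drops fast under repetition:
for k in (1, 11, 101):
    print(k, majority_error(1/3, k))
```

With 11 runs the overall error is already near 0.12, and with 101 runs it is far below 1%, so the polynomial cost of repetition never threatens an exponential speedup.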

Also, while there are non-deterministic variations of turing machines, they can still do various problems that quantum computers cannot.

No they can't. ... Take a Theory of Automata class sometime - it's extremely interesting what sort of manipulations have an effect on a machine's power.

Hah, turns out I just misread something. My bad :P Also - If I get the chance, I think I will.

Xanthir wrote:To be precise, we like to think that our brains aren't deterministic. This is equivalent to magical

I disagree... I do like to think my brain is non-deterministic, but that's not magic. If you make transistors too small, they stop being deterministic (random leaks, tunnelling, pico-scale capacitance, etc.). As a side effect, they stop working... but who says neurons would? You say there's nothing in the brain that can't be explained by classical physics... do you mean out of everything we've seen/modelled so far, or have we successfully modelled the whole thing?

You can get absolutely astonishing behavior from neural networks of 100 nodes.

I'd love to see an example...

A 10k or 100k node network adapted through a billion-year genetic algorithm

True. Keep in mind, though, that a billion years earlier, brains didn't exist, but I see your point. Still... where does it keep its instruction set (or equivalent)?

I'm wondering.. you said there is a class of machines which are super-turing equivalent, but then you said "*nothing* that we can physically give to a real turing machine increases its power"... do they only exist theoretically? Could you elaborate a bit?
"I think nature's imagination is so much greater than man's, she's never gonna let us relax."

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: do errors 'hurt' my PC?

Postby Xanthir » Wed Jun 23, 2010 11:47 pm UTC

Mavrisa wrote:
Xanthir wrote:To be precise, we like to think that our brains aren't deterministic. This is equivalent to magical

I disagree... I do like to think my brain is non-deterministic, but that's not magic. If you make transistors too small, they stop being deterministic (random leaks, tunnelling, pico-scale capacitance, etc.). As a side effect, they stop working... but who says neurons would?

Ah, sorry, when I'm using the term like that, I include "random" as well. People don't like thinking that their brains works randomly, either. But that's all it is - some combination of deterministic and random behavior.

You say there's nothing in the brain that can't be explained by classical physics... do you mean out of everything we've seen/modelled so far, or have we successfully modelled the whole thing?

Everything we've seen, and everything that most neurobiologists expect to see.

You can get absolutely astonishing behavior from neural networks of 100 nodes.

I'd love to see an example...

Read up on NNs and play with them a bit. You'll have a lot more fun that way. ^_^

A 10k or 100k node network adapted through a billion-year genetic algorithm

True. Keep in mind, though, that a billion years earlier, brains didn't exist, but I see your point. Still... where does it keep its instruction set (or equivalent)?

NNs don't work like that. Their operation is encoded in the linkages between nodes and the weights of those linkages.
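As a tiny illustration of "operation encoded in the weights" (an editorial sketch with hand-picked numbers, not a trained net): three threshold units whose weights and biases alone make the network compute XOR; change the numbers and you change the program.

```python
def step(x):
    """Threshold activation: fire (1) iff the weighted input is positive."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    """2-input, 2-hidden-unit, 1-output threshold network.
    All of its behaviour lives in these weights and biases."""
    h1 = step(x1 + x2 - 0.5)    # fires if at least one input is on
    h2 = step(x1 + x2 - 1.5)    # fires only if both inputs are on
    return step(h1 - h2 - 0.5)  # "at least one, but not both" = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # outputs 0, 1, 1, 0
```

There is no instruction list anywhere; the "code" is the six weight/bias numbers, which is what training (or evolution) adjusts.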

I'm wondering.. you said there is a class of machines which are super-turing equivalent, but then you said "*nothing* that we can physically give to a real turing machine increases its power"... do they only exist theoretically? Could you elaborate a bit?

Yes, super-turing machines (machines capable of solving problems that turing machines can't solve without either infinite memory usage or infinite runtime) don't exist in real life. There are some models of physics in which they could possibly exist, but none have been confirmed to actually match the universe. Wikipedia has a decent article on it under the name Hypercomputation.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

Mavrisa
Posts: 340
Joined: Mon Dec 22, 2008 8:49 pm UTC
Location: Ontario

Re: do errors 'hurt' my PC?

Postby Mavrisa » Thu Jun 24, 2010 12:19 am UTC

Xanthir wrote:NNs don't work like that. Their operation is encoded in the linkages between nodes and the weights of those linkages.
:shock: I guess I'll have to go play around with them... somewhere.
"I think nature's imagination is so much greater than man's, she's never gonna let us relax."

User avatar
styrofoam
Posts: 256
Joined: Sat May 08, 2010 3:28 am UTC

Re: do errors 'hurt' my PC?

Postby styrofoam » Thu Jun 24, 2010 12:53 am UTC

Mavrisa wrote:I'm wondering.. you said there is a class of machines which are super-turing equivalent, but then you said "*nothing* that we can physically give to a real turing machine increases its power"... do they only exist theoretically? Could you elaborate a bit?

viewtopic.php?f=12&t=58941

Sadly, only theoretical...
aadams wrote:I am a very nice whatever it is I am.

User avatar
phlip
Restorer of Worlds
Posts: 7573
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia
Contact:

Re: do errors 'hurt' my PC?

Postby phlip » Thu Jun 24, 2010 4:08 am UTC

This thread has gone horribly off-topic... and there is already a thread for what it has become...

The OP's question was whether an error did physical damage, not caused some metaphysical notion of "pain". And it was answered: no, it doesn't, in 99% of cases.

Code: Select all

enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};
void ┻━┻︵​╰(ಠ_ಠ ⚠) {exit((int)⚠);}
[he/him/his]

