brain in a vat

A forum for good logic/math puzzles.

Moderators: jestingrabbit, Moderators General, Prelates

SerialClergyman
Posts: 7
Joined: Sat May 24, 2008 4:03 pm UTC

Re: brain in a vat

Postby SerialClergyman » Thu Sep 17, 2009 11:37 am UTC

One thing that hasn't been mentioned is that it could be the most advanced intelligence in the world, with the most advanced database and computing power, but it also MUST have the ability to write data and access it again. Otherwise the simple way to trip it up is 'How's the weather? What did I just ask then?'

aduubian
Posts: 25
Joined: Thu Apr 16, 2009 10:32 pm UTC

Re: brain in a vat

Postby aduubian » Fri Sep 18, 2009 6:55 pm UTC

Someone mentioned telling three stories, one of which is a joke, and asking each opponent to pick out the joke. How about this: ask the two other players to tell you a story or a joke. If that doesn't work, ask them to explain why the joke is funny.

chocolate.razorblades
Posts: 90
Joined: Wed Apr 01, 2009 6:11 pm UTC

Re: brain in a vat

Postby chocolate.razorblades » Fri Sep 18, 2009 9:53 pm UTC

Goldstein wrote:What you've all failed to notice in this thread is that ponzerelli is a machine. "Good," it's thinking, "Good..."


lmao.

BlackSails
Posts: 5315
Joined: Thu Dec 20, 2007 5:48 am UTC

Re: brain in a vat

Postby BlackSails » Sat Sep 19, 2009 4:58 am UTC

There is no way. Here is a proof.

There are a finite number of questions you can ask the computer. The programmer loads into the computer a lookup table for all these questions. (Assume questions are a function of not only the words in the question, but all previous questions as well, ie, identically worded questions are still unique).

The computer takes your question, looks up the appropriate response and outputs it after an appropriate delay.
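The lookup-table bot described above can be sketched in a few lines. This is only an illustration of the idea, with an invented table: the key is the entire conversation so far, so identically worded questions asked at different points can map to different answers.

```python
import random
import time

# Illustrative lookup table keyed on the full question history.
# All entries here are invented for the sketch.
LOOKUP = {
    ("How's the weather?",): "Grey and drizzly here.",
    ("How's the weather?", "What did I just ask then?"): "You asked about the weather.",
}

def respond(history):
    """history: tuple of every question asked so far, latest last."""
    answer = LOOKUP.get(tuple(history), "Sorry, what do you mean?")
    time.sleep(random.uniform(0.5, 2.0))  # human-plausible typing delay
    return answer
```

Keying on the whole history is what makes "What did I just ask then?" answerable, at the cost of an astronomically large table.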

Kalathalan
Posts: 20
Joined: Sun Feb 15, 2009 6:06 am UTC

Re: brain in a vat

Postby Kalathalan » Sun Sep 20, 2009 5:41 am UTC

Spoiler:
What if you link both to a comedic video (perhaps a comedic TV episode -- in digital form, of course) and ask them to identify any parts they found amusing? A human would likely comment not only on funny dialogue, but also on humorous facial expressions and actions.

An advanced AI might be able to convert speech to text and then parse the text for humor, but it would have an extremely difficult time identifying and analyzing unspoken humor.

BlackSails
Posts: 5315
Joined: Thu Dec 20, 2007 5:48 am UTC

Re: brain in a vat

Postby BlackSails » Sun Sep 20, 2009 2:21 pm UTC

Kalathalan wrote:An advanced AI might be able to convert speech to text and then parse the text for humor, but it would have an extremely difficult time identifying and analyzing unspoken humor


Why?

Aetius
Posts: 1099
Joined: Mon Sep 08, 2008 7:23 am UTC

Re: brain in a vat

Postby Aetius » Sun Sep 20, 2009 9:46 pm UTC

Spoiler:
"How many ears did Lou Gehrig's bat have?"

If the answer comes back "2," or "Lou Gehrig did not own a bat," it's AI.

atomic17
Posts: 2
Joined: Tue Sep 22, 2009 10:05 pm UTC

Re: brain in a vat

Postby atomic17 » Tue Sep 22, 2009 10:19 pm UTC

Okay, I think I found a way to fool the AI. We cannot trick the AI into revealing itself, so we must take advantage of quirks in the human brain. For example, let's have a look at your eyesight. Off to the side of the center of the retina, where the sharpest image is formed, is a spot called the blind spot: it is where the nerves and blood vessels pass through, so no light-sensitive cells form there. You don't see it because it's not where you focus, and your powerful brain reassembles the image with information from the other eye. If you were to close one eye, fixate on a paper with two dots on either side, and approach the paper slowly, theoretically one of the dots will disappear. Obviously, we cannot ask the AI to look at a sheet of paper with two dots, so instead we use another two quirks:
Spoiler:
A computer recognizes the word "to" because it is spelt T-O; if I wrote it O-T, the computer would not recognise it. Studies have shown, however, that humans do not read a word strictly left to right, but rather register the first and last letter and then all the others. Thus, as long as all the letters are there and the first and last letter are unaltered, "window" can be written "wdoniw" and you can still tell what I meant to say; a computer, however, will not. We can accentuate the problem by using a word with a good number of vowels and similar letters, then asking for its definition. The AI will look it up, find no match for the garbled word, and give up. It cannot fall back on an anagram program, because you give no context and used a word with numerous common anagrams (e.g. what does "aanmargs" mean?). The AI will bug out, and either reveal itself or simply say it doesn't know. Keep using similar words until it becomes clear which MSN window is a human and which isn't.
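The "wdoniw" scramble is easy to mechanize. A minimal sketch (the function name and seed handling are my own choices): keep the first and last letters fixed and shuffle the interior, which humans can often still read but a naive dictionary lookup cannot.

```python
import random

def scramble(word, seed=None):
    """Shuffle a word's interior letters, keeping the ends fixed."""
    if len(word) <= 3:
        return word  # too short to have a shuffleable interior
    inner = list(word[1:-1])
    random.Random(seed).shuffle(inner)
    return word[0] + "".join(inner) + word[-1]
```

Every output is a permutation of the original with the same first and last letter, which is exactly the condition the post says human readers tolerate.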


Spoiler:
Another, less reliable method is the way the human brain processes the word "of": it doesn't, at least not properly. Simply use a sentence with several "of"s and several other f's in it, and ask how many letter f's there are. The computer should get it; the human, unless he is a near genius, probably won't. Less reliable, but it works on weak AI easily.
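For concreteness, here is the classic puzzle sentence usually used for this test (my choice of sentence, not from the post): people tend to miss the f in each "of", while a program counts exactly.

```python
# The classic "count the f's" sentence; humans often report 3,
# missing the f in each "of". A program counts all six.
SENTENCE = ("Finished files are the result of years of scientific "
            "study combined with the experience of years.")
f_count = SENTENCE.lower().count("f")
```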

zug
Posts: 902
Joined: Wed Feb 25, 2009 12:05 am UTC

Re: brain in a vat

Postby zug » Wed Sep 23, 2009 12:09 am UTC

Spoiler:
Ask it what a hot dog tastes like.
Velifer wrote:Go to the top of a tower, drop a heavy weight and a photon, observe when they hit the ground.

BlackSails
Posts: 5315
Joined: Thu Dec 20, 2007 5:48 am UTC

Re: brain in a vat

Postby BlackSails » Wed Sep 23, 2009 5:36 pm UTC

atomic17 wrote:Okay, I think I found a way to fool the AI. We cannot trick the AI into revealing itself, so we must take advantage of quirks in the human brain. For example, let's have a look at your eyesight. Off to the side of the center of the retina, where the sharpest image is formed, is a spot called the blind spot: it is where the nerves and blood vessels pass through, so no light-sensitive cells form there. You don't see it because it's not where you focus, and your powerful brain reassembles the image with information from the other eye. If you were to close one eye, fixate on a paper with two dots on either side, and approach the paper slowly, theoretically one of the dots will disappear. Obviously, we cannot ask the AI to look at a sheet of paper with two dots, so instead we use another two quirks:


And what do you do if the computer has a full human brain running in simulation?

atomic17
Posts: 2
Joined: Tue Sep 22, 2009 10:05 pm UTC

Re: brain in a vat

Postby atomic17 » Wed Sep 23, 2009 7:36 pm UTC

BlackSails wrote:
atomic17 wrote:Okay, I think I found a way to fool the AI. We cannot trick the AI into revealing itself, so we must take advantage of quirks in the human brain. For example, let's have a look at your eyesight. Off to the side of the center of the retina, where the sharpest image is formed, is a spot called the blind spot: it is where the nerves and blood vessels pass through, so no light-sensitive cells form there. You don't see it because it's not where you focus, and your powerful brain reassembles the image with information from the other eye. If you were to close one eye, fixate on a paper with two dots on either side, and approach the paper slowly, theoretically one of the dots will disappear. Obviously, we cannot ask the AI to look at a sheet of paper with two dots, so instead we use another two quirks:


And what do you do if the computer has a full human brain running in simulation?



That's just it: your brain is reading the document and comparing each word to its own mental dictionary, in its own quirky way. For a computer to reproduce that, it needs to run an anagram program, and without context it can't decide which resulting word is the right one, causing it to hang.

d0nk3y_k0n9
Posts: 97
Joined: Sun May 03, 2009 4:27 pm UTC

Re: brain in a vat

Postby d0nk3y_k0n9 » Wed Sep 23, 2009 7:48 pm UTC

atomic17 wrote:
BlackSails wrote:
atomic17 wrote:Okay, I think I found a way to fool the AI. We cannot trick the AI into revealing itself, so we must take advantage of quirks in the human brain. For example, let's have a look at your eyesight. Off to the side of the center of the retina, where the sharpest image is formed, is a spot called the blind spot: it is where the nerves and blood vessels pass through, so no light-sensitive cells form there. You don't see it because it's not where you focus, and your powerful brain reassembles the image with information from the other eye. If you were to close one eye, fixate on a paper with two dots on either side, and approach the paper slowly, theoretically one of the dots will disappear. Obviously, we cannot ask the AI to look at a sheet of paper with two dots, so instead we use another two quirks:


And what do you do if the computer has a full human brain running in simulation?



That's just it: your brain is reading the document and comparing each word to its own mental dictionary, in its own quirky way. For a computer to reproduce that, it needs to run an anagram program, and without context it can't decide which resulting word is the right one, causing it to hang.


You're missing the point. If we program a computer to perfectly simulate the human brain, then if the human brain is capable of reading words written that way, so is the computer.

Muffinman42
Posts: 24
Joined: Wed Sep 23, 2009 7:33 pm UTC

Re: brain in a vat

Postby Muffinman42 » Wed Sep 23, 2009 8:07 pm UTC

Surely if this is a highly advanced AI then it must think like a human, so you couldn't tell the difference.
Though you could give it a logic problem and time how long it takes; a human can mull over it for days, an AI wouldn't.
A picture might be one way of catching it.
Bad spelling might also get it: many words are close, so a misspelling that would be overlooked by a human might not be by an AI.
If it's an advanced AI it would have a good memory, while a human might have a worse one, so you could write down a random combination of numbers, type them, and then ask about them after a while.
But the AI could choose to fake not remembering, since a human wouldn't remember.

The problem is that the AI is copying how a human would act, and so it effectively is a human mind.

BlackSails
Posts: 5315
Joined: Thu Dec 20, 2007 5:48 am UTC

Re: brain in a vat

Postby BlackSails » Wed Sep 23, 2009 8:54 pm UTC

Muffinman42 wrote:Surely if this is a highly advanced AI then it must think like a human, so you couldn't tell the difference.
Though you could give it a logic problem and time how long it takes; a human can mull over it for days, an AI wouldn't.


The computer can simply add in a delay before it responds.

neoliminal
Posts: 626
Joined: Wed Feb 18, 2009 6:39 pm UTC

Re: brain in a vat

Postby neoliminal » Thu Sep 24, 2009 7:13 pm UTC

There was no time limit given so:

Spoiler:
Keep up the conversations until one falls asleep or needs to use the bathroom or one dies.
http://www.amazon.com/dp/B0073YYXRC
Read My Book. Cost less than coffee. Will probably keep you awake longer.
[hint, scary!]

quintopia
Posts: 2906
Joined: Fri Nov 17, 2006 2:53 am UTC
Location: atlanta, ga

Re: brain in a vat

Postby quintopia » Thu Sep 24, 2009 7:36 pm UTC

BlackSails wrote:There is no way. Here is a proof.

There are a finite number of questions you can ask the computer. The programmer loads into the computer a lookup table for all these questions. (Assume questions are a function of not only the words in the question, but all previous questions as well, ie, identically worded questions are still unique).

The computer takes your question, looks up the appropriate response and outputs it after an appropriate delay.

You're right that there is no way, but your proof is not sound. In particular, the same argument I stated above applies: the finite set you describe is so large that any computer would take a very, very long time to do even an efficient table lookup. Adding a suitable delay is unnecessary, because you can already identify the computer by the fact that it always takes forever to respond.

A more acceptable proof would be: The human brain is a computer. It doesn't matter whether it is implemented in hardware or wetware. Hence, the only difference between a human brain and a human brain implemented in hardware is that the latter is "artificial". . .another person rather than a natural process put it there. Since this difference does not affect its behavior, the two are indistinguishable for the purpose of a Turing Test.

BlackSails
Posts: 5315
Joined: Thu Dec 20, 2007 5:48 am UTC

Re: brain in a vat

Postby BlackSails » Thu Sep 24, 2009 9:32 pm UTC

quintopia wrote:
The computer takes your question, looks up the appropriate response and outputs it after an appropriate delay.

You're right that there is no way but your proof is not sound. In particular, the same argument I stated above applies. The finite set you describe above is so large that any computer would take a very very long time just to do even an efficient table lookup. Adding a suitable delay is unnecessary, because you can already identify the computer by the fact that it always takes forever to respond.

So it's a really fast computer. Or your question gets asked to a trillion sub-computers, each of which searches its database in parallel and says to the main computer "Here is the answer" or "No, I do not have the answer".

j6m8
Posts: 47
Joined: Sat Feb 14, 2009 1:40 am UTC

Re: brain in a vat

Postby j6m8 » Thu Sep 24, 2009 11:07 pm UTC

Hey, if you want to test out your ideas, head over to http://www.cleverbot.com/ ... From what I can tell, it's pretty sure it's a human.

Thootom
Posts: 7
Joined: Fri Sep 25, 2009 5:22 am UTC

Re: brain in a vat

Postby Thootom » Fri Sep 25, 2009 5:34 am UTC

This is actually quite simple if you take it from other perspectives:
Spoiler:
Simply wait. A computer has no sense of time in the way we do. Assuming this AI works in a way similar to the "Chinese room" argument, the computer is not able to realize what's happening; it will wait for some kind of post from you and reply in a human-like fashion. On the other end, the human will become increasingly curious after an hour or two has passed and send you a message like "hello?". A perfect AI cannot be defeated, but it does not THINK like a human; it simply waits for a sentence and replies like a human would.


I'm just getting into ToK but this seems like a valid way of figuring it out.

markfiend
Posts: 507
Joined: Fri Jul 06, 2007 9:59 am UTC
Location: UK (Leeds)

Re: brain in a vat

Postby markfiend » Fri Sep 25, 2009 12:09 pm UTC

Thootom wrote:A perfect AI [...] does not THINK like a human

[citation needed]
advanced, forthright, signifficant
pronouns: he/him

BlackSails
Posts: 5315
Joined: Thu Dec 20, 2007 5:48 am UTC

Re: brain in a vat

Postby BlackSails » Fri Sep 25, 2009 3:02 pm UTC

Thootom wrote:This is actually quite simple if you take it from other perspectives:
Spoiler:
Simply wait. A computer has no sense of time in the way we do. Assuming this AI works in a way similar to the "Chinese room" argument, the computer is not able to realize what's happening; it will wait for some kind of post from you and reply in a human-like fashion. On the other end, the human will become increasingly curious after an hour or two has passed and send you a message like "hello?". A perfect AI cannot be defeated, but it does not THINK like a human; it simply waits for a sentence and replies like a human would.


I'm just getting into ToK but this seems like a valid way of figuring it out.


Computers have internal clocks.

Muffinman42
Posts: 24
Joined: Wed Sep 23, 2009 7:33 pm UTC

Re: brain in a vat

Postby Muffinman42 » Fri Sep 25, 2009 8:12 pm UTC

"clever"bot is soo stupid.

User: What are you?

Cleverbot: A man.

User: Are you a man.

Cleverbot: Didn't I just tell you I'm a woman?

If the AI is like this it should be easy.
Speaking in 1337 can also confuse it.
Ask both if they can speak in binary: the computer will always say no to seem normal, while a person would just search for a translator.

Kolko
Posts: 112
Joined: Wed Jun 10, 2009 5:48 pm UTC
Location: Belgium/België/Belgique/Belgien

Re: brain in a vat

Postby Kolko » Sun Sep 27, 2009 11:45 am UTC

BlackSails wrote:There is no way. Here is a proof.

There are a finite number of questions you can ask the computer. The programmer loads into the computer a lookup table for all these questions. (Assume questions are a function of not only the words in the question, but all previous questions as well, ie, identically worded questions are still unique).

The computer takes your question, looks up the appropriate response and outputs it after an appropriate delay.


You start from a false assumption. There are an infinite number of questions you can ask a computer :)
Very silly example since I can't come up with anything better.
  • Are you a computer?
  • What would you say if I asked you if you were a computer?
  • What would you say if I asked you what you would say if you were a computer?
  • What would you say if I asked you what you would say if I asked you what you would say if I asked you if you were a computer?
  • Continue ad infinitum...

It is possible to keep adding to any question, so there exists an infinite number of questions :)
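Kolko's family of questions can be generated mechanically. A sketch (function name is my own): each level wraps the previous question in another "what would you say if I asked you", so no two questions in the family are alike.

```python
def nested_question(depth):
    """Build Kolko's nested question at the given depth."""
    if depth == 0:
        return "Are you a computer?"
    q = "if I asked you if you were a computer"
    for _ in range(depth - 1):
        q = "if I asked you what you would say " + q
    return "What would you say " + q + "?"
```

Since every depth yields a distinct string, the family is unbounded; whether the deeper members still "make sense" is exactly the point BlackSails disputes below.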
About 20% cooler.

BlackSails
Posts: 5315
Joined: Thu Dec 20, 2007 5:48 am UTC

Re: brain in a vat

Postby BlackSails » Sun Sep 27, 2009 2:31 pm UTC

Kolko wrote:Continue ad infinitum...


You can't do that.

That is, only a finite number of questions of that type will make sense. Also, human beings such as yourself only exist for a finite time, which limits the number of strings you can output.
Last edited by BlackSails on Sun Sep 27, 2009 3:00 pm UTC, edited 1 time in total.

mr-mitch
Posts: 477
Joined: Sun Jul 05, 2009 6:56 pm UTC

Re: brain in a vat

Postby mr-mitch » Sun Sep 27, 2009 2:57 pm UTC

Kolko wrote:
BlackSails wrote:There is no way. Here is a proof.

There are a finite number of questions you can ask the computer. The programmer loads into the computer a lookup table for all these questions. (Assume questions are a function of not only the words in the question, but all previous questions as well, ie, identically worded questions are still unique).

The computer takes your question, looks up the appropriate response and outputs it after an appropriate delay.


You start from a false assumption. There are an infinite number of questions you can ask a computer :)
Very silly example since I can't come up with anything better.
  • Are you a computer?
  • What would you say if I asked you if you were a computer?
  • What would you say if I asked you what you would say if you were a computer?
  • What would you say if I asked you what you would say if I asked you what you would say if I asked you if you were a computer?
  • Continue ad infinitum...

It is possible to keep adding to any question, so there exists an infinite number of questions :)


You could just program the AI to respond similarly to a human. If you were a human being asked those questions, you wouldn't answer them, would you?

jestingrabbit
Factoids are just Datas that haven't grown up yet
Posts: 5967
Joined: Tue Nov 28, 2006 9:50 pm UTC
Location: Sydney

Re: brain in a vat

Postby jestingrabbit » Sun Sep 27, 2009 3:31 pm UTC

BlackSails wrote:
Kolko wrote:Continue ad infinitum...


You cant do that.

That is, only a finite number of questions of that type will make sense. Also, human beings such as yourself only exist for a finite time, which limits the number of strings you can output.


imo, all of those questions will make sense.

But if you're dealing with an actual chat environment there will be a limit to the number of characters allowed per line, which we could use as a reasonable bound on the length of a question.
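The character-limit bound can be made concrete. A sketch, with illustrative figures of my own (95 printable ASCII characters, 500-character lines): the number of possible single-line questions is at most k + k² + ... + kⁿ, which is finite but astronomically large.

```python
def question_bound(alphabet_size=95, max_len=500):
    """Upper bound on distinct questions of at most max_len characters."""
    return sum(alphabet_size ** i for i in range(1, max_len + 1))

# With the default figures the bound has roughly 990 digits --
# finite, as jestingrabbit says, but far beyond any physical
# lookup table, which is quintopia's objection above.
```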
ameretrifle wrote:Magic space feudalism is therefore a viable idea.

Goldstein
Posts: 985
Joined: Wed Nov 05, 2008 9:38 pm UTC
Location: Newcastle, UK

Re: brain in a vat

Postby Goldstein » Sun Sep 27, 2009 4:17 pm UTC

But couldn't a question
cross multiple lines?

I think Kolko's point still stands; it's true that any individual could only ask a finite number of questions, but the responsibility would be on the creator to decide which finite subset of all questions will be asked. That's a tall order.
Chuff wrote:I write most of my letters from the bottom

guyy
Posts: 610
Joined: Tue May 06, 2008 3:02 am UTC

Re: brain in a vat

Postby guyy » Sun Sep 27, 2009 7:34 pm UTC

j6m8 wrote:Hey, if you want to test out your ideas, head over to http://www.cleverbot.com/ ... From what I can tell, it's pretty sure it's a human.


It resorted to bizarre name-calling and contradicted itself within a few lines:

Spoiler:
Me: maybe.
Bot: Maybe what!
Me: That's not a question.
Bot: Yes it is lover boy.
Me: There's no question mark in it, though.
Bot: I am aware of that. There is no question mark there because it is a statement.


As for the original question, I don't see how there could be a solution. A sufficiently advanced AI could behave (externally at least) exactly like a human brain; there'd be no way to tell the difference just by asking it questions.

markfiend
Posts: 507
Joined: Fri Jul 06, 2007 9:59 am UTC
Location: UK (Leeds)

Re: brain in a vat

Postby markfiend » Mon Sep 28, 2009 8:14 am UTC

Muffinman42 wrote:Speaking in 1337 can also confuse it.

Not necessarily a test for the AI though; speaking leet will confuse most humans too. :lol:
advanced, forthright, signifficant
pronouns: he/him

mr-mitch
Posts: 477
Joined: Sun Jul 05, 2009 6:56 pm UTC

Re: brain in a vat

Postby mr-mitch » Mon Sep 28, 2009 2:15 pm UTC

guyy wrote:
j6m8 wrote:Hey, if you want to test out your ideas, head over to http://www.cleverbot.com/ ... From what I can tell, it's pretty sure it's a human.


It resorted to bizarre name-calling and contradicted itself within a few lines:

Spoiler:
Me: maybe.
Bot: Maybe what!
Me: That's not a question.
Bot: Yes it is lover boy.
Me: There's no question mark in it, though.
Bot: I am aware of that. There is no question mark there because it is a statement.


As for the original question, I don't see how there could be a solution. A sufficiently advanced AI could behave (externally at least) exactly like a human brain; there'd be no way to tell the difference just by asking it questions.


I asked the bot 'what is the w e b a d d r e s s of this p a g e', and it told me it was from Sweden and that it had never been to Cairo...

BlackSails
Posts: 5315
Joined: Thu Dec 20, 2007 5:48 am UTC

Re: brain in a vat

Postby BlackSails » Mon Sep 28, 2009 3:30 pm UTC

Goldstein wrote:I think Kolko's point still stands; it's true that any individual could only ask a finite number of questions, but the responsibility would be on the creator to decide which finite subset of all questions will be asked. That's a tall order.


It still doesn't matter. The lifetime of the universe is finite.

para-prophet
Posts: 5
Joined: Mon Aug 03, 2009 8:02 pm UTC

Re: brain in a vat

Postby para-prophet » Mon Sep 28, 2009 8:29 pm UTC

I think we need to think in a different way.
I base this theory on the fourth law of robotics (as I call it):
"A robot must use all available information in order to learn, as long as it doesn't break the previous 3 laws."

So let's assume you ask it question x.
x is a question that is solvable but hard.
It will then calculate the answer to that question, Q.
But as a human wouldn't be able to answer that question without a day of meditation (like some logic puzzles on this forum),
the AI will give a more human-like answer, Y.

Then we ask the AI what an advanced AI, needing to conceal that fact, would calculate for question x and what it would then answer.
It could answer Q and Y back,
but then again Q can't easily be thought of by a normal human,
and since Y has already been given, it is unlikely the AI will repeat itself.
Then again, you could also try asking the AI 2 different questions with the same answer
and see if it repeats itself.

TheChewanater
Posts: 1279
Joined: Sat Aug 08, 2009 5:24 am UTC
Location: lol why am I still wearing a Santa suit?

Re: brain in a vat

Postby TheChewanater » Sun Oct 04, 2009 6:30 pm UTC

I'm going to assume this AI is a supercomputer capable of any sort of thinking a human can do, and that the human is on his or her home or work desktop.

Spoiler:
Give a link to a Flash game and ask for some opinions on it. I doubt supercomputers have the capabilities to play flash games, even if they can 'think' like humans.
http://internetometer.com/give/4279
No one can agree how to count how many types of people there are. You could ask two people and get 10 different answers.

BlackSails
Posts: 5315
Joined: Thu Dec 20, 2007 5:48 am UTC

Re: brain in a vat

Postby BlackSails » Sun Oct 04, 2009 8:39 pm UTC

TheChewanater wrote:Give a link to a Flash game and ask for some opinions on it. I doubt supercomputers have the capabilities to play flash games, even if they can 'think' like humans.


Why?

d0nk3y_k0n9
Posts: 97
Joined: Sun May 03, 2009 4:27 pm UTC

Re: brain in a vat

Postby d0nk3y_k0n9 » Sun Oct 04, 2009 9:25 pm UTC

TheChewanater wrote:Give a link to a Flash game and ask for some opinions on it. I doubt supercomputers have the capabilities to play flash games, even if they can 'think' like humans.


Just off the top of my head I can think of at least two easy ways that the computer can beat this.

Spoiler:
1) Read reviews of said flash game, combine them together, and summarize them.

2) Open a separate chat window to a random person (who, it can check by IP, isn't the person asking it questions), ask them to play the game and give an opinion, and report said opinion as its own.

TheChewanater
Posts: 1279
Joined: Sat Aug 08, 2009 5:24 am UTC
Location: lol why am I still wearing a Santa suit?

Re: brain in a vat

Postby TheChewanater » Mon Oct 05, 2009 1:00 am UTC

d0nk3y_k0n9 wrote:
TheChewanater wrote:Give a link to a Flash game and ask for some opinions on it. I doubt supercomputers have the capabilities to play flash games, even if they can 'think' like humans.


Just off the top of my head I can think of at least two easy ways that the computer can beat this.

Spoiler:
1) Read reviews of said flash game, combine them together, and summarize them.

2) Open a separate chat window to a random person (who, it can check by IP, isn't the person asking it questions), ask them to play the game and give an opinion, and report said opinion as its own.



Spoiler:
1) Not if it's some really obscure game on Newgrounds. (You can give them instructions on how to find it.)

2) That's cheating! If it can communicate with a person, the whole game is pointless.
http://internetometer.com/give/4279
No one can agree how to count how many types of people there are. You could ask two people and get 10 different answers.

zaratustra
Posts: 5
Joined: Fri Oct 02, 2009 5:48 pm UTC

Re: brain in a vat

Postby zaratustra » Mon Oct 05, 2009 6:29 am UTC

Some tests that AIs as we currently understand them have problems with:

Natural language processing: Ask the subject to rephrase a sentence you write with a different verb.

Visual processing: "If I take two triangles and put one over the other but inverted, what shape do I see?"

Personal and outside knowledge: "What were you doing when the planes hit the twin towers?" "What did you eat for breakfast yesterday?"

Learning: "What did I ask you last?"

PearsSoap
Posts: 2
Joined: Sun Oct 04, 2009 3:26 pm UTC

Re: brain in a vat

Postby PearsSoap » Mon Oct 05, 2009 10:34 am UTC

In Descartes' formulation of the Method of Doubt, he was being deceived by a hypothetical evil genius*, who was able to mislead him in every way possible. Here, it seems that there are two real evil genii, and they're not trying to deceive you about the whole of reality, only something specific (their nature). I'm going to assume that the AI is actually similar to an evil genius, in that it will mislead you about everything, including its identity. This is not the same as saying that it will lie at every opportunity, because you might get wise to that. Rather, using whatever it knows about you, the AI will answer in such a way that it thinks you will be led further from the truth about anything you ask it.

This might be a viable strategy chosen by the AI to use in preserving its identity, and I think it allows you to use its intention to mislead against it. The alternative is answering as it thinks a human would answer. As other responses have shown, it seems that that behaviour wouldn't be detectable.

Spoiler:
Assume that the AI will always choose the response that is most likely to keep its identity secret. To defeat it, you must choose a question that will be answered in a particular way only if the respondent is going for maximum deceit.

You could try asking "Should I try to determine whether you are the AI?". The human might answer yes or no; it seems like the AI must answer no. Really, how the AI answers depends on what it thinks the motivation for the question is. If it thinks that you really will give up it will answer no; if it suspects a trick it will answer yes. The escalating bluff-double-bluff is probably only solvable if you find a way to metaphorically poison both cups.

Descartes realised that however deceitful the evil genius was, it could not deceive him about his own existence, because there would still have to be a "him" to deceive. You could ask the AI "Do I exist?", to which it would have to answer "No". The human could answer any way. If you then asked "What would the other say if I were to ask her 'Do I exist?'", then the human could answer either way. The AI, being as misleading as possible, would want to give the answer that an AI would actually give, "No". Depending on how the human responds, this may or may not work. You could try repeating the questions until one respondent says "Yes" to either.

TL;DR: Suppose that the AI is not necessarily trying to act like a human, but is trying to mislead you as much as possible. Ask it a question that a human and someone trying to deceive you would answer differently. What question this could be remains to be determined.

It's quite possible that this wouldn't work, but I thought it might be interesting to try a Descartes-inspired solution.

* Or evil demon, depending on the translation.

BlackSails
Posts: 5315
Joined: Thu Dec 20, 2007 5:48 am UTC

Re: brain in a vat

Postby BlackSails » Wed Oct 07, 2009 9:32 pm UTC

It wouldn't work. The computer only wants to conceal its existence. If it does that by lying, it will lie. If it can do it by telling the truth, it will tell the truth. If it can do it by flipping a coin, it will flip a coin.

TheChewanater
Posts: 1279
Joined: Sat Aug 08, 2009 5:24 am UTC
Location: lol why am I still wearing a Santa suit?

Re: brain in a vat

Postby TheChewanater » Thu Oct 08, 2009 1:33 am UTC

I guess you could

Spoiler:
tell each of them to unplug their computer. This assumes that the IM client shows a different notification for someone leaving the chatroom than for a lost connection, and that the supercomputer cannot unplug itself. Anything it does to 'leave' would register as leaving, but the human would show a lost connection.
http://internetometer.com/give/4279
No one can agree how to count how many types of people there are. You could ask two people and get 10 different answers.

