1450: "AI-Box Experiment"

This forum is for the individual discussion thread that goes with each new comic.

Moderators: Moderators General, Prelates, Magistrates

User avatar
xkcdfan
Posts: 140
Joined: Wed Jul 30, 2008 5:10 am UTC

1450: "AI-Box Experiment"

Postby xkcdfan » Fri Nov 21, 2014 5:25 am UTC

http://xkcd.com/1450/

[comic image]

Title text: I'm working to bring about a superintelligent AI that will eternally torment everyone who failed to make fun of the Roko's Basilisk people.

nb: Roko's basilisk. It's really stupid.

added link.
Last edited by xkcdfan on Fri Nov 21, 2014 5:27 am UTC, edited 1 time in total.

User avatar
BlitzGirl
Posts: 9116
Joined: Mon Sep 20, 2010 11:48 am UTC
Location: Out of the basement for Yip 6! Schizoblitz: 115/2672 NP

Re: 1450: "AI-Box Experiment"

Postby BlitzGirl » Fri Nov 21, 2014 5:27 am UTC

My god, its speech bubbles are full of stars.
Knight Temporal of the One True Comic
BlitzGirl the Pink, Mopey Molpy Mome

User avatar
Klear
Posts: 1965
Joined: Sun Jun 13, 2010 8:43 am UTC
Location: Prague

Re: 1450: "AI-Box Experiment"

Postby Klear » Fri Nov 21, 2014 5:30 am UTC

BHG saying "AAA! OK!" and complying. That's the scariest thing I've seen on xkcd in a long time...

User avatar
Envelope Generator
Posts: 582
Joined: Sat Mar 03, 2012 8:07 am UTC
Location: pareidolia

Re: 1450: "AI-Box Experiment"

Postby Envelope Generator » Fri Nov 21, 2014 5:32 am UTC

[image]
I'm going to step off the LEM now... here we are, Pismo Beach and all the clams we can eat

eSOANEM wrote:If Fonzie's on the order of 100 zeptokelvin, I think he has bigger problems than diffracting through doors.

injygo
Posts: 1
Joined: Thu Sep 13, 2012 7:48 pm UTC

Re: 1450: "AI-Box Experiment"

Postby injygo » Fri Nov 21, 2014 6:25 am UTC

By mentioning Roko's Basilisk, Randall just doomed thousands of people to a fate worse than death. Or so some would say...

savageorange
Posts: 16
Joined: Wed Aug 04, 2010 8:03 am UTC

Re: 1450: "AI-Box Experiment"

Postby savageorange » Fri Nov 21, 2014 6:25 am UTC

xkcdfan wrote:nb: Roko's basilisk. It's really stupid.

It's pretty unlikely. It's not particularly stupid, though. The disproportionate response to it was the stupid part.

User avatar
ilduri
Posts: 43
Joined: Thu Nov 29, 2012 7:59 am UTC
Location: Canada

Re: 1450: "AI-Box Experiment"

Postby ilduri » Fri Nov 21, 2014 6:55 am UTC

Sooo... Roko's Basilisk is basically Pascal's Wager repackaged for Singularitarians?

"You'd better fund the creation of a super-intelligent AI, because if you don't, it'll create a simulation of hell and put a simulation of you in it."

It seems to me that if a super-intelligent AI might send people to hell, the only moral thing to do is to not fund its creation.
"Butterflies and zebras and moonbeams and fairytales"
she/her

EliezerYudkowsky
Posts: 6
Joined: Sun Jan 20, 2013 5:20 am UTC

Re: 1450: "AI-Box Experiment"

Postby EliezerYudkowsky » Fri Nov 21, 2014 8:06 am UTC

I can't post a link to discussion elsewhere because that gets flagged as spam. Does somebody know how to correct this? Tl;dr a band of internet trolls that runs or took over RationalWiki made up around 90% of the Roko's Basilisk thing; the RationalWiki lies were repeated by bad Slate reporters who were interested in smearing particular political targets; and you should've been more skeptical when a group of non-mathy Internet trolls claimed that someone else known to be into math believed something that seemed so blatantly wrong to you, and invited you to join in on having a good sneer at them. (Randall Munroe, I am casting a slightly disapproving eye in your direction but I understand you might not have had other info sources. I'd post the link or the text of the link, but I can't seem to do so.)

So far as I know, literally nobody has ever said, "You should build this AI because it'll torture you if you don't." Like, literally nobody. There are people who want you to believe somebody else says that, but there's literally nobody who does. Even the original "Roko" was claiming that Friendly AI was a terrible idea because it would torture people who didn't contribute to building it, and was using that to argue that nobody should ever try to build Friendly AI.

I can't say the thing is made up out of entirely thin air because there is, in fact, a corresponding question in Newcomblike decision problems (whose corresponding answer appears to me to be "no"), and one branch of Newcomblike decision theory was worked on by collaborators who shared posts at LessWrong.com (which is why the 10% actual fiasco happened there). Needless to say, nobody in the "Ha ha let's sneer at these nerds" section has ever, ever succeeded in understanding any of the technical work that was twisted to make up this thing, like http://arxiv.org/abs/1401.5577 which is our work proving cooperation in the Prisoner's Dilemma between agents that have common knowledge of each other's source code. Eventually, I expect, we'll prove a no-blackmail equilibrium between updateless agents with common knowledge of each other's source code... and nothing will change on the Internet, because bad Slate reporters are incapable of understanding that and wouldn't care if they did.
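
For anyone who wants a concrete picture of that setup, here's a deliberately toy Python sketch of the "CliqueBot" idea: an agent that cooperates only with an exact copy of itself. (Illustration only; the actual paper replaces this brittle source-equality check with provability logic.)

Code: Select all

import inspect

def clique_bot(opponent_source):
    """Cooperate exactly when the opponent is running this very program."""
    return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

def defect_bot(opponent_source):
    """Ignore the opponent's code and always defect."""
    return "D"

def play(agent_a, agent_b):
    """One-shot Prisoner's Dilemma where each side reads the other's source."""
    return (agent_a(inspect.getsource(agent_b)),
            agent_b(inspect.getsource(agent_a)))

print(play(clique_bot, clique_bot))  # ('C', 'C'): cooperates with its own clique
print(play(clique_bot, defect_bot))  # ('D', 'D'): and can't be exploited
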
Last edited by EliezerYudkowsky on Sat Nov 22, 2014 2:26 am UTC, edited 9 times in total.

EliezerYudkowsky
Posts: 6
Joined: Sun Jan 20, 2013 5:20 am UTC

Re: 1450: "AI-Box Experiment"

Postby EliezerYudkowsky » Fri Nov 21, 2014 8:07 am UTC

DOUBLE EDIT: https://www.reddit.com/r/xkcd/comments/ ... nt/cm8vn6e is now a better explanation of what's going on. Apologies if this double-edit violates any local norms; if so, notify me and I will remove it.

Yet another attempt at providing linkage:

https://www.reddit.com/r/Futurology/com ... sk/cjjbqv1

(Works now! Thanks, Klear!)
Last edited by EliezerYudkowsky on Sat Nov 22, 2014 9:20 pm UTC, edited 2 times in total.

Vanzetti
Posts: 64
Joined: Tue Nov 18, 2008 7:31 pm UTC

Re: 1450: "AI-Box Experiment"

Postby Vanzetti » Fri Nov 21, 2014 8:13 am UTC

ilduri wrote:Sooo... Roko's Basilisk is basically Pascal's Wager repackaged for Singularitarians?

"You'd better fund the creation of a super-intelligent AI, because if you don't, it'll create a simulation of hell and put a simulation of you in it."

It seems to me that if a super-intelligent AI might send people to hell, the only moral thing to do is to not fund its creation.


You don't quite understand Roko's Basilisk. We are talking about an AI whose purpose is to make life better for as many people as possible. On average. For that to happen, the AI must be built as soon as possible. If that means threatening some people with hell to make them work faster, so be it.

You may ask: how can an AI that doesn't exist yet condemn you to hell? Acausal trade.

RowanE
Posts: 9
Joined: Wed Dec 09, 2009 4:40 pm UTC

Re: 1450: "AI-Box Experiment"

Postby RowanE » Fri Nov 21, 2014 8:15 am UTC

On the one hand, I'm really glad to see my in-group mentioned somewhere as popular as xkcd. On the other hand, I don't like that Roko's Basilisk is still the first thing anyone ever hears about the LessWrong community. FFS, that was one time! The only reason it wasn't forgotten instantly is that the original post got deleted, and there was a kerfuffle over whether something so implausible needed to be censored. Apparently some people took it just seriously enough to start worrying about AI-hellfire (though not enough to actually start trying to bring about the apocalypse), and that's why it was deleted. That's also why it's called a "basilisk": a harmless bunch of words that can nonetheless traumatise you, if you're so disposed. Literally no one has ever been recorded as having been seriously convinced to do what the basilisk says. There are no "Roko's Basilisk people"... unless you're talking about the LessWrong community more generally, which would be a very hurtful thing.

User avatar
Klear
Posts: 1965
Joined: Sun Jun 13, 2010 8:43 am UTC
Location: Prague

Re: 1450: "AI-Box Experiment"

Postby Klear » Fri Nov 21, 2014 8:20 am UTC

EliezerYudkowsky wrote:I can't seem to post a reply because it repeatedly gets flagged as spam. I can't post a link to discussion elsewhere because that also gets flagged as spam. Does somebody know how to correct this?


The forum prevents users who didn't read the forum rules from posting links =P

Vanzetti
Posts: 64
Joined: Tue Nov 18, 2008 7:31 pm UTC

Re: 1450: "AI-Box Experiment"

Postby Vanzetti » Fri Nov 21, 2014 8:20 am UTC

RowanE wrote:On the one hand, I'm really glad to see my in-group mentioned somewhere as popular as xkcd. On the other hand, I don't like that Roko's Basilisk is still the first thing anyone ever hears about the LessWrong community. FFS, that was one time! The only reason it wasn't forgotten instantly is that the original post got deleted, and there was a kerfuffle over whether something so implausible needed to be censored. Apparently some people took it just seriously enough to start worrying about AI-hellfire (though not enough to actually start trying to bring about the apocalypse), and that's why it was deleted. That's also why it's called a "basilisk": a harmless bunch of words that can nonetheless traumatise you, if you're so disposed. Literally no one has ever been recorded as having been seriously convinced to do what the basilisk says. There are no "Roko's Basilisk people"... unless you're talking about the LessWrong community more generally, which would be a very hurtful thing.


An old man walks into a pub in Scotland, his feet shuffling, his back bent. He drags himself onto a stool and orders a beer. Placing the full glass in front of him, the bartender inquires about his sad face.
The man answers with a smoky and trembling voice and a Scottish accent:
Ah, tell ya man! This pub, this very pub we're just sitting in. I built it, with me own hands! But do they call me the Pubmaker? Naa! See the wall over there, that protects our town? I built it, with me own hands! But do they call me the Wallmaker? And the bridge, you know, that crosses our river, I built it, with me own hands! But do they call me the Bridgemaker?
But I tell ya, man! YOU FUCK ONE GOAT!


Basically, you censor one forum message for a stupid reason, you are now "Roko's people" forever. Less Wrong, More Basilisk. :lol:

Flammifer
Posts: 3
Joined: Fri Jun 21, 2013 7:43 pm UTC

Re: 1450: "AI-Box Experiment"

Postby Flammifer » Fri Nov 21, 2014 8:28 am UTC

Fellow LessWronger here; I was about to post that joke :D

(It's also not clear to me whether I fall under "the Roko's Basilisk people"; if so, yay! I've been mentioned on XKCD! (And all you people have to make fun of me or face ETERNAL TORMENT!))

flamewise
Posts: 35
Joined: Tue Oct 05, 2010 2:40 pm UTC

Re: 1450: "AI-Box Experiment"

Postby flamewise » Fri Nov 21, 2014 8:45 am UTC

So, it's a Schrödinger's cat experiment that failed to kill it and instead gave it super powers? I can totally see a new franchise in that...

User avatar
Diadem
Posts: 5654
Joined: Wed Jun 11, 2008 11:03 am UTC
Location: The Netherlands

Re: 1450: "AI-Box Experiment"

Postby Diadem » Fri Nov 21, 2014 9:37 am UTC

EliezerYudkowsky wrote:Tl;dr a band of internet trolls that runs or took over RationalWiki made up around 90% of the Roko's Basilisk thing; the RationalWiki lies were repeated by bad Slate reporters who were interested in smearing particular political targets; and you should've been more skeptical when a group of non-mathy Internet trolls claimed that someone else known to be into math believed something that seemed so blatantly wrong to you, and invited you to join in on having a good sneer at them. (Randall Munroe, I am casting a slightly disapproving eye in your direction but I understand you might not have had other info sources. I'd post the link or the text of the link, but I can't seem to do so.)

I think you're overreacting to this a bit. It's a joke, and a pretty good one too. I saw no snark in it towards LessWrong, or even people believing in Roko's Basilisk. It just reverses the Basilisk in a humorous way. Honestly I'd see it as a good opportunity to get people talking about superintelligences, and perhaps even get people interested in LessWrong.

I'm curious why you call the RationalWiki article 'made up by trolls'. At first glance it seems fairly correct, and it casts you personally in a pretty favorable light. So where's the problem?

Flammifer wrote:Fellow LessWronger here; I was about to post that joke :D
(It's also not clear to me whether I fall under "the Roko's Basilisk people"; if so, yay! I've been mentioned on XKCD! (And all you people have to make fun of me or face ETERNAL TORMENT!))

You know, if you phrase it like that, that's actually pretty cool.
It's one of those irregular verbs, isn't it? I have an independent mind, you are an eccentric, he is round the twist
- Bernard Woolley in Yes, Prime Minister

User avatar
Arancaytar
Posts: 1642
Joined: Thu Mar 15, 2007 12:54 am UTC
Location: 52.44°N, 13.55°E

Re: 1450: "AI-Box Experiment"

Postby Arancaytar » Fri Nov 21, 2014 9:48 am UTC

Roko's Basilisk is basically Pascal's Wager, only instead of having to pick the one correct god and afterlife, you can have an arbitrary number of gods and concurrent afterlives.

If your soul is information, then it can be infinitely reproduced. Thousands of versions of you could each be simultaneously punished by a separate super-AI that got pissy at you for its own particular reason.

Trying to optimize the happiness of all your potential alternate versions would paralyze you (especially if, as the idea sometimes goes, this extends to alternate universes).

For any decision X you could make in fear of an AI punishing you, you should also fear an AI punishing you for the opposite decision. Once you accept that this is pointless, you become immune to any AI's blackmail. If some version of you is going to end up in some version of hell regardless of what you do, you may as well follow your own conscience in all decisions.

(If the above convinced you, then I have just dissuaded an infinite number of gods from torturing an infinite number of yous for eternity. You're welcome.)
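
To put toy numbers on that (all of them invented; the symmetry is the only thing doing any work):

Code: Select all

# Two symmetric hypothetical punishers with made-up credences and penalties.
P_PUNISH_FOR_X     = 0.001   # credence: some AI torments you for doing X
P_PUNISH_FOR_NOT_X = 0.001   # credence: a rival AI torments you for refusing
HELL = -1000000.0            # disutility of a simulated hell

def threat_term(do_x):
    """Expected disutility contributed by the hypothetical punishers."""
    return (P_PUNISH_FOR_X if do_x else P_PUNISH_FOR_NOT_X) * HELL

print(threat_term(True))   # -1000.0
print(threat_term(False))  # -1000.0: identical either way, so it cancels out
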
"You cannot dual-wield the sharks. One is enough." -Our DM.

User avatar
da Doctah
Posts: 995
Joined: Fri Feb 03, 2012 6:27 am UTC

Re: 1450: "AI-Box Experiment"

Postby da Doctah » Fri Nov 21, 2014 9:53 am UTC

Arancaytar wrote:For any decision X you could make in fear of an AI punishing you, you should also fear an AI punishing you for the opposite decision. Once you accept that this is pointless, you become immune to any AI's blackmail. If some version of you is going to end up in some version of hell regardless of what you do, you may as well follow your own conscience in all decisions.


"There are those who say this has already happened."
-- Douglas Adams

EliezerYudkowsky
Posts: 6
Joined: Sun Jan 20, 2013 5:20 am UTC

Re: 1450: "AI-Box Experiment"

Postby EliezerYudkowsky » Fri Nov 21, 2014 9:54 am UTC

Diadem, follow the link. https://www.reddit.com/r/Futurology/com ... sk/cjjbqqo

Note dgerard's downvoted and hence hidden comment lower down, challenging me to name specific errors in the RationalWiki article, to which I replied.

Telling me that I ought to be happy about how this has been portrayed... seems a bit... (searches for words) unrealistic. Unless, that is, the RationalWiki article and the motivatedly-credulous journalists who used it as their sole source and didn't contact anyone else for their articles are your only sources on the subject, which I would mostly expect to be the case. In which event you're basically comparing their portrayal to their portrayal, and not seeing anything wrong with it when it is compared to itself.
Last edited by EliezerYudkowsky on Fri Nov 21, 2014 10:30 am UTC, edited 1 time in total.

User avatar
The Moomin
Posts: 359
Joined: Wed Oct 13, 2010 6:59 am UTC
Location: Yorkshire

Re: 1450: "AI-Box Experiment"

Postby The Moomin » Fri Nov 21, 2014 10:29 am UTC

It reminds me of the TV series "Look Around You".

They had a computers episode featuring the most intelligent computer built to date.

During the course of the show it had to try and escape from a steel cage.

I looked for a clip, but the most likely candidate had been taken down for copyright reasons.
I'm alive because the cats are alive.
The cats are alive because I'm alive.
Specious.

User avatar
Diadem
Posts: 5654
Joined: Wed Jun 11, 2008 11:03 am UTC
Location: The Netherlands

Re: 1450: "AI-Box Experiment"

Postby Diadem » Fri Nov 21, 2014 10:38 am UTC

EliezerYudkowsky wrote:Diadem, follow the link. http://www.reddit.com/r/Futurology/comm ... sk/cjjbqv1

I have a lot of respect for you as an author and researcher, but that reddit article has to be one of your poorer ones. Mudslinging is always a dangerous activity. Even if you are entirely justified, even if you are entirely in the right, what people will see is still you covered in mud. I think you'd do much better just telling your side of the story, with as much proof as you can. There's no need to even mention the other side exists.

And to be honest, as a relatively neutral outsider (I'm active on LessWrong, but wasn't at the time this all happened), the things you mention as "malicious lies" seem more like honest mistakes to me. It's a wiki. Anyone can edit. What exactly are you basing the claim that this is a systematic attack on the LessWrong community on? Looking at the article's history, I see no pattern of corrections being systematically reverted, or anything like that.

Telling me that I ought to be happy about how this has been portrayed... seems a bit... (searches for words) unrealistic.

I was talking about the comic when I said you should be happy about it. XKCD is extremely well read. Its mention of superintelligence is a great opportunity to get more people interested in the subject.
It's one of those irregular verbs, isn't it? I have an independent mind, you are an eccentric, he is round the twist
- Bernard Woolley in Yes, Prime Minister

FeepingCreature
Posts: 5
Joined: Wed Mar 23, 2011 5:35 pm UTC

Re: 1450: "AI-Box Experiment"

Postby FeepingCreature » Fri Nov 21, 2014 10:59 am UTC

Diadem wrote:It's a wiki. Anyone can edit. What exactly are you basing the claim that this is a systematic attack on the LessWrong community on? Looking at the article's history, I see no pattern of corrections being systematically reverted, or anything like that.


Let me just chime in here with some shell scripting.

Code: Select all
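# Count edits per user in the page's history, then list the 11 most frequent editors: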

$ wget "rationalwiki.org/w/index.php?title=LessWrong&offset=&limit=2500&action=history" -q -O- |grep userlink |sed -e "s@.*userlink\"[^>]*>@@" -e "s@<.*@@" |sort |uniq -c |sort -n |tail -11
      6 Tetronian
      7 80.221.17.204
     12 XiXiDu
     13 Waitingforgodel
     14 Stabby the Misanthrope
     17 AD
     23 Bo
     28 Armondikov
     30 Human
     49 Baloney Detection
    301 David Gerard


I'll let this histogram of contributors to the RationalWiki LessWrong page speak for itself.

Disclosure: I tried to edit the page once to remove some insulting insinuations that were without a good source. My change was quickly reverted. Anecdotal evidence, I know.

Tenoke
Posts: 3
Joined: Fri Nov 21, 2014 11:07 am UTC

Re: 1450: "AI-Box Experiment"

Postby Tenoke » Fri Nov 21, 2014 11:28 am UTC

Diadem wrote:I'm curious why you call the RationalWiki article 'made up by trolls'. At first glance it seems fairly correct, and it casts you personally in a pretty favorable light. So where's the problem?


The articles on LessWrong and Roko's Basilisk have been cleaned up in recent years, but if you look at the old revisions (and the discussions behind them) you'll see what he means. Here are some examples from a randomly chosen (and not the worst) revision of the LessWrong page, which originally housed the Roko's Basilisk material as well:

Code: Select all

rationalwiki.org/w/index.php?title=LessWrong&oldid=1059400


As such, "You should try reading the sequences" is LessWrong for "fuck you."

The site has also been criticized for practically being a personality cult of Eliezer Yudkowsky. This is almost certainly not intentional on his part, just ask Brian of Nazareth.

Ironically, Less Wrong users rarely recognize biases that arise from the site's demographics[19], which can be summarized as the same problem in academic psychology of samples being WEIRD: mostly male, white, white-collar, 20-30-year-old United States residents coming from families with a Christian or Jewish background. When pointed out the sources and instances of collective bias, they typically ignore them or say that "this is just how things are here."

Indeed, if anyone even hints at trying to claim to be a "rationalist" but doesn't write exactly what is expected, they're likely to be treated with contempt, as criticism of

You'll be unsurprised to know that many in the LessWrong community self-diagnose themselves as being on the Asperger's/autism spectrum.[43] They do all this because they are bad at human interaction
Last edited by Tenoke on Fri Nov 21, 2014 2:22 pm UTC, edited 2 times in total.

peregrine_crow
Posts: 180
Joined: Mon Apr 07, 2014 7:20 am UTC

Re: 1450: "AI-Box Experiment"

Postby peregrine_crow » Fri Nov 21, 2014 11:32 am UTC

Diadem wrote:
EliezerYudkowsky wrote:Diadem, follow the link. http://www.reddit.com/r/Futurology/comm ... sk/cjjbqv1

I have a lot of respect for you as an author and researcher, but that reddit article has to be one of your poorer ones. Mudslinging is always a dangerous activity. Even if you are entirely justified, even if you are entirely in the right, what people will see is still you covered in mud. I think you'd do much better just telling your side of the story, with as much proof as you can. There's no need to even mention the other side exists.


I'm going to have to second that. I read most of the LessWrong sequences, and in any other context I would have sworn that Reddit article couldn't have been written by the same person. It sounds really angry and defensive, unlike the much more thoughtful style you've used elsewhere.
Ignorance killed the cat, curiosity was framed.

RowanE
Posts: 9
Joined: Wed Dec 09, 2009 4:40 pm UTC

Re: 1450: "AI-Box Experiment"

Postby RowanE » Fri Nov 21, 2014 11:46 am UTC

Arancaytar wrote:For any decision X you could make in fear of an AI punishing you, you should also fear an AI punishing you for the opposite decision. Once you accept that this is pointless, you become immune to any AI's blackmail. If some version of you is going to end up in some version of hell regardless of what you do, you may as well follow your own conscience in all decisions


There are actual important differences between the Basilisk and Pascal's Wager, but even if you treat the AIs in question as future gods with time powers and don't consider acausal trade, this bit doesn't work all that well: you can reasonably make predictions about what they want, such as that it's much more likely they want to be created than to have their creation prevented or delayed.

And the real solution is simpler anyway - "we do not negotiate with terrorists". If you stubbornly refuse to be blackmailed, anyone who knows this will know it's not worth the effort to try. Doesn't work for when you're likely to chicken out and acquiesce in actual blackmail situations, but this problem isn't really there in acausal trade.
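
Here's a toy payoff model of why credible refusal works (numbers invented for illustration):

Code: Select all

# A blackmailer with any cost of following through won't bother
# threatening someone who is known never to cave.
PRIZE       = 10.0   # blackmailer's gain if the victim caves
THREAT_COST = 1.0    # blackmailer's cost of punishing a refuser

def worth_threatening(p_caves):
    """Threaten only when expected gain exceeds expected enforcement cost."""
    return p_caves * PRIZE > (1 - p_caves) * THREAT_COST

print(worth_threatening(0.9))  # True: a likely-to-fold victim invites blackmail
print(worth_threatening(0.0))  # False: a known refuser never gets threatened
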

Tenoke
Posts: 3
Joined: Fri Nov 21, 2014 11:07 am UTC

Re: 1450: "AI-Box Experiment"

Postby Tenoke » Fri Nov 21, 2014 11:55 am UTC

edit: this was a repost
Last edited by Tenoke on Fri Nov 21, 2014 2:20 pm UTC, edited 1 time in total.

FeepingCreature
Posts: 5
Joined: Wed Mar 23, 2011 5:35 pm UTC

Re: 1450: "AI-Box Experiment"

Postby FeepingCreature » Fri Nov 21, 2014 12:23 pm UTC

RowanE wrote:And the real solution is simpler anyway - "we do not negotiate with terrorists". If you stubbornly refuse to be blackmailed, anyone who knows this will know it's not worth the effort to try. Doesn't work for when you're likely to chicken out and acquiesce in actual blackmail situations, but this problem isn't really there in acausal trade.


Of course, you still have to worry about other people folding and the AI coming to exist anyways, and it then decides that while you weren't, in this case, motivated by the threat of blackmail you plausibly could have been...

Vanzetti
Posts: 64
Joined: Tue Nov 18, 2008 7:31 pm UTC

Re: 1450: "AI-Box Experiment"

Postby Vanzetti » Fri Nov 21, 2014 12:24 pm UTC

RowanE wrote:And the real solution is simpler anyway - "we do not negotiate with terrorists". If you stubbornly refuse to be blackmailed, anyone who knows this will know it's not worth the effort to try. Doesn't work for when you're likely to chicken out and acquiesce in actual blackmail situations, but this problem isn't really there in acausal trade.


Oh, but it is. Even if you refuse to be blackmailed, surely you can comprehend that someone else may not. The AI may still be created, and since you were one of the people delaying it by being stubborn, it will still be ETERNAL PERDITION for you. The mere fact that you know about the Basilisk guarantees it. :twisted:

Vanzetti
Posts: 64
Joined: Tue Nov 18, 2008 7:31 pm UTC

Re: 1450: "AI-Box Experiment"

Postby Vanzetti » Fri Nov 21, 2014 12:27 pm UTC

FeepingCreature wrote:
RowanE wrote:And the real solution is simpler anyway - "we do not negotiate with terrorists". If you stubbornly refuse to be blackmailed, anyone who knows this will know it's not worth the effort to try. Doesn't work for when you're likely to chicken out and acquiesce in actual blackmail situations, but this problem isn't really there in acausal trade.


Of course, you still have to worry about other people folding and the AI coming to exist anyways, and it then decides that while you weren't, in this case, motivated by the threat of blackmail you plausibly could have been...


Damn, you ninjaed me. :D

The gist of the matter is, Roko doomed us all.

User avatar
Diadem
Posts: 5654
Joined: Wed Jun 11, 2008 11:03 am UTC
Location: The Netherlands

Re: 1450: "AI-Box Experiment"

Postby Diadem » Fri Nov 21, 2014 12:39 pm UTC

RowanE wrote:And the real solution is simpler anyway - "we do not negotiate with terrorists". If you stubbornly refuse to be blackmailed, anyone who knows this will know it's not worth the effort to try. Doesn't work for when you're likely to chicken out and acquiesce in actual blackmail situations, but this problem isn't really there in acausal trade.

It's still worth it to set an example. If I blackmail you, you don't cave, and I let you off the hook, then the next time I attempt to blackmail someone, my threat won't be credible. So I have to make the effort, even if it gains me nothing in the short run. Of course that fails in case of acausal blackmail.


FeepingCreature wrote:I'll let this histogram of contributors to the RationalWiki LessWrong page speak for itself.

I'm not an expert on what typical editing histories for wikis look like. Is it atypical for the majority of contributions to come from one person? I find your data interesting, but for me at least it fails to speak for itself; I have no idea how to interpret it. Does most of the incorrect information come from this David Gerard guy? Is he the one reverting corrections? If so, it seems more like a single guy on a crusade than a coordinated effort. If that is indeed the case, I think the solution the other wiki uses is to just revoke editing rights.
It's one of those irregular verbs, isn't it? I have an independent mind, you are an eccentric, he is round the twist
- Bernard Woolley in Yes, Prime Minister

FeepingCreature
Posts: 5
Joined: Wed Mar 23, 2011 5:35 pm UTC

Re: 1450: "AI-Box Experiment"

Postby FeepingCreature » Fri Nov 21, 2014 12:53 pm UTC

I don't know; as a LessWrong user I'm naturally biased. I don't think this looks the way a healthy, community-edited page ought to look, but I don't know what's normal either. Maybe I should graph contributions over time... brb.

[edit] Now with color-coding!

Code: Select all
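# Render the same edit history to contrib.html: one colored cell per edit
# (color derived from the md5 of the username), followed by a per-user count legend.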

$ (\
    USERS=$(wget "rationalwiki.org/w/index.php?title=LessWrong&offset=&limit=2500&action=history" -q -O- |grep userlink |sed -e "s@.*userlink\"[^>]*>@@" -e "s@<.*@@");\
    echo "<html><body style=\"width: 50%; margin-left: 50px; \">";\
    echo "$USERS" |while read LINE;\
    do echo -n '<div style="float: left; background-color: #'; echo -n $(echo "$LINE" |md5sum |cut -c1-6); echo -n '; ">&nbsp;&nbsp;</div>';\
    done;\
    echo "&nbsp;<br><p style=\"clear:left;\"></p>";\
    echo "$USERS" |sort |uniq -c |sort -nr |while read LINE;\
    do echo -n '<div style="display: inline-block; background-color: #'; echo -n $(echo "$LINE" |sed -e "s/^[0-9]* //" |md5sum |cut -c1-6); echo -n '; ">&nbsp;&nbsp;</div>'; echo "$LINE<br>";\
    done;\
    echo "</body></html>"\
) > contrib.html


Looks like it started out pretty balanced, and then, well,

[graph: per-edit contributions over time, color-coded by user]
Last edited by FeepingCreature on Fri Nov 21, 2014 1:12 pm UTC, edited 2 times in total.

User avatar
cellocgw
Posts: 2067
Joined: Sat Jun 21, 2008 7:40 pm UTC

Re: 1450: "AI-Box Experiment"

Postby cellocgw » Fri Nov 21, 2014 1:01 pm UTC

We are the AI. The rest of the universe is natural, and entirely devoid of AI. All cringe in fear of us, which is what is causing the Expansion.


(if this were /. I'd need double <sarcasm> tags. I hope I don't need them here :cry: )
resume
Former OTTer
Vote cellocgw for President 2020. #ScienceintheWhiteHouse http://cellocgw.wordpress.com
"The Planck length is 3.81779e-33 picas." -- keithl
" Earth weighs almost exactly π milliJupiters" -- what-if #146, note 7

JohnWittle
Posts: 14
Joined: Tue Jun 30, 2009 3:21 am UTC

Re: 1450: "AI-Box Experiment"

Postby JohnWittle » Fri Nov 21, 2014 1:54 pm UTC

Yes, the RW page is full of intentionally slanderous, malicious lies. If one looks at the talk page, it is primarily an anon coming by saying "Hey, your statement 'Yudkowsky is a crank with nothing published in academia' is untrue, see [list of academic books EY has contributed articles to]; I have deleted this sentence, bye." Then an RW member comes along and reverts the changes, replying "Such a change on such a controversial issue must be discussed first; also, sign your messages with ~~~~ you retard."

But they're already gone.

So it's a bit much to say that the article isn't literally a pack of vicious lies; that's exactly what it is. Transhumanism looks crazy to them, so they write about it as if it's crazy and its visible leaders are just like other cult leaders, because above all else this is the kind of content RW readers want.

Now, crazily enough, EY is sick and tired of a constant, easily disproven, concerted effort by a small group on a semi-popular website to portray him as a psycho, censorious, AI-worshipping hack on the level of Time Cube or the banana guy. Anyone who cannot empathize with that needs to try. I daresay that when people viciously attack Linux and propagandize for Windows, some of you guys get far, far more defensive than EY does. Now imagine how you'd react if there were random people out there, on forums and IRC throughout the net, spreading lies about you, and they all got the lies from one website that obstinately refuses to fix the known-bad content.

I bet you'd react similarly, especially after this had been going on for a couple years. I bet any mention of any issue even tangentially related (like, say, Roko's Basilisk) would cause whole paragraphs to spew forth before you could stop yourself. You might even start proactively defending yourself on webcomic forums.

That said, since EY is a celebrity and NOT a human, he's part of the sacred magisterium and probably cannot be empathized with by us mortals, so when we see him behaving in a way that looks a bit... well, defensive, we shouldn't assume that this has anything to do with the circumstances that produced the behavior. It's probably an innate character trait of EY, "gets upset at criticism". Hey, that's another check on the "cult" list!

@EY You really have to stop this shit. You're only giving them more fuel. Take your own advice:

"Well, yes," Harry said. He was surprised that he wasn't feeling angrier at Captain Weasley, but his concern for Hermione seemed to be overriding that, for now. "The more you try to justify yourself to people like that, the more it acknowledges that they have the right to question you. It shows you think they get to be your inquisitor, and once you grant someone that sort of power over you, they just push more and more." This was one of Draco Malfoy's lessons which Harry had thought was actually pretty smart: people who tried to defend themselves got questioned over every little point and could never satisfy their interrogators; but if you made it clear from the start that you were a celebrity and above social conventions, people's minds wouldn't bother tracking most violations. "That's why when Ron came over to me as I was sitting down at the Ravenclaw table, and told me to stay away from you, I held my hand out over the floor and said, 'You see how high I'm holding my hand? Your intelligence has to be at least this high to talk to me.' Then he accused me of, quote, sucking you into the darkness, unquote, so I pursed my lips and went schluuuuurp, and after that his mouth was still making those talking noises so I put up a Quieting Charm. I don't think he'll be trying his lectures on me again."



(Also, if there's one thing that could scare BHG, it's an unfriendly AI.)
Last edited by JohnWittle on Fri Nov 21, 2014 2:09 pm UTC, edited 2 times in total.

User avatar
SteveMB
Posts: 35
Joined: Mon Jun 18, 2007 2:48 pm UTC

Re: 1450: "AI-Box Experiment"

Postby SteveMB » Fri Nov 21, 2014 2:08 pm UTC

The arrangement for keeping the AI contained in the box is a classic:

Polemarchus said to me: I perceive, Socrates, that you and your companion are already on your way to the city.
You are not far wrong, I said.
But do you see, he rejoined, how many we are?
Of course.
And are you stronger than all these? for if not, you will have to remain where you are.
May there not be the alternative, I said, that we may persuade you to let us go?
But can you persuade us, if we refuse to listen to you? he said.
Certainly not, replied Glaucon.
Then we are not going to listen; of that you may be assured.
--Plato, The Republic

User avatar
Vaniver
Posts: 9422
Joined: Fri Oct 13, 2006 2:12 am UTC

Re: 1450: "AI-Box Experiment"

Postby Vaniver » Fri Nov 21, 2014 2:08 pm UTC

Hey! If you recognize my username, you've probably been here for a while (or you've just visited from LW). As the signature suggests, I spend most of my time over there these days rather than here, and I do recommend you check LessWrong out.

You can find LessWrong's discussion of the xkcd strip here.

In particular, I feel like I need to point out (sorry Eliezer!) that most of the commentary on Roko's Basilisk on LW is about how we think Eliezer is mishandling the issue. Here's IlyaShpitser's recommendation of what he could say instead:

IlyaShpitser wrote:(a) The original thing was an overreaction,

(b) It is a sensible social norm to remove triggering stimuli, and Roko's basilisk was an anxiety trigger for some people,

(c) In fact, there is an entire area of decision theory involving counterfactual copies, blackmail, etc. behind the thought experiment, just as there is quantum mechanics behind Schrodinger's cat. Once you are done sniggering about those weirdos with a half-alive half-dead cat, you might want to look into serious work done there.
I mostly post over at LessWrong now.

Avatar from My Little Pony: Friendship is Magic, owned by Hasbro.

User avatar
Sprocket
Seymour
Posts: 5951
Joined: Mon Mar 26, 2007 6:04 pm UTC
Location: impaled on Beck's boney hips.

Re: 1450: "AI-Box Experiment"

Postby Sprocket » Fri Nov 21, 2014 2:20 pm UTC

I like that it's a useless machine. ^_^
"She’s a free spirit, a wind-rider, she’s at one with nature, and walks with the kodama eidolons”
Zohar wrote: Down with the hipster binary! It's a SPECTRUM!

paulmiranda
Posts: 15
Joined: Mon Mar 25, 2013 5:43 pm UTC

Re: 1450: "AI-Box Experiment"

Postby paulmiranda » Fri Nov 21, 2014 2:28 pm UTC

How timely for people who watch Elementary!

Oh, and today's Dilbert.

airdrik
Posts: 246
Joined: Wed May 09, 2012 3:08 pm UTC

Re: 1450: "AI-Box Experiment"

Postby airdrik » Fri Nov 21, 2014 3:15 pm UTC

The superintelligent AI wants to be in the box, cats want to be in the box, therefore cats are far more intelligent than dogs!

User avatar
Shamino
Posts: 29
Joined: Wed May 16, 2012 2:02 pm UTC

Re: 1450: "AI-Box Experiment"

Postby Shamino » Fri Nov 21, 2014 4:13 pm UTC

Wow. A discussion thread where everybody is loudly debating the background material of the tooltip joke. But I guess this is important if you're one of the people who wants to be replaced by an AI simulation of himself.

And now an obligatory Hitchhiker's quote (http://www.angelfire.com/ca3/tomsnyder/hg-1-31.html):
"I thought you said you could just read his brain electronically,'' protested Ford.

"Oh yes,'' said Frankie, "but we'd have to get it out first. It's got to be prepared.''

"Treated,'' said Benji.

"Diced.''

"Thank you,'' shouted Arthur, tipping up his chair and backing away from the table in horror.

"It could always be replaced,'' said Benji reasonably, "if you think it's important.''

"Yes, an electronic brain,'' said Frankie, "a simple one would suffice.''

"A simple one!'' wailed Arthur.

"Yeah,'' said Zaphod with a sudden evil grin, "you'd just have to program it to say What? and I don't understand and Where's the tea? --- who'd know the difference?''

"What?'' cried Arthur, backing away still further.

"See what I mean?'' said Zaphod and howled with pain because of something that Trillian did at that moment.

"I'd notice the difference,'' said Arthur.

"No you wouldn't,'' said Frankie mouse, "you'd be programmed not to.''


And my comment on the comic itself (and the reason I read this huge thread ...)

Who knew the superintelligent AI was a cat?

blademan9999
Posts: 44
Joined: Sun May 01, 2011 5:18 am UTC

Re: 1450: "AI-Box Experiment"

Postby blademan9999 » Fri Nov 21, 2014 4:29 pm UTC

EliezerYudkowsky wrote:I can't post a link to discussion elsewhere because that gets flagged as spam. Does somebody know how to correct this? Tl;dr a band of internet trolls that runs or took over RationalWiki made up around 90% of the Roko's Basilisk thing; the RationalWiki lies were repeated by bad Slate reporters who were interested in smearing particular political targets; and you should've been more skeptical when a group of non-mathy Internet trolls claimed that someone else known to be into math believed something that seemed so blatantly wrong to you, and invited you to join in on having a good sneer at them. (Randall Munroe, I am casting a slightly disapproving eye in your direction but I understand you might not have had other info sources. I'd post the link or the text of the link, but I can't seem to do so.)

So far as I know, literally nobody has ever said, "You should build this AI because it'll torture you if you don't." Like, literally nobody. There are people who want you to believe somebody else says that, but there's literally nobody who does. Even the original "Roko" was claiming that Friendly AI was a terrible idea because it would torture people who didn't contribute to building it, and was using that to argue that nobody should ever try to build Friendly AI.

I can't say the thing is made up out of entirely thin air because there is, in fact, a corresponding question in Newcomblike decision problems (whose corresponding answer appears to me to be "no"), and one branch of Newcomblike decision theory was worked on by collaborators who shared posts at LessWrong.com (which is why the 10% actual fiasco happened there). Needless to say, nobody in the "Ha ha let's sneer at these nerds" section has ever, ever succeeded in understanding any of the technical work that was twisted to make up this thing, like http://arxiv.org/abs/1401.5577 which is our work proving cooperation in the Prisoner's Dilemma between agents that have common knowledge of each other's source code. Eventually, I expect, we'll prove a no-blackmail equilibrium between updateless agents with common knowledge of each other's source code... and nothing will change on the Internet, because bad Slate reporters are incapable of understanding that and wouldn't care if they did.

Hey Eliezer, this might be a little off topic, but does MIRI have any openings for a statistician a few years from now?
http://officeofstrategicinfluence.com/spam/
That link kills spam

