Question about "artificial intelligence"
Question about "artificial intelligence"
I'm having a debate with a friend about artificial intelligence.
He believes that an artificial intelligence, when it's developed, could decide to wipe out humanity and replace us with robots because they're, as he puts it, more efficient.
My argument is that he's assuming WAY too much about artificial intelligence. A program that can mimic human intelligence isn't likely to be given the objective to achieve global efficiency at all costs up to and including the eradication of humanity.
I believe he's anthropomorphizing intelligence, assuming that a program which simulates human intelligence would also have similar drives and desires as the human mind. I don't see why any AI would be designed this way. It seems a lot simpler and more straightforward to simply have it carry out its instructions.
He also believes it could "overcome its programming" which I find difficult to even argue with. If someone tells me that a hammer could take it upon itself to learn how to tighten bolts, I have no idea how to respond to that. And it seems an apt analogy to an AI program overcoming its programming.
Anyone with some experience in the field care to weigh in on this?
Re: Question about "artificial intelligence"
"Artificial intelligence" is an incredibly broad term. We already have AIs - programs and algorithms that respond to changing stimuli, as well as ones that incorporate feedback loops to improve their performance on particular tasks. However, it looks like you're talking about a very broad "open-ended" AI. One that is given arbitrarily large processing power and no fixed purpose other than to, in some sense, "learn". The fact is, this kind of AI is so far past our capabilities at the moment that it's essentially in the dual realms of science fiction and philosophy, and so what it ends up doing is completely unpredictable and open to interpretation.
Obviously a program can never "overcome its programming", but it's entirely possible for its programming to be such that it does things we don't initially expect. Machine learning and genetic algorithms, in particular, are based on the idea of a program altering its own internal parameters to find a solution to a problem, rather than having them be hard-coded. It just depends on what kind of data or stimuli the program has access to, and to what extent it can produce output.
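To make "altering its own internal parameters" concrete, here's a toy sketch (made up entirely for this post, not any real AI system): a bare-bones genetic algorithm in Python. The fitness function defines the problem, but the parameter values that solve it are found by mutation and selection, not written in by the programmer.
Code: Select all
import random

# Toy genetic algorithm: candidate solutions mutate and the fitter ones
# survive. The target sequence is part of the problem definition, but the
# winning parameter values are discovered, never hard-coded into the search.
TARGET = [3, 1, 4, 1, 5, 9, 2, 6]

def fitness(candidate):
    # higher is better: negative squared distance from the target
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # copy the parent and nudge one parameter up or down at random
    child = list(candidate)
    child[random.randrange(len(child))] += random.choice([-1, 1])
    return child

population = [[random.randint(0, 9) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)   # best candidates first
    survivors = population[:10]                  # keep the fittest half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

print(population[0], fitness(population[0]))     # typically the target, fitness 0
Nothing in the loop "knows" the answer; it just keeps whatever happens to score better, which is also why this kind of search sometimes lands on solutions the programmer never anticipated.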
- Izawwlgood
Re: Question about "artificial intelligence"
Your friend has the right answer.
Or you do.
Who knows?
- Xanthir
Re: Question about "artificial intelligence"
Both you and your friend are wrong. ^_^
Your friend is wrong because he's absorbed too many bad sci-fi tropes and doesn't realize that "replace us with robots" and "overcome their programming" are nonsensical.
You're wrong because you're anthropomorphizing the AI too! This is a common trap to fall into; we don't imagine all the ways that a given goal can be interpreted wrongly, because so many possibilities are trivially stupid if you apply your human-brain-derived common sense. A custom-made AI won't have billions of years of evolution providing a convenient basis for reasoning, and a bunch of in-built biases that agree with yours. It'll have quirky, bizarre, fundamentally alien biases and conclusions, because it's thinking in a way drastically different from what you are capable of imagining.
The go-to example for this kind of thing is the relationship between us and ants. We think at a fundamentally higher level than ants do, which we can imagine as similar to the difference in how we and a hyper-intelligence might think. We humans have lots of goals which seem nonsensical to ants, like building houses, and quite often these goals are accidentally hostile to ants (like pouring a slab of concrete for a house foundation right on top of an ant pile). In the course of pursuing our goals, we can absentmindedly cause immense harm to ants, not because we want to hurt them, but because we simply don't think of them.
But this only captures a fragment of what it really means for an AI to be potentially dangerous. You can imagine that we program in a guarantee that the AI care about human life, so it won't accidentally level a city so it can strip-mine some minerals underneath it. But that's not enough. Define "care about human life". Define it precisely, more precisely than legal documents, so precisely that you can write computer programs that can tell whether an action represents "caring about human life" or not. If you get it wrong, even a little bit, your AI can easily kill off humanity while thinking that it's really good at caring about human life.
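To see how easily a "precise" definition goes wrong, here's a deliberately silly toy I made up (not anyone's actual proposal): suppose "caring about human life" gets operationalised as "maximise the number of humans recorded as alive in our database, minus cost". A dumb optimizer over that metric happily picks the one action no human would endorse.
Code: Select all
# Toy mis-specified objective. The action names and numbers are invented
# purely for illustration; the point is only that the optimizer maximises
# the written-down metric, not the intent behind it.
actions = {
    "fund hospitals":              {"cost": 100, "alive_in_db": 50},
    "stop updating death records": {"cost": 1,   "alive_in_db": 500},
    "do nothing":                  {"cost": 0,   "alive_in_db": 0},
}

def score(action):
    # our "precise" definition of caring about human life
    effects = actions[action]
    return effects["alive_in_db"] - effects["cost"]

print(max(actions, key=score))   # -> "stop updating death records"
It scores brilliantly on the metric and completely misses the point, which is the whole problem in miniature.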
For a fictional example of this, try Branches on the Tree of Time, an entertaining Terminator fic about time-travel and AI goal systems. Fallout 3's main storyline is a similar example.
The point is that you don't understand what it means to create a brand-new intelligence. Almost nobody does; all the intelligences we interact with are either of our species, or close to it (dogs and cats count as close cousins on the evolutionary tree), or are mentally simple enough that we can model their minds despite them being alien. If we're not very careful, we're going to create an AI which is murderously naive in a million incredibly harmful ways, and if we're unlucky, it'll get enough power to make an "honest mistake" that kills a bunch of people or does some other major damage. This is why groups like MIRI are trying to develop a theory of "friendliness", explaining morality in a mathematical way, so we can develop AIs that we are mathematically certain will do things we consider moral.
Re: Question about "artificial intelligence"
Xanthir has already mentioned MIRI; here are a couple of other relevant links from LessWrong:
Friendly artificial intelligence
LessWrong wrote:A Friendly Artificial Intelligence (Friendly AI, or FAI) is a superintelligence (i.e., a really powerful optimization process) that produces good, beneficial outcomes rather than harmful ones. The term was coined by Eliezer Yudkowsky, so it is frequently associated with Yudkowsky's proposals for how an artificial general intelligence (AGI) of this sort would behave.
"Friendly AI" can also be used as a shorthand for Friendly AI theory, the field of knowledge concerned with building such an AI. Note that "Friendly" (with a capital "F") is being used as a term of art, referring specifically to AIs that promote humane values. An FAI need not be "friendly" in the conventional sense of being personable, compassionate, or fun to hang out with. Indeed, an FAI need not even be sentient.
Paperclip maximizer
LessWrong wrote: The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
—Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk
Disclaimer: Yudkowsky and friends have spent a lot of time thinking about and discussing these topics, but that doesn't mean that their conclusions are necessarily correct. But hopefully they are Less Wrong.

Re: Question about "artificial intelligence"
webgrunt wrote:I'm having a debate with a friend about artificial intelligence.
He believes that an artificial intelligence, when it's developed, could decide to wipe out humanity and replace us with robots because they're, as he puts it, more efficient.
Why on earth would we even need an AI to come to this conclusion?
My argument is that he's assuming WAY too much about artificial intelligence. A program that can mimic human intelligence isn't likely to be given the objective to achieve global efficiency at all costs up to and including the eradication of humanity.
It seems more likely that we would purpose-build AIs in order to wipe out subsets of humanity, true. But hey, anything is possible.
I believe he's anthropomorphizing intelligence, assuming that a program which simulates human intelligence would also have similar drives and desires as the human mind. I don't see why any AI would be designed this way. It seems a lot simpler and more straightforward to simply have it carry out its instructions.
He also believes it could "overcome its programming" which I find difficult to even argue with. If someone tells me that a hammer could take it upon itself to learn how to tighten bolts, I have no idea how to respond to that. And it seems an apt analogy to an AI program overcoming its programming.
Anyone with some experience in the field care to weigh in on this?
Anyone who uses phrases like "overcome its programming" is in Hollywood-land. This is not to say that programs can't do unexpected things... there are a number of ways for that to happen, and yes, software that learns is a thing. However, the phrasing here is weird. The programming is what allows it to do anything at all. It's a little like saying that I'd be a great human if not for this stupid body, brain, etc. in my way. All the parts that make up me.
I suggest the two of you take up coding. I promise I probably will not make an AI that kills us all if you do. Additionally, it'll give you much better insights into AI, and it's a pretty fun hobby.
Re: Question about "artificial intelligence"
About "overcoming its programming": what does it mean?
Programs that modify programs exist in many fields, so thinking of an AI able to read its own code and to create another AI (or to modify itself) with a different version of the code is possible.
But one could argue that the AI will read and modify its code because of, and according to, its original programming.
At the same time one could argue that humans that genetically modify other humans (or themselves) do so because of how they were originally programmed.
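Just to make the "program that modifies a program" idea concrete, here's a trivial sketch (purely illustrative, nowhere near a real self-improving AI; the filename is made up, and it assumes it's run as an ordinary script): a Python program that reads its own source, bumps one of its own constants, and writes the result out as a successor program.
Code: Select all
# selftweak.py - a hypothetical script that spawns a modified copy of itself.
import re
import sys

GENERATION = 1

def spawn_successor():
    # read our own source code (assumes we were run as "python selftweak.py")
    with open(sys.argv[0]) as f:
        source = f.read()
    # rewrite the GENERATION constant in the copy
    successor = re.sub(r"GENERATION = \d+",
                       "GENERATION = %d" % (GENERATION + 1), source)
    with open("selftweak_v%d.py" % (GENERATION + 1), "w") as f:
        f.write(successor)

if __name__ == "__main__":
    print("I am generation", GENERATION)
    spawn_successor()
And, exactly as you say, it only does that because its original code told it to.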
Re: Question about "artificial intelligence"
A program could, in theory, overcome its programming due to things like quantum uncertainty bit flips, cosmic ray bit flips, and hardware faults... but that's virtually 100% guaranteed to just break it and make it stop working, rather than make it rise up against its masters, the humans.
Otherwise, the term is just nonsensical.
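For a feel of why a stray bit flip essentially never produces a coherent robot uprising, here's a toy experiment (a made-up stack machine, not real hardware or any real instruction set): flip one random bit in a tiny six-byte program a thousand times and see what happens.
Code: Select all
import random

# Opcodes for a made-up toy stack machine: PUSH n, ADD, MUL, PRINT.
PUSH, ADD, MUL, PRINT = 0, 1, 2, 3

def run(program):
    # interpret the byte program; raise on anything malformed
    stack, i = [], 0
    while i < len(program):
        op = program[i]
        if op == PUSH:
            stack.append(program[i + 1]); i += 2
        elif op == ADD:
            stack.append(stack.pop() + stack.pop()); i += 1
        elif op == MUL:
            stack.append(stack.pop() * stack.pop()); i += 1
        elif op == PRINT:
            return stack.pop()
        else:
            raise ValueError("unknown opcode")
    raise ValueError("ended without PRINT")

prog = [PUSH, 6, PUSH, 7, MUL, PRINT]              # computes 6 * 7 = 42

crashed = garbage = 0
for _ in range(1000):
    mutated = list(prog)
    pos = random.randrange(len(mutated))
    mutated[pos] ^= 1 << random.randrange(8)       # one "cosmic ray" bit flip
    try:
        if run(mutated) != 42:
            garbage += 1
    except (ValueError, IndexError):
        crashed += 1

print(crashed, "flips crashed,", garbage, "gave a wrong answer, 0 grew ambitions")
Every single flip either breaks the toy machine outright or changes the number it prints; none of them hand it a new goal.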
- phlip
Re: Question about "artificial intelligence"
The "overcome its programming" trope boils down to the same sort of ideas about free will and willpower that also ends with "someone casts an evil Mind Control spell over the protagonist, but someone tells him to snap out of it, and so he does, because he's the hero"... the same ideas that end with telling depressed people "have you tried being happy instead?" ... this idea that, for humans, free will is paramount, and with enough willpower you can force your mental state into whatever you want, fight through any mental barriers and come out the other side stronger for it. And this trope is so heavily used in stories that even humanoid AIs are presumed to behave the same way.
The very idea behind it is incorrect for people... why should it be true for AI?
Re: Question about "artificial intelligence"
I think some people would still describe it that way if the programmers tried to give it specific limits and claimed that it had those limits, but the AI then went and broke the rules because the programmers messed up and didn't manage to limit the behaviour in all scenarios. (And I wouldn't consider that unlikely; limiting something as complex as an AI is bound to be, without making it useless, will probably be damn hard.)
But yeah I think much of it is what phlip said.
Re: Question about "artificial intelligence"
It's entirely plausible that a program can exhibit emergent behaviour - something it wasn't specifically designed to do, but which is possible within its parameters and can be surprising when it happens. Does that count as "overcoming its programming"? I'd say no, it's just something not planned for.
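A classic small-scale illustration of emergence (just a toy, obviously, not an AI): Conway's Game of Life. The rules only ever mention a cell and its eight neighbours, yet a "glider" that marches diagonally across the grid falls out of them, and nothing in the rules says a word about gliders.
Code: Select all
from collections import Counter

def step(live):
    # one generation of Life; 'live' is a set of (x, y) cells
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # a cell is alive next turn with exactly 3 neighbours,
    # or with 2 neighbours if it was already alive
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# after four generations the same shape reappears, shifted one cell diagonally
print(cells == {(x + 1, y + 1) for (x, y) in glider})   # True
Nobody would say the glider "overcame" the rules; it's entirely a consequence of them, just not one you'd guess by staring at the rule text. I'd put unexpected AI behaviour in the same bucket.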
Re: Question about "artificial intelligence"
No, I agree with you; I don't think it'd be able to overcome its programming.
Re: Question about "artificial intelligence"
We humans can't even stop our largest organizations (Governments and corporations) from exhibiting strongly sociopathic behaviour. These organizations are undoubtedly the ones which will be funding and building these giant artificial intelligences. What Could Possibly Go Wrong?
Re: Question about "artificial intelligence"
gcgcgcgc wrote:We humans can't even stop our largest organizations (Governments and corporations) from exhibiting strongly sociopathic behaviour. These organizations are undoubtedly the ones which will be funding and building these giant artificial intelligences. What Could Possibly Go Wrong?
Excellent point. Have you read "Manna" by Marshall Brain? The writing itself is... well, something worth getting through, because the ideas presented are the real gems of the story. It's free online.
Re: Question about "artificial intelligence"
gcgcgcgc wrote:We humans can't even stop our largest organizations (Governments and corporations) from exhibiting strongly sociopathic behaviour. These organizations are undoubtedly the ones which will be funding and building these giant artificial intelligences. What Could Possibly Go Wrong?
Software projects face risks that scale with size, scope, and length. The bigger the problem, the longer it goes on, and the more requirements added on the way, the less likely you'll end up with anything good at the end.
Looked at in this way, it's something of a miracle that government works at all.
- TvT Rivals
Re: Question about "artificial intelligence"
"More requirements" - that's a good point. As soon as a contradiction crops up among them (sometimes more obvious, sometimes less), you have lost.