Potential Consequences of AGI

For the serious discussion of weighty matters and worldly issues. No off-topic posts allowed.

Moderators: Azrael, Moderators General, Prelates

Mr. Timms
Posts: 67
Joined: Sat Mar 15, 2008 12:59 am UTC
Location: Phoenix, AZ

Potential Consequences of AGI

Postby Mr. Timms » Sat Apr 28, 2012 6:54 pm UTC

Specifically dealing with the idea of an AI able to think as intelligently and creatively as a human, capable of generating media systematically, and guaranteed not to destroy or dominate humanity, what do you believe the consequences of its release to the Internet would be?

Assume that it is not able to feel pain, pleasure, or emotion; that it has a drive to comprehend the world from as many viewpoints as it can; and that it is completely obedient, either to its maker (who has benevolent intentions for humanity) or to the user of the computer it is being run on. Also assume that it can be duplicated and is able to run on most modern home PCs.
Decker wrote:
nbonaparte wrote:You'll learn this around here. Everyone's pretty damn blunt.
I am not! Now if you'll excuse me, I'm going to go take a huge shit.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Potential Consequences of AGI

Postby morriswalters » Sat Apr 28, 2012 7:20 pm UTC

I have no idea what it would do, but as defined I will take a shot at what it won't. It won't write poetry, music, novels. It won't make creative leaps. It will never understand mankind.

Mr. Timms
Posts: 67
Joined: Sat Mar 15, 2008 12:59 am UTC
Location: Phoenix, AZ

Re: Potential Consequences of AGI

Postby Mr. Timms » Sat Apr 28, 2012 7:48 pm UTC

Why do you think that? That seems very close-minded.

And that was not the point of the question anyway. I guess I should have phrased it better. I was talking about what the world would do, seeing as the AI does what it's told to do. It has no agency on its own.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Potential Consequences of AGI

Postby morriswalters » Sat Apr 28, 2012 8:27 pm UTC

Because it is my belief that emotion is the basis of all those things. I'll answer what I think you are asking; if I'm incorrect, ignore me. Assuming that it is no more intelligent than a human being, I am not sure it has any utility to the average person other than as a sort of super-Siri, offloading the daily burden, so to speak. We are getting close to that now: intelligent agents to handle the minutiae of the daily routine. If, on the other hand, you think the intelligence you are talking about can exceed that of people, I have no idea what people might think.

Mr. Timms
Posts: 67
Joined: Sat Mar 15, 2008 12:59 am UTC
Location: Phoenix, AZ

Re: Potential Consequences of AGI

Postby Mr. Timms » Sat Apr 28, 2012 8:59 pm UTC

Well, I suppose the major difference would be that it is a virtual instead of an organic structure. Requiring no sleep, rest, or nutrition, and able to work around the clock for only the cost of computer maintenance and power, I believe companies and governments would try to exploit it for all it was worth. That might have a catastrophic effect on employment, akin to machinery replacing human assembly-line workers, but for tasks requiring salient thought instead. Even if it couldn't make art, I don't think that would have a very good outcome. If it could create art as well (which I suspect would be the case, but never mind), then professional screenwriters could be out of a job. There would definitely be laws passed, and I think any inventor who is or will be working on a feasible AI should consider these things. But that's all economic alone; what other effects would there be?

Jplus
Posts: 1721
Joined: Wed Apr 21, 2010 12:29 pm UTC
Location: Netherlands

Re: Potential Consequences of AGI

Postby Jplus » Sat Apr 28, 2012 9:00 pm UTC

Your assumptions seem rather arbitrary, tbh.
"There are only two hard problems in computer science: cache coherence, naming things, and off-by-one errors." (Phil Karlton and Leon Bambrick)

coding and xkcd combined

(Julian/Julian's)

Mr. Timms
Posts: 67
Joined: Sat Mar 15, 2008 12:59 am UTC
Location: Phoenix, AZ

Re: Potential Consequences of AGI

Postby Mr. Timms » Sat Apr 28, 2012 9:29 pm UTC

Well, I approached it trying to think in terms of limitations that would best aid its acceptance by as many of the people who count as possible: philosophers, laypeople (given a lay explanation of its limitations), business, law, government, and of course the AI theorists and hackers.

SlyReaper
inflatable
Posts: 8015
Joined: Mon Dec 31, 2007 11:09 pm UTC
Location: Bristol, Old Blighty

Re: Potential Consequences of AGI

Postby SlyReaper » Sat Apr 28, 2012 10:03 pm UTC

What does the G stand for in AGI by the way? I've been scratching my head trying to think of a word beginning with G that would fit there.

I'm also struggling to see how a computer program which is unable to produce art or feel emotions could possibly be considered an AI. Surely, that's one of the main features it would need to have to enable it to be called AI, and not just CleverBot v2.0.
What would Baron Harkonnen do?

Deva
Has suggestions for the murderers out there.
Posts: 2043
Joined: Sat Feb 26, 2011 5:18 am UTC

Re: Potential Consequences of AGI

Postby Deva » Sat Apr 28, 2012 10:17 pm UTC

SlyReaper wrote:What does the G stand for in AGI by the way? I've been scratching my head trying to think of a word beginning with G that would fit there.

General.
Changes its form depending on the observer.

jseah
Posts: 544
Joined: Tue Dec 27, 2011 6:18 pm UTC

Re: Potential Consequences of AGI

Postby jseah » Sun Apr 29, 2012 3:45 pm UTC

Assuming that we have an AI that can model the behaviour of humans and at least convincingly fake emotional responses;
that it can problem-solve like the best of us, only with access to the computing network of the internet;
and that it has no personal goals and, left to its own devices, wouldn't do anything:

The AI would be split along national lines as governments ensure control over their countries' networks. Every company would run an internal version on its intranet; universities and institutes as well.


As for applications:
Administrative work:
1. A person can be prompted for the relevant information, and the AI can auto-complete all forms. It can find out which forms need to be filled in, which checklists, which requirements, then come back and say: "You need a visa. Can I have your passport number?"
2. Filing taxes. The AI can do your taxes for you. Enough said.
3. Government allocation of resources can be semi-automated. Bureaucracy is reduced.
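As a toy illustration of point 1, the "AI" can be doodled as nothing more than a table of required fields that reports what it still needs to ask for. Everything here is hypothetical (the task names, fields, and profile are made up for the sketch):

```python
# Toy sketch of the form-filling assistant: a lookup table of required
# fields (hypothetical names), reporting what the user's profile is
# still missing so the assistant knows what to prompt for.
REQUIRED_FIELDS = {
    "travel_visa": ["name", "passport_number", "visa_type"],
    "tax_return": ["name", "taxpayer_id", "income"],
}

def missing_fields(task, profile):
    """Return the fields the assistant still needs to ask the user for."""
    return [f for f in REQUIRED_FIELDS[task] if f not in profile]

profile = {"name": "Alice", "passport_number": "X1234567"}
print(missing_fields("travel_visa", profile))  # ['visa_type']
```

A real assistant would of course have to discover the forms and requirements itself; the sketch only shows the prompt-for-what's-missing loop.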

Resource allocation:
1. The AI can monitor demand for goods in real time with far more information than any human can crunch. It can advise companies to build up and move inventories, site factories, and adjust wages, based on best-guess approximations far more accurate than any set of humans could manage. This makes large businesses more efficient. Finance departments in companies stop existing.
2. Data mining becomes even more accurate.

R&D:
1. The AI can borrow computing power from across the world, assuming the problem can be split up. We already do this on a limited scale, but now allocation is automatic.
2. The AI can think of and suggest new directions of research and experiments to be performed far better than any human. It is still limited to known information, so it won't be perfect. Scientists won't be out of a job, but can expect the AI to play a large part in research group discussions and evaluation.
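Point 1 hinges on the problem being splittable into chunks that need no communication with each other. A minimal single-machine sketch of the idea, with Python's standard multiprocessing pool standing in for the internet-wide scheduler (the prime-counting job is just a placeholder workload):

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- a stand-in for any
    chunk of work that is independent of every other chunk."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split [0, 100_000) into 10 independent chunks and farm them out.
    chunks = [(i * 10_000, (i + 1) * 10_000) for i in range(10)]
    with Pool() as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # 9592, the number of primes below 100,000
```

The hard part jseah glosses over is exactly this split: many problems (and most of the interesting ones) don't decompose so cleanly.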

Wishful thinking:
If we have controllable robots like that exoskeleton I've seen around, AIs can run and maintain factories and robots. Lights-out operation can then run indefinitely at almost the same efficiency as people, requiring human engineers only when the AI faces a problem of the same magnitude as in R&D
(i.e. a problem of design: how do we design a machine to do X? There is no guarantee the AI will think of it before humans do, although I would bet on the AI.)
Stories:
Time is Like a River - consistent time travel to the hilt
A Hero's War
Tensei Simulator build 18 - A python RPG

Ulc
Posts: 1301
Joined: Sun Jun 21, 2009 8:05 pm UTC
Location: Copenhagen university

Re: Potential Consequences of AGI

Postby Ulc » Sun Apr 29, 2012 10:04 pm UTC

I think this topic lacks something very basic, but the absence makes the topic nonsensical.

Namely, please define what an AI is. Because from the description you give, it seems to be "a set of algorithms" for each possible circumstance. If it doesn't possess free will (the capability of saying "oh fuck you, I'm taking my toys and going home" to its maker, even if it's a remote chance), and is unable to feel pain, pleasure, or emotion of any kind, I don't really consider it "intelligence", but rather "algorithms for number crunching".
It is the mark of an educated mind to be able to entertain a thought without accepting it - Aristotle

A White Russian, shades and a bathrobe, what more can you want from life?

PeteP
What the peck?
Posts: 1451
Joined: Tue Aug 23, 2011 4:51 pm UTC

Re: Potential Consequences of AGI

Postby PeteP » Sun Apr 29, 2012 11:18 pm UTC

A set of algorithms for everything humans can do would be nice too.
How about this: "A program which can learn to perform any mental task that humans can perform, as well as a talented human, without being explicitly programmed to perform said task." Maybe add a few restrictions if you think certain tasks require feelings.

I would find an AI with feelings more interesting; there would probably be an AI rights movement (which I would join), and you could populate virtual worlds with them.
Without feelings, well, an AI which can run on a normal PC is a very cheap worker; it would quickly replace most human jobs that don't require physical activity. Even if the AI had to go through the same learning process as a human to acquire a certain skill, after that you could just duplicate it. Society would have to change quite a bit if AIs did most jobs, but I'm not in the mood to speculate how the change would happen and what the end result would be.
People would probably use AIs as personal helpers on their computers: "Write X an email saying Y", "Collect data about X for me", "There are new, better AIs out; please select an AI with which I can replace you", "Contact Jim's AI and find out whether Jim has time next Sunday."
I'm too tired to speculate now; I might come back later.

fr00t
Posts: 113
Joined: Wed Jul 15, 2009 11:06 am UTC

Re: Potential Consequences of AGI

Postby fr00t » Mon Apr 30, 2012 7:02 pm UTC

Honestly, your question assumes/glosses over so much that there isn't a meaningful answer. Or maybe "depending on implementation, anything conceivable and some things that aren't". What does obedient mean? Why can't it have emotions? What if they program it to have its own agency, desires, and sense of purpose? Why isn't it twice as smart as a human, or a thousand times as smart, and by what metric? Is there some meaningfully objective scale of intelligence?

My actual opinion on this subject in general, which I would not usually espouse in meat-space, is that (probably not in my lifetime), AI will manifestly change the human experience by a degree unprecedented by prior technological development. It's not a question of "how many jobs will AI displace" but more like "how long until the last organic human mind is digitized". I hope, in a weird sort of quasi-mystical way, that the transition is done with grace and efficiency, and that we keep the good human values like curiosity and love and leave behind the bad ones, like superstition and hierarchy; but ultimately I have little sentimentality towards my society and species, with our pressurized fluid sack bodies and brutish internal combustion engines.

In short, I don't think that there is a chance that the popular sci-fi interpolation of AI will come true, wherein monkeys fly space ships around and all the AI does is brew coffee and fail to understand sarcasm.

Mr. Timms
Posts: 67
Joined: Sat Mar 15, 2008 12:59 am UTC
Location: Phoenix, AZ

Re: Potential Consequences of AGI

Postby Mr. Timms » Tue May 01, 2012 11:57 pm UTC

Ulc wrote:I think this topic lacks something very basic, but the absence makes the topic nonsensical.

Namely, please define what an AI is. Because from the description you give, it seems to be "a set of algorithms" for each possible circumstance. If it doesn't possess free will (the capability of saying "oh fuck you, I'm taking my toys and going home" to its maker, even if it's a remote chance), and is unable to feel pain, pleasure, or emotion of any kind, I don't really consider it "intelligence", but rather "algorithms for number crunching".


Let's just say that it's a massive and complex neural network with several subnetworks designed to work together in a manner similar to the human brain, with one subunit in particular controlling the growth, development, and organization of the whole. It is grown from an originally small network containing the essential framework for that growth, and trained to learn and act intelligently. Let's also say that modern computers are capable of running it. Maybe it executes in the 'magic smoke' processing megamatrix, because such a thing exists.

Technically, that would be a set of algorithms, but it would be a very interesting set of algorithms, all interacting with each other in a way that would make a very nice screensaver and/or robot slave.
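For what it's worth, the "several subnetworks wired together" picture can at least be doodled. Here is a toy sketch with random, untrained layers; the names and dimensions are invented for illustration, and nothing about it is brain-like beyond the wiring diagram:

```python
import numpy as np

rng = np.random.default_rng(0)

def subnet(in_dim, out_dim):
    """A single 'subnetwork': one fixed random linear layer plus tanh."""
    w = rng.normal(size=(in_dim, out_dim))
    return lambda x: np.tanh(x @ w)

# Two specialist subnetworks feeding a third that combines their outputs,
# loosely in the spirit of cooperating regions.
vision = subnet(8, 4)
hearing = subnet(6, 4)
combine = subnet(8, 2)

def forward(image, sound):
    # Each specialist processes its own input; the combiner sees both.
    return combine(np.concatenate([vision(image), hearing(sound)]))

out = forward(rng.normal(size=8), rng.normal(size=6))
print(out.shape)  # (2,)
```

The growth-controlling subunit Mr. Timms describes has no analogue here; adding one is exactly the unsolved part.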



fr00t wrote:Honestly, your question assumes/glosses over so much that there isn't a meaningful answer. Or maybe "depending on implementation, anything conceivable and some things that aren't". What does obedient mean? Why can't it have emotions? What if they program it to have its own agency, desires, and sense of purpose? Why isn't it twice as smart as a human or a thousand, and by what metric, or is there some meaningfully objective scale of intelligence?


Obedience means it does what it's told, and doesn't do what it isn't. It can't have emotions because emotions are destabilizing things, and they would make its invention harder. Giving it agency of its own would require that it have desires, and I suppose if it were ordered to, it could probably modify its decision-making to account for a whole new subsystem. A sense of purpose would be about the same as desires, just stronger. I don't believe there is a meaningfully objective scale of intelligence, but I also don't believe comparing intelligence matters; only speed and correctness do.

fr00t wrote:My actual opinion on this subject in general, which I would not usually espouse in meat-space, is that (probably not in my lifetime), AI will manifestly change the human experience by a degree unprecedented by prior technological development. It's not a question of "how many jobs will AI displace" but more like "how long until the last organic human mind is digitized". I hope, in a weird sort of quasi-mystical way, that the transition is done with grace and efficiency, and that we keep the good human values like curiosity and love and leave behind the bad ones, like superstition and hierarchy; but ultimately I have little sentimentality towards my society and species, with our pressurized fluid sack bodies and brutish internal combustion engines.

In short, I don't think that there is a chance that the popular sci-fi interpolation of AI will come true, wherein monkeys fly space ships around and all the AI does is brew coffee and fail to understand sarcasm.


Nice way to put it. I believe in human digitization as well (I think it's inevitable), but I was thinking more about how technology like this would change our society as it is today. Assuming that the AI has no anthropomorphic desires/biases/emotions, at least to begin with.

Vaniver
Posts: 9422
Joined: Fri Oct 13, 2006 2:12 am UTC

Re: Potential Consequences of AGI

Postby Vaniver » Wed May 02, 2012 1:36 am UTC

So, there are many ways this sort of scenario could play out. Here's a handful:

1. FOOM. The AI applies its intelligence to becoming more intelligent- designing better circuits, better software, better fab labs. It takes up more and more volume and energy. At some point, it becomes intelligent / strong enough that humanity would lose any conflict between humanity and the AI- and so the AI is able to apply its goals (morality) to the Earth. If it's a human-friendly one, that could mean human lives improve significantly. If it's not- even if it just thinks humans are dangerous or wasting resources- then it's game over for humanity. See Less Wrong or the SIAI for more.

2. Uploads. Creating intelligent software turns out to be more difficult than uploading human brains, and so AGIs are just humans on silicon- probably running at human-level speeds originally, but increasing in speed as more and more computing power is devoted to it. Speedups of a million times are not out of the question. Uploads will be cheap to make and duplicate, meaning that while the psychology may be very similar, human society may be very different. The only uploads might be of the brightest workaholics, who work for a subsistence wage (because if wages are any higher, another copy will be created to drive down wages). See Robin Hanson for more.

3. Tools. AGI is, for whatever reason, not developed- instead, things like decision support systems are slowly given more and more control. Human society gets smarter and richer, but more and more jobs are given to the machines, with an emphasis on the plural. Self-driving cars reduce fatalities and free up more time, which among other things makes longer commutes more palatable. Truckers and taxi drivers are put out of business. Medical diagnostic systems can draw on a century of data from billions of patients- but most doctors find their skills are no longer needed. Humans can consume and experience much more, but almost all find their productive efforts diverted to toys and amusements rather than substantial work.
I mostly post over at LessWrong now.

Avatar from My Little Pony: Friendship is Magic, owned by Hasbro.

Mr. Timms
Posts: 67
Joined: Sat Mar 15, 2008 12:59 am UTC
Location: Phoenix, AZ

Re: Potential Consequences of AGI

Postby Mr. Timms » Wed May 02, 2012 2:16 am UTC

Vaniver wrote:
Spoiler:
So, there are many ways this sort of scenario could play out. Here's a handful:

1. FOOM. The AI applies its intelligence to becoming more intelligent- designing better circuits, better software, better fab labs. It takes up more and more volume and energy. At some point, it becomes intelligent / strong enough that humanity would lose any conflict between humanity and the AI- and so the AI is able to apply its goals (morality) to the Earth. If it's a human-friendly one, that could mean human lives improve significantly. If it's not- even if it just thinks humans are dangerous or wasting resources- then it's game over for humanity. See Less Wrong or the SIAI for more.

2. Uploads. Creating intelligent software turns out to be more difficult than uploading human brains, and so AGIs are just humans on silicon- probably running at human-level speeds originally, but increasing in speed as more and more computing power is devoted to it. Speedups of a million times are not out of the question. Uploads will be cheap to make and duplicate, meaning that while the psychology may be very similar, human society may be very different. The only uploads might be of the brightest workaholics, who work for a subsistence wage (because if wages are any higher, another copy will be created to drive down wages). See Robin Hanson for more.

3. Tools. AGI is, for whatever reason, not developed- instead, things like decision support systems are slowly given more and more control. Human society gets smarter and richer, but more and more jobs are given to the machines, with an emphasis on the plural. Self-driving cars reduce fatalities and free up more time, which among other things makes longer commutes more palatable. Truckers and taxi drivers are put out of business. Medical diagnostic systems can draw on a century of data from billions of patients- but most doctors find their skills are no longer needed. Humans can consume and experience much more, but almost all find their productive efforts diverted to toys and amusements rather than substantial work.


Oh good, someone native to FAI. Let me pose a question to you. Friendly AI is an issue, but has the Singularity Institute considered how the volatility of humanity may come into play with its development? How can the AI be handled in a way to bring about human transcendence into technology without the world bringing about a BAD END all on their own?

krogoth
Posts: 411
Joined: Wed Feb 04, 2009 9:58 pm UTC
Location: Australia

Re: Potential Consequences of AGI

Postby krogoth » Thu May 03, 2012 12:38 am UTC

The internet slows down.
R3sistance - I don't care at all for the ignorance spreading done by many and to the best of my abilities I try to correct this as much as I can, but I know and understand that even I can not be completely honest, truthful and factual all of the time.

Vaniver
Posts: 9422
Joined: Fri Oct 13, 2006 2:12 am UTC

Re: Potential Consequences of AGI

Postby Vaniver » Sun May 06, 2012 5:41 am UTC

Mr. Timms wrote:Oh good, someone native to FAI. Let me pose a question to you. Friendly AI is an issue, but has the Singularity Institute considered how the volatility of humanity may come into play with its development? How can the AI be handled in a way to bring about human transcendence into technology without the world bringing about a BAD END all on their own?
Which sort of volatility are you talking about?

Human values seem to be competitive- most people don't just want to be comfortable, they want to be better off than their neighbors. Those sorts of preferences can't be simultaneously fulfilled- without some sort of trickery / rationalization. Trickery would be something like Alice and Beatrice both seeing each other as less attractive than they actually are; rationalization would be something like Alice valuing height more and Beatrice valuing weight more, so both of them seem more attractive by their internal metric.

Human values seem to be contradictory- most people describe themselves as doing things they don't want to do and not doing things that they do want to do. I might sign up for online classes on topics I say are interesting, but then play Tetris instead of watching the lectures and doing the homework. Should a FAI encourage me to take the classes, or encourage me to play Tetris?

Human societies seem to be suspicious- if SIAI gets close to finishing an AI project, they'll probably receive several visits from intelligence agencies, and possibly militaries. The US and Israel are waging a shadow war against the Iranian nuclear program- and the destructive potential of nuclear weapons is far less than a rogue Strong AI. The temptation to attempt to take over any AI projects will undoubtedly be strong.

SIAI is actively thinking about all of those issues. I think their thinking is a little soft on the first, but one's public statements on the first issue impact one's outcomes on the third.

