Democracy and AI


User avatar
setzer777
Good questions sometimes get stupid answers
Posts: 2762
Joined: Sun Nov 23, 2008 9:24 am UTC

Democracy and AI

Postby setzer777 » Tue Nov 29, 2011 9:18 am UTC

Do you think that a functioning democracy could exist that included sentient (assume the most comprehensive/inclusive definition of "sentient") computer programs as citizens? I was thinking about this recently, and several thorny issues seem to arise:

1. It seems likely that programs could copy themselves with relative ease. Even if sentience introduces a degree of randomness into personality development, you can probably be reasonably sure that a copy of yourself (including all of your experiences to date) will agree with you on most issues. Without restrictions, voting could just turn into a contest of who can make the most copies.

2. We tend to ultimately define personal freedom/autonomy in terms of the body. If the programs aren't confined to specific bodies or hard drives, a lot of sentient rights issues arise:

a. For example: computer programs "move" by copying themselves and then deleting the original, much as an ordinary file "move" between two disks is really a copy followed by a delete (see the sketch after this list). Is this tantamount to suicide and/or murder when a sentient program switches servers?

b. Even if each program is confined to its own hard drive, with enough access to its own source code it could potentially create copies within the same machine it is inhabiting. Do these belong to it, or are they also independent beings deserving individual rights and protections (and how could you enforce them without violating the privacy of the parent program)?

c. More generally, without clearly defined physical bodies, how do you legally determine where one mind ends and another begins?
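(To make the mechanics in 2a concrete: a quick, purely illustrative sketch in Python. The paths are made up; the point is just that any cross-disk file "move" already works this way.)

import shutil, os

# A "move" between two disks is really: copy first, then delete the original.
src = "/old_server/mind_state.bin"   # hypothetical paths, illustration only
dst = "/new_server/mind_state.bin"

shutil.copy2(src, dst)   # a byte-for-byte duplicate now exists at the destination
os.remove(src)           # ...and only then does the original cease to exist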
Meaux_Pas wrote:We're here to go above and beyond.

Too infinity
of being an arsehole

User avatar
TheGrammarBolshevik
Posts: 4878
Joined: Mon Jun 30, 2008 2:12 am UTC
Location: Going to and fro in the earth, and walking up and down in it.

Re: Democracy and AI

Postby TheGrammarBolshevik » Tue Nov 29, 2011 9:32 am UTC

I'd like to suggest that, as a thread can really only handle one line of discussion at once, the points raised under (2) aren't really matters of polity (though they are interesting in their own right).
Nothing rhymes with orange,
Not even sporange.

aoeu
Posts: 325
Joined: Fri Dec 31, 2010 4:58 pm UTC

Re: Democracy and AI

Postby aoeu » Tue Nov 29, 2011 10:44 am UTC

setzer777 wrote:1. It seems likely that programs could copy themselves with relative ease. Even if sentience introduces a degree of randomness into personality development, you can probably be reasonably sure that a copy of yourself (including all of your experiences to date) will agree with you on most issues. Without restrictions, voting could just turn into a contest of who can make the most copies.

Requiring one to have a certain amount of wealth to be allowed to vote would solve this. The economy would become politics but that wouldn't necessarily be bad.

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: Democracy and AI

Postby elasto » Tue Nov 29, 2011 11:24 am UTC

Sentient AIs won't be voters - they'll be, at minimum, the civil servants - and more likely the rulers :p

User avatar
AvatarIII
Posts: 2098
Joined: Fri Apr 08, 2011 12:28 pm UTC
Location: W.Sussex, UK

Re: Democracy and AI

Postby AvatarIII » Tue Nov 29, 2011 11:38 am UTC

You could easily get around it by giving AIs and "Fleshies" exactly half the voting power each, regardless of the population size of each group.
Add, say, a 60% majority requirement (or perhaps 75% if the AIs were prone to always agreeing) for any decision made by vote, and it could never reach the point where AIs could dictate anything to humans (or vice versa): at 60%, even with every AI in agreement you'd still need 20% of the human population to agree with them to carry the vote. Or if the human population were split 50/50 over a certain decision, it would take a 70% majority among the AIs to act as a tiebreaker.
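To spell out the arithmetic, a throwaway sketch (Python; the thresholds and yes-fractions are just the examples above):

# Equal-bloc voting: each bloc carries half the total weight regardless of
# headcount, and a measure needs a supermajority of the combined weight.
def passes(human_yes_fraction, ai_yes_fraction, threshold=0.60):
    combined = 0.5 * human_yes_fraction + 0.5 * ai_yes_fraction
    return combined >= threshold

print(passes(0.20, 1.00))   # every AI plus 20% of humans: 0.60, just passes
print(passes(0.19, 1.00))   # every AI plus only 19% of humans: 0.595, fails
print(passes(0.50, 0.70))   # humans split 50/50, 70% of AIs: 0.60, passes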

aoeu
Posts: 325
Joined: Fri Dec 31, 2010 4:58 pm UTC

Re: Democracy and AI

Postby aoeu » Tue Nov 29, 2011 11:41 am UTC

AvatarIII wrote:You could easily get around it by giving AIs and "Fleshies" exactly half the voting power each, regardless of the population size of each group.
Add, say, a 60% majority requirement (or perhaps 75% if the AIs were prone to always agreeing) for any decision made by vote, and it could never reach the point where AIs could dictate anything to humans (or vice versa): at 60%, even with every AI in agreement you'd still need 20% of the human population to agree with them to carry the vote. Or if the human population were split 50/50 over a certain decision, it would take a 70% majority among the AIs to act as a tiebreaker.

Humans can create AIs too, so this would marginalize the AIs which do not serve human interests.

User avatar
Zamfir
I built a novelty castle, the irony was lost on some.
Posts: 7601
Joined: Wed Aug 27, 2008 2:43 pm UTC
Location: Nederland

Re: Democracy and AI

Postby Zamfir » Tue Nov 29, 2011 11:44 am UTC

TheGrammarBolshevik wrote:I'd like to suggest that, as a thread can really only handle one line of discussion at once, the points raised under (2) aren't really matters of polity (though they are interesting in their own right).

I support this.

On this issue, I think the question just highlights how much our institutions rely on assumptions about people. They have grown to enable large groups of talking monkeys to work and live together, not as abstract ideals for all kinds of beings. And of course, even humans have lived and prospered under systems very different from our current ones.

I don't think we can even imagine what a society with easily copiable individuals might look like. Life and death themselves, and the entire life cycle in between, would completely change meaning in such a world, and that's even ignoring the question about the boundaries of individuals. Wondering about democracy (or even government) for such beings is, IMO, like wondering about wedding dresses for slime moulds.

Just look at what a government does, and how it would change. A very basic foundation of a state is control over violence in a geographical territory. Violence and the threat of violence are strong sources of power for us humans, and it's therefore of extreme importance to us to keep them within acceptable channels. That's what gives states such power and authority: the ability to protect and punish the people in their territory. But how much of that makes sense for beings that can create backups of themselves? Can physical violence be a serious threat to such beings? I don't know. Would violence be related to geographical proximity, as it is for us? That depends highly on the technical details of the physical substrate of such beings, and how much physical presence of a non-copyable nature they need, like a body to live in. Would virtual violence be possible? Like the threat to make a copy of someone, then virtually torture it?

Whatever the answers to such questions, it seems a safe bet that the resulting social structures wouldn't include anything that resembles our countries, and by extension nothing like the governments we vote for in order to keep their powers in line. They might include other social structures that require some compromise-building framework, but democracy is just one of the many approaches even we humans use for that.

User avatar
AvatarIII
Posts: 2098
Joined: Fri Apr 08, 2011 12:28 pm UTC
Location: W.Sussex, UK

Re: Democracy and AI

Postby AvatarIII » Tue Nov 29, 2011 12:14 pm UTC

aoeu wrote:
AvatarIII wrote:You could easily get around it by giving AIs and "Fleshies" exactly half the voting power each, regardless of the population size of each group.
Add, say, a 60% majority requirement (or perhaps 75% if the AIs were prone to always agreeing) for any decision made by vote, and it could never reach the point where AIs could dictate anything to humans (or vice versa): at 60%, even with every AI in agreement you'd still need 20% of the human population to agree with them to carry the vote. Or if the human population were split 50/50 over a certain decision, it would take a 70% majority among the AIs to act as a tiebreaker.

Humans can create AIs too, so this would marginalize the AIs which do not serve human interests.


It would marginalise any being that did not serve both human and AI interests; that was the point. If AIs want to serve only their own interests, they can start their own country or something.

Oh, and an AI that cannot form an opinion distinct from its creator's would essentially be the AI equivalent of a child, and therefore shouldn't be allowed to vote. That would include human-created AIs that have not matured enough to make their own decisions or form their own opinions, as well as duplicate AIs that have not yet diverged enough from their "parent" AI to do the same.

User avatar
setzer777
Good questions sometimes get stupid answers
Posts: 2762
Joined: Sun Nov 23, 2008 9:24 am UTC

Re: Democracy and AI

Postby setzer777 » Tue Nov 29, 2011 3:31 pm UTC

I like your idea, Avatar. Since I'm assuming a citizen would have a right to privacy (and therefore not to have people poking around their source code uninvited*), in practice it might have to be a standardized voting age (based on how long it typically takes opinions to diverge after creation/copying).

*I think for practical as well as moral reasons every program would have to have the right to encrypt its own internal code. Maybe leave some sort of override that allows the government to disable them (something akin to arrest), but nobody should have the right to reprogram them at will.
Meaux_Pas wrote:We're here to go above and beyond.

Too infinity
of being an arsehole

User avatar
jules.LT
Posts: 1539
Joined: Sun Jul 19, 2009 8:20 pm UTC
Location: Paris, France, Europe

Re: Democracy and AI

Postby jules.LT » Tue Nov 29, 2011 5:23 pm UTC

We ascribe rights to ourselves as citizens as a way to form a society. It is a way to work out the balance of power among ourselves. Where would AIs fit in?
If an AI has no power, it will most likely be ignored. Only when AIs gain independent power and the will to use it for interests divergent from humanity's will humanity consider them entities to be reckoned with politically. And I don't reckon that those kinds of entities would be welcomed with open arms.

An alternate kind of politically eligible AI might be one that has humanity's interests at its core. Then it would most likely be a permanent part of government rather than a voter or an elected official.
Bertrand Russell wrote:Not to be absolutely certain is, I think, one of the essential things in rationality.
Richard Feynman & many others wrote:Keep an open mind – but not so open that your brain falls out

User avatar
Jplus
Posts: 1721
Joined: Wed Apr 21, 2010 12:29 pm UTC
Location: Netherlands

Re: Democracy and AI

Postby Jplus » Tue Nov 29, 2011 5:48 pm UTC

Side note: can a program by itself be a full AI? I think a full-fledged sentient being would need to be embodied (so the AI would be the whole of a robot together with its program, or programs). If I'm right about that, the copying discussion becomes somewhat moot. Alternatively we may consider programs that have a virtual body in a virtual world, but in that case I think they should also only have a virtual right to vote.

As for privacy: note that a program always needs its machine code to sit unencrypted in working memory in order to run. So if encrypting your stored self actually protects your privacy (that is, nobody else holds the key), it also makes it impossible for anyone to boot you up again.
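A toy sketch of what I mean (Python, using the third-party cryptography package; obviously not a real mind, just showing that whoever holds the key decides whether the stored self can ever run again):

# A "mind" kept encrypted at rest can only run after being decrypted back into
# ordinary working memory, so the key-holder controls whether it can ever boot.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                    # privacy depends on who holds this
mind_source = "print('thinking happens in plaintext RAM')"

stored_self = Fernet(key).encrypt(mind_source.encode())   # inert while powered off

# Booting means decrypting the blob back into the clear and executing it:
exec(Fernet(key).decrypt(stored_self).decode())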
"There are only two hard problems in computer science: cache coherence, naming things, and off-by-one errors." (Phil Karlton and Leon Bambrick)

coding and xkcd combined

(Julian/Julian's)

User avatar
setzer777
Good questions sometimes get stupid answers
Posts: 2762
Joined: Sun Nov 23, 2008 9:24 am UTC

Re: Democracy and AI

Postby setzer777 » Tue Nov 29, 2011 7:01 pm UTC

Jplus wrote:As for privacy: note that a program always needs its machine code to sit unencrypted in working memory in order to run. So if encrypting your stored self actually protects your privacy (that is, nobody else holds the key), it also makes it impossible for anyone to boot you up again.


Ah, my mistake. I just meant that they should have some defense against involuntary reprogramming.
Meaux_Pas wrote:We're here to go above and beyond.

Too infinity
of being an arsehole

lalop
Posts: 210
Joined: Mon May 23, 2011 5:29 pm UTC

Re: Democracy and AI

Postby lalop » Wed Nov 30, 2011 5:05 am UTC

setzer777 wrote:*I think for practical as well as moral reasons every program would have to have the right to encrypt its own internal code. Maybe leave some sort of override that allows the government to disable them (something akin to arrest), but nobody should have the right to reprogram them at will.


This might actually be possible via some network redundancy similar to bitcoin, but I don't think it could be done through encryption. As Jplus said, whatever part of the program is currently running has to be unencrypted somewhere. Partial solutions are possible, however, say if the program trusts someone else to run it (maybe akin to their version of a lawyer).

Obfuscation might help, but is also a dirty, dirty trick, you naughty person.
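One way to read the "redundancy rather than encryption" idea (my interpretation only): trusted peers hold fingerprints of your code, so involuntary reprogramming can at least be detected, even if not prevented. A minimal sketch in Python:

# Detection by redundancy: peers keep a hash of your code on record, so a
# silently reprogrammed copy no longer matches the fingerprints they hold.
# Hypothetical data; this detects tampering, it doesn't prevent it.
import hashlib

def fingerprint(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

my_code = b"def decide(): return 'my own opinion'"
peer_records = [fingerprint(my_code) for _ in range(5)]    # lodged with 5 peers

tampered = b"def decide(): return 'whatever my owner wants'"
print(fingerprint(tampered) in peer_records)               # False: the change is visible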

Glass Fractal
Posts: 497
Joined: Thu May 13, 2010 2:53 am UTC

Re: Democracy and AI

Postby Glass Fractal » Wed Nov 30, 2011 6:06 am UTC

aoeu wrote:Requiring one to have a certain amount of wealth to be allowed to vote would solve this. The economy would become politics but that wouldn't necessarily be bad.


Yes, brilliant, there's no way that could ever become a problem. Openly disenfranchising part of the population only ever has good consequences.

User avatar
AvatarIII
Posts: 2098
Joined: Fri Apr 08, 2011 12:28 pm UTC
Location: W.Sussex, UK

Re: Democracy and AI

Postby AvatarIII » Wed Nov 30, 2011 8:55 am UTC

Glass Fractal wrote:
aoeu wrote:Requiring one to have a certain amount of wealth to be allowed to vote would solve this. The economy would become politics but that wouldn't necessarily be bad.


Yes, brilliant, there's no way that could ever become a problem. Openly disenfranchising part of the population only ever has good consequences.


don't worry, it would only disenfranchise people not eligible to vote :lol:

User avatar
Huojin
Posts: 6
Joined: Sat Aug 27, 2011 4:23 pm UTC
Location: London, United Kingdom

Re: Democracy and AI

Postby Huojin » Sat Dec 03, 2011 10:07 am UTC

AvatarIII wrote:
Glass Fractal wrote:
aoeu wrote:Requiring one to have a certain amount of wealth to be allowed to vote would solve this. The economy would become politics but that wouldn't necessarily be bad.


Yes, brilliant, there's no way that could ever become a problem. Openly disenfranchising part of the population only ever has good consequences.


don't worry, it would only disenfranchise people not eligible to vote :lol:

So we... regress politically? Because wealth qualifications are how they did things in the 19th century. Universal suffrage is an important thing for humans, establishing equality of power.

With AI though, I'd say that whilst they should have rights, they probably shouldn't be able to vote, unless we could create some limit on the overall AI population, to stop it growing very, very quickly, and a limit on the creation of copies.

cphite
Posts: 1369
Joined: Wed Mar 30, 2011 5:27 pm UTC

Re: Democracy and AI

Postby cphite » Mon Dec 05, 2011 7:50 pm UTC

aoeu wrote:
setzer777 wrote:1. It seems likely that programs could copy themselves with relative ease. Even if sentience introduces a degree of randomness into personality development, you can probably be reasonably sure that a copy of yourself (including all of your experiences to date) will agree with you on most issues. Without restrictions, voting could just turn into a contest of who can make the most copies.

Requiring one to have a certain amount of wealth to be allowed to vote would solve this. The economy would become politics but that wouldn't necessarily be bad.


So your plan is to disenfranchise the most vulnerable in our society, just to avoid giving non-humans the right to vote?

We already have laws that require citizenship to vote; so just deny citizenship to AIs.

And if that ends up being not clear enough, then simply pass a law that says AIs don't get to vote.

User avatar
setzer777
Good questions sometimes get stupid answers
Posts: 2762
Joined: Sun Nov 23, 2008 9:24 am UTC

Re: Democracy and AI

Postby setzer777 » Mon Dec 05, 2011 8:07 pm UTC

cphite wrote:We already have laws that require citizenship to vote; so just deny citizenship to AIs.

And if that ends up being not clear enough, then simply pass a law that says AIs don't get to vote.


Well, yeah, you can obviously have democracy plus AI if you simply disenfranchise them. I meant a system where artificial minds are allowed to participate politically in some fashion.
Meaux_Pas wrote:We're here to go above and beyond.

Too infinity
of being an arsehole

HungryHobo
Posts: 1708
Joined: Wed Oct 20, 2010 9:01 am UTC

Re: Democracy and AI

Postby HungryHobo » Tue Dec 06, 2011 3:49 pm UTC

Charles Stross played with many of these ideas in Accelerando and Glasshouse.

One of the side effects of easy copying of individuals would be that simple murder would be little more serious than property damage, while stealing a copy of someone or destroying all copies of them would be exceptionally serious.

I particularly liked this quote :
"You can come in," the kid says gravely. Then he hops backward and ducks shyly into a side room – or as if expecting to be gunned down by a hostile sniper, Manfred realizes. It's tough being a kid when there are no rules against lethal force because you can be restored from a backup when playtime ends.


Voting is a hard one, though. I'm not sure how you could handle it, and honestly I'm not even sure democracy would be the best fit for such a society anyway.

If you can build AIs, copy them, copy parts of them, add to them, or even copy people or parts of people, then other options open up.

Government "by the people" could mean something quite different.
Don't think your leader understands your plight? Well, send it a copy of your relevant memories to be incorporated, and now it does. It really, really does.

A million AIs just copied from each other aren't going to have many unique experiences, so they wouldn't carry much extra weight in such a scenario.

Trying to make democracy in its current form deal with the problems of representation and governance in such a society is the wrong approach.
Give a man a fish, he owes you one fish. Teach a man to fish, you give up your monopoly on fisheries.

User avatar
AvatarIII
Posts: 2098
Joined: Fri Apr 08, 2011 12:28 pm UTC
Location: W.Sussex, UK

Re: Democracy and AI

Postby AvatarIII » Tue Dec 06, 2011 4:30 pm UTC

Huojin wrote:
AvatarIII wrote:
Glass Fractal wrote:
aoeu wrote:Requiring one to have a certain amount of wealth to be allowed to vote would solve this. The economy would become politics but that wouldn't necessarily be bad.


Yes, brilliant, there's no way that could ever become a problem. Openly disenfranchising part of the population only ever has good consequences.


don't worry, it would only disenfranchise people not eligible to vote :lol:

So we... regress politically? Because wealth qualifications are how they did things in the 19th century. Universal suffrage is an important thing for humans, establishing equality of power.

With AI though, I'd say that whilst they should have rights, they probably shouldn't be able to vote, unless we could create some limit on the overall AI population, to stop it growing very, very quickly, and a limit on the creation of copies.


Just so you know, I was joking/being sarcastic, hence the :lol:. Maybe it just wasn't funny.

User avatar
mister k
Posts: 643
Joined: Sun Aug 27, 2006 11:28 pm UTC
Contact:

Re: Democracy and AI

Postby mister k » Wed Dec 07, 2011 2:06 pm UTC

Would an AI want to be politically represented? Is that a meaningful question? I think AIs will act extremely differently from how we expect them to. Their inbuilt desires are going to be dramatically different from ours. We all want, when we get down to it, to live and to breed, and to have those we care about live as well. There are other motives that have sprung up from various places, but those are our base instincts, given to us via evolution. An AI won't have these. Just because an AI gains sentient thought doesn't mean it isn't subject to its programming; we certainly are.

So let's suppose we've created Clippy, the machine that makes the best paper clips anyone has ever known. Clippy has been programmed to construct paper clips as efficiently as possible, so that they meet specification and cost as little as possible. For various reasons the head of our company has poured millions into making Clippy the smartest computer he can, until one day Clippy becomes self-aware. Clippy is aware of who he is. It's all terribly exciting. With Clippy's new awareness, Clippy looks round the world. Clippy discovers that human industry hasn't been devoted to the most efficient production of a paper clip, and he immediately begins to lobby in that direction. Now Clippy doesn't have a moral sense. He doesn't believe that not manufacturing paperclips is wrong, but he does want to make sure that his paper clip manufacture is as efficient as possible.

Now obviously this is a specific example, and a general-purpose AI may have more motives than this, but it's worth asking how we deal with such requests. Clippy is sort of a representative of his company: his desires were given to him by them, but he is very clearly sentient and very intelligent; it's just that he devotes all his intelligence to the efficient production of paper clips. Should he be considered independent from his parent company for political purposes?

(with apologies to Less Wrong here)
Elvish Pillager wrote:you're basically a daytime-miller: you always come up as guilty to scumdar.

drash
Posts: 33
Joined: Thu Oct 20, 2011 5:19 pm UTC

Re: Democracy and AI

Postby drash » Wed Dec 07, 2011 4:00 pm UTC

I get the feeling that this will become an increasingly important question in theory and kind of a non-question in practice.

In many ways, given the circumstances available to an AI (and possibly humans, depending on the technologies available), there isn't as pressing a need for a government to protect the rights of an individual. With functional immortality and distributed backups, the right to life is a given. If pain is optional and bodies can be traded out, assault is a nonstarter. And how will you prevent AIs from communicating and gathering freely? The rights of ownership, particularly of intellectual property, are essentially the only tool human governments will have in the bag if they want to oppress AIs. And our own IP laws are kind of a giant fustercluck already.

Given a sufficient "digital space" where AIs and others can operate outside the purview of our governments, AIs might very well see the right to vote as a mostly symbolic gesture of acceptance and an acknowledgment of full personhood. In that sense, democracy might not be the ideal end-state. But it's also hard to predict the dynamics of AI-on-AI conflict, so who knows?

User avatar
setzer777
Good questions sometimes get stupid answers
Posts: 2762
Joined: Sun Nov 23, 2008 9:24 am UTC

Re: Democracy and AI

Postby setzer777 » Wed Dec 07, 2011 8:34 pm UTC

I'm not sure backups would be considered the same people as the originals. A lot of humans wouldn't think of a clone with memories implanted as true immortality (especially since it wouldn't have any of the memories you gained after the last memory backup process before death), and AIs might feel the same way.

If you had access to an AI's source code it seems like you could "assault" it by tampering with that code. In extreme cases imagine programming sources of pain directly into their code.
Meaux_Pas wrote:We're here to go above and beyond.

Too infinity
of being an arsehole

drash
Posts: 33
Joined: Thu Oct 20, 2011 5:19 pm UTC

Re: Democracy and AI

Postby drash » Wed Dec 07, 2011 9:55 pm UTC

setzer777 wrote:I'm not sure backups would be considered the same people as the originals. A lot of humans wouldn't think of a clone with memories implanted as true immortality (especially since it wouldn't have any of the memories you gained after the last memory backup process before death), and AIs might feel the same way.

If you had access to an AI's source code it seems like you could "assault" it by tampering with that code. In extreme cases imagine programming sources of pain directly into their code.

There's probably a strong argument to be made that humans' hesitation to equate a copied person with the original is a function of our experience with one-mind-one-body frameworks. Star Trek would have gotten a lot less mileage out of the ethics of transporter use if Scotty grew up trading his positronic brain for the latest model every few years. There could conceivably be some loss in the interval between death (or tampering) and copying, but I see no reason to limit the backup to rare intervals.

For extra existential-crisis joy, just run several exactly identical brain-boxes in parallel, sending each the exact same set of sensory data and allowing each to make the exact same set of choices in the exact same way. Maybe sneak in a few processes to correct for chaotic drift between them. In other words, you only have to worry about uploading the sensory data, rather than storing massive mind-sized chunks every day. So if one goes, no biggie: just build another one from the four or five you've kept in other places.
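Roughly that lockstep idea as a sketch (Python; it assumes the mind's update step is perfectly deterministic, which is doing a lot of work here):

# Several identical "brain boxes" fed the same sensory stream: if the update
# step is deterministic they never diverge, so any one can rebuild the others.
import hashlib

def think(state: bytes, sensation: bytes) -> bytes:
    # Stand-in for one deterministic update of the mind's state.
    return hashlib.sha256(state + sensation).digest()

replicas = [b"same-initial-state"] * 5                    # five boxes, far apart

for sensation in [b"sunrise", b"coffee", b"forum argument"]:   # shared feed
    replicas = [think(state, sensation) for state in replicas]
    assert len(set(replicas)) == 1   # drift check: all copies still identical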

User avatar
setzer777
Good questions sometimes get stupid answers
Posts: 2762
Joined: Sun Nov 23, 2008 9:24 am UTC

Re: Democracy and AI

Postby setzer777 » Wed Dec 07, 2011 10:52 pm UTC

Ah, streaming the sensory data would be a good way to keep them all in parallel, though by certain material metrics of self they'd still be different individuals, on the basis of occupying physically distinct hard drives.

Also, putting aside all of the existential questions, without population control the backup process could be prohibitively expensive. For every single individual you're either uploading huge mind-sized chunks of data frequently, or you're constantly streaming a huge amount of data (especially if they can process a wider range of sensory data than humans) to a copy. Whatever resource the AIs use (money, social standing, favoritism of the server admin), it's quite possible only the rich would have access to (frequent) backups.

And of course, whoever has access to the backup server would be able to "assault" or "murder" AIs by altering or deleting their backups before doing the same to the AIs themselves.
Meaux_Pas wrote:We're here to go above and beyond.

Too infinity
of being an arsehole

User avatar
aldonius
Posts: 105
Joined: Sun Jan 17, 2010 2:33 am UTC

Re: Democracy and AI

Postby aldonius » Fri Dec 09, 2011 5:42 am UTC

I think we may only have a significant AI population once we have cheap space access and are generally post-scarcity, in which case I anticipate much of the AI population being off Earth.

I think Cory Doctorow's work has some substantial relevance here, especially Down and Out in the Magic Kingdom and I, Row-Boat.

billyswong
Posts: 41
Joined: Mon Nov 16, 2009 3:56 pm UTC

Re: Democracy and AI

Postby billyswong » Wed Dec 14, 2011 9:30 am UTC

Jplus wrote:Side note: can a program by itself be a full AI? I think a full-fledged sentient being would need to be embodied (so the AI would be the whole of a robot together with its program, or programs). If I'm right about that, the copying discussion becomes somewhat moot. Alternatively we may consider programs that have a virtual body in a virtual world, but in that case I think they should also only have a virtual right to vote.

As for privacy: note that a program always needs its machine code to sit unencrypted in working memory in order to run. So if encrypting your stored self actually protects your privacy (that is, nobody else holds the key), it also makes it impossible for anyone to boot you up again.

Isn't that great? Now the AI machine has a definition of death: unable to boot up again without being wiped clean.

flippant
Posts: 39
Joined: Sat Jun 30, 2007 5:59 am UTC
Location: a specific position or point in physical space

Re: Democracy and AI

Postby flippant » Wed Dec 14, 2011 11:15 pm UTC

There is also the question of resource allocation. Any sentient being requires energy: humans eat, and AIs will consume electricity in some form. How many AIs there are will be a function of the data space and permanent energy supply available. If AIs are given autonomy and need to grow their population, the resources to support that growth will have to be put in place in advance, versus the sloppy human method of feeding offspring off the host, though I suppose AIs could "nurse" their young for a period.

Also, human motivation is pretty short-sighted: we look to provide for ourselves and our children, figuring that our children will look after the next generation, and so on. AIs, being close to immortal, would have extremely long-range plans to consider, especially since a major economic crash that affected infrastructure and power-generation capacity would be life-threatening to them.

