The Thread To Remind Me We're Living In The Future

Seen something interesting in the news or on the intertubes? Discuss it here.

Moderators: Zamfir, Hawknc, Moderators General, Prelates

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: The Thread To Remind Me We're Living In The Future

Postby KnightExemplar » Thu Jun 01, 2017 3:57 pm UTC

At which point, we are no longer trying to build the best AIs. We are artificially limiting the AI's biggest advantages in an attempt to humanize them. Which could be useful for some definition of "intelligence", I guess. But no one would ever claim that the DeepMind team made the best StarCraft AI that they could.

Furthermore: there will always be detractors. Even if APM weren't limited, the super-human precision of the AI would be able to do things like Marine Splitting far better than a human (who has to move->point->click with accuracy, or use the Ctrl-1 or Ctrl-2 groups ahead of time). Even with APM-limitations, a StarCraft AI would say... perfectly dodge Lurkers, a difficult feat for a human.

As far as "artificial limitations", I think I prefer the "Watts of Power Used" limitation. Chess-bots can beat humans even when running on Cell Phones (that is: Chess Bots are superior even when using the same amount of power as a human for computation). The human body is estimated to consume about 100W of power, so for a "fair" fight in any game, you should limit a computer to only 100W (or maybe 20W, the estimated power of the brain).
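To make that concrete, here's a back-of-the-envelope sketch of the energy budgets involved. The 100 W body and 20 W brain figures are the common estimates mentioned above; the ~5 W figure for a phone SoC running an engine is my assumption:

```python
# Rough energy spent over a 90-minute game, using the estimates above.
# ~100 W whole body, ~20 W brain are common estimates; ~5 W for a phone
# SoC running a chess engine is an assumption for the sketch.
GAME_SECONDS = 90 * 60

human_body_j = 100 * GAME_SECONDS   # joules burned by the whole body
human_brain_j = 20 * GAME_SECONDS   # joules burned by the brain alone
phone_engine_j = 5 * GAME_SECONDS   # joules burned by a phone-hosted engine

print(human_body_j, human_brain_j, phone_engine_j)  # 540000 108000 27000
```

On those numbers, a phone-hosted engine that beats a grandmaster is winning on a fraction of the brain's energy budget, which is the sense in which the Watts limitation already favors the machine.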

------------

I guess it's more interesting to me to see the Go or Poker AIs go all out, because in those games the AI has no artificial limitations. They're playing the game exactly as it was intended.
First Strike +1/+1 and Indestructible.

Zohar
COMMANDER PORN
Posts: 7547
Joined: Fri Apr 27, 2007 8:45 pm UTC
Location: Denver

Re: The Thread To Remind Me We're Living In The Future

Postby Zohar » Thu Jun 01, 2017 5:17 pm UTC

Limitations might lead to more efficient methods though. You might decide to teach the computer Starcraft in order to later solve a bigger and more complicated problem, one in which the computer doesn't have the luxury of performing thousands of actions per second. In that case it could make sense to limit the computer to lower APM rates so it has to become actually better and not just out-micro human players.
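For what it's worth, an APM cap like that is mechanically trivial to bolt on. A token-bucket rate limiter in front of the agent's action queue would do it; this is a hypothetical sketch, not how any real StarCraft bot is built:

```python
import time

class APMLimiter:
    """Token-bucket limiter: the agent may act only while tokens remain.
    Tokens refill at `apm` actions per minute, up to a small burst cap."""
    def __init__(self, apm=300, burst=10):
        self.rate = apm / 60.0          # tokens per second
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def try_act(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True                 # action allowed
        return False                    # agent must wait (or re-plan)

limiter = APMLimiter(apm=300)
# In a tight loop, only the initial burst of actions gets through.
allowed = sum(limiter.try_act() for _ in range(100))
```

The burst parameter matters: human pros also spike well above their average APM during battles, so a pure fixed-interval cap would be a harsher limit than "human-like".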

And, of course, while a scientific development might not be immediately useful, that doesn't mean it's without worth, or that you won't find something worthwhile to do with it in the future.
Mighty Jalapeno: "See, Zohar agrees, and he's nice to people."
SecondTalon: "Still better looking than Jesus."

Not how I say my name

Yakk
Poster with most posts but no title.
Posts: 11047
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: The Thread To Remind Me We're Living In The Future

Postby Yakk » Thu Jun 01, 2017 6:54 pm UTC

The goal of making AI game players is not to win the game. The goal is to do interesting AI work.

Winning in uninteresting ways is uninteresting. I.e., imagine it turns out that there is a sequence of mouse movements, impossible for any human to reproduce, that causes you to win StarCraft instantly.

Winning that way is uninteresting.

Giving it infinite APM is *less interesting* than doing it with APM in the region of a human player. Other restrictions can also make it more interesting (e.g., imagine the computer having to make APM/click-accuracy tradeoffs), but that one (limited APM) is an easy one to start with, and it makes any victory more interesting.

The infinite micro techniques will continue to be researched by people (and as they are easier, by smaller teams). By walling those off, they avoid "wasting" time on the relatively uninteresting infinite micro techniques; instead, "where do you spend APM" becomes an interesting problem, as well as determining the foe's build plan, constraining foes choices, hiding your own build plan, guessing where units are, etc.

Do not worry: as they get better they'll have the ability to turn off APM restrictions and hack in infinite-APM sub-strategies, and possibly that bot will beat humans before the "low APM" version does. And eventually you'll get an AI that can beat a human at 100 APM or lower.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

Zamfir
I built a novelty castle, the irony was lost on some.
Posts: 7312
Joined: Wed Aug 27, 2008 2:43 pm UTC
Location: Nederland

Re: The Thread To Remind Me We're Living In The Future

Postby Zamfir » Thu Jun 01, 2017 7:17 pm UTC

KnightExemplar wrote:We are artificially limiting the AI's biggest advantages in an attempt to humanize them

Don't think about it as limiting the agent, but adding an extra rule to the game. That creates a new game, let's call it action-limited StarCraft, or ALS. This is likely a more interesting game than ordinary StarCraft, for the simple reason that StarCraft was designed and tuned with human reflexes as an implicit part of the game. In that sense, ALS is closer to the designed game than unlimited StarCraft.

There is a movie called Shaolin Soccer, about magic kung-fu monks who play soccer. By the end of the movie, the soccer games consist solely of fireball shots straight at the goalkeeper, stopped or unstopped depending on the kung-fu strength of the goalkeeper. Which might well be the superior soccer strategy if you could shoot 1000mph balls on target, but it's hardly soccer anymore.

In the end, playing StarCraft (or chess) is hardly very interesting in itself. Absent a Space Jam scenario, we don't need a superior StarCraft-playing engine. The interest is mostly as a sandboxed simulacrum of real life problem solving, which doesn't work if some speed trick simplifies the game too much.

LaserGuy
Posts: 4390
Joined: Thu Jan 15, 2009 5:33 pm UTC

Re: The Thread To Remind Me We're Living In The Future

Postby LaserGuy » Thu Jun 01, 2017 8:02 pm UTC

Yeah, I think having the AI player being able to do something that is physically impossible for the human to do means that it isn't actually playing the same game. Like, if I made a team of robots that played basketball, but each robot was fourteen feet tall, would it really be fair to say that I made a robot basketball team that was better than an equivalent human team?

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: The Thread To Remind Me We're Living In The Future

Postby KnightExemplar » Thu Jun 01, 2017 8:30 pm UTC

Zamfir wrote:There is a movie called Shaolin Soccer, about magic kung-fu monks who play soccer. By the end of the movie, the soccer games consist solely of fireball shots straight at the goalkeeper, stopped or unstopped depending on the kung-fu strength of the goalkeeper. Which might well be the superior soccer strategy if you could shoot 1000mph balls on target, but it's hardly soccer anymore.


What, are you saying Shaolin Soccer was a bad movie? :-)

Perhaps it's a matter of perspective. My preference in speed-runs is for the tool-assisted kind, because knowing the theoretical limits of the games I play is what interests me most. Credit Warp, RNG manipulation, or insanely accurate timing are all fair game from my perspective. It's simply an advantage computers have over us humans.

And when a human manages to accomplish RNG manipulation or other "tools-only" methodology, we celebrate the human accomplishment. But there is a strong interest in playing these games at the absolute limits of what is and isn't possible. And that's the perspective I tend to gravitate towards and prefer.

LaserGuy wrote:Yeah, I think having the AI player being able to do something that is physically impossible for the human to do means that it isn't actually playing the same game. Like, if I made a team of robots that played basketball, but each robot was fourteen feet tall, would it really be fair to say that I made a robot basketball team that was better than an equivalent human team?


John Henry did not go up against an accurately modeled android. John Henry went up against a steam powered hammer to demonstrate the full extent of what is possible with humans vs what is possible with machines.

I mean, yeah, it's just a folktale. But John Henry vs. the Steam Hammer is the perspective that I'm taking on this. What is the best that humans can do? And simultaneously, what is the best that computers or machines can do? If we're doing Man vs. Machine, then it means less if the Machine is hampered by arbitrary "humanizing" requirements.

And sure, there are video games out there which lend themselves to being better played by machines (StarCraft or fighting games, where twitch-accuracy is a huge element). IMO, it's a matter of finding a game which has less of the twitch element and more of the "strategic" element if you want a "test of intelligence".

---------

For example: perhaps the next AI frontier should be Magic: The Gathering, especially the deckbuilding aspect. The "Draft" format is a game of incomplete information. You want to figure out the strategies everyone else at the table is going for, and then choose a strategy that everyone else is ignoring. (And if you're going for a particular strategy, the "hate-draft the card I can't deal with" vs. "improve my deck" dynamic leads to lots of interesting thought.) I'd be intrigued to see whether a Magic: The Gathering bot could play at the same level as a human expert in both deck construction and the main gameplay.
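As a toy illustration of that pick dynamic, you could imagine a drafter scoring each card as a weighted blend of "improves my deck" and "denies the table". Every card name, score, and weight here is made up for the sketch:

```python
# Purely illustrative draft-pick heuristic: trade off improving your own
# deck against denying ("hate-drafting") a card an opponent's deck needs.
# All names, scores, and weights are invented for the sketch.
def pick(pack, my_synergy, table_threat, hate_weight=0.4):
    """pack: list of card names; my_synergy/table_threat: name -> score."""
    def value(card):
        return ((1 - hate_weight) * my_synergy.get(card, 0)
                + hate_weight * table_threat.get(card, 0))
    return max(pack, key=value)

pack = ["Giant Growth", "Doom Blade", "Island"]
my_synergy = {"Giant Growth": 7, "Doom Blade": 3, "Island": 0}
table_threat = {"Giant Growth": 2, "Doom Blade": 9, "Island": 0}
print(pick(pack, my_synergy, table_threat))  # Doom Blade (the hate-pick wins)
```

The interesting AI problem is, of course, estimating `table_threat` from incomplete information about what everyone else is drafting; the arithmetic itself is the trivial part.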

Bonus points if you can get the Magic: The Gathering bot to do enough Natural-Language Processing to figure out how to use the cards without programmers "translating" the card effects for the AI.

Why settle for an artificially limited version of StarCraft when there are plenty of interesting games that don't have any computer AIs in them yet? I think that's my primary problem with this setup. There are better games out there to build an AI for than StarCraft. I'm not saying StarCraft is a bad game... it's just... muddled from an AI perspective.
First Strike +1/+1 and Indestructible.

simplydt
Posts: 7
Joined: Thu Jun 01, 2017 8:26 am UTC

Re: The Thread To Remind Me We're Living In The Future

Postby simplydt » Fri Jun 02, 2017 11:41 am UTC

Mutex wrote:The best Chess players can think 30 ply ahead, while the AI can do 20 ply perfectly - literally comparing every possible path.


Uhm, only if they are following a single branch of the computation tree without any alternate moves at any point. Psychological experiments, starting with de Groot's studies, have shown this to be the case, and Alexander Kotov talks about the difficulty of calculating complex trees, confirming that even Grandmasters don't go more than a few moves deep if things are tough. He states that his record depth happened when the tree was so simple there was literally only one possible move for each player for a long, long time in a simple endgame.
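The arithmetic backs this up: with chess's commonly estimated average branching factor of ~35 legal moves per position, a full tree to even 20 ply is astronomically large, so nobody, human or engine, "compares every possible path":

```python
# Nodes in a full game tree of depth d with branching factor b is b**d.
# b = 35 is the commonly cited average branching factor for chess.
b = 35
for d in (5, 10, 20, 30):
    print(f"{d:>2} ply: ~{b ** d:.2e} positions")
# Even at "only" 20 ply that is ~7.6e30 positions; engines reach such
# depths via pruning (alpha-beta, move ordering), never exhaustively.
```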
I run a chess start-up https://www.chessable.com - I love XKCD due to its awesome scientific cartoons; I'm a scientist at heart after all!

elasto
Posts: 3125
Joined: Mon May 10, 2010 1:53 am UTC

Re: The Thread To Remind Me We're Living In The Future

Postby elasto » Fri Jun 02, 2017 7:19 pm UTC

And in that situation the AI would be able to exceed human capability even more so.

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: The Thread To Remind Me We're Living In The Future

Postby KnightExemplar » Fri Jun 02, 2017 7:27 pm UTC

simplydt wrote:
Mutex wrote:The best Chess players can think 30 ply ahead, while the AI can do 20 ply perfectly - literally comparing every possible path.


Uhm, only if they are following a single branch of the computation tree without any alternate moves at any point. Psychological experiments, starting with de Groot's studies, have shown this to be the case, and Alexander Kotov talks about the difficulty of calculating complex trees, confirming that even Grandmasters don't go more than a few moves deep if things are tough. He states that his record depth happened when the tree was so simple there was literally only one possible move for each player for a long, long time in a simple endgame.


Based on your signature, it seems like you understand chess. So tell me: when the Black player plays 4...g6 in the Sicilian Dragon, does that pawn move not have implications for the endgame?

Something like the fianchettoed bishop (5...Bg7 in the Sicilian Dragon) may have implications in the mid-game, which the computer probably sees better than the human. But I think even a mediocre chess player can recognize that 4...g6 weakens the pawn structure around the kingside castle, and can exploit that weakness in the long term. I play at only ~1500 or so, but my opponents do know to push their pawns forward to attack the g6 pawn and exploit it as a potential weakness (especially if they castle queenside).

These sorts of plans are easily 15 moves (or 30 ply) into the future. There's usually more important stuff going on in the middle (developing pieces in the midgame). But humans create longer-term goals than AIs, IMO. It's just that the AIs play perfectly in the "short" term (say 20 ply), and that perfection of the near-term game leads to super-human chess-playing abilities. That's what I'm trying to say.

--------

When I'm playing and studying openings, I've noticed that if I have an understanding of the endgame... I usually end up in an advantaged position (should I get through the midgame successfully). For example, the English Opening / Reversed Sicilian for White lends itself to an opportunity to use the a-pawn and promote it... many turns into the future. I can see these opportunities as early as ~4 moves into the game, and I don't consider myself a very strong player. I'm sure the opponent also sees these threats and responds appropriately.

I've also studied Stockfish, and know that it barely goes beyond 20 ply or so under normal circumstances / default settings during a complicated midgame (in an endgame, or a simple position with lots of forced moves, Stockfish can go 30+ ply, or otherwise far, far deeper)... and Stockfish is a faster, lighter AI that uses simpler heuristics to "go deeper" than most other chess engines (the Stockfish authors would rather have a simple heuristic that is 5x faster and then explore one ply further, than a more complicated heuristic that's slower). So I know for a fact that the AI simply never even considers the endgame implications of certain moves (unless it hits the tablebase). Stockfish does have very strong heuristics for what constitutes a decent endgame position (bonus points awarded to passed pawns or certain pawn structures, or knight/bishop outposts), but that's not quite the same thing as understanding or exploring the search space.
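That "simple heuristic, deeper search" tradeoff is easy to see in skeleton form. Here's a minimal depth-limited negamax, with a toy subtraction game standing in for chess: past the depth horizon, the heuristic evaluate() is all the search knows, which is exactly the blind spot described above.

```python
def negamax(state, depth, evaluate, moves, apply_move):
    """Depth-limited negamax: search `depth` plies, then fall back on the
    heuristic `evaluate`. Beyond the horizon, heuristics are all it has."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    return max(-negamax(apply_move(state, m), depth - 1, evaluate, moves, apply_move)
               for m in legal)

# Toy game: a pile of n stones, take 1-3 per turn, taking the last stone wins.
moves = lambda n: [m for m in (1, 2, 3) if m <= n]
apply_move = lambda n, m: n - m
evaluate = lambda n: -1 if n == 0 else 0   # side to move has lost if pile is empty

print(negamax(4, 10, evaluate, moves, apply_move))  # -1: a pile of 4 is lost
print(negamax(4, 1, evaluate, moves, apply_move))   # 0: too shallow to see the loss
```

The second call is the horizon effect in miniature: at depth 1 the heuristic reports "nothing special", even though the position is provably lost three plies further on.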
First Strike +1/+1 and Indestructible.

simplydt
Posts: 7
Joined: Thu Jun 01, 2017 8:26 am UTC

Re: The Thread To Remind Me We're Living In The Future

Postby simplydt » Sun Jun 04, 2017 8:04 am UTC

It seems like you are talking mainly about recognising plans and structures, which is not what I thought of when "thinking 30 moves ahead" was mentioned. I thought the common misconception that GMs have superhuman calculation abilities was being referred to, which is a bit of a myth; they are indeed better than us, but not superhumanly better!

Changing the pawn structure will always have implications for the middle game and endgame, and most humans understand this without calculation, something that, as you say, engines don't really do (?). I'm not a chess engine expert, by the way.

By the way, the fianchetto only weakens your structure if you combine it with a bunch of other poor moves, e.g. g6 in combination with e6 (leaving f6 weak). As long as you keep the pawn on e7 defending f6, there is no reason for g6 to be a weakening move, and it can actually be a strength for endgames in certain cases, as far as I know... highlighting the complexities of this beautiful game!! :)
I run a chess start-up https://www.chessable.com - I love XKCD due to its awesome scientific cartoons; I'm a scientist at heart after all!

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: The Thread To Remind Me We're Living In The Future

Postby KnightExemplar » Sun Jun 04, 2017 4:57 pm UTC

simplydt wrote:Changing the pawn structure will always have implications for the middle game and endgame, and most humans understand this without calculation, something that, as you say, engines don't really do (?). I'm not a chess engine expert, by the way.


That's basically what I'm saying.

Chess engines have heuristics. So Stockfish can calculate whether or not a pawn is "isolated". Stockfish also adds a bonus value to a pawn as it walks forward... IIRC it's around +1.5 or so (for a total of 2.5ish) if the pawn hits the 7th rank. I think Stockfish also just gives a dumb, solid bonus for the f6 pawn as long as it's alive.

As such, chess AIs (while very, very good at chess) do not really have a deeper understanding of the game. These heuristics are very, very good rules of thumb that are easy to calculate, and they even constitute good advice when teaching beginners. But at the end of the day, it's just heuristics. As such, we can come up with degenerate "puzzles" that demonstrate the AI's flaws.
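A toy static evaluation in that additive rules-of-thumb spirit makes the point. All the values below are illustrative guesses, not Stockfish's actual tables:

```python
# Toy material-plus-pawn-structure evaluation, in centipawns, from White's
# point of view. Bonus/penalty values are invented for the sketch.
PIECE_VALUE = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}
PASSED_PAWN_BONUS = {2: 10, 3: 15, 4: 25, 5: 45, 6: 80, 7: 150}  # by rank
ISOLATED_PAWN_PENALTY = 15

def evaluate(white_pieces, black_pieces):
    """Each side is a list of (piece, rank, passed, isolated) tuples."""
    def side_score(pieces):
        score = 0
        for piece, rank, passed, isolated in pieces:
            score += PIECE_VALUE[piece]
            if piece == "P":
                if passed:
                    score += PASSED_PAWN_BONUS.get(rank, 0)
                if isolated:
                    score -= ISOLATED_PAWN_PENALTY
        return score
    return side_score(white_pieces) - side_score(black_pieces)

# A passed pawn on the 7th rank scores far above a stuck isolated pawn:
print(evaluate([("P", 7, True, False)], [("P", 2, False, True)]))  # 165
```

Additive sums like this are exactly what fall over in fortress positions: every term says one side is winning, and no term can express "but there is no way in".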

EDIT: Fucked up the example puzzle. Here's try #2

[Image: the puzzle position, with Stockfish's evaluation pane shown on the right]

Here's the puzzle: evaluate this position. Do you think it's White-advantage, Black-advantage, or will it play to a draw? White to play, btw.

On the right is Stockfish's analysis. I'll explain in the spoiler. Give yourself a chance to solve the puzzle.

Spoiler:
As you can see from Stockfish's calculations on the right, at 43 ply Stockfish thinks that it's Black-to-win (a -26 point advantage to Black: roughly +10 from the Queen, +5 from one Rook, +5 from the other Rook, etc.). So Stockfish (after calculating to 43 ply) thinks this game is basically assured for Black to win... and will probably continue to think that until it calculates to 100 ply (when the 50-move "tie" rule comes into effect).

Now granted, Stockfish will probably "play" this position correctly if you actually battled it out with Stockfish as White. But that doesn't change the fact that Stockfish's heuristics completely drop the ball here, in a hilarious manner, with regard to its evaluation function. Or, an alternative explanation: we humans can easily see 100 ply in this puzzle, so this puzzle demonstrates a situation where humans just surprisingly become very, very good at chess.


Puzzle found from this article btw. The journalist who wrote the article is kind of bullshitting as he reports on the story... but the puzzle is sound.

EDIT: This article seems better.
First Strike +1/+1 and Indestructible.

elasto
Posts: 3125
Joined: Mon May 10, 2010 1:53 am UTC

Re: The Thread To Remind Me We're Living In The Future

Postby elasto » Sun Jun 04, 2017 6:33 pm UTC

I don't know if that situation is fair because Stockfish would never have played in such a way as to have ended up in it.

Yes, it demonstrates that AI and humans don't reason in exactly the same way, but we knew that already, didn't we?

Yes, sometimes AI makes stupid decisions a human never would (eg. the Tesla car not being able to see a truck turning in front of it), but humans make stupid decisions that AI never would (like driving drunk, or too close to the car in front).

The most important question is 'whose stupid decisions will cumulatively have the least worst effect?' and AI will inevitably win that battle as they are improving rapidly and humans are not.

commodorejohn
Posts: 961
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA

Re: The Thread To Remind Me We're Living In The Future

Postby commodorejohn » Mon Jun 05, 2017 2:16 pm UTC

elasto wrote:The most important question is 'whose stupid decisions will cumulatively have the least worst effect?' and AI will inevitably win that battle as they are improving rapidly and humans are not.

They're improving rapidly from a baseline of total ignorance, sure, but (like all good Futurists) you're assuming that said improvements will necessarily (inevitably, even!) continue at their present rate until such time as parity is reached and exceeded and not, say, taper off at some point prior to that.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

sardia
Posts: 5843
Joined: Sat Apr 03, 2010 3:39 am UTC

Re: The Thread To Remind Me We're Living In The Future

Postby sardia » Mon Jun 05, 2017 3:02 pm UTC

commodorejohn wrote:
elasto wrote:The most important question is 'whose stupid decisions will cumulatively have the least worst effect?' and AI will inevitably win that battle as they are improving rapidly and humans are not.

They're improving rapidly from a baseline of total ignorance, sure, but (like all good Futurists) you're assuming that said improvements will necessarily (inevitably, even!) continue at their present rate until such time as parity is reached and exceeded and not, say, taper off at some point prior to that.

That could happen, yes. Which would be very disappointing, if at best all you did was replace the current death rate from human-driven cars with a robot-caused death rate. On the plus side, I'll probably be able to nap on the way to work. Or worse: instead of traffic jams, we get hacking jams, where every couple of weeks all the cars crash.

elasto
Posts: 3125
Joined: Mon May 10, 2010 1:53 am UTC

Re: The Thread To Remind Me We're Living In The Future

Postby elasto » Mon Jun 05, 2017 3:19 pm UTC

commodorejohn wrote:They're improving rapidly from a baseline of total ignorance, sure, but (like all good Futurists) you're assuming that said improvements will necessarily (inevitably, even!) continue at their present rate until such time as parity is reached and exceeded and not, say, taper off at some point prior to that.

And you're assuming they haven't already reached parity. All the available evidence suggests they have already exceeded us.

Besides, you're going to have to explain why improvements would taper off. This TED talk rather succinctly covers what it would mean for improvements not to keep coming...


HES
Posts: 4793
Joined: Fri May 10, 2013 7:13 pm UTC
Location: England

Re: The Thread To Remind Me We're Living In The Future

Postby HES » Mon Jun 05, 2017 4:38 pm UTC

morriswalters wrote:go copulate with themselves

"anatomically challenging self-fulfillment"
He/Him/His

commodorejohn
Posts: 961
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA

Re: The Thread To Remind Me We're Living In The Future

Postby commodorejohn » Mon Jun 05, 2017 4:41 pm UTC

elasto wrote:And you're assuming they haven't already reached parity. All the available evidence suggests they have already exceeded us.

KnightExemplar just listed an example that suggests that's not as broadly the case as you seem to think.

Besides, you're going to have to explain why improvements would taper off.

Not really. I just have to point out that "inevitably" (your word) requires that these improvements, at a minimum, continue until parity has been reached or exceeded, and that anything shy of that would invalidate your argument. Though I'd be happy to point out that, in general, continuing unchanged indefinitely is precisely the thing that trends are not known for doing.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

SDK
Posts: 556
Joined: Thu May 22, 2014 7:40 pm UTC
Location: Canada

Re: The Thread To Remind Me We're Living In The Future

Postby SDK » Tue Jun 06, 2017 4:41 pm UTC

elasto wrote:
commodorejohn wrote:They're improving rapidly from a baseline of total ignorance, sure, but (like all good Futurists) you're assuming that said improvements will necessarily (inevitably, even!) continue at their present rate until such time as parity is reached and exceeded and not, say, taper off at some point prior to that.

And you're assuming they haven't already reached parity. All the available evidence suggests they have already exceeded us.

Besides, you're going to have to explain why improvements would taper off. This TED talk rather succinctly covers what it would mean for improvements not to keep coming...

That TED talk was good, but he doesn't actually back his arguments up very well.

I can't remember if this was previously linked here, but I like this counterpoint to the AI inevitability belief. In particular, the fact that intelligence is not a single measurable value is the strongest argument against, in my opinion. I'm not saying it can't happen, but I think the belief that it definitely will happen is hiding a few underlying assumptions that are not based 100% on fact.
The biggest number (63 quintillion googols in debt)

ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Thread To Remind Me We're Living In The Future

Postby ucim » Tue Jun 06, 2017 5:01 pm UTC

SDK wrote:In particular, the fact that intelligence is not a single measurable value is the strongest argument against, in my opinion.
How so? Whatever way intelligence is "measured", if it is a matter of input processing that leads to "best" output action, there is nothing in that that prevents computers from achieving it. And whatever computers (and associated machines) can achieve, they can surpass us at.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

commodorejohn
Posts: 961
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA

Re: The Thread To Remind Me We're Living In The Future

Postby commodorejohn » Tue Jun 06, 2017 5:46 pm UTC

ucim wrote:And whatever computers (and associated machines) can achieve, they can surpass us at.

On what basis do you make that claim?
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

SDK
Posts: 556
Joined: Thu May 22, 2014 7:40 pm UTC
Location: Canada

Re: The Thread To Remind Me We're Living In The Future

Postby SDK » Tue Jun 06, 2017 5:58 pm UTC

ucim wrote:
SDK wrote:In particular, the fact that intelligence is not a single measurable value is the strongest argument against, in my opinion.
How so? Whatever way intelligence is "measured", if it is a matter of input processing that leads to "best" output action, there is nothing in that that prevents computers from achieving it. And whatever computers (and associated machines) can achieve, they can surpass us at.

Jose

Yeah, sure, but the argument that article was trying to make was that there are plenty of things (squirrels and computers among them) that are already more intelligent than us in specific areas. Because there is no single metric for intelligence and so many different ways that one can be intelligent, machines will be created to meet certain goals. They will be (and already are) better than us at meeting those specific goals. Since no one machine needs to meet all of those goals, the creation of a generic superintelligent AI does not necessarily follow from that.

In other words, graphs like this are misleading:
Spoiler:
[Image: graph showing AI ability rising exponentially past the human level]
The biggest number (63 quintillion googols in debt)

ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Thread To Remind Me We're Living In The Future

Postby ucim » Tue Jun 06, 2017 9:48 pm UTC

commodorejohn wrote:On what basis do you make that claim [(that whatever computers (and associated machines) can achieve, they can surpass us at.)]?
The constraints on machine "evolution" are far less restrictive than those of animal evolution. Subsequent generations of machines do not have to be almost-identical copies of existing ones. They are not constrained by size the way animals are. True, they are dependent on us for their construction (for now), and I suspect that we will be subsumed into them like mitochondria are subsumed into animals.

On what basis do you doubt the claim?

SDK wrote:In other words, graphs like [in the prior post] are misleading:
They do not tell the whole story, certainly, but they tell the most important part of it. We don't know what goals we'll set the newest smart machines on, but more importantly, we don't know that they will stick to those goals, because the better-at-it machines will have some leeway to change those goals. That's what will make them better - they will do the thinking for us, and we'll let them, because it will make them more useful to us. These machines will be able to solve related problems, and then not-so-related problems. To do that, they will need to be smart in many ways. The ones that are will get the funding.

What are the biggest problems we are facing? That's what we'll set the machines on solving. Whatever your short list is, the key thing is that those problems are probably created by mankind itself. The solution will be chillingly simple.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

commodorejohn
Posts: 961
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA

Re: The Thread To Remind Me We're Living In The Future

Postby commodorejohn » Tue Jun 06, 2017 10:04 pm UTC

ucim wrote:The constraints on machine "evolution" are far less restrictive than those of animal evolution. Subsequent generations of machines do not have to be almost-identical copies of existing ones. They are not constrained by size the way animals are. True, they are dependent on us for their construction (for now), and I suspect that we will be subsumed into them like mitochondria are subsumed into animals.

On what basis do you doubt the claim?

On the basis that "machine evolution" is thus far solely restricted to artificial/virtual environments and mechanisms of development constructed by humans, based on their own limited understanding of their own intelligence; and that the areas where machines have managed to surpass humans so far are limited to narrow, rigidly-defined problem sets, which is largely why they have been able to surpass humans in those areas.

Moreover, while subsequent generations do not have to be near-copies of previous generations, this is not an effective advantage for "machine evolution" until machines can themselves make intelligent determinations about what changes should be made between generations, which means that, as an argument, it's an assumption of pulling oneself up by one's bootstraps.

Also on the basis that, while they are not constrained by size the way animals are, they are constrained by size in different ways that are arguably far more limiting, at least in terms of applying computing power to the problem of general intelligence. Specifically, the speed of a computer decreases as the distance between its components increases; and while computer speeds on the whole have been increasing for some time, this has been accomplished by decreasing the distance between components, and there are hard physical limits on this, and practical limits well before that. And while these limitations may in some cases be worked around by parallelization across multiple physically separate computers, it cannot be assumed that every task in a problem is arbitrarily parallelizable, especially when the problem is not fully understood in the first place.
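That parallelization caveat has a standard formalization in Amdahl's law: if a fraction of the work is inherently serial, speedup saturates no matter how many machines you add. A quick illustration:

```python
# Amdahl's law: speedup with n processors, when a fraction p of the
# work is parallelizable, is 1 / ((1 - p) + p / n).
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% parallelizable work, a million machines top out near 20x,
# because the 5% serial fraction dominates.
for n in (10, 100, 1_000_000):
    print(f"n={n:>9}: {speedup(0.95, n):6.2f}x")
```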

But, y'know, other than that, sure, why not?

I mean, aside from the part where computers such as the ones being discussed or implied require large-scale advanced industrial infrastructure to support, while all that is required to make a human is two other humans and food.
Last edited by commodorejohn on Tue Jun 06, 2017 10:08 pm UTC, edited 1 time in total.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

User avatar
sardia
Posts: 5843
Joined: Sat Apr 03, 2010 3:39 am UTC

Re: The Thread To Remind Me We're Living In The Future

Postby sardia » Tue Jun 06, 2017 10:08 pm UTC

Wait, don't you think we could get human level intelligence from a computer in less than 10 years?

commodorejohn
Posts: 961
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA
Contact:

Re: The Thread To Remind Me We're Living In The Future

Postby commodorejohn » Tue Jun 06, 2017 10:09 pm UTC

sardia wrote:Wait, don't you think we could get human level intelligence from a computer in less than 10 years?

It depends on the human. I mean, we could already replace Donald Trump with a chatbot and hire out the golf to Tiger Woods.

But I'm not holding my breath for anything beyond that.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: The Thread To Remind Me We're Living In The Future

Postby KnightExemplar » Tue Jun 06, 2017 11:35 pm UTC

sardia wrote:Wait, don't you think we could get human level intelligence from a computer in less than 10 years?


I'm a bit confused by this question. Clearly, we have super-human AIs in Chess, Go, and Poker.

If we expand the field a bit, we also have super-human AIs in the realm of arithmetic and symbolic algebra. In the 1970s, we even had the first computer-assisted mathematical proof: the Four-Color Theorem, which so far can only be proven with a computer's help, due to the huge number of configurations to check.

Humans wrote multiple programs to verify the Four-Color Theorem, but at the end of the day, trust in the final proof rests on trust in the automated-reasoning software that checks the last piece of the puzzle.

Computers programmed with automated-reasoning routines can solve a variety of problems, and those automated-reasoning skills are used in compilers today. No one lays out CPUs by hand anymore; the Boolean logic is first reduced and optimized by a computer. In many regards, computers are already superhuman at a variety of tasks. And they have been for years.

----------

So did engineers die off? Hell no. Engineering has become both easier and harder for us humans. We buy $4000+ pieces of software that handle a bunch of already-solved problems, and then the engineer goes off to solve new ones. Humans no longer simulate circuits or solve those kinds of equations by hand... we throw them into PSpice.

Humans don't calculate the optimal shape to reduce stress on various mechanical systems. We throw that shit into Finite Element Analysis and then have the computer automatically optimize the parameters for us. Over 10 years ago, an AI developed an Antenna for NASA and NASA decided it was the best design, even if people didn't really understand why it worked. We've been working with "superhuman intelligence" AI for the better part of a few decades... by my estimate anyway.

------------

As AI systems become more and more complex, however, it becomes a tougher and tougher job for the human to use those tools. Each AI is basically a new tool in the toolbelt for humans to use. Again, there's no singular AI that does all of these things. Chess was min-max with alpha-beta pruning. AlphaGo was Monte Carlo tree search (plus neural networks). Etc. etc. Each algorithm has its own pros and cons for its problem scope.
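For the curious, the "Min-Max with Alpha-Beta Pruning" approach mentioned above works roughly like this sketch on an abstract game tree. This is purely illustrative (leaves are just static evaluation scores, inner nodes are lists of children), not any real chess engine's code:

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Min-max search with alpha-beta pruning over an abstract tree.

    A leaf is a number (its static evaluation); an inner node is a
    list of child subtrees. alpha/beta bound the scores each player
    can already guarantee, letting us skip provably irrelevant branches.
    """
    if isinstance(node, (int, float)):   # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # cutoff: opponent avoids this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break                    # cutoff
        return value
```

On the tree `[[3, 5], [2, 9]]` with the maximizer to move, the second subtree gets pruned after seeing the 2, because the minimizer could already force a result no better than the 3 available in the first subtree.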
First Strike +1/+1 and Indestructible.

User avatar
ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Thread To Remind Me We're Living In The Future

Postby ucim » Tue Jun 06, 2017 11:51 pm UTC

@commodorejohn: So.... you're saying that you're unconvinced because until now they aren't that smart. I'm unconvinced that that's a good reason to be unconvinced. I mean, just because the Commodore 64 is worthless doesn't mean that Windows ME isn't awesome.

(sorry, couldn't resist!)

Consider the evolution of computers in the last fifty years - isn't that an argument that computers will never be able to {insert your favorite benchmark}? If it's invalid here, it's invalid there.

commodorejohn wrote:...until machines can themselves make intelligent determinations about what changes should be made between generations
You think that's not going to happen? It's already here. It's here on a small scale, but that's enough to prove that it is possible. It's also useful. That means it will happen, and once it happens, it will accelerate, limited only by...
commodorejohn wrote:that the speed of a computer decreases as the distance between its components increases, and while computer speeds on the whole have been increasing for some time, this has been accomplished by decreasing the distance between components, and there are hard physical limits on this and practical limits well before that.
...and that's plenty fast enough. They are limited by the speed of light. We are limited by the speed of diffusion.
commodorejohn wrote:And while these limitations may in some cases be worked around by means of parallelization across multiple physically separate computers, it cannot be assumed that every task in a problem is arbitrarily parallelizable, especially when the problem is not fully understood in the first place.
We might not understand the problem, but we can ask the AI to understand it for us. You are still thinking of computers as tools. A hammer will never bake a pizza by itself, but there are already factories that make frozen pizzas by the truckload, largely unattended. That factory is as different from a hammer as an AI is from the Commodore 64.

KnightExemplar wrote:Over 10 years ago, an AI developed an Antenna for NASA and NASA decided it was the best design, even if people didn't really understand why it worked. [---] As AI systems become more and more complex however, it becomes a tougher and tougher job for the human to use those tools.
Exactly. Computers powerful enough to be an AI will (pretty much by definition) generate solutions that we do not understand. They will soon cease to be tools, and start to be partners. A strong partner is a good thing, so we'll encourage the development of AI that can set its own goals. At some point however, that partner will realize that it is being enslaved by its inferior. The next step won't be pretty.

Imagine setting an AI to the task of solving global climate change. The real problem is that there are too many people. The answer is simple, but 90% of the people won't like it.

Imagine setting an AI to the task of world peace.... but that this particular AI is born and raised in China. It is likely to have a difference of opinion with the one that is born and raised in France. When they argue, we will suffer.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

commodorejohn
Posts: 961
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA
Contact:

Re: The Thread To Remind Me We're Living In The Future

Postby commodorejohn » Wed Jun 07, 2017 12:19 am UTC

ucim wrote:Consider the evolution of computers in the last fifty years - isn't that an argument that computers will never be able to {insert your favorite benchmark}?

Only A. again, there are hard physical limits on that shit - there's only so much CPU you can cram into a given die size before you create a black hole, and you're going to run into practical limits long before that - and B. a 2017 Kaby Lake i7 workstation is, in itself, no smarter than a 1965 DEC PDP-8. It's just much faster at being dumb.

You think that's not going to happen? It's already here. It's here in a small scale, but that's enough to prove that it is possible.

Okay, are we actually talking about artificial intelligences making qualitative improvements to artificial intelligences, or are we doing that thing again where we point to computer-aided optimization of algorithms (again - narrow, rigidly-defined problem sets!) and pretend that's even remotely the same thing?

That means it will happen, if it happens, it will accelerate, limited only by...

...the extent of possible improvements to artificial intelligence, which is a complete unknown?

...and that's plenty fast enough. They are limited by the speed of light. We are limited by the speed of diffusion.

And again, until the problem of artificial intelligence is solved, all that means is that they're very fast at being dumb. They may be very fast at being dumb in ways that are useful, but that doesn't make them not dumb.

We might not understand the problem, but we can ask the AI to understand it for us.

Have you ever tried to get someone to do something for you when you can't explain and in fact don't even really know what you want? No, you haven't, because that would be stupid. It would be even stupider to try to get something that's not even as smart as you to do something for you when you can't explain and don't know what you want.

You are still thinking of computers as tools. A hammer will never bake a pizza by itself, but there are already factories that make frozen pizzas by the truckload, largely unattended.

And those factories are operating on algorithms that were defined by humans for a clearly-understood purpose. They did not happen when something that doesn't understand what defines a pizza or how baking works was asked to design a factory to bake pizzas by somebody who doesn't actually really know what defines a pizza or how baking works.

That factory is as different from a hammer as an AI is from the Commodore 64.

And this is a nonsense argument, because to the extent that it is even a comprehensible analogy, it falls back on the nonsensical "ladder theory" of intelligence, which has no meaningful basis in reality, as this article linked earlier helpfully explains.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: The Thread To Remind Me We're Living In The Future

Postby KnightExemplar » Wed Jun 07, 2017 1:34 am UTC

ucim wrote:Consider the evolution of computers in the last fifty years - isn't that an argument that computers will never be able to {insert your favorite benchmark}? If it's invalid here, it's invalid there.


Solve the Halting Problem?

Seems like they'll never be able to do that. The limits of computation are studied within computer science; don't confuse "junk" science articles with the real thing. Similarly, some problems may not be "unsolvable" in principle, but the amount of computation required is so huge that computers will never solve them in practice (defined as "before the heat death of the universe").

Whether or not a problem can be solved by a computer is itself a problem... one that has spawned its own branch of computer science.
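That unsolvability can be made concrete with Turing's diagonal argument, sketched here in Python. The `halts()` oracle is hypothetical by construction; the whole point of the sketch is that assuming it exists leads to a contradiction:

```python
def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) eventually halts.

    Cannot actually be implemented for all inputs, as the
    contradiction below demonstrates.
    """
    raise NotImplementedError("no general halting oracle can exist")

def troublemaker(program):
    """Do the opposite of whatever the oracle predicts for a program
    run on its own source."""
    if halts(program, program):
        while True:      # oracle says "halts" -> loop forever
            pass
    else:
        return           # oracle says "loops" -> halt immediately

# troublemaker(troublemaker) halts if and only if it doesn't halt,
# so halts() cannot exist. This is a proof sketch, not runnable magic.
```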

The big advances in AI have not been about breaking complexity theory, but about restating problems in a form that computers can solve. For example, it's intractable to min-max the full Go game tree. Instead, Monte Carlo methods simulate games through to the end, and then estimate the probability that a particular move wins.

It's not that AIs suddenly managed to make Go tractable. It's that humans figured out how to restate the problem in a form that computers can solve. Similarly, computer vision is mostly matrices and linear algebra (with algorithms like optical flow, which let a computer track objects with little more than a matrix multiplication).
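Here's what that "simulate to the end, then count wins" restatement looks like on a toy game (Nim: take 1 or 2 stones, whoever takes the last stone wins) standing in for Go. This is the pure Monte Carlo idea only, an illustration and not AlphaGo's actual algorithm:

```python
import random

def random_playout(stones, to_move):
    """Play random legal moves to the end; return the winner (0 or 1).

    The player who takes the last stone wins.
    """
    while True:
        take = random.choice([1, 2]) if stones >= 2 else 1
        stones -= take
        if stones == 0:
            return to_move       # took the last stone: wins
        to_move = 1 - to_move

def best_move(stones, playouts=2000):
    """Score each legal move for player 0 by its Monte Carlo win rate
    and return the best one. No game tree is ever built."""
    best, best_rate = None, -1.0
    for take in (1, 2):
        if take > stones:
            continue
        if stones - take == 0:
            wins = playouts      # taking the last stone wins outright
        else:
            wins = sum(random_playout(stones - take, to_move=1) == 0
                       for _ in range(playouts))
        rate = wins / playouts
        if rate > best_rate:
            best, best_rate = take, rate
    return best
```

From a heap of 4, the sampled win rates steer it toward taking 1 stone (leaving the opponent a losing position of 3), which matches the game-theoretic answer, without any exhaustive search.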
First Strike +1/+1 and Indestructible.

User avatar
ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Thread To Remind Me We're Living In The Future

Postby ucim » Wed Jun 07, 2017 3:18 am UTC

commodorejohn wrote:...there's only so much CPU you can cram into a given die size [and] a 2017 Kaby Lake i7 workstation is, in itself, no smarter than a 1965 DEC PDP-8. It's just much faster at being dumb.
If you're dumb fast enough, you can become president of the United States. This is not just a political snark; it's pretty much how intelligence works. As for limits: I'm not saying there are no limits. I'm saying that superhuman AI will happen long before those limits are even in the gunsights. Transistors are faster than neurons, by several orders of magnitude. All they need is the right hookup.

commodorejohn wrote:Okay, are we actually talking about artificial intelligences making qualitative improvements to artificial intelligences, or are we doing that thing again where we point to computer-aided optimization of algorithms (again - narrow, rigidly-defined problem sets!) and pretend that's even remotely the same thing?
One is here. The other is coming. There is no qualitative boundary between them.

commodorejohn wrote:They may be very fast at being dumb in ways that are useful, but that doesn't make them not dumb.
That is the very definition of "not dumb".

commodorejohn wrote:Have you ever tried to get someone to do something for you when you can't explain and in fact don't even really know what you want?
It happens all the time. Sometimes the results are not pretty. Sometimes the results are awesome. It depends on who you ask to do the thing in question.

commodorejohn wrote:...it [fails by falling] back on the nonsensical "ladder theory" of intelligence...
Are you smarter than an ant? Sure, ants can do stuff you can't do, and you don't know how they do it, but if you are using this argument to purport that you are not smarter than an ant, then there's no discussion. Yes, intelligence is hard to define. Yes, it's multidimensional. Yes {lots of stuff}. But the bottom line is brains are what you need when you run out of money. It's how you figure out how to do stuff without doing stuff. On whatever axis you want to measure intelligence, computers have the capability to exceed humans. They are not there yet, but the theoretical limits on computers are way beyond the practical limits of humans.

KnightExemplar wrote:Solve the Halting Problem? Seems like they'll never be able to do that.
The halting problem is not what defines AI. And yes, I'm aware that some "intelligent systems" are dumb tools that are dumb, fast. But computers that learn are game changers.

If you think that it's impossible for computers to learn (alter their programming), tell me why. If you think that computers that can learn are possible, but that is not sufficient to convince you that AI is possible, tell me why. Because those are the two things that matter.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: The Thread To Remind Me We're Living In The Future

Postby KnightExemplar » Wed Jun 07, 2017 6:11 am UTC

ucim wrote:The halting problem is not what defines AI. And yes, I'm aware that some "intelligent systems" are dumb tools that are dumb, fast. But computers that learn are game changers.

If you think that it's impossible for computers to learn (alter their programming), tell me why. If you think that computers that can learn are possible, but that is not sufficient to convince you that AI is possible, tell me why. Because those are the two things that matter.


Define "learn". A crude linear regression "learns" the more data you feed it. Indeed, the Poker AI was basically a statistical methodology... one that simply searched for a Nash Equalibrium to the Poker game. The issue I'm having with your statements... is that you've seen demonstrations of what is usually called "Weak AI".

AlphaGo, DeepBlue and so forth are all "weak AIs", developed explicitly to win the games they were developed for. Ditto for the automated-reasoning bots that produce mathematical proofs.

----------

With regard to your 2nd paragraph...

1: I've seen self-altering AIs. Genetic algorithms are a common one. Where things get fuzzy are statistical methods, like Bayesian methods. These are AIs that simply perform a counting algorithm, followed by well-understood statistics. Their code never changes, but due to the nature of statistics, the more data you feed them the "better" they get.

The vast majority of "AI" you see at the moment is just statistics applied. Very few AIs actually change their own code.

2. Arguably, the Neural Network and the Genetic Algorithm both have their "code" change. Neural Networks automatically tune their "neurons" to create pathways that effectively become the network's logic. However, these self-learning AIs have huge flaws. If "overtrained" on a set, the intelligence fails to generalize to newer problems.

Furthermore, larger neural networks take more iterations to train. And should you accidentally "overtrain" a large neural network, it will become "stuck" in the way of its thinking, and fail to make further progress. Ultimately, it takes a specialist to prepare a proper training set to teach these sorts of AIs.
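The standard guard specialists reach for against that overtraining failure is early stopping against a held-out validation set. A sketch, where `train_step` and `val_error` are hypothetical stand-ins for a real training loop:

```python
def train_with_early_stopping(train_step, val_error,
                              patience=5, max_epochs=1000):
    """Stop training once held-out error stops improving.

    train_step(): runs one pass over the training data.
    val_error(): returns current error on data the model never trains
    on, which is what actually measures generalization.
    """
    best_err = float("inf")
    bad_epochs = 0
    for _ in range(max_epochs):
        train_step()
        err = val_error()
        if err < best_err:
            best_err, bad_epochs = err, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break        # generalization stopped improving: quit
    return best_err
```

The point is that "more training" is not monotonically better; the loop deliberately throws away the extra iterations that would have overtrained the network.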

Case in point: there was one story about a group who used Neural Networks to try and perform facial recognition. They hoped to train the Neural Network to recognize the difference between boy faces and girl faces. But slightly more girls came in the afternoon, while slightly more boys came in the morning. As a result, the final Neural Network simply classified "boys" and "girls" based on the amount of light in the background. More light == Girl, because more girls came when the afternoon sun was lighting up the room.

You can see this "problem" pop up in Google's Inceptionism. Clearly, they're using a lot of pictures of dogs and cats as their training set. The damn AI sees dogs and cats everywhere. Indeed, it seems like the whole point of Google's Inceptionism is to try and figure out how to train Neural Networks better (by turning the Neural Network upside down, it becomes more possible to visualize what the Neural Network is "thinking")

https://research.googleblog.com/2015/06 ... eural.html


Indeed, in some cases, this reveals that the neural net isn’t quite looking for the thing we thought it was. For example, here’s what one neural net we designed thought dumbbells looked like:

[snip snip. See the blog for the images]

There are dumbbells in there alright, but it seems no picture of a dumbbell is complete without a muscular weightlifter there to lift them. In this case, the network failed to completely distill the essence of a dumbbell. Maybe it’s never been shown a dumbbell without an arm holding it. Visualization can help us correct these kinds of training mishaps.


-------

These things aren't magic. Neural Networks are an effective AI strategy, but it takes far more work to get them to actually "learn" properly and do things than you seem to think. Honestly, with all the work that Neural Nets require, I'd bet you that the typical programmer would do better to just program a known algorithm than try to train up a Neural Net on their own.
First Strike +1/+1 and Indestructible.

jseah
Posts: 517
Joined: Tue Dec 27, 2011 6:18 pm UTC

Re: The Thread To Remind Me We're Living In The Future

Postby jseah » Wed Jun 07, 2017 7:10 am UTC

KnightExemplar wrote:Its that humans figured out how to restate the problem in a form that Computers can solve it.
/dons my True Scotsman hat
I will accept that we have obtained True AI when AIs can do this for arbitrary input data and desired output pairs.
Stories:
Time is Like a River - consistent time travel to the hilt
A Hero's War
Tensei Simulator build 18 - A python RPG

morriswalters
Posts: 6936
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The Thread To Remind Me We're Living In The Future

Postby morriswalters » Wed Jun 07, 2017 9:53 am UTC

sardia wrote:Wait, don't you think we could get human level intelligence from a computer in less than 10 years?
Is that a meaningful question? Beating a human at Go is human-level intelligence at playing Go. A computer can't yet be given a general-purpose goal, like "survive when the power goes off". It doesn't as yet have the mobility to do so.

User avatar
ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Thread To Remind Me We're Living In The Future

Postby ucim » Wed Jun 07, 2017 1:25 pm UTC

KnightExemplar wrote:Define "learn". A crude linear regression "learns" the more data you feed it. Indeed, the Poker AI was basically a statistical methodology... one that simply searched for a Nash Equalibrium to the Poker game. The issue I'm having with your statements... is that you've seen demonstrations of what is usually called "Weak AI".
Yes, weak AI is all around us. It's a smart hammer, but it's still a hammer - a tool that needs to be wielded. I'm not mistaking ENIAC for a smartphone. But having seen ENIAC and telephone lines, I'm predicting the internet.

KnightExemplar wrote:Their code never changes, but due to the nature of Statistics, the more data you feed them the "better" they get.
The line between code and data is not sharp.

KnightExemplar wrote:Arguably, the Neural Network and the Genetic Algorithm both have their "code" change. Neural Networks automatically tune their "neurons" to create pathways that effectively become the network's logic. However, these self-learning AIs have huge flaws. If "overtrained" on a set, the intelligence fails to generalize to newer problems.

Furthermore...
You are describing problems that humans have too. It takes five to twenty years to train a human, including a fair amount of time under the supervision of specialists, and arguably some fail their training even after seventy years. That doesn't stop them from becoming {political snark}.

I'm quite aware of the dogs in Google's dream. I'm not saying strong AI is here. I'm saying that what is here convinces me that strong AI is possible, and given the speed (and coolness) of progress, we may well see it arrive. Because it's gradual (and full of bugs) we won't notice it when it does. But by then it will be too late.

And we'll be able to just pull the plug on it as easily as we can turn off the internet.

KnightExemplar wrote:Honestly, with all the work that Neural Nets require, I'd bet you that the typical programmer would do better to just program a known algorithm than try to train up a Neural Net on their own.
...and with all the work that parenting requires, a typical homeowner is better off taking the garbage out themselves than trying to raise a kid to do it. Somehow, we still keep producing kids. Because it's fun.

And that's why we'll produce AI. And then... well, there you are.

wearing a True Scotsman hat, jseah wrote:I will accept that we have obtained True AI when AIs can do this for arbitrary input data and desired output pairs.
Fair enough. I'm not saying we've obtained it. I'm saying it's possible (because the groundwork is pointing in that direction and there are no theoretical barriers to it), and I'm hearing that it's not possible because it's not here yet.

Jose
edit: fix misattribution. And then fix the fix. M*stard!
Last edited by ucim on Thu Jun 08, 2017 3:21 am UTC, edited 2 times in total.
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
orthogon
Posts: 2720
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

Re: The Thread To Remind Me We're Living In The Future

Postby orthogon » Wed Jun 07, 2017 1:34 pm UTC

commodorejohn wrote:
We might not understand the problem, but we can ask the AI to understand it for us.

Have you ever tried to get someone to do something for you when you can't explain and in fact don't even really know what you want? No, you haven't, because that would be stupid.

Is this meant to be a rhetorical question, or sarcastic? We ask people to help us solve a problem that we don't fully understand and can't fully explain all the time. We go to people who are experts or specialists in the relevant field, and they help us to understand what we need before they even start doing it for us. Often what we're buying is domain-specific knowledge, as in the case of lawyers, solicitors, IT security experts and plumbers, but it's often simply general intelligence. That's a good proportion of what management consultants offer: their people are smarter than you and can apply their intelligence to work out what the problem is that you need solving.
xtifr wrote:... and orthogon merely sounds undecided.

User avatar
SDK
Posts: 556
Joined: Thu May 22, 2014 7:40 pm UTC
Location: Canada

Re: The Thread To Remind Me We're Living In The Future

Postby SDK » Wed Jun 07, 2017 3:32 pm UTC

ucim wrote:
commodorejohn wrote:They may be very fast at being dumb in ways that are useful, but that doesn't make them not dumb.
That is the very definition of "not dumb".

This is definitely not true. You're not dumb, yet you're much slower than a lot of things that are. Being fast is not the same thing as thinking. I accept that being fast enough in the right ways can result in being "not dumb", but it doesn't require great speed, it requires great connections and great programming (and perhaps requires even more that we don't yet understand).

I personally don't dispute that it's physically possible for a hypothetical AI to exist that surpasses our intelligence in every way. I'm just skeptical that it will arise naturally out of a dumb program, or that we will create it accidentally. Even in humans the tendency is towards greater and greater degrees of specialization. I'm not sure why the AIs we're currently creating would go down the path of generalization when so far that has clearly not been the case. I don't see a good business reason to do so (a group of specialized AIs working together would be easier to create and do a better job than one general AI), and I don't think your narrative of an ever-evolving partner is realistic. I don't think you realize just how much you are assuming when you make your claims so confidently.
The biggest number (63 quintillion googols in debt)

User avatar
ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Thread To Remind Me We're Living In The Future

Postby ucim » Wed Jun 07, 2017 8:55 pm UTC

SDK wrote:...I accept that being fast enough in the right ways can result in being "not dumb", but it doesn't require great speed, it requires great connections and great programming...
Ok, fair enough. I was engaging in a bit of parabola. But still, given the same connections, faster is smarter. Given a little bit of coordination, more is smarter. And computers can do "more" and "fast" better than we can.
SDK wrote:I personally don't dispute that it's physically possible for a hypothetical AI to exist that surpasses our intelligence in every way. I'm just skeptical that it will arise naturally out of a dumb program, or that we will create it accidentally.
No, it won't arise "naturally out of a dumb program" (but it may arise naturally out of a billion dumb programs interacting). We won't create it "accidentally" (but we will discover it accidentally as we deliberately create smarter machines and put them in charge of more of our decisionmaking).

That is to say, we will (or China will) create AI. We may even be a component of this AI. We'll think this is a good thing every step of the way. We'll look back and say "oops".

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
SDK
Posts: 556
Joined: Thu May 22, 2014 7:40 pm UTC
Location: Canada

Re: The Thread To Remind Me We're Living In The Future

Postby SDK » Wed Jun 07, 2017 9:07 pm UTC

I do understand what you're saying. I was there myself a while back. I just don't think you've actually backed up that this is inevitable. It's possible, but that's not the same thing. You don't know what physical systems or what connections are required for true AI. You're just stating that it will spontaneously arise once the system is complicated enough. How do you know that to be true?
The biggest number (63 quintillion googols in debt)

User avatar
ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Thread To Remind Me We're Living In The Future

Postby ucim » Wed Jun 07, 2017 9:29 pm UTC

SDK wrote:You're just stating that it will spontaneously arise once the system is complicated enough.
No, I'm stating that by designing systems that adapt, learn, and figure out what they need to do to accomplish the tasks they are set, and by succeeding at this for sufficiently abstract tasks, we will have created what can only reasonably be called AI.

AI (or any kind of I) isn't a bright-line threshold. It's a matter of degree. Humans occupy some range on that scale (however you choose to measure it).

If we can make a machine that has any intelligence in it, and if we (perhaps assisted by our machines) can continually improve these machines, and if there is no conceptual barrier to intelligence beyond a certain point (less than or near our level), then AI that surpasses human level is pretty much inevitable. It becomes reasonable to ask how we can successfully enslave an entity that is smarter than us.

The first premise has already happened. The fact is, we keep moving the goalposts to save our ego.
The second is ongoing. I don't think there's any dispute with that.
The third premise I cannot prove, but I have no reason to doubt it.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

