AI discussion from The Darker Side of the News


commodorejohn
Posts: 961
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA
Contact:

AI discussion from The Darker Side of the News

Postby commodorejohn » Sun Apr 02, 2017 10:20 pm UTC

The Great Hippo wrote:Isn't that effectively what the brain is? -- an extremely complex, extremely interdependent tool that regulates internal functions while performing pattern-matching operations?

That's part of what the brain is, certainly. Maybe it's all of what the brain is. Problem is, we don't have a full enough understanding of it to say.

The Great Hippo wrote:So, would it be fair to say that you think the difference between AI and a human brain is that you can use a human brain to better understand a human brain?

Yes! That's it in a nutshell.

The Great Hippo wrote:I'm not setting you up for a gotcha, there; I'm just trying to understand what you think a human mind can do that a computer can't do. You're using a lot of abstractions, talking about examining answers from outside the scope of the question -- but we can program computers to examine the question, too.

Can we, now? That's news to me.

The Great Hippo wrote:It sounds like you think the difference is that brains think, while computers follow their programming -- but we're following our programming, too. It just happens that our programmer is a hyper-aggressive four billion year old optimization process.

You can call it what you like (but that's straying out of the more practical side of the question and starting to verge on metaphysical whatsit.) And again, I'll admit that I can't say for certain that that's impossible. But to the best of my knowledge, there's yet to be a man-made system that exhibits that kind of capacity for introspection, or even seriously indicates that it'll be coming any time soon.

The Great Hippo wrote:Aren't CAPTCHAs kind of proving my point, though? The ever-escalating arms race between questions designed to prove you're human and programs that can beat them indicates that the space of what humans can do and computers can't do is ever-narrowing.

I think it more indicates that they've been picking through the low-hanging fruit where "human tests" are concerned and have been gradually moving up to the stuff in the higher branches. It's impressive in its way, but it's still just glorified grep.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

User avatar
CorruptUser
Posts: 8829
Joined: Fri Nov 06, 2009 10:12 pm UTC

Re: The Darker Side of the News

Postby CorruptUser » Sun Apr 02, 2017 10:23 pm UTC

The Great Hippo wrote:(Sidenote: Idea for a CAPTCHA! Present a problem only a computer could solve. Only wrong answers let you through!)


Better idea: ask them to identify whether they want a puppy, a pretty flower from their sweetie, or a large properly formatted data file.

And no, the puppy is not mechanical in any way, it is the bad kind of puppy.

User avatar
The Great Hippo
Swans ARE SHARP
Posts: 6861
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: The Darker Side of the News

Postby The Great Hippo » Sun Apr 02, 2017 10:40 pm UTC

Liri wrote:That... is pretty easy to distinguish. I'm sure in some number of years it won't be, but not now. I think the biggest issue, by far, that I hear in synthetic speech programs (including this one) is inappropriate vowel duration.
Some of the samples are hard for me to distinguish from real voices -- but to be fair, I have a lot of difficulty parsing speech -- so maybe the issue is I'm just bad at spoken language.
commodorejohn wrote:That's part of what the brain is, certainly. Maybe it's all of what the brain is. Problem is, we don't have a full enough understanding of it to say.
I don't know how you can get much more vague than that, though? Like, we agree that a brain is a thing in a human skull that lets us think, right? We don't need to have a full understanding of the brain to say that. Similarly, I don't think we need a full understanding of the brain to recognize that it allows us to think via some sort of pattern-matching process.

I mean, it's kind of tricky because obviously you can't just separate the human brain from the human body -- these systems are deeply co-dependent, and thought itself is an emergent property that comes from everything inside us, not just the gray spongy stuff between our ears. But I don't think it's being presumptuous to say that the brain receives inputs, matches them to patterns, and produces outputs based on those matches. How much more basic can we get than that?
commodorejohn wrote:Yes! That's it in a nutshell.
Okay, and I think a big problem with that is -- and mind you, it was my word, not yours, so this is not me coming down on you -- the word 'understand' is very wishy-washy. What do we mean when we say we can use the human brain to 'understand' things? So long as we're not willing to get specific and concrete about what that means, then yeah, of course a computer can't 'understand' things -- because we don't even understand what 'understand' means!

Computers excel at accomplishing concrete goals with clear parameters for success. When we can't define clear goals, that's when they start to struggle. Notably, a lot of these unclear goals tend to involve simulating or satisfying some sort of human behavior -- because we're really fuzzy thinkers.
commodorejohn wrote:I think it more indicates that they've been picking through the low-hanging fruit where "human tests" are concerned and have been gradually moving up to the stuff in the higher branches. It's impressive in its way, but it's still just glorified grep.
So you think there's going to come a point when CAPTCHAs cannot be broken by computers? Does such a CAPTCHA already exist?

User avatar
Liri
Healthy non-floating pooper reporting for doodie.
Posts: 948
Joined: Wed Oct 15, 2014 8:11 pm UTC
Contact:

Re: The Darker Side of the News

Postby Liri » Sun Apr 02, 2017 11:06 pm UTC

Hippo, your line of thinking seems roughly akin to a New Yorker article I just read about the quest for anti-aging therapies by Silicon Valley folks. Namely, that they perceive biological systems as "simple" input-output machines with readable, writable, and "hackable" code. That's reducing it down a lot.

A lot of our complexity comes from weird evolutionary detours that we get stuck with and have to deal with in creative ways. Our intelligence happened to be an adaptive response to overcome some limitation. This is much more hypothetical, but a "perfectly evolved" being wouldn't need intelligence, per se.

You're right though that we are quite fuzzy. That's sort of the argument against what you're saying.
Last edited by Liri on Sun Apr 02, 2017 11:08 pm UTC, edited 1 time in total.
He wondered could you eat the mushrooms, would you die, do you care.

User avatar
ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Darker Side of the News

Postby ucim » Sun Apr 02, 2017 11:07 pm UTC

The Great Hippo wrote:So, would it be fair to say that you think the difference between AI and a human brain is that you can use a human brain to better understand a human brain?
We are already using AI to better understand a human brain, and (more scary and to the point) human behavior. It's getting good. Eight years ago it couldn't touch politics. Today it may have been a factor in giving Trump the presidency (of the US) and Brexit. Eight years from now, while I don't see Skynet yet, I do see the networked AI "understanding" us, perhaps individually, where "understanding" is the abstraction used in between input (observing our activity via charge cards, click tracking, and "security" cameras) and output (Facebook feeds, customized news, discount coupons, and Tinder dates).

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

commodorejohn
Posts: 961
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA
Contact:

Re: The Darker Side of the News

Postby commodorejohn » Sun Apr 02, 2017 11:17 pm UTC

The Great Hippo wrote:I don't know how you can get much more vague than that, though? Like, we agree that a brain is a thing in a human skull that lets us think, right? We don't need to have a full understanding of the brain to say that. Similarly, I don't think we need a full understanding of the brain to recognize that it allows us to think via some sort of pattern-matching process.

But we do need a fuller understanding of the brain to say that it's only some sort of pattern-matching process.

The Great Hippo wrote:Okay, and I think a big problem with that is -- and mind you, it was my word, not yours, so this is not me coming down on you -- the word 'understand' is very wishy-washy. What do we mean when we say we can use the human brain to 'understand' things? So long as we're not willing to get specific and concrete about what that means, then yeah, of course a computer can't 'understand' things -- because we don't even understand what 'understand' means!

Okay, sure - but we've already been discussing what we mean by it in this context. As you yourself say next:
Computers excel at accomplishing concrete goals with clear parameters for success. When we can't define clear goals, that's when they start to struggle.

This is exactly my point. The ability to define goals and parameters, and to analyze methods in light of them, is precisely what we've been talking about.

The Great Hippo wrote:So you think there's going to come a point when CAPTCHAs cannot be broken by computers?

That's a good question, and I'm not sure. I'd imagine before we ever get to that hypothetical point, we'll get to the point where they're impractical for other reasons. It's already hard for me to deal with those Google CAPTCHAs because it's never clear whether they consider the signpost part of the sign, or whether a gate leading to a hallway that may hypothetically contain a store counts as a storefront - imagine what it'll be like when the question is "which of these images makes you feel melancholy?" or "why aren't you helping this turtle?"
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

User avatar
The Great Hippo
Swans ARE SHARP
Posts: 6861
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: The Darker Side of the News

Postby The Great Hippo » Mon Apr 03, 2017 12:34 am UTC

Liri wrote:Hippo, your line of thinking seems roughly akin to a New Yorker article I just read about the quest for anti-aging therapies by silicon valley folks. Namely, that they perceive biological systems as being "simple" input-output and having a readable, writable, and "hackable" code. That's reducing it down a lot.
I don't think it's simple, and I certainly don't think it's readable, writable, or hackable.

I emphasized how we're a product of a four-billion-year-old optimization process because I wanted to make it clear that I think the idea of understanding -- never mind 'editing' -- our 'source-code' is far beyond laughable; it presumes that such a 'source-code' even exists. What we call biological 'systems' are overlapping structures so deeply entwined that it's nearly impossible to accurately describe them in isolation. Once removed from the context of a human body, a brain is no more a functioning brain than a wheel is a functioning Ferrari 488GTB.

Even 'simplifying' us down to DNA doesn't accomplish nearly as much as we think: A sequence of genetic code might express itself one way in a certain environment, and a completely different way in another. It's like trying to write a program in a language where all your tokens, built-ins, and libraries are shuffled randomly with one another at run time.

This shit is complex, yo -- and I am definitely aware of that.
Liri wrote:You're right though that we are quite fuzzy. That's sort of the argument against what you're saying.
My argument is that the reason AI hasn't met the goal we've set for it is because we have no idea what the goal actually is. How does that contradict what I'm saying?
commodorejohn wrote:This is exactly my point. The ability to define goals and parameters, and to analyze methods in light of them, is precisely what we've been talking about.
But computers can do that; it's simple to write a program that defines its own goals. I can write a program that defines a goal as a random number; I can then have this program try a lot of different things to reach this random number. I can even have this program keep track of which methods worked best at accomplishing this goal and use them preferentially in the future.
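
To make that concrete, here's a toy Python sketch of the sort of thing I mean -- the "methods" and the scoring scheme are completely made up for illustration, so treat it as a doodle rather than a real program:

Code: Select all
import random

def pick_goal():
    return random.randint(0, 100)

# A few arbitrary "methods" for moving a value toward the goal.
METHODS = {
    "step_up": lambda v: v + 1,
    "step_down": lambda v: v - 1,
    "double": lambda v: v * 2,
    "halve": lambda v: v // 2,
}

scores = {name: 1.0 for name in METHODS}  # remembered success of each method

def solve(goal, tries=1000):
    value = 0
    for _ in range(tries):
        # prefer methods that have worked well before
        name = random.choices(list(METHODS), weights=[scores[n] for n in METHODS])[0]
        candidate = METHODS[name](value)
        if abs(goal - candidate) < abs(goal - value):  # did that move us closer?
            scores[name] += 1.0                        # remember that it worked
            value = candidate
        if value == goal:
            return True
    return False

for _ in range(5):
    goal = pick_goal()
    print(goal, "reached" if solve(goal) else "not reached")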

Now, I can't have the computer spontaneously decide that it wants to reach a random number on its own, sure; but why would I want to? Computers can't create their own goals until you define goal-creation as a goal. That's because they're tools; they don't have goals in and of themselves -- they're not supposed to. We create them with goals in mind and they work to accomplish them.

Like, what do you think fully-realized AI would actually look like? Can you give me a concrete example of what AI should be able to do that is impossible with currently-existing technology?

User avatar
Liri
Healthy non-floating pooper reporting for doodie.
Posts: 948
Joined: Wed Oct 15, 2014 8:11 pm UTC
Contact:

Re: The Darker Side of the News

Postby Liri » Mon Apr 03, 2017 12:55 am UTC

The Great Hippo wrote:
Liri wrote:You're right though that we are quite fuzzy. That's sort of the argument against what you're saying.
My argument is that the reason AI hasn't met the goal we've set for it is because we have no idea what the goal actually is. How does that contradict what I'm saying?

My point was that we could well require a lot of that fuzziness to have what we consider intelligence. And I'm guessing programming-in fuzziness would be rather difficult. Like, I don't think we can have an AI that can do research for us without it also getting a bit moody or flaking off from work occasionally.
He wondered could you eat the mushrooms, would you die, do you care.

User avatar
The Great Hippo
Swans ARE SHARP
Posts: 6861
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: The Darker Side of the News

Postby The Great Hippo » Mon Apr 03, 2017 12:59 am UTC

Liri wrote:My point was that we could well require a lot of that fuzziness to have what we consider intelligence. And I'm guessing programming-in fuzziness would be rather difficult. Like, I don't think we can have an AI that can do research for us without it also getting a bit moody or flaking off from work occasionally.
--not just that, but our fuzzy thinking is what stops us from making fuzzy AI. Because you can't program fuzzy thinking if you don't understand fuzzy thinking... and you're not going to understand fuzzy thinking when you're working with a fuzzy brain.

Ironically, computers may one day be able to think clearly about our fuzzy thinking, and produce fuzzy AI based on it. But why would we even want that? Like, this is the scenario I want answered: You're alone at home. You boot up your computer; you tell it to do something for you. What task can you give it that it can't accomplish with our current technology level -- but another human could?

User avatar
ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Darker Side of the News

Postby ucim » Mon Apr 03, 2017 1:35 am UTC

The Great Hippo wrote:Computers can't create their own goals until you define goal-creation as a goal. That's because they're tools; they don't have goals in and of themselves -- they're not supposed to. We create them with goals in mind and they work to accomplish them.
But computer networks can. We program individual computers to do specific things, but we let them interact with each other in whatever way they like. Sure, we define the communications protocols, but we have no idea how a group of asynchronous networked computers is going to arrive at a decision.

AI is not going to be some machine we (thought we) programmed, that rises up and decides to take over humanity.

Rather, we are going to let networked computers make more and more of our decisions for us, they will get better and better at it, enough of us will like the results more and more, and a feedback loop will be created. We'll wake up one day and realize that AI has been all around us for years, and it's smarter than we are. And the AI will know this before we do.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
sardia
Posts: 5843
Joined: Sat Apr 03, 2010 3:39 am UTC

Re: The Darker Side of the News

Postby sardia » Mon Apr 03, 2017 2:13 am UTC

ucim wrote:
The Great Hippo wrote:Computers can't create their own goals until you define goal-creation as a goal. That's because they're tools; they don't have goals in and of themselves -- they're not supposed to. We create them with goals in mind and they work to accomplish them.
But computer networks can. We program individual computers to do specific things, but we let them interact with each other in whatever way they like. Sure, we define the communications protocols, but we have no idea how a group of asynchronous networked computers is going to arrive at a decision.

AI is not going to be some machine we (thought we) programmed, that rises up and decides to take over humanity.

Rather, we are going to let networked computers make more and more of our decisions for us, they will get better and better at it, enough of us will like the results more and more, and a feedback loop will be created. We'll wake up one day and realize that AI has been all around us for years, and it's smarter than we are. And the AI will know this before we do.

Jose

Speaking of programming advantages, Uber noticed that it could make more money by manipulating drivers into working longer hours for less pay.
https://www.nytimes.com/interactive/201 ... ricks.html
as Uber talks up its determination to treat drivers more humanely, it is engaged in an extraordinary behind-the-scenes experiment in behavioral science to manipulate them in the service of its corporate growth — an effort whose dimensions became evident in interviews with several dozen current and former Uber officials, drivers and social scientists, as well as a review of behavioral research.

Uber’s innovations reflect the changing ways companies are managing workers amid the rise of the freelance-based “gig economy.” Its drivers are officially independent business owners rather than traditional employees with set schedules. This allows Uber to minimize labor costs, but means it cannot compel drivers to show up at a specific place and time. And this lack of control can wreak havoc on a service whose goal is to seamlessly transport passengers whenever and wherever they want.

Uber helps solve this fundamental problem by using psychological inducements and other techniques unearthed by social science to influence when, where and how long drivers work. It’s a quest for a perfectly efficient system: a balance between rider demand and driver supply at the lowest cost to passengers and the company.
Most of it involves gamification of the app, loss aversion, and queuing up rides as easily as possible so that drivers will keep working without thinking about whether it's worth the cost.

commodorejohn
Posts: 961
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA
Contact:

Re: The Darker Side of the News

Postby commodorejohn » Mon Apr 03, 2017 4:40 am UTC

The Great Hippo wrote:But computers can do that; it's simple to write a program that defines its own goals. I can write a program that defines a goal as a random number; I can then have this program try to do a lot of different things to try and reach this random number. I can even have this program remember which methods worked best to accomplish this goal, remember those methods, and use them preferentially in the future.

I...I don't even. What. Somehow we regressed from discussing the possibility of computers doing research, innovation, and complex judgement calls to talking about writing a program to choose a random number and then count to it?

I mean, if you can point me to a not-absurdly-trivial example of a program that's capable of assessing its own goals on a meta level and deciding whether they need to be adjusted, I'd be seriously interested in hearing about it. Or, hell, I'd even be interested to hear about a system that can examine methods for reaching its goals and come up with new ones that it hasn't been pre-programmed with. But this is just silliness.

The Great Hippo wrote:Now, I can't have the computer spontaneously decide that it wants to reach a random number on its own, sure; but why would I want to? Computers can't create their own goals until you define goal-creation as a goal. That's because they're tools; they don't have goals in and of themselves -- they're not supposed to. We create them with goals in mind and they work to accomplish them.

I mean, I agree. (Though, again, I'm skeptical about the prospect of "defining goal-creation as a goal," and I'd be seriously curious to hear about any system that purports to do that.) But that's the exact opposite of what you were arguing earlier.

The Great Hippo wrote:Like, what do you think fully-realized AI would actually look like? Can you give me a concrete example of what AI should be able to do that is impossible with currently-existing technology?

I...think you may have been seriously confused on what my point was. My point wasn't that AI research is useless unless it produces introspective, autonomous machine entities. My point was that as long as computer programs require human intervention to assign goals and criteria, and cannot determine these on their own, there will always be things that they're incapable of fully replacing humans at.

ucim wrote:Rather, we are going to let networked computers make more and more of our decisions for us, they will get better and better at it, enough of us will like the results more and more, and a feedback loop will be created. We'll wake up one day and realize that AI has been all around us for years, and it's smarter than we are. And the AI will know this before we do.

Yeah, I'll believe that when I see it.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

User avatar
The Great Hippo
Swans ARE SHARP
Posts: 6861
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: The Darker Side of the News

Postby The Great Hippo » Mon Apr 03, 2017 5:23 am UTC

commodorejohn wrote:I...I don't even. What. Somehow we regressed from discussing the possibility of computers doing research, innovation, and complex judgement calls to talking about writing a program to choose a random number and then count to it?
Sorry; I picked that example because it was extremely simple! I can give you a much more complex example, though: There's neural-net software people build to play video games. It accomplishes research (by playing the game), innovation (finding and exploiting glitches no human player knew existed -- indeed, in some cases, the glitches cannot be replicated by a human player!), and complex judgment calls (by building an extremely complex network of nodes that serve as a sort of enormous, incredibly dense decision tree).
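
The real systems are vastly more sophisticated, but the core trial-and-error loop is surprisingly small. Here's a toy Python sketch of the same shape -- note that play() is just a stand-in scoring function I invented, not an actual game, and plain hill-climbing is much cruder than what a real neural-net trainer does:

Code: Select all
import random

def play(policy):
    """Stand-in for 'running the game': score a policy on a toy task.
    Here the 'game' just rewards policies that are close to a hidden target."""
    target = [0.2, -0.5, 0.9, 0.1]
    return -sum((p - t) ** 2 for p, t in zip(policy, target))

def mutate(policy, scale=0.1):
    """Nudge every parameter a little -- the 'try something new' step."""
    return [p + random.gauss(0, scale) for p in policy]

best = [0.0, 0.0, 0.0, 0.0]
best_score = play(best)
for generation in range(2000):
    challenger = mutate(best)
    score = play(challenger)
    if score > best_score:       # keep whatever plays better
        best, best_score = challenger, score

print("best policy:", [round(p, 2) for p in best], "score:", round(best_score, 4))
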
commodorejohn wrote:I mean, if you can point me to a not-absurdly-trivial example of a program that's capable of assessing its own goals on a meta level and deciding whether they need to be adjusted, I'd be seriously interested in hearing about it. Or, hell, I'd even be interested to hear about a system that can examine methods for reaching its goals and come up with new ones that it hasn't been pre-programmed with. But this is just silliness.
Does the video-game example I've given above qualify? It certainly comes up with new methods to reach its goals!

(In fact, there's one particularly funny case: an AI playing Tetris would, upon recognizing it was about to lose, pause the game and refuse to unpause.)
commodorejohn wrote:I...think you may have been seriously confused on what my point was. My point wasn't that AI research is useless unless it produces introspective, autonomous machine entities. My point was that as long as computer programs require human intervention to assign goals and criteria, and cannot determine these on their own, there will always be things that they're incapable of fully replacing humans at.
No, when I said "should be able to do", I was asking what concrete tasks AI must perform to convince you that it's capable of research and innovation. I'd still like to know the answer, btw!

Because until you can actually concretely define your expectations for AI, AI is never going to meet your expectations. If you can't tell me what you think something must do to qualify as 'AI' (strong or otherwise), then of course nothing will reach the bar you've set -- you don't even know what that bar is.

elasto
Posts: 3125
Joined: Mon May 10, 2010 1:53 am UTC

Re: The Darker Side of the News

Postby elasto » Mon Apr 03, 2017 12:19 pm UTC

The Great Hippo wrote:No, when I said "should be able to do", I was asking what concrete tasks AI must perform to convince you that it's capable of research and innovation. I'd still like to know the answer, btw!

Because until you can actually concretely define your expectations for AI, AI is never going to meet your expectations. If you can't tell me what you think something must do to qualify as 'AI' (strong or otherwise), then of course nothing will reach the bar you've set -- you don't even know what that bar is.

Exactly!

At one point being able to answer general knowledge quizzes would have been an example of almost unimaginably futuristic AI. But now that one has done so (and beaten the best human in the world at it to boot), there's a lot of 'yes... but...' going on.

Just as people are disappointed when they find out how a magic trick is done, when AI achieves amazing stuff it's dismissed as ordinary and boring because 'we know how it's done', and the goalposts get moved. The same thing will happen when AI can drive cars better than people (a day which has probably already arrived).

AI can't win.

Chen
Posts: 5274
Joined: Fri Jul 25, 2008 6:53 pm UTC
Location: Montreal

Re: The Darker Side of the News

Postby Chen » Mon Apr 03, 2017 1:22 pm UTC

The Great Hippo wrote:Because until you can actually concretely define your expectations for AI, AI is never going to meet your expectations. If you can't tell me what you think something must do to qualify as 'AI' (strong or otherwise), then of course nothing will reach the bar you've set -- you don't even know what that bar is.


I don't know why we need to get into that level of detail. At a high level (and the initial point of the discussion, I think), the question was AI replacing jobs. I can see plenty of jobs being replaced by AI or other automation. I mean, consider fast food restaurant cashiers. Already there are kiosks at a number of McDonalds and A&Ws around here where you can order and pay without interacting with anyone. This is a type of job that seems easy to replace. The cooks perhaps less so, though there are burger-making robots out there already. An interesting case was the recently mentioned one where a number of insurance workers were replaced by a Watson AI computer system (Link). When I look at jobs I've done in the past, a number of them were data analysis. These could likely be replaced by sufficiently advanced AI/automation. Other jobs, such as requirements determination and interpretation with customers, start becoming more difficult to fully automate. At an even higher level, project management and direct dealing with suppliers and customers to ensure things are finished properly and on time become even more difficult.

I don't necessarily think these latter ones will never be replaced. But we're not that close to those yet. Similarly, as I had mentioned in another thread (or maybe this one), some jobs like childcare (daycare) are ones where people tend to want other humans doing the work. Though I imagine even if that weren't the case, a child-watching/caring robot is probably a really difficult one to make anyway.

User avatar
Zohar
COMMANDER PORN
Posts: 7547
Joined: Fri Apr 27, 2007 8:45 pm UTC
Location: Denver

Re: The Darker Side of the News

Postby Zohar » Mon Apr 03, 2017 1:40 pm UTC

It's not just about replacing one person's work with one AI. If I'm part of a team of 10 people, and a piece of software comes out that cuts our workload in half, then five people will go home, even though the software possibly can't replace all of what I do.
Mighty Jalapeno: "See, Zohar agrees, and he's nice to people."
SecondTalon: "Still better looking than Jesus."

Not how I say my name

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 25815
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: The Darker Side of the News

Postby gmalivuk » Mon Apr 03, 2017 1:45 pm UTC

commodorejohn wrote:I'll believe that when I see it, and not before. Nobody has ever yet (to my knowledge) created a system that did more than automate the grunt work of science and engineering, and I've yet to see any compelling reason to believe that that will change any time soon. People have insisted that the change in that is just around the corner!!! for decades now. Obviously I can't say for certain that it's impossible, but I'm not going to hold my breath.
Automating grunt work still means replacing jobs. If x% of your job is grunt work (i.e. work that a computer can do), then x% of the people in your field can be replaced with computers, and the rest can take up the real work those people were doing because their time is now freed from the grunt work.

We'll still need people in $profession, but we won't need as many of them, and that's foreseeably true of pretty nearly all (broadly-defined) professions. [ninja'd on this point by Zohar]

Like, sure, there may always remain a market for human artists (if only for the novelty or prestige of paying for something human-made), but that doesn't mean everyone who wants a 15-second commercial jingle is forever going to pay a human to write it. And even if synthesized voices still sound different from the real thing, they're already good enough to replace humans in many situations. (I suspect an artificial voice could replace a human in every situation where there's a static scripted statement and gleaning emotional cues from things like intonation isn't important.)
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

User avatar
ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: AI discussion from The Darker Side of the News

Postby ucim » Mon Apr 03, 2017 6:42 pm UTC

elasto wrote:AI can't win.
But it will have won.

As AI becomes good enough to replace {job}, it will be able to use that ability to become good enough to replace {JOB}... AI isn't standing still. It is evolving much faster than meatbags.

There's a difference between saying "we don't yet have the technology to build an airplane, and the kites we've made keep crashing", and "flying is impossible unless you're a bird".

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
The Great Hippo
Swans ARE SHARP
Posts: 6861
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: AI discussion from The Darker Side of the News

Postby The Great Hippo » Mon Apr 03, 2017 10:30 pm UTC

If you want computers building new technology, it's easy: Hook up one of these neural nets to a 3D printer, give it some sort of automated mechanism by which it could test the printer's outputs ("every print-out is dropped 30 feet, then scanned for structural damage"), then set some goals ("the goal is a print-out that receives no structural damage after falling 30 feet"). Voila; you now have a computer that is developing and testing new technology.
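
Sketched in Python, that loop is about this simple -- with the big caveat that drop_test() is a made-up stand-in for the real print-and-drop rig, and the two "design parameters" are invented purely for illustration:

Code: Select all
import random

def drop_test(design):
    """Stand-in for the automated test rig. In reality this would print the part,
    drop it 30 feet, and scan it; here it's a made-up scoring rule that rewards
    thicker walls and more ribs, with some noise thrown in."""
    wall, ribs = design
    damage = max(0.0, 10.0 - 2.0 * wall - 0.5 * ribs) + random.uniform(0.0, 0.5)
    cost = wall + 0.2 * ribs
    return damage, cost

best = (1.0, 2)                       # (wall thickness in mm, number of ribs)
best_damage, best_cost = drop_test(best)
for _ in range(500):
    candidate = (max(0.1, best[0] + random.uniform(-0.5, 0.5)),
                 max(0, best[1] + random.choice((-1, 0, 1))))
    damage, cost = drop_test(candidate)
    # goal: least structural damage; break ties with cheaper designs
    if (damage, cost) < (best_damage, best_cost):
        best, best_damage, best_cost = candidate, damage, cost

print("best design:", best, "damage:", round(best_damage, 3), "cost:", round(best_cost, 3))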

I mean, it's gonna take a lot of time, and a lot of resources; humans benefit from the enormous wealth of data evolution gave us -- things like neural nets have to accumulate similar amounts of data in real time. They acquire this data much faster than the mechanisms of biology do (they're not saddled with waiting for mutations in the next generation; they can just mutate their models right now and try again), but that's still four billion years' worth of data they've got to catch up on.

Still, it's incredibly impressive how far they've come -- the toughest problems for AI to crack aren't things like "how can we make a better airplane"; they're problems like "how can we replicate human behavior". Because, as noted previously, humans are fuzzy thinkers -- and it's hard to build machines to simulate fuzzy thinking when you don't even understand that fuzzy thinking yourself.

The rest is spoiler'd, because it's a rant, and tangential:
Spoiler:
When humans write code, they often seek to achieve a certain elegance and readability; this is so they (and anyone else) can easily understand the code later on. They'll create divisions between data-structures that aren't really optimal, or don't actually represent what's truly going on -- but these abstractions help make the code more digestible and expandable. Similarly, biologists use taxonomy to divide organisms into nice, neat little groups so it's easier to talk about them and study them -- even if these groups aren't always precisely reflective of reality (an octopus is a mollusk, but it doesn't have a shell).

When computers write code, they typically write purely for optimization. Writing for optimization decreases the legibility of code; it makes things far more complicated -- because it doesn't enforce artificial boundaries. As an example: as a weird quirk of the Python programming language (in some CPython versions, anyway), "x == 3" is actually slightly slower than "x in (3,)". That is, it's slightly quicker to check and see if x's value is INSIDE A TUPLE BY ITSELF rather than if x's value EQUALS something. Of course, no human is ever going to replace all their equality checks with "is it in this 1-dimensional tuple" checks (I hope). But a computer? A computer will do that shit in a heartbeat.
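
Don't take my word for that, though -- a two-line timeit check will tell you what your own interpreter does; the gap, if it shows up at all, is a few nanoseconds per check and varies between CPython versions:

Code: Select all
import timeit

# Both forms, one million runs each; compare the two numbers on your own interpreter.
print("x == 3    :", timeit.timeit("x == 3", setup="x = 1"))
print("x in (3,) :", timeit.timeit("x in (3,)", setup="x = 1"))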

The reason computer-based solutions look so fucking weird to us is that they're coded for optimization, not abstraction and elegance. When you watch a neural net play a video-game, you can typically tell -- because while human players will rely on elegant abstractions and rules-of-thumb, neural nets will just pursue the most optimal strategies. You see this same problem with genetics: Part of why understanding DNA is so hard is that it's coded for optimization, not elegance. Information regarding the expression of a gene isn't even necessarily contained in DNA itself; it can be part of the environment ("this gene only expresses if the mother has lots of vitamin D in her system prior to conception"). In fact, DNA is probably one of the best examples we have of what AI solutions will eventually look like -- near-incomprehensible stretches of confusing, self-contradictory code filled with 'junk' data that may or may not be incredibly important (change it and find out! xD).

And that's going to be an interesting problem: As AI starts writing code -- as it starts developing technologies -- as it starts taking a larger and larger role in our lives -- we're going to start realizing just how incomprehensible and alien its solutions are to us. Code an AI to cure cancer: Suddenly, it starts pouring all of its investments into turnip-crops. WTF? Why is it doing that?! Who knows! You can try to figure out why, but it's going to be like trying to figure out why we have an appendix using no reference besides our DNA. You can program the AI to explain to us why -- but it's going to be like a human trying to find a way to explain quantum physics to a dog.

The incredibly roundabout point I'm making here is this: I suspect that in the next decade or so, as we allow AI to make more and more decisions for us, we are going to start FLIPPING THE FUCK OUT once we realize we don't actually understand AI's decision-making process. Which is a shame -- because the primary reason we don't understand it is on account of it being way, way better than our own.


EDIT: Also, since I'm just throwing rants around: You know what the actual threat of AI is? It isn't AI going all SkyNet and deciding we need to be nuked; it's people allowing AI to solve problems without giving it sufficient constraints. Give an AI access to a bank account with a billion dollars and tell it to "go cure cancer"; it crunches the numbers and realizes that it can accomplish this by financing terrorism. Didn't want your AI bank-rolling ISIS? Well, you should have added a "FundingTerrorism" error!

There's a pretty hilarious example of this problem in Rick and Morty (warning; spoilers for season 2, episode 6 -- also, CW for cartoon violence, gore, emotional "counter-measures", and telepathic spiders).

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: The Darker Side of the News

Postby KnightExemplar » Mon Apr 03, 2017 11:14 pm UTC

gmalivuk wrote:I suspect an artificial voice could replace a human in every situation where there's a static scripted statement and gleaning emotional cues from things like intonation isn't important.


Are you familiar with Hatsune Miku?

In any case, Vocaloids are closer to a musical instrument than an AI. Plus a bit of anime culture and 3D hologram effects for the "live concert". But the "human author" of the song and script plays around with all of those intonation settings. There's a big difference between Hatsune Miku's Nyancat (a lack of any intonation, more of an internet-meme song) and say... Puzzle, a song with a bit more emotion.

---------

In any case, I think the Vocaloid thing is closer to what the future holds. It's not so much that humans are being replaced: the job has moved from performer to the Vocaloid programmer/synthesizer. In many respects, it's closer to how synthesized music, drum machines, and auto-tune have become more tools in the modern musician's toolset.

There's still a demand for old-school acoustic bands and performances. There's a big difference between pre-scripted music and a live concert where the band interacts with the audience. However, pre-scripted stuff (including light effects, drum machines, and lip-syncing to auto-tune effects) has its place in modern entertainment.

---------

Similarly, robots will never be superior to humans for rapid prototyping and "small custom runs". If you're making a small run (i.e. building ~2 feet of stairs for somebody's patio), it's unlikely that a robot would be as useful for the final assembly as a human. A human can come in with the pre-formed stair parts, take a measurement, cut out the stairs with a portable saw, and be on his way relatively quickly.

As such, I expect AIs to win out on any mass-produced part. After all, you're not really up against a "computer"; you're up against a team of ~100 to 1000 people (not just the programmers, but the artists and other experts who have created the mass-produced part) who have created a flexible design.

But no AI will cut out the proper length of PVC pipe for the sink I have in my kitchen and repair my sink. Even if the mass-produced PVC pipe is good. It will take a human to do that.
Last edited by KnightExemplar on Mon Apr 03, 2017 11:23 pm UTC, edited 2 times in total.
First Strike +1/+1 and Indestructible.

User avatar
Liri
Healthy non-floating pooper reporting for doodie.
Posts: 948
Joined: Wed Oct 15, 2014 8:11 pm UTC
Contact:

Re: AI discussion from The Darker Side of the News

Postby Liri » Mon Apr 03, 2017 11:14 pm UTC

Spoilered response to spoiler
Spoiler:
The wildest, and most intimidating, part about genetics is that by far the largest fraction of genes in our genome are devoted to regulating ...our genome. This is a paper I read just a couple days ago on chromatin regulation - it is rather complex, but I recommend at least skimming it if you're interested. It's really satisfying to study this stuff, because from milk money days through most of my undergrad, my burning question was how cells know what genes to express, how much of them to express, when to express them, which copy of a gene to express, and on and on.

One of my favorite epigenetics hypotheses is about memory - this is from a paper we read in my epigenetics class: these folks were looking at epigenetic modifications during the formation of memories. They set up a learning regimen for these mice where they had to find hidden food or something along those lines. The hippocampi of the mice were collected at different time points before and after the regimen that correlated with times known to be associated with memory formation. They were assayed for expression of DNA methyltransferases 3a and 3b (Dnmt3a and b), which perform de novo methylation of cytosine in DNA (Dnmt1 maintains methylation when genomes are duplicated, and Dnmt3L is a catalytically inactive regulatory partner). They found increased expression of Dnmt3a/b at the time points they expected for when memory formation is thought to happen, indicating that DNA in the cells of the hippocampus was getting new methyl groups added. Now, so far, that's pretty cool - evidence that learning affects our epigenome. Maybe expected, but still cool to see. Now, the really awesome hypothetical part - when cytosine gets methylated at carbon 5 of its ring, it makes it look more like thymine. It just takes a deamination (removing an amine group) reaction to turn a 5-methylcytosine into a T. A mutation in the DNA. A lot of prior research has shown methylated cytosine deaminates into thymine much more easily than unmethylated cytosine. In the case of memory, every time we practice something, specific cytosines get methylated in our hippocampus, which, after enough repetitions, are more and more likely to mutate into thymines, permanently encoding the memory into our DNA. How awesome is that?

I know way the heck less about computers and programming.
He wondered could you eat the mushrooms, would you die, do you care.

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: The Darker Side of the News

Postby KnightExemplar » Mon Apr 03, 2017 11:33 pm UTC

ucim wrote:
The Great Hippo wrote:So, would it be fair to say that you think the difference between AI and a human brain is that you can use a human brain to better understand a human brain?
We are already using AI to better understand a human brain, and (more scary and to the point) human behavior. It's getting good. Eight years ago it couldn't touch politics. Today it may have been a factor in giving Trump the presidency (of the US) and Brexit. Eight years from now, while I don't see Skynet yet, I do see the networked AI "understanding" us, perhaps individually, where "understanding" is the abstraction used in between input (observing our activity via charge cards, click tracking, and "security" cameras) and output (Facebook feeds, customized news, discount coupons, and Tinder dates).

Jose


The main issue is the difference between statistics and AI. In some cases, statistics IS AI.

The "Counterfactual Regret Minimization" AI which just won a massive Poker game against pros... is really just fancy statistics.

While it's clear that the computer was crunching the numbers here (and indeed, the Poker "AI" played against itself and "self-learned" to come up with those statistics)... it should surprise nobody that extremely accurate and optimized statistics are a solution to many problems. And there's a fine line between the "self-learning AIs" that a lot of people talk about here... and a statistical methodology that optimizes itself against a Monte Carlo simulation.

Nash proved that a "Nash Equilibrium" exists in all (finite) games. And as better and better statistical methodologies for finding and discovering these Nash Equilibria are developed... computers (which can explore a random space very efficiently through Monte Carlo methodology plus a fast source of random bits) will naturally find Nash Equilibria faster than us.
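
To show how bare-bones the statistics really is, here's regret matching -- the little update rule that CFR applies all over a game tree -- for rock-paper-scissors, in toy Python. It's nothing like the actual poker bot, but self-play like this drifts toward the Nash Equilibrium of (1/3, 1/3, 1/3):

Code: Select all
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def utility(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def current_strategy(regrets):
    """Play in proportion to positive accumulated regret (uniform if none)."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=100000):
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strategies = [current_strategy(regrets[p]) for p in (0, 1)]
        moves = [random.choices(range(ACTIONS), weights=strategies[p])[0]
                 for p in (0, 1)]
        for p in (0, 1):
            opponent_move = moves[1 - p]
            got = utility(moves[p], opponent_move)
            for a in range(ACTIONS):
                # regret: how much better action a would have done than what we played
                regrets[p][a] += utility(a, opponent_move) - got
                strategy_sum[p][a] += strategies[p][a]
    # the *average* strategy over all iterations approximates the Nash Equilibrium
    return [[s / iterations for s in player] for player in strategy_sum]

print(train())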

-------

The Brexit / Trump research stuff was also just simple statistics applied to a wide scale. (well, "simple" from the perspective of "Hire an expert Statistician and force him to work on the problem"). Statistics is a very powerful group of Mathematical operations, and statistics can predict the future to a limited extent.

I'm not sure if we should call advanced statistics an "AI", however. I mean, it is in some cases (when it's playing Poker), but it really doesn't feel like AI when you understand the mechanics. At some point, a lot of statistics comes down to fundamental concepts... like prior probabilities or linear regressions. They're just mathematical tools that find patterns... and if these patterns repeat in the future... these tools become useful at predicting the future.

-----------

I guess what I'm saying is... the future does not belong to AIs. The future belongs to the statisticians (or programmers) who can apply Counterfactual Regret Minimization to new problems, like say... negotiating trade deals between countries. There are always new situations that come up every day.

Beyond that, AIs will be used for training purposes. I'm telling you, AIs solving Chess was one of the greatest things ever. You can now have Stockfish analyze your Chess games... and my personal play in Chess has improved dramatically... now that I have an AI that analyzes my games and tells me what I've done wrong. I didn't need to know how Stockfish or Scid vs. PC works... I just push the "analyze" button and am able to read the output.

Indeed, Stockfish may be significantly stronger than any human... but the "Centaur" players (that is, a human controlling an AI) are even better than the best pure AIs in the world. This should be no surprise: in "Centaur Chess"... the human no longer attempts to even play chess perfectly. The human simply farms that job out to the AI. Instead, the human specializes in understanding the weaknesses of the various AIs... theorizing about higher levels of chess play... and then chooses the best chess engine for a particular job.

That is, Stockfish can only run Stockfish. A Centaur player, however, has any AI at his disposal.
First Strike +1/+1 and Indestructible.

User avatar
ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: AI discussion from The Darker Side of the News

Postby ucim » Tue Apr 04, 2017 2:30 am UTC

KnightExemplar wrote:The main issue is the difference between statistics and AI. In some cases, statistics IS AI.
Are you really so sure that statistics isn't the basis of meatbag intelligence too? That's what forming memories, learning stuff, "muscle memory", and creativity are, except that it's statistics applied on a much broader and deeper scale, uncovering deeper and deeper patterns to exploit. Creativity is trying this stuff out, hiding the stuff that didn't work, and basking in the glory when it does work out.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
HES
Posts: 4793
Joined: Fri May 10, 2013 7:13 pm UTC
Location: England

Re: The Darker Side of the News

Postby HES » Tue Apr 04, 2017 9:58 am UTC

KnightExemplar wrote:But no AI will cut out the proper length of PVC pipe for the sink I have in my kitchen and repair my sink. Even if the mass-produced PVC pipe is good. It will take a human to do that.

A quick drone-mounted laser scan and your smart-home AI will print an exact-fitting 3D-printed replacement before you even realise the part was busted.
He/Him/His Image

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: AI discussion from The Darker Side of the News

Postby KnightExemplar » Tue Apr 04, 2017 12:05 pm UTC

ucim wrote:
KnightExemplar wrote:The main issue is the difference between statistics and AI. In some cases, statistics IS AI.
Are you really so sure that statistics isn't the basis of meatbag intelligence too? That's what forming memories, learning stuff, "muscle memory", and creativity are, except that it's statistics applied on a much broader and deeper scale, uncovering deeper and deeper patterns to exploit. Creativity is trying this stuff out, hiding the stuff that didn't work, and basking in the glory when it does work out.

Jose


The human brain is built up of neurons, which are simulated with neural networks. These are 'universal self learning computation blocks'.

Statistics is a very 'smooth' way of learning. There is one value per situation that a particular algorithm looks for. Counterfactual regret minimization isn't quite looking for a Nash equilibrium... But it's looking for something similar. In effect, the algorithm is hardwired to look for a very peculiar strategy that all games have.

In any case, the statistical method is unable to learn new or different strategies. It just so happens that there are particular strategies that are very hard to beat (it is proven that the Nash equilibrium will tie against perfect play on average).

In contrast, the neural network... say, AlphaGo... will try many strategies and come up with new strategies. That is arguably closer to AI with regard to what this thread is discussing... but humans do not fear neural nets, because they're kinda unreliable and only work well if trained properly.

Overtraining, local optima, etc. can hamper the performance of a neural net. In essence, it takes a domain expert as well as a neural network expert to create a good training regimen for a neural network.
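
The standard guard against overtraining is conceptually simple: hold some data out, watch the error on it, and stop when it stops improving. Here's a toy Python/numpy sketch of that idea -- it fits plain least-squares polynomial features rather than an actual neural net, and the data and numbers are invented:

Code: Select all
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine curve, split into training and validation halves.
x = rng.uniform(-1, 1, 80)
y = np.sin(3 * x) + rng.normal(0, 0.1, 80)
X = np.vstack([x ** d for d in range(10)]).T     # degree-9 polynomial features
X_tr, y_tr, X_va, y_va = X[:40], y[:40], X[40:], y[40:]

w = np.zeros(X.shape[1])
best_w, best_va, since_best = w.copy(), np.inf, 0  # keep the best weights seen so far

for step in range(50000):
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)   # gradient of the training MSE
    w -= 0.05 * grad
    va = np.mean((X_va @ w - y_va) ** 2)                 # error on the held-out data
    if va < best_va - 1e-6:
        best_va, best_w, since_best = va, w.copy(), 0
    else:
        since_best += 1
        if since_best > 500:        # validation error stopped improving: quit
            break

print("stopped at step", step, "validation MSE", round(float(best_va), 4))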

HES wrote:
KnightExemplar wrote:But no AI will cut out the proper length of PVC pipe for the sink I have in my kitchen and repair my sink. Even if the mass-produced PVC pipe is good. It will take a human to do that.

A quick drone-mounted laser scan and your smart-home AI will print an exact-fitting 3D-printed replacement before you even realise the part was busted.


I know you are being a bit facetious here... but PVC pipe is extremely efficient to mass-produce through simple extrusion. 3D printing with many materials has accuracy issues as well... typical mass-production methods can be accurate to within micrometers, while typical cheap 3D printers are maybe accurate to 100-ish micrometers.

Classic production is simpler, cheaper, and more accurate. You can only make a limited number of shapes... but look at a typical stapler (sheet-metal processing) or PVC pipe (extrusion) or Samsung phone (die-casting). There are a lot of shapes available through normal means.

Also, drones likely won't have the mass to open the door under the sink.
First Strike +1/+1 and Indestructible.

morriswalters
Posts: 6936
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: AI discussion from The Darker Side of the News

Postby morriswalters » Tue Apr 04, 2017 12:46 pm UTC

Not having robots repair your sink is a matter of economics, not technology. Robots are expensive, plumbers less so. The only reason to make a robot plumber is because you need more plumbers than you have the capacity to produce through existing methods. Not a problem currently, or so I believe. However it might be nifty if the plumber could call in a drone to deliver the part after he assesses the problem. Or as HES suggests, fab it on site.

What current attempts at AI seem to be good at is something that humans aren't: large, fast-changing data sets. IBM's work on the medical uses of Deep Blue or whatever they call it appears to me to be instructive. How many cancer cases can even an above-average oncologist study, much less draw comparisons against?

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: AI discussion from The Darker Side of the News

Postby KnightExemplar » Tue Apr 04, 2017 1:17 pm UTC

morriswalters wrote:Not having robots repair your sink is a matter of economics, not technology. Robots are expensive, plumbers less so. The only reason to make a robot plumber is because you need more plumbers than you have the capacity to produce through existing methods. Not a problem currently, or so I believe. However it might be nifty if the plumber could call in a drone to deliver the part after he assesses the problem. Or as HES suggests, fab it on site.


AIs are very bad at solving the general problem. Right now anyway. Look at the "champion" of the 2015 Amazon picking challenge: https://www.youtube.com/watch?v=UrpMfdj-Mpc

Robots are very bad at general tasks right now. Make a contest between the best researchers in the world, each of whom has access to millions of dollars of research money... and the best robot can only pick up 10 of the 12 requested objects from a shelf over the course of ~17 minutes.

There are some tasks computers are good at, mostly because programmers have spent literally decades exploring various algorithms for particular tasks. But as soon as you deviate from the task and go for a "general" problem (a problem like "pick up these 12 things from the shelf over there"), the best of the best bots start having issues.

Plumbing is a similar "general" task. Where is the kitchen? Which pipe is connected to which sink? Which sink is having an issue? Is there stuff in the way that should be temporarily moved? Where should that stuff go? Is the PVC pipe warped or otherwise damaged? Is the issue in the garbage disposer? Is it in the PVC pipe itself? Or is it the faucet? Etc., etc. These are the kinds of situations that computers are very, very bad at right now. And aside from the Amazon contest... I don't see too many researchers trying to solve "general" problems like that. (Even then, Amazon's picking challenge is very much born out of Amazon's requirements to pick things out of their warehouses.)

Kitchen sinks aren't made to exact specifications either. The pipes under every kitchen sink are basically custom-made by a plumber in every house. It's not very hard to cut PVC pipes to certain lengths and make a custom solution for everybody. Without standardization, it becomes incredibly difficult for a computer to work across the problem set.

------------

Also, plumbers are fucking expensive. If you're a master plumber, you can get into six figures if you work overtime... with most master plumbers getting into the $70k+ range without any degree. Plumbing ain't a low-paying job. It's kinda complicated, and requires good skills as well as a mastery of local laws and regulations.

I know that it's grossly different depending on the area. But I live in basically a nanny state... so there are licenses for "Master Plumber" status, as well as a myriad of state and local laws that differ from county to county. I hear that plumbers make less in other areas (probably where there are fewer regulatory hurdles)... but there are good reasons to regulate and standardize plumbing issues across houses in a local area.

A lot of the "Master Plumber" crap is about liability as well. People rely on "Master Plumbers" to make sure that the water sprinklers will actually work during a fire. Sure, maybe an AI or other invention may make basic plumbing tasks easier for the layperson in the future... but if there's one thing that's constant... it's liability. Someone needs to take responsibility. That's a job in and of itself, and it can never be farmed out to an AI. (At best, an AI will centralize responsibility to a company. I.e., if an automated car started to crash a lot, then the company who wrote the AI would be liable.)

morriswalters wrote:What current attempts at AI seem to be good at is something that humans aren't: large, fast-changing data sets. IBM's work on the medical uses of Deep Blue or whatever they call it appears to me to be instructive. How many cancer cases can even an above-average oncologist study, much less draw comparisons against?


IBM's Deep Blue was the chess-playing computer. I think the one you're talking about is IBM's Watson (the one on Jeopardy). Really, Watson isn't an "AI" in the classic sense as much as it is a very advanced automated language-processing system hooked up to a database. Playing against Watson in Jeopardy was like trying to play against the fucking Wikipedia database (which was itself created by humans). Advanced automated language processing can perform hugely important tasks, but it's difficult to call it AI because all it's really doing is reading and regurgitating basic facts.

Watson will be useful in maybe... phone communication systems (automated voice menus and whatnot) or drive-through-window sorts of tasks. And with its "reading" ability, it's also good at searching and indexing data. Due to its ability to correlate text and read English, there are intriguing applications of Watson, but I really still don't consider it an AI... any more than I consider "Google" to be an AI (or Amazon's "You might also like" feature)
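
To make the "regurgitating from a database" point concrete, here's a deliberately tiny, made-up sketch of answer-by-lookup. This is not how Watson actually works (the facts and the matching rule here are invented for illustration), it's just the general shape: match the question's words against stored text and spit back the best-scoring entry.

Code: Select all
# Toy "answer by lookup": score stored facts by word overlap with the question.
# The facts and the matching rule are invented purely for illustration.
FACTS = {
    "Deep Blue": "IBM computer that beat Garry Kasparov at chess in 1997",
    "Watson": "IBM question-answering system that won Jeopardy in 2011",
    "AlphaGo": "DeepMind program that beat Lee Sedol at Go in 2016",
}

def answer(question):
    words = set(question.lower().replace("?", "").split())
    # Return the entry whose stored text shares the most words with the question.
    return max(FACTS, key=lambda name: len(words & set(FACTS[name].lower().split())))

print(answer("Which IBM system won at Jeopardy?"))  # -> Watson

Everything it "knows" was typed in by a human beforehand; the program never produces an answer that wasn't already sitting in the table.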

In contrast... there are "Expert System AIs" or "Machine Learning AIs" which are beating humans in the medical field at diagnosing mental health issues. But these are closer to a set of tools and still require the doctor to ask the interview questions. Nonetheless, this is closer to the field of AI IMO. It's actually "intelligence", in that the computer system is coming up with the answers (as opposed to just searching through a database and finding an answer)
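
For comparison, a machine-learning tool of that sort looks more like the minimal scikit-learn sketch below. The "interview" features and numbers are completely invented and have nothing to do with any real diagnostic system; the point is just that the model generalizes from labelled examples instead of looking the answer up.

Code: Select all
# Toy decision-support model (invented data, purely illustrative).
from sklearn.tree import DecisionTreeClassifier

# Hypothetical interview features: [hours_of_sleep, appetite_score, mood_score]
X = [[8, 7, 8], [7, 6, 7], [6, 5, 5], [4, 3, 2], [5, 2, 3], [3, 2, 1]]
y = ["low risk", "low risk", "low risk", "follow up", "follow up", "follow up"]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# A new patient the model has never seen; the doctor still runs the interview.
print(model.predict([[4, 4, 3]])[0])

The answer comes out of the fitted model rather than out of a table somebody filled in, which is why I'm more comfortable calling this sort of thing "intelligence".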
Last edited by KnightExemplar on Tue Apr 04, 2017 1:53 pm UTC, edited 1 time in total.
First Strike +1/+1 and Indestructible.

morriswalters
Posts: 6936
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Ai discussion from The Darker Side of the News

Postby morriswalters » Tue Apr 04, 2017 1:51 pm UTC

My apologies. I've been code-named to death.

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 25815
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Ai discussion from The Darker Side of the News

Postby gmalivuk » Tue Apr 04, 2017 5:44 pm UTC

KnightExemplar wrote:Plumbing is a similar "general" task. Where is the kitchen? Which pipe is connected to which sink? Which sink is having an issue? Is there stuff in the way that should be temporarily moved?
Some of those questions could be simplified by having more consistent modular designs for homes themselves. If there are ever affordable plumbing-bots that only service certain combinations of plumbing setups, then it will be economically beneficial for property owners to use those setups.
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

User avatar
Liri
Healthy non-floating pooper reporting for doodie.
Posts: 948
Joined: Wed Oct 15, 2014 8:11 pm UTC
Contact:

Re: Ai discussion from The Darker Side of the News

Postby Liri » Tue Apr 04, 2017 5:47 pm UTC

And/Or the houses themselves become robots.
He wondered could you eat the mushrooms, would you die, do you care.

User avatar
freezeblade
Posts: 1091
Joined: Fri Aug 24, 2012 5:11 pm UTC
Location: Oakland

Re: Ai discussion from The Darker Side of the News

Postby freezeblade » Tue Apr 04, 2017 5:54 pm UTC

Liri wrote:And/Or the houses themselves become robots.


The final endgame of the "internet of things." The boundary between computer and home will become so blurred that the two are indistinguishable, much like how indoor plumbing and lighting were once individual things brought into rooms and later became incorporated seamlessly into housing design.
Belial wrote:I am not even in the same country code as "the mood for this shit."

User avatar
pseudoidiot
Sexy Beard Man
Posts: 5056
Joined: Mon Apr 21, 2008 9:30 pm UTC
Location: Kansas City
Contact:

Re: Ai discussion from The Darker Side of the News

Postby pseudoidiot » Tue Apr 04, 2017 5:54 pm UTC

KnightExemplar wrote:Kitchen sinks aren't made to exact specifications either. The pipes under every kitchen sink are basically custom-made by a plumber in every house. It's not very hard to cut PVC pipes to certain lengths and make a custom solution for everybody. Without standardization, it becomes incredibly difficult for a computer to work across the problem set.
I'm not sure how accurate this is. I live in a house that was built in the mid-70s and I've replaced some of the plumbing under 3 different sinks (kitchen + 2 bathrooms) in my house. It was as easy as going to a home supply store and buying some pre-made PVC pipes and fittings. No custom-fitting or cutting necessary to replace what was previously there.
Derailed : Gaming Outside the Box.
SecondTalon wrote:*swoons* I love you, all powerful pseudoidiot!
ShootTheChicken wrote:I can't stop thinking about pseudoidiot's penis.

cphite
Posts: 1163
Joined: Wed Mar 30, 2011 5:27 pm UTC

Re: Ai discussion from The Darker Side of the News

Postby cphite » Tue Apr 04, 2017 6:20 pm UTC

morriswalters wrote:Not having robots repair your sink is a matter of economics, not technology. Robots are expensive, plumbers less so. The only reason to make a robot plumber is because you need more plumbers than you have the capacity to produce through existing methods. Not a problem currently, or so I believe. However it might be nifty if the plumber could call in a drone to deliver the part after he assesses the problem. Or as HES suggests, fab it on site.


We are still a very, very long way away from general purpose robot plumbers, even for the most fantastically wealthy who are obsessed with spending money.

It's one thing to create a robot that can put some standard pieces of something together in a standard way. We've had that for decades in the auto industry, for example. We even have robots that can identify what's broken in a standard setup and decide how to fix it. What we don't have - not even close - is robots with the ability to go into a random home, identify a problem that is most likely out of sight, and know how to fix said problem. And then, on top of that, know how to wrangle around the nearly uncountable obstacles and quirks of the average home.

As far as fabricating on site or having parts delivered by drone, that seems like a really ineffective way to get parts. First because you'd be adding an incredible amount of cost for no good reason; and second because you're adding a whole lot of time for no good reason. You really think Joe Homeowner wants to wait around for three hours while you print something you should have had on hand in the first place? Or have it delivered by drone? For that matter, why would you as the plumber want to wait around for those hours? Couldn't that time be used for something more valuable, like say, seeing other paying customers?

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: Ai discussion from The Darker Side of the News

Postby KnightExemplar » Tue Apr 04, 2017 6:39 pm UTC

pseudoidiot wrote:
KnightExemplar wrote:Kitchen sinks aren't made to exact specifications either. The pipes under every kitchen sink are basically custom-made by a plumber in every house. It's not very hard to cut PVC pipes to certain lengths and make a custom solution for everybody. Without standardization, it becomes incredibly difficult for a computer to work across the problem set.
I'm not sure how accurate this is. I live in a house that was built in the mid-70s and I've replaced some of the plumbing under 3 different sinks (kitchen + 2 bathrooms) in my house. It was as easy as going to a home supply store and buying some pre-made PVC pipes and fittings. No custom-fitting or cutting necessary to replace what was previously there.


Fair enough. I'm no plumber, so I can't talk about this issue with too much accuracy. The most I've done is basically replace a garbage disposer when mine began to leak.

Looking at some pictures online though, I can see that the "P-Trap" is relatively standardized; even dual-sink setups seem to have very similar-looking parts. But extending beyond the P-Trap into the wall, or maybe running a pipe out... these PVC parts look like they are at the very least "cut to size". They are standard PVC pipe, but the exact length seems to differ. After all, the distance from the sink to the water system differs from place to place.

I checked a few sinks online, and here are the variations I'm talking about:

* https://www.oriolesoutsider.com/wp-cont ... umbing.jpg
* http://www.nettally.com/palmk/DoubleSinkDrain.jpg
* http://homesfeed.com/wp-content/uploads ... k-pipe.jpg

So while the individual PVC parts may be standardized (every setup has a P-Trap), the particular arrangement of pipes seems to differ from house to house.

----------

Let's talk about a relatively simple task I do know how to do: replacing a garbage disposer. Now, if you buy the same brand, most of your previous parts will fit. But if you're upgrading to another brand... you may need to:

* Remove the previous disposal unit (various brands, and variations even within the same brand).
* Remove and replace the sink mount, which typically includes sealing the mount with putty.
* Attach the new Disposal Unit (typically a simple twist-on. The hard part is the brand-specific mount)
* Reconnect the PVC pipes and screws, including the dishwasher connection.

None of these tasks seem particularly well suited for an AI to do. And this is probably about as simple as it gets from a plumbing perspective.
First Strike +1/+1 and Indestructible.

User avatar
Thesh
Made to Fuck Dinosaurs
Posts: 5506
Joined: Tue Jan 12, 2010 1:55 am UTC
Location: Colorado

Re: Ai discussion from The Darker Side of the News

Postby Thesh » Tue Apr 04, 2017 6:46 pm UTC

KnightExemplar wrote:None of these tasks seem particularly well suited for an AI to do. And this is probably about as simple as it gets from a plumbing perspective.


Garbage disposals are designed for humans, and most AI isn't going to be replicating human labor precisely; we will change how the task is performed to accommodate the machines. We didn't need to invent OCR to design computers that could read labels; we designed labels that could be easily read by computers.
Honesty replaced by greed, they gave us the reason to fight and bleed
They try to torch our faith and hope, spit at our presence and detest our goals

User avatar
LaserGuy
Posts: 4390
Joined: Thu Jan 15, 2009 5:33 pm UTC

Re: Ai discussion from The Darker Side of the News

Postby LaserGuy » Tue Apr 04, 2017 6:52 pm UTC

cphite wrote:
morriswalters wrote:Not having robots repair your sink is a matter of economics, not technology. Robots are expensive, plumbers less so. The only reason to make a robot plumber is because you need more plumbers than you have the capacity to produce through existing methods. Not a problem currently, or so I believe. However it might be nifty if the plumber could call in a drone to deliver the part after he assesses the problem. Or as HES suggests, fab it on site.


We are still a very, very long way away from general purpose robot plumbers, even for the most fantastically wealthy who are obsessed with spending money.

It's one thing to create a robot that can put some standard pieces of something together in a standard way. We've had that for decades in the auto industry, for example. We even have robots that can identify what's broken in a standard setup and decide how to fix it. What we don't have - not even close - is robots with the ability to go into a random home, identify a problem that is most likely out of sight, and know how to fix said problem. And then, on top of that, know how to wrangle around the nearly uncountable obstacles and quirks of the average home.

As far as fabricating on site or having parts delivered by drone, that seems like a really ineffective way to get parts. First because you'd be adding an incredible amount of cost for no good reason; and second because you're adding a whole lot of time for no good reason. You really think Joe Homeowner wants to wait around for three hours while you print something you should have had on hand in the first place? Or have it delivered by drone? For that matter, why would you as the plumber want to wait around for those hours? Couldn't that time be used for something more valuable, like say, seeing other paying customers?


But this kind of thing happens even with human plumbers. If you call in a professional plumber, there's a decent chance that she isn't going to have the part for some random custom-made unit. She'll come in, identify the problem, order the part, and come back two weeks later to fix it, charging you $100/hour for each visit. Or if you're doing it yourself, you still need to identify the problem, figure out who sells that particular part, drive out or order that part yourself, bring it home, and attach the unit (hopefully it's actually the right one, and hopefully you've actually identified the right problem). Fabricating on site and/or drone delivery would be WAY more efficient than what we have now for any difficult-to-get piece. Standard pieces, sure, but an automated plumber would presumably be able to bring those things just as well as a human could.

KnightExemplar
Posts: 5489
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: Ai discussion from The Darker Side of the News

Postby KnightExemplar » Tue Apr 04, 2017 7:27 pm UTC

LaserGuy wrote:But this kind of thing happens even with human plumbers. If you call in a professional plumber, there's a decent chance that she isn't going to have the part for some random custom-made unit. She'll come in, identify the problem, order the part, and come back two weeks later to fix it, charging you $100/hour for each visit. Or if you're doing it yourself, you still need to identify the problem, figure out who sells that particular part, drive out or order that part yourself, bring it home, and attach the unit (hopefully it's actually the right one, and hopefully you've actually identified the right problem). Fabricating on site and/or drone delivery would be WAY more efficient than what we have now for any difficult-to-get piece. Standard pieces, sure, but an automated plumber would presumably be able to bring those things just as well as a human could.


Did you see my earlier video about how bad computers are at picking out things off a shelf?

Humans are insanely good at looking at a thing, identifying its name, and purchasing a replacement. Well... trained humans anyway. I couldn't do it, but I bet you a plumber can pick out the right stuff... or at least have a decent guess.

It's incredibly unlikely for an AI to come out and identify a general product... even a general plumbing product... from sight alone. Maybe sometime in the future an algorithm will be invented, but typical AI techniques have not been able to solve that problem with much accuracy or speed.
First Strike +1/+1 and Indestructible.

Chen
Posts: 5274
Joined: Fri Jul 25, 2008 6:53 pm UTC
Location: Montreal

Re: Ai discussion from The Darker Side of the News

Postby Chen » Tue Apr 04, 2017 7:57 pm UTC

KnightExemplar wrote:Did you see my earlier video about how bad computers are at picking out things off a shelf?

Humans are insanely good at looking at a thing, identifying its name, and purchasing a replacement. Well... trained humans anyway. I couldn't do it, but I bet you a plumber can pick out the right stuff... or at least have a decent guess.

It's incredibly unlikely for an AI to come out and identify a general product... even a general plumbing product... from sight alone. Maybe sometime in the future an algorithm will be invented, but typical AI techniques have not been able to solve that problem with much accuracy or speed.


I mean, eventually image recognition will become good enough for this. But that's quite a ways off. The way this would be done in the nearer future is that your various plumbing parts would have a barcode or the like on them, which the drone would be able to read (along with other codes) to determine what it's looking at. You'd call a plumber and they'd ask, "OK, do you have X and Y type pipes (the kind with barcodes)?" No? OK, I need to send someone over and it'll cost this amount. You do? Great, I can send the drone over and it'll only cost this lesser amount.
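
A rough sketch of what the dispatch half of that might look like once a code has been scanned. The barcodes, parts, and prices here are all made up, and I'm assuming the scanning step itself is already handled by the drone or the homeowner's phone camera:

Code: Select all
# Toy dispatch decision from a scanned part barcode (invented catalog).
CATALOG = {
    "012345678905": {"part": "1-1/2 in. PVC P-trap", "drone_ok": True},
    "036000291452": {"part": "3/8 in. supply line", "drone_ok": False},
}

def quote(barcode):
    entry = CATALOG.get(barcode)
    if entry is None:
        return "Unknown part: send a human plumber to assess ($150 visit)"
    if entry["drone_ok"]:
        return "Drone delivery of " + entry["part"] + " ($25)"
    return entry["part"] + " needs a human install; schedule a visit ($150)"

print(quote("012345678905"))

The hard part is getting those codes onto the parts and into a shared catalog in the first place, not the lookup.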

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 25815
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Ai discussion from The Darker Side of the News

Postby gmalivuk » Tue Apr 04, 2017 8:03 pm UTC

cphite wrote:It's one thing to create a robot that can put some standard pieces of something together in a standard way. We've had that for decades in the auto industry, for example. We even have robots that can identify what's broken in a standard setup and decide how to fix it.
So what you're saying is that we could already have plumbing robots if more people elected to set up their plumbing in a standard way.

Thesh wrote:
KnightExemplar wrote:None of these tasks seem particularly well suited for an AI to do. And this is probably about as simple as it gets from a plumbing perspective.


Garbage disposals are designed for humans, and most AI isn't going to be replicating human labor precisely; we will change how the task is performed to accommodate the machines. We didn't need to invent OCR to design computers that could read labels; we designed labels that could be easily read by computers.
Exactly. The idea that robots would be replacing human workers in exactly the role humans currently fill is as outdated and irrelevant as the idea that robots will all look like C-3PO.

Sure, a robot might not be able to identify the make of my garbage disposal by sight, but slap a barcode on the side and voila! And measuring the lengths of existing pipes also doesn't seem a terribly insurmountable task for a robot to do.
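
For what it's worth, once the robot has a measurement, picking the replacement is trivial. Something like this, with the stock lengths invented for the example:

Code: Select all
# Pick the shortest stock pipe that covers a measured run (toy numbers, inches).
STOCK_LENGTHS = [6, 12, 18, 24, 36, 48]

def pick_stock_length(measured):
    for length in STOCK_LENGTHS:
        if length >= measured:
            return length
    raise ValueError("No single stock piece is long enough")

print(pick_stock_length(14.5))  # -> 18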
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

morriswalters
Posts: 6936
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Ai discussion from The Darker Side of the News

Postby morriswalters » Tue Apr 04, 2017 8:53 pm UTC

@cphite
LaserGuy ninja'd my thinking, and said it clearer.

@KnightExemplar
All the plumbers I know call disposals the plumber's friend.

