Voices in AI – Episode 10: A Conversation with Suchi Saria


In this episode, Byron and Suchi talk about understanding, data, medicine, and waste.





Byron Reese: This is “Voices in AI” brought to you by Gigaom. I am Byron Reese. Today, my guest is Suchi Saria. Where do I start when going through her career? She has an undergraduate degree in both computer science and physics. She has a PhD in computer science from Stanford, where she studied under Daphne Koller. She interned as a researcher at IBM and at Microsoft Research, where she worked with Eric Horvitz. She was an NSF Computing Innovation Fellow at Harvard, received a DARPA Young Faculty Award, and is presently a professor at Johns Hopkins. Welcome to the show.

Suchi Saria: Thank you.

Let’s start off with the biggest, highest-level question there is. What is artificial intelligence? How do you answer that question when it’s posed to you?

That’s a great question. I think AI means very different things to different people. And I think experts in the field, at a high level, understand to a degree what AI is; but they never really posit a very concrete, mathematical description of it. Overall, our goal is… We want computers to be able to behave intelligently, and that’s really the origin of how the AI field of computer science emerged.

Now along the way, what has happened is… Starting from really classical applications— autonomous driving or image recognition or diagnostics… As lots and lots of data has been collected, people have started to develop numerical tools, or statistical methods, or computational methods that allow us to leverage this data, to build computers or machines that can do useful things that can help humans along the way. And so that then has also become part of AI.

Effectively, the question that as a field we often ask ourselves is: Does AI really mean useful tools that help humans: automating tasks that humans can do, by giving computers the ability to do them as well? Or is it going after properties like creativity and emotion, which are very interesting and unique aspects of what humans often exhibit? And do computers have to exhibit those to be considered ‘artificially intelligent’? So really there’s a debate about what intelligence is, and what it really means; and I think different experts in these fields have very different takes on it.

I only ask it because it’s a really strange question. If you ask somebody at NASA, “What is space travel?”, maybe there’s… “Where does space begin?” or “How many miles up?”—If you ask all these different fields, they kind of know what the field is about. You said something I never heard anybody say, which is: “Those of us who are researchers in it, we have a consensus on what it is.”

I would say at a very high level, we all agree. At a high level, it is the ability to systematize, or to help computers behave and reason intelligently. The part that is left to be agreed upon is whether that means ‘behave and reason intelligently the way humans do’. The way humans do things matters because, in some fields, the view is that we should study humans; we should understand the way humans and biological systems do it, and then build computers to do it the way humans do it.

In other fields, it’s not so important that we do it exactly the way humans do it. Computers have their own strengths, and effectively, perhaps what’s more important is the ability to do something, rather than the process by which we’re getting there. So we all agree that the goal is to build intelligent machines.

Intelligent machines that crunch a lot of data, intelligent machines that can reason through information that’s provided, produce what needs to be done, interact intelligently—and by that, we mean understand the person that’s in front of you, and understand the scenario that’s being presented to you, and react appropriately.

Those are all things we’ll agree on. And then, effectively, the question is: Do we need to do it the way humans are doing it? In other words, is it in the mimicking of human intelligence, or is it about giving this capability to machines by whichever way the machines are able to learn it?

I won’t spend too much time here, because it may not be interesting to everyone else, but to say artificial intelligence is teaching machines to reason intelligently—you’re using ‘to reason intelligently’ to define the term ‘intelligence’. Doesn’t that all obfuscate what intelligence is?

Because at one extreme end, it’s defined simply as something that reacts towards its environment; a sprinkler system that comes on when the grass is dry is intelligent. On another extreme end, it’s something that learns, teaches itself; it evolves in a way that your sprinkler system doesn’t. It’s a learning system that changes its programming as it’s given more data.

Isn’t there some element of what intelligence is that we all have to circle around, if we are going to use this term? And if we’re not going to circle around it, is there a preferred way to refer to this technology?

Yeah, I think the preferred way is the way we think about it. I think the other aspect of the field that I really love is the fact that it’s very inclusive. The reason the field has moved forward so quickly is because, as a field, we’ve been very inclusive of ideas from psychology, from physics, from neuroscience, from statistics, from mathematics—and of course, computer science.

And what this really means is that as a field, we move forward really quickly, and there’s really room for a multiplicity of opinions and ideas. The way I often think about it, and others might give you different opinions about it, is that fundamental to all of this is the idea of learning.

Rather than building brittle systems that effectively have hard-coded logic which says, “If this happens, then do this. If that happens, then do this”—what’s different here is that effectively these systems are more designed to program their own logic, based upon data. They’re learning in a variety of different ways—they learn from data. Data where in the past, people have presented a scenario.

Let’s say in this scenario, you might consider how another intelligent human or an expert human is reacting to the scenario, and you’re watching how the human behaves or reacts; and from that, the computer is trying to learn what is optimal. Alternatively, they may learn by interacting with the environment itself, if the environment has a way of giving feedback… Like in the game of Go, the environment here being the board game itself: a version of feedback would be, if you make a move, you get a score attached to whether or not this is a good move, and whether or not it will help you win, and they’re basically using that feedback.

It’s often the type of feedback we as humans use all the time in real life… Where effectively you could imagine kids… If there’s a pot that’s too hot and they touch it, next time they see a similar object, they’re much less likely to touch it. And you know as adults, we go and we often analyze scenarios around us, and see if something has a positive or a negative feedback.

And then, when we see negative feedback, we sort of register what might have caused it, reason about what might have caused it, and try to learn from that process. The notion of learning is pretty fundamental. A huge body of work has focused on the way by which it learns: how do we develop more general-purpose methods by which computers can learn, and learn from many different types of data, many different types of supervision, and, effectively, learn as quickly as possible?
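
To make that kind of feedback-driven learning concrete, here is a minimal sketch, not from the episode and not how any production system such as AlphaGo is built: an agent repeatedly chooses among a few made-up actions, receives a noisy numeric reward, and updates a running value estimate for each action, so actions with negative feedback get chosen less often.

```python
import random

# Hypothetical "environment": touching the hot pot hurts, touching the
# cool cup is fine. The agent does not know this up front.
REWARDS = {"touch_hot_pot": -1.0, "touch_cool_cup": +0.5}

def pull(action):
    """Return noisy scalar feedback for an action."""
    return REWARDS[action] + random.gauss(0, 0.1)

values = {a: 0.0 for a in REWARDS}   # current value estimate per action
counts = {a: 0 for a in REWARDS}
epsilon = 0.1                        # occasionally explore

for step in range(1000):
    if random.random() < epsilon:
        action = random.choice(list(REWARDS))    # explore
    else:
        action = max(values, key=values.get)     # exploit current best guess
    reward = pull(action)
    counts[action] += 1
    # incremental average: the estimate moves toward the observed feedback
    values[action] += (reward - values[action]) / counts[action]

print(values)  # the "hot pot" action ends up with a clearly negative value
```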

You used the word ‘understand’ the person, ‘understand’ the situation. There’s a famous thought experiment on that word, and what the implications are. It’s called The Chinese Room Problem, and just to set it up for the listener… There’s a man who speaks no Chinese—we call him the librarian—and he’s in this giant room with thousands and thousands of these very special books.

And people slide questions under the door to him, and they’re written in Chinese. He doesn’t understand them, but he knows to match the very first symbol in the message to the spine of a book, pulls that book down, looks up the second symbol that directs him to another book, and another one, and another one… Until he finally gets to the end of this process.

And he copies down the characters that he sees, slides that back out, and it’s a perfect answer in Chinese. And of course, the man doesn’t know the meaning of what it was about, but he was able to produce this perfect answer using a system. The question is: Does the man understand Chinese?

And of course, the analogy is obvious. That’s all a computer is doing; it’s running a deterministic program, and so forth. So I put the question to you: Does the man understand Chinese? Can a computer understand something, or is understanding just a convenient word we use, but [where], clearly, the computer doesn’t understand anything?

Let’s shift our attention for a second away from computers and to humans. I often think hard about… I try to pull out scenarios where I’m wondering, am I effectively running an algorithm? And what is my own algorithm?—and even considering scenarios where it’s not so prescriptive. Perhaps I needed to be creative.

My job often involves being creative, and coming up with new ideas frequently. And the question I ask myself is: Am I just deriving this idea out of previous experiences that I already had? In other words, am I effectively just engaging in the task of…

Let’s say I have A and B, and this really creative idea… But what my brain has become really good at is, in new scenarios, quickly figuring out what are the relevant elements like A, B and C in my past which are pertinent, and then from that, coming up with something that looks like a combination or variation. In other words, it’s not as big a leap of faith as it [might seem] to someone who doesn’t have my experience, or doesn’t have my background.

And then, I think hard about it, and perhaps it really is just derived from the things I know. What this is getting at is me being a little cynical about my own ability to—my own assessment of how much do I really understand. Is understanding effectively the ability to quickly parse information, determine what’s important, apply rules of logic and a bit of randomness in order to experiment with ideas and then come up with a new idea?

I don’t really have an answer to this. But I’ve often wondered, this is maybe what we do; and our ability to do this really rather quickly is sort of what distinguishes different humans in their ability to understand and come up with a creative idea quickly. And so, if I think about it from this point of view, it doesn’t seem to me a complete stretch to imagine that we could teach computers to do these things.

So let me give you an example. For instance, going back to the very popular news story around AlphaGo: when AlphaGo started to explore new moves, many individuals who are not familiar with the topic of AI thought, “Wow, that’s amazing! It’s being creative, it’s coming up with brand-new moves altogether”—moves that human experts hadn’t really known. But really, all it was doing was search, in some super large space.

And its ability to do search is pretty expansive. And the other thing it has is really clever ways of doing search, because it has heuristics that it has built up from its own experience of learning. And so, in a way, that’s really what humans are doing, and that’s really what experience gives us. So let’s go back to your question of: Does the [Chinese Room person] really understand?

I think my personal issue is that I don’t know what understanding really means. And the example I gave you… If you were to define understanding that way, then I think in today’s world, we would say maybe that man didn’t understand what he was doing, but maybe he did. I’m not sure. It’s not obvious to me.

Do we measure understanding by the output—the fact that you give an input and they give a reasonable output? Or do we measure it by some other metrics? It’s a really great question though.

You captured the whole debate in what you just said, which is… The room passes the Turing test. You wouldn’t be able to tell—if you’re the Chinese speaker outside the room passing in the messages—you wouldn’t be able to tell if that was not a native speaker on the other side.

And so the machine ‘thinks’. Many people in the field have no problem saying the man understands Chinese, but at a gut level, that doesn’t feel right. Because the man doesn’t know if that was a message about cholera or coffee beans, or what—it’s just marks on paper. He knows nothing, understands nothing; he just walks through some rote process that gets him to copy these marks down on paper.

And to say that is understanding trips people up. The question is: Is that the limit of what a machine will ever be able to do? I will only say one thing, and then I would love your thoughts. Garry Kasparov kind of captured that when he lost to Deep Blue back in ‘97. He said, “Well, at least it didn’t enjoy beating me.”

His experience of the game was different from the computer’s experience of the game. I only think it’s a meaningful question because it really is trying to address the limits of what we can get machines to do. And if, in fact, we don’t understand anything either, then that does imply we can build AGI and so forth.

I agree with you. I think it’s a very meaningful question. And I certainly think it’s a topic we should continue to push on and understand more deeply. I would even go back to say that I bet there are people around you—maybe not in as holistic and expansive a context as the man in the Chinese Room you described—but you could imagine scenarios where somebody is really good… their whole job is sort of like they have learned numerous algorithms like this.

And you could imagine colleagues like that… Where they’re effectively really good at fielding certain types of questions, and pushing data out. And maybe they have not built the algorithm, but they understand what the person in front of them is asking, and they understand what kinds of answers they need to hear in order to be able to answer questions in a satisfactory manner.

Effectively, my point is that in the example, somebody told you he doesn’t understand, and that may well be the case. But if nobody had told you that, and he was always able to produce something that was acceptable or of a high quality, everybody else would always think of this person as, “He understands what he’s doing.” And we probably have people like that around us. We’ve all experienced this to some extent.

It could be. And if that is the case, it really boils down to what the word ‘artificial’ means in ‘artificial intelligence’. If ‘artificial’ means it’s not really intelligence—like artificial turf isn’t really turf—if it really means that, then you’re right. As long as you don’t know that he doesn’t understand, it doesn’t really matter.

I would love to ask one more question along these lines. I’m really intrigued by what we will need to do to build a machine that is equivalent to a human; and I think your approach of, “Let’s start with what humans do and talk about computers later” is really smart.

So I would put this to you… Humans are sentient, which is often a word that is misused to mean intelligent. [But] that’s actually ‘sapient’. ‘Sentient’ means you’re able to feel things—usually pain—but you’re able to feel something, to have an experience of feeling something.

That’s kind of also wrapped up in consciousness, but we won’t go there yet…

Is it possible for a computer to ever feel anything? It’s clearly possible to set up a temperature sensor that, when you hold a match to it, the computer can sense the temperature; and you can program the computer to scream in agony when it passes a certain temperature. But would it ever be possible for a computer to feel pain, or feel anything?

Let’s step back and ask the following question… Two parts: First is, “To make computers that feel something—can that be done?” The second question is, “Why do we need computers that feel things?” Is that really what separates artificial intelligence from human intelligence?

In other words, is that really the key distinction? And if so, can that be built? Let’s talk about how we would build it. Have you heard, or have you seen, any of the demos out of this terrific company—I think it’s called Hanson Robotics? If you go online, you can Google it, you can search for it. David Hanson is one of the founders, and effectively, what they build is a way to give a robot a face; and he has these actuators that allow very fine-grained movement.

And so, effectively, you see full facial features and full facial expressions projected onto a robot. The robot can smile and the robot can frown, and it can get angry and it can stare and express excitement and joy. Effectively, he’s sort of done a lot of the work of—not just what it takes to build mechanically those parts, but also thinking harder about how it would get expressed, and a little bit about when it would get expressed.

And then independently, there’s great work from MIT—and you know, other labs, too—but I’m just thinking of one example: They looked at learning and interpreting emotion. For example, you might imagine [that] if the person in front of you is angry, you might want the robot to react and respond differently than if the person was happy and excited.

Effectively, you could imagine putting a camera, seeing the stream coming in, [and] the computer processes it to do classification for whatever type of emotion is being expressed—you could specify a list of emotions that are commonly expressed. From that, the computer can then decide what human emotion is being expressed, and then decide what emotion it wants to express.

And now, you can imagine feeding it back into Hanson’s program that allows them to generate robotic facial motions that are effectively expressing emotion, right? So if we had to build it, we could build it. We know how to think about building it. So mechanically, it is not impossible. So now the piece here is—the second question is: If we could do this, and in fact there are studies that…
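
As a rough illustration of the pipeline she sketches, classify the emotion in an incoming frame and then choose an expression for the robot, here is a hedged toy version. The classifier is a stand-in stub (no real camera or trained model), and the emotion labels and response table are invented for illustration.

```python
import random

EMOTIONS = ["happy", "angry", "sad", "neutral"]   # hypothetical label set

def classify_emotion(frame):
    """Stand-in for a real vision model run over camera frames.
    Here it just returns a random label so the sketch runs end to end."""
    return random.choice(EMOTIONS)

# Hypothetical policy: what expression the robot should produce in response.
RESPONSE = {
    "happy": "smile",
    "angry": "calm, concerned expression",
    "sad": "sympathetic frown",
    "neutral": "neutral, attentive face",
}

def react(frame):
    detected = classify_emotion(frame)
    expression = RESPONSE[detected]
    # In a real system, this command would drive the facial actuators.
    print(f"detected={detected} -> robot expression: {expression}")

for frame in range(3):   # pretend we pulled three frames off a camera
    react(frame)
```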

For instance, when I was with Microsoft Research, there was a robot that would greet you, and it would basically see where you were standing, and it would turn its head to try to point to you. And many, many individuals who weren’t familiar with robotics—many visitors who would come to Microsoft, people that weren’t in the technology industry, but were just visiting—would see that and get really excited, because the idea of a robot turning its head and moving its eyes in response to where you’re standing was cool, and seemed very intelligent.

But effectively, if you break down the mechanics of how it’s doing it, it’s not a big surprise. Similarly, you could augment it by also showing facial expressions, and I think CMU—Carnegie Mellon—has a beautiful robot that’s called the robot receptionist; her name is Valerie. They worked on it with the drama department at Carnegie Mellon.

And they basically filled the robot with lots of stories, and it was really funny… As a graduate student, I was visiting, and met Valerie for the first time… You could ask her for directions, and she would give you directions on where to go. If I could say, “Where’s Manuela’s office?” the robot would point me to where it is.

But in the middle, she would behave like a human, where she would be talking on the phone to her sister; and they’d be talking about what’s going on, what’s been keeping them busy, and they’d hang up or she’d put people on hold if a new visitor came in, and so forth.

So what I’m challenging is this concept of, is it really the lack of human emotion, or what you consider to be human-like emotion—to be very special to humans? Is it that? Is it mimicking that? What does it mean to feel pain? Is it really the action-reaction—somebody’s poking you and you react—or is it the fact that there’s something internal, biological that’s going on, and it’s the perception of that?

That could be. You asked a good question: Does it matter? And there would be three possible reasons it would matter: First, there are those that would maintain that an intelligence has to experience the world, that it isn’t just this abstract ones and zeros it-lives-in-a-computer thing—that a true intelligence would need to be able to actually have experiences.

The second thing that might make it matter is… There was a man named Weizenbaum who famously created a program in the ‘60s called ELIZA, which was a really simple program. You would say, “I’m sad.” It would say, “Why are you sad?”

“I’m sad because my brother yelled at me.”

“Why did your brother yell at you?”

And Weizenbaum turned against it all, because what he saw is that even people who knew it was just a very simple program developed emotional attachment to it. And he said… When the computer says, “I understand,” as Eliza did, he said it’s just a lie. There is no ‘I’ in there, and there’s no understanding.
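
ELIZA-style systems really were this simple at their core: a short list of patterns and canned transformations that reflect the user's words back, with no model of what the words mean. A minimal sketch of that idea, not Weizenbaum's actual DOCTOR script, looks like this:

```python
import re

# A few ELIZA-style rules: a regex pattern and a response template that
# echoes part of the user's own words back (toy illustration only).
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why are you {0}?"),
    (re.compile(r"i'm (.*)", re.I), "Why are you {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*match.groups())
    return "I understand. Please go on."   # the line Weizenbaum objected to

print(respond("I am sad"))        # -> Why are you sad?
print(respond("He just did."))    # -> I understand. Please go on.
```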

But really the reason why it might actually matter is another thought experiment, that I will put to you and to those listening: It’s the problem of Mary.

Mary is a hypothetical person who knows everything about color. She knows literally everything, like at a god-like level. She knows everything about photons and cones and how color manifests in the brain. She knows everything that there is to know about it, but the setup is that she has never seen it. She lives in this room that’s all black and white, and only has black-and-white computer monitors.

She walks outside one day and sees red for the first time. And the question is: Did she learn something new? Is experiencing something different than knowing something? And if you say yes… It’s one of those things where most people, at first glance, would say, “Yes, if she’s never seen color and she sees it for the first time; yes, she learns something.”

And if that is the case, then a computer has to be able to experience things in order to learn past a certain point. Do you think Mary learned something new when she saw color for the first time? Or no, she knew exactly what it would look like, and experiencing it would make no difference?

So, you know what Mary knew: did she know ahead of time what red would look like when she stepped out?

Well, she knew everything about color. She never saw it, but she knew exactly what it would do to her brain—at a molecular level, atomic level—every single thing that would happen in her brain when she saw a color, but she’s never seen it.

As a computer scientist, when you say that to me, I would say that the representation of what Mary understands or knows is ambiguous. What I mean by this is, I don’t know what it means to say—I understand what it means to say “she knows at the molecular level what happens.” I understand what it means to say she knows, perhaps, about the relationship between different primary colors, and the derivative colors and so forth.

But are you saying that she knows… Is it the case that she receives an image using her eyes, and her eyes represent it using some form of internal neuronal format?—Are you saying she knows that? Because if she doesn’t know that, then effectively, she still has a partial understanding of what knowing everything about color means.

So this might be an interesting place… Where we think her knowing everything about color…

If you tell me: Somebody presented a red image to her, and she knew what it meant to take that red image and convert it—and these are really hypotheticals; I’d have to understand this more deeply and really study it, and perhaps bring in someone who understands human perception really well—but my first step-check would be: What does it mean for her to know everything about color?

And what if we present her with an image, her visual cortex processes it, and effectively, she is getting data, and she is seeing it internally. Is it stored in RGB format? Is she storing it in some format that she understands? Is she aware? Has that core process happened in her head before? It may not have been due to her stepping out, but the question is: Is that something that she is privy to, or has knowledge of?

And if so, then I would say that when she steps out… And if all she is doing is focusing on the color red, and that is the only sensation that’s being generated in her head; then yeah, this is going to seem familiar to her because it’s something she’s seen before. The word ‘experience’ at that point is a really interesting word. And it would be fun to sit down and try to write down formal definitions for what it means.

And generally, we think of having ‘seen’ and having ‘experienced’ as two different things, in human emotions. But I think from a computer point of view, they don’t seem different. Even as a human, if I think hard about it, I don’t know really what the distinction is. I don’t know what it means to kind of know it, to know it, and then experience it. What is the difference between those things?

It may be that the question imperfectly captures it, because it’s formed very casually, but… Humans experience the world.

You taste a pineapple, and what that pineapple tastes like… Tasting it seems to be a different thing than knowing something. If I know what it tastes like, it’s a different thing than actually having the experience of tasting it.

Knowing how to ride a bicycle is different than having ridden a bicycle, and knowing how you feel balanced when you get on one. Touching something warm feels a certain way that knowing all about warmth does not capture.

And so, the question is: If a machine cannot actually feel things, touch things, taste things, have any experience of the world—then whatever intelligence it has is truly fake. It really is artificial in a sense that’s completely fake.

And you’re right, I think, in asking the question… Why we ask these questions… And a lot of what people are often doing is asking questions about people. Are people machines? Are we…

But then they have this disconnect, to say: “But we feel, and we experience, and we know, and those seem to be different than things my iPhone can do.” So I think I’m trying to connect those dots to say, experiencing something seems to be different than knowing something.

But you’re right; it’s imperfectly formed. I’ll let you comment on that, and then let’s move on to your research, because there’s so much there I would love to hear more about.

Sure! So I think I am going to continue to push back a little bit on… I feel that people’s experience of what they believe a machine or an iPhone can do is very much based on… I think it’s easier to think about a single narrow task.

You could take the task of eating a pineapple, or the task of going and experiencing a warm day… But effectively, the way I think about it is [that] a lot of these capabilities don’t exist because most people haven’t thought that building a machine that eats a pineapple is a very useful thing, so people haven’t bothered to build it.

But let’s imagine I decided that was important, and I wanted to build it. Then, what I would do is much like—going back to David Hanson… I would try to first identify what do I mean by ‘experience eating a pineapple’, and if the idea is that every time I am given it—a tasty pineapple—I can eat it and it’s delicious, and my eyes light up. And if I eat a rotten pineapple, then I’m visibly upset.

Then I could imagine building the sensor to which you feed the pineapple. It runs chemical tests that check, effectively, what’s in the pineapple and… You could start with version one. Version one tests what’s in the pineapple, and based on that—and it’s hooked up to David Hanson’s robot—it generates the reaction, which is excited, or sad, or visibly unhappy, depending on how tasty or not-so-tasty the pineapple is.

And you could even take it a step further by saying, “You know what? I’m going to give lots of humans things to eat; and based on that, I will watch what the humans are doing.” And then effectively, the computer’s just learning by taking the same fruit and eating it itself. And you didn’t even program anything about how to react. All it did was watch humans eat it, and based on that, it learned that when certain molecular compositions exist in the thing it’s tasting, then it tends to get happy or less happy.
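
A hedged sketch of that "watch humans and learn the mapping" idea: synthetic "chemical composition" features, labels standing in for how imaginary human tasters reacted, and an off-the-shelf classifier that then predicts a reaction for a new fruit. The features, the labeling rule, and the new fruit are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented training data: each row is a fruit's "composition" [sugar, acidity];
# the label is how imaginary human tasters reacted to that fruit.
n = 200
sugar = rng.uniform(0, 1, n)
acidity = rng.uniform(0, 1, n)
X = np.column_stack([sugar, acidity])
happy = (sugar - 0.5 * acidity > 0.2).astype(int)  # made-up stand-in for observed reactions

model = LogisticRegression(max_iter=1000).fit(X, happy)

# A new fruit the robot has never tasted: predict the reaction it should mimic.
new_fruit = np.array([[0.9, 0.1]])   # sweet, low acid
print("predicted reaction:", "happy" if model.predict(new_fruit)[0] else "unhappy")
```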

And you might imagine it starts to mimic. In fact, we could take it even another step further and say, “Let’s give a group of robots the same set of sensors, and they have to figure out a way by which they communicate and barter with each other.” So effectively, there’s an objective function, and the objective function—or the goal for the group of robots—is to figure out an effective way to trade.

The trade is such that one group of robots loves apples. The other group of robots loves pineapples. And the way you know that is, effectively, they’ve each lived in different environments and—I don’t like the word ‘live’, because it’s over-interpretive…

What I mean is, they’ve been trained in different environments, and the ones that love to eat apples have learned to get an excited expression to good apples, and the other set of robots get an excited expression to good pineapples. And you want them to work together to trade, such that everybody is as happy as possible.

Then it’s completely possible they’ll be able to effectively learn, on their own, a trading strategy where they say, “You know what? The people who don’t like pineapples should give away their pineapples, and the people who don’t like apples should get rid of apples.” So, effectively, what I was giving you was an example where…

If we understand what is the objective we’re after—which is, what does experiencing a pineapple mean—then very often, you can turn it into some mathematical objective by which the computer can learn how to do similar things, and very quickly… ‘Very quickly’ depends a lot on the complexity of the task—but it can mimic that behavior or goal—and now I use the word ‘mimic’ lightly…

But effectively it can behave similarly—and one could argue, “What does ‘similar’ mean?” and, “What does ‘behave similarly’ mean?”… But for the most part, we would look at this and be pretty satisfied that it’s doing something that we would consider to be intelligent. We would consider it to be experiencing something.
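
The trading setup she describes boils down to a shared objective: allocate fruit so that total "happiness" across the robots is as high as possible. A toy version, with invented robots, invented preferences, and one fruit per robot, can be written as a tiny assignment problem:

```python
from itertools import permutations

# Invented preferences (utility per fruit) for three hypothetical robots.
prefs = {
    "robot_A": {"apple": 1.0, "pineapple": 0.1},   # apple lover
    "robot_B": {"apple": 0.1, "pineapple": 1.0},   # pineapple lover
    "robot_C": {"apple": 0.6, "pineapple": 0.4},
}
fruits = ["pineapple", "apple", "apple"]

best_total, best_assignment = -1.0, None
for order in set(permutations(fruits)):
    total = sum(prefs[robot][fruit] for robot, fruit in zip(prefs, order))
    if total > best_total:
        best_total, best_assignment = total, dict(zip(prefs, order))

print(best_assignment, round(best_total, 2))
# The allocation that maximizes total happiness gives the pineapple to the
# pineapple lover and the apples to the others: the "trade" falls out of
# optimizing the shared objective, not out of any hand-coded rule.
```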

Unless, the only block in our head is we think it’s a machine… So it’s hard because we think humans experience things and machines don’t… But what I think would be really cool is to think about, “Are there tasks where we really experience something, that we think there is no way to build a machine to experience the same thing?” What does it mean to experience in that setup?

I think that would be interesting, and I would love to hear [from] our listeners who have ideas, or want to send me ideas. I would love to hear that!

Well, I think the challenge, though, is that in civilization we’ve developed something called ‘human rights’, where we say: “There are things you can’t do to a person no matter what. You can’t torture people for amusement, and you can’t do these things.”

So we have human rights, and we extend them—actually broadly—to other creatures that can feel pain, so we have laws against cruelty to animals because they feel pain.

It sounds like you’re saying the minute you program a computer to be able to mimic a frown, or to scream and mimic agony, that that is somehow an equivalency; and therefore, we need laws that… Once the temperature hits 480 degrees, the computer screams, and we need to outlaw that; we need to grant those things rights, because they are experiencing things.

And then, you would push it one step further to say, when I am trying to get my car out of the mud, and it’s smoking, and the gears are grinding… That that too is experiencing pain, and therefore that should be…

You run into one of two risks. You would either make the notion of, “Things that feel have rights not to be tortured”—you either make that ludicrous, by applying it to anything that can make a frowny face…

You either try to elevate everything that’s mechanical to that, or you end up debasing people, by saying: “You don’t actually feel anything. That’s just a program. You’re a machine, and you don’t actually have any experience. And you reporting pain, it isn’t real. It’s just you, kind of, programmed to say that.”

How do you have rights in a world where you have that reductionist view of experience?

Personally, I think it’s pretty liberating that computers don’t get tired, and they don’t feel pain. When I say the word ‘feel pain’, I mean feel pain in the sense that, if you ‘hurt’ me a lot in a certain way using a pin, I may screech. But also, I could shut down, I could stop being productive.

But if you take a computer, and it has a hard, metal shell… And you take a pin and you effectively poke it too hard, it doesn’t really do much to the computer because it’s fine.

But then, there are other things… For instance, if you unplug the computer, it’s dead. And there’s an equivalent notion of unplugging me. So for me, I kind of find it liberating that we don’t have to try to do all the same things. The thing that is very exciting to me about it is that this has its own strength. A machine is effectively a very… I think there’s two takeaways for me, personally: One, the fact that it makes me think harder about, “What do I have to do to be special?”—about myself.

So effectively, there are lots of things that I used to consider to be very special—I’m still special, of course [laughs]—but what I mean is, I would attribute this mystical sense to—which is maybe not so necessary… Like the whole task of programming computers and developing these learning machines has really made me a little bit more humble about what I consider to be very hard, and not-so-hard; and effectively realizing that maybe some of these properties that humans exhibit can actually be demystified, right?

I understand a little bit more about, what does it mean to do X and do Y? It makes me think harder about something that comes so naturally to us—how is it that we do it? How is it that different beings do it? And the fact that computers can do it, and maybe it’s not exactly the same way, and it’s a slightly different way…

So just having that awareness is actually pretty exciting, because it makes things that are everyday around us, which are pretty rote… not so rote anymore. It’s fun to watch people walk. You’re sort of saying, “Ah, it’s so natural and easy for them,” but if you really think about it, there are just so many complicated things we are doing. And then, when you try to teach computers how to walk, you very quickly realize how complicated it is, and it’s kind of cool that we as human beings can do it.

So effectively, one of the aspects of it is that it’s teaching me a little bit more about myself, and making me realize the complexity, and also the steps or procedures, it takes for me to do some of the things that I’m doing. The second aspect of it is realizing that perhaps it’s a good thing [that] there are certain things a computer is good at, and things that it’s not good at… And perhaps, taking advantage of that in order to build systems that are useful in practice, and can really make us, as a society, better off is pretty exciting to me.

So I think the idea of trying to exactly mimic humans—or whether we would be able to exactly mimic humans—is sort of interesting; but practically speaking, I don’t think of it as the most interesting consequence of this… or the area of debate for most experts in the field.

We think more of it as, what are areas where we can really build useful things that could then help us make humans faster, make everyday life better, save us work—that would be better to pass off to a computer to do, so that it frees up time for us to do other things.

But… Does that answer your question a little bit more, about human rights? So effectively, I think the issue was, if you are concerned about pain, then perhaps there should be rules about when humans experience pain, we ought not to do X, Y and Z. Maybe computers could have different sorts of rules, because they experience different sorts of things, and they’re good and bad at different sorts of things.

And I think we just haven’t come to a place where there’s a general agreement among scientists building it, about what is and isn’t useful, and we work around those principles. And that has really dictated what gets built.

Fair enough! So tell me about… You have an unusual professorship at Johns Hopkins. What is that? Can you talk about your work there?

Yeah, sure! I’m a faculty member in Computer Science and Stats, but also a faculty member in Public Health. Hopkins is one of the largest schools of public health in the country; and in particular, I am in the Department of Health Policy and Management. So what’s unique about my appointment is that…

Hopkins has a very large School of Public Health, a very large School of Medicine. And I effectively interact—on a day-to-day basis—not just with engineers, but also people who are clinical experts and public health experts who design policy… [Which brings] a multifaceted view into the kinds of questions we’re trying to answer around using computers, and using data-driven tools to improve medicine and improve public health.

And so, what does that look like on a day-to-day basis? What kinds of projects are you working on?

Let’s see… Let me give you a concrete example.

One area of study that we spend time on is detecting adverse events in hospitals. They’re called ‘hospital-acquired complications’. One example of this is sepsis. And effectively, what happens is, let’s say a patient is coming into the hospital for any condition; and sometimes they come in because they have an infection, and the infection goes undetected, and turns into what’s called sepsis.

Sepsis is effectively when your body is trying to fight the infection, [and] it releases chemicals, and these chemicals start attacking your [own] organs and systems. This happens in some fraction of cases, and if it does happen, it ends up causing organ damage, organ failure, and eventually death if it goes untreated.

And so, this is an example where individuals who have sepsis at the moment… Physicians are relying on visible signs and symptoms in the patient in order to be able to initiate treatment. And what our work has shown is [that] it’s possible to identify it very early, based on lots of data… So when they come in, as part of routine care, they’re taking tons of measurements, and these measurements are getting stored electronically…

And so, what we do is we analyze these measurements in real time, and we can identify subtle signs and symptoms that currently the physicians miss—you know, it’s a busy unit. In a 400-bed hospital, there are patients coming in, there are lots of other patients; it’s a distributed care team. It’s tough. And if the symptoms are not really visible, or are subtle, they sometimes get missed.

And so, an example of what we’ve shown is—with sepsis, for instance—you can identify very early, subtle signs and symptoms, identify these high-risk patients, and bring this to the caregiver, so that they can now start to initiate treatment faster. And so, this is exciting because it really demonstrates the power of computers: They’re tireless; they can sit there and process data from 400 patients continuously, all the time.

We can learn from expert doctors what are signs and symptoms, but not just that! We can look at retrospective data from 10,000 or 70,000 or 100,000 patients, and understand things like what are the subtle signs and symptoms that happen to appear in patients with sepsis and without sepsis, and use that to start displaying this kind of information to physicians.

And now, they’re better off, because suddenly, they are missing fewer patients. The patients are better off because they can go in completely happy that they’re going to be cared for in the best way possible, and the computer is sitting there, and it really has no reason to complain because all it’s doing is processing the data, and it’s good at that. So that’s one example. And there are lots of other areas.
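
A heavily simplified sketch of that early-warning idea: train on retrospective patient measurements with known outcomes, then score new measurements as they arrive. The features, labeling rule, data, and alert threshold below are synthetic placeholders; the system her group actually built involves far more careful modeling, validation, and clinical integration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic retrospective data: each row is one patient snapshot
# [heart_rate, temperature_C, white_cell_count]; label = developed sepsis.
n = 5000
hr = rng.normal(85, 15, n)
temp = rng.normal(37.0, 0.8, n)
wbc = rng.normal(9.0, 3.0, n)
X = np.column_stack([hr, temp, wbc])
risk = 0.03 * (hr - 85) + 0.8 * (temp - 37.0) + 0.15 * (wbc - 9.0)
y = (risk + rng.normal(0, 0.5, n) > 1.0).astype(int)   # made-up outcome rule

model = LogisticRegression(max_iter=1000).fit(X, y)

# "Real-time" scoring: a new patient's latest measurements come in.
new_patient = np.array([[118, 38.9, 16.0]])
prob = model.predict_proba(new_patient)[0, 1]
if prob > 0.5:   # alert threshold chosen arbitrarily here
    print(f"High sepsis risk ({prob:.2f}): flag for the care team")
else:
    print(f"Low sepsis risk ({prob:.2f})")
```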

Another area we’ve been spending time looking at is complex patients, patients of… the word ‘complex patients’ is a little… Let me demystify that a little bit. So looking at diseases where there’s a ton of diversity or heterogeneity in symptom profile; so for example diseases like lupus, scleroderma, multiple sclerosis, where the signs and symptoms vary a lot across individuals. And really understanding which person is going to be responsive to which treatment [in these cases] is not so obvious.

So again, going back to the same philosophy: If we can take data from a large patient population, we can analyze this and start to learn what—for a given patient—is their typical course going to look like, and what are they going to be likely to be responsive to. And then [we can] use that to start bringing that information back to our physicians at the point of… They can now use this information to improve and guide their own care. So those are some examples.
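
One common way to start on the "which kind of patient is this" question is unsupervised subtyping: cluster patients by their measurement profiles and then examine how the groups differ. The sketch below uses synthetic organ-involvement scores and k-means purely as an illustration; it is not the modeling her group actually uses for scleroderma or the other diseases she mentions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Synthetic "symptom profile" per patient: severity scores for
# [lung, kidney, gastrointestinal] involvement (all invented).
lung_dominant = rng.normal([0.8, 0.2, 0.2], 0.1, size=(50, 3))
kidney_dominant = rng.normal([0.2, 0.8, 0.2], 0.1, size=(50, 3))
gi_dominant = rng.normal([0.2, 0.2, 0.8], 0.1, size=(50, 3))
X = np.vstack([lung_dominant, kidney_dominant, gi_dominant])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Each cluster centre suggests a candidate subtype; in practice you would
# then ask whether the subtypes differ in course and treatment response.
for i, centre in enumerate(kmeans.cluster_centers_):
    print(f"subtype {i}: lung={centre[0]:.2f} kidney={centre[1]:.2f} GI={centre[2]:.2f}")
```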

I was just reading some analysis which was saying that before World War II, doctors only had five medicines. They had quinine for malaria, they had aspirin for inflammation, and they had morphine for pain… They had five medicines, and then, you think about where we are today. And that gives one a lot of hope.

And then you think about… We kind of have a few challenges. I mean even all the costs, and all the infrastructure and all of that, just treating it as a mental problem… One, as you just said, no two people are the same, and they have completely different DNA; and they have completely different life experiences. They eat different food for lunch, all of this stuff.

So people are very different, and then we don’t have really good ways to collect that data about them and store it and track it. And so it’s really dirty data over a bunch of different kinds of patients. So my question is: How far do you think we’re going to be able to go? How healthy will we be?

You can pick any time horizon you want. Will we cure aging? Will we eliminate disease? Will we get to where we can sequence any pathogen, and model it with the person’s DNA in a computer, and try 10,000 different cures at once, and know in five minutes how to cure them? Or do we even have a clue of what’s eventually going to be possible?

So I think one of the interesting things, when I first joined Hopkins, that I learned very early, is that when we dream of what an ideal health system ought to look like… Wouldn’t it be great if we had cures for everything? [But] one of the most surprising and disappointing facts I learned was that even in cases where we know what the right treatment is; even in cases that we know—where we could have treated them, had we known upfront, who they were and what was the appropriate sort of therapy for them…

Right now, we have many such cases we miss. So I don’t know if you’ve seen this Institute of Medicine report that came out in 2011 or 2010—I can’t remember the date—where they talk about how a third or a quarter of the money that’s spent in healthcare is what they think of as ‘unnecessary waste’.

Unnecessary waste means waste because we are over-treating; waste in cases where we’ve kept people longer than was necessary; waste because there were complications that were preventable; waste because we gave them treatments that weren’t the right treatments to begin with, and we should’ve given them something else.

And I don’t think the answer is as simple as, “Oh, why isn’t our health system better? Is it because we’re not training the most competent doctors? Is it because our medical educational system is broken?” No. I think if you actually sit inside a hospital, and you watch what’s going on, it’s such a multi-disciplinary, multi-person environment…

That every decision touches many, many people, including the patient. And there’s all this information, and all these decisions have to be made very quickly. And so what to know about any given individual, at any given time, to determine the right thing to do is actually very complicated. And it’s pretty amazing to me that we’re as effective as we are, given the way the system is built up.

So effectively, if you really think about it… To me, a part of it is a systems problem, in the sense that, going back… Our delivery of healthcare has very much come out of the era where there were only so many medications. They kind of knew what to do, there were only so many measurements, the rules were easy to store in our head, and you could really focus on execution—which is making sure we’re able to look at the individual, glean what is necessary, and apply the knowledge we’ve learned in school very quickly.

And then the top challenge is… Medical literature is expanding at a staggering rate. Like you noted, the number of treatments has expanded at a staggering rate, but much more so, our ability to measure individuals has expanded. And as a result, even sort of knowing our notion of what is a disease…

It’s not just the case that… The rules aren’t so simple anymore. It’s much more challenging. Rather than saying, “For every person with sepsis, give them fluids.” No.

Some are very responsive, and some are not responsive, and the obvious one is if they have any kind of heart failure, don’t give them fluids because it’s going to make the condition worse. What I’m effectively going to is…

I feel there’s a huge piece of low-hanging fruit here, which is… I think we can make human health a lot better just by thinking harder about all the treatments we already have, as we start taking many more measurements, and as these measurements become visible and accessible to us.

Improving the precision with which we prescribe these treatments will make a huge difference, and I think that’s very tangible… something we’ll get to within the next five to ten years. There are lots of areas of medicine that will see a huge improvement just from better use of lots of data that we already know how to collect, and from thinking about the use of that data to improve how we target therapy.

I’ll give you an example: An area of study that I am familiar with is, as I mentioned earlier, these complex diseases—like scleroderma.

They used to think of scleroderma as one disease, and any expert who treats scleroderma patients knows that there’s tremendous diversity among individuals when they come in. Some have huge impact on the kidneys, others have a huge impact on the gastrointestinal tract, and yet others have huge impact on the heart or lungs.

And effectively, when a person comes in, you’re kind of wondering, “Well, I have an array of medications I can give them. Who is this person going to be? And what should I be treating them with?” And our ability to look at this person’s detailed data and understand who this person is likely to be… And then, from that, targeting therapy more effectively, could already influence and improve treatment there.

So I think that’s one area where you’ll see a huge amount of benefits. The second area that I think… is basically increasing our ability to measure more precisely. And you can already see whole genome sequencing, microbiomes, and there are specific disease areas where being able to collect this much more easily will make a big difference.

And then, effectively, they’re going to give rise to new treatments because there are pathways that we are unaware of, that we will discover in the process of having these measurements, and that will lead to new treatment. So, I think the next ten years are going to be very, very exciting in terms of how quickly the field is going to improve. And human health is going to improve from our ability to administer medication and administer medicine more precisely.

That is a wonderful thought. Why don’t we close on that? This has been a fascinating hour and I want to thank you so much for taking the time to join us.

You’re welcome and thank you so much for having me! This was really fun.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here



Voices in AI – Episode 9: A Conversation with Soumith Chintala


In this episode, Byron and Soumith talk about transfer learning, child development, pain, neural networks, and adversarial networks.





Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Soumith Chintala. He is an Artificial Intelligence Research Engineer over at Facebook. He holds a Master of Science in Computer Science from NYU. Welcome to the show, Soumith.

Soumith Chintala: Thanks, Byron. I am glad to be on the show.

So let’s start out with your background. How did you get to where you are today? I have been reading over your LinkedIn, and it’s pretty fascinating.

It’s almost accidental that I got into AI. I wanted to be an artist, more of a digital artist, and I went to intern at a visual effects studio. After the summer, I realized that I had no talent in that direction, so I instead picked something closer to where my core strength lies, which is programming.

I started working in computer vision, but just on my own in undergrad. And slowly and steadily, I got to CMU to do robotics research. But this was back in 2009, and still deep learning wasn’t really a thing, and AI wasn’t like a hot topic. I was doing stuff like teaching robots to play soccer and doing face recognition and stuff like that.

And then I applied for master’s programs at a bunch of places. I got into NYU, and I didn’t actually know what neural networks were or anything. Yann LeCun, in 2010, was more accessible than he is today, so I went, met with him, and I asked him what kind of computer vision work he could give me to do as a grad student. And he asked me if I knew what neural networks were, and I said no.

This was a stalwart in the field who I’m sitting in front of, and I’m like, “I don’t know, explain neural networks to me.” But he was very kind, and he guided me in the right direction. And I went on to work for a couple of years at NYU as a master’s student and simultaneously as a junior research scientist. I spent almost another year there as a research scientist while also separately doing my startup.

I was part of a music and machine learning startup where we were trying to teach machines to understand and play music. That startup went south, and I was looking for new things. And at the same time, I’d started maintaining this tool called Torch, which was the industry-wide standard for deep learning back then. And so Yann asked me if I wanted to come to Facebook, because they were using a lot of Torch, and they wanted some experts in there.

That’s how I came about, and once I was at Facebook, I did a lot of things—research on adversarial networks, engineering, building PyTorch, etc.

Let’s go through some of that stuff. I’m curious about it. With regard to neural nets, in what way do you think they are similar to how the brain operates, and in what way are they completely different?

I’d say they’re completely different, period. We think they’re similar in very high-level and vague terms like, “Oh, they do hierarchical learning, like humans seem to think as well.” That’s pretty much where the similarity ends. We think, and we hypothesize, that in some very, very high-level way, artificial neural networks learn like human brains, but that’s about it.

So, the effort in Europe—the well-funded effort—The Human Brain Project, which is deliberately trying to build an AGI based on the human brain… Do you think that’s a worthwhile approach or not?

I think all scientific approaches, all scientific explorations are worthwhile, because unless we know… And it’s a reasonably motivated effort, right? It’s not like some random people with bad ideas are trying to put this together; it’s a very well-respected effort with a lot of experts.

I personally wouldn’t necessarily take that direction, because there are many approaches to these things. One is to reverse-engineer the brain at a very fundamental level, and try to put it back together exactly as it was. It’s like investigating a car engine… not knowing how it works, but taking X-ray scans of it and all that, and trying to put it back together and hoping it works.

I’m not sure if that would work with as complicated a system as the brain. So, in terms of the approach, I’m not sure I would do it the same way. But I think it’s always healthy to explore various different directions.

Some people speculate that a single neuron is as complicated in its operations as a supercomputer, which either implies we won’t get to an AGI, or we certainly won’t get it by building something like the human brain.  

Let’s talk about vision for just a minute. If I show a person just one sample of some object, a statue of a raven, and then I show them a hundred photos with it partially obscured, on its side, in the dark or half underwater, weirdly lit—a person could just boom, boom, boom, pick it all out.

But you can’t train computers anything like that. They need so many samples, so many examples. What do you think is going on? What are humans doing that we haven’t taught computers how to do?

I think it’s just the diversity of tasks we handle every day. If we had a machine learning model that was also handling as many diverse tasks as humans do, it would be able to just pick a raven out of a complicated image just fine. It’s just that when machines are being trained to identify ravens, they’re being trained to identify ravens from a database of images that don’t look very much like the complicated image that they’ve been given.

And because they don’t handle a diverse set of tasks, they’re doing very specific things. They kind of over-fit to that dataset they have been given, in some way. I think this is just a matter of increasing the number of tasks we can make a single machine learning model do, and over time, they will get as smart. Of course, the hard problem is we haven’t figured out how to make the same model do a wide variety of tasks.

So that’s transfer learning, and it’s something humans seem to do very well.

Yes.

Does it hinder us that we take such an isolated, domain-specific view when we’re building neural AIs? We say, “Well, we can’t teach it everything, so let’s just teach it how to spot ravens,” and we reinvent the wheel each time? Do you have a gut intuition where the core, the secret of transfer learning at scale is hiding?

Yeah. It’s not that we don’t want to build models that can do a wide variety of tasks. It’s just that we haven’t figured it out yet. The most popular research that you see in media, that’s being highlighted, is the research that gets superhuman abilities in some specific niche task.

But there’s a lot of research that we deal with day-to-day, that we read about, that is not highlighted in popular media, which tries to do one-shot learning, and smarter transfer learning and stuff. And as a field, we’re still trying to figure out how to do this properly. I don’t think, as a community of AI researchers, we’re restricting ourselves to just building expert systems. It’s just that we haven’t yet figured out how to do more diverse systems.

Well, you said neural nets aren’t much like the human brain. Would you say just in general, mechanical intelligence is different than human intelligence? Or should one watch how children learn things, or study how people recognize what they do, and cognitive biases and all of that?

I think there is a lot of value in doing cognitive science, like looking at how child development happens, and we do that a lot. A lot of the inspiration and ideas, even in machine learning and neural networks, come from looking at such aspects of human learning and human intelligence. And it’s being done.

We collaborate, for example at FAIR—Facebook AI Research—with a few researchers who do try to understand child development and child learning. We’ve been building projects in that direction. For example, children learn things like object permanence between certain ages. If you hide something from a child and then make it reappear, does the child understand that you just put it behind your back and then just showed it to them again? Or does a child think that that object actually just disappeared and then appeared again?

So, these kinds of things are heavily-studied, and we try to understand how the mechanisms of learning are… And we’ve been trying to replicate these for neural networks as well. Can a neural network understand what object permanence is? Can a neural network understand how physics works? Children learn how physics works by playing a lot, playing with blocks, playing with various things in their environment. And we’re trying to see if neural networks can do the same.

There’s a lot of inspiration that can be taken from how humans learn. But there is a slight separation between whether we should exactly replicate how neurons work in a human brain, versus how they work in a computer; because human brain neurons, their learning mechanisms and their activation mechanisms, use very different chemicals, different acids and proteins.

And the fundamental building blocks in a computer are very different. You have transistors, and they work bit-wise and so on. At a fundamental block level, we shouldn’t really look for exact inspirations, but at a very high level, we should definitely look for inspiration.

You used the word ‘understand’ several times, in that “Does the computer understand?” Do computers actually understand anything? Is that maybe the problem, that they don’t actually have an experiencing self that understands?

There’s—as they say in the field—‘nobody home’, and therefore there are just going to be these limits of things that come easy to us because we have a self, and we do understand things. But all a computer can do is sense things. Is that a meaningful distinction?

We can sense things, and a computer can sense things in the sense that you have a sensor. You can consume visual inputs, audio inputs, stuff like that. But understanding can be as simple as statistical understanding. You see something very frequently, and you associate that frequency with this particular association of a term or an object. Humans have a statistical understanding of things, and they have a causal understanding of things. We have various different understanding approaches.

And machines can, at this point, with neural networks and stuff… We take a statistical or frequentist approach to things, and we can do them really well. There’s other aspects of machine learning research as well that try to do different kinds of understanding. Causal models try to consume data and see if there’s a causal relationship between two sets of variables and so on.

There are various levels of understanding, and understanding itself is not some magical word that can’t be broken down. I think we can break it down by the kinds and approaches of understanding involved. Machines can do certain types of understanding, and humans can do further types of understanding that machines can’t.

Well, I want to explore that for just a moment. You’re probably familiar with Searle’s Chinese Room thought experiment, but for the benefit of the listeners…

The philosopher [Searle] put out this way to think about that word [‘understanding’]. The setup is that there’s a man who speaks no Chinese, none at all, and he’s in this giant room full of all these very special books. And people slide questions written in Chinese under the door. He picks them up, and he has what I guess you’d call an algorithm.

He looks at the first symbol, he finds the book with that symbol on the spine, he looks up the second symbol that directs him to a third book, a fourth book, a fifth book. He works his way all the way through until he gets to the last character, and he copies down the characters for the answer. Again, he doesn’t know what they are talking about at all. He slides it back under the door. The Chinese speaker [outside] picks it up, reads it, and it’s perfect Chinese. It’s a perfect answer. It rhymes, and it’s insightful and pithy.  

The question that Searle is trying to pose is… Obviously, that’s all a computer does. It’s a deterministic system that runs these canned algorithms, that doesn’t understand whether it’s talking about cholera or coffee beans or what have you. That there really is something to understanding.  

And Weizenbaum, the man who wrote ELIZA, went so far as to say that when a computer says, “I understand,” that it is just a lie. Because not only is there nothing to understand, there’s just not even an ‘I’ there to understand. So, in what sense would you say a computer understands something?

I think the Chinese Room thing is an interesting puzzle. It’s a thought-provoking situation, rather. But I don’t know about the conclusions you can come to. Like, we’ve seen a lot of historical manuscripts and stuff that we’ve excavated from various regions of the world, and we didn’t understand that language at all. But, over time, through certain statistical techniques, or certain associations, we did understand which words—what the fundamental letters in these languages are, or what these words mean, and so on.

And no one told us exactly what these words mean, or what this language exactly implies. We definitely don’t know how those languages are actually pronounced. But we do understand them by making frequentist associations with certain words to other words, or certain words to certain symbols. And we understand what the word for a ‘man’ is in a certain historical language, or what the word for a ‘woman’ is.

With statistical techniques, you can actually understand what a certain word is, even if you don’t understand the underlying language beforehand. There is a lot of information you can gain, and you can actually understand and learn concepts by using statistical techniques.

One example from recent machine learning is this thing called word2vec. It’s a system, and what it does is you give it a sentence, and it replaces the center word of the sentence with a random other word from the dictionary… And it uses that sentence with this random word in the middle as a negative example, and the sentence as-is—without the replacement—as a positive example.

Just using this simple technique, you’ll learn embeddings of words; that is, numbers associated with each word that will try to give some statistical structure to the word. With just a simple model which doesn’t understand anything about what these words mean, or in what context these words are used, you can do simple things like [ask], “Can you tell me what ‘king’, minus ‘man’, plus ‘woman’ is?”

So, when you think of ‘king’, you think, “Okay, it’s a man, a head of state.” And then you say “minus man,” so “king minus man” will try to give you a neutral character of a head of state; and then you add ‘woman’ up, and then you expect ‘queen’… And that’s exactly what the system returns, without actually understanding what each of these words specifically mean, or how they’re spelled, or what context they’re in.
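(Editor’s note: for readers who want to try that “king minus man plus woman” arithmetic themselves, here is a minimal Python sketch, not taken from the interview, using pretrained vectors loaded through the gensim library; the particular pretrained model named below, and the exact nearest neighbor it returns, are illustrative assumptions.)

# Illustrative only: word-vector arithmetic on pretrained embeddings.
# "glove-wiki-gigaword-100" is one of the pretrained models gensim's
# downloader ships; it is a sizeable download on first use.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# king - man + woman -> nearest word in the embedding space
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # typically something like [('queen', 0.78...)]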

So I think there is more to the story than we actually understand. That is, I think there is a certain level of understanding we can get [to] even without the prior context of knowing how things work. In the same way, computers, I think, can learn and associate certain things without knowing about the real world.

One of the common arguments is like, “Well, but computers haven’t been there and seen that, just like humans did, so they can’t actually make full associations.” That’s probably true. They can’t make full associations, but I think with partial information, they can understand certain concepts and infer certain things just with statistical and causal models that they have to learn [from].

Let me try my question a little differently, and we will get back to the here and now… But this, to me, is really germane because it speaks to how far we’re going to be able to go—in terms of using our present techniques and our present architectures, to build things that we deem to be intelligent.  

In your mind, could a computer ever feel pain? Surely, you can put a sensor on a computer that can take the temperature, and then you write a program so that when it hits 500 degrees, it should start playing this mp3 of somebody screaming in agony. But could a computer ever feel pain? Could it ever experience anything?

I don’t think so. Pain is something that’s been baked into humans. If you bake pain into computers, then yeah, maybe, but not without it evolving to learn what pain is, or like baking that in ourselves. I don’t think it will—

—But is knowing what pain is really the same thing as experiencing it? You can know everything about it, but the experience of stubbing your toe is something different than the knowledge of what pain is.

Yeah, it probably doesn’t know exactly what pain is. It just knows how to associate with certain things about pain. But, there are certain aspects of humans that a computer probably can’t exactly relate to… But a computer, at this stage of machines, has a visual sensor, has an audio sensor, has a speaker, and has a touch sensor. Now we’re getting to smell sensors.

Yes, the computer probably can experience every single thing that humans experience, in the same way; but I think that’s largely separate from what we need for intelligence. I think a computer can have its own specific intelligence, but not necessarily have all [other] aspects of humans covered. We’re not trying to replicate a human; we’re trying to replicate intelligence that the human has.

Do you believe that the techniques that we’re using today, the way we look at machine learning, the algorithms we use, basic architectures… How long is that going to fuel the advance of AI? Do you think the techniques we have now—if just given more data, faster computers, tweaked algorithms—we’ll eventually get to something as versatile as a human?  

Or do you think to get to an AGI or something like it, something that really can effortlessly move between domains, is going to require some completely unknown and undiscovered technology?

I think what you’re implying is: Do we need a breakthrough that we don’t know about yet in order to get to AGI?

And my honest answer is we probably do. I just don’t know what that thing looks like, because we just don’t know ahead of time, I guess. I think we are going in certain directions that we think can get us to better intelligence. Right now, where we are is that we collect a very, very large dataset, and then we throw it into a neural network model; and then it will learn something of significance.

But we are trying to reduce the amount of data the neural network needs to learn the same thing. We are trying to increase the number of tasks the same neural network can learn, and we don’t know how to do either of [those] things properly yet. Not as properly as [we do] if we want to train some dog detector by throwing large amounts of dog pictures at it.

I think through scientific process, we will get to a place where we understand better what we need. Over this process, we’ll probably have some unknown models that will come up, or some breakthroughs that will happen. And I think that is largely needed for us to get to a general AI. I definitely don’t know what the timelines are like, or what that looks like.

Talk about adversarial AI for a moment. I watched a talk you gave on the topic. Can you give us a broad overview of what the theory is, and where we are at with it?

Sure. Adversarial networks are these very simple ways of [using] neural networks that we built.

We’ve realized that one of the most common ways we have been training neural networks is: You give a neural network some data, and then you give it an expected output; and if the neural network gives an output that is slightly off from your expected output, you train the neural network to get better at this particular task. Over time, as you give it more data, and you tune it to give the correct output, the neural network gets better.
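(Editor’s note: as a minimal sketch of the supervised loop just described—show the network data, compare its output to the expected output, and adjust the weights when they disagree—here are a few lines of PyTorch; the tiny network, loss, and random data are placeholders, not anything from Facebook’s systems.)

import torch
import torch.nn as nn

model = nn.Linear(10, 1)                                  # a tiny stand-in network
loss_fn = nn.MSELoss()                                    # how far off is the output?
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 10)                                   # a batch of input data
y = torch.randn(32, 1)                                    # the expected outputs

for step in range(100):
    prediction = model(x)                                 # the network's current guess
    loss = loss_fn(prediction, y)                         # slightly off from the expected output?
    optimizer.zero_grad()
    loss.backward()                                       # work out how to adjust the weights
    optimizer.step()                                      # ...and adjust them a little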

But adversarial networks are these slightly different formulations of machines, where you have two neural networks. And one neural network tries to synthesize some data. It takes in no inputs, or it takes some random noise as input, and then it tries to generate some data. And you have another neural network that takes in some data, whether it’s real data or data that is generated by this generator neural network. And this [second] neural network, its job is to discriminate between the real data and the generated data. This is called a discriminator network.

[So] you have two networks: the generator network that tries to synthesize artificial data; and you have a discriminator network that tries to tell apart the real data and the artificially-generated data. And the way these things are trained, is that the generator network gets rewards if it can fool the discriminator—if it can make the discriminator think that the data it synthesized is real. And the discriminator only gets rewards when it can accurately separate out the fake data from the real data.

There’s just a slightly different formulation in how these neural networks learn; and we call this an unsupervised learning algorithm, because they’re not really hooking onto any aspects of what the task at hand is. They just want to play this game between each other, regardless of what data is being synthesized. So that’s adversarial networks in short.
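(Editor’s note: here is a minimal PyTorch sketch of that two-network game, added for illustration rather than taken from the interview; the architectures, the toy “real data” distribution, and the hyperparameters are all made-up assumptions.)

import torch
import torch.nn as nn

# Generator: random noise in, synthetic data out.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
# Discriminator: data in, estimated probability that the data is real out.
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

def sample_real(n):
    # Stand-in for a real dataset: points on a circle.
    angle = torch.rand(n, 1) * 6.2832
    return torch.cat([angle.cos(), angle.sin()], dim=1)

for step in range(1000):
    real = sample_real(64)
    fake = G(torch.randn(64, 16))

    # The discriminator is rewarded for telling real data apart from generated data.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # The generator is rewarded for fooling the discriminator into calling its data real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()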

It sounds like a digital Turing test, where one computer is trying to fool the other one to think that it’s got the real data.

Yeah, you could see it that way.

Where are we at, practically speaking… because it’s kind of the hot thing right now. Has this established itself? And what kinds of problems is it good at solving? Just general unsupervised learning problems?

Adversarial networks have gotten very popular because they seem to be a promising method to do unsupervised learning. And we think unsupervised learning is one of the biggest things we need to crack before we get to more intelligent machines. That’s basically the primary reason. They are a very promising method to do unsupervised learning.

Even without an AGI, there’s a lot of fear wrapped up in people about the effects of artificial intelligence, specifically automation, on the job market.

People fall into one of three groups: There are people who think that we’re going to enter a kind of permanent Great Depression, where there’s a substantial portion of the population that’s not able to add economic value.

And then another group says, “Well, actually that’s going to happen to all of us. Anything a human can do, we’re going to be able to build a machine to do.”

And then there are people who say, “No, we’ve had disruptive technologies come along, like electricity and machines and steam power, and it’s never bumped unemployment. People have just used these new machines to increase productivity and therefore wages.”

Of those three camps, where do you find yourself? Or is there a fourth one? What are your thoughts on that?

I think it’s a very important policy and social question on how to deal with AI. Yes, we have in the past had technology disruptions and adapted to them, but they didn’t happen just by market forces, right? You had certain policy changes and certain incentives and short-term boosts for the Depression. And you had certain parachutes that you had to give to people during these drastically-changing times.

So it’s a very, very important policy question on how to deal with the progress that AI is making, and what that means for the job market. I follow the camp of… I don’t think it will just solve itself, and there’s a big role that government and companies and experts have to play in understanding what kind of changes are coming, and how to deal with them.

Organizations like the UN could probably help with this transition, but also, there are a lot of non-profit companies and organizations coming up that have the mission of doing AI for good, and they also have policy research going on. And I think this will play a bigger and bigger role, and it is very, very important for dealing with our transition into a technology world where AI becomes the norm.

So, to be clear, it sounds like you’re saying you do think that automation or AI will be substantially disruptive to the job market. Am I understanding you correctly? And that we ought to prepare for it?  

That is correct. I think, even if we have no more breakthroughs in AI as of now, like if we have literally no significant progress in AI for the next five years or ten years, we will still—just with the current AI technology that we [already] have—we will still be disrupting large domains and fields and markets—

—What do you mean, specifically? Such as?

One of the most obvious is transportation, right? We largely solved the fundamental challenges in building self-driving vehicles—

—Let me interrupt you real quickly. You just said in the next five years. I mean, clearly, you’re not going to have massive displacement in that industry in five years, because even if we get over the technological hurdle, there’s still the regulatory hurdle, there’s still retrofitting machinery. That’s twenty years of transition, isn’t it?  

Umm, what I—

—In which time, everybody will retire who’s driving a truck now, and few people will enter into the field—

—What I specifically said was that even if we have no AI breakthroughs in the next five or ten years. I’m not saying that the markets themselves will change in five years. What I specifically said and meant is that even if you have no AI research breakthroughs in five years, we will still see large markets be disrupted, regardless. We don’t need another AI breakthrough to disrupt certain markets.

I see, but don’t you take any encouragement from the past? You can say transportation, but when you look at something like the replacement of animal power with mechanical power, and if you just think of all of the technology, all of the people that it displaced… Or you think of the assembly line, which is—if you think about it—a kind of AI, right?

If you’re a craftsperson who makes cars or coaches or whatever one at a time, and this new technology comes along—the assembly line—that can do it for a tenth of the price and ten times the quality. That’s incredibly disrupting. And yet, in those two instances, we didn’t have upticks in unemployment.

Yes,—

—So why would AI be different?

I think it’s just the scale of things, and the fact that we don’t understand fully how things are going to change. Yes, we can try to associate something similar in the past with something similar that’s happening right now, but I think the scale and magnitude of things is very different. You’re talking about in the past over… like over [the course of] thirty years, something has changed.

And now you’re talking about in the next ten years something will change, or something even sooner. So, the scale of things and the number of jobs that are affected, all these things are very different. It’s going to be a hard question that we have to thoroughly investigate and take proper policy change. Because of the scale of things, I don’t know if market forces will just fix things.

So, when you weigh all of this, as you said—with the technology we have now—and you look to the future and you see, in one column, a lot of disruption in the job market; and then you see, in the other, all the things that artificial intelligence can do for us, in all its various fields.

To most people, is AI therefore a good thing? Are you overall optimistic about the future with regard to this technology?

Absolutely. I think AI provides us benefits that we absolutely need as humans. There’s no doubt that the upsides are enormous. You accelerate drug discovery, you accelerate how healthcare works, you accelerate how humans transport from one place to another. The magnitude of benefits is enormous if the promises are kept, or the expectations are kept.

And dealing with the policy changes is essential. But my definite bullish view is that the upsides are so enormous that it’s totally worth it.

What would you think, in an AI world, is a good technology path to go [on], from an employment standpoint? Because I see two things. I’ve seen pretty compelling things that say ‘data scientist’ is a super in-demand job right now, but that’ll be one of the first things we automate, because we can just build tools that do a lot of what that job is.

Right.

And you have people like Mark Cuban, who believes, by the way, [that] the first trillionaires will come from this technology. He said if he had it to do all over again, if he were coming up now, he would study philosophy and liberal arts, because those are the things machines won’t be able to do.

What’s your take on that? If you were getting ready to enter university right now, and you were looking for something to study, that you think would be a field that you can make a career in long-term, what would you pick?

I wouldn’t pick something based on what’s going to be hot. The way I picked my career now, and I think the way people should pick their careers is really what they’re interested in. Now if their only goal is to find a job, then maybe they should pick what Mark Cuban says.

But I also think just being a technologist of some kind, whether they try to become a scientist, or just being an expert in something technology-wise, or being a doctor… I think these things will still be helpful. I don’t know how to associate…

The question is slightly weird to me, because it’s like, “How do I make the most successful career?” And I’ve never thought about it. I’ve just thought about what do I want to do, that’s most interesting. And so I don’t have a good answer, because I’ve never thought about it deeply.

Do you enjoy science fiction? Is there anything in the science fiction world, like movies or books or TV shows, that you think represents how the future is going to turn out? You look at it and think, “Oh, yes, things could happen that way.”

I do enjoy science fiction. I don’t necessarily have specific books or movies that exactly would depict how the future looks. But I think you can take various aspects from various movies and say, “Huh, that does seem like a possibility,” but you don’t necessarily have to buy into the full story.

For example, if you look at the movie Her: You have an OS that talks to you by voice, has a personality, and evolves with its experience and all that. And that seems very reasonable to me. You probably will have voice assistants that will be smarter, and will be programmed to develop a personality and evolve with their experiences.

Now, will they go and make their own OS society? I don’t know, that seems a bit weird. In popular culture, there are various examples like this that seem like they’re definitely plausible.

Do you keep up with the OpenAI initiative, and what are your thoughts on that?

Well, OpenAI seems to be a very good research lab that does fundamental AI research, tries to make progress in the field, just like all of the others are doing. They seem to have a specific mission to be non-profit, and whatever research they do, they want to try to not tie it to a particular company. I think they’re doing good work.

I guess the traditional worry about it is that an AGI, if we built one, is of essentially limitless value, if you can make digital copies of it. If you think about it, all value is created, in essence, by technology—by human thought and human creativity—and if you somehow capture that genie in that bottle, you can use it for great good or great harm.

I think there are people who worry that by kind of giving ninety-nine percent of the formula away to everybody, no matter how bad their intentions are, you increase the likelihood that there’ll be one bad actor who gets that last little bit and has, essentially, control of this incredibly powerful technology.  

It would be akin to the Manhattan Project being open source, except for the very last step of the bomb. I think that’s a worry some people have expressed. What do you think?

I think AI is not going to be able to be developed in isolation. We will have to get to progress in AI collectively. I don’t think it will happen in a way where you just have a bunch of people secretly trying to develop AI, and suddenly they come up with this AGI that’s eternally powerful and something that will take over humanity, or something like that.

I don’t think that fantasy—which is one of the most popular ways you see things in fiction and in movies—will happen. The way I think it will happen is: Researchers will incrementally publish progress, and at some point… It will be gradual. AI will get smarter and smarter and smarter. Not just like some extra magic bit that will make it inhumanly smart. I don’t think that will happen.

Alright. Well, if people want to keep up with you, how do they follow you personally, and that stuff that you’re working on?

I have a Twitter account. That’s how people usually follow what I’ve been up to. It’s twitter.com/soumithchintala.

Alright, I want to thank you so much for taking the time to be on the show.

Thank you, Byron.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here


Voices in AI – Episode 8: A Conversation with Esther Dyson


In this episode, Byron and Esther talk about intelligence, jobs, her experience in being a backup cosmonaut and more.





Byron Reese: Today, our guest is Esther Dyson. Esther Dyson is a living legend. She has been an angel investor, and sits on the boards of a number of companies. She is also a best-selling author, a world citizen, and a backup cosmonaut for the Russian Space Program. Now, she serves as the Executive Founder for a non-profit called Way to Wellville. Welcome to the show, Esther.

Esther Dyson: Delighted to be here.

Let’s start with that; that sounds like an intriguing non-profit. Can you talk about what its mission is, and what your role therein is?

Yeah. My role is, I founded it. The reason I founded it was a question, which was… As I was an angel investor, and doing tech, and getting more and more interested in healthcare, and biotech, and medicine, I also had to ask the basic question, which is: “Why are we spending so much money and countenancing so much tragedy by fixing people when they’re broken, instead of keeping them healthy and resilient, so that they don’t get sick or chronically diseased in the first place?”

The purpose of Way to Wellville is to show what it looks like when you help people stay healthy. I could go on for way too long, but it’s five small communities around the US, so you can get critical mass in a small way, rather than trying to reshape New York City or something.

The basic idea is that this happens in the community. You don’t actually need to experiment and inspect people one-by-one, but change the environment they live in and then look at sort of the overall impact of that. It started a few years ago as a five-year project and a contest. Now, it’s a ten-year project and it’s more like a collaboration among the five communities.

One way AI is really important is that in order to show the impact you’ve had, you need to be able to predict pretty accurately what would’ve happened otherwise. So, in a sense, these are five communities, and the United States is the control group.

But, at the same time, you can look at a class of third graders and do your math, and say that one-third of these are going to be obese by the time they’re sixteen, 30% will have dropped out, 10% will be juvenile delinquents, and that’s simply unacceptable. We need to fix that. So, that’s what we’re doing.

We’ll get to the AI stuff here in a moment but I’m just curious, how do you go about doing that? That seems so monumental, as being one of those problems like, where do you start?

Yeah, and that’s why we’re doing it in small communities. Part of the drill was, ask the communities what they want, but at the same time I went in thinking diabetes and heart disease, and exercise, and nutrition. The more we learned, the more we actually, as you say—you’ve got to start at the beginning, which is prenatal care and childhood. If you come from a broken home or with abusive parents, chances are it’s going to be hard for you to eat properly, it’s going to be hard for you to resist drugs.

There’s a concept called adverse childhood experiences. The mind is a very delicate thing. In some ways, we’re incredibly robust and resilient… But then you look at the fact that a third of the US population is obese—a smaller number diabetic, depending on age. You look at the opioid addiction problem, you look at the number of people who have problems with drinking or other kinds of behavior, and you realize: oh, they’re all self-medicating. Again, let’s catch them when they’re kids and help them be addicted to love and children and exciting work, and feeling productive—rather than substances that cause other problems.

What gives you hope that you’ll be successful? Have you had any promising early findings in the first five-year part?

Not the kind you’d want. The first thing is in each community, part of the premise was there’s a group of local leaders who are trying to help the community be healthy. Mostly, they’re volunteers; they don’t have resources; they’re not accountable; so it’s difficult. We’re trying to help bring in some—but not all—of that Silicon Valley startup culture… It’s okay to fail, as long as you learn.

Plan B is not a disaster. Plan B is the result of learning how to fix Plan A, and so forth. If you look at studies, it’s pretty clear that having caring adults in a child’s life is really important. If you look at studies, it’s pretty clear that there’s no way you can eat healthily if you can’t get healthy food, either because you’re too poor, or it’s inaccessible, or you don’t know what’s healthy.

Some of these things are the result of childhood experiences. Some are the result of poverty, and transportation issues… Yes, you’re right, all these things interact. You can’t go in and fix everything; but if you focus on the kids and their parents, that’s a good place to start.

I learned a lot of concepts. One of them is child storage, as opposed to child enrichment. If your child is going to a preschool that helps them learn how to play, that has caring adults, that can help the kid overcome a horrible home environment… It’s not going to solve all the community’s problems, but it’s definitely going to help some percentage of the children do better. That kind of stuff spreads, just the way the opposite spreads.

In the end, is your hope that you come out of it with, I guess, a set of best practices that you can then disseminate?

People know the best practices. What we really want to do is two things. One, show that it’s possible and inspire people that are [in] regular communities. This is not some multi-million dollar gated community designed for rich people to live healthy and fulfilling lives and go to the spa.

There are five of them, real places in various parts of America: Muskegon, Michigan; Spartanburg, South Carolina; North Hartford, Connecticut; Clatsop County, Oregon; and Lake County, California. The idea is that normal people in these places can fundamentally change the community to make it a place where kids are born lucky, instead of unlucky.

Yes, they can look at what we did and there will be certain things we did. One includes… The community needs to come together in different sectors; like the schools, and business people, and the hospital system need to cooperate. And, most likely, somebody needs to pay.

You need coaches to do everything from nurse visits, pre- and post-birth, early childhood education that’s effectively delivered, caring teachers in the schools, healthy school lunches. Really sad to see the government just backtracked on sodium and other stuff in the school lunches… But in a sense, we’re trying to simulate what it would look like, if we had really wonderful policies around fostering healthy childhoods and show the impact that has.

Let’s zoom the lens way out from there, because that might be an example of the kinds of things you hear a lot about today. It seems like it’s a world full of insurmountable problems, and then it’s also a world full of real, legitimate hope that there’s a way to get through them.

If I were to ask you in a broad way, how do you see the future? [Through] what lens do you look at the future, either of this country, or the world, or anything, in ten years, twenty years, thirty years? What do you think is going to happen, and what will be the big driving forces?

Well, I get my dopamine from doing something, rather than sitting around worrying. Intellectually, I feel these problems; and practically, I’m doing something about them the best way I know that will have leverage, which is doing something small and concentrated, rather than diffuse with no impact.

I want a real impact in a small number of dense places. Then, make that visible to a lot of other people and scale by having them do it, not by trying to do it myself. If you didn’t have hope, you wouldn’t do anything. Nothing happens without people doing something. So, I’m hopeful. Yeah, this is very circular.

So, I was a journalist, and I didn’t persuade people; I told them the truth. Ultimately, I think the truth is extremely powerful. You need to educate people to understand the truth and pay attention to it, but the truth is always much more persuasive than a lot of people just trying to cajole you, or persuade you, or deceive you, or manipulate you.

I want to create a truth that is encouraging and moves people to action, by making them feel that they could do this too; because they can, if they believe they can. This is not believing you will be blessed… It’s more like: Hey, you’ve got to do a lot of the hard work, and you need to change your community, and you need to think about food, and you need to be helping parents become better parents. There are active things you can do.

Is there any precedent for that? That sounds like it calls for changing lots of behaviors.

Well, the precedent is all the lucky people we know whose parents did love them, and who felt secure, and did amazing things. Many of them don’t realize how lucky they are. There’s also, of course, the people who had horrible circumstances and survived somehow anyway.

One of the best examples currently is J.D. Vance in the book Hillbilly Elegy. Many of them were just lucky to have an uncle, or a neighbor lady, or a grandmother, or somebody who gave them that support that they needed to overcome all the obstacles, and then there’s so many others who didn’t [have that].

Yes, certainly, there’s these people who’ve done things like this, but not ones that are visible enough that it really moves people to action. Part of this, we’re hoping to have a documentary that explains what we’re doing. Now, it’s early, because we haven’t done that much.

We’ve done a lot of preparation, and the communities are changing, but believe me: We’re not finished. I will say, when we started we put out a call for applications, and got applications for us to come in and help from forty-two communities.

Then, in the Summer of 2014, Rick Brush, our CEO, and I picked ten of them to go visit. One of them we turned down, because they were too good. That’s the town of Columbus, Indiana, which is, basically, the company town of Cummins Engine, which is just a wonderful place.

They were doing such a good job making their community healthier that we said, “Bless you guys, keep doing it. We don’t want to come in and claim the credit. There’s five other places that need us more.”

There are some pretty wonderful places in America, but there’s also a lot of places that have lost their middle class, people are dispirited, high unemployment. They need employers, they need good parents, they need better schools, they need all this stuff.

It’s not a nice white lady who came from New York to tell you how to live or to give you stuff. It’s this team of five that’s here to help you fix things for yourself, so that when we leave in ten years, you own your community. You will have helped repair it.

That sounds wonderful, in the sense that, if you can ever effect change, it should be kind of a positive reinforcement. Hopefully, it stays and builds on itself.

Yeah. It’s like, if you need us to be there, yes, we believe we’re helping in making a difference. But at some point, it’s their community, they have to own it. Otherwise, it’s not real, because it depends on us and when we leave it, it’s gone.

They’re building it for themselves; we’re just kind of poking them, counseling them, and introducing them to programs. And, “Hey, did you know this is what they’re doing about adverse childhood experiences in this or that study? This is how you can design a program like that for yourselves, or hire the right training company, and build capacity in your own community.”

A lot of this is training people in the community to deliver various kinds of coaching and care, and stuff like that.

Your background is squarely in technology. Let's switch gears and chat about that for a moment. Let's start with the topic of the show, which is artificial intelligence. What are your thoughts about it? Where do you think we're at? Where do you think we're going? What do you think it's all about?

Yeah. Well, so, I first wrote about artificial intelligence in a newsletter back in the days of Marvin Minsky and expert systems. Expert systems were basically logic: if this, and that, and the other thing, then… If someone shows up and their blood pressure's higher than x, and so forth. They didn't sell very well.

Then they started calling them assistants instead of experts. In other words, we’re not going to replace you with an expert, we’re just going to assist you in doing your job. Pretty soon, they didn’t seem to be AI anymore because they really weren’t. They were simply logic.

The definition of artificial intelligence, to me, is somewhat similar to magic. The moment you really, really understand how it works, it no longer seems artificially intelligent. It just seems like a tool that you design and it does stuff. Now, of course, we're moving towards neural nets and the so-called black boxes, things that, in theory, can explain what they do; but now they start to program themselves, based on large datasets.

What exactly they do is beyond the comprehension of a lot of people, and that's part of the social and ethical discussion that is happening. Or, you ask a bot to mimic a human being, and you discover most human beings make pretty poor decisions a lot of the time, or reflect the biases of their culture.

AI was really hard to do at scale, back when we had very underpowered computers, compared with what we have today. Now, it’s both omnipresent and still pretty pathetic, in terms of… AI is generally still pretty brittle.

There’s not even a consensus definition on what intelligence is, let alone, what an AI is, but whatever it means… Would you say we have it, to at least some degree, today?

Oh, yeah. Again, the definition is becoming… Yes, the threshold of what we call AI is rising from what we called AI twenty years ago.

Where do you think it will go? Do you think that we’re building something that as it gradually gets better, in this kind of incrementalism, it’s eventually going to emerge as a general intelligence? Or do you think the quest to build something as smart and versatile as a human will require dramatically different technology than we have now?

Well, there are a couple of different things around that. First of all, if something is not general, is it intelligent or is it simply good at doing its specific task? Like, we can do amazing machine translation now—with large enough corpuses—that is simply a whole lot of pattern recognition and translates from one language into another, but it doesn't really understand anything.

At some point, if something is a super-intelligence, then I think it’s no longer artificial. It may not be wet. It may be totally electronic. If it’s really intelligent, it’s not artificial anymore, it’s intelligent. It may not be human, or conceived, or wet… But that’s my definition, someone else might just simply define it differently.

No, that’s quite legitimate actually. It’s unclear what the word artificial is doing in the phrase. One view is that it’s artificial in the sense that artificial turf is artificial. It may look like turf, but it’s not really turf. That sounds kind of like how you—not to put words in your mouth—but that sounds kind of like how you view it.

It can look like intelligence for a long time to come, but it isn’t really. It isn’t intelligent until it understands something. If that’s the case, we don’t know how to build a machine that understands anything. Would you agree?

Yes. There are all these jokes, like… The moment it becomes truly intelligent, it's going to start asking you for a salary. There are all these different jokes about AI. But yeah, until it 'has a mind of its own', what is intelligence? Is it because of the soul? Is it purpose? Can you be truly intelligent without having a purpose? Because, if you're truly intelligent but you have no purpose, you will do nothing, because you need a purpose to do something.

Right. In the past, we’ve always built our machines with implicit purposes, but they’ve never, kind of, gotten a purpose on their own.

Precisely. It's sort of like dopamine for machines. What is it that makes a machine do something? Then, there are the runaway machines that do something because they want more electricity to grow, but they've been programmed to grow. So that's not their own purpose.

Right. Are you familiar with Searle’s Chinese Room Analogy?

You mean the guy sitting in the back room who does all the work?

Exactly. The point of his illustration is, does this man who’s essentially just looking stuff up in books… He doesn’t speak Chinese, but he does a great job answering Chinese questions, because he can just look stuff up in these special books.

But he has no idea what he’s doing.

Right. He doesn't know if it's about cholera, or coffee beans, or cough drops, or anything. The punchline is, does the man understand Chinese? The interesting thing is, you're one of the few people I've spoken to who unequivocally says, "No, if there's nobody at home, it's not intelligent." Because, obviously, Turing would say, "That thing's thinking; it understands."

Well, no, I don’t think Turing would’ve said that. The Turing Test is a very good test for its time, but, I mean… George [Dyson, the futurist and technology historian who happens to be her brother] would know this much better. But the ability to pass the test… Again, what AI was at that point is very different from what it is now.

Right. Turing asked the question, can a machine think? The real question he was asking, in his own words, was something to the effect of: Could it do something radically different than us, that doesn’t look like thinking… But don’t we kind of have to grant that it is thinking? 

That's when he said… This idea that you could have a conversation with something and therefore, it's doing it completely differently. It's kind of cheating. It's not really, obviously, but it's kind of shortcutting its way to knowing Chinese, but it doesn't really [know Chinese]. By that analogy and by that logic, you probably think it's unlikely we'll develop conscious machines. Is that right?

Well, no. I think we might, but then it’s going to be something quite… I mean, this is the really interesting question. In the end, we evolved from just bits of carbon-based stuff, and maybe there’s another form of intelligence that could evolve from electronic stuff. Yeah, I mean, we’re a miracle and maybe there’s another kind of miracle waiting to happen. But, what we’ve got in our machines now is definitely not that.

It is fascinating. Matt Ridley, who wrote The Rational Optimist, said in his book that the most important thing to know about life is [that] all life is one, that life happened on this planet and survived one time… And every living thing shares a huge amount of the same DNA.

Yeah. I think it might’ve evolved multiple times, or little bits went through the same process, but I don’t think we all came from the same cell. I think it’s much more likely there was a lot of soup and there were a whole bunch of random bits that kind of coalesced. There might’ve been bunches of them that coalesced separately, but similarly.

I see. Back in their own day, merged into something that we are all related to?

Yeah. Again, all carbon-based. There are some interesting things at the bottom of the ocean that are quite different.

Right. In fact, that suggests you’re more likely to find life in the clouds on Venus—as inhospitable as it is, at least stuff’s happening there—than you might find on a barren, more hospitable planet.

Yeah.

When you talk to people who believe in an AGI, who believe we’re going to develop an AGI, and then you ask them, “When?” you get this interesting range between five and five hundred years, depending on who you ask. And these are all people who have some amount of training and familiarity with the issues. What does that suggest to you, that you get that kind of a disparity from people? What would you glean from that?

That we really don’t know.

I think that's really interesting, because so many people are on that spectrum. Nobody says, "Oh, somewhere between five and five hundred years." No person says that. The five-year people—

—They’re all so different. Yeah.

But all very confident, all very confident. You know, “We’ll have something by 2050.” A lot of it I think boils down to whether you think we’re a couple of hops, skips, and a jump away from something that can take off on its own… Or, it’s going to be a long, long, long time.

Yeah. It’s also, how you define it. Again, to me, in a sense, I’ve been thinking about this and reading Yuval Noah Harari’s Homo Deus and various other people… But to me, in the end, there’s something about purpose, which means, again, it really is… It’s the anti-entropy thing.

What is it that makes you grow, makes you reproduce? We know how that works physically, but, then when you talk about a soul or a consciousness, there’s some animating thing or some animating force, and it’s this purpose in life. It’s reproduction to create more life. That’s sort of an accident, of something that had to have purpose to reproduce, and the other stuff didn’t.

Again, there are more biological descriptions of that. Where that fits in something that's not wet, how purpose gets implemented, we haven't yet found. It's like, we've found substances that correlate with purpose, but there's some anti-entropy that moves us, without which we wouldn't do anything.

If you're right that without purpose, without understanding, it isn't really intelligent—as fantastic as the very stone-knives-and-bearskins kind of AI we have today is—I would guess… And not to put words in your mouth, but I would guess you are less worried about the AIs taking all the jobs than somebody else might be. What is your view on that?

Yeah. Well, in [terms of] the AIs taking all the jobs… That is something that we can control, not easily. It’s just like saying we can control the government or we can control health. Human beings collectively can—and I believe should—start making decisions about what we do about people and jobs.

I don't think we want a universal basic income so much as we want almost-universal basic vouchers to… Again, I think people need purpose in their lives. They need to feel useful. Some people can create art and feel useful, and sell it, or just feel good when other people look at their art. But I think a simpler, more practical way to do this is that we need to raise the salaries of people who do childcare, coaching, you know.

We need to give people jobs, for which they are paid, that are useful jobs. Some of the most useful things people can do, generally… Some people can become coders and design things and program artificial intelligence tools, and so forth, and build things. But a lot of people, I think, can be very effectively employed (this goes back to the Way to Wellville) in caring for children, in coaching mothers through pregnancy, in running baseball teams in high schools.

We can sit here and talk about artificial intelligence, but this is a world in which people are afraid to let their kids out to play and everywhere you go, bridges are falling down. I live in New York City, and we’re going to have to close some of our train tunnels, because we haven’t done enough repair work. There actually is an awful lot of work out there.

We need to design our society more rationally. Not by giving everybody a basic income, but by figuring out how to construct a world in which almost everybody is employed doing something useful, and they’re being paid to do that, and it’s not like a giant relief act.

This is a society with a lot of surplus. We can somehow construct it so that people get paid enough that they can live comfortable lives. Not easy lives, but comfortable lives, where you do some amount of work and you get paid.

At the margins, yes, take care of people who’ve fallen off; but let’s do a better job raising our children and creating more people who do, in fact… You know, their childhoods don’t destroy their sense of worth and dignity, and they want to do something useful. And feel that they matter, and they get paid to do that useful thing.

Then, we can use all the AI that makes society, as a whole, very rich. Consumption doesn’t give people purpose. Production does, whether it’s production of services or production of things.

I think you’re entirely right, you could just… on the back of an envelope say, “Yeah, we could use another half-million kindergarten teachers and another quarter-million…”—you can come up with a list of things, from a societal standpoint, [that] would be good and that maybe market forces aren’t creating. It isn’t just make-work, it’s all actually really important stuff. Do you have any thoughts on how that would work practically?

Yeah.

You implied it’s not the WPA again, or is it…?

No. Go to the people who talk about the universal basic income and say, "Look, why don't you make this slightly different?" Let's talk about: you get double dollars for buying vegetables with your food stamps. How do we do something that gives everybody an account that they can apply to pay for service work?

So, every time I use the services of a hairdresser, or a babysitter, or a basketball coach, or a gym teacher, there’s this category of services. This is not simple, there’s a certain amount of complexity here, because you don’t want to be able to—to be gross, you know—hire the teenage girl next door to provide sexual services. I think it needs to be companies, rather than government.

Whether it’s Uber vetting drivers—and that’s a whole other story—but you want an intermediary that does quality control. Both in terms of how the customers behave, and how the providers behave, and manage the training of the providers, and so forth.

Then, there’s a collective subsidy to the wages that are paid to the people who provide the services that foster… Long ago, women didn’t have many occupations open to them, so second-grade teachers tended to be a lot of very smart women, who were dedicated, and didn’t get paid much.

But that was okay, and now that’s changing. Now, we need to pay them more, which is great. There’s a collective benefit to having people teaching second grade that benefits society and should be paid for collectively.

In a way, you could throw away the entire tax code we have and say for every item, whether it’s a wage or buying something, we’re going to either calculate the cost to society or the benefit to society. Those will either be subsidies or taxes on top of that, so that the bag of potato chips—

—The economic term is—

—Internalizing the externalities?

Yes, exactly.

Yeah, exactly. It's actually the only thing I can think of that doesn't cause perverse incentives, because in theory, all the externalities have been internalized and reflected in the price.

Yes. So, you’re not interfering with the market, you’re just letting the market reflect both the individual and collective costs and stuff like that. It doesn’t need to be perfect. We’re imperfect, life is imperfect, we all die, but let’s sort of improve things in the brief period that we’re alive.

I can’t quite gauge whether you’re ‘in theory’ optimistic, or practically optimistic. Like, do you think we’re going to accomplish these things? Do you think we’re going to do some flavor of them? Or, do you just realize they’re possibilities and we may or may not?

I’m trying to make this happen. The way I would do that is not, “Gee, I’m going to do this myself.” But I’m going to contribute to a bunch of people, both doing it and feeling… A lot more people would be doing this, if they thought it was possible, so let’s get together and become visible to one another.

Just as in what I saw happen in Eastern Europe, where individually people felt powerless, but then (and this really was where the Internet did help) people began to say, "Oh, you know, I'm not the only one who is beginning to question our abusive government." People got together, and felt empowered, and started to change the story, both by telling their own stories and by creating alternative narratives to the one that the government fed them.

In our case, we’re being fed, I don’t know, we’re being fed short-term. Everything in our society is short-term. I’m on the board of The Long Now, just for what it’s worth. Wall Street is short-term. Government politicians are mostly concerned with being reelected. People are consuming information in little chunks and not understanding the long-term narratives or the structure of how things work.

It’s great if you hear someone talk about externalities. If you walk down the street and ask people what an externality is, they’ll say, “Is that, like, a science fiction thing or what?” No, it’s a real concept and one that should be paid attention to. There are people who know this, and they need to bring it together, and change how people think about themselves.

The very question you asked: "Do you think you can do this practically?" No, I can't alone, but together, yeah, we can change how people think about things, and get them to think more about long-term investments. Not this day-by-day, what's my ROI tomorrow, or what's next quarter's? But if we do this now, what will be different twenty years from now?

It's never been easier, so I hear, to make a billion dollars. Google and Facebook each minted something like six billionaires apiece. The number of billionaires continues to grow, and the percentage who made their own money, as opposed to inheriting it, continues to grow.

Right.

But, am I right that all of that money that’s being created at the top, that isn’t… I mean, mathematically, it contributes to income inequality because it’s moving some to the end… But do you think that that’s part of the problem? Do all of those billions get made at the expense of someone else, or do those billions get made just independent of their effect on other people?

There’s no simple answer to that one. It varies. I was very pleased to see the Chan Zuckerberg Foundation. And the people that bother me more, honestly, are… There’s a point at which you stop adding value, and I would say a lot of Wall Street is no longer adding value. Google, it depends what they do with their billions.

I'm less concerned about the money Google makes. It depends what the people who own the shares in Google do with the money they've made. Part of the problem is more that the trolls on the Internet are encouraging some of this short-sighted thinking, instant gratification: I'd rather look at cat photos than talk to my two-year-old, or what have you.

For me, the issue’s not to demonize people but to encourage the ones who have assets and capacity to use them more wisely. Sometimes, they’ll do that when they’re young. Sometimes, they will earn all the money and then start to change later, and so forth.

The problem isn’t that Google has a lot of money and the people in Muskegon don’t. The problem is that the people in Muskegon, or so many other places… They have crappy jobs, the people who are parents now might have had parents who weren’t very good. Things are going downhill rather than uphill. Their kids are no longer more educated than they are. They no longer have better jobs. The food is getting worse, etc.

It’s not simply an issue of more money. It’s how the money is spent, and what the money is spent on. Is it spent accountably for the right things? It’s not just giving people money. It’s having an education system that educates people. It’s having a food system that nourishes them. It’s stuff like that.

We now know how to do those things. We also are much better, because of AI, at predicting what will happen if we don’t. I think the market, and incentives, and individual action are tremendously important; but you can influence them. Which is what I’m trying to do, by showing how much better things could work.

Well, no matter what, the world that you would envision as being a better world certainly requires lots and lots and lots of people power, right? Like, you need more teachers, you need more nutritionists, you need all of these other things. It sounds like you don't—

Right. And you need people voting to fix the bridges instead of just voting for whichever politician makes promises that are unbelievable, or whatever. In a sense, we need to be much more thoughtful about what it is we're doing and to think more about the long-term consequences.

Do you think there ever was a time, or is there any society you look at, even any time in any society, when you say, "Well, they weren't perfect, but here was a society that thought ahead, and planned ahead, and organized things in a pretty smart way"? Do you have any examples?

Yes and no. There was never a perfect place. A lot of things were worse a hundred years ago, including how women were treated, how minorities were treated, and how many people were poor. But there was a lot less entitlement, there was a lot less consumption around instant gratification. People invested.

In many ways, things were much worse, but people took it for granted that they needed to work hard and save. Again, many of them had a sense of purpose. You go back to the 1840s, and the amount of liquor consumed was crazy. There’s no perfect society. The norms were better.

Perhaps there was more hypocrisy. Hey, there was a lot of crime a hundred years ago and, sort of, the notion of polite society was perhaps not all of society. People didn’t aspire to be celebrities. They aspired to be respected, and loved, and productive, and so forth. It just goes back to that word: purpose.

Being a celebrity does not mean having an impact. It means being well-known. There’s something lacking in being a celebrity, versus being of value to society. I think there’s less aspiration towards value and more towards something flashier and emptier. That’s what I’d love to change, without being puritan and boring about it.

Right. It seems you keep coming back to the purpose idea, even when you're not using that word. You talked about [how] Wall Street used to add value, and [now] they don't. That's another way of saying they've lost their purpose. We talked about the billionaires… It sounds like you're fine with that; it depends on what their purpose with it all is. How do you think people find their purpose?

It goes back to their parents. There’s this satisfaction that really can’t be beaten. When I spent time in Russia, the women were much better off than the men, because the men felt—many of them—purposeless. They did useless jobs and got paid money that was not worth much, and then their wives took the rubles and stood in line to get food and raise the children.

Having children gives you purpose, ideally. Then, you get to the point where your children become just one more trophy, and that's unutterably sad. There are people who love their children and also focus too much on, "Is this child popular?" or "Will he get into the right college and reflect well on me?" But, in the end, children are what give purpose to most people.

Let's talk about space for a minute. It seems that a lot of Silicon Valley folks, noteworthy ones, have a complete fascination with it. You've got Jeff Bezos hauling Apollo 11 boosters out of the ocean. Elon is planning to, according to him, "die on Mars, just not on impact." You, obviously, have a—

—I want to retire on Mars. That’s my line. And, not too soon.

There's a large part of this country, for instance, that doesn't really care about space at all. It seems like a whole lot of wasted money, and emptiness, and all of that. Why do you think it's so intriguing? What about it is interesting for you? For goodness' sake, I can't put "trained to be a backup cosmonaut" in your introduction and then never mention it again; that's the worst thing a host can do. So please talk about that, if you don't mind.

It’s our destiny, we should spread. It’s our backup plan if we really screw up the earth and obliterate ourselves, whether it’s with a polluted atmosphere, or an explosion, or some kind of biological disaster. We need another place to go.

Mars… Number one, it's a good backup. Number two, maybe we can learn something. There's this wonderful new thing called the circular economy. The reality is, yes, we're in a circular economy, but it's so large we don't recognize it. On Mars, because you start out so small, it's much clearer that there's a circular economy.

I’m hoping that the National Geographic series is actually going to change some people’s opinions. Yeah, in some sense, our purpose is to explore, to learn, to discover what else might lie beyond our own little planet. Again, it’s always good to have Option B.

Final question: We already talked about what you're working on, but… Because our chat had lots of ups and downs, possibilities, and then worries: What, if anything, gives you hope? What gives you hope that there's a good chance we'll muddle through this?

I’m an optimist. I have hope, because I’m a human being and it’s been bred into me over all those generations. The ones who weren’t hopeful didn’t bother to try, and they mostly disappeared. But now you can survive, even if you’re not hopeful; so maybe that’s why all this pessimism, and lassitude and stuff is spreading. Maybe, we should all go to Mars, where it’s much tougher, and you do need to be hopeful to survive.

Yeah, and have purpose. In closing, anybody who wants to keep up with what you’re doing with your non-profit…

WaytoWellville.net.

And if people want to keep up with you, personally, how do they do that?

Probably on Twitter, @edyson.

Excellent. Well, I want to thank you so much for finding the time.

Thank you. It was really fun.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here


Millennial Optimism About Workplace Technology Ignores a Key Problem—Ourselves

The bright, shiny future of meetings in augmented reality, AI assistants, smart workspaces built on the internet of things, and other Jetsonian office technologies fast approaches—and American workers can’t wait for them to improve productivity. A year ago, Stowe Boyd presented research here on Gigaom that found significant optimism about the potential for technology to make work easier and more collaborative.(1) Unsurprisingly, the research found this positivity strongest among Millennials.(2)

However, that same research found that nearly half of Millennials believe the biggest time waster at work is glitchy or broken technology. Millennial frustration with current technology might explain their simultaneous wide-eyed excitement about cool, acronymed stuff like VR, AI, and IoT. This is at odds with the overall population, which perceives wasteful meetings and excessive email as the biggest enemy of efficiency.(3)

The problem is, both diagnoses are wrong. Research shows that the most significant barrier to productivity, by far, is the good, old-fashioned problem of getting distracted. It’s not that distractions exist—it’s that we succumb to them.

Put another way: poor tech and erupting inboxes don’t waste our time—we do. We have lost our ability to choose where we spend our attention.

In one survey, 87% of employees admitted to reading political social media posts at work.(4) Other research shows that 60% of online purchases occur between 9am and 5pm and that 70% of U.S. porn viewing also happens during working hours (“working” from home?).(5) And if none of that convinces you, perhaps this will: Facebook’s busiest hours are 1-3pm—right in the middle of the workday.

To be clear, this isn't just a Millennial problem. The 2016 Nielsen Social Media Report reveals that Gen Xers use social media 6 hours, 58 minutes per week—10% more than Millennials.(6) Overall media consumption tells the same story: Gen Xers clock in at 31 hours and 40 minutes per week, nearly 20% more than Millennials.

And if that weren't enough, each instance of distraction comes at a significant cost. An experiment in Great Britain showed that people who tried to juggle work with emails and texts lost an average of 10 IQ points, the same loss as working after a sleepless night.(7) And this affects essentially every office worker, every day.

What's to be done, then? Fortunately, if you've read this far, you've already done the most important thing: recognized that the true problem lies nowhere but in our own lack of focus.

Regaining focus—becoming focus-wise, as I like to call it—doesn't require a rejection of technology, however. It requires only that we reconfigure our tech-usage habits.

For instance, instead of expecting ourselves (and our employees) to be 100% available to emails, chats, and walk-bys throughout the day, set aside time in "focus vaults" where you are completely unreachable to the outside world for a set period. When you emerge, you have complete freedom to check email and Facebook, batching those communications so you don't lose IQ points switching to and from them during the actual work.

Another example is how we use the tech itself. If you know you can't resist checking the screen when your phone dings, turn off the sound. Or disable your computer's internet connection for a period of time. Even something as simple as making your application window full-screen encourages your brain to focus on a single task.

Normalizing simple, focus-wise habits like these throughout your enterprise can reap huge rewards in workplace productivity. As technology starts to fill our offices with artificially intelligent robots, virtual work spaces, and self-configuring environments, you can be confident that you will use the technology to accomplish your goals—rather than letting the technology use you.

About the Author

Curt Steinhorst is on a mission to rescue us from our distracted selves. Having spent years studying the impact of tech on human behavior, Curt founded Focuswise, a consultancy that equips organizations to overcome the distinct challenges of the constantly-connected workplace. He is a leading voice on strategic communication, speaking more than 75 times a year to everyone from global leadership associations and nonprofits to Fortune 100 companies.

Curt is the author of the book Can I Have Your Attention? Inspiring Better Work Habits, Focusing Your Team, and Getting Stuff Done in the Constantly Connected Workplace (John Wiley & Sons, October 2017).

References

1. Stowe Boyd, "Millennials and the Workplace," Gigaom, October 26, 2016, https://gigaom.com/2016/10/26/millennials-and-the-workplace-2/.
2. Dell & Intel, "Future-Ready Workforce Study: U.S. Report," July 15, 2016, http://www.workforcetransformation.com/workforcestudy/us/.
3. Workfront, "2016-2017 U.S. State of Enterprise Work Report," September 9, 2016, https://resources.workfront.com/workfront-awareness/2016-state-of-enterprise-work-report-u-s-edition.
4. Kris Duggan, "Feeling Distracted by Politics? 29% of Employees Are Less Productive after U.S. Election," BetterWorks, February 7, 2017, https://blog.betterworks.com/feeling-distracted-politics-29-employees-less-productive-u-s-election.
5. Juline E. Mills, Bo Hu, Srikanth Beldona, and Joan Clay, "Cyberslacking! A Wired-Workplace Liability Issue," The Cornell Hotel and Restaurant Administration Quarterly 42, no. 5 (2001): 34-47, http://www.sciencedirect.com/science/article/pii/S0010880401800562.
6. Sean Casey, "2016 Nielsen Social Media Report," Nielsen, January 17, 2017, 6, http://www.nielsen.com/content/dam/corporate/us/en/reports-downloads/2017-reports/2016-nielsen-social-media-report.pdf.
7. "Emails 'Hurt More than Pot,'" CNN.com, April 22, 2005, http://www.cnn.com/2005/WORLD/europe/04/22/text.iq.