Geostorm – I Await the Sweet Release of Death

Geostorm is trash. Its only positives come from the unintentional laughter you’ll have in between depressing “this is what cinema has become” thoughts.


The difference between Hybrid and Multi-Cloud for the Enterprise

Cloud computing still presents the single biggest opportunity for enterprise companies today. Even though cloud-based solutions have been around for more than 10 years now, the concepts related to cloud continue to confuse many.

Of late, it seems that Hybrid Cloud and Multi-Cloud are the latest concepts creating confusion. To make matters worse, a number of folks (inappropriately) use these terms interchangeably. The reality is that they are very different.

The best way to think about the differences between Hybrid Cloud and Multi-Cloud is in terms of orientation. One addresses a continuum of different services vertically while the other looks at the horizontal aspect of cloud. There are pros and cons to each and they are not interchangeable.

 

Multi-Cloud: The horizontal aspect of cloud

Multi-Cloud is essentially the use of multiple cloud services within a single delivery tier. A common example is the use of multiple Public Cloud providers. Enterprises typically use a multi-cloud approach for one of three reasons:

  • Leverage: Enterprise IT organizations are generally risk-averse. There are many reasons for this, which I will discuss in a later post. Fear of taking risks tends to inform a number of decisions, including the choice of cloud provider. One aspect is the fear of lock-in to a single provider. I addressed my perspective on lock-in here. By using a multi-cloud approach, an enterprise can hedge its risk across multiple providers. The downside is that this approach creates complexities with integration, organizational skills and data transit.
  • Best of Breed: The second reason enterprises typically use a multi-cloud strategy is due to best of breed solutions. Not all solutions in a single delivery tier offer the same services. An enterprise may choose to use one provider’s solution for a specific function and a second provider’s solution for a different function. This approach, while advantageous in some respects, does create complexity in a number of ways including integration, data transit, organizational skills and sprawl.
  • Evaluation: The third reason enterprises leverage a multi-cloud strategy is temporary: evaluation. This is actually a very common approach among enterprises today. It provides a means to evaluate different cloud providers within a single delivery tier when an enterprise is first starting out. Eventually, most focus on a single provider and build expertise around that provider’s solution.

In the end, I find that the reason an enterprise chooses one of the three approaches above is often informed by its maturity and thinking around cloud in general. The question many ask is: Do the upsides of leverage or best of breed outweigh the downsides of complexity?

Hybrid Cloud: The vertical approach to cloud

Most, if not all, enterprises are using a form of hybrid cloud today. Hybrid cloud refers to the vertical use of cloud in multiple different delivery tiers. Most typically, enterprises are using a SaaS-based solution and Public Cloud today. Some may also use Private Cloud. Hybrid cloud does not require that a single application spans the different delivery tiers.
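To make the orientation concrete, here is a minimal, hypothetical Python sketch; the provider clients, bucket names, and SaaS API below are invented purely for illustration. Multi-cloud spreads the same workload across two providers within one delivery tier, while hybrid cloud combines different delivery tiers side by side.

# Hypothetical illustration only: the classes and names below are made up.
class PublicCloudStorage:
    """Stand-in for any public cloud object-storage SDK (IaaS delivery tier)."""
    def __init__(self, provider):
        self.provider = provider
    def upload(self, bucket, key, data):
        print(f"[{self.provider}] PUT {bucket}/{key} ({len(data)} bytes)")

class SaaSCRM:
    """Stand-in for a SaaS application API (a different delivery tier)."""
    def create_contact(self, name):
        print(f"[SaaS CRM] created contact: {name}")

# Multi-cloud: the same workload hedged across two public cloud providers
# in the SAME delivery tier.
for storage in (PublicCloudStorage("provider-a"), PublicCloudStorage("provider-b")):
    storage.upload("backups", "2017-10-report.csv", b"...")

# Hybrid cloud: a SaaS tier used alongside a public cloud tier, with no single
# application spanning both.
SaaSCRM().create_contact("Acme Corp")
PublicCloudStorage("provider-a").upload("analytics", "events.json", b"...")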

The CIO Perspective

The important takeaway here is less about defining the terms and more about understanding how you leverage Multi-Cloud and/or Hybrid Cloud. Too often, we get hung up on defining terms rather than understanding the benefits of leveraging the solution…or methodology. Even when discussing outcomes, we often still focus on technology.

These two approaches are not the same and come with their own set of pros and cons. The value from Multi-Cloud and Hybrid Cloud is that they both provide leverage for business transformation. The question is: How will you leverage them for business advantage?

Leveraging Artificial Intelligence & GPUs for Cybersecurity

This post is sponsored by NVIDIA. All thoughts and opinions are my own. 

Artificial Intelligence (AI) presents a significant opportunity to solve problems that were previously difficult, or even impossible, to solve. The combination of AI and today’s Graphics Processing Unit (GPU) technology provides an added boost to those leveraging sophisticated algorithms in their deep learning solutions. These systems are able to train deep learning models that ultimately lead to predictive insights. The objective is to move from reactive to proactive and, finally, to predictive insights.

The breadth of opportunities that AI presents is wide; one significant opportunity is in the cybersecurity space. One company leveraging the power of AI and GPUs is recent NVIDIA Inception Program award winner Deep Instinct. Deep Instinct leverages deep learning to provide insights for zero-day malware detection on both endpoint and mobile devices, two of the key areas of concern in cybersecurity.

The range of today’s cybersecurity threats presents a bevy of challenges. The core challenge is one of complexity and speed, two characteristics that often work against each other. Yet the combination of AI and deep learning provides the foundation to bring speed to solving complex problems like those in the cybersecurity space.

This powerful combination moves the paradigm from reactive to predictive solutions and can mean the difference between breach and prevention. As zero-day threats become more frequent and damaging, sophisticated AI models offer the best potential to mitigate these risks.

Fraud detection is yet another area in the cybersecurity space that is gaining attention. Detecting fraud in real time has typically been done based on policies set by humans. Yet both of these risks (zero-day attacks and fraud) are evolving faster than humans can keep up with. We need a better approach.
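As a hedged illustration of the general idea (this is not Deep Instinct’s or NVIDIA’s actual technology), the following minimal PyTorch sketch trains a tiny classifier on synthetic transaction features, using a GPU when one is available, and then produces a predictive fraud score for a new transaction.

import torch
import torch.nn as nn

# Illustrative sketch only: a tiny feed-forward network that scores
# transactions as benign or fraudulent, trained on whatever GPU is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 2),            # two classes: benign, fraud
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in data: 1,024 transactions, 32 engineered features each.
features = torch.randn(1024, 32, device=device)
labels = torch.randint(0, 2, (1024,), device=device)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# Predictive scoring: probability of fraud for a new, unseen transaction.
with torch.no_grad():
    new_tx = torch.randn(1, 32, device=device)
    fraud_probability = torch.softmax(model(new_tx), dim=1)[0, 1].item()
    print(f"fraud probability: {fraud_probability:.2f}")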

Solving complicated problems is not easy. One must always be one step ahead of the bad actors. As we move to a paradigm where security is part of our DNA, our approach must evolve with it. We can no longer afford to be reactive. Only the proactive…and predictive will remain relevant.

Tim Crawford is Gigaom Head of Research, and a strategic CIO and consultant.

Voices in AI – Episode 12: A Conversation with Scott Clark


In this episode, Byron and Scott talk about algorithms, transfer learning, human intelligence, and pain and suffering.





Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Scott Clark. He is the CEO and co-founder of SigOpt. They’re a SaaS startup for tuning complex systems and machine learning models. Before that, Scott worked on the ad targeting team at Yelp, leading the charge on academic research and outreach. He holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell, and a BS in Mathematics, Physics, and Computational Physics from Oregon State University. He was chosen as one of Forbes 30 under 30 in 2016. Welcome to the show, Scott.

Scott Clark: Thanks for having me.

I’d like to start with the question, because I know two people never answer it the same: What is artificial intelligence?

I like to go back to an old quote… I don’t remember the attribution for it, but I think it actually fits the definition pretty well. Artificial intelligence is what machines can’t currently do. It’s the idea that there’s this moving goalpost for what artificial intelligence actually means. Ten years ago, artificial intelligence meant being able to classify images; like, can a machine look at a picture and tell you what’s in the picture?

Now we can do that pretty well. Maybe twenty, thirty years ago, if you told somebody that there would be a browser where you can type in words, and it would automatically correct your spelling and grammar and understand language, he would think that’s artificial intelligence. And I think there’s been a slight shift, somewhat recently, where people are calling deep learning artificial intelligence and things like that.

It’s got a little bit conflated with specific tools. So now people talk about artificial general intelligence as this impossible next thing. But I think a lot of people, in their minds, think of artificial intelligence as whatever it is that’s next that computers haven’t figured out how to do yet, that humans can do. But, as computers continually make progress on those fronts, the goalposts continually change.

I’d say today, people think of it as conversational systems, basic tasks that humans can do in five seconds or less, and then artificial general intelligence is everything after that. And things like spell check, or being able to do anomaly detection, are just taken for granted and that’s just machine learning now.

I’ll accept all of that, but that’s more of a sociological observation about how we think of it, and then actually… I’ll change the question. What is intelligence?

That’s a much more difficult question. Maybe the ability to reason about your environment and draw conclusions from it.

Do you think that what we’re building, our systems, are they artificial in the sense that we just built them, but they can do that? Or are they artificial in the sense that they can’t really do that, but they sure can think it well?

I think they’re artificial in the sense that they’re not biological systems. They seem to be able to perceive input in the same way that a human can perceive input, and draw conclusions based off of that input. Usually, the reward system in place in an artificial intelligence framework is designed to do a very specific thing, very well.

So is there a cat in this picture or not? As opposed to a human: It’s, “Try to live a fulfilling life.” The objective functions are slightly different, but they are interpreting outside stimuli via some input mechanism, and then trying to apply that towards a specific goal. The goals for artificial intelligence today are extremely short-term, but I think that they are performing them on the same level—or better sometimes—than a human presented with the exact same short-term goal.

The artificial component comes into the fact that they were constructed, non-biologically. But other than that, I think they meet the definition of observing stimuli, reasoning about an environment, and achieving some outcome.

You used the phrase ‘they draw conclusions’. Are you using that colloquially, or does the machine actually conclude? Or does it merely calculate?

It calculates, but then it comes to, I guess, a decision at the end of the day. If it’s a classification system, for example… going back to “Is there a cat in this picture?” It draws the conclusion that “Yes, there was a cat. No, that wasn’t a cat.” It can do that with various levels of certainty in the same way that, potentially, a human would solve the exact same problem. If I showed you a blurry Polaroid picture you might be able to say, “I’m pretty sure there’s a cat in there, but I’m not 100 percent certain.”

And if I show you a very crisp picture of a kitten, you could be like, “Yes, there’s a cat there.” And I think convolutional neural network is doing the exact same thing: taking in that outside stimuli. Not through an optical nerve, but through the raw encoding of pixels, and then coming to the exact same conclusion.
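For readers who want to see that idea in code, here is a minimal sketch of a convolutional network taking in raw pixels and producing a classification along with a confidence; the pretrained ImageNet model and the random input tensor are stand-ins for a real, preprocessed photo.

import torch
import torchvision.models as models

# A convolutional network maps raw pixels to a probability for each class,
# mirroring the "pretty sure there's a cat in there" idea above.
model = models.resnet18(pretrained=True)
model.eval()

image = torch.randn(1, 3, 224, 224)   # placeholder for a preprocessed photo

with torch.no_grad():
    logits = model(image)
    probabilities = torch.softmax(logits, dim=1)
    confidence, class_index = probabilities.max(dim=1)

# A blurry Polaroid yields low confidence; a crisp kitten photo yields high.
print(f"predicted class {class_index.item()} with confidence {confidence.item():.2f}")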

You make the really useful distinction between an AGI, which is a general intelligence—something as versatile as a human—and then the kinds of stuff we’re building now, which we call AI—which is doing this reasoning or drawing conclusions.

Is an AGI a linear development from what we have now? In other words, do we have all the pieces, and we just need faster computers, better algorithms, more data, a few nips and tucks, and we’re eventually going to get an AGI? Or is an AGI something very different, that is a whole different ball of wax?

I’m not convinced that, with the current tooling we have today, it’s just like… if we add one more hidden layer to a neural network, all of a sudden it’ll be AGI. That being said, I think this is how science and computer science and progress in general work: techniques are built upon each other, and we make advancements.

It might be a completely new type of algorithm. It might not be a neural network. It might be reinforcement learning. It might not be reinforcement learning. It might be the next thing. It might not be on a CPU or a GPU. Maybe it’s on a quantum computer. If you think of scientific and technological process as this linear evolution of different techniques and ideas, then I definitely think we are marching towards that as an eventual outcome.

That being said, I don’t think that there’s some magic combinatorial setting of what we have today that will turn into this. I don’t think it’s one more hidden layer. I don’t think it’s a GPU that can do one more teraflop—or something like that—that’s going to push us over the edge. I think it’s going to be things built from the foundation that we have today, but it will continue to be new and novel techniques.

There was an interesting talk at the International Conference on Machine Learning in Sydney last week about AlphaGo, and how they got this massive speed-up when they put in deep learning. They were able to break through this plateau that they had found in terms of playing ability, where they could play at the amateur level.

And then once they started applying deep learning networks, that got them to the professional, and now best-in-the-world level. I think we’re going to continue to see plateaus for some of these current techniques, but then we’ll come up with some new strategy that will blast us through and get to the next plateau. But I think that’s an ever-stratifying process.

To continue on that vein… When in 1955, they convened in Dartmouth and said, “We can solve a big part of AI in the summer, with five people,” the assumption was that general intelligence, like all the other sciences, had a few simple laws.

You had Newton, Maxwell; you had electricity and magnetism, and all these things, and they were just a few simple laws. The idea was that all we need to do is figure out those for intelligence. And Pedro Domingos argues in The Master Algorithm, from a biological perspective, that in a sense that may be true.

That if you look at the DNA difference between us and an animal that isn’t generally intelligent… the amount of code is just a few megabytes that’s different, which teaches how to make my brain and your brain. It sounded like you were saying, “No, there’s not going to be some silver bullet, it’s going to be a bunch of silver buckshot and we’ll eventually get there.”

But do you hold any hope that maybe it is a simple and elegant thing?

Going back to my original statement about what is AI, I think when Marvin Minsky and everybody sat down in Dartmouth, the goalposts for AI were somewhat different. Because they were attacking it for the first time, some of the things were definitely overambitious. But certain things that they set out to do that summer, they actually accomplished reasonably well.

Things like the Lisp programming language, and things like that, came out of that and were extremely successful. But then, once these goals are accomplished, the next thing comes up. Obviously, in hindsight, it was overambitious to think that they could maybe match a human, but I think if you were to go back to Dartmouth and show them what we have today, and say: “Look, this computer can describe the scene in this picture completely accurately.”

I think that could be indistinguishable from the artificial intelligence that they were seeking, even if today what we want is someone we can have a conversation with. And then once we can have a conversation, the next thing is we want them to be able to plan our lives for us, or whatever it may be, solve world peace.

While I think there are some of the fundamental building blocks that will continue to be used—like, linear algebra and calculus, and things like that, will definitely be a core component of the algorithms that make up whatever does become AGI—I think there is a pretty big jump between that. Even if there’s only a few megabytes difference between us and a starfish or something like that, every piece of DNA is two bits.

If you have millions of differences, four-to-the-several million—like the state space for DNA—even though you can store it in a small amount of megabytes, there are so many different combinatorial combinations that it’s not like we’re just going to stumble upon it by editing something that we currently have.

It could be something very different in that configuration space. And I think those are the algorithmic advancements that will continue to push us to the next plateau, and the next plateau, until eventually we meet and/or surpass the human plateau.

You invoked quantum computers in passing, but putting that aside for a moment… Would you believe, just at a gut level—because nobody knows—that we have enough computing power to build an AGI, we just don’t know how?

Well, in the sense that if the human brain is general intelligence, the computing power in the human brain, while impressive… All of the computers in the world are probably better at performing some simple calculations than the biological gray matter mess that exists in all of our skulls. I think the raw amount of transistors and things like that might be there, if we had the right way to apply them, if they were all applied in the same direction.

That being said… Whether or not that’s enough to make it ubiquitous, or whether or not having all the computers in the world mimic a single human child will be considered artificial general intelligence, or if we’re going to need to apply it to many different situations before we claim victory, I think that’s up for semantic debate.

Do you think about how the brain works, even if [the context] is not biological? Is that how you start a problem: “Well, how do humans do this?” Does that even guide you? Does that even begin the conversation? And I know none of this is a map: Birds fly with wings, and airplanes, all of that. Is there anything to learn from human intelligence that you, in a practical, day-to-day sense, use?

Yeah, definitely. I think it often helps to try to approach a problem from fundamentally different ways. One way to approach that problem is from the purely mathematical, axiomatic way; where we’re trying to build up from first principles, and trying to get to something that has a nice proof or something associated with it.

Another way to try to attack the problem is from a more biological setting. If I had to solve this problem, and I couldn’t assume any of those axioms, then how would I begin to try to build heuristics around it? Sometimes you can go from that back to the proof, but there are many different ways to attack that problem. Obviously, there are a lot of things in computer science, and optimization in general, that are motivated by physical phenomena.

So a neural network, if you squint, looks kind of like a biological brain neural network. There’s things like simulated annealing, which is a global optimization strategy that mimics the way that like steel is annealed… where it tries to find some local lattice structure that has low energy, and then you pound the steel with the hammer, and that increases the energy to find a better global optima lattice structure that is harder steel.

But that’s also an extremely popular algorithm in the scientific literature. So it was come to from this auxiliary way, or a genetic algorithm where you’re slowly evolving a population to try to get to a good result. I think there is definitely room for a lot of these algorithms to be inspired by biological or physical phenomenon, whether or not they are required to be from that to be proficient. I would have trouble, off the top of my head, coming up with the biological equivalent for a support vector machine or something like that. So there’s two different ways to attack it, but both can produce really interesting results.
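As a rough sketch of the annealing analogy above (the energy function, perturbation size, and cooling schedule are arbitrary choices for illustration), a few lines of Python:

import math
import random

# Minimal simulated annealing: minimize a bumpy 1-D function with many local minima.
def energy(x):
    return x * x + 10 * math.sin(3 * x)

x = random.uniform(-10, 10)
best_x, best_e = x, energy(x)
temperature = 10.0

while temperature > 1e-3:
    candidate = x + random.gauss(0, 1)          # "pound the steel": random perturbation
    delta = energy(candidate) - energy(x)
    # Always accept better moves; accept worse moves with a temperature-dependent probability.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
        if energy(x) < best_e:
            best_x, best_e = x, energy(x)
    temperature *= 0.999                         # slowly cool

print(f"approximate minimum at x = {best_x:.3f}, energy = {best_e:.3f}")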

Let’s take a normal thing that a human does, which is: You show a human training data of the Maltese Falcon, the little statue from the movie, and then you show him a bunch of photos. And a human can instantly say, “There’s the falcon under water, and there it’s half-hidden by a tree, and there it’s upside down…” A human does that naturally. So it’s some kind of transferred learning. How do we do that?

Transfer learning is the way that that happens. You’ve seen trees before. You’ve seen water. You’ve seen how objects look inside and outside of water before. And then you’re able to apply that knowledge to this new context.

It might be difficult for a human who grew up in a sensory deprivation chamber to look at this object… and then you start to show them things that they’ve never seen before: “Here’s this object and a tree,” and they might not ‘see the forest for the trees’ as it were.

In addition to that, without any context whatsoever, you take someone who was raised in a sensory deprivation chamber, and you start showing them pictures and ask them to do classification type tasks. They may be completely unaware of what’s the reward function here. Who is this thing telling me to do things for the first time I’ve never seen before?

What does it mean to even classify things or describe an object? Because you’ve never seen an object before.

And when you start training these systems from scratch, with no previous knowledge, that’s how they work. They need to slowly learn what’s good, what’s bad. There’s a reward function associated with that.

But with no context, with no previous information, it’s actually very surprising how well they are able to perform these tasks; considering [that when] a child is born, four hours later it isn’t able to do this. A machine algorithm that’s trained from scratch over the course of four hours on a couple of GPUs is able to do this.
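A minimal sketch of that transfer-learning idea, assuming a pretrained ImageNet backbone and placeholder data (real labeled photos would replace the random tensors): the general visual features stay frozen, and only a small new head is trained for the new task.

import torch
import torch.nn as nn
import torchvision.models as models

# Reuse features learned on ImageNet ("you've seen trees and water before")
# and retrain only a small head for a new two-class task.
backbone = models.resnet18(pretrained=True)

for param in backbone.parameters():       # keep the general visual knowledge frozen
    param.requires_grad = False

backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # new 2-class head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch: a handful of labeled example images of the new object.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

for step in range(20):
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()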

You mentioned the sensory deprivation chamber a couple of times. Do you have a sense that we’re going to need to embody these AIs to allow them to—and I use the word very loosely—‘experience’ the world? Are they locked in a sensory deprivation chamber right now, and that’s limiting them?

I think with transfer learning, and pre-training of data, and some reinforcement algorithm work, there’s definitely this idea of trying to make that better, and bootstrapping based off of previous knowledge in the same way that a human would attack this problem. I think it is a limitation. It would be very difficult to go from zero to artificial general intelligence without providing more of this context.

There’s been many papers recently, and OpenAI had this great blog post recently where, if you teach the machine language first, if you show it a bunch of contextual information—this idea of this unsupervised learning component of it, where it’s just absorbing information about the potential inputs it can get—that allows it to perform much better on a specific task, in the same way that a baby absorbs language for a long time before it actually starts to produce it itself.

And it could be in a very unstructured way, but it’s able to learn some of the actual language structure or sounds from the particular culture in which it was raised in this unstructured way.

Let’s talk a minute about human intelligence. Why do you think we understand so poorly how the brain works?

That’s a great question. It’s easier scientifically, with my background in math and physics—it seems like it’s easier to break down modular decomposable systems. Humanity has done a very good job at understanding, at least at a high level, how physical systems work, or things like chemistry.

Biology starts to get a little bit messier, because it’s less modular and less decomposable. And as you start to build larger and larger biological systems, it becomes a lot harder to understand all the different moving pieces. Then you go to the brain, and then you start to look at psychology and sociology, and all of the lines get much fuzzier.

It’s very difficult to build an axiomatic rule system. And humans aren’t even able to do that in some sort of grand unified way with physics, or understand quantum mechanics, or things like that; let alone being able to do it for these sometimes infinitely more complex systems.

Right. But the most successful animal on the planet is a nematode worm. Ten percent of all animals are nematode worms. They’re successful, they find food, and they reproduce and they move. Their brains have 302 neurons. We’ve spent twenty years trying to model that, a bunch of very smart people in the OpenWorm project…

 But twenty years trying to model 300 neurons to just reproduce this worm, make a digital version of it, and even to this day people in the project say it may not be possible.

I guess the argument is, 300 sounds like a small amount. One thing that’s very difficult for humans to internalize is the exponential function. So if intelligence grew linearly, then yeah. If we could understand one, then 300 might not be that much, whatever it is. But if the state space grows exponentially, or the complexity grows exponentially… if there’s ten different positions for every single one of those neurons, that’s 10^300, which is more than the number of atoms in the universe.

Right. But we aren’t starting by just rolling 300 dice and hoping for them all to be—we know how those neurons are arranged.

At a very high level we do.

I’m getting to a point, that we maybe don’t even understand how a neuron works. A neuron may be doing stuff down at the quantum level. It may be this gigantic supercomputer we don’t even have a hope of understanding, a single neuron.

From a chemical way, we can have an understanding of, “Okay, so we have neurotransmitters that carry a positive charge, that then cause a reaction based off of some threshold of charge, and there’s this catalyst that happens.” I think from a physics and chemical understanding, we can understand the base components of it, but as you start to build these complex systems that have this combinatorial set of states, it does become much more difficult.

And I think that’s that abstraction, where we can understand how simple chemical reactions work. But then it becomes much more difficult once you start adding more and more. Or even in physics… like if you have two bodies, and you’re trying to calculate the gravity, that’s relatively easy. Three? Harder. Four? Maybe impossible. It becomes much harder to solve these higher-order, higher-body problems. And even with 302 neurons, that starts to get pretty complex.

Oddly, two of them aren’t connected to anything, just like floating out there…

Do you think human intelligence is emergent?

In what respect?

I will clarify that. There are two sorts of emergence: one is weak, and one is strong. Weak emergence is where a system takes on characteristics which don’t appear at first glance to be derivable from them. So the intelligence displayed by an ant colony, or a beehive—the way that some bees can shimmer in unison to scare off predators. No bee is saying, “We need to do this.”  

The anthill behaves intelligently, even though… The queen isn’t, like, in charge; the queen is just another ant, but somehow it all adds intelligence. So that would be something where it takes on these attributes.

Can you really intuitively derive intelligence from neurons?

And then, to push that a step further, there are some who believe in something called ‘strong emergence’, where they literally are not derivable. You cannot look at a bunch of matter and explain how it can become conscious, for instance. It is what the minority of people believe about emergence, that there is some additional property of the universe we do not understand that makes these things happen.

The question I’m asking you is: Is reductionism the way to go to figure out intelligence? Is that how we’re going to kind of make advances towards an AGI? Just break it down into enough small pieces.

I think that is an approach, whether or not that’s ‘the’ ultimate approach that works is to be seen. As I was mentioning before, there are ways to take biological or physical systems, and then try to work them back into something that then can be used and applied in a different context. There’s other ways, where you start from the more theoretical or axiomatic way, and try to move forward into something that then can be applied to a specific problem.

I think there’s wide swaths of the universe that we don’t understand at many levels. Mathematics isn’t solved. Physics isn’t solved. Chemistry isn’t solved. All of these build on each other to get to these large, complex, biological systems. It may be a very long time, or we might need an AGI to help us solve some of these systems.

I don’t think it’s required to understand everything to be able to observe intelligence—like, proof by example. I can’t tell you why my brain thinks, but my brain is thinking, if you can assume that humans are thinking. So you don’t necessarily need to understand all of it to put it all together.

Let me ask you one more far-out question, and then we’ll go to a little more immediate future. Do you have an opinion on how consciousness comes about? And if you do or don’t, do you believe we’re going to build conscious machines?

Even to throw a little more into that one, do you think consciousness—that ability to change focus and all of that—is a requisite for general intelligence?

So, I would like to hear your definition of consciousness.

I would define it by example, to say that it’s subjective experience. It’s how you experience things. We’ve all had that experience when you’re driving, that you kind of space out, and then, all of a sudden, you kind of snap to. “Whoa! I don’t even remember getting here.”

And so that time when you were driving, your brain was elsewhere, you were clearly intelligent, because you were merging in and out of traffic. But in the sense I’m using the word, you were not ‘conscious’, you were not experiencing the world. If your foot caught on fire, you would feel it; but you weren’t experiencing the world. And then instantly, it all came on and you were an entity that experienced something.

Or, put another way… this is often illustrated with the problem of Mary by Frank Jackson:

He offers somebody named Mary, who knows everything about color, like, at a god-like level—knows every single thing about color. But the catch is, you might guess, she’s never seen it. She’s lived in a room, black-and-white, never seen it [color]. And one day, she opens the door, she looks outside and she sees red.  

The question becomes: Does she learn anything? Did she learn something new?  

In other words, is experiencing something different than knowing something? Those two things taken together, defining consciousness, is having an experience of the world…

I’ll give one final one. You can hook a sensor up to a computer, and you can program the computer to play an mp3 of somebody screaming if the sensor hits 500 degrees. But nobody would say, at this day and age, the computer feels the pain. Could a computer feel anything?

Okay. I think there’s a lot to unpack there. I think computers can perceive the environment. Your webcam is able to record the environment in the same way that your optical nerves are able to record the environment. When you’re driving a car, and daydreaming, and kind of going on autopilot, as it were, there still are processes running in the background.

If you were to close your eyes, you would be much worse at doing lane merging and things like that. And that’s because you’re still getting the sensory input, even if you’re not actively, consciously aware of the fact that you’re observing that input.

Maybe that’s where you’re getting at with consciousness here, is: Not only the actual task that’s being performed, which I think computers are very good at—and we have self-driving cars out on the street in the Bay Area every day—but that awareness of the fact that you are performing this task, is kind of meta-level of: “I’m assembling together all of these different subcomponents.”

Whether that’s driving a car, thinking about the meeting that I’m running late to, some fight that I had with my significant other the night before, or whatever it is. There’s all these individual processes running, and there could be this kind of global awareness of all of these different tasks.

I think today, where artificial intelligence sits is, performing each one of these individual tasks extremely well, toward some kind of objective function of, “I need to not crash this car. I need to figure out how to resolve this conflict,” or whatever it may be; or, “Play this game in an artificial intelligence setting.” But we don’t yet have that kind of governing overall strategy that’s aware of making these tradeoffs, and then making those tradeoffs in an intelligent way. But that overall strategy itself is just going to be going toward some specific reward function.

Probably when you’re out driving your car, and you’re spacing out, your overall reward function is, “I want to be happy and healthy. I want to live a meaningful life,” or something like that. It can be something nebulous, but you’re also just this collection of subroutines that are driving towards this specific end result.

But the direct question of what would it mean for a computer to feel pain? Will a computer feel pain? Now they can sense things, but nobody argues they have a self that experiences the pain. It matters, doesn’t it?

It depends on what you mean by pain. If you mean there’s a response of your nervous system to some outside stimuli that you perceive as pain, a negative response, and—

—It involves emotional distress. People know what pain is. It hurts. Can a computer ever hurt?

It’s a fundamentally negative response to what you’re trying to achieve. So pain and suffering is the opposite of happiness. And your objective function as a human is happiness, let’s say. So, by failing to achieve that objective, you feel something like pain. Evolutionarily, we might have evolved this in order to avoid specific things. Like, you get pain when you touch flame, so don’t touch flame.

And the reason behind that is biological systems degrade in high-temperature environments, and you’re not going to be able to reproduce or something like that.

You could argue that when a classification system fails to classify something, and it gets penalized in its reward function, that’s the equivalent of it finding something where, in its state of the world, it has failed to achieve its goal, and it’s getting the opposite of what its purpose is. And that’s similar to pain and suffering in some way.

But is it? Let’s be candid. You can’t take a person and torture them, because that’s a terrible thing to do… because they experience pain. [Whereas if] you write a program that has an infinite loop that causes your computer to crash, nobody’s going to suggest you should go to jail for that. Because people know that those are two very different things.

It is a negative neurological response based off of outside stimuli. A computer can have a negative response, and perform based off of outside stimuli poorly, relative to what it’s trying to achieve… Although I would definitely agree with you that that’s not a computer experiencing pain.

But from a pure chemical level, down to the algorithmic component of it, they’re not as fundamentally different… that because it’s a human, there’s something magic about it being a human. A dog can also experience pain.

These worms—I’m not as familiar with the literature on that, but [they] could potentially experience pain. And as you derive that further and further back, you might have to bend your definition of pain. Maybe they’re not feeling something in a central nervous system, like a human or a dog would, but they’re perceiving something that’s negative to what they’re trying to achieve with this utility function.

But we do draw a line. And I don’t know that I would use the word ‘magic’ the way you’re doing it. We draw this line by saying that dogs feel pain, so we outlaw animal cruelty. Bacteria don’t, so we don’t outlaw antibiotics. There is a material difference between those two things.

So if the difference is a central nervous system, and pain is being defined as a nervous response to some outside stimuli… then unless we explicitly design machines to have central nervous systems, then I don’t think they will ever experience pain.

Thanks for indulging me in all of that, because I think it matters… Because up until thirty years ago, veterinarians typically didn’t use anesthetic. They were told that animals couldn’t feel pain. Babies were operated on in the ‘90s—open heart surgery—under the theory they couldn’t feel pain.  

What really intrigues me is the idea of how would we know if a machine did? That’s what I’m trying to deconstruct. But enough of that. We’ll talk about jobs here in a minute, and those concerns…

There’s groups of people that are legitimately afraid of AI. You know all the names. You’ve got Elon Musk, you get Stephen Hawking. Bill Gates has thrown in his hat with that, Wozniak has. Nick Bostrom wrote a book that addressed existential threat and all of that. Then you have Mark Zuckerberg, who says no, no, no. You get Oren Etzioni over at the Allen Institute, just working on some very basic problem. You get Andrew Ng with his “overpopulation on Mars. This is not helpful to even have this conversation.”

What is different about those two groups in your mind? What is the difference in how they view the world that gives them these incredibly different viewpoints?

I think it goes down to a definition problem. As you mentioned at the beginning of this podcast, when you ask people, “What is artificial intelligence?” everybody gives you a different answer. I think each one of these experts would also give you a different answer.

If you define artificial intelligence as matrix multiplication and gradient descent in a deep learning system, trying to achieve a very specific classification output given some pixel input—or something like that—it’s very difficult to conceive that as some sort of existential threat for humanity.

But if you define artificial intelligence as this general intelligence, this kind of emergent singularity where the machines don’t hit the plateau, that they continue to advance well beyond humans… maybe to the point where they don’t need humans, or we become the ants in that system… that becomes very rapidly a very existential threat.

As I said before, I don’t think there’s an incremental improvement from algorithms—as they exist in the academic literature today—to that singularity, but I think it can be a slippery slope. And I think that’s what a lot of these experts are talking about… Where if it does become this dynamic system that feeds on itself, by the time we realize it’s happening, it’ll be too late.

Whether or not that’s because of the algorithms that we have today, or algorithms down the line, it does make sense to start having conversations about that, just because of the time scales over which governments and policies tend to work. But I don’t think someone is going to design a TensorFlow or MXNet algorithm tomorrow that’s going to take over the world.

There’s legislation in Europe to basically say, if an AI makes a decision about whether you should get an auto loan or something, you deserve to know why it turned you down. Is that a legitimate request, or is it like you go to somebody at Google and say, “Why is this site ranked number one and this site ranked number two?” There’s no way to know at this point.  

Or is that something that, with the auto loan thing, you’re like, “Nope, here are the big bullet points of what went into it.” And if that becomes the norm, does that slow down AI in any way?

I think it’s important to make sure, just from a societal standpoint, that we continue to strive towards not being discriminatory towards specific groups and people. It can be very difficult, when you have something that looks like a black box from the outside, to be able to say, “Okay, was this being fair?” based off of the fairness that we as a society have agreed upon.

The machine doesn’t have that context. The machine doesn’t have the policy, necessarily, inside to make sure that it’s being as fair as possible. We need to make sure that we do put these constraints on these systems, so that it meets what we’ve agreed upon as a society, in laws, etc., to adhere to. And that it should be held to the same standard as if there was a human making that same decision.

There is, of course, a lot of legitimate fear wrapped up about the effect of automation and artificial intelligence on employment. And just to set the problem up for the listeners, there’s broadly three camps, everybody intuitively knows this.

 There’s one group that says, “We’re going to advance our technology to the point that there will be a group of people who do not have the educational skills needed to compete with the machines, and we’ll have a permanent underclass of people who are unemployable.” It would be like the Great Depression never goes away.

And then there are people who say, “Oh, no, no, no. You don’t understand. Everything, every job, a machine is going to be able to do.” You’ll reach a point where the machine will learn it faster than the human, and that’s it.

And then you’ve got a third group that says, “No, that’s all ridiculous. We’ve had technology come along, as transformative as it is… We’ve had electricity, and machines replacing animals… and we’ve always maintained full employment.” Because people just learn how to use these tools to increase their own productivity, maintain full employment—and we have growing wages.

So, which of those, or a fourth one, do you identify with?

This might be an unsatisfying answer, but I think we’re going to go through all three phases. I think we’re in the third camp right now, where people are learning new systems, and it’s happening at a pace where people can go to a computer science boot camp and become an engineer, and try to retrain and learn some of these systems, and adapt to this changing scenario.

I think, very rapidly—especially at the exponential pace that technology tends to evolve—it does become very difficult. Fifty years ago, if you wanted to take apart your telephone and try to figure out how it works, repair it, that was something that a kid could do at a camp kind of thing, like an entry circuits camp. That’s impossible to do with an iPhone.

I think that’s going to continue to happen with some of these more advanced systems, and you’re going to need to spend your entire life understanding some subcomponent of it. And then, in the further future, as we move towards this direction of artificial general intelligence… Like, once a machine is a thousand times, ten thousand times, one hundred thousand times smarter—by whatever definition—than a human, and that increases at an exponential pace… We won’t need a lot of different things.

Whether or not that’s a fundamentally bad thing is up for debate. I think one thing that’s different about this than the Industrial Revolution, or the agricultural revolution, or things like that, that have happened throughout human history… is that instead of this happening over the course of generations or decades… Maybe if your father, and your grandfather, and your entire family tree did a specific job, but then that job doesn’t exist anymore, you train yourself to do something different.

Once it starts to happen over the course of a decade, or a year, or a month, it becomes much harder to completely retrain. That being said, there’s lots of thoughts about whether or not humans need to be working to be happy. And whether or not there could be some other fundamental thing that would increase the net happiness and fulfillment of people in the world, besides sitting at a desk for forty hours a week.

And maybe that’s actually a good thing, if we can set up the societal constructs to allow people to do that in a healthy and happy way.

Do you have any thoughts on computers displaying emotions, emulating emotions? Is that going to be a space where people are going to want authentic human experiences in those in the future? Or are we like, “No, look at how people talk to their dog,” or something? If it’s good enough to fool you, you just go along with the conceit?

The great thing about computers, and artificial intelligence systems, and things like that is if you point them towards a specific target, they’ll get pretty good at hitting that target. So if the goal is to mimic human emotion, I think that that’s something that’s achievable. Whether or not a human cares, or is even able to distinguish between that and actual human emotion, could be very difficult.

At Cornell, where I did my PhD, they had this psychology chatbot called ELIZA—I think this was back in the ‘70s. It went through a specific school of psychological behavioral therapy thought, replied with specific ways, and people found it incredibly helpful.

Even if they knew that it was just a machine responding to them, it was a way for them to get out their emotions and work through specific problems. As these machines get more sophisticated and able, as long as it’s providing utility to the end user, does it matter who’s behind the screen?

That’s a big question. Weizenbaum shut down ELIZA because he said that when a machine says, “I understand” that it’s a lie, there’s no ‘I’, and there’s nothing [there] that understands anything. He had real issues with that.

But then when they shut it down, some of the end users were upset, because they were still getting quite a bit of utility out of it. There’s this moral question of whether or not you can take away something from someone who is deriving benefit from it as well.

So I guess the concern is that maybe we reach a day where an AI best friend is better than a real one. An AI one doesn’t stand you up. And an AI spouse is better than a human spouse, because of all of those reasons. Is that a better world, or is it not?

I think it becomes a much more dangerous world, because as you said before, someone could decide to turn off the machine. When it’s someone taking away your psychologist, that could be very dangerous. When it’s someone deciding that you didn’t pay your monthly fee, so they’re going to turn off your spouse, that could be quite a bit worse as well.

As you mentioned before, people don’t necessarily associate the feelings or pain or anything like that with the machine, but as these get more and more life-like, and as they are designed with the reward function of becoming more and more human-like, I think that distinction is going to become quite a bit harder for us to understand.

And it not only affects the machine—which you can make the argument doesn’t have a voice—but it’ll start to affect the people as well.

One more question along these lines. You were a Forbes 30 Under 30. You’re fine with computer emotions, and you have this set of views. Do you notice any generational difference between researchers who have been in it longer than you, and people of your age and training? Do you look at it, as a whole, differently than another generation might have?

I think there are always going to be generational differences. People grow up in different times and contexts, societal norms shift… I would argue usually for the better, but not always. So I think that that context in which you were raised, that initial training data that you apply your transfer learning to for the rest of your life, has a huge effect on what you’re actually going to do, and how you perceive the world moving forward.

I spent a good amount of time today at SigOpt. Can you tell me what you’re trying to do there, and why you started or co-founded it, and what the mission is? Give me that whole story.

Yeah, definitely. SigOpt is an optimization-as-a-service company, or a software-as-a-service offering. What we do is help people configure these complex systems. So when you’re building a neural network—or maybe it’s a reinforcement learning system, or an algorithmic trading strategy—there’s often many different tunable configuration parameters.

These are the settings that you need to put in place before the system itself starts to do any sort of learning: things like the depth of the neural network, the learning rates, some of these stochastic gradient descent parameters, etc.

These are often kind of nuisance parameters that are brushed under the rug. They’re typically solved via relatively simplistic methods like brute forcing it or trying random configurations. What we do is we take an ensemble of the state-of-the-art research from academia, and Bayesian and global optimization, and we ensemble all of these algorithms behind a simple API.

So when you are downloading MxNet, or TensorFlow, or Caffe2, whatever it is, you don’t have to waste a bunch of time trying different things via trial-and-error. We can guide you to the best solution quite a bit faster.
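To make the tuning problem concrete, here is a toy Python sketch; it is not the SigOpt API, and the objective function is a stand-in for training and validating a real model. It random-searches over the kinds of configuration parameters described above, which is exactly the brute-force baseline a Bayesian optimizer aims to beat in far fewer trials.

import random

# Toy random search over hyperparameters (depth, learning rate).
def validation_accuracy(depth, learning_rate):
    # Stand-in for training a model and measuring held-out accuracy.
    return 1.0 - abs(depth - 4) * 0.05 - abs(learning_rate - 1e-3) * 50

best_config, best_score = None, float("-inf")
for trial in range(50):
    config = {
        "depth": random.randint(1, 10),
        "learning_rate": 10 ** random.uniform(-5, -1),
    }
    score = validation_accuracy(**config)
    if score > best_score:
        best_config, best_score = config, score

print(f"best configuration after 50 trials: {best_config} (score {best_score:.3f})")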

Do you have any success stories that you like to talk about?

Yeah, definitely. One of our customers is Hotwire. They’re using us to do things like ranking systems. We work with a variety of different algorithmic trading firms to make their strategies more efficient. We also have this great academic program where SigOpt is free for any academic at any university or national lab anywhere in the world.

So we’re helping accelerate the flywheel of science by allowing people to spend less time doing trial-and-error. I wasted way too much of my PhD on this, to be completely honest—fine-tuning different configuration settings and bioinformatics algorithms.

So our goal is… If we can have humans do what they’re really good at, which is creativity—understanding the context in the domain of a problem—and then we can make the trial-and-error component as little as possible, hopefully, everything happens a little bit faster and a little bit better and more efficiently.

What are the big challenges you’re facing?

Where this system makes the biggest difference is in large complex systems, where it’s very difficult to manually tune, or brute force this problem. Humans tend to be pretty bad at doing 20-dimensional optimization in their head. But a surprising number of people still take that approach, because they’re unable to access some of this incredible research that’s been going on in academia for the last several decades.

Our goal is to make that as easy as possible. One of our challenges is finding people with these interesting complex problems. I think the recent surge of interest in deep learning and reinforcement learning, and the complexity that’s being imbued in a lot of these systems, is extremely good for us, and we’re able to ride that wave and help these people realize the potential of these systems quite a bit faster than they would otherwise.

Having the market come to us is something that we’re really excited about, but it’s not instant.

Do you find that people come to you and say, “Hey, we have this dataset, and we think somewhere in here we can figure out whatever”? Or do they just say, “We have this data, what can we do with it?” Or do they come to you and say, “We’ve heard about this AI thing, and want to know what we can do”?

There are companies that help solve that particular problem, where they’re given raw data and they help you build a model and apply it to some business context. Where SigOpt sits, which is slightly different than that, is when people come to us, they have something in place. They already have data scientists or machine learning engineers.

They’ve already applied their domain expertise to really understand their customers, the business problem they’re trying to solve, everything like that. And what they’re looking for is to get the most out of these systems that they’ve built. Or they want to build a more advanced system as rapidly as possible.

And so SigOpt bolts on top of these pre-existing systems, and gives them that boost by fine-tuning all of these different configuration parameters to get to their maximal performance. So, sometimes we do meet people like that, and we pass them on to some of our great partners. When someone has a problem and they just want to get the most out of it, that’s where we can come in and provide this black box optimization on top of it.

Final question-and-a-half. Do you speak a lot? Do you tweet? If people want to follow you and keep up with what you’re doing, what’s the best way to do that?

They can follow @SigOpt on Twitter. We have a blog where we post technical and high-level posts about optimization and some of the latest advancements in deep learning and reinforcement learning. We publish papers too, but blog.sigopt.com and @SigOpt on Twitter are the best ways to follow along.

Alright. It has been an incredibly fascinating hour, and I want to thank you for taking the time.

Excellent. Thank you for having me. I’m really honored to be on the show.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here

Voices in AI – Episode 13: A Conversation with Bryan Catanzaro

In this episode, Byron and Bryan talk about sentience, transfer learning, speech recognition, autonomous vehicles, and economic growth.

Byron Reese: This is “Voices in AI” brought to you by Gigaom. I’m Byron Reese. Today, our guest is Bryan Catanzaro. He is the head of Applied AI Research at NVIDIA. He has a BS in computer science and Russian from BYU, an MS in electrical engineering from BYU, and a PhD in both electrical engineering and computer science from UC Berkeley. Welcome to the show, Bryan.

Bryan Catanzaro: Thanks. It’s great to be here.

Let’s start off with my favorite opening question. What is artificial intelligence?

It’s such a great question. I like to think about artificial intelligence as making tools that can perform intellectual work. Hopefully, those are useful tools that can help people be more productive in the things that they need to do. There are a lot of different ways of thinking about artificial intelligence, and maybe the way that I’m talking about it is a little bit more narrow, but I think it’s also a little bit more connected with why artificial intelligence is changing so many companies and so many things about the way that we do things in the world economy today: it actually is a practical thing that helps people be more productive in their work. We’ve been able to create industrialized societies with a lot of mechanization that helps people do physical work. Artificial intelligence is making tools that help people do intellectual work.

I ask you what artificial intelligence is, and you said it’s doing intellectual work. That’s sort of using the word to define it, isn’t it? What is that? What is intelligence?

Yeah, wow…I’m not a philosopher, so I actually don’t have like a…

Let me try a different tack. Is it artificial in the sense that it isn’t really intelligent and it’s just pretending to be, or is it really smart? Is it actually intelligent and we just call it artificial because we built it?

I really liked this idea from Yuval Harari that I read a while back where he said there’s the difference between intelligence and sentience, where intelligence is more about the capacity to do things and sentience is more about being self-aware and being able to reason in the way that human beings reason. My belief is that we’re building increasingly intelligent systems that can perform what I would call intellectual work. Things about understanding data, understanding the world around us that we can measure with sensors like video cameras or audio or that we can write down in text, or record in some form. The process of interpreting that data and making decisions about what it means, that’s intellectual work, and that’s something that we can create machines to be more and more intelligent at. I think the definitions of artificial intelligence that move more towards consciousness and sentience, I think we’re a lot farther away from that as a community. There are definitely people that are super excited about making generally intelligent machines, but I think that’s farther away and I don’t know how to define what general intelligence is well enough to start working on that problem myself. My work focuses mostly on practical things—helping computers understand data and make decisions about it.

Fair enough. I’ll only ask you one more question along those lines. I guess even down in narrow AI, though, if I had a sprinkler that comes on when my grass gets dry, it’s responding to its environment. Is that an AI?

I’d say it’s a very small form of AI. You could have a very smart sprinkler that was better than any person at figuring out when the grass needed to be watered. It could take into account all sorts of sensor data. It could take into account historical information. It might actually be more intelligent at figuring out how to irrigate than a human would be. And that’s a very narrow form of intelligence, but it’s a useful one. So yeah, I do think that could be considered a form of intelligence. Now it’s not philosophizing about the nature of irrigation and its harm on the planet or the history of human interventions on the world, or anything like that. So it’s very narrow, but it’s useful, and it is intelligent in its own way.

Fair enough. I do want to talk about AGI in a little while. I have some questions around…We’ll come to that in just a moment. Just in the narrow AI world, just in your world of using data and computers to solve problems, if somebody said, “Bryan, what is the state-of-the-art? Where are we at in AI? Is this the beginning and you ‘ain’t seen nothing yet’? Or are we really doing a lot of cool things, and we are well on our way to mastering that world?”

I think we’re just at the beginning. We’ve seen so much progress over the past few years. It’s been really quite astonishing, the kind of progress we’ve seen in many different domains. It all started out with image recognition and speech recognition, but it’s gone a long way from there. A lot of the products that we interact with on a daily basis over the internet are using AI, and they are providing value to us. They provide our social media feeds, they provide recommendations and maps, they provide conversational interfaces like Siri or Android Assistant. All of those things are powered by AI and they are definitely providing value, but we’re still just at the beginning. There are so many things we don’t know yet how to do and so many underexplored problems to look at. So I believe we’ll continue to see applications of AI come up in new places for quite a while to come.

If I took a little statuette of a falcon, let’s say it’s a foot tall, and I showed it to you, and then I showed you some photographs, and said, “Spot the falcon.” And half the time it’s sticking halfway behind a tree, half the time it’s underwater; one time it’s got peanut butter smeared on it. A person can do that really well, but computers are far away from that. Is that an example of us being really good at transfer learning? We’re used to knowing what things with peanut butter on them look like? What is it that people are doing that computers are having a hard time doing there?

I believe that people have evolved, over a very long period of time, to operate on planet Earth with the sensors that we have. So we have a lot of built-in knowledge that tells us how to process the sensors that we have and models the world. A lot of it is instinctual, and some of it is learned. I have young children, like a year-old or so. They spend an awful lot of time just repetitively probing the world to see how it’s going to react when they do things, like pushing on a string, or a ball, and they do it over and over again because I think they’re trying to build up their models about the world. We have actually very sophisticated models of the world that maybe we take for granted sometimes because everyone seems to get them so easily. It’s not something that you have to learn in school. But these models are actually quite useful, and they’re more sophisticated than – and more general than – the models that we currently can build with today’s AI technology.

To your question about transfer learning, I feel like we’re really good at transfer learning within the domain of things that our eyes can see on planet Earth. There are probably a lot of situations where an AI would be better at transfer learning. Might actually have fewer assumptions baked in about how the world is structured, how objects look, what kind of composition of objects is actually permissible. I guess I’m just trying to say we shouldn’t forget that we come with a lot of context. That’s instinctual, and we use that, and it’s very sophisticated.

Do you take from that that we ought to learn how to embody an AI and just let it wander around the world, bumping into things and poking at them and all of that? Is that what you’re saying? How do we overcome that?

It’s an interesting question, you know. I’m not personally working on trying to build artificial general intelligence, but it will be interesting for those people that are working on it to see what kind of childhood is necessary for an AI. I do think that childhood plays a really important part in developing human intelligence, because it helps us build and calibrate these models of how the world works, which we then apply to all sorts of things like your question of the falcon statue. Will computers need things like that? It’s possible. We’ll have to see. I think one of the things that’s different about computers is that they’re a lot better at transmitting information identically, so it may be the kind of thing that we can train once, and then just use repeatedly – as opposed to people, where the process of replicating a person is time-consuming and not exact.

But that transfer learning problem isn’t really an AGI problem at all, though. Right? We’ve taught a computer to recognize a cat, by giving it a gazillion images of a cat. But if we want to teach it how to recognize a bird, we have to start over, don’t we?

I don’t think we generally start over. I think most of the time if people wanted to create a new classifier, they would use transfer learning from an existing classifier that had been trained on a wide variety of different object types. It’s actually not very hard to do that, and people do that successfully all the time. So at least for image recognition, I think transfer learning works pretty well. For other kinds of domains, they can be a little bit more challenging. But at least for image recognition, we’ve been able to find a set of higher-level features that are very useful in discriminating between all sorts of different kinds of objects, even objects that we haven’t seen before.
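
For readers who want the concrete shape of this, here is a minimal PyTorch sketch of the kind of transfer learning described: reuse the features of a classifier pretrained on a wide variety of objects and retrain only the final layer for a new category. The class count, fake batch, and training details are illustrative assumptions, not anyone’s production setup.

# Minimal sketch of transfer learning for image recognition: start from a
# classifier pretrained on ImageNet, freeze its feature layers, and retrain
# only a new final layer (e.g. "bird" vs. "not bird").
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)   # pretrained feature extractor

# Freeze the existing features so we are not starting over.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for the new task.
num_new_classes = 2
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

# Only the new layer's parameters are optimized.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch of 224x224 images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_new_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()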

What about audio? Because I’m talking to you now and I’m snapping my fingers. You don’t have any trouble continuing to hear me, but a computer trips over that. What do you think is going on in people’s minds? Why are we good at that, do you think? To get back to your point about we live on Earth, it’s one of those Earth things we do. But as a general rule, how do we teach that to a computer? Is that the same as teaching it to see something, as to teach it to hear something?

I think it’s similar. The best speech recognition accuracies come from systems that have been trained on huge amounts of data, and there does seem to be a relationship that the more data we can train a model on, the better the accuracy gets. We haven’t seen the end of that yet. I’m pretty excited about the prospects of being able to teach computers to continually understand audio, better and better. However, I wanted to point out, humans, this is kind of our superpower: conversation and communication. You watch birds flying in a flock, and the birds can all change direction instantaneously, and the whole flock just moves, and you’re like, “How do you do that and not run into each other?” They have a lot of built-in machinery that allows them to flock together. Humans have a lot of built-in machinery for conversation and for understanding spoken language. The pathways for speaking and the pathways for hearing evolve together, so they’re really well-matched.

With computers trying to understand audio, we haven’t gotten to that point yet. I remember in some of the experiments that I’ve done in the past with speech recognition, the recognition performance was very sensitive to compression artifacts that were actually not audible to humans. We could actually take a recording, like this one, and recompress it in a way that sounded identical to a person, and observe a measurable difference in the recognition accuracy of our model. That was a little disconcerting because we’re trying to train the model to be invariant to all the things that humans are invariant to, but it’s actually quite hard to do that. We certainly haven’t achieved that yet. Often, our models are still what we would call “overfitting”, where they’re paying attention to a lot of details that help them perform the tasks that we’re asking them to perform, but that aren’t actually helpful for solving the fundamental tasks that we’re trying to solve. And we’re continually trying to improve our understanding of the tasks that we’re solving so that we can avoid this, but we’ve still got more work to do.
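
One way to probe (or train against) that kind of sensitivity is to round-trip recordings through a lossy codec so the model also sees compression artifacts. The sketch below uses the ffmpeg command-line tool and hypothetical filenames; it is an illustration of the idea, not the setup used in the experiments mentioned above.

# Sketch: create an artifact-laden copy of a training recording by
# round-tripping it through MP3 at a low bitrate with ffmpeg. The copy can
# sound identical to a person while differing enough to move a model's
# recognition accuracy. Requires ffmpeg to be installed.
import subprocess

def recompress_wav(wav_in: str, wav_out: str, bitrate: str = "64k") -> None:
    mp3_tmp = wav_out + ".tmp.mp3"
    # WAV -> MP3 at the requested bitrate (introduces codec artifacts).
    subprocess.run(["ffmpeg", "-y", "-i", wav_in, "-b:a", bitrate, mp3_tmp], check=True)
    # MP3 -> WAV so the file fits the same training pipeline as the original.
    subprocess.run(["ffmpeg", "-y", "-i", mp3_tmp, wav_out], check=True)

# Example (hypothetical filenames):
# recompress_wav("utterance_0001.wav", "utterance_0001_64k.wav")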

My standard question when I’m put in front of a chatbot or one of the devices that sits on everybody’s desktop, I can’t say them out loud because they’ll start talking to me right now, but the question I always ask is “What is bigger, a nickel or the sun?” To date, nothing has ever been able to answer that question. It doesn’t know how sun is spelled. “Whose son? The sun? Nickel? That’s actually a coin.” All of that. What all do we have to get good at, for the computer to answer that question? Run me down the litany of all the things we can’t do, or that we’re not doing well yet, because there’s no system I’ve ever tried that answered that correctly.

I think one of the things is that we’re typically not building chat systems to answer trivia questions just like that. I think if we were building a special-purpose trivia system for questions like that, we probably could answer it. IBM Watson did pretty well on Jeopardy, because it was trained to answer questions like that. I think we definitely have the databases, the knowledge bases, to answer questions like that. The problem is that kind of a question is really outside of the domain of most of the personal assistants that are being built as products today because honestly, trivia bots are fun, but they’re not as useful as a thing that can set a timer, or check the weather, or play a song. So those are mostly the things that those systems are focused on.

Fair enough, but I would differ. You can go to Wolfram Alpha and say, “What’s bigger, the Statue of Liberty or the Empire State Building?” and it’ll answer that. And you can ask Amazon’s product that same question, and it’ll answer it. Is that because those are legit questions and my question is not legit, or is it because we haven’t taught systems to disambiguate very well and so they don’t really know what I mean when I say “sun”?

I think that’s probably the issue. There’s a language modeling problem when you say, “What’s bigger, a nickel or the sun?” The sun can mean so many different things, like you were saying. Nickel, actually, can be spelled a couple of different ways and has a couple of different meanings. Dealing with ambiguities like that is a little bit hard. I think when you ask that question to me, I categorize this as a trivia question, and so I’m able to disambiguate all of those things, and look up the answer in my little knowledge base in my head, and answer your question. But I actually don’t think that particular question is impossible to solve. I just think it’s just not been a focus to try to solve stuff like that, and that’s why they’re not good.

AIs have done a really good job playing games: Deep Blue, Watson, AlphaGo, and all of that. I guess those are constrained environments with a fixed set of rules, and it’s easy to understand who wins, and what a point is, and all that. What is going to be the next thing, that’s a watershed event, that happens? Now they can outbluff people in poker. What’s something that’s going to be, in a year, or two years, five years down the road, that one day, it wasn’t like that in the universe, and the next day it was? And the next day, the best Go player in the world was a machine.

The thing that’s on my mind for that right now is autonomous vehicles. I think it’s going to change the world forever to unchain people from the driver’s seat. It’s going to give people hugely increased mobility. I have relatives that their doctors have asked them to stop driving cars because it’s no longer safe for them to be doing that, and it restricts their ability to get around the world, and that frustrates them. It’s going to change the way that we all live. It’s going to change the real estate markets, because we won’t have to park our cars in the same places that we’re going to. It’s going to change some things about the economy, because there’s going to be new delivery mechanisms that will become economically viable. I think intelligence that can help robots essentially drive around the roads, that’s the next thing that I’m most excited about, that I think is really going to change everything.

We’ll come to that in just a minute, but I’m actually asking…We have self-driving cars, and on an evolutionary basis, they’ll get a little better and a little better. You’ll see them more and more, and then someday there’ll be even more of them, and then they’ll be this and this and this. It’s not that surprise moment, though, of AlphaGo just beat Lee Sedol at Go. I’m wondering if there is something else like that—that it’s this binary milestone that we can all keep our eye open for?

I don’t know. We have self-driving cars already in some sense, but I don’t have a self-driving car that could, say, let me sit in it at nighttime, go to sleep, wake up, and find it has brought me to Disneyland. I would like that kind of self-driving car, but that car doesn’t exist yet. I think self-driving trucks that can go cross-country carrying stuff, that’s going to radically change the way that we distribute things. I do think that, as you said, we’re on the evolutionary path to self-driving cars, but there are going to be some discrete moments when people actually start using them to do new things that will feel pretty significant.

As far as games and stuff, and computers being better at games than people, it’s funny because I feel like Silicon Valley has, sometimes, a very linear idea of intelligence. That one person is smarter than another person maybe because of an SAT score, or an IQ test, or something. They use that sort of linearity of intelligence to where some people feel threatened by artificial intelligence because they extrapolate that artificial intelligence is getting smarter and smarter along this linear scale, and that’s going to lead to all sorts of surprising things, like Lee Sedol losing at Go, but on a much bigger scale for all of us. I feel kind of the opposite. Intelligence is such a multidimensional thing. The fact that a computer is better at Go than I am doesn’t really change my life very much, because I’m not very good at Go. I don’t play Go. I don’t consider Go to be an important part of my intelligence. Same with chess. When Garry Kasparov lost to Deep Blue, that didn’t threaten my intelligence. I am sort of defining the way that I work and how I add value to the world, and what things make me happy on a lot of other axes besides “Can I play chess?” or “Can I play Go?” I think that speaks to the idea that intelligence really is very multifaceted. There are a lot of different kinds – probably thousands or millions of different kinds of intelligence – and it’s not very linearizable.

Because of that, I feel like, as we watch artificial intelligence develop, we’re going to see increasingly more intelligent machines, but they’re going to be increasingly more intelligent in some very narrow domains like “this is the better Go-playing robot than me”, or “this is the better car driver than me”. That’s going to be incredibly useful, but it’s not going to change the way that I think about myself, or about my work, or about what makes me happy. Because I feel like there are so many more dimensions of intelligence that are going to remain the province of humans. That’s going to take a very long time, if ever, for artificial intelligence to become better at all of them than us. Because, as I said, I don’t believe that intelligence is a linearizable thing.

And you said you weren’t a philosopher. I guess the thing that’s interesting to people is there was a time when information couldn’t travel faster than a horse. And then the train came along, and information could travel. That’s why in the old Westerns – if they ever made it onto the train, that was it, and they were out of range. Nothing traveled faster than the train. Then we had the telegraph and, all of a sudden, that was this amazing thing that information could travel at the speed of light. And then one time they ran these cables under the ocean, and somebody in England could talk to somebody in the United States instantly. Each one of those moments, I think, was an opportunity to pause, and reflect, and mark a milestone, and think about what it all means. I think that’s why a computer just beat these awesome poker players. It learned to bluff. You just kind of want to think about it.

So let’s talk about jobs for a moment because you’ve been talking around that for just a second. Just to set the question up: Generally speaking, there are three views of what automation and artificial intelligence are going to do to jobs. One of them reflects kind of what you were saying: that there’s going to be a certain group of workers who are considered low-skilled, and there’s going to be automation that takes these low-skilled jobs, and that there’s going to be a sizable part of the population that’s locked out of the labor market, and it’s kind of like the permanent Great Depression over and over and over forever. Then there’s another view that says, “No, you don’t understand. There’s going to be an inflection point where they can do every single thing. They’re going to be a better conductor and a better painter and a better novelist and a better everything than us. Don’t think that you’ve got something that a machine can’t do.” Clearly, that isn’t your viewpoint from what you said. Then there’s a third viewpoint that says, “No, in the past, even when we had these transformative technologies like electricity and mechanization, people take those technologies and they use them to increase their own productivity and, therefore, their own incomes. And you never have unemployment go up because of them, because people just take it and make a new job with it.” Of those three—or maybe a fourth one I didn’t cover—where do you find yourself?

I feel like I’m closer in spirit to number three. I’m optimistic. I believe that the primary way that we should expect economic growth in the future is by increased productivity. If you buy a house or buy some stock and you want to sell it 20 or 30 years from now, who’s going to buy it, and with what money, and why do you expect the price to go up? I think the answer to that question should be the people in the future should have more money than us because they’re more productive, and that’s why we should expect our world economy to continue growing. Because we find more productivity. I actually feel like this is actually necessary. World productivity growth has been slowing for the past several decades, and I feel like artificial intelligence is our way out of this trap where we have been unable to figure out how to grow our economy because our productivity hasn’t been improving. I actually feel like this is a necessary thing for all of us, is to figure out how to improve productivity, and I think AI is the way that we’re going to do that for the next several decades.

The one thing that I disagreed with in your third statement was this idea that unemployment would never go up. I think nothing is ever that simple. I actually am quite concerned about job displacement in the short-term. I think there will be people that suffer and in fact, I think, to a certain extent, this is already happening. The election of Donald Trump was an eye-opener to me that there really exists a lot of people that feel that they have been left behind by the economy, and they come to very different conclusions about the world than I might. I think that it’s possible that, as we continue to digitize our society, and AI becomes a lever that some people will become very good at using to increase their productivity, that we’re going to see increased inequality and that worries me.

The primary challenges that I’m worried about, for our society, with the rise of AI, have to do more with making sure that we give people purpose and meaning in their life that maybe doesn’t necessarily revolve around punching out a timecard, and showing up to work at 8 o’clock in the morning every day. I want to believe that that future exists. There are a lot of people right now that are brilliant people that have a lot that they could be contributing in many different ways – intellectually, artistically – that are currently not given that opportunity, because they maybe grew up in a place that didn’t have the right opportunities for them to get the right education so that they could apply their skills in that way, and many of them are doing jobs that I think don’t allow them to use their full potential.

So I’m hoping that, as we automate many of those jobs, that more people will be able to find work that provides meaning and purpose to them and allows them to actually use their talents and make the world a better place, but I acknowledge that it’s not going to be an easy transition. I do think that there’s going to be a lot of implications for how our government works and how our economy works, and I hope that we can figure out a way to help defray some of the pain that will happen during this transition.

You talked about two things. You mentioned income inequality as a thing, but then you also said, “I think we’re going to have unemployment from these technologies.” Separating those for a minute and just looking at the unemployment one for a minute, you say things are never that simple. But with the exception of the Great Depression, which nobody believes was caused by technology, unemployment has been between 5% and 10% in this country for 250 years and it only moves between 5% and 10% because of the business cycle, but there aren’t counterexamples. Just imagine if your job was you had animals that performed physical labor. They pulled, and pushed, and all of that. And somebody made the steam engine. That was disruptive. But even when we had that, we had electrification of industry. We adopted steam power. We went from 5% to 85% of our power being generated by steam in just 22 years. And even when you had that kind of disruption, you still didn’t have any increases in unemployment. I’m curious, what is the mechanism, in your mind, by which this time is different?

I think that’s a good point that you raise, and I actually haven’t studied all of those other transitions that our society has gone through. I’d like to believe that it’s not different. That would be a great story if we could all come to agreement, that we won’t see increased unemployment from AI. I think the reason why I’m a little bit worried is that I think this transition in some fields will happen quickly, maybe more quickly than some of the transitions in the past did. Just because, as I was saying, AI is easier to replicate than some other technologies, like electrification of a country. It takes a lot of time to build out physical infrastructure that can actually deliver that. Whereas I think for a lot of AI applications, that infrastructure will be cheaper and quicker to build, so the velocity of the change might be faster and that could lead to a little bit more shock. But it’s an interesting point you raise, and I certainly hope that we can find a way through this transition that is less painful than I’m worried it could be.

Do you worry about misuse of AI? I’m an optimist on all of this. And I know that every time we have some new technology come along, people are always looking at the bad cases. You take something like the internet, and the internet has overwhelmingly been a force for good. It connects people in a profound way. There’s a million things. And yeah, some people abuse it. But on net, all technology, I believe, almost all technology on net is used for good because I think, on net, people, on average, are more inclined to build than to destroy. That being said, do you worry about nefarious uses of AI, specifically in warfare?

Yeah. I think that there definitely are going to be some scary killer robots that armies make. Armies love to build machinery that kills things and AI will help them do that, and that will be scary. I think it’s interesting, like, where is the real threat going to come from? Sometimes, I feel like the threat of malevolent AI being deployed against people is going to be more subtle than that. It’s going to be more about things that you can do after compromising the cyber systems of some adversary, and things that you can do to manipulate them using AI. There’s been a lot of discussion about Russian involvement in the 2016 election in the US, and that wasn’t about sending evil killer robots. It was more about changing people’s opinions, or attempting to change their opinions, and AI will give entities tools to do that on a scale that maybe we haven’t seen before. I think there may be nefarious uses of AI that are more subtle and harder to see than a full-frontal assault from a movie with evil killer robots. I do worry about all of those things, but I also share your optimism. I think we humans, we make lots of mistakes and we shouldn’t give ourselves too easy of a time here. We should learn from those mistakes, but we also do a lot of things well. And we have used technologies in the past to make the world better, and I hope AI will do so as well.

Pedro Domingos wrote a book called The Master Algorithm where he says there are all of these different tools and techniques that we use in artificial intelligence. And he surmises that there is probably a grandparent algorithm, the master algorithm, that can solve any problem, any range of problems. Does that seem possible to you or likely, or do you have any thoughts on that?

I think it’s a little bit far away, at least from AI as it’s practiced today. Right now, the practical, on-the-ground experience of researchers trying to use AI to do something new is filled with a lot of pain, suffering, blood, sweat, tears, and perseverance if they are to succeed, and I see that in my lab every day. Most of the researchers – and I have brilliant researchers in my lab that are working very hard, and they’re doing amazing work. And most of the things they try fail. And they have to keep trying. I think that’s generally the case right now across all the people that are working on AI. The thing that’s different is we’ve actually started to see some big successes, along with all of those more frustrating everyday occurrences. So I do think that we’re making the progress, but I think having a master algorithm that’s pushbutton that can solve any problem you pose to it that’s something that’s hard for me to conceive of with today’s state of artificial intelligence.

AI, of course, it’s doubtful we’ll have another AI winter because, like you said, it’s kind of delivering the goods, and there have been three things that have happened that made that possible. One of them is better hardware, and obviously you’re part of that world. The second thing is better algorithms. We’ve learned to do things a lot smarter. And the third thing is we have more data, because we are able to collect it, and store it, and whatnot. Assuming you think the hardware is the biggest of the driving factors, what would you think has been the bigger advance? Is it that we have so much more data, or so much better algorithms?

I think the most important thing is more data. I think the algorithms that we’re using in AI right now are, more or less, clever variations of algorithms that have been around for decades, and used to not work. When I was a PhD student and I was studying AI, all the smart people told me, “Don’t work with deep learning, because it doesn’t work. Use this other algorithm called support vector machines.” At the time, the hope was that support vector machines would be the master algorithm. So I stayed away from deep learning back then because, at the time, it didn’t work. I think now we have so much more data, and deep learning models have been so successful at taking advantage of that data, that we’ve been able to make a lot of progress. I wouldn’t characterize deep learning as a master algorithm, though, because deep learning is like a fuzzy cloud of things that have some relationships to each other, but actually finding a space inside that fuzzy cloud to solve a particular problem requires a lot of human ingenuity.

Is there a phrase – it’s such a jargon-loaded industry now – are there any of the words that you just find rub you the wrong way? Because they don’t mean anything and people use them as if they do? Do you have anything like that?

Everybody has pet peeves. I would say that my biggest pet peeve right now is the word neuromorphic. I have almost an allergic reaction every time I hear that word, mostly because I don’t think we know what neurons are or what they do, and I think modeling neurons in a way that actually could lead to brain simulations that actually worked is a very long project that we’re decades away from solving. I could be wrong on that. I’m always waiting for somebody to prove me wrong. Strong opinions, weakly held. But so far, neuromorphic is a word that I just have an allergic reaction to, every time.

Tell me about what you do. You are the head of Applied AI Research at NVIDIA, so what does your day look like? What does your team work on? What’s your biggest challenge right now, and all of that?

NVIDIA sells GPUs which have powered most of the deep learning revolution, so pretty much all of the work that’s going on with deep learning across the entire world right now, runs on NVIDIA GPUs. And that’s been very exciting for NVIDIA, and exciting for me to be involved in building that. The next step, I think, for NVIDIA is to figure out how to use AI to change the way that it does its own work. NVIDIA is incentivized to do this because we see the value that AI is bringing to our customers. Our GPU sales have been going up quite a bit because we’re providing a lot of value to everyone else who’s trying to use AI for their own problems. So the next step is to figure out how to use AI for NVIDIA’s problems directly. Andrew Ng, who I used to work with, has this great quote that “AI is the new electricity,” and I believe that. I think that we’re going to see AI applied in many different ways to many different kinds of problems, and my job at NVIDIA is to figure out how to do that here. So that’s what my team focuses on.

We have projects going on in quite a few different domains, ranging from graphics to audio, and text, and others. We’re trying to change the way that everything at NVIDIA happens: from chip design, to video games, and everything in between. As far as my day-to-day work goes, I lead this team, so that means I spend a lot of time talking with people on the team about the work that they’re doing, and trying to make sure they have the right resources, data, the right hardware, the right ideas, the right connections, so that they can make progress on problems that they’re trying to solve. Then when we have prototypes that we’ve built showing how to apply AI to a particular problem, then I work with people around the company to show them the promise of AI applied to problems that they care about.

I think one of the things that’s really exciting to me about this mission is that we’re really trying to change NVIDIA’s work at the core of the company. So rather than working on applied AI that could maybe help some peripheral part of the company—something that would merely be nice to have—we’re actually trying to solve very fundamental problems that the company faces with AI. Hopefully we’ll be able to change the way that the company does business, and transform NVIDIA into an AI company, and not just a company that makes hardware for AI.

You are the head of the Applied AI Research. Is there a Pure AI Research group, as well?

Yes, there is.

So everything you do, you have an internal customer for already?

That’s the idea. To me, the difference between fundamental research and applied research is more a question of emphasis on what’s the fundamental goal of your work. If the goal is academic novelty, that would be fundamental research. Our goal is, we think about applications all the time, and we don’t work on problems unless we have a clear application that we’re trying to build that could use a solution.

In most cases, do other groups come to you and say, “We have this problem we really want to solve. Can you help us?” Or is the science nascent enough that you go and say, “Did you know that we can actually solve this problem for you?”

It kind of works all of those ways. We have a list of projects that people around the company have proposed to us, and we also have a list of projects that we ourselves think are interesting to look at. There’s also a few projects that my management tells me, “I really want you to look at this problem. I think it’s really important.” We get input from all directions, and then prioritize, and go after the ones we think are most feasible, and most important.

And do you find a talent shortage? You’re NVIDIA on the one hand, but on the other hand, you know: it’s AI.

I think the entire field, no matter what company you work at, the entire field has a shortage of qualified scientists that can do AI research, and that’s despite the fact that the amount of people jumping into AI is increasing every year. If you go to any of the academic AI conferences, you’ll see how much energy and how much excitement, and how many people that are there that didn’t used to be there. That’s really wonderful to see. But even with all of that growth and change, it is a big problem for the industry. So, to all of your listeners that are trying to figure out what to do next, come work on AI. We have lots of fun problems to work on, and not nearly enough people doing it.

I know a lot of your projects I’m sure you can’t talk about, but tell me something you have done, that you can talk about, and what the goal was, and what you were able to achieve. Give us a success story.

I’ll give you one that’s relevant to the last question that you asked, which is about how to find talent for AI. We’ve actually built a system that can match candidates to job openings at NVIDIA. Basically, it can predict how well we think a particular candidate is a fit for a particular job. That system is actually performing pretty well. So we’re trialing it with hiring managers around the company to figure out if it can help them be more efficient in their work as they search for people to come join NVIDIA.
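
The interview does not describe how that matching model works, so purely as a toy illustration of the general idea, here is a sketch that ranks candidate resumes against a job description with TF-IDF cosine similarity; the texts and names are invented, and a real system would be far more sophisticated.

# Toy illustration of ranking candidates against a job opening with plain
# text similarity (TF-IDF + cosine). Not NVIDIA's system; just the general
# matching idea with made-up data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Research engineer: deep learning, GPU programming, CUDA, speech recognition."
resumes = {
    "candidate_a": "PhD in machine learning, CUDA kernels, speech recognition models on GPUs.",
    "candidate_b": "Front-end developer, JavaScript, CSS, marketing sites.",
    "candidate_c": "Audio DSP engineer, some deep learning experience, Python.",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + list(resumes.values()))

# Similarity of each resume (rows 1..n) to the job description (row 0).
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for name, score in sorted(zip(resumes, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")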

That looks like a game, doesn’t it? I assume you have a pool of resumes or LinkedIn profiles or whatever, and then you have a pool of successful employees, and you have a pool of job descriptions and you’re trying to say, “How can I pull from that big pool, based on these job descriptions, and actually pick the people that did well in the end?”

That’s right.

That’s like a game, right? You have points.

That’s right.

Would you ever productize anything, or is everything that you’re doing just for your own use?

We focus primarily on building prototypes, not products, in my team. I think that’s what the research is about. Once we build a prototype that shows promise for a particular problem, then we work with other people in the company to get that actually deployed, and they would be the people that think about business strategy about whether something should be productized, or not.

But you, in theory, might turn “NVIDIA Resume Pro” into something people could use?

Possibly. NVIDIA also works with a lot of other companies. As we enable companies in many different parts of the economy to apply AI to their problems, we work with them to help them do that. So it might make more sense for us, for example, to deliver this prototype to some of our partners that are in a position to deliver products like this more directly, and then they can figure out how to enlarge its capabilities, and make it more general to try to solve bigger problems that address their whole market and not just one company’s needs. Partnering with other companies is good for NVIDIA because it helps us grow AI, which is something we want to do because, as AI grows, we grow. Personally, I think for some of the things that we’re working on, it just doesn’t really make sense—it’s not really in NVIDIA’s DNA to productize them directly, because it’s just not the business model that the company has.

I’m sure you’re familiar with the “right to know” legislation in Europe: the idea that if an AI makes a decision about you, you have a right to know why it made that decision. AI researchers are like, “It’s not necessarily that easy to do that.” So in your case, your AI would actually be subject to that. It would say, “Why did you pick that person over this person for that job?” Is that an answerable question?

First of all, I don’t think that this system – or I can’t imagine – using it to actually make hiring decisions. I think that would be irresponsible. This system makes mistakes. What we’re trying to do is improve productivity. If instead of having to sort through 200 resumes to find 3 that I want to talk to—if I can look at 10 instead—then that’s a pretty good improvement in my productivity, but I’m still going to be involved, as a hiring manager, to figure out who is the right fit for my jobs.

But an AI excluded 190 people from that position.

It didn’t exclude them. It sorted them, and then the person decided how to allocate their time in a search.

Let’s look at the problem more abstractly. What do you think, just in general, about the idea that every decision an AI makes, should be, and can be, explained?

I think it’s a little bit utopian. Certainly, I don’t have the ability to explain all of the decisions that I make, and people, generally, are not very good at explaining their decisions, which is why there are significant legal battles going on about factual things that people see in different ways, and remember in different ways. So asking a person to explain their intent is actually a very complicated thing, and we’re not actually very good at it. So I don’t actually think that we’re going to be able to enforce that AI is able to explain all of its decisions in a way that makes sense to humans. I do think that there are things that we can do to make the results of these systems more interpretable. For example, on the resume-job description matching system that I mentioned earlier, we’ve built a prototype that can highlight parts of the resume that were most interesting to the model, both in a positive and in a negative sense. That’s a baby step towards interpretability, so that if you were to pull up that job description and a particular person, you could see how they matched, and that might explain to you what the model was paying attention to as it made a ranking.
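
The internals of that highlighting prototype are not described here, so as a generic, assumption-laden sketch of one common baseline: attribute to each term a contribution equal to its TF-IDF weight times a linear model’s learned coefficient, then surface the most positive and most negative terms. The tiny training set and helper names below are invented for illustration.

# Sketch of term-level attribution for a linear text classifier: each
# term's contribution is its TF-IDF weight times the model's coefficient,
# with positive and negative contributions listed separately. A generic
# interpretability baseline, not NVIDIA's prototype.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative training set: 1 = good fit for the role, 0 = not.
texts = [
    "deep learning cuda gpu speech recognition research",
    "cuda kernels machine learning phd gpu",
    "javascript css marketing front end developer",
    "sales account management excel powerpoint",
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def explain(resume_text, top_k=3):
    """Return the terms pushing the score up and down the most."""
    weights = vectorizer.transform([resume_text]).toarray()[0] * clf.coef_[0]
    terms = vectorizer.get_feature_names_out()
    order = np.argsort(weights)
    positives = [(terms[i], round(float(weights[i]), 3)) for i in order[::-1][:top_k] if weights[i] > 0]
    negatives = [(terms[i], round(float(weights[i]), 3)) for i in order[:top_k] if weights[i] < 0]
    return positives, negatives

print(explain("phd in deep learning, cuda, some marketing experience"))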

It’s funny because when you hear reasons why people exclude a resume, I remember one person said, “I’m not going to hire him. He has the same first name as somebody else on the team. That’d just be too confusing.” And somebody else I remember said that the applicant was a vegan and the place they like to order pizza from didn’t have a vegan alternative that the team liked to order from. Those are anecdotal of course, but people use all kinds of other things when they’re thinking about it.

Yeah. That’s actually one of the reasons why I’m excited about this particular system is that I feel like we should be able to construct it in a way that actually has fewer biases than people do, because we know that people harbor all sorts of biases. We have employment laws that guide us to stay away from making decisions based on protected classes. I don’t know if veganism is a protected class, but it’s verging on that. If you’re making hiring decisions based on people’s personal lifestyle choices, that’s suspect. You could get in trouble for that. Our models, we should be able to train them to be more dispassionate than any human could be.

We’re running out of time. Let’s close up by: do you consume science fiction? Do you ever watch movies or read books or any of that? And if so, is there any of it that you look at, especially any that portrays artificial intelligence, like Ex Machina, or Her, or Westworld or any of that stuff, that you look at and you’re like, “Wow, that’s really interesting,” or “That could happen,” or “That’s fascinating,” or anything like that?

I do consume science fiction. I love science fiction. I don’t actually feel like current science fiction matches my understanding of AI very well. Ex Machina, for example, that was a fun movie. I enjoyed watching that movie, but I felt, from a scientific point of view, it just wasn’t very interesting. I was talking about our built-in models of the world. One of the things that humans, over thousands of years, have drilled into our heads is that there’s somebody out to get you. We have a large part of our brain that’s worrying all the time, like, “Who’s going to come kill me tonight? Who’s going to take away my job? Who’s going to take my food? Who’s going to burn down my house?” There’s all these things that we worry about. So a lot of the depictions of AI in science fiction inflame that part of the brain that is worrying about the future, rather than actually speak to the technology and its potential.

I think probably the part of science fiction that has had the most impact on my thoughts about AI is Isaac Asimov’s Three Laws. Those, I think, are pretty classic, and I hope that some of them can be adapted to the kinds of problems that we’re trying to solve with AI, to make AI safe, and make it possible for people to feel confident that they’re interacting with AI, and not worry about it. But I feel like most of science fiction is, especially movies – maybe books can be a little bit more intellectual and maybe a little bit more interesting – but especially movies, it just sells more movies to make people afraid, than it does to show people a mundane existence where AI is helping people live better lives. It’s just not nearly as compelling of a movie, so I don’t actually feel like popular culture treatment of AI is very realistic.

All right. Well, on that note, I say, we wrap up. I want to thank you for a great hour. We covered a lot of ground, and I appreciate you traveling all that way with me.

It was fun.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here

Byron Reese: This is “Voices in AI” brought to you by Gigaom. I’m Byron Reese. Today, our guest is Bryan Catanzaro. He is the head of Applied AI Research at NVIDIA. He has a BS in computer science and Russian from BYU, an MS in electrical engineering from BYU, and a PhD in both electrical engineering and computer science from UC Berkeley. Welcome to the show, Bryan.

Bryan Catanzaro: Thanks. It’s great to be here.

Let’s start off with my favorite opening question. What is artificial intelligence?

It’s such a great question. I like to think about artificial intelligence as making tools that can perform intellectual work. Hopefully, those are useful tools that can help people be more productive in the things that they need to do. There’s a lot of different ways of thinking about artificial intelligence, and maybe the way that I’m talking about it is a little bit more narrow, but I think it’s also more connected with why artificial intelligence is changing so many companies and so many things about the way we do things in the world economy today: it actually is a practical thing that helps people be more productive in their work. We’ve been able to create industrialized societies with a lot of mechanization that helps people do physical work. Artificial intelligence is making tools that help people do intellectual work.

I ask you what artificial intelligence is, and you said it’s doing intellectual work. That’s sort of using the word to define it, isn’t it? What is that? What is intelligence?

Yeah, wow…I’m not a philosopher, so I actually don’t have like a…

Let me try a different tack. Is it artificial in the sense that it isn’t really intelligent and it’s just pretending to be, or is it really smart? Is it actually intelligent and we just call it artificial because we built it?

I really liked this idea from Yuval Harari that I read a while back, where he said there’s a difference between intelligence and sentience: intelligence is more about the capacity to do things, and sentience is more about being self-aware and being able to reason in the way that human beings reason. My belief is that we’re building increasingly intelligent systems that can perform what I would call intellectual work. Things about understanding data, understanding the world around us that we can measure with sensors like video cameras or audio, or that we can write down in text, or record in some form. The process of interpreting that data and making decisions about what it means, that’s intellectual work, and that’s something that we can create machines to be more and more intelligent at. As for the definitions of artificial intelligence that move more towards consciousness and sentience, I think we’re a lot farther away from that as a community. There are definitely people that are super excited about making generally intelligent machines, but I think that’s farther away, and I don’t know how to define what general intelligence is well enough to start working on that problem myself. My work focuses mostly on practical things—helping computers understand data and make decisions about it.

Fair enough. I’ll only ask you one more question along those lines. I guess even down in narrow AI, though, if I had a sprinkler that comes on when my grass gets dry, it’s responding to its environment. Is that an AI?

I’d say it’s a very small form of AI. You could have a very smart sprinkler that was better than any person at figuring out when the grass needed to be watered. It could take into account all sorts of sensor data. It could take into account historical information. It might actually be more intelligent at figuring out how to irrigate than a human would be. And that’s a very narrow form of intelligence, but it’s a useful one. So yeah, I do think that could be considered a form of intelligence. Now it’s not philosophizing about the nature of irrigation and its harm on the planet or the history of human interventions on the world, or anything like that. So it’s very narrow, but it’s useful, and it is intelligent in its own way.
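
A minimal sketch of the kind of decision such a sprinkler might make, combining a current sensor reading with simple historical context; the thresholds and inputs below are invented purely for illustration and are not from the conversation:

# A rule-based stand-in for the "smart sprinkler" described above.
# Thresholds and features are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Reading:
    soil_moisture: float      # 0.0 (bone dry) to 1.0 (saturated)
    rain_forecast_mm: float   # expected rainfall over the next 24 hours
    days_since_watering: int

def should_water(r: Reading) -> bool:
    if r.rain_forecast_mm > 5.0:   # rain is expected to do the job
        return False
    if r.soil_moisture < 0.25:     # clearly dry
        return True
    # Borderline moisture: fall back on how long it has been.
    return r.days_since_watering >= 3

print(should_water(Reading(soil_moisture=0.2, rain_forecast_mm=1.0, days_since_watering=2)))
# Prints True: dry soil and little rain coming.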

Fair enough. I do want to talk about AGI in a little while. I have some questions around…We’ll come to that in just a moment. Just in the narrow AI world, just in your world of using data and computers to solve problems, if somebody said, “Bryan, what is the state-of-the-art? Where are we at in AI? Is this the beginning and you ‘ain’t seen nothing yet’? Or are we really doing a lot of cool things, and we are well underway to mastering that world?”

I think we’re just at the beginning. We’ve seen so much progress over the past few years. It’s been really quite astonishing, the kind of progress we’ve seen in many different domains. It all started out with image recognition and speech recognition, but it’s gone a long way from there. A lot of the products that we interact with on a daily basis over the internet are using AI, and they are providing value to us. They provide our social media feeds, they provide recommendations and maps, they provide conversational interfaces like Siri or Android Assistant. All of those things are powered by AI and they are definitely providing value, but we’re still just at the beginning. There are so many things we don’t know yet how to do and so many underexplored problems to look at. So I believe we’ll continue to see applications of AI come up in new places for quite a while to come.

If I took a little statuette of a falcon, let’s say it’s a foot tall, and I showed it to you, and then I showed you some photographs, and said, “Spot the falcon.” And half the time it’s sticking halfway behind a tree, half the time it’s underwater; one time it’s got peanut butter smeared on it. A person can do that really well, but computers are far away from that. Is that an example of us being really good at transfer learning, since we’re used to knowing what things with peanut butter on them look like? What is it that people are doing there that computers have a hard time doing?

I believe that people have evolved, over a very long period of time, to operate on planet Earth with the sensors that we have. So we have a lot of built-in knowledge that tells us how to process the sensors that we have and models the world. A lot of it is instinctual, and some of it is learned. I have young children, like a year-old or so. They spend an awful lot of time just repetitively probing the world to see how it’s going to react when they do things, like pushing on a string, or a ball, and they do it over and over again because I think they’re trying to build up their models about the world. We have actually very sophisticated models of the world that maybe we take for granted sometimes because everyone seems to get them so easily. It’s not something that you have to learn in school. But these models are actually quite useful, and they’re more sophisticated than – and more general than – the models that we currently can build with today’s AI technology.

To your question about transfer learning, I feel like we’re really good at transfer learning within the domain of things that our eyes can see on planet Earth. There are probably a lot of situations where an AI would be better at transfer learning; it might actually have fewer assumptions baked in about how the world is structured, how objects look, and what kind of composition of objects is actually permissible. I guess I’m just trying to say we shouldn’t forget that we come with a lot of context. That’s instinctual, and we use it, and it’s very sophisticated.

Do you take from that that we ought to learn how to embody an AI and just let it wander around the world, bumping into things and poking at them and all of that? Is that what you’re saying? How do we overcome that?

That’s an interesting question you raise. I’m not personally working on trying to build artificial general intelligence, but it will be interesting for those people that are working on it to see what kind of childhood is necessary for an AI. I do think that childhood plays a really important part in developing human intelligence, because it helps us build and calibrate these models of how the world works, which we then apply to all sorts of things like your question of the falcon statue. Will computers need things like that? It’s possible. We’ll have to see. I think one of the things that’s different about computers is that they’re a lot better at transmitting information identically, so it may be the kind of thing that we can train once, and then just use repeatedly – as opposed to people, where the process of replicating a person is time-consuming and not exact.

But that transfer learning problem isn’t really an AGI problem at all, though. Right? We’ve taught a computer to recognize a cat, by giving it a gazillion images of a cat. But if we want to teach it how to recognize a bird, we have to start over, don’t we?

I don’t think we generally start over. I think most of the time, if people wanted to create a new classifier, they would use transfer learning from an existing classifier that had been trained on a wide variety of different object types. It’s actually not very hard to do that, and people do that successfully all the time. So at least for image recognition, I think transfer learning works pretty well. Other kinds of domains can be a little bit more challenging. But at least for image recognition, we’ve been able to find a set of higher-level features that are very useful in discriminating between all sorts of different kinds of objects, even objects that we haven’t seen before.
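
To make that concrete, here is a minimal transfer-learning sketch, not the speaker’s setup; the backbone, class count, and hyperparameters are placeholders chosen for illustration:

# Reuse features from a network trained on many object types and fit only
# a small new head for a new category (e.g. "bird" vs. "not bird").
import torch
import torch.nn as nn
from torchvision import models

# Backbone pretrained on a broad image dataset (ImageNet weights here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so its weights stay fixed.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer; this new head is the only part trained from scratch.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training then loops over labeled images exactly as usual, but only the
# new head's parameters receive gradient updates.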

What about audio? Because I’m talking to you now and I’m snapping my fingers. You don’t have any trouble continuing to hear me, but a computer trips over that. What do you think is going on in people’s minds? Why are we good at that, do you think? To get back to your point about how we live on Earth, it’s one of those Earth things we do. But as a general rule, how do we teach that to a computer? Is teaching it to hear something the same as teaching it to see something?

I think it’s similar. The best speech recognition accuracies come from systems that have been trained on huge amounts of data, and there does seem to be a relationship that the more data we can train a model on, the better the accuracy gets. We haven’t seen the end of that yet. I’m pretty excited about the prospects of being able to teach computers to continually understand audio, better and better. However, I wanted to point out, humans, this is kind of our superpower: conversation and communication. You watch birds flying in a flock, and the birds can all change direction instantaneously, and the whole flock just moves, and you’re like, “How do you do that and not run into each other?” They have a lot of built-in machinery that allows them to flock together. Humans have a lot of built-in machinery for conversation and for understanding spoken language. The pathways for speaking and the pathways for hearing evolve together, so they’re really well-matched.

With computers trying to understand audio, we haven’t gotten to that point yet. I remember some of the experiments that I’ve done in the past with speech recognition, where the recognition performance was very sensitive to compression artifacts that were actually not audible to humans. We could take a recording, like this one, and recompress it in a way that sounded identical to a person, and observe a measurable difference in the recognition accuracy of our model. That was a little disconcerting, because we’re trying to train the model to be invariant to all the things that humans are invariant to, but it’s actually quite hard to do that. We certainly haven’t achieved that yet. Often, our models are still what we would call “overfitting”, where they’re paying attention to a lot of details that help them perform the tasks that we’re asking them to perform, but that aren’t actually helpful for solving the fundamental tasks we care about. We’re continually trying to improve our understanding of the tasks that we’re solving so that we can avoid this, but we’ve still got more work to do.
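
A sketch of the kind of robustness check being described; transcribe() is a placeholder for whatever recognizer is under test, and the bitrate is an arbitrary choice:

# Recompress a clip so it sounds the same to a person, then compare the
# recognizer's output on the two versions. A large gap suggests the model
# is keying on encoding details rather than on what was actually said.
import subprocess
from jiwer import wer  # word error rate between two transcripts

def recompress(src: str, dst: str, bitrate: str = "64k") -> None:
    # Round-trip through a lossy codec; artifacts may be inaudible to humans.
    subprocess.run(["ffmpeg", "-y", "-i", src, "-b:a", bitrate, dst], check=True)

def transcribe(path: str) -> str:
    raise NotImplementedError("plug in the speech model being evaluated")

original_text = transcribe("clip.wav")
recompress("clip.wav", "clip_recompressed.mp3")
recompressed_text = transcribe("clip_recompressed.mp3")

print("disagreement (WER):", wer(original_text, recompressed_text))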

My standard question, when I’m put in front of a chatbot or one of those devices that sit on everybody’s desktop (I can’t say their names out loud because they’ll start talking to me right now), is “What is bigger, a nickel or the sun?” To date, nothing has ever been able to answer that question. It doesn’t know how “sun” is spelled. “Whose son? The sun? Nickel? That’s actually a coin.” All of that. What do we have to get good at for the computer to answer that question? Run me down the litany of all the things we can’t do, or aren’t doing well yet, because no system I’ve ever tried has answered that correctly.

I think one of the things is that we’re typically not building chat systems to answer trivia questions just like that. I think if we were building a special-purpose trivia system for questions like that, we probably could answer it. IBM Watson did pretty well on Jeopardy, because it was trained to answer questions like that. I think we definitely have the databases, the knowledge bases, to answer questions like that. The problem is that kind of a question is really outside of the domain of most of the personal assistants that are being built as products today because honestly, trivia bots are fun, but they’re not as useful as a thing that can set a timer, or check the weather, or play a song. So those are mostly the things that those systems are focused on.

Fair enough, but I would differ. You can go to Wolfram Alpha and say, “What’s bigger, the Statue of Liberty or the Empire State Building?” and it’ll answer that. And you can ask Amazon’s product that same question, and it’ll answer it. Is that because those are legit questions and my question is not legit, or is it because we haven’t taught systems to disambiguate very well, and so they don’t really know what I mean when I say “sun”?

I think that’s probably the issue. There’s a language modeling problem when you say, “What’s bigger, a nickel or the sun?” The sun can mean so many different things, like you were saying. Nickel, actually, can be spelled a couple of different ways and has a couple of different meanings. Dealing with ambiguities like that is a little bit hard. I think when you ask that question of me, I categorize it as a trivia question, and so I’m able to disambiguate all of those things, and look up the answer in my little knowledge base in my head, and answer your question. But I actually don’t think that particular question is impossible to solve. I just think it’s not been a focus to try to solve questions like that, and that’s why systems aren’t good at them.

AIs have done a really good job playing games: Deep Blue, Watson, AlphaGo, and all of that. I guess those are constrained environments with a fixed set of rules, and it’s easy to understand who wins, and what a point is, and all that. What is going to be the next watershed event that happens? Now they can outbluff people in poker. What’s something, a year, or two years, or five years down the road, where one day the world wasn’t like that, and the next day it was? The way that one day, the best Go player in the world was suddenly a machine.

The thing that’s on my mind for that right now is autonomous vehicles. I think it’s going to change the world forever to unchain people from the driver’s seat. It’s going to give people hugely increased mobility. I have relatives whose doctors have asked them to stop driving cars, because it’s no longer safe for them to be doing that, and it restricts their ability to get around the world, and that frustrates them. It’s going to change the way that we all live. It’s going to change the real estate markets, because we won’t have to park our cars in the same places that we’re going to. It’s going to change some things about the economy, because there are going to be new delivery mechanisms that will become economically viable. Intelligence that can help robots essentially drive around the roads, that’s the next thing that I’m most excited about, that I think is really going to change everything.

We’ll come to that in just a minute, but I’m actually asking…We have self-driving cars, and on an evolutionary basis, they’ll get a little better and a little better. You’ll see them more and more, and then someday there’ll be even more of them, and then there’ll be this and this and this. It’s not that surprise moment, though, of AlphaGo just beating Lee Sedol at Go. I’m wondering if there is something else like that, a binary milestone that we can all keep our eyes open for?

I don’t know. While we have self-driving cars already, I don’t have a self-driving car that could, say, let me sit in it at nighttime, go to sleep, wake up, and find it has brought me to Disneyland. I would like that kind of self-driving car, but that car doesn’t exist yet. I think self-driving trucks that can go cross country carrying stuff, that’s going to radically change the way that we distribute things. I do think that, as you said, we’re on the evolutionary path to self-driving cars, but there are going to be some discrete moments when people actually start using them to do new things that will feel pretty significant.

As far as games and stuff, and computers being better at games than people, it’s funny, because I feel like Silicon Valley sometimes has a very linear idea of intelligence: that one person is smarter than another person, maybe because of an SAT score, or an IQ test, or something. They use that sort of linear view of intelligence, to the point where some people feel threatened by artificial intelligence, because they extrapolate that artificial intelligence is getting smarter and smarter along this linear scale, and that’s going to lead to all sorts of surprising things, like Lee Sedol losing at Go, but on a much bigger scale for all of us. I feel kind of the opposite. Intelligence is such a multidimensional thing. The fact that a computer is better at Go than I am doesn’t really change my life very much, because I’m not very good at Go. I don’t play Go. I don’t consider Go to be an important part of my intelligence. Same with chess. When Garry Kasparov lost to Deep Blue, that didn’t threaten my intelligence. I define the way that I work, and how I add value to the world, and what things make me happy on a lot of other axes besides “Can I play chess?” or “Can I play Go?” I think that speaks to the idea that intelligence really is very multifaceted. There are a lot of different kinds – there are probably thousands or millions of different kinds of intelligence – and it’s not very linearizable.

Because of that, I feel like, as we watch artificial intelligence develop, we’re going to see increasingly intelligent machines, but they’re going to be increasingly intelligent in some very narrow domains, like “this is a better Go player than me”, or “this is a better driver than me”. That’s going to be incredibly useful, but it’s not going to change the way that I think about myself, or about my work, or about what makes me happy, because I feel like there are so many more dimensions of intelligence that are going to remain the province of humans. It’s going to take a very long time, if ever, for artificial intelligence to become better than us at all of them, because, as I said, I don’t believe that intelligence is a linearizable thing.

And you said you weren’t a philosopher. I guess the thing that’s interesting to people is that there was a time when information couldn’t travel faster than a horse. And then the train came along, and information could travel faster. That’s why in the old Westerns – if they ever made it onto the train, that was it, and they were out of range. Nothing traveled faster than the train. Then we had the telegraph and, all of a sudden, information could travel at the speed of light, which was this amazing thing. And then one time they ran these cables under the ocean, and somebody in England could talk to somebody in the United States instantly. Each one of those moments, I think, is an opportunity to pause, and reflect, and mark a milestone, and think about what it all means. I think that’s the way to look at the fact that a computer just beat these awesome poker players. It learned to bluff. You just kind of want to stop and think about that.

So let’s talk about jobs for a moment, because you’ve been talking around that for just a second. Just to set the question up: generally speaking, there are three views of what automation and artificial intelligence are going to do to jobs. One of them reflects kind of what you were saying: that there’s going to be a certain group of workers who are considered low-skilled, automation is going to take those low-skilled jobs, and a sizable part of the population is going to be locked out of the labor market, and it’s kind of like a permanent Great Depression over and over and over forever. Then there’s another view that says, “No, you don’t understand. There’s going to be an inflection point where they can do every single thing. They’re going to be a better conductor and a better painter and a better novelist and a better everything than us. Don’t think that you’ve got something that a machine can’t do.” Clearly, that isn’t your viewpoint, from what you said. Then there’s a third viewpoint that says, “No, in the past, even when we had these transformative technologies like electricity and mechanization, people took those technologies and used them to increase their own productivity and, therefore, their own incomes. And you never have unemployment go up because of them, because people just take them and make new jobs with them.” Of those three, or maybe a fourth one I didn’t cover, where do you find yourself?

I feel like I’m closer in spirit to number three. I’m optimistic. I believe that the primary way that we should expect economic growth in the future is through increased productivity. If you buy a house or buy some stock and you want to sell it 20 or 30 years from now, who’s going to buy it, and with what money, and why do you expect the price to go up? I think the answer to that question should be that the people in the future will have more money than us because they’re more productive, and that’s why we should expect our world economy to continue growing: because we find more productivity. I feel like this is actually necessary. World productivity growth has been slowing for the past several decades, and I feel like artificial intelligence is our way out of this trap where we have been unable to figure out how to grow our economy because our productivity hasn’t been improving. A necessary thing for all of us is to figure out how to improve productivity, and I think AI is the way that we’re going to do that for the next several decades.

The one thing that I disagreed with in your third statement was this idea that unemployment would never go up. I think nothing is ever that simple. I actually am quite concerned about job displacement in the short term. I think there will be people that suffer and, in fact, to a certain extent, this is already happening. The election of Donald Trump was an eye-opener to me that there really exist a lot of people who feel that they have been left behind by the economy, and they come to very different conclusions about the world than I might. I think it’s possible that, as we continue to digitize our society, and AI becomes a lever that some people will become very good at using to increase their productivity, we’re going to see increased inequality, and that worries me.

The primary challenges that I’m worried about, for our society, with the rise of AI, have to do more with making sure that we give people purpose and meaning in their life that maybe doesn’t necessarily revolve around punching out a timecard, and showing up to work at 8 o’clock in the morning every day. I want to believe that that future exists. There are a lot of people right now that are brilliant people that have a lot that they could be contributing in many different ways – intellectually, artistically – that are currently not given that opportunity, because they maybe grew up in a place that didn’t have the right opportunities for them to get the right education so that they could apply their skills in that way, and many of them are doing jobs that I think don’t allow them to use their full potential.

So I’m hoping that, as we automate many of those jobs, that more people will be able to find work that provides meaning and purpose to them and allows them to actually use their talents and make the world a better place, but I acknowledge that it’s not going to be an easy transition. I do think that there’s going to be a lot of implications for how our government works and how our economy works, and I hope that we can figure out a way to help defray some of the pain that will happen during this transition.

You talked about two things. You mentioned income inequality as a thing, but then you also said, “I think we’re going to have unemployment from these technologies.” Separating those, and just looking at the unemployment one for a minute: you say things are never that simple, but with the exception of the Great Depression, which nobody believes was caused by technology, unemployment has been between 5% and 10% in this country for 250 years, and it only moves between 5% and 10% because of the business cycle; there aren’t counterexamples. Just imagine if your job was working with animals that performed physical labor. They pulled, and pushed, and all of that. And then somebody made the steam engine. That was disruptive. But even then, even with the electrification of industry, even as we adopted steam power and went from 5% to 85% of our power being generated by steam in just 22 years, you still didn’t have any increase in unemployment. I’m curious, what is the mechanism, in your mind, by which this time is different?

I think that’s a good point that you raise, and I actually haven’t studied all of those other transitions that our society has gone through. I’d like to believe that it’s not different. That would be a great story if we could all come to agreement, that we won’t see increased unemployment from AI. I think the reason why I’m a little bit worried is that I think this transition in some fields will happen quickly, maybe more quickly than some of the transitions in the past did. Just because, as I was saying, AI is easier to replicate than some other technologies, like electrification of a country. It takes a lot of time to build out physical infrastructure that can actually deliver that. Whereas I think for a lot of AI applications, that infrastructure will be cheaper and quicker to build, so the velocity of the change might be faster and that could lead to a little bit more shock. But it’s an interesting point you raise, and I certainly hope that we can find a way through this transition that is less painful than I’m worried it could be.

Do you worry about misuse of AI? I’m an optimist on all of this. I know that every time we have some new technology come along, people are always looking at the bad cases. You take something like the internet, and the internet has overwhelmingly been a force for good. It connects people in a profound way. There’s a million things. And yeah, some people abuse it. But I believe that, on net, almost all technology is used for good, because people, on average, are more inclined to build than to destroy. That being said, do you worry about nefarious uses of AI, specifically in warfare?

Yeah. I think that there definitely are going to be some scary killer robots that armies make. Armies love to build machinery that kills things, and AI will help them do that, and that will be scary. I think it’s interesting to ask where the real threat is going to come from. Sometimes, I feel like the threat of malevolent AI being deployed against people is going to be more subtle than that. It’s going to be more about things that you can do after compromising the cyber systems of some adversary, and things that you can do to manipulate them using AI. There’s been a lot of discussion about Russian involvement in the 2016 election in the US, and that wasn’t about sending evil killer robots. It was more about changing people’s opinions, or attempting to change their opinions, and AI will give entities tools to do that on a scale that maybe we haven’t seen before. I think there may be nefarious uses of AI that are more subtle and harder to see than a full-frontal assault from a movie with evil killer robots. I do worry about all of those things, but I also share your optimism. I think we humans make lots of mistakes, and we shouldn’t give ourselves too easy a time here. We should learn from those mistakes, but we also do a lot of things well. We have used technologies in the past to make the world better, and I hope AI will do so as well.

Pedro Domingos wrote a book called The Master Algorithm, where he says there are all of these different tools and techniques that we use in artificial intelligence, and he surmises that there is probably a grandparent algorithm, the master algorithm, that can solve any problem, any range of problems. Does that seem possible to you, or likely, or do you have any thoughts on that?

I think it’s a little bit far away, at least from AI as it’s practiced today. Right now, the practical, on-the-ground experience of researchers trying to use AI to do something new is filled with a lot of pain, suffering, blood, sweat, tears, and perseverance if they are to succeed, and I see that in my lab every day. I have brilliant researchers in my lab who are working very hard and doing amazing work, and most of the things they try fail. And they have to keep trying. I think that’s generally the case right now across all the people that are working on AI. The thing that’s different is we’ve actually started to see some big successes, along with all of those more frustrating everyday occurrences. So I do think that we’re making progress, but a push-button master algorithm that can solve any problem you pose to it is something that’s hard for me to conceive of with today’s state of artificial intelligence.

With AI, of course, it’s doubtful we’ll have another AI winter, because, like you said, it’s kind of delivering the goods, and there have been three things that happened to make that possible. One of them is better hardware, and obviously you’re part of that world. The second thing is better algorithms; we’ve learned to do things a lot smarter. And the third thing is we have more data, because we are able to collect it, and store it, and whatnot. Assuming you think the hardware is the biggest of the driving factors, what would you say has been the bigger advance of the other two? Is it that we have so much more data, or so much better algorithms?

I think the most important thing is more data. I think the algorithms that we’re using in AI right now are, more or less, clever variations of algorithms that have been around for decades and used to not work. When I was a PhD student and I was studying AI, all the smart people told me, “Don’t work with deep learning, because it doesn’t work. Use this other algorithm called support vector machines.” At the time, the hope was that that was going to be the master algorithm. So I stayed away from deep learning back then because, at the time, it didn’t work. I think now we have so much more data, and deep learning models have been so successful at taking advantage of that data, that we’ve been able to make a lot of progress. I wouldn’t characterize deep learning as a master algorithm, though, because deep learning is like a fuzzy cloud of things that have some relationships to each other, but actually finding a space inside that fuzzy cloud to solve a particular problem requires a lot of human ingenuity.

Is there a phrase – it’s such a jargon-loaded industry now – are there any of the words that you just find rub you the wrong way? Because they don’t mean anything and people use them as if they do? Do you have anything like that?

Everybody has pet peeves. I would say that my biggest pet peeve right now is the word neuromorphic. I have almost an allergic reaction every time I hear that word, mostly because I don’t think we know what neurons are or what they do, and I think modeling neurons in a way that actually could lead to brain simulations that actually worked is a very long project that we’re decades away from solving. I could be wrong on that. I’m always waiting for somebody to prove me wrong. Strong opinions, weakly held. But so far, neuromorphic is a word that I just have an allergic reaction to, every time.

Tell me about what you do. You are the head of Applied AI Research at NVIDIA, so what does your day look like? What does your team work on? What’s your biggest challenge right now, and all of that?

NVIDIA sells GPUs which have powered most of the deep learning revolution, so pretty much all of the work that’s going on with deep learning across the entire world right now, runs on NVIDIA GPUs. And that’s been very exciting for NVIDIA, and exciting for me to be involved in building that. The next step, I think, for NVIDIA is to figure out how to use AI to change the way that it does its own work. NVIDIA is incentivized to do this because we see the value that AI is bringing to our customers. Our GPU sales have been going up quite a bit because we’re providing a lot of value to everyone else who’s trying to use AI for their own problems. So the next step is to figure out how to use AI for NVIDIA’s problems directly. Andrew Ng, who I used to work with, has this great quote that “AI is the new electricity,” and I believe that. I think that we’re going to see AI applied in many different ways to many different kinds of problems, and my job at NVIDIA is to figure out how to do that here. So that’s what my team focuses on.

We have projects going on in quite a few different domains, ranging from graphics to audio, and text, and others. We’re trying to change the way that everything at NVIDIA happens: from chip design, to video games, and everything in between. As far as my day-to-day work goes, I lead this team, so that means I spend a lot of time talking with people on the team about the work that they’re doing, and trying to make sure they have the right resources, data, the right hardware, the right ideas, the right connections, so that they can make progress on problems that they’re trying to solve. Then when we have prototypes that we’ve built showing how to apply AI to a particular problem, then I work with people around the company to show them the promise of AI applied to problems that they care about.

I think one of the things that’s really exciting to me about this mission is that we’re really trying to change NVIDIA’s work at the core of the company. So rather than working on applied AI that might help some peripheral part of the company, something that would merely be nice to have, we’re actually trying to solve very fundamental problems that the company faces with AI, and hopefully we’ll be able to change the way that the company does business, and transform NVIDIA into an AI company, and not just a company that makes hardware for AI.

You are the head of the Applied AI Research. Is there a Pure AI Research group, as well?

Yes, there is.

So everything you do, you have an internal customer for already?

That’s the idea. To me, the difference between fundamental research and applied research is more a question of emphasis on what’s the fundamental goal of your work. If the goal is academic novelty, that would be fundamental research. Our goal is, we think about applications all the time, and we don’t work on problems unless we have a clear application that we’re trying to build that could use a solution.

In most cases, do other groups come to you and say, “We have this problem we really want to solve. Can you help us?” Or is the science nascent enough that you go and say, “Did you know that we can actually solve this problem for you?”

It kind of works all of those ways. We have a list of projects that people around the company have proposed to us, and we also have a list of projects that we ourselves think are interesting to look at. There’s also a few projects that my management tells me, “I really want you to look at this problem. I think it’s really important.” We get input from all directions, and then prioritize, and go after the ones we think are most feasible, and most important.

And do you find a talent shortage? You’re NVIDIA on the one hand, but on the other hand, you know: it’s AI.

I think the entire field, no matter what company you work at, has a shortage of qualified scientists who can do AI research, and that’s despite the fact that the number of people jumping into AI is increasing every year. If you go to any of the academic AI conferences, you’ll see how much energy and excitement there is, and how many people are there who didn’t use to be there. That’s really wonderful to see. But even with all of that growth and change, it is a big problem for the industry. So, to all of your listeners that are trying to figure out what to do next: come work on AI. We have lots of fun problems to work on, and not nearly enough people doing it.

I know a lot of your projects I’m sure you can’t talk about, but tell me something you have done, that you can talk about, and what the goal was, and what you were able to achieve. Give us a success story.

I’ll give you one that’s relevant to the last question that you asked, which is about how to find talent for AI. We’ve actually built a system that can match candidates to job openings at NVIDIA. Basically, it can predict how good a fit we think a particular candidate is for a particular job. That system is actually performing pretty well, so we’re trialing it with hiring managers around the company to figure out if it can help them be more efficient in their work as they search for people to come join NVIDIA.

That looks like a game, doesn’t it? I assume you have a pool of resumes or LinkedIn profiles or whatever, and then you have a pool of successful employees, and you have a pool of job descriptions, and you’re trying to say, “How can I pull from that big pool, based on these job descriptions, and actually pick the people that did well in the end?”

That’s right.

That’s like a game, right? You have points.

That’s right.
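
For readers who want a feel for how such a ranking might work mechanically, here is a toy sketch, emphatically not NVIDIA’s system; the job posting, resumes, and similarity measure are all invented for illustration:

# Rank candidates against a job description with simple text similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_posting = "Deep learning research engineer: CUDA, PyTorch, speech and vision."
resumes = {
    "candidate_a": "Built CUDA kernels and trained PyTorch speech models.",
    "candidate_b": "Managed retail inventory and scheduling for a small team.",
    "candidate_c": "Published computer vision research; strong C++ and Python.",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_posting] + list(resumes.values()))

# Similarity of each resume to the posting; higher means a closer textual match.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for name, score in sorted(zip(resumes, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")

A real system would use far richer features and learn from historical hiring outcomes rather than raw text overlap, but the shape of the problem, scoring candidates against a posting and sorting, is the same.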

Would you ever productize anything, or is everything that you’re doing just for your own use?

We focus primarily on building prototypes, not products, in my team. I think that’s what the research is about. Once we build a prototype that shows promise for a particular problem, then we work with other people in the company to get that actually deployed, and they would be the people that think about business strategy about whether something should be productized, or not.

But you, in theory, might turn “NVIDIA Resume Pro” into something people could use?

Possibly. NVIDIA also works with a lot of other companies. As we enable companies in many different parts of the economy to apply AI to their problems, we work with them to help them do that. So it might make more sense for us, for example, to deliver this prototype to some of our partners that are in a position to deliver products like this more directly, and then they can figure out how to enlarge its capabilities, and make it more general, to try to solve bigger problems that address their whole market and not just one company’s needs. Partnering with other companies is good for NVIDIA because it helps us grow AI, which is something we want to do because, as AI grows, we grow. Personally, I think for some of the things we’re working on, it just doesn’t really make sense, and it’s not really in NVIDIA’s DNA, to productize them directly, because it’s just not the business model that the company has.

I’m sure you’re familiar with the “right to know” legislation in Europe: the idea that if an AI makes a decision about you, you have a right to know why it made that decision. AI researchers are like, “It’s not necessarily that easy to do that.” So in your case, your AI would actually be subject to that. It would say, “Why did you pick that person over this person for that job?” Is that an answerable question?

First of all, I can’t imagine using this system to actually make hiring decisions. I think that would be irresponsible. This system makes mistakes. What we’re trying to do is improve productivity. If, instead of having to sort through 200 resumes to find 3 that I want to talk to, I can look at 10 instead, then that’s a pretty good improvement in my productivity, but I’m still going to be involved, as a hiring manager, in figuring out who is the right fit for my jobs.

But an AI excluded 190 people from that position.

It didn’t exclude them. It sorted them, and then the person decided how to allocate their time in a search.

Let’s look at the problem more abstractly. What do you think, just in general, about the idea that every decision an AI makes, should be, and can be, explained?

I think it’s a little bit utopian. Certainly, I don’t have the ability to explain all of the decisions that I make, and people, generally, are not very good at explaining their decisions, which is why there are significant legal battles going on about factual things that people see in different ways, and remember in different ways. So asking a person to explain their intent is actually a very complicated thing, and we’re not very good at it. So I don’t think that we’re going to be able to enforce that AI can explain all of its decisions in a way that makes sense to humans. I do think that there are things we can do to make the results of these systems more interpretable. For example, on the resume-to-job-description matching system that I mentioned earlier, we’ve built a prototype that can highlight the parts of the resume that were most interesting to the model, both in a positive and in a negative sense. That’s a baby step towards interpretability, so that if you were to pull up that job description and a particular person, you could see how they matched, and that might explain to you what the model was paying attention to as it made a ranking.
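
As an illustration of what that kind of highlighting can look like in the simplest possible setting, here is a toy attribution sketch built on the same text-similarity idea as the earlier ranking example; it stands in for whatever attribution method the actual prototype uses:

# Because the similarity score is a dot product of term weights, each shared
# term's contribution can be read off directly and used for highlighting.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

job_posting = "Deep learning research engineer: CUDA, PyTorch, speech and vision."
resume = "Built CUDA kernels and trained PyTorch speech models."

vectorizer = TfidfVectorizer(stop_words="english")
job_vec, resume_vec = vectorizer.fit_transform([job_posting, resume]).toarray()

contributions = job_vec * resume_vec  # per-term products in the dot product
terms = np.array(vectorizer.get_feature_names_out())
for term, weight in sorted(zip(terms, contributions), key=lambda x: -x[1]):
    if weight > 0:
        print(f"{term}: {weight:.3f}")  # terms worth highlighting in the resume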

It’s funny, because you hear all kinds of reasons why people exclude a resume. I remember one person said, “I’m not going to hire him. He has the same first name as somebody else on the team. That’d just be too confusing.” And somebody else I remember said that the applicant was a vegan, and the place the team likes to order pizza from didn’t have a vegan option. Those are anecdotal, of course, but people use all kinds of other things when they’re thinking about it.

Yeah. That’s actually one of the reasons why I’m excited about this particular system is that I feel like we should be able to construct it in a way that actually has fewer biases than people do, because we know that people harbor all sorts of biases. We have employment laws that guide us to stay away from making decisions based on protected classes. I don’t know if veganism is a protected class, but it’s verging on that. If you’re making hiring decisions based on people’s personal lifestyle choices, that’s suspect. You could get in trouble for that. Our models, we should be able to train them to be more dispassionate than any human could be.

We’re running out of time. Let’s close up by: do you consume science fiction? Do you ever watch movies or read books or any of that? And if so, is there any of it that you look at, especially any that portrays artificial intelligence, like Ex Machina, or Her, or Westworld or any of that stuff, that you look at and you’re like, “Wow, that’s really interesting,” or “That could happen,” or “That’s fascinating,” or anything like that?

I do consume science fiction. I love science fiction. I don’t actually feel like current science fiction matches my understanding of AI very well. Ex Machina, for example, that was a fun movie. I enjoyed watching that movie, but I felt, from a scientific point of view, it just wasn’t very interesting. I was talking about our built-in models of the world. One of the things that humans, over thousands of years, have drilled into our heads is that there’s somebody out to get you. We have a large part of our brain that’s worrying all the time, like, “Who’s going to come kill me tonight? Who’s going to take away my job? Who’s going to take my food? Who’s going to burn down my house?” There’s all these things that we worry about. So a lot of the depictions of AI in science fiction inflame that part of the brain that is worrying about the future, rather than actually speak to the technology and its potential.

I think probably the part of science fiction that has had the most impact on my thoughts about AI is Isaac Asimov’s Three Laws. Those, I think, are pretty classic, and I hope that some of them can be adapted to the kinds of problems that we’re trying to solve with AI, to make AI safe, and make it possible for people to feel confident that they’re interacting with AI, and not worry about it. But with most science fiction, especially movies – maybe books can be a little bit more intellectual and a little bit more interesting – it just sells more tickets to make people afraid than it does to show people a mundane existence where AI is helping people live better lives. That’s just not nearly as compelling a movie, so I don’t actually feel like the popular culture treatment of AI is very realistic.

All right. Well, on that note, I say we wrap up. I want to thank you for a great hour. We covered a lot of ground, and I appreciate you traveling all that way with me.

It was fun.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here


Voices in AI – Episode 11: A Conversation with Gregory Piatetsky-Shapiro


In this episode, Byron and Gregory talk about consciousness, jobs, data science, and transfer learning.






Byron Reese: This is “Voices in AI”, brought to you by Gigaom. I’m Byron Reese. Today our guest is Gregory Piatetsky. He’s a leading voice in Business Analytics, Data Mining, and Data Science. Twenty years ago, he founded and continues to operate a site called KDnuggets about knowledge discovery. It’s dedicated to the various topics he’s interested in. Many people think it’s a must-read resource. It has over 400,000 regular monthly readers. He holds an MS and a PhD in computer science from NYU. 

Welcome to the show.

Gregory Piatetsky: Thank you, Byron. Glad to be with you.

I always like to start off with definitions, because in a way we’re in such a nascent field in the grand scheme of things that people don’t necessarily start off agreeing on what terms mean. How do you define artificial intelligence?

Artificial intelligence is really machines doing things that people think require intelligence, and by that definition the goalposts of artificial intelligence are constantly moving. It was considered very intelligent to play checkers back in the 1950s, and then there was a program that could do it. The next boundary was playing chess, and then computers mastered that. Then people thought playing Go would be incredibly difficult, or driving cars. In general, artificial intelligence is the field that tries to develop intelligent machines. And what is intelligence? I’m sure we will discuss that, but it’s usually in the eye of the beholder.

Well, you’re right. I think a lot of the problem with the term artificial intelligence is that there is no consensus definition of what intelligence is. So, if we’re constantly moving the goalposts, it sounds like you’re saying we don’t have systems today that are intelligent.

No, no. On the contrary, we have lots of systems today that would have been considered amazingly intelligent 20 or even 10 years ago. And the progress is such that I think it’s very likely that those systems will exceed our intelligence in many areas, you know, maybe not everywhere, but in many narrow, defined areas they’ve already exceeded our intelligence. We have many systems that are somewhat useful. We don’t have any systems that are fully intelligent, possessing what is now called by a new term, AGI, Artificial General Intelligence. Those systems still lie ahead in the future.

Well, let’s talk about that. Let’s talk about an AGI. We have a set of techniques that we use to build the weak or narrow AI we use today. Do you think that achieving an AGI is just a matter of continuing to evolve those: faster chips, better algorithms, bigger datasets, and all of that? Or do you think that an AGI really is qualitatively a different thing?

I think AGI is qualitatively a different thing, but I think that it is not only achievable but also inevitable. Humans also can be considered as biological machines, so unless there is something magical that we possess that we cannot transfer to machines, I think it’s quite possible that the smartest people can develop some of the smartest algorithms, and machines can eventually achieve AGI. And I’m sure it will require additional breakthroughs. Just like deep learning was a major breakthrough that contributed to significant advances in state of the art, I think we will see several such great breakthroughs before AGI is achieved.

So if you read the press about it and you look at people’s predictions on when we might get an AGI, they range, in my experience, from 5 to 500 years, which is a pretty telling fact in itself. Do you care to even throw a dart in that general area? Like do you think you’ll live to see it or not?

Well, my specialty as a data scientist is making predictions, and I know when we don’t have enough information. I think nobody really knows, and I have no basis on which to make a prediction. I hope it’s not 5 years, and I think our experience as a society shows that we have no idea how to make predictions for 100 years from now. It’s very instructive to find so-called futurology articles, things that were written 50 years ago about what would happen in 50 years, and see how naive those people were 50 years ago. I don’t think we will be very successful in predicting 50 years out. I have no idea how long it will take, but I think it will be more than 5 years.

So some people think that what makes us intelligent, or an indispensable part of our intelligence, is our consciousness. Do you think a machine would need to achieve consciousness in order to be an AGI?

We don’t know what consciousness is. I think machine intelligence will be very different from human intelligence, just like airplane flight is very different from bird flight. Both airplanes and birds fly, and the flight is governed by the same laws of aerodynamics and physics, but they use very different principles. Airplane flight does not copy bird flight; it is inspired by it. I think in the same way, we’re likely to see that machine intelligence doesn’t copy human intelligence, or human consciousness. “What exactly is consciousness?” is more a question for philosophers, but probably it involves some form of self-awareness. And we can certainly see that machines and robots can develop self-awareness. You know, self-driving cars already need to do some of that. They need to know exactly where they’re located. They need to predict what will happen: if they do something, what will other cars do? They have a form of what is called a model of the mind, a mirror intelligence. One interesting anecdote on this topic is that when Google originally started their self-driving car experiments, the car couldn’t cross an intersection because it was always yielding to other cars. It was following the rules as they were written, but not the rules as people actually execute them. And so it was stuck at that intersection supposedly for an hour or so. Then the engineers adjusted the algorithm so it would better predict what people would do and what it would do, and it’s now able to negotiate intersections. It has some form of self-awareness. I think other robots and machine intelligences will develop some form of self-awareness, and whether it will be called consciousness or not will be for our descendants to discuss.

Well, I think that there is an agreed-upon definition of consciousness. I mean, you’re right that nobody knows how it comes about, but it’s qualia, it’s experiencing things. It’s, if you’ve ever had that sensation when you’re driving and you kind of space out, and all of a sudden two miles later you snap to and think, “Oh my gosh, I’ve got no recollection of how I got here.” That time you were driving, that’s intelligence without consciousness. And then when you snap to, all of a sudden you’re aware, you’re experiencing the world again. Do you think a computer can actually experience something? Because wouldn’t it need to experience the world in order to really be intelligent?

Well, computers, if they have sensors, actually already experience the world. A self-driving car is experiencing the world through its radar and LIDAR and various other sensors, so they do have sensors and they do experience. I think it’s not useful to debate computer consciousness, because it’s like the question of, you know, how many angels can fit on the head of a pin. I think what we can discuss is what they can or cannot do. How they experience it is more a question for philosophers.

So a lot of people are worried – you know all of this, of course – there are two big buckets of worry about artificial intelligence. The first one is that it’s going to take human jobs and we’re going to have mass unemployment, and any number of dystopian movies play that scenario out. And then other people say, no, every technology that’s come along, even disruptive ones like electricity, and mechanical power replacing animal power and all of that, was merely turned around and used by humans to increase their productivity, and that’s how you get increases in standard of living. On that question, where do you come down?

I’m much more worried than I am optimistic. I’m optimistic that technology will progress. What I’m concerned with is that it will lead to increasing inequality and increasingly unequal distribution of wealth and benefits. In Massachusetts, there used to be many toll collectors. Toll collecting is not a very sophisticated job, and recently those jobs were eliminated. And the machines that eliminated them didn’t require full intelligence, basically just an RFID sensor. So we already see many jobs being eliminated by a simpler form of automation. And what society will do about it is not clear. I think the previous disruptions had much longer timespans. But now when people like these toll collectors are laid off, they don’t have enough time to retrain themselves to become, let’s say, computer programmers or doctors. What I’d like to do about it, I’m not sure. But I like a proposal by Andrew Ng, of Stanford and Coursera. He proposed a modified version of basic income, where people who are unemployed and cannot find jobs get some form of basic income. Not just to sit around, but they would be required to learn new skills and learn something new and useful. So maybe that would be a possible solution.

So do you really think that when you look back across time – you know, the United States, I can only speak to that, went from generating 5% of its energy with steam to 80% in just 22 years. Electrification happened electrifyingly fast. The minute we had engines there was wholesale replacement of the animals; they were just so much more efficient. Isn’t it actually the case that when these disruptive technologies come along, they are so empowering that they are adopted incredibly quickly? And again, just talking about the US, unemployment for 230 years has been between 5% and 9%, other than the Great Depression; in all the other time, it never bumped up. When these highly disruptive technologies came along, they didn’t cause unemployment generally to go up, and they happened quickly, and they eliminated an enormous number of positions. Why do you think this one is different?

The main reason why I think it is different is because it is qualitatively different. Previously, the machines that came along, like the steam- and electricity-driven ones, would eliminate some of the manual work, and people could climb up the pyramid of skills to do more sophisticated work. But nowadays, artificial general intelligence sort of captures this pyramid of skills, and it competes with people on cognitive skills. And it can eventually climb to the top of the pyramid, so there will be nowhere left to climb to exceed it. And once you create one general intelligence, it’s very easy to copy it. So you would have a very large number of intelligent robots that will do a very large number of things, and they will compete with people to do other things. It’s just very hard to retrain, let’s say, a coal miner to become, let’s say, a producer of YouTube videos.

Well that isn’t really how it ever happens, is it? I mean, that’s kind of a rigged set-up, isn’t it? What matters is, can everybody do a job a little bit harder than the one they have? Because the maker of YouTube videos is a film student. And then somebody else goes to film school, and then the junior college professor decides to… I mean, everybody just moves up a little bit. You never take one group of people and train them to do an incredibly, radically different thing, do you?

Well, I don’t know about that exactly, but to return to your analogy, you mentioned that in the United States, for 200 years, the pattern was such. But, you know, the United States is not the only country in the world, and 200 years is a very small part of our history. If we look across several thousand years, and at what happened elsewhere in the world, we see very complex things. The unemployment rate in the Middle Ages was much higher than 5% or 10%.

Well, I think the important thing, and the reason why I used 200 years, is because that’s the period of industrialization that we’ve seen, and automation. And so the argument is that artificial intelligence is going to automate jobs, so you really only need to look over the period when other things have automated jobs to ask, “What happens when you automate a lot of jobs?” I mean, by your analogy, wouldn’t the invention of the calculator have put mathematicians out of business? Take ATMs: an ATM in theory replaces a bank teller. And yet we have more bank tellers today than we did when the ATM was introduced, because that tool allowed banks to open more branches and hire more tellers. I mean, is it really as simple as, “Well, you’ve built this tool, now there’s a machine doing a job a human did, and now you have an unemployed human”? Is that kind of the only force at work?

Of course it’s not simple, there are many forces at work. And there are forces that resist change, as we’ve seen from the Luddites in the 19th century. And now there are people, for example in coal-mining districts, who want to go back to coal mining. Of course, it’s not that simple. What I’m saying is we have only had a few examples of industrial revolutions, and as data scientists say, it’s very hard to generalize from a few examples. It’s true that past technologies have generated more work. It doesn’t follow that this new technology, which is different, will generate more work for all the people. It may very well be different. We cannot rely on three or four past examples to generalize for the future.

Fair enough. So let’s talk, if we can, about how you spend your days, which is in data science. What are some recent advances that you think have materially changed the job of a data scientist? Are there any? And are there more things you can see coming that are about to change it? Like how is that job evolving as technology changes?

Yes, well, data scientists now live in the golden age of the field. There are now more powerful tools that make data science much easier, tools like Python and R. And Python and R both have a very large ecosystem of tools, like scikit-learn in the case of Python, or whatever Hadley Wickham comes up with in the case of R. There are tools like Spark, and various things on top of it, that allow data scientists to access very large amounts of data. It’s much easier and much faster for data scientists to build models. The danger for data scientists, again, is automation, because those tools make the work easier and easier, and soon a large part of it will be automated. In fact, there are already companies like DataRobot and others that allow business users who are not data scientists to just plug in their data, and DataRobot or its competitors just generate the results. No data scientist needed. That is already happening in many areas. For example, ads on the internet are automatically placed, and there are algorithms that make millions of decisions per second and build lots of models. Again, no human involvement, because humans just cannot build millions of models a second. There are many areas where this automation is already happening. And recently I ran a poll on KDnuggets asking, when do you think data science work will be automated? The median answer was about 2025. So although this is a golden age for data scientists, I think they should enjoy it, because who knows what will happen in the next 8 to 10 years.
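
To make the kind of automation he describes concrete, here is a minimal sketch of automated model selection with scikit-learn. It is an illustration only: the dataset and the tiny hyperparameter grid are assumptions, not what DataRobot or any particular product runs, though commercial AutoML tools automate the same loop over far larger search spaces and model families.

# A minimal sketch of automated model selection with scikit-learn.
# The dataset and the small hyperparameter grid are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Cross-validated search picks the best regularization strength automatically.
search = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)

print("best C:", search.best_params_["clf__C"])
print("held-out accuracy:", search.score(X_test, y_test))

The point is only that once the data is in tabular form, the model-fitting step itself is increasingly a commodity that a non-specialist can drive end to end.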

So, Mark Cuban gave a talk earlier this year and said the first trillionaires will come from businesses that utilize AI. But he said something very interesting, which is that if he were coming up through university again, he would study philosophy, because that’s the last thing that’s going to be automated. What would you suggest to a young person listening to this today? What do you think they should study, in the cognitive area, that is either blossoming or likely to go away?

I think what will be very much in demand is at the intersection of humanities and technology. If I were younger I would still study machine learning and databases, which is actually what I studied for my PhD 30 years ago. I probably would study more mathematics; the deep learning algorithms that are making tremendous advances are very mathematically intensive. And the other aspect, maybe the hardest to automate, is human intuition and empathy: understanding what other people need and want, and how to best connect with them. I don’t know how much that can be studied, but if philosophy or social studies or poetry is the way to it, then I would encourage young people to study it. I think we need a balanced approach, not just technology but humanities as well.

So, I’m intrigued that our DNA is – I’m going to be off here, whatever I say – I think about 740 meg, it’s on that order. But when you look at how much of it we share with, let’s say, a banana, it’s 80-something percent, and then how much we share with a chimp, it’s 99%. So somewhere in that 1%, that 7 or 8 meg of code that tells how to build you, is the secret to artificial general intelligence, presumably. Is it possible that the code to do an AGI is really quite modest and simple? Not simple – you know, there are two different camps in the AGI world. One is that humans are a hack of 100 or 200 or 300 different skills, and you put them all together and that’s us. The other is, we had Pedro Domingos on the show, and he wrote a book called The Master Algorithm, which posits that there is an algorithm that can solve any problem, or any solvable problem, the way a human does. Where on that spectrum would you fall? And do you think there is a simple answer to an AGI?

I don’t think there is a simple answer. Actually, I’m good friends with Pedro, and I moderated the webcast on his book last year. I think the master algorithm that he looks for may exist, but it doesn’t exclude having lots of additional specialized skills. I think there is very good evidence that there is such a thing as general intelligence in humans. People, for example, may have different scores on the SAT verbal and math sections; I know that my verbal score would be much lower than my math score. But usually if you’re above average on one, you will be above average on the other, and likewise, if you’re below average on one, you will be below average on the other. People seem to have some general skills, and in addition there are a lot of specialized skills. You know, you can be a great chess player but have no idea how to play music, or vice versa. I think there are some general algorithms, and there are lots of specialized algorithms that leverage the special structure of a domain. You can think of it this way: when people were developing chess-playing programs, they initially applied some general algorithms, but then they found that they could speed up those programs by building specialized hardware that was very specific to chess. Likewise, when people start new skills they approach them generally, then they develop specialized expertise which speeds up their work. I think it could be the same with intelligence. There may be some general algorithm, but it would have ways to develop lots of special skills that leverage the structure of specific or particular tasks.

Broadly speaking, I guess data science relies on three things: it relies on hardware, faster and faster hardware; better and better data, more of it and labeled better; and then better and better algorithms. If you kind of had to put those three things side by side, where are we most deficient? Like if you could really amp one of those three things way up, what would it be?

That’s a very good question. With current algorithms, it seems that more data produces much better results than a smarter algorithm, especially if it is relevant data. For example, for image recognition there was a big quantitative jump when deep learning trained on millions of images as opposed to thousands of images. But I think what we need for the next big advances is somewhat smarter algorithms. One big shortcoming of deep learning is, again, that it requires so much data. People seem to be able to learn from very few examples, and the algorithms that we have are not yet able to do that. In the algorithms’ defense, I have to say that when I say people can learn from very few examples, we assume those are adults who have already spent maybe 30 or 40 years training and interacting with the world. So maybe if algorithms could spend some years training and interacting with the world, they would acquire enough knowledge to be able to generalize to other similar examples. Yes, I think probably data, then algorithms, and then hardware. That would be my order.

So, you’re alluding to transfer learning, which is something humans seem to be able to do. Like you said, you could show a person who’s never seen an Academy Award what that little statue looks like, and then you could show them photographs of it in the dark, on its side, underwater, and they could pick it out. And what you just said is very interesting, which is, well yeah, we only had one photo of this thing, but we had a lifetime of learning how to recognize things underwater and in different lighting and all that. What do you think about transfer learning for computers? Do you think we’re going to be able to use the datasets that we have that are very mature, like the image one, or handwriting recognition, or speech translation, to solve completely unrelated problems? Is there some kind of meta-knowledge buried in those things we’re doing really well now that we can apply to things we don’t have good data on?

I think so, because the world itself is the best representation. Recently I read a paper that applied a negative transformation to ImageNet images, and it turns out that a deep learning system that was trained to recognize, I don’t remember exactly what it was, but let’s say cats, would not be able to recognize negatives of cats, because the negative transformation is not part of its repertoire. But that is very easy to remedy if you just add negative images to the training data. I think there is maybe a large but finite number of such transformations that humans are familiar with, like negatives and rotations and other things. And it’s quite possible that by applying such transformations to very large existing databases, we could teach those machine learning systems to reach and exceed human levels, because humans themselves are not perfect at recognition.
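
As a concrete illustration of the remedy he describes, here is a minimal sketch of augmenting a training set with negatives and rotations of each image. The array shapes and the random stand-in data are assumptions for the example; the paper he mentions is not identified, so this shows only the general technique, not that paper’s code.

import numpy as np

def augment_with_transforms(images, labels):
    # images: uint8 array of shape (N, H, W, C); labels: array of shape (N,).
    # The transformed copies keep their original labels, so the model learns
    # that the negative (or a rotation) of a cat is still a cat.
    negatives = 255 - images                          # photographic negative
    rotations = np.rot90(images, k=1, axes=(1, 2))    # rotate each image 90 degrees
    aug_images = np.concatenate([images, negatives, rotations], axis=0)
    aug_labels = np.concatenate([labels, labels, labels], axis=0)
    return aug_images, aug_labels

# Illustrative usage with random stand-in data; real code would load image batches.
imgs = np.random.randint(0, 256, size=(8, 32, 32, 3), dtype=np.uint8)
labs = np.random.randint(0, 10, size=(8,))
X_aug, y_aug = augment_with_transforms(imgs, labs)
print(X_aug.shape, y_aug.shape)   # (24, 32, 32, 3) (24,)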

Earlier in this conversation, we’ve been taking human knowledge and how people do things and kind of applying that to computers. Do you think AI researchers learn much from brain science? Do they learn much from psychology? Or is that more just handy for telling stories or helping people understand things? Because, as with the airplanes and birds we were talking about at the very beginning, there really isn’t a lot of mapping between how humans do things and how machines do them.

Yes, and by the way, the airplanes and birds analogy is, I think, due to Yann LeCun. I think some AI researchers are inspired by how humans do things, and the prime example is Geoff Hinton, who is an amazing researcher, not only because of what he has achieved, but because he has an extremely good understanding of both computers and human consciousness. In several talks that I’ve heard him give, and in some conversations afterwards, he suggested that he uses his knowledge of how the human brain works as an inspiration for coming up with new algorithms. Again, not copying the brain, but letting it inspire the algorithms. So to answer your question, yes, I think human consciousness is very relevant to understanding how intelligence could be achieved, and as Geoff Hinton says, that’s the only working example we have at the moment.

We were able to kind of do chess in AI so easily because there were so many – not so easily, obviously people worked very hard on it – but because there were so many well-kept records of games that would be training data. We can do handwriting recognition well because we have a lot of handwriting and it’s been transcribed. We do translation well because there is a lot of training data. What are some problems that would be solvable if we just had the data for them, and we just don’t have it nor do we have any good way of getting it? Like, what’s a solvable problem that really our only impediment is that we don’t have the data?

I think at the forefront of such problems is medical diagnosis, because there are many diseases where the data already exists, it’s just not yet collected in electronic form. There is a lot of genetic information that could be collected and correlated with both diseases and treatments, with what works. Again, it’s not yet collected, but Google and 23andMe and many other companies are working on that. Medical radiology recently witnessed the great success of a startup called Enlitic, which was able to identify tumors using deep learning at almost the same quality as human radiologists. So I think in medicine and health care we will see big advances. And in many other areas where there is a lot of data, we can also see big advances. But the flip side of data, which we can touch on, is that people, at least in some parts of the political spectrum, are losing their connection to what is actually true. Last year’s election saw a tremendous amount of fake news stories that seemed to have significant influence. So while on one hand we’re training machines to do a better and better job of recognizing what is true, many humans are losing their ability to recognize what is true and what is happening. Just witness the denial of climate change by many people in this country.

You mention text analysis on your LinkedIn profile; I saw that it’s something you evidently know a lot about. Is the problem you’re describing solvable? If you had to say the number one problem of the worldwide web is that you don’t know what to believe, you don’t know what’s true, and you don’t necessarily have a way of sorting results by truthiness, do you think that is a machine learning problem, or is it not? Is it going to require human moderation? Or is truth not a defined enough concept on which to train 50 billion web pages?

I think the technical part certainly can be solved from a machine learning point of view. But the worldwide web does not exist in a vacuum; it is embedded in human society, and as such it inherits all the advantages and problems of humans. If there are human actors who find it beneficial to bend the truth and use the worldwide web to convince other people of what they want to convince them of, they will find ways to leverage the algorithms. The algorithm by itself is not a panacea as long as there are humans, with all of our good and evil intentions, around it.

But do you think it’s really solvable? Because I remember this Dilbert comic strip I saw once where Dilbert’s on a sales call and the person he’s talking to says, “Your salesman says your product cures cancer!” And Dilbert says, “That is true.” And the guy says, “Wait a minute! It’s true that it cures cancer, or it’s true that he said that?” And so it’s like that: the statement, “Your salesperson said your product cures cancer,” is a true statement. But that subtlety, that nuance, that it’s-true-but-it’s-not-true aspect of it, I just wonder about. It doesn’t feel like chess, this very clear-cut win/lose kind of situation. And I just wonder, even if everybody wanted the true results to rise to the top, could we actually do that?

Again, I think technically it is possible. Of course, nothing will work perfectly, but humans do not make perfect decisions either. For example, Facebook already has an algorithm that can identify clickbait. And one of the signals is relatively simple: just look at the number of people who see a particular headline and click on the link, and then how much time they spend there or whether they quickly return and click back. If the headline is something like, “Nine amazing things you can do to cure X,” and you go to that website and it’s something completely different, then you quickly return; your behavior will be different than if you go to a website that matches the headline. And you know, Facebook and Google and other sites can measure those signals and see which headlines are deceptive. The problem is that the ecosystem that has evolved seems to reward capturing people’s attention, and the headlines most likely to be shared are the ones that capture attention and generate emotion, either anger or something cute. We’re evolving toward an internet of partisan anger and cute kittens; those are the two extremes of what gets attention. I think the technical part is solvable. The problem is that, again, there are humans around it with very different motivations from you and me. It’s very hard to work when your enemy is using various cyber-weapons against you.
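
For illustration only, here is a minimal sketch of the dwell-time signal he describes, aggregated per headline from a made-up click log. The record format, the 10-second bounce threshold, and the 50% cutoff are assumptions for the example, not Facebook’s actual system.

# A minimal sketch of a bounce-rate clickbait signal.
# The log format, bounce threshold, and cutoff are illustrative assumptions.
from collections import defaultdict

# Each record: (headline, seconds spent on the page after clicking through).
click_log = [
    ("Nine amazing things you can do to cure X", 4),
    ("Nine amazing things you can do to cure X", 6),
    ("Nine amazing things you can do to cure X", 45),
    ("City council approves new budget", 95),
    ("City council approves new budget", 120),
]

dwell_times = defaultdict(list)
for headline, seconds in click_log:
    dwell_times[headline].append(seconds)

BOUNCE_SECONDS = 10  # a quick return suggests the page did not match the headline

for headline, times in dwell_times.items():
    bounce_rate = sum(t < BOUNCE_SECONDS for t in times) / len(times)
    verdict = "likely clickbait" if bounce_rate > 0.5 else "looks consistent"
    print(f"{headline!r}: bounce rate {bounce_rate:.0%} -> {verdict}")

In practice such a signal would be one feature among many in a trained classifier, but it captures the behavioral idea: headlines that people abandon almost immediately tend to be the deceptive ones.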

Do you think nutrition might be something that would be really hard as well? Because no two people – you eat however many times a day, however many different foods, and there is nobody else on the planet who eats that same combination, even for seven consecutive days or something. Do you think that nutrition is a solvable thing, or are there too many variables for there ever to be a dataset that would be able to say, “If you eat broccoli, chocolate ice cream, and go to the movie at 6:15, you’ll live longer”?

I think that is certainly solvable. Again, the problem is that humans are not completely logical; that’s our beauty and our problem. People know what is good for them, but sometimes they just want something else. We sort of have our own animal instincts that are very hard to control. That’s why all the diets work, but just not for very long. People go on diets very frequently, find that they didn’t work, and then, you know, go on them again. Yes, as an information problem, nutrition can be solved. But motivating and convincing people to follow good nutrition, that is a much, much harder problem.

All right! Well, it looks like we are out of time. Would you go ahead and tell the listeners how they can keep up with you, where to find your website, how they can follow you, and how to get hold of you and all of that?

Yes. Thank you, Byron. You can find me on Twitter @KDnuggets, and visit the website KDnuggets.com. It’s a magazine for data scientists and machine learning professionals. We publish only a few interesting articles a day. And I hope you can read it, or if you have something to say, contribute to it! And thank you for the interview, I enjoyed it.

Thank you very much.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
