How Artificial Intelligence Will Personalize How We Work

Artificial intelligence in the workplace is here to stay. However, as enterprise technologies continue to develop and evolve, we must understand how AI will affect our roles and responsibilities at work.

The unknowns about the impact of AI have led to the fear that this emerging technology could substitute for, or entirely eradicate, existing jobs. Depending on which statistics you cite, AI will replace over 40% of jobs by 2030, or 165 million Americans could be out of work before 2025.

Yet it is not all doom and gloom. Given the flood of new systems, processes, and data we're exposed to each day, AI can deliver tangible benefits by learning our skills, habits, and behaviors, upending how we use technology. When companies spend over $3.5 trillion on IT and use an average of 831 cloud services, it's no surprise that we forget 70% of what we learn in a day unless we immediately apply that knowledge in our workflows.

There are four tectonic shifts happening within businesses that are propelling the need for greater personalization and efficiency in how we use technology:

● Employee expectations and behaviors have shifted. Unlike their predecessors, Millennials and Gen Z employees are accustomed to digital technologies. While they’re resourceful and can easily access information, they aren’t necessarily able to retain it. Generally speaking, they expect consumer-level technologies, are highly distracted and change positions often – and thus expect technology to be quick, efficient and intuitive.
● Organizations are undergoing a sweeping digital transformation. "Digital transformation" was one of the biggest buzzwords of 2017, and it has been sweeping across businesses as they look to modernize their activities, processes, and models to become fully digital.
● Decisions are fragmented between departments. As companies move to more digitalized systems, the decision to implement new technologies is increasingly driven by line-of-business heads. From HR systems and customer relationship management (CRM) tools to ERP solutions, procurement decisions are based on departmental needs rather than being mandated by the CIO or at the organizational level, as was traditionally the case.
● Cloud technologies are creating a training challenge. Cloud-based technologies mean that systems undergo regular improvements and updates, creating a situation where employees must constantly adjust to changes they need to learn and adopt quickly.

Based on these changes, AI is a critical component of tomorrow's organizations. Coupled with deep analytics, AI can learn from individual user behavior, identify barriers to technology adoption, and contextually guide users through any new solution. In doing so, it can make employees instant pros at using a system, even if they have never used the technology before.
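To make that concrete, here is a rough, hypothetical sketch in Python of the kind of analysis such a platform might run behind the scenes; the event log, workflow steps, and threshold logic are all invented for illustration, not a description of any particular product.

from collections import Counter

# Hypothetical in-app event log: (user_id, workflow step the user completed)
events = [
    ("u1", "open_form"), ("u1", "fill_details"),
    ("u2", "open_form"),
    ("u3", "open_form"), ("u3", "fill_details"), ("u3", "submit"),
    ("u4", "open_form"), ("u4", "fill_details"),
]

WORKFLOW = ["open_form", "fill_details", "submit"]  # assumed step order

def find_drop_off(events, workflow):
    """Count how many users reached each step and flag the step with the
    steepest drop-off as the place to trigger contextual, in-app guidance."""
    reached = Counter(step for _, step in events)
    worst_step, worst_loss = None, 0
    for prev, nxt in zip(workflow, workflow[1:]):
        loss = reached[prev] - reached[nxt]
        if loss > worst_loss:
            worst_step, worst_loss = nxt, loss
    return worst_step, worst_loss

step, lost = find_drop_off(events, WORKFLOW)
print(f"Trigger a walkthrough at '{step}': {lost} users stalled before it")

In a real deployment the same idea would run over millions of events, and the "walkthrough" would be the contextual guidance described above rather than a print statement.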

This contextual, personalized, and just-in-time approach allows us to abandon traditional training and development methods, which quickly become outdated as we encounter new systems and interfaces. It doesn't make sense to set up classroom-style training to familiarize your team with new HR software, for example, when incremental product updates occur so frequently. When employees are stuck using a system, they're more apt to ask a colleague for help, search online for the answer, or, worst of all, give up on using the system. All are ineffective uses of our time.

Instead of feeling daunted by the onslaught of new systems we encounter, technology should learn about the user to improve their workflows. Creating systems that learn and automate tedious processes will be a major battleground for technology vendors in the next few years. It won’t be long before we can rely on AI to do all the “learning” for us – leading to a workplace where we train the software to adapt to our needs, rather than forcing us to adapt to the software.

Rephael Sweary is the cofounder and president of WalkMe, which pioneered the digital adoption platform. Previously, Rephael was the cofounder, CEO, and then president of Jetro Platforms, which was acquired in 2007. Since then, he has funded and helped build a number of companies, both in his role as Entrepreneur-in-Residence at Ocean Assets and in a personal capacity.


How to Not Squander Your AI Investments

We are more comfortable having conversations with machines than ever before. In fact, by 2020, the average person will have more conversations with bots than with their spouse. Twenty-seven percent of consumers weren’t sure if their last customer service interaction was with a human or a chatbot.

To the average person, conversation is simply a more convenient interface, evidenced by messaging services having supplanted social networks in active users. But when it comes to conversational interactions with bots, what exactly do these exchanges mean for machines?

For machines, these conversations are just data. Despite this newfound abundance of ridiculously valuable data, most companies are still just using AI technologies to deflect calls from the contact center. We have a bigger opportunity: to use this data to inform real business decisions across every role, function, and department in the enterprise. Yet here we are, about to kick off 2018, and most companies are still leaving the majority of the value of their AI investments on the table.

It’s time to wake up to the data opportunity created by conversational intelligence as a whole.

The value of artificial intelligence has compounding interest

In the world of AI, there is a popular concept called the network effect: a good or service becomes more valuable as more people use it. Conversational intelligence platforms experience a data network effect, becoming smarter as they gather more customer data.

It’s not news that conversations are a great data source. Brands have more information than ever about their customers, especially when paired with other data points such as location, device, and even GPS.

The application of the network effect here is apt. Truly, in the context of conversational technologies, intelligence breeds intelligence. It's a form of compounding interest in which the principal is the data from the systems of record that power conversations, the interest is the conversation, and the compound interest is the intelligence. This virtuous cycle has been a key driver in the innovation of these technologies.

However, up until now, conversational data has been mostly used to simply have better conversations. The product is getting better and better, the back-and-forths far smoother than ever before. The bottom line, bolstered by the speed of resolution, volume of requests answered, and satisfied customers, is improving.

This is all well and good, but the reality is that hidden in these conversations is business value that, to date, brands have let go to waste.

Conversational Intelligence is Real-Time Market Research

Language has always been a tool for better understanding humans, what they need and what they want. Many businesses use this information in the psychology of marketing and sales, via the arduously conducted market surveys that we are all so apt to ignore. This business process of market research has, until recently, been a separate line item.

Now, however, through the core business process of customer service, the data collection is already happening. As customer service has become more and more automated, we are collecting data more consistently, in real-time, and in a format that is ripe for analyzing. The onus is still on the brand to take advantage of this market research, but brands are in a position to innovate, enhance, and improve at a far more rapid pace.

For example, a large, publicly traded wing restaurant chain took advantage of its customer data to unlock an entirely new revenue stream. Through its chatbot, powered by Conversable, the company realized that customers were frequently asking for gluten-free options. Those options are now a staple offering, and the company unlocked the opportunity much faster than it could have through the traditional model of market research. Imagine the possibilities hidden in the data, from opening hours and location requests to menu items.
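As a minimal sketch of how a brand might mine those conversations (the transcripts, topic patterns, and counts below are invented for illustration, not Conversable's actual pipeline), even a simple frequency count over chatbot logs can surface unmet demand like the gluten-free requests described above.

from collections import Counter
import re

# Hypothetical chatbot transcripts collected from customer conversations
transcripts = [
    "do you have gluten free wings?",
    "what time do you close tonight?",
    "any gluten free options on the menu?",
    "nearest location to downtown?",
    "is the lemon pepper rub gluten free?",
]

# Candidate topics a brand might track; patterns are purely illustrative
TOPICS = {
    "gluten free": r"gluten[- ]free",
    "opening hours": r"\b(open|close|hours)\b",
    "locations": r"\b(location|nearest|near me)\b",
}

counts = Counter()
for text in transcripts:
    for topic, pattern in TOPICS.items():
        if re.search(pattern, text.lower()):
            counts[topic] += 1

# Topics mentioned often enough may warrant a product or menu decision
for topic, n in counts.most_common():
    print(f"{topic}: {n} mentions")

A production system would use intent classification rather than regular expressions, but the principle is the same: the conversations are already structured data waiting to be counted.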

Not capitalizing on the information in these conversations means brands are leaving more than half of the value on the table, for no good reason at all.

A story told before

Finding value in unexpected places has happened before. When Dunkin' Donuts realized that the hole in the middle of its doughnuts was actually chock full of opportunity in the form of the doughnut hole, its business experienced a massive boost. The waste from the original product, previously ignored or discarded, is similar to how brands treat AI. There is latent value in the conversational data that is not being acted upon.

Which brings us back to the challenge of conversational intelligence today. We've reached the point where conversational interfaces are technologically proficient enough to carry out simple tasks, and speech recognition has progressed to provide the kind of convenient customer experience we expect. Still, we've only barely begun to tap the potential of conversational intelligence, and key to its progression, and value, in the future is making real use of the data it's producing in near real time.

The criticism of conversational AI to date has often been that these kinds of technologies over-promise and under-deliver. It is undoubtedly a huge investment, and too many people consider it a magic solution to their business problems. While I abhor the hype around AI, I also recognize that value is being wasted in the form of untapped conversational data.

It’s time to start paying attention.

Five 2018 Predictions — on GDPR, Robot Cars, AI, 5G and Blockchain

Predictions are like buses: none for ages and then several come along at once. Also like buses, they are slower than you would like and only take you part of the way. Also like buses, they are brightly coloured and full of chatter that you would rather not have on your morning commute. They are sometimes cold, and may have the remains of somebody else's take-out happy meal in the corner of the seat. Also like buses, they are an analogy that should not be taken too far, lest they lose the point. Like buses.

With this in mind, here are my technology predictions for 2018. I've been very lucky to work across a number of verticals over the past couple of years, including public and private transport, retail, finance, government and healthcare — while I can't name-check every project, I'm nonetheless grateful for the experience and knowledge this has brought, which feeds into the below. I'd also like to thank my podcast co-host Simon Townsend for allowing me to test many of these ideas.

Finally, one prediction I can’t make is whether this list will cause any feedback or debate — nonetheless, I would welcome any comments you might have, and I will endeavour to address them.

1. GDPR will be a costly, inadequate mess

Don't get me wrong, GDPR is a really good idea. As a lawyer said to me a couple of weeks ago, it is a combination of the UK Data Protection Act, plus the best practices that have evolved around it, now put into law at a European level with a large fine attached. The regulations are also likely to become the basis for other countries — if you are going to trade with Europe, you might as well set it as the baseline, goes the thinking. All well and good so far.

Meanwhile, it's an incredibly expensive (and necessary, if you're a consumer who cares about your data rights) mountain to climb for any organisation that processes or stores your data. The deadline for compliance is May 25th, which is about as likely to be hit as I am to finally get myself the 6-pack I wanted when I was 25.

No doubt GDPR will one day be achieved, but the fact is that it is already out of date. Notions of data aggregation and potentially toxic combinations (for example, combining credit and social records to show whether or not someone is eligible for insurance) are not just likely, but unavoidable: ‘compliant’ organisations will still be in no better place to protect the interests of their customers than currently.

The challenges, risks and sheer inadequacy of GDPR can be summed up by a single tweet sent by an otherwise unknown traveller — "If anyone has a boyfriend called Ben on the Bournemouth – Manchester train right now, he's just told his friends he's cheating on you. Dump his ass x." Whoever sender "@emilyshepss" or indeed "Ben" might be, the consequences for the privacy of either cannot be handled by any data legislation currently in force.

2. Artificial Intelligence will create silos of smartness

Artificial Intelligence (AI) is a logical consequence of how we apply algorithms to data. It's as inevitable as maths, as the ability our own brains have to evaluate and draw conclusions. It's also subject to a great deal of hype and speculation, much of which tends to follow that old, flawed futurist assumption: that a current trend maps a linear course leading to an inevitable conclusion. But the future is not linear. Technological matters are subject to the laws of unintended consequences and of unexpected complexity: every time we create something new, it creates situations that are beyond its ability to deal with.

So, yes, what we call AI will change (and already is changing) the world. Moore's Law and its associated laws are making previously impossible computations possible, and indeed, they will become the expectation. Machine learning systems are fundamental to the idea of self-driving cars, for example; meanwhile voice and image recognition and so on are having their day. However, these are still a long way from any notion of intelligence, artificial or otherwise.

So, yes, absolutely look at how algorithms can deliver real-time analysis, self-learning rules and so on. But look beyond the AI label, at what a product or service can actually do. You can read Gigaom’s research report on where AI can make a difference to the enterprise, here.

In most cases, there will be a question of scope: a system that can save you money on heating by 'learning' the nature of your home or data centre has got to be a good thing, for example. Over time we shall see these create new types of complexity, as we look to integrate individual silos of smartness (and their massive data sets) — my prediction is that such integration work will keep us busy for the next year or so, even as learning systems continue to evolve.

3. 5G will become just another expectation

Strip away the techno-babble around 5G and we have a very fast wireless networking protocol designed to handle many more devices than today's networks — it does this, in principle, by operating at higher frequencies, across shorter distances than current mobile masts (so we'll need more of them, albeit in smaller boxes). Nobody quite knows how the global roll-out of 5G will take place — questions like who should pay for it will persist, even though things are clearer than they were. And so on and so on.

But when all’s said and done, it will set the baseline for whatever people use it for, i.e. everything they possibly can. Think 4K video calls, in fact 4K everything, and it’s already not hard to see how anything less than 5G will come as a disappointment. Meanwhile every device under the sun will be looking to connect to every other, exchanging as much data as it possibly can. The technology world is a strange one, with massive expectations being imposed on each layer of the stack without any real sense of needing to take responsibility.

We've seen it before. The inefficient software practices of 1990s Microsoft drove the need for processor upgrades and led Intel to a healthy profit, illustrating the industry's vested interest in making the networking and hardware platforms faster and better. We all gain as a result, if 'gain' can be measured in terms of being able to see your gran in high definition on a wall screen from the other side of the world. But after the hype, 5G will become just another standard release, a way marker on the road to techno-utopia.

On the upside, it may lead to a simpler networking infrastructure. More of a hope than a prediction would be the general adoption of some kind of mesh integration between Wifi and 5G, taking away the handoff pain for both people, and devices, that move around. There will always be a place for multiple standards (such as the energy-efficient Zigbee for IoT) but 5G’s physical architecture, coupled with software standards like NFV, may offer a better starting point than the current, proprietary-mast-based model.

4. Attitudes to autonomous vehicles will normalize

The good news is, car manufacturers saw this coming. They are already planning for that inevitable moment when public perception goes from "Who'd want robot cars?" to "Why would I want to own a car?" It's a familiar phenomenon, an almost 1984-level of doublethink in which people go from one mindset to another seemingly overnight, without noticing and, in some cases, seemingly disparaging the characters they once were. We saw it with personal computers, with mobile phones, with flat-screen TVs — in the latter case, the world went from "nah, that's never going to happen" to recycling sites being inundated with perfectly usable screens (and a wave of people getting huge cast-off tellies).

And so, over the next year or so, we will see self-driving vehicles hit our roads. What drives this phenomenon is simple: we know, deep down, that robot cars are safer — not because they are inevitably, inherently safe, but because human drivers are inevitably, inherently dangerous. Autonomous vehicles will get safer still. And they are able to pick us up at 3 in the morning and take us home.

The consequences will be fascinating to watch. First, attention will increasingly turn to brands — after all, if you are going to go for a drive, you might as well do so in comfort, right? We can also expect to see a far more varied range of wheeled transport (and otherwise — what's wrong with the notion of flying unicorn deliveries?) — indeed, with hybrid forms, the very notion of roads is called into question.

There will be data, privacy, security and safety ramifications to deal with — consider the current ethical debate between leaving young people without taxis late at night and the possible consequences of sharing a robot Uber with a potential molester. And I must recall a very interesting conversation with my son about who would get third or fourth dibs at the autonomous vehicle ferrying drunken revellers (who are not always the cleanest of souls) to their beds.

Above all, business models will move from physical to virtual, from products to services. The industry knows this, variously calling vehicles ‘tin boxes on wheels’ while investing in car sharing, delivery and other service-based models. Of course (as Apple and others have shown), good engineering continues to command a premium even in the service-based economy: competition will come from Tesla as much as Uber, or whatever replaces its self-sabotaging approach to world domination.

Such changes will take time but in the short term, we can fully expect a mindset shift from the general populace.

5. When Bitcoins collapse, blockchains will pervade

The concept that “money doesn’t actually exist” can be difficult to get across, particularly as it makes such a difference to the lives of, well, everybody. Money can buy health, comfort and a good meal; it can also deliver representations of wealth, from high street bling to mediterranean gin palaces. Of course money exists, I’m holding some in my hand, says anyone who wants to argue against the point.

Yet, still, it doesn't. It is a mathematical construct originally conceived to simplify the exchange of value, to offer persistence to an otherwise transitory notion. From a situation where you'd have to prove whether you gave the chap some fish before he'd give you that wood he offered, you can just take the cash and buy wood wherever you choose. It's not an accident of speech that pound notes still say, "I promise to pay the bearer on demand…"

While original currencies may have been teeth or shells (happy days if you happened to live near a beach), they moved to metals in order to bring some stability in a rather dodgy market. Forgery remains an enormous problem in part because we maintain a belief that money exists, even though it doesn’t. That dodgy-looking coin still spends, once it is part of the system.

And so to the inexorable rise of Bitcoin, which has emerged from nowhere to become a global currency — in much the same way as the dodgy coin, it is accepted simply because people agree to use it in a transaction. Bitcoin has a chequered reputation, probably unfairly given that our traditional dollars and cents are just as likely to be used for gun-running or drug dealing as any virtual dosh. It’s also a bubble that looks highly likely to burst, and soon — no doubt some pundits will take that as a proof point of the demise of cryptocurrency.

Their certainty may be premature. Not only will Bitcoin itself pervade (albeit at a lower valuation), but the genie is already out of the bottle as banks and others experiment with the economic models made possible by “distributed ledger” architectures such as The Blockchain, i.e. the one supporting Bitcoin. Such models are a work in progress: the idea that a single such ledger can manage all the transactions in the world (financial and otherwise) is clearly flawed.

But blockchains, in general, hold a key as they deal with that single most important reason why currency existed in the first place — to prove a promise. This principle holds in areas way beyond money, or indeed, value exchange — food and pharmaceutical, art and music can all benefit from knowing what was agreed or planned, and how it took place. Architectures will evolve (for example with sidechains) but the blockchain principle can apply wherever the risk of fraud could also exist, which is just about everywhere.
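To make the "prove a promise" idea concrete, here is a minimal, purely illustrative hash-chained ledger in Python; it omits the distribution and consensus that real blockchains add, but it shows how each entry commits to the one before it, so any later tampering with a recorded promise is detectable.

import hashlib
import json

def make_block(record, prev_hash):
    """Bundle a record with the hash of the previous block, then hash the result."""
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash; an edited record or a broken link is detected."""
    for i, block in enumerate(chain):
        body = json.dumps({"record": block["record"], "prev": block["prev"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Record a couple of promises: shipments, payments, provenance, anything
chain = [make_block("Alice promises 10 crates of fish to Bob", "0" * 64)]
chain.append(make_block("Bob promises a load of timber to Alice", chain[-1]["hash"]))

print(verify(chain))                                # True
chain[0]["record"] = "Alice promises 2 crates"      # tamper with history
print(verify(chain))                                # False

The same structure applies whether the record is a financial transaction, a batch of pharmaceuticals, or the rights to a song: what matters is that the promise, once recorded, cannot be quietly rewritten.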

6. The world will keep on turning

There we have it. I could have added other things — for example, there's a high chance that we will see another major security breach and/or leak; augmented reality will have a stab at the mainstream; and so on. I'd also love to see a return to data and facts on the world's political stage, rather than the current tub-thumping and playing fast and loose with the truth. I'm keen to see breakthroughs in healthcare from IoT, and I also expect some major use of technology that hadn't been considered to arrive, enter the mainstream and become the norm — if I knew what it was, I'd be a very rich man. Even if money doesn't exist.

Truth is, and despite the daily dose of disappointment that comes with reading the news, these are exciting times to be alive. 2018 promises to be a year as full of innovation as previous years, with all the blessings and curses that it brings. As Isaac Asimov once wrote, “An atom-blaster is a good weapon, but it can point both ways.”

On that, and with all it brings, it only remains to wish the best of the season, and of 2018 to you and yours. All the best!

 


The AI Revolution for Recruitment

In the era of autonomous vehicles, food delivery by drones, and swiping our way to love, it's clear that the tedious, time-consuming, and often fruitless job recruitment system of old is in need of a tech makeover. Who better to do it than the almighty AI?

AI is already affecting the technology powering digital advertising, vehicle connectivity, and financial services. The recruitment industry is also ripe for an AI revolution.

In fact, 15% of HR leaders in 40 countries said they believe AI is already impacting the workplace, and an additional 40% believe that AI will significantly influence their decision making in the coming two to five years. And it should, as implementing AI promises to maximize efficiency, reduce annual business costs, tackle workplace inequality and discrimination, and more.

AI can process tasks at a scale that most HR teams would struggle to match, quickly and efficiently analyzing thousands of candidates' applications and saving valuable time and money in the search for talent with the most relevant competencies and experience, all while making the best use of existing resources. Not only does this streamline the recruitment process, it also helps companies hire the most suitable candidates, drastically reducing the chances of hiring an ill-suited employee.

Hiring the wrong person can be crippling for companies, particularly smaller businesses. A bad hire not only represents a wasted opportunity cost but also brings equally troubling by-products such as low productivity and negative morale. When AI takes over the process of sifting through resumes, it will free up managers to refocus their attention on crucial matters such as employee retention, office morale, and, of course, productivity.

Job seekers are also looking for more efficient methods of job hunting. Tired of dealing with grueling applications, they too are turning to easy-to-use technologies that smartly match them with suitable companies. AI is empowering candidates in ways that range from pinpointing the traits and trajectories of top performers to tailoring job searches for opportunities that will strengthen their career path, ultimately providing an edge when it comes to securing the role and future they want.

While AI is remarkable in many ways, it's not foolproof, and it requires oversight and vigilance to make sure conscious and unconscious human bias doesn't seep into the hiring process. AI systems need to be programmed with what to learn and what data is important. But because they are programmed by humans, there is a risk of AI emulating existing human bias. For instance, if a company's culture is already predominantly made up of white males with similar backgrounds, there is a danger that AI would exacerbate the problem by selecting candidates that match the company's existing make-up, not based on the variable of their actual ethnic background, but rather on traits that tend to be more common among members of a certain group.

As a National Academy of Sciences research paper shows, both male and female managers are twice as likely to recruit men, as opposed to women, based on paper resumes alone. AI can tackle this problem by focusing solely on experience and ability, ignoring demographic information such as name, nationality and/or gender.
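As a minimal sketch of that approach (the field names, skill list, and scoring rule are invented for illustration, not any vendor's actual model), the idea is simply to strip demographic attributes from each candidate record before anything downstream scores it, so the screening step can only react to experience and skills.

# Fields a screening step is allowed to see; everything else is dropped.
ALLOWED_FIELDS = {"years_experience", "skills", "certifications"}

# Skills the (toy) scoring rule rewards; purely illustrative.
TARGET_SKILLS = {"python", "sql", "project management"}

def blind(candidate):
    """Strip name, gender, nationality and any other demographic fields."""
    return {k: v for k, v in candidate.items() if k in ALLOWED_FIELDS}

def score(candidate):
    """Score only on experience and skill overlap with the role."""
    c = blind(candidate)
    skill_overlap = len(TARGET_SKILLS & set(c.get("skills", [])))
    return c.get("years_experience", 0) + 2 * skill_overlap

applicants = [
    {"name": "A. Example", "gender": "F", "nationality": "X",
     "years_experience": 6, "skills": ["python", "sql"]},
    {"name": "B. Example", "gender": "M", "nationality": "Y",
     "years_experience": 4, "skills": ["project management"]},
]

# Note: removing explicit demographic fields does not, by itself, remove the
# proxy bias described above; traits correlated with group membership can
# still leak in, which is why human oversight remains essential.
for candidate in sorted(applicants, key=score, reverse=True):
    print(blind(candidate), score(candidate))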

Don't fret! There are ways to consciously program the system to eliminate, or at least reduce, existing and inherent biases. This is a perfect example of how humans and AI will collaborate in recruitment. So, for those of you who are worried that AI will put you out of a job, it won't. It will, however, change the nature of your day-to-day work, and for the better.

The AI recruitment revolution is still in its infancy, but HR teams know that there is no better combination than artificial intelligence and emotional intelligence to build a winning team.

Voices in AI – Episode 26: A Conversation with Peter Lee


In this episode, Byron and Peter talk about defining intelligence, Venn diagrams, transfer learning, image recognition, and Xiaoice.





Byron Reese:  This is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today our guest is Peter Lee. He is a computer scientist and corporate Vice President at Microsoft Research. He leads Microsoft's New Experiences and Technologies organization, or NExT, with the mission to create research-powered technology and products and advance human knowledge through research. Prior to Microsoft, Dr. Lee held positions in both government and academia. At DARPA, he founded a division focused on R&D programs in computing and related areas. Welcome to the show, Peter.

Peter Lee:  Thank you. It’s great to be here.

I always like to start with a seemingly simple question which turns out not to be quite so simple. What is artificial intelligence?

Wow. That is not a simple question at all. I guess the simple, one line answer is artificial intelligence is the science or the study of intelligent machines. And, I realize that definition is pretty circular, and I am guessing that you understand that that’s the fundamental difficulty, because it leaves open the question: what is intelligence? I think people have a lot of different ways to think about what is intelligence, but, in our world, intelligence is, “how do we compute how to set and achieve goals in the world.” And this is fundamentally what we’re all after, right now in AI.

That’s really fascinating because you’re right, there is no consensus definition on intelligence, or on life, or on death for that matter. So, I would ask that question: why do you think we have such a hard time defining what intelligence is?

I think we only have one model of intelligence, which is our own, and so when you think about trying to define intelligence it really comes down to a question of defining who we are. There's fundamental discomfort with that. That fundamental circularity is difficult. If we were able to fly off in some starship to a far-off place, and find a different form of intelligence—or different species that we would recognize as intelligent—maybe we would have a chance to dispassionately study that, and come to some conclusions. But it's hard when you're looking at something so introspective.

When you get into computer science research, at least here at Microsoft Research, you do have to find ways to focus on specific problems; so, we ended up focusing our research in AI—and our tech development in AI, roughly speaking—in four broad categories, and I think these categories are a little bit easier to grapple with. One is perception—that’s endowing machines with the ability to see and hear, much like we do. The second category is learning—how to get machines to get better with experience? The third is reasoning—how do you make inferences, logical inferences, commonsense inferences about the world? And then the fourth is language—how do we get machines to be intelligent in interacting with each other and with us through language? Those four buckets—perception, learning, reasoning and language—they don’t define what is intelligence, but they at least give us some kind of clear set of goals and directions to go after.

Well, I'm not going to spend too much time down in those weeds, but I think it's really interesting. In what sense do you think it's artificial? Because it's either artificial in that it's just mechanical (or that's just a shorthand we use for that), or it's artificial in that it's not really intelligence. You're using words like "see," "hear," and "reason." Are you using those words euphemistically—can a computer really see or hear anything, or can it reason—or are you using them literally?

The question you’re asking really gets to the nub of things, because we really don’t know. If you were to draw the Venn diagram; you’d have a big circle and call that intelligence, and now you want to draw a circle for artificial intelligence—we don’t know if that circle is the same as the intelligence circle, whether it’s separate but overlapping, whether it’s a subset of intelligence… These are really basic questions that we debate, and people have different intuitions about, but we don’t really know. And then we get to what’s actually happening—what gets us excited and what is actually making it out into the real world, doing real things—and for the most part that has been a tiny subset of these big ideas; just focusing on machine learning, on learning from large amounts of data, models that are actually able to do some useful task, like recognize images.

Right. And I definitely want to go deep into that in just a minute, but I'm curious. So, there's a wide range of views about AI. Should we fear it? Should we love it? Will it take us into a new golden age? Will it do this? Will it cap out? Is an AGI possible? All of these questions.

And, I mean, if you ask, "How will we get to Mars?" well, we don't know exactly, but we kind of know. But if you ask, "What's AI going to be like in fifty years?" it's all over the map. And do you think that is because there isn't agreement on the kinds of questions I'm asking (like people have different ideas on those questions), or are the questions I'm asking not really even germane to the day-to-day "get up and start building something"?

I think there’s a lot of debate about this because the question is so important. Every technology is double-edged. Every technology has the ability to be used for both good purposes and for bad purposes, has good consequences and unintended consequences. And what’s interesting about computing technologies, generally, but especially with a powerful concept like artificial intelligence, is that in contrast to other powerful technologies—let’s say in the biological sciences, or in nuclear engineering, or in transportation and so on—AI has the potential to be highly democratized, to be codified into tools and technologies that literally every person on the planet can have access to. So, the question becomes really important: what kind of outcomes, what kinds of possibilities happen for this world when literally every person on the planet can have the power of intelligent machines at their fingertips? And because of that, all of the questions you’re asking become extremely large, and extremely important for us. People care about those futures, but ultimately, right now, our state of scientific knowledge is we don’t really know.

I sometimes talk in analogy about way, way back in the medieval times when Gutenberg invented mass-produced movable type, and the first printing press. And in a period of just fifty years, they went from thirty thousand books in all of Europe, to almost thirteen million books in all of Europe. It was sort of the first technological Moore’s Law. The spread of knowledge that that represented, did amazing things for humanity. It really democratized access to books, and therefore to a form of knowledge, but it was also incredibly disruptive in its time and has been since.

In a way, the potential we see with AI is very similar, and maybe even a bigger inflection point for humanity. So, while I can’t pretend to have any hard answers to the basic questions that you’re asking about the limits of AI and the nature of intelligence, it’s for sure important; and I think it’s a good thing that people are asking these questions and they’re thinking hard about it.

Well, I’m just going to ask you one more and then I want to get more down in the nitty-gritty. 

If the only intelligent thing we know of in the universe, the only general intelligence, is our brain, do you think it’s a settled question that that functionality can be reproduced mechanically? 

I think there is no evidence to the contrary. Every way that we look at what we do in our brains, we see mechanical systems. So, in principle, if we have enough understanding of how our own mechanical system of the brain works, then we should be able to, at a minimum, reproduce that. Now, of course, the way that technology develops, we tend to build things in different ways, and so I think it’s very likely that the kind of intelligent machines that we end up building will be different than our own intelligence. But there’s no evidence, at least so far, that would be contrary to the thesis that we can reproduce intelligence mechanically.

So, to take the opposite position for a moment. Somebody could say there's absolutely no evidence to suggest that we can, for the following reasons. One, we don't know how the brain works. We don't know how thoughts are encoded. We don't know how thoughts are retrieved. Aside from that, we don't know how the mind works. We don't know how it is that we have capabilities that seem to be beyond what a hunk of grey matter could do: we're creative, we have a sense of humor and all these other things. We're conscious, and we don't even have a scientific language for understanding how consciousness could come about. We don't even know how to ask that question or look for that answer, scientifically. So, somebody else might look at it and say, "There's no reason whatsoever to believe we can reproduce it mechanically."

I’m going to use a quote here from, of all people, a non-technologist Samuel Goldwyn, the old movie magnate. And I always reach to this when I get put in a corner like you’re doing to me right now, which is, “It’s absolutely impossible, but it has possibilities.”

All right.

Our current understanding is that brains are fundamentally closed systems, and so we're learning more and more, and in fact what we learn is loosely inspiring some of the things we're doing in AI systems, and making progress. How far that goes is really, as you say, unclear, because there are so many mysteries, but it sure looks like there are a lot of possibilities.

Now to get kind of down to the nitty-gritty, let's talk about difficulties and where we're being successful and where we're not. My first question is, why do you think AI is so hard? Because humans acquire their intelligence seemingly simply, right? You put a little kid in playschool and you show them some red, and you show them the number three, and then, all of a sudden, they understand what three red things are. I mean, we, kind of, become intelligent so naturally, and yet the frequent flyer program I call into can't tell, when I'm telling it my number, whether I said 8 or H. Why do you think it's so hard?

What you said is true, although it took you many years to reach that point. And even a child that’s able to do the kinds of things that you just expressed has had years of life. The kinds of expectations that we have, at least today—especially in the commercial sphere for our intelligent machines—sometimes there’s a little bit less patience. But having said that, I think what you’re saying is right.

I mentioned before this Venn diagram; so, there’s this big circle which is intelligence, and let’s just assume that there is some large subset of that which is artificial intelligence. Then you zoom way, way in, and a tiny little bubble inside that AI bubble is machine learning—this is just simply machines that get better with experience. And then a tiny bubble inside that tiny bubble is machine learning from data—where the models that are extracted, that codify what has been learned, are all extracted from analyzing large amounts of data. That’s really where we’re at today—in this tiny bubble, inside this tiny bubble, inside this big bubble we call artificial intelligence.

What is remarkable is that, despite how narrow our understanding is—for the most part all of the exciting progress is just inside this little, tiny, narrow idea of machine learning from data, and there's an even smaller bubble inside that of learning in a supervised manner—even from that we're seeing tremendous power, a tremendous ability to create new computing systems that do some pretty impressive and valuable things. It is pretty crazy just how valuable that's become to companies like Microsoft. At the same time, it is such a narrow little slice of what we understand of intelligence.

The simple examples that you mentioned, for example, like one-shot learning, where you can show a small child a cartoon picture of a fire truck, and even if that child has never seen a fire truck before in her life, you can take her out on the street, and the first real fire truck that goes down the road the child will instantly recognize as a fire truck. That sort of one-shot idea, you’re right, our current systems aren’t good at.

While we are so excited about how much progress we’re making on learning from data, there are all the other things that are wrapped up in intelligence that are still pretty mysterious to us, and pretty limited. Sometimes, when that matters, our limits get in the way, and it creates this idea that AI is actually still really hard.

You’re talking about transfer learning. Would you say that the reason she can do that is because at another time she saw a drawing of a banana, and then a banana? And another time she saw a drawing of a cat, and then a cat. And so, it wasn’t really a one-shot deal. 

How do you think transfer learning works in humans? Because that seems to be what we're super good at. We can take something that we learned in one place and transfer that knowledge to another context. You know, "Find, in this picture, the Statue of Liberty covered in peanut butter," and I can pick that out having never seen a Statue of Liberty in peanut butter, or anything like that.

Do you think that's a simple trick we don't understand how to do yet? Is that what you want it to be, like an "a-ha" moment, where you discover the basic idea? Or do you think it's a hundred tiny little hacks, and transfer learning in our minds is just, like, some spaghetti code written by some drunken programmer who was on a deadline, right? What do you think that is? Is it a simple thing, or is it a really convoluted, complicated thing?

Transfer learning turns out to be incredibly interesting, scientifically, and also commercially for Microsoft, turns out to be something that we rely on in our business. What is kind of interesting is, when is transfer learning more generally applicable, versus being very brittle?

For example, in our speech processing systems, the actual commercial speech processing systems that Microsoft provides, we use transfer learning, routinely. When we train our speech systems to understand English speech, and then we train those same systems to understand Portuguese, or Mandarin, or Italian, we get a transfer learning effect, where the training for that second, and third, and fourth language requires less data and less computing power. And at the same time, each subsequent language that we add onto it improves the earlier languages. So, training that English-based system to understand Portuguese actually improves the performance of our speech systems in English, so there are transfer learning effects there.

In our image recognition tasks, there is something called the ImageNet competition that we participate in most years, and the last time that we competed was two years ago in 2015. There are five image processing categories. We trained our system to do well on Category 1—on the basic image classification—then we used transfer learning to not only win the first category, but to win all four other ImageNet competitions. And so, without any further kind of specialized training, there was a transfer learning effect.

Transfer learning actually does seem to happen. In our deep neural net, deep learning research activities, transfer learning effects—when we see them—are just really intoxicating. It makes you think about what you and I do as human beings.

At the same time, it seems to be this brittle thing. We don’t necessarily understand when and how this transfer learning effect is effective. The early evidence from studying these things is that there are different forms of learning, and that somehow the one-shot ideas that even small children are very good at, seem to be out of the purview of the deep neural net systems that we’re working on right now. Even this intuitive idea that you’ve expressed of transfer learning, the fact is we see it in some cases and it works so well and is even commercially-valuable to us, but then we also see simple transfer learning tasks where these systems just seem to fail. So, even those things are kind of mysterious to us right now.

It seems to me (and I don't have any evidence to support this, it's just a gut feeling) that maybe what you're describing isn't pure transfer learning, but rather what you're saying is, "We built a system that's really good at translating languages, and it works on a lot of different languages."

It seems to me that the essence of transfer learning is when you take it to a different discipline, for example, "Because I learned a second language, I am now a better artist. Because I learned a second language, I'm now a better cook." That, somehow, we take things that are in one discipline, and they add to the richness, depth, and dimensionality of our knowledge in a way that really impacts our relationships.

I was chatting with somebody the other day who said that learning a second language was the most valuable thing he'd ever done, and that his personality in that second language is different than his English personality. I hear what you're saying, and I think those are hints that point us in the right direction. But I wonder if, at its core, what humans do is really multidimensional, and that's why we can seemingly do the one-shot things, because we're taking things that are absolutely unrelated, like cartoon drawings of something, and relating them to real life. Do you have even any kind of a gut reaction to that?

One thing, at least in our current understanding of the research fields, is that there is a difference between learning and reasoning. The example I like to go to is, we’ve done quite a bit of work on language understanding, and specifically in something called machine reading—where you want to be able to read text and then answer questions about the text. And a classic place where you look to test your machine reading capabilities is parts of the verbal part of the SAT exam. The nice thing about the SAT exam is you can try to answer the questions and you can measure the progress just through the score that you get on the test. That’s steadily improving, and not just here at Microsoft Research, but at quite a few great university research areas and centers.

Now, subject those same systems to, say, the third-grade California Achievement Test, and the intelligence systems just fall apart. If you look at what third graders are expected to be able to do, there is a level of commonsense reasoning that seems to be beyond what we try to do in our machine reading system. So, for example, one kind of question you’ll get on that third-grade achievement test is, maybe, four cartoon drawings: a ball sitting on the grass, some raindrops, an umbrella, and a puppy dog—and you have to know which pairs of things go together. Third-graders are expected to be able to make the right logical inferences from having the right life experiences, the right commonsense reasoning inferences to put those two pairs together, but we don’t actually have the AI systems that, reliably, are able to do that. That commonsense reasoning is something that seems to be—at least today, with the state of today’s scientific and technological knowledge—outside of the realm of machine learning. It’s not something that we think machine learning will ultimately be effective at.

That distinction is important to us, even commercially. I’m looking at an e-mail today that someone here at Microsoft sent me to get ready to talk to you today. The e-mail says, it’s right in front of me here, “Here is the briefing doc for tomorrow morning’s podcast. If you want to review it tonight, I’ll print it for you tomorrow.” Right now, the system has underlined, “want to review tonight,” and the reason it’s underlined that is it’s somehow made the logical commonsense inference that I might want a reminder on my calendar to review the briefing documents. But it’s remarkable that it’s managed to do that, because there are references to tomorrow morning as well as tonight. So, making those sorts of commonsense inferences, doing that reasoning, is still just incredibly hard, and really still requires a lot of craftsmanship by a lot of smart researchers to make real.

It's interesting because you say, you had just one line in there, that solving the third-grade problem isn't a machine learning task, so how would we solve that? Or, put another way, I often ask these Turing Test systems, "What's bigger, a nickel or the sun?" and none of them have ever been able to answer it. Because "sun" is ambiguous, maybe, and "nickel" is ambiguous.

In any case, if we don’t use machine learning for those, how do we get to the third grade? Or do we not even worry about the third grade? Because most of the problems we have in life aren’t third-grade problems, they’re 12th-grade problems that we really want the machines to be able to do. We want them to be able to translate documents, not match pictures of puppies. 

Well, for sure, if you just look at what companies like Microsoft, and the whole tech industry, are doing right now, we’re all seeing, I think, at least a decade, of incredible value to people in the world just with machine learning. There are just tremendous possibilities there, and so I think we are going to be very focused on machine learning and it’s going to matter a lot. It’s going to make people’s lives better, and it’s going to really provide a lot of commercial opportunities for companies like Microsoft. But that doesn’t mean that commonsense reasoning isn’t crucial, isn’t really important. Almost any kind of task that you might want help with—even simple things like making travel arrangements, shopping, or bigger issues like getting medical advice, advice about your own education—these things almost always involve some elements of what you would call commonsense reasoning, making inferences that somehow are not common, that are very particular and specific to you, and maybe haven’t been seen before in exactly that way.

Now, having said that, in the scientific community, in our research and amongst our researchers, there’s a lot of debate about how much of that kind of reasoning capability could be captured through machine learning, and how much of it could be captured simply by observing what people do for long enough and then just learning from it. But, for me at least, I see what is likely is that there’s a different kind of science that we’ll need to really develop much further if we want to capture that kind of commonsense reasoning.

Just to give you a sense of the debate, one thing that we've been doing—it's been an experiment ongoing in China—is we have a new kind of chatbot technology in China that takes the form of a person named Xiaoice. Xiaoice is a persona that lives on social media in China, and actually has a large number of followers, tens of millions of followers.

Typically, when we think about chatbots and intelligent agents here in the US market—things like Cortana, or Siri, or Google Assistant, or Alexa—we put a lot of emphasis on semantic understanding; we really want the chatbot to understand what you're saying at the semantic level. For Xiaoice, we ran a different experiment: instead of trying to put in that level of semantic understanding, we looked at what people say on social media, and we used natural language processing to pick out statement-response pairs, templatize them, and put them in a large database. And so now, if you say something to Xiaoice in China, Xiaoice looks at what other people say in response to an utterance like that. Maybe it'll come up with a hundred likely responses based on what other people have done, and then we use machine learning to rank order those likely responses, trying to optimize the enjoyment and engagement in the conversation, to optimize the likelihood that the human being who is engaged in the conversation will stick with the conversation. Over time, Xiaoice has become extremely effective at doing that. In fact, for the top, say, twenty million people who interact with Xiaoice on a daily basis, the conversations are taking more than twenty-three turns.

What's remarkable about that—and fuels the debate about what's important in AI and what's important in intelligence—is that at least the core of Xiaoice really doesn't have any understanding at all about what you're talking about. In a way, it's just very intelligently mimicking what other people do in successful conversations. It raises the question, when we're talking about machines, and machines that at least appear to be intelligent, what's really important? Is it really a purely mechanical, syntactic system, like the one we're experimenting with in Xiaoice, or is it something where we want to codify and encode our semantic understanding of the world and the way it works, the way we're doing, say, with Cortana?

These are fundamental debates in AI. What's sort of cool, at least in my day-to-day work here at Microsoft, is we are in a position where we're able, and allowed, to do fundamental research in these things, but also build and deploy very large experiments just to see what happens and to try to learn from that. It's pretty cool. At the same time, I can't say that leaves me with clear answers yet. Not yet. It just leaves me with great experiences, and we're sharing what we're learning with the world, but it's much, much harder to then say, definitively, what these things mean.

You know, it's true. In 1950, Alan Turing asked, "Can a machine think?" And that's still a question that many can't agree on, because they don't necessarily agree on the terms. But you're right, that chatbot could pass the Turing Test, in theory. At twenty-three turns, if you didn't tell somebody it was a chatbot, maybe it would pass it.

But you're right that it's somehow unsatisfying to treat that as some big milestone. Because if you saw it as a user in slow motion—you asked a question, and then it did a query, and then it pulled back a hundred things and rank ordered them, and looked for how many of those had successful follow-ups, and thumbs up, and smiley faces, and then it gave you one… It's that whole thing about, once you know how the magic trick works, it isn't nearly as interesting.

It's true. And with respect to achieving goals, or completing tasks in the world with the help of the Xiaoice chatbot, well, in some cases it's pretty amazing how helpful Xiaoice is to people. If someone says, "I'm in the market for a new smartphone, I'm looking for a larger phablet, but I still want it to fit in my purse," Xiaoice is amazingly effective at giving you a great answer to that question, because it's something that a lot of people talk about when they're shopping for a new phone.

At the same time, Xiaoice might not be so good at helping you decide which hotels to stay in, or helping you arrange your next vacation. It might provide some guidance, but maybe not exactly the right guidance that's been well thought out. One more thing to say about this is, today—at least at the scale and practicality that we're talking about—for the most part, we're learning from data, and that data is essentially the digital exhaust from human thought and activity. There's also another sense in which Xiaoice, while it passes the Turing Test, is, in some ways, limited by human intelligence, because almost everything it's able to do is observed and learned from what other people have done. We can't discount the possibility of future systems which are less data dependent, and are able to just understand the structure of the world, and the problems, and learn from that.

Right. I guess Xiaoice wouldn't know the answer to, "What's bigger, a nickel or the sun?"

That’s right, yes.

Unless the transcript of this very conversation were somehow part of the training set. But you notice, I've never answered it. I've never given the answer away, so it still wouldn't know.

We should try the experiment at some point.

Why do you think we personify these AIs? You know about Weizenbaum and ELIZA and all of that, I assume. He got deeply disturbed when people were relating to ELIZA, knowing it was a chatbot. He got deeply concerned that people poured out their hearts to it, and he said that when the machine says, "I understand," it's just a lie. That there's no "I," and there's nothing that understands anything. Do you think that somehow confuses relationships with people, and that there are unintended consequences to the personification of these technologies that we don't necessarily know about yet?

I’m always internally scolding myself for falling into this tendency to anthropomorphize our machine learning and AI systems, but I’m not alone. Even the most hardened, grounded researcher and scientist does this. I think this is something that is really at the heart of what it means to be human. The fundamental fascination that we have and drive to propagate our species is surfaced as a fascination with building autonomous intelligent beings. It’s not just AI, but it goes back to the Frankenstein kinds of stories that have just come up in different guises, and different forms throughout, really, all of human history.

I think we just have a tremendous drive to build machines, or other objects and beings, that somehow capture and codify, and therefore promulgate, what it means to be human. And nothing defines that more for us than some sort of codification of human intelligence, and especially human intelligence that is able to be autonomous, make its own decisions, make its own choices moving forward. It’s just something that is so primal in all of us. Even in AI research, where we really try to train ourselves and be disciplined about not making too many unfounded connections to biological systems, we fall into the language of biological intelligence all the time. Even the four categories I mentioned at the outset of our conversation—perception, learning, reasoning, language—these are pretty biologically inspired words. I just think it’s a very deep part of human nature.

That could well be the case. I have a book coming out on AI in April of 2018 that talks about these questions, and there's a whole chapter about how long we've been doing this. And you're right, it goes back to the Greeks, and the eagle that allegedly plucked out Prometheus' liver every day, in some accounts, was a robot. There's just tons of them. The difference of course, now, is that, up until a few years ago, it was all fiction, and so these were just stories. And we don't necessarily want to build everything that we can imagine in fiction. I still wrestle with it, that, somehow, we are going to conflate humans and machines in a way which might be to the detriment of humans, and not to the ennobling of the machine, but time will tell.

Every technology, as we discussed earlier, is double-edged. Just to strike an optimistic note here—to your last comment, which is, I think, very important—I do think that this is an area where people are really thinking hard about the kinds of issues you just raised. I think that's in contrast to what was happening in computer science and the tech industry even just a decade ago, where there was more or less an ethos of, "Technology is good and more technology is better." I think now there's much more enlightenment about this. I think we can't impede the progress of science and technology development, but what is so good and so important is that, at least as a society, we're really trying to be thoughtful about both the potential for good, as well as the potential for bad, that comes out of all of this. I think that gives us a much better chance that we'll get more of the good.

I would agree. I think the only other parallel to this, where there's been so much philosophical discussion about the implications of the technology, is the harnessing of the atom. If you read the contemporary literature written at the time, people were saying, "It could be energy too cheap to meter, or it could be weapons of colossal destruction, or it could be both." There was a precedent there for a long and thoughtful discussion about the implications of the technology.

It’s funny you mentioned that because that reminds me of another favorite quote of mine which is from Albert Einstein, and I’m sure you’re familiar with it. “The difference between stupidity and genius is that genius has its limits.”

That’s good. 

And of course, he said that at the same time that a lot of this was developing. It was a pithy way to tell the scientific community, and the world, that we need to be thoughtful and careful. And I think we’re doing that today. I think that’s emerging very much so in the field of AI.

There's a lot of practical concern about the effect of automation on employment, and of these technologies on the planet. Do you have an opinion on how that's all going to unfold?

Well, for sure, I think it's very likely that there's going to be massive disruptions in how the world works. I mentioned the printing press, the Gutenberg press, movable type; there was incredible disruption there. When you have nine doublings in the spread of books and printing presses in the period of fifty years, that's a real medieval Moore's Law. And if you think about the disruptive effect of that, by the early 1500s, the whole notion of what it meant to educate your children suddenly involved making sure that they could read and write. That's a skill that takes a lot of expense and years of formal training, and it has this sort of disruptive impact. So, while the overall impact on the world and society was hugely positive—really the printing press laid the foundation for the Age of Enlightenment and the Renaissance—it had an absolutely disruptive effect on what it meant and what it took for people to succeed in the world.

AI, I’m pretty sure, is going to have the same kind of disruptive effect, because it has the same sort of democratizing force that the spread of books has had. And so, for us, we’ve been trying very hard to keep the focus on, “What can we do to put AI in the hands of people, that really empowers them, and augments what they’re able to do? What are the codifications of AI technologies that enable people to be more successful in whatever they’re pursuing in life?” And that focus, that intent by our research labs and by our company, I think, is incredibly important, because it takes a lot of the inventive and innovative genius that we have access to, and tries to point it in the right direction.

Talk to me about some of the interesting work you’re doing right now. Start with the healthcare stuff, what can you tell us about that?

Healthcare is just incredibly interesting. I think there are maybe three areas that just really get me excited. One is just fundamental life sciences, where we're seeing some amazing opportunities and insights being unlocked through the use of machine learning and large-scale data analytics, applied to the data that's being produced increasingly cheaply through, say, gene sequencing, and through our ability to measure signals in the brain. What's interesting about these things is that, over and over again, in other areas, if you put great innovative research minds and machine learning experts together with data and computing infrastructure, you get this burst of unplanned and unexpected innovations. Right now, in healthcare, we're just getting to the point where we're able to arrange the world in such a way that we're able to get really interesting health data into the hands of these innovators, and genomics is one area that's super interesting there.

Then, there is the basic question of, “What happens in the day-to-day lives of doctors and nurses?” Today, doctors are spending an average—there are several recent studies about this—of one hundred and eight minutes a day just entering health data into electronic health record systems. This is an incredible burden on those doctors, though it’s very important because it’s managed to digitize people’s health histories. But we’re now seeing an amazing ability for intelligent machines to just watch and listen to the conversation that goes on between the doctor and the patient, and to dramatically reduce the burden of all of that record keeping on doctors. So, doctors can stop being clerks and record keepers, and instead actually start to engage more personally with their patients.

And then the third area, which I'm very excited about but maybe is a little more geeky, is determining how we can create a system, how we can create a cloud, where more data is open to more innovators, where, for great researchers at universities and great innovators at startups who really want to make a difference in health, we can provide a platform and a cloud that supplies them with access to lots of valuable data, so they can innovate, so they can create models that do amazing things.

Those three things all really get me excited because the combination of them, I think, can really make the lives of doctors, and nurses, and other clinicians better; can really lead to new diagnostic and therapeutic technologies; and can unleash the potential of great minds and innovators. Stepping back for a minute, it really just amounts to creating systems that allow innovators, data, and computing infrastructure to all come together in one place, and then just having the faith that when you do that, great things will happen. Healthcare is just a huge opportunity area for doing this, one that I've become really passionate about.

I guess we will reach a point where you can have essentially the very best doctor in the world in your smartphone, and the very best psychologist, and the very best physical therapist, and the very best everything, right? All available at essentially no cost. I guess the internet always provided, at some abstract level, all of that information, if you had an infinite amount of time and patience to find it. And the promise of AI, the kinds of things you're doing, is that it's that difference, what did you say, between learning and reasoning, that kind of bridges that gap. So, paint me a picture of what you think, just in the healthcare arena, the world of tomorrow will look like. What's the thing that gets you excited?

I don’t actually see healthcare ever getting away from being an essentially human-to-human activity. That’s something very important. In fact, I predict that healthcare will still be largely a local activity where it’s something that you will fundamentally access from another person in your locality. There are lots of reasons for this, but there’s something so personal about healthcare that it ends up being based in relationships. I see AI in the future relieving senseless and mundane burden from the heroes in healthcare—the doctors, and nurses, and administrators, and so on—that provide that personal service.

So, for example, we’ve been experimenting with a number of healthcare organizations with our chatbot technology. That chatbot technology is able to answer—on demand, through a conversation with a patient—routine and mundane questions about some health issue that comes up. It can do a, kind of, mundane textbook triage, and then, once all that is done, make an intelligent connection to a local healthcare provider, summarize very efficiently for the healthcare provider what’s going on, and then really allow the full creative potential and attention of the healthcare provider to be put to good use.

Another thing that we'll be showing off to the world at a major radiology conference next week is the use of computer vision and machine learning to learn the habits and tricks of the trade of radiologists who are doing radiation therapy planning. Right now, radiation therapy planning involves, kind of, pixel-by-pixel clicking on radiological images that is extremely important; it has to be done precisely, but it also has some artistry. Every good radiologist has his or her different kinds of approaches to this. So, one nice thing about machine learning-based computer vision today is that you can actually observe and learn what radiologists do, their practices, and then dramatically accelerate and relieve a lot of the mundane efforts, so that instead of two hours of work that is largely mundane with only maybe fifteen minutes of that being very creative, we can automate the noncreative aspects of this, and allow the radiologists to devote that full fifteen minutes, or even half an hour, to really thinking through the creative aspects of radiology. So, it's more of an empowerment model rather than replacing those healthcare workers. It still relies on human intuition; it still relies on human creativity, but hopefully it allows more of that intuition, and more of that creativity, to be harnessed by taking away some of the mundane and time-consuming aspects of things.

These are approaches that I view as very human-focused, very humane ways to, not just make healthcare workers more productive, but to make them happier and more satisfied in what they do every day. Unlocking that with AI is just something that I feel is incredibly important. And it’s not just us here at Microsoft that are thinking this way, I’m seeing some really enlightened work going on, especially with some of our academic collaborators in this way. I find it truly inspiring to see what might be possible. Basically, I’m pushing back on the idea that we’ll be able to replace doctors, replace nurses. I don’t think that’s the world that we want, and I don’t even know that that’s the right idea. I don’t think that that necessarily leads to better healthcare.

To be clear, I'm talking about the great, immense parts of the world where there aren't enough doctors for people, where there is this vast shortage of medical professionals. Surely the technology can somehow fill that gap.

Yes. I think access is great. Even with some of the health chatbot pilot deployments that we've been experimenting with right now, you can just see that potential. If people are living in parts of the world where they have access issues, it's an amazing and empowering thing to be able to just send a message to a chatbot that's always available and ready to listen, and answer questions. Those sorts of things, for sure, can make a big difference. At the same time, the real payoff is when technologies like that then enable healthcare workers—really great doctors, really great clinicians—to clear enough off their plates that their creative potential becomes available to more people; and so, you win on both ends. You win on instant access through automation, but you also have the potential to win by expanding and enhancing the throughput and the number of patients that the clinics and clinicians can deal with. It's a win-win situation in that respect.

Well said, and I agree. It sounds like overall you are bullish on the future, you're optimistic about the future and you think this technology overall is a force for great good, or am I just projecting that onto you?

I’d say we think a lot about this. I would say, in my own career, I’ve had to confront both the good and bad outcomes, both the positive and unintended consequences of technology. I remember when I was back at DARPA—I arrived at DARPA in 2009—and in the summer of 2009, there was an election in Iran where the people in Iran felt that the results were not valid. This sparked what has been called the Iranian Twitter revolution. And what was interesting about the Iranian Twitter revolution is that people were using social media, Friendster and Twitter, in order to protest the results of this election and to organize protests.

This came to my attention at DARPA, through the State Department, because it became apparent that US-developed technologies to detect cyber intrusions and to help protect corporate networks were being used by the Iranian regime to hunt down and prosecute people who were using social media to organize these protests. The US took very quick steps to stop the sale of these technologies. But the thing that's important is that these technologies, I'm pretty sure, were developed with only the best of intentions in mind—to help make computer networks safer. So, the idea that these technologies could be used to suppress free speech and freedom of assembly was, I'm sure, never contemplated.

This really, kind of, highlights the double-edged nature of technology. So, for sure, we try to bring that thoughtfulness into every single research project we have across Microsoft Research, and that motivates our participation in things like the Partnership on AI, which involves a large number of industry and academic players, because we always want the technology industry and the research world to be more and more thoughtful and enlightened on these ideas. So, yes, we're optimistic. I'm optimistic certainly about the future, but that optimism, I think, is founded on a good dose of reality: if we don't actually take proactive steps to be enlightened about both the good and bad possibilities, the good and bad outcomes, then the good things don't just happen on their own automatically. So, it's something that we work at, I guess, is the bottom line of what I'm trying to say. It's earned optimism.

I like that. “Earned optimism,” I like that. It looks like we are out of time. I want to thank you for an hour of fascinating conversation about all of these topics. 

It was really fascinating, and you've asked some of the hardest questions of the day. It was a challenge, and tons of fun to noodle on them with you.

Like, "What is bigger, the sun or a nickel?" Turns out that's a very hard question.

I'm going to ask Xiaoice that question and I'll let you know what she says.

All right. Thank you again.

Thank you.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Voices in AI – Episode 25: A Conversation with Matt Grob

In this episode, Byron and Matt talk about thinking, the Turing test, creativity, Google Translate, job displacement, and education.

Byron Reese: This is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today our guest is Matt Grob. He is the Executive Vice President of Technology at Qualcomm Technologies, Inc. Grob joined Qualcomm back in 1991 as an engineer. He also served as Qualcomm's Chief Technology Officer from 2011 to 2017. He holds a Master of Science in Electrical Engineering from Stanford, and a Bachelor of Science in Electrical Engineering from Bradley University. He holds more than seventy patents. Welcome to the show, Matt.

Matt Grob: Thanks, Byron, it’s great to be here.

So what does artificial intelligence kind of mean to you? What is it, kind of, at a high level? 

Well, it's the capability that we give to machines to sense and think and act, but it's more than just writing a program that can go one way or another based on some decision process. Really, artificial intelligence is what we think of when a machine can improve its performance without being reprogrammed, based on gaining more experience or being able to access more data. If it can get better, if it can improve its performance, then we think of that as machine learning or artificial intelligence.
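
To make that definition concrete, here is a minimal, hypothetical sketch in Python: the program itself never changes, yet its measured error typically shrinks as it gains access to more data, which is the distinction being drawn here between learning and ordinary reprogramming.

import random

def fit(examples):
    # "Train" a one-parameter model: estimate the underlying value as the mean of the examples.
    return sum(examples) / len(examples)

def error(estimate, truth=10.0):
    return abs(estimate - truth)

random.seed(0)
observations = [10.0 + random.gauss(0, 2) for _ in range(1000)]  # noisy measurements of a true value of 10.0

# The code is identical each time; only the amount of experience grows,
# and the measured error typically goes down as more data is accessed.
for n in (5, 50, 500):
    print(n, round(error(fit(observations[:n])), 3))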

It learns from its environment, so every instantiation of it heads off on its own path, off to live its own AI life, is that the basic idea?

Yeah, for a long time we’ve been able to program computers to do what we want. Let’s say, you make a machine that drives your car or does cruise control, and then we observe it, and we go back in and we improve the program and make it a little better. That’s not necessarily what we’re talking about here. We’re talking about the capability of a machine to improve its performance in some measurable way without being reprogrammed, necessarily. Rather it trains or learns from being able to access more data, more experience, or maybe talking to other machines that have learned more things, and therefore improves its ability to reason, improves its ability to make decisions or drive errors down or things like that. It’s those aspects that separate machine learning, and these new fields that everyone is very excited about, from just traditional programming.

When you first started all of that, you said the computer “thinks.” Were you using that word casually or does the computer actually think?

Well, that’s a subject of a lot of debate. I need to point out, my experience, my background, is actually in signal processing and communications theory and modem design, and a number of those aspects relate to machine learning and AI, but, I don’t actually consider myself a deep expert in those fields. But there’s a lot of discussion. I know a number of the really deep experts, and there is a lot of discussion on what “think” actually means, and whether a machine is simply performing a cold computation, or whether it actually possesses true imagination or true creativity, those sorts of elements.

Now in many cases, the kind of machine that might recognize a cat from a dog—and it might be performing a certain algorithm, a neural network that’s implemented with processing elements and storage taps and so forth—is not really thinking like a living thing would do. But nonetheless it’s considering inputs, it’s making decisions, it’s using previous history and previous training. So, in many ways, it is like a thinking process, but it may not have the full, true creativity or emotional response that a living brain might have.

You know, it's really interesting, because it's not just a linguistic question at its core: either the computer is thinking, or it's simulating something that thinks. And I think the reason those are different is that they speak to what are, ultimately, the limits of what we can build.

Alan Turing way back in his essay was talking about, "Can a machine think?" He asked the question sixty-five years ago, and he said that the machine may do it a different way, but you still have to call it "thinking." So, with the caveat that you're not at the vanguard of this technology, do you personally call the ball on that one way or the other, in terms of machine thought?

Yeah, I believe, and I think the prevailing view is, though not everyone agrees, that many of the machines that we have today, the agents that run in our phones, and in the cloud, and can recognize language and conditions are not really, yet, akin to a living brain. They’re very, very useful. They are getting more and more capable. They’re able to go faster, and move more data, and all those things, and many metrics are improving, but they still fall short.

And there’s an open question as to just how far you can take that type of architecture. How close can you get? It may get to the point where, in some constrained ways, it could pass a Turing Test, and if you only had a limited input and output you couldn’t tell the difference between the machine and a person on the other end of the line there, but we’re still a long way away. There are some pretty respected folks who believe that you won’t be able to get the creativity and imagination and those things by simply assembling large numbers of AND gates and processing elements; that you really need to go to a more fundamental description that involves quantum gravity and other effects, and most of the machines we have today don’t do that. So, while we have a rich roadmap ahead of us, with a lot of incredible applications, it’s still going to be a while before we really create a real brain.

Wow, so there's a lot going on in there. One thing I just heard was, and correct me if I'm saying this wrong, that you don't believe we can necessarily build an artificial general intelligence using, like, a von Neumann architecture, like a desktop computer. And that what we're building on that trajectory can get better and better and better, but it won't ever have that spark, and that what we're going to need is the next generation of quantum computer, or just a fundamentally different architecture, and maybe those can emulate the human brain's functionality, not necessarily how it does it but what it can do. Is that fair? Is that what you're saying?

Yeah, that is fair, and I think there are some folks who believe that is the case. Now, it's not universally accepted. I'm kind of citing some viewpoints from folks like physicist Roger Penrose, and there's a group around him—the Penrose Institute, now being formed—that is exploring these things, and they make some very interesting points about the model that you use. If you take a brain and you try to model a neuron, you can do so in an efficient way, with a couple lines of mathematics, and you can replicate that in silicon with gates and processors, and you can put hundreds of thousands, or millions, or billions of them together and, sure, you can create a function that learns, and can recognize images, and control motors, and do things, and it's good. But whether or not it can actually have true creativity, many will argue that a model has to include effects of quantum gravity, and without that we won't really have these "real brains."
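
As an illustration of the "couple lines of mathematics" mentioned above, here is a minimal sketch of a single rate-based artificial neuron in Python. The weights, bias, and inputs are arbitrary illustrative values; real networks stack very large numbers of these units and tune the weights during training.

import math

def neuron(inputs, weights, bias):
    # A weighted sum of the inputs passed through a nonlinearity (here, a sigmoid).
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Arbitrary example values: three inputs, three weights, one bias term.
print(neuron([0.5, -1.0, 2.0], [0.8, 0.1, -0.4], bias=0.2))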

You read in the press about both the fears and the possible benefits of these kinds of machines, that may not happen until we reach the point where we’re really going beyond, as you said, Von Neumann, or even other structures just based on gates. Until we get beyond that, those fears or those positive effects, either one, may not occur.

Let's talk about Penrose for a minute. His basic thesis—and you probably know this better than I do—is that Gödel's incompleteness theorem says that the system we're building can't actually duplicate what a human brain can do.

Or said another way, he says there are certain mathematical problems that are not able to be solved with an algorithm. They can't be solved algorithmically, but a human can solve them. And he uses that to say, therefore, a human brain is not a computational device that just runs algorithms, that it's doing something more; and he, of course, thinks quantum tunneling and all of that. So, do you think that's what's going on in the brain? Do you think the brain is fundamentally non-computational?

Well, again, I have to be a little reserved with my answer to that because it’s not an area that I feel I have a great deep background in. I’ve met Roger, and other folks around him, and some of the folks on the other side of this debate, too, and we’ve had a lot of discussions. We’ve worked on computational neuroscience at Qualcomm for ten years; not thirty years, but ten years, for sure. We started making artificial brains that were based on the spiking neuron technique, which is a very biologically inspired technique. And again, they are processing machines and they can do many things, but they can’t quite do what a real brain can do.
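
For readers unfamiliar with the term, here is a rough, minimal sketch of a leaky integrate-and-fire unit, one common form of the spiking neuron technique mentioned above. The time constant, threshold, and input values are arbitrary illustrative choices, not anything specific to Qualcomm's work.

def lif_neuron(input_current, dt=0.001, tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    # Leaky integrate-and-fire: the membrane potential leaks toward rest,
    # integrates incoming current, and emits a spike when it crosses threshold.
    v = v_rest
    spikes = []
    for i_t in input_current:
        v += ((v_rest - v) + i_t) * (dt / tau)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset  # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A constant drive of 1.5 produces a regular spike train.
print(sum(lif_neuron([1.5] * 1000)))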

An example that was given to me was the proof of Fermat’s Last Theorem. If you’re familiar with Fermat’s Last Theorem, it was written down I think maybe two hundred years ago or more, and the creator, Fermat, a mathematician, wrote in the margin of his notebook that he had a proof for it, but then he never got to prove it. I think he lost his life. And it wasn’t until about twenty-some years ago where a researcher at Berkeley finally proved it. It’s claimed that the insight and creativity required to do that work would not be possible by simply assembling a sufficient number of AND gates and training them on previous geometry and math constructs, and then giving it this one and having the proof come out. It’s just not possible. There had to be some extra magic there, which Roger, and others, would argue requires quantum effects. And if you believe that—and I obviously find it very reasonable and I respect these folks, but I don’t claim that my own background informs me enough on that one—it seems very reasonable; it mirrors the experience we had here for a decade when we were building these kinds of machines.

I think we’ve got a way to go before some of these sci-fi type scenarios play out. Not that they won’t happen, but it’s not going to be right around the corner. But what is right around the corner is a lot of greatly improved capabilities as these techniques basically fundamentally replace traditional signal processing for many fields. We’re using it for image and sound, of course, but now we’re starting to use it in cameras, in modems and controllers, in complex management of complex systems, all kinds of functions. It’s really exciting what’s going on, but we still have a way to go before we get, you know, the ultimate.

Back to the theorem you just referenced, and I could be wrong about this, but I recall that he claimed he had a surprisingly simple proof of this theorem, and now some people say he was just wrong, that there isn't a simple proof for it. But because everybody believed there was a proof for it, we eventually solved it.

Do you know the story about a guy named Dantzig back in the 30s? He was a graduate student in statistics, and his professor had written two famous unsolved problems on the chalkboard and said, "These are famous unsolved problems." Well, Dantzig comes in late to class, and he sees them and just assumes they're the homework. He writes them down, and takes them home, and, you can guess, he solves them both. He remarked later that they seemed a little harder than normal. So, he turned them in, and it was about two weeks before the professor looked at them and realized what they were. And it's just fascinating to think that, like, that guy has the same brain I have, I mean it's far better and all that, but when you think about all those capabilities that are somewhere probably in there.

Those are wonderful stories. I love them. There’s one about Gauss when he was six years old, or eight years old, and the teacher punished the class, told everyone to add up the numbers from one to one hundred. And he did it in an instant because he realized that 100 + 0 is 100, and 99 + 1 is 100, and 98 + 2 is 100, and you can multiply those by 50. The question is, “Is a machine based on neural nets, and coefficients, and logistic regression, and SVM and those techniques, capable of that kind of insight?” Likely it is not. And there is some special magic required for that to actually happen.
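
For completeness, the trick being described is the closed-form sum of the first n integers, n(n+1)/2, which comes from pairing numbers at opposite ends of the series so that every pair adds up to the same total. A quick, purely illustrative check in Python:

n = 100
closed_form = n * (n + 1) // 2       # Gauss's pairing idea: 50 pairs such as 1+100, 2+99, ..., each summing to 101
brute_force = sum(range(1, n + 1))   # adding the numbers one at a time
print(closed_form, brute_force)      # both print 5050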

I will only ask you one more question on that topic and then let's dial it back in more to the immediate future. You said "special magic." And again, I have to ask you, like I asked you about "think," are you using "magic" colloquially, or is it just physics that we don't understand yet?

I would argue it's probably the latter. With the term "magic," there's a famous Arthur C. Clarke quote: "Sufficiently advanced technology is indistinguishable from magic." I think, in this case, the structure of a real brain and how it actually works, we might think of it as magic until we understand more than we do now. But it seems like you have to go into a deeper level, and a simple function assembled from logic gates is not enough.

In the more present day, how would you describe where we are with the science? Because it seems we're at a place where you're still pleasantly surprised when something works. It's like, "Wow, it's kind of cool, that worked." And as much as there are these milestone events like AlphaGo, or Watson, or the one that beat the poker players recently, how quickly do you think advances really are coming? Or is it the hope for those advances that's really got everyone revved up?

I think the advances are coming very rapidly, because there’s an exponential nature. You’ve got machines that have processing power which is increasing in an exponential manner, and whether it continues to do so is another question, but right now it is. You’ve got memory, which is increasing in an exponential manner. And then you’ve also got scale, which is the number of these devices that exist and your ability to connect to them. And I’d really like to get into that a little bit, too, the ability of a user to tap into a huge amount of resource. So, you’ve got all of those combined with algorithmic improvements, and, especially right now, there’s such a tremendous interest in the industry to work on these things, so lots of very talented graduates are pouring into the field. The product of all those effects is causing very, very rapid improvement. Even though in some cases the fundamental algorithm might be based on an idea from the 70s or 80s, we’re able to refine that algorithm, we’re able to couple that with far more processing power at a much lower cost than as ever before. And as a result, we’re getting incredible capabilities.

I was fortunate enough to have a dinner with the head of a Google Translate project recently, and he told me—an incredibly nice guy—that that program is now one of the largest AI projects in the world, and has a billion users. So, a billion users can walk around with their device and basically speak any language and listen to any language or read it, and that’s a tremendous accomplishment. That’s really a powerful thing, and a very good thing. And so, yeah, those things are happening right now. We’re in an era of rapid, rapid improvement in those capabilities.

What do you think is going to be the next watershed event? We’re going to have these incremental advances, and there’s going to be more self-driving cars and all of these things. But these moments that capture the popular imagination, like when the best Go player in the world loses, what do you think will be another one of those for the future?

When you talk about AlphaGo and Watson playing Jeopardy and those things, those are significant events, but they're machines that someone wheels in, and they are big machines, and they hook them up and they run, but you don't really have them available in the mobile environment. We're on the verge now of having that kind of computing power, not just available to one person doing a game show, or the Go champion in a special setting, but available to everyone at a reasonable cost, wherever they are, at any time. Also, the learning experience of one person can then benefit the rest. And so, that, I think, is the next step. It's when you can use that capability, which is already growing as I described, and make it available in a mobile environment, ubiquitously, at reasonable cost, then you're going to have incredible things.

Autonomous vehicles are an example, because that's a mobile thing. It needs a lot of processing power, and it needs processing power local to it, on the device, but it also needs to access tremendous capability in the network, and it needs to do so at high reliability and at low latency, with some interesting details there—so vehicles are a very good example. Vehicles are also something that we need to improve dramatically, from a safety standpoint, versus where we are today. It's critical to the economies of cities and nations, so there's a lot of scale. So, yeah, that's a good crucible for this.

But there are many others. Medical devices, huge applications there. And again, you want, in many cases, a very powerful capability in the cloud or in the network, but also at the device, there are many cases where you’d want to be able to do some processing right there, that can make the device more powerful or more economical, and that’s a mobile use case. So, I think there will be applications there; there can be applications in education, entertainment, certainly games, management of resources like power and electricity and heating and cooling and all that. It’s really a wide swath but the combination of connectivity with this capability together is really going to do it.

Let's talk about the immediate future. As you know, with regard to these technologies, there's kind of three different narratives about their effect on employment. One is that they're going to take every single job, everybody from a poet on down; that doesn't sound like something that would resonate with you because of the conversation we just had. Another is that this technology is going to replace a lot of low-skilled workers, that there are going to be fewer, quote, "low-skilled jobs," whatever those are, and that you're going to have this permanent underclass of unemployed people competing essentially with machines for work. And then there's another narrative that says, "No, what's going to happen is the same thing that happened with electricity, with motors, with everything else. People take that technology, use it to increase their own productivity, and go on to raise their income that way. And you're not going to have essentially any disruption, just like you didn't have any disruption when we went from animal power to machine power." Which of those narratives do you identify with, or is there a different way you would say it?

Okay, I'm glad you asked this because this is a hugely important question and I do want to make some comments. I've had the benefit of participating in the World Economic Forum, and I've talked to Brynjolfsson and McAfee, the authors of The Second Machine Age, and the whole theme of the forum a year ago was Klaus Schwab's book The Fourth Industrial Revolution and the rise of cyber-physical systems and what impact they will have. I think we know some things from history, and the question is, is the future going to repeat that or not? We know that there's the so-called Luddite fallacy which says that, "When these machines come, they're going to displace all the jobs." And we know that a thousand years ago, ninety-nine percent of the population was involved in food production, and today, I don't know, don't quote me on this, but it's like 0.5 percent or something like that. Because we had massive productivity gains, we didn't need to have that many people working on food production, and they found the ability to do other things. It's definitely true that increases in unemployment did not keep pace with increases in productivity. Productivity went up orders of magnitude; unemployment did not go up, quote, "on the orders of magnitude," and that's been the history for a thousand years. And even more recently, if you look at the government statistics on productivity, they are not increasing. Actually, some people are alarmed that they're not increasing faster than they are; they don't really reflect a spike that would suggest some of these negative scenarios.

Now, having said that, it is true that we are at a place now where machines, even with the processing that they use today, based on neural networks and SVMs and things like that, are able to replace a lot of the existing manual or repetitive type tasks. I think society as a whole is going to benefit tremendously, and there are going to be some groups that we'll have to take some care about. There have been discussions of universal basic income, which I think is a good idea. Bill Gates recently had an article about some tax ideas for machines. It's a good idea, of course, but very hard to implement, because you have to define what a robot is. You know, something like a car or a wheel; a wheel is a labor-saving device, so do you tax it? I don't know.

So, to get back to your question, I think it is true that there will be some groups that are in the short term displaced, but many of the things that people do, like caring for each other, like teaching each other, are not going away; those kinds of jobs are in ever-increasing demand. So, there'll be a migration, not necessarily a wholesale replacement. And we do have to take care with the transient effect of that, and maybe a universal type of wage might be part of an answer. I don't claim to have the answer completely. I mean, it's obviously a really hard problem that the world is grappling with. But I do feel, fundamentally, that the overall effect of all of this is going to be net positive. We're going to make more efficient use of our resources, we're going to provide services and capabilities that have never been possible before, that everyone can have, and it's going to be a net positive.

That’s an optimistic view, but it’s a very measured optimistic view. Let me play devil’s advocate from that side to say, why do you think there’ll be any disruption? What does that case look like? 

Because, if you think about it, in 1995 if somebody said, "Hey, you know what, if we take a bunch of computers and we connect them all via TCP/IP, and we build a protocol, maybe HTTP, to communicate, and maybe a markup language like HTML—you know what's going to happen? Two billion people will connect and it's going to create trillions and trillions and trillions of dollars of wealth. It's going to create Google and eBay and Amazon and Baidu. It's going to transform every aspect of society, and create an enormous number of jobs. And Etsy will come along, and people will be able to work from home. And all these thousands of things that flow out of it." You never would have made those connections, right? You never would have said, "Oh, that logically flows from snapping a bunch of computers together."

So, if we really are in a technological boom that's going to dwarf that, really won't the problem be an immense shortage of people? There's going to be all of these opportunities, and relatively few people to fill them. So, why the measured optimism from somebody who just waxed so poetic about what a big deal these technologies are?

Okay, that’s a great question. I mean, that was super. You asked will there be any disruption at all. I completely believe that we really have not a job shortage, but a skills shortage; that is the issue. And so, the burden goes then to the educational system, and the fabric of society to be able to place a value on good education and stick to it long enough that you can come up to speed in the modern sense, and be able to contribute beyond what the machines do. That is going to be a shortage, and anyone who has those skills is going to be in a good situation. But you can have disruption even in that environment.

You can have an environment where you have a skills shortage not a job shortage, and there’s disruption because the skills shortage gets worse and there’s a lot of individuals whose previous skills are no longer useful and they need to change. And that’s the tough thing. How do you retrain, in a transient case, when these advancements come very quickly? How do you manage that? What is fair? How does society distribute its wealth? I mean the mechanisms are going to change.

Right now, it's starting to become true that simply the manner in which you consume stuff, if that data is available, has value in itself, and maybe people should be compensated for it. Today, they are not, for the most part; they give it up when they sign in to these major cloud services, and so those kinds of things will have to change. I'll give you an anecdote.

Recently I went to Korea, and I met some startups there, and one of the things that happens, especially in non-curated app stores, is people develop games and they put in their effort and time and they develop a game, and they put it on there and people download it for ninety-nine cents or whatever, and they get some money. But, there are some bad actors that will see a new game, quickly download it, disassemble it back to the source, change a few little things, and republish that same game so that it looks and feels just like the original but the ninety-nine cents goes to a different place. They basically steal the work. So, this is a bad thing, and in response, there are startups now that make tools that make the software difficult to disassemble. There are multiple startups that do what I just described, and I'm sitting here listening to them and I'm realizing, "Wow, that job—in fact, that industry—didn't even exist." That is a new creation of the fact that there are un-curated app stores and mobile devices and games, and it's an example of the kind of new thing that's created, that didn't exist before.

I believe that that process is alive and well, and we’re going to continue to see more of it, and there’s going to continue to be a skills shortage more than a job shortage, and so that’s why I have a fundamentally positive view. But it is going to be challenging to meet the demands of that skills shortage. Society has to place the right value on that type of education and we all have to work together to make that happen.

You have two different threads going on there. One is this idea that we have a skills shortage, and we need to rethink education. And another one that you touched on is the way that money flows, and can people be compensated for their data, and so forth. I’d like to talk about the first one, and again, I’d like to challenge the measured amount of your optimism. 

I'll start off by saying I agree with you, that, at the beginning of the Industrial Revolution there was a vigorous debate in the United States about the value of post-literacy education. Like, think about that: is post-literacy education worth anything? Because in an agrarian society, maybe it wasn't for most people. Once you learned to read, that was what you needed. And then people said, "No, no, the jobs of the future are going to need more education. We should invest in that now." And the United States became the first country in the world to guarantee that every single person could graduate from high school. And you can make a really good case, that I completely believe, that that was a major source of our economic ascendancy in the twentieth century. And, therefore, you can extend the argument by saying, "Maybe we need grades thirteen and fourteen now, and they're vocational, and we need to do that again." I'm with you entirely, but we don't have that right now. And so, what's going to happen?

Here is where I would question the measured amount of your optimism, which is… People often say to me, "Look, this technology creates all these new jobs at the high-end, like graphic designers and geneticists and programmers, and it destroys jobs at the low-end. Are those people down at the low-end going to become programmers?" And, of course, the answer is not "Yes." The answer is—and here's my question—all that matters is, "Can everybody do a job just a little harder than the one they're currently doing?" And if the answer to that is "Yes," then what happens is the college biology professor becomes a geneticist, the high school biology teacher becomes a college teacher, the substitute teacher gets backfilled into the biology one, and all the way down, so that everybody gets just a little step up. Everybody just has to push themselves a little more, and the whole system phase shifts up, and everybody gets a raise and everybody gets a promotion. That's really what happened in the Industrial Revolution, so why is it that you don't think that that is going to be as smooth as I have just painted it?

Well, I think what you described does happen and is happening. If you look at—and again, I’m speaking from my own experience here as an engineer in a high-tech company—any engineer in a high-tech company, and you look at their output right now, and you compare it to a year or two before, they’ve all done what you describe, which is to do a little bit more, and to do something that’s a little bit harder. And we’ve all been able to do that because the fundamental processes involved improve. The tools, the fabric available to you to design things, the shared experience of the teams around you that you tap into—all those things improved. So, everyone is actually doing a job that’s a little bit harder than they did before, at least if you’re a designer.

You also cited some other examples, a teacher at one level going to the next level. That’s a kind of a queue, and there’s only so many spots at so many levels based on the demographics of the population. So not everyone can move in that direction, but they can all—at a given grade level—endeavor to teach more. Like, our kids, the math they do now is unbelievable. They are as much as a year or so ahead of when I was in high school, and I thought that we were doing pretty good stuff then, but now it’s even more.

I am optimistic that those things are going to happen, but you do have a labor force of certain types of jobs, where people are maybe doing them for ten, twenty, thirty years, and all of a sudden that is displaced. It’s hard to ask someone who’s done a repetitive task for much of their career to suddenly do something more sophisticated and different. That is the problem that we as a society have to address. We have to still value those individuals, and find a way—like a universal wage or something like that—so they can still have a good experience. Because if you don’t, then you really could have a dangerous situation. So, again, I feel overall positive, but I think there’s some pockets that are going to require some difficult thinking, and we’ve got to grapple with it.

Alright. I agree with your overall premise, but I will point out that that’s exactly what everybody said about the farmers—that you can’t take these people that have farmed for twenty or thirty years, and all of a sudden expect them to be able to work in a factory. The rhythm of the day is different, they have a supervisor, there’s bells that ring, they have to do different jobs, all of this stuff; and yet, that’s exactly what happened. 

I think there’s a tendency to short human ability. That being said, technological advance, interestingly, distributes its financial gains in a very unequal measure and there is something in there that I do agree we need to think about. 

Let’s talk about Qualcomm. You are the EVP of technology. You were the CTO. You’ve got seventy patents, like I said in your intro. What is Qualcomm’s role in this world? How are you working to build the better tomorrow? 

Okay, great. We provide connections between people, and increasingly between their worlds and between devices. Let me be specific about what I mean by that. When the company started—by the way, I’ve been at Qualcomm since ‘91, company started in ‘85-‘86 timeframe—one of the first things we did early on was we improved the performance and capacity of cellular networks by a huge amount. And that allowed operators like Verizon, AT&T, and Sprint—although they had different names back then—to offer, initially, voice services to large numbers of people at reasonably low cost. And the devices, thanks to the work of Qualcomm and others, got smaller, had longer battery life, and so forth. As time went on, it was originally connecting people with voice and text, and then it became faster and more capable so you could do pictures and videos, and then you could connect with social networks and web pages and streaming, and you could share large amounts of information.

We’re in an era now where I don’t just send a text message and say, “Oh, I’m skiing down this slope, isn’t this cool.” I can have a 360°, real-time, high-quality, low-latency sharing of my entire experience with another user, or users, somewhere else, and they can be there with me. And there’s all kinds of interesting consumer, industrial, medical, and commercial applications for that.

We’re working on that and we’re a leading developer of the connectivity technology, and also what you do with it on the endpoints—the processors, the camera systems, the user interfaces, the security frameworks that go with it; and now, increasingly, the machine learning and AI capabilities. We’re applying it, of course, to smartphones, but also to automobiles, medical devices, robotics, to industrial cases, and so on.

We’re very excited about the pending arrival of what we call 5G, which is the next generation of cellular technology, and it’s going to show up in the 2019-2020 timeframe. It’s going to be in the field maybe ten, fifteen years, just like the previous generations were, and it’s going to provide, again, another big step in the performance of your radio link. And when I say “performance,” I mean the speed, of course, but also the latency will be very low—in many modes it can be a millisecond or less. That will allow you to do functions that used to be on one side of the link on the other side. You can have very reliable systems.

There are a thousand companies participating in the standards process for this. It used to be just primarily the telecom industry, in the past with 3G and 4G—and of course, the telecom industry is very much still involved—but there are so many other businesses that will be enabled with 5G. So, we’re super excited about the impact it’s going to have on many, many businesses. Yeah, that’s what we’re up to these days.

Go with that a little more, paint us a picture. I don’t know if you remember those commercials back in the ’90s saying, “Can you imagine sending a fax from the beach? You will!” and other “Can you imagine” scenarios. They kind of all came true, other than that there wasn’t as much faxing as I think they expected. But, what do you think? Tell me some of the things that you think, in a reasonable amount of time, five years let’s say, we’re going to be able to do.

I’m so fascinated that you used that example, because that one I know very well. Those AT&T commercials, you can still watch them on YouTube, and it’s fun to do so. They did say people will be able to send a fax from the beach, and that particular ad motivated the operators to want to send fax over cellular networks. And we worked on that—I worked on that myself—and we used that as a way to build the fundamental Internet transport, and the fax was kind of the motivation for it. But later, we used the Internet transport for internet access and it became a much, much bigger thing. The next step will be sharing fully immersive experiences, so you can have high-speed, low-latency video in both directions.

Autonomous vehicles, but before we even get to fully autonomous—because there’s some debate about when we’re going to get to a car that you can get into with no steering wheel and it just takes you where you want to go; that’s still a hard problem. Before we have fully autonomous cars that can take you around without a steering wheel, we’re going to have a set of technologies that improve the safety of semiautonomous cars. Things like lane assist, and better cruise control, and better visibility at night, and better navigation; those sorts of things. We’re also working on vehicle-to-vehicle communication, which is another application of low-latency, and can be used to improve safety.

I’ll give you a quick anecdote on that. In some sense we already have a form of it, it’s called brake lights. Right now, when you’re driving down the highway, and the car in front puts on its brake lights, you see that and then you take action, you may slow down or whatever. You can see a whole bunch of brake lights, if the traffic is starting to back up, and that alerts you to slow down. Brake lights have transitioned from incandescent bulbs, which take something like one hundred milliseconds to turn on, to LED bulbs, which take one millisecond to turn on. And if you multiply a hundred milliseconds by highway speed, it’s six to eight feet depending on the speed, and you realize that low latency can save lives, and make the system more effective.
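For rough scale, the extra distance a trailing car covers is just speed multiplied by the added turn-on latency. Here is a minimal Python sketch; the speeds are assumptions chosen for illustration, and only the roughly 100-millisecond incandescent versus 1-millisecond LED figures come from the anecdote above.

# Distance traveled while waiting for the brake light ahead to illuminate.
# The speeds are assumed for illustration; the ~100 ms vs ~1 ms turn-on
# times come from the conversation above.
MPH_TO_FT_PER_S = 5280 / 3600  # 1 mph = ~1.47 ft/s

def latency_distance_ft(speed_mph, latency_s):
    """Feet traveled during the bulb's turn-on latency."""
    return speed_mph * MPH_TO_FT_PER_S * latency_s

for speed_mph in (45, 55, 65):  # assumed highway-ish speeds
    incandescent_ft = latency_distance_ft(speed_mph, 0.100)  # ~100 ms turn-on
    led_ft = latency_distance_ft(speed_mph, 0.001)           # ~1 ms turn-on
    print(f"{speed_mph} mph: incandescent ~{incandescent_ft:.1f} ft, LED ~{led_ft:.2f} ft")

At the assumed 45 to 55 mph, the incandescent latency works out to roughly six to eight feet, which matches the figure quoted above.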

That’s one of the hallmarks of 5G, is we’re going to be able to connect things at low-latency to improve the safety or the function. Or, in the case of machine learning, where sometimes you want processing to be done in the phone, and sometimes you want to access enormous processing in the cloud, or at the edge. When we say edge, in this context, we mean something very close to the phone, within a small number of hops or routes to get to that processing. If you do that, you can have incredible capability that wasn’t possible before.

To give you an example of what I’m talking about, I recently went to the Mobile World Congress America show in San Francisco, it’s a great show, and I walked through the Verizon booth and I saw a demonstration that they had made. In their demonstration, they had taken a small consumer drone, and I mean it’s a really tiny one—just two or three inches long—that costs $18. All this little thing does is send back video, live video, and you control it with Wi-Fi, and they had it following a red balloon. The way it followed it was, it sent the video to a very powerful edge processing computer, which then performed a sophisticated computer vision and control algorithm and then sent the commands back. So, what you saw was this little low-cost device doing something very sophisticated and powerful, because it had a low-latency connection to a lot of processing power. And then, just to really complete that, they switched it from edge computing, that was right there at the booth, to a cloud-based computing service that was fifty milliseconds away, and once they did that, the little demo wouldn’t function anymore. They were showing the power of low-latency, high-speed video and media-type communication, which enabled a simple device to do something similar to a much more complex device, in real time, and they could offer that almost like a service.
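One way to read that demonstration: the balloon-tracking loop only closes if the round trip (video up, vision processing, commands back) fits inside the drone’s control update period. Here is a minimal Python sketch of that feasibility check; the control period and processing time are assumptions, and only the roughly 50-millisecond cloud figure comes from the anecdote.

def loop_closes(network_rtt_ms, processing_ms, control_period_ms):
    """True if a fresh command can arrive before the next control update is due."""
    return network_rtt_ms + processing_ms <= control_period_ms

CONTROL_PERIOD_MS = 20  # assumed: the drone wants fresh commands ~50 times a second
PROCESSING_MS = 10      # assumed: computer-vision plus control compute time

print("edge :", loop_closes(2, PROCESSING_MS, CONTROL_PERIOD_MS))   # ~2 ms away -> True
print("cloud:", loop_closes(50, PROCESSING_MS, CONTROL_PERIOD_MS))  # ~50 ms away -> False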

So, that paradigm is very powerful, and it applies to many different use cases. It’s enabled by high-performance connectivity which is something that we supply, and we’re very proficient at that. It impacts machine learning, because it gives you different ways to take advantage of the progress there—you can do it locally, you can do it on the edge, you can do it remotely. When you combine mobile, and all the investment that’s been made there, you leverage that to apply to other devices like automobiles, medical devices, robotics, other kinds of consumer products like wearables and assistant speakers, and those kinds of things. There’s just a vast landscape of technologies and services that all can be improved by what we’ve done, and what 5G will bring. And so, that’s why we’re pretty fired up about the next iteration here.

I assume you have done theoretical thinking about the absolute maximum rate at which data can be transferred. Are we one percent of the way there, or ten percent, or can we not even measure it because it’s so small? Is this going to go on forever?

I am so glad you asked. It’s so interesting. This Monday morning, we just put a new piece of artwork in our research center—there’s a piece of artwork on every floor—and on the first floor, when you walk in, there’s a piece of artwork that has Claude Shannon and a number of his equations, including the famous one which is the Shannon capacity limit. That’s the first thing you see when you walk into the research center at Qualcomm. That governs how fast you can move data across a link, and you can’t beat it. There’s no way, any more than you can go faster than the speed of light. So, the question is, “How close are we to that limit?” If you have just two devices, two antennas, and a given amount of spectrum, and a given amount of power, then we can get pretty darn close to that limit. But the question is not that, the question is really, “Are we close to how fast of a service we can offer a mobile user in a dense area?” And to that question, the answer is, “We’re nowhere close.” We can still get significantly better; by that, I mean orders of magnitude better than we are now.
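The limit he is describing is the Shannon-Hartley capacity, C = B log2(1 + S/N), for a single link with bandwidth B and signal-to-noise ratio S/N. Here is a minimal Python sketch; the 20 MHz carrier and 20 dB SNR are assumptions picked only to show the shape of the formula, not figures from the conversation.

import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit for one point-to-point link."""
    return bandwidth_hz * math.log2(1 + snr_linear)

bandwidth_hz = 20e6           # assumed: a 20 MHz carrier
snr_linear = 10 ** (20 / 10)  # assumed: 20 dB SNR, converted to linear
capacity = shannon_capacity_bps(bandwidth_hz, snr_linear)
print(f"~{capacity / 1e6:.0f} Mbit/s")  # roughly 133 Mbit/s for this single link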

I can tell you three ways that that can be accomplished, and we’re doing all three of them. Number one is, we continue to make better modems that are more efficient: better receivers, better equalizers, better antennas, all of those techniques. 5G is an example of that.

Number two, we always work with the regulator and operators to bring more spectrum, more radio spectrum to bear. If you look at the overall spectrum chart, only a sliver of it is really used for mobile communication, and we’re going to be able to use a lot more of it, and use more spectrum at high frequencies, like millimeter wave and above, that’s going to make a lot more “highway,” so to speak, for data transfer.

And the third thing is, the average radius of a base station can shrink, and we can use that channel over and over and over again. So right now, if you drive your car, and you listen to a radio station, the radio industry cannot use that channel again until you get hundreds of miles away. In the modern cellular systems, we’re learning how to reuse that channel even when you’re a very short distance away, potentially only feet or tens of meters away, so you can use it again and again and again.

So, with those three pillars, we’re really not close, and everyone can look forward to faster, faster, faster modems. And every time we move that modem speed up, that, of course, is the foundation for bigger screens, and more video, and new use cases that weren’t possible before, at a given price point, which now become possible. We’re not at the end yet, we’ve got a long way to go.
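Those three pillars compose multiplicatively: the aggregate capacity available in an area is roughly per-link spectral efficiency, times spectrum, times the number of cells reusing that spectrum. Here is a toy Python sketch, where every number is an assumption chosen only to show the scaling, not a claim about any real network.

import math

def area_capacity_bps(bandwidth_hz, snr_linear, cells):
    """Toy model: each small cell reuses the same spectrum at the Shannon limit."""
    return cells * bandwidth_hz * math.log2(1 + snr_linear)

snr_linear = 10 ** (20 / 10)  # assumed 20 dB SNR in every cell

today = area_capacity_bps(bandwidth_hz=20e6, snr_linear=snr_linear, cells=1)
denser = area_capacity_bps(bandwidth_hz=100e6, snr_linear=snr_linear, cells=10)  # more spectrum, smaller cells
print(f"gain: {denser / today:.0f}x")  # 5x spectrum times 10x reuse = ~50x, before any modem gains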

You made a passing reference to Moore’s Law; you didn’t call it out, but you referenced exponential growth, and that the speed of computers would increase. Everybody always says, “Is Moore’s Law finally over?” You see those headlines all the time, and, like all the headlines that are a question, the answer is almost always, “No.” You’ve made references to quantum computing and all that. Do we have opportunities to increase processor speed well into the future with completely different architectures?

We do. We absolutely do. And I believe that will occur. I mean, we’re not at the limit yet now. You can find “Moore’s Law is over” articles ten years ago also, and somehow it hasn’t happened yet. When we get past three nanometers, yeah, certain things are going to get really, really tough. But then there will be new approaches that will take us there, take us to the next step.

There are also architectural improvements, and other axes that can be exploited; it’s the same thing I just described to you in wireless. Shannon said that we can only go so far between two antennas, in a given amount of spectrum, with a given amount of power. But we can escape that by increasing the spectrum, increasing the number of antennas, and reusing the spectrum over and over again, and we can still get the job done without breaking any fundamental laws. So, at least for the time being, the exponential growth is still very much intact.

You’ve mentioned Claude Shannon twice. He’s a fascinating character, and one of the things he did that’s kind of monumental was that paper he wrote in ’49 or ’50 about how a computer could play chess, and he actually figured out an algorithm for that. What was really fascinating about that was, this was one of the first times somebody looked at a computer and saw something other than a calculator. Because up until that point they just did not, and he made that intuitive leap to say, “Here’s how you would make a computer do something other than math, but it’s really doing math.” There’s a fascinating new book out about him called A Mind at Play, which I just read, that I recommend.

We’re running out of time here. We’re wrapping up. I’m curious: do you write, or do you have a place where people who want to follow you can keep track of what you’re up to?

Well, I don’t have a lot there, but I do have a Twitter account, and once in a while I’ll share a few thoughts. I should probably do more of that than I do. I also have an internal blog, which I should probably write on more often. I’m sorry to say, I’m not very prolific on external writing, but that is something I would love to do more of.

And my final question is, are you a consumer of science fiction? You quoted Arthur C. Clarke earlier, and I’m curious if you read it, or watch TV, or movies or what have you. And if so, do you have any visions of the future that are in fiction, that you kind of identify with? 

Yes, I will answer an emphatic yes to that. I love all forms of science fiction and one of my favorites is Star Trek. My name spelled backwards is “Borg.” In fact, our chairman Paul Jacobs—I worked for him most of my career—he calls me “Locutus.” Given the discussion we just had—if you’re a fan of Star Trek and, in particular, the Star Trek: The Next Generation shows that were on in the ‘80s and early ‘90s, there was an episode where Commander Data met Mr. Spock. And that was really a good one, because you had Commander Data, who is an android and wants to be human, wants to have emotion and creativity and those things that we discussed, but can’t quite get there, meeting Mr. Spock who is a living thing and trying to purge all emotion and so forth, to just be pure logic, and they had an interaction. I thought that was just really interesting.

But, yes, I follow all science fiction. I like the book Physics of Star Trek by Krauss, I got to meet him once. And it’s amazing how many of the devices and concepts from science fiction have become science fact. In fact, the only difference between science fiction and science fact, is time. Over time we’ve pretty much built everything that people have thought up—communicators, replicators, computers.

I know, you can’t see one of those in-ear Bluetooth devices and not see Uhura, right? That’s what she had.

Correct. That little earpiece is a Bluetooth device. The communicator is a flip phone. The little square memory cartridges were like a floppy disk from the ‘80s. 3-D printers are replicators. We also have software replicators that can replicate and transport. We kind of have the hardware but not quite the way they do yet, but we’ll get there.

Do you think that these science fiction worlds anticipate the world or inadvertently create it? Do we have flip phones because of Star Trek or did Star Trek foresee the flip phone? 

I believe their influence is undeniable.

I agree and a lot of times they say it, right? They say, “Oh, I saw that and I wanted to do that. I wanted to build that.” You know there’s an XPRIZE for making a tricorder, and that came from Star Trek.

We were the sponsor of that XPRIZE and we were highly involved in that. And, yep, that’s exactly right, the inspiration of that was a portable device that can make a bunch of diagnoses, and that is exactly what took place and now we have real ones.

Well, I want to thank you for a fascinating hour. I want to thank you for going on all of these tangents. It was really fascinating. 

Wonderful, thank you as well. I also really enjoyed it, and anytime you want to follow up or talk some more please don’t hesitate. I really enjoyed talking with you.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
