Voices in AI – Episode 53: A Conversation with Nova Spivack


About this Episode

Episode 53 of Voices in AI features host Byron Reese and Nova Spivack talking about neurons, the Gaia hypothesis, intelligence, and quantum physics. Nova Spivack is a leading technology futurist, serial entrepreneur and angel investor.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today, I’m excited we have Nova Spivack as our guest. Nova is an entrepreneur, a venture capitalist, an author; he’s a great many other things. He’s referred to by a wide variety of sources as a polymath, and he’s recently started a science and tech studio called Magical in which he serves as CEO.

He’s had his fingers in all sorts of pies and things that you’re probably familiar with. He was the first investor in Klout. He was in early on something that eventually became Siri. He was the co-founder of EarthWeb, Radar Networks, The Daily Dot, Live Matrix. It sounds like he does more before breakfast than I manage to get done in a week. Welcome to the show, Nova.

Nova Spivack: Thank you! Very kind of you.

So, let’s start off with artificial intelligence. When I read what you write and when I watch videos about you, you have a very clear view of how you think the future is going to unfold with regards to technology and AI specifically. Can you just take whatever time you want and just describe for our listeners how you think the future is going to happen?

Sure, so I’ve been working in the AI field since long before it was popular to say that. I actually started while I was still in college, working for Kurzweil in one of his companies, an AI company that built the Kurzweil Reading Machine. I was doing early neural network work there, at the end of the ‘80s or early ‘90s, and then I worked under Danny Hillis at Thinking Machines on supercomputing and AI-related applications.

Then after that, I was involved in a company called Individual, which was the first company to do intelligent-agent-powered news filtering, and then I began to start internet companies and worked in the semantic web, large-scale collaborative filtering projects, [and] intelligent assistants. I advised a company called Next IT, which is one of the leading bot platforms, and I’ve built a big data mining analytics company. So I’ve been deeply involved in this technology on a hands-on basis, both as a scientist and even as an engineer in the early days, [but also] from the marketing and business side and the venture capital side. So, I really know this space.

First of all, it’s great to see AI in vogue again. I lived through the first AI winter and the second sort of unacknowledged AI winter around the birth and death of the semantic web, and now here we are in the neural network machine learning renaissance. It’s wonderful to see this happening. However, I think that the level of hype that we see is probably not calibrated with reality and that inevitably there’s going to be a period of disillusionment as some of the promises that have been made don’t pan out.

So, I think we have to keep a very realistic view of what this technology is and what it can and cannot do, and where it fits in the larger landscape of machine intelligence. So, we can talk about that today. I definitely have a viewpoint that’s different from some of the other pundits in the space in terms of when or if the singularity will happen, and in particular I’ve spent years thinking about and studying cognitive science and consciousness. And I have some views on that, based on a lot of research, that are probably different from what we are hearing from the mainstream thinkers. So, I think it will be an interesting conversation today as we get into some of these questions, and probably get quite far into technology and philosophy.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com 


 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.


5 questions for… Electric Cloud

As I am working on a DevOps report at the moment, I’m speaking to a lot (and I mean a lot) of companies involved in and around the space. Each, in my experience so far, is looking to address some of the key IT delivery challenges of our time – namely, how to deliver services and applications at a pace that keeps up with the rate of technology change.

One such organisation is Electric Cloud. I spoke to Sam Fell, VP of Marketing, to understand how the company sees its customers’ main challenges, and what it is doing to address them – not least, the complexity of working at enterprise scale.

 

  1. Where did Electric Cloud come from, what need did it set out to deal with?

Electric Cloud has been automating and accelerating software delivery since 2002, from code check-in to production release. Our founders set out to solve a huge bottleneck: the agile pace at which development teams deliver software and adopt new technology has outstripped the ability of operations teams to keep up. This cadence and skills mismatch limits the business and can jeopardize transformation efforts, putting teams in a constant state of what we call “release anxiety.”

The main challenges we see are:

  • The ability to predictably deploy any application to any environment at any scale.
  • The ability to manage release pipelines and dependencies across multiple teams, point tools, and infrastructures.
  • The need for a comprehensive but simple way to plan, schedule, and track releases across their lifecycles.

In response, we developed an Adaptive Release Orchestration platform called ElectricFlow to help organizations like E*TRADE, HPE, Huawei, Intel and Lockheed Martin confidently release new applications and adapt to change at any speed demanded by the business, with the analytics and insight to measure, track, and improve their results along the way.

  2. Where’s the ‘market for DevOps’ going, from a customer perspective?

Nearly every industry is now taking notice of, or participating in, the DevOps space – from FinServ and government to retail and entertainment – and nearly every market, across nearly all geographies, is recognizing DevOps as a way forward. The technology sector is still on the forefront, but you’d be surprised how quickly industries like transportation are catching up.

One thing we find invaluable is learning what critical factors are helping our customers drive their own businesses forward. A theme we hear over and over is how to adapt to business needs on a continuous basis.

But there is an inherent dichotomy in how companies are expected to achieve the business goals set by leadership. On the one hand, they need to implement fast and adapt easily to a changing environment – including supporting new technologies like microservices and serverless. The challenge is how to do this reliably and efficiently – shifting practices like security left without creating more technical debt or outages in the process.

Complexity is inevitable and the focus needs to be on how to adapt. Ways that we know work in addressing this complexity are:

  • Organizations that learn how to fix themselves will ultimately be high performers – resiliency is the child of adaptability (credit: Rob England).
  • Companies that automate what humans aren’t good at – mundane, repeatable tasks that don’t require creativity – are ultimately set up for success, keeping people engaged on high-value tasks where they can perform at their best.
  • Organizations that continuously scrutinize their value streams, and align the business to the value stream, will be more successful than the competition. Improvements in one value stream may well create bottlenecks in others.
  • Companies that measure impact and outcomes, not just activities, will gain context into how ideas can transform into business value metrics such as customer satisfaction.
  • Understanding that there is no “one way” to solve a problem. If companies empower their teams to learn fast, the above may very well take care of itself.

  3. What’s the USP for Electric Cloud in a pretty crowded space?

Electric Cloud sees the rise in DevOps and modern software delivery methods as an opportunity to emphasize the fact that collaboration, visibility and auditability are key pillars to ensuring fast delivery works for everyone involved. Eliminating silos and reducing management overhead is easier said than done, but with a scalable, secure and unified platform – anything is possible.

We’re proud to say we’re the only provider of a centralized platform that can provide all of the following in one simple package:

  • model-based automation techniques to replace brittle scripting with reusable abstract models;
  • process-as-code through a Groovy-based domain specific language (DSL) to onboard apps quickly so they are versionable, testable, reusable and refactorable;
  • a self-service library of best practice automation techniques for consistency across the organization;
  • a vast amount of plugins and integrations to support enterprise governance of any tool your company uses;
  • role-based access control and approval tracking for every change in the pipeline;
  • an impenetrable agent-based architecture to support communications for scalability, fault tolerance and security.

And all at enterprise scale, with our ability to enable unlimited clustering architecture and efficient processing for high availability and low-latency of concurrent deployments.
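As an aside, the “model-based automation” idea in the list above (describe an application once as a model, then let a generic engine deploy it to any environment, replacing brittle per-environment scripts) can be sketched in a few lines. ElectricFlow’s actual process-as-code language is a Groovy-based DSL; the Python below, with invented model fields, component names and step wording, is only a minimal sketch of the general pattern, not the product’s API:

```python
def make_deployer(model):
    """Model-based automation in miniature: the application is described once
    as data (the model); a generic engine turns that model into deployment
    steps for any environment, instead of one hand-written script per env."""
    def deploy(environment):
        steps = []
        for component in model["components"]:
            steps.append(f"provision {component['name']} on {environment}")
            steps.append(f"install {component['artifact']} ({component['version']})")
        steps.append(f"verify {model['app']} healthy on {environment}")
        return steps
    return deploy

# Hypothetical application model -- all names and fields are invented.
app_model = {
    "app": "storefront",
    "components": [
        {"name": "web", "artifact": "storefront-web.war", "version": "2.3.1"},
        {"name": "db", "artifact": "schema.sql", "version": "2.3.1"},
    ],
}

deploy = make_deployer(app_model)
qa_plan = deploy("qa")            # same model, different target environments
prod_plan = deploy("production")
```

The point of the pattern is that the model is versionable, testable and reusable data, while the deployment logic lives in one place.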

  4. How does Electric Cloud play nice, and where does it see its most important integrations?

Every company’s software delivery process is unique, and touches many different tools, integrations and environments. We provide centralized management and visibility of the entire software delivery pipeline – whatever these might be – to improve developer productivity, streamline operations and increase efficiency.

To that end, Electric Cloud works with the most popular tools and infrastructure on the planet and allows our customers to add a layer of automation and governance to the tools they already use. You can find a list of our plugins here.

  5. I’m also interested to know more about (Dev)SecOps, and I would say PrivOps but the name is taken!

We definitely think securing the pipeline, and the application, is very important in software production.  We have been talking about it a lot recently — you may find these resources helpful:

  • We recently held an episode of Continuous Discussions (#c9d9) to dive into how DevSecOps helps teams “shift left” and build security and quality into the process by making EVERYONE responsible for security at every stage. http://electric-cloud.com/blog/2018/05/c9d9-podcast-e87-devsecops/
  • Prior to that, we held a webinar with John Willis – an Electric Cloud advisor, co-author of the “DevOps Handbook” with Gene Kim, and expert at security and DevOps. You can view the webinar here.
  • We also participated in the RSA DevOps Connect event. At the show, we took a quick booth survey and the results may (or may not) surprise you…: http://electric-cloud.com/blog/2018/04/security-needs-to-shift-left-too/

 

My take: Moving beyond the principle

The challenges that DevOps set out to address are not new: indeed, they are perhaps as old as technology delivery itself. Ultimately, while we talk about the removal of barriers, greater automation and so on, the real goal is how to deliver, at scale, in the face of complexity. Some, whom we might call ‘platform natives’, may never have had to run through the mud of corporate and infrastructure inertia and may wonder what all the fuss is about; for others, the challenges may appear insurmountable.

Vendors in the crowded DevOps space may have cut their teeth working for the former, platform-based group, who use containers as a default and who see serverless models as a logical extension of their keep-it-simple infrastructure approach. Many, if not all, see enterprise environments as both the biggest opportunity and the greatest challenge. Whoever can cut the Gordian knot of enterprise convolution stands to take the greatest prize.

Will it be Electric Cloud? To my mind, the astonishing number of vendor players in this space is a symptom of how quickly it has grown to date, creating a situation ripe for massive consolidation – though it is difficult to see any enterprise software vendor that is actively looking to become ‘the one’: consider IBM’s outsourcing of Rational and HPE’s divestiture of its own software business to Micro Focus as examples of companies running in the opposite direction.

However, the market opportunity remains significant, despite the elusiveness of the prize. I have no doubt that the next couple of years will see considerable industry consolidation, and who knows at this stage which brands, models and so on will pervade. I very much doubt that the industry will go ‘full serverless’ any time soon, for a raft of reasons (think: IoT, SDN, data, state, plus everything we don’t know about yet), but I remain optimistic that automation and orchestration will deliver on their potential, enabling and enabled by practices such as DevOps.

Now I shall get back on with my report!

 

Voices in AI – Episode 52: A Conversation with Rao Kambhampati


About this Episode

Sponsored by Dell and Intel, Episode 52 of Voices in AI features host Byron Reese and Rao Kambhampati discussing creativity, military AI, jobs and more. Subbarao Kambhampati is a professor at ASU with teaching and research interests in artificial intelligence. He also serves as president of AAAI, the Association for the Advancement of Artificial Intelligence.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Rao Kambhampati. He has spent the last quarter-century at Arizona State University, where he researches AI. In fact, he’s been involved in artificial intelligence research for thirty years. He’s also the President of the AAAI, the Association for the Advancement of Artificial Intelligence. He holds a Ph.D. in computer science from the University of Maryland, College Park. Welcome to the show, Rao.

Rao Kambhampati: Thank you, thank you for having me.

I always like to start with the same basic question, which is, what is artificial intelligence? And so far, no two people have given me the same answer. So you’ve been in this for a long time, so what is artificial intelligence?

Well, I guess the textbook definition is, artificial intelligence is the quest to make machines show behavior, that when shown by humans would be considered a sign of intelligence. So intelligent behavior, of course, that right away begs the question, what is intelligence? And you know, one of the reasons we don’t agree on the definitions of AI is partly because we all have very different notions of what intelligence is. This much is for sure; intelligence is quite multi-faceted. You know we have the perceptual intelligence—the ability to see the world, you know the ability to manipulate the world physically—and then we have social, emotional intelligence, and of course you have cognitive intelligence. And pretty much any of these aspects of intelligent behavior, when a computer can show those, we would consider that it is showing artificial intelligence. So that’s basically the practical definition I use.

But to say, “while there are different kinds of intelligences, therefore, you can’t define it,” is akin to saying there are different kinds of cars, therefore, we can’t define what a car is. I mean that’s very unsatisfying. I mean, isn’t there, this word ‘intelligent’ has to mean something?

I guess there are very formal definitions. For example, you can essentially consider an artificial agent, working in some sort of environment, and the real question is, how does it improve its long-term reward that it gets from the environment, while it’s behaving in that environment? And whatever it does to increase its long-term reward is seen, essentially as—I mean the more reward it’s able to get in the environment, the more intelligent it is. I think that is the sort of definition that we use in introductory AI courses, and we talk about these notions of rational agency, and how rational agents try to optimize their long-term reward. But that sort of gets into more technical definitions. So when I talk to people, especially outside of computer science, I appeal to their intuitions of what intelligence is, and to the extent we have disagreements there, that sort of seeps into the definitions of AI.
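[Editor’s note: the rational-agency definition Rao sketches here (an agent acting in an environment, trying to optimize its long-term reward) can be made concrete with a toy multi-armed bandit. Everything in the sketch below, including the arm means, the ε-greedy rule and the parameter values, is an illustrative assumption, not something discussed in the conversation.]

```python
import random

def run_bandit(arm_means, steps=5000, epsilon=0.1, seed=0):
    """A rational agent in miniature: it estimates the reward of each
    action (arm), mostly picks the best-looking one (exploitation), and
    occasionally tries a random one (exploration), to maximise its
    long-term reward from the environment."""
    rng = random.Random(seed)
    estimates = [0.0] * len(arm_means)
    counts = [0] * len(arm_means)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(arm_means))  # explore
        else:
            arm = max(range(len(arm_means)), key=lambda a: estimates[a])  # exploit
        reward = rng.gauss(arm_means[arm], 0.1)  # noisy reward from the environment
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
    return estimates, total / steps

estimates, avg_reward = run_bandit([0.2, 0.5, 0.9])
```

After enough steps the agent’s estimates approach the true arm means and it pulls the best arm most of the time; under the formal definition quoted above, the more average reward it secures, the more “intelligent” its behaviour.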

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com 


 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

5 questions for… Nuance – does speech recognition have a place in healthcare?

Speech recognition has been on the brink of major success for decades, or so it feels. Rather than ask a set of generic “when will it be mainstream” questions, I was keen to catch up with Martin Held, Senior Product Manager, Healthcare at Nuance, to find out how things stand in this specific and highly relevant context.

  1. How do you see the potential for speech recognition in the healthcare sector?

Right now, the most gain will be from general documentation, enabling people to dictate instead of type, to get text out faster. In some areas of healthcare, things are pretty structured – you have to fill in forms electronically, with drop-down lists and so on. That’s not a primary application for speech, but for anything that requires free text, there’s no comparison or alternative. Areas where handwritten notes are put into notes fields are a good application. Discharge notes can also be very wordy.

From a use case perspective, we’ve done analysis on how much time teams are spending on documentation and it’s huge — three quarters of medical practices are spending half of their time on documentation alone. In the South Tees Emergency Department, we did a study where use of speech recognition reduced documentation time by 40%. In another study, at Dukinfield, a smaller practice, introducing our technology allowed staff to see four more patients (about a 10% increase) per day.

  2. What has happened over the past 5 years in terms of performance improvements and innovation?

In these scenarios, it’s a question of “can it work, can it perform” across a range of input devices. General speech recognition has improved so much that we are in the upper-90% accuracy range straight out of the gate. None of our products now require training, thanks to new technology based on deep neural networks and machine learning.

In healthcare, we have also added cloud computing and changed the architecture: we put a lightweight client on the end-point machine or device, which streams audio to a back-end recognition server hosted in Microsoft Azure. We recently announced the general availability of Dragon Medical One — cloud-based recognition.

Still, connectivity is a big issue, in particular for mobile situations, such as a community nurse — it’s not always possible to use recognition back in the car if the mobile signal is poor, for example. We are looking at technology that could record now and transcribe later.
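The record-now, transcribe-later idea mentioned here amounts to a small buffering pattern: queue audio locally while offline, then flush the backlog to the recognition service when connectivity returns. The Python below is purely an illustrative sketch, not Nuance’s implementation; the chunk labels and the simulated transcribe function are invented:

```python
from collections import deque

class BufferedRecognizer:
    """Queue audio chunks while offline; flush the backlog (in capture
    order) to the recognition service once connectivity returns."""

    def __init__(self, transcribe):
        self.transcribe = transcribe  # callable: audio chunk -> text
        self.backlog = deque()
        self.results = []

    def capture(self, chunk, online):
        if online:
            self.flush()  # older recordings go through first
            self.results.append(self.transcribe(chunk))
        else:
            self.backlog.append(chunk)  # no signal: store for later

    def flush(self):
        while self.backlog:
            self.results.append(self.transcribe(self.backlog.popleft()))

# Simulated service: real audio and recognition are out of scope here.
recognizer = BufferedRecognizer(transcribe=lambda chunk: f"text[{chunk}]")
recognizer.capture("visit-1", online=False)  # community nurse, no signal
recognizer.capture("visit-2", online=False)
recognizer.capture("visit-3", online=True)   # back in coverage: backlog flushed first
```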

  3. How have you addressed the privacy and risk implications?

We are certified to connect to the N3 network, allowing NHS entities to connect in line with requirements around governance and privacy, for example patient confidentiality. Offering a service through the NHS N3 network requires an Information Governance Statement of Compliance and submission of the IG Toolkit through NHS Digital — this involves a relatively long and detailed certification process, covering disaster recovery, Nuance’s internal processes and practices, which employees have access, and so on.

We also offer input via the public Internet, as encryption and other technologies are secure enough for customers to connect through these means. So, for example, we can use mobile phones as input devices. We are not trying to build mobile medical devices (we know how difficult that is), but we are looking to replace the keyboard (which is not a medical device!).

As a matter of best practice, it is still required that the doctor sign the discharge note or confirm an entry in the electronic medical record system, whether it has been typed or dictated. So the generated text is always a reference, and that will need to stay the case. It will be more than five years before the computer can be seen as taking this responsibility from the doctor. Advice, similarly, can only be guidance.

  4. How do you see the market need for speech recognition maturing in healthcare?

Right now we’re still very much in an enablement situation with our customers, helping with their documentation needs. From a recognition perspective we can see the potential of moving from enablement to augmentation, making it simpler and hands-free, moving to more of a virtual assistant approach for a single person. In the longer term, we have the potential to do that for multiple people at the same time, for example a clinician, parent and child.

We’re also looking at the coding side of things — categorising disease, treatment, length of stay and so on from patient documentation. Codes are used for multiple things: reimbursement with insurers; negotiation between GPs and primary and secondary care about which services to provide in future; and negotiation between commissioners and trusts on payment levels. In primary care, doctors do the coding, but in secondary care it’s done by a coder looking through the record after the patient’s discharge. If data is incomplete or non-specific, trusts can miss out on funding. Nuance already offers Natural Language Understanding-based coding products in the US, and these are being evaluated for the specifics of the UK healthcare market.

So we want to help turn documentation into something that can be easily analysed. Our technology can not only recognise what you say; with natural language understanding we can analyse the text and match it against codes, potentially opening the door to offering prompts. For example, if a doctor diagnoses COPD, the clinician may need to ask whether the patient is a smoker, which will have a consequence for the code.
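A heavily simplified sketch of the match-text-against-codes-and-prompt idea follows. The code table, its entries and the follow-up rule below are invented for illustration; this is not Nuance’s actual coding logic nor a complete clinical terminology:

```python
# Hypothetical term-to-code table and prompt rules, invented for illustration.
CODE_TABLE = {
    "copd": "J44",
    "pneumonia": "J18",
}
FOLLOW_UPS = {
    "J44": "Is the patient a smoker?",  # smoking status affects the COPD coding
}

def match_codes(note):
    """Match free-text documentation against the code table and surface
    any follow-up prompts the clinician should answer."""
    text = note.lower()
    codes = [code for term, code in CODE_TABLE.items() if term in text]
    prompts = [FOLLOW_UPS[c] for c in codes if c in FOLLOW_UPS]
    return codes, prompts

codes, prompts = match_codes("Patient presents with worsening COPD symptoms.")
```

Real NLU-based coding is far richer than keyword lookup (negation, context, specificity), but the shape of the output, candidate codes plus prompts for missing detail, is the point being made in the interview.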

  5. How does Nuance see the next 5 years panning out, in terms of measuring success for speech recognition?

We believe speech recognition is ready to deliver a great deal of benefit to healthcare, gaining efficiency and freeing up clinical staff. In terms of the future, we recently showed a prototype of a virtual assistant that combines a lot of technologies, including biometrics, complete speech control, text analysis and meaning extraction, and also appropriate selection — so the machine can distinguish between a command and something I just wanted to say.

This combination should make the reaction a lot more human — we call this conversational artificial intelligence. Another part of this is about making text to speech as human as possible. Then combining that with cameras and microphones in the environment, for example pointing at something and saying, give me more information about ‘this’. That’s all longer term, but the virtual assistant and video are things we are working on.

My take: healthcare needs all the help it can get

So, does speech recognition have a place? Over the past couple of decades of use, we have learned that we generally do not like talking into thin air, and particularly not to a computer. The main change of recent years, the reduction in training time, has done little to remove this very psychological blocker, which means that speech recognition remains in a highly useful, yet relatively limited, niche of auto-transcription.

Turning specifically to the healthcare industry, a victim of its own science-led success: it is difficult to think of an industry vertical in which staff efficiency is more important. In every geography, potential improvements to patient outcomes are being stymied by a lack of funds, the symptoms of which are waiting lists, bed shortages and so on, while the system is burdened by the weight of ever-increasing bureaucracy.

Even if speech recognition could knock one or two percentage points off the time taken to execute a clinical pathway, the overall savings could be massive. Greater efficiency also opens the door to higher potential quality, as clinicians can focus on ‘the job’ rather than the paperwork.

For the future, use of speech recognition beyond note-taking also links to the potential for improved diagnosis, through augmented decision making, and indeed improved patient safety, as technology provides more support to what is still a highly manual industry. This will take time, but our general habits are changing as the likes of Alexa and Siri make us more comfortable about talking to inanimate objects.

Overall, progress may be slow for speech recognition particularly in healthcare, but it is heading in the right direction. One day, our lives might depend on it.

 

Voices in AI – Episode 51: A Conversation with Tim O’Reilly


About this Episode

Sponsored by Dell and Intel, Episode 51 of Voices in AI podcast features host Byron Reese and Tim O’Reilly discussing autonomous vehicles, capitalism, the Internet, and the economy. Tim is the founder of O’Reilly Media. He popularized the terms open source and Web 2.0.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today our guest is Tim O’Reilly. He is, of course, the founder and CEO of O’Reilly Media, Inc. In addition to his role at O’Reilly, he is a partner at an early stage venture firm, O’Reilly AlphaTech Ventures, and he is on the board of Maker Media, which was spun out from O’Reilly back in 2012. He’s on the board of Code for America, PeerJ, Civis Analytics, and POPVOX. He is the person who popularized the terms “open source” and “web 2.0.” He holds an undergraduate degree from Harvard in the classics. Welcome to the show, Tim.

Tim O’Reilly: Hi, thanks very much, I’m glad to be on it. I should add one other thing to my bio, which is that I’m also the author of a forthcoming book about technology and the economy, called WTF?: What’s the Future and Why It’s Up to Us, which, in a lot of ways, is a memoir of what I’ve learned from studying computer platforms over the last 30 years, and reflections on the lessons of technology platforms for the broader economy, and the choices that we have to make as a society.

Well I’ll start there. What is the future then? If you know, I want to know that right away.

Well, the point is not that there is one future. There are many possible futures, and we actually have a great role. There’s a very scary narrative in which technology is seen as an inevitability. For example, “technology wants to eliminate jobs, that’s what it’s for.” And I go through, for example, looking at algorithms, at Google, at Facebook, and the like and say, “Okay, what you really learn when you study it is, all of these algorithms have a fitness function that they’re being managed towards,” and this doesn’t actually change in the world of AI. AI is simply new techniques that are still trying to go towards human goals. The thing we have to be afraid of is not AI becoming independent and going after its own goals. It’s what I refer to as “the Mickey and the broomsticks problem,” which is, we’re creating these machines, we’re turning them loose, and we’re telling them to do the wrong things. They do exactly what we tell them to do, but we haven’t thought through the consequences and a lot of what’s happening in the world today is the result of bad instructions to the machines that we have built.

In a lot of ways, our financial markets are a lot like Google and Facebook: they are increasingly automated, but they also have a fitness function. If you look at Google, their fitness function on both the search and the advertising side is relevance. If you look at Facebook, loosely it could be described as engagement. For the last 40 years, we have increasingly been managing our economy around “make money for the stock market,” and we’ve seen, as a result, the hollowing out of the economy. And to apply this very concretely to AI, I’ll bring up a conversation I had with an AI pioneer recently, where he told me he was investing in a company that would get rid of 30% of call center jobs, was his estimate. And I said, “Have you used a call center? Were you happy with the service? Why are you talking about using AI to get rid of these jobs, rather than to make the service better?”

You know I wrote a piece—actually I wrote it after the book, so it’s not in the book—[that’s] an analysis of Amazon. In the same three years in which they added 45,000 robots to their factories, they added hundreds of thousands of human workers. The reason is that they’re saying, “Our master design pattern isn’t ‘cut costs and reap greater profits,’ it’s ‘keep upping the ante, keep doing more.’” I actually started off the article by talking about my broken tea kettle and how I got a new one the same day, so I could have my tea the next morning with no interruption. It used to be that Amazon would give you free 2-day shipping, then it was free 1-day shipping, and now, in many cases, it’s free same-day shipping. This is why they have this incredible, fanatical customer focus, and they’re using the technology to actually do more. My case has been that if we shift the fitness function from efficiency and shareholder value through driving increased profits to actually creating value in society—which is something that we can quite easily do—we’re going to have a very different economy and a very, very different political conversation than we’re having right now.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 50: A Conversation with Steve Pratt


In this episode, Byron and Steve discuss the present and future impact of AI on businesses.





Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today, our guest is Steve Pratt. He is the Chief Executive Officer over at Noodle AI, the enterprise artificial intelligence company. Prior to Noodle, he was responsible for all Watson implementations worldwide, for IBM Global Business Services. He was also the founder and CEO of Infosys Consulting, a Senior Partner at Deloitte Consulting, and a Technology and Strategy Consultant at Booz Allen Hamilton. Consulting Magazine has twice selected him as one of the top 25 consultants in the world. He has a Bachelor’s and a Master’s in Electrical Engineering from Northwestern University and George Washington University. Welcome to the show, Steve.

Steve Pratt: Thank you. Great to be here, Byron.

Let’s start with the basics. What is artificial intelligence, and why is it artificial?

Artificial intelligence is basically any form of learning algorithm; that’s the way we think of things. We actually think there’s a raging religious debate [about] the differences between artificial intelligence and machine learning, and data science, and cognitive computing, and all of that. But we like to get down to basics, and basically say that they are algorithms that learn from data, improve over time, and are probabilistic in nature. Basically, it’s anything that learns from data and improves over time.

So, kind of by definition, the way that you’re thinking of it is it models the future, solely based on the past. Correct?

Yes. Generally, it models the future and sometimes makes recommendations, or it will sometimes just explain things more clearly. It typically uses four categories of data. There is both internal data and external data, and both structured and unstructured data. So, you can think of it kind of as a quadrant. We think the best AI algorithms incorporate all four datasets, because especially in the enterprise, where we’re focused, most of the business value is in the structured data. But usually unstructured data can add a lot of predictive capabilities, and a lot of signal, to come up with better predictions and recommendations.
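The four-quadrant view of data Pratt describes (internal vs. external, structured vs. unstructured) can be sketched as a simple feature-assembly step. The field names and the word-count reduction of unstructured text below are illustrative assumptions, not Noodle AI’s actual pipeline:

```python
# Combine the four data quadrants (internal/external x structured/unstructured)
# into one feature row. All dataset names and fields are illustrative.

def build_feature_row(customer_id, internal_structured, external_structured,
                      internal_unstructured, external_unstructured):
    """Merge one record from each quadrant into a single feature dict."""
    row = {"customer_id": customer_id}
    row.update(internal_structured)      # e.g. order history from an ERP
    row.update(external_structured)      # e.g. weather or economic indicators
    # Unstructured text is reduced to simple numeric signals (word counts here).
    row["internal_text_len"] = len(internal_unstructured.split())
    row["external_text_len"] = len(external_unstructured.split())
    return row

row = build_feature_row(
    "C42",
    {"orders_last_quarter": 7},
    {"regional_gdp_growth": 0.021},
    "support ticket: late delivery",
    "local news mentions port congestion",
)
print(row)
```

The point of the sketch is only that a model gets its strongest signal when all four quadrants land in the same feature row.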

How about the unstructured stuff? Talk about that for a minute. How close do you think we are? When do you think we’ll have real, true unstructured learning, that you can kind of just point at something and say, “I’m going to Barbados. You figure it all out, computer.”

I think we have versions of that right now. I am an anti-fan of things like chatbots. I think that chatbots are very, very difficult to do, technically. They don’t work very well. They’re generally very expensive to build. Humans just love to mess around with chatbots. I would say that in a scoring of business value against what’s affordable and easy to do, chatbots are in the worst quadrant.

I think there is a vast array of other things that actually add business value to companies, but if you want to build an intelligent agent using natural language processing, you can do some very basic things. But I wouldn’t start there.

Let me try my question slightly differently, then. Right now, the way we use machine learning is we say, “We have this problem that we want to solve. How do you do X?” And we have this data that we believe we can tease the answer out of. We ask the machine to analyze the data, and figure out how to do that. It seems the inherent limit of that, though, is that it’s all sequential in nature. There’s no element of transfer learning in that, where I grow exponentially what I’m able to do. I can just do: “Yes. Another thing. Yes. Another. Yes. Another.” So, do you think this strict definition of machine learning, as you’re thinking of AI that way, is that a path to a general intelligence? Or is general intelligence like, “No, that’s something way different than what we’re trying to do. We’re just trying to drive a car without hitting somebody?”

General intelligence, I think, is way off in the future. I think we’re going to have to come up with some tremendous breakthroughs to get there. I think you can duct-tape together a lot of narrow intelligence, and sort of approximate general intelligence, but there are some fundamental skills that computers just can’t do right now. For instance, if I give a human the question, “Will the guinea pig population in Peru be relevant to predicting demand for tires in the U.S.?” a human would say, “No, that’s silly. Of course not.” A computer would not know that. A computer would actually have to go through all of the calculations, and we don’t have an answer to that question yet. So, I think generalized intelligence is a way off, but I think there are some tremendously exciting things happening right now in narrow intelligence that are making the world a better place.
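Pratt’s point is that a machine has to compute relevance rather than intuit it. A toy illustration with synthetic data, where both series are just random noise standing in for the real quantities:

```python
# A machine has no common sense: it must compute whether a feature is
# relevant. Here we measure the Pearson correlation between a made-up
# guinea-pig-population series and a made-up tire-demand series.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
guinea_pigs = [random.gauss(0, 1) for _ in range(1000)]
tire_demand = [random.gauss(0, 1) for _ in range(1000)]
r = pearson(guinea_pigs, tire_demand)
print(round(r, 3))  # close to 0: the feature carries essentially no signal
```

A human dismisses the feature instantly; the machine only learns it is irrelevant after doing this arithmetic over the whole dataset.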

Absolutely. I do want to spend the bulk of our time in that world. But just to explore what you were saying, because there’s a lot of stuff to mine in what you just said. That example you gave about the guinea pigs is sort of a common-sense problem, as it’s called, right? Like, “Am I heavier than the Statue of Liberty?” How do you think humans are so good at that stuff? How is it that if I said, “Hey, what would an Oscar statue look like, smeared with peanut butter?” you can conjure that up, even though you’ve never thought of that before, or seen an Oscar covered with peanut butter, or seen anything covered with peanut butter? Why are we so good at that kind of stuff, and machines seem amazingly ill-equipped at it?

I think humans have constant access to an incredibly diverse array of datasets. Through time, they have figured out patterns from all of those diverse datasets. So, we are constantly absorbing new datasets. In machines, it’s a very deliberate and narrow process right now. When you’re growing up, you’re just seeing all kinds of things. And as we go through our life, we develop these – you could think of them as regressions and classifications in our brains, for those vast arrays of datasets.

As of right now, machine learning and AI are given very specific datasets, crunch the data, and then make a conclusion. So, it’s somewhere in there. We’re not exactly sure, yet.

All right, last question on general intelligence, and we’ll come back to the here and now. When I ask people about it, the range of answers I get is 5 to 500 years. I won’t pin you down to a time, but it sounds like you’re saying, “Yeah, it’s way off.” People who say that usually add, “We don’t know how to do it, and it’s going to be a long time before we get it.”

But there’s always the implicit confidence that we can do it, that it is a possible thing. We don’t know how to do it. We don’t know how we’re intelligent. We don’t know the mechanism by which we are conscious, or the mechanism by which we have a mind, or how the brain fundamentally functions, and all of that. But we have a basic belief that it’s all mechanistic, so we’re going to eventually be able to build it. Do you believe that, or is it possible that a general intelligence is impossible?

No. I don’t think it’s impossible, but we just don’t know how to do it yet. I think there’s a clue in transfer learning, somewhere. I think you’re going to need a lot more memory, and a lot more processing power, to handle a lot more datasets in general intelligence. But I think it’s way off. I think there will be stage gates, and there will be clues when it’s starting to happen. That’s when you can take an algorithm that’s trained for one thing, and have it do another: if you can take AlphaGo, and then the next day it’s pretty good at chess, and the next day it’s really good at Parcheesi, and the next day it’s really good at solving mazes, then we’re on the track. But that’s a long way off.

Let’s talk about this narrow AI world. Let’s specifically talk about the enterprise. Somebody listening today is at, let’s say a company of 200 people, and they do something. They make something, they ship it, they have an accounting department, and all of that. Should they be thinking about artificial intelligence now? And if so, how? How should they think about applying it to their business?

A company that small, it’s actually really tough, because artificial intelligence really comes into play when it’s beyond the complexity that a human can fit in their mind.

Okay. Let’s up it to 20,000 people.

20,000? Okay, perfect. 20,000 people – there are many, many places in the organization where they absolutely should be using learning algorithms to improve their decision-making. Specifically, we have 5 applications that focus on the supply side of the company: materials, production, distribution, logistics, and inventory.

And then, on the demand side, we also have 5 areas: customer, product, price, promotion, and sales force. All of those things are incredibly complex, and they are highly interactive. Within each application area, we basically have applications that almost treat it like a game, although it’s much more complicated than a game, even though games like Go are very complex.

Each of our applications does, really, 4 things: it senses, it proposes, it predicts, and then it scores. So, basically it senses the current environment, it proposes a set of actions that you could take, it predicts the outcome of each of those actions – like the moves on a Chessboard – and then it scores it. It says, “Did it improve?” There are two levels of that, two levels of sophistication. One is “Did it improve locally? Did it improve your production environment, or your logistics environment, or your materials environment?” And then, there is one that is more complex, which says “If you look at that across the enterprise, did it improve across the enterprise?” These are very, very complex mathematical challenges. The difference is dramatic, from the way decisions are made today, which is basically people getting in meetings with imperfect data on spreadsheets and PowerPoint slides, and having arguments.
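The sense, propose, predict, score loop can be sketched in a few lines. The environment, candidate actions, and outcome model below are stand-ins chosen for illustration, not the actual applications:

```python
# A minimal sketch of the sense -> propose -> predict -> score loop.

def sense():
    """Observe the current state (here, a fixed inventory snapshot)."""
    return {"inventory": 80, "demand_forecast": 100}

def propose(state):
    """Enumerate candidate actions (hypothetical reorder quantities)."""
    return [0, 10, 20, 30, 40]

def predict(state, action):
    """Predict the outcome of an action: stock left after demand is served."""
    return state["inventory"] + action - state["demand_forecast"]

def score(outcome):
    """Score locally: penalize both stockouts and excess inventory."""
    return -abs(outcome)

state = sense()
best = max(propose(state), key=lambda a: score(predict(state, a)))
print(best)  # → 20, the reorder quantity that exactly covers forecast demand
```

The enterprise-wide version Pratt describes scores the same proposals against a global objective rather than this single local one, which is where the hard math comes in.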

So, pick a department, and just walk me through a hypothetical or real use case where you have seen the technology applied, and have measurable results.

Sure. I can take the work we’re doing at XOJET, which is the largest private aviation company in the U.S. If you want to charter a jet, XOJET is the leading company to do that. The way they were doing pricing before we got there was basically old, static rules that they had developed several years earlier. What we did is work with them to take into account where all of their jets currently were, where all of their competitors’ jets were, and what the demand was going to be, based on a lot of internal and external data: what events were happening in what locations, what was the weather forecast, what [were] the economic conditions, what were historic prices and results? Then we came up with all of the different pricing options, and made a recommendation on what the price should be. As soon as they put in our application, which was in Q4 of 2016, the EBITDA of the company, which is basically the net margin (not quite, but close), went up 5%.
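The pricing approach described here, enumerating candidate prices, predicting the outcome of each, and recommending the best, reduces to a small search. The demand curve below is a made-up linear model, not XOJET’s:

```python
# Toy pricing recommendation: enumerate candidate prices, predict the
# booking probability of each with a hypothetical demand model, and
# recommend the expected-revenue-maximizing price. Numbers are illustrative.

def booking_probability(price, base_price=10_000):
    """Hypothetical demand curve: probability falls as price rises."""
    return max(0.0, 1.0 - 0.5 * (price / base_price))

def recommend_price(candidates):
    """Pick the candidate with the highest expected revenue."""
    return max(candidates, key=lambda p: p * booking_probability(p))

prices = range(6_000, 16_001, 1_000)
best = recommend_price(prices)
print(best)  # → 10000 under this made-up demand curve
```

The production version replaces the made-up demand curve with a model trained on fleet positions, competitor positions, events, weather, and historic prices, but the search structure is the same.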

The next thing we did for them was to develop an application that looked at the balance in their fleet, which is: “Do you have the right jets in the right place, at the right time?” This takes into account having to look at the next day. Where is the demand going to be the next day? So, you make sure you don’t have too many jets in low demand locations, or not enough jets in high demand locations. We actually adjusted the prices, to create an economic incentive to drive the jets to the right place at the right time.

We also, again, looked at competitive position, through Federal Aviation Administration data. You can track the tail numbers of all of their jets, and all of the competitor jets, so you can calculate competitive position. Then, based on that algorithm, the length of haul, which is the number of hours flown per jet, went up 11%.

This was really dramatic, and dramatically reduced the number of “deadheads” they were flying, which is the amount of empty jets they were flying to reposition their jets. I think that’s a great success story. There’s tremendous leadership at that company, very innovative, and I think that’s really transformed their business.

That’s kind of a classic load-balancing problem, right? I’ve got all of these things, and I want to kind of distribute it, and make sure I have plenty of what I need, where. That sounds like a pretty general problem. You could apply it to package delivery or taxicab distribution, or any number of other things. How generalizable is any given solution, like from that, to other industries?

That’s a great question. There are a lot of components in that, that are generalizable. In fact, we’ve done that. We have componentized the code and the thinking, and can rapidly reproduce applications for another client, based on that. There’s a lot of stuff that’s very specific to the client, and of course, the end application is trained on the client’s data. So, it’s not applicable to anybody else. The models are specifically trained on the client data. We’re doing other projects in airline pricing, but the end result is very different, because the circumstances are different.

But you hit on a key question, which is “Are things generalizable?” One of the other approaches we’re taking is around transfer learning, especially when you’re using deep learning technologies. You can think of it as: the top layers of a neural net can be trained on sort of general pricing techniques, and just the deeper layers are trained on pricing specific to that company.

That’s one of the other generalization techniques, because AI problems in the enterprise generally have sparser datasets than if you’re trying to separate cat pictures from dog pictures. Data sparsity is a constant challenge, and I think transfer learning is one of the key strategies to address it.
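The layer split Pratt describes can be sketched by tagging layers as frozen or trainable; note that in common practice it is the early, general-feature layers that are frozen and the final, task-specific layers that are retrained on the new client’s sparse data. The “network” and the gradient step below are deliberately fake stand-ins:

```python
# Sketch of transfer learning as a freeze/retrain split over layers.
# The weights and the "gradient step" are fake; only the structure matters.

class Layer:
    def __init__(self, weights, trainable):
        self.weights = list(weights)
        self.trainable = trainable

def fine_tune(layers, step=0.1):
    """Apply one (fake) gradient step only to the trainable layers."""
    for layer in layers:
        if layer.trainable:
            layer.weights = [w - step for w in layer.weights]

network = [
    Layer([0.5, 0.5], trainable=False),  # general pricing features, frozen
    Layer([0.5, 0.5], trainable=False),  # general pricing features, frozen
    Layer([0.5, 0.5], trainable=True),   # client-specific head, retrained
]
fine_tune(network)
print([layer.weights for layer in network])  # only the last layer moved
```

Because only the head is retrained, the client’s sparse dataset has far fewer parameters to pin down, which is exactly the sparsity argument above.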

You mentioned in passing, looking at things like games. I’ve often thought that was kind of a good litmus test for figuring out where to apply the technology, because games have points, and they have winners, and they have turns, and they have losers. They have structure to them. If that case study you just gave us was a game, what was the point in that? Was it a dollar of profit? Because you were like “Well, the plane could be, or it could fly here, where it might have a better chance to get somebody. But that’s got this cost. It wears out the plane, so the plane has to be depreciated accordingly.” What is the game it’s playing? How do you win the game it’s playing?

That’s a really great question. For XOJET, we actually created a tree of metrics, but at the top of the tree is something called fleet contribution, which is “What’s the profit generated per period of time, for the entire fleet?” Then, you can decompose that down to how many jets are flying, the length of haul, and the yield, which is the amount of dollars per hour flown. There’s also, obviously, a customer relationship component to it. You want to make sure that you get really good customers, and that you can serve them well. But there are very big differences between games and real-life business. Games have a finite number of moves, and the rules are well-defined. If you look at Deep Blue, or AlphaGo, or Arthur Samuel’s checkers player, or even Libratus, all of these were two-player games. In the enterprise, you typically have tens, sometimes hundreds, of players in the game, with undefined sets of moves. So, in one sense, it’s a lot more complicated. The idea is, how do you reduce it so it is game-like? That’s a very good question.

So, do you find that most people come to you with a defined business problem, and they’re not really even thinking about “I want some of this AI stuff. I just want my planes to be where they need to be.” What does that look like in the organization that brings people to you, or brings people to considering an artificial intelligence solution to a problem?

Typically, clients will see our success in one area, and then want to talk to us. For instance, we have a really great relationship with a steel company in Arkansas called Big River Steel. With Big River Steel, we’re building the world’s first learning steel mill, which will learn from their sensors and be able to do all kinds of predictions and recommendations. It goes through that same sense, propose, predict, and score loop. When people heard that story, we got a lot of calls from steel mills. Now, we’re kind of deluged with calls from steel mills all over the world, saying, “How did you do that, and how do we get some of it?”

Typically, people hear about us because of AI. We’re a product company, with applications, so we generally don’t go in from a consulting point of view, and say “Hey, what’s your business problem?” We will generally go in and say, “Here are the ten areas where we have expertise and technology to improve business operations,” and then we’ll qualify a company, if it applies or not. One other thing is that AI follows the scientific method, so it’s all about hypothesis, test, hypothesis, test. So it is possible that an AI application that works for one company will not work for another company. Sometimes, it’s the datasets. Sometimes, it’s just a different circumstance. So, I would encourage companies to be launching lots of hypotheses, using AI.

Your website has a statement quite prominently, “AI is not magic. It’s data.” While I wouldn’t dispute it, I’m curious. What were you hearing from people that caused you to… or maybe hypothetically – you may not have been in on it – what do you think is the source of that statement?

I think there’s a tremendous amount of hype and B.S. right now out there about AI. People anthropomorphize AI. You see robots with scary eyes, or you see crystal balls, or you see things that – it’s all magic. So, we’re trying to be explainers in chief, and to kind of de-mystify this, and basically say it’s just data and math, and supercomputers, and business expertise. It’s all of those four things, coming together.

We just happen to be at the right place in history, where there are breakthroughs in those areas. If you look at computing power, I would single that out as the thing that’s made a huge difference. In April of last year, NVIDIA released the DGX-1, which is their AI supercomputer. We have one of those in our data center, that in our platform we affectionately call “the beast,” which has a petaflop of computing power.

To put that into perspective: the fastest supercomputer in the world in the year 2000 was ASCI Red, which had one teraflop of computing power. There was only one in the world, and no company had access to it.

Now, with the supercomputing that’s out there, the beast has 1,000 times more computing power than the ASCI Red did. So, I think that’s a tremendous breakthrough. It’s not magic. It’s just good technology. The math behind artificial intelligence still relies largely on mathematical breakthroughs that happened in the ‘50s and ‘60s. And of course, Thomas Bayes, with Bayes’ Theorem, who was a philosopher in the 1700s.

There’s been a lot of good work recently around different variations on neural nets. We’re particularly interested in long short-term memory (LSTM) and convolutional neural nets. But a lot of the math has been around for a while. In fact, it’s why I don’t think we’re going to hit general intelligence any time soon. It is true that we have had exponential growth in computing power, and exponential growth in data, but it’s been very linear growth in mathematics, right? If we start seeing AI algorithms coming up with breakthroughs in mathematics that we simply don’t understand, then the antennas can go up.
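Bayes’ Theorem, which Pratt cites, is compact enough to state and run directly. The prior and likelihoods below are invented numbers for illustration:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), with P(E) expanded by
# the law of total probability. Example: the probability of a demand
# spike given a positive signal, with made-up numbers.

def bayes(p_h, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from a prior and the two likelihoods."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Prior: 10% chance of a demand spike. The signal fires 90% of the time
# during a spike and 20% of the time otherwise.
posterior = bayes(0.10, 0.90, 0.20)
print(round(posterior, 3))  # → 0.333: the signal triples the prior
```

The 1700s math is three lines; what changed, as Pratt says, is the compute and the data it now runs over.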

So, if you have your DGX-1, at a petaflop, and in five years, you get something that’s an exaflop – it’s 1,000 times faster than that – could you actually put that to use? Or is it at some point, the jet company only has so much data. There are only so many different ways to crunch it. We don’t really need more – we have, at the moment, all of the processor power we need. Is that the case? Or would you still pay dearly to get a massively faster machine?

We could always use more computing power, even with the DGX-1. For instance, we’re working with a distribution company where we’re generating 500,000 models a day for them, crunching on massive amounts of data. If you have massive datasets, processing takes a while. I can tell you, life is a lot better now. In the ’90s, we were working on a neural net for the Coast Guard, to try to determine which ships off of the West Coast were bad guys. They were very simple neural nets. You would hit return, and it would usually crash. It would run for days and days, be very, very expensive, and it just didn’t work.

Even if it came up with an answer, the ships were already gone. So, we could always use more computing power. I think right now the limitation is more on the data side, related to the fact that companies are throwing out data they shouldn’t be. Take customer relationship management systems: typically, when you have an update to a customer, it overwrites the old data. That is really, really important data. I think coming up with a proper data strategy, and understanding the value of data, is really, really important.
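The CRM point generalizes to an append-only log: record every change rather than overwriting, and both the current value and the full history stay available to a model. The field names below are hypothetical:

```python
# Append-only customer records: keep history instead of overwriting it.
from datetime import date

history = []

def update_customer(customer_id, field, value, when):
    """Append the change instead of overwriting the previous value."""
    history.append({"id": customer_id, "field": field,
                    "value": value, "date": when})

def current_value(customer_id, field):
    """The latest value is recoverable; so is every earlier one."""
    changes = [h for h in history
               if h["id"] == customer_id and h["field"] == field]
    return changes[-1]["value"] if changes else None

update_customer("C1", "segment", "small", date(2016, 1, 1))
update_customer("C1", "segment", "enterprise", date(2018, 1, 1))
print(current_value("C1", "segment"), len(history))  # → enterprise 2
```

An overwriting CRM would have kept only “enterprise”; the log also preserves when the customer was “small,” which is exactly the kind of training signal Pratt says gets thrown away.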

What do you think, on this theme of AI is not magic, it’s data; when you go into an organization, and you’re discussing their business problems with them, what do you think are some of the misconceptions you hear about AI, in general? You said it’s overhyped, and glowing-eyed robots and all of that. From an enterprise standpoint, what is it that you think people are often getting wrong?

I think there are a couple of fundamental things that people are getting wrong. One is a tremendous over-reliance on, and over-focus on, unstructured data: people are falling in love with natural language processing, and thinking that that’s artificial intelligence. While it is true that NLP can help with judging things like consumer sentiment or customer feedback, or trend analysis on social media, generally those are pretty weak signals. I would say, don’t follow the shiny object. I think the reason people see that is the success of Siri and Alexa, and people see that as AI. It is true that those are learning algorithms, and those are effective in certain circumstances.

I think they’re much less effective when you start getting into dialogue. Doing dialogue management with humans is extraordinarily difficult. Training the corpus of those systems is very, very difficult. So, I would say stay away from chatbots, and focus mostly on structured data, rather than unstructured data. I think that’s a really big one. I also think that focusing on the supply side of a company is actually a much more fruitful area than focusing on the demand side, other than sales forecasting. The reason I say that is that the interactions between inbound materials and production, and distribution, are more easily modeled and can actually make a much bigger difference. It’s much harder to model things like the effect of a promotion on demand, although it’s possible to do a lot better than they’re doing now. Or, things like customer loyalty; like the effect of general advertising on customer loyalty. I think those are probably two of the big areas.

When you see large companies being serious about machine learning initiatives, how are they structuring those in the organization? Is there an AI department, or is it in IT? Who “owns” it? How are its resources allocated? Are there a set of best practices that you’ve gleaned from it?

Yes. I would say there are different levels of maturity. Obviously, the vast majority of companies have no organization around this; it is individuals taking initiative and experimenting by themselves. IT in general has not taken a leadership role in this area. I think, fundamentally, that’s because IT departments are poorly designed. The CIO job needs to be two jobs: a Chief Infrastructure Officer and a Chief Innovation Officer. One of those jobs is to make sure that the networks are working, the data center is working, and people have computers. The other job is, “How are advances in technologies helping companies?” There are some companies that have Chief Data Officers. I think that’s also caused a problem, because they’re focusing more on big data, and less on what you actually do with that data.

I think the most advanced companies – I would say, first of all, it’s interesting, because it’s following the same trajectory that information technology organizations followed in companies. First, it’s kind of anarchy. Then, there’s the centralized group. Then, it goes to a distributed group. Then, it goes to a federated group, federated meaning there’s a central authority which basically sets standards and direction, but each individual business unit has its representatives. So, I think we’re going to go through a whole bunch of gyrations in companies, until we end up where most technology organizations are today, which is: there is a centralized IT function, but each business unit also has IT people in it. I think that’s where we’re going.

And then, the last question along these lines: Do you feel that either: A) machine learning is doing such remarkable things, and it’s only going to gain speed, and grow from here, or B) machine learning is over-hyped to a degree that there are unrealistic expectations, and when disappointment sets in, you’re going to get a little mini AI winter again. Which one of those has more truth?

Certainly, there is a lot of hype about it. But if you look at the reality of how many companies have actually implemented learning algorithms – AI, ML, data science – across the operations of their company, we’re at the very, very beginning. If you look at it as a sigmoid, or an s-curve, we’re just approaching the first inflection point. I don’t know of any company that has fully deployed AI across all parts of its operations. I think ultimately, executives in the 21st century will have many, many learning algorithms to support them in making complex business decisions.
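The s-curve framing in that answer can be made concrete with the standard logistic function. The parameters and numbers below are illustrative, not figures from the interview:

```python
import math

def logistic(t, ceiling=1.0, steepness=1.0, midpoint=0.0):
    """Classic s-curve: slow start, rapid middle, saturating finish."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# "Just approaching the first inflection point": adoption is still tiny,
# but the growth rate is about to take off. Growth is steepest at t = midpoint.
early = logistic(-4)  # roughly 0.02 of eventual adoption
mid = logistic(0)     # 0.5, where growth is fastest
late = logistic(4)    # roughly 0.98, near saturation
```

The point of the curve: from the early vantage point, progress looks almost flat, which is exactly why the eventual scale is easy to underestimate.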

I think the company that clearly has exhibited the strongest commitment to this, and is furthest along, is Amazon. If you wonder how Amazon can deliver something to your door in one hour, it’s because there are probably 100 learning algorithms that made that happen, like where should the distribution center be? What should be in the distribution center? Which customers are likely to order what? How many drivers do we need? What’s the route the driver should take? All of those things are powered by learning algorithms. And you see the difference, you feel the difference, in a company that has deployed learning algorithms. I also think if you look back, from a societal point of view, that if we’re going to have ten billion people on the planet, we had better get a lot more efficient at the consumption of natural resources. We had better get a lot more efficient at production.

I think that means moving away from static business rules that were written years ago and are only marginally relevant, to learning algorithms that are constantly optimizing. Then, we’ll have a chance to get rid of what the Hackett Group says is an extra trillion dollars of working capital, basically inventory, sitting in companies. And we’ll be able to serve customers better.

You seem like a measured person, not prone to wild exaggeration. So, let me run a question by you. If you had asked people in 1995, if you had said this, “Hey, you know what? If you take a bunch of computers, just PCs, like everybody has, and you connected them together, and you got them to communicate with hypertext protocol of some kind, that’s going to create trillions and trillions and trillions and trillions and trillions of dollars of wealth.” “It’s going to create Amazon and Google and Uber and eBay and Etsy and Baidu and Alibaba, and millions of jobs that nobody could have ever imagined. And thousands of companies. All of that, just because we’re snapping together a bunch of computers in a way that lets them talk to each other.” That would have seemed preposterous. So, I ask you the question; is artificial intelligence, even in the form that you believe is very real, and what you were just talking about, is it an order of magnitude bigger than that? Or is it that big, again? Or is it like “Oh, no. Just snapping together, a bunch of computers, pales to what we are about to do.” How would you put your anticipated return on this technology, compared to the asymmetrical impact that this seemingly very simple thing had on the world?

I don’t know. It’s really hard to say. I know it’s going to be huge. Right? It is fundamentally going to make companies much more efficient. It’s going to allow them to serve their customers better. It’s going to help them develop better products. The Amazon of today is going to feel like the baseline of tomorrow. And there are going to be a lot of companies that – I mean, we run into a lot of companies right now that simply resist it. They’re going to go away. The shareholders will not tolerate companies that are not performing up to competitive standards.

The competitive standards are going to accelerate dramatically, so you’re going to have companies that can do more with less, and it’s going to fundamentally transform business. You’ll be able to anticipate customer needs. You’ll be able to say, “Where should the products be? What kind of products should they be? What’s the right product for the right customer? What’s the right price? What’s the right inventory level? How do we make sure that we don’t have warehouses full of billions and billions of dollars worth of inventory?”

It’s very exciting. I’m generally really bad at guessing years, but I know it’s happening now, I know we’re at the beginning, and I know it’s accelerating. If you forced me to guess, I would say, “10 years from now, the Amazon of today will be the baseline.” It might even be shorter than that. If you’re not deploying hundreds of algorithms across your company that are constantly optimizing your operations, then you’re going to be trailing behind everybody, and you might be out of business.

And yet my hypothetical 200-person company shouldn’t do anything today. When is the technology going to be accessible enough that it’s sort of in everything? It’s in their copier, and it’s in their routing software. When is it going to filter down, so that it really permeates kind of everything in business?

The 200-person company will use AI, but it will be in things like, I think database design will change fundamentally. There is some exciting research right now, actually using predictive algorithms to fundamentally redesign database structures, so that you’re not actually searching the entire database; you’re just searching most likely things first. Companies will use AI-enabled databases, they’ll use AI in navigation, they’ll use AI in route optimization. They’ll do things like that. But when it comes down to it, for it to be a good candidate for AI, in helping make complex decisions, the answer needs to be non-obvious. Generally with a 200-person company, having run a company that went from 2 people to 20 people, to 200 people, to 2,000 people, to 20,000 people, I’ve seen all of the stages.
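The database research alluded to here is in the spirit of what the literature calls “learned indexes”: fit a model that predicts where a key lives, then search only a small window around the prediction instead of the whole structure. A toy sketch, assuming a sorted array of numeric keys (this is my illustration, not any specific product):

```python
def build_learned_index(sorted_keys):
    """Fit a straight line key -> position by least squares (a toy learned index)."""
    n = len(sorted_keys)
    xs, ys = sorted_keys, range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    intercept = mean_y - slope * mean_x
    # Worst-case prediction error over the stored keys bounds the search window.
    err = max(abs((slope * x + intercept) - y) for x, y in zip(xs, ys))
    return slope, intercept, int(err) + 1

def lookup(sorted_keys, key, model):
    """Search only a small window around the predicted position."""
    slope, intercept, radius = model
    guess = int(slope * key + intercept)
    lo = max(0, guess - radius)
    hi = min(len(sorted_keys), guess + radius + 1)
    for i in range(lo, hi):
        if sorted_keys[i] == key:
            return i
    return -1
```

Real learned-index designs stack models hierarchically and handle inserts; the point here is only the “search most likely things first” idea: the model narrows the scan to a window whose width is the model’s worst-case error.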

A 200-person company, you can kind of brute force. You know everybody. You’ve just crossed Dunbar’s number, so you kind of know everything that’s going on, and you have a good feel for things. But as you said, using other people’s technologies that are driven by AI, for the things that I talked about, will probably apply to a 200-person company.

With your jet company, you did a project, and EBITDA went up 5%, and that was a big win. That was just one business problem you were working on. You weren’t working on where they buy jet fuel, or where they print. Nothing like that. So presumably, over the long haul, the technology could be applied in that organization, in a number of different ways. If we have a $70 trillion economy in the world, what percent is – 5% is easy – what percentage improvement do you think we’re looking at? Like just growing that economy dramatically, just by the efficiencies that machine learning can provide?

Wow. The way to do that is to look at an individual company, and then extrapolate. I look at it in terms of the value of companies, like shareholder value, which is made up of revenue, margins and capital efficiency. I think revenue growth could take off, could probably double from what it is now. On margins, it will have a dramatic impact. If you look at all of the different things you could do within the company, and you had fully deployed learning algorithms, and gotten away from making decisions on yardsticks and averages, a typical company could, I’ll say, double its margins.

But the home run is in capital efficiency, which not too many people pay attention to, and which is one of the key drivers of return on invested capital, the driver of overall value. This is where you can reduce things 30%, things like that, and get rid of warehouses of stuff. That allows you to be a lot more innovative, because then you don’t have obsolescence. You don’t have to push products that don’t work. You can develop more innovative products. There are a lot of good benefits. Then, you start compounding that year over year, and pretty soon, you’ve made a big difference.
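The arithmetic behind those two levers can be sketched with return on invested capital; the figures below are hypothetical, not from XOJET or any company in the interview:

```python
def roic(revenue, operating_margin, invested_capital):
    """Return on invested capital: operating profit over the capital employed to earn it."""
    return (revenue * operating_margin) / invested_capital

# Hypothetical company: $100M revenue, 8% margins, $50M of invested capital.
base = roic(100.0, 0.08, 50.0)            # 0.16 -> 16% ROIC
# Apply the answer's two levers: double margins, cut capital (inventory) by 30%.
improved = roic(100.0, 0.16, 50.0 * 0.7)  # about 0.457 -> roughly 46% ROIC
```

Doubling margins doubles the numerator and shedding 30% of the capital shrinks the denominator, so the two effects compound: here, roughly a 2.9x improvement in ROIC before any compounding year over year.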

Right, because doubling margins alone doubles the value of all of the companies, right?

It would, if you projected it out over time. Yes. All else being equal.

Which it seldom is. It’s funny, you mentioned Amazon earlier. I just assumed they had a truck with a bunch of stuff on it, that kept circling my house, because it’s like every time I want something, they’re just there, knocking on the door. I thought it was just me!

Yeah. Amazon Prime Now came out, was it last year? In the Bay Area? My daughter ordered a pint of ice cream and a tiara. An hour later, a guy is standing at the front door with a pint of ice cream and a tiara. It’s like, wow!

What a brave new world, that has such wonders in it!

Exactly!

As we’re closing up on time here, there are a number of people that are concerned about this technology. Not in the killer robot scenario. They’re concerned about automation; they’re concerned about – you know it all. Would you say that all of this technology and all of this growth, and all of that, is good for workers and jobs? Or it’s bad, or it’s disruptive in the short term, not in the long term? How do you size that up for somebody who is concerned about their job?

First of all, moving from big picture to small picture: this is necessary for society, unless we stop having babies. We need to do this, because we have finite resources, and we need to figure out how to do more with less. I think the impact on jobs will be profound. I think it will make a lot of jobs a lot better. In AI, we say it’s augment, amplify and automate. Right now, the things we’re doing at XOJET really help make the people in revenue management a lot more powerful and, I think, enjoy their jobs a lot more, doing a lot less routine research and grunt work. So, they actually become more powerful; it’s like they have superpowers.

I think that there will also be a lot of automation. There are some tasks that AI will just automate, and just do, without human interaction. A lot of decisions, in fact most decisions, are better if they’re made with an algorithm and a human, to bring out the best of both. I do think there’s going to be a lot of dislocation. I think it’s going to be very similar to what happened in the automotive industry, and you’re going to have pockets of dislocation that are going to cause issues. Obviously, the one that’s talked about the most is the driverless car. If you look at all of the truck drivers, I think probably within a decade, for most cross-country trucks, there’s going to be some person sitting in their house, in their pajamas, with nine screens in front of them, and they’re going to be driving nine trucks simultaneously, just monitoring them. And that’s the number one job of adult males in the U.S. So, we’re going to have a lot of displacement. I think we need to take that very seriously, and get ahead of it, as opposed to chasing it, this time. But I think overall, this is also going to create a lot more jobs, because it’s going to make more successful companies. Successful companies hire people and expand, and I think there are going to be better jobs.

You’re saying it all eventually comes out in the wash; that we’re going to have more, better jobs, and a bigger economy, and that’s broadly good for everyone. But there are going to be bumps in the road along the way. Is that what I’m getting from you?

Yes. I think it will actually be a net positive. I think it will be a net significant positive. But it is a little bit of, as economists would say, “creative destruction.” As you go from agricultural to industrial, to knowledge workers, toward sort of an analytics-driven economy, there are always massive disruptions. I think one of the things that we really need to focus on is education, and also on trade schools. There is going to be a lot larger need for plumbers and carpenters and those kinds of things. Also, if I were to recommend what someone should study in school, I would say study mathematics. That’s going to be the core of the breakthroughs, in the future.

That’s interesting. Mark Cuban was asked that question also. He says the first trillionaires are going to be in AI. And he said philosophy, because in the end, what you’re going to need is what only people know how to do. Only people can impute value, and only people can do all of that.

Wow! I would also say behavioral economics; understanding what humans are good at doing, and what humans are not good at doing. We’re big fans of Kahneman and Tversky, and more recently, Thaler. When it comes down to how humans make decisions, and understanding what skills humans have, and what skills algorithms have, it’s very important to understand that, and to optimize that over time.

All right. That sounds like a good place to leave it. I want to thank you so much for a wide-ranging show, with a lot of practical stuff, and a lot of excitement about the future. Thanks for being on the show.

My pleasure. I enjoyed it. Thanks, Byron.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.


Voices in AI – Episode 49: A Conversation with Ali Azarbayejani


In this episode, Byron and Ali discuss AI’s impact on business and jobs.





Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Ali Azarbayejani. He is the CTO and Co-founder of Cogito. He has 18 years of commercial experience as a scientist, an entrepreneur, and designer of world-class computational technologies. His pioneering doctoral research at the MIT Media Lab in Probabilistic Modeling for 3-D Vision was the basis for his first startup company Alchemy 3-D Technology, which created a market in the film and video post-production industry for camera matchmoving software. Welcome to the show Ali.

Ali Azarbayejani: Thank you, Byron.

I’d like to start off with the question: what is artificial intelligence?

I’m glad we’re starting with some definitions. I think I have two answers to that question. The original definition of artificial intelligence I believe in a scholarly context is about creating a machine that operates like a human. Part of the problem with defining what that means is that we don’t really understand human intelligence very well. We have a pretty good understanding now about how the brain functions physiologically, and we understand that’s an important part of how we provide cognitive function, but we don’t have a really good understanding of mind or consciousness or how people actually represent information.

I think the first answer is that we really don’t know what artificial or machine intelligence is other than the desire to replicate human-like function in computers. The second answer I have is how AI is being used in industry. I think that that is a little bit easier to define because I believe almost all of what we call AI in industry is based on building input/output systems that are framed and engineered using machine learning. That’s really at the essence of what we refer to in the industry as AI.

So, you have a high-concept definition and a bread-and-butter, workaday definition, and that’s how you’re bifurcating that world?

Yeah, I mean, a lot of people talk about how we’re in the midst of an AI revolution. I don’t believe, at least in the first sense of the term, that we’re in an AI revolution at all. I think we’re in the midst of a machine learning revolution, which is really important and really powerful, but what I take issue with is the term intelligence, because most of these things that we call artificial intelligence don’t really exhibit the properties of intelligence that we would normally think are required for human intelligence.

These systems are largely trained in the lab and then deployed. When they’re deployed, they typically operate as a simple static input/output system. You put in audio and you get out words. You put in video and you get out locations of faces. That’s really at the core of what we’re calling AI now. I think it’s really the result of advances in technology that have made machine learning possible at large scale, and it’s not really a scientific revolution about intelligence or artificial intelligence.
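The train-in-the-lab, deploy-as-a-frozen-function pattern Ali describes can be sketched with a toy perceptron (my illustration, not anything from Cogito):

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """'Lab' phase: fit weights on labeled examples, then freeze them."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def deploy(w, b):
    """'Deployed' phase: a static input/output function; no further learning."""
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn a simple separable concept (logical OR) in the "lab"...
classifier = deploy(*train_perceptron(
    [(0, 0), (0, 1), (1, 0), (1, 1)], [0, 1, 1, 1]))
# ...then serve it: input in, output out, weights frozen.
```

Once `deploy` returns the closure, the weights never change: what runs in production is exactly the static input/output system described above, not something that keeps learning.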

All right, let’s explore that some, because I think you’re right. I have a book coming out in the Spring of 2018, which is 20,000 words dedicated to the brain, the mind and consciousness. It really tries to wrap around those three concepts. So, let’s go through them, if you don’t mind, for just a minute. You started out by saying that with the brain we understand how it functions. I would love to go into that, but as far as I understand it, we don’t know how a thought is encoded. We don’t know how the memory of your 10th birthday party, or what pineapple tastes like, or any of that is encoded. We can’t write to it. We can’t read from it, except in the most rudimentary sense. So do you think we really do understand the brain?

I think that’s the point I was actually making is that we understand the brain at some level physiologically. We understand that there’s neurons and gray matter. We understand a little bit of physiology of the brain, but we don’t understand those things that you just mentioned, which I refer to as the “mind.” We don’t really understand how data is stored. We don’t understand how it’s recalled exactly. We don’t really understand other human functions like consciousness and feelings and emotions and how those are related to cognitive function. So, that’s really what I was saying is, we don’t understand how intelligence evolves from it, although really where we’re at is we just understand a little bit of the physiology.

Yeah, it’s interesting. There’s no consensus definition of what intelligence is, and that’s why you can point at anything and say, “Well, that’s intelligent. My sprinkler that comes on when my grass is dry, that’s intelligent.” The mind is of course a very, shall we say, controversial concept, but I think there is a consensus definition of it that everybody can agree to, which is: it’s all the stuff the brain does that doesn’t seem, emphasis on seem, like something an organ should be able to do. Your liver doesn’t have a sense of humor. Your liver doesn’t have an imagination. All of these things. So, based on that definition, and not even getting to consciousness, not even experiencing the world, just these raw abilities – like the ability to write a poem, or paint a great painting, or what have you – you were saying we actually have not made any real progress toward any of that. That’s gotten mixed up in this whole machine learning thing. Am I right that you think we’re still at square one with building an artificial mind?

Yeah, I mean, I don’t see a lot of difference intellectually [between] where we are now and where we were when I was in school in the late 80s and 90s, in terms of theories about the mind and theories about how we think and reason. The current machine learning revolution is largely based on neural networks, which were invented in the 1960s. Really, what is fueling the revolution is technology: the fact that we have the CPU power, the memory, the storage and the networking — and the data — and we can put all that together and train large networks at scale. That’s really what is fueling the amazing advances that we have right now, not any philosophical new insights into how human intelligence works.

Putting it out there for just a minute, is it possible that an AGI, a general intelligence, that an artificial mind, is it possible that that cannot be instantiated in machinery?

That’s a really good question. I think that’s another philosophical question that we need to wrestle with. I think that there are at least two schools of thought on this that I’m aware of. I think the prevailing notion, which is I think a big assumption, is that it’s just a matter of scale. I think that people look at what we’ve been able to do with machine learning and we’ve been able to do incredible things with machine learning so far. I think people think of well, a human sitting in a chair can sit and observe the world and understand what’s going on in the world and communicate with other people. So, if you just took that head and you could replicate what that head was doing, which would require a scale much larger than what we’re doing right now with artificial neural networks, then embody that into a machine, then you could set this machine on the table there or on the chair and have that machine do the same thing.

I think one school of thought is that the human brain is an existence proof that a machine can exist to do the operations of a human intelligence. So, all we have to do is figure out how to put that into a machine. I think there’s a lot of assumptions involved in that train of thought. The other train of thought, which is more along the lines of where I land philosophically, is that it’s not clear to me that intelligence can exist without ego, without the notion of an embodied self that exists in the world, that interacts in the world, that has a reason to live and a drive to survive. It’s not clear to me that it can’t exist, and obviously we can do tasks that are similar to what human intelligence does, but I’m not entirely sure that… because we don’t understand how human intelligence works, it’s not clear to me that you can create an intelligence in a disembodied way.

I’ve had 60-something guests on the show, and I keep track of the number who don’t believe we can actually build a general intelligence; I think it’s five. They are Deep Varma, Esther Dyson, people who have similar views… or more so, I think they’re even more explicit in saying they don’t think we can do it. The other 60 guests have the same line of logic, which is: we don’t know how the brain works, we don’t know how the mind works, we don’t know how consciousness works, but we do have one underlying assumption, that we are machines, and if we are machines, then we can build a mechanical us. Against that logic, the word that’s often offered is magic: the only way to get around it is to appeal to magic, to appeal to something supernatural, to appeal to something unscientific. So, my question to you is: is that true? Do you have to appeal to something unscientific for that logic to break down, or are there maybe scientific, completely causal, system-y reasons why we cannot build a conscious machine?

I don’t believe in magic. I don’t think that’s my argument. My argument is more around what is the role that the body around the brain plays, in intelligence? I think we make the assumption sometimes that the entire consciousness of a person, entire cognition, everything is happening from the neck up, but the way that people exist in the world and learn from simply existing in the world and interacting with the world, I think plays a huge part in intelligence and consciousness. Being attached to a body that the brain identifies with as “self,” and that the mind has a self-interest in, I think may be an essential part of it.

So, I guess my point of view on this is I don’t know what the key ingredients are that go into intelligence, but I think that we need to understand… Let me put it this way, I think without understanding how human consciousness and human feelings and human empathy works, what the mechanisms are behind that, I mean, it may be simply mechanical, but without understanding how that works, it’s unclear how you would build a machine intelligence. In fact, scientists have struggled from the beginning of AI even to define it, and it’s really hard to say you can build something until you can actually define it, until you actually understand what it is.

The philosophical argument against that would be: “Look, you’ve got a finite number of senses that are giving input to your brain, and you know the old philosophical thought experiment that you’re just a brain in a vat somewhere, and that’s all you are; you’re being fed these signals and your brain is reacting to them, but there really isn’t even an external world that you’re experiencing.” So, they would say you can build a machine and give it these senses, but you’re saying there’s something more than that, something we don’t even understand, beyond even the five senses.

I suppose if you had a machine that could replicate atom for atom a human body, then you would be able to create an intelligence. But, how practical would it be?

There are easier ways to create a person than that?

Yeah, that’s true too, but how practical is a human as a computing machine? I mean, one of the advantages of the computer systems that we have, the machine learning-based systems that we call AI is that we know how we represent data. Then we can access the data. As we were talking about before, with human intelligence you can’t just plug in and download people’s thoughts or emotions. So, it may be that in order to achieve intelligence, you have to create this machine that is not very practical as a machine. So you might just come full circle to well, “is that really the powerful thing that we think it’s going to be?”

I think people entertain the question because of this question of “Are people simply machines? Is there anything more going on? Are you just a big bag of chemicals with electrical pulses going through you?” I think emotionally engaging that question is why they do it, not because they necessarily want to build a replicant. I could be wrong. Let me ask you this. Let’s talk about consciousness for a minute. To be clear, people say we don’t know what consciousness is. This is of course wrong. Everybody agrees on what it is. It is the experiencing of things. It is the difference between a computer being able to sense temperature and a person being able to feel heat. It’s that difference.

It’s been described as the last scientific question we don’t really know how to ask, and we don’t know what the answer would look like. I put eight theories together in this book I wrote. Do you have a theory, just even a gut reaction? Is it an emergent property? Is it a quantum property? Is it a fundamental law of the universe? Do you have a gut feel of what direction you would look to explain consciousness?

I really don’t know. I think that my instinct is along the lines of what I talked about recently with embodiment. My gut feel is that a disembodied brain is not something that can develop a consciousness. I think consciousness fundamentally requires a self. Beyond that, I don’t really have any great theories about consciousness. I’m not an expert there. My gut feel is we tend to separate, when we talk about artificial intelligence, we tend to separate the function of mind from the body, and I think that may be a huge assumption that we can do that and still have self and consciousness and intelligence.

I think it’s a fascinating question. About half of the guests on the show just don’t want to talk about it. They do not want to talk about consciousness, because they say it’s not a scientific question and it’s a distraction. For the other half, it very much is the thing, the only thing that makes living worthwhile. It’s why you feel love and why you feel happiness. It is everything, in a way. People have such widely [divergent views]. Stephen Wolfram was on the show, and he thinks it’s all just computation, and to that extent, anything that performs computation, which is really just about anything, is conscious. A hurricane is conscious.

One theory is that consciousness is an emergent property: just as you are trillions of cells that don’t know who you are, and none of them has a sense of humor, yet you somehow have a distinct emergent self and a sense of humor. There are people who think the planet itself may have a consciousness. Others say that activity in the sun looks a lot like brain activity, and perhaps the sun is conscious; that is an old idea. It is interesting that children, when they draw an outdoor scene, always put a smiling face on the sun. Do you think consciousness may be more ubiquitous, not unique to humans? That it may be in all kinds of places? Or do you, at a gut level, think it’s a special human [trait], one you might want to extend to other animals?

That’s an interesting point of view. I can certainly see how it’s a nice theory, the continuum idea, which I think is what he’s saying: that there’s some level of consciousness in even the simplest thing. Yeah, I think this is more of an “it’s just a matter of scale” type of philosophy, which is that at a larger scale, what emerges is a more complex and meaningful consciousness.

There’s a project in Europe you’re probably familiar with, the Human Brain Project, which is really trying to build an intelligence through that kind of scale. The counter to it is the OpenWorm project: they’ve sequenced the genome of the nematode worm, and its brain has 302 neurons, and for 20 years people have been trying to model those 302 neurons in a computer to build, as it were, a digital functioning nematode worm. By one argument they’re no closer to cracking that than they were 20 years ago. The scale question has its adherents at both extremes.

Let’s switch gears now and put that world aside and let’s talk about the world of machine learning, and we won’t call it intelligence anymore. It’s just machine learning, and if we use the word intelligence, it’s just a convenience. How would you describe the state of the art? As you point out, the techniques we’re using aren’t new, but our ability to apply them is. Are we in a machine learning renaissance? Is it just beginning? What are your thoughts on that?

I think we are in a machine learning renaissance, and I think we’re closer to the beginning than to the end. As I mentioned before, the real driver of the renaissance is technology. We have the computational power to do massive amounts of learning. We have the data, and we have the networks to bring it all together and the storage to store it all. That’s really what has allowed us to realize the theoretical capabilities of complex networks for modeling input/output functions.

We’ve done amazing things with that particular technology. It’s very powerful. I think there’s a lot more to come, and it’s pretty exciting the kinds of things we can do with it.
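The point above, that compute, data, and storage let us realize the theoretical capability of networks to model input/output functions, can be made concrete with a toy example. The sketch below is a minimal two-layer network in plain numpy that learns the XOR function from four examples; the hidden size, learning rate, and step count are arbitrary choices for illustration, and this has nothing to do with any specific system discussed here.

```python
import numpy as np

# Toy demonstration: a two-layer network learns the XOR input/output
# function from examples via full-batch gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    return h, sigmoid(h @ W2 + b2)    # output probability in (0, 1)

lr = 0.5
for step in range(5000):
    h, out = forward(X)
    err = out - y                      # gradient of cross-entropy w.r.t. logit
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out = forward(X)
print(np.round(out.ravel(), 2))        # should be close to 0, 1, 1, 0
```

The same machinery, scaled up by orders of magnitude in data and parameters, is what the renaissance has made practical.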

There’s a lot of concern, as you know, the debate about the impact that it’s going to have on employment. What’s your take on that?

Yeah, I’m not really concerned about that at all. I think that largely what these systems are doing is they’re allowing us to automate a lot of things. I think that that’s happened before in history. The concern that I have is not so much about removing jobs, because the entire history of the industrial revolution [is] we’ve built technology that has made jobs obsolete, and there are always new jobs. There’s so many things to do in the world that there’s always new jobs. I think the concern, if there’s any about this, is the rate of change.

I think at a generational level, it’s not a problem. The next generation is going to be doing jobs that we don’t even know exist right now, or that don’t exist right now. I think the problems may come within a generation, during the transition, if you start automating jobs that belong to people who cannot be retrained in something else. But I think that there will always be new jobs.

Is it possible that there’s a person out there who cannot be retrained to do meaningful work? We’ve had 250 years of unending technological advance that would have blown the mind of somebody in 1750, and yet we don’t have anybody who… it’s like, no, they can’t do anything. Assuming that you have full use of your body and mind, there’s not a person on the planet who cannot in theory add economic value, all the more if they’re given technology to do it with. Do you really think that there will be people who “cannot be retrained”?

No, I don’t think it’s a “can” issue. I agree with you. I think that people can be retrained and like I said, I’m not really worried that there won’t be jobs for people to do, but I think that there are practical problems of the rate of change. I mean, we’ve seen it in the last decades in manufacturing jobs that a lot of those have disappeared overseas. There’s real economic pain in the regions of the country where those jobs were really prominent, and I don’t think there’s any theoretical reason why people can’t be retrained. Our government doesn’t really invest in that as much as it should, but I think there’s a practical problem that people don’t get retrained. That can cause shifts. I think those are temporary. I personally don’t see long term issues with transformations in technology.

It’s interesting because… I mean, this is a show about AI, which obviously holds it in high regard, but there have been other technologies that have been as transformative. An assembly line is a kind of AI. That was adopted really quickly. Electricity was adopted quickly, and steam was adopted. Do you think machine learning really is being adopted all that much faster, or is it just another equally transformative technology like electricity or something?

I agree with you. I think that it’s transformational, but I think it’s probably creating as many jobs as it’s automating away right now. For instance, in our industry, which is in contact centers, a big trend is trying to automate, basically to digitize a lot of the communications to take load off the telephone call center. What most of our enterprise customers have found with our contact centers is the more they digitize, their call volume actually goes up. It doesn’t go down. So, there’s kind of some conflicting evidence there about how much this is actually going to take away from jobs.

I am of the opinion that anyone in any endeavor understands there’s always more to do than you have time to do. Automating things that can be automated I generally feel is a positive thing, and putting people to use in functions where we don’t know how to automate things is, I think, always going to be an available path.

You brought up what you do. Tell us a little bit about Cogito and its mission.

Our mission is centered around helping people have better conversations. We’re really focused on the voice stream, and in particular our main business is in customer call centers, where our technology listens to ongoing conversations, understands what’s going on in those conversations from an interactive and relationship point of view, from a behavioral point of view, and gives agents real-time feedback when conversations aren’t going well or when there’s something they can do to improve the conversation.

That’s where we get to the concept of augmented intelligence, which is using these machine learning endowed systems to help people do their jobs better, rather than trying to replace them. That’s a tremendously powerful paradigm. There’s trends, as I mentioned, towards trying to automate these things away, but often our customers find it more valuable to increase the competence of the people doing the jobs there because those jobs can’t be completely automated, rather than trying to automate away the simple things.

Hit rewind, back way up with Cogito because I’m really fascinated by the thesis that there’s all of this. There’s what you say and then there’s how you say it. That we’re really good with one half of that equation, but we don’t apply technology to the other half. Can you tell that story and how it led to what you do?

Yeah, imagine listening to two people having a conversation in a foreign language that you don’t understand. You can undoubtedly tell a lot about what’s going on in that conversation without understanding a single word. You can tell whether people are angry at each other. You can tell whether they’re cooperating or hostile. You can tell a lot about the interaction without understanding a single word. That’s essentially what we’re doing with the behavioral analysis of how you say it. So, when we listen to telephone conversations, a lot of what we’re doing is listening to the tenor and the interaction of the conversation and getting a feel for how that conversation is going.

I mean, you’re using “listen” here colloquially. There’s nothing really listening. There’s a data stream that’s being analyzed, right?

Exactly, yeah.

So, I guess it sounds like they’re like the parents [of] Charlie Brown, like “waa, wa waa.” So, it hears that and can figure out what’s going on. So, that sounds like a technology with broad applications. Can you talk about in a broad sense what can be done, and then why you chose what you did choose as a starting point?

It actually wasn’t the starting point. The application that originally inspired the company was more of a mental health application. There’s a lot of anecdotal understanding that people with clinical depression or depressed mood speak in a characteristic way. So the original inspiration for building the company and the technology was to use in telephone outreach operations with chronically ill populations that have very high rates of clinical depression and very low rates of detection and treatment of clinical depression. So, that’s one very interesting application that we’re still pursuing.

The second application came up in that same context, the context of health and wellness call centers: the concept of engagement. A lot of the benefit in healthcare comes from preventative care, so there’s been a lot of emphasis on helping people quit smoking, have better diets, and things like that. These programs normally take place over the telephone, so there are conversations, but they’re usually only successful when the patient or the member is engaged in the process. So, we used this sort of speech and conversational analysis to build models of engagement, which would allow companies to either react to under-engaged patients or not waste their time with them.

The third application, which is what we’re primarily focused on right now, is agent interaction: the quality of agent interaction. There’s a huge amount of value for big companies that are consumer-oriented, particularly those that have membership relationships with customers, in being able to provide a good human interaction when there are issues. In customer service centers, it’s very difficult, if you have thousands of agents on the phone, to understand what’s going on in those calls, much less improve it. A lot of companies are really focused on improvement. We’re the first system that allows these companies to understand what’s going on in those conversations in real time, which is the moment of truth where they can actually do something about it. We allow them to do something about it by giving information not only to supervisors, who can provide real-time coaching, but also to agents directly, so that they can tell when their own conversations are going south and correct that and have better conversations themselves. That’s the gist of what we do right now.

I have a hundred questions all running for the door at once with this. My first question is you’re trying to measure engagement as a factor. How generalizable is that technology? If you plugged it into this conversation that you and I are having, does it not need any modification? Engagement is engagement is engagement, or is it like, Oh no, at company X it’s going to sound different than a phone call from company Y?

That’s a really good question. In some general sense, an engaged interaction, if you took a minute of our conversation right now, is pretty generalizable. The concept is that if you’re engaged in the topic, then you’re going to have a conversation which is engaged, which means there’s going to be a good back and forth and good energy in the conversation and things like that. Now in practice, when you’re talking about a call center context, it does get trickier, because every call center has potentially quite different shapes of conversations.

So, one call center may need to spend a minute going through formalities and verification and all that kind of business, and that part of the conversation is not the part you actually care about; the part you care about is where you’re actually talking about a meaningful topic. Whereas another call center may have a completely different shape of conversation. What we find that we have to do, and where machine learning comes in handy here, is take our general models of engaged interactions and convert and adapt those, in a particular context, to understanding engaged overall conversations. Those are going to vary from context to context. So, that’s where adaptive machine learning comes into play.
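To make the “shape of a conversation” idea concrete, here is a purely hypothetical sketch of scoring engagement from turn-taking alone, with no words involved. The `Turn` structure, the saturation point of one speaker change per five seconds, and the equal weighting of the two components are all invented for illustration; this is not Cogito’s actual model.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # e.g. "agent" or "caller"
    start: float   # seconds from the start of the call
    end: float

def engagement_score(turns):
    """Crude score in [0, 1] combining back-and-forth rate and talk-time balance."""
    total = turns[-1].end - turns[0].start
    # Count speaker changes: a lively conversation alternates often.
    exchanges = sum(1 for a, b in zip(turns, turns[1:]) if a.speaker != b.speaker)
    rate = exchanges / total
    # Measure how evenly talk time is split between the parties.
    talk = {}
    for t in turns:
        talk[t.speaker] = talk.get(t.speaker, 0.0) + (t.end - t.start)
    balance = min(talk.values()) / max(talk.values())  # 1.0 = perfectly balanced
    rate_component = min(rate / 0.2, 1.0)  # saturate at one exchange per 5 s
    return 0.5 * rate_component + 0.5 * balance

lively = [Turn("caller", 0, 4), Turn("agent", 4, 7), Turn("caller", 7, 10),
          Turn("agent", 10, 14), Turn("caller", 14, 16), Turn("agent", 16, 20)]
monologue = [Turn("agent", 0, 18), Turn("caller", 18, 20)]
print(engagement_score(lively), engagement_score(monologue))
```

Adapting such a score per call center, as described above, would amount to learning different weights and thresholds for each context rather than hand-picking them as done here.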

My next question is from person to person how consistent… no doubt if you had a recording of me for an hour, you could get a baseline and then measure my relative change from that, but when you drop in, is Bob X of Tacoma, Washington and Suzie Q of Toledo, do they exhibit consistent traits or attributes of engagement?

Yeah, there are certainly variations among people’s speaking styles. You look at areas of the country, different dialects and things like that. Then you also look at different languages, and those are all going to be a little bit different. When we’re talking about engagement at a statistical level, these models work really well. So the key, when thinking about product development for these, is to focus on providing tools that are effective at a statistical level. Looking at one particular person, your model may indicate that this person is not engaged when maybe that is just their normal speaking style, but statistically it’s generalizable.

My next question is: is there something special about engagement? Could you, if you wanted to, tell whether somebody’s amused or intrigued or annoyed or outraged? There’s a palette of human emotions. I guess I’m asking: with engagement, like you said, it’s not so much tonal qualities you’re listening for, you’re counting back and forths, that’s kind of a numbers [thing], not a…. So on these other factors, could you do that hypothetically?

Yeah, in fact, our system is a platform for doing exactly that sort of thing. Some of those things we’ve done. We build models for various emotional qualities and things like that. So, that’s the exciting thing is that once you have access to these conversations and you have the data to be able to identify these various phenomena, you can apply machine learning and understand what are the characteristics that would lead to a perception of amusement or whatever result you’re looking for.

Look, I applaud what you’re doing. Anybody who can make phone support better has my wholehearted support, but I wonder if where this technology is heading isn’t kind of an OEM thing, where it’s put into caregiving robots, for instance, that need to learn how to read the emotions of the person they’re caring for and modulate what they say. It’s like a feedback loop for self-teaching, just that use case: the robot caregiver that uses this [knows] she’s annoyed, he’s happy, or whatever, as a feedback loop. Am I way off in sci-fi land, or is that something that could be done?

No, that’s exactly right, and it’s an anticipated application of what we do. As we get better and better at being able to understand and classify useful human behaviors and then inferring useful human emotional states from those behaviors, that can be used in automated systems as well.

Frequent listeners to the show will know that I often bring up Weizenbaum and ELIZA. The setup is that Weizenbaum, back in the ’60s, made this really simple chat bot. You would say, “I don’t feel good today,” and it would say, “Why don’t you feel good today?” “I don’t feel good today because of my mother.” “Why does your mother make you not feel good?” It’s this really basic thing, but what he found was that people were connecting with it, and this really disturbed him, so he unplugged it. He said that when the computer says “I understand,” it’s just a lie: there’s no “I,” which it sounds like you would agree with, and there’s nothing that understands anything. Do you worry that that is a [problem]? Weizenbaum would say, “that’s awful.” If that thing is manipulating an old person’s emotions, that’s just a terrible, terrible thing. What would you say?

I think it’s a danger. Yeah, I think we’re going to see that sort of thing happen for sure. I think people look at chat bots and say, “Oh look, that’s an artificial intelligence, that’s doing something intelligent,” and it’s really not, as ELIZA proves. You can just have a little rules-based system on the back end and type stuff in and get stuff out. A verbal chat bot might use speech-to-text as an input modality and text-to-speech as an output modality, but also have a rules-based unit on the back end, and it’s really doing nothing intelligent, but it can give the illusion of some intelligence going on because you’re talking to it and it’s talking back to you.

So, I think yeah, there will be bumps along that road for sure, in trying to build these technologies that, particularly when you’re trying to build a system to replace a human and trying to convince the user of the system that you’re talking to a human. That’s definitely sketchy ground.
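The ELIZA-style mechanism described above, pattern rules plus pronoun reflection, can be sketched in a few lines. The specific rules and reflections below are invented for illustration and only cover the exchange quoted earlier; Weizenbaum’s original used a much larger script.

```python
import re

# A tiny ELIZA-style, rules-based responder: pattern matching plus
# pronoun reflection, with no understanding at all.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

RULES = [
    # More specific rules first; captured groups exclude trailing punctuation.
    (re.compile(r"i don't feel good because of ([^.?!]+)", re.I),
     "Why does {0} make you not feel good?"),
    (re.compile(r"i don't feel good", re.I),
     "Why don't you feel good today?"),
    (re.compile(r"i feel ([^.?!]+)", re.I),
     "Why do you feel {0}?"),
]

def reflect(fragment):
    # Swap first- and second-person words so the echo reads naturally.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."   # fallback when nothing matches

print(respond("I don't feel good today."))
print(respond("I don't feel good because of my mother."))
```

The point of the sketch is how little machinery is needed: there is no model of the speaker or the topic anywhere, only string substitution.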

Right. I mean, I guess it’s forgivable; we don’t know, it’s all new. It’s all stuff where we’re having to kind of wing it. We’re coming up on the end of our time. I just have a couple of closing questions, which are: Do you read science fiction? Do you watch science fiction movies or TV? And if so, is there any view of the future, any view of AI or anything like that, that you look at and think, yeah, that could happen someday?

Yeah, it’s really hard to say. I can’t think of anything. Star Wars of course used very anthropomorphized robots, and if you think of a system like HAL in 2001: A Space Odyssey, you could certainly simulate something like that. If you’re talking about information, being able to talk to HAL and have HAL look stuff up for you and then talk back to you and tell you the answer, that’s totally believable. Of course the twist in 2001: A Space Odyssey is that HAL ended up having a sense of its own self and decided to make decisions. Yeah, I’m very much rooted in the present, and there are a lot of exciting things going on right now.

Fair enough. It’s interesting that you used Star Wars, which of course is a long time ago, because somehow or another you think the movie would be different if C3PO were named Anthony and R2D2 was named George.

Yeah.

That would just take on a whole different… giving them names is even one step closer to that whole thing. Data in Star Trek kind of walked the line. He had a name, but it was Data.

It’s actually interesting to look at the difference between C3PO and R2D2. You look at C3PO, and it has the form of a human, and you can ask the question: “Why would you build a robot that has the form of a human?” R2D2 is a robot which does, or could potentially do, exactly what C3PO does, in the form of, well, a cylinder. So, it’s interesting to look at the contrast and why they imagined two different kinds of robots: one which is very anthropomorphized, and one which is very mechanical.

Yeah, you’re right, because the decision not to give R2 speech, it’s not like he didn’t have enough memory, like he needed another 30MB of RAM or something. That also was clearly deliberate. I remember reading that Lucas originally wasn’t going to use Anthony Daniels to voice him; he was going to get somebody who sounded like a used-car salesman, kind of fast-talking and all that, and that’s what the script was written for. I’m sure it’s a literary device, but like a lot of these things, I’m a firm believer that what comes out in science fiction isn’t predicting the future; it kind of makes it. Uhura had a Bluetooth device in her ear. So, whatever the literary imagining of it is, is probably going to be what the scientific manifestation of it is, to some degree.

Yeah, the concept of the self-fulfilling prophecy is definitely there.

Well, I tell you what, if people want to keep up with you and all this work you’re doing, do you write, yak on Twitter, how can people follow what you do?

We’re going to be writing a lot more in the future. Our website www.cogitocorp.com is where you’ll find the links to the things that we’re writing on, AI and the work we do here at Cogito.

Well, this has been fascinating. I’m always excited to have a guest who is willing to engage these big questions and take, as you pointed out earlier, a more contrarian view. So, thank you for your time Ali.

Thank you, Byron. It’s been fun, and thanks for having me on.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
