Episode 17: Katherine Gorman on the Question All Business Leaders Should Be Asking: "How Does It Work?"

Rob May and Talking Machines' Katherine Gorman talk AI trends, hype, "least favorite stories," why AI is absolutely not magic, and much more. Check out this episode for her take:

Subscribe to AI at Work on iTunes or Google Play


Rob May, CEO and Co-Founder,
Talla

Katherine Gorman, Host,
Talking Machines 


  


Episode Transcription

Rob May: Hello everybody, and welcome to the latest episode of AI at Work. I am Rob May, CEO and co-founder of Talla. And I'm very excited today to have Katherine Gorman on as our guest. Katherine does a bunch of things, but one is that she's the host of Talking Machines, which I think was the first serious AI podcast out on the web. It's definitely the first one that I started listening to several years ago. It's very technical at times, which I like, but it is a fantastic podcast if you haven't heard it. So, Katherine, welcome.

Katherine Gorman: Thanks so much, Rob. It's great to be here.

RM: Why don't you give us a little bit about your background and how you got to Talking Machines?

KG: It was a long and winding path. I'm so flattered that you like Talking Machines. Every time somebody tells me that they like it, I'm still pleasantly surprised.

I worked in public radio for a really long time, and then left and started making podcasts specifically. I got into public radio because I really wanted to be a science reporter, a science producer, because I thought that these were the places where really interesting conversations could take place: you had a lay public that didn't have all the information they quite needed to understand a topic, and you had all of this wealth of research and in-depth information.

You just really couldn't parse it out in four minutes and 30 seconds, which is the length of a public radio segment. So I left. I started making podcasts. One of the first shows that I got to work on was Talking Machines. I developed it with Ryan Adams, who is now at Princeton but, in 15 minutes, I'm sure, will be doing something new and amazing somewhere else. We co-hosted the show together for about two years. Then Ryan, because of time commitments and other things, needed to move on.

Neil Lawrence, who had been a friend of the show for a long time, and who's now at Amazon after coming there from the University of Sheffield, is now my co-host. We've been running for four seasons. As you say, we talk about some pretty deep cuts. We get a little nerdy.

Most of our shows feature an interview with a researcher, someone who's working in the field as a practitioner, either in industry or in academia. We take listener questions, and we talk about some stuff that's bumping around that Neil is really interested in, that I'm really interested in. It's really a conversation about the reality of the research and trying to open that up to a larger audience.

RM: Four years ago when you started that, which was about the time that I started my AI newsletter, even here in Boston, which is a tech hub with a lot of AI, we were part of this small group of people--

KG: We were the two sad nerds who were like, we need to talk about this.

RM: --starting to take AI seriously, starting to go to small meetups and put stuff together. And it's amazing that was only four years ago, given that AI just exploded, a little bit in 2016, but then really in 2017. Now, it's everywhere and everybody does it and everybody's into it and all that kind of stuff. What's the biggest change that you've seen? Or how has the space changed? What have you noticed in the four years you've been doing this?

KG: I feel like the hype has always been there, and it's this weird flavor of hype where it's both super positive and super negative at the same time. I think that's because when the lay public, the listening audience, gets involved in the conversation, they feel like the information that they're getting is going to change their lives immediately, irrevocably, and in ways that they don't really understand.

The only other place we see this weirdness, this similar weirdness around the conversation is when we're talking about pharmaceutical research. Right? It's going to have an immediate impact. It's going to change my life. I don't really know how it's going to do it, but I know it's going to do it right now. Oh my god.

That has always been there. I think the thing that we've seen since 2017, like you say, is increased attention on what the field is actually doing. I think that's caused a reaction within the field to focus on interpretability, explainability, moving outside of the black box, and moving away from applications that are simply a technology toward ones that we expect to interact with humans and that we expect to be explainable, interpretable, and malleable. I think that's the thing that I've really seen change. And it seems like a positive direction that I'm really excited about.

RM: Let's pull on that hype thread for a second. Because this is one of the things, I never thought about it like biotech, but it is, a little bit. Right? The way you read one of these articles that says, hey, scientists did this thing, and now it looks like we might-- in 15 years, the drug will be on the market.

KG: When we all live on the moon, it will be awesome.

RM: What's interesting is that AI hype, I think, is very similar. A lot of these things are research papers where they did something really cool, because they have massive compute and a synthetic, cleaned-up, not-real-world data set. And then people wonder, oh my gosh, why aren't my everyday tools doing that? So let me ask you. I'll tell you mine after this. What's the most overhyped? What's your least favorite story?

KG: Oh, my least favorite story? Of actual stories or of people like completely fabricating things?

RM: Anything that made the news, that made the public consciousness?

KG: I think my least favorite stories are the ones that are completely shocked when they report something that I think the field is very used to and totally expects. There was a breathless piece recently about how a recognition system was totally stumped by an elephant when it was placed in the image of a living room. To me, that seems like, of course, you trained it on living room images. There are no elephants in living rooms. Of course, this brittle tool doesn't understand an elephant when you're looking at a living room setting.

But the piece was like, it had a cute title, like not being able to recognize the elephant in the room and all this other stuff. I think that just highlights this thing about the larger public conversation: there's a disconnect between the public's understanding of the foundational ideas here and the reality of the research. Right?

We're not actually talking about what's going on. We're talking about the froth on top. And that story just hit it on the head for me.

RM: Someday, the bots are going to be talking about their least favorite stories about humans.

KG: Exactly right. Totally. All that clip art of fingers pushing buttons. Who has fingers? What are fingers for?

RM: I can tell you my least favorite AI story of all time: the Facebook bots that they had to shut down because they developed their own language. I was like, oh my god. The media kept spinning it as this thing--

KG: They're learning to speak to each other.

RM: It's dangerous or whatever. You're like, no. It's because it was stupid, and it failed the point of the experiment.

KG: It developed a more efficient language based on the rules that you had given it. You had trained it into a niche.

RM: That's pretty interesting. Let's talk about Talking Machines and which episodes have been most popular, and do you see a consistent trend there? Have you learned going into an episode to say like, oh, I know this one's going to be bigger or worse or whatever?

KG: Consistently, the most popular episode is the one where we had Geoff Hinton, Yoshua Bengio, and Yann LeCun on all at the same time. Right. Yeah, of course. Like Elvis, the Big Bopper, and somebody else famous all at once. So that one has been very popular, I think, because people are just really interested in what those people have to say.

We find that our audience, which we understood to be heavily technical, has widened into a larger audience without such a technical background, but also with a pretty large contingent of people who are interested in hearing what people have to say for business intelligence.

The episodes where we talk about what people think are actionable pieces of research tend to be very popular. Also, the episodes where we talk about the intersection of AI and life science research tend to be incredibly popular, because I think those also feel like actionable areas. Those are sort of the big ones. Yeah. Then also the greatest-hits episode with Yann, Yoshua, and Geoff.

RM: I know you spend some of your time talking to business people about how to apply AI and how to get started. It's a big question we try to talk about on this podcast, because there are so many people, they're VPs of whatever somewhere, and they're just hearing all this stuff about AI and they just don't know where to start.

It's overwhelming, and it feels like magic and everything else. What's your advice for people who are just getting started? If you're not a technical person, how do you come into this space and avoid the snake oil? How do you figure out how it applies to your business? What advice do you give people?

KG: I think it's pretty foundational. Like you said, it feels like magic, but it's absolutely not. If you are a person who is entering this space and you're really relying on someone to help you build a technology or find a new way to bring this into your company, you should always be asking the question, how does it work? If they can't explain that to you, you need to find a new part of that team to interact with.

The ideas here are pretty fundamental. The team of people that you're working with should be able to find a way to clearly communicate how these things are operating to you, because what if something goes wrong? You're going to need to find either them or someone else to fix that for you. Having a good working knowledge and having a team that's going to be able to provide you that good working knowledge is really fundamental.

Now, how do you find that team? How do you think about these things? That, basically, is the next step. I think that just really engaging people in conversation is a really good way to suss out the snake oil and try to avoid it, because there is a lot of it, and it does feel like it's amazing.

Start by immersing yourself in the language and being able to parse the big terms in a precise way that's informed by the technicality of it, though it doesn't have to be 100%. Knowing the difference between machine learning and deep learning. Knowing about artificial intelligence in general. Knowing where that phrase comes from. Knowing that clip art of robots shaped like humans doesn't really have a whole lot to do with the field of artificial intelligence is a really good start. Being willing to learn the language, being willing to ask questions, is really the best, most actionable advice I can give.

RM: You touched on explainability a couple of times. We've had some guests on here who have hinted that that's one of the things they feel is holding AI back in a lot of ways. For example, the use case you hear a lot is that you come up with a credit-scoring model that's neural-network-based and better than anything else you have. Now, you're required, under GDPR and things like that, to be able to explain why you made the credit decision that you did, and maybe the model can't.

Therefore, maybe you can't use the model. Even if you're in the United States where maybe that doesn't apply and you wanted to use it, does it open you up to some kind of lawsuit?

What's your view on that? Are we solving that problem faster and faster? Or, is it going to hold AI back for a while in terms of applied AI?

KG: I think that explainability is incredibly necessary. Right? If you've never seen a knife before, you order a knife. It comes with instructions. But, the instructions are in a totally different language. You don't actually know how to use that knife or interact with it in a way that's not going to hurt you.

I think explainability is absolutely necessary. We're seeing a lot of focus on it. And I think as it becomes more important, and as it becomes something that we cannot avoid, we're going to get towards usable functional solutions around it faster, which I think is absolutely necessary.

RM: Have you had any guests on, or have you played in this space, around the United States versus China and AI policy? I ask because this is one of the things I personally have received the most split feedback on. Right?

There are people who say, oh my gosh, AlphaGo was the Sputnik moment for China. They decided, we're going all in on this. We're going to have a huge national policy. They have passed us now in AI research papers and some of that kind of stuff.

With the current climate around immigration and stuff like that, if the best and brightest from other countries come here and get trained in AI, we effectively send them back to their countries. Right? They're going to compete with us. They're going to create companies and everything else.

One school of thought says, oh my gosh, China is moving forward. They have a national policy. They've committed a lot of money to this. There's another school of thought that says, well, the way things work in China is that there's a lot of social pressure to maintain the status quo and to prop up bad companies even when they're not doing well. A lot of their research is just marginal stuff. It's quantity over quality, copying American things and all that kind of stuff.

From people who have deep Chinese roots, I have heard very conflicting reports of whether they see China as a real threat to the US in terms of AI dominance. So two questions for you. Number one, do you have an opinion on that or know much about it? Number two, whether China is or not, do you think we are moving towards this is a new kind of Cold War with AI at the center?

KG: That's really interesting. I mean, I've followed the unfolding of this just as much as it's been in the public conversation, and I don't have any sort of area expertise on Asia's focus on artificial intelligence. It's really interesting that you mention Sputnik, right, because I believe it was the launch of Sputnik that inspired the US to start ARPA, which became DARPA. I think that's right. Because we were surprised. Right?

We were surprised by this technology. And when you're surprised and you have a reactionary reaction, there's a lot of activity. Right? Whether or not that activity remains at that pitch and spins out meaningful stuff, I think only time will tell.

RM: Do you think this is going to lead to a Cold War? Do you think AI is the kind of technology that is going to lead to that kind of political and economic confrontation down the road?

KG: You know, I don't know. We'll have to see. I mean, we've had things like AI winters. We've had intense focus. We've had intense displeasure, or moving away from the field. But I think we're just going to have to see it spin out.

RM: When you look around the AI ecosystem, particularly focusing on where some of these technologies are starting to get applied, do you see any really common mistakes, or any really big opportunities that you feel like people are missing? Like, I really wish somebody would go after this, I don't know why nobody's working on this. Is there anything that jumps to the top of your mind there?

KG: I think that people are really hungry for huge data lakes, huge pools of unclean, unsorted information. And so every time I hear about a new source of data exhaust, I think, oh, someone's going to descend on this in three to six months and just come up with a way to clean it and hopefully find out something latent about the relationships that are swimming around in there.

Nothing in particular surfaces as something I want to see take place, though. I'm particularly excited when I hear new things about the intersection of life science and artificial intelligence. I think there's just a lot of low-hanging fruit that's going to have real impact for lots of humans, and I'm really excited to see how the science unfolds.

RM: I definitely agree with that. As an active angel investor, I've been out of the healthcare space for a long time, because I think there are a lot of complications in the space.

Particularly with AI and the need for data, you have these privacy issues in healthcare a lot. But at the same time, the benefits are so powerful. And some of the studies I've seen, and even some of the practical applications I've seen, are pretty impressive. There's been a lot of news made about, for example, a high-profile case earlier this year, I forget the hospital now, of a big failure by IBM Watson to deliver on what they had promised. But at the same time, a lot of the people that I know at IBM, and some of the work they've done, they've had a lot of successes as well that haven't been as widely reported.

Among the people that you talk to and the companies that you deal with, do you think people are setting their AI projects up for success? When these projects fail, is it because the expectations were wrong? Is it because the project was set up wrong? You had the wrong people? What's your view on where companies are going wrong with these things when they do fail?

KG: I think it goes back to the same sort of gap that we have in the public understanding. Right? We're having two conversations that are running concurrently with each other, and they don't converge a lot.

I think it has a lot to do, as we've seen historically, with the difference between the reality of what can be achieved and expectations. I think if we spent more talk time talking about the reality of what we are working towards and really trying to find a common understanding of what we're going to get out of these tools, we wouldn't have so many disappointments.

RM: What's your advice to somebody who's young, maybe a college student, regardless of what they're studying, who's listening to this and thinking, I want to go into AI. I want to go work in the field.

Where should they start? Where should they be looking for jobs? Should they brand themselves as an AI expert? Should they just go into their field and study AI? Should they go work at Google? What would you advise them to do?

KG: I would do some soul-searching about the thing that you are really passionate about. If you are passionate about questions in machine learning and building tools that use machine learning and are enabled by machine learning, then absolutely pursue that. If you are passionate about questions in biology or other areas, but you want to be involved in machine learning, be an expert in the thing that you're passionate about. Find collaborators who are passionate about their expertise and work with them to create something that's going to be really amazing.

If you are going to be involved in this field, it moves very quickly. It requires a lot. You have to be dedicated to your questions and dedicated to your mission, and you've got to be really passionate about your core questions there in order to have enough fuel to sustain you. So I would say think about the thing that you really want to do, and then find the people who have expertise to fill in to help you get there. 

RM: You commented a little bit on the fact that this field changes quickly, and made some earlier comments about data exhaust and data lakes and those kinds of things. Where do you think we're going with big data and AI? Do you see a time coming pretty soon-- I know you've had guests on your show that do things like probabilistic programming and one-shot learning, some of these things that are meant to deal with the problem of: “I don't have a massive data set”.

There are lots of companies out there that would love to do things. They don't have massive data sets to train neural networks on. How do you feel? How would you handicap small data AI going forward? Are we close? Are we far away? Is it targeted? What do you think?

KG: I am really excited about small data. I think that more and more people are focusing on it. And where you have an area of focus, you tend to see a lot of growth.

Because people are really excited about the power of discovering these latent relationships, and about the way these tools can lead us to those discoveries, drawing those relationships and discoveries from less information is really an area where we're going to see a lot of people working.

As you said, lots of people who have lots of questions don't have huge amounts of information to draw from. And sometimes, that information is unintelligible or even unusable. So how can we surface those relationships from smaller groupings, smaller sets of data points? I think we're going to see a lot of exciting stuff happen there.

RM: Last question, and we ask this of almost all our guests. There was a very public spat last year between Elon Musk and Mark Zuckerberg about whether we should be worried about killer AI or whether that worry is misplaced. Do you have an opinion on this debate? And if so, what is it?

KG: Oh my god. Do I have an opinion on the hype? My real opinion is that the hype is detrimental and keeps us from talking about the real things that are happening in the field.

I have my favorite little Andrew Ng quote that I carry around in my back pocket: I don't worry about starvation on Mars, because there are no people on Mars to starve. Right? We have so many other questions to answer. Like, who's part of the Mars colonization program? How are we going to get there? What kind of houses are we going to build? What kinds of crops are we going to grow, and which are going to fail? Right? So I'm not worried about starvation on Mars, because we have so many other problems to think about first.

RM: All right, good answer. Katherine Gorman, thank you for coming on the show.

KG: Thank you, Rob.

RM: For those of you listening, if you have suggested guests or suggested questions for our guests, please send those to podcast@talla.com, and we'll see you next week. Thanks for listening.

Subscribe to AI at Work on iTunes or Google Play and share with your network! If you have feedback or questions, we'd love to hear from you at podcast@talla.com or tweet at us @talllainc.