Episode 12: Being an AI-Driven Leader

Hosts Rob May and Brooke Torres interview Tom Davenport, President’s Distinguished Professor of Information Technology and Management at Babson College, Fellow of the MIT Center for Digital Business, and an independent senior advisor to Deloitte Analytics. Tune in to learn what it means to be an AI-driven leader.

Subscribe to AI at Work on iTunes or Google Play


Rob May, CEO and Co-Founder, Talla



Brooke Torres, Director of Marketing, Talla



Tom Davenport, Distinguished Professor, Babson College




Episode Transcription 

Rob May: Hi everyone and welcome to the latest edition of the AI at Work Podcast. I’m Rob May, the Co-founder and CEO here at Talla. I'm here with Brooke Torres, my co-host. Today, we have Professor Tom Davenport. He is a distinguished professor of information technology and management at Babson College just outside of Boston here, a fellow of the MIT Center for Digital Business, and has a bunch of other roles. Tom, welcome. Why don't you introduce yourself and talk a little bit about your background?

Tom Davenport: Thanks, it's great to be here. I've been around for a while in the enterprise IT space. I won't go through all the generations. I did business process reengineering and ERP and how companies get value from that, and then went into knowledge management for a decade or so. Over the last 15 or 20 years, I've been doing analytics and big data work. Nobody is quite as interested in those topics anymore. They're only interested in AI, even though AI and analytics and big data have a remarkable amount in common. Nobody ever asks me to talk about those subjects.

About four years ago I started doing research on AI and how it's being used in organizations. I would normally have written the first book on AI in the enterprise, since my shtick is, sort of, something in the enterprise, but there was no AI in the enterprise four years ago. I did a book called Only Humans Need Apply with Julia Kirby on what it means for jobs and skills. Then I'm coming out with an AI-in-the-enterprise book very soon.

RM: I'm really interested that you've been through several waves of technology here, and you've done a better job than most people of adapting in your field to the latest trends. I guess, two-part question for you-- is that a personal thing? Is that part of being at Babson, which is very entrepreneurial and they sort of encourage that? Then, how do you handle digging into some of these new technologies? When you moved into AI, in particular, was it a pretty easy leap from big data and analytics or did you read a lot? Do you talk to people? How do you make the jump and really learn about the new thing?

TD: It's not necessarily a Babson thing, although Babson is a little bit different from many business schools, maybe because entrepreneurship is our primary focus. Entrepreneurship is not considered the most rigorous subject in business schools. It's not like economics or finance or something like that. They like things that are relevant rather than just things that are rigorous. This is, to me, the tragedy of business schools: they emphasize rigor over relevance. Nobody reads their work, for the most part. To be relevant, you've got to keep moving, particularly if you're in information technology. You know, what's interesting to people changes very quickly. My general research approach is to get out and talk to people a lot. I don't sit in my office very much. I do lots of interviews and surveys, to some degree, particularly with some of the big consulting firms that I've worked with, like Deloitte now. I call people up and say, “How are you using this stuff?” And, “What do you think about it, and how is it working for you?” It's a very rigorous research method (laughs). I do talk to a lot of people.

RM: In your early conversations digging into this space, did you find it to be a little more confusing as to what was real and what wasn't, from the perspective of people that are trying to adopt and deploy this technology? I know one of the things that we hear a lot is that the news media over-hypes what the potential really is, while at the same time maybe under-hyping what some of the actual applied potential is inside of companies. Did you find that to be true?

TD: A lot of that had to do, frankly, with Watson. I mean, when I started, it was all about Watson. IBM was pushing that technology in a huge way. I tried to play. It turns out they were not particularly interested in having me go to their customers and see what was really happening. Now we know why, because what was happening wasn't that good.

I have been teaching an MBA course at Babson for three years now called Cognitive Technologies. It started out being Cognitive Technologies and IBM's Watson, because they were going to help, and so on. It turns out they didn't help much at all, even though the head of Watson Health is a Babson MBA. Over time, the world has moved in a very different direction from Watson. It's hard to find anybody who has anything good to say about it in large enterprises these days, which may be just as unfair as the totally positive stuff that their marketing people were putting out. But yeah, I think there's a lot of confusion about what's included in AI.

Years ago, maybe 2003 or something, I wrote a piece when I was at Accenture on decision-making technology, so very focused on rules and so on. Everybody thinks that's dead, but it turns out rules are quite alive. In the surveys that I've been doing, 50% of large US companies say, "Yeah, we have rule-based systems up and running." The old stuff doesn't necessarily go away. I think it's very confusing for people. Certainly, the press hasn't helped much at all, as you suggest.

Brooke Torres: What about in other recent writing that you've done? I know you did this AI-Driven Leadership piece in the MIT Sloan Management Review. Talk a little bit about that. I think that's really interesting and applicable for our listeners.

TD: Well, I think AI isn't different, in that sense, from other technologies, in that the likelihood that your senior executives are going to embrace it and act on it is probably the single most important factor in how quickly you get moving. I think there are, as we were saying, more barriers to really fully understanding it, so it takes a little bit more work for senior executives to get involved.

What we did in that article was, you hear a lot of companies saying, "Oh, we want to be AI-first" and so on. We talked about, what would an executive of an AI-first company really look like? It's things like, well of course, you'd need to know something about the different technologies and what they do. You need to be clear about what you want to accomplish with AI in your business.

The ambition one is a really interesting one for me. Most of the press accounts are these breathless transformational moonshot stories. Many of those were Watson-based, and many of them did not work out very well. What's not as widely publicized is the low-hanging fruit, every day: make this decision a little better; make this process more efficient and effective. AI is good at that. In fact, I think that's really the only thing it's good for, just because it tends to be very task-based and not entire-process or even job-based. I think it means understanding that, having the right level of ambition, and saying, “Oh yeah, we can eventually transform our companies, but it's probably going to be through a series of these less ambitious projects in the same area.” A lot of companies are in pilot mode, proof-of-concept mode, prototype mode. I think having a clear set of criteria for when you go into production deployment is very critical for AI-first executives.

Getting your people ready-- this is different from other technologies, I think, in that it has more implications for your workforce. I think letting them know that and letting them know how they can add value is very, very critical. There's an important data piece, owning the data. I think, Rob, you wrote about this on your blog this past weekend, which I certainly agree with. And there's, I think, a piece of how the organization works together on it too. Those are some of the things that we put in that piece.

RM: One of the things we've seen is that for a lot of employees, there are really two big workflow behavior changes that have to happen, depending on the type of AI you're doing. One is that sometimes you have to have a training period, or ongoing training, with your software.

People don't always realize that. We actually had a discussion with an early customer here, where the head of a department was complaining about, you know, I don't know if my people would spend any time training Talla. I don't know that they would want to tag this or label that or answer a question that Talla might pop up and ask. He's like, “Even 10 minutes a week would be a lot.” And I said, “Well, how long does it take you to train a new employee?” And he was like, “Well, it takes about two weeks… oh, oh, oh. I get it.” So you actually save a lot of training time, I think. One of the things we encourage people to do -- and I encourage other AI companies to do this -- is try to build training into their standard workflows. Make it as easy as possible. Then there's this idea that, because a lot of these outputs are not black-or-white and rule-based but probabilistic, employees don't know what to do with them.

TD: People don't think probabilistically in our society, sadly.

RM: I think that's going to be one of the biggest things that's going to change as you roll some of this out.

TD: I think it's the probabilistic versions of AI that are doing the best right now. The deterministic ones are having more problems. My sense is there's room for both. My guess is, over time, we'll go a little bit back toward the deterministic stuff. People are starting to complain, as I'm sure you hear, that deep learning is reaching its peak, and we need to have more common sense in these systems, which I think will swing us a little bit back in the deterministic direction, toward rules and so on. But that whole area has problems too.

That was why the last generation of AI really died out. But yeah, it definitely means changes in the workforce. And even having people come to realize that these systems are going to be their colleagues for the rest of their careers, I think, is a tough thing for a lot of people to grasp.

BT: Speaking of the future, what actual applications of AI excite you? In particular, I'm interested in what you think about knowledge management and AI, since those are two of the big pieces of your background.

TD: I gave up on knowledge management. One, because it was the Great Recession, and a lot of the companies I was working with were giving up on it. They were less interested in supporting my research center at Babson on it. Two, I really thought that the one key future of knowledge management was knowledge derived from data, analytics, and so on. And most knowledge management people were not that interested in that. They were only focused on textual knowledge or, in some cases, even more implicit kinds of knowledge than that.

I think it's really good, now, that AI has started to prosper. And I think that combination of probabilistic orientation-- a lot of analytics-- and knowledge management starts to give people knowledge that is more targeted at them and likely to solve their particular problem. I should go back and start doing some more writing on how AI and knowledge management are being combined. But my brain sort of gets in one particular mode for a while, and it's hard for me to relate it to everything that I've done in the past. I will go back to it at some point.

RM: When we started Talla, one of the premises was that modern AI methods around natural language processing were going to make unstructured text computable for the first time, at a very basic level, and that you could do some stuff with that. And I think we've started to see a little bit of a resurgence. Because our approach was, if you think about it, we don't write many things in plain text. If you're going to write something for human consumption, you'll put it in Microsoft Word instead of plain text so that you can bold it, italicize it, make it easier for humans to read.

We'll write things in HTML to make them more readable for browsers. And so the idea was, can we build a knowledge base and bot combo for knowledge management applications that allows you to add some structure that makes content more readable by machine learning models? Because one of the problems with Watson and Lex and Wit.ai and some of these early NLP tools was that they were trained on broad swaths of consumer data.

Watson, IBM being so business-focused, they really were the first to move beyond that and start trying to do more with business vernacular. But that's been our approach as well, which is, if I can get you to start to tag the top 20 or 30 entities in your organization, it makes a big difference in the quality of the NLP models that we can put out. And so when you're querying things, when you're looking for things, when you're trying to automate knowledge delivery and all that kind of stuff, we can do a much better job.

TD: I've certainly heard that from other companies that I've talked to. I think part of the problem with Watson, from the beginning, was that they kind of implied that all that important taxonomic work wasn't really necessary, that you could just grab a bunch of articles and have Watson ingest them as if it's eating them for its dinner or something like that. It would make sense of all that language.

Many companies that I've talked to have been frustrated by the fact that they still have to do a lot of taxonomic work. They weren't prepared for it. It takes a lot longer than they thought. But I think having the people who do the work participate in that process is all for the good. It gets them much more oriented to how these tools really think and how they might augment their own capabilities.

RM: You have a book coming out on AI. Tell us a little about what prompted you to write the book. Who's it targeted to? What's the gist of the story?

TD: My usual pattern, as I was saying, is to write a book-- it should've been called "AI at Work," I guess. Because I wrote Analytics at Work, Big Data at Work. I had a book called Working Knowledge, which is really knowledge at work. But fortunately, my editor was a little bit more creative. And I think there's "at work" somewhere in the subtitle. But it's called The AI Advantage. It's MIT Press.

It's about how enterprises are using AI right now. It's not incredibly futuristic. It's not a downer of a book. But I also don't think anybody is going to say it overpromises what AI can do. It starts with three examples of organizations. A couple were Watson-based. MD Anderson, the famous cancer center project that went kaput after $62 million worth of spending -- which, I figured out this weekend, most of the money for that came from this Malaysian guy who's being pursued as a crook. I don't really know enough to know whether he really is a crook, but a guy named Jho Low, I think. He gave the money for that big MD Anderson project, which went nowhere. And then there was a bank in Singapore called DBS that was trying to use Watson for a robo-advisor system that didn't really work.

Then I talk about Amazon, which everybody, I think, would say, if Amazon has trouble doing moon shots, then maybe the rest of us shouldn't even be trying. And I talk about Amazon Go and what little you can read about the automated drone experiment. But Jeff Bezos very clearly says the bulk of this technology is going toward day-to-day invisible operational improvements and things like product recommendations and fraud prevention and all that sort of thing.

That's what seems to work really well. I think, over time, that will add up to a huge amount of improvement in productivity and business capability. But we somehow only want to look at the moon shots. Another great example-- I don't spend a huge amount of time talking about it but-- the whole autonomous vehicle thing, which everybody seems to think is just around the corner. At MIT recently, I was talking with this woman from Carnegie Mellon, who's head of machine learning. And she's going to go to work for JPMorgan Chase, probably just started as head of machine learning there now.

She came to Carnegie Mellon in 1983, I think. And she said people, then, were saying, autonomous vehicles-- right around the corner. And we're still saying that. And then in the Wall Street Journal this weekend, there was a quite good article by Christopher Mims saying it's going to take a really long time for fully autonomous vehicles to appear in large numbers.

RM: I'm actually an investor against the fully autonomous vehicle thesis. I have an investment in a company called Scotty Labs. They licensed a telepresence stack, and basically their idea is that truck drivers can sit in an office somewhere. You navigate the truck from the dock to the interstate. Then you put it on the interstate and put it on autopilot.

That's level 3 autonomy. That's solved, right? Then when it comes off, they see a flashing light, pick it up with a VR headset, and navigate it to its final destination. And it saves you the challenge of having to have a driver in the truck -- you know, you're going 300 miles, and 14 of those miles are off the interstate. It's like, we might as well just let the truck do what it can do.

TD: I wouldn't be quite as optimistic as you are that that problem is solved. I have a Tesla that supposedly can run on autopilot, but it still has its challenges, I would say. I agree, in general, we're going to see lots of those kinds of hybrids of human capability and machine capability. That's probably a good idea for investing in.

BT: What else do you think is realistic coming up that hasn't been done? Or do you feel like the market is pretty flooded with people taking advantage of opportunities?

TD: I used to maintain a running list of all the use cases that I saw. I got into the several hundreds and finally said, this is crazy. Basically, almost anything any human has ever done, there is somebody trying to do it with AI.

I think that's the great thing about it. If you take it at the task level -- I mean, my conclusion from the Jeopardy stuff and from AlphaGo and so on was that if we set out to do any particular task with a machine, we can probably do it. It may take a lot of time and effort, but it's still only a task. Then we have to figure out how to do the next one, and so on.

I don't know of anything that we really ought to be doing that we're not, except that it's only in pilot mode in most cases. Even big companies that could afford to do it are still screwing around with proofs of concept and so on. Because full deployment means a lot of change in the job, a lot of change in the process, a lot of change in the underlying systems. That part is really expensive and time-consuming.

RM: We were talking before the podcast started about something related to that. Do you think companies that are late to adopt AI are going to find themselves behind? That it's not like other technologies, where even if you deploy two or three years late -- the fast-follower idea -- you're up to par with everybody else once you finally deploy? Can you ever catch up to par, or is this technology fundamentally different from that?

TD: I think, for the most part, it's different. I think there are some things that it can be helpful to wait on. I mean, if you're the first in your industry to try to develop a taxonomy for a particular key process, you're probably better off having somebody else pay for that. Then you can get it later.

In general, I think the learning that is required on the part of the systems, on the part of the people, the data that you have to accumulate-- I just don't see it as a fast follower kind of technology, for the most part.

BT: Before we wrap up, we like to ask where you are on the spectrum of Elon Musk versus Mark Zuckerberg -- sort of fear versus excitement about the future of AI.

TD: I have no fear. I guess I'm closer to Zuckerberg, although I admire Elon Musk a lot. It always puzzled me why he and a number of other people-- the late Stephen Hawking and Bill Gates-- all these people are so petrified of what AI is going to do to us. I just don't see it. I don't see it happening anytime soon.

I mean, it seems to me, when they start doing dastardly things, we'll have a number of years to think about that ahead of time. And since we can't even envision how this might happen now, I don't see any point in worrying about it and focusing on it. I try to be positive about a lot of things. I don't worry about things that haven't happened yet.

I do think that in the weapons space we're going to have to have some safeguards. Some people are going to have to think a fair amount about the ethics of AI weaponry and so on. For business purposes, I don't really see a big problem anytime soon.

RM: Thanks, Tom, for being on the show this week. For all the listeners, if you have questions that you want us to ask some of the guests, or if you have guests you would like us to invite to the show, you can send those to podcast@talla.com. As always, if you run a sales or support team and you're interested in this idea of automating the knowledge management around the things that they do and say, and some of their processes, using artificial intelligence, please check us out. Happy to set up a demo or give you a trial.

Subscribe to AI at Work on iTunes or Google Play and share with your network! If you have feedback or questions, we'd love to hear from you at podcast@talla.com or tweet at us @tallainc.