Episode 34: Best of AI at Work: Part 2

Check out a roundup of some of the best advice that we have gotten so far from guests on the show.

Subscribe to AI at Work on iTunes, Google Play, Stitcher, SoundCloud, or Spotify



Jeetu Patel, Chief Product Officer, Box


James Cham, Partner, Bloomberg Beta


Vishal Sunak, CEO and Co-Founder, LinkSquares


Gabi Zijderveld, CMO, Affectiva


Episode Transcription   

 

Rob May: You're a product guy, and you're in the middle of Silicon Valley at one of the sort of key Silicon Valley anchor companies talking about AI. And so do you have a framework that you use or the way that you think about-- let's say I'm an executive at, I'm in an insurance company, I'm in an e-commerce company or whatever. I'm not that technical. And I'm trying to figure out, how do I evaluate AI as a strategic advantage or differentiator in my business?

Do you have a framework for how that person should think about it-- say, oh, Box should do that for AI, and I should do this myself? Do you have advice for people about when they should outsource AI to the tools and providers they use, and when they should do it in-house?

Jeetu Patel: Yeah. We have a pretty solid point of view on that. What you have to do as a business is recognize that the half-life of business models, of products, and of businesses in general is shrinking quite dramatically, given the velocity with which they move. Every seven to eight years, you see a business model completely reset within a company. And as a result, every seven to eight years, you as a company have a choice of disrupting yourself and moving to the next wave, or being disrupted and becoming an irrelevant company. Along the same lines, by the way, the half-life of a product is going down even faster-- within 18 months a product gets dated. You literally can't take three years to build a product and ship it, because by the time you shipped it, it would be dated.

Then you ask yourself, well, if your half-life is shrinking, what do we as a company need to do to make sure we can succeed at the velocity the market is moving at? And the way you do that is by being very disciplined about what you pursue yourself versus what you choose to partner for. The framework we typically use is: anything that is meaningfully core to your business, where you would differentiate your offering based on that capability, you want to make sure you build yourself, and you maintain that IP in the process. But anything that is an essential component but not core to your differentiation, you might not necessarily want to build yourself-- instead, go to someone whose core business it is to do that thing and partner with them.

We do that a lot on our side-- things that are not core to us, we partner for-- and we actually encourage our customers to do the same. I'll give you an example. If content management is not core to someone's business but it is core to Box's business, you should not, as a financial services company, think about building a content management system. You should go use someone like Box for that piece of it and then focus on doing what you do best, which is serve your customer, issue a loan, write an insurance policy, whatever it might be. That's your core, and you should focus on your core and partner for the rest.

We think that formula works at every step of the value chain. We do the same thing. Our suppliers do the same thing. Their suppliers do the same thing. And what that does is create an era of specialization and openness in an ecosystem, where the ecosystem just interoperates with one another. It's a different way to operate in this new economy compared to the way it used to be, where you'd have a vendor or a company deciding they want to go out and build the entire stack from top to bottom-- and that's just not how the world is going to work in the future. So you have to be extremely adept at partnering and at knowing what's core versus what's not core.

RM: Really interesting. So, think about people who are considering adopting AI now, and they're looking for their vendors to start doing some of this. How does a new person coming into the space educate themselves in a market where we're still trying to sort out-- maybe less so with Box, but particularly with a lot of start-ups-- what's the snake oil and what's the real stuff, what really works, and all that kind of stuff?

Two-part question here-- how do you get smart about AI if you're not an AI person? Are there frameworks you can use or resources you can look to? And then number two, can you wait? Can you wait for the market to mature, or is this technology moving so fast that you have to jump in and find ways to embrace it no matter what business you're in?

JP: Both great questions. Let me answer the first one this way: before you get smart about a technology area, I think what's really important is to get extremely smart about the problems you want to spend your time solving. The reality is, every company eventually is going to have some kind of technology differentiator, and they're going to be a tech company-- a digital company of some sort. Whether you look at a financial services company, a retail company, an insurance company, a health care company, or a life sciences company, they're all operating in a way that's much more digital today than they used to.

The business processes need to get more digital. The way you operate with your people needs to get more digital. But the thing that's most important is: don't start from the technology and work outwards to find problems. Start from really important problems that are going to be central to defining yourself as a unique provider of value, and then work backwards from that problem. Once you've identified what that problem is-- one of the things we talk about over here is that we're obsessed with building painkillers, not vitamins. Nice-to-have technology is fine, but it's really important that you build technology that's critical, that alters things and makes a difference. Do something different so that you can make a difference.

JP: So, in that vein, what we tend to do is identify the different areas where problems need to get solved within your domain. You're going to be the best judge of which of those problems are the most potent. Go validate whether they are in fact problems for which people are willing to part with their money, regardless of the industry you're in. And then work backwards and say, if I want to do this in a very effective way, at scale, efficiently, and in a discontinuous manner where the solution I build is going to be 10x better than the solution that currently exists-- that's the only time people are going to move over to you. They don't move over when you have a 20% better solution.

AI can be a pretty big contributor to that: what can AI do to provide you a 10x differential rather than a 20% differential? That's the way we think about it over here, at least. We try not to get into things that other people could replicate just as easily. We try to get into things where we have unique value and a perspective-- where if we didn't build it, the world would look different than if we did. Then, in those areas where we choose to build, we want to make sure we are laser focused on using technology that's actually going to provide us a tailwind and propel us forward in a much more accelerated fashion.

RM: Interesting. That's a great answer. So, why aren't people doing more on the ground, right? Let's talk about that. You know, because you've been in software for a long time, as have I, that best practice used to be: wait. Right? Wait to adopt a thing. Adopt version two-- particularly with packaged software-- once the bugs have been worked out and all that kind of stuff. And then with SaaS, you could kind of adopt it whenever. Can you do that with AI? Can you wait? Or are you behind because of the learning nature of it? Or does it matter? If you have “insurance company A” and “insurance company B”, and one adopts AI for many of its core business operations three years earlier, can the second company ever catch up?

JP: I mean, I think that, of course, it depends. That's true. At the same time, there's this realization that the people who figure out how to apply machine learning in the right ways, and sooner, are going to be the ones that win. And I think there's a little bit of an overemphasis on the importance of data. Data is critical, but more often than not, data in most companies is bad. It's captured for some reason different from the models you're building. So in most companies, you think you have all this data-- you really don't. Right? There is this entirely new data collection process that needs to be introduced.

There's clearly a learning curve there. The real learning curve that I think is poorly understood is when to trust a machine and when not to. All those judgments around when to automate and when not to, when to bump issues up to people and when not to-- my guess is that the companies that are most aggressive and smart about that are going to be the ones that actually win. There'll be some insurance company that will say, you know what, we can actually skip three steps now, and as a result our cost to acquire changes fundamentally, so we can do more Super Bowl ads, or more search ads, or whatever they do. Their core economics change-- not because they have better data, but because they understand where their models are good enough to trust, and as a result of trusting them, they can automate in different ways and fundamentally change their economics. I think that's the actual big opportunity.
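
To make that "when to trust the model" judgment concrete, here is a minimal, illustrative sketch, not something discussed on the show: route a case to full automation only when a classifier's confidence clears a threshold, and bump it up to a person otherwise. The function name, the 0.95 threshold, and the scikit-learn-style predict_proba interface are all assumptions for illustration.

```python
# Illustrative sketch: automate only when the model is confident enough,
# otherwise escalate to a human reviewer. All names and the threshold are
# hypothetical; any classifier exposing predict_proba would work here.

def route_case(model, features, auto_threshold=0.95):
    """Return ("auto", label) when the model is trusted to act on its own,
    or ("human", None) when the case should be bumped up to a person."""
    probabilities = model.predict_proba([features])[0]
    label = int(probabilities.argmax())
    confidence = probabilities[label]
    if confidence >= auto_threshold:
        return "auto", label      # skip the manual step entirely
    return "human", None          # low confidence: a person decides

```

Tuning that threshold for each step of a workflow is one concrete form of the "where are our models good enough" question described above.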

RM: Yeah. We've seen, as I think many other AI companies have seen, that the workflow behavior change piece is hard, right? Because you're no longer just giving an output, you're giving a probabilistic output, and people need to know how to use it, what to do with it, and how to understand it. It's like a weather report, right? Even if it's 95% one way, you can take the advice of the model and still have a bad beat.

James Cham: That's right. And the other part of it-- it's worse than that, right? It's worse than that in the sense that you take some person who's a CEO: she's super smart, and she thinks she understands her company. But of course, she doesn't, because all the actual work that's happening on the ground is poorly understood. And so her ability, or her VPs', or her directors', or her managers' ability to actually change things is surprisingly limited, because they don't really understand what's happening at the ground level. So there are plenty of opportunities either for models to replace people or to automate things that, to be honest, are only understood by the people on the ground-- and they've got no incentive to tell anyone. Did you read the Brian Merchant article in The Atlantic about coders who coded themselves out of a job?

RM: No, I did not. It sounds interesting.

JC: Okay, so this is maybe my current favorite business-related puzzle: in most big companies, it is the people on the ground who actually understand the opportunity for automation. But the way we're set up, they have no incentive to tell anyone. Because if someone really knows they could write a little script that replaced 60% of their job, why would they tell anyone? If they told anyone, they might get fired or, even worse, they might get bumped up to middle management. And it's not clear to me which one is worse, but they're both pretty bad.

So there's no good answer, and it's not a cultural thing, because it's a pretty straightforward economic question, right? What incentive do you have for telling everyone else that, actually, part of my job really should be replaced by a computer? I've not seen anyone solve that. There are slightly kooky ideas, like something similar to the patent model, where the patent is created so that you have a monopoly for ten years: you automate your job, then you go on vacation, and occasionally you come in and retrain the model. But you should be compensated for that, right? So I think that's an interesting puzzle.

RM: That's a very interesting puzzle. Yeah. I think companies definitely have to figure out how to solve that. There are a lot of executives struggling with how to adopt AI, and you're seeing it adopted in places where you just have the most forward-thinking executive who's irrationally committed to it. But companies aren't going through and adopting it with any kind of economic sense-- in the sense of, hey, this is where I could have the biggest business impact based on the data we have and what it could do.

JC: That's right. We were talking earlier about Danny Kahneman, who lives a few miles from here and is a genius and all that. He spoke at an AI and economics conference a few years ago and said something along the lines of-- we'll have to find the actual quote-- but something along the lines of how he felt like his entire life had been a waste, because his whole life has been spent studying the ways we make systematic mistakes: we have loss aversion or whatever, right?

And he'd say, that's not the problem for most businesses and people. The problem for most businesses and people is that we're random, and we're noisy, and we make decisions based on whether we have a tummy ache or not, right? That is the actual problem in most businesses in most parts of the world. And to be honest, that's the part where machine learning and models can help in an incredibly straightforward way. But there is no economic incentive right now to figure out how to do that in a clear way, because you're giving away what you think of as the thing you're best at, which is that judgment piece.

RM: But people have to move to becoming trainers, right? This is where I think it's going to go. If you look at the trends in enterprise software user interface design, you went from, it doesn't matter, it's going to run on a computer, to, it's got to run in a browser now and be a little more user friendly, to the consumerization of IT-- I need it to feel a little social, it needs a feed, I need to have an avatar and tags and whatever. And I think the next wave you're going to see is really a wave around UX and UI design such that everything you do is a piece of feedback to the machine. It's going to capture all this data passively-- and what's hard is you have to do that in a way that doesn't feel like a lot of extra work.
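
One way to read that "everything is feedback" idea in practice is to log ordinary user actions on a model's suggestions as implicit training labels, so no extra annotation work is asked of the user. The sketch below is purely hypothetical; the function and field names are not from the episode.

```python
# Hypothetical sketch: record whether a user accepted, edited, or ignored a
# model suggestion, so everyday usage doubles as passively collected labels.

import json
import time

def log_suggestion_feedback(log_path, suggestion_id, model_output, user_action,
                            corrected_text=None):
    """Append one feedback event (a weak label) to a JSON-lines log.

    user_action is "accepted", "edited", or "ignored"; corrected_text holds
    the user's version when they edited the suggestion.
    """
    event = {
        "timestamp": time.time(),
        "suggestion_id": suggestion_id,
        "model_output": model_output,
        "user_action": user_action,        # the implicit label
        "corrected_text": corrected_text,  # strongest signal when present
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")
```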

JC: You know what? That's super insightful. I think one of the problems with most machine learning annotation products right now-- and obviously annotation is a really important part of this-- is that they're really designed for people who make about $10 an hour, right? The real opportunity is in human-in-the-loop user interfaces for people who make $100 or $200 an hour, and figuring out ways to create good experiences for them, where it makes sense for them-- that's another one of those great puzzles. If you figure that out, you've done a lot of good for the world.

RM: Yeah. Then you have to figure out how to sell it.

RM: So, let's talk about a couple of other things. What's your advice-- a lot of people who listen to this podcast find it and start listening because they're trying to figure out where to get started with AI. Say you're a non-technical or semi-technical executive in an industry that's a bit of a technology laggard, and you start to think, wow, we need to be thinking about AI. Where do you start? There's so much BS. How do you cut through it, and what are your sources or ideas? What's your advice for what they should pay attention to or read?

JC: Of course, after you buy Talla and fully implement it. You start there. I think that process of thinking about machine intelligence in clear ways only comes from first-hand experience. The place to start is actually to look for small business processes that you can replace with models, just to see what happens, on the side. So anything from a small churn prediction piece-- just to start seeing whether you can build a model that does a better job of predicting churn-- or some piece around predicting a logistics question. Starting small, but model- and data-centric, creates the muscle memory executives need to develop good intuition.
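
As one example of that kind of small, self-contained starting point, here is a minimal churn-prediction sketch. It is purely illustrative: the CSV file, the column names, and the choice of logistic regression are assumptions, not anything recommended on the show.

```python
# Minimal "start small" churn model, assuming a hypothetical accounts.csv
# export with a few usage columns and a churned flag.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("accounts.csv")  # hypothetical historical export
X = df[["logins_last_30d", "support_tickets", "seats"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")  # quick sanity check before going further
```

Even a toy exercise like this forces the data collection, labeling, and evaluation questions that build the intuition Cham describes.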

That's as opposed to going after the latest research trends, which are all really important but just not relevant to most companies. We were talking about this earlier-- Peter Senge, in the mid-90s, had all this stuff about learning organizations. And most of that was not true, because organizations don't learn-- they're made of people. The people might learn, but they leave. Except we now live in a world where you might really have learning organizations, because you have these models that will actually capture key transactions.

RM:  The person leaves and you have the model that they trained.

JC: You still have the model that has all of their knowledge captured in it. And I think that's both great and terrible. And I switch between how I feel about that over time.

RM: Yeah, definitely. Did you read-- Steve Cohen wrote a piece in the Wall Street Journal, I think it was in December, called Models Will Run the World.

JC: Yes.

RM: Yeah. That was a great piece too-- a similar line of thinking. Steve co-wrote it with Matthew Granade, the co-founder and chairman of Domino Data Lab. I think that's almost exactly right-- that frame of thinking is the right way to think about the future of businesses.

RM: So, you do have a technical background, which a lot of the people on the show don't, but you're on the business side now. And again, you didn't come from AI; you came from other types of engineering. What's your advice to somebody who is a senior executive at a big company, kicking the tires on this AI stuff, trying to really understand it, trying to figure out what's real and what's hype, reading crazy stuff in the news, and all that kind of stuff? Are there resources that you have? Frameworks or concepts? Any advice you would give them for how to get started wrapping their heads around it all?

Vishal Sunak: Going back to the founding story of LinkSquares, it's really centered around what problem you're trying to solve and what kind of pain you have, right? I mean, if you're sitting at a big Fortune 100 company and you're saying, well, we have an issue with all of our legacy business agreements, or we're trying to make statistical predictions on supply chain, or different use cases like that, it has to start with the problem. Then from there, you have to work your way back-- and maybe AI is not the best answer.

Maybe with AI today, it's about getting your head around its limitations, because nothing is perfect. Nothing is 100% in this world. And saying, well, maybe I could get 90% of the way there-- that would actually be an amazing efficiency, even though I know there's 10% we're going to fall short on. It's about thinking through that value: how big is your pain? How expensive is your problem if it goes unsolved? And if you think about the industry we're in, executed business agreements-- these agreements are the foundation of a company's relationships with its customers, its partners, and its vendors.

The pain is visceral. It's there. If you think about other problems-- like, oh, we're trying to do some sort of statistical prediction to drive millions and millions of dollars of cost savings-- well, if someone can get you 90% of the way there, that's an interesting way of saying, maybe it's time for us to take a bet on AI. But be purpose driven. No one technology can solve every problem. Put on your engineering hat, break problems down, and say, this is our problem today. And if you can define that, maybe there are tools out there in the market that can solve your problem.

RM: Yeah. I think it's a great use of NLP, because what you guys are doing is much broader than a keyword search. A keyword search is going to return a whole bunch of crap that you may or may not want to dig through, and it's going to be slow from the lawyer's perspective. So when you want to look for certain types of natural language clauses and things like that, there's been a lot of work done in the last five years in neural information retrieval and other kinds of NLU that can deal with some of the variability in language and say, this means the same thing, we're going to surface it anyway-- which is pretty cool.
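
To illustrate the difference from keyword search, here is a small, hedged sketch of embedding-based clause retrieval, using the sentence-transformers library as one possible choice. The model name, clauses, and query are made up for the example and say nothing about how LinkSquares' actual system works.

```python
# Illustrative only: rank contract clauses by semantic similarity to a query,
# so differently worded clauses with the same meaning still get surfaced.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # one example embedding model

clauses = [
    "Either party may terminate this agreement upon thirty days written notice.",
    "The vendor shall maintain liability insurance of at least $1,000,000.",
    "This agreement renews automatically for successive one-year terms.",
]
query = "How much notice is needed to cancel the contract?"

clause_vecs = model.encode(clauses, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_vec, clause_vecs)[0]  # one similarity per clause
best = int(scores.argmax())
print(clauses[best])  # the termination clause, even though it never says "cancel"
```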

I think the first wave of AI was very much about prediction. You guys are playing into what I think is the next trend, which is sort of the automation trend. It's interesting that what you're doing is you're making law teams, legal teams, so much more powerful and efficient, and you're giving them a chance to be more strategic. Like I assume nobody that you're dealing with is complaining that jobs are going away because you're automating this contract work.

VS: No. Quite the opposite. They feel more empowered to do their job. We don't go around every day talking to general counsels and legal teams saying, well, you can let go of three of your staff. It's more like, why don't we give three of your staff their own robot paralegal and make them three times, ten times more efficient at their job, so they can actually have a better work experience-- rather than combing through files manually, scrolling through scanned PDFs endlessly to try to find answers.

Ultimately, they make themselves more efficient so that they have the opportunity to get involved in what we call higher-level business strategy-- figuring out when the next fundraise has to happen, or dealing with a compliance issue, or dealing with litigation. Contract review ends up taking a lot of the time when you actually project how long it will take to review 3,000 files one at a time, looking for ten pieces of metadata. That'll take you a long time, right-- hundreds of hours. So I think it's the opposite. We've seen some incredible things, like some of our customers hiring dedicated people to run their LinkSquares instances.

It's kind of the opposite of what you were describing. AI is here, and we're helping a company, but they're also seeing job creation inside their company-- inside their legal department-- pushing for these roles to exist, to actually run our instance and get the value out of it that they need. That's been one of the coolest things we've seen.

RM: That's awesome. Tell us a little bit about-- you guys are playing in the AI space, and you're looking at companies that are adopting this kind of technology. Historically, it paid to be a technology laggard, because you could let the bugs get worked out of the latest version of Excel or Word or whatever, and then implement it after the forward thinkers did that. Is it the same with AI? Or are companies falling behind if they're waiting on AI solutions, or not looking at AI solutions now?

VS: I think if you're a company that has that pain, the time is always now, right? When we talk to a company that has just experienced some really painful event-- we had to review all our contracts because we had a security breach, or because we missed our SLA, or we're trying to raise our Series B or Series C and we have to comb through all these contracts and prepare disclosure schedules-- they end up in a category of companies that just can't wait, because they've tried doing it the ways they've always done it: manually, or paying their outside counsel hundreds of thousands of dollars.

Again, relating back to the pain, those are usually the moments where the best discussions happen, right? If you're looking at any sort of AI solution-- be it a Talla-type application, where the support and creation of a knowledge base is manual or inefficient and you're spending hundreds of hours with many people trying to keep it updated-- why bother waiting if the future is here? Even going back to what I was saying earlier, maybe 90% will make you a lot more efficient than trying to do things manually the old way, whatever way you were trying to do it. I think the time is now. But making good vendor selections is everything: knowing what you're getting, and partnering with a company that can deliver on what they ultimately promised to do.

RM: So, you've mentioned a couple of times how new these technologies are and how early this industry is. I always think marketing new technologies is particularly challenging because, when you market things-- particularly to companies-- you want to understand the buyer's journey, and often there is no buyer's journey yet. People are figuring it out. They're figuring out what they want, figuring out the buying criteria. They don't know how to evaluate it, etc.

What are some of the best lessons you've learned about doing this for AI companies? How do you take something that's maybe amorphous-- as a process, as a market, all of that-- and what strategies have worked for you to bring some structure to it or pin it down?

Gabi Zijderveld: That's a really good question. I think some of our best practices-- certainly [INAUDIBLE] done this at other companies-- come down to simplicity, clarity, and lots of examples of the "so what." Right? We see this in technology all the time. Of course, we all have awesome speeds and feeds, great features, great capabilities. But you have to be able to articulate, beyond all of that: who cares? Who would actually be willing to spend money on that? How is it going to help them make their products better, or generate more revenue, or make their clients happier?

It's really all about the value prop and being able to articulate it clearly, in simple language. When we first got started coining this term "emotion AI" and carving out what that space was, we spent a lot of time focusing on exactly that. It was a content-driven approach-- putting a lot of content out there-- and we got pretty organized about it as well. To this day, our CEO and I refer to our messaging as our talk track. We actually spend time sitting down and writing out the key points. What is the "so what"? And then we don't get wrapped up in speeds and feeds-- we explain it in simple language.

And I have an example of that, too. Several months ago, we were working with a robotics company-- actually, I can name them because we announced it: SoftBank Robotics. We're working on integrating our technology into the Pepper robot, which for us is kind of an extension of automotive, because autonomous vehicles, of course, are advanced robots. I was talking to one of our product managers-- I also head up our product management team-- and they had big, great news: we can now run on ARM architecture. That is freaking awesome. Who cares? So what?

The person was a little taken aback. I knew why it mattered, but I wanted them to tell me why it was good. "Yeah, but we run on ARM architecture now." I'm like, well, who cares? Why would someone pay money for that? "Well, it matters because now we can run with a small footprint on embedded devices." And, yeah, essentially it means we can run on the robot. I'm like, OK, thank you-- why didn't you just tell me that? Let's not get focused on talking about the features in the technology, but always attach to them why they matter-- why it matters to our client, why it matters to the market. So that's kind of my constant game: constantly asking these "why" questions and "so what" questions.

Coming back to those best lessons, again, it's clarity, simplicity, and focusing on the "so whats" and the value prop.

RM: It reminds me of a book I read years ago, during the web 1.0 tech bubble. I think it was called Jumpstart Your Business Brain, by a guy named Doug Hall who had worked for Procter & Gamble. He had this model for selling new technology: you have to tell people three things, and they really need to be in this order. Number one, what's the overt benefit-- what do they get from this? Number two, what's the dramatic difference-- why you, of all the people that provide this benefit? And number three, what's the reason I should believe you-- the reason to believe?

GZ: Yeah, absolutely.

RM: So, it's kind of an interesting model.

GZ: That's exactly it. And on "what's the reason I should believe you"-- I mean, I can say whatever I want, and it doesn't mean people are going to believe it. So for us, what was really important was also finding examples and proof points.

Especially in the early days, when we first got started with that broader vision for the ubiquity of emotion AI, it didn't matter that these were not huge, big-name companies. I would take any case study, success story, or reference I could get, and then I would milk it until the cows came home-- seriously, make the most of it. But also, because to this day we still get a lot of incoming press interest, I was able to forge really strong relationships with these referenceable clients and partners, because I would give them something back. I would bring them into PR opportunities. I would bring them along with our CEO to speak at conferences. So it wasn't just me asking for good references; we were giving them something back as well. That's worked well for us.

To this day, we have really strong and collaborative partnerships with lots of different folks, and that lends my team a certain level of gravitas and credibility that's really valuable.

RM: That's great. So, tell us a little bit about-- a lot of the people who listen to the show are executives at companies. They aren't super technical, and they're listening in trying to make sense of all this AI stuff, right? They hear about deep learning. They hear about machine learning. They hear about all kinds of things, and they don't know what they mean. Then they hear people say that so much of it is over-hyped and crazy. They don't know what to believe. And sometimes it's just a basic statistical technique-- is it really AI? What's your general advice for people who are getting started and trying to make sense of it all? Where do they turn? How do they think about it? Do you have any frameworks, or resources, or--

GZ: That's an interesting one. Obviously, there is a lot of AI hype and fluff out there, because frankly, these days, every single bit of technology, every piece of software, is "AI"-- everything gets labeled as such. And at first, you can't-- what is the expression? You can't see the trees for the forest, or something. How can you make sense of it when everything looks the same? So how do you work through that?

I think, first and foremost, it's about educating yourself a little bit about what AI really means and what machine learning really means. And there are great resources for that. Frankly, your newsletter, Inside AI, is awesome-- that's a really accessible and simple way to get a good read on what's out there. There's credible and relevant information online to read up on. So get a baseline understanding of what it means. If you haven't done that yet, I would start there.

But then also look at what's going on in your industry. You don't have to reinvent the wheel-- why would you? That would be stupid. So try to find out what your peers are doing. Let's say you're in finance, or health tech, or whatever, right? Go talk to people you know-- former co-workers, or other executives you might know. What are they doing with AI? How are they already deploying it in their organizations? What are some of the solutions they're using that they're actually getting value out of? Look at what your competitors are doing. Again, why would you reinvent the wheel?

If you're a super large corporation, maybe you have the luxury of putting together some kind of cross-discipline task force, where every single department does an assessment of where the inefficiencies are and where they think they can make their work and their processes better-- and maybe there's an AI solution out there that can help with that. I would also say: there's lots of AI research out there, and amazing innovation being developed and deployed, but a lot of it is at an early stage.

Maybe also focus on real, applied AI solutions. Which companies actually have legitimate products and legitimate customers, and can give you examples of how those customers are benefiting? Talk to those customers. Call them up. Get insight.

Subscribe to AI at Work on iTunes, Google Play or Spotify and share with your network! If you have feedback or questions, we'd love to hear from you at podcast@talla.com or tweet at us @talllainc.