Episode 28: Investing in the Future of Work with James Cham from Bloomberg Beta

Rob May sat down with James Cham from Bloomberg Beta. Tune in to their conversation on investing in the future of work, AI investing, his insights on AI companies, best practices for adopting AI, and more.

Subscribe to AI at Work on iTunes or Google Play


Rob May, CEO and Co-Founder,
Talla 


James Cham, Partner,
Bloomberg Beta 


Episode Transcription  

 

Rob May: Hello, everybody and welcome to the latest edition of AI at Work. I am Rob May, CEO of Talla. And with me today is James Cham from Bloomberg Beta. Welcome James. Why don't you give us a little bit about your background, how you got to Bloomberg, and maybe talk about some of the investments that you guys make in AI.

James Cham: Bloomberg Beta was started about five years ago as a seed-stage fund with Bloomberg as our only investor. We were investing in what we told them initially was the future of work, whatever that actually ends up meaning. Because we started five years ago, a couple of my partners and I realized that we were telling a big data investment story when the real question was, what's next? What do you do about that?

We were lucky enough to start thinking about machine intelligence relatively early, as far as this current wave of AI-related investments goes. We've done well so far.

RM: I was actually here in 2015 when you and I were talking about AI investing. Not many people were looking at that space yet. You recommended-- I forget the name of the book you recommended for me to read.

JC: Was it Pedro Domingos' book?

RM: No, it was before that. What was the book? It was something about talking to your computer.

JC: The Man Who Lied to His Laptop by Cliff Nass, which is honestly my favorite book. Cliff Nass was a sociologist at Stanford. He understood that people treat machines like humans, that we anthropomorphize constantly. He did a series of tests and psychological studies around how that works.

I think there's a weird way in which everyone worries about the singularity, worries about Turing tests. I think that's mostly wrongheaded. Because, we as people are desperate to treat machines and anything around the world as human. And thusly, it actually isn't that hard, to be honest.

RM: When I go on vacation, my kids think Alexa misses us. That's how they think about the world. We talk sometimes about AI companies and how they're different in terms of trying to build them compared to SaaS companies. Have you seen some of those differences? Are they slower? Are there things that you need to do differently? Do you believe in this concept that a good set of investors has put forward, model-market fit? Stuff like that.

JC: I mean, they're so smart. Those other guys are great. I think that if you were to compare us now to where we were five years ago, we've figured out one of the key definitions: we now understand that models are the heart of machine learning. I think that piece is now widely understood, to the point that senior executives at big companies whisper it as if it's some big insight. Right?

RM: Yeah.

JC: OK, that's good. I think that's mostly good for the world. I think the reality, though, is that the process of building machine learning models is, in some pernicious ways, very different than software development. I started my career as a software developer. I figured out a few tricks. I figured out that I could say I would do less, or that it would take longer. But I could almost always build the thing you wanted to build, right?

That's not true with machine learning models. There are plenty of times when there's not enough juice in the data. The reality is, you need to recast the question or just give up. That's so different from normal software development. That's one key thing. The other thing that's true is the economics of software development. The specific characteristics of software development in the '70s led to the rise of licensed software. The specific characteristics of online software development, which Scott pioneered in the mid-90s, led to a series of economic business models that made sense.

In part because model building is different enough, I'm pretty convinced that there will be some dominant business model that we haven't really seen yet. Subscription software made sense for online software; for the model stuff, I'm still desperately looking around. I think we're all trying to figure out what the dominant model will be.

We're two blocks from Salesforce Tower right now. And, the guy or gal who figures out that dominant business model is going to be successful enough that they might have two towers. That's mostly what I'm looking for.

RM: Well, a lot of people think it's going to be one of these things where you combine blockchain and AI. People submit data and you can buy data from people at very small prices. You can arbitrage the marginal value of how much another piece of data will improve your model against how much you would sell your model for. People are looking at a lot of that.

JC: It's both less complicated and more complicated than that, right? There's a way in which the genius of Marc Benioff and his team is that they described online software in such clear ways that they had a number of technical slash marketing innovations that made it possible for average buyers to decide, oh, I trust Salesforce with my most precious customer contacts. And as a result of that, I will be willing to give them a certain amount of money per seat every month.

That's, to be honest, both kind of super simple and super profound. There's going to be someone, I don't know, like you, who will be able to explain it in those terms, so that a buyer will say to themselves, oh my goodness, what have I been thinking? I need to go buy this thing in this slightly different way. And it'll be, to be honest, mostly buzzword free. It'll be as simple as, no software.

RM: Well, I can tell you what we've learned in the automation space. I've seen this not just in what Talla does with customer support automation, but also with Automation Anywhere and some of these things. They sell against a human capital budget. They don't sell against a software budget. So, that's at least trying to pull the money from a different bucket, which works well in a tight labor market like we have now. But yeah, it's unclear if that's the long-term way to do this.

JC: I think things like that are exactly the trends that are interesting right now. Those changes, which I think people don't talk enough about, are the interesting changes.

RM: Sometimes we wonder if the next staffing company will be built from the ground up with simple digital agents and just renting them out. Did you ever read Robin Hanson's book?

JC: The Age of Em. I just recommended that to someone yesterday.

RM: He's another guy that looks a lot at the economics of this and how it plays out in the future, right?

I think the critical thing is that there are a lot of very smart people looking at the macroeconomics around job replacement and those sorts of rates. That's interesting, but it's not as interesting as the microeconomics. And I think the microeconomics remain under-covered. And so I was lucky enough-- those guys out of the University of Toronto-- Ajay Agrawal, Avi Goldfarb, and Joshua Gans--

JC: Prediction Machines, a book I recommend to everybody as well.

I think part of their appeal is that they thought first in terms of the basic units of what's actually happening. Because they thought in those terms, you can expand from there. In some ways, the macro-question is a little ridiculous because we don't even know what people are doing on the ground yet. We don't even know how to think about it.

RM: That's very true. So, why aren't people doing more on the ground? Let's talk about that. You've been in software for a long time, as have I. It used to be that best practice was to wait, right? Wait and adopt a thing, adopt version 2, particularly with packaged software, once the bugs had been worked out and all that kind of stuff. Then with SaaS, you could kind of adopt it whenever you adopt it.

Can you do that with AI? Can you wait? Or are you behind because of the learning nature of it? Does it matter? Say you're insurance company A or insurance company B, and one of them adopts AI for many of their core business operations three years earlier. Can the second company ever catch up?

JC: I think that, of course, it depends. That's true. At the same time, there's this realization that the people who figure out how to apply machine learning in the right ways, and sooner, are going to be the ones that win. I don't know. There's a little bit of an overemphasis on the importance of data. I think data is critical. More often than not, data in most companies is bad, right? It's captured for some different reason than the models you're building.

I think there's a way in which most companies think they have all this data, and they really don't. So, there's this entirely new data collection process that's being produced. There's clearly a learning curve there.

The real learning curve that I think is poorly understood is when to trust the machine and when not to. All those judgments around when to automate and when not to automate, when to bump issues up to people and when not to. My guess is that the companies that are most aggressive and smart about that are going to be the ones that actually win. There will be some insurance company that will say, we can skip three steps now. As a result of that, our cost to acquire changes fundamentally. So, we can do more Super Bowl ads, or whatever, more search ads, whatever they do.

Their core economics change. It's not because they have better data, but rather it's because they understand where their models are good enough that they can trust them. As a result of trusting them, they can automate in different ways. As a result of that, they can fundamentally change their economics. I think that's the actual sort of big opportunity.
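As a rough illustration of that point, here is a minimal sketch in Python of confidence-based routing: trust the model when it is confident enough, bump the case up to a person when it is not. The threshold value, the "claim" framing, and the function names are hypothetical, assuming only a scikit-learn-style classifier with predict_proba.

```python
# Minimal sketch of confidence-based routing, assuming a scikit-learn-style
# classifier that exposes predict_proba. The threshold and the "claim" framing
# are hypothetical placeholders, not a prescribed design.
AUTO_THRESHOLD = 0.95  # tune against the real cost of an automated mistake

def route_claim(model, claim_features):
    """Return ("auto", predicted_class) when the model is confident enough,
    otherwise ("human", None) so a person reviews the case."""
    proba = model.predict_proba([claim_features])[0]  # class probabilities
    confidence = proba.max()
    if confidence >= AUTO_THRESHOLD:
        return "auto", int(proba.argmax())
    return "human", None
```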

RM: What we've seen-- as I think many other AI companies have seen-- is that the workflow behavior change piece is hard. You're no longer just giving an output, you're giving a probabilistic output. People need to know how to use it and what to do with it and understand it. It's like a weather report, right? Even if it's 95% one way, you could take the advice of the model and still have a bad beat.

JC: That's right. Then the other part of it-- it's worse than that, right? It's worse than that in the sense that some person, a CEO, she's super smart and she thinks she understands her company. But of course, she doesn't, because all the actual work that's happening on the ground is poorly understood. So her ability, or her VP's, or her director's, or her manager's ability to change things is surprisingly limited, because they don't really understand what's happening at the ground level.

There are plenty of opportunities either for models to replace people or to automate things that, to be honest, are only understood by the people on the ground. They've got no incentive to tell anyone. Did you read the Brian Merchant article in the Atlantic Monthly about coders who code themselves out of their jobs?

RM: No, I didn't. It sounds interesting.

JC: This is maybe my current favorite business related puzzle, which is that in most big companies it is the people at the ground who actually understand the opportunity for automation. The way that we're set up, they have no incentive to tell anyone. If there's someone who really knows they could write a little script that would replace 60% of their job, why would they tell anyone?

If they told anyone, they might get fired, or even worse, they might get bumped up to middle management. It's not clear to me which one's worse, but they're both pretty bad. It's not a cultural thing, because it's a pretty straightforward economic question. What incentive do you have for telling everyone else that actually, you know what, part of my job really should be replaced by a computer? I've not seen anyone solve that. There are slightly kooky ideas, like something similar to the patent model, where the patent was created so you have a monopoly for 10 years.

So that piece, I think that's an interesting puzzle. That's one of the good questions out there.

RM: That's a very interesting puzzle. I think companies definitely have to figure out how to solve that. I think there are a lot of executives who are struggling with how to adopt AI. You're seeing it adopted in places where you have just the most forward-thinking executive who's just irrationally committed to it.

But companies aren't going through and adopting it with any kind of economic sense, in the sense of: this is where it could have the biggest business impact, based on the data that we have and what it could do.

JC: That's right. We were talking earlier about Danny Kahneman, who lives a few miles from here and is a genius and all that. He spoke at an AI and economics conference a few years ago. He said something along the lines of-- we'll have to find the actual quote-- how he felt like his entire life had been a waste.

His whole life has been studying the ways we make systematic mistakes, that we have loss aversion or whatever. He'd say, that's not the problem for most businesses and people. The problem for most businesses and people is we're random and we're noisy. We make decisions based on whether we have a tummy ache or not, right?

That is the actual problem in most businesses and in most parts of the world. To be honest, that's a part where machine learning and models could help in an incredibly straightforward way. There's no economic incentive right now to figure out how to do that in a clear way, because you're giving away what you think of as the thing you're best at, which is the judgment piece.

RM: Yeah. But people have to move to becoming trainers. That's where I think this is going to go. If you look at the trends in enterprise software user interface design, you went from: it doesn't matter, it's going to run on a computer; to: it's got to run in a browser now; to: it's got to be a little more user friendly, with the consumerization of IT. I need it to feel a little social. It needs a feed. I need to have an avatar and tags.

I think the next wave that you're going to see is really this wave around UX/UI design, such that everything you do is a piece of feedback to the machine. It's going to capture all this data passively. What's hard is you have to do that in a way that doesn't seem like a lot of extra work.
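A hypothetical sketch of what that passive capture might look like in code: every ordinary user action on a model suggestion (accepting, editing, or ignoring it) gets logged as an implicit label, so no extra annotation work is asked of the user. All of the names and the schema here are invented for illustration.

```python
# Hypothetical sketch of passive feedback capture: normal user actions on a
# model suggestion become implicit training labels. Names and schema are
# illustrative placeholders only.
import json
import time

FEEDBACK_LOG = "feedback_events.jsonl"

def log_feedback(suggestion_id, model_output, user_action, user_edit=None):
    """Record what the user did with a suggestion: 'accepted' acts as a positive
    label, 'edited' as a correction, 'ignored' as a weak negative."""
    event = {
        "ts": time.time(),
        "suggestion_id": suggestion_id,
        "model_output": model_output,
        "user_action": user_action,  # "accepted" | "edited" | "ignored"
        "user_edit": user_edit,      # corrected value, if the user changed it
    }
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

# Example: a user rewrites a suggested support reply; the rewrite becomes a label.
log_feedback("s-123", "Suggested reply text", "edited", user_edit="Rewritten reply")
```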

JC: You know what, that's super insightful. I think that one of the problems with most machine learning annotation products right now-- and obviously annotations are a really important part-- is that they're really designed for people who make about $10 an hour.

The real opportunities are in human-in-the-loop user interfaces for people who make $100 or $200 an hour. Figuring out ways to create good experiences for them, in which it makes sense for them, that's another one of those great puzzles. If you figure that out, then you've done a lot for the world.

RM: Then you have to figure out how to sell it.

JC: That's another thing.

RM: Which is hard. You know, the biggest thing I've noticed between Talla and doing my last company, Backupify, is how much better everyone has gotten at online marketing. I mean, in 2010, 2011, if you were doing re-targeting and strong SEO and all that, you were way ahead of the game. Everybody does that now. Small companies do that now. People are looking at all that.

JC: I mostly believe you there. Except I would say that what's interesting is that the biggest companies do a very bad job with that. There's a way in which, in part because we live in a startup world-- which is not a geographic idea, it's a conceptual thing-- you've now talked to everyone who's doing growth marketing. As a result of that, you end up in a place where all these techniques are super easy for you and all your competitors. Sort of suggesting that you should be competing against big companies who are really bad at this.

RM: That is true. There haven't been new channels that have opened up. That's, I think, what's been frustrating.

JC: That's right. But you could do ABM against a big company. And you'd run rings around them.

RM: That's probably true.

JC: Because they'd be like, what's ABM?

RM: Yeah, exactly. You guys are looking at a lot of deals and have done a lot, and have been for some time. As we were talking about earlier, you actually introduced me at a dinner to two entrepreneurs who I ended up investing in-- two of my best investments. I don't think you made those investments.

JC: I missed out. I missed out.

RM: Is there anything that you're looking for that you haven't seen? If there's an entrepreneur out there, is there a thing where you'd say, man, this is what I would build right now, that you think there's an opportunity for but haven't seen out in the market?

JC: I think that there are many open opportunities. One of the clear ones is, RPA is amazing. It's also terrible. It's also very big business-y right now. There's some lightweight version of RPA that initially looks like a fancier version of it...

RM: Yeah, Zapier, or...

JC: Zapier, that just ends up growing like mad. There's some bottom-up RPA tool out there that would be really exciting and interesting.

I think there's a whole set of fairly straightforward business processes in which automation and model building become really available. That one, we just look at all day. That's, in some ways, our meat and potatoes. Then there's the big enterprise transformation stuff; I just don't know how that happens. The first company that actually manages that kind of transformation will be really interesting.

Similarly, you look at Google and Facebook. There are all these great PMs and engineering leads and thoughtful people there. They are encountering the problems that the rest of the world will encounter five years from now. That, as a theme, I think is tried and true. There are lots and lots of opportunities still there.

RM: To some extent, I think it'll be harder with AI. But to some extent, it happened with cloud, which is why you had the whole hybrid cloud phase. I mean, I remember when I was raising money for the Backupify Series A, the number of venture capitalists in 2010 who told me, Rob, let me tell you something, enterprises will never move their data to the cloud. Which was clearly wrong.

JC: I mean, it's one of those things where it was a similar story with SaaS, right?

RM: Yeah.

JC: I think those sorts of things that are obvious to you but not obvious to the world-- that sort of insight is what everyone is looking for. It's certainly what I'm looking for: someone will tell me something, and I'm like, I don't know if that's true. Then they convince me that I'm wrong. That's the dream, right? That's the dream.

RM: There you go. Well, hopefully somebody listening will have some cool ideas and hit you up. Let's talk about a couple of other things. What's your advice? A lot of people that listen to this podcast find it and start listening because they're trying to figure out where to get started in AI.

You're a non-technical or semi-technical executive in an industry that's a little bit of a technology laggard. You start to think, wow, we need to be thinking about AI. Where do you start? There's so much BS. How do you cut through it? What are your sources or ideas? What's your advice on what they should pay attention to?

JC: You know, of course, after you buy Talla and are fully committed, you start there. I think that the process of thinking about machine intelligence in clear ways only comes from first-hand experience. I think the place to start is actually to look for small business processes that you can replace with models, just initially, to see what happens on the side.

Anything from some small churn prediction piece, just to start seeing whether you can build a model that does a better job of predicting churn, to some piece around predicting some logistics question. Starting small, but model- and data-centric, creates the muscle memory needed for executives to figure out how to have good intuition.
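A minimal sketch of the kind of small churn-prediction experiment described here, assuming scikit-learn; the CSV file and column names are hypothetical placeholders, and a real project would need more careful validation.

```python
# Minimal churn-prediction sketch (hypothetical file and column names),
# illustrating a small, model- and data-centric first experiment.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customers.csv")  # placeholder dataset
features = ["tenure_months", "monthly_spend", "support_tickets"]  # placeholder columns

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")  # compare against whatever rule of thumb is used today
```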

That's as opposed to going after the latest research trends, which are all really important, but just not relevant to most companies. You know, we were talking about this earlier, but Peter Senge in the mid-90s had all this stuff about learning organizations. To be honest, most of that was not true, because organizations don't learn; they're made of people. The people might learn, but they leave. Except we now live in a world where you might really have learning organizations, because you'll have these models that'll actually capture key transactions.

RM: The person leaves and you have the model that they trained.

JC: You still have the model that has all of their knowledge in it.

RM: That's a great point.

JC: That's both great and terrible. I switch between how I feel about that over time.

RM: Definitely. Did you read-- Steve Cohen wrote a piece in the Wall Street Journal, I think it was in December, called Models Will Run the World.

JC: Yes.

RM: Yeah, that was a great piece too.

JC: Steve and Matthew Granade, who is the co-founder and chairperson of Domino Data Lab, co-wrote it. I think that is almost exactly right. That frame of thinking is the right way to think about the future of businesses.

RM: Last question. Back in 2018 when we did these podcasts, we would always end on the sort of Zuckerberg versus Elon Musk debate. We don't talk about that anymore. Now the question I always end on is what I call the Gary Marcus versus everybody else debate, which sort of cropped up in December with Gary Marcus sort of saying, hey, deep learning is not going to get us there. We need new breakthroughs. And then a lot of people saying, deep learning has a very long runway, and it may actually get us closer to AGI than anybody thinks. As somebody who sees a lot in the AI space, do you have an opinion?

JC: Gary is so smart and astute about many things. At the same time, many of the opportunities for businesses right now are even simpler than that-- they really are kind of basic models that could almost just look like a regression. Just starting to think in those terms, I think, is the first step.

I am agnostic on whether, you know, Gary is right or, you know, Geoff Hinton is right. I'm agnostic on that piece. I think it's a case where-- I don't know if you remember the RISC versus CISC fights. It was super important, except of course they were both kind of right and both kind of wrong. Then it was the dominant commercial model that ultimately won, rather than the dominant architecture.

RM: Very true, as is often the case. I mean, how many times did you hear that Microsoft was going to die? From 2005 through, I think, 2015, it was like a yearly thing.

JC: Every year, they continued working with their developers, working with the people who loved working with their teams, and working on the products. They just kept on getting better. You know, they've been incredibly successful as a result.

RM: Well, James, thanks for being on. For those of you listening, if there are guests you'd like to see on the podcast or questions you'd like us to ask, please email those to podcasts@talla.com. We'll see you next week.

Subscribe to AI at Work on iTunes or Google Play and share with your network! If you have feedback or questions, we'd love to hear from you at podcast@talla.com or tweet at us @talllainc.