Episode 15: Best of AI at Work  

Check out a roundup of some of the best advice that we have gotten so far from guests on the show. 

 

Subscribe to AI at Work on iTunes or Google Play

 


 Episode Transcription 

Alyssa Verzino: Hi, everyone. You're listening to AI at Work, hosted by Rob May, the CEO at Talla, and Brooke Torres, the Director of Growth at Talla. My name is Alyssa Verzino, and I produce this podcast.

This week, we have a roundup for you guys of some of the best advice that we have gotten so far from guests on the show. To start, we have Drew Magliozzi, CEO and Co-Founder at AdmitHub. Throughout the podcast, you can just check the episode description for more information on each of the guests featured in the episode. Thanks so much, and I hope you enjoy it.

Rob May: What have you seen as some of the criteria that indicate an organization is ready to adopt these tools, compared to other organizations that aren't?

Andrew Magliozzi: The number one criterion is having a problem that they are desperate to solve. That is always an important one. That's why summer melt was an easy entree into the industry. They hadn't solved it. It was costing them many millions of dollars. And we felt like we had the ability to move the needle, and we did.

Understanding, do you have an urgent need? And usually, the way I like to think of what we do is-- I never really explain it in terms of AI. I usually say it's behavioral economics or like behavior change at scale. We are able to nudge, convince, guide, support students in whatever they need to do really at any time in their student journey to impact change.

Now what is it that you want to change, is the best question. If folks are able to wrap their minds around that, then we can make a difference. Second, you know, is there a person? One key collaborator on site at the school who is going to be the Sherpa for their new AI mascot, who's going to care for it, tend to its needs, and train it every day. Because it's really about the flywheel effect of training that we find brings the most benefit. If you're going to use it, use it to its fullest extent and make sure you're doing it daily.

It's like we're giving you a puppy, we often say. Here's this puppy. It might make a mess at first, but I guarantee you, you're going to love it. By the end of the year, it'll be rolling over, and playing dead, and it will be the most benefit you've ever brought to your office. The key is it's a very different type of technology.

If I buy a sports car and I drive it off the lot it starts depreciating every day. The exact opposite happens when you buy AI. It is the worst it will ever be on day 1. It is totally underwhelming. Every day, it gets a little bit better. By the end of the year, you've got a Batmobile.

Brooke Torres: What other advice do you have for executives who are getting started in AI, other than starting with something small? Maybe managing expectations around the data set that you need, high-quality data, time to train, those sorts of things.

Steve Peltzman: Getting smart people. Like, by hook or crook, get somebody in the company that really knows what they're talking about. I don't know what I'm talking about, you know? Certainly my executive team aren't experts at it. Even some of the experts aren't experts in implementation.

We acquired a company called GlimpseIt out of San Francisco. And you know, there were many reasons, but one of the reasons was we need AI expertise now in the company, embedded. It needs to be part of who we've become, so let's start there.

Not every company can go acquire a company, but you can hire people. So you've got to make the investment, at least in a small way, of getting AI expertise in, in-house rather than just dealing with companies, I would say.

We covered the other areas, which is start small. Try something. Learn. We did two or three pilots and learned so much. We didn't make a whole lot of progress, but we learned a lot. And that's okay. That's progress, you know? There's a lot of things we didn't launch from those pilots, but we got a lot of knowledge and a lot of experience, and now we know better. Try and fail, I guess. Hire, try, and fail.

Rob May: Do you have any guidelines or anything that you've seen work? If you're a big company, and you're thinking about how to dip your toe in the AI water, what do you do?

Rudina Seseri: Well, I think if you're a big company and you're trying to solve certain problems, whether you're going through digital transformation or some sort of data challenge, whatever it might be, you say, "Okay, I'm going to solve this. Let me see if there are AI-enabled solutions that will help me with that," rather than just looking at the legacy players.

Don't go looking for AI for the sake of AI. I think that's misguided. If you're going to go through a transformation, if you're going to have a different approach to cybersecurity, if you're going to launch a new product, how could machine learning help? How could predictive analytics help? Who's the best vendor that's leveraging AI? You will see the advantage that way.

Or, if you're trying to build an internal team and capabilities, then what problem-- again, what business problem or technical problem am I solving? How could I do better if I leveraged AI in a certain way? And who are the right people to build that with? Start with what you're trying to solve first, and then look at AI as an enabler.

Brooke Torres: What about in other sort of recent writing that you've done? I know you did this AI-driven leadership piece in the MIT Sloan Management Review. Talk a little bit about that, because I think that's really interesting and applicable for our listeners.

Tom Davenport: Well, I think AI isn't different in that sense from other technologies in that the likelihood that your senior executives are going to embrace it and act on it is probably the single most important factor in how quickly you get moving. But I think there are, as we were saying, more barriers to really fully understanding it, so it takes a little bit more work for senior executives to get involved.

What we did in that article was, you hear a lot of companies saying, oh, we want to be AI first, and so on. We talk about, what would an executive of an AI-first company really look like? And it's things like, of course, you need to know something about the different technologies and what they do, and you need to be clear about what you want to accomplish with AI in your business.

The ambition one is a really interesting one for me, because most of the press accounts are these breathless kind of transformational moonshot stories. Many of these were Watson-based, and many of them did not work out very well.

What's not as widely publicized is the kind of low-hanging fruit, every day, make this decision a little better, make this process more efficient and effective. And it's good at that. In fact, I think that's really the only thing that it's good for, just because it tends to be very task-based and not entire process or even job-based.

I think understanding that and having the right level of ambition and saying, oh, yeah. We can eventually transform our companies, but it's probably going to be through a series of these less ambitious projects in the same area.

A lot of companies are in pilot mode, proof of concept mode, prototype mode. Having a clear set of criteria for when you go into production deployment, I think, is very critical for AI-first executives. Getting your people ready. This is different from other technologies, I think, in that it has more implications for your workforce.

I think letting them know that and letting them know how they can add value is very, very critical. There's an important data piece, owning the data. I think, Rob, you wrote about this in your podcast, your blog this past weekend, which I certainly agree with. And there's, I think, a piece of how the organization works together on it too. So those are some of the things we put in that piece.

Rob May: Yeah, it's really interesting. One of the things we've seen is that a lot of employees, there's really two big key workflow behavior changes that have to happen, depending on the type of AI you're doing. One is sometimes you have to have this training period or ongoing training with your software. People don't always realize that.

We actually had a discussion with an early customer here where the sort of head of a department was complaining about, you know, I don't know if my people would spend any time, like, training Talla. I don't know that they would want to tag this or label that or answer a question that Talla might pop up and ask.

He's like, "even 10 minutes a week would be a lot." And I said, "well, how long does it take you to train a new employee?" And he's like, "well, it takes about two weeks." And then he was like, "oh, I get it. So you actually save a lot of training time". 

One of the things we encourage people to do, and I encourage other AI companies to do this as well, is to build training into their standard workflows. Make it as easy as possible. And then there's this idea that a lot of these outputs, because they're not black-or-white rules-based, they're probabilistic, employees don't know what to do with them.

Tom Davenport: People don't think probabilistically in our society, sadly.

Rob May: I think that's going to be one of the biggest things that's going to change as you roll some of this out.

Tom Davenport: I think it's the probabilistic versions of AI that are doing the best right now. The deterministic ones are having more problems. My sense is there's room for both, and my guess is over time, we'll go a little bit back toward the deterministic stuff.

People are starting to complain, as I'm sure you hear, that deep learning is kind of reaching its peak, and we need to have more common sense in these systems, which I think will swing us a little bit back in the deterministic-oriented direction, and rules and so on. But that whole area has problems too. That was why the last generation of AI really died out.

It definitely means changes in the workforce. Even having people come to realize that these systems are going to be their colleagues for the future of their careers, for the rest of their careers, I think is a tough thing for a lot of people to grasp.

Rob May: You've invested in a lot of companies now. You're running an AI company. What are some of the most interesting lessons you've learned advising these companies in terms of if some of our listeners are in the process of buying AI products or building or deploying AI products, like, what are some of the things that you've seen that have worked, and maybe some of the mistakes that you've seen people make?

Brendan Kohler: I think the primary mistake that I've seen companies that I've advised and invested in make is for companies or smaller companies to assume that they understand the complexities of the business they're trying to help. In our world, AI is this amazing, malleable tool that we can use to solve many problems. But, recognizing within a large company what problems are actually solvable and what problems boil down to issues of company structure or processes or the other variety of potential blockers that exist in an organization that's actively trying to execute on a business model, the business owners that engage these smaller companies really need to do a lot to bring along the smaller technology company into the organization and understand how they can help.

If they don't do that, these technology-focused companies generally don't have a chance. So, what we see as being most effective is for smaller companies to engage with companies that are willing to invest the time and resources learning and bringing these smaller, AI-focused companies into the fold.

Rob May: Yeah, it's interesting. One of the things I've noticed in the ecosystem in general is that you had this explosion of things people were trying with AI applications and use cases and everything else, and tons of startups doing that. I think one of the things that's happened over the last year in particular is many of those companies have shifted and moved. Some of the moves have been big. Some of the moves have been very nuanced.

Some of the companies are starting to look more like each other, and I think they're doing this because we've sort of found that intersection of the Venn diagram. Everybody's stumbling upon the same handful of use cases that are both technically feasible and economically viable for where we are. Are you seeing a similar thing, where maybe some of the companies that started over here are shifting?

Brendan Kohler: I think we're seeing it to a certain extent in various verticals. So for example, in industrial IoT, many of the companies that were heavily focused on many different areas of the problem, everything from sensors to the AI platforms, they're all converging on a business model that involves more of a vertically integrated solution. And that's a reflection of the maturity and the technical capabilities of the businesses they're engaging in.

Similarly, our investments in financial-oriented technology companies that employ AI to help larger banks and investment organizations, they're also converging on certain business models. So I think you're absolutely right that there's certain areas where we're all finding AI can truly help. But a lot of this is because of the maturity of the markets. And because businesses are unable to properly evaluate all the potential options with AI, it's much easier for them if the companies that are pitching to them frame everything the same way.

Rob May: Tell us a little bit about what you've learned about rolling out AI. I know-- I don't remember if you were involved in this, but two and a half years ago, one of Talla's very-- one of our very first beta customers was HubSpot, and we had this kind of like recruiting workflow AI tool. And we went over, and it was Becky who we worked with.

And after about six weeks, I called her and I said, hey, what's going on? She said, you know, it actually makes our workflow harder. Like, the AI makes it harder, not easier. And so these things are tough to roll out. We learned a lot from that, and it's influenced how we built the Talla product today. But have you guys had other similar experiences?

Jim O'Neill: Yeah. And so I'm smiling. The people on the radio and the internet can't see that. But the reason why I'm smiling is one of my personal mental conflicts when I was at HubSpot is I was starting to become an active angel investor. I had always made a very bright line that if I was looking at a company to invest, I had to recuse myself-- not to use a political joke-- recuse myself of actually participating in the implementation. And Talla was kind of in my sights at the time. So Becky clearly took the lead on that so there was no kind of worries.

I think to answer the question, I don't think it's unique to Talla. I think there is a buying mismatch between what AI software and the current state of the art requires and what people expect when they're trying a product out, specifically from a tech startup. So there's this almost kind of competing time investment, perceived or real, that it's more enterprise-like than traditional startup software, which is more consumer-like. And I don't think those lines have crossed yet. When you think of the investment, I think there's still a mystery around the time investment or the complexity. And what I don't think the AI world has done, from the layman's point of view, is educated people enough on the initial time investment versus the long-term reward.

You almost think of it, or I think of it, like a reverse bell curve where there's this really big time investment up front. You actually don't get really great results until you come through the other end of that bell curve, and then the results can be phenomenal. At most tech companies, people are the most valuable asset they have. So any time investment that they make, they can probably think of a one- to three-month payback. I think it's really hard to think of a six-, 12-, or 18-month payback, because alternatively, they'd invest those people into their own tooling and their own system.

There's just this competition, frankly, for resources. Now that may be changing as AI becomes more mature, the data sets become more established. But my take on AI is initial use takes the initial feeding of information to make it personalized enough to make it meaningful enough. And I just think those things are at odds today, and I think that was HubSpot's experience, from what I recall.

Rob May: We had that experience with a couple of early customers. And what it led us to do was really redesign the product such that it could be deployed and show value. And one of the ways you do this, and we do a similar thing, is you put a human in the loop sometimes, right? You start with the human in the loop, and you say, I'm going to improve this workflow, and I'm slowly going to eat into what the human has to do. And that tends to be a way that these startups can get around it.

Jim O'Neill: And if I can add one thing, I think that's fascinating. So you know, it's not a great example, but the Mechanical Turk of AI, I don't think enough companies are willing to do that, frankly. I think they rely too much on algorithms. And a lot of the businesses I see, the more successful ones have a small army of people actually curating information, and they have to do it in a timely fashion. And I think the best example would be Fin, if you guys all use Fin for a personal assistant. That's probably the more well-known one. I've seen companies in real estate tech and lending tech and a lot of those kinds of technologies that are using AI. But truly, they have a services component, just making sure that the time to enjoyment is reasonable and keeps people kind of interested.

Alyssa Verzino: All right. I hope you guys all enjoyed that roundup. And thank you for listening. Keep on tuning in to AI at Work for more tips. If you have any questions or somebody that you would like us to have on the show, you guys can tweet us @tallainc or email podcast@talla.com. See you guys next time.

Subscribe to AI at Work on iTunes or Google Play and share with your network! If you have feedback or questions, we'd love to hear from you at podcast@talla.com or tweet at us @tallainc.