Episode 18: Where Do I Start? How to Start Actually Doing AI with Jana Eggers
Rob May interviews Jana Eggers, CEO at Nara Logics. Tune in to hear her answer to the common question, “where do I start?” and learn 4 tips to stop reading about AI and start actually doing it.
Rob May, CEO and Co-Founder, Talla
Jana Eggers, CEO, Nara Logics
Rob May: Hi everyone, and welcome to the latest episode of AI at Work. I'm Rob May, the co-founder and CEO of Talla. Today we have Jana Eggers, the CEO of Nara Logics on the program. Welcome, Jana.
Jana Eggers: Hi, Rob. Great to be here.
RM: Why don't you give us a little bit about your background and about what the company does.
JE: My background is strange and fun. I was a mathematician and computer scientist. I was a research scientist at first, then got the business bug and moved into the business world, and I've done a whole bunch of different types of companies, lots of startups, which has been a lot of fun, and ended up at Nara, which almost completes a circle for me. My research used AI on computational chemistry problems. I worked with traditional neural nets as well as genetic algorithms, doing optimization on conducting polymers. When I got the call from Nara, they said, "I think you may have some background in AI." I said, "Yeah, actually I do." I also worked at Lycos, so I know a lot about NLP. AI has always been a part of my career, and it's really fun that right now the industry is in a place where you can actually have a company focused on AI.
RM: That's interesting. CEO is a hard job and a different job at every different company. Being a mathematician by training, do you think it's helped you be a better AI CEO, given sort of the state of where AI is today?
JE: In a startup, you have so many different jobs as a CEO. It's definitely helped me in my role as sales engineer; I can go more technical than most CEOs can in that role. Another aspect: I was raised by a banker, so it helps that I understand the finance side of it. When I was a kid, the fun thing my dad used to do was make me balance the checkbook. That helps as well. The reason I'm even saying this is that I really encourage CEOs to get to understand AI. They can understand it at a level that will be beneficial to their business. It's not something they should wave off with, "Oh, that's for my tech team, those are my smart guys." It's everyone. Don't put it off in a corner.
RM: This is something we've talked about a lot on the program over the last couple of months, is this idea that everybody's interested in AI, and every business leader that you talk to will say, “It's going to be revolutionary”, and this and that and the other. Yet, I went to a Silicon Valley Bank dinner the other night with a panel on AI, and the lead question was, “Why aren't these AI companies performing better?” Everybody wants this. Tens of billions of dollars have gone into the space, and you don't have any AI unicorns yet. You don't have a lot that are building significantly hyper growth sort of revenue models yet.
They're doing pilots; they're a couple million in revenue, at best. We've talked a lot about the fact that there's this gap between what companies want and what they're actually doing. I think it's interesting, because you wrote this post in January, Four Tips to Stop Reading About AI and Start Doing AI. Were you seeing the same thing? Is that what inspired that post? Then, tell us a little bit about what you wrote there.
JE: That's exactly it. There's a lot of talk about it. I did an O'Reilly session, and I asked people, "How many of you have prototypes going on?" And almost everyone raised their hand, 90% of the room. Then I said, "How many of you have AI in production?" This was a room of about 150 people. There were three that did.
JE: I think that's what you're seeing, and that's why there aren't any AI companies that have gotten to even $10 million in revenue yet. I was just talking to one of my investors about that this morning. People are dabbling in AI but they're not doing it. I will tell you, we're seeing a change. We definitely saw a change starting, you know, late spring, early summer, and we're seeing even more now. So that's great.
On the tips, the first tip I gave is the one I was just talking about for CEOs: really understand that AI is not magic. There's nothing magic there. You don't have to understand the theory behind it to get what it's doing and to start understanding and thinking about the answers it gives.
My second tip is to get serious about your digital transformation. By that, I mean understand your data: what data you have, and the quality of it. Poor quality doesn't stop you from doing AI. Actually, AI is pretty good at handling strange anomalies, and it's better for you not to clean your data. With our customers, I'm always telling them, "Guys, we want the raw data." Because if you go in and clean it, then when something unclean comes in, we don't know what to do with it.
JE: Really understand what digital transformation means. It really means, “I've got all this digital data. How do I use it across my organization?”
The third thing I said was: stop with the excuses that "we don't have enough data" or "we don't have the expertise." Jeff Dean from Google was asked at a conference about a year ago whether mere mortal companies have enough data for AI. He said absolutely yes, without any doubt. One thing I want to encourage people on is that you have more data than you think. This is not about acquiring data. Also, you don't need the PhDs. Google, Facebook, those folks absolutely need the PhDs. I'm not saying they're bad. Listen, any PhDs that want to come join us, we love it. We're there for you. We want people like that, but you don't have to have them. Most companies aren't going to need them. You can build a neural net in seven to ten lines of code, and you don't even need to do that. You just need to understand what your organization is trying to accomplish.
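To make the "neural net in a few lines" point concrete, here is a minimal sketch of a small feed-forward network trained on XOR with plain NumPy and hand-written backpropagation. It illustrates how little code a basic net takes; it is not Nara Logics' approach or a production model, and the layer size, learning rate, and iteration count are arbitrary choices for the demo.

```python
import numpy as np

# Toy data: the XOR function, a classic non-linearly-separable problem.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 tanh units, one sigmoid output unit.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
initial_loss = float(np.mean((out - y) ** 2))  # monitor mean squared error

for _ in range(10000):
    h, out = forward(X)
    grad_out = out - y                          # error at the output
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)   # backprop through tanh
    W2 -= 0.1 * h.T @ grad_out; b2 -= 0.1 * grad_out.sum(0)
    W1 -= 0.1 * X.T @ grad_h;   b1 -= 0.1 * grad_h.sum(0)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
predictions = (out.ravel() > 0.5).astype(int)
print(predictions, initial_loss, final_loss)
```

In practice you would reach for a framework that computes the gradients for you, but the point stands: the mechanics are small, and the hard part is understanding the problem and the data.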
RM: Sometimes there's even a gap we've seen: if you have too many PhDs, your AI people are very researchy, and it makes it difficult to get things into production systems because they're not applied engineers. There are very few people who have both sides of that right now. AI and machine learning techniques are getting better, but they're still not as well understood as traditional engineering from a best-practices perspective. There are still some things to be bridged there.
JE: I think that, to your point, the tools are getting better. I think it's much easier. Every year it is that much easier to not be an expert and be able to go out and do some basic machine learning and apply it. You're not doing research. You're really applying it. And what you said is the reason why I left research, which, I loved research. But it wasn't applied to the real world. And so that's what I was excited about going into business is like, well, now I’ve got to make sure it works.
JE: That's what we need. We need great software engineers, not necessarily PhDs in AI, but great software engineers who are helping apply this. We need great product managers working with them. We need great UX folks. There's so much more happening with AI. How do you get that? We need great data scientists who really understand what's going on in the data and can help guide us: what do we have in that data that can then be applied?
RM: Then your fourth point in that article was build a learning org.
JE: There's a book called The Fifth Discipline, and the fifth discipline is being a learning organization. In tech we call it failing fast. Lots of people say, "It's not about failing. It's about learning," right? Which is true. That's really what this is about. One of the places the author starts is that these are systems. What I like about that is there's a simulation called the beer game, which is really fun. It's about being a beer distributor. You can't just know your one piece of the system.
You really have to understand, “Well, wait a second. My end outlet is doing a promotion. Well, that's going to impact my distribution. If I don't know that they're doing a promotion, I might think that they just figured out more marketing, or opened up to a new market. And so then I amp my production up. Well, then I have too much. And then, by the way, I've screwed up my supplier”.
It's one of the things that starts off the book, so that people start understanding, "Oh wait, different organizations read signals from each other wrong." Well, that's what you have in yours. When you're talking about the business unit, and engineering, and data science, and a UX organization, they all have to communicate together. I think The Fifth Discipline is a great book for really understanding what a learning org is. I think it's actually going to be AI's hat trick to make us all learning orgs.
RM: You're going to have to be to be able to do that. It's interesting, because when you talk about fail fast in startups, in big companies that's called pointing fingers and crucifying the guilty.
JE: That's why I never really survived very well with any big company.
RM: Same here. I am definitely in agreement on that. I think there are a lot of people who don't understand that. We had Steve Peltzman on here from Forrester. He talked about the fact that they had done a couple of big AI pilots that hadn't worked out, but they had learned so much from them that their next efforts were going better.
For companies, it's like the web. It's 1997 and you say, "This web's going to be a thing. We gotta do something here. What do we do?" Everybody's figuring it out. Dip your toe in the water and start going, and take it seriously. Try to put it somewhere, and realize you're going to make some mistakes.
JE: My parents were so upset when I left my job in software in the logistics industry (trucking; I was serving the trucking industry) and went from there to Lycos. That was in 1996. They were like, "We don't know what this internet thing is."
RM: I was a freshman at the University of Kentucky, an engineering student. This was 1994, and I remember the head of the engineering department gets up on stage talking about how "the university paid all this money to be hooked up to this system of information," and I remember thinking, "What the hell is he talking about? Sounds like crazy stuff." He was talking about the internet. I think we're at a very similar point now. So that's cool.
JE: I agree, and I think it's going to be less visible. You don't really notice it; AI is sort of behind the scenes, whereas the internet was in our face. Mobile's in our face. AI's happening kind of behind the scenes, which is why I also think there are a lot of CEOs who think they don't need to get involved or understand it.
RM: The last few waves of technology have been more front-end focused, more user-interface focused. You've got cloud, which was all about running in the browser with SaaS. You've got mobile, you've got social networking. Now the main technologies, like AI, blockchain, and IoT, are all hardcore, behind-the-scenes processing stuff. So it's different, because the tech press really came of age in the Web 2.0 phase.
That's our model of the world. I always tell people this is why there are tons of like, you know, programming language blogs on like, Ruby on Rails and Python, and stuff like that. And there's not a lot of like embedded software blogs, even though, you know, there may be more embedded programmers in the world than there are web programmers. I don't know. But there's certainly a lot of them.
It influences how people that are growing up and interested in programming, how they think about things, I think. Tell me a little bit about the Nara Logics tech stack, in terms of the types of machine learning that you use. I find it interesting that you have this genetic algorithms background. Have you looked at non-neural net AI at Nara?
JE: That's a great question. We are non-neural net, or you could say we're the OG neural net. We come out of the Brain and Cognitive Sciences lab at MIT. Our CTO and co-founder, Dr. Nathan Wilson, did his doctoral and postdoc work on how our neurons build their own networks: how they actually decide to connect, when they decide to connect, when they weaken connections, when they strengthen them, how they're activated. That's where we come from. We're basically building a network. We call it a graph, because there's not a specific flow to it: a weighted association graph of how your data, and the enterprise data you work with us on, is connected.
RM: I see. Then you query the graph, that makes sense.
JE: We do use traditional neural nets, some for optimization of that graph, because we think that's a good approach for it. We use some of them for the input to the graph. If we have customers working with images, that's kind of like how your eyes process the image; the cognitive side of your brain doesn't. So we definitely use traditional neural nets.
It's more akin to a Bayesian approach, with some distinctions from that, as well. I would say, you know, our core is different from any of that. But we use almost all of them in different ways, depending on the problem we're trying to solve.
RM: I see. Since you have a sort of graph model, are you able to do inference across the graph?
JE: That's a very common thing to do. Just to give you an idea, something we often talk about is movie recommendations, because everybody understands them and gets them all the time. Take my favorite example, because it's one of my favorite movies: The Abyss. That's not a movie about space, but there are aliens, there's weightlessness, there's darkness.

It wouldn't be recommended if you're just filtering on it being a space movie, but if you love space movies, you're probably going to like The Abyss. That's an inference we can make, because it has the qualities of a space movie. So for us, that would be an inference we would build: we'd build a connection in our graph to space, because it is like space.
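The Abyss example can be sketched as attribute-overlap inference: link an item to a concept when it shares enough of that concept's qualities, even if it was never labeled with it. Everything here (the titles, the attribute sets, the 0.5 threshold) is a hypothetical illustration, not Nara Logics' actual graph or scoring.

```python
# Themes and movies described by sets of qualities. A movie never tagged
# "space" can still be connected to the space theme if it shares enough
# of the theme's qualities, which is the inference described above.
themes = {"space": {"aliens", "weightlessness", "darkness", "spaceships"}}
movies = {
    "The Abyss":  {"aliens", "weightlessness", "darkness", "ocean"},
    "Gravity":    {"weightlessness", "darkness", "spaceships"},
    "Caddyshack": {"golf", "comedy"},
}

def inferred_themes(attrs, themes, threshold=0.5):
    """Link an item to each theme whose qualities it mostly shares."""
    links = {}
    for theme, qualities in themes.items():
        overlap = len(attrs & qualities) / len(qualities)  # fraction shared
        if overlap >= threshold:
            links[theme] = overlap
    return links

for title, attrs in movies.items():
    print(title, inferred_themes(attrs, themes))
```

Here The Abyss shares three of the four space qualities, so it gets a "space" edge despite not being a space movie, while Caddyshack gets none.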
RM: The modern wave of AI that started in, say, 2012 or 2014, whenever you want to pin it down, has been very neural network driven. Do you see that driving a bunch of innovation in other forms of AI? Do you see a lot of work in genetic algorithms, in Bayesian stuff, or other companies exploring some of the areas you guys are?
JE: There's a lot going on in Bayes, for sure. That's getting a lot more focus. Not as much GAs. I can't figure out why, because they're really pretty well understood, and they work very well, too.
I do see some people trying to automate the rule-building side, so traditional expert systems, but automated. There are some different approaches you can take for that. When O'Reilly first launched their AI conference, it really was a deep learning conference.
RM: That's definitely true.
JE: That's pretty much what it was. But quickly, I think they heard the feedback that people are trying other things, that they've tried neural nets but they didn't work for certain problems. They're great for certain problems and they're not great for others. I think that's really what's driven it. I think Bayes has caught on the most, other than ours. We're very proud of the work that we and our customers have done, but it's proprietary.
RM: A lot of the people that listen to the podcast here are executives at large organizations that don't come from an AI background, and they're trying to understand the trends in AI and where this world is going. What advice do you have for a VP at a fortune 1,000 company who says, I'd love to do AI in my department. I don't know where to start. What should they be thinking about?
JE: I get asked this a lot, and I call it the chicken, egg, and bacon problem. People ask, where do I start? Is it the data? Do I start with the data? Or do I start with an algorithm that I think is going to solve my problem?
That's the chicken and egg problem. The data is the egg, the chicken is the algorithm, the bacon is the results, which I call the holy trinity of AI. Anytime you change any of them, you have to consider changing the others. My big recommendation is start with your own eggs. Start with your data. A lot of people go and try and buy data, and the problem is, they don't know what's been done with that data. They don't know how it's collected. They don't know how it's been transformed. They really need to understand the data that they're going to be making decisions on, so I do think you start with the egg in that situation.
The algorithms are pretty easy to change out. I mean, you talked about some of the ops problems. We, as an industry, haven't figured out how to do model management. There are some theories out there, but none of us has fully figured out what to do there. But choosing an algorithm? You can switch algorithms in and out and see what they do. So get into that kind of iterative way of doing things (this goes back to being a learning org).
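The "switch the algorithms in and out, keep the eggs" idea can be sketched as a loop that fits interchangeable models to the same fixed data and compares them on a held-out split. The two models and the synthetic data here are toy stand-ins chosen purely for illustration.

```python
import numpy as np

# Fixed data ("the eggs"): a roughly linear relationship with noise.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 200)
x_train, x_test = x[:150], x[150:]
y_train, y_test = y[:150], y[150:]

def fit_mean(x, y):
    """Algorithm 1: always predict the training mean (a baseline)."""
    m = y.mean()
    return lambda x_new: np.full_like(x_new, m)

def fit_linear(x, y):
    """Algorithm 2: least-squares line fit."""
    A = np.vstack([x, np.ones_like(x)]).T
    slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
    return lambda x_new: slope * x_new + intercept

# Swap algorithms over the same data and compare held-out error.
results = {}
for name, fit in [("mean", fit_mean), ("linear", fit_linear)]:
    model = fit(x_train, y_train)
    results[name] = float(np.mean((model(x_test) - y_test) ** 2))

print(results)
```

The point is the shape of the workflow: the data and the evaluation stay fixed while candidate algorithms rotate through, which is what makes this kind of iteration cheap.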
Then the last thing is: be really careful about what you set the results to be. I know Meetup has been public about this. I'm not telling on them, and I actually think what they did was great. They found that if they set their goal to how many people click on and sign up for a Meetup, then, because more men click on tech events than women do, they would end up showing more tech events to men and not to women.
They came out and said, "We think that's wrong." So you really need to think about what objective you're setting, and what that's really going to drive. That's what I mean by the results. It's not just "make this number higher." The same thing came out about the Uber self-driving car, the one that killed the woman in Arizona. They said, "We didn't want it to brake as much." Well, that meant setting the threshold for identifying a person too high, or too low, depending on how you look at it.
It means that, yes, your risk is higher there. You need to set up an environment where you're having open discussions about this. I have a whole talk on this alone: "How do you get your team talking about these things?" When you ask what an executive can do, one thing is to get their team started with their own data. And that's a problem, because, by the way, there are lots of silos in organizations. So you need to encourage your team.
I talk about free-range eggs, right? You've got to let your data be free range. Don't worry as much about the algorithm, but make sure the team's trying many of them and thinking about, "Hey, what's the difference in how they behave?"
As an executive, you've really got to set an open tone about the results you're trying to drive, and make sure you understand the trade-offs. "Not braking too much to make the ride smooth" doesn't sound like a bad thing, but what did I trade off for that? Is your team asking that? I think those are probably the three core things that I would start with.
RM: People don't understand trade-offs at all. We're very used to software being a binary thing, not a probabilistic thing.
JE: And death is an uncomfortable thing to talk about. If I can talk about "braking" instead of death, I'd like to. But we can't. We've got to say, well, what does "braking" mean?
RM: We're a week out from Thanksgiving as we're sitting here recording this; it'll go live next week. When you look out in the near term, a couple of years, or maybe even the long term, eight to ten years, what are we going to have to be thankful for with respect to AI? Where are you most optimistic?
JE: I think there's a lot more than people realize that we have to be thankful for right now. Just look at the quality of search results. So much of that is due to AI. Look at the quality of translation in enabling us to communicate better with people. That's AI. I remember when I first moved over to Germany in the late '90s how hard it was to communicate. You're looking for that perfect word, and now you can get that perfect word. You can get the nuance, and it's really amazing.
Mapping: I remember printing out directions in the early '90s. I would never go anywhere without something printed out, right? Now we have dynamic routing happening. There's so much optimization in production that we have now, thanks to AI. It's impacting our lives.
What I think we're going to be most thankful for are some of the things like that we forget. Calendaring is still really, really hard. How many times did we go back and forth about getting this meeting set up?
RM: Well, and I don't know if you saw, but x.ai, which was trying to do conversational scheduling, still does it, but they've introduced a bunch of other tools now: a Slack bot, visual scheduling tools. I think that's because we've equated AI with particular experiences, image recognition, conversation, rather than the underlying things it does, like prediction and automation and classification.
Going back to one of your earlier points, because AI is back end, it's not as in-your-face as everything else. I hate to use the term, but these are marginal improvements on things, albeit huge marginal improvements in performance. They're not necessarily new ways of doing things, the way a mobile app was.
JE: I think that's exactly right. I said calendaring, and I'm not just saying that jokingly. To your point, calendaring takes a lot. There's NLP it has to understand, there are priorities it has to understand, there's categorization. Oh, this is a customer, that's a really high priority. This is one of my key team members, that's high priority.
You know, it's things like that. It has to understand a lot about categorization. It has to understand a lot about priorities and life in general. That's not an easy problem to solve. It's actually very, very hard. I think to your point, it's going to take lots of different types of AI to solve that well. And that's what I think is going to happen. That's what I'm really excited about, is it isn't going to be one AI that solves it, it's going to be seven AIs that solve some of these even bigger problems. We're building all this experience in that.
RM: Tell us a little bit about what kinds of companies should be looking at Nara Logics. Like, what's your sort of profile customer where they're right in your sweet spot?
JE: Very large enterprises. The reason is that we're a platform, not a services company. We're a platform they use to make it easier to apply AI. They certainly could go to Google or Amazon and use their tools, but then they have to pull a lot of different pieces together themselves. The second thing is, we're explicit with our AI. We're not a black box. That's a huge thing.
RM: That's an advantage of a graph over a neural net, right?
JE: Exactly. That matters when you're dealing with consumers. For example, Procter & Gamble is public about their work with us. They tested with consumers whether, when you're given a product recommendation, you actually want to know why. They did it blind, without the whys, and then with them: "this product, because of this and this." Consumers liked it. Even when it was just echoing back what the customer had said, they understood: oh, this product is for that and this product is for this.
It's done quite well. I think those are probably the two big things for us: anybody in the enterprise that's going to be doing a lot of AI, and the fact that we're a tool they can use across the organization. That's what we're seeing. Most of our growth comes from our current customers, even though we're adding customers. As soon as they know how to use it, they're like, oh, we can apply it elsewhere. I can apply it to supply chain just as well as I can apply it to my front-end consumers.
RM: I just read an article about BlackRock shelving a neural network model for credit risk scoring that outperformed everything they had, because it was not explainable and that scared them; there might be some liability there.
JE: We've heard that.
RM: Here's the last question we like to ask people on the show. There was a big debate last year between Elon Musk and Mark Zuckerberg about whether or not we should be concerned about, and investing a lot of time in preventing, killer AI that takes over the world. Where do you fall on that spectrum?
JE: I'm definitely not on the Musk side, and I'm probably closer to Zuckerberg, but not quite. One of the reasons I joined Nara Logics is that it was explainable. To me, that is exactly one of the things that will help us prevent that. The killer robot thing, I think that's a bit overblown. I don't think it's going to be robots. But like I said, AI hides.
RM: That's what I'm worried about. I am worried about the fact that AI is going to control so much of our lives, just in the services it runs for us every day, in ways that we don't understand, way before we have this AGI problem.
There are so many chances for manipulation there. I've written before about this idea that you and I both leave for work at 8 o'clock to be there at 8:30. One day Waze realizes I'm a little ahead of schedule and you're running behind. Should Waze send me down a path that's two minutes slower so that you can get to work faster, because it knows I can bear that two-minute burden? Should it make decisions like that to even out the whole traffic flow, or not? If so, how does it make that decision? Is it who uses Waze the most, who clicks on the most Waze ads, who's worth the most to Waze, who pays the most? There are all kinds of simple ways to abuse that power. This is exactly why I think leaders have to get into these conversations. You can see it: "Well, Rob's a better user than Jana."
JE: Right. "Rob's worth more. Rob clicks on two ads for every one of Jana's, so we're going to make his commute faster."
RM: Exactly. They're just going by who clicked on the ads rather than who bought stuff, right? That's the whole thing. These are complex things.
JE: Very complex. We have to have these discussions. At the first level it sounds like, well, that's fair. We need Rob to click on more ads. Well, maybe we slow him down so he has more time to click on ads.
RM: Don't give Google any ideas. If you're listening at Google, ignore that statement.
JE: You need to have organizations that are having these conversations. They're not easy conversations. This is not a five minute meeting conversation.
RM: Well, there hasn't been a lot of ethical thought put into tech products, and that's where you really saw the problems we created in this social-mobile wave of tech. It was all about pressing the users' buttons in ways that get them to do what you want them to do, and that may not always be the right thing for them or for society or anything else. So AI is going to amplify that.
JE: I think we have learned, so that's good news. People are thinking about things like screen time; our apps now tell us how much screen time we're getting. It's good that we've had that learning, because to your point, you're really talking about the possibility of a huge amount of growth in things like that.
RM: Good, well Jana, I thank you for coming on the show. If people want to reach out on social media or follow you or learn more about Nara Logics, what's the best way to do that?
JE: You search for my name, Jana Eggers. I'm on Twitter, @jeggers. You can find me. Search for my name and you got me. And, I’m firstname.lastname@example.org, and I'm always happy to talk. And, we're hiring. I got to say that.
RM: Yeah, everybody has to say that. We're all hiring.
JE: We're all hiring, but we're a lot of fun.
RM: As they can tell from the podcast, you are a lot of fun. I've hung out with you a little bit. Good, thanks for coming on the show.
JE: Thank you.
RM: All right, if you have ideas for guests you'd like to see or questions you'd like us to ask, please send those to email@example.com. Thanks for listening.