Episode 29: The Future of AI Hardware with Mike Henry, Founder and CEO of Mythic AI
On this episode of AI at Work, host Rob May sat down with Mike Henry, Founder and CEO of Mythic AI, an AI hardware company. Tune in to learn more about Mythic AI, from the founding story to their core innovation and applications. And get Mike's take on AI trends, predictions on where AI is going, and much more.
Rob May, CEO and Co-Founder, Talla
Mike Henry, Founder and CEO, Mythic AI
Rob May: Hello, everyone, and welcome to the latest episode of AI at Work. I'm Rob May, the co-founder and CEO of Talla. I'll be your host today. I am here in Redwood City, California, with Mike Henry, from Mythic AI, which is an AI hardware company. So let's get started. Welcome to the show, Mike, and tell everybody sort of the founding story of Mythic, the work you were doing that led to this and then the main market that Mythic is in today.
Mike Henry: Thanks for having me. The founding story is quite long. We actually created the company back in 2012. Around 2013, we saw this explosion of AI happening, and we knew pretty quickly that the hardware that the large chip companies were putting out was not going to be well-matched to what these algorithms had to do.
We saw early on there were going to be two sides to the problem. There's going to be training these algorithms and teaching them how to learn and do their tasks, and that was going to be pretty well-served by NVIDIA for quite a while. But then there's this other issue of how do you actually deploy these algorithms out in the wild? How do you put them into devices, cars, electronics, anything that is cost-sensitive and power-sensitive?
I have this big powerful algorithm that typically needs to run on a $1,000 GPU, and now, I want to put it into a billion devices. That was the other problem we saw. And that was actually the problem that we chose to solve, and that was all the way back in 2013.
To solve it, we knew that in order to beat the large chip companies, which, at the time, all knew the same things we did, we would have to have a really large technological breakthrough to power the company. We spent about three or four years working on that core technology, which is analog computing. A small amount of government funding, a very small team, just three or four years of really hard work.
Then when you fast forward to 2016, I think the VCs and the markets, in general, realized the value proposition, and we were able to raise a series A at the end of 2016 and then pretty quickly, raise additional money. Now, we've raised up to $56 million. The company's grown from five people back in 2016 to now 80 people two years later. So now, we're scaling up to production. It was a really long road. It was a lot of heads-down work with a small team in the early days.
RM: How would you explain some of Mythic's core innovation to somebody who's not super technical about how computer chips are made?
MH: The core innovation really came down to how do we cram a lot of compute and a lot of memory into a really small chip that is low cost and low power? With the end of Moore's law coming, process scaling was not going to solve that problem, and that was kind of our secret sauce.
Our big intuition was that these other companies, the large chip companies, were going to run into the end of Moore's law and not have a solution. We went back and actually dug up an old technology, which is analog computing. Instead of using digital ones and zeros, we manipulate small electrical currents to do math. It's been talked about for 30, 40 years, but it was never really successfully executed on. We applied a lot of modern thinking to it, and a lot of the modern work done on analog and digital, and came up with a working solution.
RM: Very cool. Talk a little bit about some of the places you're starting to see early interest and traction. Like, what kinds of applications and what kinds of companies are you working with, are you looking to work with? What are the industries where they're going to sort of need this first?
MH: I think to just take an example industry and dig into it is probably the best way to think through this. Think through, let's say, security and surveillance, or maybe more broadly, the video analytics space. You can think about retail analytics, where I want to track customer behavior in a store using cameras, or smart cities, where I want to measure traffic. All of those spaces you can think of as smart cameras that apply AI and gather information from the scene, whether it's to do security or data gathering or whatever.
Actually, after 9/11, that industry saw this massive explosion in interest in smart cameras and smart video analytics. From like 2001 to 2015, the algorithms were really low quality. They were based on classic computer vision techniques, and they had a huge amount of false alarms and false positives, and they were not effective.
Fast forward to now, and they've been able to demonstrate that deep learning actually solves a lot of those accuracy challenges. You have these really innovative software companies doing really powerful AI algorithms that can effectively analyze a scene using state-of-the-art deep learning.
But here's the problem: they have to run one camera on a GPU, and they barely even get the frame rate they want. That GPU is $1,500-- no, it's actually more like $4,000 for enterprise grade. So $4,000 per camera, burning 150 watts of power. Now, how do you do that in a Walmart that might have 1,000 cameras?
That's where a company like us can step in and say, all right, for a mid-double-digit price tag, we can get you the same compute as that $4,000 GPU at a hundredth of the power. That's how you can now scale that thing all over the place. Get it into every camera.
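To make that scale-up math concrete, here's a back-of-the-envelope sketch in Python. The $50 unit price and 1.5 W draw are assumptions standing in for the "mid-double-digit price tag" and "a hundredth of the power" figures above:

```python
# Back-of-the-envelope comparison for a 1,000-camera deployment,
# using the figures from the discussion. The $50 chip price and
# 1.5 W draw are assumed stand-ins, not Mythic's actual specs.
NUM_CAMERAS = 1000

GPU_COST, GPU_WATTS = 4000, 150    # enterprise-grade GPU per camera
CHIP_COST, CHIP_WATTS = 50, 1.5    # assumed low-power inference chip

gpu_capex = NUM_CAMERAS * GPU_COST               # total hardware cost
gpu_power_kw = NUM_CAMERAS * GPU_WATTS / 1000    # continuous draw in kW

chip_capex = NUM_CAMERAS * CHIP_COST
chip_power_kw = NUM_CAMERAS * CHIP_WATTS / 1000

print(f"GPU:  ${gpu_capex:,} up front, {gpu_power_kw:.1f} kW")
print(f"Chip: ${chip_capex:,} up front, {chip_power_kw:.1f} kW")
```

At these assumed numbers, the GPU route is $4 million and 150 kW of continuous draw for one store, versus $50,000 and 1.5 kW for the low-power chip, which is the gap Mike is describing.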
You can think the same about autonomous, right? If you have a trunk full of GPUs, and you have $40,000 in compute equipment in the trunk, well, how do you get that into all 100 million cars made every year around the world? How do you get the entire world to benefit from that when most people can't afford that? You can find those problems all over the place of how do you scale this up and make it cheap and affordable?
RM: I think one of the things that's most exciting about some of this stuff is what a lot of people are missing, because to your point, back in 2016 when you and I met, people weren't doing a lot of hardware investment, right? The general thought was, ah, it's not going anywhere, and NVIDIA is going to own it all and everything else.
I think what people miss is that-- take NVIDIA's GPUs, right? They initially came out to process graphics. People realized over time, oh, wow, this would be a cool thing to use to train neural networks, because they work very similarly to how neural networks are structured. So they found these extra uses, which has really driven NVIDIA's stock price and success in the last few years.
I'm really optimistic that AI is driving changes in chip design, and that, as people grow up programming these new chips, they're going to have new ideas that we're not even thinking about yet-- things these chips can do that maybe they weren't even originally designed to do. I think there's going to be a lot of what I think of as beneficial, surprising spillover in terms of the economics of these early chip companies, the ones that are successful. So that's pretty exciting.
MH: I mean, you can find a similar analogy. If you think about our technology and how analog compute lets us take what's normally a large power-hungry GPU and shrink it down to a tiny chip that can go in anything and how that can create new markets, you can look at the same with flash memory. Before flash memory and flash USB sticks and all that, you had big spinning hard disks, right? Those hit a certain size limit, and they hit a certain power limit. You were never going to go below that with a spinning hard disk.
Someone comes along and invents flash memory. And all of a sudden, the market explodes. And it goes in literally everything. It goes in your watch. It goes in your phone. It goes in your car. It's literally everywhere, right?
That technological breakthrough-- taking the spinning hard disk and turning it into a little, tiny solid-state chip-- created massive new markets. You're going to see that with AI as well. You're going to see it with the kind of technology we're developing.
RM: Most of the companies we talked to are sort of software companies, either application layer, infrastructure layer, or whatever. You guys are much, much further down the technology stack. When companies start to adopt chips like this, do they have to change a lot of their workflows for how they produce products? Do they need new software and tool sets? If so, does that come from third parties? Can they use what they have? Are you guys working on that? What's that like?
MH: I think in these early days of AI adoption, you're going to have hardware OEMs, and they make the hardware purchasing decision. They might be the security camera hardware maker. They might be Dell or HP. They're going to be somebody like that. Then you have this really rich third-party software ecosystem that's developing out there. You'll have companies like Pilot AI or DeepScale and other venture-backed start-ups. You even have established players selling things like face recognition technology and all that. It's this really rich software ecosystem.
The key to success for any hardware startup is to be able to pull in all those third-party software developers around you. And they're all really smart. It's all like rooms full of Stanford PhDs. So we know that, OK, if you give them 100 times more compute, they're going to do some amazing things with it. It's really about getting them rallied around that idea.
Then when you have them rallied, then the hardware OEMs, now all of a sudden, really want to adopt you. Because they have all their software vendors saying like, hey, you've got to go with Mythic. This is going to amplify our stuff by 100x.
That's a real challenge, and that means you need a really rich compiler ecosystem. You need to support all the latest tool flows, and you need to support TensorFlow and all that, right? The challenge is creating an ecosystem that the third-party software people will rally behind, and then selling them on the vision of 100x more compute.
RM: You've watched that market for a while, and you might remember before Google launched TensorFlow, there were a handful of things out there that were competing like Keras and Caffe, and a whole bunch of different standards. All the major tech companies had launched theirs. And it seems like TensorFlow has become very, very dominant now since it's been out. I'm curious on your perspective on is that true and why? And then where might the sort of software ecosystem around this be going?
MH: I thought that TensorFlow had won, and then all of a sudden, PyTorch comes around, and we see a lot of people using that. ONNX is getting some adoption, and so I honestly don't think it's over yet. I think when you have the new wave of hardware come out-- that's the people trying to dislodge NVIDIA on the training side of things-- they might try to push their own thing. But if they have a massive advantage in training time, you might get people actually adopting their ecosystem.
On the training side, it's not over yet. That's actually going to make it difficult for people who are AI developers. Luckily for us, when you're talking about inference and the deployment side of things, there are actually some pretty simple interchange formats among those different ecosystems. ONNX is doing a very good job of bridging everything together.
It's less of an issue for us. You can actually go from an already trained TensorFlow network to the ONNX format very easily, so we're less exposed to that. But in general, I don't think the battle's over yet. I think TensorFlow could easily go from 70% to 30% market share, yeah.
RM: For people that aren't familiar with how the hardware industry works, your customers make their buying decisions, or general hardware customers make their buying decisions on a lot of different criteria, right? Obviously, it's like chip performance, but there are things that you don't think about as much that you wouldn't think about if you're buying software as much, like power consumption and footprint and how it literally fits onto the board, the circuit board that you're designing and everything else.
What are some of the vectors that people are looking at around that, that are AI-related? In terms of, like, can Mythic run-- you have recurrent neural nets and convolutional neural nets and lots of different things emerging. And can you run all those? Are you focused on a specific subset of those? And what are some of the AI criteria that your customers are going to be looking at for chip selection?
MH: The most sophisticated customers we talk to have kind of all landed on some pretty common neural networks. I hear YOLOv3 all the time, and Mask R-CNN, and these really advanced networks that are quite beefy and very compute-intensive. But customers seem to have accepted that, OK, these are the really highly capable ones that can actually give me really good results out in the field. From there, they want a certain resolution and a certain frame rate and a certain cost and a certain power.
The most interesting thing I've seen is that what they want is actually quite high, and the current hardware isn't even close to delivering it. The next-generation hardware from the big chip companies isn't even close to delivering it either. They'll say: I want YOLOv3 at HD resolution with 100 object classes at 30 frames a second, and I want it to be less than a watt, and I want it to be a $20 chip.
No one out there has even remotely a path to get there unless they're doing something completely, radically new on the chip side of things. So it's encouraging for us. The customers seem to have realized: this is the level of compute, this is the level of teraops per second that I need to really have good-quality results out in the field. And you don't see a lot of companies actually being able to deliver that, so that's very encouraging for us. The customers are very sophisticated, and they know exactly what they want.
RM: Let's talk a little bit more broadly about AI and some of that stuff. Because you guys occupy a unique place in the AI industry. So I'm interested in your perspective on some of the bigger things.
Is there a problem in AI that Mythic is not working on but you wish somebody else would solve to really make your-- would really help you guys out or make your business better?
MH: I mean, the one that's getting a lot of the VC attention right now, because it's the problem right now-- we're solving the problem for two or three years from now, but the problem right now is training time. There's this overused quote from a guy at Facebook. Someone asked him, “Give me the top five problems in AI right now,” and he said, “Training time, training time, training time, training time, training time.”
So that problem is getting well-served by some start-ups that have hit billion-dollar valuations. It's really kind of what's holding back widespread adoption and scale-up of this, because right now, the training times are brutal. But it's one of those things where, once they solve that, it's going to drive a significant increase in demand for inference, right? As these algorithms get bigger and beefier and more capable, you're going to need beefier hardware to actually deploy them out in the field.
RM: For the people listening who don't understand the difference: typically, you're training a model. Then once that model is trained-- you've run through your back propagation and all that kind of stuff and run through your data sets-- you're deploying it on a chip, where it can just run the model and classify things or predict things, whatever it is that it's doing.
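That train-then-deploy split can be illustrated with a deliberately tiny sketch: gradient descent adjusts a weight during the training phase, and inference is just running the frozen model forward. This is purely illustrative-- a hypothetical one-parameter model, not Mythic's toolchain:

```python
# Minimal sketch of the train-vs-deploy split described above.
# Training is the expensive, iterative phase; inference is the
# cheap forward pass you'd ship on an edge chip.

# --- Training (gradient descent / back propagation) ---
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # learn y = 2x
w = 0.0                                      # model parameter
lr = 0.05                                    # learning rate
for _ in range(200):
    for x, y in data:
        grad = 2 * (w * x - y) * x           # d(loss)/dw for squared error
        w -= lr * grad

# --- Inference (deployment: weights are frozen) ---
def predict(x, weight=w):
    return weight * x                        # just a forward pass

print(round(predict(5.0), 2))                # ~10.0
```

The asymmetry is the point: training loops over the data many times and updates parameters, while the deployed model only ever evaluates `predict`, which is why inference hardware can be so much smaller and cheaper.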
MH: Inevitably, when you get more powerful hardware to train and teach the model, you're going to have more powerful neural networks coming out of that, and they're going to do amazing things and drive all these new markets. And you're going to need really powerful hardware to actually run them out in the field, which is the problem we're solving.
The training ability drives market demand for what we make. Luckily, like I said, it's very well-served right now by start-ups and big companies. It's going to get better and better over time. So yeah, that's on the hardware side.
On the software side, I think one of the things we really need is not just a coalescing of frameworks for training, but, let's say I want to build a robot: a prepackaged SDK that does really advanced vision and path planning. If I want to make a robot, I just drop this thing in, and all of a sudden it has great vision, it can see, it can navigate, and it can do complex tasks. Making it really high level and abstract in that way is going to drive developer demand to do a lot more things with robots, and therefore, that drives our demand.
And you can say the same thing for drones, right? Like right now, drones are almost a hobby toy, and they're starting to see a little bit of commercial traction. But if you think 10 years from now, if drones become the delivery platform of choice, think about all that stuff that's going to be built on top of that by new developers coming in and building all these new things on top of that. Again, that drives demand for our chips, because our chips will be in every drone out there, right? So that kind of like next layer of framework that people can develop on top of isn't really there yet.
RM: If there are people listening to this and they don't know a lot about AI hardware or AI in general, it's a confusing space for a lot of people. The last waves of technology-- sort of social, mobile, cloud-- were easy for somebody non-technical to understand. This wave I sort of classify as IoT, blockchain, and AI, and they all require a little more technical chops.
What are resources? Or where should people go or how should they think about these things? Like, if you're new to AI, and you want to learn about AI hardware, do you just go read EE Times? Or are there good resources or frameworks for thinking about these things or how you classify and divide up the world or anything like that you can share?
MH: Well, this would be a good time to plug our blog series-- starting this week, we're going to do a weekly series for the next three months that dissects all of those different things, especially around hardware, software, what we're hearing from customers, the ecosystem, all of that. We'll have a release every week.
Outside of that, yeah, the trade journals like EE Times get very deep in the weeds on hardware but tend not to connect the dots to the software and the markets. McKinsey actually just came out with a very good report that's public. Places like McKinsey are paying close attention to this, and they do a lot of good breakdowns and analysis. I think the Linley Group does a really good job of it too.
The analysts and the consultant groups and all that are really taking a good holistic view of the entire ecosystem. I encourage reading those reports. I would stay away from the trade journals. They get too technical, and then even Google's blog posts, they do a lot of good blog posts, but they're very technical.
RM: I think it's hard. There's a limited number of people who are good at taking the technical aspects of AI and making them more accessible to a wider audience.
MH: The analysts and the bigger consultants and all that are really starting to pay attention and put out some good work, so that's a good place to look. Yeah, McKinsey's report, I highly recommend that. It came out a couple of days ago.
RM: One of the questions we always wrap up on-- last year, we would ask everybody about the sort of Zuckerberg versus Elon Musk debate, where they stood on that, but that's old news. So now we ask everybody about what I call-- from, I guess, the November/December time frame; I don't know if you follow this-- the Gary Marcus versus everybody else debate, right? Which is sort of: how far is deep learning going to take us? Are we going to need to really progress intelligence as a concept in silicon to the next level, or are we going to need new breakthrough technologies and approaches, maybe things that aren't deep learning? Did you follow that? Do you have an opinion on that?
MH: I sat at a roundtable with Gary Marcus, and I'm much more on the side of like, every time I hear someone say AI is the next electricity, I cringe inside. I think that it's possible. It's also possible that AI is the next relational database, like it's just a tool that gets a lot of great work done, but it's not going to transform civilization into something incredible.
I am much more on the Gary Marcus side, where deep learning models, as they stand right now, are essentially pattern matchers. They're incredibly powerful pattern matchers. We've been able to take those things and solve a lot of really important problems in things like robotics and driving and AR/VR and consumer electronics. It's created all these really amazing new markets, but down under the hood, they are essentially just really powerful pattern matchers.
Gary Marcus, I think, is spot on in what he thinks we need. We need the next ImageNet. ImageNet was this computer vision competition that basically said: recognize what's in this picture. Is it a cat? Is it a dog? Is it a tea kettle? He says we need to create the next challenge that involves the next higher level of intuition. We need to give ourselves 10 years, and we need to throw a billion dollars at it and just let everyone grind away on solving it, and I agree 100% with him on that. But I think we're going to do some really amazing stuff with these glorified pattern matchers. I'm not worried about the markets themselves.
RM: I mean, deep learning is still at the very beginning of sort of moving out into applications. I think people forget that so much of the tech news that you read about AI, these sensational headlines are like, they're really like, oh, this academic thing on this synthetic data set did this thing. And it's like, translating that into a real world application is still messy and hard and has a bunch of practical problems to solve.
MH: We need the next level challenge. We need to put a lot of weight behind it and track the progress of it to know are we really getting to the next level of intuition for AI? So I agree with him 100% on that, yeah.
RM: If you're interested in learning more about Mythic or following the blog series on hardware in this space, which I would highly recommend because it sounds like a really great resource, you can go to mythic-ai.com. Then if you have guests you'd like us to interview or questions you'd like us to ask, feel free to send email to firstname.lastname@example.org. We'll see you next week. Thank you.