Episode 25: The Importance of Data with Fusemachines' CEO Sameer Maskey

In this episode of AI at Work, Rob May interviewed Sameer Maskey, CEO and Founder at Fusemachines. Fusemachines is an Artificial Intelligence solutions and services provider that offers AI engineers to companies seeking to develop their capabilities in Big Data & Machine Learning. Tune in for Sameer's take on the importance of data, how AI is helping financial institutions, tips for hiring AI talent, and much more.

Subscribe to AI at Work on iTunes or Google Play


Rob May, CEO and Co-Founder, Talla


Sameer Maskey, CEO and Founder, Fusemachines


Episode Transcription 

Rob May: Hello everyone, and welcome to the latest edition of AI at Work. I am Rob May, the co-founder and CEO at Talla. My guest today on AI at Work is Sameer Maskey, the founder and CEO at Fusemachines. Welcome, Sameer, and why don't you tell us a little bit about your background and tell us what Fusemachines does?

Sameer Maskey: Thank you for having me here, Rob. My background is in machine learning and AI. I've been doing it since before all the hype. I got my PhD in machine learning and AI, particularly focused on natural language processing. Then I worked at IBM Watson for a bunch of years. After that, I started Fusemachines and also taught at Columbia University.

Fusemachines is a 200-person company that helps other companies build AI systems. And we have a big social component to our mission, which is that we try to find talent from underserved communities in the developing world and in the United States, train them, get them educated on the skill sets required for machine learning, artificial intelligence, NLP, computer vision-- all put together as AI-- and connect them with opportunities in the US and Canada.

RM: Do you primarily work with companies in the sense that they rent a developer-- they hire a developer for a period of time to work day to day on what they want-- or do they primarily come to you for help with a project, and you assign the developers and manage the developers? Or do you support both models?

SM: We support both models. We primarily look at ourselves as finding talent, training them up, and building teams for companies that are looking to build AI teams. Many companies are looking to find and build AI teams for adding new features to their product lines or building AI products themselves. And we find the talent and build a team. We don't manage them day to day ourselves, at least in most cases.

RM: Do you place some of them full time with companies?

SM: Yes. We place some of them full time with companies. Even with the remote engineers we provide, we are essentially looking at it as building a full-time team. We don't do consulting in the sense of charging for our engineers by saying, oh, here's four hours of remote engineering. We do a little bit of that when it comes to PhDs, because many companies struggle to keep machine learning PhDs on the full-time payroll, because they tend to be very expensive. And we do have a few PhDs, so when it comes to PhDs we do provide some consulting at the hourly level. But otherwise, most of the talent we provide is on a full-time basis. And not even on an individual basis. We actually provide a full team.

RM: You've obviously done this a lot. You've recruited several hundred people, or maybe more, into this. Trying to hire AI talent is something that everybody's struggling with. We're struggling with it here at Talla. It's just that there's a global shortage.

I think I read that there are 22,000 people worldwide who have sort of formal training in AI, which is not very many by any kind of standard. What are the things that you look for that tell you someone will be successful at mastering this?

SM: One of the most crucial pieces of knowledge and experience an engineer requires is the math behind the machine learning algorithms. I know there are many, many tools out there these days. Some of them are so easy to use that you can import a library and build an image classification system in a day. But for most of the systems that you want to put into production, where you want the system to be at its best, you really need to be able to tune the machine learning models to your domain and your problems, so that they're performing at the highest accuracy.

A lot of the off-the-shelf tools also work, but they need to be heavily tuned. In order to really tune them, though, you need to understand how the algorithms work. In order to understand how the algorithms work, you need to understand the math behind them. So I usually look for engineers who are quite proficient in math. And, obviously, you need to be able to code.
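
To make that tuning point concrete, here is a minimal sketch, assuming Python with scikit-learn (a tool neither speaker names); the dataset and hyperparameter grid are purely illustrative. An off-the-shelf text classifier takes only a few lines, but pushing accuracy up for a specific domain usually means searching over feature and model settings, which is where understanding the underlying math pays off.

```python
# Illustrative only: an off-the-shelf classifier is easy to stand up,
# but domain-level accuracy usually comes from tuning its knobs.
from sklearn.datasets import fetch_20newsgroups
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Any labeled text dataset works here; 20 newsgroups is just a stand-in.
data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# The "tuning" described above: trying feature and model settings until
# accuracy is acceptable for the domain. Knowing what each knob means
# mathematically is what makes this search informed rather than blind.
param_grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],
    "clf__C": [0.1, 1.0, 10.0],
}
search = GridSearchCV(pipeline, param_grid, cv=3, scoring="accuracy")
search.fit(data.data, data.target)
print(search.best_params_, search.best_score_)
```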

That's actually a hard combo to find, because many computer science students are trained quite a bit in software engineering, but not as much in math. And I think that's partially why there is such a gap between the number of machine learning engineers needed and the number of machine learning engineers available.

RM: That's a great point. My background, many years ago, was as a hardware engineer. And in the mid- to late '90s, if you were going to go to college and study hardware engineering, there were two paths. You could take the sort of digital logic path, which required very little math. It was C programming, maybe Fortran or something else as well. And then lots of Boolean algebra, simple things like that.

The other hardware path was motors and antennas and stuff like that, which required vector calculus, math of fields and waves. Much harder math. I was a student who took very quickly to all the digital stuff, which was logical but not necessarily strongly mathematical. I mastered all that stuff first try. The first time I took math of fields and waves, I had to drop the class. I went back and took linear algebra, vector calculus. Came back and took it and did much better.

I do think it's interesting that when you think about the last wave of technology, cloud, mobile, social-- those are technologies that, while they required some technical understanding, didn't require deep mathematical backgrounds. Now with things like IoT, AI, and blockchain-- this modern triumvirate of technologies pushing the world forward-- they actually require a lot of hardcore math skills. In some ways, those are things I think we've gotten away from. It's really interesting to see you recruiting for those and bringing some of those things back. And so hopefully that'll filter all the way through the job market.

You wrote an article a while back in Forbes about how AI is helping financial institutions. And I assume this came from the fact that you've probably seen a fair number of them in your own business. Tell us a bit about the article and some of the things that you highlighted for AI, particularly chatbots and personalized service, fraud detection, and things like that.

SM: Basically, having helped many companies build systems, and having helped companies at least brainstorm and think about how AI could help with some of the problems in financial institutions, I did recently write an article in Forbes about it. In it I talk about a few things such as chatbots, personalized customer service, fraud detection, process automation, and so forth. In particular, I think a chatbot could potentially be quite useful for customer service, where there is information overload. A lot of people do want to converse with customer service reps and be able to ask questions and get answers, instead of having to do searches or wait on the customer service line for half an hour.

I think there is quite a bit of potential in using chatbots to enable customers, the end users, to access information quite quickly. And there have been quite a few companies that have come out to work on it. I think Kasisto has found quite a bit of success in building chatbots. Bank of America has built a chatbot, and so forth.

Having said that, I would say one thing: when you're trying to build a chatbot, be it for finance or something else, you have to be careful about how well it may work for your particular domain. Language is one of the hardest problems in machine learning. For a machine to understand a sentence is very hard.

Especially in NLP, the longer a piece of text is, the easier it is for the machine to understand. So document classification works decently well, but trying to understand a simple sentence is harder. So when you're trying to decide whether you should use a chatbot for your use case, I think you have to be careful about the limitations of chatbots as well.

RM: I can tell you that one of the things we've seen at Talla is exactly that. So much of the off-the-shelf NLP stuff is trained on common vernacular, Wikipedia data. And when you get into business vernacular, you get into terms that may mean different things than what they mean in colloquial usage. And it becomes easy to confuse the models and everything else.

We've had to strike this balance of training some models cross-customer. Things like paraphrase detection can be trained across customers. But models like question-answer pairs have to be trained on a per-customer basis, based on the fact that words might mean different things at different companies.

SM: Exactly. And, I think in the chatbot space, sometimes people forget about these nuances and then start implementing without thinking about some of those nuances. It's very important that you do, like you are doing.

RM: You've obviously seen a lot of companies go through AI projects. Most of these companies, I'm guessing, hadn't done a lot of AI projects to date. This was the first one, or maybe a pilot. Or maybe they'd done a few small ones, and this is their first big one that they're coming to you to hire for. What advice do you have for somebody, an executive at a large company doing their first AI project, things that might help increase the likelihood that they have success, or things that they might not expect or understand going in that you would give as recommendations?

SM: At a high level, I would say that the executive who is hoping to get x, y, z out of the project should be aware of the importance of data. Without the right kind of data, you won't be able to train good machine learning models, and you won't get the expected output from the machine learning system. One piece of advice I could give is to make sure you have set up the infrastructure and enabled the team to be able to get the kind of data they want.

It's especially hard sometimes in highly regulated industries like finance, where you cannot just take data from wherever you want. Sometimes there might be data that could be super useful for the machine learning project that you're doing, but you still can't get access to it because of regulations. Knowing all the possible data that you have access to, and how to get it, and having it set up before you actually even start a project makes it easier to move fast and build models, which helps improve the likelihood of success of the machine learning project. That's my first piece of advice.

One thing I think a lot of people don't realize is that trying to build a machine learning project to provide business value is different than building typical software engineering projects, such as building a mobile app or a web application.

The difference is that everybody's sort of used to saying, “I want to build this web app. This is what it should look like. These are the interfaces, designs. These are all the features. Based on this, I can figure out the time I need to take. We should have our first demo in six months.” In the machine learning world, it's not easy to do that, because you don't know if the system you're going to build and train is going to have the accuracy you would like before you can put it out into production.

Let's say you're building an image classification system for the insurance industry, such that the system could automatically classify whether a car is totaled or not, or the degree to which it is totaled. If the system needs to have 99% accuracy before you go into production, you don't know when you would get there.

Machine learning projects are an iterative process of figuring out what new set of data, what new set of features, what new set of tuned hyperparameters brings a new level of accuracy. You keep on iterating on that as you improve the models. In the beginning, you can't really tell when you can get to that accuracy level. Having that open mindset of "it may take longer, it may take shorter, and it may not fit an exact four-month or six-month timeline" is important.
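
As a rough illustration of that iterative loop, here is a minimal sketch in Python with scikit-learn; the dataset, the candidate configurations, and the 99% target are assumptions for the example, not details from any real insurance project. The point is that you cannot know up front which iteration, if any, will clear the accuracy bar.

```python
# Illustrative only: train, measure, adjust, repeat -- and you don't know
# in advance which iteration (if any) reaches the accuracy target.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)  # stand-in for real domain data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

target_accuracy = 0.99  # the "must hit before production" bar
candidate_configs = [
    {"n_estimators": 50, "max_depth": 5},
    {"n_estimators": 200, "max_depth": 10},
    {"n_estimators": 500, "max_depth": None},
]

for i, config in enumerate(candidate_configs, start=1):
    model = RandomForestClassifier(random_state=0, **config)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"iteration {i}: {config} -> accuracy {acc:.3f}")
    if acc >= target_accuracy:
        print("target reached; worth considering for production")
        break
else:
    print("target not reached; gather more data or new features and iterate")
```

This is why a machine learning milestone reads better as "reach X% accuracy" than as "ship in six months."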

RM: That's a great point. I think there's been a lot of talk in the VC community now about why AI companies are a little more expensive to build, why they're taking a little bit longer. It's not just product-market fit; now you have model-market fit that you have to add to that. Then, for the product you're building and whatever it does, can you do the AI piece you think you can do? Do you have the data? Will an algorithm work for it? Will you hit the right level? It's a really great point.

SM: I mean, if you look at chatbots-- partly, I think there was all this craze around how chatbots would change the world, and conversational commerce would change the world. And now the chatter has died down a little bit, because of the model-market fit: the promise of being able to build a dialogue system that can talk like a human-- no one has been able to really pull it off. So there's product-market fit obviously, but can you build it?

RM: Yeah, very good point. Looking around at the AI landscape, are there opportunities that you think are untouched? Is there anything that you see where you say, wow, it's amazing that nobody's applying AI over there? Or, if somebody is listening and they're an entrepreneur, do you have a suggestion for an area they might look at that's being under-investigated from an AI perspective?

SM: At this point, so many people are trying to define AI opportunities and start new companies around them that whatever I say, somebody has probably already thought about it and is probably already building companies around it.

We at Fusemachines particularly focus on the data space. I think there's a whole bunch of opportunities around using machine learning to help build teams-- to help automatically interview candidates, automatically source talent, and so forth. I think that's one area where I haven't seen a lot of companies working, and we are working on it.

I think another opportunity is that there are a lot of applications, from the AI perspective, that could potentially have huge impacts in developing countries. For example, we recently built medicine-delivering drones. For a lot of developing countries, where there is not enough infrastructure, roads, and so forth, there could be a lot of opportunities for using drones to deliver things, especially where it's very mountainous and it would take a three-hour drive to get to the top of the mountain to deliver something, but a drone could easily drop it off.

Beyond drones, I think there is quite a bit of open, untapped market in developing countries, because there's not enough competition in the sense of many companies trying to solve the problems of developing countries. So I think that's another interesting avenue to explore, because there's a lot to solve there and probably less competition.

RM: Looking at 2019, do you have any major predictions for AI in the coming year?

SM: Making future predictions is always very tricky. Just looking at the trends, there will just be more AI companies. Probably a lot more. And I think some sort of self-driving taxi will probably be tested in more cities, which would be very interesting to keep tabs on and to see how that unfolds in changing the behavior of people who take taxis.

RM: Yeah, I definitely buy that.

SM: That's something I'm looking forward to, as self-driving taxis come into the cities.

RM: Then, one of the questions that we ask most of the guests on here, particularly ones that are more deeply technical from an AI perspective, is-- there was a big debate last year between Elon Musk and Mark Zuckerberg about how close we are to a dangerously sentient AI. And if we were to hit that AI sentience mark at some point, should we be scared of it or not? Is this a big concern for us at this point in time? Should we be contributing resources to it at a heavy level, or not at a heavy level? Where do you fall on that spectrum?

SM: I probably would fall more on the side of the spectrum that Mark Zuckerberg aligns with, which is that we are not close to having Terminators on the ground that we should be worried about. From the research perspective, we still haven't figured out how to build machines that can even talk like a five-year-old kid at the cognition level. So I think there's still a decent amount of research that needs to be figured out before we'll have sentient, conscious machines that will think about their own survival and start to destroy humans in order to survive.

I think we're pretty far away from that, partly because when I look at the current state of the research, all the machine learning models are doing predictions. They are just building statistical models on data and making predictions based on the data they have seen before, or in some cases, based on what they have learned doing things against other machines or humans. That doesn't really translate into consciousness and intellect in the sense we as humans understand it. I think we are far from having Terminators walking the streets of New York City.

RM: Sameer, thanks for coming on the program today. If people want to learn more about Fusemachines, what's the best URL or the best way to get a hold of you guys?

SM: Best way is to go to the website www.fusemachines.com, and best email is info@fusemachines.com.

RM: Great. Well thanks, everybody for listening, and if you have questions that you'd like us to ask on the podcast, guests you'd like to see, please email those to podcast@talla.com. With that, we'll see you next week.

Subscribe to AI at Work on iTunes or Google Play and share with your network! If you have feedback or questions, we'd love to hear from you at podcast@talla.com or tweet at us @talllainc.