Episode 16: Should We Trust These Systems? 

Rob May and Brooke Torres interview Murray Cantor, Co-Founder and CTO at Aptage. Tune in to hear what Murray thinks about the future of AI and how to think about trusting these systems.

Subscribe to AI at Work on iTunes or Google Play


Rob May, CEO and Co-Founder,
Talla


Murray Cantor, Co-Founder and CTO,
Aptage 


  


 Episode Transcription 

Rob May: Welcome, everybody to the AI at Work podcast. I'm Rob May, the co-founder and CEO of Talla. We make a support automation platform for companies of all sizes.

We do this podcast to help educate you about what's going on in the market with respect to AI, how to think about it, how to look at trends. And so hopefully you can learn something today from our guest, Murray Cantor, the co-founder and CEO at Aptage. Welcome, Murray. Will you tell us a little bit, quickly, about your background and how you got to Aptage?

Murray Cantor: I hope I can do it briefly. By the way, I'm the CTO of Aptage. My co-founder, John Heinz, got the short straw, and he's the CEO.

I have a PhD in mathematics. I've also been working in the product development, product design, product management, project management, and systems engineering space for quite a while now. I left IBM as a distinguished engineer in the Rational brand. I'm also one of many disgruntled former IBM engineers: I never could get IBM to build what I wanted in terms of applying machine learning to project management.

John Heinz and I co-founded Aptage together to bring those kinds of products to market. The idea there is that there's a lot that people aren't very good at, and it takes machine learning to really understand the dynamics of a project about which you don't have complete information. You're learning as you go. We're applying those sorts of techniques to project management at Aptage.

RM: When I met your co-founder, John, what I found really interesting was that in the early days of Talla, when we were just getting started building very broad-based bots to automate things and hadn't settled on customer-facing use cases, we got a lot of requests for project management-related AI tasks. It's a big area where people feel the work is way more burdensome than it should be. It's pretty interesting what you guys are doing.

You're also involved as a consultant to a company that works on next-gen explainable AI. I think it's a fascinating topic. Explain to the listeners: why do we need to worry about explainable AI? Why isn't it explainable today? And then, what's Pattern doing to help with that?

MC: Sure, well. First of all, this is a great time to be a mathematician. There's lots of really fun, interesting work to do these days. I feel real blessed about that.

Let's step back a little bit and understand the history of artificial intelligence and where we are today. I'll try to keep this brief. The idea of neural nets actually goes back to the 1990s.

Jeff Elman was one of the inventors of that, working in neuroscience at the University of California, San Diego. What this research was doing was just trying to build a model that worked like the brain. The idea was that neurons get input. At a certain threshold, they provide output. They're all connected to each other in very complex ways.

From that, intelligence emerges. So when the original neural nets were put out, in the late 20th century, people who were doing artificial intelligence said, well, this is way cool, but it's too bad these are way too hard to compute: way too much computational power is required to actually run them in any practical way. They went on to look at other kinds of artificial intelligence: rules engines, inference engines, and the like.

Since then, we crossed that threshold. People came up with the backpropagation algorithm, and we have enough computing available through GPUs and the cloud to actually start building neural nets that take inputs and reason about them through these connections, with training driving the thresholds at which different combinations of neurons provide inputs that create responses from others. Then, lo and behold, we can use this for a variety of classification tasks, particularly image recognition and voice recognition, which essentially classifies sound waves into words or sentences and the like. And with a lot of trial and error and a whole lot of ad hoc techniques, people have built some remarkably insightful neural net programs that do a very limited set of things.
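[Editor's note: for readers who want to see what "training the thresholds of the connections" can look like in code, here is a minimal sketch, not from the episode, of a tiny neural net learning the XOR function by backpropagation. The layer sizes, learning rate, and data are illustrative assumptions, written in Python with NumPy.]

import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, the classic problem a single neuron cannot solve.
# The trailing column of ones acts as a bias input.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of "connections" (weights), initialized at random.
W1 = rng.normal(size=(3, 8))   # 2 inputs + bias -> 8 hidden neurons
W2 = rng.normal(size=(9, 1))   # 8 hidden neurons + bias -> 1 output

learning_rate = 0.5
for _ in range(10000):
    # Forward pass: each neuron sums its weighted inputs and fires through an activation.
    hidden = sigmoid(X @ W1)
    hidden_b = np.hstack([hidden, np.ones((len(X), 1))])   # append a bias unit
    output = sigmoid(hidden_b @ W2)

    # Backpropagation: push the prediction error back through the connections
    # and nudge every weight to reduce it.
    grad_out = (output - y) * output * (1 - output)
    grad_hidden = (grad_out @ W2[:8].T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden_b.T @ grad_out
    W1 -= learning_rate * X.T @ grad_hidden

print(output.round(2))   # should approach [[0], [1], [1], [0]]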

And most of the things they do fall into two buckets. There's game-playing AI, which trains itself to classify the best move based on previous evidence. And there's the other kind of AI, which you train to do things like voice recognition and the like.

We don't know why they work. We just knew that if you build something that looks like a brain and it acts like a brain, it might work. Sure enough, it did.

This is sort of like what I described as the early days of building engines. People knew that if you boiled water, it could create steam, which would allow things to move. But they didn't know the laws of thermodynamics. A whole lot of trial and error went into the early days of steam engines.

We're at that stage now. We know that if you build these things, they'll work. But we don't know why.

Now you get to an interesting situation where, in some cases, they're applying these kinds of classification algorithms, things like image recognition, for diagnosing skin diseases from just pictures of the lesions. In some cases they turn out to do better than the trained doctors who diagnose these. The question is, should we trust them?

If we're going to start putting these systems in, can we trust that the answers they come up with are right often enough, or, when they're wrong, aren't so far wrong that they create danger? We should really understand how the decisions are made.

RM: Right. When we inspect these neural nets, what we see is, oh, here's a node that has a weighting of 0.4, and we have no idea what that means. This is problematic if, say, you're using a neural net for a credit scoring algorithm, and now you reject somebody for credit and can't really explain why you did it. Right?

So it could be biased. It could be gender bias, racial bias, other kinds of things. And we can't prove it one way or the other. And so how are companies dealing with things like this, other than just not using neural nets?
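[Editor's note: to make the point about opaque weights concrete, here is a hypothetical sketch, not from the episode: a small neural net trained on synthetic data with scikit-learn. The feature names and data are made up; printing the learned weights shows they are just numbers like 0.4 that say nothing about why any individual prediction was made.]

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))                    # pretend columns: income, tenure, debt, age
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # the hidden rule the model has to learn

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

# Each entry is just a weight -- nothing here explains why one applicant was rejected.
print(model.coefs_[0].round(2))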

MC: Well, the example you give is just perfect, because this is a national problem right now, where companies are trying to train neural nets to do resume inspection in order to help people make quicker, more automated hiring decisions. What happened is that the people they trained them with had biases, and those biases just got reflected into the neural net somehow. We don't know why.

We don't know where in the neural net to look. The answer is that, right now, until you can figure out where in the decision process the bias found its way in, you really shouldn't be using these neural nets for solving problems like that.

For medical purposes, for avoiding certain kinds of liability, or whatever, given that we don't know what these algorithms are really doing, you really can't rely on them for certain classes of things. This is a well-understood problem. DARPA and other research sponsors have asked: OK, can we figure out a way to look at the decision processes these systems are making, and not only understand the geometry of the classification space, but also understand the geometry of the decision space?

There are techniques for doing this. They're very speculative and privately held right now, but I think they will work. And they're really exciting for mathematicians, because they get to use a whole lot of deep math, involving topological data analysis and the like, to start understanding the topology and geometry of the decision space. I wish there were a simpler answer for you, Rob, but that's where we're going.

RM: Yeah. I mean, it's interesting. You don't hear people say very often that it's a great time to be a mathematician.

MC: I don't know why.

RM: I'm a hardware engineer by training. My original degree was many years ago. And it is kind of nice for me, as an entrepreneur now, that this last wave of technology was social, mobile, cloud, and it didn't require the strong, deep tech chops that this new wave of blockchain, IoT, and AI does. It's kind of exciting that the hard tech has come back.

What do you think about some of the other AI approaches? There are rumblings about non-neural approaches, a resurgence in symbolic logic programming, probabilistic programming, things that might make explanation easier. I read about some work at IBM where they're using neural networks to generate symbolic logic programs, so that the symbolic logic programs are explainable even if the neural nets weren't. Do you have any experience with any of those, or any thoughts on these new approaches?

MC: Well, first of all, neural nets, at the base of them, are really probabilistic anyhow. If you look at what a neural net does, it predicts the likelihood that some input belongs to some output class. And there are more advanced neural nets, Bayesian neural nets and the like, that make this more explicit. Those turn out to be both more accurate and more compute-intensive.
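[Editor's note: a small illustration of that point, not from the episode: a classifier's raw scores are typically turned into a probability over classes, for example with a softmax. The scores and class names below are made up.]

import numpy as np

scores = np.array([2.1, 0.3, -1.0])            # raw scores for classes: cat, dog, other
probs = np.exp(scores) / np.exp(scores).sum()  # softmax turns scores into likelihoods
print(probs.round(2))                          # -> [0.83 0.14 0.04]: the net is 83% "sure" it's a cat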

The symbolic logic thing is, I think, a dead end. I won't go there. The problem is that in the real world, it is very unlikely that we can apply sentential calculus or predicate calculus to real-world problems, because we never have the certainty that A implies B, period. All we have is that it's likely that A implies B, and you get the likelihood, the degree of belief, by using Bayes' theorem.

Now, on the other hand, there are techniques related to neural nets. There are some papers out there on the random forest method, where you can be much more explicit about the decision process, and people are writing some fairly scholarly papers on the relationship between random forests and neural nets as a path for explaining what the neural nets are doing. Because you can understand what the random forests are doing, in some instances.
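[Editor's note: as a rough illustration of why tree-based methods are easier to interrogate, here is a sketch, not from the episode or the papers Murray mentions, using scikit-learn's random forest on made-up data. Unlike raw neural net weights, the forest reports how much each input feature mattered.]

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (2 * X[:, 0] - X[:, 1] > 0).astype(int)   # column 2 is pure noise

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Per-feature importances: the noise column should come out close to zero.
print(forest.feature_importances_.round(2))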

It's more like that. Then there's The Book of Why, which talks about the new science of causality. If you actually read the book, the author [Judea Pearl] is the guy who invented the Bayesian net, not the neural net, the Bayesian net. And what his book really is, is a really nice explanation of Bayesian reasoning and Bayesian nets. But it's not about getting to strict causality--

RM: Right.

MC: --because you won't. So the answer is there are some really good techniques out there, and some techniques which I don't think will work.

RM: So the good things for our listeners, I think the lessons from this, are, number one, you have to be careful when you apply this technology, right--

MC: Oh, god, yes.

RM: In this case and everything else. But then, the second thing is that if you aren't able to apply it because of this explainability problem, there are people working on that. I want to shift gears a little bit before we run out of time. You and I had a conversation before we jumped on the podcast about the increase in frameworks and your fear of frameworks around AI.

We talked about some of that. This is in response to the shortage of data scientists and machine learning engineers: people are trying to make it easier and easier. But with that come some problems. Why don't you talk a little bit about what you don't like about those frameworks and what people should be concerned about if that's their main approach?

MC: Of course, so yes, I will. To build good AI right now, to quote the CTO of Google, we sort of don't know. Again, even with things like the AlphaGo program, which beats every human Go player pretty consistently and is probably the best [INAUDIBLE] player in the history of the game, we don't really know why it works. And if you don't know why something works, how can you live with it?

So what it takes is a lot of trial and error to get something that works. And you learn whether it works or not by looking at training sets and testing sets and whatever. And it takes a lot of hard work and a lot of time to build something that is reliable using the kind of technologies that are out there, like TensorFlow.

And the thought that people can just do this without understanding-- The people who do this well have good intuitions about, at least, the data they're working with. And after a while they'll have good intuitions about how deep a neural net needs to be, how connected it has to be, and so on, having done a lot of trial and error, just like the early days of engine engineering. People just got good at it without understanding the theory. And until you understand the theory, you can't expect that somebody can walk in, get a visual interface for building neural nets, and build something very practical. At least, that's my thought.

RM: I tell people that all the time. That when you start to build AI into your product, you have to think about it differently. It's not like engineering. It's not as well understood. If you need to scale from 10,000 users to a million users on your web application servers, we kind of understand conceptually what you need to do and how it works, and what impact it has.

MC: Exactly.

RM: It is not that way if you have a model that needs to incorporate new data or make better predictions or whatever. You don't know how long it'll take if you try to make it better and how far you can actually get with it. And that's a problem when you're trying to run a business the way that we've come to run software.

MC: I think you're absolutely right. One way to think about why this isn't engineering yet is that we have no really good model of how to measure how good one AI is compared to another. What we can do is start looking at how well it handles certain test sets after we've trained it. But that doesn't mean it's necessarily more intelligent; it just means we've optimized it for a certain training set. So it's sort of like, what is the IQ of your AI? I don't know. How do we know that, right?

For things like scalability of a cloud implementation, as you said, we have perfectly good measures, in terms of average response time, latency, and the like, to understand what we're engineering towards. We don't have that for AI yet.

RM: So let's bring it up to a little bit higher level of abstraction. A lot of our listeners, they're not technical people. They hear a lot about AI. They're listening to this show because they're very interested in learning about AI. And then they hear, like, oh, my god, there's so much that's not figured out. It's still more art than science.

How should you, if you're an intelligent, well-educated person, but not super technical, what should you pay attention to? How should you educate yourself about this stuff, or how should you think about applying AI to your company, dipping your toe in that water? What advice do you have for people getting started?

MC: I think the thing to do is to start out simple. You say, I want to apply AI to some space. The first question is: can you define the problem you're solving as a kind of classification problem?

I want to know if it's a sentiment problem. I want to know who likes our product. I'd like to divide the people using our product into categories. Can I predict, from various interactions, what category they would fit in?

If you can think of what you're trying to do as doing a better job of classifying a large set of people into useful sets, then there probably are AI techniques that could be very powerful for you.

That's what I would start with. Then, if you can characterize the problem that way, you bring in the data scientists. You explain what you want to do.

Maybe you suspect there are categories of people, or of objects, in your data set. You want to discover them, and then you want to be able to predict who's in which one. If you can define the problem that way, you have a good shot at the current AI techniques working for you. As for why people fit into various categories, you may or may not care. If you don't care, and a lot of times you won't, because it doesn't matter, you just need to know that they do, then you're in very good shape.
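[Editor's note: a minimal sketch of that framing, not from the episode: treat "who likes our product" as a classification problem. The texts, labels, and model choice below are illustrative assumptions, using scikit-learn.]

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["love this product", "works great", "terrible support", "waste of money"]
labels = ["likes", "likes", "dislikes", "dislikes"]   # the categories you care about

# Learn to predict the category from the words, then classify new feedback.
clf = make_pipeline(CountVectorizer(), LogisticRegression()).fit(texts, labels)
print(clf.predict(["great product"]))   # -> ['likes'] on this toy data; real data sets are far larger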

As I said, image recognition is a great example. You have a bunch of pictures out there. Some of them have cats. Some of them don't.

You'd like to classify which images have pictures of cats in them. You don't care how the thing, in the end, figured that out. You just would like to have a good way of predicting that.

That's the kind of thing AI is good at. If your problem looks like that, you should go for it. If it doesn't, then maybe you need more consulting or something.

RM: Do you think that using these products in the workplace is going to change a little bit how people behave, some of the workflows they use? For example, to use your image classification analogy: let's say I have a neural net, and it's classifying Item A and Item B, typewriters and mason jars.

Occasionally, it's going to get things wrong, which is not necessarily what we're used to from our software today. We're not used to probabilistic outputs. Do you think that the workforce has to adapt to, number one, understand the difference between an 88% level of certainty and a 97% level of certainty? And number two, are companies going to have to put in workflows that, if they catch an error, can feed that feedback back into the model so it can retrain and get smarter? And if so, are you seeing any of that at Aptage?

MC: Well, at Aptage we're solving a different kind of problem. We're doing something which is much more small-data, probabilistic learning around individual projects. That's a whole other category we can talk about. To answer your first question, there are two things.

One is you should think of AI as not replacing workers, but augmenting workers. So if you have a program that's 85% accurate, what you want it to do is to tell you when it's not sure. And then you have a person step in and do that. 

Let's say you're looking at X-rays. If the software says it's 99% certain it's found a cancer, that can go back to the doctor very quickly, and the doctor can look at it and say whether it is or it isn't. The point is, if the software says, I'm just not sure, then a person with deeper radiological skills should look at the picture and make a decision.
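[Editor's note: a minimal sketch of that augmentation workflow, not from the episode or from Aptage's product: act on confident predictions, and route uncertain ones to a person. The threshold, labels, and confidences are illustrative assumptions.]

def route(prediction, confidence, threshold=0.95):
    # Confident predictions go straight back to the doctor; uncertain ones get human review.
    if confidence >= threshold:
        return f"auto-flag: {prediction} (confidence {confidence:.0%})"
    return "send to human reviewer: the model is not sure"

print(route("suspected lesion", 0.99))   # confident -> flagged back to the doctor quickly
print(route("suspected lesion", 0.72))   # uncertain -> a radiologist reviews the image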

And the idea is that the radiologist now becomes much more efficient because of the machine augmentation. A lot of the work going on in AI is like that, and it's the kind of thing we're finding at Aptage. The reports we're giving out capture the kind of thing a good project manager would understand intuitively: where there's a problem and where there isn't.

What we can do is take an average project manager and show them what the really good project managers are seeing, and we can quantify that. That makes the project manager more efficient and more impactful, so we're essentially upskilling a white-collar job rather than replacing one.

The answer to your question is that there are various kinds of workflows for dealing with probabilistic outputs. And the thing is, the programs are always going to be probabilistic. As you said, you can train them to be better, but in the end there's always some level of uncertainty in the classification. To be efficient, to be useful, the product just has to be better than the average person, which, by the way, is in many cases a pretty low bar.

Beyond that point, the thing has become economically efficient, and it does augment. It's like the earlier Industrial Revolution, when we provided tools that made blue-collar workers more efficient. Now we're making tools that make white-collar workers more efficient.

It will change how people work. And we will have to create workflows not only to improve the probabilities, but to deal with the uncertainties of the outputs.

RM: Very well said, and thank you for your time today. Do you want to give out the best way to contact you at Aptage if anybody's interested, or is your website just aptage.com?

MC: It's www.aptage.com. People ask us why it's Aptage, and the answer is that it was kind of a cool-sounding name, and the domain name was available. How cool is that?

Secondly, I can be reached at murray@aptage.com. Love to hear from people.

RM: All right. Well, thanks for being a guest today. And we look forward to having you back on at some point in the future.

MC: Thank you, Rob.

Subscribe to AI at Work on iTunes or Google Play and share with your network! If you have feedback or questions, we'd love to hear from you at podcast@talla.com or tweet at us @tallainc.