Episode 33: Driving Value with IntrospectData CEO Patrick McClory

Host Rob May interviewed Patrick McClory, CEO and Founder at IntrospectData, where they help organizations better leverage their data and drive business-oriented insights and outcomes. Tune in to learn more about Patrick's background and what IntrospectData does, what dev-ops needs to change to make AI work, trends in AI adoption he is seeing at the companies he talks to, his opinion on if executives can take a wait-and-see approach with AI, and much more. 

Subscribe to AI at Work on iTunes, Google Play, or Spotify


Rob May, CEO and Co-Founder, Talla


Patrick McClory, CEO and Founder, IntrospectData

Episode Transcription   

Rob May: Hello, everybody, and welcome to the latest episode of "AI at Work." My guest this week is Patrick McClory, the founder and CEO of IntrospectData. Patrick, welcome to the show. And why don't you give us a quick overview of your background and what IntrospectData does?

Patrick McClory: Hey, thanks for having me. Glad to be here. My name's Patrick McClory. IntrospectData is an AI and ML consulting and product-focused organization. We're really looking at driving value out of all that crazy stuff that AI and ML seem to be in the market. We're really focused on helping organizations make sense of it and actually apply it to their business, instead of the great big science experiment it sometimes seems to be in the open source world.

RM: What were you doing before you started the company? Where did the idea come from?

PM: Prior to getting into the data space, I was really heavily steeped in the dev-ops world. Starting 8-10 years ago, I was in the Amazon partner channel. Went to work to help that organization build their professional services group. Built my own consulting firm from there. Focused on high-scale data use and engineering practices.

I spent some time working with Datapipe, then Rackspace, building strategic products for them. All along the way leaning back on my university experience in-- Indiana. I went and got a degree in psychology. The really interesting thing there was that it was a lot of heavy statistics, a lot of focus on testing and measurement in the neuropsychology world.

I kind of see things coming full circle, where now I'm using the high-scale engineering skills that we've built over the last 10, 12 years to start to really dig into data and make sense of it at scale and with greater precision and greater visibility overall. It's kind of cool to be able to bring those two parts of my background and my world together at the same time.

RM: You guys touch on dev-ops and AI. AI is a little bit of a different technology from the types of infrastructure that we've had to build in the past because you have these models now, and most of the stuff that you read about in the news is very research-focused, and doesn't think about things like, hey, models have to work in production systems just like anything else. They're subject to running on a machine that can keel over and die. They might use up too much memory.

What do you see happening in this dev-ops world? How is AI changing it, and what are the trends? What does dev-ops need to change to make AI work?

PM: I think it's a really interesting space right now, because I fully agree with your point that the AI and ML space really is very research intensive, very academic, is how I look at it. It's a very natural fit for me when I start looking at how you take these kinds of concepts into production, you have to start worrying about things, to your point, like versioning. Like actually looking at a deployment architecture.

How do we prepare and wrap that structure so that we can achieve viability? And do microservices become a software pattern that we can leverage and use in that ML model inference process? I've found, by and large, that the automation focus of dev-ops really helps on both ends of the spectrum. If you talk about the rapid iterative ability to cycle and test new models and tweak parameters, you've got things in Amazon's world, like their hyperparameter tuning in SageMaker. Taking that to heart and really digging into using those tools at a model building level really helps to increase and improve training outcomes and training accuracy.

To your point, that deployment piece is still something that, when you talk about putting it into production, it becomes tricky. The more and more we look at it, the more some of these more formal wrappers and models mature and come around. I'm continuing to see a trend towards easy wrapping of models around a microservices architecture. At least an API-first approach, where building API contracts to then service the back end of that becomes a great way to separate concerns. It becomes a great way to deploy those concepts. And, done properly, they can really scale very well.
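[Editor's note: the API-first separation of concerns Patrick describes can be sketched in a few lines. This is a hypothetical illustration, not IntrospectData's implementation; the contract fields, handler name, and stubbed model are invented for the example.]

```python
import json

# Stand-in for a trained model loaded from a registry. In a real
# deployment this would be the versioned inference artifact.
def model_infer(features):
    # Toy rule: "positive" when the feature sum is positive.
    return "positive" if sum(features) > 0 else "negative"

def handle_predict(request_body: str) -> str:
    """API contract: accept {"features": [...]}, return {"label": ..., "model_version": ...}.

    Callers depend only on this contract, so the model behind it can be
    retrained, versioned, or swapped out without touching the front end.
    """
    payload = json.loads(request_body)
    result = {"label": model_infer(payload["features"]), "model_version": "v1"}
    return json.dumps(result)
```

Wrapping a handler like this in any HTTP framework then gives the microservice boundary, and scaling becomes a matter of running more copies behind a load balancer.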

RM: Now, do you have a good view into some of the AI tools that are emerging and where things might be going with respect to market share? So to give you an example, if you look back a couple of years ago when TensorFlow came out, it was just one of many tools for building and running these kinds of models. And then it's like, well, it seemed like TensorFlow really started to grow and was going to be the clear winner.

Then I would say, from a lot of the people that I talk to that are doing the practical day-to-day stuff, it's kind of stalled. And some of the other platforms and tools are making a comeback. I'm just curious if you have any insight into that, and where you think all that might go.

PM: I see a couple of trends. I think I have to be careful because I'm both an engineer and a business-level consultant on multiple different engagements. And I think with many of these, I find that my role, and specifically how I approach this, has to change. And, as an engineer, I find that there's so much cool stuff coming out that it's-- I don't want to say it's just difficult. It is daunting to keep on top of the ever-growing landscape.

It's growing in two dimensions. It's growing in breadth in terms of different types of tools and different types of focal areas, and then in depth, where, to the point of TensorFlow, I've been using it for a number of years. And as it grows, it gets much deeper and much more diverse inside of that ecosystem as well.

On the engineering side, I'm constantly looking for the latest and greatest. I'm really focused a lot on natural language processing these days. I'm looking at Facebook's natural language tool set and the unique structure that they have set up to allow for real-time and in-place updates of models without downtime. Some of those operational concerns are really cool at that level.

On the business side, what I find is that the basic tooling, the basic stuff, is really still pretty revolutionary to businesses as they look to leverage the data they have under the hood to go in and do something interesting. And we could talk for days about the long tail of what big data was, and what caused it, and how organizations are really looking to even just basic ML and AI capabilities to make good on the mountain of data that they've been storing for the last 10, 20 years.

It's almost funny at some level where, on one hand, I'll play in the deep end with TensorFlow, and really dig into the depth of that tool set. And then in the next meeting or the next moment, I'm really just talking about basic NLP and basic entity structure, entity assignment to language, to do some real basic language and text analysis for organizations. So the value is enormous. And the opportunity is really, really there.

The first steps, and the ones that organizations are holding on to, in my experience, have been really just some of the basics to start these days. We're still in the early days on that side, I think.

RM: What are you seeing with respect to the companies that you talk to? What are their first questions or concerns? What are the first business problems they're looking to AI to solve? Or do they have a problem in mind, or are they just trying to take on AI? If you think about your entry point into an organization, what are you seeing there in terms of their early AI adoption?

PM: I'd say that there's a very, very small contingent of organizations that I talk to that actually knows what they want to do. It's very much a small group of people who have a focus when they start talking about AI inside the organization. I find more and more that organizations are feeling pressure to go in that direction. Whether it's from the IT organization, or it's from the marketing and sales organization to sort of make sense of what's coming or what's going on around them.

I hear much more often a desire to explore and understand. And I generally kind of go in with more of an exploratory approach, and find it to be almost a mini entrepreneurial experience, where I help organizations look at what they have and explore a couple of options to then really get serious about building that AI strategy. But I'd tell you most organizations, even in the sort of high end, big enterprise world are still very much looking to AI and ML to be the big win, but they're still struggling to understand how or where that can be their big win.

And, by and large, the first steps become very simple trend analysis at a data level or a metric level, and more of a basic NLP first step around-- whether it's email and marketing communications, or just language in general around how they talk about themselves, or how the media is portraying them. There's that first step in what we can do with these tools to then spark that creativity in those organizations to connect the art of the possible with what really is valuable to them as an organization.

RM: Interesting. When you think about technology adoption and what you're seeing in AI-- you've been around technology a long time. A lot of best practice has been, sometimes, to wait. Let's wait and get the bugs worked out. And you're not too far behind what other people have done. In fact, sometimes you're better off because you didn't have to work through all the early issues.

Do you think that's true with AI? Or, is AI technology going to be different? And so I would couch that in the context of, as a general business initiative for C-level executives, should they have a wait-and-see approach? Or a dive in head first and experiment approach to AI, and why?

PM: I think you've got a couple of other factors that help to push this more into a direction of-- I would call it more of an experimental or more of a speculative approach that I generally try to help organizations move into. And the big difference is that the cost of experimentation, because of cloud and the on-demand, as-you-need-it nature of pricing in that world, makes it significantly less risky to go experiment in this world. It's really an HR people-level cost, or a consultant cost, and a cost of storage and usage to understand and play around with these tools at a basic level.

For organizations that are already in the cloud, regardless of which provider, the tools are there to go and-- I don't want to put it lightly, but to literally go and play and do some discovery, and understand what they can do with that data. For organizations that aren't, there are some great ways to even do this-- you don't need a pile of GPUs to just get started, to try out even basic TensorFlow or NLP or other tool sets.

Sure, it may make it go faster. But at the end of the day, if you're looking to see how you really take that data you have in your hands that maybe you've been storing for a long time, or you have a larger influx of data coming down the pipe, understanding what to pay attention to and how to pay attention to it becomes a really big question when you're not trying to grow your staff. You're trying to kind of grow the business over time.

That's where some of those trend analyses, those anomaly detection algorithms, some of those specific sentiment analysis tools-- and even building your own sentiment analysis process, which is fairly straightforward with the right tools-- can really be a great first step, even in your data center. Even without a lot of data scientists coming to the fore to come and help you with it.
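[Editor's note: as a sense of how simple that first step can be, here is a toy lexicon-based sentiment scorer. The word lists are illustrative, not a published lexicon, and a real project would start from a curated lexicon or a trained model.]

```python
# Illustrative word lists; a real project would use a curated lexicon.
POSITIVE = {"great", "good", "love", "excellent", "happy", "fast"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "slow", "angry"}

def sentiment(text: str) -> str:
    """Score text by counting positive vs. negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Even something this crude, run over a backlog of support emails or press mentions, is often enough to start the "art of the possible" conversation Patrick describes.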

There are some great ways to take a first step to try it out, and then see how that can help you move forward, and see how much further you can go with it. I find, personally, that it's very difficult to explain to a business or to a C-level executive at an academic-- or even a presentation-- level what these tools can do for businesses. It's easy to say, well, yeah, it can help improve decision-making times. It can help, you know, bring a quantitative level of confidence to what may have previously been a qualitative decision, or a gut-level decision. Or just provide a more structured view or more salient view of what the data is telling you.

When you can actually rapidly put something like that in someone's hands, it really begins that art of the possible discussion. That's where I find organizations really start to take hold of these things. I am a big advocate for taking speculative first steps, even without a whole lot of intent to use it in production, to then begin that kind of deeper discussion around what is possible, how they can leverage it, and how they can move forward with those technologies.

RM: When you look out at the landscape, particularly on the technical side, but also on the business side, if you're so inclined, if there are entrepreneurs listening out there, what do you see that's not being built? Where do you see opportunities that people haven't taken advantage of yet, if any?

PM: I'd tell you that the thing that frustrates me most as an entrepreneur, as someone in the business world, and even as an engineer, is that nearly every tool out there-- and I'm talking about nearly every single AI and ML tool, data science tool, period-- is built for engineers and data scientists. That's great.

To the earlier point that we made, the idea that this market has been so academically focused, it's great to see things moving into the engineering world, where we're starting to leverage engineering practices to operationalize and productionalize these capabilities. I think that's an amazing move. It's good for the market. It's good for us. It's good for the further growth of these technologies.

What's really lacking is what I see as last mile technologies that connect businesses with the tools. There are a number of vertically focused capabilities out there per market that are interesting. So, like chat bots, and more intelligent language processing, more intelligent data trend analysis for-- pick your market. The oil and gas market, the retail market, the real estate market.

What I'm really interested in, and what I'm very much focused on these days, is putting these capabilities in the hands of the business. Case in point, things like sentiment analysis. The process of building a straightforward sentiment analysis model in an attended or supervised learning process, where a user is going to classify images or text in one way or another, whether it's classification or it's sentiment analysis.

The process of doing that isn't really that intensive on a data scientist. It doesn't take an engineer, or a data scientist, or even a technical person to do the work of saying, well, this term is important, or this term is not. Or in this image, this is a dog. This is a cat. It takes a subject matter expert to do that.

I think that the real gap in the market today is the space between where we are today, and allowing that business user to follow the right patterns to build their own-- to build their own models, to build their own process, to inject their subject matter expertise into models that they can apply to their businesses. And that's where I think this gets really interesting, is when we can open up the door to the business to start experimenting at the same level, or at a similar level and at a similar pace, as what dev ops was to engineering. To give them the ability to iterate and experiment and try it out and see what happens.

Then, since the cost of experimentation at that point should be much lower than the long-tail IT or long-tail data science workflow that we have today, they have the ability to go dream bigger, and play with these tools, and really take it from an interesting science experiment to incredibly impactful to the business-- a much lower barrier to entry and a much greater value to the business.

RM: I definitely agree. One of the key ideas behind Talla when we started the company was this idea that there are a whole bunch of things we could teach machines to do today that we don't have a data set to train on. And so the next thing that has to happen is you have to go through every role in an organization. And you have to build a tool that says, hey, let me capture part of what you're doing, particularly the stuff that would be fastest to turn into a model to automate some of your monotonous work, or help augment you and assist you as an employee.

I think one of the big trends here is going to be in enterprise software design-- UX/UI design is going to be about designing user interfaces such that they capture that information, and can do things with it.

PM: I mean, if we look at what a basic classification model is-- I think there are thousands, if not hundreds of thousands-- there are so many examples of this out on the internet that it's an easy one to talk about. But it's a pretty codified pattern. There isn't a lot of rocket science left in the process of preparing for and building a classification model. It's annotate your images or classify them, put them in a box, hit a button, and go with building the model.

All of the cool data science stuff-- I hate to gloss over it, but that's what it is, from a business perspective. All that cool stuff that happens under the hood is all fine and good. What the business is looking for is an outcome. As organizations start to encapsulate that and build it into services such that the business can focus on what it's good at, and the service can produce the results that the business requires, that's where this gets super interesting. And the more of those patterns we can establish and identify, the more valuable these services can be to the business.
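[Editor's note: the annotate-then-train loop described above can be sketched with a deliberately simple bag-of-words classifier. The centroid-style scoring is a stand-in for whatever model such a service would actually build, and the labels and examples are invented for illustration.]

```python
from collections import Counter

def train(labeled_examples):
    """Build per-label word counts from SME-annotated (text, label) pairs."""
    profiles = {}
    for text, label in labeled_examples:
        profiles.setdefault(label, Counter()).update(text.lower().split())
    return profiles

def predict(profiles, text):
    """Pick the label whose annotated examples overlap the text the most."""
    words = text.lower().split()
    return max(profiles, key=lambda label: sum(profiles[label][w] for w in words))
```

The subject matter expert supplies only the labeled pairs; the "hit a button" step is `train`, and everything else is plumbing the service can hide.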

RM: Awesome. So, last question that we ask people is, there was a debate-- sort of the end of the year last year-- that I like to call that Gary Marcus versus everyone else debate. It's not that extreme, but you know what I mean. And Gary Marcus said, hey, deep learning is not going to get us there. There are other things that need to happen to really get AI over the hump and make it more broadly applicable, and maybe move from this narrow AI that can play and master video games and simple tasks to something more broad-- to reasoning, and to these kinds of tasks.

There's a bunch of people that said, no, no, no. Deep learning has a long way to go, and we should stick with it. And it's going to take us further than you think. And I'm curious if you have an opinion on that debate.

PM: I actually agree with both perspectives. On the one hand, I would tell you that Gary's not wrong in that deep learning has a functional area that it lives in. As a single instance of a model, or a single concept, it's definitely limited, by definition. There's nothing that you can really argue against on that side.

I would say that deep learning is a part of the ecosystem. Even deep learning as-- we talked earlier about engineering practice. I look at multiple models and the amalgamation of multiple models as being, really, that next step. And deep learning is a core, critical piece of that process, where you break up models and inferencing, and then build solutions that take multiple outputs from multiple models and multiple structures and processes to then drive business value and drive an outcome.

I think that deep learning is absolutely a core part of that. I think that it is a critical part of that, but not the only part. I also think that, if I'm remembering that specific post and perspective properly, there was also a bit of a flavor in that argument around deep learning being done. Like, we've got everything out of it that we can, so let's move on. I think that's, again, a really academic view of it.

That might be, actually, not all that wrong. We may have kind of gotten deep learning, through the innovation cycle, to a point where it's now, really, an effort of operationalizing and productionalizing and driving to business value. I'd tell you that there's plenty of open road ahead for organizations to use deep learning at a business and a tangible level. We may have tapped out, or we may have passed the exponential growth phase of the technology capability. But the application to the real world is still very much an open road ahead.

RM: Awesome. Well, Patrick McClory, thank you for participating today. And if people want to find out more about you or about IntrospectData, what's the best URL?

PM: Yeah, it's pretty simple. It's IntrospectData-- all one word-- .com. And we are here to help if people are looking for a hand. Or we love just talking shop, too. So, we'd love to hear from people.

RM: All right. And with that, we will see you guys next week on the show. If you have guests you would like us to bring on, or questions you would like to ask, please email those to podcast@talla.com. And we'll see you next week.

Subscribe to AI at Work on iTunes, Google Play, or Spotify and share with your network! If you have feedback or questions, we'd love to hear from you at podcast@talla.com or tweet at us @tallainc.