Episode 22: Marketing New Technologies with Gabi Zijderveld

In this episode of AI at Work, Rob May sits down with Gabi Zijderveld, CMO at Affectiva. Tune in for a fresh perspective on marketing new technologies and Gabi's lessons learned on communicating your business's answer to the question, "So what?".  

 

Subscribe to AI at Work on iTunes or Google Play


Rob May, CEO and Co-Founder, Talla


Gabi Zijderveld, CMO, Affectiva 



 


Episode Transcription

 

Rob May: Welcome to the latest edition of AI at Work. I'm Rob May, co-founder and CEO of Talla. My guest today is Gabi Zijderveld, the Chief Marketing Officer at Affectiva. Gabi, welcome. Why don't you give us a little bit about your background and then tell us what Affectiva does?

Gabi Zijderveld: Thanks for having me, Rob. It's great to be here. I've been in high-tech, gosh, over 20 years now, mostly in the Boston area. I've worked for local technology companies, typically in product management and product marketing roles. A bit earlier in my career, I did a lot of international work, international channel management, which, at the right time in your life, is super awesome: you can travel for work, see the world, and work with interesting people. I did that for almost 10 years.

Now I'm happy to be working at Affectiva, a startup that spun out of MIT Media Lab. We've been in market with commercial products for about nine years now. Essentially, what we do is analyze all things human, especially human emotions and what we refer to as cognitive states such as distraction, drowsiness, and, hopefully in the near future, frustration.

We do that by analyzing face and voice, using machine learning and massive amounts of data. Of course, this is what we'd like to call AI technology or, even more specifically, artificial emotional intelligence, or emotion AI. We actually coined the term.

We kind of seeded that market. Emotion AI now is a thing and I'm very happy to be at the forefront of it.

RM: You guys even run a conference on it now, right?

GZ: Emotion AI Summit. We had our second one this year. That's definitely not the Affectiva show. It's more about educating the industry about the promises of this technology and the various applications of it.

Even though we have a small number of Affectiva folks presenting at the event, it's mostly about bringing in other people and letting them talk about what they can do with this technology and where they see it going. It's pretty interesting.

RM: Now, you guys were obviously very early on the emotion AI front, to be around for nine years. Your first product was that if I was going to make a commercial or something like that, I could have Affectiva sort of watch people's responses to the commercial and predict how they're going to feel. Is that accurate? What did you guys learn from that?

GZ: The first market and the first product we built, and to this day we're still very active in that space, was in the area of market research, specifically ad testing. You have all these big brands, advertisers, and their consumer insights people. They basically want to understand how target audiences are going to respond to brand content: specifically videos and video advertising, but also things such as TV programming. If you think about our technology, it's software-based. We use computer vision, as well, so we can access a camera.

Imagine that you were a participant in this study and you were looking at a video ad. You would get a URL. It would prompt you to opt in and consent to having the camera turned on and your video data recorded.

In the background, our technology would record frame by frame how you're reacting to that content that you're viewing. Not only you, they would actually probably go out to hundreds of people so they can get data in the aggregate and kind of slice and dice it by demographics. It gives them really deep insight into how people are reacting to that content so that they can improve or optimize the content.

Also, it allows them to make decisions about media placement, where do I spend my advertising dollars? Especially if you have a campaign with numerous ads, if you can test them and compare, that's all relevant. We've done a tremendous amount of work in that space.

I think we're close to having tested about 40,000 ads. We've analyzed, as of this week, almost seven and a half million faces in 87 countries. That has also allowed us to develop norms that folks can use to benchmark how they're performing by product category or geography.

These advertisers use our emotion data, if you will, because they've seen that data directly correlates with certain KPIs in advertising, such as sales lift and virality and the potential for certain content to go viral.

RM: Do you find that reading people's faces gives a more accurate result than asking them questions and the answers they might give on surveys? It's a little truer?

GZ: Yes, absolutely. The partners with whom we work in this space still use surveys to complement our data. But they're finding that our data is actually scientifically very accurate. It's also unfiltered, because, to your point, people often answer things the way they think they should be answering. How you're viscerally responding to content, that's something that just happens.

RM: Do you ever have any customers that use these for political advertisements?

GZ: It's really funny you say that. Yes. It was never, if you will, a revenue-generating space for us. But a number of years ago, we did do some testing of political ads. We've also done it for kicks and giggles and seen some really interesting results. A couple of years ago, especially, we did some analysis of ads, even of the political candidates that were out there campaigning, and how our technology picked them up. Not to get into the details of that, but it was interesting. It was fun. We kind of kept that for ourselves.

RM: Yeah, I'm sure.

GZ: Maybe if you come by our office, I'll show you sometime.

RM: I'll come over and get the scoop. Was there anything that you learned from doing this? You mentioned you've done faces across many countries and everything else. Has AI exposed any differences that surprised you? Like, “wow, people in this part of the world they respond differently to things than we would have expected, compared to people in this part of the world?” Have you uncovered anything counterintuitive like that?

GZ: You can see cultural differences in this data. There are a lot of things that we know from prior research. For example, notions of the “polite smile” in a country like Japan-- we completely see that in our data. Not only that, in more collectivist cultures, like Japan or Southeast Asia, we see that when people are in group settings, they tend to dampen their emotions.

Here’s what surprised us: if these people are in the privacy of their homes, if they're just doing a study and there's no one else around them, they're actually really expressive. In more individualistic cultures, such as North America and Western Europe, we actually see the inverse. When they're in group settings, they are very expressive.

It's probably all about wanting to profile your personal identity and jump out from the pack and all that good stuff that we do here on a daily basis. In more individualistic cultures, people are actually less expressive by themselves. That surprised us.

One thing that was funny was that we can also compare gender, so how men versus women react. Of course, we all know women are more expressive than men. The data shows that.

In the US, women are about 40% more expressive. I believe in France, it's 20%. Yet in the UK, curiously, it's equal. At one point in time, we mentioned that, and a British guy said it was because British men, contrary to popular belief, are really in touch with their emotional selves. No scientific explanation, but who knows?

RM: Which makes me think of, has anybody approached this emotion AI stuff from a dating perspective?

GZ: Yes, absolutely. Definitely in dating apps. Has anything been deployed that's effectively being used? No.

There's a number of companies that have approached us that were interested in maybe looking at our SDK to integrate it. Most of them were start-ups. This was almost like roadmap version 10.0, something they wanted to add in the future; they still had to get their house in order and get the technology out there. Early stages. But yeah, definitely people have asked about that.

RM: That's sort of the business where you got started. You mentioned your SDK. This is one of the changes you guys have made: you came out with an SDK so that anybody can incorporate emotion AI into their products, and you've started to focus more on the automotive industry and things like that. Right?

GZ: A number of years ago, we came out with our SDK, which was basically a way for us to package up our technology so that we could license it to others, so that they could, quote unquote, "emotion enable" whatever it was they were building: a cool app, a game, some kind of digital experience, or a device. That was extremely interesting.

Frankly, it served us quite well in terms of creating awareness and visibility for the technology. However, how can you focus? How can you drive scalable and repeatable revenue?

Basically selling your stuff to everyone to build everything is not a very viable path for a start-up. We were beginning to ask ourselves, can we really support all these different use cases? Frankly, with this SDK, we had people in education, in gaming, in retail, in health care. I'm sure I'm missing 10 markets or so, but really all over the place.

Every single use case was super interesting. A lot of it was also very early stages. We were kind of looking at this and saying, “Ok, this is really dispersed and distributed. How can we focus?”

At the same time, about a year and a half ago, we saw a significant uptick in interest out of the automotive industry, where basically every single large automotive manufacturer was knocking on the door, asking, “how can we use your technology in automotive environments?” There the use case was two-fold.

On the one hand, it was about driver monitoring: being able to understand, with our technology, things such as drowsiness and distraction, so that you can understand potentially dangerous driving behavior. You can design a vehicle in a way that it offers up some kind of intervention or adaptation when you detect that someone is drowsy or distracted. The other use case was really around understanding how everyone in the car, not just the driver but also the passengers, is reacting to the environment.

How are they enjoying that ride? What's that experience like for them? How can you improve that by adapting the in-cabin environment?

When you think about ride-sharing and the future of, let's say, robo-taxis, and even with luxury vehicles today, in terms of building a brand experience, it's incredibly important to understand what's going on with the people in the car. We saw all these automotive companies knocking on the door, coming to us for our technology. After some pretty rigorous market research, business planning, and opportunity assessment, we realized, "oh my god, there's no one else doing this, and we have a real opportunity here."

We've really started now to focus on the automotive market as our next growth market. We're still extremely active in market research. We have some great client relationships there. That's still a big market for us.

Frankly, we're moving away a little bit from just making our technology available to everyone for everything, because we just don't have the bandwidth to support it. It's as simple as that.

RM: You've mentioned a couple of times about new technologies and how early this industry is. I always think marketing new technologies is particularly challenging, because when you market things, particularly when you market to companies, you sort of want to understand the buyer's journey. Often, there is no buyer's journey yet.

People are figuring it out. They're figuring out what they want. They're figuring out the buying criteria. They don't know how to evaluate it, etc. 

What are some of the best lessons that you have learned about doing this for other AI companies? How do you take something that's maybe amorphous as a process and a market and all that kind of stuff? What kind of strategies have worked for you to help maybe bring some structure to that or pin down on something?

GZ: That's a really good question. I think it's simplicity, clarity, and lots of examples of the "so what?" We see this in technology all the time. Of course, we all have awesome speeds and feeds. We have great features, great capabilities. But you have to be able to articulate the "so what" of it.

Like, who cares?

Who actually would be willing to spend money on that? How is it going to help them make their products better or generate more revenue or make their clients happier? It's really all about value prop and being able to articulate that clearly and with simple language.

When we first got started in coining this term "emotion AI" and kind of carving out what that space was, we spent a lot of time focusing exactly on that. It was a content-driven approach, putting a lot of content out there. We got pretty organized about that, as well.

To this day, our CEO and I refer to our messaging as our talk track. We actually spend time sitting down and writing down, "What are the key points? What is the 'so what'?" Then we don't get wrapped up in speeds and feeds, but explain it in simple language. I have an example of that, too. Several months ago, we started working with a robotics company. Actually, I can name them, because we announced it: SoftBank Robotics. We're working on integrating our technology in the Pepper robot, which, for us, is kind of an extension of automotive, because autonomous vehicles, of course, are advanced robots.

I was talking to one of our product managers, because I also head up our product management team. The big news: we can now run on ARM architecture. I'm like, well, that is friggin awesome. Who cares? So what?

RM: Right.

GZ: The person was a little bit taken aback. I knew why it mattered, but I wanted them to tell me why this is good. It will run on ARM architecture. I'm like, well, who cares? Why would someone pay money for that? Well, it matters, because now we can run better, with a small footprint, on embedded hardware. Essentially, it means we can run on the robot.

I'm like, thank you! Why didn't you just tell me that, then? Let's not get focused on talking about the features in the technology, but always attach to that why it matters: why it matters to our client, why it matters to the market.

That's kind of my constant game: asking these "why" questions and the "so what" questions. Coming back to those best lessons, it's clarity, simplicity, and focusing on the "so what" and the value prop.

RM: There was a book I read years ago during the web 1.0 tech bubble. I think it was called Jumpstart Your Business Brain. It was by a guy named Doug Hall who'd worked for Procter and Gamble.

He had this model when you're selling new technology. He's like, look, you have to tell people three things, and they really need to be in this order. Number one, what's the overt benefit? What do they get from this?

Number two, what's the dramatic difference? Why you? Of all the people that provide this benefit, why you?

Then number three, what's the reason I should believe you? What's the reason to believe?

GZ: That's exactly it. What's the reason I should believe you? I can say whatever I want to say. It doesn't mean people are going to believe it.

For us, what was really important is also finding examples and proof points. Especially in the early days when we first got started with kind of that broader vision for the ubiquity of emotion AI, it didn't matter that these were not huge, big-name companies. I would take any case study success story or reference I could get.

Then I would milk it until the cows came home, seriously make the most out of it. Also, because, to this day, we still get a lot of incoming press interest, I was able to forge really strong relationships with these referenceable clients and partners, because I would give them something back. I would bring them into PR opportunities.

I would bring them with our CEO to speak at conferences. It wasn't me just asking for a good reference. It was us giving them something back, as well. That's worked well for us.

RM: For your reason to believe, there's Rana's background: her PhD, the fact that she has worked in this area.

GZ: Yeah, and we have MIT pedigree, as well. To this day, we have really strong and collaborative partnerships with lots of different folks at MIT. That lends a certain level of gravitas and credibility that's really valuable.

RM: Well, that's great. A lot of the people that listen to this show, they're executives at companies that aren't super technical. They are listening and trying to make sense of all this AI stuff. They hear about deep learning. They hear about machine learning. They hear about all kinds of things that they don't know what they mean. Then they hear people say that so much of it's over-hyped and it's crazy.

They don't know what to believe. Sometimes it's just a basic statistical technique. Is it really AI?

What's your general advice for people who, they're getting started, they're trying to make sense of it all? Where do they turn? How do they think about it? You have any frameworks or resources or ideas?

GZ: That's an interesting one. Obviously, there is a lot of AI hype and fluff out there, because frankly, these days, every single bit of technology, every piece of software is AI. So everything gets labeled as such.

What is the expression? You can't see the forest for the trees, or something? How can you make sense of it when everything is the same? How can you kind of work through that?

I think, first and foremost, it is educating yourself a little bit about what AI really means, what machine learning really means. And there are great resources for it. Frankly, your newsletter, "Inside AI" is awesome. That's a really accessible and simple way to get a good read on what's out there. There is credible and relevant information online to read up on. Just get a baseline understanding of what it means. If you haven't done that yet, I think I would get started there.

Also, look at what's going on in your industry. You don't have to reinvent the wheel. Why would you? That's stupid. Try and find out what your peers are doing. Let's say you're in finance or in health tech or whatever. Then go talk to people you know, former co-workers, or other executives you might know.

What are they doing with AI? How are they maybe deploying it in their organizations already? What are some of the solutions that they're using that they're actually getting value out of?

Look at what your competitors are doing. Again, why would you reinvent the wheel? If you're a super large corporation, maybe you have the luxury of putting together some kind of cross-discipline task force where you have every single department maybe do an assessment of where inefficiencies are, where they think they can make their work and their processes better. Maybe there is an AI solution out there that can help with that.

There's lots of AI research out there and amazing innovation potentially being developed and deployed. But it's all early stages. Maybe also focus on real, applied AI solutions. What are the companies that actually have legitimate products and legitimate customers and can give you examples of how those customers are benefiting from it?

Talk to those customers. Call them up. Get insight.

RM: Well, that's what's so interesting, I think, about your technology. It is there. You can put it into your product. It works. It's been there a long time. You have big data sets. So much of the AI news is coming off of something out of DeepMind, something out of Google Brain, something out of Facebook, where it's like, they've published a paper. And it had a synthetic data set under a very controlled situation. It's like, you can't get that to work that way in a product if you tried.

GZ: Actually, even within our organization, we distinguish between research and applied AI. We have folks-- and in our R&D, it's even a rotational program. We have folks that work on research and futuristic stuff. We have a team that's focused on what we call applied AI.

What is the AI technology that's going to be deployed in a real-world production product? For example, when we think about automotive, how do we get all these classifiers and models to run on the edge, on better chip sets, in a vehicle, with really low cost margins, with super high accuracy, meeting all these safety requirements and regulations? That's not trivial.

RM: I am a big advocate for and an investor in a couple of neuromorphic hardware companies, because I think they're going to transform AI at the edge. I think that's going to be a big important wave that's coming.

GZ: I have one more piece of advice for executives starting in AI. Technology is not the answer to everything, nor is AI. If you're doing something in your company and an old-fashioned ruler does the job, then stick with the ruler. Maybe you don't need AI. So my advice would be, don't overthink it, or don't want to over-engineer it too much.

RM: It's that old joke about how NASA spent all the money to develop the pen that writes in space and underwater and upside down. And then it's like, the Russians use a pencil.

GZ: I do love that pen, though. I had one, gosh, a long time ago: that silver-looking space pen. But, exactly. To that point, millions were spent designing that thing. Then, oh my god, the Russians were better at it. They used a pencil. Awesome.

RM: Looking ahead, what is 2019 going to bring? What are your predictions for AI for the coming year?

GZ: A few thoughts. Obviously, ethics in AI is already a really hot debate. It's not going to go away. There's two aspects of that that I think will get a lot more scrutiny in the next year.

The first is the issue around algorithmic bias. That, in our mind, has to do, first and foremost, with data. If you do not train and validate your algorithms, or your machine learning models, whatever you want to call them, with real-world data that's representative of the people who will ultimately use those technologies and of the use cases in which they will get deployed, then, sure: if you use synthetic data sets in labs under optimal conditions, you'll get super high accuracy. Awesome. Good for you.

But then, with applied AI, you deploy it in the real world and it's not going to work. We see it in the news every single day: massive failures with technologies, for example, facial recognition technologies that don't recognize people who are representative of real demographics, because the algorithms haven't been trained on that, because they don't have the right data. I think that's going to get a lot more scrutiny: a lot more scrutiny on the data that's being used, how that data is being collected, and how that data is being annotated, for sure.

It's not just about the data. I think it's also diversity of thinking and diversity in the teams that build these algorithms. Another saying goes, we build what we know. If you have a team that's, to mention another cliche, all developers that are 20-something white males, then, yeah, maybe you build what you know.

You need to have diversity in your team that's building these technologies. That's diversity in age, diversity in gender, diversity in education, diversity in life experiences. Yes, you need machine learning scientists. Yes, you need data scientists. Yes, you need computer engineers. But, you can round out your team with other backgrounds. I myself got my graduate degree in art history. Here I am, 20 years later, in high tech.

RM: Well, it's funny, being here in Boston and being in the start-up scene. So many people went to Harvard and MIT and Stanford. They're all great schools. I went to University of Kentucky. I went to a basketball school. I've done quite well.

GZ: Yeah, exactly. That's a great example. We actually look for that here.

RM: The way we deal with that at Talla, because this is a big pet peeve of mine, is everybody looking for this cookie-cutter person that has this perfect resume. I do agree that you need the intellectual diversity. That comes from people having different life paths and life perspectives.

What I always tell people is, look, you never change the bar for the final hire. You need somebody who can do the job, whom you believe in, who's a culture fit and everything else. But when we get somebody in with a weird resume, I encourage people to bring them in.

GZ: Oh, my god. I'm so with you on that. If I'm hiring in marketing or in product management, I get all these resumes with MBAs, because it's getting commoditized. These days, everyone has to have an MBA. If I find a really quirky, weird resume, especially if they've done interesting things and especially if they haven't done an MBA, I want to talk to that person. Like, why did you choose that path? Why did you do these things? Why are you now going after a technology job? Why is it that you chose not to do an MBA? I want to hear that story.

RM: One of the companies I'm invested in is a company called LinkSquares. They do natural language processing for contract analysis for legal. They're here in Boston. Most of the team used to work for me at my last company. And the CTO over there is a guy named Eric Alexander-- one of the best developers I ever worked with. And Eric never finished college.

GZ: Good for Eric. Awesome.

RM: Nobody even cares anymore.

GZ: I'm married to an Eric who never finished college. He's super smart. It's interesting, too, because we have these discussions internally, as well. You write job descriptions. You write the requirements. You want to check all the boxes. Sometimes the right person is the person that doesn't check all the boxes.

RM: It's hard, because if you don't put some requirements, you'll get every Joe and Sally applying in the world. It's a tough problem.

GZ: It's a tough one to navigate, for sure.

RM: Maybe AI will help with that someday soon.

GZ: As a matter of fact, AI is already helping with that. We have a client using our technology called HireVue. They have an AI platform that helps in hiring, especially in the area of video recruitment. Think about really large corporations that hire massive numbers of workers, like the big box retailers or airlines. Going through paper resumes is incredibly challenging: how do you filter down 5,000 resumes to the five people you absolutely need to talk to? So they use an AI-based approach to do that, with massive amounts of data, of course.

RM: Particularly when, for some of those roles, the most important criteria is, how will you interact with customers? What's your personality like? Can you build rapport?

GZ: Exactly. That is super important if you have a client-facing sales role, maybe less so important if you have engineers sitting in the back office.

RM: Well, Gabi, thanks for coming on. If people want to know more about Affectiva, what's your website? What's the best way to get a hold of you?

GZ: www.Affectiva.com or on Twitter @Affectiva, or find us on LinkedIn.

RM: Thank you, everybody, for listening. If you have guests you'd like to see or things you'd like us to talk about, send your requests to podcast@talla.com and we will try to work them in. Thank you for listening, and we'll see you next week.

Subscribe to AI at Work on iTunes or Google Play and share with your network! If you have feedback or questions, we'd love to hear from you at podcast@talla.com or tweet at us @tallainc.