Episode 14: What's Next? Predictions on The Future of Work and How to Prepare

Rob May and Brooke Torres interview Erin Winick, Associate Editor of the Future of Work at MIT Technology Review. Tune in to hear what Erin learned from automating someone’s job, predictions on the Future of Work, and how to prepare yourself and your business.

Subscribe to AI at Work on iTunes or Google Play

Rob May, CEO and Co-Founder, Talla

Brooke Torres, Director of Marketing, Talla

Erin Winick, Associate Editor, Future of Work, MIT Technology Review

Episode Transcription

Rob May: Welcome, everybody. You're listening to the AI at Work podcast. I'm Rob May, the co-founder and CEO of Talla. Talla's a platform for automating away work for customer-facing teams, like sales and support. If you like the podcast, we hope you'll check us out.

Brooke Torres: Hi, everyone. I am Brooke Torres, and I'm our Director of Growth at Talla and one of our co-hosts for the podcast. I'm excited we're joined by Erin Winick today. She is the associate editor for the future of work at MIT Technology Review. Erin, welcome.

Erin Winick: Thanks for having me.

BT: Why don't you give a little background on yourself?

EW: My education is actually in mechanical engineering, but I've pivoted, and now I'm writing about all topics, engineering and, specifically, the future of work. I'm the associate editor for the future of work for Tech Review. I do a lot of writing, specifically for our newsletter, Clocking In, which is a twice-weekly newsletter on the future of work, covering everything from how AI is influencing the workplace of the future to what AR is doing on the manufacturing floor and everything in between. I also run my own business, a 3D-printing jewelry company.

RM: Back in January, you wrote a piece called "Why Automating Someone's Job Drove my Interest in the Future of Work." Tell us about that experience automating someone's job, where you took it from there, and how you've used it at Tech Review.

EW: Like I mentioned, my background is actually in mechanical engineering. I did the normal engineering-student thing and had a lot of internships while I was in college. Specifically, I worked on the manufacturing floor at a variety of companies. At one of those internships, the task I was given for my intern project was to use 3D printing to automate a long-standing process on the floor, and it was one guy's job to do it.

After I was given the task, when I went and talked to him, I realized that I wasn't just automating the process; he wouldn't be needed anymore. As I went through the summer, I had to navigate talking with him about what I was doing and how that conversation would happen moving forward. It was a really awkward experience, but it was a really great one, because I feel like I got through to him in a really interesting way and was able to develop a relationship.

When I wrote this article, I actually reached back out to him and found out that he had lost his job. He had moved to Arizona and started a completely new career. He said he was really grateful for the experience, because I was the only one who was respectful and talked to him about what he was going through and the problems he was having at work. I think it's given me some perspective: automating someone's job isn't just a big, overall stat, like the "50% of jobs will be lost to AI and automation" numbers you see out there. It's a very personal thing, and it motivated me to want to tell a lot of personal stories about the future of work.

RM: Do you have any theories, from the work you've done on this, about what companies or governments or societies should do about it? Setting the skill-set piece aside, are there personality traits or psychological traits that you think are going to make this less of a problem for some people and more of a problem for others? Have you had any observations around that?

EW: Specifically, the guy I talked to said the biggest thing he learned from it was that, even if the company isn't supporting you in retraining, you need to take it on yourself a little bit. Personality-trait wise, I think it's self-starters: people who are willing to look at the landscape of the industry and say, these are the skills I need, and even if my company isn't providing the support, I need to try to build them in my free time. I know that's a lot to ask of someone. Anything we can do, public-policy wise, to help support that and even get people into that mindset, not just getting those skills, will be really beneficial.

RM: We haven't talked about it a lot on this podcast, but I assume you've probably looked into basic income and some of those things. Do you have an opinion on that, one way or the other?

EW: What we've said at Tech Review is that we need a lot more data. A lot of the experiments on universal basic income haven't really finished. Many get cut because they're really expensive; people know that up front, but once they actually start spending the money, the programs often get cut short. There are some coming up in the next year that I think will provide really good data. But I will say, in 2016 we wrote an article saying 2017 would be the year we'd know whether universal basic income was going to work or not. We weren't right, so I'm not going to make that claim with 100% confidence. Over the next few years, I think there's going to be more data coming in to really make a decision on that.

RM: One of the books I've been reading is actually just called Basic Income. It's a look at the history of the concept. I'm not finished with it yet, but the authors make a compelling argument for why it's a better approach than other forms of social welfare and why it might be the thing you need in the future. I haven't gotten to the part about justifying the cost, but they're foreshadowing that that is the problem: how do you do it at a level you can actually pay for?

EW: And getting everyone on board with that cost. Sometimes you can get private backers to fund it. Y Combinator is doing a lot of work on it, and the Canadian government has done some. But usually, just getting that funding is the hard part.

BT: Changing gears a little bit, we have a lot of executives, business leaders, and VCs who listen to this podcast. One of the things I'm curious about from your perspective is, how can business leaders prepare employees to work with an AI coworker? At the far end of that, take the guy you worked with: how could management have helped and supported the entire organization in that case?

EW: I think getting in early is key for a lot of areas of AI, and so is not just throwing people into the deep end. If you start using some AI tools early, then when the more advanced tools come along down the line and you actually have, like you said, an AI coworker rather than just AI augmentation or small pieces, people are already more accustomed to it. I think there has to be a gradual process to get people to feel comfortable with that.

RM: Do you have a strong opinion, having studied a lot of what companies are doing with AI and how they're rolling it out? We have a regular debate here with a lot of the people who come through about whether AI is a fundamentally different type of technology from a competitive-dynamics standpoint. What I mean by that is, if you were on Microsoft Excel 2003, and then Microsoft Excel 2007 came out, and you waited until 2010 to implement it, you were fine. You got all the features that everybody else had in 2007, and they might know how to use it a little bit better, but it wasn't like they had some big advantage in financial modeling over you.

We've had people argue both sides. Some will argue that AI is fundamentally different: as you get AI technology out there, the learning you're doing, the models you're building, and the data you're collecting, particularly if you believe some of these models will eventually have different levels of hierarchy, make it really hard for other companies to catch up. It's very similar to being 55 and having studied a topic for 40 years: somebody in their first year is going to have a hard time catching you if you're both still studying at the same rate.

Other people feel either that that's not true, that it's not going to matter and you're not going to be behind, or that eventually compute will be fast enough that you'll catch up pretty quickly even if you start behind. So maybe there's a benefit to being AI-first that plays out over the next decade, or maybe there isn't. Have you seen anything like that, or has it come up in any of the research that you've done?

EW: I think with where AI is right now, at least where we see the technology, yes, it is an advantage to get in early. A lot of it is institutional: learning where it fits into your operations, and, like you were saying, what to do with the data you get out of it and knowing what data to put into it. So much of AI is training these models, and the longer you can keep them running, the more effective they really do get.

The hard thing with talking about the future of work is that it is the future. I've already been wrong about things I predicted in the year I've been covering it. This technology is moving so quickly. So I think, honestly, a company could get really far ahead, or have a really strong product, if it let people get in now with no experience by building better experience flows and things like that. Those evolutions, as it goes on, will make it easier to jump in later. But still, getting in early seems like the right way to go.

RM: Well, in your job, the way you make a bunch of correct predictions about the future is to make a bunch of predictions.

EW: I promise I won't just hide the ones that I was wrong about, though. I will talk about them.

RM: What advice do you have for executives who might be listening to the podcast? Let's say you're a VP or you're a CIO at a logistics company, and you're thinking about AI, and you want to know where to get started. Have you seen this "I don't know where to start" issue be front and center for executives, and do you have any advice?

EW: I think part of it is getting a ground-level understanding of AI, because if you haven't really dug into it at all, there's a mythos around what AI really is and what it can do. Reading up and staying current with what's happening day-to-day in AI, with the news and where things are moving, is really crucial. That's the ground level: getting an understanding of what AI is and what it can actually do, not just what it feels like it should be able to do. Then you're able to accurately assess which of these things are actually good for the business and which aren't. Adding AI doesn't always make things better, and knowing, like I said, which direction to take it is the right way to go.

RM: That's something you could probably talk to as well, Brooke. When you came to Talla, you didn't know a lot about AI. You were coming from a different industry.

BT: Oh, yeah. I had no idea.

RM: How long do you feel like it took you -- I don't mean to have some level of intellectual mastery over the concepts, but at least to feel like, I'm not a moron about this space anymore, I'm starting to know some stuff?

BT: I think it definitely takes a while, because like you said, there's the technology side of things, which for me was new; I came from the consumer space before this. There's also a lot of uncertainty, even around user experience, around deploying it, around marketing it. How do you talk to people about AI? That whole learning experience takes a while. But if you know where to look -- and to plug your newsletter, InsideAI is a really good one that I know a lot of people read and get value from to help get up to speed -- it's obviously a worthwhile thing to do.

EW: For me, I came from a technical background, but it was mechanical engineering. Going through school I learned a little bit about coding and things like that, but I didn't dive into what AI and machine learning are and the differences between all the types. So when I got this job, I subscribed to a ton of newsletters, including some we do through Tech Review, and I got to talk to lots of people around me in journalism, which gave me that familiarity. But yeah, I had a bit of a learning curve as well.

RM: Well, that's what I love about the future of work piece that you're writing, is that there's still, to this day-- so much of the AI-related material is highly, highly technical, right? So much of the news is tied to research papers that come out. And most people can't understand the research papers, a lot of times. Even highly technical people will get on Reddit sometimes and ask questions and debate what they really mean. And so I think it's great to have these sources that are like, what does this mean for your business, and what are the practical applications, and what's real and not, what's theoretical?

EW: There are a lot of press releases out there that'll put AI in them, but I call it "AI" in quotation marks -- podcasts don't show quotation marks very well. They're throwing out all those buzzwords to try to get the attention of press that doesn't necessarily understand it. As a journalist, you have to build that understanding to be able to sort through all of that.

RM: My least favorite story in AI of the last couple years was this one where Facebook launched these bots, and they created their own language. And the media came out and spun it like, oh my god, they created their own language, and Facebook was worried and had to shut it down. And you're like, no, it's because their language was stupid, and the project didn't live up to its goals. They shut it down because it was dumb, not because it got super smart or whatever.

EW: A lot of times in journalism, things get aggregated: you write about the article that was already written, and then someone writes about that article. If you don't take a step back and talk to the original people, a lot gets lost in translation along the way.

RM: Yeah, that's a great point.

BT: Looking ahead a little bit, knowing that we'll all be right and all be wrong about some things, what are you most excited about for the future of work in regards to AI, and what are some of the predictions you would have?

EW: Personally, like I mentioned, I have a big interest in manufacturing and mechanical engineering, and I think there's a lot of opportunity to apply AI to more areas of manufacturing. There are a lot of machines on the manufacturing floor, and being able to connect them, combining that with big data, the Internet of Things, industrial AI, that sort of thing, is really interesting to me. Taking all the data those machines produce and using it to actually optimize processes, and connecting the manufacturing floor in a more efficient way, is what I find most interesting.

Having worked in these areas and seen the machinists who are still on the floor, there are questions about whether this will take away some of those jobs. But at the same time, there's a question of whether those people will still have a role in helping facilitate some of this, working with the companies to better optimize these AI processes. So I'm really interested to see how that plays out and which businesses are really able to push it forward.

RM: Did you follow the news about Rethink Robotics shutting down? A lot of the media has been asking whether there just wasn't a use case there, or whether they were simply too early and pioneered something that other people then came in and dominated. Do you have an opinion on that?

EW: I followed them for a while, honestly with a lot of interest, because there are a lot of research projects out there that use Baxter and Sawyer, their two main products, and probably still will for a while. From my perspective, I didn't see this coming at all, because I've seen a lot of use cases and research papers that took advantage of them. I think, more than anything, it showed that even though cobotics is growing, robotics is hard, especially combining it with AI in a lot of the areas that use pick-and-place robots and things like that.

Robots still aren't great. They're not dexterous; they're not fantastic at picking things up. There's still a lot of room to push that forward. I don't know exactly why they shut down, but I'd be interested to see whether other companies can learn from some of the lessons here, and how the ones that are still succeeding, like Universal Robots, perform over the next few years if they're able to take some of that market share and it boosts them a little bit. Like I said, it was really surprising to me to see that happen.

RM: We normally ask all the guests here, where do you fall on the spectrum of Elon Musk versus Mark Zuckerberg in the AI debate? Should we be worried it's going to kill us all, or is that overhyped or not? What's your opinion?

EW: I don't think we'll have any problems any time soon. No, I don't think it's going to kill us all. There are definitely worries about certain jobs going away and certain tasks being taken over, and we're still pretty early on, so I don't think we've seen the full extent of what it's going to be able to do. But more than anything, I think it's going to augment work. It's going to take away some tasks that people do, which could mean some workforces shrink somewhat. But in the near term, I don't think we have too much to worry about.

RM: Well, Erin Winick, thank you for joining us today. Those of you who are listening, if you have guests you'd like to see or questions you'd like us to ask, you can send an email to podcast@talla.com. If you like the podcast, please upvote it, share it with your friends. And we will see you next week. Thank you.

Subscribe to AI at Work on iTunes or Google Play and share with your network! If you have feedback or questions, we'd love to hear from you at podcast@talla.com or tweet at us @tallainc.