
Episode 45: Interview with Paul Roetzer - Accepting AI

by Hannah Rose | Feb 1, 2023 10:00:00 AM

We get to have a conversation today that has been on Doug’s list for a couple of years now. Today we get to introduce you to the AI Sherpa, Paul Roetzer! Welcome Paul, and thank you so much for joining us on the show today! For those of you tuning in today, Doug and Paul are going to get into what the future of AI looks like and what we can expect from it in the years to come.

Audio:


Video:

 

Additional Resources:

Connect with Paul Roetzer: 

Paul Roetzer is the Founder and CEO of Marketing AI Institute and is the Co-Author of Marketing Artificial Intelligence: AI, Marketing and the Future of Business. You can find Paul through Marketing AI’s website or his LinkedIn page.

Show Notes:

Doug was listening to the podcast Pivot the other day, and Scott Galloway was talking about a video app that will remove the selfie stick from your video so it looks like you have your own camera crew. According to Paul, it's going to be wild.

Paul's wife is an artist, and he would say she tolerates listening to him talk about AI. A lot of the time Paul solves problems by talking them through with her, but her own interest in AI is almost nonexistent. The other morning she told him she kept seeing all these AI things on Twitter. One of them uses AI to fix your eyes on the camera in a video, so even if you're looking away, your eyes appear locked on the lens. It's synthesizing your eyes and making them appear to be looking at the camera when you aren't. We have a tendency on Zoom to look at ourselves talking rather than the person we're talking to, so we're entering a world where, little by little, we won't know what's real.

Doug is a fan of tech and a fan of AI, even if he doesn't always appear to be. To start at the surface level: when we talk about AI, what exactly is it?

From Paul, the way his company has always defined AI is that it's the science of making machines or software smart. The simplest way to think about that in a business context, whether you work in marketing, sales, RevOps, or anything else, is to think about the software you use today. It doesn't ever get smarter, and it doesn't do anything you don't tell it to do. You buy HubSpot or whatever the tool is, and you write all the rules: you determine when emails are going to be sent, what they're going to say, and what the sequence is. It doesn't get better unless you learn more, take the data, and improve what you're doing yourself. Moving forward, all software, whether it's for email or marketing or anything else, is going to get smarter. It's going to take in information about what's happening, learn from it, and help you by making predictions about what will happen next. In one sense, AI makes predictions about outcomes, which you can then use to become a better marketer or salesperson. The other thing we're seeing is AI giving machines human-like abilities. Alexa is a good example: it hears you, converts bits of data into words it can work with, and then responds back to you in words. Language understanding and language generation are human capabilities embodied in a machine.
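To make that contrast concrete, here is a minimal sketch built around a hypothetical send-time decision. The data and logic are invented for illustration and are not anything HubSpot actually ships: the rule-based version only ever does what a marketer wrote down, while the learned version adjusts itself from each contact's past behavior.

```python
# Hand-written rule: the marketer decides the send time once, and the
# software never changes it, no matter what happens afterward.
def rule_based_send_hour(contact):
    return 9  # a rule someone wrote: "send all emails at 9 AM"

# Learned behavior: the software looks at what actually happened
# (which send hours this contact opened) and predicts a better hour.
def learned_send_hour(contact, open_history):
    """open_history: list of (hour_sent, was_opened) pairs for this contact."""
    opened_hours = [hour for hour, opened in open_history if opened]
    if not opened_hours:
        return rule_based_send_hour(contact)  # no data yet, fall back to the rule
    # Most common hour at which this contact actually opened emails.
    return max(set(opened_hours), key=opened_hours.count)

history = [(9, False), (14, True), (15, True), (14, True)]
print(rule_based_send_hour("doug"))        # 9, forever
print(learned_send_hour("doug", history))  # 14, learned from the data
```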

What's one stat that leads people to go over the top? That in the next five years, every piece of software will have AI built in. And why isn't the software we use today thought of as AI?

It helps to have some historical context: AI has been a discipline, an area of study, since the 1950s. For decades there has been a belief that we could build these capabilities into machines and that they could become superhuman in their intelligence and capabilities. Researchers kept running into roadblocks, specifically around data. It took a breakthrough in AI research in 2012, when a team competing in a challenge called ImageNet achieved a benchmark nobody thought they could reach, proving that machines were actually able to "see" within photos. That breakthrough created a rush for talent and technological capability among Microsoft, Google, and Amazon.

As a HubSpot customer, you may have one hub or you may have the whole thing. As far as Paul knows, HubSpot's website says they have around 10 AI tools, such as lead scoring. Historically, HubSpot has been very conservative in its AI play. HubSpot today is still largely a human-powered software platform: you do everything in it yourself. You can automate some things, but generally speaking, HubSpot doesn't get smarter on its own.

The challenge in recent years, going back to when Paul started the institute in 2016, is that CEOs of software companies didn't understand AI. Many still don't, and they're the ones who are supposed to be driving the direction of their companies.

Doug isn't denying the excitement around ChatGPT or its potential impact, but if you go out and talk to a hundred random people and two of them actually know what you're talking about, you probably have a highly educated group. When it's said that AI gives software human-like abilities, does it actually give it those abilities, or just the appearance of them?

It's the appearance. And it's math and probabilities. At the end of the day, it doesn't actually know what it's saying; it's predicting. The simplest way to think about ChatGPT is that it works with what are called tokens. The model generates text by predicting the next token, roughly the next word in a sequence, and it does this over and over again: it starts writing a sentence, looks at the words it has produced so far, and predicts what the next token should be. Does it actually understand what you're asking? Does it actually understand its own reply? Probably not. Does that mean there won't be versions in the next three to six months that do? Paul wouldn't bet against it. This stuff is moving insanely fast, to a point where we can't fully comprehend it at the moment.
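To make the "predict the next token, over and over" loop concrete, here is a toy sketch. This is not how ChatGPT is implemented: the probability table is invented for illustration, and a real model conditions on the whole sequence with a large neural network rather than looking up the last word.

```python
# Toy illustration of autoregressive next-token prediction: repeatedly
# look at the sequence so far and append the most probable next token.
# The "model" is a hand-made lookup table, not a real language model.
TOY_MODEL = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"email": 0.5, "lead": 0.3, "model": 0.2},
    "email":   {"is": 0.7, "was": 0.3},
    "is":      {"ready": 0.8, "short": 0.2},
    "ready":   {"<end>": 1.0},
}

def predict_next(tokens):
    """Pick the most probable next token given the sequence so far."""
    candidates = TOY_MODEL.get(tokens[-1], {"<end>": 1.0})
    return max(candidates, key=candidates.get)

def generate(max_tokens=10):
    tokens = ["<start>"]
    for _ in range(max_tokens):
        nxt = predict_next(tokens)
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens[1:])

print(generate())  # -> "the email is ready"
```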

There's something in humanity that goes beyond just our electronics and our neural connections; there's a creation, a shift, a thing we can see is there. People call Doug out, assuming he probably doesn't use Grammarly, but he uses it all the time. As the data comes in, creators will use these AI tools to enable more of their creations, with the redundant tasks taken care of.

Paul has a few thoughts on this, and some of the best conversations he's had in his 11 years of researching AI were with his daughter, who wants to be an artist when she grows up. She was 10 when Paul got access to DALL-E in June of 2022, which let him generate images and illustrations from a prompt. He debated showing it to her, because his fear was that it would take away her hopes for what she wanted to be when she grew up. He decided he needed to show her, and she gave him this really weird look. She gave him the prompt of a fat fluffy unicorn dancing on rainbows, and it output six illustrations of exactly that. She got up, walked out of the room, and didn't want to talk about it. About a month later she sat down to let Paul know that she didn't like AI because it's stealing people's imaginations: people put up artwork that comes from their imaginations, and the AI steals it. She never wants her artwork online, because she doesn't want AI to steal her imagination; that's what makes her who she is. That's the best definition of generative AI Paul has ever heard. That's what it does.

A few years ago, Douglas Rushkoff gave the closing keynote at a conference the Marketing AI Institute puts on, whose tagline was "More Intelligent, More Human." Paul's belief is that if we do this right, AI is a once-in-humankind opportunity to allow us to actually be more human: to free us up from a lot of repetitive, tactical things that don't create fulfillment in our lives so we can focus on the things that do. He has accepted that it's here. He gets that it does some things he wishes it didn't, but at the end of the day, he wants to understand it as deeply as possible so he can educate others about it. He wants AI to help redistribute our time and resources in the proper way, and he sees the institute he runs as a way to have a voice in that.

Do you think there will ever come a time when more than a certain percentage of whatever is being created is created by AI?

Paul thinks that by the middle of 2023 it will be expected that every publisher discloses exactly how they use AI. Just last week, a company was blasted for disguising its use of AI to churn out low-quality content purely for SEO value and affiliate links. There are scientific journals saying you cannot list AI as a co-author on a paper, because authors are responsible for the factual accuracy of everything they publish, and AI is incapable of taking on that responsibility. Paul thinks disclosure will initially be pushed as an expectation of ethical use, and in the not-too-distant future it will be regulated, with governments requiring you to disclose how you use AI.

There's going to be a spectrum of how these tools are used. If you're an average writer, or not a good writer, AI can make you a good writer. You can go in and say, "Write me a sales email for this product," drop the URL in, ask for less than 300 words, and require five bullet points. You can do that now with ChatGPT; that's a general use case with a standard rule set. What these programs aren't doing yet is learning from the last 20 emails Doug sent, the differences between them, and which performed best. If they could do that, the recommendations you get would actually be tuned specifically to you and the performance of your emails, on top of a foundation of what a great email looks like. That's where we're heading: personalization of these recommendations based on your individual performance. According to Paul, we're probably a year or two away from that being embedded in the platforms we use every day.
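As a rough illustration of that "general use case with a standard rule set," here is a small sketch that just assembles the rules into one prompt. The build_sales_email_prompt and call_llm names are hypothetical; call_llm is a placeholder for however you actually reach the model (pasting into ChatGPT, or an API client of your choice), not a real library call.

```python
# Sketch: turn the rules above (product URL, under 300 words, five bullets)
# into a single prompt string you could hand to a language model.
def build_sales_email_prompt(product_url: str) -> str:
    return (
        "Write me a sales email for the product at this URL: "
        f"{product_url}\n"
        "Requirements:\n"
        "- Keep it under 300 words.\n"
        "- Use exactly five bullet points for the key benefits.\n"
        "- End with a single, clear call to action."
    )

def call_llm(prompt: str) -> str:
    # Placeholder: send `prompt` to whatever model or tool you use
    # and return its reply. Intentionally left unimplemented here.
    raise NotImplementedError

if __name__ == "__main__":
    print(build_sales_email_prompt("https://example.com/product"))
```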

What does training mean as it relates to the more powerful applications of AI? 

Sticking with the example of lead scoring: HubSpot had an early lead scoring capability, and it wasn't very good. There are companies out there whose entire business is building lead scoring models. If you buy an AI lead scoring solution, it comes pre-trained: the vendor has training data that tells them what's predictive, and based on your industry or the product you're offering, they have a starting model of what a good lead likely looks like for you. The key, though, is that if you're buying real AI from a real AI company, the learning starts the day you sign on and put that script on your site. Then it starts learning from your data. It might need a hundred thousand conversations, or let's say 5,000 conversations, to fine-tune that score to your business model, your customers, and your sales process. That's acceptable, but you have to go in knowing that you're buying it today and it will get better over time. You wouldn't advise your salespeople to rely on it yet; you'd advise them to help tune the model. It will learn.
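As a minimal sketch of what "the learning starts the day you sign on" can look like mechanically, here is a self-contained example that retrains a simple model as your own outcomes accumulate, so the scores gradually tune to your data. The features, synthetic data, and monthly retraining cadence are all invented for illustration; this is not how HubSpot or any particular vendor actually works.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_batch(n):
    """Fake leads: columns are [pages_viewed, emails_opened, company_size]."""
    X = rng.normal(size=(n, 3))
    # Invented ground truth: leads convert when engagement is high.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)
    return X, y

X_hist, y_hist = make_batch(200)       # the vendor's starting training data
new_lead = rng.normal(size=(1, 3))     # a lead you want scored today

# Retrain each "month" as your own closed-won / closed-lost outcomes accumulate,
# so the score becomes increasingly tuned to your business.
for month in range(1, 4):
    X_new, y_new = make_batch(100)     # this month's outcomes from your CRM
    X_hist = np.vstack([X_hist, X_new])
    y_hist = np.concatenate([y_hist, y_new])
    model = LogisticRegression().fit(X_hist, y_hist)
    score = model.predict_proba(new_lead)[0, 1]
    print(f"month {month}: lead score = {score:.2f} (trained on {len(y_hist)} leads)")
```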

What often happens with lead scoring as an initial pilot is that a company buys the thing for a thousand dollars a month for its sales team, and then the thing sucks: the last ten leads it said were good were terrible. So they stop using it, because there was no education around what it's able to do and how it gets smarter over time. Most of the objections people raise come down to an education issue: what are its limitations, and how good can it get with proper training and some patience on our end as the consumer?

Doug knows a lot of executives who already have a lot on their plate and are now also trying to keep up with AI. Now they need to know and understand the future of AI too. What is that? Where do they find the time? Where do they find the capacity?

The simplest analogy is to rewind to 1995 and imagine trying to run a sustainable, profitable business in the years ahead without the internet. Now imagine a future where AI isn't infused into the company: Paul doesn't think you have a company in three to five years. He really sees AI as foundational to every business, the way the internet became foundational to every business.

Paul advises people to approach this from two angles. One is the use case angle: what are some quick wins? An AI writing tool, for example, might save a team 50 hours a month. What you need to do simultaneously is the education, really understanding what's possible, so you start looking at problems differently. Then you look at your organization and ask: what are the core things we're constantly trying to solve for ourselves or our clients that AI could help us solve differently? One thing Paul did with his company was take a look at their content to see how they could completely reimagine their content strategy. They're picking the business apart piece by piece and asking what a smarter version of each core piece looks like, because if they don't, someone else will come along and do it.

Paul realized while researching and getting into AI that he overestimated how quickly AI would advance and he underestimated the total impact it was going to have on business and society. He talks to enough people on the inside to know that there are things coming that are mind blowing.

Next Steps: