Cyber Crime Junkies

Attack of the Deepfakes. New AI Risks for Business.

Cyber Crime Junkies. Host David Mauro. Season 5 Episode 64

Exclusive interview with Perry Carpenter, a multi-award-winning author, podcaster, and cybersecurity expert. Perry's latest book, FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions, offers invaluable insights into navigating the complexities of AI-driven deception. 

 

Send us a text

Have a Guest idea or Story for us to Cover? You can now text our Podcast Studio direct. Text direct (904) 867-446

Get peace of mind. Get competitive. Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com
 
Imagine setting yourself apart from the competition because your organization is always secure, always available, and always ahead of the curve. That’s NetGain Technologies – your total one source for cybersecurity, IT support, and technology planning.

🎧 Subscribe now http://www.youtube.com/@cybercrimejunkiespodcast and never miss an episode!

Follow Us:
🔗 Website: https://cybercrimejunkies.com
📱 X/Twitter: https://x.com/CybercrimeJunky
📸 Instagram: https://www.instagram.com/cybercrimejunkies/

Want to help us out? Leave us a 5-Star review on Apple Podcast Reviews.
Listen to Our Podcast:
🎙️ Apple Podcasts: https://podcasts.apple.com/us/podcast/cyber-crime-junkies/id1633932941
🎙️ Spotify: https://open.spotify.com/show/5y4U2v51gztlenr8TJ2LJs?si=537680ec262545b3
🎙️ Google Podcasts: http://www.youtube.com/@cybercrimejunkiespodcast

Join the Conversation: 💬 Leave your comments and questions. Text us at the number above. We'd love to hear your thoughts and suggestions for future episodes!

 


 

Topics: cybersecurity, deepfakes, AI, disinformation, social engineering, media literacy, human behavior, technology, risk management, awareness training

 

New AI Risks for Business

New AI Risks For Business, role of ai in business risk, ai risks for business, new deepfake risks, ai impact on business, ai deepfake explained, detecting fake videos, defending ai cyber attacks, new ways ai is used in social engineering, ai risks in social engineering, ai used in social engineering, emerging technologies changing the world, ai, artificial intelligence, deepfake, deep fake, social engineering, social engineering risks, new ways to reduce risk of deep fakes, how are deep fakes made, how are deep fake videos made, how are audio deep fakes made, how is ai making it harder to detect deep fakes, ai implications in cyber security today, best policies to limit cyber liability, how ai can be regulated and made safer, how ai can be used for fraud attacks, how ai will affect cyber security, new tips on how to create a security culture, new ways to limit cyber liability, ways to limit cyber liability, ways to stop social engineering

Chapters

00:00 Introduction to Cybercrime and Human Behavior

06:23 The Evolution of Technology and Deception

12:13 The Role of AI in Business and Society

20:14 Detection vs. Creation: The Ongoing Battle

26:07 The Exploitation Zone: Technology vs. Human Adaptability

32:57 Defending Against Deepfakes in Small Businesses

43:15 Creating a Culture of Skepticism

49:28 The Future of AI and Deepfakes

D. Mauro (00:01.813)
Welcome, everybody, to Cybercrime Junkies. I am your host, David Mauro. And today we have the pleasure of speaking with Perry Carpenter, a multi-award-winning author, podcaster, and cybersecurity expert. With over two decades of experience in the field, Perry has dedicated his career to understanding how cybercriminals exploit human behavior. He currently serves as the chief human risk management strategist at KnowBe4,

the world's largest security awareness training and simulated phishing platform. Perry's latest book, which we're very excited to touch on, is right here. And it is FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions. You've got some invaluable insights, and we welcome you to the studio. Perry, thank you so much for joining us.

Perry Carpenter (00:57.454)
Thank you so much. It's great to be here.

D. Mauro (01:00.277)
Yeah. So tell us, how has the reception been on FAIK? I mean, it seems like it is available everywhere. I see it at, like, Barnes and Noble in downtown New York. It seems like it's catching on, touching the lives of a lot of people who don't normally even understand or embrace, you know, technology and AI and things like that.

Perry Carpenter (01:20.963)
Yeah.

Perry Carpenter (01:27.82)
Yeah, I think it hit at the right moment, I guess, would be the best way I would describe it. I've always been one in cybersecurity who's kind of preached to the choir a lot. I've talked to a lot of cybersecurity awareness professionals, and that's been my primary audience. This was the first book that I wrote for the general public,

D. Mauro (01:33.397)
Mm-hmm. Yeah.

Perry Carpenter (01:51.758)
and really wrapped up a lot of my mindsets and thoughts and takeaways that I hoped anybody in the world would have, regardless of whether they're into cybersecurity or technology or not. And luckily, it seems to have found a pretty good place there. It could always be better, so if you've not yet picked up the book, please do so. But yeah, I've been really, really happy. People find it approachable. People find it actionable.

And it seems like it scratches a definite itch.

D. Mauro (02:23.913)
Yeah, absolutely. So right after the floodgates opened on generative AI, this book comes out, and it's talking about deepfakes. So for listeners who may not even be aware of what deepfakes are, I want to go into that generally, from a high level. I've heard you do this in other public speaking engagements as well as on some of your other podcasts.

Perry Carpenter (02:43.704)
Yeah.

D. Mauro (02:53.417)
But how did you come up with the name FAIK?

Perry Carpenter (02:57.248)
So yeah, that's a good question. And I always want to, when I have time, make sure that I give credit where credit is due. So I have a little side business called Eighth Layer Media that I do some of my podcasting and writing and other work under. And my creative director, a really great guy, a really smart, creative person. His name is Mason Amadeus.

D. Mauro (03:05.273)
Ha

Perry Carpenter (03:24.646)
And I was telling him about this book I was working on. I'm like, I want a really short, snappy title that everybody can remember. And like you do, you toss around a few ideas, and then, as an almost throwaway idea, he just typed fake, spelled F-A-I-K, in our Discord channel. And immediately it was like, that's exactly it. Because you look at a lot of successful books and they're always one- or two-word titles.

D. Mauro (03:36.319)
whiteboard it.

D. Mauro (03:48.8)
Yeah.

Perry Carpenter (03:53.308)
The subtitle can be whatever, but you want something really quick, something that just snaps into place in your mind. And FAIK seems to be great for that. I will say there's been an unintended consequence with that that neither of us saw coming. Neither of us thought or remembered that FAIK, F-A-I-K, also stands for "For All I Know." And so...

D. Mauro (03:55.05)
Right.

D. Mauro (04:08.029)
How so?

D. Mauro (04:19.595)
That's correct. Yeah.

Perry Carpenter (04:21.324)
And we had never thought of that. We were just always like, well, it's like fake, but there's AI in the middle of the fake. So it makes a lot of sense, because we're talking about deepfakes and everything being fake. But three or four people came and they were like, what does that stand for? Does it stand for "For All I Know"? It's like, well, it could. It actually works well in that context. Yeah.

D. Mauro (04:27.369)
Right.

D. Mauro (04:39.609)
I was going to say, that's a great play on words too. Like, that's germane to the topic itself.

Perry Carpenter (04:47.638)
Yeah, so exactly. Give us credit for that as well. Thanks.

D. Mauro (04:50.643)
Yeah, that's great. So let's back up just a bit. You've been in the industry for a number of years. I always like to point out that 20 years ago, we didn't have to be. I mean, we weren't so heavily dependent on the technology that we use today. We had two versions of our lives. We had computers in the office, let's say, right? But we still had our physical world. And should the computers...

Perry Carpenter (05:14.434)
Mm-hmm.

D. Mauro (05:18.667)
Should there be an outage, you know, whatever the cause may be, right, we could still function. We could still make payroll, deal with clients, address concerns. But today it's just not the case. Anybody who's been to a healthcare facility or a doctor recently knows that the nurse doesn't come in with your chart and papers, right? It is all digitized, and that is across the board in all the different industries.

Perry Carpenter (05:25.826)
Yeah. Yeah.

Perry Carpenter (05:41.368)
Mm-hmm.

D. Mauro (05:50.8)
But how did you first approach or decide to go deep on deepfakes and AI? And what was the transition like? You know, 20 years ago, the dependence on technology didn't exist as much as it does today. Today it's interwoven into the fabric of everything we do. So...

Perry Carpenter (06:02.273)
Yeah.

Perry Carpenter (06:14.806)
Mm-hmm.

D. Mauro (06:18.655)
why the significance on the deceptive piece and the AI generation? I mean, I see it, but for most business people who are just kind of catching on to this, like, what is it that inspired you to go very, very deep and create this? I mean, I think it's just the quintessential piece on deepfakes. Like, you have absolutely all the research,

Perry Carpenter (06:25.624)
Yeah.

Perry Carpenter (06:43.47)
Appreciate it.

D. Mauro (06:46.807)
and it's all explained in such an easy way. Like, it is so easy to understand. And it's just great practical stories. The fiction that you have in the beginning of every chapter, it just reads so smoothly. But what I'm trying to get to is, like, what was that spark where you're like, we need a book on this? Like, what?

Perry Carpenter (06:50.028)
Mm.

Perry Carpenter (07:02.914)
Yeah.

Perry Carpenter (07:07.511)
Yeah

D. Mauro (07:13.767)
You know, what was that? Was there an event, or was there just a culmination of things you were seeing?

Perry Carpenter (07:14.198)
Yeah, so...

Perry Carpenter (07:20.014)
Kind of a culmination and an event. I had been doing this since before even the ChatGPT moment. So November 30th, 2022, when ChatGPT came out and everybody kind of had this wow, things are different type of moment. Months or maybe even a year or so before that, there were hints of that, because you could access what would become ChatGPT via API.

D. Mauro (07:28.373)
Mm-hmm.

D. Mauro (07:35.85)
Mm-hmm.

Perry Carpenter (07:47.218)
And a number of companies were creating kind of these completion-model versions that would write with you. I'm thinking specifically of Jarvis, which got renamed Jasper, which was based on OpenAI technology. And about six months before ChatGPT, we saw Midjourney and DALL-E and Stable Diffusion start to get a lot of traction on the image creation front.

D. Mauro (07:58.589)
Mm-hmm. Yep.

Perry Carpenter (08:15.436)
And with those, people were posting really good images and also really insane images, like on late-night talk shows and everything else. And I think all of us around the same time said, there is a fundamental shift going on. I was maybe a little bit earlier than a lot of security people because I also play in the creative space. And so people that I saw and respected who were artists were starting to use the tools, or at least talk about the ethics behind the tools. I think, you know, both of those happen about the same time: as people start to see what's possible, they start to play with it, and then they also start to ask hard questions of the technology. But because my life for so long has been centered around studying deception and why we're all fooled, and around the intersection of technology and humanity,

I really wanted to get to the front and stay at the front for as long as humanly possible on, like, what's going on here, and then how can it be used, but even more importantly, how can it be turned and manipulated so that our human nature gets used against us? And that's where a lot of the experimentation I've done has taken me, creating the scam bots and even pieces of

D. Mauro (09:21.226)
Yeah.

Perry Carpenter (09:41.086)
advice in the book. Like, when I'm saying how do you inoculate yourself against deepfakes or disinformation, well, it's to use the tools to actually create some of those for yourself. So you understand the technology, and you understand, even more so than the technology, the mindset of somebody who would use it to sell a certain narrative. Like, how would they prompt for the photo they would want, that would tell the story they want, that would poke your emotion in just the right way, that would make you either give away money or

believe something to spur a different action, like a political action, like who you may vote for or whether you like or dislike some other group of people. So how would you go about doing that? What mindset would you step into to accomplish that?

D. Mauro (10:18.207)
Mm-hmm.

D. Mauro (10:29.203)
So in terms of defining terms, as we kind of dive into the fundamentals of deepfakes and AI, and just correct me if I'm wrong: deepfakes are artificial video, audio, or images that are manipulated to be something that they aren't, or created out of thin air, using various

Perry Carpenter (10:35.288)
Yeah.

D. Mauro (10:58.419)
machine learning techniques and GANs going back and forth, which is fascinating to me, you know, where they have computers competing: one will create a deepfake, and then the other one goes, that's fake. And so it gets better. And they go back and forth and back and forth until it's essentially undetectable by the human eye. And then you have companies, and I'd be remiss if I had you on the podcast and didn't ask, but you have legitimate companies out there

Perry Carpenter (11:01.965)
Yep.

Perry Carpenter (11:05.792)
Absolutely.

Perry Carpenter (11:12.364)
Right.

Perry Carpenter (11:18.446)
Mm-hmm.

D. Mauro (11:28.479)
that are selling people's avatars, right? Like, they're going to market saying, you can attend, you know, a thousand meetings at once. It will sound like you. It will look like you. It will decide like you, which freaks me out, right? Because looking like me, sounding like me is fine, but deciding like me, that's pretty ominous.

Perry Carpenter (11:33.004)
Yeah.

Perry Carpenter (11:38.524)
Mm-hmm.

D. Mauro (11:55.605)
But they're doing it so that people can scale, organizations can scale, right? They can do customer service tickets or HR requests or whatever the department may be in an organization. And they can attend hundreds of meetings at the same time. What is your take on that? Like I've read the book, like,

Perry Carpenter (11:55.746)
Yeah.

Perry Carpenter (11:59.937)
Mm-hmm.

Perry Carpenter (12:21.11)
Right.

D. Mauro (12:21.639)
There's a lot of concerns there, right, societally as well as from a security and a national security perspective, isn't there?

Perry Carpenter (12:23.918)
Mm-hmm.

Perry Carpenter (12:29.962)
Yeah, I think with a lot of the companies that are trying to do that legitimately, there should be a focus on transparency, on saying that this is my AI digital twin, or whatever nice term you want to put with that, that is taking my place for this so that I can continue to X, Y, and Z. You know, figure out how you articulate driving value to the company and making the best use of your time and everybody else's time, though.

D. Mauro (12:38.527)
Yes.

D. Mauro (12:44.276)
Mm-hmm.

Perry Carpenter (12:58.062)
If I personally attended a meeting that had somebody else's AI avatar, I'd say, like, why are you deciding that I'm on the short stick of your ROI curve? So a lot of those kinds of things need to be worked out. And there are HR, you know, like recruiting organizations that are interviewing people using AI avatars, or people who are sending their AI avatar to go to the recruiting appointment. So you have

D. Mauro (13:06.537)
Right.

D. Mauro (13:16.427)
Mm-hmm.

Perry Carpenter (13:24.0)
an AI recruiter and an AI avatar for a wannabe employee talking to each other, making decisions that nobody really can audit well. It's a little bit scary and a little bit dystopian. And that's people who are trying to think about, like, how do we use this for good? I think when you put your other hat on and you say, how do I use this for bad, it gets really scary really quick, because we strip away... Yeah.

D. Mauro (13:50.185)
It goes off the rails right away, doesn't it? Yeah.

Perry Carpenter (13:52.91)
I mean, you strip away any thought about transparency. And instead you think about, how do I make this as deceptive as possible? And when I think about how to make it as deceptive as possible, it's not only how do I make it look real, but how do I build the emotional and mental or cognitive frame around this that will poke you in just the right way to make you give me the information I want,

D. Mauro (13:57.994)
Right.

D. Mauro (14:02.741)
Hmm.

Perry Carpenter (14:19.182)
do the thing that I'm hoping you'll do, or believe the thing that I hope you'll believe. And it turns out that the avatars and the deepfakes don't have to be all that good, all that sophisticated, in order to do it. You can do it with the technology that you and I have access to right now for zero to $20 a month.

D. Mauro (14:38.249)
Mm-hmm. Yep. That's exactly right. And so there's a very low barrier to entry. And from your research, I mean, I would think that the more contextually relevant and emotionally driven the avatar is, the less concerned people are about how accurate it is, maybe.

Perry Carpenter (14:43.95)
Yeah.

Perry Carpenter (15:03.317)
Mm-hmm. Yeah.

D. Mauro (15:04.051)
Right. Because they're so drawn in by the emotion. Meaning, it's one thing to deepfake the voice of a relative, you know, and to get the intonation of their speech correct. But it's another thing if you can have them sitting in a jail cell, or have them sitting at, you know, an accident site needing to wire money or something like that, right? Some emotional thing where you're like...

Perry Carpenter (15:16.526)
Mm-hmm.

Perry Carpenter (15:26.872)
Yeah, exactly.

D. Mauro (15:30.163)
I'm not even looking at the lip syncing. I'm still overwhelmed by the emotion that my relative was in an accident or my relative is in need. Right.

Perry Carpenter (15:35.886)
That's exactly it. Yeah, exactly. And the other thing that comes with that, which you touched on, is that at that point you're not really paying attention to a lot of the fine-grained detail. You go into almost this gross-motor-perception way of looking at everything. You're not focusing in on all the subtleties that our more logical minds would tell us we should be focusing on.

D. Mauro (16:02.218)
Mm-hmm.

Perry Carpenter (16:05.208)
I mean, if you think about those high-emotion situations, like I'm in jail right now, or I got kidnapped, or I just had an accident, a lot of those little idiosyncrasies that we would normally hear in our friend's voice, those can get stripped away too, because you're hearing that through the emotional filter. But also, how often do we hear people in a highly emotive state, like when they have been taken...

D. Mauro (16:25.386)
Right.

D. Mauro (16:30.911)
Right, you don't necessarily know what they sound like. Yeah.

Perry Carpenter (16:33.438)
Yeah, you don't know what they sound like and you're going to excuse a lot of it because you're saying this person's under duress. They're not necessarily themselves right now anyway.

D. Mauro (16:38.123)
Sure.

D. Mauro (16:42.431)
Yeah, excellent point. So when we think about

AI and deepfakes, I mean, it's great to have as parlor games and for entertainment purposes. We saw that years ago with Tom Cruise and President Obama at the time, et cetera. But it seems to really be getting leveraged to enhance social engineering. You know, are you hearing more and more reports of it? Obviously, there was

Perry Carpenter (16:53.869)
Yeah.

Perry Carpenter (17:00.525)
Yep.

D. Mauro (17:17.395)
the incident in Hong Kong. There's another one with an international consulting firm. And there was, I think it was a principal outside of Maryland, whose job was lost because there was an audio deepfake of him saying some, you know, some detrimental things, some off-color things. Yeah, when it went...

Perry Carpenter (17:28.802)
Mm-hmm.

Perry Carpenter (17:40.652)
Yeah, it had him saying some racially charged things. In the investigation, he got his job back, luckily, but he was, yeah, he was set up by a disgruntled employee. Yeah, he had to go through all that and had his reputation smeared on social media and everything else. You know, the interesting thing that comes out of all of this, the way that I think about it, is

D. Mauro (17:46.795)
Right.

D. Mauro (17:50.687)
Yeah, but the damage was still done, right? He still had to go through all of that, right?

D. Mauro (17:59.402)
Yeah.

Perry Carpenter (18:10.43)
every bit of deepfake technology has strengths and weaknesses. And many of those weaknesses, when you and I are in a very logical mode, we might be able to tell. You know, maybe there's a flat affect on the voice, or maybe there's something just weird enough with the image, or maybe the lip sync isn't right. You know, all those things that we believe we can tell.

But if you take your fake and then you put it within a cognitive frame that tells a story and has heightened emotion and all those other things, well, then all of a sudden our storytelling mind unpacks it, and it becomes this much bigger thing that will deal with all of the little problems that are inherent to it. So, like, every time I build a social engineering ploy with, like, a scam bot or something else, I'm thinking about what

D. Mauro (19:01.067)
Hmm.

Perry Carpenter (19:02.582)
goes wrong naturally with the scam bot. Like, maybe there's audio glitches every now and then. Well, at that point, I just need to explain that away up front and say, well, the scam bot right at the beginning says its headset is having issues, or there's VoIP issues that their company's dealing with that day. So now every audio glitch is erased from that person's mind, because they have a way of explaining it that they already understand.

Same thing with pauses and latency. If you have a voice-only channel and you're talking to somebody and you just say at the front, our computers have been a little bit slow. I got to go through a few screens on this. So now every bit of latency with any audio path gets explained away in that person's mind because that person's multitasking and they have a computer that's a little bit slow. Yeah, we understand that. The story is complete. And I think

D. Mauro (19:51.433)
Yeah.

D. Mauro (19:59.328)
Or, I'm working from home and Comcast is doing a thing in the neighborhood. Right, like, you can explain it away. You set the context right off the bat.

Perry Carpenter (20:03.835)
Exactly. Yep.

Yeah, so it's a lot like doing a really good magic trick: you give the story or the pattern that will help somebody create a cognitive frame that's in a different place than where the deception is really happening. So all their attention is focused in one place while the real work is happening somewhere else.

D. Mauro (20:12.981)
Yeah.

D. Mauro (20:28.395)
So we see on platforms like Telegram and other places where they have tutorials on how to do the face swaps, and it's being used in certain romance scams, targeting the elderly, targeting the vulnerable. From your vantage point, where you're probably able to see it more than anybody, what is the pace of

Perry Carpenter (20:39.502)
Mm-hmm.

Perry Carpenter (20:46.274)
Yeah.

D. Mauro (20:57.803)
the development of detection capabilities compared to the pace of creation? It seems like the pace of creation is winning, but how are we doing on the ability to detect? Will there be watermarking available soon for live media? Where are we today, and where do you think we're going to go?

Perry Carpenter (21:05.293)
Yeah.

Perry Carpenter (21:14.786)
Yeah.

Perry Carpenter (21:22.508)
Yeah, it's not where it needs to be. Most of the detection technology is about like flipping a coin. They're right about half the time and wrong about half the time, which means they're just not good right now. And as soon as somebody understands the way that the detection mechanism works, the easier it is to just work around it. So let's say there's a video deepfake and

D. Mauro (21:24.81)
No.

D. Mauro (21:29.586)
Mm-hmm.

D. Mauro (21:34.901)
Right.

Perry Carpenter (21:52.29)
there's information within the metadata that a program should look for. Well, as soon as I know that, I can just screen record the video instead of using the actual file. Well, now that can get passed around and everybody's going to believe that. They may even believe it more because the image is degraded a little bit and it looks like the way that people capture things. Same thing if there's image metadata. I can just take a screenshot of that.

D. Mauro (22:03.358)
Mm-hmm.

D. Mauro (22:21.034)
Hmm.

Perry Carpenter (22:21.282)
Let's say there's a pattern within a video where every X frames, there's something that would be there. Well, as soon as I understand that, I can just start to add more jump cuts, so now it's not every X frames. One of the interesting things I realized is that teenagers are actually really good at getting around all this. Two examples. One, on

Snapchat, if somebody were to take a screenshot of something that you posted, it notifies the person who posted it. But what kids realized you can do is, well, you've got your cell phone with the thing you want to capture; just put another phone up and take a picture of it. Yeah. So no, it doesn't take long, because people are motivated and they're smart.

D. Mauro (23:03.059)
Ah, another phone. Right.

It didn't take long, did it? No, no. Right.

Perry Carpenter (23:15.182)
Same thing with, like, algorithms that would look for certain keywords like sex or self-harm. They just, you know, change the word that they actually say for that. And there's this whole almost-language called algospeak that's out there, where people just fundamentally understand how to do that. And I think that that's the fight that we're in as people on the good end of this.

D. Mauro (23:24.853)
Mm-hmm.

Perry Carpenter (23:40.044)
We will build in things like interesting watermarking techniques and metadata techniques and provenance markers and everything else. But either somebody will just go around that and use an open-source version that doesn't have any of that, or, if they realize that the one they like, let's say they really love Midjourney or DALL-E, includes the watermarking or whatever tag is in there, well, then they'll just find the way to strip it out as soon as they use it. So...

That's the fight we're in. The way it's very likely to be combated is that it won't be a standalone tool where you or I have a feeling that something is a deepfake, and so we upload it or put a link to it in that tool and it runs it through. More and more, I'm hearing people say the way to deal with this is an on-platform or an on-device filter.

So let's say Meta has one, and then all of a sudden, on everything on Meta's platform, it's giving you a percentage likelihood that what you're seeing is synthetic media. It's like, there is a 40% likelihood that this is a deepfake. Or let's say it's embedded in your iPhone at the OS level, and for everything that you're getting, if it goes past a certain threshold, let's say past a 50% likelihood that it is synthetic, it would raise some kind of warning. And then you just kind of bake that into your assessment of the overall media that you're evaluating.

D. Mauro (25:12.009)
Right.

D. Mauro (25:15.659)
And then you can verify through another reliable channel, right? Like, I always think of it in the business context: somebody gets an email, and they've taken your training, they've taken my training, and they spot that it's a phish, so they're not going to act on it. And then all of a sudden it's followed up with a calendar invite. And then they jump on a Zoom call or a Teams call, and all of a sudden it's

Perry Carpenter (25:19.126)
Yeah, exactly.

Perry Carpenter (25:38.654)
Mm-hmm. Yeah.

D. Mauro (25:43.891)
your employer or somebody that you work for, and maybe a couple different people. And then you get all of your questions answered. You're able to speak freely, and then all that anxiety goes away. You feel better. Now you understand the context. They answered my questions. And then you go ahead and you do the wire transfer. You send the sensitive information, only to find that that Zoom call or that Teams call was in fact a deepfake.

Perry Carpenter (25:59.938)
Mm-hmm.

Perry Carpenter (26:08.47)
Yeah.

D. Mauro (26:11.729)
So to me, I'm curious at what point Microsoft or Zoom will have something like that, where it's like, this may be a deepfake, with a percentage likelihood. And so then you're like, well, it's right on the cusp, so I'm going to just double-check. Right?

Perry Carpenter (26:21.71)
Mm-hmm.

Perry Carpenter (26:29.238)
Yeah. You know, the other side of that is it can become a false sense of security too, because then what if somebody gets the next level of technology, and the detector assumes 0%, but it's 100% synthetic? Do we end up relying on the detector to our own detriment?

D. Mauro (26:38.111)
gets around it, right? Because as soon as, yeah, right.

D. Mauro (26:45.546)
Mm-hmm.

D. Mauro (26:50.397)
Right. Because as soon as the code is written for that detector, there's going to be a way around it. Right. I mean, that seems to be the challenge. One of the things I loved, and I just want to shift gears just a bit, but it's totally related. One of the things I loved, and you can see I dog-eared a lot of this... well, I don't bend the pages. My daughter taught me: don't bend the pages.

Perry Carpenter (26:56.908)
Yeah, yeah, there's gonna be a vulnerability for sure.

Perry Carpenter (27:08.802)
Sure, yeah.

Perry Carpenter (27:13.268)
Nice. I love books like that.

D. Mauro (27:18.767)
But I've got notes all over. I love the explanation that you have about why this is happening now. You talk about "The Exploitation Zone." So can you walk our listeners and our viewers through that briefly?

Perry Carpenter (27:28.526)
Mm.

Perry Carpenter (27:35.326)
Yeah, sure. So the exploitation...

D. Mauro (27:36.979)
It's a really great concept and I think it explains it perfectly.

Perry Carpenter (27:41.238)
Yeah, and I stole most of it. There's an employee at Google named Astro Teller who, man, I wish I remembered the name of the book he mentioned it in, but he didn't call it the exploitation zone. What he did is he talked about one of the challenges that we face as a society, which is that society and humanity

D. Mauro (27:44.053)
Good. Like, the best things are stolen. But you apply them here, which is relevant.

Perry Carpenter (28:10.862)
adapts very, very slowly to change. And so over eons, really, you just see these slight adaptations. At the same time, though, technology is on this hockey stick curve. So technology was something that people could adapt to really, really easily because it was sub-human capability for a long time. And then all of a sudden, we've been on this exponential curve.

D. Mauro (28:13.907)
Mm-hmm.

Perry Carpenter (28:36.906)
and it's really crossed and surpassed our ability to adapt. And so that means that people that are really kind of at that top level of the hockey stick, that understand the technology, can kind of look down and go: look at all the areas of society where people have not yet adapted. People are confused, they're unaware. I can take advantage of that. And you can do that in one of two ways. You can bring good products to bear that alleviate societal

D. Mauro (28:56.395)
Right.

Perry Carpenter (29:06.094)
concerns, areas where we can add more efficiency or effectiveness. Or you can look down and say, well, let me take advantage of all that confusion and all of that unawareness, and let me do something really devastating and trick people or make people believe certain things.

And that's the exploitation zone: after those two lines have crossed and the technology has continued to hockey-stick, the right-hand side of this graph opens up this really big gap. And I see that gap as the zone of exploitation, where if you're anywhere above the human adaptability line, you can look down and you can figure out how to take advantage of that.

D. Mauro (29:55.657)
And that's where we sit today, isn't it?

Perry Carpenter (29:57.91)
Yeah, yeah, absolutely. And all of us are there in different ways. I mean, you and I are there, probably. Like, if one of our kids, or somebody in a younger generation, comes and shows us the newest, coolest social media platform, you're like, I can't even figure out how to scroll on this. Where are the comments? How do I create a new thing, whatever it is? And we just look at it kind of in bafflement. But for them, it's intuitive.

D. Mauro (30:01.995)
Mm-hmm.

D. Mauro (30:18.035)
Exactly.

D. Mauro (30:25.673)
Right. Right.

Perry Carpenter (30:27.662)
At the same time, you or I might go into a parent's or, you know, a grandparent's house, and you go in their kitchen, you look at the microwave, and it's noon 24/7 in their house because they've never figured out how to set the clock on it. And that's, you know, us being on one side of the exploitation zone and them being on another side. They just don't understand it. Can't be bothered.

D. Mauro (30:39.672)
It always is. Right.

D. Mauro (30:51.157)
Right.

Perry Carpenter (30:53.454)
When it comes to deepfakes, and when it comes to a lot of cybersecurity-related stuff, most of society is at a level of high exploitability. I mean, even you and I, I think, people that study this: you get the right person that understands the right bit of technology, a little bit of psychology, and has the time and persistence to do something,

D. Mauro (31:06.837)
Mm-hmm.

Perry Carpenter (31:21.998)
They're going to get us, they'll be able to.

D. Mauro (31:24.573)
Absolutely. Is there a segment that you're seeing that succumbs to social engineering more than another? I mean, this isn't really related to FAIK, you know.

Perry Carpenter (31:37.75)
Yeah, no, I think it's... Yeah, I would say it just...

D. Mauro (31:41.567)
I've always been curious. I live here in the Midwest of the United States, and Midwest values are very trusting, as a general, just sweeping generalization, right? The folklore is that we're very trusting. I've always been curious whether the data supports that, because I haven't seen data showing that we're more vulnerable, right? Like, it's happening everywhere. So I've always been curious whether, you know, we're more vulnerable, and

Perry Carpenter (31:44.705)
Yeah.

Perry Carpenter (31:49.876)
Mm-hmm. Yeah.

D. Mauro (32:10.347)
I'm trying not to get all my neighbors jaded, but I also want to raise awareness at the same time.

Perry Carpenter (32:15.372)
Right. Yeah, I'm not seeing numbers on it, but my suspicion is that there's a spectrum, right? There are some age groups that are more vulnerable to certain types of scams. And there are certain people, you know, certain regions of the country or regions around the world, that are going to be more or less vulnerable to different types of scams, depending on

D. Mauro (32:25.883)
Mm-hmm. I would agree. Mm-hmm.

So like.

Perry Carpenter (32:42.764)
their training, their peer group, their age group, their social conformity. Yeah, all of that comes in. And I do think that one of the really interesting things is that physical things will trick us more now, because we're on guard when it comes to email. We're on guard when it comes to a lot of social media things. But you get a letter in the mail that looks like it's from a reputable company.

D. Mauro (32:46.655)
their technology adoption, right? Yeah.

D. Mauro (33:03.807)
Mm-hmm.

Perry Carpenter (33:12.622)
that's asking for money, or saying you didn't pay something or you owe a fine, and it's on decent paper with a decent logo. All of a sudden, all the training that we would have had on online phishing is really difficult to apply in this area, because everything feels different. Our cognitive frame around it is completely different. And because we can hold it, it feels way more legit. And so now we're seeing more QR code phishing

D. Mauro (33:20.414)
Mm-hmm.

D. Mauro (33:38.485)
Yeah, it's interesting. Yep.

Perry Carpenter (33:42.306)
being used in physical mail as well, just because of that.

D. Mauro (33:46.313)
Yeah, I mean, QR codes in physical mail make a lot of sense. Yeah. So in terms of some practical defenses, what does a small business leader, or somebody trying to protect a small business organization, do today when it comes to deepfakes? Is it creating a policy, raising awareness?

Perry Carpenter (33:51.447)
Yeah.

Mm-hmm.

Perry Carpenter (34:11.534)
Mm-hmm.

Perry Carpenter (34:17.242)
I think it would depend on the type of risk you're trying to mitigate. So if it's trying to stop wire fraud because somebody got tricked by a deepfake, well, then it's a process that you'd want to put in place. So a policy that defines a process that requires you not just to respond to the video call or whatever you got, but that there's a second level of confirmation. Yeah.

D. Mauro (34:22.859)
Mm-hmm.

D. Mauro (34:42.773)
Verification. Yeah. And I don't necessarily think that that risk originated with deepfakes, because really... I mean, the whole issue of having the same person in a small business in charge of cutting the check and doing the accounting, right? That's been an issue for a while that a lot of security leaders have been raising: can you just separate that process so that

Perry Carpenter (34:51.212)
Right. It's more of the same.

Perry Carpenter (35:07.147)
Mm-hmm.

Perry Carpenter (35:11.747)
Yeah.

D. Mauro (35:12.776)
So you don't have those insider threats, right? You just don't have some life event that triggers somebody into doing that.

Perry Carpenter (35:15.884)
Yeah. It's really hard in small businesses, though, right? Because if you only have 10 people working there, then you naturally have people that wear multiple hats. At the same time, though, if you've got 10 people and one of them's name is Bob and the other one's name is John, and Bob is supposedly calling John and saying, we need you to transfer $10,000, well, then maybe still don't just...

D. Mauro (35:22.816)
Right.

D. Mauro (35:26.346)
Yeah.

Perry Carpenter (35:43.638)
rely on the video call from Bob or the email, but actually, you know, walk over to Bob's office or call him at home or something else. You can still figure out a way to do a double verification on things, or to at least increase your assurance that it's likely real. The other thing, when it comes to deepfakes, would be code words or gestures or things like that. It could be that

D. Mauro (35:46.41)
Right.

D. Mauro (35:56.426)
Mm-hmm.

Perry Carpenter (36:11.36)
you have it in a policy, or, if it's you and your family, something you've agreed upon: hey, if you ever get a call and it sounds like me under duress, well, then just ask me to say the code word, or ask me the secret ingredient to grandma's chili recipe. Something that's shared knowledge that only that group of people would know. And maybe, if it's a video call, it's shared knowledge plus a gesture: like, you give the

D. Mauro (36:25.268)
Right.

D. Mauro (36:29.386)
Hmm.

Perry Carpenter (36:41.176)
chili recipe, but you're also holding your finger on your nose, that type of thing. It can make you look and sound stupid, but if it saves you heartache later on, it's probably worth it.

D. Mauro (36:44.637)
Right, yeah.

D. Mauro (36:53.461)
That's a good point, too, about the gestures. So when deepfakes first came out, I saw a lot of suggested best practices where, you know, we'll have the person turn left to right, or have them raise their hands, something that could, you know, show that there's blurriness or that there's something off about the video. But to my knowledge, deepfakes are getting so much better that I don't even know that that applies

Perry Carpenter (37:05.033)
Mm-hmm.

Perry Carpenter (37:13.569)
Yeah.

Perry Carpenter (37:21.218)
Yeah, you can't rely on that. And what I tell cybersecurity people is, I think that we sometimes do the public a disservice. Like, if we go on the news and somebody says, how can you detect a deepfake? And we say, well, you know, have them wave their hand in front of their face; or if it's an image, look at the fingers and see if they're messed up, look at the hair, look at the text. All those things may have been true

D. Mauro (37:22.976)
anymore.

Right.

Perry Carpenter (37:50.762)
at a certain period in time. But in a lot of ways, they're only true for the people that are lazy right now and don't do good ones. Or maybe they were never true, because the technology continues to advance so fast. And so if I tell somebody all those easy things to do, then I might actually be setting them up for failure, or setting them up to get scammed sometime.

D. Mauro (38:15.882)
Right.

Perry Carpenter (38:18.584)
So I don't want to do that. Instead, I always say: ask why this piece of media, or whatever it is, landed in front of you in the first place. What story is it trying to tell? What emotion is it trying to invoke? What is it ultimately wanting me to do or believe? And if I can start to ask those questions, they sound a lot more like your old-school phishing training, but are just as applicable.

D. Mauro (38:27.989)
Mm-hmm.

D. Mauro (38:44.637)
Right.

Perry Carpenter (38:47.118)
If I can ask and answer those questions, well, then I've got a pretty decent investigation going. Because here's the other thing that's interesting: the more we think about deepfakes, the more we forget that traditional media can be used just as deceptively. I can crop a photo differently. I can take something out of context. All of that kind of stuff is still fair game, and that would bypass any deepfake detector, because it's real.

D. Mauro (39:06.697)
Mm-hmm.

D. Mauro (39:17.011)
Right. Yeah, it's just being used in a deceptive manner. You know, you talk about media literacy and things. You know, what can people do to kind of...

Perry Carpenter (39:21.442)
Yeah.

D. Mauro (39:31.595)
How do we... I don't really know how to ask this, but how do we reduce the exploitation zone faster? Is it possible? Like, how do we improve people's media literacy?

Perry Carpenter (39:35.032)
Yeah.

Perry Carpenter (39:42.144)
Yeah.

Perry Carpenter (39:46.624)
I don't think you can force it on people, but I do think that we need a generational initiative, similar to seat belts and things like that, where you and I are probably old enough that we remember when we didn't have to wear seat belts all the time.

D. Mauro (39:54.025)
Mm-hmm.

D. Mauro (40:01.917)
I remember people being upset that they couldn't have a beer in the car when they were driving home. When you tell people that today, everyone's like, what? I'm like, yeah, that was a thing. Yeah.

Perry Carpenter (40:05.37)
Exactly. Yeah. And yeah, I mean, it's a complete mindset shift. Now, if you get in your car and you don't put your seatbelt on, you almost don't feel safe. Or if you have kids in the back seat and they see somebody not put their seatbelt on, they freak out, right? Yeah. And so I think that kind of shift is going to have to happen, but it's going to take a lot of concerted effort

D. Mauro (40:17.279)
Right. Yeah.

D. Mauro (40:23.477)
They'll say something. Yeah, exactly.

Perry Carpenter (40:33.198)
across multiple generations at the same time. But there are ways that we've seen that happen and work around the world. So Estonia, which was the recipient of a ton of Russian disinformation for a prolonged period of time, now K through 12, they have disinformation awareness programs. They have gamification around that. So they're really trying to build that in.

D. Mauro (40:48.159)
Yeah.

Perry Carpenter (40:59.926)
They have a, you know, cyber resilience and cognitive resilience strategy that's fundamental to their national defense. Same thing in Switzerland and Finland; they have that. You look at Taiwan, recipient of Chinese disinformation and information warfare campaigns for decades. And from early childhood all the way through elder care, they have

media literacy, they have disinformation awareness, they've got games so that you can practice cognitively being in these different situations. And I think here in the US, we're a little bit overdue for that kind of national strategy.

D. Mauro (41:40.393)
I was just about to say, that seems brilliant for us right now. I mean, I would think we're way overdue for that. I mean, that would be so useful to have a section on. And typically, the security awareness trainings that people attend, right, if it's a lunch-and-learn or something, some of them are just so traditional, right? It's like, here's how you spot a phish, and here's this. They PowerPoint people to death and they don't...

Perry Carpenter (41:44.737)
Yeah.

Perry Carpenter (41:50.261)
Mm-hmm.

Perry Carpenter (42:01.358)
Mm-hmm.

Perry Carpenter (42:05.324)
they are.

D. Mauro (42:10.165)
You know, they don't kind of empower them and inspire them and get them, you know, to really care about it, right? They just think it's still an IT thing. But really talking about, you know, AI and deepfakes, and how this fundamentally changes our view of reality. Like, it goes to the core of what is real and what isn't. And understanding that you need to...

Perry Carpenter (42:11.544)
Yeah.

Perry Carpenter (42:20.43)
Mm-hmm.

Perry Carpenter (42:31.672)
Yeah.

D. Mauro (42:39.199)
take every bit of information you're getting and verify it. Right. I mean, it's just an extra step. But before you take any action on it, or actually form a core belief based on it, it really behooves us all to look it up. I mean, there are so many times when someone will show me something on social media, and they'll be like, can you believe this happened? And I'm like, hang on.

Perry Carpenter (42:44.622)
Mm-hmm.

Perry Carpenter (42:54.808)
Yeah.

Perry Carpenter (43:03.746)
Mm-hmm.

Perry Carpenter (43:07.628)
Right. Exactly. Yeah.

D. Mauro (43:08.135)
That didn't happen, by the way, right? And it didn't even take that long for me to show that, looks like 300 people also investigated this. And they all found that that actually didn't happen.

Perry Carpenter (43:19.065)
Mm-hmm.

Yeah. Well, and I think you put your finger on something really important: one of the reasons that it's hard to teach people about deepfakes and disinformation is that when you start to talk about disinformation, it doesn't matter where you sit politically, somebody feels like you're targeting their personally held beliefs somehow. Your examples are coming at... yeah.

D. Mauro (43:40.757)
Mm-hmm.

D. Mauro (43:45.353)
Yeah, right. Like it's fake news. You're going to talk about fake news and then they have their own experiences with those buzzwords and they get on the defensive.

Perry Carpenter (43:53.123)
Mm.

Yeah. And so it's hard to get under that and say: no, wait, yes, there's politics involved in this, because everybody around the world with any motive is going to be using these tactics. But we have to find ways to talk about this, and even show examples, in an apolitical way, but then also show real examples that may refute a

D. Mauro (44:16.341)
Right.

Perry Carpenter (44:22.296)
politically held belief, or may reflect badly on one of the parties that we identify with, because it is humans that are putting those campaigns together. And of course, there are going to be offenders on every side of that aisle. But we have to get away from taking everything so personally.

D. Mauro (44:33.173)
Right.

D. Mauro (44:45.855)
Yeah, yeah. That's absolutely an understatement. I agree with that completely. And, you know, what I found got people to pay attention more is when we focused on cybersecurity and raising awareness, and we gave them tools to protect themselves: we showed them how to freeze their credit, we showed them how to, you know, improve the privacy settings on their mobile devices, things like that. And then all of a sudden people are like,

Perry Carpenter (45:04.558)
Mm-hmm.

Perry Carpenter (45:13.28)
Mm-hmm.

D. Mauro (45:15.339)
Well, this is actually useful. I can use this. And then they start to care more. They start to improve their cyber hygiene. And I think we need to bring that level of disinformation awareness into those kinds of teachings as well.

Perry Carpenter (45:17.464)
Yeah.

Perry Carpenter (45:30.294)
Yeah. One of the fun things that I did in the last three chapters of the book is, at the end of each chapter, I have all these little games and activities. And one of the ones that I like to tell people about is, I actually have people create their own piece of disinformation. So it's like: if you wanted to make somebody believe X, whatever it is, what image would you go create or find? What

D. Mauro (45:40.128)
Yep.

D. Mauro (45:49.483)
Mm-hmm.

D. Mauro (45:54.827)
Right.

D. Mauro (45:58.536)
Mm-hmm.

Perry Carpenter (46:00.104)
headline would you put with that, that would make it super visceral and emotional? What story bullets would you do? And, like, what would make that work on you? Because as soon as you go through the process of putting something like that together, you get more skeptical about everything in your newsfeed as you start scrolling.

D. Mauro (46:19.083)
That's an interesting point. So by getting people to try and create their own disinformation campaign, right? Like, the public library is now going to be, whatever it is, burning all the books, or burning all the red books, or whatever it is. Right. And then you create it: what image would you need?

Perry Carpenter (46:26.605)
Yeah.

Perry Carpenter (46:31.468)
Mm-hmm.

Perry Carpenter (46:37.696)
Yeah. Yeah, exactly.

D. Mauro (46:43.965)
What would you do? Would you show a librarian? Would you show somebody lighting it? Would you show a picture of the local library? Where would you put it? Would you put it on Facebook, Instagram, the local, you know, email threads? And now that you've just gone through it, even academically, you start to think like a hacker, really, as the phrase goes, right?

Perry Carpenter (46:44.214)
Mm-hmm.

Perry Carpenter (46:49.933)
Yeah.

Perry Carpenter (46:55.158)
Mm-hmm.

Perry Carpenter (47:02.933)
Yeah.

Perry Carpenter (47:07.008)
Yeah, exactly.

Yeah. And I think that that's way more powerful than people think at the very beginning. Because as soon as you do that, you're like: wait, I see what that headline is doing. Or: wait, that photo that they're saying is of my public library doesn't look anything like my library. That's just a stock photo that they grabbed somewhere, or one where they just typed a prompt into Midjourney or something like that.

D. Mauro (47:18.454)
yeah.

D. Mauro (47:24.031)
Right.

D. Mauro (47:31.217)
Right, exactly.

Yeah.

D. Mauro (47:39.115)
Right.

Perry Carpenter (47:39.286)
So then you get a little bit more critical of the secondary bits of the article, and you're like, wait, that... yeah, I don't know. I've got to go check that out myself. And one of the things I did not put in the book, I've got a number of tools and frameworks in the book, but one of the things I created after the book is what I call the FAIK framework, which is just, you know... the F is for freeze and feel. So the first time you get this

D. Mauro (47:57.845)
Mm-hmm.

Perry Carpenter (48:08.642)
big reaction to something in your news stream or social media stream, the first thing you should do is just freeze, and then try to understand what emotion you're feeling. Is it outrage? Is it fear? Is it urgency? Is it an authority type of frame? What's going on there? And then the A is for analyze, you know, analyze the story. What story is it trying to tell? What emotion is it actually trying to invoke? Whether that's the one you felt or not is a different story.

D. Mauro (48:15.019)
Mm-hmm.

D. Mauro (48:20.555)
Right.

D. Mauro (48:32.971)
Mm-hmm.

Perry Carpenter (48:38.574)
And then the I is for investigate. So try to understand the claims, the sources; see if you can find it being reported in other places that you already trust. And then the K is really just for keep vigilant, you know, and confirm. And so what I tell people now is: if you don't do anything but that first step, if all you do is freeze and acknowledge what you're feeling for a second,

And you don't do anything else, you just go on with your day, the internet's already a better place and you're already in a safer place, because you've not reacted to something. You've not shared it. You've not responded to something and potentially given away information. You've not brought more eyeballs, algorithmically, to a piece of disinformation. So no matter what, anytime you feel something strong on social media, just slow down for a second and start to dig a little bit.
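The four FAIK steps Perry walks through here can be written down as a simple checklist. The sketch below is just an illustration of the framework as he describes it in this conversation; the wording of the prompts and the function name are my own, not from the book:

```python
# The four FAIK steps as described in the interview:
# F = Freeze (and feel), A = Analyze, I = Investigate, K = Keep vigilant.
# The structure and wording below are illustrative, not official.

FAIK_STEPS = [
    ("Freeze & feel", "Pause. What emotion am I feeling: outrage, fear, urgency, authority?"),
    ("Analyze", "What story is this trying to tell, and what emotion is it trying to invoke?"),
    ("Investigate", "Check the claims and sources. Is it reported anywhere I already trust?"),
    ("Keep vigilant", "Confirm before reacting, sharing, or commenting."),
]

def faik_checklist(item: str) -> list[str]:
    """Return the FAIK questions to walk through for a piece of media."""
    return [f"[{step}] {question} ({item})" for step, question in FAIK_STEPS]

for line in faik_checklist("viral photo in my feed"):
    print(line)
```

As Perry says, even stopping after the first step, freeze and notice the emotion, already makes you harder to manipulate.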

D. Mauro (49:34.441)
Yeah, don't immediately reshare it. Right. That's one of the things I always coach people on: please don't reshare it until you've gone through the other steps and analyzed and investigated it, right, and you've found that it is true. Like, half the time, if not more, you're going to find out it's not even true. So please don't reshare it until you've verified it. Yeah, that's phenomenal. It's good advice.

Perry Carpenter (49:37.79)
Exactly. Exactly.

Perry Carpenter (49:46.018)
Yeah. Yep.

Yeah.

Perry Carpenter (49:55.363)
Right.

Yeah. And even, you know, don't hit your little angry emoji or comment, either, because if it's not real, then all you're doing is potentially bringing everybody's eyeballs in your network to that post. They're not going to investigate. Yeah.

D. Mauro (50:05.823)
Right, right.

D. Mauro (50:11.997)
Right, because the algorithm is going to feed... Yeah, the algorithm is going to feed people who follow you into seeing that post now. Yeah.

Perry Carpenter (50:17.878)
Yep. Yep. And it's like that old thing: you bring two friends, and then they bring two friends. And then all of a sudden this thing is viral when, at least within your network, it could have stopped with you, if you were to slow down and just say, let me think more about this, or investigate it, or just go: you know what, I don't have time for that today. If that's real, it's going to show up somewhere else.

D. Mauro (50:25.087)
Yeah, that's exactly right.

D. Mauro (50:32.021)
Yeah.

D. Mauro (50:44.169)
Yeah, absolutely. So as we wind down here, tell me a little bit about what you're seeing in the future. Like, where are we going? I just saw on the news that five individuals from Scattered Spider, a big social engineering group, have been arrested or indicted. At least three of the five have been detained.

Perry Carpenter (50:52.867)
Mm.

D. Mauro (51:12.095)
But what are you seeing in the future in terms of how AI and deepfakes are going to be leveraged or what to look for? Are there any trends that you're seeing?

Perry Carpenter (51:28.322)
Definitely more and more sophisticated. I think that's probably the easiest thing to say. The trend that I'm seeing that is more disturbing than every scam and everything else that's going on right now is that, because we're so inundated with these, and because we know how good they are, people are just deciding what they want to believe is true and not, despite the evidence.

D. Mauro (51:54.859)
Mmm.

Perry Carpenter (51:57.806)
And we saw this with Hurricane Helene, when there were a whole bunch of conspiracy theories about FEMA and their effectiveness or lack thereof, and where funding was going, and a lot of social division, a lot of social strife. And in the middle of Hurricane Helene, one of the deepfake images that got posted and circulated a lot was of this little girl who was wet, she was crying, she was wearing a life vest in a boat, and she was clenching this puppy to her, who was also wet.

D. Mauro (52:05.362)
Mm-hmm.

Perry Carpenter (52:26.946)
Looked miserable. And it was all synthetically created, and everybody was sharing it. And there were some fairly high-profile people that started to share it too. And luckily... yeah, yeah.

D. Mauro (52:28.169)
and it was all synthetically created.

D. Mauro (52:39.817)
Yeah, because they're not checking. It's like, if it's in my feed, it must be true. I mean, it's always the same. Whenever I'm training and it's the managing partner at a law firm, they're the biggest clickers on every phishing email, because they're like, look, if it got to me, it must be important. It's like, no, man, it gets to everybody, right?

Perry Carpenter (52:50.592)
Right.

Perry Carpenter (52:56.054)
Yeah, but the weird thing with it, and the thing that's hard to figure out, like where we go from here, is that people called it out. So even on Twitter, people called it out and said, that's synthetic. And so some people took it down, but some very, very high-profile people said: you know what, I don't know where this came from. I don't know if it's real or not. I'm going to leave it up, because

D. Mauro (53:11.146)
good.

D. Mauro (53:16.797)
Okay, good.

Perry Carpenter (53:25.294)
because I believe it represents what's going on out there. And yeah, they didn't share it that way, but at the end of the day, they're like: I don't care if it's real or not. I think it represents a truth. And I think that's kind of where a lot of it's going to go. People are going to share things about the politicians that they love or the politicians that they hate.

D. Mauro (53:29.966)
But at least they explain the context of why they're using it, meaning...

D. Mauro (53:37.779)
Yeah.

D. Mauro (53:42.275)
I see what you're saying.

D. Mauro (53:53.995)
Hmm.

Perry Carpenter (53:54.988)
And then somebody is going to disprove it and say, well, that's synthetic. And they go: well, I don't care. It represents a truth that I already agree with. So more and more and...

D. Mauro (54:00.157)
Right.

Or, as you point out in the book, the struggle that we have sometimes is when they're showing something that is truthful and we may not like it, then we're like, that's just fake. And it's like, no, that's actually the truth. Right.

Perry Carpenter (54:10.506)
Mm-hmm. Yeah.

Right. Yeah.

Yeah, so there's going to be a lot of that, a lot of people saying they don't care what's real or not. And then there's this whole thing you're alluding to, which is "The Liar's Dividend," where the only person that stands to gain is the person who's being deceptive. So if I get caught on tape doing something, I can just say it's a deepfake. And if I want to manufacture a story and throw it out there, well,

D. Mauro (54:29.811)
Right. That's what you called it. Yeah.

D. Mauro (54:35.669)
Yep.

D. Mauro (54:40.564)
Right.

Perry Carpenter (54:45.678)
I can create a deepfake, and the people that already want to believe it are going to believe it. The people that I wouldn't have convinced, maybe I get one or two of them, or maybe I get none, but I still create the division or the narrative or whatever it is that I want to create.

D. Mauro (54:54.133)
Yeah.

D. Mauro (54:59.645)
Amazing. That's great. Well, Perry Carpenter, thank you so much for your time today. I will have a link to the FAIK book in the show notes. Where do you have speaking engagements coming up? Share with the listeners what you've got coming up. I'm sure you're always doing a lot.

Perry Carpenter (55:02.626)
Yeah.

Perry Carpenter (55:07.406)
Thank you.

Perry Carpenter (55:18.933)
Yeah.

Yeah, I don't know if there are a bunch of publicly accessible speaking engagements that I have, but anybody that wants to connect with me can get me on LinkedIn. I also have a newer podcast called The FAIK Files, spelled the same way, that is going to get a lot of my time and attention coming soon. And then a lot of the standard security conferences I'm at and I'm speaking at, so I'll be at National Cybersecurity Convene at


Perry Carpenter (55:50.222)
the beginning of 2025, and then, of course, Black Hat, DEF CON, all the standards.

D. Mauro (55:57.737)
Are you going to be doing any deepfake stuff, like in a village at DEF CON? That's gotta be cool. Yeah, that'll be cool.

Perry Carpenter (56:02.798)
Probably so, yeah. It's not all planned out yet; we'll see. Last year at DEF CON, I had those scambots that I created enter the social engineering competition, and they did really well. So I'm trying to figure out if I should evolve that and see where it goes this year. But I'll definitely be around and doing fun stuff.

D. Mauro (56:18.144)
Yeah.

D. Mauro (56:24.981)
Well, I'm sure your coworkers will see you in a thousand different meetings all at once, you know, with your ability to scale. So thank you so much, sir. I really appreciate your time. And everybody, please reach out to Perry, follow him on LinkedIn, and check out the book. I promise you, it'll be a really easy-to-understand but really impactful book, all about the effects on our society today.

Perry Carpenter (56:29.504)
Right, exactly. Exactly.

Perry Carpenter (56:36.494)
All right, thank you.

D. Mauro (56:54.869)
Thank you, sir. I appreciate all you do and appreciate your time for joining us today.

Perry Carpenter (56:59.47)
Yeah, thank you. I appreciate you.

D. Mauro (57:01.258)
Thanks, man.


People on this episode