Cyber Crime Junkies
An entertaining and sarcastic podcast about dramatic stories in cyber and AI that actually help people and organizations protect themselves online and stop cybercrime.
Find all content at www.CyberCrimeJunkies.com and videos on YouTube & Rumble @CyberCrimeJunkiesPodcast
Dive deeper with our newsletter, THE CHAOS BRIEF, on LinkedIn and Substack.
Deepfake Attacks, Voice Cloning, and Why AI Social Engineering Works
Why modern cybercrime targets trust, urgency, and decision-making instead of systems
Traditional fraud used to feel obvious: misspellings, odd links, weird emails.
Now? Deepfakes embed perfectly familiar voices and faces into your feed — or your inbox.
Listen to Perry Carpenter on this. If you love this topic as much as we do, grab Perry's incredible book FAIK, available everywhere. Here's a non-affiliated link: https://www.barnesandnoble.com/w/faik-perry-carpenter/1145888787?ean=2940190971293
Don't miss the upcoming Deepfake Webinar! You will see how you can test out your own deepfake to better understand them. https://info.knowbe4.com/new-deepfake-training-na?partnerref=blog
Question? Text our Studio direct.
Growth without Interruption. Get peace of mind. Stay Competitive-Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com
🔥New Exclusive Offers for our Listeners! 🔥
- 1. Remove Your Data Online Today! Try OPTERY Risk Free. Sign up here https://get.optery.com/DMauro-CyberCrimeJunkies
- 2. Or Turn it over to the Pros at DELETE ME and get 20% Off! Remove your data with 24/7 data broker monitoring. 🔥Sign up here and Get 20% off DELETE ME
- 3. 🔥Experience The Best AI Translation, Audio Reader & Voice Cloning! Try Eleven Labs Today risk free: https://try.elevenlabs.io/gla58o32c6hq
Dive Deeper:
🔗 Website: https://cybercrimejunkies.com
📰 Chaos Newsletter: https://open.substack.com/pub/chaosbrief
✅ LinkedIn: https://www.linkedin.com/in/daviddmauro/
📸 Instagram: https://www.instagram.com/cybercrimejunkies/
===========================================================
Chapters
00:00 Why the Next Breach Won’t Look Like the Last One
01:44 Welcome to America in 2026
03:00 Why Deepfakes Are Exploding Right Now
05:20 Yes — A Voice Can Be Cloned in Seconds
07:30 What Deepfakes Actually Are (No Hype)
09:35 Legitimate Uses vs Weaponized Intent
12:45 Why Deepfake Companies Stay Quiet
17:05 Faces and Voices Are the New Attack Surface
18:30 Stop Asking “Is This Real?” Ask This Instead
20:55 Why Spotting Artifacts No Longer Works
23:45 The One Question That Cuts Through Deepfakes
25:50 What Leaders Should Actually Do Today
27:45 Old-School Security Still Wins
30:05 Why Detection Tools Are Losing the Race
33:10 Romance Scams, HR Fraud, and Deepfake Hiring
35:45 10 of 15 Job Candidates Were Fake — Here’s Why
38:40 Fake Workers, Real Access, Real Damage
42:20 Deepfakes as Multi-Stage Attacks
44:00 How KnowBe4 Is Training Against Deepfakes
47:10 Final Takeaways for Leaders
speaker-0 (00:00.086)
Ever wonder why most organizations don't see breaches coming? Why it's only after the fact that they start to realize, hey, this is not what we prepared for — dang, we were not ready. What happens when the next cyber attack doesn't start the way you thought it would, but starts with a voice you trust or a face you recognize? Synthetic media.
Deepfakes are exploding, and not for the reason you think: it's the human psychology factors involved. They're becoming standard operating procedure in organized cybercrime. And nobody understands this better than today's guest. Perry Carpenter has spent decades studying deception and human behavior. While most people are still treating deepfakes like a novelty, Perry has become the world's most authoritative expert. Today we're discussing a few things you haven't seen anywhere else. Watch for the nuances in the conversation. You'll see why deepfakes are causing damage that is anything but fake. And once you find out the latest and where it's headed, you'll stop asking whether a video or an audio clip is real, and start asking the question that really matters. This is Cybercrime Junkies, and now the show.
speaker-0 (01:43.18)
Welcome to America in 2026. Today we explore the gap between where deepfakes stop being impressive visually and technically and start becoming a real part of your risk management strategy, because the damage they cause is anything but fake. Our guest is the legendary Perry Carpenter, strategic leader at the well-respected global security awareness company KnowBe4. And while most people are still treating deepfakes like a novelty or a content tool, attackers are treating them as part of their tactics, techniques, and procedures. Perry has spent decades studying human behavior, deception, and how social engineering actually works. He is the author of the authoritative book FAIK — which is right there on my bookshelf — on the subject of deepfakes, which we dove into in our last episode with Perry earlier this year. There will be a link to that episode in the show notes and a link to the book FAIK as well. Deepfakes have evolved since then, and so has Perry's insight and expertise. Mr. Carpenter, welcome to the studio. Thank you so much for joining Zach and me.
Happy to be here. Thank you for having me.
Thanks.
speaker-0 (02:55.128)
So let's kick it off with the fast evolution of deepfakes. We saw this year that, statistically, they rose dramatically. What are you seeing in terms of trending? You and Roger Grimes and all the leaders over at KnowBe4 really see a lot of these trends on a global scale firsthand. Walk us through what you're seeing and then we'll bounce off of that.
So from a percentage perspective, I think we've got a couple more years where we see those staggering numbers — like the 2,000% increase reported this last year that we were talking about before we hit record. Some of that is just a function of the fact that we started with small numbers; any kind of percentage counted from small numbers is going to look exponential like that. But that does not negate the fact that this is increasing rapidly, and we are seeing and hearing more about these every day. The other thing that's compounding this is that the tools have gotten so cheap and so easy to use. And not only are they cheap and easy to use, they're shockingly good and effective at the thing they're trying to do — which isn't necessarily a bad thing. It's not necessarily a bad thing to impersonate a voice or impersonate a face. The bad thing comes from the intent of the person who uses it and how they weaponize it.
That's a very good point. So let's be upfront about that, because in general we see commercial uses of the technology. It can help in customer service, in replicating team meetings — there's a whole bunch of very practical, helpful ways to rinse and repeat yourself, clone yourself, leverage the technology. Has it gotten so good because the sample size needed has gotten so small now? It used to be that they needed hours and hours, if not days, of samples in order for the GANs — the technology — to match itself to where it becomes almost undetectable by the human eye and ear. Is it because now they can do it with very small samples?
speaker-1 (05:26.294)
I don't think there's one single thing we can point at for why it's gotten so good. Some of it is just a function of more and more training data and higher and higher levels of compute behind it — that always helps. But the other thing is that there are models right now where you can get a very good representation of somebody's vocal tone and texture through four seconds of voice. You do not need much anymore.

speaker-0 (06:05.312)
Seconds? Oh my. No, I was gonna— no, please, go on. You were gonna make a point.

Yeah, I was going to say that the thing to keep in mind is that what gets replicated that quickly is the tone and texture of somebody's voice — you can hear them say a couple of words and it can sound shockingly like that person. What it doesn't get right is the cadence of that person's voice: the words they would choose, the disfluencies they have, the little stutters and stammers. It can't sample that effectively. For that you still need minutes — somewhere between thirty seconds and two or three minutes starts to make it more and more effective, and of course some samples still train on a couple of hours. What I've seen over the past three years is that the gap between a professional-grade voice clone built from a three-hour sample and the quality of a two-minute voice clone has gotten really small. Those are really close right now, and many times I prefer the two-minute clone of somebody's voice to one that's trained on three hours.
I take it you've sampled almost every one that is available in your work. And I know that you've sampled a lot of deepfake detection tools — we can get to that topic in just a bit. But why don't we define terms for people that might not even know what we're talking about. How would you define a deepfake?
That's a good question. Some people, when they think about a deepfake, are thinking just about a video deepfake or something like that. But if we just go back to the two words: "deep" is rooted in deep learning neural networks — that's your artificial intelligence stuff — and the word "fake" is just fake. So it's essentially a fake that has been created through artificial intelligence or some kind of computer generation. To be a little more academic about it, the term we would use as a synonym — a synonym, not a cinnamon; that's delicious — would be synthetic media. Synthetic as opposed to natural: something wholly generated by a computer, or significantly touched by some kind of computer. That's the broader term. When a lot of people use the term deepfake, they're typically also talking about the negative aspect, the deceptive use of it. Deepfake doesn't have to be used that way, but there's that slant, especially among cybersecurity professionals.
Yeah, absolutely. I mean, you can Google some of these vendors — 15 or 20 of them will come up, and a lot of them have free trials. HeyGen, I think, was one of the first ones that I used. They've gotten very good, it's not expensive at all, and it's just shocking.

speaker-0 (09:36.736)
If you go to their sites, they have samples of valid reasons to use this technology, right? There is a valid use. It's almost like we're dealing with weaponry of some kind — we have to respect the weaponry. It really is that way to me, because it's a hot-button topic. It evokes emotion, and yet we can't blame the technology.

speaker-0 (10:05.902)
You know what I mean? We may want to blame the technology, but I don't know that we can, because it's the intent behind it. It's the intent behind its use.
Right. Yeah, there was a show on Apple TV called The Morning Show — it has Jennifer Aniston and Reese Witherspoon, I think. In this last season they actually portrayed a deepfake incident on screen.

Yeah, I remember that. Yeah.

speaker-1 (10:44.202)
The network Jennifer Aniston's character was working for was, I believe, going to be doing something with the Olympics, and what they wanted was to be able to translate her voice and her face doing her broadcast into other languages — HeyGen-like technology. That's a very legitimate use, because you trust that voice, you want to blast it out, and you're transparent about it. Of course, everybody knows that newscaster doesn't really speak forty languages. But then she sees the same technology weaponized against her, where somebody makes her say something she never said, to set her up as a scapegoat or a controversial figure in a situation that never happened.
And that's really where we're at today, isn't it? Because there is a great use. Think of this episode right here: if we could replicate it and deepfake it, we wouldn't just have to speak English and have it subtitled or translated into another language. We could actually have us all speaking in that language, which resonates better for people and expands the reach.

Exactly.

speaker-0 (11:50.634)
You can think of so many logical, practical, innocent uses. But then the reality comes in, and that is threat actors — the way most people think of hackers. People always say hackers are bad, and it's like, no, no, hacking is just a skill set. It's the threat actors, the criminals, who are leveraging the technology instead of just robbing a bank. They're going to take it and they're going to impersonate, commit fraud, commit personal attacks against individuals.

Yeah, it is just a new tool.

Yeah. Go ahead, Zach.

I'm curious whether you're aware of any reports from the companies that are producing these tools — whether they have estimates or any metrics for benevolent use versus malevolent use of this technology. It probably behooves them not to share that data if they have it, I would assume.

That's a really good question. I've not seen reports from them.
speaker-0 (12:48.928)
It'd be nice if you could just...
speaker-1 (12:55.522)
That doesn't mean they don't exist; I would think I would have found them if they did, though. Contrast that to Anthropic and OpenAI — they produce fine-grained reports anytime they find somebody significantly weaponizing their systems, and they throw research out about it. I haven't seen that with ElevenLabs or HeyGen or Synthesia, or some of the other ones that do lip syncing, like Pixverse and Sync and others. Maybe after we're done I'll do a quick Google search and see if I've missed something, but right now nothing comes to mind — I've looked and I haven't found anything. And that brings up the comparison I was going to make, which is that these AI LLM companies have been, I think —
speaker-1 (13:44.12)
Maybe not very, very transparent, but at least presenting some transparency in sharing the use cases, good or bad, of their technology. I've not seen that from these deepfake tools. So here's my cynicism on why that is the case. Number one, when it comes to voice and video and things like that, there's a huge backlash from the voice actor community and the traditional actor community — people who do not want that kind of digital representation of themselves, especially without any kind of consent or royalty behind it. With the large language models it works differently: when Anthropic or ChatGPT puts out a big report saying "look at how our tool was weaponized," it can almost read like a press release, because it shows the power of the tool that's there. And when we're talking about companies investing in these frontier model companies because they're trying to build towards artificial general intelligence, then showing that power — regardless of whether it's being used for curing cancer, or to create a nuclear weapon, or to enable hackers to do something — is showing how powerful the thing is, no matter what. So if they can put a positive spin on it — "look at this really unexpected, powerful use of our stuff, but we, the good guys, were able to stop it" — it's the best of all worlds for them. I don't think it works the same when it comes to voice and video and things like that.
You don't think they need to prove the concept? That work is already done.

Yeah, well, I think the societal backlash against the misuse of somebody's face or voice is different than the misuse of a tool that just looks like a tool.

I wonder if that's where the negative connotation everybody inherently seems to have around deepfakes comes from — there's no press-release-style, controlled demonstration of the technology for good.

speaker-1 (16:12.96)
Yeah, because I think everybody here knows that even the good guys in this — HeyGen, Synthesia, ElevenLabs — are all companies trying to build safeguards into their products, because they're trying to enable corporations to do things like learning and development and expanded communication. So they're trying to do it in ways that are easy for the companies that want to use them: just enough friction to create security, but not enough to keep every curious bad guy out. And we all know that even with guardrails and even with security, there's going to be a good number of bad actors who are able to jiggle the doorknob just the right way to get past all of the defenses. I don't think they necessarily publicize those numbers right now, because I think they would get more negative PR — it feels more viscerally anti-human when those things happen than when you're just talking about a tool that deals with language or code.

That's a great point.
speaker-0 (17:09.932)
So Zach, one of the other things that you and I have talked about: I have a belief that business leaders' faces and voices are now a new attack surface. We've seen a dramatic rise in deepfakes, and I believe they're going to become pretty standard in the tactics and techniques that are used. Meaning, we all know we're being socially engineered when we get a phishing email, or a vish — a voice solicitation — or a smish, that text, right? We've been conditioned to say, here's another one, or, I'd better watch this — could this be real? And we're getting used to that. I believe we're going to start to see it become very mainstream. When I presented that, you had a very good counterpoint, and that was the human psychology of AI saturation. Why don't you share that? Because I would love to hear what Perry has to say.

speaker-0 (18:16.47)
I can't remember anymore. If you're referring to my inherent distrust of any video, regardless of what it is — there's so much AI-generated content on social media that so many people are getting accustomed to going, okay, that's cool, or bad, or good. They recognize the emotional response, but they're just not even sure. So before they act on it or share it or do anything, they're going to verify it. And you think that might be helping us. Two years ago, the game was: can you decipher or determine whether something is artificially generated or natural? For me personally, at least, that ship has sailed. The tech is so good that it's no longer a worthwhile endeavor to try to discern between the two. So we keep going back to intent: why is this person — or video, or voice, whatever it is — speaking to me in this way? What is the intent behind me seeing this content? That really is the gauge of legitimacy, much more than looking for artifacts or anything else that might tip you off.
speaker-1 (19:37.312)
Yeah, that's the point I've been making for probably a year and a half now. About a year and a half ago there was this crossover point where the tech started to get good enough that many of the deepfakes out there were passable. Before that, the times you could see something was obviously a deepfake were usually because the person that created it didn't know better tools existed, or they were lazy, or ignorant, or something else — it came down to the attacker not being as motivated or as resourced as they could have been. Now we're at the point where even a casual user can create something that can bypass most of our cognitive defenses, especially if they have a really good story behind it. What's the motive? How are they weaponizing the thing they're putting out there? What's the emotion behind it? Does it sit within a social narrative that's already happening? How is it weaponizing an us-versus-them mentality, or is there some kind of authority lever or urgency lever there? All of those traditional social engineering things make any potential defect within a deepfake just kind of glide past our defenses. And the thing I've noticed over and over again is that when you tell somebody that something is a deepfake, they'll generally go —
speaker-1 (21:10.862)
"Huh, yeah, it's actually obvious, because this person's blink rate is off," or something like that. What they don't realize, though, is that many times the thing they're pointing at and saying "well, it's obvious because..." isn't actually a sign that it's a deepfake. If it was harvested video that was re-lip-synced, that blink rate was already there in that person. What they're trying to do is comfort themselves and say, "I'll know that something is fake because I will always see something off" — and in reality, they won't. We won't. That being said — and I almost hesitate to talk about tells, because anytime you mention one there's going to be a workaround, or within a month it's going to be better and that tell won't work — if I hear a video that was made by Sora or by Veo, there are certain qualities within the voice that jump out at me, as somebody who does audio work and lives with headphones on. There's a tell within the voice created by Veo 3, and there's a crunchiness in a voice created by Sora that isn't there in a full-fidelity human voice. That being said, those tells are going to go away, and it's going to happen very, very quickly.

There are also things like watermarks that these companies try to embed. Those are super easy to strip out — if you just type in something like "Sora watermark remover," a hundred different websites will come up that let you do that. There are also workarounds within the tools to download things without watermarks. So never rely on a watermark; even if the watermark is supposedly embedded in the data, don't rely on that, because there's a way to launder it out. We are quickly entering — I think we are already in — a space where synthetic media and real media are so intertwined that it comes down to asking the question: what is the thing I'm seeing trying to get me to do or believe? That's going to be the ultimate question, and I think that's what Zach was trying to hit on as well.
Yeah, like, what is it asking me to do? Is it asking me to do something? Right there, hopefully, the contextual awareness will bubble up to the top and somebody will make sure they go verify before they act on it.

For me, it's: what does it make me feel, and then what does it want me to do or believe? Which is a horribly, horribly skeptical way to approach the digital world, but it's real. And there are some innocuous things, right? Like cats jumping on trampolines — it doesn't really matter whether that's real or not. I can live with that.

speaker-0 (23:49.454)
Right.

Now, the phrase that I'm seeing over and over in social media comments, whether things are real or obviously fake, is just "haters will say this is AI." I think that's indicative of the moment: people really don't know the difference, but they want to comfort themselves with a phrase like that, in a funny, self-aware way. I particularly like when that phrase is applied to something so grotesquely AI that there's no question.

Yeah, exactly.
So if I'm a business owner, or a leader in an organization, or somebody in rural health care or in law, and we don't have our own team of deepfake detection experts on staff — what are you seeing organizations do, or where would you like to see organizations head? What practical steps can business leaders and organizational leaders take to create awareness for people, other than educating them that deepfakes are real and honing in on the fundamentals of social engineering — pausing, verifying, independent verification? I like what you've said when you've been interviewed about code words: having a non-technical way of verifying whether the person asking you to do something is actually that person, something only the two of you would know.
speaker-1 (26:00.846)
Mm-hmm. Yeah, I think you hit on the gist of what I would say. Realize, number one, that we're in a world where, just based on the artifact people receive — the video clip, the audio clip, the email, or whatever — you can't really tell the difference between what's real and what's fake. We're also in a place where many of the tools we would hope would tell us the difference are not as accurate as they need to be. Here at the end of 2025 going into 2026, they're not there. So the things we go back to are very old-school security principles that have been around for decades, if not hundreds of years. How do we get mutual trust around something? That's things like shared code words, or even just saying, "Hey, what was that book you recommended to me recently?" — which was actually the thing that stopped a fraud attack. Asking that kind of question can untangle an attack when somebody is using a deepfaked voice or face. So: shared knowledge, plus processes and procedures that slow things down.

speaker-0 (27:18.69)
That was very smooth, wasn't it? He had a sense something was off — his spider sense was going — and he goes, "what was that book you just recommended recently?" I mean, that was just brilliant.

speaker-1 (27:58.722)
Yeah. And that's perfect, because we talk about setting up code words and things like that, and that sounds really good, but most people aren't going to do it — it feels weird in a lot of contexts. But realize the why behind the code word: it's shared knowledge, it's verification, so there's always something we can go back to. We saw that in old spy movies or sci-fi movies where people might believe a host body was invaded by an alien or something, and they ask, "where did we first meet twenty years ago?" or "what was the thing you said to me in the bar?" — that kind of shared knowledge. Getting back to some shared knowledge is very key. The other thing we have to figure out how to rely on, and get more comfortable with, is to just slow the heck down. Don't just go with the emotion — the thing social engineers have always relied on is that knee-jerk decision. We have to slow down. We need to find the places where we might need to inject extra steps and processes and dual verification and those kinds of things, at least until the technology starts to improve. Maybe at some point detection becomes more reliable.
I mean, isn't that essentially the same best practice as with phishing, other than "don't click on the link"? Besides not clicking, it really is pausing, verifying, having a second set of eyes, independent authorization. Call your CEO — he doesn't want you buying gift cards over at Walmart. He's really not asking you to do that, or she isn't, right?

speaker-0 (29:24.706)
Just call them.

The funny thing is, it feels really underwhelming to give that answer when people ask, right? They'll go, "well, what is the high-tech way for me to figure this out?" And you're like, it's actually the same thing we've been talking about for decades.

Exactly. Have you heard of KnowBe4? We have a whole content package that will train everybody on it.

Well, and not only KnowBe4 — there are other great competitors out there. There's an entire discipline that's been focused on this.

speaker-0 (30:03.982)
There are so many, but since you're on, yeah.
You mentioned detection tools maybe one day getting more efficient and effective. Do you have any hope that that will happen?

It's an arms race, so there's always going to be an advantage for the people at the very tip of the spear with the current technology. That being said, there are people who lag behind, who grab the cheapest, easiest-to-use, most accessible versions of the technology, and maybe those are the things the deepfake detectors get better at catching. Hopefully — fingers crossed — we get there. The other thing that's starting to work out better for some of the detector companies is that they were trying to be all things to everyone, and they've started to realize they can't be.

How so?

When you say deepfake, you could be talking about an image, a voice, a video, or even text generation, and so on. It's really hard to be really good at all of those different varieties of deepfakes. Also, it's one thing to be good at detecting a recorded, packaged MP3 if it's just an audio file; it's another thing to detect traces of something being synthetically generated while it's being live streamed, like over a Zoom call, because different artifacts exist within different tool sets.

speaker-1 (31:36.372)
So companies are going to double down on one or two things. They'll say, "we're really good at voice" or "really good at captured video," and there will be others saying, "we're really good at real-time deepfake detection on Zoom or Teams or other communication and streaming platforms." Because in that case you're looking for a number of different things: liveness detection, frame drops — how many are being dropped, what the conditions around them are — whether the audio matches the room they're in. All of those things can be stacked up next to each other, and you can even do real-time knowledge checks and things like that. When you start to narrow down the parameters of the problem you're trying to solve, you have a much better chance of increasing your efficacy against those specific use cases you're working towards. But nobody can be a be-all, end-all deepfake detector. I think that's a recipe for failure for many companies right now.
When you were on CNN, you spoke about that. You had run samples through some deepfake detection tools for the interviewer, and you explained to her, look, this one claims to detect everything, but I just made this deepfake and it's telling me it's fairly accurate and real. I thought that resonated with a lot of people, which was good. One thing I was— yeah, go ahead.

I was actually really frustrated in that moment — not with CNN, but with the fact that every deepfake detector I tried — and that was months ago, so I have repeated the test since then — every one I tried had fundamental errors. If I intentionally tried to bypass them, I could. And many of the unintentional things broke them too: I would test a real video and the real video would come up as fake — well, that's bad — and then I would make one that was obviously fake and it would show up as being real. Some of these were plugged into systems like online dating platforms. And if there's a system plugged into an online dating platform, I want that thing to be reliable. I don't want it to be as easy to bypass as I found it, and I'm not an expert in bypassing these things — I just have a little bit of curiosity and time on my hands.
Right. I mean, and that brings up the scenarios in which this technology is being used, right? Romance scams, sextortion, elder fraud. You hear about calls from people impersonating children to their parents, saying, you know, I got in some trouble, I'm stuck in court, or I'm in jail. Then they hand the phone over to the "public defender," who says, well, if you don't provide bail right away, they're going to have to stay in custody until Tuesday when the judge gets back, or things like that. People's emotions are racing. And again, it's the same advice when that happens: context, verify. Figure out where your child actually is, reach out to your child directly. Be like, hey, are you in jail, buddy? They'll be like, no, I'm actually over at Todd's house, right down the block. So things are okay.

We have seen it ourselves. Zach and I, in our other life — our actual livelihood in the MSSP and MSP world — have come across clients that have experienced deepfakes, which was shocking to me, because that didn't happen years ago. One was a small organization, 20 to 25 employees.

speaker-1 (35:21.186)
Yeah, yeah.

speaker-0 (35:45.128)
They were hiring for somebody who could help them develop SQL databases and such — it wasn't really code, but the point is they didn't have to hire locally in person; they could expand and get better talent. So the interviews were done remotely, and they were bombarded with really good resumes that hit the mark almost too perfectly, which set them off a little bit, right? And then the interviews of those people, they said, looked completely believable. If it weren't for their awareness of the context, there would have been no way for them to tell from the video alone that it was a deepfake. But the questions and how they answered — how they paused a little bit before answering, and then gave a very generic answer when the interviewers were looking for specifics. They were telling us that 10 out of the 15 interviews they had — 10 out of 15, Perry — were fraudulent. Fake. And they were just really concerned. They're like, how are big HR and recruiting firms managing this with remote employees? I mean, clearly policies and independent verification — making sure somebody verifies that the person they see on that ID is the actual person they're hiring. But what else are you seeing? I've got to imagine that when people encounter deepfakes, they're coming up to you and going, "let me tell you what happened." Are you hearing anything in particular in the HR space, in the recruiting space? Because it seems to me that space is so ripe for things like this.
speaker-1 (37:43.158)
It is. It is. There's been a ton of discussion about it. About a year ago, I think we at KnowBe4 talked about one of our own hires who was actually on the job for about five minutes before we detected what was going on. That was our little experience with the North Korean fake-worker problem, and our real intro into it. The only deepfake involved in that case was the photo they submitted to HR, so it was very traditional — but it really woke us up to the prevalence of this. And since then, and since talking about it, we've been hearing from company after company after company: funny, this has happened to us too — and sometimes in much more devastating ways. And to the extent you're describing, where the organization you spoke to had 10 out of 15 of their interviews turn out to be fraudulent — that's like two thirds of the interviewees.

That's what struck me. It wasn't just one. It was a lot.

Yeah. Well, and I know that we and other tech firms, whenever we put out a job, find it not uncommon at all to have multiple applicants for that single role who are fake workers. And let's take the fake-worker case using a deepfake and say that's one use case. But this is also something people are going to be dealing with over and over because of the prevalence of large language models. Take a phone, turn on ChatGPT where you can't see it in frame, and as I'm talking to you and you're asking questions, I've set it up so it's receiving that voice input and formulating answers that I'm just reading or riffing off in the moment. So regardless of whether you have an actual deepfake pretending somebody is somebody else, or a real person with a real identity who really wants that job but is faking their knowledge, we're dealing with a lot of the same things: what are the compromises to our organization, and how do we need to tighten things up so that the person we're hiring is who they say they are, and the knowledge they have is the knowledge they should actually have — it lives in their brain, not just on a screen.
speaker-0 (40:20.3)
Yeah. And to me — I was in law enforcement before being in cybersecurity — it still amounts to intent. The modus operandi changes, their habits change, the technology allows them to harm people at scale, but we're still dealing with the same thing we've been dealing with for hundreds or thousands of years: fraud. A romance scam is fraud. Somebody impersonating your child and asking for money is fraud. That's not the truth, right? And when we are leading an organization, it's really hard to figure out what policy should be set and how to raise awareness of this. Because of social media and the advances in technology, I think people feel that a deepfake is something very special or very advanced — so we'd need something very advanced to counter it — but the technology has advanced so much that it's becoming commonplace.

I want to ask you about what KnowBe4 is launching, but first: are you seeing, or do you have a feeling, that it's going to become very common for this to look like a multimedia campaign? There might be a text, one or two emails, a voice call, maybe a calendar invite to get on a Zoom or Teams call, really targeting an organization. Granted, that would be very similar to spear phishing — actually going after that organization as opposed to blanket phishing. Do you believe that's where this is heading, from where you sit?
speaker-1 (42:28.526)
Yeah. So when it comes down to it, the way I talk about it is that every deepfake — the video clip, the audio clip — is just one little artifact. That little thing is more or less meaningless until you give it context; its power comes from being placed within a story. So when you talk about a multi-stage attack, the deepfake is the thing that gets queued up by everything else. It's the trust that's given to the menu that wants you to scan the QR code. It's the authority in the email that's supposedly coming from your CEO, asking you to click on something and view a video of them asking you to do something. There's always the question of how you take that artifact and imbue it with that additional context — those additional stages, those additional asks that you want. Those are the things that are going to separate the deepfake script kiddies from the APT-level attacks. And that's the way I think about it right now.
All right, so.
speaker-0 (43:40.878)
Perry, one of the things we wanted to ask you about: can you explain — and I don't know how much you're able to share; obviously don't give away all the good stuff — can you walk us through the rollout of the deepfake technology that KnowBe4 is including in its security awareness program for organizations? We're excited about sharing that with our clients, so I would love to hear about it from you.
Right. So that's a new addition, and it's one that customers have been asking us to do for a while. We were honestly not the first in our market segment to deliver that functionality, but I think we're the most intentional about how we delivered it. One of the core principles within KnowBe4 is that we want to empower the admins to do things themselves, and we're not going to roll out anything that isn't scalable. So it had to be scalable, it had to be usable, and it also had to be safe. By that I mean: anytime you're creating a deepfake, you're essentially creating this little MP4 video file, and if you're not intentional about it, the vendor enabling somebody to create it is potentially handing them a weapon. If that file can be detached from the training platform and put out on Facebook, will it make a company look bad? Will it show somebody saying they're about to do a merger? That would be bad — if it's a big-name company and now you've got the CFO saying they're about to do a merger, that could impact markets. So we didn't want to give people the ability to accidentally have those situations happen. We have a very tightly controlled ecosystem that allows admins to create training deepfakes of their own executives in safe and effective ways, but not ways that could be weaponized or become disinformation out there.

The other thing I'll put a fine point on: when you look at the competitive landscape around KnowBe4, some of the competitors that are helping people do deepfakes may enable some of that disinformation effect if it were to be weaponized. I don't know if that's happened, but I know it could happen in the future. The other difference KnowBe4 has is scalability, and putting this in the hands of the admin. Generally, when competitors do deepfakes right now, it's almost a custom engagement: you have to email the vendor a video file saying, here's my executive and here's the scenario we want, then wait a week or two of bespoke engagement to get it back, and that can be frustrating. What if the first time it comes back it's not what you want, and it's taken two weeks? What if you don't want to pay an extra however-much to do that? So we wanted all of that to just be in the platform, the same as creating a phishing template and those kinds of things.

I think we did pretty well. As with any rollout, there's definite room for improvement, and we're making that improvement. But as far as checking the boxes for the things we wanted to accomplish, I think we did a really good job, and clients are really happy with it.
speaker-0 (47:08.718)
That's excellent. Yeah, and you have other deepfake awareness content as well — it's part of the curriculum that we share with our clients, so that's phenomenal. Perry, thank you so much for your time and your insight. Zach, do you have anything further?

No. Perry, this was great — thanks for your insights and transparency. A really productive conversation.

I really appreciate it. I love being on with you guys.

We really appreciate all your insight. We will have links to the webinar and links to the book FAIK, which is outstanding — if you want to learn about deepfakes, check out the book; nobody knows this better than Mr. Carpenter. Thank you so much. We really appreciate your time.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.