Cyber Crime Junkies

New Ways to Reduce Risks from Deep Fake

April 06, 2024 Cyber Crime Junkies by David Mauro Season 4 Episode 48

NEW! Text Us Direct Here!

Are you real? What will Deepfake do to Security Risk? Shocking Examples.

Summary

In this conversation, Dino Mauro and Mark Mosher discuss the rise of deepfake technology and its implications for individuals, families, and organizations. They explore the real-world applications of deepfake, the advancements in technology, and the dangers it poses in terms of cybercrime. The conversation highlights the use of deepfake in cyberbullying, fraud, and social engineering attacks, as well as its potential to compromise organizations through fake employment interviews. The hosts also touch on the involvement of federal agencies and Congress in addressing the deepfake threat. The conversation emphasizes the need for awareness and vigilance in the face of this rapidly evolving technology.


Topics: 

  • How Deep Fake Videos Increase Security Risks, 
  • why deep fake videos are dangerous, 
  • why deep fake videos increase security risks, 
  • new ways to reduce risks from deep fakes, 
  • dangers of synthetic media, 
  • new dangers of synthetic media, 
  • artificial intelligence risks in cyber security, 
  • new ways to reduce risk from deep fakes, 
  • new ways to reduce risk of deep fakes, 
  • how are deep fakes made, 
  • how are deep fake videos made, 
  • how are audio deep fakes made, 
  • how is ai making it harder to detect deep fakes, 
  • how ai is making it harder to detect deep fakes, 

Chapters

00:00 Introduction: The Need for Secure Managed File Transfer
05:12 Chapter 2: Deepfake's Real-World Applications and Damages
12:58 Chapter 3: Deepfake in Fake Employment Interviews

Accelerate your CMMC 2.0 compliance and address federal zero-trust requirements with Kiteworks' universal, secure file sharing platform, built for every organization and especially helpful to defense contractors.

Visit kiteworks.com to get started. 

We're thrilled to introduce Season 5, Cyber Flash Points: short stories that explain what the latest tech news means for online safety, spread security awareness, and highlight the importance of online privacy protection.

"Cyber Flash Points" – your go-to source for practical and concise summaries.

So, tune in and welcome to "Cyber Flash Points"

🎧 Subscribe now http://www.youtube.com/@cybercrimejunkiespodcast and never miss an episode!

Follow Us:
🔗 Website: https://cybercrimejunkies.com
📱 X/Twitter: https://x.com/CybercrimeJunky
📸 Instagram: https://www.instagram.com/cybercrimejunkies/

Want to help us out? Leave us a 5-Star review on Apple Podcast Reviews.
Listen to Our Podcast:
🎙️ Apple Podcasts: https://podcasts.apple.com/us/podcast/cyber-crime-junkies/id1633932941
🎙️ Spotify: https://open.spotify.com/show/5y4U2v51gztlenr8TJ2LJs?si=537680ec262545b3
🎙️ YouTube: http://www.youtube.com/@cybercrimejunkiespodcast

Join the Conversation: 💬 Leave your comments and questions. TEXT THE LINK ABOVE. We'd love to hear your thoughts and suggestions for future episodes!

Takeaways

Deepfake technology is rapidly advancing and poses a significant threat in terms of cybercrime.
Deepfake can be used for cyberbullying, fraud, and social engineering attacks.
Organizations need to be cautious of deepfake in the context of fake employment interviews.
Federal agencies and Congress are actively addressing the deepfake threat.
Awareness and vigilance are crucial in protecting against deepfake attacks.



Dino Mauro (01:13.646)
Hi, Cyber Crime Junkies. So in this episode, we're going to discuss deepfake, also known as synthetic media. We're going to talk about the real-world applications, how easy it is to generate these, the advancements in technology and how it's rapidly evolving every single week, and the dangers that it poses to each one of us individually, our families, as well as the organizations and the brands that we

work for or build on our own. And it has gotten the attention of the US Congress as well as international law enforcement. So let's check it out and I'm interested in hearing your feedback on it because it is absolutely the most advanced threat and it is the cyber crime tactic of the future.

Dino Mauro (02:14.574)
So, hey, welcome, Mark. Welcome to the studio, buddy. How are you? Wonderful, David. How are you? Doing fine. Doing fine. So, you know, have you ever heard of the term synthetic media? Synthetic media? Yes, I've heard the term, but I would absolutely struggle to define it for you. OK, so you may not know the term synthetic media, but you likely have heard of the phrase deepfake from social media,

TV, entertainment news. Well, today it's far beyond Photoshop and celebrity impersonations. There are real cyber crimes which have occurred with devastating consequences in real life. In fact, the U.S. military, Congress, and federal law enforcement have now all gotten involved. Today they certainly know all about it and they're paying careful attention, monitoring advances in synthetic media,

deep fake, things like that regularly. And it's time our listeners are aware of it too, so they can protect themselves, their families, and the organizations and brands that we all serve. So what is it and how is it used to commit cyber crimes? Great question. So back in March 2021, the FBI put out a public service announcement warning. They warned private industry, US citizens, and their families,

advising us of the very real threat that foreign governments, Russia, China, North Korea, others are putting out synthetic profile images, videos, and live disguised media, creating deep fake journalists, media personalities, IT engineers, business owners, and social media influencers across lots of platforms, even on LinkedIn.

and connecting with people with full profiles, lots of connections, and these people weren't even real. Their goal in this instance was to spread anti-U.S. propaganda for political purposes. Later on in 2021, and then earlier this year, the U.S. military, Congress, and U.S. national security teams have been addressing deepfake aggressively, and congressional hearings have even been held on it. Have you seen any of those?

Dino Mauro (04:42.734)
Wait, now they're actually having congressional hearings? It can't be, can it? Artificial-intelligence-created synthetic fake people? Yes, they are. And there are even future ones scheduled. What's key to understand here is synthetic media, the advancement in technology there, it's just getting started, right? This technology is at its inception, or incubator, life stage.

But it's growing exponentially with technology advances every single week. It caused, check this out Mark, it caused $220 million in damages last year.

220 million. Yeah. And this year it's expected. You think it's going down? No. If you were a betting person, would you think it's going down? I would bet you dollars to donuts it's going up. Right. This year it's expected to triple. Okay. And this, my friends, is the true cybercrime story of deepfake. Lucky to work for a great group of people you really believe in. Find yourself making an impact.

Technology is a river that flows through every aspect of an organization, and today is different. We put ourselves and our organizations literally at risk of complete destruction every single time we get online. One click, one distraction is all it takes. Hi, Cyber Crime Junkies. This is your host, David Mauro, along with co-host Mark Mosher. Come join us as we explore our research into these blockbuster true crime stories. Along with interviews of leaders who built and protect

great brands.

Dino Mauro (06:36.27)
So in this part of the episode we discuss generative adversarial networks. They're called GANs. Don't get lost in the technical terms. The point is this is how deepfake and synthetic media are created. It's basically two computers fighting against each other until things are undetectable by the human eye. So let's figure this out. It's pretty shocking.

So generated or generative AI refers to programs that make it possible for machines to use things like text, audio files, and images in order to create content. And synthetic media is really like a catch -all term for the artificial production, manipulation, the modification of data by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original media.

Synthetic media is commonly referred to interchangeably with deep fake. It's highly convincing. I mean, it's really, really good content. The ongoing development of deep fake tech makes it more and more difficult to discern between real and fake content. And that gets to the heart of security. It gets to the heart of brands. It gets to the heart of the people behind who is claiming to be shown by video or by image, right?

Well, and David, isn't this some of the information that we've seen recently from federal law enforcement around remote workers, right? Doesn't this parallel that? It absolutely does. Yeah, it absolutely does. So deepfake tech, it's relatively new, but it's an emerging fraud and cybercrime. That's right. Yeah, it's been a growing concern among consumers and organizations as it's been exploited. The deepfake technology has been exploited by criminals

to carry out social engineering attacks and to spread misinformation and fraud scams. The way it advances is through this process. Let's get a little geeky here for a second. It's called GANs. You ever heard of GANs? No, not familiar with that. That is Generative Adversarial Networks. So the key technology leveraged to produce deep fakes and other synthetic media is based on this concept of GANs, Generative Adversarial Networks. And again,

Dino Mauro (08:56.974)
Two machine learning networks are used, and they develop synthetic content adversarially, like bidding against each other, right? So the first network is the generator. Data that represents the type of content to be created is fed into the first network so it can learn the characteristics of that type of data. For example, when they want to mimic someone like a CEO, a business owner, a leader, or, you know,

socially, right, somebody that we know, a neighbor, a mother, something like that, right, they feed that information to the first machine, right, everything, every video we have, every like we have, every post we do, all of that is fed into that machine. The generator then attempts to create new examples of that data, which exhibit the same characteristics of the original data, right? So it's going to sound just like him or her. It's going to have the

inflection in the voice, it's going to have the batting of the eyelids, the nodding of the head, the movement when they move from left to right exactly like the first one, right? These examples are then presented to the second machine learning network, which also has been trained, but through a different approach to learn the same identity. The second network, which is called the adversary network, attempts to detect flaws in the first one.

and rejects those that it considers as fake. These fakes are then returned to the first network. Okay. It shoots back to the first network so that it can learn through machine learning and artificial intelligence to improve its process of correcting and creating the new data. This back and forth goes and continues until this GAN network, the combination of both of them, produces fake content.

that is undetectable by the human eye and human ear.
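The two-network feedback loop described above can be sketched with a deliberately simplified toy in plain Python. To be clear, this is not a real neural-network GAN: the class names, constants, and update rules below are invented purely for illustration. One side learns what real data looks like and rejects anything that doesn't fit; every rejection is fed back so the other side's output drifts until it can no longer be flagged.

```python
import random

random.seed(0)  # deterministic toy run

def real_sample():
    # "real data": readings clustered around 10.0 (a stand-in for genuine media features)
    return random.gauss(10.0, 0.5)

class Discriminator:
    """Learns what real samples look like and rejects anything far from that."""
    def __init__(self):
        self.estimate = 0.0  # running estimate of where real data lives

    def train_on_real(self, x):
        self.estimate += 0.1 * (x - self.estimate)

    def is_fake(self, x, tol=1.0):
        return abs(x - self.estimate) > tol

class Generator:
    """Produces a candidate value; each rejection nudges it toward passing."""
    def __init__(self):
        self.value = 0.0  # starts out nothing like the real data

    def sample(self):
        return self.value

    def improve(self, toward):
        self.value += 0.2 * (toward - self.value)

disc, gen = Discriminator(), Generator()
for _ in range(300):
    disc.train_on_real(real_sample())   # discriminator studies real examples
    fake = gen.sample()
    if disc.is_fake(fake):              # rejected fakes are fed back...
        gen.improve(disc.estimate)      # ...so the generator's output improves
# After the loop, gen.sample() falls inside the band the discriminator accepts.
```

In an actual GAN, both sides are neural networks trained by gradient descent on image or audio data, but the shape of the loop is the same: generate, reject, feed back, regenerate, until the fakes pass.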

Dino Mauro (11:00.814)
That explains the level of detail and why it actually looks real. And it does. It absolutely does. So you're saying these two are mutually teaching each other, finding the flaws in the design, feeding it back to the design, correcting that, spitting out a new product, having that other machine review it, and this goes back and forth, back and forth until it's undetectable? Correct. Undetectable by humans. By humans, right. Right. Wow. So recently in 2022,

a lot of security experts, through our research, have been on high alert for this, like the next evolution of social engineering, right? Which is the deepfake employment interviews. Yes, that's what I was talking about earlier. Yeah, we were talking about that the other day. Yeah, let's maybe touch on that. Yeah, that's the latest trend. It offers a glimpse into the future arsenal criminals can use to create the fake persona. Now in this section, we show up on the screen,

for people watching the video, the FBI public warning about deepfake and synthetic media and how it's been used to gain access to hundreds of different companies throughout the United States, and they're warning people. They've been able to thwart some of them, but some of them have gotten through. And now we're gonna walk through what that means to all of us. Every time we get on a Zoom or a WebEx, it's...

But the concern in the security industry came when a new advisory was issued just this past June. I mean, Mark, that's just a couple of months ago. And the FBI's Internet Crime Complaint Center, the IC3, warned of increased activity from fraudsters trying to game the online interview process for remote work positions. So in their warning, the FBI's warning, they gave an advisory that said,

criminals are using the combination of deep fake videos and stolen personal data, which is for sale on the dark web, to misrepresent themselves and get a job in an organization in the US. So wait a minute. So they're using social engineering, which is basic research. Yeah. And then they're using this artificial intelligence, machine learning, synthetic media types to pose as

Dino Mauro (13:28.334)
Potential workers. Yes, and they're doing it live. So what's happening is this: picture the scenario. You know, we, you and I, we've interviewed tons of people in the last year, a lot of them by Zoom. We don't know, we haven't taken the blood test or met the person in person, right, live and in front of them, right. And what's happening is the deep fake technology is having someone's different

face be on the video while we're watching it live? Wow. And federal law enforcement officials said in this advisory, they received a rash of complaints from businesses all over the US with this. So this isn't just a rare one-off scenario. It's really pretty shocking. They're trying to gain employment in a range of work-from-home positions that include IT, computer programming,

database maintenance and software related job functions. Why do you think that is? So if they're able to pass this off, they get through the interview process. Now they could be on the network. They could be IT, the database manager. That's a scary thought. Yeah, exactly. Like everything that like they have stolen credentials and the person, the interview, like they have a fake LinkedIn profile. And when they meet the person, it looks just like the person on the LinkedIn profile, right?

And when they meet the person live by Zoom, it looks just like that person, even though in real life they look totally different. I've seen some of the videos. It's shocking, right? Like you can't tell. You can't tell. What they were able to find is there were a couple instances. The reason they got alerted is because in one or two of them, Mark, somebody like sneezed in reality, but the sneeze didn't happen over the video. Wow. Well, they caught it.

They caught it, but it was just because they had an HR person that was just really keen. There was just something about this interview that just seemed off, right? But the technology itself, you can't discern it with the human eye. That's why they reported it. I mean, it's one thing for someone to kind of have a suspicion, right? Or think maybe my video glitched or something, because I heard a sneeze, but I didn't see them crouch down or put their

Dino Mauro (15:53.774)
hand in a fist over their mouth or anything like that, right? But it was to the point where they were seeing it so often that they actually contacted federal law enforcement. And this is not just one company, tens of them, like all over the place, all over the US. So the belief here is, right, they want to use this deep fake synthetic media to gain access to the company systems, right? As a fraudulent employee to capture PII private, you know,

personal information, PHI, protected health information, intellectual property, and related information never meant for the public to be used. So that way they could use it in extortion campaigns or to gain premium yield in Bitcoin by selling that data on the dark web. Once they're in, right, once they are issued a company laptop or just access to company systems, right, the platforms, the SaaS programs, Salesforce programs,

the HR programs, the coding programs, the servers, right? Once they have access to those databases, they can use things like keyloggers, like our favorite, the Rubber Ducky, things like that, and download gigs and terabytes of private data. So it's really, this is just another means, you know, we talk about the tactics all the time that hackers and bad actors use. This is probably the most unique tactic that has come down the pike in quite some time. So,

They will use this to gain access they otherwise wouldn't be granted to someone's network, with the ill intent of stealing information, proprietary information, whatever it may be, personal data, HR information, financial records, and nobody even knows. And they let them in. They actually gave them the password. Right. And some of them made it past the interviews. So what the FBI advisory said

is some of the complaints also noted that the criminals had been using stolen personally identifiable information, PII, right? In conjunction with the fake videos to impersonate applicants. But later background checks dug up discrepancies between the individual who interviewed and the identity presented in the application. So some of them, yeah, they caught it because the HR person was very observant, watching for this, right? This is why we talk about this as a sense of awareness, right?

Dino Mauro (18:19.95)
It's the state of total awareness, of being aware of our surroundings, especially by video when we're working remotely. And they made it past that. It was just the follow-up background check where they caught someone. The issue is, you know they didn't catch them all, right? Right. Well, this takes me back to your earlier comment about congressional hearings. So are...

Is law enforcement playing catch up? Are they ahead of this? Why is Congress involved? Are we prepared for this yet? What does that landscape look like? Yeah, well, the FBI security warning is one of many that's been reported by federal agencies just in the past year, believe it or not. So this is not just a very rare, limited thing. The US Treasury, US State Department, and the FBI released an official warning again in 2022 indicating that companies must be cautious of North Korean IT workers.

pretending to be freelance contractors in order to infiltrate companies and collect revenue for their country. Because remember, we have sanctions against North Korea. But if they can work for a private company that might have a European base, a base in the US, they can generate revenue. All that revenue will just flow right to the hermit kingdom there. So organizations are unknowingly paying North Korean hackers.

And what's interesting is what they mentioned in these advisories from the US Treasury and State Department, Mark: they said organizations that knowingly or even unknowingly (they don't even have to know they're doing it) pay North Korean hackers could potentially face legal consequences and violate government sanctions. Yep, that is correct. Pretty serious stuff. So on May 16th, 2022,

the US Department of State, US Department of the Treasury, and the FBI issued an advisory for the international community, the private sector, and the public, warning of attempts by North Korea to use these IT workers to obtain employment. And they pointed out the reputational risks to the private companies, right? I mean, if something like this came out, it would absolutely destroy the reputation and the trust, because customers won't want to do business with them.

Dino Mauro (20:43.054)
Exactly. That's exactly what it always comes down to, isn't it? Yeah, that's exactly right. Now, previously in a congressional hearing last year or so on deepfake threats from countries and the IT industry and hackers as a whole, Senator Ben Sasse from Nebraska asked Dan Coats, the Director of National Intelligence, whether the US is prepared to deal with this. And in that hearing, Senator Ben Sasse asked, when you think about the catastrophic potential to

public trust and to markets that could come from deep fake attacks. Are we organized in a way that we could possibly respond fast enough? Shockingly, in reply, Director Dan Coats said, we need to be more agile and it poses a major threat. And then he says, it is something that the intelligence community needs to be restructured.

in order to combat.

So check that out. So what we just heard was, let's unpack that a second. The answer was, yeah, no, we're not ready. And the intelligence community needs to be restructured in order to address it.

Dino Mauro (22:02.894)
That's pretty severe. Like, what is that? That answers my question. Yeah. What does that mean? What do you mean? Like, I saw that and I was like, whoa, whoa, whoa. What do you mean, it needs to be restructured? Like we're not ready for this. But the technology is advancing so fast. It's pretty crazy. I mean, deepfake overall so far has been used for things like this. Let me show you, and listeners that are watching online and watching the video, you'll be able to

see this, but usually it's been used for like lip-syncing dubbed movies. It's been used for translated lectures, right? Or CGI, like you mentioned earlier. It's been used for live translations and press conferences, right? You know, when they do these deep fakes, they do them in up to 64 different languages. So you can make the lips move perfectly, and the head gestures, everything else. They also do it to generate missing video

in segments, right? So when they record a whole video, right, but oh, they forgot to add this one-minute segment. Well, we can't bring the person back, right? So what do they do? They take the person and they generate it through deepfake. So these are like the commercial uses of it, or the innocent uses, where they have talking memes, stickers, GIFs, stuff like that. And the technology and how they do it, they use this Wav2Lip framework

where it takes the audio when you think about, you know, like when there's a recording, think about like the recording for this podcast, right? There's the audio and there's the video and it syncs it up. But then what it does is it watches the target, it captures their movements, and then you type in what you want them to say, and it takes what they have and what they know about it through artificial intelligence and machine learning. And then it makes it say that. It's pretty shocking. So.

Let me walk you through and I'm telling you there are hundreds of companies out there. There's a list that we found with literally like 300 different companies all over the world that openly advertise for this and they give you demos. So I went on and I even did a demo of two of them. I'm going to share that with you. So here's a couple samples. Here's one where it's called AI Studios and they just said, here, go ahead and just try this out. Tell us if you...

Dino Mauro (24:29.038)
think that it works, check their lips, check this human, right? As they say what you want them to say. And I could have typed in anything here, right? And I just went in and typed in, you know, I love Cybercrime Junkies podcast, right? And then I got an account, it's a free account, and this is what came out of it. So what's shocking too is, are you able to see the video here? You're able to have it be a female,

have it be any nationality, any type of inflection. You can adjust the speed, the cadence, the pauses, the head movements, all of that. It's openly available. So I got this guy to mention our podcast.

Dino Mauro (25:14.83)
Stay with us. We'll be right back.

You know, we all have a lot of data and it has to positively, absolutely stay safe. It can't get into the wrong hands. And the biggest challenge we have is how to transfer it from here to there. We all know as leaders that legacy tools that transfer our important files and sensitive data are mostly outdated and fall short on security, especially with the demands of today's remote workforce. Relying on outdated technology puts our organization's brand at risk and that is unacceptable. So we're excited to invite you to step into the future of completely secured,

managed file transfer from our friends at Kiteworks. Kiteworks is absolutely positively the most secure managed file platform on the market today. They've been FedRAMP moderate authorized by the Department of Defense since 2017. And unlike traditional legacy systems with limited functionality, Kiteworks has unmatched software security with ongoing bounty programs and regular pen testing to minimize vulnerabilities. And the coolest part, they have easy to use one click appliance updates you will love.

Step into the future of secure managed file transfer with Kiteworks. Visit kiteworks.com to get started. That's kiteworks.com to get started today. And now the show.

Dino Mauro (26:30.51)
I love Cybercrime Junkies podcast. I love Cybercrime Junkies podcast. Which is pretty wild because he looks, he's standing like a real human being. He looks real. And now there's a background there, but that background is a green screen. You can have him sitting, standing at a coffee shop. You can place him anywhere, right? What was the other one you did with the girl? That one looked real. I mean, not that that one didn't look fake, but now that I'm sitting here looking for it, what was the other one that you did? Well, here's the other one.

So here's another one where you're able to go in and do a demo and have it go along anything that you want. So listen to this.

Welcome to Colossyan Creator. I am synthetic media and not a real human, but if I were a real human you can bet I would spend my time watching and listening to the Cybercrime Junkies podcast. It's my absolute favorite podcast, filled with true cybercrime stories, interviews with leaders who build and protect great brands, and best practices so that I, as well as my family and my organization, can stay safe when online.

Dino Mauro (27:41.742)
So it's crazy. Now, for those listening, you weren't able to see it, but we were able to place this human who, if you watched her lips and her body movements and her hands, her hands even move, it was on track. She was moving her mouth and pursing her lips and articulating exactly with the content that we placed on there. And we just threw our logo up there, et cetera. But.

And that was from a free account? That was from a free account. I did it in like five minutes. Imagine what you get if you pay for it. Yeah. And imagine if somebody really wanted to use this for a nefarious cause, right? That's where it really gets scary. And I mean, it's happened in real life. Think about what this means for society and families. There's a cyberbullying scenario where deepfakes were implicated.

Let me tell you about this story. Cyberbullying, as you know, is a common issue, right? Especially among younger generations, due to their high usage of social media. Rumors can be easily spread through social media and online platforms, and when coupled with fake images and videos to suggest that the rumor is true, it can make the rumor more believable. Right. Yeah. And it can ruin reputations and, as data supports, cause psychological effects that could lead to victims hurting themselves, and social issues. Correct.

So back in March 2021, this actually happened. International news brought to light an incident where charges were filed involving alleged deepfakes after a cyberbullying attack in Pennsylvania. A mother allegedly manipulated images

and videos of her daughter's cheer squad teammates. The deepfakes showed members of that cheer squad drinking, vaping, and posing nude. Oh, all of which could get them cut from cheerleading. Oh my gosh. Yeah. It's like a cheer mom going insane with technology. Right. Several of the victims came forward about the cyberbullying. One victim claimed

Dino Mauro (30:11.022)
that the mother went as far as encouraging suicide, furthering the harassment beyond the alleged deepfakes. Pretty shocking. Now, I don't know what happened with that case. We can look it up and update the episode later. But that was just one involving society, right? Just one that involves cyberbullying. But it also really shows how easy

and how accessible these tools and resources are to the public. I created those two avatar videos in like five minutes, right? In the hands of somebody with bad intent, this can be used in so many different ways: it can bolster false rumors, and it can be used in social engineering to compromise an organization. So yeah, check this out. What I'm showing online here now is... here, let me share my screen, Mark. You'll be able to check it out.

So what I'm showing online is that public service announcement from this past June, June 28th, 2022, from the FBI, talking about how deepfakes and stolen private information have been used to apply for those remote job positions that we were talking about earlier. But to the right of that, there are two sites: thispersondoesnotexist.com and whichfaceisreal.com. You see those people there? They're not real. They don't exist.

They've never existed. They are AI-generated. Look at the detail. That's the sweat glistening. Look at the hairlines. They don't look like avatars. They look like family. They look like people. I think I know that one guy. Yeah, I know. It's just... it's indistinguishable to the human eye. It's really, really shocking. And then here's the information on what the

US government has issued on actors from foreign governments leveraging this technology. But when we think about some of the other times, like, how has this happened in real life? Well, one case study: there's an energy company whose CEO was deepfaked by phone and video to his employees, and it got them to wire two hundred and forty-three thousand dollars within one hour. Yeah.

Dino Mauro (32:36.91)
So they compromised... they did that business email compromise we talk about in other episodes; we talked about it earlier. And instead, right, what do we always tell people? Verify, right? Call your boss, right? But if you got a video from your boss... if your boss calls you and sounds exactly like them, right? And gets on Zoom and looks exactly like them, right? What is the power of a criminal hacker at that point?
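One concrete way to act on the "verify" advice, when a voice or a face can no longer be trusted, is to verify something a deepfake cannot clone: a secret shared in advance over a separate trusted channel. The sketch below is purely illustrative (the function names and the pre-shared secret are this editor's assumptions, not anything described in the episode): the employee reads a random challenge to the caller, and only someone who actually holds the shared secret can compute the correct response.

```python
import hashlib
import hmac
import secrets

# Hypothetical pre-shared secret, exchanged in person or over a separate
# trusted channel ahead of time -- never over the call being verified.
SHARED_SECRET = b"exchange-this-in-person"

def issue_challenge() -> str:
    """Employee generates a fresh random one-time challenge."""
    return secrets.token_hex(16)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """Caller proves knowledge of the shared secret via HMAC-SHA256."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    """Employee checks the response in constant time."""
    expected = hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()        # employee reads this to the caller
response = respond(challenge)        # caller computes this on their side
print(verify(challenge, response))   # prints True only if secrets match
```

Even a low-tech version of the same idea, a pre-agreed passphrase plus a callback to a known phone number, defeats the scenario the hosts describe, because the attacker clones the boss's voice and face but not the out-of-band secret.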

Right. Also, another case study: 35 million dollars was stolen using deepfaked audio of an attorney. The attackers targeted a branch manager of a company that was involved in a merger and acquisition, got a ton of documents, and had the funds transferred as part of the deal. But the funds went to the hacking organization, and they got away with 35 million dollars. Wow. That was this year.

That was all this year, which is why it all led to the FBI warning a month or a month and a half ago, and then some of the congressional hearings. It's all starting to make sense now. Yeah, it's pretty shocking stuff. So, I mean, look, the cost of deepfake scams exceeded 250 million dollars across 2020 and 2021, right? And this form of technology is still in its early, incubator stages, right?

I'm thinking this hasn't even found its apex point. It's not even in the regular toolkit that criminal organizations are using yet. So forget about that. Well, one other thing: just like all those hundreds of companies that create the deepfakes, there are hundreds of companies now popping up to discern deepfakes, right? It's a platform where, when a flag goes off,

they'll be able to tell that it's a deepfake, right? There are platforms out there. But let me ask you, you and I deal with this every day: I've never met a single company that has that technology. No. Right. And what the industry research is showing is that detection technology is not up to speed. It's not advancing as fast as the technology to develop deepfakes, the technology using the GANs process and things like that. The generation side is clearly in the lead right now. Hopefully that other technology catches up

Dino Mauro (34:58.926)
and becomes more mainstream, so that small to mid-sized businesses and even larger organizations can leverage it. But there's no doubt that as deepfake technology evolves, so will the sophistication of how criminals exploit the technology to attack businesses and consumers alike. There have been specific, costly examples when it's been used in cyber attacks, fraud,

and social engineering. But the warnings on deepfakes are just getting started, right? The issue here at the end of the day, like, tell me what you think about it: at the end of it all, it goes to the very heart of what is and what is not believable. Right. Well, you know, David, something tells me that this is probably not the last episode we're going to have to do on this subject. No, I think it's just getting started. I mean, when our own eyes and ears are deceived,

whether we're distracted, whether we're busy at work or at home, right? It creates a societal cynicism about what's even believable. Well, we'll have to be sure to have all the Cybercrime Junkies followers evangelize this new message of artificial intelligence, synthetic media, deepfake cybercrimes that is now taking over. Yeah, absolutely. Well, thanks, everyone, for listening to the true cybercrime story of deepfakes.
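Editor's addendum: the "GANs process" the hosts mention refers to generative adversarial networks, the technique behind most face-swap deepfakes. Two models are trained against each other: a generator tries to produce fakes, and a discriminator tries to tell fakes from real data. The toy sketch below (this editor's illustration, not a real deepfake system) runs that adversarial loop on one-dimensional numbers; real systems replace the tiny affine generator and logistic discriminator with deep networks over images, but the training loop has the same shape.

```python
import math
import random

random.seed(0)

REAL_MEAN, REAL_STD = 4.0, 1.25   # the "real data" the generator must imitate

def sigmoid(v: float) -> float:
    # Numerically stable logistic function.
    if v >= 0:
        return 1.0 / (1.0 + math.exp(-v))
    e = math.exp(v)
    return e / (1.0 + e)

# Generator parameters: G(z) = a*z + b  (maps random noise to a fake sample)
a, b = 1.0, 0.0
# Discriminator parameters: D(x) = sigmoid(w*x + c)  (probability x is real)
w, c = 0.1, 0.0

LR, BATCH = 0.02, 32
for step in range(3000):
    z = [random.gauss(0, 1) for _ in range(BATCH)]
    real = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(BATCH)]
    fake = [a * zi + b for zi in z]

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    gw = (sum((1 - sigmoid(w * x + c)) * x for x in real)
          - sum(sigmoid(w * x + c) * x for x in fake)) / BATCH
    gc = (sum(1 - sigmoid(w * x + c) for x in real)
          - sum(sigmoid(w * x + c) for x in fake)) / BATCH
    w, c = w + LR * gw, c + LR * gc

    # Generator ascent on log D(fake): learn to fool the discriminator.
    ga = sum((1 - sigmoid(w * (a * zi + b) + c)) * w * zi for zi in z) / BATCH
    gb = sum((1 - sigmoid(w * (a * zi + b) + c)) * w for zi in z) / BATCH
    a, b = a + LR * ga, b + LR * gb

samples = [a * random.gauss(0, 1) + b for _ in range(2000)]
mean = sum(samples) / len(samples)
print(f"generator output mean ~ {mean:.2f} (target {REAL_MEAN})")
```

The loop also illustrates the arms race discussed above: the generator is, by construction, trained until its own discriminator can no longer tell fake from real, which is one reason external detection tools struggle to keep pace with generation.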