Cyber Crime Junkies

Got AI? Fake Videos. Social Engineering.

Cyber Crime Junkies. Host David Mauro. Season 6 Episode 21

🚨 NEW EPISODE ALERT! 🚨 🔒 Are you prepared for the risks of AI-powered deception? Got AI? What to watch for in Fake Videos and Social Engineering. 🤖 Are you ready to confront the dark side of AI? 🤖 Unmasking AI Deepfakes & Cyber Threats.

Send us a text

🎧 Subscribe now http://www.youtube.com/@cybercrimejunkiespodcast and never miss a video episode!

Follow Us:
🔗 Website: https://cybercrimejunkies.com
📱 X/Twitter: https://x.com/CybercrimeJunky
📸 Instagram: https://www.instagram.com/cybercrimejunkies/

Want to help us out? Leave us a 5-star review on Apple Podcasts.
Listen to Our Podcast:
🎙️ Apple Podcasts: https://podcasts.apple.com/us/podcast/cyber-crime-junkies/id1633932941
🎙️ Spotify: https://open.spotify.com/show/5y4U2v51gztlenr8TJ2LJs?si=537680ec262545b3
🎙️ Youtube (FKA Google) Podcasts: http://www.youtube.com/@cybercrimejunkiespodcast

Join the Conversation: 💬 Leave your comments and questions. Text us via the link above. We'd love to hear your thoughts and suggestions for future episodes!


Dino Mauro (00:07.31)
Come join us as we go behind the scenes of today's most notorious cybercrime, translating cybersecurity into everyday language that's practical and easy to understand. We appreciate you making this an award-winning podcast by downloading our episodes on Apple and Spotify and subscribing to our YouTube channel. This is Cybercrime Junkies, and now the show.

Dino Mauro (00:46.136)
Today, I want to talk about deepfakes and AI's impact on business: defending against AI cyberattacks. The new ways AI is used in social engineering are at the very cutting edge of protecting small business today. Yep, absolutely. They have changed in the past few weeks and you need to know about them. Yep, today we will do that. Our sidebar includes some true cybercrime stories involving artificial intelligence and new emerging threats to watch for, as well as some

unlikely and rarely discussed steps to take to protect yourself and your business. I know, right. This is a really relevant topic. It honestly addresses what's in the headlines a lot. And I mean a lot lately. You're not wrong. So let's jump into it. Small talk is cheap, so let's dive in. Everyone's heard of it. But the more I talk to people, the more I realize very few of us understand the way it works and, more dangerously, how it can be used by others

to harm us, damage our businesses, or even hurt our loved ones. The question people keep asking is whether artificial intelligence is some mystical technological marvel, or, as we are starting to see it leveraged on the dark web and down on Main Street, whether it has crossed the line into a criminal's dream tool. Or both? Or both, exactly.

Today we're diving into the dark side of AI, where AI is being used to steal millions and even harm local residents, causing them to lose their employment and local reputations. AI deepfakes and voice cloning are turning science fiction nightmares into reality. Hey, let's define what we mean, shall we? Great point. Go ahead, shoot. A deepfake is a piece of media, audio, video, or still images, digitally altered using AI to replace a person's face, voice, or body, making it say or do something the actual human person

never said or did. In fact, this form of AI technology has advanced so much in the past six months that it is virtually undetectable by the human senses, meaning a majority of us cannot tell with our eyes or ears whether something is a deepfake video or audio when it is done right. The term is a combination of the words deep learning and fake. Exactly. That sounds ominous. I mean, for many of us,

Dino Mauro (03:07.266)
we've been leveraging AI to help us improve a few things in our personal lives as well as at work. Like what? Yeah, I mean, I really do not see it being leveraged as much as it could be. What are you all using it for on a steady basis? Well, I mean, mostly summarizing meetings, digesting articles, improving grammar, translating our work into other languages, and speeding up repetitive processes. So here's what I'm getting at.

Cassidy, I want to know if our listeners are leveraging AI in other ways. Oh cool, I'd love to hear about it. Text our studio directly at the number below in the show notes. So despite the useful ways we use AI, what kind of criminal activities are we talking about here? Well, our research just got updated this week and we're seeing everything from elaborate financial scams to deeply personal attacks. I think what will shock listeners about this first story is how local it is

and the damage that can happen to any of us. Let's start with a case that really shows how AI can be weaponized against individuals. A high school principal in the Baltimore suburb of Pikesville, Maryland, found himself at the center of a scandal that nearly destroyed his career and reputation. Hmm, what happened? So it happened to a school principal in Maryland. He got canned due to a deepfake. Holy crap, I heard about this. Yep. On

January 16th, 2024, three teachers at the school received an email with an audio file attached. The subject line said, Pikesville principal disturbing recording. When they played it, they heard what sounded exactly like their boss, Principal Eric Eiswert, making racist comments and threatening staff members. Yeah, it's terrible. It went viral, didn't it? Exactly. The recording was shared over 27,000 times on social media. The community was outraged. There were protests, death threats. It was absolute chaos.

The school district suspended Eiswert and launched an investigation. But here's the twist. Eiswert insisted the recording was fake. Hey, so one listener just texted into the studio. Samantha asks, and thank you, Samantha, for joining in today. Yeah, very cool. Thanks, Sam. Sam says: well, that's what anyone would say in that situation, right? How could he prove it? Good point. That's the thing. He took a lie detector test and passed. This led to a deeper investigation

Dino Mauro (05:24.876)
and eventually forensic experts discovered that the audio was actually created using AI voice cloning technology. It turns out it was all an elaborate scheme by Dazhon Darien, the school's athletic director, who was being investigated by the principal for embezzling school funds and who had a grudge against Eiswert for investigating him. Wow, that's wild. So AI can now perfectly mimic someone's voice. That's terrifying. Yeah, it really is. And what's scarier

is that criminals only need about three to five seconds of someone's voice to create a convincing clone. Wait, stop. Only three to five seconds of your voice as a sample? Yep, and they can get that from pretty much anywhere online. A YouTube video, a TikTok, even a voicemail message. So if we'd only known the risks, we might have been more careful about what we post online. The way I see it, this also expands the threats. Doesn't it? Won't this

now increase the overall landscape of what needs to be protected? Well, yeah, holy crap. I never thought of that. Are there any other examples of how this technology is being misused? Oh, absolutely. There was a heartbreaking case involving a mother named Jennifer DeStefano. Her phone rang one afternoon as she climbed out of her car outside the dance studio where her younger daughter Aubrey had a rehearsal. The caller showed up as unknown and she briefly contemplated not picking up.

But her older daughter, 15-year-old Brianna, was away training for a ski race, and DeStefano feared it could be a medical emergency. So she got this call from an unknown number and when she answered, she heard what sounded exactly like her daughter. Exactly like her. And her daughter was screaming and crying and saying, Mom, I messed up. Then a man's voice cut in, claiming he had kidnapped her daughter and demanding a ransom. That's every parent's worst nightmare. You've got to ask: was it real?

Thankfully, no, it was all fake. A cybercriminal had deepfaked the voice. In the end, thank God, her daughter was safe. God, that piece of garbage. What a tool for doing that. So uncool. I know, complete douche. The scammers had used AI to clone her daughter's voice from something they found online on TikTok. And it was so convincing that her own mom, Jennifer, never doubted for a second that it was really her daughter. That's absolutely chilling. How common are these kinds of scams?

Dino Mauro (07:47.01)
Well, they're becoming more prevalent. About 270,000 people here in the US reported falling for romance scams, losing about $1.3 billion. And now with AI in the mix, these scams are getting even more sophisticated. There was a case in Canada where an elderly couple lost $98,000 to a scammer pretending to be their son using AI voice cloning. It sounds like we need better regulations and awareness around this technology. Are there any efforts being made to combat these crimes?

You know, that's a great point. Some countries are starting to take action. The United Kingdom, for example, recently made it a crime, in some cases a serious offense, to share inappropriate deepfake content. But it's a constant race to keep up with the technology. And it's not just individuals being targeted. Businesses are at risk too. Right? Well, that's what it sounds like. Any notable cases in the business world? Oh yeah, there was a mind-blowing case involving a company called Arup.

In February 2024, Arup was hit by one of the most advanced AI crimes ever. Many other companies have been victims of AI attacks, but this particular heist was so incredible that it made headlines around the world. But out of all the companies in the world, why did these AI criminals go after Arup? Arup is an engineering firm with multi-million-dollar projects around the globe. Despite being a well-organized company,

Arup had one major weakness that is common to many small and mid-sized businesses right here in the US: a remote workforce spread across several locations. Not a bad thing at all, generally speaking, but here it proved costly. Why? It's not bad in and of itself; it just can become a vulnerability if extra measures are not put into place. Like what? Like requiring validated verification prior to transferring

funds. Here, they lost $25 million in an elaborate social engineering AI cybercrime scam. The scammers used deepfake technology to impersonate the company's chief financial officer in a video call. They convinced an accountant in one of their divisions to conduct several wire transfers, amounting to millions of dollars stolen and sent to various bank accounts owned by the cybercriminals. That's insane. Walk us through what happened. Well,

Dino Mauro (10:13.236)
They took advantage of the fact that Arup is a remote-working company relying on email and video calls, as many companies do today. Yep. Understood. How did it happen? Well, imagine getting an email asking you for sensitive information. In this case, it was to make a wire transfer of a lot of money. Like most people who understand the risks of cybercrime, the accountant did not act on it right away. Instead, she tried to verify first. That seems to have been done here, but she got no answer. But to her credit, she

did not make any moves, waiting until she heard back from the person with authority. Makes sense. But then... But then? But then she got another email. This one was from the same person, allegedly in leadership at her company, albeit at a different location. Okay. Only this time. This time, it included a calendar link for a video call. Okay. They all jump on this call and there were eight different people from the company on the video call.

Eight different people, live, in real time. And the target victim was able to ask questions live on video and get all her answers resolved. They told the accountant it was for a secret, confidential, urgent deal and that she needed to transfer the money. She asks a ton of questions and they explain away each and every one. The call ends and she feels fine. No concern. Seems totally legit. Seems legit. It does.

And so she proceeds to make a series of transactions she was authorized to do, and it amounted to over $25 million. Except? Well, except for the fact that everyone on that live video call was fake: a live AI deepfake. And all of those transactions were fraudulent. And the company is out $25 million. Wait, so all of the other people on the video call were live and they were all deepfakes? There were eight other people on that call. They were all AI video

deepfakes conducted live. Wow. The big question everyone had was how did the accountant not notice that the CFO and other coworkers on the video call were deepfakes? The truth is that the technology has advanced faster than our human adaptability. It's what experts call the exploitation zone, where technology like this is undetectable by the human eye. And yet we are not trained to be watching for deepfakes, and companies lack

Dino Mauro (12:37.9)
policies to address this risk. And some deepfakes are a lot more realistic than most people think, especially for people who aren't keeping up with technology. Yep, exactly. Deepfakes don't just copy faces and voices; they can even imitate mannerisms. In this case, the deepfake was so realistic that even the accountant, who'd spoken with the real CFO many times, couldn't tell the difference.

The other people on the video call looked and sounded just like her actual coworkers. Once they reported it to the police, news outlets picked up on it fast. The worst part? The criminals behind this huge heist have not been caught yet. So, what can people and businesses do to protect themselves from these kinds of scams? Great question. Experts recommend requiring verification as a matter of policy,

something that puts another set of eyes on any transaction where sensitive information would be given away or funds wired or transferred. Yep. And they also recommend multi-factor authentication for any important transactions or communications. That way, even if someone can mimic your voice or appearance, they won't have access to your other verification methods. It's also crucial to be skeptical of any unexpected requests for money or personal information

even if they seem to come from someone you know. Always verify through a separate trusted channel. And that's a tactic cybercriminals often use, isn't it? It is. They always try to get people off channel. When you think of the most recent Uber breach, it happened when an employee was contacted through WhatsApp rather than through their company communication platform. And in the case of other social engineering stories,

they always ask people to communicate away from LinkedIn, for example, and to go on Telegram or WhatsApp instead of using the traditional chat features, which can be monitored. Excellent point. It sounds like awareness and caution are key. Absolutely. And it's not just about personal protection. We need to be advocating for better regulations and ethical guidelines around AI development and use.
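For technically minded listeners, the verification policy discussed above can be sketched in a few lines of code. This is a minimal illustration, not anything Arup or the experts on the show actually use: it assumes a hypothetical `TransferRequest` object and a rule that any large transfer must be confirmed on at least one channel different from the one the request arrived on (the out-of-band check that would have caught the deepfake video call).

```python
# Hypothetical sketch of an out-of-band verification policy:
# a large wire transfer is only released once it has been confirmed
# through a second, independent channel (e.g. a phone call to a number
# already on file, never one supplied in the request itself).
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    amount: float
    requested_via: str                       # channel the request arrived on
    confirmations: set = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        # Record that the request was verified on this channel.
        self.confirmations.add(channel)

    def approved(self, threshold: float = 10_000.0) -> bool:
        # Small transfers pass; large ones need a confirmation on a
        # channel DIFFERENT from the one the request came in on.
        if self.amount < threshold:
            return True
        out_of_band = self.confirmations - {self.requested_via}
        return len(out_of_band) >= 1

req = TransferRequest(amount=500_000.0, requested_via="video_call")
assert not req.approved()        # no confirmation yet
req.confirm("video_call")
assert not req.approved()        # same channel doesn't count, even a live call
req.confirm("phone_callback")
assert req.approved()            # independent channel clears it
```

The key design point is the set difference: a deepfaked CFO on a video call can "confirm" his own request all day, but the policy only counts a channel the attacker doesn't control.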

Dino Mauro (14:55.466)
Like in October 2023, the United Kingdom made it a criminal offense to share inappropriate deep fake content. The technology itself isn't inherently good or bad, but we need to ensure it's being used responsibly. Well said. It's clear that AI is going to play an increasingly large role in our lives for better or worse. The key is staying informed and proactive. You're right on the money there. As we wrap up, I think the main takeaway is this.

While AI has incredible potential to improve our lives, we need to be aware of its dark side, stay vigilant, question unexpected requests, and always verify important information through trusted channels. The future of AI is exciting, but it's up to us to ensure it's used ethically and responsibly. Mm-hmm. Couldn't agree more. This has been a fascinating and eye-opening discussion about the dark side of AI. Well, thanks for this, David. It was interesting. It has been. Thanks.

Any final parting words? Just this. While these stories can be frightening, knowledge is power. By staying informed and alert, or vigilant as we like to say, we can all help protect ourselves and others from these emerging threats. Until next time, stay safe out there, and thanks for being a cybercrime junkie. Well, that wraps this up. Thank you for joining us. We hope you enjoyed our episode. The next one is coming right up.

We appreciate you making this an award-winning podcast by downloading our episodes on Apple and Spotify and subscribing to our YouTube channel. This is Cybercrime Junkies, and we thank you for watching.
