Cyber Crime Junkies

How Deep Fake Videos Increase Security Risks.

April 05, 2024 Cyber Crime Junkies-David Mauro Season 4 Episode 41

Paul Eckloff was a US Secret Service agent for 23 years. Today we discuss how deep fake videos increase security risks. Topics include: artificial intelligence risks in cyber, new ways to reduce the risk of deep fakes, how deep fakes are made, how deep fake videos are made, how audio deep fakes are made, and how AI is making it harder to detect deep fakes.

Catch Video Episode with Sample Deep Fakes here: https://youtu.be/1yFFK6uHt0I?si=qP9F1_uIZ7q6qGSS

Takeaways

  • Sample deepfakes are played. Can you tell? Over 85% of those tested could not.
  • Deepfake technology, created using techniques like GANs, diffusion models, and VAEs, can convincingly substitute one person's face or voice with another's.
  • The advancement of deepfake technology poses risks such as impersonating executives, enhancing social engineering campaigns, avoiding detection in malware, and conducting reconnaissance for future attacks.
  • The widespread availability and low cost of deepfake technology make it accessible to both legitimate businesses and threat actors, increasing the threat surface for organizations.
  • The potential for deepfakes to manipulate and deceive individuals, especially children, is a grave concern.


Chapters

  • 00:00 Introduction to Deepfake Technology and its Impact
  • 03:27 The Challenges of Detecting Deepfakes
  • 08:04 The Erosion of Trust: Seeing is No Longer Believing
  • 11:31 The Advancement of Deepfake Technology
  • 26:53 The Malicious Uses of Deepfake Technology
  • 36:17 The Risks of Deepfake Technology
  • 37:42 Consequences of Deepfakes
  • 40:38 Limitations of Deepfake Detection
  • 44:30 The Need to Get Ahead of Deepfakes
  • 45:28 The Rapid Advancement of AI Technologies
  • 47:02 Deepfakes in Different Languages
  • 50:44 The Impact on Personal Reputation
  • 51:59 The Importance of Verification
  • 56:09 The Role of Regulations
  • 58:44 The Fallacy of Trusting Regulations
  • 01:00:06 Applying Occam's Razor
  • 01:02:00 The Profound Changes of AI and Deepfakes
  • 01:06:17 The Power of AI to Elevate or Tear Down
  • 01:09:34 The Catastrophic Potential of Deepfakes


Try KiteWorks today at www.KiteWorks.com

Don't Miss our Video on this Exciting KiteWorks Offer!

The Most Secure Managed File Transfer System.








How Deep Fake Videos Increase Security Risks

Keywords

deepfake technology, cybersecurity, GANs, diffusion models, VAEs, impersonation, social engineering, malware detection, reconnaissance, threat actors, threat surface, manipulation, deception, risks, financial scams, political manipulation, personal reputation, lack of humanity, victimization, erosion of trust, regulations, fraud, verify information, skepticism



Summary

Paul Eckloff, a former US Secret Service agent, joins the podcast to discuss the dangers of deepfake technology and its impact on cybersecurity and society. Deepfakes, created using techniques like GANs, diffusion models, and VAEs, can convincingly substitute one person's face or voice with another's, leading to potential misuse and deception. The advancement of deepfake technology poses risks such as impersonating executives, enhancing social engineering campaigns, avoiding detection in malware, and conducting reconnaissance for future attacks. The widespread availability and low cost of deepfake technology make it accessible to both legitimate businesses and threat actors, increasing the threat surface for organizations. The potential for deepfakes to manipulate and deceive individuals, especially children, is a grave concern. Deepfake technology poses significant risks to individuals, businesses, and society as a whole. The ability to create realistic fake videos and images can lead to various harmful consequences, including financial scams, political manipulation, and damage to personal reputation. The lack of humanity in these technologies allows for victimization and the erosion of trust. Regulations and laws alone cannot fully address the problem, as fraud and deception will always find a way. It is crucial for individuals to verify information, be skeptical of sources, and consider the simplest and most logical explanations.


Takeaways

Deepfake technology, created using techniques like GANs, diffusion models, and VAEs, can convincingly substitute one person's face or voice with another's.
The advancement of deepfake technology poses risks such as impersonating executives, enhancing social engineering campaigns, avoiding detection in malware, and conducting reconnaissance for future attacks.
The widespread availability and low cost of deepfake technology make it accessible to both legitimate businesses and threat actors, increasing the threat surface for organizations.
The potential for deepfakes to manipulate and deceive individuals, especially children, is a grave concern.
The impact of deepfakes extends beyond cybersecurity and has implications for society as a whole. Deepfake technology has the potential to cause significant harm, including financial scams and political manipulation.
The lack of humanity in these technologies allows for victimization and the erosion of trust.
Regulations and laws alone cannot fully address the problem of deepfakes.
Individuals should verify information, be skeptical of sources, and consider the simplest and most logical explanations.

Titles

The Manipulative Power of Deepfakes
Avoiding Detection: Deepfakes in Malware
The Lack of Humanity in Deepfake Technology
The Risks and Consequences of Deepfake Technology

Sound Bites

"AI is making it harder to detect deepfakes"
"Seeing is no longer believing"
"Deepfakes are already pretty close to being indistinguishable"
"People say things in the comments of my LinkedIn posts they would never say to me at the grocery store."
"What happens if someone creates a deepfake of a violent, completely fake police action that looks racially motivated?"
"Nobody has a deepfake detection budget, right? Nobody has that technology."




Dino Mauro (00:00.11)
Paul Eckloff was a U.S. Secret Service agent for 23 years, 14 of which were spent protecting the most powerful people on the planet, U.S. presidents. Before that, Paul was a high school teacher. Today, we join Paul as we sit down and explore the latest danger to society, one that the FBI alerted the public about back in July 2022. And since then, things have gotten exponentially worse.

artificial intelligence risks in cybersecurity have now become something so unbelievable and so unlike anything else. Synthetic media, deepfakes. You need to see and hear it for yourself to understand what's actually happening. This will have a generational impact and it will lead to what many fear will be a new era of needing to verify.

everything we see and hear online, because what we think we see people actually doing may in fact be synthetic. It may in fact be fake. We're going to explore new ways to reduce the risk of deep fakes, how deep fakes are made, how deep fake videos are made, how audio deep fakes are made, and more. It's unlike anything else. This is the story

of Paul Eckloff and how AI is making it harder to detect deepfakes.

Dino Mauro (01:59.341)
cyber news into business language we all understand. So please help us keep this going by subscribing for free to our YouTube channel and downloading our podcast episodes on Apple and Spotify so we can continue to bring you more of what matters. This is Cyber Crime Junkies, and now the show.

Dino Mauro (02:30.029)
All right, welcome everybody to Cyber Crime Junkies. I am your host, David Mauro. In the studio today is Paul Eckloff. Paul was a US Secret Service agent for 23 years, 14 of which were spent protecting US presidents. Before that, Paul was a high school teacher. Very excited today, Paul. He's one of the top leaders with LexisNexis Risk Solutions.

And we're going to discuss various topics today, mostly concerning deepfaked technology, the risks that it poses in the cybersecurity space today, as well as the risks that it poses to all of us in terms of misinformation and the dangers that all of us face in light of this advancement in technology.

So without further ado, let me go ahead and introduce Paul. Hey, Dave, it's great to be back on cybercrime junkies to talk about deep fakes. Some of them are actually getting realistic. In fact, I'm not perfect, but I'm a pretty decent deep fake of Paul.

That wasn't even you, Mr. Paul Eckloff. So the real Paul Eckloff is right here. So Paul, welcome to the studio. I don't know who that guy is. It's interesting. I didn't recognize the hand gestures. That would have been one telling thing for me. We'll get into that, but it is interesting how deepfakes work, what they train on,

and what they learned from and what you begin with an end with. And like that deep fake of me, um, my avatar, which is, it's neat to have an avatar. It's sad that I'm not 12 foot tall and blue, but, um, so true. Maybe we can adjust that or an airbender, which I guess was before the other avatar, but deep fakes are just fascinating, but it's another one of these terms like cyber or these other things that we throw out and dismiss and.

Dino Mauro (04:47.021)
I don't know that we remove the human element from it, but deep fakes, they really represent... this may sound like hyperbole, but it really represents a crossing of the Rubicon for the human species. And, you know, you look at the various revolutions. If you start with steam and electricity and machines, computers as the fourth industrial revolution, and then the fifth, perhaps data

and artificial intelligence, it's a point where seeing is no longer believing. And that is a risk that will affect us all in both our professional and personal lives. Absolutely. It's all down to once again, when we talk about crime, cyber crime, physical crime, or my background in physical security, it comes down to how these crimes, and they do like my, my, my, uh, I deep think.

David, let me explain. As someone who does that a lot, these crimes, how they're done, because basically to commit crime, you have two choices, essentially, if you're going to ferment it down. As we said, I don't like to distill things down because I prefer beer, but it's either deception or physical force. I've got to convince you to do something that is against your own self -interest. Now, Sun Tzu mastered it, you know,

millennia ago. But if I can convince you to do it to yourself, or fool you without firing a shot, it's far more effective in warfare, crime, espionage, influence campaigns, and deep fakes, though they have been around for many, many years. I mean, deep fakes themselves were born around 2014, but...

Yeah, I mean, the first ones were really used in, like, social media, and it was almost more like parlor games back then. Right. And then they really started to make news politically when we started to see some of the deep fakes. We have the deep fake that you and I have reviewed of President Obama, which we can show toward the end of the show, where he,

Dino Mauro (07:09.837)
you know, that was done by one of the comedians. But the President Obama video was done by a great comedian, and he just showed the danger of it. Right. The leader of the free world could be made to say various things. And just like that, the leader of organizations

can be made publicly to say things as well as internally to say things. And it can have a ripple effect because seeing is no longer believing. Seeing is no longer believing. And that video with Jordan Peele, it was meant not only to demonstrate the capability, and I believe it took hours and hours of computer crunching at the time to do it, not days, which can now be done in minutes. Correct. $19 on the internet as my deep fake will show you.

was meant to be a canary in the coal mine, or just to be a warning of sorts, to show you that you needed to check your sources. You needed to be aware that not everything you see is real, and this is especially true, and has become far more important, in the social media age. We've come from a time, when you and I grew up, when you had the trusted news source, whether it was Walter Cronkite or another trusted anchor.

These individuals, you trusted them as your news source. Well, now those people on the major networks get very little views. And yet Kim Kardashian's lunch will get millions and millions of views while she's become a quasi celebrity. Although, interestingly, just as deep fakes were essentially born of porn. I won't bridge that gap too much for how a lot of modern celebrity has begun. But when you look at the democratization,

of communication is what the internet and social media has done. Right. Good or bad, your neighbor's teenager's opinion on their social media platform can go viral, or these people can become content creators with very little in resources who can equal, rival, or surpass the major networks. Well, that sort of began it. So now I can create a deepfake. You're already conditioned to understand or expect the truth

Dino Mauro (09:35.756)
from essentially a less trusted source. Not that the average American is less trusted than a news person and probably has less motive to deceive you perhaps. No, but I think the point you're getting at, and correct me if I'm wrong, but it's true. The change in journalistic verification before things go out, right? We've seen that where there's a lot of stories that go viral in the news, whether they start

at a, you know, a traditional news desk and then get launched out, but they're not vetted. It seems sometimes like they used to be, like before, you know, they would have spoken to the exact source. They would have cross-examined them, verified anything that source was relying on before they reported it out. But today it's like, well, we were citing this blog post or this social media post,

which is relying on a source of a source, and nobody's ever talking to the end source to find out whether it's true or not. That is a separate issue, right? But in that environment, now we're layering in advancements in artificial intelligence and deep fake technology, where now what we see may not actually even be accurate, or may not be what that person said.

To me, the shocking thing is in the last six months, the technology has gotten so much better right before an election year. It is very, very, it's very, very alarming. And we've already seen the precipitate of that. We saw the deep fake voice of Joe Biden telling Democratic voters not to vote in the Northeast. And of course,

Instantly, it was decided that it was a Republican tactic. It turned out to be a Democratic rival's tactic. So you really never know. And the problem with these videos is, as they become more and more accurate, as computing power becomes more powerful and you get down to the pixel and these algorithms get stronger, it really is only a matter of time before they are completely indistinguishable. And they're already pretty close. But when you look at what's going to happen,

Dino Mauro (12:00.428)
And the capabilities we talked about last time, the intersection of national security, physical security, and cybersecurity, deep fakes are really at the forefront of that. And they really are, because you see them being leveraged in other countries. And we're going to get to that in a second. For misinformation campaigns, espionage campaigns, manipulation, you know, fraudulent transactions, all of that. Absolutely. And.

And that's another thing; I'm surprised my avatar doesn't say that, having watched our previous two episodes. I use that word way too much. And we'll talk about the previous episodes and somebody's deep faking of my image later on. Um, yes, we will. We'll get that fixed this time. Um, when you look at... yes, go ahead. Sorry. That's absolutely terrifying. What I find terrifying is this:

Okay, sorry. I had to. To quote Kennedy: if not us, who? If not now, when? Right? So I just wanted to say that. And as Abraham Lincoln once famously said, never believe anything you read on the internet. Yes, he did. Yes. He was an avid Facebooker back in the day. That was when Facebook was first coming out. He may be, uh,

may have been a reincarnation of fraudster Thomas, but we can unpack that later. We're really pressing. So let's back up. Yeah, let's back up. Let's talk about deep fakes: how are they created? Let's not go down the rabbit hole too much or get too technical. But we've done a lot of research. And so basically, they're created, or leveraged, in three various ways: GANs,

diffusion models and VAEs. Do we want to kind of explain, or does it make sense if I just explain what each term means and then we can kind of explore the context in which they're used? Absolutely. It is a rabbit hole, especially as you get down to the VAEs. I think the diffusion models and the generative adversarial networks are probably more important, but it's like a magic rabbit like Bun Banini that is coming up with these things. I don't think that...

Dino Mauro (14:22.604)
the scientists even fully understand what they're unleashing. Yeah, absolutely. So GANs are Generative Adversarial Networks, right? And they flawlessly substitute, you know, one person's face in a video or an image for somebody else's, right? And they're very, very supportive in the creation of deepfakes. A Generative Adversarial Network,

from my research is a deep learning architecture. It trains two neural networks to compete against each other. Hence the name, right? Generative adversarial networks, right? And when they compete against each other, they generate more authentic new data, right? So the two machines compete against each other and they spit out something even more authentic than the given training data that they were given.
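The two-network competition described above can be sketched in a few lines of code. This is an illustrative toy of our own construction (not from the episode): a one-dimensional "generator" learns to mimic samples from a target distribution by trying to fool a logistic "discriminator", the same loop shape real deepfake GANs use over pixels.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mean(xs):
    return sum(xs) / len(xs)

# "Real" data: samples from a normal distribution centered at 4.
# Generator g(z) = w_g*z + b_g maps noise z to a fake sample.
# Discriminator d(x) = sigmoid(w_d*x + b_d) scores "probability real".
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr = 0.01

for step in range(2000):
    z = [random.gauss(0, 1) for _ in range(8)]
    fake = [w_g * zi + b_g for zi in z]
    real = [random.gauss(4, 1) for _ in range(8)]

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    for batch, label in ((real, 1.0), (fake, 0.0)):
        grads = [sigmoid(w_d * x + b_d) - label for x in batch]  # dLoss/dlogit
        w_d -= lr * mean([g * x for g, x in zip(grads, batch)])
        b_d -= lr * mean(grads)

    # Generator update: push d(fake) toward 1 (i.e., fool the discriminator).
    fake = [w_g * zi + b_g for zi in z]
    grads = [(sigmoid(w_d * x + b_d) - 1.0) * w_d for x in fake]  # chain rule
    w_g -= lr * mean([g * zi for g, zi in zip(grads, z)])
    b_g -= lr * mean(grads)

# After training, the generator's output mean (b_g) has drifted toward the
# real data's mean of 4: the "iterate until it fools me" loop in miniature.
print(b_g)
```

In a production deepfake pipeline both players are deep convolutional networks and the data is images rather than single numbers, but the adversarial training loop is exactly this shape.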

For instance, you can generate new images from an existing image database or original music from a database of songs. The diffusion model, right? The diffusion model is a deep neural network that holds latent variables capable of learning the structure of a given image by removing its blur. So in that model, it's trained to know the abstract concept

behind the image, so it can create new variations of that image. So in the context of machine learning, those diffusion models generate new data by reversing a diffusion process, meaning information loss due to noise intervention. Right? So the main idea here is really they add random noise and then remove it, and then they get a new image or data set. And VAEs
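The "add random noise and then remove it" idea can be made concrete with a tiny numeric sketch (our own illustration, not from the episode). A real diffusion model learns to *predict* the noise at each step; here we simply replay the stored noise to show that the forward and reverse walks are inverses.

```python
import math
import random

random.seed(1)

T = 10                 # number of diffusion steps
betas = [0.1] * T      # noise schedule (how much noise per step)
x0 = 2.0               # a one-pixel "image"

# Forward process: repeatedly blend the signal with fresh Gaussian noise.
#   x_t = sqrt(1 - beta)*x_{t-1} + sqrt(beta)*eps_t
x, noises = x0, []
for beta in betas:
    eps = random.gauss(0, 1)
    x = math.sqrt(1 - beta) * x + math.sqrt(beta) * eps
    noises.append(eps)

noisy = x  # after T steps the original signal is heavily diluted by noise

# Reverse process: a trained diffusion model would predict each eps from
# the noisy input; here we cheat and replay the stored noise to show that
# undoing the steps walks straight back to the original signal.
for beta, eps in zip(reversed(betas), reversed(noises)):
    x = (x - math.sqrt(beta) * eps) / math.sqrt(1 - beta)

print(abs(x - x0))  # ~0: the reverse walk recovers x0
```

Generation works by starting from pure noise and running only the learned reverse process; because the model predicts *plausible* noise rather than the true noise, it lands on a new image instead of the original.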

are variational autoencoders, an artificial neural network architecture. In the deep fake context, they're used for deep fake detection, right? They're able to train only on, like, real faces and then detect any non-real images. So it looks like VAEs are something that is being researched for deep fake detection. That's what my research has shown. It sounded very official.
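The detection idea described here, train only on real faces and flag anything that reconstructs badly, can be sketched with a deliberately oversimplified "autoencoder" (our own toy: a real VAE learns a latent distribution over face features, not a single mean, but the scoring logic is the same).

```python
import random

random.seed(2)

# Stand-in "real faces": 2-D feature vectors clustered near (1.0, 1.0).
real_faces = [(1 + random.gauss(0, 0.1), 1 + random.gauss(0, 0.1))
              for _ in range(200)]

# Degenerate "autoencoder": encode to nothing, decode to the training mean.
mean_x = sum(p[0] for p in real_faces) / len(real_faces)
mean_y = sum(p[1] for p in real_faces) / len(real_faces)

def reconstruction_error(p):
    return (p[0] - mean_x) ** 2 + (p[1] - mean_y) ** 2

# Anything the model reconstructs much worse than every training face
# is flagged as "not a real face".
threshold = 10 * max(reconstruction_error(p) for p in real_faces)

def looks_fake(p):
    return reconstruction_error(p) > threshold

print(looks_fake((1.0, 1.0)))   # False: in-distribution
print(looks_fake((3.0, 0.0)))   # True: reconstructs badly
```

The point of training only on real faces is that the detector never needs examples of any particular forgery technique; anything outside the learned distribution scores high, which is why this approach is attractive against fakes that haven't been seen before.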

Dino Mauro (16:50.54)
So I just wanted to go with that. What is the significance of each one? How do they play in this, uh, grand scheme of things? It really is fascinating to understand it, especially the adversarial networks. Deep learning is essentially layers of a neural network, just like the human brain has neurons. You can go all the way back to 1957, 1958, and the perceptron was a very early

machine that did these things. Now, it only had one layer, and it is interesting that we're changing perception and that name sticks so well. But we look at the adversarial networks, and you have the generator and the discriminator, and the one that's producing is trying to fool the discriminator. As they work together, it's as if the Trojans watched the Greeks building the horse, and as they built the horse,

once they decided, oh, all right, it's just a horse and there's no people in there, come on in, that's when it's good enough. It generally takes about a thousand iterations, but the GAN model is flawed because it can actually have a convergence failure where it breaks down. It's kind of interesting: at some point it can't produce anymore because it gets stuck. I don't know the exact technical definition for it, but as it takes its

It goes through a loop, maybe, like all of the data runs through or something like that. Yeah. Garbage in, garbage out. And that's eventually what's going to happen when there's enough AI text on the internet; it'll feed on itself. But when you look at how it does that, somebody sets a parameter where 50% of the time we can't tell, or 75% of the time, and that'll make it last longer or shorter and determine how you get better or worse images. The diffusion model, as you said, is if I take an image and I zoom way into the pixels to where it's white noise,

and then zoom back out, filling in those gaps with my learned ability. And that diffusion model does not have the same flaw as the GAN, which is what Stable Diffusion is named for. Now, these are all technically a form of generative AI, and we haven't even unpacked that for decades there has been extractive AI, which doesn't create, but can pull from existing data sets using neural networks. And it's really powerful.

Dino Mauro (19:11.404)
But when you look at these deep fakes and how they use this, it really is interesting. We talk about the singularity, which is a point in the future when AI will surpass human cognition. And people say, oh, you know, when you talk singularity, that's like a black hole. Have we crossed that event horizon, which is the point in a black hole where you can no longer retreat, like Rush going towards Cygnus X-1, if you want a really nerdy reference? But at what point

do we reach that? Now, I'm not a believer that these things will ever fully surpass human creativity or human cognition, because of the unpredictable nature. I heard you had a guy on your show last month that spun off on cocaine-addicted turtles. I don't know who that guy was, but I mean, what computer would ever want to do that? And when you look at how flawed humans are, as exemplified by that guy,

would a computer recreate human flaws or perfect them? Like, if it's building on human ingenuity, would it not repeat those? And this is really because it doesn't know what's good or not. There is no moral or emotional component to it. It is just a data set. So if it needs to spit out cocaine-addicted turtles, it's going to spit that out. Not that that wouldn't cause a lot of problems; PETA would be very upset.

Reptiles would march in the streets. But luckily, yeah. Want her to lay their eggs. Imagine the concerts. So anyway, OK, so you were just talking about the singularity, and when we started this episode, we played your deep fake of your initial intro. And the thing was, you and I spoke yesterday, or the other day, and you were wearing that gray sweatshirt with the orange. So I want to

play the next one where you talk about singularity, and singularity is really that point of no return, right? That real, true issue that we have. So let me go ahead and play that one. And, uh, again, you didn't say this. This is actually from a paper or research that you had done. Yes, this is completely... the rapid advance... Oh, I'm sorry. Go ahead. Was this 100% AI?

Dino Mauro (21:37.42)
that is mimicking, and we'll talk about how it mimics. I should have worn the same outfit to make this more impactful, but, um, no, I like knowing which one's real and which one's not. Because otherwise I need to believe what I see. So at least now I know that you're there, and you still got a bandaid from skin cancer surgery. Mine. Okay. All right. So I'm going to play this clip in, uh,

Just look at how accurate the lip movement is. The voice is identical. It is really remarkable in the fact that you never said this. Shocking. The rapid advancement of AI technologies as exemplified by deep fakes serves as a stark illustration of the pace at which artificial intelligence is evolving. This evolution ignites debates around the concept of the singularity.

A hypothetical future point where AI will surpass human intelligence, potentially leading to unforeseeable and transformative changes in human civilization. That's scary. Now let me ask you the, uh... So you didn't say that, bottom line. I typed that. I wrote that; you typed it. I trained my avatar so that I can write anything. I can change the pitch, the speed. I can affect the intonation. I can add, you know,

the hand movements at the end. Were those just... did you just add those in, in the avatar, after? All I did is I added an ellipsis because I wanted a little extra time at the end, and what it has done is it's learned my mannerisms and how I speak. And what's funny is it really is a bit of a caricature. Right. I'd like to think that I don't have a lisp, but I must have a very slight one, and that's fine.

I must have a slight lisp, and actually now it's interesting, kind of the Baader-Meinhof phenomenon: if you watch any newscaster tonight, listen closely, nearly every human has a lisp of some kind. It's just a matter of degrees. And so what the AI avatar has done is, you see, I do gesture. In the training work I was doing with it, I made sure to try to be animated,

Dino Mauro (24:01.932)
Because you wanted to capture, right? You wanted to capture as much as, as authentically you, right? In the first iteration, I just sat and read like this. So what I got was an avatar that could only act like this. So when I trained it the second time, I worked on providing video sources where I was more animated. My voice inflection changed. I gave it more to learn from so that it could create more. Because once again, that's how that works with that.

adversarial model, even the diffusion: it's got to dissect it down into data and then rebuild it as it creates. Well, there's two types of deep fakes too, in terms of the video context. There's the ones that you can see online. And, first of all, for those that aren't familiar with this, this is not expensive. This is not hundreds of thousands of dollars. It is 20 bucks a month

or less depending on which one you go with, you can easily recreate or create a new avatar. And so there seems to be two types. One where you can get an avatar designed by somebody else of a different person. That person has never existed, right? Or I think what is more dangerous is the one that you just showed. And that is of an actual person.

that is, that is merely saying whatever you want to program it to say. Absolutely. And face swapping is what is still done in some. And that is done by romance scammers, like the Yahoo Boys or the other various groups, and used in sextortion, where they'll take an avatar and just do a face swap and a voice clone, where it is a person that they've animated and it's capturing their face movements in real time. Rather than what I created,

which was a complete, completely newly generated video that never existed based on my previous recordings. Now I will give them credit. The company that I use is HN. I do pay for it. I'm not paid to advertise for them, but I do have to give them credit. I'm using a legal format and following all of the rules. I don't go on telegram and find a criminal way to do it that I could. I tried to fool. I'm going to give them some credit because I tried to fool them for today's video.

Dino Mauro (26:28.556)
and create an extortion video, even a playful one, to say that I'm Paul's avatar and I have him locked in a trunk. And if you don't send me a million, one million dollars. And it wouldn't let you create it. It triggered its own internal AI, triggered a content moderation policy. And even though I tried various things, because I did not have any, obviously I had playful and educational intent.

If they're listening, I did not write it, of course, but that's really good that that's built in. Good for them. Fantastic. A fantastic technology that does not even allow someone to do that. If you keep trying, there may be ways to do it, but the guardrails are there, and they've even come out with a version five now that's even more powerful. I have... I need to retrain my avatar again. And as I've talked to you before, I own an avatar of my boss that I can use. And I have used it to fool

the company that we did as a demonstration, because deep fakes and the potential for, as I said, deception versus force, for deceiving humans into acting against their own self-interest. Once again, playing on those seven deadly sins, things that we talked about before. Yeah. It's so transformative because, once again, for various reasons, you can't believe it. It's bigger than anybody can think of. I mean, I remember July of '22, the FBI issued the alert

on deep fake technology. They may have done other ones; that was the first one that I really focused on. And that had to do with a lot of people who were applying for remote jobs using some form of deep fake technology and getting these jobs. It really wasn't them. They had stolen credentials on the dark web, which is not hard to do. It's literally right there. And then they were getting a job and getting access to company systems. And then

obviously just taking that access and giving it to those with criminal intent. But that was scary. And that was happening hundreds and hundreds of times with so many reports that the FBI actually issued an alert. And that was back in July of 22. So now fast forward to today and the technology has advanced so much. There is a great article. I will link it in the show notes here.

Dino Mauro (28:46.828)
It is by Recorded Future, and Recorded Future's Insikt Group did a massive investigation into this, with a lot of R&D engineers, on the potential malicious uses of AI by threat actors, right? The criminal hackers. They talked about how these targeted deep fakes

can influence operations at an organization, right? They can be used to impersonate executives. That's what you were just talking about, right? So you had created a deepfake of your boss, one of the senior leaders at LexisNexis Risk Solutions. That's a huge organization in charge of extremely important matters, you know, of

security, national security, physical security, as well as law. It can be used to impersonate executives, right? And AI-generated audio and video can enhance threat actors' social engineering campaigns. What's also interesting is this type of technology can be used by threat actors to avoid detection in the malware that they create. So

this advancement is allowing them to use these systems to create malware that doesn't get detected by the modern detection systems all businesses are relying on. But then also for reconnaissance efforts, so that they can identify vulnerable ways to get into an organization, as well as, you know, where the keys to the kingdom are located, so that in a future attack they can use that.

Really good article, but it boils down to the fact that what they're talking about, Paul, is that the new threat surface for a company or an organization, right, now includes executive voices and executive faces. Well, business email compromise used to cost, or still costs, millions if not billions of dollars: spoofing an email from the boss saying forward this money here. Well, then it

Dino Mauro (31:08.716)
went to a spoofed voice call, which got a Chinese company. And if it's a video? Now, coinciding with your 2022 FBI report, what's really fascinating is that Stanford in 2022 did a study. And I know this is where we really met up and realized our efforts aligned. They studied LinkedIn and they found over a thousand suspected fake profiles that were deepfake, AI-generated. And what's interesting is,

the ones in that study weren't being used for espionage, sexual or financial crimes, or fraud. They were being used by legitimate businesses to make initial business contact, because they knew that a cold email, hey, get your master's here, or, you know, hey, buy our product, is generally rejected unless there's that human element. Right. So they contacted clients.

And if they bit, they would then go, well, let me refer you to my supervisor or my friend, a real person. So rather than an impersonal email, it was a real smiling face, somebody they could connect with. And they can check them on LinkedIn and they'll be there. Oh, look, they went to Harvard Law and now they're working in customer service. Right. Like I've seen a lot of those profiles and you're like, oh, OK.

Everybody went to Harvard. Everybody went to MIT. And then it's also like, but yeah, Bangladesh high school and something else. And you're like, what? That's not adding up, right? Like the years don't make sense. But anyway, I get contacted on LinkedIn at least weekly by an objectively, extremely attractive, post-collegiate young Asian female who is an executive with Estee Lauder with a degree from Stanford and Harvard.

And then I get a direct message that'll say, you know, I find your posts intelligent and you're very attractive. We should align. Once again, we talked about social engineering before. If you're a two at the brew pub, you're not a 10 online. So you've been getting my messages on LinkedIn. I was just looking, I'm like, I thought I used a better message last time when I reached out to you. You are the Cyrano de Bergerac of this. Yes.

Dino Mauro (33:30.6)
Nobody knows who that is; we're too old. The Steve Martin version of it, I forget what that was called; hopefully they know it. It's the ability to fool people. And yeah, we talked before about Carl Sagan and your baloney detector. It's just being broken down. We've reached, once again, a post-truth era, which unfortunately is a

Well, I wouldn't say post-truth; it's a manufactured truth, or all the news that's printed to fit. And I want to make a couple of predictions that really terrify me. I'm hoping I'm wrong. When I applied for this job, I told my boss, I said, look, I will tell you that I'm extremely opinionated, but I'm not always right. In fact, I'm probably wrong more than I'm right. Because what is the basis of the scientific method? Right. It's supposed to be the null hypothesis, meaning there is no correlation between

baldness and all the time I spent in the sun, and then you have to prove that there is. We work in reverse now, where we say, you know, burn an orange on the stove, and if you sniff it, you get your smell back from COVID. You don't have to prove it; you start the other way around. Right. So a couple of things are going to happen. One, we see these horrible sextortion things, where these primarily West African groups, although the Russians, Chinese, Ukrainians are all getting involved in it,

create these deepfakes and fool the young. And I think they've killed 27 American children. Yeah, it's 27 Americans that have been murdered by these people. Well, they get them to send an inappropriate picture and then they extort them with it. The interesting thing with deepfakes, or the terrifying thing, is they don't need you to send it. Now they just need to have contact with you, with your child, and they can then create,

out of thin air, compromising pictures. Well, you saw it with Taylor Swift, right? Just a couple months ago, all these horrible compromising images, which she never did, but they're out there. But if you're not Taylor Swift and you're a kid, right? And you're being told that they're gonna come and...

Dino Mauro (35:48.648)
tell your parents, tell your coaches, tell your teachers and everything else, and then they start doing it with all these fake pictures. It's horrible. I mean, there have been a lot of kids who have committed suicide from this. It goes well beyond the realm of national security and cybersecurity. Well, these groups celebrated online. They taunted one father by contacting his other children and saying, you should have heard your son beg for his life. It just enrages me, because

I think it's also born of the inhumanity of the internet. I've had death threats on Twitter over something I do for my job. We talk about journalistic integrity; people say things in the comments of my LinkedIn posts they would never say to me at the grocery store. You have this distance. But then you look at the capability to create a completely fake video of someone doing something abhorrent, and that's just the average person they can get $5,000 or $10,000 out of.

My prediction scares me because, I forget the term, but I'm afraid of where you create the thing you fear by mentioning it. I don't want to put it out into the universe, but a couple of things are going to happen. Someone could say, what if someone created a deepfake of Jeff Bezos saying, I am retiring from Amazon and naming so-and-so as the head of my company, and we will stop doing this.

And then somebody who did it shorted the stock. Right. Crypto. We're going to see things where people pump things up. We've already seen it with celebrities. Putin used a deepfake of Zelensky to try to undermine Ukrainian politics by having him say things. What they did was they actually created videos of him drinking and said, we know you're an alcoholic. Or maybe I have it backwards, but you're seeing this already.

We're going to see that affect a business directly. What really terrifies me, because you look at how worn the fabric of our society is now, leading to abhorrent things. You know, the riots in the streets where our courthouses were burned, or an attack on the Capitol, that were born of either exaggerated truths or horrible societal things that happened. What happens if someone creates a deep

Dino Mauro (38:12.488)
fake of a violent, completely fake police action that looks racially motivated? It will lead to riots. It will lead to death. Or a political one. Even in the Jordan Peele video where he says things about Donald Trump, he's joking, because Jordan Peele is a brilliant comedian and his impersonation is believable. But that was when it was him having to use his voice.

You don't need him to do the impersonation of the voice anymore. An entire George Carlin comedy special, I think, was created by somebody who was trying to celebrate him. The family didn't like it, but they used a comedian mimicking the voice in a deepfake video. But the merging of entertainment, because a lot of this started with Hollywood visual effects and then it bled over, really does terrify me now. It's not the technology itself, once again, because...

if I wanted, I could bludgeon you with this laptop, or steal your accounts with it. It's interesting that the lack of humanity in these technologies allows victimization, allows people to feel that they can victimize other people. Deepfakes don't seem real until it's you. There was a recent article I posted on LinkedIn about women and young girls whose privacy and dignity are being destroyed by people creating deepfakes

of them and distributing them. It doesn't matter if it's real. It feels very real to people just establishing their own sense of self-worth, let alone an adult, or, you know, if you look at your children. Well, think about it. Even when the relatives of that woman or that daughter see these horrible images and videos of them doing these horrible things, even if they know it didn't happen,

it's still shocking to their morality, shocking to their emotional state. They still feel horrible. They go through a whole cycle of emotions, some of them so acute that it's leading to people taking their own lives. But also, you know, in the context of business and cybersecurity, we've seen a couple of case studies.

Dino Mauro (40:38.504)
I want to just talk about this one. It happened at a Hong Kong firm. A deepfake scammer walked off, actually it was an entire organized crime unit, but they walked off with $25 million in a first-of-its-kind artificial intelligence deepfake heist. So let me read you the facts and let's break it down.

A significant financial loss was suffered by a multinational company's Hong Kong office, amounting to HK$200 million, which wound up being, I believe, $25.6 million US. A lot of money. It was a sophisticated scam involving deepfake technology. The scam featured a digitally recreated version of the company's CFO, the chief financial officer,

along with other employees, right? Who appeared on a video conference call. So this was either a Zoom or a Teams call, and from what our research shows, there were about eight people or so on that call. Seven of them were deepfakes. The only real person was the target, right? So they were able to convince that person through the

replication of the appearance and the voice in the video, which they gathered from publicly available video and audio footage. So it's not like the actual people they showed sat there for a while and trained their avatars, right? They grabbed this from the internet, turned it around, and were able to type in responses in real time,

right, and respond to questions in what looks like a 15-minute call with this person. So there was an initial phish sent, allegedly from the United Kingdom-based CFO, to a target employee based in Hong Kong, asking them to wire transfer funds for various projects and not to disclose it to others in the organization. Okay. Red flag number one. Well, the person

Dino Mauro (43:01.096)
raised the flag and said, look, I'm not responding to this. We're not doing this. I have doubts. So the CFO from the United Kingdom, or the person portraying themselves as the CFO, set up a meeting with other employees present and convinced the target employee, the mark basically, to make 15 different transactions totaling $25.6 million.

And during the multi-person video call with eight people, seven of whom were deepfaked, the eighth being the target, they asked various questions and got all of their questions answered, but all of the other employees were deepfaked. That's pretty shocking. We haven't seen a lot of that, but that is

what's coming. Like when you were making your prediction, in the context of cybersecurity: we talk to organizations, financial institutions, commercial entities every week. Nobody has a deepfake detection budget, right? Nobody has that technology. And yet it's already captured $25 million in one

attempt here, right? And this is just one that they made public, in China, where they don't usually admit a lot of fault or errors. So this is coming, if it's not already here, and this is something that we have to get ahead of. That's why I wanted to talk about this, because the implications can be international. You can think, well, you know,

We're a big company. We don't have to worry about this. We have various leaders that speak various languages. They won't be able to do that. No, no, because as my friend Paul has shown, you can get us to speak various languages, can't you? So let me share a little something here for us. Hello everyone. In this video we will discuss how hackers protect you and why it is important to understand the thinking of hackers.

Dino Mauro (45:28.968)
Good morning, Dave. It's great to talk to you again and thank you for having me on Cybercrime Junkies. First,

Can I ask you why you made my face look like Mimi from the Drew Carey show last time? Everyone else looks great in your posts, but I looked like a Ringling Brothers reject. The rapid advancement of AI technologies, as exemplified by deepfakes, serves as a stark illustration of the pace at which artificial intelligence is evolving. This evolution ignites debates around the concept of the singularity, a hypothetical future point where AI will surpass human intelligence.

potentially leading to unforeseeable and transformative changes in human civilization.

I've been wanting to come to the Hope Conference for years, and I really regret never having done so. I mean, look at me. This is not where I wanted to be. Everyone who attends the Hope hacker conference gets inspired to do something right, to do something better. Had I taken the opportunity to come to Hope before I destroyed Twitter, I'm sure things would have worked out differently. That was in Hindi. I can't tell you if it's accurate, but it's astounding that it can

do that. And that's just one of many languages. I tested that one for fun for you. I know that you're like the David Hasselhoff of the Netherlands, I've heard, and you favor him a bit. Is this the David Hasselhoff drunk on the floor, like, grabbing a hamburger? That is me. Like that part.

Dino Mauro (47:27.464)
This is the Golden God version of David Hasselhoff. He has various ones; yeah, he's singing at the Berlin Wall. Yes. You also have, which one did you show that you said was very, very accurate? Was it the Mandarin one? I've been told that a previous one I did, when I was working with David Maimon from GSU, who does amazing work on dark web crime as well, translated an introductory

interview of him into both French and Mandarin that native speakers said was pronounced right. The word choice was right. Wow. It was interesting: his French friend said the French version had a French Canadian accent. Now, that just may be how the model was trained for that language. Unbelievable. Here is one, yeah, here is one in German. Let me play this and see

what this looks like, because I invite our listeners who are fluent in these languages to let us know what they thought of these. I know a bit of Deutsch too, but I can't fully interpret it that well. I took it in high school and college. Check this one out. It's a short clip, but check it out.

In this video we will discuss how hackers protect you and why it is important to understand the ways hackers think. If you have a few minutes, we ask you to watch this exclusive interview until the end. Questions from listeners and viewers are welcome. We hear and read all questions, so keep them coming. You are important to us. Let us begin.

Okay. And so that's your voice, fluent in German. It's astounding. It is astounding because, while I concede I don't speak German and I don't understand what I said, I was watching the hand movements, the pauses, the intonation of the voice, the lip syncing. It was flawless. That's what scares me the most. And those

Dino Mauro (49:45.032)
are built from existing videos of you talking about hackers and asking for comments, obviously, from viewers and listeners. It's taking the original source video, so the movements of your hands, I believe, are unaffected. It's then training itself on your lip movements for various sounds based on its algorithms, and then obviously using large language models and other things to do the language translation. And this can be done with images as well. So,

I didn't have a high-res headshot of you last time, so for our last episode I just grabbed one. I think I scraped one off of LinkedIn or something I found online. And it seemingly upset you, I believe, to the point where I got this sent to me, which is a deepfake. Or so you claim; you tell me that it's a deepfake. Let's see what this one says. Good morning, Dave.

It's great to talk to you again, and thank you for having me on Cybercrime Junkies. First, can I ask you why you made my face look like Mimi from the Drew Carey show last time? Everyone else looks great in your posts, but I looked like a Ringling Brothers reject.

Dino Mauro (51:02.216)
Yeah, that's deepfake evil Paul. But you didn't say that. You did not record yourself saying that to me. No, I typed it in. You just typed it in. But let's see if I can share this, just this window, to see if this works. If not, we can edit this out. Let's unpack. I mean, that's a handsome guy. I mean, Brett Johnson looks great. Look at my picture.

I look like a Dennis Quaid teeth-whitening video. I mean, I get it. The problem is it's not inaccurate, but everybody else you put up, you made look so good. So next time. This episode, we're going to get a high-res headshot and we're going to do it right. We're going to get our crack marketing staff over there to get it done right. Pay attention.

I don't know which way to point. Yeah. Get over there and do this right. So you didn't upset me; I'm just disappointed. It's hilarious. So, I mean, look, the element now is that the threat surface for organizations includes executive leadership, their faces and their voices, the website branding,

all of that, right? There are so many more controls, it seems, that need to be put in place by organizations and industries. I mean, so far, the federal government has, they've made some, you know,

comments, some executive orders, some initiatives on artificial intelligence in general. And they've really kind of pointed out the obvious, in my opinion. Like, hey, this is a risk. This is pretty bad. But what's your prediction? Do you see some additional controls or some additional regulations coming out, or is it going to be suggestions and best practices for organizations?

Dino Mauro (53:24.04)
What do you think we can do about all of this, Paul, is what I'm asking. Well, that's an excellent question, but it unpacks a couple of things. First, we've been pointing out the negative effects of deepfakes, and I want to very quickly say that one place this technology is really helpful is in cancer detection, because you can train AI to pick out cancerous tissue from histological samples.

The problem is that many of these cancers are so rare that there aren't enough samples to train the detectors. So using a GAN or a diffusion model, they can create millions of synthetic versions of those tumors and then train AI to detect them, and they're going to be able to detect cancers with a precision that they never could with the human eye. That's one example. All great things. While we're

talking about the doom and gloom, because there is great potential for harm from deepfakes; we've already seen it. What I think is interesting, and this is the opposite of the crypto evangelists, is that AI doesn't seem to have evangelists telling you how great it is, because it's a different type of technology. You'll almost never find an article in the crypto world that says something's going to go down, because that doesn't sell coins or prop up that rug before they can pull it.
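The synthetic-tumor idea described above, generating artificial examples of a rare class so a detector has enough data to train on, can be illustrated in miniature. A real pipeline would use a GAN or diffusion model; this hypothetical sketch stands in with simple interpolation between real feature vectors (all names and data here are invented for illustration, not taken from any actual study):

```python
import numpy as np

def augment_rare_class(samples: np.ndarray, n_new: int, seed: int = 0) -> np.ndarray:
    """Create n_new synthetic samples by interpolating between random
    pairs of real samples -- a toy stand-in for what a GAN or diffusion
    model does at far higher fidelity."""
    rng = np.random.default_rng(seed)
    idx_a = rng.integers(0, len(samples), n_new)
    idx_b = rng.integers(0, len(samples), n_new)
    t = rng.random((n_new, 1))  # per-sample interpolation weight in [0, 1)
    return samples[idx_a] + t * (samples[idx_b] - samples[idx_a])

# Five real feature vectors for a rare tumour type...
real = np.array([[1.0, 2.0], [1.1, 2.2], [0.9, 1.8], [1.2, 2.1], [1.0, 1.9]])
# ...expanded into 500 synthetic training examples.
synthetic = augment_rare_class(real, 500)
print(synthetic.shape)  # (500, 2)
```

The point is only the shape of the workflow: a handful of scarce real samples becomes a large synthetic training set, which can then be fed to an ordinary classifier.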

With AI, what you're finding is these tech leaders, whether it's Google or others, and I can't point out specific examples, where it's an executive in these companies saying AI is so incredibly dangerous, deepfakes are dangerous, and we must regulate and stop development by small companies. Well, ask yourself: if you're heavily invested in AI and deepfakes, why would you want to point out that it's dangerous? The same reason

that kids who grow up in homes where the liquor is locked up and forbidden tend to drink a lot more than the ones where dad has a beer with dinner every night. And that's right; people I know quite well. When you say that it's incredibly dangerous, it might point out why. How does anybody, and if you have listeners that do, I apologize, but how does anybody still take an actual cigarette, let alone vaping, light it on fire and suck the smoke into their mouth in 2024, knowing how dangerous it is?

Dino Mauro (55:39.528)
I don't know why people continue. I don't know why you start. I think it's the danger factor. So you're seeing some people saying how dangerous it is because it only props up their investments, because of the seven deadly sins; someone goes, well, it's sort of a fear of missing out. It's forbidden, right? And when people learn that it's forbidden, they want it more. On the regulatory side, what I find interesting is it's a little of the same. You can regulate it, but the last time I checked, fraud's illegal.

And yet we lost $30,000 in two years. Murder is illegal, but people get killed. Speeding is against the law, but people blast by all the time. So regulations are important. Unfortunately, they tend to be a set of rules, and everything that you put in place for the people who follow the rules will just be exploited by those who don't. So you have to regulate it and you have to pass laws; I just warn people not to think

that, oh, there are regulations and there are these laws, so what you see you can now trust. Right. That's a great point. Excellent point. It's actually dangerous, because if you think, well, it's regulated, this has to be real, that's not true. One of the really neat ways that I read a company developed for generative images was they created a way to embed, just like you've heard of those little espionage things they hide in JPEGs,

hidden code in the pixels of an image so that when AI trains on it, the model will collapse. Now, that only goes so far. It protects your intellectual property, like an artist's work, but it doesn't prevent the model from using other images to create one like yours. Right? You can go into ChatGPT-4 or DALL-E and tell it, make a painting in the style of a Renaissance master. I've done that online, creating images.
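The "hiding code in pixels" mentioned above is, in its simplest form, least-significant-bit steganography: a payload rides in the lowest bit of each pixel value, invisible to the eye. The model-poisoning defenses being described are far more sophisticated, but this hypothetical sketch shows the basic hiding mechanism:

```python
import numpy as np

def embed_lsb(image: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > image.size:
        raise ValueError("payload too large for image")
    flat = image.flatten()  # flatten() copies, so the input stays intact
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n_bytes: int) -> bytes:
    """Read the hidden payload back out of the LSBs."""
    bits = image.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

img = np.zeros((32, 32), dtype=np.uint8)  # stand-in for a real grayscale image
stego = embed_lsb(img, b"hidden")
print(extract_lsb(stego, 6))  # b'hidden'
```

Flipping only the lowest bit changes each pixel value by at most 1, which is why the edit is imperceptible to a viewer while still carrying data.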

But you look at the regulations and laws, and they're not the answer; they're a piece of it. Education and skepticism, checking your sources. The problem is, what is your source? Because you can do a layer-upon-layer fraud like that one in Hong Kong, where not only the CFO was generated, but other members in the video were fake. So you think, well, I know that guy, he's here.

Dino Mauro (58:02.376)
What you see, and you see this in politics, in disruptive campaigns, in influence campaigns: you have something known as affirming the consequent. You come out with some completely silly thing like,

David Mauro is an alien. And if he's an alien, then he could be using this to steal Paul's image and create a deepfake to defraud the bank, which would then undermine the Federal Reserve and cause the collapse of civilization. Well, then all the discussions are about the collapse of civilization, and you kind of forget that it all started on nonsense. That's called affirming the consequent, or the if-then fallacy. And so if you start with a fake video,

sometimes you get so far beyond it. Like the post-truth; I would call it, it's not even post-truth, it's post-deception. You've built your house on sand and it's going to erode. But if that's never detected, or once you disrupt that narrative and point out that it's fake, it doesn't matter, because all the things about how evil it is, because you're using alien technology from Area 51, are already accepted,

even though they were born of nonsense. It's complicated. Let's synthesize this down to a personal level. As we see images on social media or read things online, whether it's about something leadership says or something pertaining to an election,

or a stance that a politician is taking, or even who's running, what should people do? I mean, verify, right? Don't believe it at face value. Well, it's not just not believing at face value. I'm not telling you not to trust any news sources, but you also have to look at motivators. Why are they saying what they're saying? Who would benefit from it? Follow the money.

Dino Mauro (01:00:06.088)
But there is the concept of Occam's razor and parsimony. You've got to look at something, and the most logical answer is most likely the simplest, given no other inputs to the equation. Take that horrible and tragic bridge collapse 15 miles from my house, the Francis Scott Key Bridge collapse. It was terrible. There are still conspiracies about why the power went out or when it hit the bridge,

the loss of life there, the infrastructure. Was it a Russian cyber attack? I heard that. There's no evidence of any of that. But unfortunately, people live in, you know, the absence of evidence is not evidence of absence. They live in this, right, once again, deception world. And unfortunately, where deception rhymes with truth in this conspiracy world, people assume these horrible motives in all of these things when the simple answer is probably that it was some horrible human error. Now, it could rise to criminality,

because what you had was the boat: the power went out, and the redundant systems came on and brought the lights back on, but not the propulsion system. Why is there not redundant power to the propulsion? And we're going down, once again, the rabbit hole. But when you look at what you should believe and you see a video like that, what if it were AI-generated? I mean, at one point when deepfakes were first being born, you know, the technology was so amazing;

the Academy Award for visual effects went to Independence Day, when they shot a laser at the White House and blew it up, which was essentially a model and some visual effects. The capability has gotten a lot better. A lot of this technology was born of Hollywood, so their motivations were entertainment. If you take that same power, and as we talked about the democratization of the internet, the democratization of technology: a military aide partner of mine once talked about it as a convergence of technology. It used to be nation states

that had to have, like WOPR, you know, this horrible computer, and Professor Falken and his game of chess. Well, now I've got a laptop that I ordered off of Amazon, and I can do the same, because with software as a service and other things I can use the computing power of a million other people. So an individual with one keyboard can destroy a nation. But now imagine if you still have the resources of a nation state at your disposal. It's still an asymmetric equation, where,

Dino Mauro (01:02:29.768)
you know, the nation states that would seek to harm us. Although I find it interesting that the FBI and CISA just did sanctions against Chinese cyber groups; I think it was APT31, or some such number. They get a lot of crazy names, like, you know, the drunk panda and all the others. Yeah, the bored panda or the indifferent ape. You know, they sanctioned them for espionage, but they were very careful to say,

and this is funny because they had a reason, they said, but we're not saying that they use this to influence American politics. It was funny. You can see it in there. Like, oh, they'll tear down your infrastructure, they'll prepare for war, they'll destroy our shipping, and they'll potentially endanger all of our infrastructure by destroying our water facilities. Oh, but they wouldn't influence our election. We're not saying that. Right. Because they need people to trust the elective process. So once again, I think I've added

way too many layers to that. No, it's so true, though. It's good context, Paul, because when we think about all of these different advancements in technology, first of all, it's just expanding the entire risk that we all take, as people as well as organizations. It's expanded the risk surface.

And this is something that has advanced so quickly. I read something yesterday that since generative AI went to the masses in November of '22, when it first came out,

there have been more images generated by people than in the 150 years before. And so you see just the vast amount of buy-in and use of this, that it's something that needs to be addressed pretty quickly by organizations. I mean, right now everybody thinks, well, deepfakes are really just for the politicians and

Dino Mauro (01:04:52.008)
political things, or for just social media and everything else. But we saw in Hong Kong $25 million in one act, right? That is the beginning. That is the shot across the bow. That's the beginning of what we're going to see at scale here in the United States. And I hate to keep saying the terrifying thing, but when you look throughout history, it's not a gradual,

just sort of geometric progression. It's exponential or worse. It's these clapotic waves that build upon themselves. Consider that for two or three thousand years, no communication, no message, could really move faster than the speed of a horse. Then you had the automobile, the telegraph, the telephone, the internet; all of these things happened very quickly, in a span of about a hundred years.

And now you see generative AI that can be used for evil, like bringing Grand Moff Tarkin back for Rogue One. You look at things like that that can ruin your favorite movie franchise. Although that was pretty good; we all loved seeing Grand Moff Tarkin in there. But you now see AI, and really since ChatGPT came out in late 2022, we're going to see another compressed period of development.

And we've seen it with deepfakes, we've seen it with videos. What tends to lag behind, as it did with communication, is the reckoning with the consequences. The benefit to society, and the downfall, was that information could travel much more quickly, both in wartime and in peacetime, for good things and bad things. We're seeing the same with deepfakes. We're seeing the same with AI. It has the power to elevate human discourse, but also to tear it down.

So if you look at people, we could go into accelerationism and some of the other interesting autocatalysis that could be happening here. It's very interesting what's happening. We're at a really interesting point in humanity, one of profound change, both positive and negative. Hopefully we can stay ahead of the curve. And I'd love to, on a future episode, get into what I've discovered, if you're kind enough to have me back. Well, it depends on the headshot and what I can do to it.

Dino Mauro (01:07:14.024)
I mean, I don't know whether I'll come back, although I might make my teeth whiter again. That was awesome. My dentist appreciated that. We'll see. So, this technology I have now, it can take my avatar and the voice clone, you know, the video, and with supplied context rather than supplied text. I can put in my bio, I can put in a couple of anecdotes, and it can live stream in real time and answer questions however I tell it to.

Approach it as a fifth grader, or approach it as a PhD Nobel Prize winner in astrophysics, and answer questions about AI. And it will do it. I haven't tested it yet; it purports to do it in real time for live streams. Whereas on this show, you could interview me, but it would be my avatar. I've told it my background, put in my bio, put in some previous examples, and it will theoretically answer how I would answer, in my voice and image.

So as we are wrapping up this episode: you just scared the living dickens out of me, because now that is live, in real time, and you can give it an image with a voice that will sound and look as real as possible, undetectable by the human eye and ear. And then it can, just on the spot,

address all these questions. It'll be using a large language model like ChatGPT for the text, based on the prompts. Because you can ask ChatGPT to approach something from a certain perspective, right? Or how would I examine it? Yeah. You tell it, give it the context of the background you want, and it answers questions in that context. I haven't dug into it yet. It just came out. And it's an app, for live streams, for companies.
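The persona-prompting idea described here, feeding a model a bio, some anecdotes, and an audience level so it answers in character, can be sketched in a few lines. This is a hypothetical illustration only: the function name, the bio text, and the chat-message structure are assumptions for the sketch, not the actual product Paul mentions.

```python
# Sketch: persona prompting for an avatar Q&A, as described in the episode.
# The bio, anecdotes, and audience level below are illustrative placeholders.

def build_persona_messages(bio, anecdotes, audience, question):
    """Assemble a chat-style prompt telling a large language model to
    answer as a specific person, pitched at a specific audience level."""
    system = (
        "You are answering as the person described below, in their voice.\n"
        f"Bio: {bio}\n"
        "Examples of how they speak:\n"
        + "\n".join(f"- {a}" for a in anecdotes)
        + f"\nExplain things at the level of: {audience}."
    )
    # Most chat APIs take a list of role-tagged messages like this.
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = build_persona_messages(
    bio="23-year veteran of the US Secret Service, now speaking on AI risk.",
    anecdotes=["Keeps answers direct and grounded in real cases."],
    audience="a fifth grader",
    question="What is a deepfake?",
)
```

The same prompt, with the audience swapped to "a Nobel Prize winner in astrophysics," would yield a completely different register from the same underlying model, which is exactly the flexibility being described.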

But as I said, I predict a deepfaked video that will affect a business decision, manipulate stocks, and cause a societal impact before it can be stopped. Once again, as we talked about, getting down the road of the if-then fallacy: with a video that's going to incite violence, it'll be too late once the violence starts. It doesn't matter if the video is later proven false, or the use of force by the police is disproven, or the statement that caused it is debunked,

Dino Mauro (01:09:34.632)
because those videos look so real. The damage has already been done. Yes. The damage has already been done. At what point is it catastrophic? I don't know. Paul Eckloff, thank you so much. This will not be the last time you're on. Listeners and viewers, watch for us to address this. Paul and I are going to go research the live-stream capability of deepfakes, because I think that is the next level

of things we need to be at the cutting edge of, to research and get ahead of. This is absolutely horrifying. So Paul, thank you so much. As always, thank you not only for your service to our country, but for the entertainment and the insight and the...

Dino Mauro (01:10:28.904)
Well, that wraps this up. Thanks for joining, everybody. Hope you got value out of digging deeper behind the scenes of security and cybercrime today. Please don't forget to help keep this going by subscribing for free to our YouTube channel, Cyber Crime Junkies Podcast, and download and enjoy all of our past episodes on Apple and Spotify podcasts so we can continue to bring you more of what matters. This is Cyber Crime Junkies, and we thank you for joining us.