
Cyber Crime Junkies
Translating Cyber into Plain Terms. Newest AI, Social Engineering, and Ransomware Attack Insight to Protect Businesses and Reduce Risk. Latest Cyber News from the Dark web, research, and insider info. Interviews of Global Technology Leaders, sharing True Cyber Crime stories and advice on how to manage cyber risk.
Find all content at www.CyberCrimeJunkies.com and videos on YouTube @CyberCrimeJunkiesPodcast
Shadow AI & Ransomware--Agentic AI EXPOSED
New Episode! Tell us your feedback! Is Your Company Safe From AI Attacks?
This episode covers the intersection of artificial intelligence and cybersecurity, exploring how AI can be used for both protection and malicious purposes. We examine how AI is used to create advanced AI cybercrime and deepfakes, and the rising threat of social engineering attacks. Learn how AI cybersecurity can help defend against AI hackers and other emerging threats.
Feeling Kind? Consider Supporting Our Channel by subscribing! Over 84% of viewers do not subscribe to our channel!
Like, Subscribe, and Comment on our Channel or this Video!
Join me on my other channels: Main Site | LinkedIn | X/Twitter | Meta/Instagram |
Dive Deeper: https://cybercrimejunkies.com
Growth without Interruption. Get peace of mind. Stay Competitive-Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com
New Special Offers!
- Remove Your Private Data Online Risk Free Today. Try Optery Risk Free. Protect your privacy and remove your data from data brokers and more.
No risk. Sign up here: https://get.optery.com/DMauro-CyberCrimeJunkies
- Want to Try AI Translation, Audio Reader & Voice Cloning? Try Eleven Labs Today! Want a Translator, Audio Reader, or prefer a Custom AI Agent for your organization? Highest quality we found anywhere. You can try Eleven Labs here risk free: https://try.elevenlabs.io/gla58o32c6hq
Subscribe now http://www.youtube.com/@cybercrimejunkiespodcast and never miss a video episode!
Engage with us on Socials:
LinkedIn: https://www.linkedin.com/in/daviddmauro/
X/Twitter: https://x.com/CybercrimeJunky
Instagram: https://www.instagram.com/cybercrimejunkies/
Discover the most significant Agentic AI Cybersecurity Threat in 2025 and how it's changing the landscape of online security. As AI technology advances, so do the risks associated with it. In this video, we'll delve into the biggest Agentic AI Cybersecurity Threat of the year, exploring its implications and what you can do to protect yourself and your organization from potential attacks. From AI-powered malware to sophisticated phishing schemes, we'll cover it all. Stay ahead of the threats and learn how to safeguard your digital assets in this informative and timely video.
Speaker 2 (00:11.39)
AI is either your company's secret weapon or your secret stalker. You know that helpful AI assistant everyone's raving about? Yeah, hackers are using the same technology to build polymorphic ransomware that changes on the fly and recodes itself, and AI deepfakes undetectable by human eyes and ears. Scams that make the old Nigerian prince emails look like they were made with crayons and paste. Today we sit down with cybersecurity legend
Matt Rosenquist, former Intel exec and now CISO at Mercury Risk where he coaches global CISOs on how to do cyber and AI the right way. And today we dive into absolutely crazy stories about shadow AI sneaking into your companies, cultures and lives and agentic AI, basically Jarvis from Iron Man, but working for the bad guys. This isn't some science fiction movie. It's happening right now.
So grab your coffee, lock your doors and tune in because if AI doesn't scare the bejesus out of you yet, it's about to. This is Cybercrime Junkies and now the show.
Speaker 2 (01:30.638)
Catch us on YouTube, follow us on LinkedIn, and dive deeper at cybercrimejunkies.com. Don't just watch, be the type of person that fights back. This is Cybercrime Junkies, and now the show.
Speaker 2 (01:50.154)
All right, well, welcome everybody to Cyber Crime Junkies. I am your host, David Mauro, and in the studio today is the one and only Matthew Rosenquist, founder of Cybersecurity Insights, CISO at Mercury Risk, former Intel Security Executive, a strategist, board advisor, and keynote speaker. Matt, welcome to the studio again. Thank you, sir, for joining us.
Always a pleasure! Looking forward to our discussion today.
I always enjoy speaking with you, always learn something. You don't hold anything back and that's what I think everybody appreciates.
Well, let's see how many people I can make mad today!
Yes. And social media doesn't help that at all. So let's talk. You know, I don't know if you've heard, but there's a thing called AI out there. Let me just break it to you: it's kind of a big deal. So there's machine learning involved, lots of blinking lights, lots of data centers, and a lot of organizations have employees that are engaging with it without real visibility into it.
Speaker 2 (03:00.416)
And there's, you know, development of the use of it through agentic AI, where they're creating automations and bots, but they're not configured right. It's really raising a lot of risk that traditional cybersecurity teams are really struggling to wrap their heads around. What insight do you have? What are you seeing? What is the industry seeing?
Well, I mean, this is a sector of the industry, and it absolutely is a disruptive technology, but it's really beneficial, right? We see a lot of companies going towards it. We see it in everybody's marketing, although that'll pull back a little bit. But because it is such a benefit to companies and to organizations, they're moving fast towards it.
Right? It's a competitive advantage. If you can get there first, it's a competitive advantage. If you can reduce your cost, extend out a solution or a service before your competitors. So there is a rush to embrace and implement these suites of technologies that continue to evolve.
So given that environment, sadly, cybersecurity tends to take a distant backseat, right? Second, third, fourth row, way back. And so we often aren't considered, and we go back to the first axiom of cybersecurity, right? It's not relevant until it fails. Well, if you're rushing so fast to implement this, you're only looking for the good things.
And only when a catastrophic failure happens does that give you enough pause to go, yeah, I forgot about those things of security and privacy and safety. Hmm. What can we slap on top of this to make it perfectly secure, private, and safe? Right? Isn't there just some code or a person or little black box that we can use? And we get back into that cycle of misunderstanding and
Speaker 1 (05:00.288)
great expectations with not enough investment or time, and this becomes the normal world of cybersecurity.
First, there's a lot of hype, right? Every product, every platform, every SaaS program, everything that has even been around for several years now has an AI component to it. It's more of a marketing spin than anything else, it seems. But they are leveraging machine learning. But one thing that I really don't think a lot of business leaders have a good handle on is the fact that
the users are without a clear policy, without prompt training, without controlled access to it, right? They're DIYing it. You have different departments using different forms of AI, and the security team has no visibility into it, right? Is that a real thing? Is that what we're seeing?
Yeah, absolutely. It is, right? You know, we're back to the days of shadow IT, where people were bringing in their own devices, setting up their own wireless networks. We had that all the time. We'd go track them down. And, you know, there become, unfortunately, a lot of holes in the traditional security controls you have in place, because you're not anticipating that. So we're seeing that with AI systems.
With these systems, you don't have to go out and buy a big appliance, and you don't have to invest millions of dollars or have a big server under your desk, right? You can go out and use ChatGPT or any other AI that you want. Things of that sort.
Speaker 2 (06:44.078)
Yeah, I'm envisioning Mrs. Buttermaker using a pro version, a consumer-grade pro version of ChatGPT. Someone else is using Claude, someone else is using Perplexity, someone else is using Copilot, right? And they're all over the place.
Yeah, and you don't know what information is going in. You don't know how they're connecting them. And that's, unfortunately, the tip of the iceberg, right? That's what we can see and what we typically talk about in the beginning. We don't know if your HR department is accidentally putting information into, you know, a SaaS AI tool, and now that's exposed. That absolutely is a risk.
But it probably isn't the biggest risk. It's the one that will get people's attention at first, but there are so many risks under that tip of the iceberg of what we can see. So it's not just the end users that need to be aware of this. It's also the developers, it's the integrators, it's the infrastructure people, and cybersecurity, right? Because in cybersecurity we use AI tools as well. So we can also be part of the problem.
But we do need to understand what's out there. And if you don't set guidance, you don't set a path and guardrails, people will go everywhere. Because without guardrails, we're all, you know, singing and dancing in the park anywhere we want to. So we don't stay on the path.
Let me ask you this: do traditional detection platforms have visibility into it? Like, do some of the platforms or some of the LLMs out there that are being used provide logs that, say, a SIEM could ingest?
Speaker 1 (08:33.134)
So mileage varies; different tools typically can have some hint of access. So for example, your endpoint might be able to see if you're going out to a site or if it's installed something, your network filters might be able to see if you're going to a particular URL, right? Things of that sort. Your web application firewall might see if you're logging in, right? But you don't have the depth
of information that you need. And so it's very, very limited, right? You might be able to detect, hey, someone went out to ChatGPT, but you can't tell what they put into it.
Right. You can't tell what's in their prompting.
Yeah, they may be able to see that sensitive data left, but they don't know that it went out to one LLM versus another, right? So getting the big picture in a way that you can understand and prioritize the risk in semi-real time, so that you can interdict it, becomes problematic. Now there are some tools coming up that are trying to address this, right? They're trying to pull the veil back from some of those AI systems.
And there are tools and processes to, let's create policy, or let's go train end users, or let's go train developers. But all of those are going to be a step behind whatever the latest disruptive tool or capability is. We're still trying to get our heads around the, you know, ChatGPT and Perplexity, right? The LLMs, while agentic AI is already rushing forward. So we will always be a step behind the latest.
Speaker 1 (10:13.312)
And that means there's a window of opportunity and risk that is inherent to the game that we're playing.
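The visibility gap described here, knowing who reached an LLM endpoint but not what they put into it, can be illustrated with a rough sketch against web-proxy logs. The domain list and log format below are assumptions for illustration, not any vendor's schema or feature:

```python
# Hypothetical sketch: flag outbound requests to known LLM endpoints
# from a web-proxy log. Domains and log format are invented for
# illustration -- a real deployment would feed a SIEM, not a script.

KNOWN_LLM_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "claude.ai", "api.anthropic.com",
    "www.perplexity.ai", "copilot.microsoft.com",
}

def flag_llm_traffic(log_lines):
    """Return (user, domain) pairs for requests to LLM services.

    Note what this can and cannot see: WHO went WHERE, but never
    WHAT they pasted into the prompt -- that is the gap."""
    hits = []
    for line in log_lines:
        # Assumed format: "<timestamp> <user> <domain> <bytes_out>"
        parts = line.split()
        if len(parts) < 4:
            continue
        _, user, domain, _ = parts[:4]
        if domain in KNOWN_LLM_DOMAINS:
            hits.append((user, domain))
    return hits

sample = [
    "2025-05-01T09:14Z alice chat.openai.com 48210",
    "2025-05-01T09:15Z bob intranet.example.com 1200",
    "2025-05-01T09:16Z carol api.anthropic.com 9032",
]
print(flag_llm_traffic(sample))
```

Even this toy version shows the limitation Matt raises: the log records that alice reached ChatGPT, but nothing about the sensitive data in her prompt.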
So for listeners that might not know the difference between generative AI and agentic AI and some of the risks that agentic AI poses, can you walk us through that?
Yeah. So generative AI, you know, you're using a subset of machine learning, you're using deep learning, and essentially it's creating something for you. It could be creating a paragraph or a story. It could be creating a song for you, an image, a movie, all those cool tools. You know, create me an image of a dog dancing on a skyscraper. Those are all generative AI.
Essentially, you give an input and they give an output, which is great. If your input is sensitive, you're giving it to the world, because it's incorporating it. Generative AI tools are multimodal, right? So you can create a movie with sound or text or whatever. Then you've got agentic AI. And the way I like to describe this is: in Iron Man, Iron Man has Jarvis. Jarvis is simply told to achieve an objective.
And then Jarvis goes and breaks that down, figures out what it needs to do, what it needs to order or buy, how it needs to manufacture. It goes off and engages all these other systems, brings it together, and then delivers it and works with you on that. That is an agent-based AI system. And these are incredibly powerful from an automation standpoint, figuring out
Speaker 1 (11:53.838)
problems and doing a lot of work for you. So they hold a tremendous promise and value proposition as they evolve. And we're still in the beginning, right? We're still dealing with, you know, a four-year-old mentality; in two weeks that'll be an eight-year-old, and within three months you're dealing with a teenager. You know, the risks of generative AI,
which in many cases are really around privacy, loss of sensitive data, confidentiality, and so forth. Compared to that, the risks of agentic AI are essentially undermining all of your traditional security controls.
How so? Can you walk us through that?
Okay, so there's a general rule here if we think about it. The value of an agentic system is predicated on the amount of access it has to other systems and to sensitive data. So if you have a Jarvis...
Configuration comes into play, right? Like, you're setting it up: what is it able to access?
Speaker 1 (13:00.0)
If you had your Jarvis and Jarvis didn't have access to anything, it's not very valuable, right? But if you give it access to everything, the internet and your data and manufacturing facilities and all that, it can build you an Ironman suit. So given that natural pull for agentic systems to get more and more access and more and more control, it's not just being able to read data, it's being able to manipulate.
data and transactions, create new transactions, right? One of the early use cases for an agentic system is as a personal travel agent. Okay? Right. And you can already go up to a webpage and say, hey, find me a trip to Rome, and it'll come back with flights and hotel and transport. That's fine. But with an agentic system, you can tell it, hey, I want to go to Rome. I want to see the Colosseum and I want to do this.
And it will go off, and not only will it find the best routes and cost savings and everything, and find a hotel that's facing south because you like a certain view in the morning, it's going to go to your bank account and it's going to pre-book all of these for you. And it's going to get reviews and it's going to set up a profile. But if it doesn't have access to your bank, it
can't schedule that. If it doesn't know what type of hotels you normally stay at and the special requests, it can't do it. So again, to be the most valuable, you're absolutely going to want your Jarvis to have access to all your previous trips, to all your work documents and who you're going to meet and your bank accounts and all these other things. All of that. Think what could happen.
Your calendar, and what's in your presentations...
Speaker 1 (14:56.876)
Right? This is my agent. Of course it's secure. It's my agent; why would they design something not secure? Why would they? They're trying to get it to market as fast as possible. Right. So in doing that, that undermines everything. It would be typical in that situation to go, hey, Jarvis is requesting access to your PC and all your files so it can coordinate. Oh well, sure.
Well, it needs access to your network and the internet and all your bank accounts. And, well, of course. And you're not even thinking about it, but now you've undermined all of your security.
An attacker, right? Because should an attacker get in and get access to that agent, they now have access to everything.
They now have not only access, but authority because you've granted them authority. They don't have to hack anything, right? They just hack the one system and they're in, and now you've already given this one system authorization to do all these things and to act on your behalf and to speak and communicate on your behalf.
So are we seeing that occur now with threat actors? I believe we have.
Speaker 1 (16:06.402)
We are. We are seeing threat actors drooling and waiting, waiting for these tools to come into play. And so when you look at specific vendors who are trying to enact this, some of them are holding back a little bit, right? We see it with Microsoft, we see it with Apple. They're releasing some of these agents. Apple actually, in its recent announcement, kind of held back quite a bit in regards to AI,
because they do want to be able to provide that capability, but they realize there comes with it an equitable risk. And, you know, Apple's well known for its privacy and other things, trying to protect its users kind of in a walled garden. That's great. Opening up something like this without proper forethought can undermine all of that.
So there's that natural tension. We want to move forward super fast and give the best experience and awesome capabilities to our customers. But at the same time, it comes with a lot of risks, risks that we don't even necessarily understand yet.
Yeah, because depending on what the agentic bot or platform is going to be able to do, you can just imagine the risk that we're giving away. Because should somebody unauthorized get access, no alarms or alerts are going to be set off to the security teams, because they assume it's you, right? And you've already pre-approved
all of these actions in order to make it effective and useful and have a great user experience.
Speaker 1 (17:47.274)
And if you don't, as a security team, set down those guardrails to say, okay, you can have an agentic system, but only give it access here, here, and here. Allow us to configure alerts; allow us to get telemetry from it. Maybe we can detect if it gets taken over, right? And let's also develop a crisis response plan, because if it takes over the personality and access of our CEO, that's kind of a big problem. We need to jump on that quick, right?
And so there's a lot of things that security wants. We want access to telemetry. We want to put certain limitations in place. We want to vet, right? Don't just install an agentic system from anywhere, you know, some .ru webpage, right? Don't do that. Let's set some guidelines. There's a lot of things that security wants. And right now they're like, yeah, we'll get to you; right now we're trying to build functionality. We'll get to you in a while.
First, let's make it work and then we'll circle back around with you. And that's obviously not very productive, especially when something breaks.
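The guardrails described here, explicit access grants plus telemetry the security team can alert on, might be sketched like this. The `ScopedAgent` class and the scope names are hypothetical, invented purely to illustrate the "only give it access here, here, and here" idea:

```python
# Hypothetical sketch of agentic guardrails: the agent may only touch
# explicitly granted scopes, and every call is logged so the security
# team gets telemetry. Scope names are invented for illustration.

import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent-audit")

class ScopedAgent:
    def __init__(self, name, allowed_scopes):
        self.name = name
        self.allowed = set(allowed_scopes)

    def invoke(self, scope, action):
        """Run `action` only if `scope` was explicitly granted."""
        if scope not in self.allowed:
            audit.warning("%s DENIED scope=%s", self.name, scope)
            raise PermissionError(f"{self.name} has no '{scope}' grant")
        audit.info("%s ALLOWED scope=%s", self.name, scope)  # telemetry
        return action()

# Grant calendar and travel access, deliberately NOT banking.
agent = ScopedAgent("travel-agent", ["calendar.read", "travel.book"])
agent.invoke("travel.book", lambda: "booked flight to Rome")
try:
    agent.invoke("bank.transfer", lambda: "paid hotel")
except PermissionError as e:
    print("blocked:", e)
```

The point of the sketch is the default: anything not explicitly granted is denied and logged, which is the opposite of the "Jarvis gets everything" pattern the conversation warns about.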
Are you seeing the maturity of, let's say, a regular midsize manufacturing organization begin with generative use of AI, where they see its potential, get used to using it, and then want to design certain agents? Or are you seeing groups going straight for seeking out agents and developing that in the market?
You know, for AI, a lot of it is marketing, right? And like any industry, you want to pick a couple of really awesome proof points to be able to show the value and the potential. And we saw that in AI, right? ChatGPT was that one moment where, hey, we can make a publicly available tool and it can blow people's minds, right? Great. And they released ChatGPT at a loss every single day, this massive loss,
Speaker 1 (19:53.07)
but the adoption was faster, you know, a hundred times faster than Facebook and everything else. So we see in agentic the same thing. We're seeing certain use cases that are bubbling up, and the AI teams are thinking about it. Hey, will this be a moment that people look back on and go, we have to have that? Or will this just be a narrow case? Let's not go with the little opportunity.
Let's find a showcase use and then drive towards that. So we're seeing a few of them that are bubbling up, and we've got innovative companies that are then focusing on typically one or two use cases in the AI space and are trying to get traction on those. They want that showcase.
Are those in, like, the customer service space? You know, bots? What are you seeing?
So for GPT, generative AI? Or are you talking...?
For agentic. Like the ones that are bubbling up, the ones that it seems like there's key demand for.
Speaker 1 (21:02.828)
Yeah. So right now we're seeing specific use cases where certain functionality can be automated. It's things people are doing already, because in order to create an agentic, think about it, an intelligent framework, you kind of have to have instructions, right? You can't say, hey, you know, make up what you should do. You actually need to train it.
In order to do that, you need the training data, which means you need policies and procedures: this is what you do in these kinds of situations. So that has to become the training data. So right now, what we see is use cases being developed to automate variable work, right? It's not just pulling a hammer, right? This is all I do. There's some thought or decisions that have to be made, and automating that
to where you can do it at scale, 24/7, you know. And it's not necessarily going to make or break your business, because it's new technology, but we want to start automating things to reduce the workload, increase capacity, ensure consistency over time and geographies. That's where you're starting to see some of it.
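The idea that written policies and procedures become the agent's instructions, with anything outside them still going to a person, can be sketched roughly as follows. The procedure table and ticket fields are invented examples, not any real product's schema:

```python
# Hypothetical sketch: automate "variable work" from documented
# procedures, escalating to a human when no procedure matches.
# Categories and actions are invented for illustration.

PROCEDURES = {
    # category -> automated action, distilled from written policy
    "password_reset": "send self-service reset link",
    "invoice_query":  "attach invoice PDF from billing system",
}

def handle_ticket(ticket):
    """Apply a documented procedure if one exists; otherwise escalate.

    This preserves the early-phase human oversight mentioned above:
    anything outside the trained procedures goes to a person rather
    than being guessed at."""
    action = PROCEDURES.get(ticket["category"])
    if action is None:
        return {"status": "escalated", "to": "human-queue"}
    return {"status": "automated", "action": action}

print(handle_ticket({"category": "password_reset"}))
print(handle_ticket({"category": "legal_threat"}))
```

The design choice worth noting is the explicit escalation path: the automation only covers what the written procedures cover, which is exactly why the training data has to exist before the agent does.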
That's amazing. How is this affecting... I mean, first and foremost, everybody's afraid. You hear it or you see it on social media, the fear of losing jobs, and you have these outrageous claims online, like, with AI, you know, organizations won't need attorneys anymore, they won't need this industry anymore. What are you seeing as you're talking with leaders that are
developing this, or other organizations that are beginning to implement it?
Speaker 1 (22:49.006)
When it comes to economies, and what people are going to accept and not accept and dive into 100%, and what the consumer base is going to embrace or not embrace, it's incredibly difficult. Consumers can be outrageously fickle. When you look at the latest tool or toy craze, who could have predicted something like that? What we can look at pragmatically is:
AI agents can absolutely take over a certain amount of work. In their early phases, they need a little bit of help, and definitely they need human oversight. But as time goes on, these systems become more consistent and they become more accurate. Yeah, more capable, more accurate, more effective. And the need to have oversight is going to start to wane.
Now, is that going to change what we do and how we do it during the day? Absolutely. Right. And we can look back in historical precedent for that. You know, when the automobiles started to take off, there were a lot of blacksmiths out there and they were very, very worried. Rightfully so.
I use that analogy all the time and while it might have damaged the horse and buggy industry, destroy-
Not damaged, it obliterated it, absolutely.
Speaker 2 (24:15.406)
it created an exponentially larger industry, right? Oil, gas, refineries, logistics, transportation, all of it. So, you know.
By far.
Speaker 1 (24:26.208)
All of it, right?
That's probably what we're looking at. We're looking at a long-term, massive net gain. And we see that, you know, with the introduction of the automobile or electricity or things of that sort. The steam engine, right? Yes, horse farms kind of went away once we had the steam engine, but look how society progressed. Or the old telephone system, and the operator's there, right? Who you used to have to call or contact.
Hey, I'd like to make a collect call, right? And all those people who used to put in phone booths or repair them. Today it's telecommunications, the internet, right? It got rid of a lot of those jobs, and yet look at the telecommunications industry. So should we be concerned short-term? Yeah, there's going to be pain. This is about change, and with change and disruption it can be painful. It's the way it is. Those who adapt and learn those new skills are going to be the ones that are needed.
So from the 10,000-foot level, I'm not worried about that. I'm not. And we even see that in the cybersecurity space. People are concerned: I don't need any first-level technical SOC personnel, because I'm going to have an AI do it. Well, not really, right? Don't go off and hand out the pink slips yet. Yes, you'll be able to scale, but those AI systems, when they do find something, will then escalate it to what you call second level,
which really just means your first level is now going to be elevated and trained to be second level for all of these alerts that are going to come in and be prepared for them. So, you know, there's a lot more work than you anticipate.
Speaker 2 (26:08.632)
So how does that affect leadership in cybersecurity? When we think of CISO transformation, how is the role of the CISO evolving in light of this?
That's a big question, right? There's lots of things going on in cybersecurity because the expectations of cybersecurity are going up. Whether it's with the consumers, apparently consumers don't like their data breached or their systems compromised, yeah, little things like that, right? You've got regulators, right? We've got more regulations that are coming in, which means you've got auditors and the expectations of auditors aren't so easy anymore.
Showing them a paper copy of your security policy doesn't fly anymore. At least it shouldn't, right? And you've got your fellow C-suite. Security incurs a cost. It's not only dollars, but it's also time for training to their employees and limitations on what they can do. And you've got CISOs and boards that are now being brought in saying, cyber security is important. You really need to oversee this. There's liability, there's regulatory compliance.
It's a huge cost. Who knew it was going to be such a huge cost? And it keeps going up every year. We don't know why. So the expectations go up and that creates massive problems. And we're seeing a moment. We see it down the road, right? It's this cliff and we have to bridge that because we have to move away from only focusing on regulatory compliance and a little bit of risk management.
We have to actually transform from that technical risk to actually being a business contributor. For the amount of money, time, effort, and friction that we introduce into the environment, right? We have to elevate our game. We have to contribute to the organization's competitive advantage. We have to contribute and enable revenue and things of that sort. Market share, new initiatives. We can't be the office of no.
Speaker 1 (28:08.834)
We can't only focus on the technical aspects and we can't be using fear to get that next budget cycle because it's just not gonna work.
That's brilliant, because, first of all, fear, uncertainty, and doubt doesn't drive behavior, right? It creates anxiety. But being able to communicate with them in business terms, being able to explain, if not ROI, at least ROM, return on mitigation, which has a positive effect on a P&L, and being able to tie some of the metrics to actual business outcomes.
I mean, that to me seems, you know, it's like anything else being able to translate it in understandable terms matters. Right? Yeah.
And as part of that transformation, we do have to communicate. That's foundational, right? But we have to take it even a step further. We have to start cooperating and working with those profit centers, right? How can cybersecurity help your marketing, help your sales, reach out to your customers? How can we, you know, potentially build in security features that are a competitive advantage over your competitors?
Let's increase your share of market. Let's make sure your margins are nice and thick, right? Nice and comfy. Let's keep those average selling prices up. What kind of ancillary or tangential services can we add on? We see this now: we see smart companies using cybersecurity to move people from a freemium tier to that first paid model, right? Hey, that's bringing in money. That makes the
Speaker 2 (29:46.037)
Exactly.
Speaker 1 (29:53.518)
difference. That makes a huge difference. And if CISOs don't embrace that transformation, if they don't adapt, while still asking, hey, I want a 20 to 25% budget increase every year, every year, right? You're not adding the value you need to be, and you will wither and die. So you either adapt or you're gone. And that's the cliff that we see.
If cybersecurity cannot adapt and be a business enabler and contributor, right? Not just technical risk, I address technical risk, but actually contribute to the primary goals of the organization, it won't be sustainable. It simply won't.
Absolutely. So let's flip the script. What are the threat actors, attackers, and, for listeners, hackers, because that's how they envision them, how are they leveraging AI? I've seen it in development of polymorphic ransomware, obviously, and AI voice cloning, video deepfakes, image deepfakes. So social engineering is being high-powered. It's definitely improved
the effectiveness of phishing, right? Because the traditional red flags aren't there, like the grammatical errors. They can be very specific. That email can look like it is from somebody from that region, sounding exactly like that person. They can create dossiers. The recon that they can do on an organization is amazing. What else are we seeing? I mean...
You're nailing all the great ones here, right?
Speaker 2 (31:35.662)
I kind of took all the fire out of it, so I'm sorry. Those are big risks, and it seems like they are there. It seems, once again, they're a step ahead, simply because they're more agile and more dynamic. Right?
Yes. And they're less risk averse, right? They get to these tools and they don't care if it's perfect, right? Throw it out there. If it works, if it's broken, oh well, we'll fix it. Right. And yet security needs to make sure you don't put out a broken tool. You don't put out a tool that can cause harm. So they can run wild and haphazard trying different things and it's okay if it fails or it breaks somebody, whatever. There's always other victims, right? So they're able to innovate and rapidly adapt.
in this fail-quick, revise cycle and so forth. But you covered a lot of the big ones, right? In the generative AI space, it's a lot of social engineering. It's a lot of phishing, impersonation, fraud, all of those things. They can create authentic-looking receipts. They can impersonate your boss, not only the way they write, right? But also, you know, how they sound and what they look like. You know, you've got video mimicry and all these other
fun, sexy things. But down at the basics, what really helps them in the social engineering is really around that phishing because they don't have to pay somebody pennies an hour in Malaysia or somewhere else to be writing out spam messages or interacting back and forth. They can use an AI bot to do that. And it could be a sexy AI bot that's drawing someone in. It could be somebody that's communicating.
in the local language, right? And actually understanding the accent of the person and matching that, right? There's no spelling errors. The grammar is accommodating to that person, the word choice, sentence structure, everything. Yeah.
Speaker 2 (33:32.472)
The syntax, all of it. It sounds like they're pros.
So it's no longer, you know, fumble-fingered messages that are going to have errors and typos, that don't sound right, that look obviously off. These can be brilliant. And because you have this continual learning cycle, when it fails, it can learn and then adapt automatically. So you're not sending hundreds or thousands out.
They improve and adapt.
Speaker 1 (34:05.858)
You're sending out millions and the success rate is going up because of the effectiveness. And that's just on the generative AI aspects, right? Which are a little bit more mature. You also touched a little bit on the agentic side because again, all these tools are open. Anybody can download them, install them on your PC or laptop and run them. We're seeing attackers using agentic systems as well to automate the entire process.
Let's say I want to go after you. And you know what, based on the intelligence that this AI has gone off and looked at, you may be a great candidate for whaling, or for getting you to click on a malicious webpage, right? Or something like that, some type of watering hole attack. Great. So this agentic AI system determines that, goes off, creates a web domain, creates a webpage, creates the malware behind it,
arranges the words in a tantalizing way just for you, right? Sets it all up and starts hitting you with phishing emails for you to go to that website.
Right.
Speaker 2 (35:15.988)
And it's just all automatic. You can imagine the context, right? You have somebody who's developing and manufacturing a certain part, right? You can have a white paper about the use of that part in the industry that just came out from one of the leading experts locally, and they can push that out, and then boom, once they click, there's the attack, right? It's so incredibly at scale now
that it's just going to keep evolving and maturing.
Yeah. And that level of automation flows through. I mean, that's an example of social engineering. We're seeing other examples, right? Malware, right? Where it's trying to create a piece of malware and sends it out and, oh no, some of the big anti-malware engines are detecting it. It realizes that, and this is the AI system, right? It realizes that. So it goes back, rejiggers it and reconfigures it and resubmits it. Okay, it goes through.
Now I'm going to keep monitoring and I'm going to keep adapting, at speed, automatically. So your attacker is sleeping like a baby while all these agents are running, doing all of these things. We've seen other agentic systems that are basically given a target, and they will go off and look for network vulnerabilities and application vulnerabilities. They will enumerate.
Their network, their web presence. They will identify and list out all their executives, capture all, you know, search the dark web, capture as much information about their logins or emails, plaster them with social engineering attacks, try their passwords all over the place. All sorts of things, automatically, at speed.
Speaker 2 (37:07.478)
Unbelievable.
This isn't a brilliant hacker who can only attack one person, and it's going to take four or five days and a whole bunch of Red Bull. This is a PC or a server that's doing a thousand of these attacks simultaneously, within minutes. That's the scale we have to deal with. And then it'll learn from its mistakes.
Right. How have you seen the polymorphic malware, like ransomware, and even just exfiltration malware, where they're not even going to release the ransomware, the crypto-locker type, you know, the locking-down malware? How are we seeing them adapt? I've read some reports where
they're adapting, they're turning off detection systems. And so even the traditional, you know, SIEM, MDR, XDR tools that are out there are being turned off by these things.
Yeah. And we have to be clear, right? Those practices have always been around. If you go back five or ten years, malware would typically try to disable the anti-malware that's on your system. The difference is when you have an attacker that, let's say, goes and gets a foothold or a toehold into your network.
Speaker 2 (38:20.91)
But
Speaker 1 (38:29.518)
In that attacker's mind, they already have a list. They're going to go down it: hey, I'm going to look for these top 10, top 20 vulnerabilities. These are my most likely ones, and that's what I'm going to spend my time doing. The difference here is, with these agentic systems, they're not limited to 10 or 20 vulnerabilities. I'm going to look at 2,000 or 5,000 vulnerabilities.
Take a look at everything.
Speaker 1 (38:55.51)
And if they get a toehold in any one of those, they're gonna then expand from each one. So the scalability of what they can look at and then take advantage of makes a massive difference. It is a force multiplier. So there may be a vulnerability to be able to turn off your anti-malware, but it may not be a common one. So your average attacker is not gonna see it. But the agentic systems,
they're going to have a really good chance of tracking down that obscure linked vulnerability that's chained through five different systems. So they can turn off, or modify, or render your defenses inactive. Their ability to scale, and the depth of it, is something that typical defenders can't deal with.
No, and you can see what an advantage they gain by developing agents and automating this. And then when we rise to the level of the advanced persistent threats, right, and the nation-state attacks, what are we seeing there? We've seen a massive increase in them. What is it that you're seeing?
So number one, we're seeing a massive increase in their activity and the barriers that they're willing to cross. And that's even before AI gets into the picture. But what we're seeing with AI is they realize the advantage of this. They're already using it now for a lot of that generative AI, the social engineering attacks, the foot holds. There's also a lot of work being done for the more advanced penetration type of attacks.
that we see some nation states do. And particularly Russia and China and Iran. When we talk about the other ones that are going after money like North Korea, they just want to get in and grab money and run out. But when you talk about China, they want to be deep in the systems. They want to be ever present. So they want to remain stealthy. Those types of attackers are patient and they need to be.
Speaker 1 (41:07.734)
You can't fail fast using AI, because that will give you away, and that's not good, right? You don't even want the victim to know that you're even interested in them. So they're spending more time looking at these AI systems, not only in traditional IT networks, but also in OT networks, because agentic systems can play in those realms as well.
like the SCADA systems and things like that, like on the manufacturing floor, things like that, right?
Yeah. So think telecommunications, the backbone of telecommunications. Think of energy systems. Think of what your nuclear power plant is running on. Think of the systems in a manufacturing line or in a dock that are controlling cranes or trains, right? All of those systems are OT environment, SCADA systems and so forth. And they tend to focus in sectors that are part of critical infrastructure. They're in other sectors as well, but they are.
the underpinnings of a lot of our critical infrastructure. And this is, you know, one of the places where these aggressive nation states want to be able to conduct operations, whether it's gathering data or, at some point, being able to turn something off, right? Corrupt something, take something out. Whether it's your financial system, your healthcare system, your transportation system: we're going to ground all planes, or we're going to stop all shipments.
Right.
Speaker 1 (42:39.63)
Right. Right. Well, we're going to take out the entire GPS navigation system for ships, okay, so we can't ship food in, we can't push our product. That's going to hit our economy, our food supply. So when you look at what these strategically minded, highly capable, infinitely funded organizations are trying to achieve, yeah, AI, especially the agentic stuff, is very powerful. But
For most of them, they want to be a little bit more cautious. They want to test it. They want to validate it and they're absolutely going to use it and they're working and investing in it because they see the benefits.
Unbelievable. For a small or midsize organization, what are the top things, in your experience, that they should be doing today? I know the answer is probably not very sexy. It's obviously the basics, and probably everything that we've been saying for 10 years. It's the same thing, right? Have visibility, have detection.
I mean, you know, we talked about it
Speaker 2 (43:49.396)
eventually, right? So that you can see what is happening, because a lot of them don't even have that. And they don't have incident response plans. They haven't run tabletop exercises. So they're not prepared for the day when it happens. Those, to me, always bubble to the top.
Yeah, it comes back to the basics, as you've said, right? You do need visibility. You do need to set down some guardrails, right? Which means you also need to be proactively involved. And that's one of the things that I talk with my clients about, especially the CISOs: you need to be proactively involved with what every organization in your company is doing with AI, or any disruptive technology. Right?
You shouldn't be the one they're keeping out of that room. They should be embracing you, which means you have to partner with them. You can't go in and just say no, no, no, no. You need to go in with the mentality of, hey, that's awesome, I see the benefits of it. Let's figure out how we can do that securely, so this doesn't fight you. Right? Let's add on top of that and see if we can make this a competitive advantage. Right? What can I do to-
If you're seen as a facilitator and a highly motivated person for their success, they're going to embrace you. And if they embrace you, now you know the projects, right? Then you can understand, okay, what are the risks? And you can build a plan with them to say, we've got some risks here. Do we want to accept those for a little while and then maybe put some controls? Let's plan for that. Do you want to not have those risks? We can plan for that.
What works for your business, and what are you willing to accept as a legacy risk? It's not that I'm going to accept the risk for you; I'm going to communicate with you and educate you. This is your product. This is your profit line. And I'm going to tell the CEO, yeah, we've discussed the risks; they know what they accepted. So I'm going to help you accept whatever risk you need and mitigate whatever risk you don't. I'm here to help you. If you want to accept it all, great. Let me document that.
Speaker 1 (45:57.91)
Let me communicate to the CEO and board that you're okay with this. They may be okay with it too. They represent the shareholders. Good to go.
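That "document it, communicate it" loop can be as lightweight as a structured risk-register entry that both sides sign off on. Here is a minimal sketch in Python; the field names and example values are hypothetical, not taken from any specific risk framework:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical minimal risk-register entry. Field names and the example
# values below are illustrative, not from any specific framework.
@dataclass
class RiskDecision:
    risk: str        # the risk, described in plain business terms
    owner: str       # the business leader who owns the decision
    decision: str    # "accept" or "mitigate"
    rationale: str   # the why, in business terms, for the CEO and board
    decided_on: date = field(default_factory=date.today)

    def summary(self) -> str:
        # One line suitable for reporting up to the CEO and board.
        return (f"{self.decided_on.isoformat()}: {self.owner} chose to "
                f"{self.decision} '{self.risk}' ({self.rationale})")

entry = RiskDecision(
    risk="new AI tool sends customer data to a third-party API",
    owner="VP of Product",
    decision="accept",
    rationale="pilot only; contract review planned before full rollout",
    decided_on=date(2025, 1, 15),
)
print(entry.summary())
```

The point is less the code than the discipline it captures: every accepted risk has a named business owner, a rationale in business terms, and a date, so leadership sees decisions rather than technology alerts.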
Right? Yeah, I mean, it seems to me that it's just traditional risk management. It's essentially making an internal business case. And frankly, that is something that CISOs ideally should have been doing all along. They should have been doing that a long time ago, right?
They should have, but again, we have a little ugly problem that we never want to talk about in our industry. The vast majority of people in the CISO space are technologists. They grew up as technologists, so they see the problem as a technology problem. And this is part of that chasm we talked about, this cliff that's coming, because if you only see it as a technology problem, you're already communicating and conveying, whether you intend to or not, that cybersecurity is fixable.
And if people believe that, that means: I'm going to invest in you once and not invest in you again. They're going to be thinking, hey, cybersecurity, it's technology. You're the person. It's not my problem, it's your problem. You go fix it. Right? So you'll have no support from any of the other C-suite. They don't want you involved. Just go fix the problem. It's technology. I don't need to be informed; my people don't need to be involved. Right? You get this misconception from the executives,
from the CEO and board, and they then struggle to understand: how do we balance the risks and the costs and the friction? And stop talking to us, by the way, about technology. We're business leaders. We're not security experts, nor should we be. Start talking to us in business terms, not technology; alerts and scanning are a foreign language to us. You need to work with us.
Speaker 2 (47:49.931)
Exactly.
Speaker 1 (47:58.894)
We make the ultimate decisions, and you're not communicating. And when you have all this miscommunication, we tend to fall back on FUD, right? FUD: they don't understand, so I'm just going to use fear and scare them, and they'll throw me another bag of money. But that may be the last bag of money they throw at you, because fear only works for so long. You know, we have-
Scared the bejesus out of them. Right, exactly.
Speaker 2 (48:23.982)
You said something interesting there. Yeah, you said that they convey that cybersecurity is fixable. I've seen CISOs do this, or IT leaders playing the CISO role. I'm like, that is like an attorney finalizing a contract and saying, we have no more legal problems.
Yeah, we're done.
go ahead and do that. I'm like, that is not the role of an attorney. It is an ongoing process, eternally. There will always be problems, but we've won this battle, this battle is resolved, and now we're going to address other ones, right? It's always going to be an ongoing thing. And I see cybersecurity very, very similarly. Look at how fast it's, you know,
I mean, look at how much it's evolving every single year.
And yet we have a vast majority, a vast majority, of CISOs who are not talking in those terms.
Speaker 2 (49:27.33)
Because they come from the technology space and they are communicating in those terms, right?
From an engineer's perspective.
And I'm not only blaming them, right? And I'm-
No, it's part of that community. It's human nature. It's human nature.
But it's also the media, right? When they report on an issue, they report on the technical issue: it was hacked, right? They're not saying, hey, it was a behavioral or a process issue. They're not, you know, getting down into what's actually relevant. It's all-
Speaker 2 (49:56.174)
That's not going to get clicks. Like that's not going to get clicks on their story, right? Yeah.
you need MFA. And all the security vendors do the same thing. Why? Because it benefits them. If it's a fixable problem and we're the fix, we're going to make a lot of money. So I'm going to phrase the problem as fixable, and of course we're the solution, write us a check. So the entire industry has looked at this and communicates it in a way that undermines our longevity
in being able to continually manage it. And as you said, there's always something that comes up, and people don't have the background knowledge or the common sense to recognize it. But if you went to your doctor for an exam and the doctor said, you're good, you're going to live forever, you'd know there's something wrong here. That does not make sense.
Yeah, exactly.
Speaker 2 (50:54.798)
What insurance plan is this? Like, who are you sending me to?
That's the expectation of security, right? Well, we're not ever going to be hacked. Yeah, absolutely, it's fixable and you're going to fix it, and so we're going to be impervious.
So as we wrap up, tell me about what you had predicted, because each year I always look at your predictions and they're pretty damn good. How are your predictions, you know, playing out for 2025, the ones that you made in the late-
Go back and look. Actually, you and I should sit down, go through them, and be brutal, as always, right? For the 2025 ones: I know I talked about the rise of nation states, and wow, have we seen that. If anything, I may have underestimated it. We've seen a lot more ransomware, and I knew that was coming up.
Yeah.
Speaker 2 (51:48.718)
Because in 2024, there was a short-term dip in ransomware. Yeah, exactly. It was being reported as a dip, and people were thinking it's going away now. I'm like, what are you talking about? We disrupted LockBit for a little bit. That is not-
Listen, it's a metric.
Speaker 1 (52:11.47)
The attackers changed their name, they rebranded, and they're back at it.
Let them rebrand and they're coming back, trust me. They always do. Yeah.
They always do. Did you hear the latest on ransomware? And this goes back to the AI. So nowadays, if you get hit by ransomware, they've not only locked your data, they've typically extracted it too, and they're going to extort you: hey, if you don't pay us, we're going to publish this, right? We're going to make it public.
The latest extortion is: not only are we going to make it public, we're going to submit it to all the AI engines. It'll land in their training corpora, and it will forever be imprinted, available, and used as part of generative AI.
They're gonna throw it in the LLM.
Speaker 2 (53:03.758)
That was an artist and creative website, right? The artists and creatives, all of their proprietary artwork. They're like, oh, you don't want to pay us? No? Okay. Oh, you think you can restore from backups? Okay. How about if we take all of the proprietary artwork and feed it right into AI? They're like, let us write you a check. That's right, let's solve this. Because all of a sudden... I forget which
ransomware gang it was, but they were contacting the SEC. They were contacting the regulators. And I'm like, that is really bold and really brilliant from the criminal side. That is leverage, right? And now with LLMs, they're threatening to feed it in, because once you do, it goes out to sea. Once it's out to sea, it's gone, right? Once you feed it in there, it's no longer proprietary. We can all get it. If we prompt it right, we'll be able to pull it all down.
It'll be integrated seamlessly, right? It's one thing for me to threaten, hey, I'm gonna put your dirty picture on the internet. Well, someone would have to look for it and track it down. Whereas if I integrate it into the knowledge set of common tools, all of them are gonna surface it all the time, because it's part of that corpus of knowledge. So it's basically saying, I'm gonna not only put it out on the internet, ha ha ha, I'm going to make it available and integrate it into these massively used tools.
Right. Unbelievable.
That can be compelling, right? Think of nation secrets, intellectual property, protected works, healthcare information, all of it.
Speaker 2 (54:41.966)
And how CISOs communicate this is important, because they can't just say that and raise fear. They have to say: what that means is that the designs of our product, right, will now be available to everybody with an internet connection, including all of our competitors. Because now that is a real, practical understanding of what it means. Because it's not just that somebody would have to go and search for it, et cetera. It'd be like, no, any general prompt in our industry will surface it right to them. And that's scary.
Imagine your secrets, right? They're not copyrighted or anything. It's one thing to say, I'm going to go put your recipe on a webpage. It's another thing to say, I'm going to put it in ChatGPT for anybody who asks for a chocolate chip cookie recipe. Because down the road, you're not going to be able to sue people, because someone's going to be able to go, hey, no, this was available via all these LLMs, and it looks like they published it before you did.
Exactly.
Speaker 2 (55:48.206)
Right.
So somebody else owns it, not you. You're not going to be able to claim ownership of it. Oh, there are all sorts of legal issues, and IP protection swirls around this. It's beautifully ugly. It is. It's amazing.
It really is. And then you see some of the heads of the LLM companies, and I'm not going to name names because I use their products, but I'm telling you, they're like, yeah, we've got no guardrails on this. They're like, this thing is bigger than all of us. And that is not the confidence level I want to see, right? It's brutal.
Well, you know, one of them just agreed to a, what, one-point-something-billion-dollar settlement because of copyright issues and so forth. So that's going to spin off a whole new set of lawsuits for all the others. It's the wild west out there. AI is just churning up the mud. It's great. Everybody is in hog heaven, and we're just starting to come into the next disruptive technology, right? Quantum.
So I don't see us having a breather between disruptive technologies. They're going to overlap.
Speaker 2 (57:03.03)
generational. Yeah, these are generational, dynamic shifts. And it seems like quantum is coming faster than anybody had predicted. So it'll be phenomenal to keep tabs on this. Thank you so much for your time; I want to be respectful of it. Definitely let's get together as you're making your 2026 predictions toward the end of the calendar year. I would love to get back together with you and do a recap
of 2025, how that laid out and what 2026 is looking like.
Absolutely.
Just to depress everybody, because that's why we're here. This is why we're here. It's like sitting at a bar or a restaurant and just, you know, going, I can't believe this is going on. But it's so fascinating. And the more everybody is aware of it, the more they get a sense of it, right, and then they can apply it to their own lives. And I think that we can all
get a little better, you know? We have to know where the dangerous neighborhoods are before we walk through them. You know what I mean? Like, just to know, to be aware, right?
Speaker 1 (58:16.492)
The more foresight we have, the better prepared we can be, and the better choices we can make to accept or mitigate risks.
You said that much more articulately than me. I was envisioning walking down the street and seeing a dark alley with a bunch of garbage cans rustling and all this mist, really dangerous. And I'm like, it's probably a good idea not to walk down there. You know what I mean? Let's just do that. Let's start.
Let's just start there and it'll be better. So, hey-
Speaker 2 (58:56.918)
I like that too. Absolutely. So I will have links to your information in our show notes. Follow Matthew. You don't need more followers, by the way. You got like a gazillion. But please do because his LinkedIn posts, the work, the white papers, the public speaking that Matthew provides, it's absolutely brilliant. There's literally hundreds of us in the industry that follow you.
We can all cry in our beers. It's great.
Speaker 2 (59:22.562)
continue on, keep going. And I am very, very honored and appreciative that you took time to sit with us and share your insight.
It's always great to chat with you.
Thanks buddy. Talk soon. Thanks.
TAGS/KEYWORDS:
AI,
Artificial intelligence
ai for business,crime documentary,true crime documentary,cyber security,cybersecurity,hacking,what is cyber security,artificial intelligence,ai,ai tools,prompt engineering,best ai tools,agentic ai,risk management,generative ai,best identity theft protection,social engineering,cybersecurity awareness,business strategy,true crime,true crime stories,zero trust,phishing,cyber security explained,truly criminal,ai for beginners,cybersecurity for beginners, cyber crime junkies,How Hackers Think, new AI, AI Guide, AI use