Cyber Crime Junkies
An entertaining and sarcastic podcast featuring dramatic stories about cyber and AI that actually help people and organizations protect themselves online and stop cybercrime.
Find all content at www.CyberCrimeJunkies.com and videos on YouTube & Rumble @CyberCrimeJunkiesPodcast
Dive deeper with our newsletter on LinkedIn and Substack. THE CHAOS BRIEF.
Are YOU the Next Victim? STOP Leaving Your Life EXPOSED.
New Episode🔥Guest Dan Elliott (vCISO with RECORDED FUTURE). We emphasize the critical need for robust vendor due diligence and continuous risk assessment to improve automation without compromising security. Understanding AI and its risks is no longer a future problem but a present challenge. This episode focuses on the stark reality of cyber risks, moving past sensationalized portrayals to examine how attacks actually happen: through overlooked vendors and the AI tools we use.
Chapters
00:00 The Reality of Cybersecurity Threats
03:11 Understanding Cyber Threat Intelligence
05:48 Analogies in Cybersecurity
10:47 The Impact of AI on Cybersecurity
13:05 Shadow AI and Its Risks
18:06 Establishing AI Policies and Training
21:19 Current Trends in Cybersecurity Risks
26:43 Regulatory Landscape and AI
27:02 Deepfakes and Their Implications
31:27 Vendor Management and Supply Chain Risks
39:20 The State of Cyber Insurance
Growth without Interruption. Get peace of mind. Stay Competitive. Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com
🔥New Exclusive Offers for our Listeners! 🔥
- 1. Remove Your Data Online Today! Try OPTERY Risk Free. Sign up here https://get.optery.com/DMauro-CyberCrimeJunkies
- 2. Or Turn it over to the Pros at DELETE ME and get 20% Off! Remove your data with 24/7 data broker monitoring. 🔥Sign up here and Get 20% off DELETE ME
- 3. 🔥Experience The Best AI Translation, Audio Reader & Voice Cloning! Try Eleven Labs Today risk free: https://try.elevenlabs.io/gla58o32c6hq
Dive Deeper:
🔗 Website: https://cybercrimejunkies.com
📰 Chaos Newsletter: https://open.substack.com/pub/chaosbrief
✅ LinkedIn: https://www.linkedin.com/in/daviddmauro/
📸 Instagram: https://www.instagram.com/cybercrimejunkies/
===========================================================
speaker-0 (00:04.874)
Have you ever noticed how we keep telling ourselves AI is magic and cyber risk is a future problem, right up until the moment it walks in the front door wearing a name badge? Because in this episode, we're not talking about Hollywood hackers or hoodie cliches. We're talking about how attacks actually happen now: quietly, through vendors you forgot you hired, through AI tools your employees are already using, through people who don't break in or hack. They log in. Why is it that everyone thinks they need to build Fort Knox when it comes to cybersecurity, when attackers are just looking for the unlocked side door? I sit down with globally respected field CISO Dan Elliott from Recorded Future, and you'll hear stories never published before: real-world examples of how threat actors actually think, hunt, and choose their targets.
And waiting for regulation or insurance to save you is the adult version of closing your eyes and believing the monster isn't standing right in front of you. Watch this all the way through. It's excellent, not because it's scary, but because once you see how it actually works, you can't unsee it. This is Cybercrime Junkies, and now the show.
speaker-0 (01:50.37)
Welcome everybody to Cybercrime Junkies. I am your host, David Mauro. In the studio today is Dan Elliott. Dan is a field CISO, Chief Information Security Officer, focused on Asia Pacific and Japan at Recorded Future, where he helps businesses and security leaders turn real-world threat intelligence into decisions that actually reduce risk. Recorded Future, for those who may not know, is a
globally respected threat intelligence company that basically connects the dots across the open web, the dark web, technical sources, and customer telemetry so teams can spot and prioritize threats faster. Dan, sir, welcome to the studio. Welcome back. Hope you've been well.
I've been doing fantastic. It's an absolute pleasure to be back. Thanks so much for having me.
Well, I am excited to hear what you're doing for Recorded Future. Share what you can, nothing confidential. So you left Zurich. Is your home abode still in Australia? You were in Canada when I first met you. Your home abode is still in Australia, but you're covering all of the Pacific.
That is correct. Still the Canadian in Australia who left Zurich. You know, I'll say the opportunities in one's career are often the ones you don't predict and don't expect. The opportunity came about with Recorded Future, and I think I've always been an evangelist for cyber threat intelligence. So it was, as I've referred to it, that ikigai, the Japanese concept of doing something you love, something the world needs.
speaker-1 (03:37.838)
And this is, it's just been a perfect fit. So I spend most of my time traveling and speaking to CISOs, not just recorded future clients, but CISOs across the region and around the globe to discuss where they're looking to go with their organization, the challenges they're having and where cyber threat intelligence sits within their program, within their ecosystem.
Right. Yeah, absolutely. And you do a lot of public speaking, for those who may not know Dan. If you're a fan of the show, you know Dan. Definitely check out all his work on LinkedIn. You publish a lot, and it's really good content. You also have very relatable stories; you're one of my favorite storytellers. And I always appreciate that, because I think all of this is about what resonates with people. And so being able to make it edutainment, right? Being able to educate people in a fun and relatable way is both an art and a skill.
Thank you very much. A very generous compliment. I have a lot of fun doing it. And I think that cyber gets the bad rap of being too technical, a concept that...
Or the Department of No, right? Like, you can't do that. We can't. We're not allowed to do that. We can't use that app. We can't do that.
speaker-1 (04:57.506)
That's it. And I think, I mean, you know, I love analogies. Most people want to be able to drive a car. Not everybody has to be a mechanic in order to do so. So I think our job is to make security like driving: to give people the opportunity to sit behind the wheel, to feel somewhat in control, without expecting all business leaders to act as mechanics.
So Dan, before we talk, I have a few questions about AI and the advancements in social engineering. Before we get there, let's talk about analogies. So, bear in the woods, one of my favorite analogies. And then you moved to Australia, and you shared in our last conversation the shark in the water. And I steal everything from you. I hope you know that. I use that all the time.
And I've gotten so much feedback from people who were like, the shark in the water, I understand what you're saying now. Can you share that for the listeners who may not have heard those prior episodes?
Sure.
It's really about the concept that you don't need to build Fort Knox in order to improve, as an organization or in protecting yourself personally online. Correct?
speaker-1 (06:17.006)
Correct. I think that, and I slowly build on all these analogies as I go along.
Are you going to come up with a new one now that you're at Recorded Future? Is there like a fish one, because it's the Pacific? You know, like a sushi one or something like that? Maybe we'll have to think about that. We'll noodle on it.
Well, we'll get there. I mean, the concept of both shark in the water and bear in the woods is the same. I learned the shark in the water when I came out here: we get a lot of locals who say there are certain times of day you obviously don't swim, because that's when sharks are out hunting. If you think of the shark as the hacker, as the criminal who's out there looking, there are certain places they hunt. But for those of us operating outside of that window, you don't have to be an Olympic swimmer.
And with the bear in the woods, you don't have to be an Olympic athlete to outrun the bear, but you have to do enough that you're not the closest one to the bear. And I always tell people now that, for me, understanding threat intelligence is taking the blindfold off and understanding your environment in the water. I do a lot now across Asia, and I steal a concept from my old world in the intelligence community, which was
this: along a nation's perimeter, if you're afraid of your neighbor attacking you, you can stick soldiers end to end, have everybody stand one beside the other, and use that as your guard post to tell where somebody is attacking from. It's not efficient. And when an attacker comes in, you've got one guy there. I think that's it.
speaker-0 (07:53.76)
It's not deep either, right? Yeah.
Threat intelligence, whether it's cyber threat intel or my old world, human intelligence, is the concept that I want to bring resources to the right places. It's not about more resources, it's about efficiency: bringing resources to the spots where attackers are most likely to be. You'll never be 100% accurate, but you can be more accurate and more efficient. So intelligence brings your resources to that point on the border where you're most at risk. And it's the same in the water:
in Australia here, they have shark sensors, so they can tell people where the sharks are most likely to be hunting, and you can avoid that.
And in the analogy of the armament on the border, right? You don't want the thin line all the way around your border; it's not efficient and it's not deep. You want to know where they're coming from, and when, so that you have all of your army, or a vast majority of it, in that location.
Yep. And I think threat intel often gets labeled as an add-on. The problem with that concept is it becomes: okay, I still have to have a soldier everywhere, but then I'll get extra soldiers in the spots where threat intel tells me to be. Whereas if you start from the intelligence, then you're building an efficient army that has depth at scale in the right places. It's the same way governments do it. You know, I've seen governments go through deficit reductions across multiple decades.
speaker-1 (09:21.034)
And the one thing they never scaled back on was intelligence. Whatever analogy you want to draw from, it's that idea that we need to build efficient security programs so that we can actually protect ourselves and our resources.
Yeah. And when we think about the shark in the water, I have a couple of stats for you. Take the mako shark. You know, if somebody sees fins, we don't know if it's a dolphin or a shark; we're not going to stick around and find out, we're going to swim to shore. The bottom line is we will never be able to outswim the shark, but we don't have to, and that's the good news of the story. We just have to outswim that guy, right? We just have to outswim somebody else who's closer to the shark. And the beauty of it is,
a mako shark can swim 45 miles an hour. A great white can swim, on average, around 20 to 25 miles an hour. The fastest human, an Olympic gold-medal swimmer, was measured at eight miles an hour. So me, I'm not going to outswim any of them, right? Just like online, I'm not going to defeat a hacker. But the point is, I need to put up enough resistance that they move on to the people still using "password" for their password,
"admin" for their password, et cetera. Right? And that's really a great lesson for a lot of people, because it's good news. It's not just fear, uncertainty, and doubt. Let's segue. No, go ahead.
I would say, I mean, so many people believe that they personally, or organizationally, don't have the maturity to handle cybersecurity. And I think that's a great point. You know, I have close friends down here who say they grew up being told: go swimming in the ocean, just don't be the furthest one out. And no matter how mature your organization is, you don't have to be the fastest, you don't have to be the most secure. You have to make your organization secure enough
speaker-1 (11:18.392)
that for your size, for your scale, for your industry, threat actors just don't want to bother with.
Right. Go swimming next to the guy who just finished a cheeseburger while he's walking out into the water. You're like, I'm swimming by this guy, because that shark's getting him first. That's good. So let's talk about AI, and not the AI that everyone's talking about, because the conversation around AI has gotten ridiculous. I almost don't like bringing it up. I was at a convention a couple of months back, and there was a
product, there was a booth, they had a freaking fork, and they're like, it's AI-infused, it can track all this stuff. I'm like, why would I want that? Stop adding AI into everything. It doesn't need to be in everything. It's ridiculous sometimes. Okay, I just wanted to say that.
Yeah, I need that fork. It gives me my data analytics, my business intelligence for what we're eating.
But there are a lot of new risks for those of us who like to spread awareness, right? We're like the Jerry Seinfeld of cybersecurity: did you ever notice this? Just observational humor. And what's shocking to me is, obviously, AI deepfakes are fascinating, because they're not that big yet in terms of the percentage of social engineering and phishing they're involved in, but it's growing. It's growing massively percentage-wise, even though it's still not that common a threat. That gap seems like it's going to widen, but I'm going to ask you about that in just a second. To me, so many of the small and midsize organizations have kind of just sat back and watched a little
speaker-0 (13:23.352)
while this generational technology, you know, takes over. And that inaction is a decision. And that's causing a very big rise in shadow AI. What are you seeing? What are you advising people on shadow AI?
So first, let's take it back a half step and describe what shadow AI is.
Shadow AI, to me, is the use of artificial intelligence that's ungoverned and unmanaged by your organization. And that's a broad swath of use cases.
Right.
ABC Company's executive team doesn't have an AI policy, because they haven't decided what to use, what departments would use it, or how to implement it. So somebody's using this tool in their department, somebody's using a free version of that, someone's using a different version of something else, and there's no training for any of them. That's where a large rise is coming from. There was a recent report, I don't know if it was from Axios or whomever.
speaker-0 (14:34.616)
But it surveyed a couple hundred thousand U.S. employees, and somewhere in the range of 70% or more of those people anonymously said, yeah, I use AI and my employer doesn't know.
Yeah, yeah, the latest was 77 percent.
speaker-0 (14:54.572)
That's what I saw. Yeah, that's what I saw. Yeah.
Employees are going to use it regardless. And it's not just that; it's this notion that, regardless of what my employer says, I'm going to be using these tools. I don't think that's necessarily wrong, and I'll draw that back a bit, because I don't think it's everybody's fault. Organizations ask for efficiency, they ask for scale, and, you know, I have yet to be in an organization in the public or private sector that doesn't want efficiency and scale.
That's profitability. So employees are driven down that road. And I think that AI is about automation. It's about getting to better answers more efficiently, or getting processes done in an automated, autonomous fashion. That's the ideal, and organizational leaders who believe "our organization's not using AI" are fooling themselves. Whether it's through
back channels like Slack or Microsoft tools or Google, they're all using AI; it's a question of what they're doing with it. And if we move on to more prominent tools, large language models like Claude and ChatGPT, those are more overt, and those are the ones being talked about, but AI is across our environments. And it becomes incumbent on an organization to look at the rules.
And I think there are three to five easy pieces that any organization, at any scale, can step into. But it's important for an organization to get out in front, because that 77%, I would even say, is on the low end. I think it's more likely that eight or nine out of 10 employees are going to say: I understand AI. My company wants me to be more efficient. If I can get this job done in six hours and be more profitable,
speaker-1 (16:53.558)
why am I working eight hours at a less profitable rate? So it can be malicious, but I think, more often than not, it's just employees.
I think more often than not it's either driven out of their own fear, because they're afraid of their competitors, whether it's internal competitors, right? They're up for a promotion against somebody and want to be as competitive as possible. Or it's well intended, right? I just want to do better at what I'm doing, and it's faster. You know, the results overall are outstanding. I use it every day. It's phenomenal. It's wonderful.
But there's a risk, right? Because I'm still surprised how many organizations I meet with, when we're doing these awareness trainings, that say: we don't have an AI policy. One environment was a healthcare environment, and here in the US we have HIPAA regulations, and they had nurses working at three o'clock in the morning uploading medical records to get the summaries done, because they don't have time and they just had to do it. Nobody had shown them:
you have to select a HIPAA-compliant platform so that it's sandboxed, and you have to anonymize your data, because there's still the minimum necessary rule and other rules within HIPAA. I know that's a specific example, but it's a good sample of how not having a policy, and not articulating and communicating it to employees, leaves you with people who are trying to do their best with good intent,
but you're not training them. You're not providing prompt training. You're not providing guardrails on how they can use this beast of a machine safely, right? Yeah.
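The anonymization step described here can be sketched in a few lines. This is a toy illustration only: the regex patterns, placeholder labels, and the `redact` helper are assumptions for this sketch, not a HIPAA compliance tool. Real Safe Harbor de-identification covers 18 identifier categories and should use a vetted product.

```python
import re

# Toy illustration: strip a few obvious identifiers before any LLM call.
# Real HIPAA de-identification requires far more than ad-hoc regexes.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt DOB 03/14/1961, SSN 123-45-6789, call 555-867-5309 or jdoe@mail.com."
print(redact(note))
# → Pt DOB [DATE], SSN [SSN], call [PHONE] or [EMAIL].
```

The point isn't the regexes themselves; it's that "anonymize before you upload" is a concrete, automatable step, not just a policy sentence.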
speaker-1 (18:39.618)
Yeah, I think in that example you hit probably three of my five rules.
What are they? Give me your five guidelines. Let's walk through those.
So I think as an organization, you have to establish your broad guardrails and that's your AI policies. What AI tools do we want to accept and what do we not want to accept? And once you get a broad AI policy in place to determine that, you have to train your employees as to why. Why do we have these guardrails? What is AI? Why do we want to use it? Why do we not want to use it?
And you can't blanket this. It isn't "all AI is allowed" or "no AI is allowed"; you can't get into all or nothing. And I think it's about making training specific, role-specific. I have seen organizations that do general AI training, or point their employees to some of the large organizations that offer AI training. That's okay as a starter, but organizationally we have to look at role-based training.
How would HR use AI tools? How would operations? How is our marketing department using AI tools? So they understand the guardrails for their role and the use cases for their role. And those are the three where I think you hit the nail on the head right there. That training for the nursing staff: they want to use the tool, but they don't necessarily understand which tools they're allowed to use, or why they are
speaker-1 (20:12.938)
able to use them from a regulatory perspective, and then they don't...
Have the governance. From a governance stance, yeah.
Yeah, and then there has to be an ability to flow with it as an organization, whether from a security position, IT, or leadership, so that we can monitor and move with it. I mean, I see security moving in the direction of AI-enabled security posture, autonomous threat posture, but you can't do that unless you're constantly growing, constantly iterating
through it. And I think that's a piece that's missing in a lot of risk doctrine. We do one-and-done: you write a policy, you walk away, and it doesn't have to change for another five years. When it comes to AI policy, this is a constantly iterative process. I think that's really important as well.
So of the five, we covered three. The fourth is security's visibility into the results of their use of it, right? And then what's the fifth? How would you state the fifth?
speaker-1 (21:24.066)
That iteration. So...
Makes sense.
You have to iterate. You have to not just have processes for overall governance, for training, for role-based rules, and for security; you also have to have a process to iterate on all four. And it has to work alongside everybody else's processes.
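The five guardrails discussed here (policy, general training, role-based rules, security visibility, iteration) could be tracked in something as simple as a checklist structure. A minimal sketch, where the field names, the 90-day review threshold, and the `gaps` helper are all illustrative assumptions, not anything stated in the conversation:

```python
from dataclasses import dataclass, field

# Illustrative tracker for the five AI-governance guardrails discussed above.
@dataclass
class AIGovernanceChecklist:
    policy_approved: bool = False        # 1. broad AI policy / guardrails
    general_training_done: bool = False  # 2. train employees on the "why"
    role_based_rules: dict = field(default_factory=dict)  # 3. per-role use cases
    security_monitoring: bool = False    # 4. visibility into actual AI use
    last_review_days_ago: int = 9999     # 5. iterate; never one-and-done

    def gaps(self) -> list:
        """List which guardrails are still missing or stale."""
        g = []
        if not self.policy_approved:
            g.append("no AI policy")
        if not self.general_training_done:
            g.append("no general training")
        if not self.role_based_rules:
            g.append("no role-based rules")
        if not self.security_monitoring:
            g.append("no security visibility")
        if self.last_review_days_ago > 90:  # assumed quarterly review cadence
            g.append("policy review overdue")
        return g

c = AIGovernanceChecklist(policy_approved=True, last_review_days_ago=30)
print(c.gaps())
# → ['no general training', 'no role-based rules', 'no security visibility']
```

The value of a structure like this is the fifth guardrail: re-running the check on a cadence is what turns a one-and-done policy into an iterative process.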
That's absolutely wonderful. That's fantastic. Yeah, that's very helpful, and it's exactly what I'm seeing; I just hadn't articulated it in those five guardrails. So once again, great way of looking at it. What are you seeing as you guide organizations in your new role? What are the latest concerns?
Trump.
speaker-0 (22:11.756)
Like, what has changed this year from last? Is it very similar, or is there a whole group of people you're seeing who are like, wow, polymorphic ransomware is writing its own code now, I can't sleep at night? Is there anything different you're seeing or hearing?
Great question. So right now I'm speaking to a lot of CISOs and CIOs, and at those levels it's business risk. The technical risks, the security concerns, are important, and they're constantly changing; there are new threat perspectives, new tactics and techniques. But I'm seeing a lot of business risk: concerns over
efficiency and automation, concerns over needing to downsize workforce or reduce cost. And at the same time, I mean, those things are driven by financial concerns, and there are global shifts in that. But there's also the concern of: I need to downsize my workforce or my overall cost, I need to improve automation, but I can't reduce security, because these threats are shifting so quickly. Right?
I still have to watch the door. I can't take the bouncer off the door; I need to watch what's going on. So I think it's a perfect recipe for AI, for automation at scale, when you start talking to organizations that are extremely concerned about their workforce, very concerned about being able to move more efficiently and not just continuing to add headcount and tools. So we're kind of at a crossroads
in those spaces.
speaker-0 (23:56.014)
What do you see from the regulatory landscape? I mean, I'm here in the US, where we can basically do whatever the heck we want. We have no regulations on anything. It's the Wild West; we will slap AI on anything, a.k.a. the fork. We have a couple of different states trying to regulate AI use, which would be very difficult to manage, because how would a company do that?
Right? Because we don't operate in one state. None of us do; even small businesses don't. So the federal government just recently kind of said, hey, hold off on that, we're going to be issuing federal guidelines. And we do have NIST, which came out with its AI Risk Management Framework in 2023. That was helpful. It's a good guide; vague, but still a good guide. And obviously the EU
has its AI Act, which is pretty restrictive, but they lead the way in privacy. Is there anything in the region you're covering, or anything you see out there, that really resonates with you? Is there somebody who's really doing it well, where you look at it and say, we should really be
modeling ourselves after, say, the Netherlands' version of it? That's what I'm asking. I just phrased it terribly.
You know, we love the Dutch, I'll say.
speaker-0 (25:33.236)
Yeah, always do. They're pretty good.
Yeah, you know, if you ever find yourself in a location prone to flooding, that's who you'd choose. I think that across this region, at least, every country is doing it slightly differently. Singapore is doing some wonderful things. Australia is really trying to reach into that space. Japan, I was just out in Japan last week,
speaking to some of the regulators there, and there's movement in that direction. I don't think anybody has hit the nail on the head. Part of the problem is that regulation typically is not a leading indicator. Australia is a bit different in that respect: Australia has put regulation out front, and industry follows to some degree. In the US, Canada, even the EU, regulation was a lagging indicator
of where industry was, bringing up the low end of that bell curve. I'll say it this way: I hope that regulation is not the line we're following, because that's too late. By the time government passes it, it's history.
speaker-1 (26:47.31)
That's what I... yeah. To me, I look at it the same way.
I mean, I'm reporting from the States, so we're the kids in the back of the class. You know, as long as we can bring our guns and buy our AI tools, we're fine. We're not the poster children for leading the way here. I was just curious, because you see a lot more than I see. So I was curious whether there's a model out there where you're like, wow, this one's really good.
You know, I-
Just not yet, right? It's just not there yet.
And I don't think so. The industry I talk to most frequently in this space is banking. To be fair, banks and financial regulators are out in front here. They have to be. And they have the money to do it, and they have the data that they're concerned about. So some of the banks in Australia are doing phenomenal things. In Singapore, the banking system and the regulators there are amazing. And I think
speaker-0 (27:25.944)
Makes sense.
speaker-1 (27:51.212)
that's really where a lot of other industries are looking: what are the banks doing, what are the financial regulators doing, and we'll follow behind. I think if we look at AI through a privacy lens, that's the starting point. Where are we concerned about our data going? Let's build backwards from that.
Absolutely. Let's talk about AI deepfakes for a little bit, because I'm so fascinated by them. I've seen them out in the wild, believe it or not. I've come across customers and clients, even small businesses, who are experiencing it while they're trying to hire, in the HR role, in the recruiting role, where they're interviewing developers.
You know, the developers don't have to live in their town. So this person is pretending to be, you know, Carol from Iowa, and in fact it's, you know, Claude from Europe, you know what I mean? But they're deepfaked. They've written the resume to be this perfect resume; it's too perfect. And then the reason they're getting caught is their interview skills aren't that good, right? But the
person showing up on video is, they said, just about undetectable by the human eye. They sound right. They look right. There's nothing weird about the background. Are CISOs talking about that globally? Are they concerned about this? Because it does fly in the face of what we believe to be true and not true.
We've all kind of grown up believing seeing is believing. Our court systems are based on eyewitness testimony; that is the truth. And yet now it's not. We have to verify it.
speaker-1 (29:49.686)
Yeah, it's that trust-but-verify. You know, the CISOs I'm talking to now are starting to look at this from the point of what they can do about it. They accept that it's there; it's what do I do next. We used to get to a point in this discussion of, move your hand in front of your face, there are tricks, right? The AI models couldn't beat the tricks. Now they're getting so well designed, so advanced, so fast that the tricks don't work.
Mm-hmm
speaker-1 (30:19.392)
For most of the CISOs I talk to, it's about working with HR, working with your other departments, to understand the risks. Being aware is half of it, right? If you have a notion that it could be there, then your brain, that reticular activating system, will start to key on those things. So I don't think there's a whole lot of technical tooling that's going to be put in place to perfect this. It's about
the interviewer, the individual, being better placed. And I've seen it myself: some of the organizations I've worked with have experienced this during the HR process, where we see certain state actors using it as an access point to try to get data down the road, get access down the road. I think it's only going to increase. Most of what I see now is not full video interviews. It's a case where you get somebody who jumps on video for 10 seconds,
looks convincing, appears to be who they say they are, and then they're like, oh, my camera's acting up, or the internet's not good enough, let me turn my video off. Audio deepfakes are still a lot easier than video deepfakes, so that's the tendency I'm seeing right now, and that's one of those hinge points: did you actually have a video interview with, you know, Carol from Iowa, or did she suddenly go off video three seconds into your call?
Right. So we're getting people used to those concepts. CISOs are definitely aware of it; I just don't think we've got security tools yet that are moving efficiently in that space. So it's about awareness: understanding where the threats are in my industry, in my region, and then making my teams, in all other areas of the business, aware of that potential risk.
That's excellent. Let me ask you about vendor management and vendor risk. A lot of the data breaches this year have been supply chain, right? It's what we saw years ago with the big Target breach here. It wasn't Target itself, it was the HVAC vendor, right? It's that flow-down: all of your subcontractors need to have your level, or at least a strong level, of resilience. You know, how should security teams kind of
speaker-0 (32:42.574)
assess vendors from a risk perspective? Are they having those conversations? Are they talking to you about that?
So, man, another great question. Third party risks, supply chain risks.
Thanks, I used AI to come up with it.
I was gonna say this sounds like time, yeah, you know.
Sounds like a smarter question than Mauro could have thought of. You're like, he doesn't really come up with this kind of question usually. So I'm just teasing.
speaker-1 (33:10.312)
You hit me with them. It's early morning.
You know, it's so forget about the time to.
speaker-1 (33:18.85)
So supply chain risk is at the top of everybody's mind. A lot of it, yeah. The challenge that I'm seeing right now, regardless of whether it's supply chain from a tooling perspective, a vendor tool, or a vendor who has access to your environment or access to your people: we're seeing most threats don't break in. They walk in. They access through
Yeah, that's what I thought.
speaker-1 (33:47.502)
credentials they've taken. With supply chain risk right now, the real challenge is getting ahead of it. Organizations that I saw last year, and I saw this when I was in the insurance industry too, were starting too late to look at their third party risk. And then look at the scope and scale: we don't know who all of our vendors are. We don't know who we're doing business with. And we don't necessarily know what access they have. Now let's
pull all this together in a big project and see who we're going to assess for risk on an annual, biannual, or once-every-three-years basis. And the system falls apart in year two. So most of what I'm chatting with CISOs about now is how do you maintain that? You've started a third party or supply chain risk program. How do you maintain it in year two and year three, when those organizations you have to assess
double and triple, and your insurance company is also asking for more? So you have built a third party risk program, and by year two your broker, your insurer, is asking for double the information, and you collapse under your own weight. So I think this is another one of those spaces: how can I bring in AI tooling, even if that adds its own risk, so that I can
efficiently manage my third party risk while also assessing the risk of that AI tool. It's a complex problem.
speaker-0 (35:26.712)
Yeah, is that one of the things you advise organizations on, like really getting ahead of your supply chain? Because I see so many organizations, I mean, I'm just telling you, when I ask, do you know who all of your vendors are, most leaders are like, I have no freaking idea.
Like, I know we have five that do this, we have three or four that do this, but everything else? Let alone what access they would have or what the state of their security is, right? And that's frightening.
I think, yeah, these are problems we see across organizations of all sizes and maturity levels, and there's shadow procurement as well. You know, marketing needs to move faster, they want to run this project, so they reach out and find the vendor they need. Finance needs a tool set and IT is taking too long to approve it, so they just go out and pay for it on a company credit card. And I think
getting ahead of your supply chain is, again, about making all business units aware of the risk. Once they're all aware of it, then you have to build an efficient process for managing and risk-assessing each of those vendors, everyone in your supply chain. Yeah, I spend a lot of time speaking with CISOs and CROs, with risk leaders, about supply chain risk and how you can efficiently, I won't even say get in front of it,
keep up with it. I see very few organizations that are in front of their supply chain risk. It's about how can I use automation to keep up with the growing risk from my supply chain.
speaker-0 (37:21.304)
That's so interesting, because there's really a lot of layers of shadow, right? There's the old adage of shadow IT, somebody bringing in a laptop, bring-your-own-device, right, being connected to the organization's network. Now there's shadow AI, and shadow vendors is a really good phrase too, right? Because
that's so true. Marketing will go and engage there, and somebody else, procurement or shipping or logistics, will go and engage here. And when you think about it, well, some of these have more access than others. Who are they in the first place? Keeping track of all of that is something AI could actually be very beneficial for, to help out an organization. Well,
I think it's also about determining who your vendors are, where they are, and what access they have. Then it's also about assessing risk. When I was in the insurance industry, one of the things we were looking at was the risk level, and a lot of organizations were looking at the size of their spend
as the initial lens. So, which organization in our supply chain are we spending the most on? Because they're likely the riskiest. And the problem with that is it's a false sense, a false narrative. Just because I'm spending the most on Amazon doesn't make Amazon my riskiest vendor; it often ends up being the reverse order. That's where the AI tooling I'm seeing in third party risk works really well. We're bringing out some third party risk tools that are about
how can I efficiently assess my risk out to the nth party? Who are my vendors working with, pushing risk down the line? And I mean, yes, there's always going to be risk in the big firewall vendors, the big storage vendors, the cloud providers, but at scale the problem is all those little vendors still sitting in my environment. I need to know who they're doing business with, because that's the risk I'm owning now. You know, that HVAC company,
speaker-1 (39:27.884)
I still tell the story of Target because it's a great one. Target was not thinking that a small regional HVAC vendor was going to make them the story it did. I think we're now, what is it, 10 years later? It's amazing how that damn time flies. They were probably thinking of their largest vendors, not their small regional ones.
Yeah, that's exactly right. Well, it's so interesting. Let me ask you, I know you're not in insurance anymore, but I heard the other day on an IT podcast or something, because I'm a dork about insurance claims, that at least here in the US, cyber insurance had paid out right around 50% of claims. And that was it. Because of
the way organizations are doing their applications, right? They're just searching for the lowest premium. They're like, "Do you have MFA?" "Yeah, we have it on that one thing, but we'll say we do," right? And they know that's not what's being asked. The question is, do you have MFA on all of your systems, right? Things like that. It's a material statement when you're doing your application. Does that come up in any of your conversations? I imagine it does to some degree.
Are there any significant findings or events you've come across recently? I mean, the industry boomed for a while, and then so many of them went out of business or got out of that line of writing, for obvious reasons. What are you seeing from where you sit today compared to where you were?
So I get plausible deniability now that I sit outside the industry. It's a wonderful place to be. Cyber insurance gets a bad rap, okay? It is there for a reason, but it shouldn't be the front line; it should be the last line. And I think for most of the CISOs I have conversations with now, their touch point with cyber insurance is once a year,
speaker-1 (41:42.22)
when risk or insurance leaders come to them to fill out an application or sit through an interview. And because it's such a small touch point in their overall business responsibility, there's often a lack of understanding of, you know, what can we do? What are we doing? And I hear that insurance doesn't pay out, that it's difficult. I don't think insurers have it completely figured out as to what the risks are.
I don't think the questionnaire model is an efficient way to do things, because I remember I would see the questions: do you use firewalls? Do you use MFA? Like, what? My favorite, now that I sit on this side of the fence, is still: do you use cyber threat intelligence? It's like, okay, not all threat intelligence is the same. So I think the problem is that those questionnaires,
15 or 20 pages long, still just scratch the surface of the risks, and insurers can just add another question when they see a breach. Let's add this question in because that's a risk, without understanding what it means. So you have insurance people speaking at this angle and CISOs speaking at this angle, and neither understands the other. So it's definitely a problem. What I speak to insurers about now, and I
get the joy of speaking to them externally now, is that they need to manage their book of business the way the average organization manages its supply chain risk. How are you, as an insurer, looking at the companies you insure any differently than company ABC looks at all of their vendors, all of their supply chain? And I think there's a shift coming in the model. If ABC Corp
Great analogy.
speaker-1 (43:36.918)
sent out 15-page questionnaires to all of their supply chain, they would expect to get inaccurate information back, right? We need to get to a point where everybody's expecting more realistic answers, and I think the payouts will match that. That's the direction, I think.
That's great. Well, as we wrap up, my friend, what is on the horizon for you other than the holidays with your family?
Yeah, it is. I mean, 2026 is a big year. I think that cyber threat intelligence, for me, is a passion project. What I'm getting the chance to do now is really talk to leaders about CTI at the core, so cyber threat intelligence at the core of the program. And that's probably the biggest shift. I'm helping leaders position that, understand that, to move faster, to really point your defenses. When we go back to that,
you know, soldiers at the border: how can we use threat intelligence, how can we use AI tools and autonomous tools, in a way that positions your risk program better? That's the stuff I love. I enjoy looking at what threat actors are doing, but that is really my edutainment piece, right? The threat actor shifts are always unique and novel and fun.
And they have cool names. They do, yeah, they have cool names. We could make good characters for a carousel. That's always fun.
speaker-1 (45:10.19)
But for me, it comes back to efficiency. How can we as an organization, and with our entire supply chain, work more efficiently to be more secure and scale? That's really what I'm getting hyped up about now. It's going to be a great year coming up.
That's fantastic. Well, we will watch for you, and we will have links to your contact information. I encourage everybody to follow you on LinkedIn and connect with you. You always put out really interesting, great stories, things that business leaders can really get. It's always in plain terms, everybody can understand it, it's not overly technical at all. So I always appreciate that.
Dumbing it down for people like me always helps. So I appreciate all you do, and I wish you and your family wonderful holidays, my friend. I mean that.
Happy holidays to you, happy holidays to your listeners, and thanks so much for having me. I love what I do, and I think that that's at the core of it. It's a lot easier to do good work when you love what you're doing.
Absolutely. All right. Thank you so much, buddy. We'll see you.
TOPICS: ai explained,true crime documentary,how to hack,Information Security,information technology,cyber security,cybersecurity,cybersecurity roadmap,true crime stories,true crime,hacking,ransom,AI,how to access dark web,hacker,gen ai,social engineering,phishing,ai tools,ai for beginners,ai ethics,ai video,small business cyber security,AI tools,artificial intelligence,cyber news,cyber risk management,vendor due diligence,ai risks