Cyber Crime Junkies
Socializing Cybersecurity. Translating Cyber into business terms. Newest AI, Social Engineering and Ransomware Attack Insights to Protect Businesses and Reduce Risk. Latest Cyber News from the Dark Web, research and insider info. Interviews with Global Technology Leaders, sharing True Cyber Crime stories and advice on how to manage cyber risk.
Find all content at www.CyberCrimeJunkies.com and videos on YouTube @CyberCrimeJunkiesPodcast
Cyber Crime Junkies
Phone Rings. It's A Social Engineer Calling.
Video Episode: https://youtu.be/oH7FpvHOhVA
This is the story of Matt Smallman, author of “Unlock Your Call Centre”, about how call centers can be security risks. Our discussion gives exclusive insight into how biometrics fail in security and the problems with voice authentication, and explores military tactics and biometrics.
We explore how to balance convenience with security, how to protect call centers from social engineering and ways to reduce cyber security risks in call centers without compromising the customer experience.
Have a Guest idea or Story for us to Cover? You can now text our Podcast Studio direct. Text direct (904) 867-446
Get peace of mind. Get Competitive-Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com
Imagine setting yourself apart from the competition because your organization is always secure, always available, and always ahead of the curve. That’s NetGain Technologies – your total one source for cybersecurity, IT support, and technology planning.
🎧 Subscribe now http://www.youtube.com/@cybercrimejunkiespodcast and never miss an episode!
Follow Us:
🔗 Website: https://cybercrimejunkies.com
📱 X/Twitter: https://x.com/CybercrimeJunky
📸 Instagram: https://www.instagram.com/cybercrimejunkies/
Want to help us out? Leave us a 5-Star review on Apple Podcast Reviews.
Listen to Our Podcast:
🎙️ Apple Podcasts: https://podcasts.apple.com/us/podcast/cyber-crime-junkies/id1633932941
🎙️ Spotify: https://open.spotify.com/show/5y4U2v51gztlenr8TJ2LJs?si=537680ec262545b3
🎙️ Google Podcasts: http://www.youtube.com/@cybercrimejunkiespodcast
Join the Conversation: 💬 Leave your comments and questions. TEXT THE NUMBER ABOVE. We'd love to hear your thoughts and suggestions for future episodes!
Matt Smallman Transcript
Call Center Security Risks
Video: https://youtu.be/oH7FpvHOhVA
Summary
This is the story of Matt Smallman, author of “Unlock Your Call Centre”, about how call centers can be security risks. Our discussion gives exclusive insight into how biometrics fail in security and the problems with voice authentication, and explores military tactics and biometrics. We explore how to balance convenience with security, how to protect call centers from social engineering, and ways to reduce cyber security risks in call centers without compromising the customer experience.
TAGS: call center security risks, how to balance convenience with security, how to protect call centers from social engineering, how biometrics fail in security, ways to reduce cyber security risks in call centers, how call centers can be security risks, cyber security risks in call centers, how call centers balance security with customer experience, problems with voice authentication, is voice authentication accurate, latest methods to authenticate users, latest ways deep fakes can be detected, ways deep fakes can be detected, how can deep fakes be detected, new ways to reduce cyber security risks in call centers, how to improve customer service desk experience, what is deep fake detection technology, what is deep fake detection, how incident response plans need to be updated
Takeaways
• Call centers face security risks that can compromise customer data and experience.
• Balancing cybersecurity and convenience is crucial to ensure effective security measures without hindering business operations.
• Implementing modern security technologies, such as passive and probabilistic authentication, can enhance call center security.
• Business processes play a significant role in call center security and should be optimized to improve efficiency and customer experience.
• Anomaly detection is an effective method for identifying security threats and anomalies in behavior.
• Addressing security in the call center requires a comprehensive approach that considers usability, efficiency, and reputation.
• Implementing security solutions involves articulating the problem, understanding available methods, and developing a strategy.
• The threat of synthetic speech poses a significant risk in automating and scaling social engineering attacks.
• The erosion of traditional security measures, such as social security numbers and passwords, necessitates the adoption of more secure authentication methods.
• The rise of deepfake technologies presents challenges for media organizations in verifying information.
• Planning for breaches and incident response is crucial, and organizations should have contingency plans in place.
• Maintaining a rigorous security posture is essential to mitigate risks and protect against cyber threats.
Chapters
00:00 Background
05:06 The Role of Call Centers in Customer Service
08:40 Balancing Cybersecurity and Convenience
12:17 Fixing Processes and Implementing Platforms
15:08 The Role of Technology in Call Center Security
23:59 The Importance of Business Processes in Security
26:13 Passive and Probabilistic Authentication
32:02 The Threat of Synthetic Speech
36:03 The Erosion of Traditional Security Measures
44:35 Planning for Breaches and Incident Response
D. Mauro (00:02.909)
All right, well, welcome everybody to Cyber Crime Junkies. I'm your host, David Mauro. And in the studio today, we have my illustrious cohost, sometimes on time, sometimes not on time, sometimes checking email, but always positive, Mr. Mark Mosher. Marco, how are you today?
Mark Mosher (00:21.405)
Hahaha!
Mark Mosher (00:25.478)
Oh, wonderful, David. Man, we've got a full house today. This is exciting. Tell us who's in the studio with us today.
D. Mauro (00:31.905)
We do. We have Johnny Mack, John McLaughlin of CTG Security Group. Johnny, welcome to the studio.
John McLaughlin (00:43.102)
Thank you, David. Thank you, Mark. Great to be here.
D. Mauro (00:46.269)
Yeah, Johnny Mac's actually been with us since the inaugural couple episodes on this podcast. So it's been fun. And we are joined from across the pond by our friend and colleague, Mr. Matt Smallman. Today we're going to talk about how call centers can be security risks and new ways to reduce security risks in call centers. Matt is a former British Army military leader who served during the war on terror. He developed techniques and tactics to help soldiers prevent terrorist attacks in Iraq and Afghanistan. Since then, he's become a biometrics expert and creator of a call center security model, on a mission to eliminate frustrating, time-consuming and pointless security processes from call centers. He's co-author of the annual Intelligent Authentication and Fraud Prevention Intelliview in his work through Opus Research. And he is the author of the book Unlock Your Call Center: A Proven Way to Upgrade Security, Efficiency and Caller Experience.
Matt, welcome to the studio.
Matt Smallman (02:12.302)
Thanks David, thanks for the introduction, you've obviously done your research.
D. Mauro (02:15.637)
Yeah, no worries. I'm happy to...
Mark Mosher (02:17.803)
And for a small fee, he can introduce you like that on virtual meetings at any given
Matt Smallman (02:20.806)
Yeah, I need I'm just gonna record that and use that elsewhere. Yeah, brilliant
D. Mauro (02:21.233)
Yeah, absolutely. Yeah, you just call us up and we'll just be like, and here's Matt. So tell us about this topic. This is something that is very interesting, and it's relevant because it's in the news. We've seen large data breaches involving social engineering or MFA fatigue or certain other tactics, techniques and processes that have targeted call centers, help desks, right? Walk us through a little bit about your background first, and what drove you to say, hey, when I grow up kids, I wanna focus on call center security. I mean, firemen, policemen, call center security.
Matt Smallman (02:53.887)
Yeah.
John McLaughlin (03:11.326)
security.
Matt Smallman (03:14.103)
Yeah.
D. Mauro (03:16.297)
I'm just teasing. I had no idea how I was going to wind up here. So walk us through your story line.
Matt Smallman (03:20.798)
Yeah. Yeah, I mean, you started off with the military story and that was an interesting, exciting, challenging time of my life, but it's not really a career for those who want to bring up families or have a bit more stability. And one of the things I really loved was, particularly those kind of episodes you talked about, were how we use our understanding of what the enemy is trying to do to us.
D. Mauro (03:39.631)
Certainly.
Matt Smallman (03:50.274)
how we get the best that our scientists have to offer and how we put that together on the ground in a set of processes and tactics that are not just technology on its own or not just tactics on their own, but they're a kind of combination of that being used by smart people. And I really loved doing that. The challenge with it is like, the military is an enormous machine and to do well in the military, you need effectively larger and larger scale leadership jobs.
And that just wasn't, that just didn't really float my boat. So I really loved working with small specialist teams with scientists figuring out the best way to put all this stuff to use. Uh, so I obviously wasn't going to be for me forever. Uh, and I left and I found myself as, as many people do in, in financial services and wandering around different elements of financial services and looking at payment processes and backend systems and then
One day I found myself, uh, in a contact center and I come back to this for like, on this podcast day, we've got two roles. Yeah. We're, we're users of these services. Like I guarantee each one of us has used one of these services in the last three or four weeks, if not in the last week, if not in the last day or two, and, uh, and we all have these terrible experiences with them and we all go and tell people about how terrible these experiences are. And a lot of the terribleness of that experience is not.
D. Mauro (05:00.714)
Absolutely. Right.
D. Mauro (05:06.435)
Right.
Matt Smallman (05:12.906)
the people on the end of that phone because they're doing the best job they can. But it is like the place where stuff goes to die in an enterprise. Yeah. It's like when all the processes are broken, when we haven't bothered to integrate systems, we'll just put them on screens in front of humans and we'll make them fix it all. So they've got a really hard job. And my desk was on a contact center floor and I'm like, I'm a bright guy. I kind of know how to do a lot of things, but I cannot do their job. I could not do their job day in, day out.
D. Mauro (05:17.571)
Right.
John McLaughlin (05:21.95)
Thank you.
Matt Smallman (05:41.41)
because it's just really, really challenging. They are the kind of the glue that holds these processes together. And one big part of that is the identification, authentication, security process. And it's just kind of a joke. Like your social security number, because no one knows what your social security number is. That lets you into a bank account, does it? Really? Is that really a good security measure? And...
over time, it became really obvious that one of our biggest constraints to improving the performance of our call centres at this bank I worked for at the time was that security process; it was holding back the quality of that human to human conversation. It's not like some of these things haven't subsequently been automated and put into mobile apps so you can do it all yourself. But regardless of what you can do yourself, at some point you need to come back and you'll want to speak to a human. It may be that you could do it yourself, but you choose not to because of its
emotional importance, the urgency, the need for reassurance around that. And I think there'll always be a role for humans in customer service, regardless of how much we automate stuff. And therefore if we want them to do those kind of higher level tasks, provide reassurance, provide trust, we need to get out the process stuff out of the way. And particularly those processes that really don't serve anyone. And that's where I found myself in the, in the security space and
rather naively, I think probably almost 15 years ago now, I was looking at kind of a portfolio of projects, how do we improve this productivity, this whole organisation? And one of the things was, we want a security experience that feels like you're going to a bank manager of old, where you walk in the branch and they recognise you as their valued customer and they don't make you do a dance and demonstrate your, measure your inside leg and...
your gait and all those kinds of things, they just get on and serve you. We kind of get security out of the way of the experience we're trying to deliver. So I was looking for solutions to that. And I found myself in the world of voice biometrics, which very naively at the time I thought, well, the science of this has existed for like 50 years. Surely this is a solved problem.
D. Mauro (07:45.697)
Yeah, that's what I wanted to ask you about. Yeah. So what you recognized, it seems to me, like you recognized the inherent balance or challenge of cybersecurity versus convenience. Right. We need cybersecurity measures or layers or processes. Right. Because otherwise, you know.
bad actors will take advantage and socially engineer people, cause financial reputational harm, of course. But on the other side, when you turn that dial too tight, then it becomes, it interferes with business operations and the customer experience. So it sounds like to me that you recognized that inherent struggle. And then you found through working with scientists and technologists,
certain processes, platforms, things like that, that can smooth out the experience.
Matt Smallman (08:47.902)
Yeah, that's what I think. So I think when we compare the telephone channel to some of the other technologies that we deal with every day, we just got to remember how low a bandwidth channel that is. Yeah, like I've just I've just speaking. So if I take if I take speech and I turn it into text, that's just a really low rate of data transmission. Now, there is a lot of other stuff associated with the voice and the device and all the rest of it. But historically, we've just been using that really low, low bandwidth, low concurrency
D. Mauro (08:55.14)
Mm-hmm.
D. Mauro (08:59.463)
Mm-hmm.
Matt Smallman (09:17.346)
communication method to try and get through security processes. And there are two things that happen. One is security processes that exist because people think they must have a security process and everyone else is doing it. And they don't really understand the risks they're trying to mitigate. And that's where we end up with the, please tell me your date of birth and your mother's maiden name for data privacy reasons. And we haven't really done the risk assessment as to what we're trying to mitigate. But yes, there are bad actors out there. And they will seek to exploit those things. And as we've
D. Mauro (09:28.237)
Right.
Matt Smallman (09:47.146)
heading right into this kind of cliff edge or brick wall of real-time payments right now, as things become more and more real-time, the value of compromising that channel increases and therefore we recognise that we need to increase the security of it. But what we've historically done in the call centre channel is effectively trade off usability for security. Like we've made ourselves a little bit more secure but significantly less usable.
John McLaughlin (10:10.47)
Thanks.
Matt Smallman (10:11.99)
So now you have to get a one-time text, which we know is a little bit more secure because it has a little bit more work, but it's not a huge amount more work for someone to compromise that. But from a process perspective...
D. Mauro (10:20.677)
No, and I didn't mean to interrupt you, but it's not necessarily that much more secure either, right? Like it's not a very, like...
Matt Smallman (10:24.362)
Yeah, no, but it gives you a security bump the day you implement it, because the fraudsters have to iterate and adapt to it. But actually, you end up trending back to where you were. It's not significantly harder. But the business process is harder for those genuine callers, and that's like 99.5% of callers, if not more, I think even in our highest risk industries.
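A minimal sketch of the "one-time text" step Matt is describing, assuming a generic server-side code store rather than any particular vendor's API. The point he makes is that the code itself is cheap to issue and verify, which is why the security bump is temporary once fraudsters adapt by socially engineering the customer into reading the code out.

```python
import hashlib
import hmac
import secrets
import time

def issue_otp(store, user_id, ttl_seconds=300):
    """Generate a 6-digit one-time code and remember its hash and expiry.
    (A real deployment would also rate-limit and bind the code to a session.)"""
    code = f"{secrets.randbelow(1_000_000):06d}"
    store[user_id] = (hashlib.sha256(code.encode()).hexdigest(),
                      time.time() + ttl_seconds)
    return code  # sent to the customer's phone out of band

def verify_otp(store, user_id, code):
    """Single-use check: the stored hash is removed whether or not it matches."""
    entry = store.pop(user_id, None)
    if entry is None or time.time() > entry[1]:
        return False
    return hmac.compare_digest(entry[0], hashlib.sha256(code.encode()).hexdigest())

store = {}
sent = issue_otp(store, "customer-123")
print(verify_otp(store, "customer-123", sent))  # True, and the code is now spent
```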
Mark Mosher (10:27.745)
Oh.
John McLaughlin (10:28.021)
Thank you.
D. Mauro (10:32.392)
Right.
Mark Mosher (10:33.407)
Right.
D. Mauro (10:48.693)
Yeah, they just want to get there right. They just want to ask a question about their balance, or they just want to, you know, yeah, they just want some help, right? Yeah.
Matt Smallman (10:56.47)
Exactly. Yeah. What's the risk with that? Yeah. So we find ourselves in this bizarre situation where we've just made it harder for everyone in order to make it a little harder for some people who've just adapted around it. So it's kind of this never ending cycle like how like, you can build a security process that's impossible to like a knowledge-based authentication or even kind of a light two factor like SMS, you can make that harder and harder and harder and ultimately...
Mark Mosher (10:59.542)
Right, right.
Matt Smallman (11:22.998)
you start losing more and more customers. And the organisation I was with was losing maybe 15 to 20% of callers because of the nature of its business. It was dealing with people who had left their long-term savings; they had moved to other countries and they just used us to keep some savings in a currency that they were comfortable with. And invariably when they needed to speak to us, they didn't have online usernames and passwords because they just weren't interacting with us very frequently. So when they called us, we'd end up in this whole bunch of
D. Mauro (11:25.742)
Hmm.
D. Mauro (11:32.653)
Wow, that high.
John McLaughlin (11:48.622)
Thank you.
Matt Smallman (11:52.262)
situations where we know they're the real customer because they feel like the real customer, the agent wants to serve them, but they didn't jump through our security hoop. So that's where we really ended up back at this bind. What else is inherent in that interaction that we can use to provide more security that we can get out of the way of the call center agent?
D. Mauro (12:17.33)
So, is it a, in your experience, have you found that it is a process that needs fixed? Or is it a platform that needs to be implemented? Is it both? It probably sounds like it's a little bit of both.
Matt Smallman (12:33.078)
Well, I think, I mean, there's a certain amount of organisational inertia that exists around these things. Yep. It's always been there. And coming from your cyber security background, you'd be surprised at how poorly encoded and developed some of these processes actually are. Like, sometimes you'll go to an organisation and they have to challenge people for four pieces of demographic information. Like we use something called like a
D. Mauro (12:40.777)
Hmm.
John McLaughlin (12:51.257)
Thank you.
Matt Smallman (13:00.478)
a two plus two system, where they have two sets of questions and they basically have to pick two questions from set one and ask them, like your mother's maiden name and your date of birth, and then from set two they might pick, like, what type of account you hold and your postcode or something like that. Something ridiculously easy to compromise from mail theft or from online breaches. But they literally have a list, yeah, on a CRM screen in front of them. So it's the, it's the
D. Mauro (13:21.185)
Mm-hmm.
D. Mauro (13:29.198)
Right.
Matt Smallman (13:29.25)
person who has to conform with that process. So they are picking the information. And what we see in some organisations is they always pick the information that they know the real customers are going to find easier to answer. So it's not really two plus two from these question lists. It's just the questions that we always ask. But there's somebody in security who thinks, oh, we've got a randomised security challenge because we do these different things, but it's not actually encoded in software anywhere. It's not actually verified.
John McLaughlin (13:53.16)
Goodbye.
Matt Smallman (13:58.65)
The discretion is all down to the agent. So you have other compromises where the agent can be socially engineered. Oh, I'm not sure, I think I lived in, yeah, it was London, wasn't it? And the agent's natural bias is to get on with it. And they're like, click the button, or they didn't hear properly, or they didn't hear the hesitation. So the systems...
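A minimal sketch of the "two plus two" challenge Matt describes, with hypothetical question pools and a generic customer record, showing what it looks like when the question selection and the pass/fail decision are enforced in software instead of being left to the agent's discretion.

```python
import random
import secrets

# Hypothetical question pools; in a real system these come from the customer record.
SET_ONE = ["mother_maiden_name", "date_of_birth", "place_of_birth"]
SET_TWO = ["account_type", "postcode", "last_transaction_amount"]

def build_challenge():
    """Pick two questions from each set at random, so the agent cannot
    fall back on the same easy questions every call."""
    return random.sample(SET_ONE, 2) + random.sample(SET_TWO, 2)

def verify_challenge(questions, given_answers, customer_record):
    """The system, not the agent, decides whether the caller passed."""
    return all(
        secrets.compare_digest(
            str(given_answers.get(q, "")).strip().lower(),
            str(customer_record.get(q, "")).strip().lower(),
        )
        for q in questions
    )

record = {"mother_maiden_name": "Smith", "date_of_birth": "1970-01-01",
          "place_of_birth": "Leeds", "account_type": "savings",
          "postcode": "SW1A 1AA", "last_transaction_amount": "42.50"}
questions = build_challenge()
answers = {q: record[q] for q in questions}  # a caller answering correctly
print(questions, verify_challenge(questions, answers, record))
```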
John McLaughlin (14:01.518)
and
D. Mauro (14:08.837)
Absolutely. Right.
Mark Mosher (14:10.06)
Oh wow.
D. Mauro (14:15.681)
Well, we've seen that in a couple of large breaches in the news. I mean, we literally saw that. We saw it in the MGM and Caesars breaches, various different types of breaches, but very similar. And listeners of our show know that we've talked about these breaches for a while, and it wasn't just 10 minutes skimming LinkedIn and making a call to a call center. This was six
Mark Mosher (14:20.405)
Yeah.
Mark Mosher (14:41.751)
light.
D. Mauro (14:44.613)
plus months or longer of OSINT and research. And when they made those calls, they were very, very good. They knew everything they needed to know. And that's why they got them. But you're right. I mean, they just want to help, right? They want a good review on their customer service survey. They don't want to get in the way.
John McLaughlin (15:02.512)
Bye.
Matt Smallman (15:08.706)
And we did some work looking at this. It's really easy when you come from a security background to think, well, they should have spotted that. And you listen to these calls afterwards in a conference room full of executives and it's like, oh, it's obviously a fraudster, how did they not identify it? And you forget that they're doing 50 or 60 of these a day. And on average they see a fraudster a month, maybe, in a large bank, trying to impersonate a real customer. So they're doing 60 times... well, I can't even do the maths.
D. Mauro (15:28.308)
Mm-hmm.
Matt Smallman (15:37.674)
But their natural bias is the other way. So we have to get the process out of their heads and we have to put it in systems so that we can have at least some confidence that it's consistently operating. And then you can get onto the decision about what the right level of thresholds or what the right balance of risk is. Because how do you make knowledge-based authentication harder? Do you just add another question, or do you...
John McLaughlin (15:42.435)
Okay.
John McLaughlin (16:02.014)
Yeah.
D. Mauro (16:02.41)
Right.
Matt Smallman (16:03.118)
like, oh, now you need to tell me the address you lived at before the address you're now living at, or two addresses before that. And that adds significant cognitive load to the caller, which makes them less likely to want to do business with you in the future, less likely to call you, which has some side effects, positive side effects. So it's just the human in that loop. And I think the other thing, when we think about this from a cybersecurity perspective, is they are often some of the most privileged users in an enterprise. Yeah.
John McLaughlin (16:10.269)
Right.
Matt Smallman (16:32.474)
No one queries the fact that they go into hundreds of customer records a day, review hundreds of transactions, key multiple payments, key multiple changes, orders, all the rest of it. No one queries that because that's part of their job. So these guys and gals have access to huge amounts of both information and processes as well. And that is a big vulnerability, and people are recognizing that, because largely we've done a good job, or we're doing a
better and better job at securing the digital front ends. And now it's the humans that are the weak spot. And we really haven't invested in them in the same way in giving them the protections around them that enable them to continue to do their job safely.
D. Mauro (17:16.389)
Excellent. That was a great explanation. So how do you do this?
Matt Smallman (17:23.37)
Well, again, there's no there's no perfect answer to this stuff. So I talked about how I came at this from the position of voice biometrics. Now that is a fantastic technology and we will talk, but we'll also talk shortly, I'm sure, about some of the challenges and vulnerabilities that all security technologies have and biometrics has as well. Yeah.
D. Mauro (17:40.949)
Well, yeah, because as soon as I hear voice, yeah, because as soon as I hear voice biometrics, I think deep fake because I've been working with I've been working with some former Secret Service people here in the US and they have these phenomenal deep fake examples. And this is from relative commercial grade deep fake platforms. This is not even what the federal government has access to. This is just
Mark Mosher (17:44.001)
Ha ha ha.
Matt Smallman (17:47.519)
Absolutely, yeah.
Mark Mosher (17:48.211)
right yeah
Matt Smallman (17:59.233)
Yeah.
D. Mauro (18:09.154)
run of the mill if you pay enough money you can get... Yeah, exactly.
Mark Mosher (18:10.165)
which you can buy off the shelf.
Matt Smallman (18:11.29)
So let's just park that. We will definitely have to come back to it, because you can't discuss this topic without coming back to it. But I think it's too easy to jump to that when solutions are presented to you and forget kind of where you're coming from. And where you're coming from is this mother's maiden name, social security number, inside leg measurement, maybe a pin, maybe a password, maybe a two factor code texted to someone.
D. Mauro (18:19.158)
Right.
D. Mauro (18:22.524)
Yeah.
D. Mauro (18:29.805)
Right.
D. Mauro (18:36.561)
Oh, yeah. Yeah, I'm on board with finding a platform to take that out of the hands of the call center and focusing on training them on other, more mission-critical skill sets and support. I mean, that part makes perfect sense, and removing that from the process completely makes sense to me. How... I assume you're kind of vendor agnostic as to
Matt Smallman (18:58.518)
But, but... Actually...
D. Mauro (19:05.601)
what platform a certain organization may use, but it's more about like what does your model look like and then this type of layer needs put in there.
Matt Smallman (19:18.282)
Yeah, so let's take a step back from biometrics for a start. And what I tend to say, I'm a pragmatist at heart, a business pragmatist at heart, is what I would call modern security technology. So we have these kind of friction-full, poor, knowledge-based authentication processes that really try to obtain security through friction, through making it hard. And that's just not sustainable. We've brought in some transitional methods, things like SMS OTP and...
D. Mauro (19:38.937)
Mm-hmm.
Matt Smallman (19:46.594)
pin generators, where we've traded off usability for security. We've made it harder to do stuff and we've gained a little bit more security. But what we really want to do is to move to what I refer to as modern security technologies, which are to a larger extent passive. They are not part of the core of the conversation that takes place between the agent and the caller. They're probabilistic, so they allow us to vary the degree of security or risk we're prepared to accept.
They're not deterministic in the kind of knowledge-based authentication way. They allow that variability. They have some other properties as well that we'd like to look at. So they can often be used as much for positive authentication as they can be for anomaly detection. And there's a continuing evolution of technologies in this space. Biometrics is part of that, network authentication is part of that, behavioral analysis is part of that. No one organization is gonna
D. Mauro (20:39.877)
Mm-hmm.
Mark Mosher (20:40.961)
Wow.
Matt Smallman (20:44.406)
be able to, and we've been doing this for going on a decade now, you just can't switch that off, because the call centre is this channel of last resort for many organisations, particularly those who don't have physical locations. You can't just deny service to people who call in, you can't just switch off methods of authenticating people. You have to have a mix; you will always have a mix of these technologies.
Matt Smallman (21:10.038)
I'm not sure knowledge-based authentication should be called a technology, but you'll always have a mix. So you need a way of thinking about this whole problem that supports the fact there's going to be a mix. So you need to understand the risks associated with the transactions you're doing, the information you're providing, and the callers who are calling you. And in that context, you can then make a decision about what the appropriate mix of both authentication technologies and/or detection technologies might be, and then what the most appropriate technologies in that stack are. Because if you're
If you're an insurer who only ever interacts with their customer at renewal time or during a claims process, then actually there's not a lot of risk associated with those interactions. There is data privacy implications of them, but there isn't an awful lot of risk and there isn't a high frequency of contact. So getting a caller to enroll with a voice biometric solution where they need to provide their consent and an amount of recording.
D. Mauro (21:54.533)
Correct.
Matt Smallman (22:08.95)
which will cost time and effort, isn't necessarily an appropriate solution for that organisation. They may look at things like network authentication or ANI validation or some other methods by which they can be confident that the caller is who they claim to be, sufficiently for the thing the caller is asking to do. And it's this change from kind of front door, let's make sure the front door is really secure, and if you got past the front door we'll trust you with everything, to, well, we'll
just keep monitoring who they are and what they're claiming to do. And if the risk of the transaction changes, or the risk of the customer changes, then we may need to either make sure we've screened it against some other detection technologies, or make sure they really are who they claim to be with stronger and stronger authentication methods. But again, it's really challenging in this channel, because we just don't have a lot of data; we just have a phone call. So you don't have a lot of data to triangulate off. And that's probably the kind of macro view
that sits above the kind of specific technologies. And then the technologies themselves are well proven, but it's the business implementation of those. Like I said, let's start with voice biometrics: it has been around for 50 years, first patents from IBM in the mid 60s, but it hasn't been implemented particularly effectively until probably the last five or six years, because it's not just about the technology. This is a
Matt Smallman (23:30.758)
human process. People are involved in it, people need to trust it, people need to accept how it's being used, they need to have confidence that it's working in the way they expect it to. So it's as much an implementation challenge. The technology has improved, definitely, absolutely. The cost has come down, the complexity of integration is dramatically reduced, even in the last two years, making it accessible to more organizations. But that doesn't mean they don't have to solve those business process problems, which is
ultimately what I do. I am not a scientist. I just sit between the technology and the business processes and help the two talk to each other in the language they can each understand.
Mark Mosher (24:10.922)
Right. Yep.
D. Mauro (24:12.957)
Excellent. So Mark. Yeah, I was going to say Mark John. Do you guys have any questions?
John McLaughlin (24:14.258)
So Matt, I have a question.
John McLaughlin (24:18.726)
Yeah, so yeah, my question, Matt, would be is some of these technologies, some of the call centers I call into, it's like you've been authenticated, you've been pre authenticated. I'm like, I didn't do anything. What if I was spoofing this phone number? Are some of them just giving up and, and saying, okay, you know, as I guess you've mentioned levels of authentication, right? So if I'm just calling in to say, hey, I need to make a claim on my homeowners.
D. Mauro (24:29.113)
Mm-hmm.
John McLaughlin (24:46.278)
That's one thing, but if I say I need to transfer a million dollars, you know, to my friend Mark Moser, then maybe authentication kind of goes off the hook, right? But I have noticed recently a couple of call centers I get into are, I seem to be, you know, self-authenticated without doing a whole heck of a lot. So I was curious about that.
Matt Smallman (25:06.518)
Yeah, I think mostly that's probably based on the device you're using for that interaction. Particularly when you've got a high frequency of interaction; I don't know the specific organisation, but if it's your bank, then you probably enrolled that device when you set up your online banking, for example. And depending on the operating system, there may or may not be a back channel to the app that's on that device that can tell the call center whether or not the
D. Mauro (25:24.236)
Uh.
John McLaughlin (25:25.501)
Gotcha.
Matt Smallman (25:35.234)
device is in a call at that point. There are other techniques that can be used to figure out whether you are the genuine user, like: when was the device unlocked? Can I be comfortable with that? And then you look at the network traffic. Maybe you're using Wi-Fi calling and we can see the path for that. Maybe you're on a network, but are you on the right network? And maybe we've got, through a partner or somebody,
D. Mauro (25:54.519)
The IP.
Matt Smallman (26:02.494)
a deal with your network that tells us some more information about what cell tower you're connected to, and that the SIM hasn't been swapped in the last 30 days, and that this is consistent with you; no other calls have been seen. So we can make a risk assessment based on all these data points about whether or not it's worth putting you through your mother's maiden name and date of birth, because frankly the fraudster is going to know your mother's maiden name and date of birth. So what's the point of that process anyway?
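A minimal sketch of the passive, probabilistic risk assessment Matt outlines. The signal names and weights below are illustrative assumptions, not any vendor's product: the idea is simply that several weak signals about the device and the network are combined into a score, and the score plus the transaction risk decides how much extra authentication to demand.

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    # Illustrative signals; real deployments get these from carrier and device integrations.
    number_matches_customer: bool
    device_enrolled: bool
    device_in_active_call: bool    # back-channel check to the mobile app
    sim_swapped_last_30_days: bool
    network_path_consistent: bool  # e.g. expected carrier or Wi-Fi calling path

def risk_score(s: CallSignals) -> float:
    """Combine passive signals into a score between 0 (low risk) and 1 (high risk).
    The weights are made up purely for illustration."""
    score = 0.5
    if s.number_matches_customer:  score -= 0.15
    if s.device_enrolled:          score -= 0.15
    if s.device_in_active_call:    score -= 0.15
    if s.sim_swapped_last_30_days: score += 0.35
    if s.network_path_consistent:  score -= 0.10
    return min(max(score, 0.0), 1.0)

def required_step_up(score: float, transaction_risk: str) -> str:
    """Decide how much extra authentication to ask for, based on both the caller
    risk and what the caller is asking to do."""
    if transaction_risk == "high" or score > 0.6:
        return "strong authentication (e.g. voice biometrics or in-app approval)"
    if score > 0.3:
        return "light challenge"
    return "no extra challenge"

signals = CallSignals(True, True, True, False, True)
print(required_step_up(risk_score(signals), "low"))
```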
Mark Mosher (26:13.537)
Oh wow.
D. Mauro (26:29.188)
Right.
John McLaughlin (26:29.214)
Exactly.
Matt Smallman (26:29.394)
So that's where we talk about kind of passive and probabilistic. None of those really tells you, is it really you? It could be your wife picking up the phone and using it on your behalf. But the risk associated with that is not as significant as some random third party in another state or even another country who's trying to impersonate your phone number with a VoIP call or something like that. So yeah, that is the principle we're trying to push forward here, which is passive and probabilistic. Like, we can make...
D. Mauro (26:36.805)
Mm-hmm.
John McLaughlin (26:50.447)
Okay, yeah, that's good.
Matt Smallman (26:57.174)
Don't think that friction is work. Don't think that friction is security. Friction is not security. They're not the same thing. They can sometimes have that effect, but they are not the same thing.
John McLaughlin (27:01.182)
Hahaha
D. Mauro (27:08.909)
Well, and the inconvenience is not bad. The inconvenience of, let's say, having some behavior analysis examined is not bad. I mean, we talk about this often: if we're located in one state and we're taking a road trip and we stop at a gas station three states away and we swipe our credit card through a gas pump, right? We'll get a text from our bank. And they're like,
Mark Mosher (27:10.176)
Yeah.
Matt Smallman (27:29.985)
Yeah.
D. Mauro (27:36.881)
your card was just used at this location, blah, blah. And I'm always like, that's good. I'm glad they did that, because I could sit here at home and buy things from China all day long and they're not going to ping me, but they know that my device or my credit card was used three hours away. And so they're verifying that anomaly, right? That's something very odd. It's outside the norm of regular behavior.
Matt Smallman (28:04.898)
And for many organizations, that kind of anomaly detection is actually far better than any security farce that you might put people through. The fact that this person is phoning us on a number that belongs to a customer that we know is genuine, but actually the call looks like SIP traffic routed from India, is like, okay, we need to do something different with this call.
D. Mauro (28:14.464)
Mm-hmm.
John McLaughlin (28:29.416)
Mm-hmm.
Matt Smallman (28:33.858)
These kinds of anomaly detections all suffer from this false positive risk, so it's about getting the balance right. It's noticeable that your credit card company didn't decline the transaction; they just decided to alert you about it, which is a different step from, whoa, stop, we're not letting this through. That might have happened if you tried to buy petrol, or gas, sorry, my vernacular, in another country, or if the same card had been seen at another physical location within an hour of that, which was
D. Mauro (28:38.113)
Mm-hmm.
D. Mauro (28:42.937)
Right.
D. Mauro (28:48.494)
Right.
D. Mauro (28:53.601)
No, that's okay. I know what it means. Yes.
Matt Smallman (29:02.766)
clearly impossible to travel between. So there were different measures on this kind of spectrum.
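A minimal sketch of that spectrum in the card example: flag an anomaly and alert the customer when the implied travel between two uses is merely unusual, and decline only when it is physically impossible. The speed thresholds here are illustrative assumptions.

```python
from math import asin, cos, radians, sin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def classify(prev, curr, max_speed_kmh=900):
    """Alert vs. decline based on the implied travel speed between two card uses."""
    hours = (curr["time"] - prev["time"]) / 3600
    if hours <= 0:
        return "decline"
    speed = distance_km(prev["lat"], prev["lon"], curr["lat"], curr["lon"]) / hours
    if speed > max_speed_kmh:
        return "decline"          # impossible travel, block outright
    if speed > 100:
        return "alert_customer"   # unusual but plausible: text the customer
    return "allow"

prev = {"lat": 41.88, "lon": -87.63, "time": 0}          # Chicago
curr = {"lat": 39.10, "lon": -84.51, "time": 3 * 3600}   # Cincinnati, three hours later
print(classify(prev, curr))
```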
D. Mauro (29:05.251)
Yes.
D. Mauro (29:11.989)
Now, do you address this mission in your book, Unlock Your Call Center: A Proven Way to Upgrade Security, Efficiency and Caller Experience? This is all part of it, right?
Matt Smallman (29:22.822)
Yeah, so what I've tried to do with the book is kind of like, and it was a, it's a fascinating experience. And if you've had the opportunity to write a book about an area which you're passionate about, it's a great process because it forces you to go back to square one and kind of explain at first principles how you reach the conclusions that you've kind of, you started to intuitively reach because of experience and pattern recognition.
Uh, and you have to go back to first principles. And so what I've done in the book is try to lay that out from the beginning. Like what, what is the cost of poor security in the call center? Like there's not just a security, absolutely, but there's a usability impact. And some of that's obvious, like handle time, people spending 20, 30, 60 seconds of what might be a four or five minute call. So 25% of all talk time spent on security processes. That's a huge expense to a large organization. So.
There's a usability and efficiency aspect to it. And then there's the security aspects. And then there are softer kind of business reputational aspects, right? So we tried to figure out how you actually articulate those in a way which moves beyond intuition. Like, no one who lives in a call center, operates a call center, manages it, thinks to themselves, you know what, I'd love this, these questions are perfect, why would I do anything different? They don't think that. They struggle to articulate
what the problem is and how to fix it. So what I've tried to do is like, here is how to articulate the problem or here are some techniques that I've found helpful to articulate the problem. And then, so what does a solution look like and how to understand what methods are available to you? What's the right mix? What's the right sequence of those mix? Because you're not gonna do all of this overnight. Some of it's cheap, some of it's expensive, some of it's not appropriate, some of it's perfect. So you need to figure out that method and mix, your kind of strategy.
D. Mauro (31:01.614)
Right.
Matt Smallman (31:10.174)
And then when you've chosen how you're going to do that, you need to move on to implementation and actually taking the technology... Yes, you need to plug the boxes and the wires in and make sure it hooks up with the right systems here, there, and everywhere, but actually you have to put it in front of your customers and your agents. So that's what we try to do in the book. We largely tried to write it from a kind of time-neutral perspective, because the technology moves on.
So I think we've achieved that, mostly. It was published back in '22, so actually when I look at it now, I'm not thinking, oh no, I need to rewrite that. The lessons are still tried and true. We have new clients coming on all the time, and we're like, well, this is the playbook, this is how we walk through these steps. So yeah, I would recommend it to anyone who's interested. It's unlo... both the US and UK English spellings, just in case.
D. Mauro (31:35.841)
Hmm.
D. Mauro (31:52.611)
Right.
D. Mauro (32:02.969)
Absolutely. We will have links. Yeah, we'll have links to that in the showbook. I mean, in the show notes. Yeah, no, it's really interesting. I mean, I would think that the vast amount of data breaches in the past few years and the data leaks that have been out there really have driven this too, haven't they? Because
John McLaughlin (32:03.334)
Ha ha ha!
Matt Smallman (32:10.99)
show next year.
D. Mauro (32:30.853)
asking somebody about their social security number or their security questions or things like that, we can find them online. Like they're available right now on the dark web. It's right there. Like it's right there. Like anything that you guys have, we can find it in about five minutes right there. Like it's so there has to be a better way.
Matt Smallman (32:38.923)
Yeah, I think...
Mark Mosher (32:42.683)
Yeah.
Matt Smallman (32:52.406)
I mean, I think that's where we have to start. When you think about the evolution of a service like this: when you started off going to your bank at the turn of the last century, it was a manual ledger on the desk. Yep. So the friction was that you had to actually walk into the branch. Yep. So those attacks aren't scalable in that kind of environment. You're often recognized as the person who was recently served anyway.
D. Mauro (33:04.641)
Mm-hmm, right.
Matt Smallman (33:17.898)
It was all there in the book and you couldn't really do anything other than compromise that. But when you try to move to remote service, we have to replicate elements of that, and it just hasn't stood the test of time in the same way. And as you say, those things that really gave us security through obscurity, like the social security number: no one used their social security number, and now everyone uses their social security number. It's in everything.
So it's just available, and it might as well be free. Mother's maiden name is another classic example. No one knows anyone else's mother's maiden name, unless you go on Facebook, and there it is, spot on. So we've effectively eroded the value of all of these security measures, and then we replaced them with pins and passwords, which are secret, but we know exactly how people really behave with pins and passwords. They reuse them across
John McLaughlin (33:50.078)
Sure.
D. Mauro (33:50.381)
Right. Yeah.
Mark Mosher (33:50.69)
Okay.
D. Mauro (34:08.129)
They reuse them all the time and yeah.
Matt Smallman (34:09.266)
across multiple things, yeah. And people don't protect them properly when they have them. So now pins and passwords have been compromised. But people aren't that bad: with your bank PIN or your bank passwords, people do have a kind of sense of not sharing them. Some services have better protections than other services. So even for those who aren't using unique ones, sometimes we can force them to use unique ones with just bizarre rules.
D. Mauro (34:14.853)
Correct.
John McLaughlin (34:17.219)
sure.
D. Mauro (34:31.876)
Correct.
Matt Smallman (34:38.442)
Sometimes we have different lengths of pins and passwords that we make people use, or we make them change them, all those kinds of things that add a little bit of security. But coming back to your point about AI and synthetic speech, I actually see the biggest threat from this technology right now being the ability to automate and scale social engineering, phone-based social engineering. These things are so, they have...
D. Mauro (35:01.057)
Yes.
Matt Smallman (35:06.102)
When I talk to clients about this kind of synthetic speech evolution: we are beyond the human ear test point now. You cannot really tell, and particularly you can't tell on an eight-kilohertz phone line where the audio is downsampled anyway; you can't tell whether it's just an awkward line or a breakup or something to do with the line versus the voice. So when somebody calls you and claims to be your bank, and is using a voice that sounds very, very professional and
local to where you are, you don't know if that's a real person or whether that's one of a thousand calls being originated out of a box sitting in the back of a dark data center somewhere. And now we've got this opportunity to scale attacks against consumers because consumers don't have anything like the same protections that we have in the enterprise around looking at all our inbound traffic, understanding the call routing.
D. Mauro (35:46.821)
Correct.
Matt Smallman (35:59.01)
analyzing the behavior and all the rest of it. They just get a phone call. They just pick it up. And despite regulators' best attempts to stop cold calling and spam calling, some people are going to answer. And those people who answer then get treated to a voice that sounds really realistic. And now, because it's connected up to an optimized large language model and a real-time synthetic voice, we can have a real conversation with them. And ultimately the objective of that real-time conversation is to obtain their PIN or password,
Mark Mosher (36:03.974)
Yep.
Matt Smallman (36:27.95)
and/or to keep them talking whilst we initiate an interaction in another channel that's going to generate a two-factor authentication code that we can then ask them for and then automatically compromise. So these AI voices, yes, there is a theoretical risk against voice biometric systems, but that's like the top of the pyramid. To actually get past sound-alike, to actually be enough alike to match against a voiceprint, is significantly harder than just tricking someone into giving you their PIN and password.
So for enterprises that aren't properly checking the voices of the customers who are calling them, they're now exposed to this enormous risk: you basically can't trust that the person who is giving you this PIN or password is the customer they claim to be, or even, as you said, the kind of two-factor, multi-factor authentication challenges.
D. Mauro (37:17.189)
Two things popped into my mind. One, on the latest deep fake technologies, whether it's synthetic voice or synthetic video: the FBI here in the States issued a warning about this back in July 2022, which was a long time ago in technology life cycles. But it has gotten so much better just in the last four or five months,
the level of it and the customization, to where whenever I show people some of these deep fakes, both audio and video, they can't tell. The average person can't tell. And the second point that popped into my head is, I don't know a company on the planet right now that has deep fake detection budget planning for 2024, right? Or 2025.
Is that where you see us going, at least on the voice end? I imagine it's already implemented in certain financial institutions, right? I mean, what is your, what's your insight?
Matt Smallman (38:24.23)
Yeah, so we are in this really interesting phase right now, where, if you think about the commercial objectives of the firms developing this technology, they have largely achieved their objectives in terms of sound-alike. And now their commercial incentives are about speed and cost, really: how quickly can I do it and how cheaply can I do it. So their incentive to actually
D. Mauro (38:38.824)
Mm-hmm.
D. Mauro (38:48.728)
Mm-hmm.
Matt Smallman (38:52.61)
be alike, like to fool a biometric system, isn't significant. And in fact, there's some reputational downside to them of being involved in some of these things. So, well, Eleven Labs actually gave up the user who created the Biden voicemail call,
because it's not in their interest to be associated with it, because the audio is watermarked, yep. So they can listen to the audio, they can ping it back to an exact specific user and say it was him, and the FBI can go and arrest them, or, I'm not sure of the American judicial process, but I saw that someone arrested them. So it's not in those firms' interests to get too much better or to be involved
D. Mauro (39:14.661)
Well they did. Really.
D. Mauro (39:19.449)
Yeah.
D. Mauro (39:26.617)
Right.
Matt Smallman (39:38.438)
in trying to defeat security technologies, because there are consequences of doing so.
D. Mauro (39:42.869)
Of course, right, because otherwise they will cease to exist, right? They will become a target of law enforcement.
Matt Smallman (39:46.231)
Yeah.
Matt Smallman (39:50.018)
That's not to say that the technology principles don't become more mainstream, and that they're not then used by more nefarious actors, and that Moore's law, or however much we've accelerated off that now, means that the processing power required to produce those voices, without watermarks and without being a commercial enterprise, is reduced significantly. So there is that risk about the kind of conscious defeat effort. That said, though,
D. Mauro (39:54.934)
Absolutely.
Matt Smallman (40:18.102)
Voice biometric solutions have always had to worry about this risk. Even a decade ago, with the first things that we put in, synthetic speech risk was something we worried about. We didn't used to worry about it very much, because the voice that we heard didn't even sound like the real person. Like, you could tell; you couldn't have a conversation with an agent with, what was it, that kind of
Mark Mosher (40:34.765)
Right. Yeah.
Matt Smallman (40:41.45)
Microsoft speech-to-text or text-to-speech service you used to use in your dictation model five years ago, because no one would believe that was a real person. But now, with the focus, particularly for things like podcasts and media, on sound-alike, emotions, inflection, all of that, and a lot of that's driven by just huge training corpuses. And that's an interesting issue: regardless of how many algorithms there are, there are only so many training corpuses, and there are remnants of all of those training corpuses in
the actual output, which is something you can think about detecting as well. But you get to this point where now they actually do sound alike and it's hard for humans to tell the difference. So we can start switching on these detection technologies. But still, in many cases, even until maybe a year or two ago, they were failing biometric tests; they were just nothing like the real speaker, and many of them are still not like the real speaker.
D. Mauro (41:34.009)
Mm-hmm.
Matt Smallman (41:39.534)
I think if you looked at some of the press we've seen around breaches at large organizations in the US and the UK, the reality is that the one thing you saw on TV or in the paper was after multiple attempts and iterations of the process and a lot of time and effort spent to get to that point. It wasn't a "press this button and off we go." And that was just to pass the kind of biometric test: does it sound enough like the person we think it does? And
D. Mauro (41:58.509)
Right.
Matt Smallman (42:08.99)
In many cases, to some degree, we can just tighten those thresholds and not reject significant numbers of real customers. But ultimately, we do have to think about how we detect those. And to some extent, this has become like the antivirus of the 80s and 90s. If anyone remembers patching your Symantec or McAfee files: oh, there's a new patch update, let's update that. To some extent, that's the position we're in right now.
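A minimal sketch of the threshold-tightening trade-off Matt mentions, using made-up match scores: raising the voice-biometric acceptance threshold lowers the false accept rate for impostors and synthetic voices, at the cost of rejecting more genuine customers.

```python
def rates_at_threshold(genuine_scores, impostor_scores, threshold):
    """False reject rate (genuine callers scoring below the threshold) and
    false accept rate (impostors scoring at or above it)."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# Illustrative match scores only; real systems report calibrated scores per call.
genuine  = [0.92, 0.88, 0.95, 0.81, 0.90, 0.76, 0.93, 0.89]
impostor = [0.40, 0.55, 0.62, 0.71, 0.35, 0.58, 0.66, 0.49]

for t in (0.60, 0.70, 0.80):
    frr, far = rates_at_threshold(genuine, impostor, t)
    print(f"threshold={t:.2f}  false_reject={frr:.0%}  false_accept={far:.0%}")
```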
Mark Mosher (42:28.205)
Thank you.
D. Mauro (42:28.954)
Of course.
Matt Smallman (42:35.63)
There's this kind of constant evolution. We have the technologies to detect this; it's just that we have a known set of actors which we can detect, but then we have the zero-day type new voices, a new algorithm from a new vendor we haven't seen before, that we need to have some protection against as well. And as I said, and again I'm not a scientist, I usually say not a lawyer but today not a scientist,
there are a range of techniques that make that possible. And voice biometric vendors are on the cutting edge of this stuff. And I would expect, and I know, that there are media organizations who, going into your election cycle in the US, yeah, this is a huge reputational risk for all of your media organizations. Like, yeah.
D. Mauro (43:19.697)
Oh yeah, it's going to be the Wild West. And in the next six months, it's going to be hilarious to watch, because it's already starting. Like, my elderly in-laws or friends of ours will come and say, did you hear that they said this? And, you know, I saw... and I'm like, it's not accurate. They're not thinking, let's verify this. And it's like a re-education of...
Mark Mosher (43:20.85)
Yeah, yeah.
Mark Mosher (43:25.844)
Thank you.
Mark Mosher (43:39.689)
Hehehehe
Matt Smallman (43:43.826)
Yeah, and this, but it is that it's that transition from someone wrote something that's something about someone that has some degree of belief, but caution against it.
D. Mauro (43:52.361)
It's just a challenge, right? It's, yeah, it gets back to just human nature, believing what we see and we've always, you know, right, or believe what we hear because that has been the way that we have been able to correlate things. And the scary part is those that wanna do harm are getting so advanced and...
Mark Mosher (43:55.115)
Right.
Matt Smallman (44:00.948)
Seeing and hearing. Yeah.
D. Mauro (44:15.757)
behaviorally, they are early adopters, they are offensive, that's what they want to do, and they're at the cutting edge. And it's a matter of whether these institutions can keep up. I wanted to ask you what
Matt Smallman (44:27.742)
I think that is the role media organisations are going to have to take on: before they air anything now, they are going to have to verify it. They're going to have to at least have some degree of verification. Yeah. It's true. Yeah. Good point.
John McLaughlin (44:35.427)
That's it.
D. Mauro (44:35.845)
We have to go back to true journalism, right? We have to go back to real journalism because that's kind of the way it used to be.
John McLaughlin (44:38.738)
Right.
D. Mauro (44:44.681)
Yeah, I mean. No, go ahead, please.
John McLaughlin (44:44.946)
So, Matt, and I'm sorry, David. Seeing that the show is called Cyber Crime Junkies, can you give us one of the most egregious breaches you've seen, or something remarkable that fans of the show can latch onto?
D. Mauro (45:04.285)
Yeah, I was just about to ask about the CFO video deep fake used in that business email compromise. I believe it was a financial institution in Hong Kong. Yeah, China.
John McLaughlin (45:17.03)
I know.
Matt Smallman (45:20.578)
Yeah, I am somewhat skeptical about some of these things that you see out there. The advantage of the telephone channel, I suppose, is that the scale is not significant: it's a single customer per interaction. The difference is something like the MGM scenario, where you give the keys to the kingdom away over a single phone call. We don't tend to see attacks at
D. Mauro (45:36.837)
Correct.
D. Mauro (45:43.416)
Right.
Matt Smallman (45:49.806)
huge scale, because it is this one-to-one kind of interaction. What we do see is that when a fraudster finds a methodology that works for them, they double down on it at scale, and these tools are enabling them to scale that. And to some extent, whilst it shouldn't be true, fraud is a cost of doing business in some industries. It is just something you seek to minimise but expect you'll never completely avoid, and you try to find the right balance between the two.
D. Mauro (46:01.049)
Hmm.
Matt Smallman (46:18.702)
I think if you look at those kind of executive compromise cases, that's the other example where deep fakes are more likely to be used. Think about the number of internal business processes you're aware of that depend on just a phone call rather than an email, because we're aware of that email risk now. We've moved off email and said, don't email me this, just phone me and tell me to do it. Okay, so now we phone and tell them, and those internal business process compromises are probably the next target I'd go
after, ahead of scale attacks against consumers using deep fake technology. That said, I think we've seen a couple of press stories in the last few years about this, and I'm always somewhat skeptical, because if somebody has tricked me or socially engineered me into it, I'm going to want to defend myself. And in the absence of call recordings and some verifiable data that says, no, that really did sound like that person,
Mark Mosher (46:51.885)
Thank you.
Matt Smallman (47:16.306)
even a biometric system can't tell them apart and we couldn't detect any deep fake on it. In the absence of actual call recordings to verify against, and given the popularity of these stories in the press right now, it's quite easy to say, oh, it was a deep fake. And I'm just not sure fraudsters in many cases need to be that sophisticated just yet. I think there are organisations that are still wide open on their internal business processes and don't have appropriate controls, even deep fakes aside.
So I am somewhat skeptical in the absence of direct evidence. I'm not saying it's not possible; I'm just saying it's not necessary right now, because organisations' processes are still not controlled well enough. That probably doesn't fully answer your question.
D. Mauro (47:53.226)
Mm-hmm.
John McLaughlin (47:58.642)
So, Matt, the other question I had was: do you feel like the share of the purse is the same for a CISO looking at endpoints and all the things they need to do to lock down the network, versus the call center? Do you feel like the resources are there more than they have been in the past? That's my question, I guess.
D. Mauro (47:58.795)
Excellent.
Matt Smallman (48:21.034)
Well, I will say it never even used to be in the CISO's remit. When I started this a decade ago, it wasn't a CISO-type issue; if there was even a CISO, it wasn't a security issue. It might be a risk issue, a business risk issue. But I've definitely seen a change over the last 18 to 24 months, with technology-orientated CISOs and CISO functions taking some responsibility or interest in this domain. I think that's
D. Mauro (48:26.626)
Hmm.
Matt Smallman (48:51.202)
good on the whole, because they bring rigor to it. Red team testing, penetration testing: those principles are sound and applicable to this channel. I think it has challenges where it runs into the more probabilistic nature of the problem, and the difference between denying versus accepting service. A lot of the orientation is still around perimeter defense.
D. Mauro (49:18.225)
Mm-hmm.
Matt Smallman (49:20.194)
Because this is the channel of last resort, there's still a customer, and we have to serve them to some degree. I think those functions can be less comfortable with these more probabilistic authentication technologies. We're definitely working that through, and in places for the positive; the rigor that those functions and those ways of thinking bring to the problem is positive, but it's still not quite there yet.
It's like red team testing: I could break into any call center I wanted to, but that doesn't mean fraudsters are going to do that all day, every day, because the incentives aren't there, or the technology isn't there, or I have particular knowledge they don't have access to, or, in most cases, there are just far easier ways to compromise your organization. So there's this never-ending question of where the biggest threat is. Is it your internal processes?
Is it your employees? Is it your network? Is it your deployed devices? Is it your call center? Unfortunately, because the phone channel has been neglected, it is tending to come to the fore. And you see that with the MGM scenario: MFA tokens are really hard to authenticate over the phone. People call all day, every day to reset their MFA tokens, particularly after a public holiday,
John McLaughlin (50:22.33)
It's whack-a-mole, right?
D. Mauro (50:25.428)
Mm-hmm.
Matt Smallman (50:42.282)
or when we've got them on some 30-day reset cycle, and it just disappears into the noise. We have to have a call center for support because our employees need to work, and that's where the hole is found. So, I had a bit of a rant there.
John McLaughlin (50:58.526)
Great. No, no, great.
D. Mauro (50:58.969)
No, that's really interesting. As we're wrapping up: how does this affect the guidance you're providing to financial institutions and companies in the commercial space? How is it affecting how they plan for a breach? If they're doing incident response planning, do you
get involved in that? What are you seeing in your experience about how they're addressing this? Because you bring up some excellent points that I don't think a lot of people have really thought about, in terms of the behavioral aspect and the practical aspect of the traditional ways we authenticate.
Matt Smallman (51:48.53)
Yeah, again, I think that kind of CISO-type approach is really strong in this space. I've been doing this for a while, and when I first started talking to people about voice biometrics, it was treated like a magic solution. It's not magic; it's a probabilistic technology, and if you throw enough interactions at a probability, you're going to end up on the wrong side of the decision at some point. It's inevitable. And I often think
I don't consider an organisation to be properly live until they've had their first incident, because they are inevitable. I always encourage people to plan for them, exactly as you say, and to have that contingency plan in place: when this happens, these are the steps we will take to evaluate the risk, identify whether it's systemic or just a one-off, confirm whether it's within our risk appetite or not, and accept it, reject it, or investigate it, whatever the appropriate actions might be. And I just don't see that
very often in the call center space, except with the technologies we work with; it is far more reactive. So I think that is a great thing that security professionals, as opposed to risk professionals, are bringing to this. Breaches are inevitable. We need to plan for them and have the processes by which we mop up and clean up after them. And it's really interesting to ask, of those organizations that have implemented those plans, who,
D. Mauro (53:05.331)
Mm-hmm.
Matt Smallman (53:16.138)
after some of that stuff that happened in the press, actually changed their security posture? In many cases, because we'd worked through those scenarios, or people had worked through those scenarios, they knew what the appropriate response was. It was: this isn't the risk we're worried about; let's check everything is as we expected it to be. Yep, it was as we expected it to be, okay. And that.
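To make that contingency planning concrete, here is a minimal sketch of such a triage flow written as code. It is an illustration only: the field names, the categories, and the risk-appetite figure are assumptions invented for the example, not part of Matt's methodology or any particular organisation's plan.

```python
# Hypothetical sketch of a call-center authentication incident triage flow:
# evaluate the incident, decide whether it is systemic or a one-off, compare
# it against a stated risk appetite, then accept or investigate.
# All field names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class AuthIncident:
    affected_accounts: int    # accounts touched by the suspected bypass
    repeat_pattern: bool      # has the same technique been seen across calls?
    estimated_loss: float     # current estimated exposure

RISK_APPETITE_LOSS = 10_000   # assumed per-incident tolerance, for illustration

def triage(incident: AuthIncident) -> str:
    systemic = incident.repeat_pattern or incident.affected_accounts > 1
    if systemic:
        return "investigate: systemic pattern, escalate and review controls"
    if incident.estimated_loss <= RISK_APPETITE_LOSS:
        return "accept: one-off within risk appetite, log and monitor"
    return "investigate: one-off but loss exceeds appetite, run full response"

print(triage(AuthIncident(affected_accounts=1, repeat_pattern=False, estimated_loss=2_500)))
print(triage(AuthIncident(affected_accounts=12, repeat_pattern=True, estimated_loss=40_000)))
```

The value is less in the code than in having the decision rules written down before the first incident occurs, which is the point about not considering a deployment properly live until then.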
Mark Mosher (53:21.697)
Right. Yep.
D. Mauro (53:23.097)
Right.
D. Mauro (53:42.053)
Absolutely. That's fantastic. Matt Smallman, thank you so much for your time. This has been a great conversation. Your experience, and what you've talked about, really affects a lot of people on either side of it on a regular basis, and it's changing so fast. So we thank you so much. We'll have links to
Matt Smallman (53:47.626)
And thank you for having me.
Mark Mosher (53:49.782)
insightful.
D. Mauro (54:10.013)
your book and your consulting arm in the show notes, and we encourage people to check them out. You do work here in the States and throughout North America, as well as the UK. So just because they buy their gasoline in liters and call it petrol does not mean we don't want to do business with them. We always want to do business with our friends there.
Mark Mosher (54:27.093)
from.
Mark Mosher (54:32.17)
Right.
Matt Smallman (54:35.022)
I think there's a really interesting kind of parallel. I think that the US is about to experience what the UK experienced maybe five, six years ago with the advent of real-time payments and other challenges like privacy regulations. So I think it's a really interesting time to bring some of those lessons to bear in the US as well. So yeah, I've got a couple of flights booked already. So I look forward to spending more time in the US this year.
D. Mauro (54:45.345)
Oh yeah.
D. Mauro (54:53.483)
Absolutely.
Mark Mosher (54:57.692)
Excellent.
D. Mauro (54:58.465)
Well, great. Let us know when and where you're going to be, so hopefully we can meet up in person, my friend. All right. Very good. Well, thank you so much. We appreciate it, everybody. Thanks.
Matt Smallman (55:04.715)
I won't be specific just yet, but that's fine.
Mark Mosher (55:07.099)
Hahaha
Matt Smallman (55:10.594)
Thanks, guys.