Phishing For Answers

Empowering Your Workforce: Andrew Obadiaru on Balancing Cybersecurity Awareness, AI Impacts, and Innovative Training Strategies

Joshua Crumbaugh, Founder & CEO of PhishFirewall

Send us a text

The episode emphasizes the critical role of security awareness in protecting organizations from cyber threats, particularly phishing attacks. Andrew Obadiaru, CISO of Cobalt, discusses strategies for enhancing employee education, implementing phishing simulations, and leveraging AI to stay ahead of evolving cybercriminal tactics.

• The human element is key in cybersecurity defense 
• Importance of security awareness training for all employees 
• Insights into conducting phishing simulations and their benefits 
• Current trends in phishing attacks and use of AI 
• Strategies for engaging employees in security training 
• Tips for maintaining vigilance against cyber threats 
• AI as a tool for enhancing security and its associated risks

Joshua Crumbaugh is a world-renowned ethical hacker and a subject matter expert in social engineering and behavioral science. As the CEO and Founder of PhishFirewall, he brings a unique perspective on cybersecurity, leveraging his deep expertise to help organizations understand and combat human-centered vulnerabilities in their security posture. His work focuses on redefining security awareness through cutting-edge AI, behavioral insights, and innovative phishing simulations.

PhishFirewall uses AI-driven micro-training and continuous, TikTok-style video content to eliminate 99% of risky clicks—zero admin effort required. Ready to see how we can fortify your team against phishing threats? Schedule a quick demo today!

Joshua Crumbaugh:

Hello and welcome to another episode of Fishing for Answers. Today I am here with Andrew Obaidu. I'm sorry, I lost it right at the last second. Go ahead, Obaidu. Did I get it right?

Andrew Obadiaru:

Excellent, you got it.

Joshua Crumbaugh:

I'm terrible. I do apologize, but today we're here with him. He is the CISO Andrew is the CISO of Cobalt, and so we're going to dig into a little bit of what he sees, or his views on the human element are I'd?

Andrew Obadiaru:

love just to sort of open it up and ask you, you know, do you have any overarching sort of opinions about security awareness or phishing or anything like that? Sure, josh, thanks for having me. It's a pleasure to be here. I think you know security awareness, I think, is important, right, you know, for any organization, security awareness is important. For any organization, security awareness is a key component of any security stack, because that's how you keep your folks up.

Joshua Crumbaugh:

Your biggest defense is really your employees, your human elements.

Andrew Obadiaru:

If they get it wrong, one screw up could create problems for your organization. So for us, we invest heavily in security awareness. It's important that our users understand what is expected of them from a policy standpoint from an acceptable use policy standpoint and when it comes to phishing, it's the same thing and there's a lot of different things you have to worry about. Now, with the introduction of AI, phishing has become a lot more sophisticated, so the more you're able to educate your user community about that, you're able to stay a little bit ahead of what cyber criminals are trying to do. All this, I think I place a lot of credence on security awareness, because it's really a key component of our security stack and our security strategy.

Joshua Crumbaugh:

That's wonderful and music to my ears, so I'd love to dig into that. And one of the places I'd like to start is around fishing simulations, and I want to ask the question have you ever had anyone get upset because of a fishing simulation? And then, if the answer is yes, could you tell us about it and maybe what lessons you learned and maybe any changes you made as a result?

Andrew Obadiaru:

Yeah, I mean looking back at my career, I don't think I've ever had a situation where anyone has specifically raised objections to fishing simulation. I mean in the past anyone has specifically raised objections to efficient simulation. I mean in the past we've had concerns about the frequency of it. But to not to completely say hey, I don't want to participate in this. It's just not an option that we give to our employees because of the potential implications of efficient going through the process.

Andrew Obadiaru:

So if you are not aware, if we don't conduct simulation, then we can't evaluate the sophisticated nature of our employees. How well do they truly understand out there? How do we bring to their awareness some of the sophisticated nature of these new phishing emails that you have out there? So I think we can underscore the importance of phishing simulation. It allows us to evaluate the levels of skills of our employees, how well they're able to make those determinations, because you're really not there, it's really between them and their computer. So if they're not adequately trained or even informed on what they should be looking for, then you run a big risk of being fallen prey to some of these very sophisticated fishing simulations out there.

Joshua Crumbaugh:

So I think, for me.

Andrew Obadiaru:

It's not an option for us. I haven't met an employee yet that's going to tell us well, we don't want to do fishing simulations, just an option. So the importance of that. We've emphasized why this is important for us as an organization and I think, everyone has been very cooperative and understanding of that.

Andrew Obadiaru:

So in all of the organizations that have worked, fishing simulation has been a key component of our security strategy or security awareness program. Uh, so no one has really raised uh concern around. Oh well, we don't want to have phishing simulation. Like I said, you know the frequency could be troubling for people. Uh, you don't want to overdo it and people start to take it for granted, uh, so what we've tried to do is to kind of peg it up quarterly. Quarterly gives them the ability to understand and we try to make it a little bit more fun for them as well. So it's not just that they understand and we try to simulate something that is so real and something that they can easily relate with. So for me, I've never had an employee raise a concern.

Joshua Crumbaugh:

I think it's also part of your methodology, your goal, how you go about it. If the goal is nothing more than to trick the user, I think that's when you're going to have upset users. But when the goal is to educate and you can properly communicate, I find that you don't get a lot of those. But it really depends on methodology and that's the reason that I ask Very interesting and I feel like maybe a great follow-up question then to that is, when you run phishing simulations, what would you rank as your top three goals?

Andrew Obadiaru:

as your top three goals? Well, we are looking for one the percentage of folks that are clicking. Right, you know who is clicking on this right, who is sharing credentials online, who is raising concerns about what they suspect? Right Reporting in a suspicious email. And, luckily for us, our numbers have been very low. We're actually below the average organizationally. So I think that's been good.

Andrew Obadiaru:

But you know, off the back, that's what I'll be looking at. If we get an increase or increased percentages of folks that are clicking, that's gonna be concerning for me. Right, that means they haven't been paying attention to our training. Our training is not sufficient enough to give them the information they need to be able to act in those moments. So, for us, those are the key metrics we'll look at who's clicking, what percentage of folks are actually clicking, who's reporting is a suspicious email and who's going as far as sharing credentials via this simulation. So those will be the three big ones that I'll be focused in on and, as I said, luckily for us, those numbers have been very low, actually below the average industry standards.

Joshua Crumbaugh:

I know it from what I'm seeing it very much changes based on industry. I would imagine you see not only your own, but potentially some of your clients as well. What, what types of attacks are you seeing most frequently right now?

Andrew Obadiaru:

from from the fishing standpoint.

Joshua Crumbaugh:

Well, any social engineering or human in general. I mean you could be smishing, phishing, vishing, you name it.

Andrew Obadiaru:

Yeah, it's mostly phishing right, because that's still the easiest way to get information out of users right. So it becomes how sophisticated that phishing is right, how believable it is. So I've seen a lot of that, not luckily for us. There was a time we were getting that it would deploy different methods, even going as far as hey, there's a gift card from this, or when is the holiday season, we get the UPS, the FedEx site where you're expecting a delivery, or somebody may have sent a picture from your Facebook and they say, hey, we were attending your daughter's game yesterday. Blah, blah, blah. So there are different ways that we go about that.

Andrew Obadiaru:

With the introduction of AI, obviously, it's a lot more believable. Now what we used to use as checkpoints are no longer true, because we used to look at spelling errors and you know just how you construct that simulation right. How is the fishing rig? What logo? If there's a logo in there, does the logo look real? Right? What do you use in things like that? But now AI is helping to take care of some of those concerns for them, which is to their own advantage. That means for us we have to exercise more due diligence now, pay more close attention. The fishing simulation we put together has to be extremely believable, has to be AI-driven as well. To the extent that we can leverage AI in driving those simulations, we do that as well.

Andrew Obadiaru:

So the goal is to be one step or two ahead of them, right Anticipate what they can do and then leverage that.

Joshua Crumbaugh:

So let me ask you this I know I've seen a lot of phishing attacks targeting my finance team, targeting my IT team, targeting my executive leadership, my HR teams. Are you seeing any of that as well? Absolutely.

Andrew Obadiaru:

The executive team remains a big target. Our finance team remains a big target. We've had a few instances where folks try to redirect emails. Organizations share some of that information time to time. So the areas you've mentioned, these are the key areas of because really, at the end of the day, and there's a financial benefit to these individuals right, so they're looking for where to get the biggest bank and so if you're able to tap into a CEO's email, for example, or even a CEO's computer, that's a wealth of knowledge, of information that you can certainly leverage.

Andrew Obadiaru:

And same thing for our financial teams as well our financial department.

Andrew Obadiaru:

So those are the areas where we work closely with those individuals more to just really say if anything comes across suspicious, send that to us. So we have different technologies that we've deployed, in addition to the simulation exercise, that allows them to channel some of this suspicious email to us into a different domain where we can evaluate it and determine whether it is a legitimate email or not. But those groups you mentioned remain a big target for most of these cyber criminals. You know they want to be able to tap into your executive team or your finance team or the tech IT team, or even the people team as well.

Joshua Crumbaugh:

So are you doing any role-based phishing or role-based training to really adapt to that changing and more targeted threat?

Andrew Obadiaru:

Well, we do a little bit right. We're not really segregating the training. So what we try to do is to kind of you know, because if it's true for them, it's true for everybody else, right? So if they are not successful in fishing an executive, they may be successful in fishing a regular employee. So to us the risk is still the same. I don't want my environment to be compromised, right, whether it's coming through an executive or even coming through a regular employee. So we don't segregate in that way. But we tend to you know, whatever would land from the process, we incorporate into the training, into the simulation, and we roll it out across the organization. So yeah, Okay, wonderful.

Joshua Crumbaugh:

So I have a couple of other questions. So it is Security Awareness Month. Are you doing anything special this month as a result?

Andrew Obadiaru:

Well, you know, we just we're still in the middle of our awareness. We typically do it every October, so we're doing that. We also baking in every week.

Joshua Crumbaugh:

Is that like an annual training that you do?

Andrew Obadiaru:

Annual training is from the first of October to the end of October for every user and if we have new folks coming to the organization within that period, in addition to the annual training we also deploy new user training as well. So for this period what we've done is to kind of build some kind of reward structure into the process to say, hey, if you complete this within a record time, you get this. If you ever answer all the questions right, and then if you follow some of us, you know acceptable use policy, you know we encourage folks to, you know to do all of that. And it's some level of you know, some token of a gift that we can give out, that we give out to those that really follow those policies very well.

Joshua Crumbaugh:

Okay, so I think you mentioned earlier a little bit about making it entertaining. Did I hear that?

Andrew Obadiaru:

Yeah, like get me filled, type of thing.

Andrew Obadiaru:

Yeah, of that, uh, and the reason we do that for those folks that are not technically savvy, right, and I find some of this stuff uh, boring, uh.

Andrew Obadiaru:

So we find a way to make it a little bit more engaging for them by deploying some kind of game into the process. So we work with the third party and I help us build out some of this training so they incorporate some games in there to kind of, you know, make it a little bit more entertaining and engaging for them so they're not just trying to click, click, click and get through the process. So we incorporated gaming into some of the questions and answer process as well. So we find that, you know, to be beneficial, you know, to the employees the pace of time they take to finish their training. It's not a little bit more as a game style If you're doing it without any kind of games into it, they just want to rush through it and get it over with. But when you incorporate some kind of gaming, a gaming file, into the process, they're a little bit more, you know, interested especially the game it's interesting enough for them.

Joshua Crumbaugh:

So I'm curious, just to get your opinions. Ai is a. It's a wonderful thing, it's very cool. I know AI has been around since 1958, I think it was, or something like that right, but in the past two years to everyone's credit and the fact that it's so hype's credit it's changed and technology is all of a sudden exploding once again. But with all of that excitement and all of the opportunity that comes with it, there's risk and there's new things that we need to be training our people about. So, from an awareness perspective, what are the top things that you believe we need to be training users about as it pertains to AI?

Andrew Obadiaru:

That's a very good question. I mean, ai comes with a lot of benefit and advantages. There's also risk baked into it, which may be easily you know, I may be easily aware of it, and the ordinary end user may not be. What does risk are To them? They're just imputing data and trying to get output out of it. So what we've done is to kind of let them know, be cognizant of the data you share in this process, because we lose control of that data once you put it in. It's no longer within our purview. So that means, on the back end of that, whoever's receiving that data? Once you put it in, it's no longer within our purview, so that means, on the back end of that, whoever is receiving that data, whoever owns the LLM that means we're not going to have any visibility.

Andrew Obadiaru:

So sensitive information, you can share that via AI. You can type that into that prompt to seek any particular feedback. So we've also developed, you know, ai driven policy to really, you know, educate our users on what some of those risks are and how to kind of move when you're within that space. So there's also the question of, you know, the veracity of the information you get out of AI. You know, how credible is this? How much reliance can you place on this information? So I think those are the kinds of things that we're cognizant about and we try to pass on to our end users.

Andrew Obadiaru:

So our policy is very well focused more around data protection and what information you can share, why you mustn't share this kind of information. And, from a developer standpoint, we try to discourage them from using it for code development, putting codes into it and things of that nature, because if you put your code into there and somebody could essentially, you know, compromise the back end of that, and then that information is exposed, so, uh, so those are the kind of areas we're focusing on from a risk perspective. Uh, we're not trying to, you know, uh, prohibit the use of ai across the organization, but what we're trying to do is to really, uh, educate them on the risk and have them, you know, keep that in mind or be cognizant of that risk as they go through these different AI platforms.

Joshua Crumbaugh:

Interesting. No, I think those are great points. Just a couple that I might add, and something I'm growing more and more concerned of every day, which is the reason we're so well. We, you know we don't just let anyone into the data, but you know what about those vendors that you rely on that have access to your data? And I think about that more and more, and I think that that's going to be a big part of the AI concern is what are other people doing with the data as it pertains to AI? What are other people doing with the data as it pertains to AI? Do I have to worry about somebody else's employee uploading it because, hey, claude can do stuff? Or what about Claude I don't know if you saw the new news last week or this week, sorry where it can now watch everything happening on your screen, interact with it, and so I can give it a prompt.

Joshua Crumbaugh:

Hey, do this on your screen. Interact with it and so I can give it a prompt. Hey, do this, and it knows how to move my mouse around and interact with my environment. Did you see that? Update?

Andrew Obadiaru:

I didn't see that yet, but I heard about it. I haven't had time to really review it and fully understand the storyline behind that.

Joshua Crumbaugh:

So right now I think it's not just something where you can install an agent or open it up and just have it run. It's only available through the API, so it's very limited right now in terms of what you can get on it. But still, I watched some videos Very interesting. You've got just generative ai taking off. Uh, runway has one that, uh, which I guess I'm way off subject, but it's still just really cool stuff.

Joshua Crumbaugh:

Uh, runway has one where I can upload a video and then say, turn like say of myself and say, hey, uh, make me a lego man and it'll take that exact video with everything and I'll be a lego man. Or, um, I can say, you know, put me in the jungle. Or, like, you know, put me in a scary graveyard because it's almost halloween, right? Uh and uh, and so it's, I don't know. There's some really cool stuff. Um, now I will say what about? Uh, like, uh, uh. What are your thoughts on GitHub's Copilot? If GitHub is compromised, well, our code's compromised anyway, right? So what are your thoughts on AIs or utilizing coding ai's like that?

Andrew Obadiaru:

yeah, again, that's. That's an important point, uh, and we've had those discussions with our developers. We do have a github co-pilot. Uh, we have it on limited use at the moment for, for the obvious reasons you just touched on uh so the risk is certainly there, even though we've gotten tons of assurances from github that the system is secure.

Joshua Crumbaugh:

Blah blah blah.

Andrew Obadiaru:

So microsoft has never had any issues exactly so we try to do as part of either our onboarding process or periodic renewal process or periodic touch points in our vendors, and would ask those questions, right, so we drive into that process a little bit more how, like, how much of ai you're using? How does that work? You know? Can you inform us on the extent of ai? Uh, how much of that will interact with how does that work? You know? Can you inform us on the extent of AI? How much of that will interact with our data, et cetera, et cetera? What measures you have in place to control that? So, luckily for us, the vast majority of our vendors are not really tapping deep into AI in that kind of way where it's concerning for us. So so that's good, but I expect that to change over time.

Andrew Obadiaru:

But I think what we've done, we've been very proactive in that regard and to stay on top of it, to work closely with our vendors to encourage them to be transparent with us on that use. So I think you know, for GitHub, I mean we certainly we use it. It's a very popular solution. That's part of, you know, the developer team. So we've had those conversations with our developers and they are cognizant of that risk. So I think they're very measured in the use as well. So I think we overlook that process. We have access to the GitHub co-pilot. We review some of the information in there as well. So we look at the different repos, we do our different things with that, just to kind of ensure that we're not exposing ourselves too much here and to just continue to keep our developers informed of that risk. So when you are aware of that risk, you're able to kind of, like you know, operate differently as against if you just don't have any clue what could potentially go wrong, you know.

Joshua Crumbaugh:

Yeah. So you know, one thing that I'm realizing more and more is that we, as cybersecurity professionals, really do have to stay on top of everything that's happening in AI, because that impacts our day to day job, not just in terms of new tools that may be available to us, but also it's other things, like what types of technologies and tools might be available and out there that my users are interested, that my users might be using, like I'll give you an example you do pen testing, like software, or SAS pen testing or something like that. Right, did I describe that? Okay, yes, correct. So there is this tool called Interpreter, where it allows me to integrate ChatGPT into Kali Linux command line terminal, so I can just tell it what I want it to do.

Joshua Crumbaugh:

So what if one of your pen testers was doing that to you know, obfuscate data or whatever, or heck, even just using it to pass data out of the network or something like that? I'm not saying that they would, but like, I feel like those are the types of things that as long as we're aware of them, we can block them, but as soon as we stop becoming aware of them, then that that's when they can happen, and so I just think it's almost a reminder for all of us, like this advance in technology, that we have to stay up with advances in technology so we can secure them.

Andrew Obadiaru:

Very, very good point and we've had those concerns as well. But I think what we've done is we don't usually ask about pain testing at this point. So I think all of our testers also aware of that we do use Kali. It's a pretty cool tool.

Joshua Crumbaugh:

I will say that it's a pretty cool tool.

Andrew Obadiaru:

But I think the question is, you know, first of all, do our clients want that right? Secondly, you know, can you 100% rely on the outcome of that right? So I think for now it's still on the you know evaluation. To what extent do we want to deal with AI or use it? We use AI in limited capacity, mostly for report writing, report generation. But as far as actual pen testing, we don't deploy air at all. And I think a lot of testers we've also communicated that to them You're not allowed to do that right. So we have methodologies we build, we have processes we build that they have to follow. So no one at this point is doing that, thankfully for them for that.

Joshua Crumbaugh:

To be fair, the number one thing that it's really good at is understanding the flags that you can set or the different variables that you can set for all of your different pen testing tools. So it just really makes a good pen tester better. It's not like it's just gonna go hack on its own. Um, we did get it to try on its own and it was terrible. It was just absolutely terrible. It ran an in-map scan on like the top 25 ports or something like that, and called it a day. Um, I think it had the minus a flag, so it was running scripts. So at least there was that, but I mean, it wasn't very good.

Joshua Crumbaugh:

Where I found the value was because I was just playing with it and one of the things I asked it to do was develop polymorphic malware, and initially it was just terrible. Then I reminded it hey, you're sitting on Kali Linux, you have a lot of tools to do this. And then it of course asked me well, here's two that I would suggest. Which one do you want to use? I tell it and it creates this wonderful prompt. That was spot on and it understood the context of how to structure that very easily, so I didn't have to spend 35 minutes or probably more like five, but five minutes reading through the man file to figure out how to structure that command. If it's not one of the ones that I use every day, so I don't know. I find it really interesting and I'm very excited to see where things are going in the future.

Andrew Obadiaru:

Yeah, I think over time it's going to get to that. I think most organizations and us, everyone is just a little bit being cautious at this point to see how quickly you want to make that jump.

Joshua Crumbaugh:

I think the only people not being cautious are the bad guys, precisely.

Andrew Obadiaru:

And it's helping them a lot, right. So we just want to be cautious and, you know, obviously with time there's going to be additional use of AI in every aspect of some of these security solutions. I think we just kind of, you know, taking all of the necessary measures to evaluate and see how well-based we can leverage that and to what extent we can use it.

Joshua Crumbaugh:

And I will say the real Skynet is just AI phishing us. If we got to be honest, like everyone is worried about the machines attacking and they are every day in our inbox, in our text messages, but I, I do it. It's funny we wrote an article about that and there's so many people worried about Skynet that we get like 50,000 impressions a month on the word Skynet to our website and I mean we're so far down the search rankings because we're a cybersecurity site, we're not like sci-fi or whatever right, and it was only mentioned a few times on that page, so we don't even have good keyword density or anything like that, and so it's just funny to see how much and how deep people are going on the internet researching about Skynet. So I have a question I like to ask everybody and I know it's not such a simple question but if you had to pick one or the other, which would you pick? Carrot or stick? Hey, that rhymes.

Andrew Obadiaru:

That's a tough one, right. So I think it's a combination. If I have to pick one, I'll do the carrot right Because, the way I've looked at it, you know we all have a stake in this right. So it's not just a responsibility designated just for security folks, right. The protection of our data and our organization as a whole. Everyone should have a shared responsibility in that. So if we have a shared responsibility protecting the data we're trafficking, then it's a common knowledge. You shared responsibility protecting the data we traffic in, then it's a common knowledge. I don't have to force you to do anything if you, if you share the goal and the responsibility of doing that. So I think you know that's. That's my approach to it obviously.

Joshua Crumbaugh:

I think part of that is creating the culture. Hey, everybody, we're all in this together. Security is everybody's responsibility and it makes it easy for everybody when we all do our part.

Andrew Obadiaru:

Exactly, I think when you do it that way, you tend to get more buying, more cooperation. You know your corporate is better and everyone tends to operate a little bit better that way, as against taking a heavy hand approach and trying to compel folks to do something just because you said so.

Joshua Crumbaugh:

It's that old adage you catch more flies with honey than with vinegar. I'll tell you, personally, I am a very big advocate of carrot. When you do have to use stick, I like to start with very gentle sticks like hey, sorry, but you're considered to be high risk because you keep clicking on things. If you stop clicking on things, we can lower your risk. Or you know just the light stuff like hey, you had this mandatory training for compliance we needed you to do. You didn't do it. We're going to have to disable your account until you can complete it. To me, those are they're still sticks, but they're a lot nicer of sticks than you know going straight to reporting them to HR or straight to like having their supervisor contact them, or these different things that tend to, in my opinion, lead to a negative culture.

Andrew Obadiaru:

I completely agree with that and we do. We do some of that as well, the subtle, you know, push and and edge and and continuous reminders, things like that. We do that and because we're also security a company most of our folks are security conscious, so it probably would be different for companies that wedges right. So it's a totally different mindset, totally different culture. So absolutely, we poke a little bit, we hedge a little bit and it hasn't gotten to a point where we have to really raise that stick. So at this point it's just, as you said, subtle reminders of the importance, because everyone who works here understands the rationale and thinking behind it all.

Joshua Crumbaugh:

I would hope you wouldn't have to use the stick in a cybersecurity company. I would hope you wouldn't have to use the stick in a cybersecurity company, but I did have an intern walk into my office one day on his first day of employment asking me about the gift cards that I had asked him to order. So we got a good laugh out of that one. We got that too, but no, I mean, and that's actually a really good segue. So, yeah, far as having employees sign a waiver stating that they understand, but it's a social media waiver, listen, when you say you work for us on LinkedIn, you're going to be targeted immediately. You're going to get text messages. You're going to get emails, not just to your work email but also to your personal devices, and I think we need people to understand that, because it's happening often within minutes. That's what I'm seeing. Is that what you're seeing?

Andrew Obadiaru:

Yeah, I think you know people generally. Yeah, we see that a lot and I see that particularly a whole lot myself personally. So I've just kind of adopted the approach of not to you know, we just kind of dismiss it. I expect that anyone who walks here would have a good sense of that and be equipped to deal with that where an education is necessary. And we certainly do educate them on that, and that's part of what we do, what we refer to as security, breakfast, security every week, where we kind of educate folks on emerging topics, you know prevailing topics, things that are out there, what they should be cognizant about, what they should be aware of. Some of the initiatives we're doing Just kind of like, you know, provide sort of reminders of how you should be coping with some of these types of stuff, that it would happen, be aware of it and you know this is how you react to it.

Joshua Crumbaugh:

I like it, and I think that that's really critical is to continuously keep security front of mind. We actually, early in my career, I came from a red team background If you couldn't tell by me throwing interpreter into Kali Linux, but I used to run red teams Before that. I was on red teams and even while I ran them I was often on the red teams. That gives me a little bit of a unique perspective, and so I never mind I shouldn't go there. So instead I'm going to. I'm just going to switch subjects, but let's jump on over here to have we covered everything. No, there's no way. I think we have. Okay, so I have some fun stuff, a few fun things that we can still do here. Because I did used to be a hacker and because I run a fishing company, I am always looking for good ideas. Like, maybe you have an idea for a fish that I haven't heard of, so do you have any fun creative fishing ideas? How are you going to fish me? That's a good one.

Andrew Obadiaru:

I think if I was to fish you well, I would be so focused on you know, staying on the other side, as against trying to fish you. But I think today, depending on where you are and you know what is relevant to you, I tend to believe that social media is still the biggest way right to really get to someone. If I have access to your social media profile, for example, which is easily accessible and you share a lot of information there, I think that would be the first thing I would go to right to say, hey, what pertinent information I could get out of this, I could use. That. First thing I would go to right To say, hey, what pertinent information I could get out of this I could use. That would be believable to you, right, that would make you really reconsider. Even if you're not falling in, you're going to really weigh in heavily. So that would be one area I would go.

Joshua Crumbaugh:

Another area would be if it's in the holiday season.

Andrew Obadiaru:

I mean, there's a lot of different things happening in the holiday season.

Joshua Crumbaugh:

Oh, absolutely. It's a great time to fish people because they're busy, they're distracted, they're distracted, they're busy, they're in the UK.

Andrew Obadiaru:

They're drunk. So I think you know I would have to get more detail on the situation to really plan it out, but I think social media will be a key component of it and with the whole remote first culture, so I mean the whole social engineering becomes a little bit more easier. Now, right, you're not dealing with parameters of a network or office network. You're dealing with an individual who is in the comfort of their home and you're trying to engage them. I think they're more susceptible that way as against when they're in an office setting where other things may come into place. If I'm including on your social media profile, I think there's a wealth of information I can certainly tap into.

Joshua Crumbaugh:

Oh, absolutely and most people's, and I think they don't even realize everything that they share Like. The example I love is on Facebook. Almost everybody has on their timeline someone from their graduating class that wishes them a happy birthday.

Andrew Obadiaru:

Exactly.

Joshua Crumbaugh:

And that tells me exactly how old you are, because one of those people is going to have weaker security than you. I can figure out how old they are and now I know how old you are. I know when your birthday is and they use things like their kids' names and their pets' names for their birthdays and then they post a picture of their pet on their Facebook and they put the pet's name and they post their kids' on Facebook and their kids' names and their kids' birthdays Maybe not directly but indirectly, On their birthday saying happy 11th birthday.

Andrew Obadiaru:

And now we know your kid's birthday too, so mostly on the comment side right, the people that are weighing in are sharing a lot more right, you may have just been you and your son. No, no information was given. But somebody in the camera is going to say oh my god, brian is so tall that I remember brian the last time I saw him.

Andrew Obadiaru:

You know what I mean. So there's so many information that we share people too much, you know, in social media, which you know again belongs to them. They they're not, you know, they just see it. They don't see it as a possibility that you could tap into this information to do something outside of uh, you know what they've intended that information for. So people could be. There's a lot of information you could leverage in getting to somebody and really fishing that individual.

Joshua Crumbaugh:

Yeah, no, I agree, the big corporate giants of America in particular, because that's the only way we're going to, you know, end up getting that level of privacy, and I am. I do like seeing California and Colorado moving in those directions, but that's a handful of states. We need more and ideally some sort of national privacy legislation that helps to protect our identities. I know I, interestingly enough, had done this social engineering demonstration for this reporter up in Canada and they give me all of these people that I'm supposed to dox, but every last one of them is up in Canada, and here's the problem. They've got a lot better privacy up there and I couldn't find much on anybody Made it difficult, to say the least. I was like don't you have anyone in the US you could give me? Make it a little bit easier?

Andrew Obadiaru:

I mean the EU they are really really big on data privacy and information protection.

Joshua Crumbaugh:

They are.

Andrew Obadiaru:

They just came up with an AI regulation, right, which is, you know I was just reading some information on it. So they always in the forefront of some of these, you know, information sharing platforms. They want to be able to, you know, manage what information you share, what you do with that information. So the AI obviously is going to create a lot of talent for a number of organizations how they use AI within these EU countries. So I just started to. I did a customer review of it. Haven't gotten into a lot of detail, but it was very telling. You know they've classified AI into multiple categories, right. So you have to pick one. Each one requires a series of policies and controls you must have in place. So it's going to make it a lot more difficult to use AI within the EU if those regulations take hold and they start to really implement those fines, you know.

Joshua Crumbaugh:

Yes, in theory, it's going to make it more difficult at the corporate level.

Joshua Crumbaugh:

It's going to make it more difficult at the business level or enterprise level, but for that individual and the average employee that I'm worried about as they learn about all these new tools that I'm worried about, as they learn about all these new tools, they're still going to want to use it, and so I do wonder how well that plays out and how we balance things, because I feel like some of the AI legislation is a little bit like trying to put or close Pandora's box. By definition, we can't close Pandora's box. I wish we had put a little bit more foresight into this. You know, when you look at like schools, like MIT, they started their AI programs back in the 50s, and so it's not like we haven't had plenty of brilliant minds studying this, analyzing it, thinking about how we protect it. You know these systems. I just think we didn't have anyone in DC asking any of those people questions about what might come from it. But no, I just it's going to be interesting to see how things play out.

Joshua Crumbaugh:

But I truly see it transforming every aspect of our lives.

Andrew Obadiaru:

No, I completely agree. I mean, no one anticipated the level of adoption of AI. The adoption rate has grown tremendously. I think in the last two years we've seen something to the extent of like 75% of organizations are using AI in one way or the other. So that adoption obviously brings a lot of challenge. So from our perspective, we help a number of organizations pen test those LLM-supported systems, the systems that support the LLMs. So we help to kind of mitigate some of that risk, testing it, giving you a sense of what the vulnerabilities are, and then you can take measures to act on it. But it's here to stay, it's not going anywhere. People are adopting at very increased pace you know.

Joshua Crumbaugh:

so we expect that adoption rate to continue to go up. Yeah, yeah, I was just telling a large language model to write me a song about its instruction set and it did not. But I could also tell based on it that it seems like and it's just this one particular instance, but I've seen it a couple other places too that it's we've got a layer like it's. It's being blocked almost at a WAF level and not at the LLM level. So they're so worried about what's going to come out. They're monitoring and the problem with that is what if we didn't think of a word that that that hacker uses? It's like the you know, dan, do anything now. Prompts that people came out with when chat GPT was, you know, first getting big and all of a sudden it would be the evil sidekick that you wanted it to be.

Andrew Obadiaru:

Right.

Joshua Crumbaugh:

Very true. Okay, so we are almost out of time here. I do try to keep these to right about 45 minutes, but we are in security awareness month and I got to ask do you have any tips for that end user?

Andrew Obadiaru:

Yeah, I think they just have to exercise your diligence right. They have to be very cognizant. They can't take things for granted. The threat is real. The threat is real and you are going to fall prey if you don't have that mindset.

Andrew Obadiaru:

So I believe the employees is really any organization, your employees is really your biggest wall of defense and if they are not clued in, you are gonna find there's no amount of technology you can deploy that can make up for it through awareness that you can give to your user community. So the more aware your user community is, I think the better secure your organization will be. So encourage them to be cognizant, to exercise due diligence. Whether it's just a random email, look at those emails again. Take the time to truly evaluate and just follow an acceptable use policy of your organization, where you can go, where you can't go, where you can do things of that nature within the different web out there, where you can go, where you can't go, where you can do things of that nature within the different web out there where you can deploy, you know. So when you stay in that mind space, I think you know you will do well, not just for yourself but also for your larger organization as well.

Joshua Crumbaugh:

So, as a way of breaking down bridges between cybersecurity and maybe other departments, I am asking everybody a question. You can feel free to say plead the fifth, or to tell me, or to answer however you want, but one question I have is have you ever fallen for a fish?

Andrew Obadiaru:

Personally, yes, yes, no, I haven't. I haven't Because I'm, you know, outside of my work. I'm just also a very skeptical person. I'm always thinking I've got to be a different motivation to this right.

Joshua Crumbaugh:

Now on my cell phone.

Andrew Obadiaru:

I get it on my cell phone as well, not just in my work email. And I'm also over the course of my career. I try not to. I try to use my work email purely for work. I don't manage both. So the chances of me getting something that is personal in my work email is very limited. Right, I know a number of folks that use their personal email, their personal computer, for you know companies. I try to separate that for that reason. So my emails don't come to anything that has to do with the work. My work environment is very different from my personal, so the chances of somebody sending me something that is personal on my work is very slim. So if I get that, then all of my antennae go up and I'm like what is it? So I'm not quick to click. By my very nature I'm very measured not quick to click.

Joshua Crumbaugh:

By my very nature, I'm very measured. I had to ask just because I had somebody approach me and they were just talking about how everybody's always ashamed to admit that they and, and how they had realized that that them and a bunch of their friends had fallen for the same scams. And if any one of them had just bothered to share their story, they all would have or at least a few of them would have been able to avoid it. And so I added that ever since to it, because I just think you know if it has happened, it makes us a little bit more human. I actually did. Once. I had a Google alert set up. I had them forwarded. I knew it was a legitimate Google alert, wasn't the fact that it was a fake Google alert. It was what was in the Google alert that got me.

Joshua Crumbaugh:

It was just the same title to one of my talks that I gave at a few conferences, and it was a video, and I was like, oh, I haven't seen it on this website. Uh, what did they write up about it? And so I click and it was blocked by, uh, by the WAF. But in that moment I was like, oh, I should have known better. Um, but they, they caught me and, uh, and it was that you know, that sort of instinctual whatever. And it was that sort of instinctual whatever and it was really interesting. It was definitely Russian, but what I found interesting about it was that when I started not just searching my name with sites that have been added in the last 24 hours, but I started looking at other speakers at these same conferences, I was finding all of those same types of bait pages for them too, but no it. I mean I, I personally had that one. My favorite story is when one of the fish that I created to get people like us almost got me.

Andrew Obadiaru:

I think for me it's a combination of luck. It's not just that I'm too savvy, I'm just too in a position not to be phished. I think I've been lucky I'm not being phished with a true, sophisticated phishing. Who knows how I would act in this situation, but I think today I know a lot more better than I did maybe 10, 15 years ago. I think it's also a cultural thing. You just have to take that culture into the. You know, when I get an email address right, if the email address does not match the body of the email, if you're sending me email, you're from some company and it's that whatever. Whatever, which is, yeah, inconsistent. That tells me something, is? It's kind of unusual here. So, uh, yeah, I just don't click for a variety. I just, you know, even when it's legit, I still kind of like exercise some caution before clicking. I think that's been very helpful to me, yeah.

Joshua Crumbaugh:

All right. Well, hey, I appreciate you joining us today. Thank you everybody for another, for joining us for another episode of fishing for answers and, as always, be sure, be secure.