Phishing For Answers

Phishing Exposed: Ashok Kakani’s Game Plan for Cyber Leadership & AI Risk Management

Joshua Crumbaugh, Founder & CEO of PhishFirewall


Cybersecurity is evolving, with human behavior at its core and the need for robust security awareness training becoming more critical. Through the journey of Ashok Kakani, we explore the intersection of personal experiences, phishing incidents, and innovative approaches to training that engage employees and build a resilient cyber culture.

• Ashok's transition from science to cybersecurity 
• Importance of front-line training in mitigating phishing 
• Real-life phishing incident and lessons learned 
• The role of AI in enhancing and complicating security 
• Dangers of inadequate PII management 
• Need for role-based training to combat specific threats 
• Engaging employees through gamification in security awareness training 
• Fostering a supportive environment in cybersecurity culture

Joshua Crumbaugh is a world-renowned ethical hacker and a subject matter expert in social engineering and behavioral science. As the CEO and Founder of PhishFirewall, he brings a unique perspective on cybersecurity, leveraging his deep expertise to help organizations understand and combat human-centered vulnerabilities in their security posture. His work focuses on redefining security awareness through cutting-edge AI, behavioral insights, and innovative phishing simulations.

PhishFirewall uses AI-driven micro-training and continuous, TikTok-style video content to eliminate 99% of risky clicks—zero admin effort required. Ready to see how we can fortify your team against phishing threats? Schedule a quick demo today!

Joshua Crumbaugh:

Hello and welcome to another episode of Phishing for Answers. Today I am lucky to have Ashok Kakani with me. He is the CISO of Compunnel. Did I get your name right?

Ashok Kakani:

Yes, yes.

Joshua Crumbaugh:

Well, hey, so I would love to learn a little bit about how you got into cybersecurity. I'm always fascinated by that part.

Ashok Kakani:

Yes, definitely. Thanks for allowing me to join your conversation today.

Ashok Kakani:

I started my journey as a scientist at ISRO, which is India's equivalent of NASA, and then moved into product development for my first ten years. I came to the US in 2006; that's when I joined JPMorgan Chase, and that is when I moved from monitoring into cybersecurity, because there was a requirement where they were looking for someone who could drive some of those cybersecurity programs. At that particular time, cybersecurity was not a separate group, and since I had a background in product development and had worked on multiple operating systems and databases, it was a perfect fit for me to take up that opportunity. So that's how I came into cybersecurity. I've been in cybersecurity for the last 16 years, working with a lot of CISOs and learning more things on a day-to-day basis.

Joshua Crumbaugh:

So I find most people accidentally found cybersecurity, because I think most of us stumbled into it. Maybe not the younger generation, but if you're above a certain age, cybersecurity was not a thing when we were younger, and so we were in technology and then one day learned about it. I know for me, I stumbled across BackTrack Linux and immediately learned that penetration testing was a thing and, no joke, quit my job that day, because I was like: this is what I'm going to do. Many years later, I don't regret it one bit. But yeah, very, very cool story. I'd love to start... you were telling me a little bit about some real-life phishing that had happened. Maybe you could tell me a little bit more about that story.

Ashok Kakani:

Yeah, the call into the help desk. A few months back, we had an incident with one of our partners, a customer, where someone claiming to be an IT manager reached out to the help desk. The help desk did the basic validation and then shared the password directly with the individual. Later they found out that somebody with malicious intent had gained access, and they got hit with a ransomware attack. Fortunately, there were some early alerts that came through their SOC, their Security Operations Center, so they were able to quickly jump in and remediate it. I was working closely with the team, and that's when we added some additional controls: people cannot get privileged access by default, and whenever any password reset happens, there is at least dual control in place, where the password is sent to the manager, not to the individual, and the individual has to reach out to the manager. That way we put some additional controls in place and, as part of that process, learned how we can avoid this going forward.

Ashok Kakani:

For our customers, we spoke with a few of the new vendors in the market who can be used to do that. The platform provides a mechanism where a user registers as part of the onboarding process. The next time, before the help desk person takes the call, it goes through this platform, where the person is validated; once validated, the call goes to the help desk. And also, wherever possible, we are trying to enable self-service password reset so that we avoid this particular issue going forward.
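
To make the dual-control piece of that concrete, here is a minimal Python sketch. The directory and notifier objects are hypothetical stand-ins for an identity provider and a messaging integration, not anything Ashok's team named:

```python
"""Minimal sketch of a dual-control password reset: the temporary
password goes to the requester's manager, never to the caller."""
import secrets
import string

def generate_temp_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def reset_password(username: str, directory, notifier) -> None:
    user = directory.lookup(username)
    manager = directory.lookup(user["manager"])
    temp = generate_temp_password()
    # Force a change at next login so the temp secret is short-lived.
    directory.set_password(username, temp, must_change_on_login=True)
    # Dual control: only the manager receives the secret; the employee
    # has to reach out to them, which breaks the help-desk-impersonation play.
    notifier.send(
        to=manager["email"],
        subject=f"Password reset issued for {username}",
        body=f"Temporary password: {temp}. Verify the request with your "
             "report in person before sharing it.",
    )
```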

Joshua Crumbaugh:

I like it. I think one of the most interesting things about these different stories of phishing attacks is how we adapt: what are we doing differently to prevent that from happening again? I love that you're adding the voice recognition on the call center lines. I think that's a great thing.

Joshua Crumbaugh:

For too long, call centers have been, well, one of the focal points of the social engineering community, from both an offensive and a defensive perspective, because it's generally all too easy to get the human to do something; these attacks prey on human nature. And I think too often security awareness programs neglect that element of training the customer service rep, that front-line defender, on the types of attacks they'll actually face in their job. It's more generic-type stuff, when they need to hear: hey, they're going to call you, they're going to pretend to be the customer, and they're probably even going to have some information that they'll use to get you to trust them. That, to me, is the type of stuff they need to be hearing. What do you think around that? What type of security awareness training should we be giving our front-line defenders?

Ashok Kakani:

So, yeah. Primarily, one of the controls we put in is for when somebody's calling you, because nowadays somebody can even spoof your phone number. The first thing we normally ask is to have them call back, rather than just responding to the call: get their number, or reach out to the number which is registered under their name. That is one way of validating that you're talking to the right person.

Joshua Crumbaugh:

It's a low-tech step that just works wonders at stopping an attack.

Ashok Kakani:

Yes, so that would be the first step. And then the second thing is what we call conditional access. Even if you want to reset the password, it is only allowed from corporate domain-joined systems, and not from any other place. So you can reset it, but you can only access that URL from a corporate domain-joined system. That is another way of preventing it. The user can always modify the password afterwards, but for that first level of change, we can put these types of controls in place.
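
A sketch of that conditional-access rule, reduced to a single policy check. Real policies live in your identity provider (Entra ID, Okta, and so on); the DeviceContext fields here are invented for the illustration:

```python
"""Toy version of 'the reset URL only works from corporate,
domain-joined devices'. Deny by default, allow managed endpoints."""
from dataclasses import dataclass

@dataclass
class DeviceContext:
    domain_joined: bool
    compliant: bool   # passes endpoint-management posture checks
    network: str      # e.g. "corp", "vpn", "public"

def allow_password_reset(device: DeviceContext) -> bool:
    # Unmanaged or non-compliant devices never reach the reset flow.
    if not (device.domain_joined and device.compliant):
        return False
    return device.network in {"corp", "vpn"}

print(allow_password_reset(DeviceContext(True, True, "corp")))     # True
print(allow_password_reset(DeviceContext(False, True, "public")))  # False
```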

Joshua Crumbaugh:

Okay, yeah, I like it. One of the themes I'm seeing is combining the low-tech process with the higher-tech capabilities to help fight and mitigate these attacks. I hope I'm not putting words in your mouth; that's sort of my takeaway, and I like it. I mean, I think there are a lot of really cool technologies that can help, but there are also a lot of simple, basic things we can do, in terms of process and procedure, that really mitigate these threats as well. Let me ask you a question about AI. From a security awareness perspective, what do you think about AI, and what do you think leaders in cybersecurity should be doing about it?

Ashok Kakani:

Yeah, I think primarily we have to live with AI, with all the new, latest technology and innovation coming in. So we, as security people, try to put some governance and controls around that. Now, with fake video and fake voice, even somebody listening to this in the future could replicate my voice and then be able to call my help desk, right?

Joshua Crumbaugh:

I was going to swap my face out for Brad Pitt's. Just saying.

Ashok Kakani:

Yeah, so that is the risk we are going to have. So in that particular case, we need to have a technology or process in place to identify deepfakes wherever possible, and that has to be part of our training and awareness program.

Ashok Kakani:

So recently we actually did a training program for one company where we had more than 100 people. We created a deepfake video of a CISO or a CEO coming in and talking about something completely irrelevant, just to show how easy it is for somebody to create a fake video of anybody in the company, and it can be done in less than a few hours with all the tools and technology we currently have. So we have been working closely with a few partners on how we can identify it: before the help desk gets a call, can we validate the authenticity of the caller, whether it is a human or non-human? And the second one is whether we can put some additional controls on what is available to a user. So we put some DLP controls in place: if somebody is trying to put something into ChatGPT or Gemini or other places, how can we avoid putting PII data into it? Because that can possibly end up as public information.
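
As a toy illustration of that kind of outbound DLP check, here is a minimal Python sketch. Real DLP products use far richer detectors and enforcement points; the patterns and the gate function here are hypothetical and only catch the simplest cases:

```python
"""Scan text headed to an external LLM for obvious PII and block it."""
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(prompt: str) -> list[str]:
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

def gate_llm_request(prompt: str) -> str:
    hits = scan_for_pii(prompt)
    if hits:
        # Block (or redact) instead of forwarding to the public model.
        raise ValueError(f"Blocked: possible PII detected ({', '.join(hits)})")
    return prompt  # safe to forward

gate_llm_request("Summarize our Q3 roadmap")        # passes
# gate_llm_request("Customer SSN is 123-45-6789")   # raises ValueError
```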

Joshua Crumbaugh:

Yeah, so there are two different directions I want to go after that, and I'm a little conflicted as to which path. Let's talk about PII and using PII. I think there's an educational aspect to it, but also a DLP aspect. What's your approach? I mean, how do we prevent people from dumping that sensitive information into an AI? I don't know about you, but I actually use an AI all day long, every day, because it helps me do things much more quickly, and I imagine a lot of people are that way. Now, I'm in security, so I'm obviously very aware of what I'm putting into it, but I don't think everybody necessarily is. How do you deal with that risk? It's something everyone's facing, and I don't know that we have any good answers, so I'm just curious what your approach is. What's your answer there?

Ashok Kakani:

So I think it starts with the first one: the company coming out with a policy on whether AI usage is allowed in the company or not, and then putting some guardrails around that, whether you want to allow employees to use any of those AI and LLM models, et cetera. That would be step one. And then step two is: if the company wants to adopt some of those things, what are the things we are going to enable? For example, maybe Microsoft Copilot.

Ashok Kakani:

So in that particular case, do you have guardrails around what data is visible to individuals, et cetera? We call it AI governance. That means we need to put the tools and technology in place. Number one: do you have a mechanism to scan all the data you have, classify it, and label it? That means you configure your AI model to say: if you are sharing information with an end user, only share information that has been classified as public or internal, and don't share any information classified as sensitive or confidential.
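
A minimal sketch of that label-based gate, assuming classification has already happened upstream; the Document shape and the label names are stand-ins for whatever your classification tooling actually produces:

```python
"""Only documents labeled public/internal are eligible for LLM answers."""
from dataclasses import dataclass

ALLOWED_LABELS = {"public", "internal"}   # never "sensitive"/"confidential"

@dataclass
class Document:
    doc_id: str
    label: str
    text: str

def filter_for_llm(docs: list[Document]) -> list[Document]:
    # Drop anything not explicitly cleared for end-user sharing.
    return [d for d in docs if d.label in ALLOWED_LABELS]

corpus = [Document("1", "public", "Press release..."),
          Document("2", "confidential", "M&A plans...")]
print([d.doc_id for d in filter_for_llm(corpus)])  # ['1']
```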

Ashok Kakani:

Right, so we can put some governance and controls around that, so you can see exactly what a user is allowed to get. That is number one. And number two is that we can also do what we call tokenization. Let's say you want to use an AI model and you ask it to write some information about Ashok. Rather than giving it my name and my personal information, the platform can tokenize and anonymize it and send it to the LLM, and once the data comes back, it de-tokenizes it and gives it back to you in the format you're looking for. So these are different mechanisms, different options, to prevent unnecessary data leakage.
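
Here's a bare-bones sketch of that tokenize-then-de-tokenize round trip. Production systems detect PII with NER models rather than a hand-supplied list, and llm_call is a hypothetical stand-in for whatever model API sits outside the trust boundary:

```python
"""Swap PII for opaque tokens before the prompt leaves, restore after."""
import uuid

def tokenize(text: str, pii_values: list[str]) -> tuple[str, dict[str, str]]:
    mapping = {}
    for value in pii_values:
        token = f"<PII_{uuid.uuid4().hex[:8]}>"
        mapping[token] = value
        text = text.replace(value, token)
    return text, mapping

def detokenize(text: str, mapping: dict[str, str]) -> str:
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

def safe_completion(prompt: str, pii_values: list[str], llm_call) -> str:
    anonymized, mapping = tokenize(prompt, pii_values)
    response = llm_call(anonymized)        # only tokens leave the boundary
    return detokenize(response, mapping)   # names restored locally

# Usage: safe_completion("Write a bio for Ashok Kakani", ["Ashok Kakani"], my_llm)
```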

Joshua Crumbaugh:

No, no, that makes perfect sense, and I like the approach. One of the things I've been thinking about: I see a lot of companies deploying an internal LLM and giving it a lot of their business data. I'm curious, have you done any of that? Or are you still avoiding that?

Ashok Kakani:

Yeah, I think that is something, right: a dedicated LLM that you take and train with your own data is the best practice we recommend to companies, rather than using so many of the open-source ones. Have something built so that you can train that model specifically for your needs. That way the data resides within your boundary, so you can put every possible control around that particular model.

Joshua Crumbaugh:

So that's one of the things I've been curious about with that: how are you handling role-based access? If we dump a bunch of information about the company in there, there's obviously information that the CEO is going to have and want in the LLM, but that we don't want an intern having access to. So how do you handle training the model on all this different information, but then telling the model who it can and can't give that information to?

Ashok Kakani:

So actually, the model will not be doing that. We put in what we call a governance layer, which enables role-based access. That means when I log in as an individual, if I'm part of the security team, role-based access says: you have access to all the data. If somebody logs in as an HR manager, then we say this particular HR manager can be given access only to the data that has been classified as HR. So that means when you're-
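
That governance layer is straightforward to sketch. Here's a hypothetical role-to-label mapping enforced in the retrieval layer in front of the model; the roles, labels, and corpus shape are all invented for the example:

```python
"""Role-based filtering in front of the LLM, not inside the model."""
ROLE_LABELS = {
    "security": {"public", "internal", "hr", "finance", "confidential"},
    "hr_manager": {"public", "internal", "hr"},
    "intern": {"public"},
}

def docs_visible_to(role: str, corpus: list[dict]) -> list[dict]:
    allowed = ROLE_LABELS.get(role, {"public"})  # default to least privilege
    return [doc for doc in corpus if doc["label"] in allowed]

corpus = [{"id": 1, "label": "hr", "text": "Salary bands..."},
          {"id": 2, "label": "finance", "text": "Q3 forecast..."}]
print([d["id"] for d in docs_visible_to("hr_manager", corpus)])  # [1]
print([d["id"] for d in docs_visible_to("intern", corpus)])      # []
```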

Joshua Crumbaugh:

Dynamically assigning different- I like that. That's a good answer and a great approach to it, because I have been curious about how that's being dealt with, and I think that's a great way of dealing with it. How successful has the program been?

Ashok Kakani:

Yeah, so currently we are testing it with a few customers, and I think we are seeing results. What makes it work is primarily the data classification. When you have the data classified, and you have this governance model, if you classify some data as, let's say, HR or finance or confidential or secret, you can always put in a control that says: okay, this user belongs to HR, so the data given to the LLM, or that can be returned by the LLM, can only be things related to HR. It is a better way of implementing it, and I think it's not a complex thing to solve.

Joshua Crumbaugh:

Makes sense. That's great, great advice there. Okay, pivoting a little bit. I have one question that I love to ask every... well, I have a bunch of questions I love to ask every guest, but the big question of the day is: carrot or stick? When trying to build a culture, if you could only use one, carrot or stick, what would you choose, and why?

Ashok Kakani:

So I think each one has its own benefits. But when I look into it, humans are the weakest link. That means if I'm constantly being pressured about something, then even accidentally I can click on it, so the stick may not work. Carrot is the one I'd use: it's more about encouraging the team to do the right thing, and even if you fail, are you reporting it on a timely basis? That is what I'd be looking into. But on the other side, whether it is carrot or stick:

Ashok Kakani:

What is your layered defense? What preventive controls do you have in place? Even if something happens, how quickly you can recover from that incident is the strongest area you need to concentrate on, because things can happen, whether it is today or tomorrow or some other day. But do you have layered defense? For example, if somebody is clicking on a malicious link, do you have a mechanism to block that link? That's number one. Or even if somebody clicks on the link,

Joshua Crumbaugh:

do you have a mechanism to detect the malware?

Ashok Kakani:

Yeah, so if something gets downloaded, do you have a mechanism to detect it? Or even if it gets installed, do you have a mechanism to block it? So there are different ways, or different levels, where you can stop it, and then how quickly you can respond to the incident, et cetera, is what makes it much better. But as far as your question is concerned, I think I'll go with carrot, if you are specifically asking carrot or stick.

Joshua Crumbaugh:

I like that answer, and I do agree on the layered defense, 100%. I'm a really, really big fan of carrot.

Joshua Crumbaugh:

I think that sometimes sticks are overused in our industry, often out of frustration, but largely because, in my opinion, when I look at our industry, I really do see that for the most part we've got a lot of very technical people who are very, very good at cybersecurity but maybe don't necessarily understand psychology and behavioral science, and they're running these programs. And what does that lead to? We get these overly complex security awareness training programs that don't connect with the user, and so the user goes: oh well, this is cyber, it's too complex to understand, I'm just not going to listen. I think we almost do ourselves an injustice because of that. So that's always been one of my anthems, or cries, if you will: we've got to really think about the psychology and focus on it. I call it social engineering for good. I'm curious, when it comes to deploying the carrots or building that security-first culture, are there any psychological or behavioral science tips,

Joshua Crumbaugh:

or just random tips that you've discovered, that can help make those security awareness programs that much more effective or help really drive that culture?

Ashok Kakani:

Yeah. So what I try to do, pretty much, is not have one single training program across the whole company, because that's not going to work. You need a customized training and awareness program for each and every type of role somebody is doing. If somebody is in HR, we may have separate awareness, specifically role-based training: if it is HR, what are the different ways people can reach out to you? It can be maybe a new employee who is trying to send a document with some malicious code inside it. Or if it's somebody on the finance team, you may get a phishing email that looks like it's from the CEO asking for a wire transfer or something like that. So what we try to do is give them role-based training so that they know exactly what they can do and what they are not supposed to do.

Joshua Crumbaugh:

I had to do the applause, because no one brings up role-based training; normally I have to bring it up. I'm impressed. I love it; it's a great approach. Go ahead, continue.

Ashok Kakani:

Yeah. And the second one is that, even with their current permissions, they must not be able to do something damaging on their own. You need what we call just-in-time access, or maybe dual control, in place wherever possible, so that even if something happens, you're not getting impacted. That is another one I normally recommend. And the other thing I do: if somebody is constantly getting phished, we work with them, identify the critical activities they do, and move them from continuous access to just-in-time access. So if somebody is clicking on a phishing email every time, rather than just blaming him, we want to see what controls we need to put around him or her, so that even if they click on it, we can stop the leak from there.
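
Just-in-time access is easy to picture with a small sketch. This is a toy in-memory version, assuming an approval step has already happened; real deployments use PAM or IdP features (for example, time-bound group membership) rather than anything hand-rolled:

```python
"""Grants exist only for a short window; no standing privileges."""
from datetime import datetime, timedelta, timezone

GRANTS: dict[tuple[str, str], datetime] = {}   # (user, resource) -> expiry

def grant_jit_access(user: str, resource: str, minutes: int = 60) -> None:
    GRANTS[(user, resource)] = datetime.now(timezone.utc) + timedelta(minutes=minutes)

def has_access(user: str, resource: str) -> bool:
    expiry = GRANTS.get((user, resource))
    if expiry is None or datetime.now(timezone.utc) >= expiry:
        GRANTS.pop((user, resource), None)   # expired grants are cleaned up
        return False
    return True

grant_jit_access("asmith", "backup-server", minutes=120)
print(has_access("asmith", "backup-server"))  # True, inside the window
print(has_access("asmith", "prod-db"))        # False, no standing access
```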

Joshua Crumbaugh:

Yeah, I really think that adaptive controls based on individual user risk have got to be part of the security program and the plan, so I like that direction, what you're doing. The just-in-time access, I actually think that's the first time that's come up on this show, so that's just great, great advice. So, you mentioned a few roles that you focus on.

Joshua Crumbaugh:

Any other roles that you think are highly critical? When somebody's planning on doing role-based training and they're going to go down that rabbit hole, where do they start?

Joshua Crumbaugh:

Because it's a big job. You obviously can't cover every role in the organization, but there are some that are very critical. Because, as an ethical hacker, what I found was: sure, yes, the user is often the weakest link, and that did get us into the network. But it wasn't the average user's mistakes that let us move around within the network, escalate our privileges, and get full control of everything. It was the mistakes of the different departments within the organization, like development not using secure development practices, or the IT team keeping a password spreadsheet or reusing the same admin password on a million different machines, or whatever it happened to be. And it sounds like you're doing a lot to address those mistakes. Maybe you could talk a little more about how you address those other aspects of the human layer, the human element, and where we start, where we prioritize our efforts.

Ashok Kakani:

Yeah, so there are a few things we normally look into. It's once again layered defense. Number one, we do what we call a password spray attack simulation. That means we run the most commonly used passwords against the company and make sure nobody is using them. That runs on a very regular basis, constantly, against our employees, so if somebody is reusing a commonly seen password, we know about it. The second thing is that we get a threat intel feed from outside, from the dark web, on whether any of our employees' information has been compromised recently. If we come across any of those, we immediately work with them and rotate the password so that risk is also eliminated.
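
To make the spray simulation concrete, here's a rough Python sketch of the idea: a short list of common passwords tried across every account, slowly, rather than many passwords against one account. The try_login callable and the pacing numbers are hypothetical; run anything like this only against systems you own, with your IdP's lockout policy in mind:

```python
"""Audit for reuse of the most common passwords across all accounts."""
import time

COMMON_PASSWORDS = ["Winter2024!", "Company@123", "Welcome1!"]

def spray_audit(usernames: list[str], try_login) -> list[tuple[str, str]]:
    weak = []
    for password in COMMON_PASSWORDS:      # one password per round...
        for user in usernames:             # ...across all users (avoids lockout)
            if try_login(user, password):
                weak.append((user, password))
        time.sleep(30)                     # pace rounds below lockout thresholds
    return weak

# Any (user, password) pair returned gets an immediate forced reset.
```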

Ashok Kakani:

And then the third thing we look into is: if something happens, what then? If there is a fire in your house, what are the four things you will try to protect? You need to look at the same thing from your company's perspective: if something happens, what are your crown jewels? That means we need to look into all the different scenarios. We call it a tabletop exercise: we run a simulation of what happens, what the different points of failure are, whether you have controls around them, whether you have monitoring around them. In that case, you will be able to identify the weakest points and then eliminate them, or put some compensating controls around them. That would be my day-one approach to handling these types of requirements.

Joshua Crumbaugh:

I like it. I think that's really, really great advice there. What tips do you have to make it more engaging? I know security awareness training is so often boring. Do you have any tips to make it more entertaining for the users?

Ashok Kakani:

So, a few things. What we have done in a previous company is make it more of a puzzle, more interactive, like spot-the-differences. Send a phishing email and say that whoever identifies the five red flags first will be given a $25 gift card or something like that. Also recognize whoever has reported the most phishing in the last year; recognizing more people creates more interaction. And then reduce the training modules from one hour to quick two-to-three-minute videos. You can actually make it more like a capture-the-flag, even in phishing and awareness, and give out medals or recognition saying somebody has reached a particular level. Make it more gamified so that they don't feel they are obligated to do it.

Joshua Crumbaugh:

No, I actually really like it, and it reminds me of a conversation I had. I did a penetration test for a hedge fund in New York City, and we were doing the readout at the end, talking about all of the different social engineering vulnerabilities they had, and he says: well, we're already doing the training and, frankly, my people's time is money. I won't allow more than one hour a year of training. And on its surface I was like: that's not enough, and we're already failing, it's not working.

Joshua Crumbaugh:

But that conversation, interestingly enough, way down the road, along with a couple of other things, actually led to me founding PhishFirewall, because one of the things I realized was that an hour is not much time if you use it in one big chunk, but if you break it into one-minute chunks, you can be in that user's inbox literally every single week of the year and not even eat up an hour of their time, because you're only taking a minute or less at a time. So I like that: higher-frequency, shorter training bits, probably much simpler, more simplified, right? Awesome. Now, you mentioned gamification, and to me, phishing simulations and gamification really go together, because I've all too often run phishing simulations and had somebody get upset. Let's just start there: have you ever had that experience, where you ran a simulation and it caused some sort of, you know, uproar?

Ashok Kakani:

Yeah, I think that was a funny example. What we did was send a phishing simulation saying the bonus was going to be released next week.

Ashok Kakani:

And then a lot of people started reaching out to the HR team: hey, I got an email that a bonus is going to be released, and I haven't even had my appraisal yet. It became more of a complaint, and we had to pull it off, because it caused so many people to call HR about a process that had not been completed, et cetera. That was one funny incident.

Joshua Crumbaugh:

I've had that happen too. We were playing around... I wouldn't say playing around, because it made its way into production, but we were doing AI-based phishing, and this was years and years ago. What it would do is look at news feeds for any specific type of content that would give us a really great phishing idea, or that would tie to one of our, I guess, pre-made phishing templates.

Joshua Crumbaugh:

So one of these alerts triggers on a client of ours exiting bankruptcy, and our system fires off all of these benefits-alteration phish to their personnel. The one and only time I ever spoke to their CEO: my relationship was with their CISO, but I get a call from their CEO. He is just mad, I mean, he's pissed, and I apologize. I explain that the technology is just doing what the bad guys do; we're trying to emulate the threat. Anyway, the next day goes by, and I don't hear from the CEO again, but I do hear from the CISO, and he says: hey, we just had a bunch of phish come in that were very similar to the one you ran yesterday, but they were all malicious, and no one fell for them. Thank you for running that. And I thought it was funny just how quickly, within 24 hours, we went from "this is the most terrible thing on earth" to "thank God we did that, it prevented us from having a breach."

Joshua Crumbaugh:

But I think the lesson for me there was that we really have to be careful when phishing, or we can create a very negative employee experience. And so I guess my answer to that was: why don't we gamify phishing? Instead of just randomly sending an email and trying to trick you, why don't we tell the user: hey, we're going to send one, here are the red flags you need to be on the lookout for, and make a game of cat and mouse out of it? To me, that was the real answer to addressing it, because you really don't have any good options: either you do something like that, or you just don't run those sensitive phishing simulations, and then that opens you up to risk. So, I've said a lot. What are your thoughts around that?

Ashok Kakani:

Yeah, I definitely agree on that particular one. On the other side, you would have seen: if I take a day off, the next day when I come in, I easily have around 100 to 200 emails sitting in my inbox. If you're talking about one email weekly, we're talking about more than a thousand emails sitting in the inbox over time. So what I have done is recognize that email communication alone may not help. What we used to do is put it into the banner, the screensaver; we try to put something on that. And we also put it on our internal website: okay, hey, this week is our phishing week, and these are some of the scenarios. That way, even if they miss the emails, at least they have other places to look.

Joshua Crumbaugh:

Yeah, no, I like it, and I think that's great advice. Now, we are running a little bit low on time, but before we wrap, I do want to spend a little more time on phishing. I personally believe that phishing is incredibly important, but I think it gets a really bad rep sometimes. What are some of the tips you would have around phishing? Things like: how frequently should we be doing it? What types of phishing should we be running with our employees? Just any suggestions you have around phishing simulations.

Ashok Kakani:

So primarily, when I look into it, I'm looking at it from the role-based angle. So that pretty much is-

Joshua Crumbaugh:

Well, they're already being phished based on their role by the bad guys, so you almost have to do role-based phishing, right?

Ashok Kakani:

Yeah. So primarily, what is the risk? Normally, what I do, just as an example, is a frequency of once in three months. If we are constantly seeing a few people who are clicking on those links, then we increase the frequency for them. For the people who are doing well, we reduce the frequency, but we still send awareness information as much as possible. And for whoever is constantly getting phished, we also try to put some additional compensating controls in place, saying that if they want to do some critical activities, they may have to jump onto another machine, et cetera. And then another question I normally ask is: if an executive does not need access to any system other than email, why does he need access to VPN?
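
As a back-of-the-napkin illustration of that risk-adaptive cadence, here is a tiny Python sketch; the thresholds and intervals are invented for the example, not numbers Ashok gave (beyond the quarterly baseline):

```python
"""Quarterly baseline; step up for repeat clickers, ease off for reporters."""
def next_interval_days(clicks_last_year: int, reports_last_year: int) -> int:
    baseline = 90                          # once every three months
    if clicks_last_year >= 3:
        return 14                          # frequent clicker: simulate biweekly
    if clicks_last_year >= 1:
        return 30
    if reports_last_year >= 5:
        return 120                         # strong reporter: ease off
    return baseline

print(next_interval_days(clicks_last_year=4, reports_last_year=0))  # 14
print(next_interval_days(clicks_last_year=0, reports_last_year=6))  # 120
```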

Joshua Crumbaugh:

Yeah, agreed. I think limited access, or at least the principle of least privilege, wherever possible, right?

Ashok Kakani:

So if somebody does not need access to that, I normally go back and review their role and what access they currently have, and have a mechanism to review it: if they have not used that access for the last 30 or 60 or 90 days, depending on the company, do we really need them to have that access continuously? Remove that access and make it available as on-demand access. That means if I'm doing a backup once in three months, and I need access to the backup server only in that particular time window, why do I need to have access for all three months? Rather, I can be given access automatically only on that particular day, so that even if I'm getting phished, the access to that backup server is not there at all. Reduce your attack surface as much as possible, and then provide all the tools and technology for people to learn.
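
The 30/60/90-day review he describes is simple to sketch as a stale-entitlement sweep. The last_used timestamps would come from IAM logs in practice; users, resources, and the 90-day cutoff here are illustrative:

```python
"""Flag entitlements unused past a cutoff and convert them to on-demand."""
from datetime import datetime, timedelta, timezone

STALE_AFTER_DAYS = 90

def stale_entitlements(entitlements: list[dict]) -> list[dict]:
    cutoff = datetime.now(timezone.utc) - timedelta(days=STALE_AFTER_DAYS)
    return [e for e in entitlements if e["last_used"] < cutoff]

access = [
    {"user": "asmith", "resource": "backup-server",
     "last_used": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    {"user": "asmith", "resource": "email",
     "last_used": datetime.now(timezone.utc)},
]
# Stale grants get revoked and re-issued just-in-time when actually needed.
for grant in stale_entitlements(access):
    print(f"revoke {grant['resource']} from {grant['user']}")
```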

Joshua Crumbaugh:

You know, a couple of the roles you mentioned don't get as much attention as they should. One of them is the heavily targeted user: a lot of people in your C-suite, anyone high-profile, your sales team. These people are more public, they're much easier for the bad guys to find, and they get targeted more, and so I've always thought you need that "hey, you're a heavily targeted individual, here's what you need to know" training. Another one that I think is really important is the high-access, privileged user, whether it's a network admin, a domain admin, or just a VP or a finance executive or an HR executive. They have a lot of access, and so I think you also need that "with great power comes great responsibility" type of training as well. But I love how much we focused on role-based training today, because that's the future. It's been proven that training is 15 times more effective when it's contextualized to the individual and their role, because when it's generic they don't listen, but when it's specific to them it hits home and is so much more effective. So I love that.

Joshua Crumbaugh:

You even brought up role-based phishing; normally I have to bring that up. Another thing that I just think is so incredibly important: your finance people need to be phished in the same way they're going to be targeted every day, so that you can build what I like to call human virus definitions.

Joshua Crumbaugh:

And so running these role-based phishing simulations, to me, is a critical part of understanding your risk and addressing it. From there, it also gives you the intelligence for those adaptive controls you were talking about. If this person's clicking on every phishing simulation, not reporting anything, and not doing the training, well, I know they're a riskier user, so I build additional controls around them, like segmenting them off the network, restricting their access to on-demand, and different things like that. I think that's the exact approach we have to take. We can't mitigate everyone's behavior, but for those we can't, we have compensating controls. And what I like about it is that it makes things better for the average user: when you can make the controls really strict for the person who needs it, but lighten them up for the person who doesn't, to me that's a better experience, and it means they like cybersecurity better.

Ashok Kakani:

Yep. And another thing I normally say: if you are going out of office, there is no need to set an out-of-office reply for external senders; you can put it on just for your internal users. Other people don't need to know, other than the ones you really know. That is another entry point, because if I set an out-of-office for anybody outside, it can become an entry point for somebody to phish my team members. They know the scenario, and they can say: I'm currently out in Miami, I don't have access to my laptop, and this is what I need you to do.

Joshua Crumbaugh:

Yeah, no, actually he was out. He told me to reach out to you.

Ashok Kakani:

Yes, you're right.

Joshua Crumbaugh:

Not only that, I actually wrote a script. In the early days of this company, before we were even called PhishFirewall, I was directly involved in email marketing. The first time we bought a big list of like 20,000 email addresses and sent a blast, I got like 500 out-of-office responses off that first one. And when I opened them up, I realized: man, they've got a lot of personal contact information in the signatures inside these things. If I write a script, I can scrape this and add it to our CRM. So I wrote a script, and it would grab all of this information and use it to supplement the data inside our CRM. Maybe I shouldn't admit that, but I think too often people don't think about how much data they might leak with something as simple as an out-of-office response, and that can be used by social engineers in particular.
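
Joshua's actual script isn't shown anywhere, but the idea is easy to picture. Here's a rough, hypothetical reconstruction using only the standard library; the patterns are simplistic and purely illustrative of how much an out-of-office signature leaks:

```python
"""Pull contact details out of an out-of-office reply body."""
import re

PHONE = re.compile(r"(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
TITLE = re.compile(r"^(?:.*(?:Director|Manager|VP|Chief|Engineer).*)$",
                   re.MULTILINE)

def scrape_ooo_reply(body: str) -> dict[str, list[str]]:
    # Signatures leak phones, alternate contacts, and job titles,
    # all of which feed pretext for a follow-up social engineering call.
    return {
        "phones": PHONE.findall(body),
        "emails": EMAIL.findall(body),
        "titles": TITLE.findall(body),
    }

reply = ("I'm out until Monday. For urgent issues contact "
         "jane.doe@example.com or call 555-867-5309.\n"
         "Bob Smith, Director of Finance")
print(scrape_ooo_reply(reply))
```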

Ashok Kakani:

Right, yep.

Joshua Crumbaugh:

All right, any final words of advice or words of wisdom for our audience before we jump off today?

Ashok Kakani:

I think, primarily, my recommendation is always that humans are the weakest element, and even I can click on a malicious link at any point in time, accidentally. So that means you need as much multi-layered defense as possible, and then just support each other. Don't point a finger at anybody, because humans do make mistakes; support them and move on. It's teamwork; it's not individual in any company.

Joshua Crumbaugh:

No, I love it. I love the advice; it's great advice. We've got to be very supportive and encouraging of our co-workers. Just because cybersecurity is my area of expertise doesn't mean it's everybody else's, and I think respecting them and understanding that they're very skilled in a different area that's not cybersecurity is just incredibly important. So thank you. It's been a wonderful podcast today, and we'll be back again, I think Monday, with another episode. Thank you so much for joining us. Have a great day. Bye.

Ashok Kakani:

Thank you.