
Phishing For Answers
“Phishing for Answers” brings you insider knowledge from the front lines of cybersecurity. Listen in as we speak with seasoned professionals about overcoming phishing attacks, managing user training, and implementing solutions that work. From practical insights to actionable strategies, this podcast is your guide to strengthening security awareness across your organization.
AI in Healthcare Security: Oracle Health's CISO Speaks
Steve Fridakis, CISO of Oracle Health, shares his journey through cybersecurity across industries and explores the transformative impact of AI on healthcare security.
• 25 years of cybersecurity experience spanning airlines, United Nations, media (HBO), and healthcare
• Common security foundations across industries despite significant differences in threat landscapes
• AI enabling physicians to capture diagnoses using natural language while validating against patient history
• AI security tools helping validate systems and correlate petabytes of daily log information
• Current cybersecurity mindset shifting equal focus to recovery capabilities alongside prevention
• Zero Trust implementation minimizing breach impact when inevitable human errors occur
• Simple security fundamentals remaining the root cause of most breaches despite evolving threats
• Leadership in security requiring empathy and understanding that people need to do their jobs
• Building security cultures based on shared responsibility rather than compliance mandates
• Success in cybersecurity measured by resilience and recovery speed rather than perfect prevention
Building effective security requires understanding that "our people—not our tech, not our firewalls—they are our first and last line of defense."
Joshua Crumbaugh is a world-renowned ethical hacker and a subject matter expert in social engineering and behavioral science. As the CEO and Founder of PhishFirewall, he brings a unique perspective on cybersecurity, leveraging his deep expertise to help organizations understand and combat human-centered vulnerabilities in their security posture. His work focuses on redefining security awareness through cutting-edge AI, behavioral insights, and innovative phishing simulations.
PhishFirewall uses AI-driven micro-training and continuous, TikTok-style video content to eliminate 99% of risky clicks—zero admin effort required. Ready to see how we can fortify your team against phishing threats? Schedule a quick demo today!
Joshua Crumbaugh:Hello and welcome to another episode of Phishing for Answers. Today we are very lucky to have Steve Fridakis, the CISO of Oracle Health, with us. Steve, why don't you introduce yourself? Tell us a little bit about yourself.
Stephen Fridakis:Thank you, Joshua, it's a true pleasure to be here with you today. I'm the Chief Information Security Officer of Oracle Health, one of the business units of Oracle. We concentrate on running several components of the clinical system, ambulatory as well as acute care, and that involves several of the biggest health systems in the world, including the Department of Defense, Veterans Affairs, as well as healthcare systems for certain countries like Sweden. So I hope to be able to convey to your listeners what we do, the technologies we utilize, as well as my personal philosophy of dealing with cybersecurity every day.
Joshua Crumbaugh:Awesome. Well, I'm really excited. How did you get into cybersecurity to begin with?
Stephen Fridakis:I've been in cybersecurity for about 25 years. I started my career writing software for airlines in several countries around the world, starting in my native Greece, moving on to the UK and Belgium, and then ending up in the US. I was a consultant doing a lot of work around cybersecurity compliance and such, and my first opportunity to be a CISO was with the United Nations.
Stephen Fridakis:I stayed with the United Nations for quite a few years and then migrated to media with HBO and other companies. I've been in healthcare by way of Google Health and now Oracle Health for about six years. There are a lot of commonalities across industries. Obviously, I wouldn't say that media and healthcare are that similar. But when it comes to looking at cybersecurity regulations, controls, there are a lot of commonalities and a lot of things that you can learn from one industry and move to another, so I'm sure we'll talk about that.
Joshua Crumbaugh:Yeah, well, I was actually going to ask that. Do you have any examples?
Stephen Fridakis:Yes, everything that we do has to do with managing risk, so I want to talk about two things: commonalities, and also things that are quite different. What I learned when I was working at HBO is that media security, managing content, is very much bound to a timeframe.
Stephen Fridakis:What I mean by that is that when something has not aired, it is the most important thing you do and you have to safeguard it. However, once something airs, then pretty much anyone can have it for free, subject to subscription rights and so on. But there is something to be said about understanding where something is in this timeframe of producing and broadcasting. That's quite important for media. Now let's talk about commonalities, though.
Stephen Fridakis:The ability to understand your ecosystem, your subscribers, your partners, and how that extends is quite similar across industries. In healthcare, we have our immediate customer, which can be a hospital, but hospitals rely on doctors, and they rely on what we call health exchanges that are quite broad. So it is quite challenging; as a matter of fact, there is no perimeter, but you do need to understand how many people access your applications, your information, your data, and how that extends as your ecosystem broadens. And you need to truly understand that, because that defines the kinds of threats, the kinds of decisions you need to make, and the impact those decisions will have. That is the case across all industries.
Joshua Crumbaugh:Okay, yeah, that makes perfect sense. So, pivoting a little bit: AI. I mean, there's crazy news coming out every day. Like over the weekend, you've got a new transformer that Google created that now has memory built in, short-term and long-term memory. So AI is not only skyrocketing and getting much more intelligent daily, but it sort of comes with that double-edged sword. I want to start with the good instead of focusing on all the negative that comes with AI. What are some of the cool things that you're excited about as it pertains to AI?
Stephen Fridakis:So I'll tell you about a couple of things. First of all, I am excited, but at the same time I am becoming a bit of a stickler for specificity when it comes to AI. It is the quintessential buzzword. For years now we have been using natural language processing in healthcare.
Stephen Fridakis:We have been using pattern matching in healthcare, and obviously that evolves. We do see the ability for physicians to just capture diagnoses and instructions using standard language, and that is validated and embellished so that the patient can understand what's going on. But at the same time, talking about patterns and such, the patient can now benefit from a system that says that something the doctor just said is contraindicated, given the patient's history or given other medical information that the system has access to. That's powerful.
Joshua Crumbaugh:Oh, I agree.
Stephen Fridakis:As a cybersecurity person, though, I find that we now have the opportunity to utilize advanced AI as a cloud security tool that can actually truly help us validate our systems, determine whether the indicators that we find, acquire, or produce are applicable, and, frankly, truly allow us to correlate the log information that we have in a much more advanced way, in ways that simple relations or simple associations could not do before. We collect, on a daily basis, petabytes of information across all our systems. It is quite difficult to know, once we associate that with some intelligence, whether something is what we call in play: are we actually being attacked, and how are we being attacked?
Stephen Fridakis:Having this information serves multiple purposes. First of all, the most rudimentary: we have to let our customers know, and some of that is quite ominous. We have to notify our customers within 48 to 72 hours at most if it is a confirmed attack that involves their systems and data. AI brings an incredible capability for us to be able to do this in a trustworthy, timely manner. So I am very hopeful, because it's still evolving, and I'm very excited by what I see and by the applications that I see in healthcare and other areas. But you did say double-edged sword, so I'm going to let you ask the question: we talked about the good, now there's the ugly.
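The kind of log correlation Steve describes, turning raw event volume into an "are we in play?" signal, can be sketched at toy scale. Everything below is illustrative, not Oracle Health's actual pipeline: the event fields, the failed-login-burst rule, and the threshold are all invented for the example.

```python
from collections import defaultdict

def correlate(events, burst_threshold=3):
    """Flag sources where a burst of failed logins is followed by a success,
    a crude 'in play' indicator. Events are assumed sorted by time."""
    failures = defaultdict(int)
    in_play = []
    for ev in events:
        src = ev["src"]
        if ev["type"] == "login_failed":
            failures[src] += 1
        elif ev["type"] == "login_ok":
            if failures[src] >= burst_threshold:
                in_play.append(src)  # success after a burst: investigate
            failures[src] = 0        # success resets the failure counter
    return in_play

events = [
    {"src": "10.0.0.5", "type": "login_failed"},
    {"src": "10.0.0.5", "type": "login_failed"},
    {"src": "10.0.0.5", "type": "login_failed"},
    {"src": "10.0.0.5", "type": "login_ok"},
    {"src": "10.0.0.9", "type": "login_ok"},
]
print(correlate(events))  # ['10.0.0.5']
```

At petabyte scale the same idea would run as streaming aggregation inside a SIEM rather than a Python loop, but the shape of the logic, correlating sequences of events per source, is the same.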
Joshua Crumbaugh:Well, before we get to the bad, I was having a conversation the other day with the CISO of Alberta Health Services, one of the largest hospital systems in Canada, actually I think the largest, and he was talking about how they had a few rooms, and it's just a pilot program right now, so it's not fully implemented, but they've done this pilot where they've got these AI microphones in the room. It listens to the entire visit and creates the notes for the doctor; the doctor simply has to approve them, and it's saving three to five minutes per patient, per doctor. Just the massive amount of time savings that will come out of something like that alone is, I think, very interesting for the future.
Joshua Crumbaugh:The double-edged sword, of course: we're seeing attacks skyrocket. I saw, I think it was Google, that had published something saying they had seen a 700 percent increase in attacks in 2024, and that's every type of cyber attack that could be targeting them. So I don't know if everyone saw that level of increase, but AI is certainly making attacks much more frequent and often more sophisticated. I'm curious, what are you seeing there, in terms of attackers utilizing AI to make the process of targeting you easier?
Stephen Fridakis:I can only guess. I'm not sure about the specific number, and the reason for that is that we cannot claim that we know of all the attacks that we are being targeted with. That's fair. I trust that the number that you quote is probably right.
Stephen Fridakis:But, here's a couple of things. One thing is going back many years. As I said, I've been doing this for now at a decision level for about 25 years, and what I have seen is that the tools to apply an attack used to be either really simple or quite sophisticated. Unfortunately, that distinction is not the case anymore. Tools are becoming available just to download, as long as you know how to find your way to the dark web which is quite easy, then you can have access to tools.
Stephen Fridakis:We also see a lot of proliferation of attack-as-a-service, so you don't even need to know the tool anymore, as long as you can subscribe to somebody who is going to do it for you, and then obviously you and they will be the beneficiaries of the results, or whatever on earth they find out. So that is quite prevalent. So attacks are easy. Is that because of AI? To some degree: AI can make them more sophisticated, more human-like, and they can disguise themselves from the tools that a CISO has at their disposal to prevent attacks. So we do see AI making sophistication easier, and the ability to utilize cloud tools making attacks broader, more stealthy, and such. We certainly see that.
Stephen Fridakis:But I want to make sure that, at least from my perspective, it's not about AI. It's more about the availability of a technology and the security vendors' ability to catch up with a threat, and that is a balance that we cannot always meet, unfortunately. So we're in this era right now where everybody's catching up. The other thing that we have to say about AI is that there is a significant gap in training, awareness, and understanding of what you're dealing with. We have rushed to make technologies available for the benefits that you and I just spoke about, but at the same time, we haven't caught up with the update of our controls, the update of our documentation, even our own technical understanding of what is going on. So we're dealing with that right now, and it's a significant catch-up.
Joshua Crumbaugh:So, to that point, there's new training that we have to do with users, there are new questions that we have to ask our vendors. There's just all kinds of different aspects of the cybersecurity process that have really changed fundamentally as a result of AI. What are you looking at there? And, particularly when it comes to awareness, what do we need to be teaching our users as it pertains to AI?
Stephen Fridakis:So I love your podcast because of the human side of security. When I first became aware of it, it actually prompted me to sit back and say: I could not agree more. So I'm going to answer the question from that angle. Any change that we make is not a technical change. As a matter of fact, the technology we can, most of the time, understand at the level that we need to understand it, given some effort. But technology is just one piece of the puzzle. At the end of the day, it is more a matter of cultural change, as well as understanding the governance aspect of things. In other words: what exactly are you being asked to manage, what are the decisions you need to make, how are you equipped to deal with that, and where does accountability truly lie now? That's the perspective we need to deploy as we're dealing with AI.
Stephen Fridakis:So here's what I'm saying: it is not about code. I had this conversation last week.
Stephen Fridakis:Last week we were talking about an architecture that we're putting in place for the new generation of healthcare applications that our organization is deploying. And we said that the way AI works is that there is this thing called an LLM, which, in essence, is the key component of management decisions and perspective. And we said that we do not know how to manage it as security people. It is not code, therefore it cannot be managed through a DevOps process, and we're struggling to find methods to know how it is managed. So we fall back, talking about the catch-up, right, into traditional methods: I want to know who changed it, we're going to keep it very tight, and I want to know at what point it changed, version management, and so on. But in the same breath, we say: is that enough? Do we know enough? Do we know the impact of a change, in a semantic sense, on our AI model, and what kind of obligations do I have? So that's a big cultural change that is playing out right now, and, frankly, we're talking across our peers in multiple industries and asking: well, how are you approaching this? This is what I'm doing. Hopefully a standard will prevail and something will come out of this, but it's a big story. The other thing is that we are waiting to see potential regulatory or even legal challenges where we are going to be asked to bring to the table some sort of disclosure, some sort of validation of the model, and say: this is what we do, and see how that holds water or how it's treated, because we do not actually know what the expectations are. As you know, there's very little guidance.
There was recently a NIST extension that addressed AI, and we use that to some extent to augment our controls, but there has not been any true check or validation of it yet, so we don't know to what degree those things, as I said, are actionable. So a lot of work is being done there.
Stephen Fridakis:I do know you asked me a question about awareness and training. What we're doing is catching up on the fundamentals. We make sure everybody understands the fundamental components of this technology, because they're new and they're different. We are developing a vocabulary so that when we say semantics, we know what we're talking about; when we say LLM implementation, we know what we're talking about; and also what we say about the different pieces of what is natural language to us and how that's implemented. We're also augmenting our documentation, and we have some challenges with that, because we want to capture what needs to be understood by security people: what is the model. But we're also quite concerned about our obligation to understand as the model changes.
Stephen Fridakis:We talked about the LLM before. How do I know whether there are any new privacy concerns that I need to be aware of and probably disclose to our chief privacy officer? And how do I know whether, through the DevOps process, which we said doesn't fully apply here, any changes to the AI engine affect privacy issues or other operational issues that we're supposed to know about? I know I probably mumbled a lot here and said a lot of different things, but it was like a stream of consciousness of all the things going on in my head right now that we're trying to reconcile in this new world of AI. So I hope I made some sense.
Joshua Crumbaugh:You absolutely did. The only thing I might add to that is that OWASP put out an AI Top 10 as well. I agree with you, we're not there by any means; it's in its very infancy.
Joshua Crumbaugh:One of the biggest issues that I see around LLMs is role-based access controls: very, very difficult to implement. I see all of these different enterprises that are trying to give everyone access to an AI. They want to load it up with all of their corporate data so that you can very quickly get in there, get data, and help the customer more easily. All great, but how do I make sure that somebody who is an entry-level temp doesn't have the same level of access as my CEO? So I think that we still have a lot to figure out around AI, not least the fact that there are still hallucinations and we've got to get rid of those. Now, I think the hallucinations are okay in one sense, because we've got it to the point where it's as smart as a human, and humans give us wrong data sometimes too.
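One rough way to attack the temp-versus-CEO problem Joshua raises is to enforce roles in the retrieval layer, before any document ever reaches the model's context, rather than trusting the LLM itself to withhold data. The sketch below is purely illustrative: the role names, ranking, and document store are invented for the example.

```python
# Hypothetical role-filtered retrieval in front of an LLM.
DOC_STORE = [
    {"text": "Q3 board deck",                        "min_role": "executive"},
    {"text": "Support macro: password reset steps",  "min_role": "temp"},
    {"text": "Salary bands",                         "min_role": "hr"},
]

ROLE_RANK = {"temp": 0, "staff": 1, "hr": 2, "executive": 3}

def retrieve_for(role, query):
    """Return only documents the caller's role is cleared to see,
    so restricted data never enters the prompt at all."""
    rank = ROLE_RANK[role]
    return [
        d["text"] for d in DOC_STORE
        if ROLE_RANK[d["min_role"]] <= rank
        and query.lower() in d["text"].lower()
    ]

print(retrieve_for("temp", "password"))    # ['Support macro: password reset steps']
print(retrieve_for("temp", "board"))       # []
print(retrieve_for("executive", "board"))  # ['Q3 board deck']
```

The design choice here is that access control happens in ordinary, auditable code: the model can only hallucinate about what it was shown, never about what the filter withheld.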
Joshua Crumbaugh:But I do think it's interesting, and it's really reshaping cybersecurity, and I think we're going to continue to see that. One of the things I'm interested in seeing is all of the different applications. How are we going to take these AI technologies and use them for defense? How are we going to merge different AI technologies together to create really cool, interesting new capabilities? It's going to be really, really exciting. I think 2025, just from a technology perspective, is going to blow everyone's mind.
Stephen Fridakis:I agree. We're going to see new security tools emerge, but we're also going to see kinds of breaches that we have not seen before, that we will be challenged to defend against. So, exciting times, you say. Yes, but also times that we need to be prepared for.
Joshua Crumbaugh:Oh yeah, and I think that, as a security leader, you have to stay on top of everything that's changing, what's happening in the market, no matter how quickly it's moving, and I know that's a challenge right now. That's part of the reason we talk about it on this podcast, because it is something new every day and it's exciting. But, man, is it going to be a little bit difficult to keep up with.
Stephen Fridakis:But I want to say something to that, because there is a new, at least in my head. There is a new level of cyber consciousness that is going on right now. It is inevitable you will be breached, I will be breached, everybody will be breached. The difference is the degree of impact and damage that would be made, but it is absolutely inevitable. So our mind is shifting to devote exactly the same effort, or even more, to quick recovery rather than just defense and living.
Stephen Fridakis:I'm sorry, I do apologize, but the fallacy that we can defend all possible attacks. It's just impossible to talk about AI Now. Our emphasis is, especially in a clinical setting don't turn away, patients Do not have to postpone planned surgery. So if a system is attacked and we saw this last year, major health systems, health networks being attacked If the system is attacked, how quickly can we recover? So there's a shift, therefore, in our mindset. Right now certainly mine I want to make sure that recovery is possible in a matter of hours, or at least days, and at the same time don't get the wrong idea we will continue defending, but we are now going to spend the same amount of time on both. So there has been a shift in our minds.
Joshua Crumbaugh:Well, I think that's very logical and the smart approach. I mean, the reality is, while technology may work on binary systems, security is not binary, because we're trying to drive as close to zero as we possibly can. So on the defensive side, of course, we want to stop everything that we can, but we also have to assume that at some point this layer of security is going to fail. What else do I have there to help me recover quickly, or to mitigate the threat and keep it from spreading? So I completely agree. I think both are equally important in cybersecurity.
Joshua Crumbaugh:So, around that exact topic, there's this argument in cybersecurity about whether or not it should matter if users click. There are a lot of people who want to stop every click, and then there's this camp that says it shouldn't matter if they click. Now, I think it's probably somewhere in between, but I'm curious: what side of this argument are you on, and why?
Stephen Fridakis:Are you referring to whether it matters? How does that affect cyber awareness?
Joshua Crumbaugh:Well, no, I guess the question I'm asking: last week I had a conversation with the CISO of a very, very large construction company, and he says, I'm never going to get my users to stop clicking, and it shouldn't matter. I don't care about awareness. All I care about is locking this down, so if they click, whatever. Now, that's a very extreme opinion, and I don't hear a lot of people with that opinion, but it's not the first time I've heard it either. So I'm just curious what your thoughts are on that.
Stephen Fridakis:So you see, I would agree with that colleague you mentioned, if I knew well, his network, because in a closed network where you can manage endpoints and you can manage users, yeah, it's possible to do that because there is a level of predictability there. However, in a case like what I described before doctors, doctor's that are just affiliated with a clinical system they're not necessarily always known and sometimes they just come in using a publicly available, free, you know, gmail, yahoo, whatever account Now it is we really need to recognize that we need to anticipate security people. Recognize that we need to anticipate security people. Decisions that people make every day, from clicking on an email link to neglecting password hygiene I mean, these are human errors.
Stephen Fridakis:now, that we can expect to happen, I mean with a big enough workforce every year will happen but what I have found is that, um, an implementation of the method such as zero trust will minimize the potential impact. We take it as a given that they will click. We will implement you know, speaking about what we're saying before a human-centered approach to security that puts emphasis on awareness, shared responsibility and all of that. So we say okay, but at the end of the day, there will be someone who will click. Now what we're trying to do is minimize the blast, and what we do with that. Even in my current organization, when you log on using your endpoint, or even a personal endpoint, you have very, very little access to such as you navigate to the internet, you'll be asked to up your credentials using MFA or using VPN. If you try to do code, then you can use a different kind of token device and so on. Zero trust is a method that can actually help us minimize I didn't say eliminate, but minimize the potential impact that the accidental or you know, whatever click or the download or such would have, or the download or such would have, and in networks unlike the network that I suspect the colleague of yours has, but in networks like ours that essentially there is no perimeter and they are so fluid by means of partners, ecosystems, networks and all of that, the ability for us to totally anticipate is essentially minimal, if non-existent. Therefore, we have to take it as a given and make sure that the access is very limited first. So I think that's one thing.
Stephen Fridakis:The second thing is timely awareness of the event. So we found over time that while we capture the log and the incident, the amount of time that it takes from the caption to the storing within the scene, to the determination that there is something, is ours. We need to collapse that, and the only way to collapse that is to obviously counter that to a manageable number of false positives, because the last thing we want is yes, I know, but I do not know whether it's real. Anyway, awareness and a small blast radius is one thing. Timely notification, so that you can truly take advantage of something, is another, and that is the other thing I want to say. We need to be able to bring down to a matter of I don't want to say sequence, I'm realistic down to a matter of I don't want to say sequence, I'm realistic. I want to say you know less than an hour, the fact that you have incidents going on and you have a situation that requires some attention going on, then that's absolutely essential.
Joshua Crumbaugh:Oh, I couldn't agree more, and I really do think it's more of a gray area, because I want to stop every click I can, but the reality is the occasional one will slip through. So when it does, I want to make sure, just like you said, least privilege, that they have very, very little access within the network. With your more privileged users, I truly believe there needs to be what I like to call "with great power comes great responsibility" training: basically, hey, you've got a lot of access, you'd better not click, be very careful, to sort of help them understand that. And of course you should have a bunch of controls around that too, to help mitigate the damage even when a very privileged user has their data breached.
Stephen Fridakis:What we have done to that end is try to take the human factor out. I wouldn't say we don't have human-written code now, we have tons, but we are trying to get into a space where human-generated, and certainly human-checked-in, code does not exist. Most of our code will be machine-generated, and certainly the code that gets committed will be committed following only something that is machine-driven. We are never going to get away from some human intervention for certain incidents, for certain situations, but the norm would be machine-generated code and a very closely guarded method to commit it. I think that's going to get us to a point where not only are things fast, because we commit code all the time, but things are also predictable, given the checks and balances that have to be met.
Joshua Crumbaugh:Yeah, that makes perfect sense. Okay, so let's switch gears a little bit here, and my favorite question to ask everybody: say you could only use one, the carrot or the stick, which do you pick and why?
Stephen Fridakis:Well, I like the carrot approach, and the carrot approach would be that I would love to be able to enforce a baseline of security across all my systems. So, despite the technology, despite the automation, despite, the containerization and all of the things that we use.
Stephen Fridakis:At the end of the day, we find that the applications and systems we deploy may not even have the rudimentary security features, applications, whatever. So I do apologize and probably it's anticlimactic. However, I cannot talk about advanced security controls and AI when I have endpoints that may not meet the basic baseline.
Stephen Fridakis:Now, why, oh my God, there are so many. What we find over time, even in very sophisticated organizations, is that having going back to the human aspect, having the discipline as a human to actually utilize golden images of things and not fix directly code when you shouldn't be Having the discipline to create code that only pulls the necessary extensions, rather than just keep on utilizing an artifact that you have just because it's convenient. So if I only had one way, one thing to have is I would love to have all my applications meet the baseline, and I have to tell you, my baseline is not that complicated.
Joshua Crumbaugh:No, I've been through it personally. Where I push it, you know we talk through the baseline, we get it all established, we get it out there, we push it, we check back in, you know, six months and it's all ripped out, or maybe not all of it, but so much of it. So I completely understand it and to me that's where cybersecurity starts. You know, there's so many just little configuration changes that we can make that make a drastic difference and and make it that much more difficult for the hackers to get in and once they are in. As a former ethical hacker, you know I can, I can attest to the fact that it is those minor configurations that make a difference between us getting in and not being able to move around and getting in and being able to get the full.
Stephen Fridakis:You know I couldn't agree more yeah, the kingdom.
Joshua Crumbaugh:So, uh, yeah, wholeheartedly. You also had a comment about, uh, you know, just the carrot approach. Um, I, I completely agree on carrot. I think that it's very critical that we do the simple carrots whenever we can. When I say simple carrots, you know something as small as hey, great job. Thank you for reporting that suspected email. You're a rock star.
Joshua Crumbaugh:That's the type of thing that drives that better engagement, that drives that cultural change that we're looking for.
Joshua Crumbaugh:You know, I can crack a whip all day long, but you know, even if they do the motions, they're not going to have their heart in it. They're not going to be in it, if you will. And that's the reason I think it's so critical to make them want to be more secure. And I think the way we do that and I'm curious your thoughts on this, but I think the way we do that and I'm curious your thoughts on this, uh, but I I think the way that we do that is a we connect it to them personally, uh, whether it's, you know, giving them tools and resources they can take home and give to their family, uh, which has them training their family, making them the expert, uh, at home, um, or whether it's just connecting it to their job. You know a developer, owasp, top 10 training that connects it directly to their job and it sits the why behind it. So I think those things are really critical when we really want to drive that strong, more security aware culture.
Stephen Fridakis: I love what you say about the why. Over the years I had to deal with several threats. Some of them made the papers, like the HBO hack a few years back.
Joshua Crumbaugh:Oh, I remember that one.
Stephen Fridakis: Look, threats, as you said about AI tools and others, evolve in sophistication.
Stephen Fridakis:However, the underlying root I have found of any attack is actually simple it's a human mistake, an unpacked system, a misconfiguration, and those stay remarkably constant. Now, what can we do? Security leaders? We need to adapt, we need to become vigilant, but we also need to remain agile, so our strategy cannot be the same old story. So we must be able to pivot and understand that the current threat is not about you know, check your firewall rules and make sure you don't have an any to any on the top.
Stephen Fridakis:You've got to do that, but you need to prepare for what we were saying about the AI driven attack. You need to be able to understand that. You need to be able to see how, say, your supply chain will affect that and you need to be able to understand that, at the end of the day, is going to be something quite simple and, like I said, we found in that particular attack I mentioned, there was an issue with easy access to bypass MFA or things that we find. Now it's just another somebody exploiting some patch that you have not applied for whatever. So it comes down to fundamentals and, quite frankly, simple stuff it really is.
Joshua Crumbaugh:And I mean, and it's not even just the unpatched stuff, I mean, for example, all of the broadcast traffic that exists on a network. That's as simple as turning it off because you don't need broadcast traffic. So no, I couldn't agree more. And, uh, and those simple things really do make a huge, huge difference at the end of the day. Um so, yeah, yeah, uh, 100% Okay. So, pivoting a little bit, I have now had the opportunity to talk to uh multiple cyber security leaders that have actually seen deepfakes targeting their organization. I'm curious have you heard of or seen any deepfakes yourself that are targeting your organization?
Joshua Crumbaugh:or any that you work for.
Stephen Fridakis:Yes, the thing is because the majority of our implementations we provide software as a service and as such, pretty much every installation is bespoke. Certain of our clients may share with us their domain controller, others don't, and also the degree of complexity it is quite different. We do see, we anticipate and we develop. I'm trying to make a point to always sort of say it is all simple, but now I'm going to go away from that, unfortunately. So we do develop cryptographic methods to actually defend that. We are very concerned about our reliance on third-party code. Some of the medical applications are not just software, they're actually involved devices, medical devices that I'm sure you've seen in any visit in an emergency room or a hospital. So we're very, very concerned about anything that creates a dependency to firmware or software from a third party, and a lot of those still utilize quite outdated operating systems or, as I said, firmware.
Joshua Crumbaugh:I've seen Windows 3.1 in modern history still running in a network.
Stephen Fridakis:And, unfortunately, the dependencies or my ability to make improvements of something say oh my God, why are you still using TLS 1.0? Guess what, if I upgrade as much as I want to, half of those devices may not work. So I know you did ask me a question about deepfake, but the idea is, yes, it is a flavor of that when you find that you are dependent on something that you cannot necessarily validate day and we are working with our partners and vendors and such to be able to establish control so that we know that what we get is actually trustworthy and, to some cryptographic level, establish a control that we say we know its origination and we know its broker version management and all that goes with it.
Joshua Crumbaugh:Yeah, okay, so we are running low on time here as I want to make it more of a wide open question for you when it comes to driving a more security aware culture, what are some of the top recommendations or tips that you would have for somebody out there that's maybe in the process of building or just starting that drive toward a more security work culture change?
Stephen Fridakis:Every time I go to an organization and I try to understand how things work, I'm saying that it is about leadership. Dealing with cybersecurity starts with creating an environment where people feel accountable but, at the same time, they feel supported and equipped with the right tools and methods to make good decisions. Leadership in cybersecurity is not about mandating compliance what you said about the stick. Before we deal with compliance, we deal with auditors but it's mostly about fostering a sense of shared responsibility across every level of the organization. If anyone clicks, we may have a problem. If anyone downloads, or if anyone insists on going into a website that may not be necessarily secure.
Stephen Fridakis:It is absolutely something we all need to understand Now. As security people, we must lead with empathy. People still need to do their jobs and they may not necessarily understand what I'm talking about. You have to be patient and you have to have absolute clarity because, at the end of the day, our people not our tech, not our firewalls they are our first and last line of defense. So that's my mindset and that's why we spend most of the time on training awareness above and beyond checkboxes, so that people understand that we're in this together and the security guy cannot do that alone.
Joshua Crumbaugh:I couldn't have said it better myself. With that, I mean, I feel like there needs to be an amen or something at the end of it.
Joshua Crumbaugh:But no, that was really great advice and I'm right there with you. I do, at my core, believe that security does start and end with our people. Um, to that, I mean, I truly believe that literally 100 of attacks or a breach is start with human error, and even the more technical ones that involve a zero day. Well, that zero day started with a developer writing insecure code, and that's part of the reason that I have the passion around really driving that true culture change. Really building more aware organizations is because I've seen it firsthand and I know how critical it is, but I also know how effective a well-trained employee can actually be, yes, indeed, awesome.
Joshua Crumbaugh:Well, this has been just a fabulous interview. Any last words before we end today's show?
Stephen Fridakis:I have learned in cybersecurity that it's not about perfection, it's about resilience. You'll face breaches, you will be breached and mistakes will happen, but how we prepare our team, how quickly we recover and the transparency and trust we maintain with our stakeholders is the success of a leader in our field. That's what I've learned.
Joshua Crumbaugh:And I thank you so much, Joseph, for the opportunity.
Stephen Fridakis:I really enjoyed this time together with you.
Joshua Crumbaugh:I enjoyed it too. Thank you so much for joining us and for those of you joining online, thank you. We'll see you again tomorrow.