Idan Plotnik, CEO of Apiiro Security, recently joined Dennis Fisher on the Decipher podcast to talk about taking a risk-based approach to software development and security. This is a condensed and edited version of the conversation.
Dennis Fisher: Where did the idea for this technology come from? Was it a problem you had seen and thought, okay, I have an idea for solving this?
Idan Plotnik: I've been in the industry for more than 19 years, and my last startup was acquired by Microsoft, where I was a general manager for engineering. You want to move from waterfall methodologies to agile development methodologies, but on the other hand, you don't want to release code with risks. You want to be able to assess the risk based on a multi-dimensional approach, and to do that you need manual processes. You need risk assessment questionnaires. You need to make sure the people who answer these questionnaires really understand them. It's a challenge that I felt. Then I went to other large organizations and asked them if they had the same challenge while releasing code to production. And then we said, hey, this is going to be a huge problem.
Dennis Fisher: Microsoft obviously has a pretty mature SDLC, so they understand how code should be developed and assessed. And you're looking for all these known problems. How different is the way that a large organization, such as Microsoft, handles it from an SMB or a start-up?
Idan Plotnik: It's a very good and complex question. Even in a mature secure development lifecycle process, you start from a threat model, then you go through a security design review, then compliance reviews in some cases, then a security code review, then penetration testing, and then vulnerability scanning throughout the CI/CD pipeline. And then you're overwhelmed. You're saying, what's going on here? I have one person responsible for security across 100 developers. That's the best case, by the way; the ratio is one to 100 in the best-case scenario. And then this person says, hey, what's going on here? I went through all the processes, I did all the phases, but now I have a thousand vulnerabilities and 2,000 tasks to remediate. What comes first?
I'm telling you from experience that you need to decide whether you block the product from getting into your customers' hands or release it with risks. This is the fundamental problem our product solves: you need to focus on risky code changes. What are the material changes you are introducing into your application? If you're changing the layout of your login page, who cares? Versus, you changed the logic of an API responsible for money transfers. That is a risky change. I'm not talking about vulnerabilities; I'm talking about the fundamental problem that you're taking all these changes and passing them through the same vulnerability scanning pipeline. We are saying something else. We are saying, let's differentiate between changes. And we will not differentiate between changes based only on their technical aspects. We will differentiate across their attack surface and technical impact. What's the business impact of the change? What's the business impact of the application? What are the knowledge and behavior of the developers who made these material changes? Only then will we decide which changes go through.
Dennis Fisher: Humans are terrible at assessing risk. I think that's one of the things that I've learned over the last 20 years. We're not good at that.
Idan Plotnik: It's not only that we're not good at it. It's simply that our minds cannot hold so many risk factors. And when I say risk factors, I mean things like: Where do you deploy this application? Is it on-prem, or is it in the cloud? What's the knowledge of your contributors? What is the risk of the application code? What's the attack surface? What are the findings from your third-party scanning tools, and what are the security controls? There are so many risk factors that a human being cannot calculate them all. You need a machine to do that for you and then point you in the right direction and say: go here, on this specific change. This is the riskiest change in your application. Go and have a meaningful conversation with the developer or the compliance officer, because you added PII to the application.
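The multi-factor weighting Plotnik describes can be sketched in a few lines. This is a hypothetical illustration, not Apiiro's actual model; the factor names and weights are invented for the example.

```python
# Illustrative sketch: combine several risk factors into one score so a
# machine, not a human, decides which change to triage first.
# Factor names and weights are hypothetical, not Apiiro's actual model.
from dataclasses import dataclass

@dataclass
class ChangeContext:
    internet_facing: bool      # deployed on an exposed surface?
    handles_pii: bool          # did the change touch personal data?
    business_critical: bool    # e.g. money-transfer logic
    contributor_is_new: bool   # developer unfamiliar with this code area
    scanner_findings: int      # findings from third-party scanning tools

def risk_score(ctx: ChangeContext) -> float:
    """Weighted sum of risk factors; higher means review first."""
    score = 0.0
    score += 3.0 if ctx.internet_facing else 0.0
    score += 4.0 if ctx.handles_pii else 0.0
    score += 4.0 if ctx.business_critical else 0.0
    score += 2.0 if ctx.contributor_is_new else 0.0
    score += 0.5 * ctx.scanner_findings
    return score

# Rank changes so humans look at the riskiest one first.
changes = {
    "login-page-layout": ChangeContext(True, False, False, False, 1),
    "transfer-api-logic": ChangeContext(True, True, True, True, 2),
}
ranked = sorted(changes, key=lambda k: risk_score(changes[k]), reverse=True)
print(ranked[0])  # the change to triage first
```

The point of the sketch is the ranking, not the weights: the login-page layout change scores low even though it is internet-facing, while the money-transfer logic change accumulates risk across several dimensions at once.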
"There are so many risk factors that a human being cannot calculate. You need a machine to do that for you."
Dennis Fisher: Software security experts have been saying for years that we need to build secure software and get the security people involved as early in the process as possible. It's a lot more efficient than trying to secure something after the fact. And this, to me, seems like something built specifically to work in that process, the way software is built and delivered now, as opposed to the way it was delivered 20 years ago.
Idan Plotnik: I totally agree, and I think everyone has bought into it. Over the last 16 months or so, I've talked with more than 250 companies, from 50-developer shops to organizations with 20,000 developers. Everyone bought the idea that they need to integrate security as early as they can. The problem, from our point of view, is that getting security into the CI/CD pipeline is good in some cases, but it's too late. You want security in at the design phase, prioritizing across all the feature requests and user stories: what are the riskiest features that are going to be developed in the next release? Then handle them at the design phase and run the contextual threat models or security design reviews as early as you can. It depends on what your development processes are, but you can run the security assessment or risk assessment on your develop branch, feature branch, or main branch before you trigger the vulnerability scanning processes. At that point, you can do a few things. One, as I told you, you can trigger automatic workflows and say: if I have these types of changes, I need to bring in a pen tester before I even release this code through the pipeline, because a pen tester might find vulnerabilities that automatic tools can't. And if the pen tester finds these vulnerabilities or material changes, then when the code goes through the pipeline, it will reduce the noise I get at the end of the process.
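The workflow triggering Plotnik describes, mapping types of material changes to actions before the scanning pipeline runs, might look something like the following. The change-type labels and workflow names here are assumptions made up for illustration; they are not Apiiro's actual taxonomy.

```python
# Hypothetical sketch of a pre-pipeline policy gate: certain detected
# change types trigger a manual or automatic workflow (e.g. a pen test)
# before the code ever reaches vulnerability scanning.
# All labels below are invented for illustration.
RULES = {
    "api-logic-change": "request-pen-test",
    "added-pii-field": "notify-compliance-officer",
    "dependency-bump": "run-sca-scan",
}

def workflows_for(change_types: list[str]) -> list[str]:
    """Map detected material-change types to the workflows they require.
    Change types with no matching rule (e.g. a README edit) trigger nothing."""
    return [RULES[t] for t in change_types if t in RULES]

triggered = workflows_for(["api-logic-change", "added-pii-field", "readme-edit"])
print(triggered)
```

A rules table like this is the simplest form of the idea: low-risk changes flow straight through, while the few material changes get human attention early, which is exactly the noise reduction he describes.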
Dennis Fisher: Is there a way to introduce this kind of technology and mindset for developers as they're learning to code at university or in their initial jobs out of school where they're trying to figure out how software is actually built?
Idan Plotnik: There are things that you can do, and there are things that you can't. You can learn the basics of writing secure software, of course, but there are risks that you can't teach at school. Again, I'm going back to the example: I'm a developer, and I accidentally added your home address and the amount of money you have in the bank to an internet-facing API. It's not a vulnerability, but it's a risk. I can't teach you this. It's based on the context of the application and the context of the company you're working in. So the answer is yes for the basics, but no for the other risks, which come from the essence of the application, the business, the industry you're in. That is what we are trying to automate.
Dennis Fisher: The context piece of it, to me, is what really makes the big difference. There are all these different ways to find bugs in code. But there's a big difference between a developer who has a lot of experience with this application, knows what the risks are, and knowingly made this change in this way, and a completely inexperienced developer who is new to the project and probably shouldn't have made the change. People love to talk about security as a series of trade-offs, but the context matters in those kinds of decisions.
Idan Plotnik: Spot on. We're trying to put context into the code changes you are making. And not only technical context, but also: Who are you? What's your knowledge across your history? Let's say you've worked in this organization for five years, and for the last four years you worked as a backend developer, but now you're working as a front-end developer. Even though you've worked for five years at the same company, we will look at you as a risk.