Legacy vs. new-age security model

Background

It is no surprise that in recent years most newly discovered vulnerabilities have been found at the application level, whereas a few years ago most exploited attacks targeted the infrastructure level. The reason for this shift is well known: infrastructure defences such as firewalls, IDS/IPS, WAF, AV, hardened operating systems and security policies have matured dramatically. As a direct result, attackers have shifted their focus toward applications, which are now the easier target. To address these application-level security issues, an advanced arsenal of tools and policies is being built. This arsenal starts with the SSDLC (Secure Software Development Lifecycle) and includes many different tools and practices: Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), third-party library security, virtualization/Docker security, a DevSecOps approach (better said, automation), and more.

New arsenal, old ways

One of the biggest issues with this new arsenal is that it requires an entirely new skill set and a new understanding. Since security is moving closer to actual development ("shift left"), all infosec/cyber/DevSecOps personnel (the words are new, but the responsibilities are not) and their managers and directors must develop a new understanding and a new approach. In practice, SAST and DAST tools are often deployed in what is essentially an out-of-the-box configuration, or as close to it as possible. While this approach worked well for firewalls, WAFs and the like, it does not work with the new tools. The reason is very simple: infrastructure tools are, by design, inflexible. When you deploy a new firewall (even a next-generation one) and allow HTTP traffic (HTTP is used here for brevity), only that service is allowed. As a result, your security acceptance criteria are very simple: any traffic that is not HTTP must be blocked. Life was easy.
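
To show just how simple those legacy acceptance criteria are, here is a minimal sketch in Python. It is purely illustrative and not tied to any specific firewall product; the allow-list and the tested services are assumptions made for the example:

```python
# Minimal illustration of legacy, binary acceptance criteria:
# a single allow-list check decides whether traffic passes.
ALLOWED = {("tcp", 80)}  # HTTP only, per the example above

def is_allowed(protocol: str, port: int) -> bool:
    """Return True only for explicitly allowed services; default deny."""
    return (protocol.lower(), port) in ALLOWED

# The acceptance test is equally simple: anything that is not HTTP must be blocked.
assert is_allowed("tcp", 80) is True
assert is_allowed("tcp", 22) is False   # SSH blocked
assert is_allowed("udp", 53) is False   # DNS blocked
```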

New approach

On the other hand, when the same out-of-the-box approach is used with the new tools, it actually creates a false sense of security. Let's take SAST as an example (detailed post to follow). To be effective, any SAST tool must be customized to support the organization's development patterns: programming languages, frameworks, pipeline manager, ticketing systems, build systems and, above all, how the applications are designed and constructed (more on this in a detailed post). This information is imperative for building a solution that finds security vulnerabilities at an early stage of development. At the very least, a SAST tool must be able to correctly identify inputs (aka sources) and outputs (aka sinks). When this information is incorrect, the SAST tool generates many false-positive alarms (often leading to complaints about "bad tools") in the best case, and misses actual vulnerabilities in the code in the worst case.
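
To make the source/sink idea concrete, here is a minimal, hypothetical Flask snippet (the endpoint, table and database names are invented for illustration). A properly configured SAST tool should recognize request.args as the source, the SQL execution as the sink, and flag the tainted flow between them:

```python
import sqlite3
from flask import Flask, request

app = Flask(__name__)

@app.route("/users")
def get_user():
    # SOURCE: untrusted input enters the application here
    username = request.args.get("name", "")

    conn = sqlite3.connect("app.db")
    # SINK: untrusted input reaches SQL execution unsanitized -> SQL injection
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + username + "'"
    ).fetchall()
    conn.close()
    return {"users": [dict(zip(("id", "name"), r)) for r in rows]}

# A SAST tool that has not been told that request.args is a source
# (or that conn.execute is a sink) will miss this flow entirely.
```

Conversely, if the tool is also taught that the parameterized form of the same query is a safe pattern, the false-positive rate drops as well.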

Quick poll: Do you verify that your SAST solution correctly identifies inputs and outputs?

Let’s look at another example: DAST (penetration testing). Many DAST solutions run in fuzzing mode, discovering and crawling their way through the application. From a CI/CD perspective this process is hardly effective: it takes a lot of time and the results are questionable. This is the old approach. The new approach is to automate and optimize DAST/penetration testing based on the inputs that the application actually processes. This information can easily be obtained from the development teams. It saves resources, hundreds of hours of work and compute, and at the same time drastically increases the precision of the actual findings.
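
As a sketch of this targeted approach, assume the development team hands over a short list of endpoints and the parameters each one actually processes (the target URL, endpoints and payloads below are all hypothetical, and the payload set is deliberately tiny). Instead of crawling blindly, the scan touches only those inputs:

```python
import requests

BASE_URL = "https://staging.example.com"   # hypothetical target

# Supplied by the development team: only the inputs the app really processes.
ENDPOINTS = [
    {"path": "/users", "method": "GET",  "params": ["name"]},
    {"path": "/login", "method": "POST", "params": ["username", "password"]},
]

# A tiny payload set for brevity; a real scan would use a proper payload library.
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>"]

def targeted_scan():
    for ep in ENDPOINTS:
        url = BASE_URL + ep["path"]
        for param in ep["params"]:
            for payload in PAYLOADS:
                data = {param: payload}
                if ep["method"] == "GET":
                    resp = requests.get(url, params=data, timeout=10)
                else:
                    resp = requests.post(url, data=data, timeout=10)
                # Naive check for brevity: reflected payloads or server errors
                # are flagged for manual review.
                if payload in resp.text or resp.status_code >= 500:
                    print(f"[!] Review {ep['method']} {url} param={param!r}")

if __name__ == "__main__":
    targeted_scan()
```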

Quick poll: Does your DAST/penetration testing run in fuzzing mode or as manual one-off engagements?

To summarize:

To provide an adequate answer to today's technological challenges, security practice must evolve as well. The legacy infrastructure-security concepts and processes of operation no longer give the response these challenges demand.

Disagree? Let me know in the comments.