I was first introduced to Agile methodologies in 2014, when the company I was with at the time started moving our development teams from a Waterfall approach to Agile. These agile teams typically consisted of a business analyst, scrum master, back-end developers, UX developers, and quality assurance analysts. Security had not yet found its place in this new structure; however, we quickly learned that adopting a hybrid form of “ScrumBan”, a combination of Scrum and Kanban, would work out very well for our team.
Initially, to familiarize ourselves with how the development teams were now operating, the Security team formed its own scrum team. With scrum team names for the development teams such as “Dizzy Grizzlies” and “Thirsty Goats”, the Security team appropriately adopted the name “The 443”. Two-week sprints were selected, and daily 15-minute stand-ups were scheduled. We became familiar with concepts such as “stories” and “epics” and activities such as “sprint planning”, “retrospectives”, and “demos”.
An Agile “stand-up” meeting is a brief daily 15-minute scrum team meeting. As you may have guessed from its name, attendees typically remain standing for this meeting to encourage brevity and keep things moving.
The purpose of this meeting is for each team member to summarize:
- What they accomplished yesterday
- What they plan to accomplish today
- Any impediments or roadblocks that are preventing them from completing tasks
This promotes team ownership and awareness of what is happening within the team. Often team members are able to assist each other with issues they are facing, or the scrum master will work to help remove impediments.
Tell Me a Story
In Agile terminology, a “story” is a technical requirement, written in as few sentences as possible using non-technical terminology. A story for the Security team might be “Perform periodic user rights audit”.
Stories accumulate in the “backlog” queue. Once ready to be worked, they are pulled into the current sprint where they are tracked on the scrum board categorized into states such as ‘To Do’, ‘In Process’, ‘Needs Testing’, or ‘Done’.
Mind the Backlog
New stories and tasks are added to the backlog, which is a living list of items of various priorities that will need to be evaluated, researched, or worked. The team schedules regular “grooming” meetings to review the backlog and prioritize backlog items. Prior to beginning each sprint, items are brought into the new sprint from the backlog.
In our structure, it was not uncommon, as priorities shifted, for items to be removed from the current sprint to be addressed at a later date. One unexpected benefit our team has seen many times when grooming the backlog is that items pending for some time get closed out as complete, as a result of other activities that have taken place.
At a director level, the backlog gave me a consolidated space to manage priorities, assign stories and tasks to team members, and maintain an overview of what workload was in the pipeline for the team.
Planned Work vs Responsive Activities
One challenge we faced as a Security team implementing Agile centered around how to handle responsive activities. Typical development teams have very structured workloads and sprints. They take in a set amount of work they believe they can complete within the sprint and chip away at it each day.
While Security teams have some level of predictive workload, there is also a significant element of responsive activities that come up each day. To address this, we remained very flexible within our sprints, having our scrum master move stories in and out of the sprint, as necessary. This allowed us to maintain visibility of all pending activity on our scrum board, while remaining flexible to react responsively as business and security demands changed.
The primary benefit that I always identify from adopting an Agile team structure is team accountability. Each team member is held responsible for completing the work assigned to them. This workload is visible to everyone on the scrum board, and team members are encouraged to hold each other accountable when they see tasks or stories stalling with little or no progress.
Each sprint, the team as a whole is responsible for closing out the items in the sprint. This means the incentive is there to work together and help each other out. We all succeed as a team!
When I first stepped into the CISO role, I had the tremendous benefit of receiving help and advice from a number of seasoned CISOs and CSOs. Having now filled roles for several years, there are a number of things (in no particular order) learned from my experience that I would like to pass along to others.
Teamwork is crucial
Building rapport and working relationships, both internally and externally, are very important. You won’t be able to get very far if you’re not able to get others on board with your security initiatives.
Hire people that are smarter than you
This doesn’t mean to grill prospective employees for random tidbits of technical knowledge like a trivia quiz. Put aside that list of common “trick” interview questions aimed at tripping up the candidate. These serve little purpose but to put the interviewee on edge.
Instead, have a meaningful conversation. Discuss specific experiences. Learn how they would approach various scenarios. That said, not everyone has an outgoing, type A personality. If a candidate is having difficulty verbalizing their thoughts, put a pen or marker in their hand and have them draw or whiteboard their solution. This will often put technical folks at ease, and you will learn much about their thought process and decision-making skills.
Listen to your team
This is advice I need to remind myself of often, learning to stop, take a breath, and listen to what my team members have to say. It is often easy to jump to conclusions or make assumptions about where they are coming from. If you have hired smart people, you’ll want to sit back and take the opportunity to learn from them and their experiences.
There have been numerous occasions where I thought I understood the situation and had a decision in mind, however, after healthy conversation with my team, my mind was changed on a position or course of action. Don’t miss out on the benefit of other points of view and experience.
The single best advice I have for communicating with others is to put yourself in the other person’s position, and communicate to their perspective. In other words, if you were that other person, what information and context would you be looking for? Think about that and deliver your message accordingly.
Not to pick on the network guys with my example here, but network engineers see everything from the network perspective. Each device has an IP address. They typically don’t care about host names or server functions. If you’re having communications issues between hosts and you present your issue as two IP addresses failing to communicate on a specific port, you will have their attention. Similarly, if you’re working with a project manager, present your concerns from the perspective of how they affect the project, milestones, and overall timeline.
This also applies to situations when you need to tighten permissions or implement restrictions. Often, affected users take these restrictions personally, whether they really affect them or not. Something that I always do when communicating these changes is to identify and emphasize items and results that will benefit them.
For example, removing unnecessary elevated privileges from system administrators, which is critical for maintaining proper separation of duties, is not something they should take personally as a reflection on their skill or trustworthiness. Instead, it has the immediate benefit of protecting them from making a costly error. Also, if a forensic investigation ever takes place, focus is placed on users with access to the affected systems; limited permissions can spare users from being questioned as part of an investigation.
Maintain proper focus
Don’t lose sight of the big picture. Understand the full scope of your responsibilities, and where your bigger risk areas are. It’s so easy to get caught up in a “fire drill” related to a specific risk, issue, or vulnerability. When you find this happening, take a moment, step back, and keep focus on where this fits in the overall picture.
If you find yourself in the throes of a compliance audit where it feels like nothing is going well and your team is buried in evidence requests and findings from the auditor, realize that there is a bigger picture. Your overall security posture is much larger than any one compliance audit. Learn what you can from the audit process, and make positive adjustments to your security program where you can.
I hope that some of these suggestions are helpful to others who are just starting out in security leadership, and perhaps ring true with other more seasoned security veterans.
When I first assumed the role of CISO at my current organization, I knew that a variety of commercial security tools had been purchased, along with numerous other open source, or otherwise free tools that were also being used for various functions. This raised a number of questions for me, including:
- What specific functions does each tool offer?
- Many tools are multi-functional. Which tool is the right tool for the job?
- How effectively had each tool been implemented?
- What value was each tool bringing to the security program?
With the assistance of my security architect, we set about conducting a study to measure and report on security tool utilization. The results of the study really helped me gain a high-level understanding of what our toolbox looked like. The summary report provided insight both into what consideration could be made for consolidating or eliminating tools, and into what effort could be invested to better leverage tools we already had but were not using to their fullest potential.
Primarily, my goal was to identify three things:
- Gaps in visibility that we either lacked functionality for, or had not fully implemented.
- Overlaps where multiple products offered the same, or similar, functionality.
- Steps that could be taken to better-utilize existing tools.
Cost May Go Well Beyond the Price Tag
Each tool, whether costing a lot in licensing and maintenance, or even free and open source products, represents a “cost” of time and effort required to implement, maintain, and administer it. This cost can vary widely, depending on the nature of the tool, whether an enterprise SIEM, or a client-based utility. Other factors, such as the level of paid support or maturity of the open source project community also determine how much effort each particular tool “costs” to the organization. Understanding the utilization and additional potential utilization from each tool can help justify budget and personnel head count.
First, identify the capabilities that each tool offers. This was an easy step for us, as we have an Atlassian Confluence documentation repository, similar to a wiki, with pages for each security tool. However, we discovered that we did not have documentation on everything. If you don’t have something like this, start by visiting the vendor’s website. Weed through all the marketing lingo to pull out a description of basic functionality. Then continue by documenting the tool’s network requirements as well as environment-specific configurations.
Next, gather feedback from your team. We did this in the form of distributing a spreadsheet survey, with columns like this:
||Tool||Utilization %||What We Do Well||What We Can Improve||Comments||
Each team member was asked to document their opinions of how well they perceived each tool was utilized (percentage), what we do well with the tool, and what we could improve with the tool. The responses were then tabulated and averaged out to begin documenting a summary report.
Once the survey spreadsheets were received from all team members, results were averaged and placed in an executive-style report format. The report categorized each tool into the following sections, each with an associated color code:
- Well-Utilized tools (Green)
- Under-Utilized tools (Yellow)
- Tools That Require Implementation (Red)
- Tools to be Decommissioned (Gray)
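The tabulation step can be sketched in a few lines of Python. The averaging mirrors what we did with the spreadsheets, but the color-code thresholds shown here (75% and 25%) are illustrative assumptions, not the cutoffs from our actual report.

```python
# Sketch of averaging team survey scores and mapping each tool to a
# report category. Threshold values are illustrative assumptions.

def average_utilization(scores):
    """Average the per-member utilization percentages for one tool."""
    return sum(scores) / len(scores)

def categorize(avg, decommission=False):
    """Map an averaged utilization score to a report category/color."""
    if decommission:
        return "Tools to be Decommissioned (Gray)"
    if avg >= 75:
        return "Well-Utilized tools (Green)"
    if avg >= 25:
        return "Under-Utilized tools (Yellow)"
    return "Tools That Require Implementation (Red)"

# Each value is one team member's utilization estimate for that tool.
survey = {
    "SIEM": [60, 70, 65],
    "WAF": [80, 90, 85],
    "Legacy AV": [10, 5, 0],
}

for tool, scores in survey.items():
    avg = average_utilization(scores)
    print(f"{tool}: {avg:.0f}% -> {categorize(avg)}")
```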
Details on each tool contained the following:
||Tool||Utilization||Progress / Next Steps||
|SIEM|65%|Completed professional services engagement.|
We made sure that the output of this report was fully actionable. While some tools contained notes on many actions that were needed, other tools that were already fully utilized contained few action items. In some cases it was determined that a tool was no longer needed, so the action notes included decommissioning steps. Tasks were then assigned out to the team based on the Next Steps outlined in the report.
This review and reporting exercise produced more significant value than I expected. In addition to the obvious result of inventorying our tool set, I was able to quickly assess the status and perceived value of each tool in our inventory. This uncovered opportunities for cost-savings where some tools could be consolidated, allowing budget to replace aging unsupported tools, or invest in professional services.
We decided to conduct this exercise on a quarterly basis. Each quarter, it has been very interesting to watch the utilization metrics move and the associated action items change as progress is made.
How do you define or measure the state of security within your organization? This is the age-old question that business leaders and information security professionals have been struggling with for some time. The many factors that determine one’s security posture can differ widely between organizations. There is no “one size fits all” that works for everyone.
Over the years, various security frameworks have been developed to assist with establishing a healthy security posture. Compliance benchmarks were introduced to measure organizations against a set of controls, such as policies and technical controls. However, none of these, taken alone, can guarantee a successful security program.
Many organizations with less mature security programs point to successfully passing their various compliance audits as an indication that they are “secure”. While I am not going to argue that these standards are not worthwhile, they cannot be considered an exhaustive yardstick for measuring a successful security program.
Security compliance audits are typically a paperwork exercise: reviewing an organization against a long checklist to assess whether security controls, policies, and processes are in place.
While an organization can claim compliance with industry security standards, such as ISO 27002, or PCI-DSS, this typically reflects a single point in time where the organization successfully completed an audit.
These recurring audits typically involve an independent auditor conducting an assessment to determine whether the organization is “compliant” or “non-compliant” during the time frame when the audit was conducted. Specific controls that were found to be out of compliance are typically written up as findings for the organization to mitigate or resolve to the auditor’s satisfaction.
These audits require a considerable amount of time from many resources across the organization gathering and submitting audit evidence. This often involves participation from HR staff, security analysts, system and network administrators, change control, and help desk personnel.
At the conclusion of an audit, you may find yourself wondering just how much all those hours of effort have translated into meaningful improvement of the overall security posture.
In contrast, security is an ongoing state, always adapting to keep pace with the changing industry landscape and evolving business requirements. It involves much more than completing a series of check boxes indicating that certain processes or technical controls are in place.
As information security professionals, it is very important that we use these compliance activities to turn items that are uncovered during the audit process into opportunities to make adjustments and improvements to our security program.
For example, an audit may evaluate whether adequate event logging is enabled and retained. While providing the necessary evidence to the assessor, perhaps a security analyst discovered that logs from another tool, not specifically within the scope of the audit, were not being retained properly. While still able to fulfill the specific requirements of the audit, in the spirit of “true security” and maintaining a secure posture, it is important that this gap be tracked and a solution applied.
Our mindset must always extend beyond the audit controls themselves to what we can learn from the review process to improve the overall state of security.
Unless you have achieved 100% comprehensive security, which I’m certain does not exist anywhere, there are always valuable lessons to learn from compliance activities. Otherwise, compliance will quickly become a paperwork exercise that consumes considerable time without any meaningful impact on the organization’s security posture.
While compliance audits and activities have their place, and are important, don’t let them distract you from your important job of maintaining and evolving your security program.
Similar in some ways to a network firewall, a web application firewall (WAF) is a device (or in some cases a service) purpose-built to protect web applications. Instead of filtering requests at the network level with rules based on IP addresses and network protocols, WAFs understand HTTP requests and responses at the application layer. WAF rules are crafted to identify anomalous HTTP requests and take action accordingly.
While very powerful and useful, operating a WAF cannot be considered an adequate replacement for routine review of your application source code as part of your application deployment process. A 100% secure web application (if such a thing were to exist) would arguably not require WAF protection. However, even if such a web application did exist today, tomorrow new vulnerabilities would be discovered and require mitigation.
From a compliance perspective, security controls, such as PCI-DSS requirement 6.6, call for WAF protection in lieu of code reviews. Many organizations have invested in WAF solutions to check the compliance box addressing these requirements. However, while a WAF may be physically in place monitoring traffic, is it really doing anything meaningful? Due to the complexity of tuning rules and policies to minimize negative impact, I am concerned that many organizations may not have adequate policies enabled to block attacks.
Relying too heavily on a WAF has the very real danger of providing a false sense of application security.
While a WAF can block malicious attacks from reaching your application, the ultimate goal of application security is to correct the application itself so it is no longer susceptible to those attack vectors. Depending on the scope, correcting the vulnerability at the code level can take some time. This is where a WAF can play an important role protecting the application in the interim until the code is fixed and deployed into production.
Real-World Attack Intelligence
Proper placement of a WAF can provide valuable insight into “real world” activity and attack attempts against your application. Monitoring this activity and following up to determine how the application handled the traffic (or would have handled it, if the WAF had not blocked it) can help uncover weaknesses that have not yet been detected by routine testing.
Sure, web server logs could also show evidence of this activity, but a WAF does a terrific job of pulling the needle out of the haystack of raw log events, categorizing them into attack categories such as SQL injections, command injections, scanning activity, etc.
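As a toy illustration of this kind of categorization, a handful of regex signatures can bucket request lines into attack categories. The patterns below are simplified assumptions for demonstration only; real WAF rule sets are vastly more sophisticated and resistant to evasion.

```python
import re

# Hypothetical, simplified signatures mapping request content to attack
# categories. A production WAF uses far richer rules than these.
ATTACK_PATTERNS = {
    "SQL injection": re.compile(r"('|%27)\s*(or|union|select)\b", re.I),
    "Command injection": re.compile(r"(;|\||%3b)\s*(cat|ls|wget|curl)\b", re.I),
    "Scanning activity": re.compile(r"(nikto|nmap|sqlmap|/\.env|/etc/passwd)", re.I),
}

def categorize_request(request_line):
    """Return the first matching attack category, or None if no rule fires."""
    for category, pattern in ATTACK_PATTERNS.items():
        if pattern.search(request_line):
            return category
    return None

print(categorize_request("GET /item?id=1' OR 1=1--"))            # SQL injection
print(categorize_request("GET /search?q=test; cat /etc/passwd")) # Command injection
print(categorize_request("GET /index.html"))                     # None (benign)
```

Even this crude bucketing shows why categorized WAF events are easier to act on than raw web server logs.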
Feeding these categorized and correlated events into your SIEM or log monitoring tool can easily generate meaningful reports and alerts that can be acted on to enhance your overall security posture.
Shared Threat Intelligence
Another valuable function some WAF vendors provide is dynamic blacklists of bad actors who have exhibited malicious behavior on other websites. This threat intelligence can be particularly helpful in blocking low-bandwidth distributed denial-of-service (DDoS) attacks.
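Conceptually, enforcing such a feed boils down to checking each client address against the current set of bad-reputation networks. This sketch uses hypothetical networks from the reserved documentation ranges; a real vendor feed updates continuously and covers far more than static IP ranges.

```python
import ipaddress

# Illustrative blocklist using reserved TEST-NET documentation ranges;
# a real reputation feed would be refreshed dynamically by the vendor.
blocklist = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.7/32")]

def is_blocked(client_ip):
    """True if the client address falls inside any blocklisted network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in blocklist)

print(is_blocked("203.0.113.55"))  # True
print(is_blocked("192.0.2.10"))    # False
```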
Is a WAF the silver bullet to web application security? Not on its own.
My advice is to consider a WAF as one of many tools in your toolbox. When leveraged along with routine code reviews and active event monitoring, it can be very effective. However, if your goal is to stand it up, set it, and forget it, I would caution that you will likely not have favorable results.
Application security (AppSec) can be very daunting, especially for teams just starting to tackle it. There are countless tools available, but you may be asking, “Where do I begin?” Hopefully some of my experience will benefit you and get you on your way.
Successful AppSec cannot be merely a bolt-on task tacked on the end of the development workflow (DevOps + Sec). Developers must be fully engaged, and held accountable for writing and maintaining secure code (DevSecOps).
Over the last year, my security team has completely overhauled our approach to application security. This has yielded some significant wins, including improved timeliness in resolving security findings, developers having a better understanding of how their code directly impacts security, and most importantly, developers empowered to take ownership of application security by leveraging tools directly within their deployment workflow.
Application development is an iterative process. Development teams are primarily driven by deliverables. New features must be introduced and tested. Bugs are typically identified during testing and hopefully mitigated. The application update is finally deployed, and the process repeats itself.
Depending on the size of the organization and the amount of change control involved in deploying code into production, any bug can quickly introduce lengthy delays into the cycle. Not only does it take time for the developers to resolve the issue at the code level, but subsequent functional testing is also required to ensure no other issues have been unintentionally introduced. All this takes measurable time and effort.
Dynamic Application Scanning Tools
Previously, much of our AppSec effort was focused on dynamic application scanning tools (DAST). While they certainly have their place, which I’ll get into more later, a mature AppSec program needs a broader approach.
The first challenge we found with DAST tools was that dynamic application security testing could not begin until the code had been deployed into a testing or staging environment. Typically, at this phase of the development cycle, the application has moved from the developers over to quality assurance for functional and regression testing. Resolving any security findings at this point requires moving the application back into the development phase, thus restarting most, if not all, of the testing. This quickly becomes a time-consuming roadblock to deploying the application, with potential negative impact on timelines and deliverables.
Other challenges include the difficulty and level of effort required to automate dynamic testing into the build tools. Most dynamic application scanners require some assistance stepping through the application, often with valid test data, depending on the nature of the application. Typically this is done either by manually exercising the application from a web browser, or developing automation scripts that walk the scanner through the workflow of the application.
Finally, and perhaps the most significant challenge we encountered, was understanding and communicating the nature of the security findings. Because a DAST scanner tests the application externally, security findings are reported in terms of the behavior that was observed. This can make it difficult for developers to fully understand how the reported behavior relates to their code. In some cases we found that hours, or even days, were spent fully dissecting findings, replicating and demonstrating the behavior for the developers. We needed results that the developers could easily relate to.
Static Application Scanning Tools
Perhaps you’re thinking that static application scanning tools (SAST) are great, but you’ve already looked into some commercial products and they are cost-prohibitive. This can be true, but first let’s take a look at one free option that yielded tremendous results for us.
OWASP Dependency Checker
When writing modern web applications, developers no longer begin writing code from a blank slate. Typically, web application frameworks such as Spring, Grails, etc., are used as a foundation on which to build the application. This means that developers no longer need to write code for functionality common to most applications (for example, user authentication, parameter validation, etc.). These reusable functions, often thousands of lines of code, are contained within the framework, which the developer can easily integrate into their new application as a code dependency.
The good news for security folks is that, because of the prevalence of these application frameworks, there are many eyes constantly evaluating them, discovering and reporting vulnerabilities.
The OWASP Dependency Check tool focuses on OWASP Top 10 risk A9: Using Components with Known Vulnerabilities. It identifies dependencies in Java and .NET applications, checking them against vulnerability repositories to determine if there are any known, publicly-disclosed vulnerabilities. These findings are especially prevalent when developers link in older framework libraries without obtaining the latest version release. Often, resolving these dependency findings is as easy as updating the application to the latest framework versions.
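To make the tool's output actionable, the generated report can be post-processed into a simple list of affected libraries. The sketch below parses a Dependency-Check-style JSON report; the sample structure is simplified and the field layout is an assumption, so consult the tool's documentation for the full report schema.

```python
import json

# Simplified stand-in for a Dependency-Check JSON report. The real report
# (e.g. from --format JSON) contains many more fields per dependency.
sample_report = json.loads("""
{
  "dependencies": [
    {"fileName": "struts2-core-2.3.31.jar",
     "vulnerabilities": [{"name": "CVE-2017-5638", "severity": "CRITICAL"}]},
    {"fileName": "commons-lang3-3.12.0.jar"}
  ]
}
""")

def vulnerable_dependencies(report):
    """Return (file, CVE, severity) tuples for dependencies with known issues."""
    findings = []
    for dep in report.get("dependencies", []):
        for vuln in dep.get("vulnerabilities", []):
            findings.append((dep["fileName"], vuln["name"], vuln["severity"]))
    return findings

for file_name, cve, severity in vulnerable_dependencies(sample_report):
    print(f"{severity}: {file_name} is affected by {cve}")
```

Note the sample flags CVE-2017-5638, the Apache Struts vulnerability exploited in the Equifax breach, which is exactly the kind of dependency finding this workflow surfaces.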
We found that the effort of ensuring that all frameworks and third-party libraries are up to date has resulted in substantially fewer security findings from both DAST and SAST tools.
Once third-party code has been evaluated with this free OWASP tool, results from DAST and SAST tools will now focus primarily on findings related to code written by the application developers.
Preventing an Equifax Breach – Some security shops attempted to use host vulnerability scanners to detect Apache Struts. This proved difficult and unreliable at best, because Apache Struts is a framework incorporated into application code, not a server application component such as the Apache web server or OpenSSL.
The OWASP Dependency Check tool was perfect for quickly evaluating all of our application source code repositories to determine if the vulnerable Apache Struts dependency was present within any applications.
Commercial SAST Tools
While third party frameworks and libraries constitute large percentages of applications, it is still critically important to be testing the custom code that developers are writing into their applications. This is where commercial SAST products play an important role. Many of these tools look for vulnerabilities against all of the OWASP Top Ten risks, including SQL or OS command injection, faulty authentication or session management, cross-site scripting, and other security flaws.
As noted above, a significant benefit we found from implementing SAST was that findings were reported in terms meaningful to developers, in language they understand. For example: look at this highlighted line of code, and make this change to address a specific vulnerability. While some false positives still existed, conversations around each finding were abbreviated because the developers had a much better understanding of what the potential issue was and where it existed within their code.
In order for modern security tools to keep up with the pace of DevOps and rapid deployments, we must effectively integrate automated security into the SDLC workflow and application deployment process. By introducing these tools as additional steps in the automated build process with tools such as Atlassian Bamboo or Jenkins, security scans can be completed each time a developer checks code back into the repository and runs a build. The build can even be configured to fail if a clean dependency check or static scan is not reported.
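A build gate of this kind can be as simple as a script the build server runs after the scan, failing the step when findings exceed a severity threshold. This is a minimal sketch; the severity names, the finding structure, and the HIGH threshold are assumptions to be adapted to your scanner's actual output format.

```python
# Sketch of a CI "security gate": fail the build when any scan finding
# meets or exceeds a chosen severity. Severity names and the default
# threshold here are illustrative assumptions.
SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def build_should_fail(findings, threshold="HIGH"):
    """True if any finding is at or above the threshold severity."""
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[f["severity"]] >= limit for f in findings)

# Hypothetical findings as a scanner step might report them.
findings = [
    {"id": "XSS-1", "severity": "MEDIUM"},
    {"id": "SQLI-2", "severity": "CRITICAL"},
]

gate_failed = build_should_fail(findings)
print("Security gate failed: blocking the build." if gate_failed
      else "Security gate passed.")
# In a real Bamboo or Jenkins step, you would exit non-zero here so the
# build itself is marked as failed.
```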
Integration at this level is easily understood by developers. They already have a good understanding of the build process, and they are getting feedback from the security tools while they are still actively engaged and working on their app changes.
Many SAST products also offer deeper integration and automation features, such as integration directly within the developers’ IDE environment. This integration offers evaluation of code as the developer is writing it, not just at build time.
SAST IDE integration provides immediate feedback to the developer with suggestions for resolving findings, which can also serve as ongoing secure-coding education for the developers.
To accomplish true DevSecOps, and for application security to be effective, security testing must be automated into the deployment workflows. In my experience, static tools such as the OWASP Dependency Check tool and, if budgets allow, commercial SAST scanners are ideal candidates for automation. They require only access to application source code, and do not need the automation scripting or test data that complicate dynamic testing efforts.
Before you throw out DAST tools completely, they are great as an additional layer for conducting more thorough testing against web applications. However, realize that they can often be difficult to automate into the build process. For penetration testing, or other focused testing that allows for manual interaction with the application, DAST tools certainly provide worthwhile benefits: both in catching vulnerabilities that static tools may have missed, or in validating whether a static finding is indeed a false positive, or non-exploitable in certain circumstances.
If you don’t know where to begin, start small. Start running some dependency checks on your source branches. You may be surprised by the number of vulnerable libraries that are reported. Once the development teams are accustomed to keeping their dependencies up to date, consider conducting a proof of concept evaluation on a commercial SAST product.
You may find that the benefits easily justify the budget, especially if you can demonstrate a significant win from the dependency check effort.
Whatever steps you take, you will begin to get your arms around your overall application security posture and have a better understanding where you should be focusing efforts.