• Code will always have flaws. Lists will always be in tens. Appsec will always be necessary.

    Sometimes it will be effective.

    Let’s look at the history of the OWASP Top 10. It set the standard (more on this in a bit) for web security in the early 2000s. It normalized how we talked about vulns and raised awareness about how web apps were being compromised.

    And 20 years later it’s mostly the same. As a means to an end – a world of more secure apps – that doesn’t feel like success.

    If you love the list, carry on and use it to educate other appsec folks. As an awareness tool for appsec, it’s useful. But that’s also its limitation. In practice, its audience is essentially appsec.

    What we need are actionable references for building secure apps that serve the audience that builds them – developers.

    The List Begins

    Satire of the Trades

    The list originated in 2003 as the web’s parallel to the SANS Top 10 for Windows and Unix vulns, which collected the most popular ways those operating systems were compromised. It positioned itself as a means to raise awareness about impactful vulns. It even noted that many of the vulns

    …have been well understood for decades. Yet for some reason, major software development projects are still making these mistakes…

    We’ll come back to that sentiment in a moment. The introduction continues with a line, emphasized in bold text, that set up a decade or more of disconnect between appsec and development teams:

    The OWASP Top Ten is a list of vulnerabilities that require immediate remediation.

    This adds prioritization onto awareness, but it’s a prioritization with no criteria or decision points other than that it be immediate. There’s no means to evaluate risk or rank priorities against each other. It just sets the expectation that everything on the list must be found and fixed. Right now.

    The 2004 version tweaked some category titles and made a slight rearrangement to combine remote admin flaws into broken access control so a new item could appear: Denial of Service. The introduction also added a line that perhaps tempered the original’s “immediate remediation”:

    Existing code should be checked for these vulnerabilities immediately…

    It then followed that with an acknowledgement of how apps are built – there’s a design phase, requirements, implementation, testing. There’s time and budget that needs to be allocated for security processes. There are security practices that need to be planned and adopted. This is the paragraph for the audience the list should be focused on – developers.

    But the introduction also included a sentence that haunts it to this day (emphasis in the original):

    We encourage organizations to join the growing list of companies that have adopted the OWASP Top Ten as a minimum standard

    This turned the OWASP Top 10 from a security awareness starting point into a compliance buzzword. It was now a standard.

    The 2007 update tried to retcon this (emphasis in the original):

    This document is first and foremost an education piece, not a standard. Please do not adopt this document as a policy or standard without talking to us first!

    Too late. Just as development teams were struggling and frustrated with an immediate urgency about all things Top 10, now appsec teams were struggling with a sudden misappropriation of the list’s intent.

    The 2010 update again warned about falling into the trap of myopic adherence to the list or treating it as a standard. Every iteration since includes that warning. Perhaps one day it will successfully shed the moniker of standard.

    Prioritizing Everything Immediately Isn’t Prioritizing Anything

    In addition to continuing the campaign against being a standard, the 2010 update also tried to clarify terminology. The initial 2003 version was loose with terms, calling its items “mistakes” and “vulnerabilities”. The 2004 version leaned heavily into labeling them “weaknesses”. In 2010 these were called out as “risks”, along with a methodology that showed how their risk was derived.

    Yet there’s a suspicious observation bias where popular vulns rise to the top, possibly because that’s what researchers are prompted to look for. These researchers are also likely coming to web security from a proactive rather than a forensic perspective, meaning they’re identifying a wealth of vulns against an app vs. observing which vulns are actively exploited to compromise it. This isn’t bad or misleading data, just the data that’s most available.

    But it’s also a bias that adds counterproductive significance to a fix-all-the-vulns mentality. Developers need priorities that include factors like impact, exploitability, mitigating controls, and their own roadmaps.

    Two of the 2010 list’s metrics, Prevalence and Detectability, also appear curiously correlated. A vuln that’s easy to detect, like XSS, has a widespread prevalence. Are they widespread because they’re easy to detect? This question arises because the entry for Insecure Cryptographic Storage (A7) has a difficult detectability and (therefore?) uncommon prevalence. However, there were many breaches throughout 2010 from sites that mishandled password storage.¹

    Six of the list’s entries have easy detectability. If more than half of the risks to a web app are easy to find, why do they remain so widespread?

    If vulns like XSS are everywhere, maybe this means detectability isn’t so easy when dealing with large, complex systems. To be fair, another reason could be that you have to start looking for these vulns in the first place. After all, raising awareness to look for these vulns was an original intent of the list back in 2003.

    Compare the OWASP Top 10 with the Verizon Data Breach Investigations Report. The report’s view from 2020 was that less than 20% of web attacks relied on an exploit, whereas

    …over 80% of the breaches in this pattern [of basic web app attacks] can be attributed to stolen credentials.

    This doesn’t mean one reference is right or wrong. It means that threat models should be informed by actual attack scenarios as much as they’re informed by basic secure coding concerns like XSS. And it means that it’s important to distinguish issues that are implementation mistakes from those that are fundamental design problems.

    Design is where devs create barriers to protect data or insert controls that reduce the impact of a compromise.

    Secure Design, Secure Implementation

    Tools & Instruments

    One way to think about vulns is whether they are design or implementation errors. Design errors may be a fundamental weakness in the app. Implementation errors might just be a mistake that weakens an otherwise strong design.

    An example of a design error is cross-site request forgery (CSRF). CSRF exploits the natural, expected way a browser retrieves resources for HTML tags like iframe and img. The mechanism for this attack was present when the first form tags appeared in the 90s, but it didn’t reach critical mass of awareness until its inclusion in the Top 10 in 2007.

    SQL injection is another design error. It’s a subset of the ever-insecure commingling of code and data. Next to XSS, it’s the most “web” of web vulns.

    Secure design addresses a vuln class rather than tackling instances of it one by one.² Both CSRF and SQL injection have well-established design patterns that solve these vulns. A vuln might still occur, but it’s usually a mistake in implementation. In a successful appsec world, these two vulns would have disappeared because modern frameworks make the secure option the easy, default option.
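    As an illustration of that kind of design pattern, here’s a minimal sketch of a parameterized query, assuming the node-postgres (pg) client – the users table and the findUser helper are hypothetical placeholders:

    ```typescript
    import { Pool } from "pg";

    // Connection settings come from the standard PG* environment variables.
    const pool = new Pool();

    // Insecure: string concatenation lets user input rewrite the query's syntax.
    //   pool.query(`SELECT id, name FROM users WHERE name = '${name}'`)

    // Secure by design: the query text is fixed and user input is bound as a
    // parameter that the driver sends separately from the SQL.
    async function findUser(name: string) {
      const result = await pool.query(
        "SELECT id, name FROM users WHERE name = $1",
        [name]
      );
      return result.rows;
    }
    ```

    The point isn’t the specific driver – it’s that the framework makes the secure call the obvious one, so an injection bug becomes an implementation exception rather than the norm.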

    The 2021 list made this concept explicit with entry A4, Insecure Design. I’m a fan of this idea, but I’m not a fan of this entry. It’s like saying, “Write secure code” – superficially admirable, yet woefully incomplete. It’s basically a more diplomatic version of the list’s first incarnation that lamented how “projects are still making these mistakes…”

    Maybe we need fewer reminders of all these mistakes and more solutions that avoid these mistakes in the first place.

    From the beginning of the web, XSS seemed destined to be the unkillable cockroach never to disappear from the Top 10 list. It took an engineering project, not appsec awareness, to finally give XSS and HTML injection the secure by design treatment.

    When React appeared in 2013, it set the entire class of XSS vulns on a path towards extinction (still not there yet!) by making HTML and DOM manipulation secure by default. It allowed insecure code, but made it obvious – and therefore more easily detectable – with the conspicuously named dangerouslySetInnerHTML property. React and frameworks like it aren’t immune to XSS – they’ve had to fix occasional implementation errors – but they did what 10 years of awareness had failed to do: address a well-understood vuln in a way that centered developers’ needs.
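    As a rough sketch of that default (the component names here are hypothetical; only dangerouslySetInnerHTML is React’s own API):

    ```tsx
    import * as React from "react";

    // Secure by default: React escapes the value, so "<script>" renders as text.
    function Comment({ body }: { body: string }) {
      return <p>{body}</p>;
    }

    // The insecure path still exists, but its name makes it stand out in code
    // review or in a simple grep.
    function RawComment({ body }: { body: string }) {
      return <p dangerouslySetInnerHTML={{ __html: body }} />;
    }
    ```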

    Developers build sites. With React they could focus on building sites and, incidentally, have a vuln class taken care of by their default framework and tooling. This is the way.

    Here’s a handful of other projects I like to point to as examples of engineering solutions that made a significant, positive impact on security:

    • Dependabot – Automatically generate merge requests to update packages that have known vulns or are outdated. Providing a tool that supports existing dev processes is infinitely better than telling devs to keep packages up to date.

    • Let’s Encrypt – Provide free TLS certs so that HTTPS is feasible for anyone. Provide automation so that cert provisioning and rotation can easily accommodate short-lived certs and minimize human error. This was so much more successful than a decade of shaming sites into HTTPS followed by a decade of begging sites to use HTTPS.

    • SameSite cookie attribute – Move all the custom CSRF countermeasures out of individual frameworks and into the HTTP spec (a minimal sketch follows this list).

    • WebAuthn – Mutual authentication of users and sites that’s resistant to interception and replay attacks. As a bonus, disclosure of a site’s user keys is nearly inconsequential compared to the disclosure of a site’s password database.
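    To illustrate the SameSite item above, here’s a minimal sketch of setting a session cookie with the attribute, assuming Express – the cookie name and value are placeholders, and the equivalent raw header is shown in the comment:

    ```typescript
    import express from "express";

    const app = express();

    app.post("/login", (_req, res) => {
      // Equivalent header: Set-Cookie: session=abc123; HttpOnly; Secure; SameSite=Strict
      // With SameSite=Strict the browser won't attach the cookie to cross-site
      // requests, removing the ambient authority that CSRF relies on.
      res.cookie("session", "abc123", {
        httpOnly: true,
        secure: true,
        sameSite: "strict",
      });
      res.sendStatus(204);
    });
    ```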

    This doesn’t ignore that implementation errors happen. It just emphasizes that coding guidelines provide secure alternatives to common patterns, frameworks enable consistent usage of recommended techniques, and automated scans provide a degree of consistent verification.

    Still Not a Standard

    Yet this delineation of design and implementation still plays second fiddle to the siren song of the OWASP Top 10. All too often (at least, anecdotally) the phrase, “Does it scan for the OWASP Top 10?” or “How does this compare to the OWASP Top 10?” arises when discussing a scanner’s capability or the outcome of a penetration test.

    The list isn’t fully to blame. Its entries were also co-opted by security tools that used it as a reference to describe their coverage. It made sense to explain capabilities in terms users would be familiar with.

    The 2007 version included Common Weakness Enumeration (CWE) entries, which had just officially appeared the year prior. This further compounded the standard-like use of the OWASP Top 10 since now there were mappings of web weaknesses to common weaknesses, which all together made for long lists of security checks.

    There are over 900 CWEs. Mercifully, the 2021 list only maps to 182 of them.

    I like CWEs for their history. CSRF evolved from the “Confused Deputy” described in 1988. SQL injection and HTML injection have ancestors in Unix command injection used against SMTP and finger daemons.

    But I would never dump them on devs as part of vuln writeups. CWEs have their place in terminology and evaluating coverage in security scanners. They’re a useful standard in that way and a useful encyclopedia of what can go wrong in apps. But they’re just a longer, more boring list than the Top 10. More awareness on top of more awareness hasn’t led to more secure code.

    Awareness for Appsec, Action for Developers

    If you care about your site’s security, engage your devs in the site’s design and architecture. Use automation and manual testing to verify its implementation. Keep the OWASP Top 10 list around as a reference for vulns that plague web apps.

    Casiotron

    As an experiment, take a Top 10 list and invert its risks and weaknesses into a prescriptive list. Then consider how actionable they are or how much additional context is needed to make them useful to your devs and your apps. You could do this with any version. Here’s how it might look for 2021:

    • M1. Apply the SameSite=strict attribute to all cookies with an authentication or security context.

    • M2. Require TLS 1.2 as the default for all connections. Prefer TLS 1.3.

    • M3. Use functions that explicitly prevent data from changing the syntax or semantics of code execution. That’s a clunky phrase, but it means things like prepared statements and exec-style calls that bind arguments to command options in order to prevent unintended execution via pipes or semicolons. Lots of times input validation really means using correct functions for an execution interface.

    • M3b. Encode data for the appropriate context before it is rendered in the client (see the sketch after this list). Lots of times input validation really means output encoding.

    • M4. Create a secure design. (This still feels too generic and unactionable, like saying, “Write secure code.”)

    • M5. Use Infrastructure as Code to define cloud environments. (Which sounds slightly more useful than, “Configure the platform securely.”)

    • M6. Keep third-party packages and dependencies up to date. Use semver to influence update cycles based on minor versions and recency.

    • M7. Support multi-factor authentication with workflows and messaging that encourage its adoption.

    • M8. Build secure packages. (Back to that generic guidance that feels less actionable.)

    • M9. Run tabletop exercises to test your processes and inter-team communications against various types of incidents.

    • M10. Use IMDSv2 for AWS EC2 instances.
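    For M3b, here’s a minimal sketch of output encoding for an HTML text context – the encodeHtml helper is hypothetical, and frameworks like React apply this kind of encoding by default:

    ```typescript
    // Hypothetical helper: encode user data so it can't introduce new tags or
    // attributes when rendered into HTML.
    function encodeHtml(input: string): string {
      return input
        .replace(/&/g, "&amp;") // escape & first so the entities added below aren't double-encoded
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
    }

    const comment = "<script>alert(1)</script>";
    console.log(encodeHtml(comment)); // &lt;script&gt;alert(1)&lt;/script&gt;
    ```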

    These start to describe actions that devs can take rather than a list of things not to do.

    Ultimately, the OWASP Top 10 will raise awareness among an appsec audience who want to understand common flaws and know what not to do.

    But when it comes to building a web app and talking to an audience of developers about what to do, turn to the OWASP Application Security Verification Standard (ASVS). It still has a checkbox-like flavor to it, but it’s a far better starting point than a generic list of mistakes to avoid.


    1.  This used to point to a long list of password breaches, most of which didn’t even hash the passwords. But more importantly, I dislike how this item mixes password storage and encrypting records at rest. Securing passwords relies on hashing and concepts like work factors. You don’t store passwords to recover the original plaintext, you compare hashes. Encrypting records is important, but it only addresses narrow threat models. If a service that processes that data is compromised – meaning it can decrypt the data in storage or in memory – then the plaintext data can be accessed.
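    As a minimal sketch of that distinction, assuming the bcrypt npm package – the helper names are hypothetical, but the pattern is store the salted hash and compare later, never decrypt:

    ```typescript
    import bcrypt from "bcrypt";

    // The work factor makes each guess expensive for an attacker who steals
    // the stored hashes.
    const COST = 12;

    async function storePassword(plaintext: string): Promise<string> {
      return bcrypt.hash(plaintext, COST); // salted hash, safe to store
    }

    async function checkPassword(plaintext: string, storedHash: string): Promise<boolean> {
      return bcrypt.compare(plaintext, storedHash); // compare hashes, never recover plaintext
    }
    ```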

    2. I like to call this BugOps, where a team chases individual vulns or bug bounty reports as opposed to stepping back to reconsider their app’s design. I mentioned this back in a presentation in 2017. 

    • • •
  • Lake View at Engelsberg, Västmanland

    Curl is one of my favorite open source projects. We marked its 25th anniversary in the news segment of ASW episode 233.

    I’ve used Curl as a command-line tool, a library, and as a positive example of how to maintain a community. Daniel Stenberg has done a wonderful job of maintaining the project and fostering a positive atmosphere around it. His blog provides lots of insights into the development process and how software engineers make informed decisions.

    Curl has wonderful documentation – a necessity for a tool with almost 250 command-line options. I also appreciate that it documents its own history. Its development has been consistent over the decades, with an ever-improving list of features and performance.

    Its development has also reflected major milestones in the web ecosystem, such as supporting HTTP/2 in 2014, becoming part of the OSS-fuzz effort to secure critical software in 2017, and supporting HTTP/3 in 2019.

    Curl is also an example of why C code will be around for quite a long time – many other languages rely on the library and can easily integrate with its C-based API. It’s also an example of how C can be written securely. Two major security challenges of working with C are safely handling memory and concurrency. The code has had a few stumbles in both, but nothing to the degree that should cause anyone to lose confidence in its underlying design.

    Here’s to several more decades of developer-friendly code and user-friendly tools.

    • • •
  • D&D minis

    It can be fun to go into an interview cold – there’s an appealing energy that comes from the uncertainty of not knowing what’s going to happen next. That’s also why I enjoy role-playing games so much. As a DM, you can set up a combat encounter or introduce an NPC, then embrace the chaos as players hurl their characters in completely unexpected directions. Combine that with the merciless randomness of dice rolls and you have a recipe for grand amusement.

    But it’s also helpful to plan for chaos, whether from a dungeoncrawl or interview.

    Prep calls are essential to making an interview entertaining and informative. Ideally, it’s a conversation that feels dynamic and natural. The worst thing to do is ask a question, passively wait for an answer, ignore the substance of that answer, and carry on to the next question.

    Here’s a rough outline of my approach:

    • Be flexible. Explore the topics the guest is passionate and knowledgeable about. Sometimes we’ll start with one topic, only to discover a tangent that would be more interesting.
    • Use open-ended questions to prompt clear explanations or strong opinions. It takes practice to reformulate questions from yes/no formulas into “why” or “how” ones that generate conversations.
    • Probe for interesting or unique insights. This may also reveal areas to avoid. It’s hard to give specific examples here since it relies on the context of the topic, but I usually find that questions based on “What does that mean for X?” or “Why does that matter?” work well.
    • Anecdotes are good. If responses tend to be generalities or platitudes, ask for examples of the topic in practice, such as how they’ve seen a problem solved, a tool implemented, or a strategy succeed.
    • Anecdotes of lessons learned from mistakes are also good. Plus, failures are often entertaining. Here I pay attention to the tone of the answer. Something like, “They were all idiots,” isn’t really helpful or educational. Something like, “We didn’t anticipate X” or “We tried to apply a process for X when it’s better for Y” is more useful.
    • Listen for themes or framing devices as they answer.

    During the prep I skip around a lot as I build a picture, but in the interview I’ll try to stick to themes and a flow that builds a story. Stories and conversations are more engaging than dry Q&A. This also means I may reorder questions from how we went through them in the prep call.


    Ultimately, I look for some sort of narrative in terms of problem, complication, and solution or background, conflicts, and resolution. Some examples might be:

    • What’s the problem? Why is it such a problem? How should we think of solutions?
    • You tried X, then Y. You learned Z. In hindsight, what would you do differently?

    One of the traps of asking too many followup questions or searching for a narrative is that it may constrain the guest to a rigid path. They have insights and knowledge to share. Let them reveal what that is rather than trying to guess it through questions. Thus, I always ask, “Is there something we didn’t cover that you want to mention?”

    If they seem likely to be nervous during the interview, I’ll repeat some seed questions so they have an idea of what to expect.

    Finally, I explain that we’ll close out the segment with a call to action or shout out of their choice. I’ll ask what they’re working on or what they want to draw attention to. Sometimes this also helps me refine questions during the interview so they build up to this point.

    To recap, I go into every prep call with a plan to:

    • Ask what they’re passionate about.
    • Ask many short questions to gather context and background so the subsequent interview can be a more natural conversation.
    • Develop a narrative arc.
    • During the interview, actively listen to the guest’s responses and use them to flow into followup questions.

    The way I prep for interviews is closely tied to the format we use on ASW. They’re intended to highlight the guest’s expertise, put them in a good light, understand their opinions, and draw out their personality. If the format were different, I’d keep many of the principles, but would adjust as necessary to the context. But in every case, being prepared makes for a better interview and, perhaps surprisingly, one that can be even more spontaneous.

    For more about how I approach the podcast, check out the style guide.

    • • •