• The Bogey-Owl

    The Advanced Persistent Threat (APT) is competing with Cyberwar for security word of the year. It would have been nice if we had given other important words like HTTPS or prepared statements their chance to catch enough collective attention to drive security fixes. Alas, we still deal with these fundamental security problems due to Advanced Persistent Ignorance.

    It’s not wrong to seek out APT examples. It helps to have an idea about their nature, but we must be careful about seeing the APT boogeyman everywhere.

    Threats have agency. They are persons (or natural events like earthquakes and tsunamis) that take action against your assets (like customer information or encryption keys). An XSS vuln in an email site isn’t a threat – the person trying to hijack an account with it is. With this in mind, the term APT helpfully self-describes two important properties of a threat:

    • it’s persistent
    • it’s advanced

    Persistence is uncomplicated. The threat actor has a continuous focus on the target. This doesn’t mean around-the-clock port scanning just waiting for an interesting port to appear. It means active collection of data about the target as well as development of tools, techniques, and plans once a compromise is attained. Persistence implies patience in searching for “simple” vulns as much as enumerating resources vulnerable to more sophisticated exploits.

    A random hacker joyriding the internet with sqlmap or Metasploit may be persistent, but the persistence is geared towards finding any instance of a vuln rather than finding one instance of a vuln in a specific target. It’s the difference between a creep stalking his ex versus a creep hanging out in a bar waiting for someone to get really drunk.

    The advanced aspect of a threat leads to more confusion than its persistent aspect. An advanced threat may still exploit simple vulns like SQL injection. The advanced nature need not even lie in the exploit itself (like using sqlmap). What may be advanced is how the threat actor leverages the exploit. Remember, the threat actor most likely wants to do more than grab passwords from a database. With passwords in hand it’s possible to reach deeper into the target network, steal information, cause disruption, and establish more permanent access.

    Stolen passwords are one of the easiest backdoors and the most difficult to detect. Several months ago RSA systems were hacked. Enough information was allegedly stolen that observers at the time imagined it would enable attackers to spoof or otherwise attack SecurID tokens and authentication schemes.

    Now it seems those expectations have been met with not one, but two major defense contractors reporting breaches that apparently used SecurID as a vector.

    I don’t want you to leave without a good understanding of what an insidious threat looks like. Let’s turn to the metaphor and allegory of television influenced by the Cold War.

    Specifically, The Twilight Zone season 2, episode 28, “Will the Real Martian Please Stand Up?” written by the show’s creator, Rod Serling.

    Spoilers ahead. Watch the episode before reading further. It’ll be 25 minutes of entertainment you won’t regret.

    The set-up of the episode is that a Sheriff and his deputy find possible evidence of a crashed UFO, along with very human-like footprints leading from the forested crash site into town.

    The two men follow the tracks to a diner where a bus is parked out front. They enter the diner and ask if anyone’s seen someone suspicious. You know, like an alien. The bus driver explains the bus is delayed by the snowy weather and they had just recently stopped at this diner. The lawmen scan the room, “Know how many you had?”

    “Six.”

    In addition to the driver and the diner’s counterman, Haley, there are seven people in the diner. Two couples, a dandy in a fedora, an old man, and a woman. Ha! Someone’s a Martian in disguise!

    What follows are questions, doubt, confusion, and a jukebox. With no clear evidence of who the Martian may be, the lawmen reluctantly give up and allow everyone to depart. The passengers re-board the bus.1 The sheriff and his deputy leave. The bus drives away.

    But this is the Twilight Zone. It wouldn’t leave you with such a simple parable of paranoia; there’s always a catch.

    The man in the fedora and overcoat, Mr. Ross, returns to the diner. He laments that the bus didn’t make it across the bridge. (“Kerplunk. Right into the river.”)

    Dismayed, he sits down at the counter, cradling a cup of coffee in his left hand. The next instant, with marvelous understatement, he pulls a pack of cigarettes from his overcoat and extracts a cigarette – using a third hand to retrieve some matches.

    We Martians (he explains) are looking for a nice remote, pleasant spot to start colonizing Earth.

    Oh, but we’re not finished. Haley nods at Mr. Ross’ story. You see, the inhabitants of Venus thought the same thing. In fact, they’ve already intercepted and blocked Mr. Ross’ Martian friends in order to establish a colony of their own.

    Haley smiles, pushing back his diner hat to reveal a third eye in his forehead.

    That, my friends, is an advanced persistent threat!


    1. The server rings up their bills, charging one of them $1.40 for 14 cups of coffee. I’m not sure which is more astonishing – drinking 14 cups or paying 10 cents for each one. 

    • • •
  • I finished the original May 2011 version of this article and its linguistic metaphor a few days before coming across an article that described research showing the feasibility of identifying language patterns over encrypted channels.

    One goal of an encryption algorithm is to create diffusion of the original content in order to camouflage that content’s structure. For example, diffusion applied to a long English text, say one of Iain M. Banks’ novels, would reduce the frequency of the letter e from the most common one to (ideally) an equally common frequency within the encrypted output (aka ciphertext).

    The confusion property of an encryption algorithm would obscure meaning with something like replacing every letter e with the letter z, but that wouldn’t affect how frequently the letter appears – hence the need for diffusion.
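
    A quick way to see the difference is to count letters. In this toy sketch (pure standard-library Python, not real cryptography; novel.txt is a placeholder for any long English text), a substitution cipher supplies confusion but no diffusion, so the plaintext’s frequency structure survives intact:

        from collections import Counter
        import string

        plaintext = open("novel.txt").read().lower()  # any long English text

        # Substitution only: shift each letter by one (e -> f, z -> a).
        subst = str.maketrans(string.ascii_lowercase,
                              string.ascii_lowercase[1:] + "a")
        ciphertext = plaintext.translate(subst)

        def top_letters(text, n=3):
            counts = Counter(c for c in text if c in string.ascii_lowercase)
            return counts.most_common(n)

        print(top_letters(plaintext))   # 'e' dominates, as expected for English
        print(top_letters(ciphertext))  # 'f' dominates -- same structure, shifted

    A cipher with good diffusion would flatten those counts toward uniform, leaving nothing for frequency analysis to grab.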

    Check out the Communication Theory of Secrecy Systems by Claude Shannon for the original (and far superior) explanation of these concepts.

    There have been analyses of SSL and SSH that demonstrated how it was possible to infer the length of the plaintext from the encrypted content (which might reveal the length of a password even if the password itself is not known) or guess whether content was HTML or an image. The Skype analysis is a fantastic example of looking for structure within an encrypted stream and making inferences from those observations.
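
    The length inference is easy to demonstrate with a toy stream cipher (a sketch only, not how SSL or SSH actually frame records): XORing with a keystream preserves length exactly, so an eavesdropper learns the plaintext’s size unless the protocol pads it.

        import os

        def stream_encrypt(plaintext: bytes) -> bytes:
            keystream = os.urandom(len(plaintext))  # stand-in for a real keystream
            return bytes(p ^ k for p, k in zip(plaintext, keystream))

        for pw in (b"cat", b"correct horse battery staple"):
            print(len(pw), len(stream_encrypt(pw)))  # lengths match: 3 -> 3, 29 -> 29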

    • • •
  • CSRF and Beyond

    Two Trees

    Identifying CSRF vulns is more interesting than just scraping HTML for hidden fields or forging requests. CSRF stems from a design issue of HTTP and HTML. An HTML form is effectively vulnerable to CSRF by default. That design is a positive feature for sites – it makes many types of interactions and use cases easy to create. But it also leads to unexpected consequences.

    A passive detection method that is simple to automate looks for the presence or absence of CSRF tokens. However, scraping HTML is prone to errors and generates noisy results that don’t scale well for someone dealing with more than one app at a time. This approach also just guesses at the identity of a token; it doesn’t verify that the token is valid or even relied upon by the app. And unless the page is examined after JavaScript has updated the DOM, this technique misses dynamically generated tokens, form fields, and forms.
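
    As a rough sketch of the passive approach (the URL and the token-name hints below are assumptions; real apps vary widely, which is exactly why this technique is noisy), fetch a page and flag forms lacking a hidden field whose name looks token-ish:

        import re
        import urllib.request

        TOKEN_HINT = re.compile(r"csrf|token|nonce|authenticity", re.I)

        html = urllib.request.urlopen("https://web.site/profile").read().decode()

        for form in re.findall(r"<form\b.*?</form>", html, re.S | re.I):
            # Crude: matches any input tag mentioning 'hidden'.
            hidden = re.findall(r"<input[^>]*hidden[^>]*>", form, re.I)
            if not any(TOKEN_HINT.search(h) for h in hidden):
                print("possible missing CSRF token:", form[:60], "...")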

    An active detection method that can be automated is one that replays requests under different user sessions. This approach follows the assumption that CSRF tokens are unique to a user’s session, such as being derived from the session cookie or another pseudo-random value. There’s also a secondary assumption that concurrent sessions are possible. It also requires a browser to deal with dynamic JavaScript and DOM manipulation.

    This active approach basically swaps forms between two sessions for the same user. If the submission succeeds, then it’s more likely request forgery is possible. If the submission fails, then it’s more likely a CSRF countermeasure has blocked it. There’s still potential for false negatives if some static state token or other form field wasn’t updated properly. The benefit of this approach is that it’s not necessary to guess the identity of a token and it’s explicitly testing whether a request can be forged.
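
    A minimal sketch of the swap, using the third-party requests library (the URLs, credentials, and field names here are hypothetical):

        import requests

        def login() -> requests.Session:
            s = requests.Session()
            s.post("https://web.site/login", data={"user": "tester", "pass": "secret"})
            return s

        session_a, session_b = login(), login()  # two concurrent sessions, same user

        # Form data, including its CSRF token, harvested from session A's copy
        # of the page (parsing elided).
        form = {"email": "new@example.com", "csrf_token": "token-from-session-a"}

        resp = session_b.post("https://web.site/profile/update", data=form)
        if resp.ok:
            print("forged request accepted - likely CSRF-vulnerable")
        else:
            print("swapped token rejected - countermeasure present")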

    As more countermeasures come to rely on the Origin header, the replay approach might be as simple as setting an off-origin value for this header. A server will either reject or accept the request. This would be a nice, reliable detection as well as a simple, strong countermeasure. Modern browsers released after 2016 support an even better countermeasure – SameSite cookies.
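
    Server-side, the countermeasure can be as terse as the detection. A sketch using Flask (an assumed framework; adapt to taste) that rejects off-origin state-changing requests and sets a SameSite session cookie:

        from flask import Flask, abort, make_response, request

        app = Flask(__name__)
        ALLOWED_ORIGIN = "https://web.site"

        @app.before_request
        def check_origin():
            # Reject state-changing requests with a missing or foreign Origin.
            if request.method in ("POST", "PUT", "DELETE"):
                if request.headers.get("Origin") != ALLOWED_ORIGIN:
                    abort(403)

        @app.route("/login", methods=["POST"])
        def login():
            resp = make_response("ok")
            # SameSite keeps the browser from attaching this cookie to
            # cross-site requests in the first place.
            resp.set_cookie("session", "opaque-value", secure=True,
                            httponly=True, samesite="Strict")
            return resp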

    WhiteHat Security described one way to narrow the scope of CSRF reporting from any form whatsoever to resources that carry risk for a user. I’ve slightly modified their three criteria to be resources:

    • with a security context or that cross a security boundary, such as password or profile management
    • that deliver an HTML injection (XSS) or HTTP response splitting payload to a vulnerable page on the target site. This answers the question for people who react to those vulns with, “That’s nice, but so what if you can only hack your own browser.” This seems more geared towards increasing the risk of a pre-existing vuln rather than qualifying it as a CSRF. We’ll come back to this one.
    • where sensitive actions are executed, such as anything involving money, updating a contact list, or sending a message

    There’s an interesting aspect in WhiteHat’s “benign” example. To summarize, imagine a site with a so-called non-obvious CSRF, one XSS vuln, one Local File Inclusion (LFI) vuln, and a CSRF-protected file upload form. The attack uses the non-obvious CSRF to exploit the XSS vuln, which in turn triggers the file upload to exploit the LFI. For example, the attacker creates the JavaScript necessary to upload a file and exploit the LFI, places this payload in an image tag on an unrelated domain, and waits for a victim to visit the booby-trapped page so their browser loads <img src="https://target.site/xss_inject.page?arg=payload">.

    This attack was highlighted as a scenario where CSRF detection methods would usually produce false negatives because the vulnerable link, https://target.site/xss_inject.page, doesn’t otherwise affect the user’s security context or perform a sensitive action.

    Let’s review the three vulns:

    • Ability to forge a request to a resource, considered “non-obvious” because the resource doesn’t affect a security context or execute a sensitive action.
    • Presence of HTML injection, HTTP Response Splitting, or other clever injection vuln in said resource.
    • Presence of Local File Inclusion.

    Using XSS to upload a file isn’t necessarily a vuln (the XSS is, but not the file upload). There’s nothing that says JavaScript within the Same Origin Rule (under which the XSS falls once it’s reflected) can’t use XHR to POST data to a file upload form. In this case it also doesn’t matter if the file upload form has CSRF tokens because the code is executing under the Same Origin Rule and therefore has access to the tokens.
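
    The in-browser payload would be JavaScript, but the principle translates to any client that can read the page. A sketch with the requests library (hypothetical URLs and field names) playing the part of code running inside the victim’s session:

        import re
        import requests

        s = requests.Session()  # stands in for the victim's authenticated browser

        # Any code that can read the form page can read the token...
        page = s.get("https://target.site/upload_form").text
        token = re.search(r'name="csrf_token" value="([^"]+)"', page).group(1)

        # ...so the subsequent upload carries a perfectly valid token.
        s.post("https://target.site/upload",
               data={"csrf_token": token},
               files={"file": ("exploit.txt", b"payload for the LFI")})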

    I think these two recommendations would be made by all and accepted by the site developers as necessary:

    • Fix the XSS vulnerability using recommended practices (let’s assume the arg variable is simply reflected in xss_inject.page) – a minimal fix is sketched after this list
    • Fix the Local File Inclusion (by verifying file content, forcing MIME types, not making the file readable)
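
    For the first recommendation, the basic case is a one-liner in most languages. A minimal Python sketch, assuming arg lands in an HTML text context:

        from html import escape

        def render_page(arg: str) -> str:
            # escape() turns & < > " ' into entities, so a payload like
            # "><script>...</script> renders as inert text.
            return f"<p>You searched for: {escape(arg, quote=True)}</p>"

        print(render_page('"><script>alert(1)</script>'))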

    But it was CSRF that started us off on this attack scenario. This leads to the question of how the “non-obvious” CSRF should be reported, especially from an automation perspective:

    • Is a non-obvious CSRF vuln actually obvious if the resource has another vuln like XSS? Does the CSRF become non-reportable once the other vuln has been fixed?
    • Should a non-obvious CSRF vuln be obvious if it has a query string or form fields that might be vulnerable?

    If you already believe CSRF countermeasures belong on every page, then clearly you would have already marked the example vulnerable just by inspection because it didn’t have an explicit countermeasure. But what about those who don’t follow the tenet that CSRF lurks everywhere? For example, maybe the resource doesn’t affect the user’s state or security context.

    Think about pages that use “referrer” arguments. For example:

    https://web.site/redir.page?url=https://from.here

    In addition to possibly being an open redirect, these are prime targets for XSS with payloads like

    https://web.site/redir.page?url=javascript:arbitrary_payload()

    It seems that in these cases the presence of CSRF just serves to increase the XSS risk rather than be a vuln on its own. Otherwise, you risk producing too much noise by calling any resource with a query string vulnerable. In this case CSRF provides a rejoinder to the comment, “That’s a nice reflected XSS, but you can only hack yourself with it. So what.” Without the XSS vuln you probably wouldn’t waste time protecting that particular resource.
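
    When the XSS does make the resource worth protecting, the fix worth making is to the redirect itself. A sketch that rejects javascript: payloads and off-site destinations alike (the allowlist entries are placeholders):

        from urllib.parse import urlparse

        ALLOWED_HOSTS = {"web.site", "partner.example"}

        def safe_redirect_target(url: str) -> str:
            parts = urlparse(url)
            # Rejects javascript:, data:, protocol-relative tricks, and
            # redirects to arbitrary hosts.
            if parts.scheme in ("http", "https") and parts.hostname in ALLOWED_HOSTS:
                return url
            return "https://web.site/"  # safe default

        print(safe_redirect_target("https://web.site/account"))        # allowed
        print(safe_redirect_target("javascript:arbitrary_payload()"))  # fallback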

    Look at a few of the other WhiteHat examples. They clearly qualify as CSRF vulns – changing a shipping address, password reset mechanisms.

    What’s interesting is that they seem to require race conditions or to happen during specific workflows to be successful, e.g. execute the CSRF so the shipping address is changed before the transaction is completed. That neither detracts from the impact nor obviates it as a vuln. Instead, it highlights a more subtle aspect of web security: state management.

    Let’s set aside malicious attackers and consider a beneficent CSRF actor. Our scenario begins with an ecommerce site. The victim, a lucky recipient in this case, has selected an item and placed it into a virtual shopping cart.

    1. The victim (lucky recipient!) fills out a shipping destination.

    2. The attacker (benefactor!) uses a CSRF attack to apply a discount coupon.

    3. The recipient supplies a credit card number.

    4. Maybe the web site is really bad and the benefactor knows that the same coupon can be applied twice. A second CSRF applies another discount.

    5. The recipient completes the transaction.

    6. Our unknown benefactor looks for the new victim of this CSRF attack.

    I chose this Robin Hood-esque scenario to take your attention away from the malicious attacker/victim formula of CSRF to focus on the abuse of workflows.

    A CSRF countermeasure would have prevented the discount coupon from being applied to the transaction, but that wouldn’t fully address the underlying issues here. Consider the state management for this transaction.

    One problem is that the coupon can be applied multiple times. During a normal workflow the site’s UI leads the user through a check-out sequence that must be followed. On the other hand, if the site only prevented users from revisiting the coupon step in the UI, then the site’s developers have forgotten how trivial it is to replay GET and POST requests. This is an example of a state management issue where an action that should be performed only once can be executed multiple times.

    A less obvious problem of state management is the order in which the actions were performed. The user submitted a discount coupon in two different steps: right after the shipping destination and right after providing payment info. In the UI, let’s assume the option to apply a discount shows up only after the user provides payment information. A strict adherence to this transaction’s state management should have rejected the first discount coupon since it arrived out of order.

    Sadly, we have to interrupt this thought to address real-world challenges of web apps. I’ve defined a strict workflow as (1) shipping address required, (2) payment info required, (3) discount coupon optional, (4) confirm transaction required. A site’s UI design influences how strict these steps will be enforced. For example, the checkout process might be a single page that updates with XHR calls as the user fills out each section in any order. Conversely, this single page checkout might enable each step as the user completes them in order.

    UI enforcement cannot guarantee that requests are made in order. This is where decisions have to be made regarding how strictly the sequence is to be enforced. It’s relatively easy to have a server-side state object track these steps and only update itself for requests in the correct order. The challenge is keeping the state flexible enough to deal with users who abandon a shopping cart, or decide at the last minute to add another item before completing the transaction, or a multitude of other actions that affect the state. These aren’t insurmountable challenges, but they induce complexity and require careful testing. This trade-off between coarse state management and granular control is more a balance of correctness than of security. You can still have a secure site if steps can be performed in the order 3, 1, 2, 4 rather than the expected 1, 2, 3, 4.
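
    A sketch of what that server-side state object might look like for the workflow above (the names are hypothetical; a real implementation would live in the session store):

        from enum import IntEnum

        class Step(IntEnum):
            START = 0
            SHIPPING = 1
            PAYMENT = 2
            CONFIRMED = 3

        class Checkout:
            def __init__(self):
                self.step = Step.START
                self.coupon_used = False

            def set_shipping(self, addr):
                if self.step != Step.START:
                    raise ValueError("shipping already set")
                self.step = Step.SHIPPING

            def set_payment(self, card):
                if self.step != Step.SHIPPING:
                    raise ValueError("payment requires a shipping address first")
                self.step = Step.PAYMENT

            def apply_coupon(self, code):
                # Enforce both order (only after payment) and replay (only
                # once), regardless of what the UI happens to allow.
                if self.step != Step.PAYMENT:
                    raise ValueError("coupon arrived out of order")
                if self.coupon_used:
                    raise ValueError("coupon already applied")
                self.coupon_used = True

            def confirm(self):
                if self.step != Step.PAYMENT:
                    raise ValueError("nothing to confirm")
                self.step = Step.CONFIRMED

    Against this object, the out-of-order coupon from the scenario’s step 2 is rejected outright, and the replay check caps the discount at a single application.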

    CSRF is about requests made in the victim’s session context by the victim’s browser on behalf of the attacker (initiated from an unrelated domain) without the victim’s interaction. If a link, iframe, image tag, or JavaScript causes the victim’s browser to make a request that affects that user’s state in another web site, then the CSRF attack succeeded. The conceptual way to fix CSRF is to identify forged requests and reject them. CSRF tokens are intended to identify legitimate requests because they’re a shared secret between the site and the user’s browser. An attacker who doesn’t know the secret can’t forge a legitimate request.
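
    A sketch of that shared secret in practice: derive the token from the session ID with a server-side key and compare it in constant time on every state-changing request (key handling is simplified here; in practice the key persists across restarts):

        import hashlib
        import hmac
        import secrets

        SERVER_KEY = secrets.token_bytes(32)  # in practice: persistent and protected

        def csrf_token(session_id: str) -> str:
            return hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()

        def valid_request(session_id: str, submitted: str) -> bool:
            return hmac.compare_digest(csrf_token(session_id), submitted)

        sid = "abc123"
        token = csrf_token(sid)             # embedded in forms served to this session
        print(valid_request(sid, token))    # True: legitimate submission
        print(valid_request(sid, "guess"))  # False: forgery without the secret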

    These attacks highlight the soft underbelly of web app state management mechanisms.

    Automated scanners should excel at scalability and consistent accuracy, but woe to those who believe they fully replace manual testing. Scanners find implementation errors like forgetting to use prepared statements or not encoding output placed in HTML, but they struggle with understanding design flaws. Complex interactions are more easily understood and analyzed by manual testing.

    CSRF stands astride this gap between automation and manual testing. Automation identifies whether an app accepts forged requests, whereas manual testing can delve deeper into underlying state vulns or chains of exploits that CSRF might enable.

    • • •