A couple of months ago, Brandon Sterne of the Mozilla Security Team asked me some questions about NoScript's internals, because he was developing a Firefox add-on which involved selective script-blocking.

Looks like he finally delivered: Site Security Policy is a proof of concept for an idea proposed by RSnake and turned into a specification by Gervase Markham, known as "Content Restrictions".

A Site Security Policy is defined by the website administrator and communicated to the web browser as a set of special "X-SSP-..." HTTP headers, either attached to the affected content or sent in response to a "discovery" HEAD request (a hypothetical example follows the list below):

  • X-SSP-Script-Source

    specifies a deny/allow list of hosts which are allowed to run scripts.
    If this header is sent, no embedded script is allowed to run, and only included scripts whose sources match the rules are executed. This is an effective anti-XSS countermeasure, and could be extremely useful for so-called "Web 2.0" sites featuring user-generated rich content.

  • X-SSP-Request-Source

    lists the hosts which can or cannot send HTTP requests to a certain resource, and the "acceptable" HTTP verbs.
    This can help enforce referrer-based checks against CSRF attacks, checks which keep working even if the user chooses to omit or spoof the Referer header for privacy reasons, and can mitigate verb-tampering attacks when they are used to "enhance" CSRF.

  • X-SSP-Request-Target

    limits the destinations of requests originated by the current page.
    This may help mitigate the data-leakage outcomes of a successful XSS attack, e.g. by preventing authentication tokens from being logged to remote hosts, and it also keeps the page from being used as a platform for CSRF attacks and blocks the inclusion of unwanted 3rd party content.

  • X-SSP-Report-URI

    declares a URL where policy violation attempts should be logged by the browser.
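
Purely as an illustration, here is how a minimal Python WSGI application might attach such a policy to its responses. The header names come from the list above, but the value syntax is my own guess for the sake of the example, not the one defined by the specification:

    # Hypothetical sketch only: the X-SSP-* values below are invented to show
    # the mechanics of attaching policy headers to a response.
    from wsgiref.simple_server import make_server

    def application(environ, start_response):
        body = b"<html><body>Hello, restricted world.</body></html>"
        headers = [
            ("Content-Type", "text/html; charset=utf-8"),
            # only scripts loaded from these hosts may run; embedded scripts are denied
            ("X-SSP-Script-Source", "allow self scripts.example.com"),
            # only these hosts (and verbs) may send requests targeting this resource
            ("X-SSP-Request-Source", "allow self; verbs GET POST"),
            # requests originated by this page may only reach these destinations
            ("X-SSP-Request-Target", "allow self static.example.com"),
            # where a compliant browser should report policy violations
            ("X-SSP-Report-URI", "https://www.example.com/ssp-report"),
        ]
        start_response("200 OK", headers)
        return [body]

    if __name__ == "__main__":
        make_server("localhost", 8080, application).serve_forever()

A real deployment would of course emit whatever syntax the specification mandates, ideally from a single central place in the web application or in the web server configuration.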

If you want to start applying these restrictions to your web content, you'll find a detailed yet simple reference* with examples on Brandon's project web site.
Implementing a Site Security Policy is no substitute for web developers' security awareness and best practices, but it's nonetheless a big step forward in making a website safer for its users.

Obviously enough, to be generally effective this technology still needs to be evangelized to administrators and coders, correctly deployed, and supported in a consistent cross-browser fashion. But as soon as it gets built into our favourite browser and we begin to see badges like "Browsing this site is safer with Firefox", we can hope other vendors will join in making the Web a safer place.

* Update:

Site Security Policy changed its name to "Content Security Policy", and it dropped its anti-CSRF features to focus on XSS prevention only.
Details have been relocated here.

20 Responses to “Site Security Policy, AKA Content Restrictions”

  1. #1 Gareth Heyes says:

    Unless the default is deny all, I can see these headers being pretty useless; we'll have to break the web first in order to make it secure. I also hope there's not a global allow option either.

  2. #2 Andre Gironda says:

    This is good news

  3. #3 nobody says:

    If those "restrictions" are sent with the HTTP headers, why can't the attacker send their own, thus disabling this feature?

    And why do I need to allow JavaScript to post a comment? :-)

  4. #4 Exec says:

    @nobody
    afaik, this isn't made to protect against "man in the middle" type of attacks.

    I'd guess it's that irritating captcha...

    @Gareth
    "If this header is sent, no embedded script is allowed to run, and only included scripts whose sources match the rules are executed."
    The default is deny "if" the header is sent, at least that's how I read it.

  5. #5 Giorgio says:

    @Gareth Heyes:
    well, any sane use of a Site Security Policy is obviously default deny, and Exec's interpretation of X-SSP-Script-Source as a whitelist is correct.
    The "global allow" option is the way web content behaves nowadays, and that's why these were called "Content Restrictions".

    @nobody:
    For the attacker to send his own headers, he would need to exploit a response splitting vulnerability or a pre-content injection point, which is possible but extremely rare compared with "regular" in-content XSS holes. Man-in-the-middle attacks are clearly out of the scope of this technology, but you can regard SSP (against application-level tampering) + SSL (against wire-level tampering) as a very strong combo of complementary security features.
    And you do not need to allow JavaScript to post a comment here: you just need to have either both hackademix.net and recaptcha.net disabled (this way the iframe-based captcha fallback kicks in), or both enabled.
    BTW, I'm trying to fix the ReCaptcha plugin code so that the fallback works even if you've got hackademix.net allowed and forgot to whitelist recaptcha.net.

  6. #6 Awesome AnDrEw says:

    I read a post on Jeremiah Grossman's blog about this topic, but think it should be created in a similar fashion to the crossdomain.xml file. Perhaps if the file does not exist in the root directory of the website the browser could interpret that as a "global denial"?

  7. #7 Giorgio says:

    @Awesome AnDrEw:
    I tend to agree that an option to use a crossdomain.xml-style policy file could be desirable as a simple way to centralize security concerns.
    On the other hand, headers are probably more flexible, allowing finer-grained policies.
    The log spam objection is a moot point if you know how to set up log filters.
    A winning choice may be allowing both styles and resolving possible conflicts to the most restrictive setting (roughly sketched below).
    Defaulting to "global denial" if the policy file does not exist seems impossible for the foreseeable future, though, because it would just break the "original concept" of the Web (a collection of freely linkable documents), while this specification is meant to "fix" it in its "almost abusive" incarnation (an application platform), allowing administrators and developers to define application boundaries.
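
    Just to sketch what I mean by "most restrictive" (hypothetical Python pseudo-logic, not anything from the draft): if both a site-wide policy file and a per-resource header supplied a script-source whitelist, a compliant browser could simply keep their intersection:

        # Hypothetical sketch: resolve a conflict between a policy-file whitelist
        # and a header whitelist by keeping only the hosts allowed by *both*.
        def most_restrictive(policy_file_hosts, header_hosts):
            if policy_file_hosts is None:      # only one source present,
                return set(header_hosts)       # nothing to reconcile
            if header_hosts is None:
                return set(policy_file_hosts)
            return set(policy_file_hosts) & set(header_hosts)

        print(most_restrictive({"example.com", "cdn.example.com"},
                               {"example.com", "ads.example.net"}))
        # prints: {'example.com'}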

  8. #8 nobody says:

    I didn't mean MITM.

    If just _one_ script on a host is vulnerable so that response splitting (or similar) is possible, the attacker can push new Restrictions (like "none") to the victim, and then happily execute other forms of attack which should be prevented by those Restrictions.

    How does the browser know the Restrictions it receives are valid/genuine? Will it update its per-host restrictions silently if it encounters changed ones? Or does it drop new ones, making changes to the Restrictions impossible? Or does it ask the user (which breaks security completely)?

  9. #9 Giorgio says:

    @nobody:
    The current model has no "per-host" restrictions.
    All the restrictions are "per-resource", even though I can imagine they're gonna be defined more often than not in a coarse-grained fashion, using filters or includes, depending on the web application framework.
    The Request-Source header has an expire attribute for caching, but the others are overwritten as soon as you get fresh content.
    You're right on the point that once one page is compromised the XSS protection becomes almost ineffective, but it's nevertheless much better than the current situation, where the attack surface is enormously wider.
    A way to mitigate tampering risks may be mandating that X-SSP headers be sent at the very beginning of the response and falling back to the most restrictive policies if some header is specified twice.
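
    A rough sketch of that fallback (my own Python pseudo-logic, not anything Brandon has implemented): collect the X-SSP headers in the order they arrive and, if any of them shows up twice, treat the policy as fully restrictive:

        # Hypothetical "fail closed on duplicates" rule: a duplicated X-SSP header
        # (a possible sign of header injection) collapses the policy to deny-all.
        DENY_ALL = {"X-SSP-Script-Source": "deny all",
                    "X-SSP-Request-Target": "deny all"}

        def effective_policy(raw_headers):
            """raw_headers: list of (name, value) tuples, in the order received."""
            policy = {}
            for name, value in raw_headers:
                if not name.startswith("X-SSP-"):
                    continue
                if name in policy:         # duplicated SSP header: assume tampering
                    return dict(DENY_ALL)
                policy[name] = value
            return policy

        print(effective_policy([("X-SSP-Script-Source", "allow self"),
                                ("X-SSP-Script-Source", "allow evil.example.com")]))
        # prints the deny-all policy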

  10. #10 Gareth Heyes says:

    @Giorgio

    Yes, I'm aware of that, but my point is that if a global allow option exists (by not including the header) then it will fail. Force developers to specify a list of domain names; if they don't, then none are allowed. The browser needs to force the developer not to be lazy.

  11. #11 Giorgio says:

    @Gareth:
    If not sending any SSP header means "maximum restrictions", you're basically proposing a server-driven NoScript, where any site which doesn't implement the new specification gets potentially broken and users cannot do anything to work around it.
    Since this technology aims to become a standard, implemented by default in every browser, I believe backward compatibility is crucial for smooth acceptance.
    The way I see it:

    • Security-aware users who want "default-deny" to be the policy applied everywhere just keep using NoScript.
    • The SSP component itself should highlight pages with content restrictions, just like we do with SSL, training users to avoid performing sensitive transactions on sites which don't implement both SSP (guarded boundaries) and SSL (integrity and, to a certain extent, identity).
    • If a "trusted" site is both SSL and SSP compliant, allowing it in NoScript's whitelist may be considered an easier choice.
  12. #12 Gareth Heyes says:

    @Giorgio

    Yep, that's the only way to enforce the change as far as I can see it; otherwise we have security researchers and banks with well implemented security restrictions, and everyone else with the same problem we have today. It will break the web and users will be annoyed, but at the same time it would be a fundamental shift in internet security. If we force deny all it could work; if not, then I can see another PGP situation (great technology, but the majority don't implement it).

    NoScript is a fantastic program, but I feel it shouldn't be needed and, like you say, the servers should become the NoScript of the internet.

    An idea to smooth this transition would be to start requiring sites to have a crossdomain policy within the W3C validation; as web designers are already used to validating for XHTML, why not include the security policy in there as well?

  13. #13 Giorgio says:

    @Gareth:

    I'm more for pushing it as a "shame on you" soft requirement for "serious" web applications (through some 3rd party "validation" badge like you suggest) than as a general browser-enforced requirement for all web sites (even the read-only ones), many of which actually value inbound and outbound links very much (directories, search engines, blogs), and even liberal application-level relationships (mashup APIs, anyone?).

  14. #14 Gordon Mohr says:

    Would X-SSP-Script-Source prevent people from inserting their own third-party or offsite scripts into sites? (As might happen via a bookmarklet or add-on.)

    Would X-SSP-Request-Source be usable by sites which don't want any deep-linking, replacing the referrer-based blocks they now sometimes put in place?

  15. #15 Giorgio says:

    @Gordon Mohr:

    Would X-SSP-Script-Source prevent people from inserting their own third-party or offsite scripts into sites? (As might happen via a bookmarklet or add-on.)

    It will depend on the final implementation, I guess, since it's out of the scope of the current specification.
    Giving users a chance to override SSP in cases like this is probably desirable, but I'm afraid it may not be technically trivial.

    Would X-SSP-Request-Source be usable by sites which don’t want any deep-linking, replacing the referrer-based blocks they now sometimes put in place?

    Yes, of course, but I'm not sure there would be a great gain: you would still face fallback issues with non-compliant or SSP-disabled browsers, as you currently do with anonymous or spoofed referrers.
    But maybe SSP compliance would be easier to require from your users "for their own good" than reliable referrers ;)

  16. #16 kuza55 says:

    "mitigating verb-tampering attacks" - wait, what?
    How are you proposing that client-side controls will mitigate server-side issues?

  17. #17 Giorgio says:

    @kuza55:
    Thanks, I did not mean to talk about any "generic" verb tampering initiated from a user agent under the direct control of the attacker, who would obviously bypass any client-side restriction.
    What I was hinting at is that, since X-SSP headers allow administrators to declare HTTP verb restrictions, they could be used to enforce a default-deny policy for unused verbs, so that those verbs cannot be exploited in the context of a CSRF attack (a rough sketch follows).
    I'll update the post accordingly.
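
    To make the idea concrete (header syntax and enforcement details are my own assumption, not the actual specification): a compliant browser that knows a resource declares only GET and POST would simply refuse to issue any other verb against it:

        # Hypothetical browser-side enforcement: only the verbs explicitly declared
        # by the resource's policy are allowed; everything else is denied by default.
        DECLARED_VERBS = {"GET", "POST"}   # as declared by the (assumed) policy

        def allow_request(verb):
            return verb.upper() in DECLARED_VERBS

        for verb in ("GET", "POST", "PUT", "DELETE", "TRACE"):
            print(verb, "allowed" if allow_request(verb) else "blocked by policy")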

  18. #18 alanjstr says:

    1) So does this mean you're going to be incorporating SSP into NoScript until it becomes baked into the browser?
    2) Are you working to get some of the NoScript protections baked into the browser so that people don't have to always get an addon?

  19. #19 Giorgio says:

    @alanjstr:
    First of all, long time no see, nice to have you back :)
    1) It depends on how fast the SSP experimentation with its own extension goes. It would make some sense, though, because NoScript users would probably be willing to experiment with this stuff.
    2) It's quite unlikely: SSP is the most a browser can do without appearing "rude" to web site owners. Blacklist-based adblocking is more likely to be built into a browser than whitelist-based script control.

  20. #20 hackademix.net » Replace What?! says:

    [...] anybody know what this XeroBank guy is talking about? SPP can’t obviously stand for Site Pecurity Policy. It wouldn’t make sense (spelling and grammar aside) because SSP is not meant and not going [...]

  21. #21 hackademix.net » NoScript's Anti-XSS Filters Partially Ported to IE8 says:

    [...] that post, calling for adoption of his own bright Content Restrictions idea, he forgot to say that one (experimental) implementation already exists… Do these cross-site scripting filters suppress legitimate cross-site references as well? [...]
