Archive for December, 2008

2008 has not been a happy year for web security, especially regarding the trust you can place in the identity of the web sites you visit:

  1. Dan Kaminsky shook the world’s faith in DNS. BTW, you’ve already checked your DNS resolver for hardening or switched to OpenDNS, haven’t you? Anyway, DNS security or not, you cannot trust non-SSL traffic when you’re traveling, or you’re behind a proxy you can’t control (TOR, for instance), or otherwise not using a trusted ISP… wait, do you really trust your ISP? OK, you should not trust non-SSL traffic, period.
  2. But then, Mike Perry demonstrated how cookies can be stolen from SSL-secured sites (and NoScript deployed some countermeasures).
  3. Unfortunately, a shameful incident revealed that you can easily buy a valid SSL certificate for a web site you have no relation to, if you find a vendor unscrupulous enough: in this case, a certificate was obtained by Eddy Nigg of StartCom Ltd. from the Certstar Comodo reseller, no questions asked. Of course, as a work-around, you could remove the offending CA root, but you must expect side effects (I discovered this breaks cleverbridge e-commerce back-ends, for instance). And, most importantly, are you sure this is the only sloppy CA out there?
  4. As if this didn’t suck enough, a talk was given today at 25C3 by Alex Sotirov, Arjen Lenstra and other high-profile researchers, who managed to leverage known MD5 weaknesses and the not-safe-enough practices of some certificate issuers to build their own rogue CA.

The implications of the 3rd and 4th scenarios are scary: as long as these issues stand, trusting internet transactions is an act of faith.
CAs definitely need to move their asses, performing and proving their due diligence on “basic validation” when issuing a proof of identity (which a certificate is), rather than focusing on overpriced “premium services”. Obsolete technologies like MD5 in SSL certificates must be deprecated and banned, both by CAs and browser vendors, as soon as possible.
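As a sketch of what such a ban could look like in tooling, here’s a minimal, entirely hypothetical policy check. The banned-hash list is my own assumption, not any CA’s or browser vendor’s official list; a real implementation would extract the algorithm name from the certificate’s signatureAlgorithm field.

```python
# Hypothetical sketch: reject certificates signed with deprecated hashes.
# The banned set below is an assumption for illustration, not an official policy.
WEAK_SIGNATURE_HASHES = {"md2", "md4", "md5"}

def signature_hash_acceptable(name: str) -> bool:
    """True unless the certificate's signature hash is on the banned list."""
    return name.lower() not in WEAK_SIGNATURE_HASHES
```

A browser applying such a rule would simply refuse (or warn about) any chain containing an MD5-signed certificate.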

In the meantime, there’s not much we as end users can do, other than checking for a sudden and unjustified change in the SSL certificate of a site we usually do business with, and that’s not simple either, because there’s no built-in browser alert of the kind we’ve got in SSH clients, for instance. Anyway, some help can come from the Perspectives add-on for Firefox.
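For illustration, here’s a rough Python sketch of such an SSH-style “known hosts” check; the store format and function names are invented for this example, not part of any browser or add-on.

```python
# Minimal sketch of SSH-style certificate pinning: remember a site's
# certificate fingerprint and alert if it suddenly changes.
import hashlib
import socket
import ssl

def cert_fingerprint(host: str, port: int = 443) -> str:
    """Fetch the server certificate and return its SHA-256 fingerprint."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def check_pin(store: dict, host: str, fingerprint: str) -> str:
    """Compare against the previously pinned fingerprint, SSH-style."""
    known = store.get(host)
    if known is None:
        store[host] = fingerprint  # first contact: trust and remember
        return "new"
    return "ok" if known == fingerprint else "CHANGED"
```

A “CHANGED” result is exactly the sudden, unjustified certificate change mentioned above, and the point where a browser alert would be welcome.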

Even if Perspectives’ primary and most advertised aim is enabling SSH-style certificate “validation” for self-signed certificates (those not issued by an established certification authority), it can be configured to act as a second validation layer for CA-signed certificates too, by checking their consistency from multiple internet nodes (called “Notaries”) and/or over time:

  • Install the Perspectives add-on (if you are not a Firefox user, get Firefox first).
  • Open Firefox’s Tools|Add-ons menu item, then select the Perspectives row and click the Options button.
  • In the Preferences panel of the Perspective options window, check Contact Notaries for all HTTPS sites.
  • Optionally clear the Allow Perspectives to automatically override security errors checkbox if you’re not interested in managing self-signed certificates.
  • Optionally modify, in the Security Settings box, the required quorum (the fraction of Notaries which must agree) and the number of days this quorum must have been reached for.
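To make the quorum setting concrete, here’s a toy model; the exact semantics Perspectives uses are assumptions here, not a reimplementation of the add-on.

```python
# Toy model of a notary quorum check (assumed semantics): at least a
# `quorum` fraction of notaries must have seen the same certificate
# fingerprint for at least `min_days` days.
def quorum_satisfied(notary_views, fingerprint, quorum=0.75, min_days=2):
    """notary_views maps notary name -> (fingerprint_seen, days_seen)."""
    agreeing = sum(1 for fp, days in notary_views.values()
                   if fp == fingerprint and days >= min_days)
    return agreeing / len(notary_views) >= quorum
```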

This way you should obtain some protection against rogue but “valid” certificates.
Happy new year!

Fennec Alpha2 is out, breaking compatibility with current NoScript releases because of a change in the Browser object hierarchy.
If you’re using NoScript on Fennec (or you want to give it a try), please get the NoScript development build.
A stable AMO release should follow in a day or two.
Update: upgrade to NoScript 1.8.8.

During the past weeks I’ve started a new project called ABE, sponsored by the NLnet Foundation and meant to provide CSRF countermeasures configurable on the client side, the server side or both.

As you probably know, the NoScript browser extension improves web client security by applying a Default Deny policy to JavaScript, Java, Flash and other active content, and by providing users with a one-click interface to easily whitelist the sites they trust for active content execution. It also implements the most effective Cross-Site Scripting (XSS) filters available on the client side, covering Type-0 and Type-1 XSS attacks; ClearClick, the only specific browser countermeasure currently available against ClickJacking/UI redressing attacks; and many other security enhancements, including a limited form of protection against Cross-Site Request Forgery (CSRF) attacks: POST requests from non-whitelisted (unknown or untrusted) sites are stripped of their payload and turned into idempotent GET requests.
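As an illustration (not NoScript’s actual code), the CSRF downgrade just described boils down to something like:

```python
# Sketch of the POST-to-GET downgrade: requests from non-whitelisted
# origins lose their payload and become idempotent GETs, so they can no
# longer silently alter server-side state.
def neutralize_request(method, url, body, origin, whitelist):
    """Downgrade an untrusted cross-site POST to a payload-free GET."""
    if method == "POST" and origin not in whitelist:
        return ("GET", url, None)
    return (method, url, body)
```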

Many of the threats NoScript is currently capable of handling, such as XSS, CSRF or ClickJacking, have one common evil root: lack of proper isolation at the web application level. Since the web was not originally conceived as an application platform, it lacks some key features required for ensuring application security. Actually, it cannot even define what a “web application” is, or declare its boundaries, especially if they span multiple domains, a scenario becoming more and more common in these “mashup” and “social media” days.

The idea behind the Application Boundaries Enforcer (ABE) module is to harden the web-application-oriented protections already provided by NoScript, by developing a firewall-like component running inside the browser. It will specialize in defining and guarding the boundaries of each sensitive web application relevant to the user (e.g. webmail, online banking and so on), according to policies defined by the user himself, by the web developer/administrator, or by a trusted 3rd party.

ABE rules, whose syntax is defined in this specification (pdf), are quite simple and intuitive, especially if you’ve ever looked at a firewall policy file:

# This one defines normal application behavior, allowing hyperlinking
# but not cross-site POST requests altering app status
# Additionally, pages can be embedded as subdocuments only by documents from
# the same domain (this prevents ClickJacking/UI redressing attacks)
Site *
Accept POST SUB from SELF
Accept GET

# This one guards logout, which is foolish enough to accept GET and
# therefore we need to guard against trivial CSRF (e.g. via <img>)
Site www.example.com/logout
Accept GET POST from SELF
Deny

# This one guards the local network, like LocalRodeo
# LOCAL is a placeholder which matches all the LAN
# subnets (possibly configurable) and localhost
Site LOCAL
Accept from LOCAL
Deny

# This one strips off any authentication data
# (Auth and Cookie headers) from requests outside the
# application domains, like RequestRodeo
Site *
Anonymize ALL from *
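To give an idea of how such rules could be evaluated, here’s a toy interpreter for a drastically simplified subset; the real matching semantics are defined by the ABE specification, and everything below (the tuple format, first-match behavior, default Accept) is an assumption for illustration only.

```python
# Toy evaluator for an ABE-like ruleset. Each rule is (site_pattern,
# predicates); each predicate is (action, allowed_methods, allowed_origins),
# with None meaning "any". The first matching predicate decides the action.
def evaluate(rules, site, method, origin):
    """Return 'Accept' or 'Deny' for a request; default is Accept."""
    for rule_site, predicates in rules:
        if rule_site not in ("*", site):
            continue  # this rule does not govern the requested site
        for action, methods, origins in predicates:
            if methods is not None and method not in methods:
                continue
            if origins is not None and origin not in origins:
                continue
            return action
    return "Accept"
```

For example, a logout-guarding rule would accept GET/POST only from the application itself ("SELF") and deny everything else.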

Living inside the browser, the ABE component can take advantage of its privileged placement to enforce web application boundaries: it always knows the real origin of each HTTP request, rather than having to rely on a possibly missing or forged (even for legitimate privacy reasons) HTTP Referer header, and it can learn from user feedback.
Rules for the most popular web applications will be made downloadable and/or available via automatic updates for opt-in subscribers, and UI front-ends will be provided to edit them manually or through a transparent auto-learning process while browsing. Additionally, web developers or administrators will be able to declare policies for their own web applications: ABE will honor them, unless they conflict with more restrictive user-defined rules.
As soon as browser support for the Origin HTTP header becomes widespread and reliable, an external version of ABE might be developed as a filtering proxy.
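Such a proxy-side check could be as simple as the following sketch; the header handling and the fail-open choice for legacy browsers are assumptions for illustration, not part of ABE.

```python
# Sketch of an Origin-header check a filtering proxy (or server) could
# perform: state-changing requests whose Origin falls outside the
# application's own origins get rejected.
def origin_allowed(headers, trusted_origins):
    """Return True if the request's Origin is acceptable for this app."""
    origin = headers.get("Origin")
    if origin is None:
        # No Origin header (legacy browser or privacy filter):
        # failing open here is a policy choice, not a requirement.
        return True
    return origin in trusted_origins
```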

An initial implementation will be released during the 1st quarter of 2009 as a NoScript module.
I already collected precious feedback from security researchers like Arshan “Anti-Samy” Dabirsiaghi, Ivan Ristic of ModSecurity fame, Sirdarckcat and others.
More opinions and suggestions about rules design and features are very welcome.

Yesterday and today we’ve got a blizzard of web browser security updates:

Microsoft zealots are taking this as evidence that all browsers are equally insecure, and that there’s therefore no reason to switch (from IE) for security purposes (advice which, on the other hand, is starting to spread even in mainstream media).
This is quite a debatable statement, if you think about it.
IE’s vulnerability, being a zero day, is actively exploited in the wild by thousands of compromised web sites and puts several millions of users worldwide at risk, while both Firefox’s and Opera’s are still embargoed.
Firefox will be automatically updated for its users before the bad guys can analyze and exploit the patched vulnerabilities. That’s effective patching. Opera is in slightly worse shape, since its update mechanism is not fully automated (it requires users to manually download and install the new version). Microsoft has already failed this time, because the vulnerability had already been known and exploited for more than a week.
Right, zero day situations can happen to any software product, and Opera and Firefox might face a similar shitstorm tomorrow. But, even so, there are some interesting differences:

  1. Patching policies: Microsoft implements a predictable monthly patching cycle. This is probably good for corporate IT departments, which can carefully plan the so-called “black Tuesday” to minimize their troubles, but it’s also good for evildoers and security attention-bitches, who can carefully plan their exploits or disclosures to maximize their impact. Zero-day critical vulnerabilities in three different Microsoft products were disclosed immediately after the last “black Tuesday”: is this really a coincidence?
    Firefox and Opera, on the other hand, issue security updates whenever they’re ready and tested.
  2. Agility: as everybody knows, Internet Explorer is tightly coupled with the underlying Windows OS platform, and this makes both mitigation and fixing more difficult. In this case, for instance, the suggested work-around required not just hardening the browser itself by blocking scripts and plugins, but also disabling a system-wide data access component (OLEDB): this affected not just surfing the web, with many sites inaccessible or malfunctioning, but also most Windows applications relying upon databases.
  3. Viable ad-interim mitigation: even if a browser vulnerability doesn’t involve system-wide components, mitigation until a patch is available almost always requires disabling JavaScript and/or plugin content (the latter is often used to circumvent security features like Vista’s DEP). On IE, such a work-around is hardly acceptable, since “Security Zones”, the mechanism available to selectively change the security level of certain pages, is very obtrusive and almost unusable (yes, way worse than UAC). Opera is friendlier, thanks to its “Site Preferences”, which let users quickly change site permissions for JavaScript, Java, Flash and so on. Of course, only a minority of Opera users actually configure a default-deny policy to selectively allow active content on trusted sites only. However, even those savvy users are suddenly out of luck if they grant permissions to a site which is vulnerable to XSS: an attacker could circumvent script and plugin blocking by injecting his malicious code there, where it’s allowed to run. But if you use Firefox and you install NoScript, you get a safe default-deny policy configured out of the box, and your trusted whitelist is effectively enforced notwithstanding site flaws, thanks to Anti-XSS Protection: JavaScript and other active content will run only where you want them to run.

To summarize: all browsers can have vulnerabilities and all equally need timely patching, but not all users are equally vulnerable.

Microsoft’s “clarification” on the various workarounds for the recent Internet Explorer security debacle.
