Archive for the Google Category

[Chart: Share of most secure browser versions]

According to an independent study by Google Switzerland, IBM Internet Security Systems and CSG ETH Zurich, Mozilla Firefox users are (on average) the safest among web surfers, because they are more likely to be running the latest and most secure version of their browser.
This research analyzed the user agent headers sent with Google search queries between January 2007 and June 2008 (lots of data points!), finding that more than 83% of the surveyed Firefox browsers were up-to-date. Safari scored 65.3%, Opera 58.1% and IE, not surprisingly, was the worst with 47.6% (it should be noted, though, that IE6 has been considered, rightly, an "insecure version").

The most important factor in this achievement is probably Firefox's streamlined patching process, which is painless and hard to avoid: in fact, security updates are downloaded in the background and proposed to the user as soon as they're ready. Users can decline the installation (e.g. to avoid interrupting their work), but as soon as the browser restarts the updates get installed nonetheless.
There's obviously room for improvement. For instance, upgrading requires administrative privileges, so a warning shown to low-permission users, saying something like "You're running an outdated version of Firefox, please ask your administrator to upgrade", would be helpful. But even so, Firefox already shows a stunning lead over its competitors.

One of the declared limits of this study is that nothing could be said about browser plugins, universally recognized as an endless source of security pain. Even on this side, though, Firefox has some clear advantages: plugins can be disabled either manually, from the Tools|Add-Ons|Plugins panel, or automatically through a centralized blacklist. Last but not least, if you're really security minded, you can always adopt a whitelist approach.

Researcher NKTPRO does not like the way Yahoo! manages security reports.

Last year he discovered an XSS vulnerability in Yahoo! Mail, allowing attackers to steal Yahoo! accounts. After asking for "para-legal" advice, he decided to do the right thing and go for responsible disclosure. Communication was described as "very good" in the beginning, but almost two months later it wasn't clear whether the bug had been fully fixed yet, and in any case no public acknowledgment of the problem nor credit to the reporter was given.

By contrast, Google maintains a dedicated communication channel for security researchers, is known to fix reported issues promptly, and publicly thanks reporters.

Some weeks ago, NKTPRO found another XSS vulnerability affecting Yahoo! blogs, and this one was even worse: persistent, CSS-based and working with IE6, IE7 and Firefox 2 (unless NoScript was installed), it could enable attackers to build worms spreading through Yahoo! networks at a potentially very fast pace. Since our hero is apparently a nice guy, he decided to give Yahoo! a second chance, filing a responsible report again. But after waiting one month, frustrated by his counterpart's (by now expected) lack of responsiveness, he gave up and went for full disclosure, greeted by the almost unanimous approval of his fellow sla.ckers.

After full disclosure, the month-old bug was fixed in 3 days.

"Full vs responsible disclosure" is a potentially endless debate, but here we can see two different "corporate styles", Yahoo!'s and Google's, eliciting different reactions from whitehat hackers and ultimately leading to different results:

  1. You can be open about your issues and your security processes, and "reward" reporters, not necessarily with money prizes, which may become dangerous when they feed an anonymous, uncontrolled vulnerability brokerage market. Most of these guys would just appreciate their name attached to your security page, for the glory and something interesting to add to their CV. In turn, you get valuable bug reports with practical proof of concepts, and a reasonable time frame to make your users safer and run regression tests.
  2. Or you can decide to discourage confidential reports, either by threatening legal consequences for "testers" or simply by refusing to credit their findings publicly. It may work once, but as soon as it's clear that responsible disclosure is not an option, you will be forced to track every full-disclosure forum out there and play catch-up in a rush, because your vulnerabilities are already public and script kiddies may be busy with your users (good luck with code quality).

So, "big brother" concerns aside, do you feel safer with a Yahoo! Mail account or a GMail one?

This morning I was toying with an idea for easing the way NoScript handles sub-objects and sub-scripts which, despite being 1st party content, are offloaded to different domains for performance reasons.
One prominent example is YouTube, which recently started serving its scripts from a separate domain, requiring NoScript users who want to watch videos to whitelist both domains.
Now the idea, probably too naive not to be a dead end, was to correlate domains by "ownership", using real-time and cached WHOIS queries: sub-content whose Registrant information matches the top-level page site's would be allowed to load if the latter is trusted.
Database (in)accuracy aside, this approach is too coarse-grained to fit: how many NoScript users would be happy to put two sites in the same basket merely because they share a registrant?
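The correlation step itself is easy to sketch. Assuming the raw WHOIS text has already been fetched (WHOIS speaks plain text over TCP port 43), a hypothetical registrant extractor might look like this -- the field labels are illustrative, and they vary wildly between registrars, which is part of why the idea is fragile:

```python
import re
from typing import Optional

def registrant(whois_text: str) -> Optional[str]:
    """Extract a registrant identifier from raw WHOIS output.

    Field labels are not standardized; this only recognizes a couple of
    common variants, which is exactly why this approach is fragile.
    """
    for label in ("Registrant Organization", "Registrant Name", "Registrant"):
        m = re.search(rf"^{label}:\s*(.+)$", whois_text,
                      re.MULTILINE | re.IGNORECASE)
        if m:
            return m.group(1).strip().lower()
    return None

def same_owner(whois_a: str, whois_b: str) -> bool:
    """The coarse-grained 'ownership' check discussed above."""
    a, b = registrant(whois_a), registrant(whois_b)
    return a is not None and a == b
```

Even when both records are accurate, this happily lumps together every property of a large registrant, which is precisely the "same basket" objection above.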
Anyway, playing for a few minutes with the "meta-server" where WHOIS client programs look up the server responsible for a given .com domain yielded some amusing results:

[ma1@groucho]$ cat >wtf && chmod 700 wtf
#!/bin/bash
# query the .com WHOIS meta-server on port 43 for each argument
# (host name is a best guess; the original reference was lost)
while [ ! -z "$1" ]; do
  exec 3<>/dev/tcp/whois.internic.net/43
  echo -e >&3 "$1"
  egrep -i "$1\.\w+\." <&3
  shift
done

The amazing thing is that this data is not even meant for human consumption!

[Image: Canopic jar]

The recent disclosure by pdp of the jar: protocol bug -- originally discovered and responsibly reported by Jesse Ruderman in February, with a redirect variant discovered and popularized by Beford through a nice Google-targeted proof of concept -- spawned some interesting 3rd party coverage.
Interesting, because very few 3rd party reporters and commenters seem to truly understand how this vulnerability works, and -- worse -- because of the quite nonsensical advice given to protect users.

How the Vulnerability Works

The jar: protocol is used internally by Mozilla browsers to resolve and address resources stuffed inside optionally compressed archives in Zip format called JARs (Java ARchives).

A JAR URL looks like this (the address is illustrative):

jar:http://example.com/archive.jar!/inner/page.html

As you can see, after the jar: scheme we've got a regular http: URL pointing at the archive, followed by an exclamation mark separator and the internal path used to find the actual resource inside the archive.

When a Mozilla browser is asked to open a jar: URL, it first downloads the whole JAR file from the server with a regular HTTP GET request for the nested URL, then extracts the required resource from the archive on the client side.

All good and handy, but here's the problem: the jar: protocol currently assumes that any nested URL following the jar: scheme actually points to a JAR, no matter what the actual Content-Type header or any other file type hint (e.g. the file extension) suggests.
This means that I can stuff a malicious HTML page of mine inside a Zip file, rename it as "ma1.jpg" and upload it as my avatar on a message board.
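The trick works because Zip is identified by its content, not its file name. A quick sketch (file and entry names are made up, and the archive is kept in memory):

```python
import io
import zipfile

# Build an archive containing a malicious HTML page entirely in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("evil.html", "<script>alert(document.cookie)</script>")

# "Renaming" the result to ma1.jpg changes nothing: the bytes are still a
# perfectly valid Zip archive, so a jar: URL pointing at the uploaded
# "image" can happily extract evil.html from it.
disguised = buf.getvalue()
print(zipfile.is_zipfile(io.BytesIO(disguised)))  # True: the .jpg name fools nobody
```

Any upload filter that checks only the extension or the served Content-Type is blind to this.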
Then I trick some user into opening my malicious page directly through a jar: URL pointing at the uploaded avatar.
Worse, my page's JavaScript code will run in the same domain as the hosting site, hence I'm cross-site scripting the message board.
To make it even more nightmarish, I don't actually need the target website to be a message board allowing file uploads: I just need an open redirect, which can be found on Google and many other "safe" places, because my disguised JAR file will inherit the security context of my victim's redirector even if it's hosted on a website of mine. That's how Beford's proof of concept works.
Did you say "Universal XSS"?

What Firefox Users Can Do

At least until a Firefox patch is released, Firefox users should install the latest NoScript stable release or, even better, help us test the latest development version.
NoScript will prevent remote JAR resources from being loaded as documents, neutralizing the XSS dangers of the jar: protocol while keeping its functionality. See this FAQ for more details.
It should be noted that this specific protection is completely independent from JavaScript blocking: this means that you're protected on every site, no matter if there you've set NoScript to allow JavaScript or not.
Strangely enough, even though this is the only advice which works (other than switching to a different browser), it has been given only by the US Cert advisory (together with another, less effective one; see below).

Bogus Advice

Firefox users should avoid follow untrusted “jar:” links on suspicious Web sites.
(Ryan Naraine's Zero Day)

There are several ways for a browser to open a URL automatically without your consent: JavaScript, an IFrame, a Meta Refresh, a redirect...
If this vulnerability is exploited in the wild, you won't see any "jar:" link coming.

Poor Ryan has probably been misled by reading the following:

Do not follow untrusted "jar:" links or browse untrusted websites.
(Secunia Advisory)

Admittedly, the "don't browse untrusted websites" clause adds some correctness to this advice, but also makes it practically useless.
Furthermore, Ryan quotes the US Cert advisory as well, hence he's got no excuse for omitting the NoScript work-around.
As we said, US Cert correctly referenced NoScript's JAR protection, but also gave a bogus advice of its own:

Using proxy servers or application firewalls to block URIs that contain jar: may mitigate this vulnerability.

If you read carefully how this issue works, you should have noticed that all the jar: protocol resolution happens inside the browser: the only request which is sent, possibly hitting proxy servers or application firewalls, is the regular nested HTTP request. Are you going to block every image together with my innocent-looking ma1.jpg? At any rate, your network devices are very unlikely to ever see mythological beasts like "URIs that contain jar:"...
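The mechanics fit in three lines: before any request leaves the browser, the jar: wrapper and the "!/..." part are stripped away, so a network filter only ever sees the plain inner URL (the URL below is illustrative):

```python
def wire_url(jar_url: str) -> str:
    """Return the only URL that actually travels over the network:
    the one nested between the jar: scheme and the ! separator."""
    assert jar_url.startswith("jar:")
    return jar_url[len("jar:"):].split("!", 1)[0]

print(wire_url("jar:http://forum.example.com/avatars/ma1.jpg!/evil.html"))
# -> http://forum.example.com/avatars/ma1.jpg  (no "jar:" in sight)
```

Whatever the proxy inspects, the string "jar:" simply never reaches it.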

No matter how nonsensical the advice above is, someone decided to endorse all of it:

No patch is available through there are a number of workarounds (such as blocking URIs that contain "jar:" using a reverse proxy or application firewall). For home users, Secunia advises users to avoid following untrusted "jar:" links or visiting untrusted websites.
(The Register)

What a pity they left out the only one which does work ;)


PC World and ComputerWorld finally joined the fun:

application firewalls and proxy servers can be used to block Windows Universal Resource Identifiers (URIs) that contain the JAR protocol

They actually reference NoScript, but in a quite misleading way:

Users can download a NoScript add-on for Firefox to block JavaScript and executable content from untrusted Web sites, and can secure their Google accounts by remaining signed out whenever possible.
IBRS security consultant James Turner, who has used the NoScript add-on, said protection against these vulnerabilities can be a trade-off between security and a rich online experience.
"The add-on works fine but it is a trade-off between reducing your online experience by blocking JavaScript and protecting yourself against the exploit," Turner said.

As I said, NoScript's JAR protection has nothing to do with JavaScript blocking and it works no matter what the content of your whitelist is: in other words, there's no trade-off at all, because you keep JavaScript enabled where you need it! Oh, "security expertise"...
In the meanwhile, Ryan Naraine wrote a follow-up honestly reporting my criticism (note that I had privately emailed both him and The Register's John Leyden, receiving no answer, before deciding to write this post).
Thanks Ryan.

[Image: Evil GMail, by GoogHOle]

If the GoogHOle wasn't wide enough, yesterday Petko D. Petkov AKA pdp posted another "semi-disclosure" about how you can redirect someone else's GMail incoming messages to your account.
Petko declared "I am not planning to release this vulnerability for now", but this counts as a full disclosure in my book, since the details he gives away are far more than enough to put up a proof of concept in 10 minutes, if the reader knows the very basics of Cross Site Request Forgery (CSRF).

  1. The victim must be logged in to GMail.
  2. The victim must visit a malicious web page: the most likely scenarios are clicking an external link from an incoming message, or surfing porn while checking email from time to time.
  3. The malicious web site forges a POST request to GMail's "Create Filter" wizard, possibly using an auto-submitting invisible form, to build a filter which forwards incoming messages to a mail recipient owned by the attacker.
  4. Since the user is already authenticated, her session cookie is passed along with the forged request and the GMail filter gets silently implanted, with all the output hidden inside an IFRAME.
  5. The new GMail filter now acts as a persistent backdoor stealing incoming messages, and it will go unnoticed forever unless the victim is a power GMail user who creates or edits her own filters from time to time -- they're buried deep in the "Settings" user interface: I, for instance, had never seen them until yesterday!

Very clever and very dangerous.
As I said, this surely counts as a 0-day public full disclosure, even though pdp omitted an explicit PoC. How many of us, with minimal web coding experience, wouldn't be able to build a working exploit using the info above?
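The failure in step 4 is easy to model: a handler that authenticates by session cookie alone has no way to tell a forged cross-site POST from a legitimate one, because the browser attaches the same cookie to both. A toy simulation (all names and values are invented):

```python
SESSIONS = {"s3cr3t-cookie": "victim@gmail.example"}  # server-side session store
FILTERS = {}                                          # per-user mail filters

def create_filter(cookie: str, forward_to: str) -> bool:
    """Cookie-only authentication: the server cannot know whether the
    user or a hostile page built this POST request."""
    user = SESSIONS.get(cookie)
    if user is None:
        return False
    FILTERS.setdefault(user, []).append(forward_to)
    return True

# The forged request succeeds exactly like a legitimate one would,
# because the victim's browser supplies the valid cookie.
create_filter("s3cr3t-cookie", "attacker@evil.example")
print(FILTERS)  # the backdoor filter is silently implanted
```

Nothing in the request distinguishes friend from foe, which is the whole point of CSRF.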

CSRF Countermeasures

As usual, now that it's been publicly disclosed, this vulnerability is being patched very quickly by the great Google development crew.
Nonetheless, many other holes of this kind are still around. That's why CSRF is called “The Sleeping Giant”: some web coders may still need to learn how to fix or prevent them, and users surely want to know how to protect themselves.

1. Web Developers

Please use form keys!

  1. Generate a random identifier (form key) every time you display a form meant to be submitted by authenticated users only.
  2. Echo your key as a hidden field of the form, and bind its value to the user session data kept on the server side.
  3. As soon as the form is submitted, compare the returned key with the one stored in the session data: if they don't match, throw away the request because it is probably forged.

The above is a simple yet effective anti-CSRF technique. It will work fine unless a further Cross Site Scripting (XSS) vulnerability is present too, allowing an attacker to read your form key on the fly and forge seemingly valid requests.
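The three steps above can be sketched in a framework-agnostic way (session storage reduced to a dict, names invented):

```python
import secrets

SESSION = {}  # stands in for per-user server-side session storage

def render_form() -> str:
    # Steps 1 + 2: generate a fresh key, bind it to the session,
    # and echo it as a hidden field of the form.
    key = secrets.token_hex(16)
    SESSION["form_key"] = key
    return (f'<form method="POST">'
            f'<input type="hidden" name="form_key" value="{key}">...</form>')

def handle_submit(posted: dict) -> bool:
    # Step 3: reject the request unless the returned key matches the one
    # stored in the session. Popping it also makes each key single-use.
    expected = SESSION.pop("form_key", None)
    return expected is not None and secrets.compare_digest(
        posted.get("form_key", ""), expected)

render_form()
print(handle_submit({"form_key": SESSION["form_key"]}))   # True: genuine submission
print(handle_submit({"form_key": "attacker's blind guess"}))  # False: forged
```

A cross-site attacker cannot read the hidden field, so the forged POST arrives without a valid key and is thrown away.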

2. Web Users

This GMail incident proves how even the best trained web developer in the world can fail at implementing CSRF countermeasures.
Most of the existing literature about XSS and CSRF will tell you that poor users can do nothing to protect themselves from these attacks, but this is blatantly false.
A quite radical, not very usable but effective approach would be using different browsers (or different profiles, if you use Mozilla derivatives) for each "sensitive" web site you access, and force yourself not to follow any external link nor browse any other site while logged in.

Anyway, if you prefer not to make your life miserable by spawning multiple browsers and scanning every single link with a magnifying lens, your answer is, once again, Firefox + NoScript (sorry to sound repetitive, but that's it).

First of all, automating a POST request is trivial if JavaScript is enabled on the attacker's site, but just impossible if malicious scripts are blocked by NoScript.
The obvious objection, raised for instance by both pdp and Adrian Pastor at GNUCITIZEN, sounds like:

The attacker could simply build an invisible POST form and disguise its "submit" button as a regular link or an image, then social-engineer his victim into clicking it and so have the exploit launched no matter if JavaScript is disabled.

True, but NoScript effectively defeats this attack as well!

A common misconception about NoScript is that it just blocks JavaScript on untrusted sites.
It certainly does, but NoScript actually enhances browser security in several other ways.
A very incomplete list:

  1. It blocks Java, Flash, Silverlight and other plugins on untrusted sites, and optionally also on trusted pages, while letting you activate the plugin content on demand, with a click.
  2. It prevents malformed URIs from exploiting buggy URI handlers, i.e. the foundation of many cross-application exploits discovered by Billy Rios, Nate McFeters and Thor Larholm.
  3. It implements the most advanced and effective anti-XSS protection available on the client side.

NoScript's anti-XSS protection deploys various specific countermeasures, e.g. HTML/JavaScript injection detection, URL sanitization on suspicious cross-site requests, UTF-7 encoding neutralization and many others.
One of them also provides an effective defense from CSRFs of the kind affecting GMail: in fact, NoScript intercepts POST requests sent from untrusted origins to trusted sites and strips out their payloads.
This means that, even if the attacker exploits a scriptless vector to launch his POST CSRF through social engineering, NoScript users are still safe as long as the malicious site is not explicitly whitelisted.
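NoScript's real implementation lives inside the browser's request pipeline, but the observable policy just described can be approximated in a few lines (the trust model is reduced to a whitelist set, and all host names are invented):

```python
TRUSTED = {"mail.example.com", "bank.example.com"}  # the user's whitelist

def filter_request(origin: str, destination: str, method: str, body: bytes):
    """Approximation of the policy described above: a POST from an
    untrusted origin to a trusted site has its payload stripped out."""
    if method == "POST" and origin not in TRUSTED and destination in TRUSTED:
        # The request still goes through, but it can no longer
        # silently change state on the trusted site.
        return method, b""
    return method, body

# A forged filter-creation POST launched from an evil page is defanged...
print(filter_request("evil.example.net", "mail.example.com",
                     "POST", b"forward_to=attacker"))
# ...while a same-site (trusted-to-trusted) POST passes untouched.
print(filter_request("mail.example.com", "mail.example.com",
                     "POST", b"forward_to=me"))
```

This is why whitelisting only the sites you actually trust matters: the check keys on the origin of the request, not on its contents.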

When he learned this, pdp commented:

Giorgio, sounds good, but doesn’t that break things?
I mean, CSRF is one of the most fundamental Web characteristic.
Disabling it might be OK for people like us, but for the general population, that is a no go!

Petko, my friend,

  1. The very foundation of the Web is CSR (Cross Site Requests, AKA Hyperlinking), not CSRF (Cross Site Request Forgery) which is an unwanted side effect of bad coding practices.
  2. RFC 2616, defining HTTP (hence, in a certain sense, the Web itself), clearly states that GET requests (the ones we generate by following hyperlinks or submitting a search form) should be "safe" and idempotent, i.e. should not modify the receiving system, while POST is reserved for requests which cause a permanent change. NoScript just prevents untrusted sites from modifying data held by trusted sites, and this looks like Pure Good™: why would you want the contrary?
  3. Even if you actually wanted the contrary, you can either use the "Unsafe reload" command, available whenever a request is filtered by NoScript, or permanently configure some sites of your choice as unfiltered recipients of unsafe requests by listing them in NoScript Options/Advanced/XSS/Exceptions.

The NoScript feature we're talking about has been in place for more than six months now.
I guess it's transparent enough if security researchers like you, Adrian or .mario -- people "like us", much more attentive to what happens inside their browsers than "the general population" -- did not even notice it... ;)
