Web 2.0 Security – The More Things Change…
If you spend a little time looking into the online literature on the Cross-Site Request Forgery (CSRF) exploit, you might get the impression that Web 2.0 has opened up an appalling can of security worms. In some ways this is true, but in other respects what we are seeing is nothing more than novel effects of long-standing vulnerabilities.
To be sure, the new class of threats certainly seems to break new ground. It was bad enough when we had to worry about Cross-Site Scripting (XSS). That merely involved the injection of unauthorized and malicious script code into Web sites. If your site didn’t allow untrusted users to post content in some way, you were okay. Even if it did (say, in order to support a forum or blog comments), the signature-matching needed to prevent the upload of scripts (e.g., the SCRIPT tag) was relatively tractable (if only comment spam signatures were that easy to handle!).
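The kind of signature matching described above can be sketched in a few lines. This is only an illustration of the idea, not a real filter (the function and pattern names are made up for this example, and production-grade XSS filtering has to handle far more than a literal SCRIPT tag: event-handler attributes, encoded characters, and so on):

```python
import re

# Naive signature for a script tag, case-insensitive, tolerating whitespace
# after the angle bracket. Illustrative only -- real filters need much more.
SCRIPT_TAG = re.compile(r"<\s*script\b", re.IGNORECASE)

def looks_like_script_injection(comment: str) -> bool:
    """Return True if submitted text appears to contain a SCRIPT tag."""
    return bool(SCRIPT_TAG.search(comment))

print(looks_like_script_injection("Great post, thanks!"))            # False
print(looks_like_script_injection("<SCRIPT>alert('xss')</SCRIPT>"))  # True
```

The point of the example is simply that this class of check is a pattern match over submitted content, which is why it was comparatively tractable next to what CSRF demands.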
All this can make one long for the days of Web 1.0 sites and the comfort of SSL/TLS with PKI. After all, hadn’t we worked out a pretty solid security model for e-commerce and other secure transactions on the World Wide Web – including authentication, integrity and confidentiality? Why is it that we seem to be going backwards here all of a sudden? CSRF is a type of Confused Deputy attack, in which a deputy (in this case the Web browser) that obtains authority from one party (the site that is the target of the attack) is then fooled by some other party into misusing that authority. So you might think: the Confused Deputy must be to Web 2.0 what the Man in the Middle was to Web 1.0. If the latter was made defeatable by public keys verified by Certificate Authorities, why can’t something similar be done against the vulnerability that leads to CSRF?
But that might be the wrong way of looking at the problem. In “Secrets and Lies,” Bruce Schneier argued that PKI on the Web (SSL/TLS plus Certificate Authorities) doesn’t in fact accomplish most of what it seems to accomplish. SSL encryption does give you confidentiality in the transaction, but does nothing to secure the confidential data once it gets where it’s going. Moreover, in practice SSL doesn’t even deliver authentication — the key to defeating the fabled Man in the Middle.
In all these cases, it is the policies of the credit card companies (the limited liability for stolen cards, the ability to repudiate fraudulent purchases) that really provide the protection we associate with the familiar Web 1.0 e-commerce security model. Schneier demonstrates his point regarding the failure of the SSL authentication model with a vivid example (quoted from http://www.waterken.com/dev/YURL/Schneier/):
[T]he company F-Secure (formerly Data Fellows) sells software from its Web site at www.datafellows.com. If you click to buy software, you are redirected to the Web site www.netsales.net, which makes an SSL connection with you. The SSL certificate was issued to “NetSales, Inc., Software Review LLC” in Kansas. F-Secure is headquartered in Helsinki and San Jose. By any PKI rules, no one should do business with this site. The certificate received is not from the same company that sells the software. This is exactly what a man-in-the-middle attack looks like, and exactly what PKI is supposed to prevent.
So in other words, the ambiguity made possible by everyday aspects of the Web like HTTP redirects and DNS lookups means that the system of SSL plus Certificate Authorities establishes no guaranteed tie between the site where you meant to make the purchase and the site to which your credit card details were forwarded, much less the entity to which a payment was made on your behalf by your credit card company.
This suggests that the problem exposed by CSRF might be deeper, and older, than it first appears. It is not simply a matter of new vulnerabilities due to the proliferation of Ajax and Web services. Underneath that, you have the same basic fact exposed by the limitations of SSL authentication — that URLs were made to find hosts and sites, not to authenticate them. Since the locating of resources is divorced from their authentication, the way is open to the hijacking of legitimate authentications — as when a CSRF exploit hosted on site B gets a user’s browser to employ its authenticated cookie from site A to carry out actions on site A that the user never intended. And that explains too why standard means for avoiding CSRF (such as secret hidden form fields or double submission of cookies) involve strengthening browsers’ cross-domain rules by somehow tying the use of an authentication token back to the site that issued it (see for instance: CSRF Attacks or How to avoid exposing your GMail contacts).
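The "secret hidden form field" defense mentioned above (often called the synchronizer token pattern) can be sketched as follows. This is a minimal illustration, not any particular framework's implementation: the secret key and session identifier are stand-ins, and a real application would rotate and store these per session server-side.

```python
import hashlib
import hmac

# Server-side secret, never sent to the browser. Illustrative value only.
SECRET_KEY = b"server-side-secret"

def issue_csrf_token(session_id: str) -> str:
    """Token the server embeds as a hidden field when rendering a form."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, submitted: str) -> bool:
    """Check a submitted form token against the one tied to this session.

    A forged cross-site request arrives carrying the victim's cookie (and
    hence the session id), but the attacking page cannot read the form on
    the legitimate site, so it cannot supply the matching token.
    """
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, submitted)

token = issue_csrf_token("session-abc123")
print(verify_csrf_token("session-abc123", token))     # True: legitimate form post
print(verify_csrf_token("session-abc123", "forged"))  # False: cross-site forgery
```

Note how the token does exactly what the paragraph above describes: it ties the use of the authenticated cookie back to a page actually served by the issuing site, restoring the link between locating a resource and authenticating the request.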
So while it pays to be aware of the latest Web 2.0 type vulnerabilities, it’s also worth understanding how these new exploits are rooted in long-standing shortcomings of the Web security model, if only to see how the solutions tend to reproduce older patterns as well. There are novel challenges out there, to be sure, but looked at in a slightly wider context, there is nothing new under the Web 2.0 security sun. So fix those CSRF holes, but don’t panic.
[photo under CC from flickr user Christos m2001]