supposed security issues that the new Web 2.0 and AJAX-enabled sites produce, and have also been looking into some claims I heard that for serious applications one should stick to Flash, since it is inherently more secure, tried, and tested than AJAX is today.
To investigate the inherent and publicized security issues within AJAX, we first need to understand the underlying technologies that AJAX uses. Specifically, AJAX comprises:
a) XMLHttpRequest (XHR) – new functionality that has been added to most well-known browsers today and allows an asynchronous communication mechanism between the browser and the web server
b) JavaScript (JS) – the scripting language, implemented in every major browser, that issues the XHR calls and manipulates the page with the results
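The asynchronous pattern XHR enables can be sketched as below. This is a hedged illustration using the standard browser API names; since `XMLHttpRequest` only exists inside a browser, a tiny stand-in object is installed first so the snippet also runs elsewhere – in a real page, the response would come from the web server.

```javascript
// Outside a browser there is no XMLHttpRequest, so install a minimal
// stand-in purely for illustration (a real browser provides this natively).
if (typeof XMLHttpRequest === "undefined") {
  globalThis.XMLHttpRequest = class {
    open(method, url) { this.url = url; }
    send() {
      this.readyState = 4;               // DONE
      this.status = 200;
      this.responseText = '{"ok":true}'; // canned reply for the sketch
      if (this.onreadystatechange) this.onreadystatechange();
    }
  };
}

// Fire a GET request without reloading the page; the callback runs when
// the response arrives, while the page stays interactive in the meantime.
function getAsync(url, callback) {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(xhr.responseText);
    }
  };
  xhr.send(); // returns immediately; no full-page round trip
}

getAsync("/api/data", (body) => console.log(body));
```

The function and endpoint names here (`getAsync`, `/api/data`) are hypothetical; only the `XMLHttpRequest` members are the real API.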
To properly assess AJAX related security issues, then, it makes sense for us to take a look at what sort of security issues do these two critical underlying technologies present.
Ideally, one could argue that XSS should be disallowed outright, or that all web servers MUST ensure that special characters are made 'safe' (encoded using HTML entities) so that they are only displayed, never interpreted, on the remote side. However, with the increasing popularity of responsive web sites using AJAX-related technologies, mash-ups are becoming very common, and the ability for one domain to pass data to another for remote script execution helps manage load on the local host. XSS-related vulnerabilities have been around for a long time and have successfully infected a variety of web-based solutions, including PHP boards, Google, Apache, Mozilla, MySpace, Yahoo and many more.
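The HTML-entity encoding ("safe"-izing) mentioned above can be sketched in a few lines: every character with special meaning in HTML is replaced by its entity, so user input is rendered as text rather than interpreted as markup. A minimal sketch (real applications should use a vetted library rather than this hand-rolled version):

```javascript
// Encode the five HTML special characters as entities so that injected
// markup is displayed, not executed.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;")   // must run first, or entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const payload = '<script>alert("xss")</script>';
console.log(escapeHtml(payload));
// → &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Note the ordering: ampersands are encoded before the other characters, otherwise the `&` inside `&lt;` would itself be re-encoded.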
It is important to note that XSS depends on the following:
a) You somehow pass a 'malicious' URI to a user and get him to click on it (it doesn't make sense to trigger the XSS vulnerability in your own browser context, unless you think you can launch a remote exploit via a buffer underflow or overflow – but that is a different attack, not categorized under XSS attacks; more on this later)
b) It assumes that the web server receiving this input does not do a good job of validating it
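The server-side validation referred to in point b) is often most robust as a whitelist (allow-list) check: instead of trying to strip out every dangerous character, accept only inputs matching a strict known-good pattern and reject everything else. A hypothetical sketch, where the field name and length policy are assumptions for illustration:

```javascript
// Whitelist validation sketch: only letters, digits and underscores,
// 1-32 characters (an assumed policy for this example). Anything else,
// including injected markup, is rejected outright.
function isValidUsername(s) {
  return /^[A-Za-z0-9_]{1,32}$/.test(s);
}

console.log(isValidUsername("alice_01"));                     // → true
console.log(isValidUsername('<img src=x onerror=alert(1)>')); // → false
```

Rejection at the boundary complements (but does not replace) entity-encoding of anything that is echoed back into a page.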
Effect of Web 2.0 on XSS attacks
Effect of Web 2.0 on Repudiation Attacks
Once again, with the hype and slickness of XHR+JS applications proliferating around the web, more people should concentrate on architecture and data protection instead of focusing only on how to make the interface slicker. So again, Web 2.0 does not technically make the situation worse, but in the surge of trying to ship prettier applications, developers need to go back to basics on implementing a secure environment for distributed systems and on input validation.
Bridging is a software concept that long pre-dates Web 2.0. In effect, it is the process of installing a 'gateway server' in between multiple content sources. This gateway server maintains a trusted relationship with the backend content servers and is therefore also capable of hosting the integration logic that 'meshes' these diverse contents together. This sort of bridging has increased in popularity with the deployment of mash-up applications, and it introduces another twist to XHR attacks. XHR cannot access HTTP pages outside the domain from which the page is being served. For example, a page served from one domain cannot issue an XHR request directly to another domain, which is why a bridge on the serving domain is used to proxy such requests.
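The same-origin restriction XHR enforces compares the scheme, hostname, and port of the two URLs. Browsers implement this internally; the sketch below is only a rough illustration of the comparison, not the exact algorithm:

```javascript
// Rough sketch of the same-origin rule: two URLs share an origin only
// when scheme, hostname and port all match. (Illustrative only.)
function sameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  return ua.protocol === ub.protocol &&
         ua.hostname === ub.hostname &&
         ua.port === ub.port;
}

console.log(sameOrigin("https://shop.example.com/cart",
                       "https://shop.example.com/api"));  // → true
console.log(sameOrigin("https://shop.example.com/",
                       "https://api.partner.com/"));      // → false: XHR blocked
```

This is exactly why a bridge works: the page talks to its own origin, and the gateway server, which is not bound by browser rules, fetches the remote content on its behalf.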
Effect of Web 2.0 on XHR Bridging Attacks
Web-based mash-up applications are proliferating today, and the use of bridge servers to blend content from multiple sites is getting much more commonplace (for instance, mashing Google Maps with Craigslist house searches and more). As in any architecture, the fundamental principle that 'a chain is only as strong as its weakest link' applies equally here.
b) Nasty JS code that traps browser events and does not let you navigate away from the page unless you close the browser
Of course, since JS is a full programming language, it can also be used for more serious purposes – like port-scanning your own internal network and potentially reporting the results to an external site.
Effect of Web 2.0 on JS Vulnerabilities
XHR+JS insecurities vs. Flash
Comparing this to Flash: when I googled around for vulnerabilities, most of those I came across had to do with buffer exploits. Simply put, buffer exploits are a set of techniques where a malicious user finds a spot of executable code that does not check input length and supplies a string large enough to overflow the stack space allocated for it. The string is also crafted with machine opcodes at the right offsets, so that the function's return address ends up pointing into the string itself – which is actually an encoded instruction sequence. Effectively, the remote user has managed to execute code on the remote platform, thereby launching an internal attack. Such attacks are very old and still prevalent. A great tutorial (old, but gold) for buffer exploits can be read here.
So while I found such buffer exploits, I did not have much success finding ActionScript-related exploits similar in nature to those mentioned above. I did find references to random forum posts saying that a few years ago ActionScript in Flash was at a similar stage, but it seems the implementation has improved over the years. Again, this was a cursory look – I'd be happy to be pointed to similar exploits in the Flash world.
Buffer exploits will always remain – detecting and exploiting them doesn't require an open programming-language interface like JS. These exploits take more skill to probe and compromise (yes, I am aware of automated buffer-exploit scanners, but applying them to different environments requires more skill) and are usually the art of a smaller niche community.
However, in the entire Flash vs. AJAX security debate, I do have another concern. JS and XHR exploits need to be fixed by the 'browser' owners – Microsoft, Mozilla, Apple and others – while Flash exploits need to be fixed by Adobe, and can be fixed independently of the browser, since Flash is a third-party browser plugin while JS and XHR are core browser implementations. I wonder whether this means that relying on browser companies to fix issues takes longer than relying on Adobe to fix a plugin issue. Anyway, as Web 2.0 with AJAX continues to go mainstream, only time will tell.
What we observe, then, is that most of the vulnerabilities of AJAX have little to do with AJAX itself and more to do with the inherent vulnerabilities of immature implementations and bad architectural choices by developers. Most of the vulnerabilities above boil down to:
a) User Input is not validated well
b) Data integrity and logical firewalling are not thought out well
c) If your architecture is insecure, no technology can rescue you
What Web 2.0 is doing is making these technologies mainstream. As more people use them, many more bugs surface, and many more people think of hacking insecure sites for fun and profit (nothing new there). I think this will be beneficial for the web community overall – there is nothing like taking a technology mainstream to make it robust! But in the process, developers and marketers need to be aware that this is an evolving process and should do everything possible to safeguard their own data from the imperfections in the implementation of their chosen tools.