Google is the VIP of the bot world. Just like any other VIP, it enjoys some special privileges, one of which is unhindered access to almost any page on any website on the web. However, as a recent case study by DDoS protection firm Incapsula shows, some will try to abuse Googlebot’s VIP status to slip under the DDoS protection radar.

What makes Googlebot a VIP?
Googlebot is Google’s official agent and the intermediary between your content and Google’s search engine. Only by allowing Googlebot to scan your site do you become eligible to appear in Google’s search engine results pages (SERPs), which is – needless to say – a pretty big deal.

This is why Googlebot has special VIP access to almost all existing webpages. After all, if you own or manage a website, you consider Googlebot your best friend, and when it comes knocking you’ll always do your best to keep your doors wide open.

Why Are Hackers Interested in Googlebot?
Hackers are always looking for ways to circumvent traditional security barriers. As previously mentioned, hosts typically give Googlebot full access to their content. Unfortunately, hackers see this routine ‘opening of the gates’ as an opportunity to exploit. As Incapsula’s study shows, these troublemakers will try to mimic Googlebot to bypass low-level security measures. The more sophisticated infiltrators even mimic the crawling patterns of a real Googlebot to get past some of the more intelligent defenses.
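To see why low-level defenses are so easy to fool, note that the user-agent string is entirely under the client’s control: any script can present itself as Googlebot. The sketch below illustrates this with Python’s standard library (the target URL is a placeholder, and the request is only constructed, not sent):

```python
import urllib.request

# The user-agent string Googlebot presents; any client can copy it verbatim.
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

def build_fake_googlebot_request(url: str) -> urllib.request.Request:
    """Construct a request that merely claims to be Googlebot (illustration only)."""
    return urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})

req = build_fake_googlebot_request("http://example.com/")
print(req.get_header("User-agent"))
```

A defense that checks nothing beyond this string will wave the request straight through, which is exactly the weakness the attackers in this story relied on.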

DDoS Attackers with Googlebot IDs
Incapsula’s case study documents a “Googlebot DDoS” that occurred at the end of 2013. The attack targeted a moderately sized commercial website that suddenly received a significant spike in requests. At the height of the DDoS attack, the site was averaging close to 1,500 requests per second from Google-like bots, more than enough to crash most servers.
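A request surge like that is typically caught with a sliding-window rate counter. The sketch below is a minimal illustration of the idea, using the 1,500 requests-per-second figure from the case study as the alert threshold (the window and threshold values are assumptions for the example, not Incapsula’s actual configuration):

```python
from collections import deque

class RateMonitor:
    """Sliding-window request counter (illustrative sketch)."""

    def __init__(self, window_seconds: float = 1.0, threshold: int = 1500):
        self.window = window_seconds
        self.threshold = threshold
        self.times = deque()  # timestamps of requests inside the window

    def record(self, timestamp: float) -> bool:
        """Record one request; return True if the windowed rate exceeds the threshold."""
        self.times.append(timestamp)
        # Drop timestamps that have fallen out of the window.
        while self.times and self.times[0] <= timestamp - self.window:
            self.times.popleft()
        return len(self.times) > self.threshold

monitor = RateMonitor(window_seconds=1.0, threshold=1500)
# Simulate a burst of 2,000 requests arriving within one second.
alerts = [monitor.record(i / 2000.0) for i in range(2000)]
print(any(alerts))  # a burst this dense trips the threshold
```

Rate alone cannot distinguish a flash crowd from an attack, which is why it is only one of several signals discussed below.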

The first warning sign detected by Incapsula’s security system was suspicious HTTP header data. Although the attacker was smart enough to use Googlebot’s user-agent string, the header data still contained major inconsistencies, which were picked up by Incapsula’s bot identification algorithms.
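The article does not disclose which inconsistencies were found, but a simple heuristic of this kind can be sketched: a request that claims to be Googlebot yet carries headers a stateless crawler would not normally send (a Cookie header, for instance) deserves a closer look. The rules below are illustrative assumptions, not Incapsula’s actual algorithm:

```python
# Headers a stateless crawler would not normally send (an illustrative
# assumption, not a definitive fingerprint).
SUSPICIOUS_WITH_GOOGLEBOT_UA = {"cookie", "referer"}

def headers_look_inconsistent(headers: dict) -> bool:
    """Flag requests that claim a Googlebot identity but carry odd headers."""
    ua = headers.get("User-Agent", "")
    if "Googlebot" not in ua:
        return False  # not claiming to be Googlebot; nothing to cross-check
    sent = {name.lower() for name in headers}
    return bool(sent & SUSPICIOUS_WITH_GOOGLEBOT_UA)

print(headers_look_inconsistent({
    "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                  "+http://www.google.com/bot.html)",
    "Cookie": "session=abc123",  # Googlebot crawls statelessly, so this is a red flag
}))
```

Real classifiers combine many such weak signals; any single header check is easy for an attacker to fix once discovered.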

Next, Incapsula determined that the IP and ASN information did not belong to Google. Taken alone, this evidence is cause for concern but not quite enough to condemn a visitor, since many legitimate SEO bots also mimic Google’s crawlers.
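Google itself documents a way to verify that an IP really belongs to Googlebot: reverse-resolve the IP, check that the name falls under googlebot.com or google.com, then forward-resolve that name and confirm it maps back to the same IP. A minimal sketch of that check (the DNS calls require network access to actually run):

```python
import socket

def hostname_is_google(hostname: str) -> bool:
    """Google's documented rule: the reverse-DNS name must end in
    googlebot.com or google.com."""
    return hostname.endswith((".googlebot.com", ".google.com"))

def verify_googlebot_ip(ip: str) -> bool:
    """Reverse-DNS the IP, check the domain, then forward-resolve the
    name and confirm it maps back to the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except (socket.herror, socket.gaierror):
        return False
    if not hostname_is_google(hostname):
        return False
    try:
        return ip in socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False
```

The forward-confirmation step matters: an attacker can set any reverse-DNS name on an IP range they control, but they cannot make Google’s forward DNS point back at it.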

Finally, the security system took a holistic view of traffic flow and concluded that the surge of suspicious behavior was indeed malicious.

With this triangulation of warnings, Incapsula’s DDoS protection intervened and stopped the attack before it could flood the site’s servers. Overall, nearly a million fake Googlebot requests were vetted and sorted by the Client-Classification Process, while non-malicious traffic was still allowed through to the site.
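The triangulation described above can be summarized as a small decision function over the three signals: the impersonation claim, IP verification, header consistency, and request rate. The decision rules below are illustrative assumptions, not Incapsula’s actual classifier:

```python
def classify_request(claims_googlebot: bool, ip_verified: bool,
                     headers_inconsistent: bool, rate_exceeded: bool) -> str:
    """Toy triangulation of the signals discussed in this case study."""
    if claims_googlebot and not ip_verified:
        if headers_inconsistent or rate_exceeded:
            return "block"      # impersonation plus a second red flag
        return "challenge"      # suspicious, but could be a benign SEO bot
    return "allow"

print(classify_request(True, False, True, True))    # attacker profile
print(classify_request(True, False, False, False))  # possible SEO bot
print(classify_request(True, True, False, False))   # verified Googlebot
```

Note the middle case: because legitimate SEO bots also mimic Google’s crawlers, a lone failed IP check earns a challenge rather than an outright block, matching the “concern but not condemnation” reasoning above.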

Wanted: Some Smart DDoS Protection

As cyber attacks evolve, a strong network is no longer enough to protect your website. Advanced security systems must be able to differentiate between various types of bot activity, like those seen in Application Layer DDoS attacks. And you should be wary of trusting bots too much, even when they call themselves Googlebot.