| User-Agent Name | Googlebot 2.1 |
| --- | --- |
| Category | Robot, Spider, Crawler |
| Last Visit | Feb 02, 2013 21:53 PST |
| # | User-Agent String | Visit Frequency | Last Visit |
| --- | --- | --- | --- |
| 1 | Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) | 2887341 | Feb 02, 2013 21:53 PST |
A Googlebot is a search bot used by Google. It collects documents from the web to build a searchable index for the Google search engine.
If a webmaster wishes to restrict the information on their site available to Googlebot, or to another well-behaved spider, they can do so with the appropriate directives in a robots.txt file, or by adding a robots meta tag to the page. Googlebot requests to web servers are identifiable by a user-agent string containing "Googlebot" and a host address in the "googlebot.com" domain.
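As a minimal illustration of the two mechanisms mentioned above (the path `/private/` is just a placeholder), a robots.txt file placed at the site root might look like:

```
# robots.txt — ask Googlebot not to crawl anything under /private/
User-agent: Googlebot
Disallow: /private/
```

And the per-page alternative is a robots meta tag in the page's `<head>`; using `name="googlebot"` targets Google's crawler specifically:

```html
<meta name="googlebot" content="noindex">
```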
You can verify that a bot accessing your server really is Googlebot by using a reverse DNS lookup, verifying that the name is in the googlebot.com domain, and then doing a forward DNS lookup using that googlebot name. This is useful if you're concerned that spammers or other troublemakers are accessing your site while claiming to be Googlebot.
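The reverse-then-forward lookup described above can be sketched in Python with the standard library's `socket` module. This is an illustrative helper, not an official API; the function name is hypothetical, and it only checks the "googlebot.com" domain named in the text:

```python
import socket

def is_googlebot(ip):
    """Hypothetical helper: verify an IP via reverse + forward DNS.

    Returns True only if the reverse lookup yields a name in the
    googlebot.com domain AND the forward lookup of that name maps
    back to the original IP.
    """
    try:
        # Reverse DNS lookup: IP -> hostname
        host, _, _ = socket.gethostbyaddr(ip)
    except (socket.herror, socket.gaierror):
        return False
    # The name must be in the googlebot.com domain
    if not host.rstrip(".").endswith(".googlebot.com"):
        return False
    try:
        # Forward DNS lookup: hostname -> IP addresses
        _, _, forward_ips = socket.gethostbyname_ex(host)
    except socket.gaierror:
        return False
    # The original IP must appear in the forward results
    return ip in forward_ips
```

A spoofed client that merely sets its user-agent to "Googlebot" will fail this check, because its IP will not reverse-resolve into the googlebot.com domain.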
> host 66.249.66.1
1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com.
> host crawl-66-249-66-1.googlebot.com
crawl-66-249-66-1.googlebot.com has address 66.249.66.1
Google doesn't post a public list of IP addresses for webmasters to whitelist, because these IP address ranges can change, which would break any configuration that hard-coded them. The recommended way to identify Googlebot accesses is to check for "Googlebot" in the user-agent string and confirm the source address with the reverse/forward DNS verification described above.