
DotBot 1.1 User-Agent String: Mozilla/5.0 (compatible; DotBot/1.1; http://www.dotnetdotcom.org/, crawler@dotnetdotcom.org)

DotBot 1.1

User-Agent Name: DotBot 1.1
User-Agent URL: http://www.dotnetdotcom.org/
Category: Robot, Spider, Crawler
Organization: dotnetdotcom.org
User-Agent String: Mozilla/5.0 (compatible; DotBot/1.1; http://www.dotnetdotcom.org/, crawler@dotnetdotcom.org)
Sponsor: (none listed)
Visit Frequency: 126,542
First Visit: Sep 19, 2008 16:52 PDT
First IP: 208.115.111.251 (crawl10.dotnetdotcom.org)
Last Visit: Aug 11, 2011 02:44 PDT
Last IP: 208.115.111.250 (crawl9.dotnetdotcom.org)

Last updated: Apr 05, 2009 01:56 PDT    

Description of DotBot 1.1

DotBot is the web crawler operated by dotnetdotcom.org; it identifies itself with the User-Agent string shown above.
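If you want to confirm DotBot visits in your own access logs, matching the DotBot/<version> token in the User-Agent header is enough. Here is a minimal sketch in Python (the function name and regular expression are illustrative, not anything published by dotnetdotcom.org):

  import re

  # Matches the "DotBot/<version>" token from the User-Agent string above.
  DOTBOT_RE = re.compile(r"\bDotBot/(\d+(?:\.\d+)*)", re.IGNORECASE)

  def detect_dotbot(user_agent):
      """Return the DotBot version string if the User-Agent matches, else None."""
      match = DOTBOT_RE.search(user_agent or "")
      return match.group(1) if match else None

  ua = ("Mozilla/5.0 (compatible; DotBot/1.1; "
        "http://www.dotnetdotcom.org/, crawler@dotnetdotcom.org)")
  print(detect_dotbot(ua))  # prints: 1.1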

How to block the DotBot crawler

  1. Create a simple text file named robots.txt and place it in your server's root directory (for example: http://www.yoursite.com/robots.txt).
  2. Add the following lines to that file:

     User-agent: dotbot
     Disallow: /
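Note that robots.txt is only a request: well-behaved crawlers honor it, but nothing enforces it. If you would rather reject DotBot at the server itself, one option is to refuse any request whose User-Agent contains the DotBot token. Here is a minimal sketch as a Python WSGI middleware (the class name is illustrative; adapt the idea to whatever server stack you actually run):

  # Illustrative WSGI middleware: answer 403 Forbidden to DotBot requests.
  class BlockDotBot:
      def __init__(self, app):
          self.app = app  # the wrapped WSGI application

      def __call__(self, environ, start_response):
          ua = environ.get("HTTP_USER_AGENT", "")
          if "dotbot" in ua.lower():
              start_response("403 Forbidden", [("Content-Type", "text/plain")])
              return [b"Forbidden"]
          return self.app(environ, start_response)

  # Usage: wrap your existing app, e.g. application = BlockDotBot(application)

The same check can be expressed as a rewrite or access rule in most web servers; the middleware above just shows the idea in one self-contained place.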

Official information about DotBot from dotnetdotcom.org

Our Purpose and Goal

Our purpose is simple: we want to make the internet as open as possible. Currently only a select few corporations have a complete and useful index of the web. Our goal is to change that by crawling the web and releasing as much information about its structure and content as possible. We plan to do this in a way that covers our costs (by selling our index) while releasing it for free for the benefit of all webmasters. Obviously, this goal has many potential legal, financial, ethical, and technical problems, so while we can't promise specific results, we can promise to work hard, share our results, and help make the internet a better and more open space.

Our Technology

Our crawling system is written in a mixture of C and Python. We elected to store our index in custom flat files on disk rather than a traditional database management system. We would like to thank everyone involved in the many open-source tools we used, including gcc, gdb, Ubuntu Linux, valgrind, Python, and libcurl. We also want to thank the many webmasters who have taken the time to give us feedback and support our cause.
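The passage above contrasts custom flat files with a traditional database but does not document the actual format. Purely as an illustration of the general idea (the record layout and function names here are invented, not DotBot's real format), an append-only index of length-prefixed records could look like this in Python:

  import struct

  # Illustrative only: append a (url, body) record as length-prefixed bytes.
  def append_record(path, url, body):
      u, b = url.encode("utf-8"), body.encode("utf-8")
      with open(path, "ab") as f:
          f.write(struct.pack("<II", len(u), len(b)))  # two little-endian uint32 lengths
          f.write(u)
          f.write(b)

  # Scan the file sequentially, yielding (url, body) pairs.
  def read_records(path):
      with open(path, "rb") as f:
          while True:
              header = f.read(8)
              if len(header) < 8:
                  break
              ulen, blen = struct.unpack("<II", header)
              yield f.read(ulen).decode("utf-8"), f.read(blen).decode("utf-8")

Compared with a database management system, a layout like this favors fast sequential appends and full scans, which suits write-once crawl output at the cost of random access.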