
DotBot 1.1 User-Agent String

DotBot 1.1

User-Agent Name:  DotBot 1.1
User-Agent URL:   http://www.dotnetdotcom.org
Category:         Robot, Spider, Crawler
Organization:     dotnetdotcom.org
Total Strings:    1
Last Visit:       Aug 11, 2011 02:44 PDT

All user-agent strings from DotBot 1.1

  1. User-Agent String: Mozilla/5.0 (compatible; DotBot/1.1; http://www.dotnetdotcom.org/, crawler@dotnetdotcom.org)
     Visit Frequency: 126,542
     Last Visit: Aug 11, 2011 02:44 PDT

Description of DotBot 1.1

DotBot is the web crawler operated by dotnetdotcom.org.
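If you want to detect DotBot visits in your own access logs, a simple case-insensitive substring check against the published user-agent string is usually enough. This is a minimal sketch, not an official detection method; the function name `is_dotbot` is just an illustration:

```python
def is_dotbot(user_agent: str) -> bool:
    """Return True if the given User-Agent header identifies DotBot.

    Matches the "DotBot" token anywhere in the string, case-insensitively,
    which covers the published string
    "Mozilla/5.0 (compatible; DotBot/1.1; http://www.dotnetdotcom.org/, crawler@dotnetdotcom.org)".
    """
    return "dotbot" in user_agent.lower()


# Example: the user-agent string recorded in the table above
ua = ("Mozilla/5.0 (compatible; DotBot/1.1; "
      "http://www.dotnetdotcom.org/, crawler@dotnetdotcom.org)")
print(is_dotbot(ua))                                    # True
print(is_dotbot("Mozilla/5.0 (Windows NT 10.0; Win64)"))  # False
```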

How to block DotBot crawler

  1. Create a plain text file named robots.txt and place it in your server's root directory. (Example: http://www.yoursite.com/robots.txt)
  2. Add the following lines to your robots.txt file:

     User-agent: dotbot
     Disallow: /
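You can check that these rules actually block DotBot before deploying them, using Python's standard-library `urllib.robotparser`. This is a quick local verification sketch; the example URL is a placeholder:

```python
import urllib.robotparser

# The robots.txt rules from the steps above
robots_txt = """User-agent: dotbot
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# DotBot is denied everywhere; other crawlers remain allowed by default.
# Note: robotparser matches on the token before the first "/", so test
# with the bot's name ("DotBot"), not the full Mozilla/5.0 header.
print(parser.can_fetch("DotBot", "http://www.yoursite.com/page.html"))    # False
print(parser.can_fetch("OtherBot", "http://www.yoursite.com/page.html"))  # True
```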

The official information about DotBot from DotnetDotcom.org

Our Purpose and Goal

Our purpose is rather simple. We want to make the internet as open as possible. Currently only a select few corporations have a complete and useful index of the web. Our goal is to change that fact by crawling the web and releasing as much information about its structure and content as possible. We plan on doing this in a manner that will cover our costs (selling our index) and releasing it for free for the benefit of all webmasters. Obviously, this goal has many potential legal, financial, ethical and technical problems. So while we can't promise specific results, we can promise to work hard, share our results, and help make the internet a better and more open space.

Our Technology

Our crawling system is written in a mixture of C and Python. We elected to store our index using custom flat files on disk as opposed to a traditional database management system. We would like to give thanks to everyone who was involved in the many open source tools we used. These include gcc, gdb, ubuntu linux, valgrind, python and libcurl. Additionally, we want to thank the many webmasters who have taken the time to give us feedback and support our cause.