| Field | Value |
| --- | --- |
| User-Agent Name | Yahoo! Slurp 3.0 |
| Category | Robot, Spider, Crawler |
| Last Visit | Jan 21, 2013 21:27 PST |
| # | User-Agent String | Visit Frequency | Last Visit |
| --- | --- | --- | --- |
| 1 | Mozilla/5.0 (compatible; Yahoo! Slurp/3.0; http://help.yahoo.com/help/us/ysearch/slurp) | 2414258 | Jan 21, 2013 21:27 PST |
Yahoo! Slurp is Yahoo!'s web-indexing robot. The Yahoo! Slurp crawler collects documents from the Web to build a searchable index for search services that use the Yahoo! Search engine. These documents are discovered and crawled because other web pages contain links pointing to them.
As part of the crawling effort, the Yahoo! Slurp crawler honors the robots.txt standard, so it does not crawl or index pages whose content you do not want included in Yahoo! Search. If robots.txt disallows crawling a page, Yahoo! does not read or use the contents of that page.
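The robots.txt behavior described above can be sketched with Python's standard-library `urllib.robotparser`. The rules and URLs below are illustrative assumptions, not taken from the original page:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules blocking Slurp from /private/
# (example only; not from the original page).
rules = """\
User-agent: Slurp
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A crawler that honors robots.txt checks each URL before fetching it.
print(parser.can_fetch("Slurp", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("Slurp", "https://example.com/public/page.html"))   # True
```

A page disallowed this way is simply never fetched, which is why its contents cannot appear in the index.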
With everything now in place, the rollout has officially begun. The new Yahoo! Slurp 3.0 recognizes the same user-agent and all existing robots.txt directives for Yahoo! Slurp, though it identifies itself as Slurp 3.0 in your web logs.
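To spot the new crawler in your logs, you can match the version number in the user-agent string shown in the table above. The log line below is a hypothetical Apache-style example constructed for illustration:

```python
import re

# The Slurp 3.0 user-agent string from the table above.
SLURP_UA = ("Mozilla/5.0 (compatible; Yahoo! Slurp/3.0; "
            "http://help.yahoo.com/help/us/ysearch/slurp)")

# Hypothetical access-log line (Apache combined format) for illustration.
log_line = (f'66.196.80.1 - - [21/Jan/2013:21:27:00 -0800] '
            f'"GET / HTTP/1.1" 200 512 "-" "{SLURP_UA}"')

# Extract the Slurp version from the user-agent field.
slurp_re = re.compile(r"Yahoo! Slurp/(\d+\.\d+)")
match = slurp_re.search(log_line)
if match:
    print("Slurp version:", match.group(1))  # Slurp version: 3.0
```

Filtering on the version lets you track how much of your Slurp traffic has moved to the 3.0 crawler during the rollout.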