The Role Of The Robot Exclusion In Copyright Defenses

Law360, New York (January 21, 2014, 6:54 PM EST) -- The robots.txt protocol, also known as the robot exclusion standard, is a nearly 20-year-old voluntary Web-programming convention that tells Web-crawling or scraping software programs (i.e., “spiders” or “bots”) whether they have permission to access all or part of a publicly available website.[1] The protocol uses simple directives that define which portions of a website robots (or any particular robot) are “disallowed” from accessing; for example, crawlers may be disallowed from accessing any portion of a website’s server whose URL begins with a...
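By way of illustration, the short Python sketch below shows how a compliant crawler might consult these directives before fetching a page, using the standard library's robots.txt parser. The robots.txt content, bot names and URLs are hypothetical examples, not drawn from any site discussed here.

    # A minimal sketch of how a well-behaved crawler checks robots.txt
    # directives before fetching a URL (hypothetical example).
    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt: all robots are disallowed from any URL whose
    # path begins with /private/, and one named bot is barred from the site.
    robots_txt = """\
    User-agent: *
    Disallow: /private/

    User-agent: BadBot
    Disallow: /
    """

    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())

    # A generic crawler may fetch public pages but not the /private/ tree.
    print(parser.can_fetch("ExampleBot", "https://example.com/articles/page1.html"))  # True
    print(parser.can_fetch("ExampleBot", "https://example.com/private/report.html"))  # False

    # The named bot is disallowed from the entire site.
    print(parser.can_fetch("BadBot", "https://example.com/articles/page1.html"))  # False

Nothing in the protocol enforces compliance; the parser simply reports what the site owner has requested, which is why the convention is described as voluntary.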