no longer honors robots.txt

The other day, while working on my wife’s website, I realized that the Wayback Machine was making snapshots of the site even though I had set up the robots.txt file to block that. Then I took a look at some other sites of mine and noticed all of them were being scraped.

I’m not sure when it happened, but it seems that at some point “The Wayback Machine” decided it was going to ignore website owners’ wishes expressed in their robots.txt files and copy content anyway.

I was following the steps from their own site, from a page that seems to have been removed sometime after October 2015.
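For reference, the instructions on that now-removed page boiled down to adding a rule like the following to your site’s robots.txt. The crawler name `ia_archiver` is the one the Internet Archive historically documented; since the original page is gone, treat the exact directives here as my best recollection rather than an official quote:

```
# Ask the Internet Archive's crawler not to index or archive anything
User-agent: ia_archiver
Disallow: /
```

As this post explains, the Wayback Machine no longer appears to honor this rule, but other well-behaved crawlers still read robots.txt the same way.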

How to remove your site from

So I wanted to throw this post out there as a warning. I wonder how many other website owners did not know about this change.

But you can still remove your site; it just takes a little extra footwork now. According to this page, you can e-mail them and ask for your site to be removed. I was able to do this for mine, but it took about a week, and they asked me questions about how long I had owned the domain.

How can I exclude or remove my site’s pages from the Wayback Machine?
You can send an email request for us to review to with the URL (web address) in the text of your message.

Why would I want to remove my site from The Wayback Machine?

Well, I have a number of reasons.

  1. I hate the idea of my spelling and grammar mistakes being saved online forever. I’m dyslexic, and I find and correct things in old posts all the time.
  2. I have good backups. I hear lots of stories where people have restored lost or hacked sites from the Wayback Machine. That’s really cool, but it’s not a service I need.
  3. It’s my content, made with hard work over years. I would rather people come to my site to see it instead of finding it somewhere else. I’m kind of surprised there are no copyright problems with the way they scrape. I don’t think it would be looked at too favorably if I went to a public library and started making copies of books. /shrug

It is a cool site, and I don’t hate it or anything. I guess I just feel a little betrayed that they are ignoring robots.txt. Now people have to take extra steps to get their content removed.

Anyway, that’s my warning to other site owners who might not know about the change.

1 thought on “no longer honors robots.txt”

  1. John

    All our content is copyrighted, and we have blocked the IA bot in our robots.txt. About 8 months ago, they suddenly started copying our site. They absolutely ignored the robots.txt file (we checked this). Therefore, I think they are playing a dangerous game. We will sue them if it happens again.

