Has anyone had any experience with getting Google’s web crawlers off the scent once they seem to have picked up your URL?
I have about 6 devices that have somehow got onto Google’s database for crawling.
I’m going to try putting
in the html for the root, but there’s no guarantee that I will return html.
I could try returning something different when Google or another crawler tries to fetch the page.
Is this known to work?
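If you can inspect the incoming User-Agent header, the "return something different to crawlers" idea could be sketched roughly like this. This is only an illustrative sketch, not actual agent code; the crawler token list and the response shape are assumptions:

```python
# Sketch: detect common search-crawler User-Agent strings and serve a
# noindex response instead of the real content. Token list is illustrative.
CRAWLER_TOKENS = ("googlebot", "bingbot", "duckduckbot")

def is_crawler(user_agent):
    """Return True if the User-Agent header looks like a search crawler."""
    ua = (user_agent or "").lower()
    return any(token in ua for token in CRAWLER_TOKENS)

def respond(user_agent):
    """Return (status, headers, body) for an incoming request."""
    if is_crawler(user_agent):
        # X-Robots-Tag works even when the response body isn't HTML
        return 404, {"X-Robots-Tag": "noindex"}, ""
    return 200, {}, "real device payload"
```

One caveat: user-agent sniffing only catches crawlers that identify themselves honestly, so it's a best-effort measure rather than a guarantee.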
Your suggested HTML was eaten by the forums. What did you put in there?
A bit disturbing that they have been found. My guess: Chrome submits all URLs you visit to Google…?
The HTML was (with <> encapsulation)
meta name='Googlebot' content='nofollow' /
Not all of the devices get crawled, only a subset. I don’t know how Google got wind of them. I want Google to “forget” the URLs, rather than just not crawl them.
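For what it’s worth, if the goal is getting URLs dropped from the index rather than just not followed, the standard robots meta directive for that is noindex (nofollow only tells the crawler not to follow links on the page). A minimal version of the tag, assuming the page does return HTML:

```html
<meta name="googlebot" content="noindex" />
```

Google removes the page from its results the next time it recrawls it and sees the directive.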
We added a robots.txt to the root of agents.electricimp.com by the way - Google seems to have honored it and stopped displaying the few agent URLs that were indexed.
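For anyone curious, a robots.txt that blocks all crawling of everything under that host would look something like the following; this is a guess at the general shape, not the actual file deployed:

```
User-agent: *
Disallow: /
```

Note that robots.txt stops compliant crawlers from fetching pages, which is usually enough to keep them out of results, but it is advisory only; the meta/X-Robots-Tag noindex approach is the explicit "remove from index" signal.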