Web.Crawler.Crawler calls queue->set_stage(real_uri, 6); if a URI is denied by a robots.txt exclusion. Does anyone know what 6 means? As far as I can tell it's just ignored later on in MemoryQueue()->get(). The result is that any time a Crawler hits a URI denied by robots.txt, it loops forever: it fetches that URI from the queue, calls the error_callback, and then leaves it in the queue to be checked again.
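
In the meantime I'm considering a workaround along these lines: a queue that treats stage 6 as terminal, so denied URIs get dropped instead of handed back to the crawler. This is only a sketch, assuming stage 6 is only ever set on robots.txt denials, that set_stage() takes (Standards.URI, int), and that done() actually removes the URI from the ready set -- that's how I read Web.Crawler.pmod, but I may be wrong. The class name is mine.

  // Hypothetical workaround: make stage 6 terminal so the crawler
  // stops re-fetching URIs denied by robots.txt.
  class RobotsAwareQueue
  {
    inherit Web.Crawler.MemoryQueue;

    void set_stage(Standards.URI real_uri, int stage)
    {
      ::set_stage(real_uri, stage);
      if(stage == 6)      // robots.txt denial, as far as I can tell
        done(real_uri);   // drop it instead of leaving it queued
    }
  }

If someone knows what the stages are actually supposed to mean, or whether the real fix is for Crawler to call queue->done() itself, I'd much rather do that than carry this subclass around.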
Adam