python – Scrapy: how do I manually insert a request from the spider_idle event callback?

I created a spider and connected a method to the spider_idle signal.

How do I add requests manually? I can't simply return items from parse – parse isn't running in this case, because all the known URLs have already been parsed. I have a method that generates new requests, and I want to run it from the spider_idle callback to schedule the requests it creates.

class FooSpider(BaseSpider):
    name = 'foo'

    def __init__(self):
        dispatcher.connect(self.dont_close_me, signals.spider_idle)

    def dont_close_me(self, spider):
        if spider != self:
            return
        # The engine instance will allow me to schedule requests, but
        # how do I get the engine object?
        engine = unknown_get_engine()
        engine.schedule(self.create_request())

        # afterward, ensure we stay alive by raising DontCloseSpider
        raise DontCloseSpider("..I prefer live spiders.")

Update: I've determined that I probably need the ExecutionEngine object, but I don't quite know how to get it from within the spider, although it is available from a Crawler instance.

Update 2: ..crawler is attached as a property by the superclass, so I can just use self.crawler with no extra effort.

Best answer
# Imports for the pre-1.0 Scrapy API this answer targets
from scrapy import signals
from scrapy.exceptions import DontCloseSpider
from scrapy.spider import BaseSpider
from scrapy.xlib.pydispatch import dispatcher

class FooSpider(BaseSpider):
    def __init__(self, *args, **kwargs):
        super(FooSpider, self).__init__(*args, **kwargs)
        dispatcher.connect(self.dont_close_me, signals.spider_idle)

    def dont_close_me(self, spider):
        if spider != self:
            return

        self.crawler.engine.crawl(self.create_request(), spider)

        raise DontCloseSpider("..I prefer live spiders.")

2016 update:

import scrapy

class FooSpider(BaseSpider):
    yet = False

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        from_crawler = super(FooSpider, cls).from_crawler
        spider = from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.idle, signal=scrapy.signals.spider_idle)
        return spider

    def idle(self):
        if not self.yet:
            self.crawler.engine.crawl(self.create_request(), self)
            self.yet = True
