Commit da8d0ead authored by Konstantin Lopuhin

Use "url" variable in the example

Use it instead of the hardcoded http://www.example.com: otherwise the url variable is unused and only one request makes it past the dupefilter.
Parent d3ced85e
@@ -200,7 +200,7 @@ There is support for keeping multiple cookie sessions per spider by using the
 For example::

     for i, url in enumerate(urls):
-        yield scrapy.Request("http://www.example.com", meta={'cookiejar': i},
+        yield scrapy.Request(url, meta={'cookiejar': i},
             callback=self.parse_page)

 Keep in mind that the :reqmeta:`cookiejar` meta key is not "sticky". You need to keep
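For context, a minimal sketch of how the corrected example might sit inside a spider, including forwarding the cookiejar in a follow-up request (since the meta key is not "sticky"). The spider name, URLs, and callback names here are illustrative, not part of the changed docs:

    import scrapy

    class CookieSessionsSpider(scrapy.Spider):
        # Hypothetical spider name and URLs, for illustration only.
        name = "cookie_sessions"
        urls = ["http://www.example.com/session/1",
                "http://www.example.com/session/2"]

        def start_requests(self):
            # One cookiejar per session: requests tagged with the same
            # 'cookiejar' value share cookies only with each other.
            for i, url in enumerate(self.urls):
                yield scrapy.Request(url, meta={'cookiejar': i},
                                     callback=self.parse_page)

        def parse_page(self, response):
            # The cookiejar key is not "sticky": pass it along explicitly
            # so follow-up requests keep using the same cookie session.
            yield scrapy.Request("http://www.example.com/otherpage",
                                 meta={'cookiejar': response.meta['cookiejar']},
                                 callback=self.parse_other_page)

        def parse_other_page(self, response):
            pass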