looyolo / scrapy — a fork kept in sync with its fork source; the original upstream project is no longer accessible.
Commit bd4f156d
Authored: October 24, 2016, by Paul Tremberth
Committed: October 24, 2016, via GitHub

Merge pull request #2354 from stav/doc-spider-arguments

[MRG+1] doc: wording

Parents: 1be2447a, 99daea49
Showing 1 changed file with 10 additions and 10 deletions.

docs/topics/spiders.rst (+10, −10)
@@ -24,8 +24,8 @@ For spiders, the scraping cycle goes through something like this:
    Requests.

 2. In the callback function, you parse the response (web page) and return either
-   dicts with extracted data, :class:`~scrapy.item.Item` objects,
-   :class:`~scrapy.http.Request` objects, or an iterable of these objects.
+   dicts with extracted data, :class:`~scrapy.item.Item` objects,
+   :class:`~scrapy.http.Request` objects, or an iterable of these objects.
    Those Requests will also contain a callback (maybe
    the same) and will then be downloaded by Scrapy and then their
    response handled by the specified callback.
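The callback contract described in this hunk — yield dicts with extracted data, Item objects, Request objects, or an iterable of these — can be sketched in plain Python. This is a hedged illustration, not Scrapy code: the `FakeRequest` namedtuple is a hypothetical stand-in for `scrapy.http.Request` so the sketch runs without Scrapy installed.

```python
from collections import namedtuple

# Hypothetical stand-in for scrapy.http.Request (url + callback only).
FakeRequest = namedtuple("FakeRequest", ["url", "callback"])


def parse(response):
    """Callback: yield extracted data as dicts, plus follow-up requests."""
    # Extracted data goes out as a plain dict (or an Item in real Scrapy).
    yield {"title": response["title"]}
    # Each discovered link becomes a new request carrying a callback
    # (here the same one); Scrapy would download it and call the callback
    # with the resulting response.
    for url in response["links"]:
        yield FakeRequest(url=url, callback=parse)


page = {"title": "Example", "links": ["http://example.com/a"]}
results = list(parse(page))
```

Because `parse` is a generator, it is itself "an iterable of these objects", which is exactly the third form the docs allow.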
@@ -56,7 +56,7 @@ scrapy.Spider
    must inherit (including spiders that come bundled with Scrapy, as well as spiders
    that you write yourself). It doesn't provide any special functionality. It just
    provides a default :meth:`start_requests` implementation which sends requests from
-   the :attr:`start_urls` spider attribute and calls the spider's method ``parse``
+   the :attr:`start_urls` spider attribute and calls the spider's method ``parse``
    for each of the resulting responses.

    .. attribute:: name
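The default `start_requests` behaviour this hunk documents — one request per URL in `start_urls`, each handled by `parse` — can be approximated without Scrapy. This is a sketch of the described behaviour, not Scrapy's actual source; the `Request` namedtuple is a stand-in for `scrapy.Request`.

```python
from collections import namedtuple

# Stand-in for scrapy.Request in this dependency-free sketch.
Request = namedtuple("Request", ["url", "callback"])


class Spider:
    start_urls = []

    def start_requests(self):
        # Behaviour described in the docs: send one request per URL in
        # start_urls, with the spider's parse method as the callback.
        for url in self.start_urls:
            yield Request(url=url, callback=self.parse)

    def parse(self, response):
        raise NotImplementedError


class MySpider(Spider):
    start_urls = ["http://example.com/1.html", "http://example.com/2.html"]


reqs = list(MySpider().start_requests())
```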
@@ -161,7 +161,7 @@ scrapy.Spider
        class MySpider(scrapy.Spider):
            name = 'myspider'

            def start_requests(self):
                return [scrapy.FormRequest("http://www.example.com/login",
                                           formdata={'user': 'john', 'pass': 'secret'},
@@ -247,8 +247,8 @@ Return multiple Requests and items from a single callback::
            for url in response.xpath('//a/@href').extract():
                yield scrapy.Request(url, callback=self.parse)

-Instead of :attr:`~.start_urls` you can use :meth:`~.start_requests` directly;
+Instead of :attr:`~.start_urls` you can use :meth:`~.start_requests` directly;
 to give data more structure you can use :ref:`topics-items`::

    import scrapy
@@ -257,7 +257,7 @@ to give data more structure you can use :ref:`topics-items`::
    class MySpider(scrapy.Spider):
        name = 'example.com'
        allowed_domains = ['example.com']

        def start_requests(self):
            yield scrapy.Request('http://www.example.com/1.html', self.parse)
            yield scrapy.Request('http://www.example.com/2.html', self.parse)
@@ -269,7 +269,7 @@ to give data more structure you can use :ref:`topics-items`::
        for url in response.xpath('//a/@href').extract():
            yield scrapy.Request(url, callback=self.parse)

 .. _spiderargs:

 Spider arguments
@@ -285,7 +285,7 @@ Spider arguments are passed through the :command:`crawl` command using the
    scrapy crawl myspider -a category=electronics

-Spiders receive arguments in their constructors::
+Spiders can access arguments in their `__init__` methods::

    import scrapy
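The reworded sentence — arguments are received through `__init__` — follows this pattern; here is a self-contained sketch of it that runs without Scrapy (the base class below only mimics how keyword arguments become spider attributes; Scrapy's real `Spider.__init__` does more, e.g. validating `name`).

```python
class Spider:
    # Rough mimic of how a -a command-line argument (e.g.
    # `scrapy crawl myspider -a category=electronics`) reaches the
    # spider: it arrives as a keyword argument to __init__ and is
    # stored as an instance attribute.
    def __init__(self, name=None, **kwargs):
        if name is not None:
            self.name = name
        self.__dict__.update(kwargs)


class MySpider(Spider):
    name = 'myspider'

    def __init__(self, category=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # The -a argument is now a plain Python value we can use.
        self.start_urls = ['http://www.example.com/categories/%s' % category]


spider = MySpider(category='electronics')
```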
@@ -301,7 +301,7 @@ Spider arguments can also be passed through the Scrapyd ``schedule.json`` API.
 See `Scrapyd documentation`_.

 .. _builtin-spiders:

 Generic Spiders
 ===============
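The last hunk's context mentions that spider arguments can also go through Scrapyd's `schedule.json` API, where extra POST fields beyond `project` and `spider` are forwarded to the spider, analogous to `-a`. A standard-library-only sketch that builds such a request body without sending anything (`myproject` is a hypothetical project name):

```python
from urllib.parse import urlencode

# Fields for POST http://localhost:6800/schedule.json — keys other than
# "project" and "spider" are passed to the spider as arguments.
params = {
    'project': 'myproject',     # hypothetical Scrapyd project name
    'spider': 'myspider',
    'category': 'electronics',  # spider argument, like -a category=electronics
}
body = urlencode(params)
```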