Commit b8fcb46e
Authored Mar 01, 2016 by Daniel Graña

Merge pull request #1804 from redapple/enable-test-dwnld-timeout

Re-enable HTTPS tests for download timeouts

Parents: 21da4931, c9e78135

Showing 3 changed files with 22 additions and 5 deletions:

  scrapy/core/downloader/handlers/http11.py   +10  -0
  scrapy/core/downloader/webclient.py          +5  -0
  tests/test_downloader_handlers.py            +7  -5
scrapy/core/downloader/handlers/http11.py

@@ -209,6 +209,7 @@ class ScrapyAgent(object):
         self._pool = pool
         self._maxsize = maxsize
         self._warnsize = warnsize
+        self._txresponse = None

     def _get_agent(self, request, timeout):
         bindaddress = request.meta.get('bindaddress') or self._bindAddress
...
@@ -275,6 +276,11 @@ class ScrapyAgent(object):
         if self._timeout_cl.active():
             self._timeout_cl.cancel()
             return result
+
+        # needed for HTTPS requests, otherwise _ResponseReader doesn't
+        # receive connectionLost()
+        if self._txresponse:
+            self._txresponse._transport.stopProducing()
         raise TimeoutError("Getting %s took longer than %s seconds." % (url, timeout))

     def _cb_latency(self, result, request, start_time):
...
@@ -310,6 +316,10 @@ class ScrapyAgent(object):
         d = defer.Deferred(_cancel)
         txresponse.deliverBody(_ResponseReader(d, txresponse, request, maxsize, warnsize))
+
+        # save response for timeouts
+        self._txresponse = txresponse
+
         return d

     def _cb_bodydone(self, result, request, url):
...
scrapy/core/downloader/webclient.py

@@ -83,6 +83,11 @@ class ScrapyHTTPPageGetter(HTTPClient):
     def timeout(self):
         self.transport.loseConnection()
+
+        # transport cleanup needed for HTTPS connections
+        if self.factory.url.startswith(b'https'):
+            self.transport.stopProducing()
+
         self.factory.noPage(\
                 defer.TimeoutError("Getting %s took longer than %s seconds." % \
                 (self.factory.url, self.factory.timeout)))
...
tests/test_downloader_handlers.py

@@ -182,17 +182,19 @@ class HttpTestCase(unittest.TestCase):
         return d

     @defer.inlineCallbacks
-    def test_timeout_download_from_spider(self):
-        if self.scheme == 'https':
-            raise unittest.SkipTest('test_timeout_download_from_spider skipped under https')
+    def test_timeout_download_from_spider_nodata_rcvd(self):
+        # client connects but no data is received
         spider = Spider('foo')
         meta = {'download_timeout': 0.2}
-        # client connects but no data is received
         request = Request(self.getURL('wait'), meta=meta)
         d = self.download_request(request, spider)
         yield self.assertFailure(d, defer.TimeoutError, error.TimeoutError)

+    @defer.inlineCallbacks
+    def test_timeout_download_from_spider_server_hangs(self):
+        # client connects, server send headers and some body bytes but hangs
+        spider = Spider('foo')
+        meta = {'download_timeout': 0.2}
+        request = Request(self.getURL('hang-after-headers'), meta=meta)
+        d = self.download_request(request, spider)
+        yield self.assertFailure(d, defer.TimeoutError, error.TimeoutError)
...
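The first test's scenario, a server that accepts the connection but never sends a byte, can be imitated outside Twisted with a stdlib-only sketch (this is not Scrapy's mock server; all names are illustrative):

```python
import socket
import threading
import time

def silent_server():
    """Accept one connection but never send data (like the test's 'wait' endpoint)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def run():
        conn, _ = srv.accept()
        time.sleep(1.0)        # hold the connection open, send nothing
        conn.close()
        srv.close()

    threading.Thread(target=run, daemon=True).start()
    return srv.getsockname()

host, port = silent_server()
client = socket.create_connection((host, port), timeout=0.2)
try:
    client.recv(1024)          # no data ever arrives, so this times out
    timed_out = False
except socket.timeout:
    timed_out = True
finally:
    client.close()
print(timed_out)  # → True
```

The 0.2-second client timeout mirrors the `download_timeout` meta value used in both tests; the second test's `hang-after-headers` endpoint differs only in that it sends headers and some body bytes before going silent.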