looyolo/scrapy — commit 19b2910a

Author: Konstantin Lopuhin
Date: Feb 15, 2016
Parent: 408bc158

Fix assert_aws_environ: check for botocore with boto fallback on PY2
2 changed files with 19 additions and 21 deletions:

- scrapy/utils/test.py (+14, -5)
- tests/test_downloader_handlers.py (+5, -16)
scrapy/utils/test.py

```diff
@@ -5,6 +5,7 @@ This module contains some assorted functions used in tests
 import os
 from importlib import import_module
+import six
 from twisted.trial.unittest import SkipTest
@@ -12,14 +13,22 @@ def assert_aws_environ():
     """Asserts the current environment is suitable for running AWS testsi.
     Raises SkipTest with the reason if it's not.
     """
-    try:
-        import boto
-    except ImportError as e:
-        raise SkipTest(str(e))
+    skip_if_no_boto()
     if 'AWS_ACCESS_KEY_ID' not in os.environ:
         raise SkipTest("AWS keys not found")
 
 
+def skip_if_no_boto():
+    try:
+        import botocore
+    except ImportError:
+        if six.PY2:
+            try:
+                import boto
+            except ImportError:
+                raise SkipTest('missing botocore or boto library')
+        else:
+            raise SkipTest('missing botocore library')
+
+
 def get_crawler(spidercls=None, settings_dict=None):
     """Return an unconfigured Crawler object. If settings_dict is given, it
     will be used to populate the crawler settings with a project level
```
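The helper introduced above probes for botocore and, on Python 2 only, falls back to boto, skipping the test when neither is importable. The same optional-dependency probe can be sketched standalone with only the standard library (the module names passed in, and the `skip_if_missing` name itself, are illustrative; the patch raises `twisted.trial.unittest.SkipTest`, while this sketch uses the stdlib `unittest.SkipTest`):

```python
import importlib.util
import unittest


def skip_if_missing(*modules):
    """Raise unittest.SkipTest unless at least one of the named
    top-level modules is importable; return the first one found.
    Mirrors the botocore-with-boto-fallback probe from the commit."""
    for name in modules:
        # find_spec returns None when a top-level module is absent,
        # without actually importing it.
        if importlib.util.find_spec(name) is not None:
            return name
    raise unittest.SkipTest('missing ' + ' or '.join(modules))
```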
tests/test_downloader_handlers.py

```diff
@@ -28,7 +28,7 @@ from scrapy.core.downloader.handlers.s3 import S3DownloadHandler
 from scrapy.spiders import Spider
 from scrapy.http import Request
 from scrapy.settings import Settings
-from scrapy.utils.test import get_crawler
+from scrapy.utils.test import get_crawler, skip_if_no_boto
 from scrapy.utils.python import to_bytes
 from scrapy.exceptions import NotConfigured
@@ -437,22 +437,10 @@ class HttpDownloadHandlerMock(object):
         return request
 
 
-class BaseS3TestCase(unittest.TestCase):
-    try:
-        import botocore
-    except ImportError:
-        if six.PY2:
-            try:
-                import boto
-            except ImportError:
-                skip = 'missing botocore or boto library'
-        else:
-            skip = 'missing botocore library'
-
-
-class S3AnonTestCase(BaseS3TestCase):
+class S3AnonTestCase(unittest.TestCase):
 
     def setUp(self):
+        skip_if_no_boto()
         self.s3reqh = S3DownloadHandler(Settings(),
                 httpdownloadhandler=HttpDownloadHandlerMock,
                 #anon=True, # is implicit
@@ -469,7 +457,7 @@ class S3AnonTestCase(BaseS3TestCase):
             httpreq.url, 'http://aws-publicdatasets.s3.amazonaws.com/')
 
 
-class S3TestCase(BaseS3TestCase):
+class S3TestCase(unittest.TestCase):
     download_handler_cls = S3DownloadHandler
 
     # test use same example keys than amazon developer guide
@@ -480,6 +468,7 @@ class S3TestCase(BaseS3TestCase):
     AWS_SECRET_ACCESS_KEY = 'uV3F3YluFJax1cknvbcGwgjvx4QpvB+leU8dUj2o'
 
     def setUp(self):
+        skip_if_no_boto()
         s3reqh = S3DownloadHandler(Settings(), self.AWS_ACCESS_KEY_ID,
             self.AWS_SECRET_ACCESS_KEY,
             httpdownloadhandler=HttpDownloadHandlerMock)
```
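The diff moves the skip decision from a class-level `skip` attribute into `setUp()`, where `skip_if_no_boto()` is called per test. A minimal self-contained illustration of why that works (the class name and skip message here are hypothetical): unittest treats a `SkipTest` raised in `setUp()` as a skip rather than an error.

```python
import unittest


class OptionalBackendTestCase(unittest.TestCase):
    def setUp(self):
        # Raising SkipTest here skips every test in the class,
        # just as skip_if_no_boto() does in the patched test cases.
        raise unittest.SkipTest('missing botocore library')

    def test_download(self):
        self.fail('never reached: setUp skipped this test')


# Run the case programmatically and inspect the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(OptionalBackendTestCase)
result = unittest.TestResult()
suite.run(result)
```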