Commit a429d780 authored by Pablo Hoffman

update scrapinghub.com urls to use https

Parent: 7a35a1ad
@@ -235,7 +235,7 @@ After any of these workarounds you should be able to install Scrapy::
 .. _AUR Scrapy package: https://aur.archlinux.org/packages/scrapy/
 .. _homebrew: http://brew.sh/
 .. _zsh: http://www.zsh.org/
-.. _Scrapinghub: http://scrapinghub.com
+.. _Scrapinghub: https://scrapinghub.com
 .. _Anaconda: http://docs.continuum.io/anaconda/index
 .. _Miniconda: http://conda.pydata.org/docs/install/quick.html
 .. _conda-forge: https://conda-forge.github.io/
@@ -51,9 +51,9 @@ just like ``scrapyd-deploy``.
 .. _Scrapyd: https://github.com/scrapy/scrapyd
 .. _Deploying your project: https://scrapyd.readthedocs.io/en/latest/deploy.html
-.. _Scrapy Cloud: http://scrapinghub.com/scrapy-cloud/
+.. _Scrapy Cloud: https://scrapinghub.com/scrapy-cloud
 .. _scrapyd-client: https://github.com/scrapy/scrapyd-client
-.. _shub: http://doc.scrapinghub.com/shub.html
+.. _shub: https://doc.scrapinghub.com/shub.html
 .. _scrapyd-deploy documentation: https://scrapyd.readthedocs.io/en/latest/deploy.html
 .. _Scrapy Cloud documentation: http://doc.scrapinghub.com/scrapy-cloud.html
-.. _Scrapinghub: http://scrapinghub.com/
+.. _Scrapinghub: https://scrapinghub.com/
@@ -102,7 +102,7 @@ instance, which can be accessed and used like this::
     class MySpider(scrapy.Spider):
 
         name = 'myspider'
-        start_urls = ['http://scrapinghub.com']
+        start_urls = ['https://scrapinghub.com']
 
         def parse(self, response):
             self.logger.info('Parse function called on %s', response.url)
@@ -118,7 +118,7 @@ Python logger you want. For example::
     class MySpider(scrapy.Spider):
 
         name = 'myspider'
-        start_urls = ['http://scrapinghub.com']
+        start_urls = ['https://scrapinghub.com']
 
         def parse(self, response):
             logger.info('Parse function called on %s', response.url)
@@ -253,5 +253,5 @@ If you are still unable to prevent your bot getting banned, consider contacting
 .. _Google cache: http://www.googleguide.com/cached_pages.html
 .. _testspiders: https://github.com/scrapinghub/testspiders
 .. _Twisted Reactor Overview: https://twistedmatrix.com/documents/current/core/howto/reactor-basics.html
-.. _Crawlera: http://scrapinghub.com/crawlera
+.. _Crawlera: https://scrapinghub.com/crawlera
 .. _scrapoxy: http://scrapoxy.io/
@@ -37,5 +37,5 @@ To use the packages:
 .. warning:: `python-scrapy` is a different package provided by official debian
     repositories, it's very outdated and it isn't supported by Scrapy team.
 
-.. _Scrapinghub: http://scrapinghub.com/
+.. _Scrapinghub: https://scrapinghub.com/
 .. _GitHub repo: https://github.com/scrapy/scrapy
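The two logging hunks in this commit only change the `start_urls` scheme, but the spider-logging pattern they touch can be sketched with nothing more than the standard `logging` module. The spider name `'myspider'` and the URL come from the diff; the standalone `parse` function below is an illustrative stand-in for the spider method, not Scrapy's actual API:

```python
import logging

# Scrapy gives each spider a logger named after the spider; the same
# naming pattern is reproduced here with the stdlib logging module.
logger = logging.getLogger('myspider')

def parse(url):
    # Same call shape as the diff's parse(): lazy %-style formatting,
    # so the message is only interpolated if the record is emitted.
    logger.info('Parse function called on %s', url)
```

Using `%s` with separate arguments (rather than an f-string) matches the snippets in the diff and defers string formatting until a handler actually emits the record.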