.. _topics-settings:

========
Settings
========

.. module:: scrapy.conf
   :synopsis: Settings manager

The Scrapy settings allow you to customize the behaviour of all Scrapy
components, including the core, extensions, pipelines and spiders themselves.

The settings infrastructure provides a global namespace of key-value mappings
from which the code can pull configuration values. The settings can be
populated through different mechanisms, which are described below.

The settings are also the mechanism for selecting the currently active Scrapy
project (in case you have many).

For a list of available built-in settings see: :ref:`topics-settings-ref`.

Designating the settings
========================

When you use Scrapy, you have to tell it which settings you're using. You can
do this by using an environment variable, ``SCRAPY_SETTINGS_MODULE``.

The value of ``SCRAPY_SETTINGS_MODULE`` should be in Python path syntax, e.g.
``myproject.settings``. Note that the settings module should be on the
Python `import search path`_.

.. _import search path: http://diveintopython.org/getting_to_know_python/everything_is_an_object.html
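
As a quick sketch, the variable can also be set from Python code, provided it
happens before ``scrapy.conf`` is first imported (the ``myproject.settings``
module name is just an assumption for this example)::

    import os

    # scrapy.conf reads SCRAPY_SETTINGS_MODULE when it is first imported,
    # so the variable must be set before that import happens
    os.environ['SCRAPY_SETTINGS_MODULE'] = 'myproject.settings'

    from scrapy.conf import settings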

Populating the settings
=======================

Settings can be populated using different mechanisms, each of which has a
different precedence. Here is the list of them, in decreasing order of
precedence:

 1. Global overrides (highest precedence)
 2. Environment variables
 3. scrapy_settings
 4. Default settings per-command
 5. Default global settings (lowest precedence)

These mechanisms are described in more detail below.

1. Global overrides
-------------------

Global overrides take the highest precedence, and are usually populated by
command-line options.

Example::

   >>> from scrapy.conf import settings
   >>> settings.overrides['LOG_ENABLED'] = True

You can also override one (or more) settings from the command line using the
``--set`` command line argument.

.. highlight:: sh

Example::

    scrapy crawl domain.com --set LOG_FILE=scrapy.log

2. Environment variables
------------------------

You can populate settings using environment variables prefixed with
``SCRAPY_``. For example, to change the log file location on Unix systems::

    $ export SCRAPY_LOG_FILE=scrapy.log
    $ scrapy crawl example.com

On Windows systems, you can change the environment variables from the Control
Panel following `these guidelines`_.

.. _these guidelines: http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sysdm_advancd_environmnt_addchange_variable.mspx

3. scrapy_settings
------------------

scrapy_settings is the standard configuration file for your Scrapy project.
It's where most of your custom settings will be populated.
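
A minimal sketch of what this file might contain (all values shown are
illustrative assumptions, not recommendations)::

    BOT_NAME = 'mybot'
    LOG_LEVEL = 'INFO'
    DOWNLOAD_DELAY = 0.5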

4. Default settings per-command
-------------------------------

Each :doc:`Scrapy tool </topics/commands>` command can have its own default
settings, which override the global default settings. Those custom command
settings are specified in the ``default_settings`` attribute of the command
class.

5. Default global settings
--------------------------

The global defaults are located in the ``scrapy.conf.default_settings`` module
and documented in the :ref:`topics-settings-ref` section.

How to access settings
======================

.. highlight:: python

Here's an example of the simplest way to access settings from Python code::

   >>> from scrapy.conf import settings
   >>> print settings['LOG_ENABLED']
   True

In other words, settings can be accessed like a dict, but it's usually
preferred to extract the setting in the format you need, to avoid type errors.
In order to do that you'll have to use one of the following methods:

.. class:: Settings()

   There is a (singleton) Settings object automatically instantiated when the
   :mod:`scrapy.conf` module is loaded, and it's usually accessed like this::

   >>> from scrapy.conf import settings

    .. method:: get(name, default=None)

       Get a setting value without affecting its original type.

       :param name: the setting name
       :type name: string

       :param default: the value to return if no setting is found
       :type default: any

    .. method:: getbool(name, default=False)

       Get a setting value as a boolean. For example, both ``1`` and ``'1'``, and
       ``True`` return ``True``, while ``0``, ``'0'``, ``False`` and ``None``
       return ``False``.

       For example, settings populated through environment variables set to ``'0'``
       will return ``False`` when using this method.

       :param name: the setting name
       :type name: string

       :param default: the value to return if no setting is found
       :type default: any

    .. method:: getint(name, default=0)

       Get a setting value as an int.

       :param name: the setting name
       :type name: string

       :param default: the value to return if no setting is found
       :type default: any

    .. method:: getfloat(name, default=0.0)

       Get a setting value as a float.

       :param name: the setting name
       :type name: string

       :param default: the value to return if no setting is found
       :type default: any

    .. method:: getlist(name, default=None)

       Get a setting value as a list. If the setting's original type is a list, it
       will be returned verbatim. If it's a string, it will be split by ",".

       For example, settings populated through environment variables set to
       ``'one,two'`` will return a list ``['one', 'two']`` when using this method.

       :param name: the setting name
       :type name: string

       :param default: the value to return if no setting is found
       :type default: any
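
For example, using a few of the settings documented below (a quick usage
sketch; the values returned depend on how your project is configured)::

    >>> from scrapy.conf import settings
    >>> settings.get('BOT_NAME')              # returned with its original type
    >>> settings.getbool('LOG_ENABLED')       # '0' and '1' strings become booleans
    >>> settings.getint('DOWNLOAD_TIMEOUT')   # '180' becomes 180
    >>> settings.getlist('SPIDER_MODULES')    # 'a,b' becomes ['a', 'b']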


Rationale for setting names
===========================

Setting names are usually prefixed with the component that they configure. For
example, proper setting names for a fictional robots.txt extension would be
``ROBOTSTXT_ENABLED``, ``ROBOTSTXT_OBEY``, ``ROBOTSTXT_CACHEDIR``, etc.


.. _topics-settings-ref:

Built-in settings reference
===========================

Here's a list of all available Scrapy settings, in alphabetical order, along
with their default values and the scope where they apply. 

The scope, where available, shows where the setting is being used, if it's tied
to any particular component. In that case the module of that component will be
shown, typically an extension, middleware or pipeline. It also means that the
component must be enabled in order for the setting to have any effect.

.. setting:: AWS_ACCESS_KEY_ID

AWS_ACCESS_KEY_ID
-----------------

Default: ``None``

The AWS access key used by code that requires access to `Amazon Web services`_,
such as the :ref:`S3 feed storage backend <topics-feed-storage-s3>`.

.. setting:: AWS_SECRET_ACCESS_KEY

AWS_SECRET_ACCESS_KEY
---------------------

Default: ``None``

The AWS secret key used by code that requires access to `Amazon Web services`_,
such as the :ref:`S3 feed storage backend <topics-feed-storage-s3>`.

.. setting:: BOT_NAME

BOT_NAME
--------

Default: ``'scrapybot'``

The name of the bot implemented by this Scrapy project (also known as the
project name). This will be used to construct the User-Agent by default, and
also for logging.

It's automatically populated with your project name when you create your
project with the :command:`startproject` command.

.. setting:: BOT_VERSION

BOT_VERSION
-----------

Default: ``1.0``

The version of the bot implemented by this Scrapy project. This will be used to
construct the User-Agent by default.

.. setting:: CONCURRENT_ITEMS

CONCURRENT_ITEMS
----------------

Default: ``100``

Maximum number of concurrent items (per response) to process in parallel in the
Item Processor (also known as the :ref:`Item Pipeline <topics-item-pipeline>`).

.. setting:: CONCURRENT_REQUESTS_PER_SPIDER

CONCURRENT_REQUESTS_PER_SPIDER
------------------------------

Default: ``8``

Specifies how many concurrent (i.e. simultaneous) requests will be performed
per open spider.

.. setting:: CONCURRENT_SPIDERS

CONCURRENT_SPIDERS
------------------

Default: ``8``

Maximum number of spiders to scrape in parallel.

.. setting:: COOKIES_DEBUG

COOKIES_DEBUG
-------------

Default: ``False``

Enable debugging messages of the Cookies Downloader Middleware.

.. setting:: DEFAULT_ITEM_CLASS

DEFAULT_ITEM_CLASS
------------------

Default: ``'scrapy.item.Item'``

The default class that will be used for instantiating items in :ref:`the
Scrapy shell <topics-shell>`.

.. setting:: DEFAULT_REQUEST_HEADERS

DEFAULT_REQUEST_HEADERS
-----------------------

Default::

    {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'en',
    }

The default headers used for Scrapy HTTP Requests. They're populated in the
:class:`~scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware`.

.. setting:: DEFAULT_RESPONSE_ENCODING

DEFAULT_RESPONSE_ENCODING
-------------------------

Default: ``'ascii'``

The default encoding to use for :class:`~scrapy.http.TextResponse` objects (and
subclasses) when no encoding is declared and no encoding could be inferred from
the body.

.. setting:: DEPTH_LIMIT

DEPTH_LIMIT
-----------

Default: ``0``

The maximum depth that will be allowed to crawl for any site. If zero, no limit
will be imposed.

.. setting:: DEPTH_STATS

DEPTH_STATS
-----------

Default: ``True``

Whether to collect depth stats.

.. setting:: DOWNLOADER_DEBUG

DOWNLOADER_DEBUG
----------------

Default: ``False``

Whether to enable the Downloader debugging mode.

.. setting:: DOWNLOADER_MIDDLEWARES

DOWNLOADER_MIDDLEWARES
----------------------

Default: ``{}``

A dict containing the downloader middlewares enabled in your project, and their
orders. For more info see :ref:`topics-downloader-middleware-setting`.
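
For example, to enable a (hypothetical) middleware from your project, giving it
an order between the built-in ones::

    DOWNLOADER_MIDDLEWARES = {
        'myproject.middlewares.CustomHeadersMiddleware': 543,
    }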

.. setting:: DOWNLOADER_MIDDLEWARES_BASE

DOWNLOADER_MIDDLEWARES_BASE
---------------------------

Default:: 

    {
        'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
        'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
        'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
        'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
        'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
        'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
        'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
        'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 800,
        'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
        'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
    }

A dict containing the downloader middlewares enabled by default in Scrapy. You
should never modify this setting in your project, modify
:setting:`DOWNLOADER_MIDDLEWARES` instead.  For more info see
:ref:`topics-downloader-middleware-setting`.

.. setting:: DOWNLOADER_STATS

DOWNLOADER_STATS
----------------

Default: ``True``

Whether to enable downloader stats collection.

.. setting:: DOWNLOAD_DELAY

DOWNLOAD_DELAY
--------------

Default: ``0``

The amount of time (in secs) that the downloader should wait before downloading
consecutive pages from the same spider. This can be used to throttle the
crawling speed to avoid hitting servers too hard. Decimal numbers are
supported.  Example::

    DOWNLOAD_DELAY = 0.25    # 250 ms of delay 

This setting is also affected by the :setting:`RANDOMIZE_DOWNLOAD_DELAY`
setting (which is enabled by default). By default, Scrapy doesn't wait a fixed
amount of time between requests, but uses a random interval between 0.5 and 1.5
* :setting:`DOWNLOAD_DELAY`.

Another way to change the download delay (per spider, instead of globally) is
by using the ``download_delay`` spider attribute, which takes precedence over
this setting.

.. setting:: DOWNLOAD_TIMEOUT

DOWNLOAD_TIMEOUT
----------------

Default: ``180``

The amount of time (in secs) that the downloader will wait before timing out.

.. setting:: DUPEFILTER_CLASS

DUPEFILTER_CLASS
----------------

Default: ``'scrapy.contrib.dupefilter.RequestFingerprintDupeFilter'``

The class used to detect and filter duplicate requests.

The default (``RequestFingerprintDupeFilter``) filters based on request fingerprint
(using ``scrapy.utils.request.request_fingerprint``) and grouping per domain.
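
As a quick sketch of the underlying idea (assuming the default fingerprint
function), two equivalent requests get the same fingerprint, so the second one
is filtered out as a duplicate::

    >>> from scrapy.http import Request
    >>> from scrapy.utils.request import request_fingerprint
    >>> r1 = Request('http://www.example.com/page')
    >>> r2 = Request('http://www.example.com/page')
    >>> request_fingerprint(r1) == request_fingerprint(r2)
    True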

.. setting:: ENCODING_ALIASES

ENCODING_ALIASES
----------------

Default: ``{}``

A mapping of custom encoding aliases for your project, where the keys are the
aliases (and must be lower case) and the values are the encodings they map to.

This setting extends the :setting:`ENCODING_ALIASES_BASE` setting which
contains some default mappings.
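
Example (the alias shown is just an illustrative assumption)::

    ENCODING_ALIASES = {
        'sjis': 'shift_jis',
    }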

.. setting:: ENCODING_ALIASES_BASE

ENCODING_ALIASES_BASE
---------------------

Default::

    {
        # gb2312 is superseded by gb18030
        'gb2312': 'gb18030',
        'chinese': 'gb18030',
        'csiso58gb231280': 'gb18030',
        'euc-cn': 'gb18030',
        'euccn': 'gb18030',
        'eucgb2312-cn': 'gb18030',
        'gb2312-1980': 'gb18030',
        'gb2312-80': 'gb18030',
        'iso-ir-58': 'gb18030',
        # gbk is superseded by gb18030
        'gbk': 'gb18030',
        '936': 'gb18030',
        'cp936': 'gb18030',
        'ms936': 'gb18030',
        # latin_1 is a subset of cp1252
        'latin_1': 'cp1252',
        'iso-8859-1': 'cp1252',
        'iso8859-1': 'cp1252',
        '8859': 'cp1252',
        'cp819': 'cp1252',
        'latin': 'cp1252',
        'latin1': 'cp1252',
        'l1': 'cp1252',
        # others
        'zh-cn': 'gb18030',
        'win-1251': 'cp1251',
        'macintosh': 'mac_roman',
        'x-sjis': 'shift_jis',
    }

The default encoding aliases defined in Scrapy. Don't override this setting in
your project, override :setting:`ENCODING_ALIASES` instead.

The reason why `ISO-8859-1`_ (and all its aliases) are mapped to `CP1252`_ is
due to a well known browser hack. For more information see: `Character
encodings in HTML`_.

.. _ISO-8859-1: http://en.wikipedia.org/wiki/ISO/IEC_8859-1
.. _CP1252: http://en.wikipedia.org/wiki/Windows-1252
.. _Character encodings in HTML: http://en.wikipedia.org/wiki/Character_encodings_in_HTML

.. setting:: EXTENSIONS

EXTENSIONS
----------

Default: ``{}``

A dict containing the extensions enabled in your project, and their orders. 

.. setting:: EXTENSIONS_BASE

EXTENSIONS_BASE
---------------

Default:: 

    {
        'scrapy.contrib.corestats.CoreStats': 0,
        'scrapy.webservice.WebService': 0,
        'scrapy.telnet.TelnetConsole': 0,
        'scrapy.contrib.memusage.MemoryUsage': 0,
        'scrapy.contrib.memdebug.MemoryDebugger': 0,
        'scrapy.contrib.closedomain.CloseDomain': 0,
    }

The list of available extensions. Keep in mind that some of them need to
be enabled through a setting. By default, this setting contains all stable
built-in extensions. 

For more information see the :ref:`extensions user guide <topics-extensions>`
and the :ref:`list of available extensions <topics-extensions-ref>`.

.. setting:: ITEM_PIPELINES

ITEM_PIPELINES
--------------

Default: ``[]``

The item pipelines to use (a list of classes).

Example::

   ITEM_PIPELINES = [
       'mybot.pipeline.validate.ValidateMyItem',
       'mybot.pipeline.validate.StoreMyItem'
   ]

.. setting:: LOG_ENABLED

LOG_ENABLED
-----------

Default: ``True``

Whether to enable logging.

.. setting:: LOG_ENCODING

LOG_ENCODING
------------

Default: ``'utf-8'``

The encoding to use for logging.

.. setting:: LOG_FILE

LOG_FILE
--------

Default: ``None``

File name to use for logging output. If None, standard error will be used.

.. setting:: LOG_LEVEL

LOG_LEVEL
---------

Default: ``'DEBUG'``

Minimum level to log. Available levels are: CRITICAL, ERROR, WARNING,
INFO, DEBUG. For more info see :ref:`topics-logging`.

.. setting:: LOG_STDOUT

LOG_STDOUT
----------

Default: ``False``

If ``True``, all standard output (and error) of your process will be redirected
to the log. For example, if you ``print 'hello'``, it will appear in the Scrapy
log.

.. setting:: MEMDEBUG_ENABLED

MEMDEBUG_ENABLED
----------------

Default: ``False``

Whether to enable memory debugging.

.. setting:: MEMDEBUG_NOTIFY

MEMDEBUG_NOTIFY
---------------

Default: ``[]``

When memory debugging is enabled, a memory report will be sent to the specified
addresses if this setting is not empty; otherwise the report will be written to
the log.

Example::

    MEMDEBUG_NOTIFY = ['user@example.com']

.. setting:: MEMUSAGE_ENABLED

MEMUSAGE_ENABLED
----------------

Default: ``False``

Scope: ``scrapy.contrib.memusage``

Whether to enable the memory usage extension, which will shut down the Scrapy
process when it exceeds a memory limit, and also notify by email when that
happens.

See :ref:`topics-extensions-ref-memusage`.

.. setting:: MEMUSAGE_LIMIT_MB

MEMUSAGE_LIMIT_MB
-----------------

Default: ``0``

Scope: ``scrapy.contrib.memusage``

The maximum amount of memory to allow (in megabytes) before shutting down
Scrapy (if :setting:`MEMUSAGE_ENABLED` is True). If zero, no check will be
performed.

See :ref:`topics-extensions-ref-memusage`.

.. setting:: MEMUSAGE_NOTIFY_MAIL

MEMUSAGE_NOTIFY_MAIL
--------------------

Default: ``False``

Scope: ``scrapy.contrib.memusage``

A list of emails to notify if the memory limit has been reached.

Example::

    MEMUSAGE_NOTIFY_MAIL = ['user@example.com']

See :ref:`topics-extensions-ref-memusage`.

.. setting:: MEMUSAGE_REPORT

MEMUSAGE_REPORT
---------------

Default: ``False``

Scope: ``scrapy.contrib.memusage``

Whether to send a memory usage report after each domain has been closed.

See :ref:`topics-extensions-ref-memusage`.

.. setting:: MEMUSAGE_WARNING_MB

MEMUSAGE_WARNING_MB
-------------------

Default: ``0``

Scope: ``scrapy.contrib.memusage``

The maximum amount of memory to allow (in megabytes) before sending a warning
email notifying about it. If zero, no warning will be produced.

.. setting:: NEWSPIDER_MODULE

NEWSPIDER_MODULE
----------------

Default: ``''``

Module where to create new spiders using the :command:`genspider` command.

Example::

    NEWSPIDER_MODULE = 'mybot.spiders_dev'

.. setting:: RANDOMIZE_DOWNLOAD_DELAY

RANDOMIZE_DOWNLOAD_DELAY
------------------------

Default: ``True``

If enabled, Scrapy will wait a random amount of time (between 0.5 and 1.5
* :setting:`DOWNLOAD_DELAY`) while fetching requests from the same
spider.

This randomization decreases the chance of the crawler being detected (and
subsequently blocked) by sites which analyze requests looking for statistically
significant similarities in the time between their requests.

The randomization policy is the same one used by the `wget`_ ``--random-wait``
option.

If :setting:`DOWNLOAD_DELAY` is zero (default) this option has no effect.
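
As an illustrative sketch (not Scrapy's actual implementation), the effective
delay applied between requests amounts to something like this::

    import random

    DOWNLOAD_DELAY = 0.25  # base delay, in seconds

    def effective_delay():
        # RANDOMIZE_DOWNLOAD_DELAY applies a random multiplier
        # between 0.5 and 1.5 to the base delay
        return random.uniform(0.5, 1.5) * DOWNLOAD_DELAY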

.. _wget: http://www.gnu.org/software/wget/manual/wget.html

.. setting:: REDIRECT_MAX_TIMES

REDIRECT_MAX_TIMES
------------------

Default: ``20``

Defines the maximum number of times a request can be redirected. After this
maximum, the request's response is returned as is. This is the same default
value used by Firefox.

.. setting:: REDIRECT_MAX_METAREFRESH_DELAY

REDIRECT_MAX_METAREFRESH_DELAY
------------------------------

Default: ``100``

Some sites use meta-refresh for redirecting to a session-expired page, so we
restrict automatic redirection to a maximum delay (in seconds).

.. setting:: REDIRECT_PRIORITY_ADJUST

REDIRECT_PRIORITY_ADJUST
------------------------

Default: ``+2``

Adjusts the redirect request priority relative to the original request.
A negative priority adjustment means higher priority.

.. setting:: REQUEST_HANDLERS

REQUEST_HANDLERS
----------------

Default: ``{}``

A dict containing the request downloader handlers enabled in your project.
See :setting:`REQUEST_HANDLERS_BASE` for an example format.

.. setting:: REQUEST_HANDLERS_BASE

REQUEST_HANDLERS_BASE
---------------------

Default:: 

    {
        'file': 'scrapy.core.downloader.handlers.file.download_file',
        'http': 'scrapy.core.downloader.handlers.http.download_http',
        'https': 'scrapy.core.downloader.handlers.http.download_http',
        's3': 'scrapy.core.downloader.handlers.s3.S3RequestHandler',
    }

A dict containing the request download handlers enabled by default in Scrapy.
You should never modify this setting in your project, modify
:setting:`REQUEST_HANDLERS` instead. 

.. setting:: REQUESTS_QUEUE_SIZE

REQUESTS_QUEUE_SIZE
-------------------

Default: ``0``

Scope: ``scrapy.contrib.spidermiddleware.limit``

If non-zero, it will be used as an upper limit for the number of requests that
can be scheduled per domain.

.. setting:: ROBOTSTXT_OBEY

ROBOTSTXT_OBEY
--------------

Default: ``False``

Scope: ``scrapy.contrib.downloadermiddleware.robotstxt``

If enabled, Scrapy will respect robots.txt policies. For more information see
:ref:`topics-dlmw-robots`.

.. setting:: SCHEDULER

SCHEDULER
---------

Default: ``'scrapy.core.scheduler.Scheduler'``

The scheduler to use for crawling.

.. setting:: SCHEDULER_ORDER 

SCHEDULER_ORDER
---------------

Default: ``'DFO'``

Scope: ``scrapy.core.scheduler``

The order to use for the crawling scheduler. Available orders are: 

* ``'BFO'``:  `Breadth-first order`_ - typically consumes more memory but
  reaches the most relevant pages earlier.

* ``'DFO'``:  `Depth-first order`_ - typically consumes less memory than BFO,
  but takes longer to reach the most relevant pages.

.. _Breadth-first order: http://en.wikipedia.org/wiki/Breadth-first_search
.. _Depth-first order: http://en.wikipedia.org/wiki/Depth-first_search

.. setting:: SCHEDULER_MIDDLEWARES

SCHEDULER_MIDDLEWARES
---------------------

Default: ``{}``

A dict containing the scheduler middlewares enabled in your project, and their
orders. 

.. setting:: SCHEDULER_MIDDLEWARES_BASE

SCHEDULER_MIDDLEWARES_BASE
--------------------------

Default:: 

    {
        'scrapy.contrib.schedulermiddleware.duplicatesfilter.DuplicatesFilterMiddleware': 500,
    }

A dict containing the scheduler middlewares enabled by default in Scrapy. You
should never modify this setting in your project, modify
:setting:`SCHEDULER_MIDDLEWARES` instead. 

.. setting:: SPIDER_MIDDLEWARES

SPIDER_MIDDLEWARES
------------------

Default: ``{}``

A dict containing the spider middlewares enabled in your project, and their
orders. For more info see :ref:`topics-spider-middleware-setting`.

.. setting:: SPIDER_MIDDLEWARES_BASE

SPIDER_MIDDLEWARES_BASE
-----------------------

Default::

    {
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.itemsampler.ItemSamplerMiddleware': 100,
        'scrapy.contrib.spidermiddleware.requestlimit.RequestLimitMiddleware': 200,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
    }

A dict containing the spider middlewares enabled by default in Scrapy. You
should never modify this setting in your project, modify
:setting:`SPIDER_MIDDLEWARES` instead. For more info see
:ref:`topics-spider-middleware-setting`.

.. setting:: SPIDER_MODULES

SPIDER_MODULES
--------------

Default: ``[]``

A list of modules where Scrapy will look for spiders.

Example::

    SPIDER_MODULES = ['mybot.spiders_prod', 'mybot.spiders_dev']

.. setting:: STATS_CLASS

STATS_CLASS
-----------

Default: ``'scrapy.statscol.MemoryStatsCollector'``

The class to use for collecting stats (must implement the Stats Collector API,
or subclass the ``StatsCollector`` class).

.. setting:: STATS_DUMP

STATS_DUMP
----------

Default: ``False``

Dump (to log) domain-specific stats collected when a domain is closed, and all
global stats when the Scrapy process finishes (ie. when the engine is
shutdown).

.. setting:: STATS_ENABLED

STATS_ENABLED
-------------

Default: ``True``

Enable stats collection.

.. setting:: STATSMAILER_RCPTS

STATSMAILER_RCPTS
-----------------

Default: ``[]`` (empty list)

Send Scrapy stats after each domain finishes scraping. See
:class:`~scrapy.contrib.statsmailer.StatsMailer` for more info.

.. setting:: TELNETCONSOLE_ENABLED

TELNETCONSOLE_ENABLED
---------------------

Default: ``True``

A boolean which specifies if the :ref:`telnet console <topics-telnetconsole>`
will be enabled (provided its extension is also enabled).

.. setting:: TELNETCONSOLE_PORT

TELNETCONSOLE_PORT
------------------

Default: ``[6023, 6073]``

The port range to use for the telnet console. If set to ``None`` or ``0``, a
dynamically assigned port is used. For more info see
:ref:`topics-telnetconsole`.

.. setting:: TEMPLATES_DIR

TEMPLATES_DIR
-------------

Default: ``templates`` dir inside scrapy module

The directory where to look for templates when creating new projects with the
:command:`startproject` command.

.. setting:: URLLENGTH_LIMIT

URLLENGTH_LIMIT
---------------

Default: ``2083``

Scope: ``contrib.spidermiddleware.urllength``

The maximum URL length to allow for crawled URLs. For more information about
the default value for this setting see: http://www.boutell.com/newfaq/misc/urllength.html

.. setting:: USER_AGENT

USER_AGENT
----------

Default: ``"%s/%s" % (BOT_NAME, BOT_VERSION)``

The default User-Agent to use when crawling, unless overridden.

.. _Amazon web services: http://aws.amazon.com/