.. _reference-guide:

===============
Reference Guide
===============

This guide presents information on the Avocado basic design and its internals.

Job, test and identifiers
=========================

.. _job-id:

Job ID
------

The Job ID is a random SHA1 string that uniquely identifies a given job.

The full form of the SHA1 string is used in most references to a job::

  $ avocado run sleeptest.py
  JOB ID     : 49ec339a6cca73397be21866453985f88713ac34
  ...

But a shorter version is also used in some places, such as in the job
results location::

  JOB LOG    : $HOME/avocado/job-results/job-2015-06-10T10.44-49ec339/job.log


Test References
---------------

A Test Reference is a string that can be resolved into
(interpreted as) one or more tests by the Avocado Test Resolver.
A given resolver plugin is free to interpret a test reference as it
sees fit; the reference is completely abstract to the other components
of Avocado.

.. note:: Mapping the Test References to tests can be affected
   by command-line switches like ``--external-runner``, which
   completely changes the meaning of the given strings.


Test Name
---------

A test name is an arbitrarily long string that unambiguously
points to the source of a single test. In other words the Avocado
Test Resolver, as configured for a particular job, should return
one and only one test as the interpretation of this name.

This name can be as specific as necessary to make it unique.
Therefore it can contain an arbitrary number of variables,
prefixes, suffixes, tags, etc.  It all depends on user
preferences, what is supported by Avocado via its Test Resolvers and
the context of the job.

The output of the Test Resolver when resolving Test References
should always be a list of unambiguous Test Names (for that
particular job).

Notice that although the Test Name has to be unique, one test can
be run more than once inside a job.

By definition, a Test Name is a Test Reference, but the
reciprocal is not necessarily true, as the latter can represent
more than one test.

Examples of Test Names::

   '/bin/true'
   'passtest.py:Passtest.test'
   'file:///tmp/passtest.py:Passtest.test'
   'multiple_tests.py:MultipleTests.test_hello'
   'type_specific.io-github-autotest-qemu.systemtap_tracing.qemu.qemu_free'


Variant IDs
-----------

The varianter component creates different sets of variables
(known as "variants"), to allow tests to be run individually in
each of them.

A Variant ID is an arbitrary and abstract string created by the
varianter plugin to identify each variant. It should be unique per
variant inside a set. In other words, the varianter plugin generates
a set of variants, identified by unique IDs.

A simpler implementation of the varianter uses serial integers
as Variant IDs. A more sophisticated implementation could generate
Variant IDs with more semantic value, potentially representing their
contents.
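
For illustration only, here is a sketch of how a hypothetical
varianter could derive such a semantic Variant ID from a short variant
name plus a hash of its contents (this is an assumption made for the
sake of the example, not the actual implementation of any Avocado
varianter plugin)::

  import hashlib

  def make_variant_id(name, params):
      # Sketch: short variant name plus the first 4 hex digits of a
      # SHA1 over the variant's parameters
      digest = hashlib.sha1(repr(sorted(params.items())).encode())
      return "%s-%s" % (name, digest.hexdigest()[:4])

  # make_variant_id("maximum_debug", {"debug": True}) could yield an
  # ID such as 'maximum_debug-df2f'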


Test ID
--------

A test ID is a string that uniquely identifies a test in the
context of a job. When considering a single job, there are no two
tests with the same ID.

A test ID should encapsulate the Test Name and the Variant ID, to
allow direct identification of a test. In other words, by looking
at the test ID it should be possible to identify:

  - What's the test name
  - What's the variant used to run this test (if any)

Test IDs don't necessarily keep their uniqueness properties when
considered outside of a particular job, but two identical jobs run in
the exact same environment should generate identical sets of Test IDs.

Syntax::

   <unique-id>-<test-name>[;<variant-id>]

Examples of Test IDs::

   '1-/bin/true'
   '2-passtest.py:Passtest.test;quiet-'
   '3-file:///tmp/passtest.py:Passtest.test'
   '4-multiple_tests.py:MultipleTests.test_hello;maximum_debug-df2f'
   '5-type_specific.io-github-autotest-qemu.systemtap_tracing.qemu.qemu_free'
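
Since the syntax is regular, a Test ID can be split back into its
parts. The helper below is purely illustrative and not part of the
Avocado API::

  def split_test_id(test_id):
      # '<unique-id>-<test-name>[;<variant-id>]'
      unique_id, rest = test_id.split('-', 1)
      test_name, _, variant_id = rest.partition(';')
      return unique_id, test_name, variant_id or None

  split_test_id('4-multiple_tests.py:MultipleTests.test_hello;maximum_debug-df2f')
  # -> ('4', 'multiple_tests.py:MultipleTests.test_hello', 'maximum_debug-df2f')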

.. _test-types:

Test Types
==========

In its simplest configuration, Avocado can run three different types of
tests [#f1]_. You can mix and match those in a single job.

Instrumented
------------

These are tests written in Python or Bash that use the Avocado test API
and helpers.

To be more precise, the Python file must contain a class derived from :class:`avocado.Test`.
This means that an executable written in Python is not always an instrumented test, but may work
as a simple test.
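
As a minimal sketch (loosely based on the ``sleeptest`` example
shipped with Avocado), an instrumented test can be as simple as::

  import time

  from avocado import Test

  class SleepTest(Test):

      def test(self):
          # Finishing without failures or uncaught exceptions means PASS
          time.sleep(1)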

Instrumented tests allow the writer finer control over the process,
including logging, test result status and other more sophisticated
test APIs.

Test statuses ``PASS``, ``WARN``, ``START`` and ``SKIP`` are considered
successful, while ``ABORT``, ``ERROR``, ``FAIL``, ``ALERT``,
``RUNNING``, ``NOSTATUS`` and ``INTERRUPTED`` are considered failures.

Python unittest
---------------

The discovery of classical Python unittests is also supported. Note
that, unlike the standard unittest loader, Avocado uses static
analysis to find individual tests, so dynamically created test cases
are not recognized. Also note that the unittest ``SKIP`` result is
reported as ``CANCEL`` in Avocado, as the meaning of ``SKIP`` in
unittest differs from our definition. Apart from that, there should be
no surprises when running unittests via Avocado.
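
For example, a plain unittest like the sketch below can be handed
directly to ``avocado run``; just keep in mind that test methods
created at runtime (e.g. via ``setattr``) would not be found by the
static analysis::

  import unittest

  class BasicTest(unittest.TestCase):

      def test_truth(self):
          self.assertTrue(True)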

Simple
------

Any executable on your system. The criterion for PASS/FAIL is the exit
code of the executable: if it returns 0, the test PASSes; if it returns
anything else, it FAILs.
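
Any language works, as long as the file is executable. As a sketch,
the Python executable below is a valid simple test::

  #!/usr/bin/env python
  # A simple test: the exit code alone decides the result
  import sys

  sys.exit(0)  # 0 means PASS; any other value means FAIL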

Test Statuses
=============

Avocado sticks to the following definitions of test statuses:

 * ``PASS``: The test passed, which means all conditions being tested have passed.
 * ``FAIL``: The test failed, which means at least one condition being tested has
   failed. Ideally, it should mean a problem in the software being tested has been found.
 * ``ERROR``: An error happened during the test execution. This can happen, for example,
   if there's a bug in the test runner, in its libraries or if a resource breaks unexpectedly.
   Uncaught exceptions in the test code will also result in this status.
 * ``SKIP``: The test runner decided a requested test should not be run. This
   can happen, for example, due to missing requirements in the test environment
   or when there's a job timeout.

.. _libraries-apis:

Libraries and APIs
==================

The Avocado libraries and its APIs are a big part of what Avocado is.

To avoid issues, you should understand which parts of the Avocado
libraries are intended for test writers, and their respective API
stability promises.

Test APIs
---------

At the most basic level there's the Test APIs which you should use when writing
tests in Python and planning to make use of any other utility library.

The Test APIs can be found in the :mod:`avocado` main module, and its most important
member is the :class:`avocado.Test` class. By conforming to the :class:`avocado.Test`
API, that is, by inheriting from it, you can use the full set of utility libraries.

The Test APIs are guaranteed to be stable across a single major version of Avocado.
That means that a test written for a given version of Avocado should not break on later
minor versions because of Test API changes.

Utility Libraries
-----------------

There are also a large number of utility libraries that can be found under the
:mod:`avocado.utils` namespace. These are very general in nature and can help you
speed up your test development.
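
For instance, :mod:`avocado.utils.process` can run external commands
from within a test. The snippet below is a brief sketch of that kind
of use::

  from avocado import Test
  from avocado.utils import process

  class UnameTest(Test):

      def test(self):
          # process.run() returns an object describing the command result
          result = process.run('uname -a')
          self.log.info('exit status: %s', result.exit_status)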

The utility libraries may receive incompatible changes across minor
versions, but these will be done in a staged fashion. If a given change
to a utility library can cause test breakage, it will first be
documented and/or deprecated, and only actually changed in a subsequent
minor version.

What this means is that upon updating to later minor versions of Avocado, you
should look at the Avocado Release Notes for changes that may impact your tests.

Core (Application) Libraries
----------------------------

Finally, everything under :mod:`avocado.core` is part of the application's
infrastructure and should not be used by tests.

Extensions and Plugins can use the core libraries, but API stability is not
guaranteed at any level.

Test Resolution
===============

When you use the Avocado runner, you'll frequently provide paths to
files that will be inspected and acted upon depending on their
contents. The diagram below shows how Avocado analyzes a file and
decides what to do with it:

.. figure:: diagram.png

It's important to note that the inspection mechanism is safe (that is,
Python classes and files are not actually loaded and executed during
the discovery and inspection stage). Because Avocado doesn't actually
load the code and classes, the introspection is simple and will *not*
catch things like buggy test modules, missing imports and miscellaneous
bugs in the code you want to list or run. We recommend only running
tests from sources you trust, and using static checking and reviews in
your test development process.

Due to the simple test inspection mechanism, Avocado will not recognize
test classes that inherit from a class derived from
:class:`avocado.Test`. Please refer to the :doc:`WritingTests`
documentation on how to use the tags functionality to mark derived
classes as Avocado test classes.

Results Specification
=====================

On a machine that executed tests, job results are available under
``[logdir]/job-[timestamp]-[short job ID]``, where ``logdir`` is the
configured Avocado logs directory (see the data dir plugin), and the
directory name includes a timestamp, such as
``job-2014-08-12T15.44-565e8de``. A typical results directory structure
can be seen below::

    $HOME/avocado/job-results/job-2014-08-13T00.45-4a92bc0/
    ├── id
    ├── jobdata
    │   ├── args
    │   ├── cmdline
    │   ├── config
    │   ├── multiplex
    │   ├── pwd
    │   └── test_references
    ├── job.log
    ├── results.json
    ├── results.xml
    ├── sysinfo
    │   ├── post
    │   │   ├── brctl_show
    │   │   ├── cmdline
    │   │   ├── cpuinfo
    │   │   ├── current_clocksource
    │   │   ├── df_-mP
    │   │   ├── dmesg_-c
    │   │   ├── dmidecode
    │   │   ├── fdisk_-l
    │   │   ├── gcc_--version
    │   │   ├── hostname
    │   │   ├── ifconfig_-a
    │   │   ├── interrupts
    │   │   ├── ip_link
    │   │   ├── ld_--version
    │   │   ├── lscpu
    │   │   ├── lspci_-vvnn
    │   │   ├── meminfo
    │   │   ├── modules
    │   │   ├── mount
    │   │   ├── mounts
    │   │   ├── numactl_--hardware_show
    │   │   ├── partitions
    │   │   ├── scaling_governor
    │   │   ├── uname_-a
    │   │   ├── uptime
    │   │   └── version
    │   ├── pre
    │   │   ├── brctl_show
    │   │   ├── cmdline
    │   │   ├── cpuinfo
    │   │   ├── current_clocksource
    │   │   ├── df_-mP
    │   │   ├── dmesg_-c
    │   │   ├── dmidecode
    │   │   ├── fdisk_-l
    │   │   ├── gcc_--version
    │   │   ├── hostname
    │   │   ├── ifconfig_-a
    │   │   ├── interrupts
    │   │   ├── ip_link
    │   │   ├── ld_--version
    │   │   ├── lscpu
    │   │   ├── lspci_-vvnn
    │   │   ├── meminfo
    │   │   ├── modules
    │   │   ├── mount
    │   │   ├── mounts
    │   │   ├── numactl_--hardware_show
    │   │   ├── partitions
    │   │   ├── scaling_governor
    │   │   ├── uname_-a
    │   │   ├── uptime
    │   │   └── version
    │   └── profile
    └── test-results
        └── tests
            ├── sleeptest.py.1
            │   ├── data
            │   ├── debug.log
            │   └── sysinfo
            │       ├── post
            │       └── pre
            ├── sleeptest.py.2
            │   ├── data
            │   ├── debug.log
            │   └── sysinfo
            │       ├── post
            │       └── pre
            └── sleeptest.py.3
                ├── data
                ├── debug.log
                └── sysinfo
                    ├── post
                    └── pre
    
    22 directories, 65 files


From what you can see, the results dir has:

1) A human readable ``id`` in the top level, with the job SHA1.
2) A human readable ``job.log`` in the top level, with human readable logs of
   the job
3) Subdirectory ``jobdata``, that contains machine readable data about the job.
4) A machine readable ``results.xml`` and ``results.json`` in the top level,
   with a summary of the job information in xUnit/json format.
5) A top level ``sysinfo`` dir, with sub directories ``pre``, ``post`` and
   ``profile``, that store sysinfo files pre/post/during job, respectively.
6) Subdirectory ``test-results``, that contains a number of subdirectories
   (filesystem-friendly test ids). Those test ids represent instances of test
   execution results.

Test execution instances specification
--------------------------------------

The instances should have:

1) A top level human readable ``debug.log``, with the test debug information
2) A ``sysinfo`` subdir, with sub directories ``pre``, ``post`` and
   ``profile`` that store sysinfo files pre test, post test and
   profiling info while the test was running, respectively.
3) A ``data`` subdir, where the test can output a number of files if necessary.


Test execution environment
--------------------------

Each test is executed in a separate process.  Due to how the
underlying operating system works, a lot of the attributes of the
parent process (the Avocado test **runner**) are passed down to the
test process.

On GNU/Linux systems, a child process should be *"an exact duplicate
of the parent process, except"* some items that are documented in
the ``fork(2)`` man page.

Besides those operating system exceptions, the Avocado test runner
changes the test process in the following ways:

1) The standard input (``STDIN``) is set to a :data:`null device
   <os.devnull>`.  This is true both for :data:`sys.stdin` and for
   file descriptor ``0``.  Both will point to the same open null
   device file.

2) The standard output (``STDOUT``), as in :data:`sys.stdout`, is
   redirected so that it doesn't interfere with the test runner's own
   output.  All content written to the test's :data:`sys.stdout` will
   be available in the logs under the ``output`` prefix.

   .. warning:: The file descriptor ``1`` (AKA ``/dev/stdout``, AKA
                ``/proc/self/fd/1``, etc) is **not** currently
                redirected for INSTRUMENTED tests.  Any attempt to
                write directly to the file descriptor will interfere
                with the runner's own output stream.  This behavior
                will be addressed in a future version.

3) The standard error (``STDERR``), as in :data:`sys.stderr`, is
   redirected so that it doesn't interfere with the test runner's own
   errors.  All content written to the test's :data:`sys.stderr` will
   be available in the logs under the ``output`` prefix.

   .. warning:: The file descriptor ``2`` (AKA ``/dev/stderr``, AKA
                ``/proc/self/fd/2``, etc) is **not** currently
                redirected for INSTRUMENTED tests.  Any attempt to
                write directly to the file descriptor will interfere
                with the runner's own error stream.  This behavior
                will be addressed in a future version.

4) A custom handler for signal ``SIGTERM`` which will simply raise an
   exception (with the appropriate message) to be handled by the
   Avocado test runner, stating the fact that the test was interrupted
   by such a signal.

   .. tip:: By following the backtrace that is given in the test log
            (look for ``RuntimeError: Test interrupted by SIGTERM``)
            a user can quickly grasp at which point the test was
            interrupted.

   .. note:: If the test handles ``SIGTERM`` differently and doesn't
             finish the test process quickly enough, it will then
             receive a ``SIGKILL``, which is supposed to definitely
             end the test process.

5) A number of :ref:`environment variables
   <environment-variables-for-tests>` that are set by Avocado, all
   prefixed with ``AVOCADO_``.

If you want to see for yourself what is described here, you may want
to run the example test ``test_env.py`` and examine its log messages.
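
A sketch in the same spirit as ``test_env.py``: an instrumented test
that logs the Avocado-provided variables::

  import os

  from avocado import Test

  class EnvDump(Test):

      def test(self):
          # Log every environment variable set by Avocado
          for key in sorted(os.environ):
              if key.startswith('AVOCADO_'):
                  self.log.info('%s=%s', key, os.environ[key])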

Pre and post plugins
====================

Avocado provides interfaces with which custom plugins can register to
be called at various times.  For instance, it's possible to trigger
custom actions before and after the execution of a :class:`job
<avocado.core.job.Job>`, or before and after the execution of the
tests from a job :data:`test suite <avocado.core.job.Job.test_suite>`.

Let's discuss each interface briefly.

Before and after jobs
---------------------

Avocado supports plug-ins which are (guaranteed to be) executed before
the first test and after all tests have finished. The interfaces are
:class:`avocado.core.plugin_interfaces.JobPre` and
:class:`avocado.core.plugin_interfaces.JobPost`, respectively.

The :meth:`pre <avocado.core.plugin_interfaces.JobPre.pre>` method of
each installed plugin of type ``job.prepost`` will be called by the
``run`` command, that is, anytime an ``avocado run
<valid_test_reference>`` command is executed.

.. note:: Conditions such as the :exc:`SystemExit` or
          :exc:`KeyboardInterrupt` exceptions being raised can
          interrupt the execution of those plugins.

Then, immediately after that, the job's :meth:`run
<avocado.core.job.Job.run>` method is called, which attempts to run
all job phases, from test suite creation to test execution.

Unless a :exc:`SystemExit` or :exc:`KeyboardInterrupt` is raised, or
some other major external event occurs (like a system condition that
Avocado can not control), it will attempt to run the :meth:`post
<avocado.core.plugin_interfaces.JobPre.post>` methods of all the
installed plugins of type ``job.prepost``.  This even includes job
executions where the :meth:`pre
<avocado.core.plugin_interfaces.JobPre.pre>` plugin executions were
interrupted.
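
A minimal sketch of such a plugin is shown below; the plugin name,
messages and attributes used are illustrative, and registration via
setuptools entry points is omitted::

  from avocado.core.plugin_interfaces import JobPre, JobPost

  class HelloJobHooks(JobPre, JobPost):

      name = 'hello_job_hooks'
      description = 'Logs a message before and after each job'

      def pre(self, job):
          # Called before the first test of the job is run
          job.log.info('About to run job %s', job.unique_id)

      def post(self, job):
          # Called after all tests of the job have finished
          job.log.info('Finished job %s', job.unique_id)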

Before and after the execution of tests
---------------------------------------

If you followed the previous section, you noticed that the job's
:meth:`run <avocado.core.job.Job.run>` method was said to run all the
test phases.  Here's a sequence of the job phases:

1) :meth:`Creation of the test suite <avocado.core.job.Job.create_test_suite>`
2) :meth:`Pre tests hook <avocado.core.job.Job.pre_tests>`
3) :meth:`Tests execution <avocado.core.job.Job.run_tests>`
4) :meth:`Post tests hook <avocado.core.job.Job.post_tests>`

Plugin writers can have their own code called by Avocado at phase
number 2 (``pre_tests``) by writing a method according to the
:meth:`avocado.core.plugin_interfaces.JobPreTests` interface.
Accordingly, plugin writers can have their own code called at phase
number 4 (``post_tests``) by writing a method according to the
:meth:`avocado.core.plugin_interfaces.JobPostTests` interface.
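
Again as an illustrative sketch, with arbitrary names::

  from avocado.core.plugin_interfaces import JobPreTests, JobPostTests

  class TestsHooks(JobPreTests, JobPostTests):

      name = 'tests_hooks'
      description = 'Logs a message around the tests execution phase'

      def pre_tests(self, job):
          job.log.info('Tests are about to run')

      def post_tests(self, job):
          job.log.info('Tests have finished')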

Note that there's no guarantee that all of the first 3 job phases will
be executed, so a failure in phase 1 (``create_test_suite``) may
prevent phase 2 (``pre_tests``) and/or phase 3 (``run_tests``) from
being executed.

Now, no matter what happens in the *attempted execution* of job phases
1 through 3, job phase 4 (``post_tests``) will be *attempted to be
executed*.  To make it extra clear, as long as the Avocado test runner
is still in execution (that is, has not been terminated by a system
condition that it can not control), it will execute plugin's
``post_tests`` methods.

As a concrete example, a plugin's ``post_tests`` method would not be
executed after a ``SIGKILL`` is sent to the Avocado test runner on
phases 1 through 3, because the Avocado test runner would be promptly
interrupted.  But a ``SIGTERM`` or ``KeyboardInterrupt`` sent to the
Avocado test runner during phases 1 through 3 would still cause the
test runner to run ``post_tests`` (phase 4).  Now, if during phase 4 a
``KeyboardInterrupt`` or ``SystemExit`` is received, the remaining
plugins' ``post_tests`` methods will **NOT** be executed.

Jobscripts plugin
-----------------

Avocado ships with a plugin (installed by default) that allows running
scripts before and after the actual execution of Jobs.  A user can be
sure that, when a given "pre" script is run, no test in that job has
been run, and when the "post" scripts are run, all the tests in a
given job have already finished running.

Configuration
^^^^^^^^^^^^^

By default, the script directory location is::

  /etc/avocado/scripts/job

Inside that directory, there is a directory for pre-job scripts::

  /etc/avocado/scripts/job/pre.d

And for post-job scripts::

  /etc/avocado/scripts/job/post.d

All the configuration for the Pre/Post Job Scripts is placed under
the ``avocado.plugins.jobscripts`` config section.  To change the
location of the pre-job scripts, your configuration should look
something like this::

  [plugins.jobscripts]
  pre = /my/custom/directory/for/pre/job/scripts/

Accordingly, to change the location for the post-job scripts, your
configuration should look something like this::

  [plugins.jobscripts]
  post = /my/custom/directory/for/post/scripts/

A couple of other configuration options are available under the same
section:

* ``warn_non_existing_dir``: gives warnings if the configured (or
  default) directory set for either pre or post scripts does not exist
* ``warn_non_zero_status``: gives warnings if a given script (either
  pre or post) exits with a non-zero status

Script Execution Environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

All scripts are run in a separate process with some environment
variables set.  These can be used in your scripts in any way you wish:

* ``AVOCADO_JOB_UNIQUE_ID``: the unique `job-id`_.
* ``AVOCADO_JOB_STATUS``: the current status of the job.
* ``AVOCADO_JOB_LOGDIR``: the filesystem location that holds the logs
  and various other files for a given job run.

.. note:: Even though these variables should all be set, it's good
   practice for scripts to check that they're set before using their
   values.  This may prevent unintended actions such as writing to the
   current working directory instead of to ``AVOCADO_JOB_LOGDIR`` when
   the latter is not set.
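
A sketch of a post-job script following that advice (the archive
destination is an arbitrary example)::

  #!/usr/bin/env python
  # Hypothetical post-job script: archives the job results directory
  import os
  import shutil
  import sys

  logdir = os.environ.get('AVOCADO_JOB_LOGDIR')
  if not logdir:
      # Refuse to act on an unset variable, as recommended above
      sys.exit('AVOCADO_JOB_LOGDIR is not set')

  shutil.make_archive(os.path.join('/tmp', os.path.basename(logdir)),
                      'gztar', logdir)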

Finally, any failures in the Pre/Post scripts will not alter the
status of the corresponding jobs.

Job Cleanup
===========

It's possible to register a callback function that will be called when
all the tests have finished running. This effectively allows for a
test job to clean some state it may have left behind.

At the moment, this feature is not intended to be used by test writers,
but rather as a feature for Avocado extensions to make use of.

To register a callback function, your code should put a message in a
very specific format in the "runner queue". The Avocado test runner
code will understand that this message contains a (serialized) function
that will be called once all tests finish running.

Example::

  import os

  from avocado import Test

  def my_cleanup(path_to_file):
      if os.path.exists(path_to_file):
          os.unlink(path_to_file)

  class MyCustomTest(Test):

      def test(self):
          cleanup_file = '/tmp/my-custom-state'
          self.runner_queue.put({"func_at_exit": my_cleanup,
                                 "args": (cleanup_file,),
                                 "once": True})

This results in the ``my_cleanup`` function being called with
positional argument ``cleanup_file``.

Because ``once`` was set to ``True``, only one unique combination of
function, positional arguments and keyword arguments will be
registered, no matter how many times registration is attempted. For
more information check
:meth:`avocado.utils.data_structures.CallbackRegister.register`.
:meth:`avocado.utils.data_structures.CallbackRegister.register`.

.. _docstring-directive-rules:

Docstring Directives Rules
==========================

Avocado INSTRUMENTED tests, those written in Python and using the
:class:`avocado.Test` API, can make use of special directives
specified as docstrings.

To be considered valid, the docstring must match this pattern:
:data:`avocado.core.safeloader.DOCSTRING_DIRECTIVE_RE_RAW`.

An Avocado docstring directive has two parts:

 1) The marker, which is the literal string ``:avocado:``.

 2) The content, a string that follows the marker, separated by at
    least one white space or tab.

The following is a list of rules that make a docstring directive a
valid one:

 * It should start with ``:avocado:``, which is the docstring
   directive "marker"

 * At least one whitespace or tab must follow the marker and precede
   the docstring directive "content"

 * The "content", which follows the marker and the space, must begin
   with an alphanumeric character, that is, characters within "a-z",
   "A-Z" or "0-9".

 * After at least one alphanumeric character, the content may contain
   the following special symbols too: ``_``, ``,``, ``=`` and ``:``.

 * An end of string (or end of line) must immediately follow the
   content.
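
Putting the rules together, a docstring with a valid directive might
look like the sketch below (the tag names are arbitrary examples)::

  from avocado import Test

  class MyTest(Test):
      """
      Example test carrying a valid docstring directive.

      :avocado: tags=net,fast
      """

      def test(self):
          pass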

.. _signal_handlers:

Signal Handlers
===============

Avocado's normal operation consists of running code written by
users/test-writers. This means the test code can carry its own handlers
for different signals, or even ignore them. Still, as the code is being
executed by Avocado, we have to make sure we will finish all the
subprocesses we create before ending our execution.

Signals sent to the Avocado main process will be handled as follows:

- SIGTSTP/Ctrl+Z: On SIGTSTP, Avocado will pause the execution of the
  subprocesses, while the main process will still be running,
  respecting the timeout timer and waiting for the subprocesses to
  finish. A second SIGTSTP will make the subprocesses resume their
  execution.
- SIGINT/Ctrl+C: This signal will be forwarded to the test process and
  Avocado will wait until it's finished. If the test process does not
  finish after receiving a SIGINT, the user can send a second SIGINT
  (after the 2-second ignore period). The second SIGINT will make
  Avocado send a SIGKILL to the whole subprocess tree and then complete
  the main process execution.
- SIGTERM: This signal will make Avocado terminate immediately. A
  SIGKILL will be sent to the whole subprocess tree and the main
  process will exit without completing the execution. Notice that this
  is a best-effort attempt, meaning that in the case of a fork bomb,
  newly created processes might still be left behind.
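
For reference, the general pattern of turning a signal into an
exception, as Avocado does for ``SIGTERM`` inside the test process,
can be sketched as follows; this is an illustration of the mechanism
only, not Avocado's actual implementation::

  import signal

  def sigterm_handler(signum, frame):
      # Raising here lets the surrounding code report the interruption
      raise RuntimeError('Test interrupted by SIGTERM')

  signal.signal(signal.SIGTERM, sigterm_handler)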

.. [#f1] Avocado plugins can introduce additional test types.