.. _writing-tests:

=====================
Writing Avocado Tests
=====================

We are going to write an Avocado test in Python, inheriting from
:class:`avocado.Test`. This makes it a so-called instrumented test.

Basic example
=============

Let's re-create an old-time favorite, ``sleeptest`` [#f1]_. It is so simple that it
does nothing besides sleeping for a while::

        import time

        from avocado import Test

        class SleepTest(Test):

            def test(self):
                sleep_length = self.params.get('sleep_length', default=1)
                self.log.debug("Sleeping for %.2f seconds", sleep_length)
                time.sleep(sleep_length)

This is about the simplest test you can write for Avocado, while still
leveraging its API power.

What is an Avocado Test
-----------------------

As can be seen in the example above, an Avocado test is a method that
starts with ``test`` in a class that inherits from :class:`avocado.Test`.

Multiple tests and naming conventions
-------------------------------------

You can have multiple tests in a single class.

To do so, just give the methods names that start with ``test``, say
``test_foo``, ``test_bar`` and so on. We recommend you follow this naming
style, as defined in the `PEP8 Function Names`_ section.

For the class name, you can pick any name you like, but we also recommend
that it follows the CamelCase convention, also known as CapWords, defined
in the PEP 8 document under `Class Names`_.
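
For instance, a minimal sketch of a class holding two test methods, following
the conventions above (the class and method names here are just hypothetical
placeholders)::

    from avocado import Test


    class MultipleTests(Test):

        def test_foo(self):
            self.log.info("this is the first test method")

        def test_bar(self):
            self.log.info("this is the second test method")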

Convenience Attributes
----------------------

Note that the test class provides you with a number of convenience attributes:

* A ready to use log mechanism for your test, that can be accessed by means
  of ``self.log``. It lets you log debug, info, error and warning messages.
* A parameter passing system (and fetching system) that can be accessed by
  means of ``self.params``. This is hooked to the Multiplexer, about which
  you can find more information at :doc:`MultiplexConfig`.
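
A short sketch of both attributes in use (the parameter name here is just an
illustration)::

    from avocado import Test


    class ConvenienceAttributes(Test):

        def test(self):
            # self.log accepts the usual logging levels
            self.log.debug("detailed debugging message")
            self.log.info("general progress information")
            self.log.warning("something unexpected, but not fatal")
            # self.params fetches values from the multiplexer, with a default
            retries = self.params.get('retries', default=3)
            self.log.info("will retry %s times", retries)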

Saving test generated (custom) data
===================================

Each test instance provides a so-called ``whiteboard``. It can be accessed
through ``self.whiteboard``. This whiteboard is simply a string that will be
automatically saved to the test results (as long as the output format supports it).
If you choose to save binary data to the whiteboard, it's your responsibility to
encode it first (base64 is the obvious choice).
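
If you do want to store binary data there, a minimal sketch of the encoding
step could look like this (the payload below is just an illustration)::

    import base64

    from avocado import Test


    class BinaryWhiteboard(Test):

        def test(self):
            payload = b'\x00\x01binary-result'  # stand-in for real binary data
            # encode first, so the whiteboard remains a plain string
            self.whiteboard = base64.b64encode(payload).decode('ascii')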

Building on the previously demonstrated ``sleeptest``, suppose that you want to save the
sleep length to be used by some other script or data analysis tool::

        def test(self):
            sleep_length = self.params.get('sleep_length', default=1)
            self.log.debug("Sleeping for %.2f seconds", sleep_length)
            time.sleep(sleep_length)
            self.whiteboard = "%.2f" % sleep_length

The whiteboard can and should be exposed by files generated by the available test result
plugins. The ``results.json`` file already includes the whiteboard for each test.
Additionally, we save a raw copy of the whiteboard contents in a file named
``whiteboard``, at the same level as the ``results.json`` file, for your convenience
(maybe you want to use the result of a benchmark directly with your custom made scripts
to analyze that particular benchmark result).

Accessing test parameters
=========================

Each test has a set of parameters that can be accessed through
``self.params.get($name, $path=None, $default=None)``.
Avocado finds and populates ``self.params`` with all parameters you define in
a Multiplex Config file (see :doc:`MultiplexConfig`). As an example, consider
the following multiplex file for sleeptest::

    sleeptest:
        type: "builtin"
        length: !mux
            short:
                sleep_length: 0.5
            medium:
                sleep_length: 1
            long:
                sleep_length: 5

When running this example with ``avocado run $test --multiplex $file.yaml``,
three variants are executed and the content is injected into the ``/run`` namespace
(see :doc:`MultiplexConfig` for details). Every variant contains the variables
"type" and "sleep_length". To obtain the current value, you need the name
("sleep_length") and its path. The path differs for each variant, so you need
to use the most suitable portion of the path; in this example,
`/run/sleeptest/length/*` or perhaps `sleeptest/*` might be enough. It depends
on what your setup looks like.

The default value is optional, but always keep in mind to handle missing
parameters gracefully. Someone might be executing your test with different
params or without any params at all; it should still work fine.

So the complete example of how to access "sleep_length" would be::

    self.params.get("sleep_length", "/*/sleeptest/*", 1)

There is a way to make this even simpler: it's possible to define a resolution
order, so that for simple queries you can simply omit the path::

    self.params.get("sleep_length", None, 1)
    self.params.get("sleep_length", '*', 1)
    self.params.get("sleep_length", default=1)

One should always try to avoid param clashes (multiple matching keys for a given
path with different origins). If that's not possible (e.g. when you use multiple
YAML files) you can modify the default paths with ``--mux-path``. This option
slices the params and iterates through the slices one by one: when there is a
match in the first slice, it is returned without trying the other slices. Note
that relative queries only match within the ``--mux-path`` slices.

There are many ways to use paths to separate clashing params or just to make
it clearer what you query for. Usually in tests the usage of '*' is sufficient
and namespacing is not necessary, but it helps make advanced usage clearer and
easier to follow.

When thinking of paths, always think about your users. It's common for them to
extend the default config with additional variants or combine it with different
ones to generate just the right scenarios they need. They might
simply inject the values elsewhere (e.g. `/run/sleeptest` =>
`/upstream/sleeptest`), or they can merge another clashing file into the
default path, which won't generate a clash, but would return their values
instead. Then you need to make the path more specific (e.g. `'*'` => `sleeptest/*`).

More details on that can be found in :doc:`MultiplexConfig`.

Using a multiplex file
======================

You may use the Avocado runner with a multiplex file to provide params and matrix
generation for sleeptest, like this::

    $ avocado run sleeptest.py --multiplex examples/tests/sleeptest.py.data/sleeptest.yaml
    JOB ID     : d565e8dec576d6040f894841f32a836c751f968f
    JOB LOG    : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/job.log
    TESTS      : 3
     (1/3) sleeptest.py:SleepTest.test;1: PASS (0.50 s)
     (2/3) sleeptest.py:SleepTest.test;2: PASS (1.00 s)
     (3/3) sleeptest.py:SleepTest.test;3: PASS (5.00 s)
    RESULTS    : PASS 3 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
    JOB HTML   : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/html/results.html
    TESTS TIME : 6.50 s

The ``--multiplex`` option accepts either just ``$FILE_LOCATION`` or ``$INJECT_TO:$FILE_LOCATION``.
As explained in :doc:`MultiplexConfig`, without any path the content gets
injected into ``/run`` in order to be in the default relative path location.
``$INJECT_TO`` can be either a relative path, in which case the content is injected
into the ``/run/$INJECT_TO`` location, or an absolute path (starting with ``'/'``),
in which case it's injected directly into the specified path and it's up to the
test/framework developer to get the value from that location (by using the path
or adding it to ``--mux-path``). To understand the difference, execute these commands::

    $ avocado multiplex -t examples/tests/sleeptest.py.data/sleeptest.yaml
178 179
    $ avocado multiplex -t duration:examples/tests/sleeptest.py.data/sleeptest.yaml
    $ avocado multiplex -t /my/location:examples/tests/sleeptest.py.data/sleeptest.yaml
180

181 182 183
Note that, as your multiplex file specifies all parameters for sleeptest, you
can't leave the test ID empty::

184
    $ scripts/avocado run --multiplex examples/tests/sleeptest/sleeptest.yaml
185
    Empty test ID. A test path or alias must be provided
186

187 188
You can also execute multiple tests with the same multiplex file::

189 190 191
    $ avocado run sleeptest.py synctest.py --multiplex examples/tests/sleeptest.py.data/sleeptest.yaml
    JOB ID     : cd20fc8d1714da6d4791c19322374686da68c45c
    JOB LOG    : $HOME/avocado/job-results/job-2016-05-04T09.25-cd20fc8/job.log
192
    TESTS      : 8
193 194 195 196 197 198 199 200
     (1/8) sleeptest.py:SleepTest.test;1: PASS (0.50 s)
     (2/8) sleeptest.py:SleepTest.test;2: PASS (1.00 s)
     (3/8) sleeptest.py:SleepTest.test;3: PASS (5.01 s)
     (4/8) sleeptest.py:SleepTest.test;4: PASS (10.00 s)
     (5/8) synctest.py:SyncTest.test;1: PASS (2.38 s)
     (6/8) synctest.py:SyncTest.test;2: PASS (2.47 s)
     (7/8) synctest.py:SyncTest.test;3: PASS (2.46 s)
     (8/8) synctest.py:SyncTest.test;4: PASS (2.45 s)
201
    RESULTS    : PASS 8 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
202
    JOB HTML   : $HOME/avocado/job-results/job-2016-05-04T09.25-cd20fc8/html/results.html
203
    TESTS TIME : 26.26 s
204

205

206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277
Advanced logging capabilities
=============================

Avocado provides advanced logging capabilities at test run time.  These can
be combined with the standard Python library APIs on tests.

One common example is the need to follow specific progress on longer or more
complex tests. Let's look at a very simple test example, but one with multiple
clear stages in a single test::

    import logging
    import time

    from avocado import Test

    progress_log = logging.getLogger("progress")

    class Plant(Test):

        def test_plant_organic(self):
            rows = self.params.get("rows", default=3)

            # Preparing soil
            for row in range(rows):
                progress_log.info("%s: preparing soil on row %s",
                                  self.tagged_name, row)

            # Letting soil rest
            progress_log.info("%s: letting soil rest before throwing seeds",
                              self.tagged_name)
            time.sleep(2)

            # Throwing seeds
            for row in range(rows):
                progress_log.info("%s: throwing seeds on row %s",
                                  self.tagged_name, row)

            # Let them grow
            progress_log.info("%s: waiting for Avocados to grow",
                              self.tagged_name)
            time.sleep(5)

            # Harvest them
            for row in range(rows):
                progress_log.info("%s: harvesting organic avocados on row %s",
                                  self.tagged_name, row)


From this point on, you can ask Avocado to show your logging stream, either
exclusively or in addition to other builtin streams::

    $ avocado --show app,progress run plant.py

The outcome should be similar to::

    JOB ID     : af786f86db530bff26cd6a92c36e99bedcdca95b
    JOB LOG    : /home/cleber/avocado/job-results/job-2016-03-18T10.29-af786f8/job.log
    TESTS      : 1
     (1/1) plant.py:Plant.test_plant_organic:  progress: plant.py:Plant.test_plant_organic: preparing soil on row 0
    progress: plant.py:Plant.test_plant_organic: preparing soil on row 1
    progress: plant.py:Plant.test_plant_organic: preparing soil on row 2
    progress: plant.py:Plant.test_plant_organic: letting soil rest before throwing seeds
    -progress: plant.py:Plant.test_plant_organic: throwing seeds on row 0
    progress: plant.py:Plant.test_plant_organic: throwing seeds on row 1
    progress: plant.py:Plant.test_plant_organic: throwing seeds on row 2
    progress: plant.py:Plant.test_plant_organic: waiting for Avocados to grow
    \progress: plant.py:Plant.test_plant_organic: harvesting organic avocados on row 0
    progress: plant.py:Plant.test_plant_organic: harvesting organic avocados on row 1
    progress: plant.py:Plant.test_plant_organic: harvesting organic avocados on row 2
    PASS
    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
    JOB HTML   : /home/cleber/avocado/job-results/job-2016-03-18T10.29-af786f8/html/results.html
    TESTS TIME : 7.01 s

The custom ``progress`` stream is combined with the application output, which
may or may not suit your needs or preferences. If you want the ``progress``
stream to be sent to a separate file, both for clarity and for persistence,
you can run Avocado like this::

    $ avocado run plant.py --store-logging-stream progress

The result is that, besides all the other log files commonly generated, there
will be another log file named ``progress.INFO`` at the job results
dir. During the test run, one could watch the progress with::

    $ tail -f ~/avocado/job-results/latest/progress.INFO
    10:36:59 INFO | plant.py:Plant.test_plant_organic: preparing soil on row 0
    10:36:59 INFO | plant.py:Plant.test_plant_organic: preparing soil on row 1
    10:36:59 INFO | plant.py:Plant.test_plant_organic: preparing soil on row 2
    10:36:59 INFO | plant.py:Plant.test_plant_organic: letting soil rest before throwing seeds
    10:37:01 INFO | plant.py:Plant.test_plant_organic: throwing seeds on row 0
    10:37:01 INFO | plant.py:Plant.test_plant_organic: throwing seeds on row 1
    10:37:01 INFO | plant.py:Plant.test_plant_organic: throwing seeds on row 2
    10:37:01 INFO | plant.py:Plant.test_plant_organic: waiting for Avocados to grow
    10:37:06 INFO | plant.py:Plant.test_plant_organic: harvesting organic avocados on row 0
    10:37:06 INFO | plant.py:Plant.test_plant_organic: harvesting organic avocados on row 1
    10:37:06 INFO | plant.py:Plant.test_plant_organic: harvesting organic avocados on row 2

The very same ``progress`` logger could be used across multiple test methods
and across multiple test modules.  In the example given, the tagged test name
is used to give extra context.

308 309 310 311 312
:class:`unittest.TestCase` heritage
===================================

Since an Avocado test inherits from :class:`unittest.TestCase`, you
can use all the assertion methods that its parent.
313

314 315 316 317
The code example bellow uses :meth:`assertEqual
<unittest.TestCase.assertEqual>`, :meth:`assertTrue
<unittest.TestCase.assertTrue>` and :meth:`assertIsInstace
<unittest.TestCase.assertIsInstance>`::
318

319 320 321
    from avocado import Test

    class RandomExamples(Test):
322
        def test(self):
323 324 325 326 327 328 329 330 331 332 333 334
            self.log.debug("Verifying some random math...")
            four = 2 * 2
            four_ = 2 + 2
            self.assertEqual(four, four_, "something is very wrong here!")

            self.log.debug("Verifying if a variable is set to True...")
            variable = True
            self.assertTrue(variable)

            self.log.debug("Verifying if this test is an instance of test.Test")
            self.assertIsInstance(self, test.Test)

335 336
Running tests under other :mod:`unittest` runners
-------------------------------------------------
337

338 339 340 341
`nose <https://nose.readthedocs.org/>`__ is another Python testing framework
that is also compatible with :mod:`unittest`.

Because of that, you can run avocado tests with the ``nosetests`` application::
342

343
    $ nosetests examples/tests/sleeptest.py
344 345
    .
    ----------------------------------------------------------------------
346
    Ran 1 test in 1.004s
347

348 349
    OK

350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371
Conversely, you can also use the standard :func:`unittest.main` entry point to run an
Avocado test. Check out the following code, to be saved as ``dummy.py``::

   from avocado import Test
   from unittest import main

   class Dummy(Test):
       def test(self):
           self.assertTrue(True)

   if __name__ == '__main__':
       main()

It can be run by::

   $ python dummy.py
   .
   ----------------------------------------------------------------------
   Ran 1 test in 0.000s

   OK

372
Setup and cleanup methods
373
=========================
374 375

If you need to perform setup actions before/after your test, you may do so
376
in the ``setUp`` and ``tearDown`` methods, respectively. We'll give examples
377 378
in the following section.

379 380
Running third party test suites
===============================
381 382

It is very common in test automation workloads to use test suites developed
383
by third parties. By wrapping the execution code inside an Avocado test module,
384 385 386 387 388
you gain access to the facilities and API provided by the framework. Let's
say you want to pick up a test suite written in C that it is in a tarball,
uncompress it, compile the suite code, and then executing the test. Here's
an example that does that::

389
    #!/usr/bin/env python
390

391 392
    import os

393
    from avocado import Test
394
    from avocado import main
395 396 397 398 399
    from avocado.utils import archive
    from avocado.utils import build
    from avocado.utils import process


400
    class SyncTest(Test):
401 402 403 404

        """
        Execute the synctest test suite.
        """
405 406 407
        default_params = {'sync_tarball': 'synctest.tar.bz2',
                          'sync_length': 100,
                          'sync_loop': 10}
408

409
        def setUp(self):
410 411 412 413 414
            """
            Set default params and build the synctest suite.
            """
            # Build the synctest suite
            self.cwd = os.getcwd()
415
            tarball_path = os.path.join(self.datadir, self.params.sync_tarball)
416 417 418 419
            archive.extract(tarball_path, self.srcdir)
            self.srcdir = os.path.join(self.srcdir, 'synctest')
            build.make(self.srcdir)

420
        def test(self):
421 422 423
            """
            Execute synctest with the appropriate params.
            """
424
            os.chdir(self.srcdir)
425 426
            cmd = ('./synctest %s %s' %
                   (self.params.sync_length, self.params.sync_loop))
427
            process.system(cmd)
428 429 430 431
            os.chdir(self.cwd)


    if __name__ == "__main__":
432
        main()
433

434
Here we have an example of the ``setUp`` method in action: Here we get the
435
location of the test suite code (tarball) through
436
:meth:`avocado.Test.datadir`, then uncompress the tarball through
437
:func:`avocado.utils.archive.extract`, an API that will
438 439
decompress the suite tarball, followed by :func:`avocado.utils.build.make`, that will build
the suite.
440

441 442 443 444 445 446 447 448
The ``setUp`` method is the only place in avocado where you are allowed to
call the ``skip`` method, given that, if a test started to be executed, by
definition it can't be skipped anymore. Avocado will do its best to enforce
this boundary, so that if you use ``skip`` outside ``setUp``, the test upon
execution will be marked with the ``ERROR`` status, and the error message
will instruct you to fix your test's code.

In this example, the ``test`` method just gets into the base directory of
the compiled suite and executes the ``./synctest`` command, with appropriate
parameters, using :func:`avocado.utils.process.system`.

Fetching asset files
====================

To run third party test suites as mentioned above, or for any other purpose,
we offer an asset fetcher as a method of the Avocado Test class.
The asset method looks for a list of directories in the ``cache_dirs`` key,
inside the ``[datadir.paths]`` section from the configuration files. Read-only
directories are also supported. When the asset file is not present in any of
the provided directories, we will try to download the file from the provided
locations, copying it to the first writable cache directory. Example::

    cache_dirs = ['/usr/local/src/', '~/avocado/cache']

In the example above, ``/usr/local/src/`` is a read-only directory. In that
case, when we need to fetch the asset from the locations, it will be copied to
the ``~/avocado/cache`` directory.

If you don't provide a ``cache_dirs``, we will create a ``cache`` directory
inside the avocado ``data_dir`` location to put the fetched files in.

* Use case 1: no ``cache_dirs`` key in config files, only the asset name
  provided in the full url format::

    ...
        def setUp(self):
            stress = 'http://people.seas.harvard.edu/~apw/stress/stress-1.0.4.tar.gz'
            tarball = self.fetch_asset(stress)
            archive.extract(tarball, self.srcdir)
    ...

  In this case, ``fetch_asset()`` will download the file from the url provided,
  copying it to the ``$data_dir/cache`` directory. The ``tarball`` variable will
  contain, for example, ``/home/user/avocado/data/cache/stress-1.0.4.tar.gz``.

* Use case 2: Read-only cache directory provided. ``cache_dirs = ['/mnt/files']``::

    ...
        def setUp(self):
            stress = 'http://people.seas.harvard.edu/~apw/stress/stress-1.0.4.tar.gz'
            tarball = self.fetch_asset(stress)
            archive.extract(tarball, self.srcdir)
    ...

  In this case, we try to find the ``stress-1.0.4.tar.gz`` file in the ``/mnt/files``
  directory. If it's not there, since ``/mnt/files`` is read-only, we will try
  to download the asset file to the ``$data_dir/cache`` directory.

* Use case 3: Writable cache directory provided, along with a list of
  locations. ``cache_dirs = ['~/avocado/cache']``::

    ...
        def setUp(self):
            st_name = 'stress-1.0.4.tar.gz'
            st_hash = 'e1533bc704928ba6e26a362452e6db8fd58b1f0b'
            st_loc = ['http://people.seas.harvard.edu/~apw/stress/stress-1.0.4.tar.gz',
                      'ftp://foo.bar/stress-1.0.4.tar.gz']
            tarball = self.fetch_asset(st_name, asset_hash=st_hash,
                                       locations=st_loc)
            archive.extract(tarball, self.srcdir)
    ...

  In this case, we try to download ``stress-1.0.4.tar.gz`` from the provided
  locations list (if it's not already in ``~/avocado/cache``). The hash was
  also provided, so we will verify the hash. To do so, we first look for a
  hashfile named ``stress-1.0.4.tar.gz.sha1`` in the same directory. If the
  hashfile is not present we compute the hash and create the hashfile for
  further usage.

  The resulting ``tarball`` variable content will be
  ``~/avocado/cache/stress-1.0.4.tar.gz``.
  An exception will be raised if we fail to download or to verify the file.


Detailing the ``fetch_asset()`` attributes:

* ``name:`` The name used to name the fetched file. It can also contain a full
  URL, which will be used as the first location to try (after searching the
  cache directories).
* ``asset_hash:`` (optional) The expected file hash. If missing, we skip the
  check. If provided, before computing the hash, we look for a hashfile to
  verify the asset. If the hashfile is not present, we compute the hash and
  create the hashfile in the same cache directory for further usage.
* ``algorithm:`` (optional) Provided hash algorithm format. Defaults to sha1.
* ``locations:`` (optional) List of locations that will be used to try to fetch
  the file from. The supported schemes are ``http://``, ``https://``,
  ``ftp://`` and ``file://``. You're required to provide the full URL to the
  file, including the file name. The first success will skip the remaining
  locations. Notice that for ``file://`` we just create a symbolic link in the
  cache directory, pointing to the file's original location.
* ``expire:`` (optional) Time period during which the cached file will be
  considered valid. After that period, the file will be downloaded again. The
  value can be an integer or a string containing the time and the unit.
  Example: '10d' (ten days). Valid units are ``s`` (second), ``m`` (minute),
  ``h`` (hour) and ``d`` (day).

The expected return is the asset file path; an exception is raised on failure.
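
Putting some of the optional parameters together, a hedged sketch of a call
using ``expire`` (the URL and expiration period below are illustrative only)::

    ...
        def setUp(self):
            tarball = self.fetch_asset(
                'https://example.org/stress-1.0.4.tar.gz',
                expire='10d')
            archive.extract(tarball, self.srcdir)
    ...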

Test Output Check and Output Record Mode
========================================

On many occasions, you want to go simpler: just check if the output of a
given application matches an expected output. In order to help with this common
use case, we offer the option ``--output-check-record [mode]`` to the test runner::

      --output-check-record OUTPUT_CHECK_RECORD
                            Record output streams of your tests to reference files
                            (valid options: none (do not record output streams),
                            all (record both stdout and stderr), stdout (record
                            only stdout), stderr (record only stderr). Default:
                            none

If this option is used, it will store the stdout or stderr of the process (or
both, if you specified ``all``) being executed to reference files: ``stdout.expected``
and ``stderr.expected``. Those files will be recorded in the test data dir. The
data dir is in the same directory as the test source file, named
``[source_file_name].data``. Let's take as an example the test ``synctest.py``. In a
fresh checkout of Avocado, you can see::

        examples/tests/synctest.py.data/stderr.expected
        examples/tests/synctest.py.data/stdout.expected

From those two files, only ``stdout.expected`` is non-empty::

    $ cat examples/tests/synctest.py.data/stdout.expected
    PAR : waiting
    PASS : sync interrupted

The output files were originally obtained by running the test with the
option ``--output-check-record all``::

    $ scripts/avocado run --output-check-record all synctest.py
    JOB ID    : bcd05e4fd33e068b159045652da9eb7448802be5
    JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.20-bcd05e4/job.log
    TESTS     : 1
     (1/1) synctest.py:SyncTest.test: PASS (2.20 s)
    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
    TESTS TIME : 2.20 s


After the reference files are added, the check process is transparent, in the sense
that you do not need to provide special flags to the test runner.
Now, every time the test is executed, after it is done running, it will check
if the outputs are exactly right before considering the test as PASSed. If you want to override the default
behavior and skip the output check entirely, you may provide the flag ``--output-check=off`` to the test runner.

The :mod:`avocado.utils.process` APIs have a parameter ``allow_output_check`` (defaults to ``all``), so that you
can select which process outputs will go to the reference files, should you choose to record them. You may choose
``all``, for both stdout and stderr, ``stdout``, for stdout only, ``stderr``, for stderr only, or ``none``,
to allow neither of them to be recorded and checked.
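
For instance, a hedged sketch of keeping a noisy helper command out of the
reference files while still recording the command under test (the command
names are illustrative)::

    from avocado.utils import process

    # helper command: keep its output out of the reference files
    process.run('dmesg', allow_output_check='none')
    # command under test: record both of its output streams
    process.run('./synctest 100 10', allow_output_check='all')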

This process also works fine with simple tests, which are programs or shell scripts
that return 0 (PASSed) or != 0 (FAILed). Let's consider our bogus example::

    $ cat output_record.sh
    #!/bin/bash
    echo "Hello, world!"

Let's record the output for this one::

    $ scripts/avocado run output_record.sh --output-check-record all
    JOB ID    : 25c4244dda71d0570b7f849319cd71fe1722be8b
    JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.49-25c4244/job.log
    TESTS     : 1
     (1/1) output_record.sh: PASS (0.01 s)
    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
    TESTS TIME : 0.01 s

After this is done, you'll notice that the test data directory
appeared at the same level as our shell script, containing 2 files::

    $ ls output_record.sh.data/
    stderr.expected  stdout.expected

Let's look at what's in each of them::

    $ cat output_record.sh.data/stdout.expected
    Hello, world!
    $ cat output_record.sh.data/stderr.expected
    $

Now, every time this test runs, it'll take into account the expected files that
were recorded; no need to do anything else but run the test. Let's see what
happens if we change the ``stdout.expected`` file contents to ``Hello, Avocado!``::

    $ scripts/avocado run output_record.sh
    JOB ID    : f0521e524face93019d7cb99c5765aedd933cb2e
    JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
    TESTS     : 1
     (1/1) output_record.sh: FAIL (0.02 s)
    RESULTS    : PASS 0 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
    TESTS TIME : 0.02 s

Verifying the failure reason::

    $ cat $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
    20:52:38 test       L0163 INFO | START 1-output_record.sh
    20:52:38 test       L0164 DEBUG|
    20:52:38 test       L0165 DEBUG| Test instance parameters:
    20:52:38 test       L0173 DEBUG|
    20:52:38 test       L0176 DEBUG| Default parameters:
    20:52:38 test       L0180 DEBUG|
    20:52:38 test       L0181 DEBUG| Test instance params override defaults whenever available
    20:52:38 test       L0182 DEBUG|
    20:52:38 process    L0242 INFO | Running '$HOME/Code/avocado/output_record.sh'
    20:52:38 process    L0310 DEBUG| [stdout] Hello, world!
    20:52:38 test       L0565 INFO | Command: $HOME/Code/avocado/output_record.sh
    20:52:38 test       L0565 INFO | Exit status: 0
    20:52:38 test       L0565 INFO | Duration: 0.00313782691956
    20:52:38 test       L0565 INFO | Stdout:
    20:52:38 test       L0565 INFO | Hello, world!
    20:52:38 test       L0565 INFO |
    20:52:38 test       L0565 INFO | Stderr:
    20:52:38 test       L0565 INFO |
    20:52:38 test       L0060 ERROR|
    20:52:38 test       L0063 ERROR| Traceback (most recent call last):
    20:52:38 test       L0063 ERROR|   File "$HOME/Code/avocado/avocado/test.py", line 397, in check_reference_stdout
    20:52:38 test       L0063 ERROR|     self.assertEqual(expected, actual, msg)
    20:52:38 test       L0063 ERROR|   File "/usr/lib64/python2.7/unittest/case.py", line 551, in assertEqual
    20:52:38 test       L0063 ERROR|     assertion_func(first, second, msg=msg)
    20:52:38 test       L0063 ERROR|   File "/usr/lib64/python2.7/unittest/case.py", line 544, in _baseAssertEqual
    20:52:38 test       L0063 ERROR|     raise self.failureException(msg)
    20:52:38 test       L0063 ERROR| AssertionError: Actual test sdtout differs from expected one:
    20:52:38 test       L0063 ERROR| Actual:
    20:52:38 test       L0063 ERROR| Hello, world!
    20:52:38 test       L0063 ERROR|
    20:52:38 test       L0063 ERROR| Expected:
    20:52:38 test       L0063 ERROR| Hello, Avocado!
    20:52:38 test       L0063 ERROR|
    20:52:38 test       L0064 ERROR|
    20:52:38 test       L0529 ERROR| FAIL 1-output_record.sh -> AssertionError: Actual test sdtout differs from expected one:
    Actual:
    Hello, world!

    Expected:
    Hello, Avocado!

    20:52:38 test       L0516 INFO |

As expected, the test failed because we changed its expectations.

Test log, stdout and stderr in native Avocado modules
=====================================================

If needed, you can write directly to the expected stdout and stderr files
from the native test scope. It is important to make the distinction between
the following entities:

* The test logs
* The test expected stdout
* The test expected stderr

The first one is used for debugging and informational purposes. Additionally,
writing to ``self.log.warning`` causes the test to be marked as dirty and, when
everything else goes well, the test ends with WARN. This means that the test
passed, but there were unrelated unexpected situations described in the warning
log.

You may log something into the test logs using the methods available from the
``self.log`` class attribute. Consider the example::

    class output_test(Test):

        def test(self):
            self.log.info('This goes to the log and it is only informational')
            self.log.warn('Oh, something unexpected, non-critical happened, '
                          'but we can continue.')
            self.log.error("Describe the error here and don't forget to raise "
                           "an exception yourself. Writing to self.log.error "
                           "won't do that for you.")
            self.log.debug('Everybody look, I had a good lunch today...')

If you need to write directly to the test stdout and stderr streams,
Avocado makes two preconfigured loggers available for that purpose,
named ``avocado.test.stdout`` and ``avocado.test.stderr``. You can use
Python's standard logging API to write to them. Example::

    import logging

    class output_test(Test):

        def test(self):
            stdout = logging.getLogger('avocado.test.stdout')
            stdout.info('Informational line that will go to stdout')
            ...
            stderr = logging.getLogger('avocado.test.stderr')
            stderr.info('Informational line that will go to stderr')

Avocado will automatically save anything a test generates on STDOUT
into a ``stdout`` file, to be found at the test results directory. The same
applies to anything a test generates on STDERR, that is, it will be saved
into a ``stderr`` file at the same location.

Additionally, when using the runner's output recording features,
namely the ``--output-check-record`` argument with values ``stdout``,
``stderr`` or ``all``, everything given to those loggers will be saved
to the files ``stdout.expected`` and ``stderr.expected`` at the test's
data directory (which is different from the job/test results directory).

Avocado Tests run in a separate process
=======================================

In order to avoid tests messing with the environment used by the main
Avocado runner process, tests are run in a forked subprocess. This allows
for more robustness (tests are not easily able to mess with or break Avocado) and
some nifty features, such as setting test timeouts.

Setting a Test Timeout
======================

Sometimes your test suite/test might get stuck forever, and this might
impact your test grid. You can account for that possibility and set up a
``timeout`` parameter for your test. The test timeout can be set through
2 means, in the following order of precedence:

* Multiplex variable parameters. You may just set the timeout parameter, like
  in the following simplistic example:

::

    sleep_length = 5
    sleep_length_type = float
    timeout = 3
    timeout_type = float

::

    $ avocado run sleeptest.py --multiplex /tmp/sleeptest-example.yaml
    JOB ID    : 6d5a2ff16bb92395100fbc3945b8d253308728c9
    JOB LOG   : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
    JOB HTML  : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/html/results.html
    TESTS     : 1
     (1/1) sleeptest.py:SleepTest.test: ERROR (2.97 s)
    RESULTS    : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
    TESTS TIME : 2.97 s

::

    $ cat $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
    15:52:51 test       L0143 INFO | START 1-sleeptest.py
    15:52:51 test       L0144 DEBUG|
    15:52:51 test       L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/sleeptest.1/test.log
    15:52:51 test       L0146 DEBUG| Test instance parameters:
    15:52:51 test       L0153 DEBUG|     _name_map_file = {'sleeptest-example.yaml': 'sleeptest'}
    15:52:51 test       L0153 DEBUG|     _short_name_map_file = {'sleeptest-example.yaml': 'sleeptest'}
    15:52:51 test       L0153 DEBUG|     dep = []
    15:52:51 test       L0153 DEBUG|     id = sleeptest
    15:52:51 test       L0153 DEBUG|     name = sleeptest
    15:52:51 test       L0153 DEBUG|     shortname = sleeptest
    15:52:51 test       L0153 DEBUG|     sleep_length = 5.0
    15:52:51 test       L0153 DEBUG|     sleep_length_type = float
    15:52:51 test       L0153 DEBUG|     timeout = 3.0
    15:52:51 test       L0153 DEBUG|     timeout_type = float
    15:52:51 test       L0154 DEBUG|
    15:52:51 test       L0157 DEBUG| Default parameters:
    15:52:51 test       L0159 DEBUG|     sleep_length = 1.0
    15:52:51 test       L0161 DEBUG|
    15:52:51 test       L0162 DEBUG| Test instance params override defaults whenever available
    15:52:51 test       L0163 DEBUG|
    15:52:51 test       L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15670 to end
    15:52:51 test       L0170 INFO |
    15:52:51 sleeptest  L0035 DEBUG| Sleeping for 5.00 seconds
    15:52:54 test       L0057 ERROR|
    15:52:54 test       L0060 ERROR| Traceback (most recent call last):
    15:52:54 test       L0060 ERROR|   File "$HOME/Code/avocado/tests/sleeptest.py", line 36, in action
    15:52:54 test       L0060 ERROR|     time.sleep(self.params.sleep_length)
    15:52:54 test       L0060 ERROR|   File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
    15:52:54 test       L0060 ERROR|     raise exceptions.TestTimeoutError(e_msg)
    15:52:54 test       L0060 ERROR| TestTimeoutError: Timeout reached waiting for sleeptest to end
    15:52:54 test       L0061 ERROR|
    15:52:54 test       L0400 ERROR| ERROR 1-sleeptest.py -> TestTimeoutError: Timeout reached waiting for sleeptest to end
    15:52:54 test       L0387 INFO |


If you pass that multiplex file to the runner multiplexer, this will register
a timeout of 3 seconds before Avocado ends the test forcefully by sending a
:class:`signal.SIGTERM` to the test, making it raise a
:class:`avocado.core.exceptions.TestTimeoutError`.

* Default params attribute. Consider the following example:

::

    import time

    from avocado import Test
    from avocado import main


    class TimeoutTest(Test):

        """
        Functional test for Avocado. Throw a TestTimeoutError.
        """
        default_params = {'timeout': 3.0,
                          'sleep_time': 5.0}

        def test(self):
            """
            This should throw a TestTimeoutError.
            """
            self.log.info('Sleeping for %.2f seconds (2 more than the timeout)',
                          self.params.sleep_time)
            time.sleep(self.params.sleep_time)


    if __name__ == "__main__":
        main()

This accomplishes a similar effect to the multiplex setup defined above.

::

    $ avocado run timeouttest.py
    JOB ID    : d78498a54504b481192f2f9bca5ebb9bbb820b8a
    JOB LOG   : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
    JOB HTML  : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/html/results.html
    TESTS     : 1
     (1/1) timeouttest.py:TimeoutTest.test: INTERRUPTED (3.04 s)
    RESULTS    : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
    TESTS TIME : 3.04 s


::

    $ cat $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
    15:54:28 test       L0143 INFO | START 1-timeouttest.py:TimeoutTest.test
    15:54:28 test       L0144 DEBUG|
    15:54:28 test       L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/timeouttest.1/test.log
    15:54:28 test       L0146 DEBUG| Test instance parameters:
    15:54:28 test       L0153 DEBUG|     id = timeouttest
    15:54:28 test       L0154 DEBUG|
    15:54:28 test       L0157 DEBUG| Default parameters:
    15:54:28 test       L0159 DEBUG|     sleep_time = 5.0
    15:54:28 test       L0159 DEBUG|     timeout = 3.0
    15:54:28 test       L0161 DEBUG|
    15:54:28 test       L0162 DEBUG| Test instance params override defaults whenever available
    15:54:28 test       L0163 DEBUG|
    15:54:28 test       L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15759 to end
    15:54:28 test       L0170 INFO |
    15:54:28 timeouttes L0036 INFO | Sleeping for 5.00 seconds (2 more than the timeout)
    15:54:31 test       L0057 ERROR|
    15:54:31 test       L0060 ERROR| Traceback (most recent call last):
    15:54:31 test       L0060 ERROR|   File "$HOME/Code/avocado/tests/timeouttest.py", line 37, in action
    15:54:31 test       L0060 ERROR|     time.sleep(self.params.sleep_time)
    15:54:31 test       L0060 ERROR|   File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
    15:54:31 test       L0060 ERROR|     raise exceptions.TestTimeoutError(e_msg)
    15:54:31 test       L0060 ERROR| TestTimeoutError: Timeout reached waiting for timeouttest to end
    15:54:31 test       L0061 ERROR|
    15:54:31 test       L0400 ERROR| ERROR 1-timeouttest.py:TimeoutTest.test -> TestTimeoutError: Timeout reached waiting for timeouttest to end
    15:54:31 test       L0387 INFO |


Test Tags
=========

The need may arise for more complex tests that use more advanced Python features
such as inheritance. Due to the fact that Avocado uses a safe test introspection
method, which is more limited than actually loading the test classes, Avocado
may need your help to identify those tests. For example, let's say you are
defining a new test class that inherits from the Avocado base test class and
putting it in ``mylibrary.py``::

    from avocado import Test


    class MyOwnDerivedTest(Test):
        def __init__(self, methodName='test', name=None, params=None,
                     base_logdir=None, job=None, runner_queue=None):
            super(MyOwnDerivedTest, self).__init__(methodName, name, params,
                                                   base_logdir, job,
                                                   runner_queue)
            self.log.info('Derived class example')


Then implement your actual test using that derived class, in ``mytest.py``::

    import mylibrary


    class MyTest(mylibrary.MyOwnDerivedTest):

        def test1(self):
            self.log.info('Testing something important')

        def test2(self):
            self.log.info('Testing something even more important')


If you try to list the tests in that file, this is what you'll get::

    scripts/avocado list mytest.py -V
    Type       Test
    NOT_A_TEST mytest.py

    ACCESS_DENIED: 0
    BROKEN_SYMLINK: 0
    EXTERNAL: 0
    FILTERED: 0
    INSTRUMENTED: 0
    MISSING: 0
    NOT_A_TEST: 1
    SIMPLE: 0
    VT: 0

You need to give Avocado a little help by adding a docstring tag. That docstring
tag is ``:avocado: enable``. That tag tells the Avocado safe test detection
code to consider it as an Avocado test, regardless of what the (admittedly simple)
detection code thinks of it. Let's see how that works out. Add the docstring,
as you can see in the example below::

    import mylibrary


    class MyTest(mylibrary.MyOwnDerivedTest):
        """
        :avocado: enable
        """
        def test1(self):
            self.log.info('Testing something important')

        def test2(self):
            self.log.info('Testing something even more important')


Now, trying to list the tests in the ``mytest.py`` file again::

    scripts/avocado list mytest.py -V
    Type         Test
    INSTRUMENTED mytest.py:MyTest.test1
    INSTRUMENTED mytest.py:MyTest.test2

    ACCESS_DENIED: 0
    BROKEN_SYMLINK: 0
    EXTERNAL: 0
    FILTERED: 0
    INSTRUMENTED: 2
    MISSING: 0
    NOT_A_TEST: 0
    SIMPLE: 0
    VT: 0

You can also use the ``:avocado: disable`` tag, which works the opposite way:
something looks like an Avocado test, but we force it not to be listed as one.

Environment Variables for Simple Tests
======================================

Avocado exports its own variables and the multiplexed variables as BASH environment
variables to the running test. Those variables are interesting to simple tests,
because they cannot make use of the Avocado API directly with Python, like the
native tests can, and the exported variables also give them access to the test
parameters.

Here are the current variables that Avocado exports to the tests:

+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| Environment Variable    | Meaning                               | Example                                                                                             |
+=========================+=======================================+=====================================================================================================+
| AVOCADO_VERSION         | Version of Avocado test runner        | 0.12.0                                                                                              |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_BASEDIR    | Base directory of Avocado tests       | $HOME/Downloads/avocado-source/avocado                                                              |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_DATADIR    | Data directory for the test           | $AVOCADO_TEST_BASEDIR/my_test.sh.data                                                               |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_WORKDIR    | Work directory for the test           | /var/tmp/avocado_Bjr_rd/my_test.sh                                                                  |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_SRCDIR     | Source directory for the test         | /var/tmp/avocado_Bjr_rd/my-test.sh/src                                                              |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_LOGDIR     | Log directory for the test            | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1                 |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_LOGFILE    | Log file for the test                 | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/debug.log       |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_OUTPUTDIR  | Output directory for the test         | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/data            |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_SYSINFODIR | The system information directory      | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/sysinfo         |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| *                       | All variables from --multiplex-file   | TIMEOUT=60; IO_WORKERS=10; VM_BYTES=512M; ...                                                       |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
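
As an illustration, a simple test written in Python (remember that a simple
test is just an executable that returns 0 on success) could consume some of
those variables like this sketch does::

    #!/usr/bin/env python
    import os
    import sys

    # directories exported by Avocado to the test process
    outputdir = os.environ.get('AVOCADO_TEST_OUTPUTDIR', '.')
    logdir = os.environ.get('AVOCADO_TEST_LOGDIR', '.')

    with open(os.path.join(outputdir, 'notes.txt'), 'w') as notes:
        notes.write('log directory was: %s\n' % logdir)

    sys.exit(0)  # exit status 0 means PASS for a simple test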


Simple Tests BASH extensions
============================

To enhance simple tests one can use a supported set of libraries we created. The
only requirement is to use::

    PATH=$(avocado "exec-path"):$PATH

which injects the path to the Avocado utilities into the shell PATH. Take a look at
``avocado exec-path`` to see the list of available functions, and take a look at
``examples/tests/simplewarning.sh`` for inspiration.


Wrap Up
=======

We recommend you take a look at the example tests present in the
``examples/tests`` directory, which contains a few samples to take some
inspiration from. That directory, besides containing examples, is also used by
the Avocado self test suite to do functional testing of Avocado itself.

It is also recommended that you take a look at the :ref:`api-reference`
for more possibilities.

.. [#f1] sleeptest is a functional test for Avocado. It's "old" because we
         have also had such a test for `Autotest`_ for a long time.

.. _Autotest: http://autotest.github.io
.. _Class Names: https://www.python.org/dev/peps/pep-0008/
.. _PEP8 Function Names: https://www.python.org/dev/peps/pep-0008/#function-names