.. _writing-tests:

=====================
Writing Avocado Tests
=====================

Test Resolution in avocado - simple tests vs instrumented tests
===============================================================

What is a test in the avocado context? Either one of:

* An executable file that returns exit code 0 (PASS) or != 0 (FAIL). This
  is known as a SimpleTest, in avocado terminology (see the sketch after
  this list).
* A python module containing a class derived from :class:`avocado.test.Test`.
  This is known as an instrumented test, in avocado terminology. The term
  instrumented is used because the avocado python test classes allow you to
  get more features for your test, such as logging facilities and more
  sophisticated test APIs.

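As an illustration, a trivial simple test could be nothing more than an
executable script; here is a sketch in Python (remember to ``chmod +x`` the
file)::

    #!/usr/bin/python
    # A trivial simple test: the exit code alone signals PASS (0) or FAIL (!= 0)
    import sys

    sys.exit(0)
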
When you use the avocado runner, you'll frequently provide paths to files
that will be inspected and acted upon depending on their contents. The
diagram below shows how avocado analyzes a file and decides what to do with
it:

.. figure:: diagram.png

Now that we have covered how avocado resolves tests, let's get down to
business. This section is concerned with writing an avocado test. The
process is not hard: all you need to do is create a test module, which is a
python file with a class that inherits from :class:`avocado.test.Test`. This
class only really needs to implement a method called ``action``, which
represents the actual sequence of test operations.

Simple example
==============

Let's re-create an old-time favorite, ``sleeptest``, which is a functional
test for avocado (old because we also use such a test for autotest). It does
nothing but ``time.sleep([number-seconds])``::

    #!/usr/bin/python

    import time

    from avocado import job
    from avocado import test


    class SleepTest(test.Test):

        """
        Example test for avocado.
        """
        default_params = {'sleep_length': 1.0}

        def action(self):
            """
            Sleep for length seconds.
            """
            self.log.debug("Sleeping for %.2f seconds", self.params.sleep_length)
            time.sleep(self.params.sleep_length)


    if __name__ == "__main__":
        job.main()


This is about the simplest test you can write for avocado (at least, one
using the avocado APIs). An avocado test is basically a class that inherits
from :class:`avocado.test.Test` and can have any name you like (we'll trust
you to choose a good name, although we do recommend the CamelCase naming
convention, for PEP8 consistency).

Note that the test object provides you with a number of convenience
attributes, such as ``self.log``, which lets you log debug, info, error and
warning messages. Also note the parameter passing system that avocado
provides: we frequently want to pass parameters to tests, and we can do that
through what we call a `multiplex file`, a configuration file that not only
allows you to provide params to your test, but also to easily create a
validation matrix in a concise way. You can find more about the multiplex
file format in :doc:`MultiplexConfig`.

Saving test-generated (custom) data
===================================

Each test instance provides a so-called ``whiteboard``, which can be
accessed through ``self.whiteboard``. The whiteboard is simply a string that
will be automatically saved to the test results (as long as the output
format supports it). If you choose to save binary data to the whiteboard,
it's your responsibility to encode it first (base64 is the obvious choice).

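A minimal sketch of that, using an illustrative binary payload::

    import base64
    import os

    from avocado import test
    from avocado import job


    class BinaryWhiteboardTest(test.Test):

        """
        Example of saving binary data to the whiteboard (a sketch).
        """

        def action(self):
            payload = os.urandom(16)  # illustrative binary payload
            # base64-encode so the whiteboard remains a plain string
            self.whiteboard = base64.b64encode(payload)


    if __name__ == "__main__":
        job.main()
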
Building on the previously demonstrated sleeptest, suppose that you want to save the
sleep length to be used by some other script or data analysis tool::

        def action(self):
            """
            Sleep for length seconds.
            """
            self.log.debug("Sleeping for %.2f seconds", self.params.sleep_length)
            time.sleep(self.params.sleep_length)
            self.whiteboard = "%.2f" % self.params.sleep_length

The whiteboard can and should be exposed by files generated by the available
test result plugins. The ``results.json`` file already includes the
whiteboard for each test. Additionally, we'll save a raw copy of the
whiteboard contents in a file named ``whiteboard``, at the same level as the
``results.json`` file, for your convenience (maybe you want to use the
result of a benchmark directly with your custom-made scripts to analyze that
particular benchmark result).

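As an illustration, a custom analysis script could read that raw copy
directly; a sketch, using a hypothetical job results path::

    import os


    def read_whiteboard(job_results_dir):
        # the raw whiteboard copy sits next to results.json
        with open(os.path.join(job_results_dir, 'whiteboard')) as wb_file:
            return wb_file.read()

    results_dir = os.path.expanduser(
        '~/avocado/job-results/job-2014-08-12T15.44-d565e8de')
    print read_whiteboard(results_dir)
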
Accessing test parameters
=========================

Each test has a set of parameters that can be accessed through
``self.params.[param-name]``. Avocado finds and populates ``self.params``
with all parameters you define in a multiplex config file (see
:doc:`MultiplexConfig`), in a way that they are available as attributes, not
just dict keys. This has the advantage of reducing the boilerplate code
necessary to access those parameters. As an example, consider the following
multiplex file for sleeptest::

    variants:
        - sleeptest:
            sleep_length_type = float
            variants:
                - short:
                    sleep_length = 0.5
                - medium:
                    sleep_length = 1
                - long:
                    sleep_length = 5

You may notice some things here: there is one test param for sleeptest,
called ``sleep_length``. We could have named it simply ``length``, but we
prefer to create a param namespace of sorts here. We also defined
``sleep_length_type``, which is used by the config system to convert a value
(by default a :class:`basestring`) to an appropriate value type (in this
case, we need to pass a :class:`float` to :func:`time.sleep` anyway). Note
that this is an optional feature, and you can always use :func:`float` to
convert the string value coming from the configuration yourself, as shown
below.

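A sketch of that explicit conversion, mirroring the ``action`` method of
sleeptest::

        def action(self):
            # explicit conversion, when not relying on sleep_length_type
            sleep_length = float(self.params.sleep_length)
            time.sleep(sleep_length)
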
Another important design detail is that sometimes we might not want to use
the config system at all (for example, when we run an avocado test as a
standalone test). To account for this case, we have to specify a
``default_params`` dictionary that contains the default values for when we
are not providing config from a multiplex file.

Using a multiplex file
======================

You may use the avocado runner with a multiplex file to provide params and
matrix generation for sleeptest, like this::

    $ avocado run sleeptest --multiplex examples/tests/sleeptest.py.data/sleeptest.yaml
    JOB ID : d565e8dec576d6040f894841f32a836c751f968f
    JOB LOG: $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/job.log
    TESTS  : 3
    (1/3) sleeptest.short: PASS (0.50 s)
    (2/3) sleeptest.medium: PASS (1.01 s)
    (3/3) sleeptest.long: PASS (5.01 s)
    PASS : 3
    ERROR: 0
    FAIL : 0
    SKIP : 0
    WARN : 0
    TIME : 6.52 s

Note that, as your multiplex file specifies all parameters for sleeptest, you
can't leave the test ID empty::

    $ scripts/avocado run --multiplex examples/tests/sleeptest/sleeptest.yaml
    Empty test ID. A test path or alias must be provided

If you want to run some tests that don't require params set by the multiplex file, you can::

    $ avocado run sleeptest synctest --multiplex examples/tests/sleeptest.py.data/sleeptest.yaml
    JOB ID : dd91ea5f8b42b2f084702315688284f7e8aa220a
    JOB LOG: $HOME/avocado/job-results/job-2014-08-12T15.49-dd91ea5f/job.log
    TESTS  : 4
    (1/4) sleeptest.short: PASS (0.50 s)
    (2/4) sleeptest.medium: PASS (1.01 s)
    (3/4) sleeptest.long: PASS (5.01 s)
    (4/4) synctest.1: ERROR (1.85 s)
    PASS : 3
    ERROR: 1
    FAIL : 0
    SKIP : 0
    WARN : 0
    TIME : 8.69 s

Avocado tests are also unittests
================================

Since avocado tests inherit from :class:`unittest.TestCase`, you can use all
the ``assert*`` class methods on your tests. Some silly examples::

    class RandomExamples(test.Test):
        def action(self):
            self.log.debug("Verifying some random math...")
            four = 2 * 2
            four_ = 2 + 2
            self.assertEqual(four, four_, "something is very wrong here!")

            self.log.debug("Verifying if a variable is set to True...")
            variable = True
            self.assertTrue(variable)

            self.log.debug("Verifying if this test is an instance of test.Test")
            self.assertIsInstance(self, test.Test)

The reason why we have a shebang at the beginning of the test is that
avocado tests, similarly to unittests, can use an entry point, called
:func:`avocado.job.main`, that calls avocado libs to look for test classes
and execute their main entry point. This is an optional, but fairly handy
feature. In case you want to use it, don't forget to ``chmod +x`` your test.

Executing an avocado test gives::

    $ examples/tests/sleeptest.py
    JOB ID : de6c1e4c227c786dc4d926f6fca67cda34d96276
    JOB LOG: $HOME/avocado/job-results/job-2014-08-12T15.48-de6c1e4c/job.log
    TESTS  : 1
    (1/1) sleeptest.1: PASS (1.00 s)
    PASS : 1
    ERROR: 0
    FAIL : 0
    SKIP : 0
    WARN : 0
    TIME : 1.00 s

Running tests with nosetests
============================

`nose <https://nose.readthedocs.org/>`__ is a python testing framework with
similar goals to avocado, except that avocado also intends to provide tools
to assemble a fully automated test grid, plus a richer test API for tests on
the Linux platform. Regardless, since an avocado test class is also a
unittest class, you can run avocado tests with the ``nosetests``
application::

    $ nosetests examples/tests/sleeptest.py
    .
    ----------------------------------------------------------------------
    Ran 1 test in 1.004s

    OK

Setup and cleanup methods
=========================

If you need to perform setup actions before/after your test, you may do so
in the ``setup`` and ``cleanup`` methods, respectively. A minimal sketch
follows, and the next section shows a real-world use of ``setup``.

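The sketch below assumes the test's ``workdir`` attribute (the instrumented
test counterpart of the ``AVOCADO_TEST_WORKDIR`` environment variable
documented later in this document); the scratch file name is hypothetical::

    import os

    from avocado import test
    from avocado import job


    class ScratchTest(test.Test):

        """
        Example of setup/cleanup around the test action (a sketch).
        """

        def setup(self):
            # runs before action(): create a scratch file
            self.scratch_file = os.path.join(self.workdir, 'scratch')
            open(self.scratch_file, 'w').close()

        def action(self):
            self.assertTrue(os.path.isfile(self.scratch_file))

        def cleanup(self):
            # runs after action(): remove what setup created
            os.remove(self.scratch_file)


    if __name__ == "__main__":
        job.main()
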
Running third-party test suites
===============================

It is very common in test automation workloads to use test suites developed
by third parties. By wrapping the execution code inside an avocado test
module, you gain access to the facilities and API provided by the framework.
Let's say you want to pick up a test suite written in C that is shipped in a
tarball, uncompress it, compile the suite code, and then execute the test.
Here's an example that does just that::

    #!/usr/bin/python

    import os

    from avocado import test
    from avocado import job
    from avocado.utils import archive
    from avocado.utils import build
    from avocado.utils import process


    class SyncTest(test.Test):

        """
        Execute the synctest test suite.
        """
        default_params = {'sync_tarball': 'synctest.tar.bz2',
                          'sync_length': 100,
                          'sync_loop': 10}

        def setup(self):
            """
            Set default params and build the synctest suite.
            """
            # Build the synctest suite
            self.cwd = os.getcwd()
            tarball_path = self.get_data_path(self.params.sync_tarball)
            archive.extract(tarball_path, self.srcdir)
            self.srcdir = os.path.join(self.srcdir, 'synctest')
            build.make(self.srcdir)

        def action(self):
            """
            Execute synctest with the appropriate params.
            """
            os.chdir(self.srcdir)
            cmd = ('./synctest %s %s' %
                   (self.params.sync_length, self.params.sync_loop))
            process.system(cmd)
            os.chdir(self.cwd)


    if __name__ == "__main__":
        job.main()

Here we have an example of the ``setup`` method in action: we get the
location of the test suite code (tarball) through
:func:`avocado.test.Test.get_data_path`, then uncompress the tarball through
:func:`avocado.utils.archive.extract`, an API that will decompress the suite
tarball, followed by ``build.make``, which will build the suite.

In this example, the ``action`` method just changes into the base directory
of the compiled suite and executes the ``./synctest`` command, with
appropriate parameters, using :func:`avocado.utils.process.system`.

Test Output Check and Output Record Mode
========================================

On many occasions, you want to go simpler: just check if the output of a
given application matches an expected output. In order to help with this
common use case, we offer the option ``--output-check-record [mode]`` to the
test runner::

      --output-check-record OUTPUT_CHECK_RECORD
                            Record output streams of your tests to reference files
                            (valid options: none (do not record output streams),
                            all (record both stdout and stderr), stdout (record
                            only stdout), stderr (record only stderr). Default:
                            none

If this option is used, it will store the stdout or stderr of the process
(or both, if you specified ``all``) being executed to reference files:
``stdout.expected`` and ``stderr.expected``. Those files will be recorded in
the test data dir, which lives in the same directory as the test source file
and is named ``[source_file_name].data``. Let's take as an example the test
``synctest.py``. In a fresh checkout of avocado, you can see::

        examples/tests/synctest.py.data/stderr.expected
        examples/tests/synctest.py.data/stdout.expected

From those 2 files, only ``stdout.expected`` is non-empty::

    $ cat examples/tests/synctest.py.data/stdout.expected
    PAR : waiting
    PASS : sync interrupted

The output files were originally obtained by running the test with the
option ``--output-check-record all``::

    $ scripts/avocado run --output-check-record all synctest
    JOB ID    : bcd05e4fd33e068b159045652da9eb7448802be5
    JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.20-bcd05e4/job.log
    TESTS     : 1
    (1/1) synctest.py: PASS (2.20 s)
    PASS      : 1
    ERROR     : 0
    FAIL      : 0
    SKIP      : 0
    WARN      : 0
    TIME      : 2.20 s


After the reference files are added, the check process is transparent, in
the sense that you do not need to provide special flags to the test runner.
From then on, every time the test is executed, after it is done running, it
will check if the outputs are exactly right before considering the test as
PASSed. If you want to override the default behavior and skip the output
check entirely, you may provide the flag ``--disable-output-check`` to the
test runner.
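For instance, a sketch of such an invocation::

    $ avocado run synctest --disable-output-check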

The :mod:`avocado.utils.process` APIs have a parameter ``allow_output_check``
(defaults to ``all``), so that you can select which process outputs will go
to the reference files, should you choose to record them. You may choose
``all``, for both stdout and stderr, ``stdout``, for stdout only, ``stderr``,
for stderr only, or ``none``, to allow neither of them to be recorded and
checked, as in the sketch below.

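A sketch of how that might look inside a test's ``action`` method (``dmesg``
here is just an illustrative command whose output we don't want in the
reference files)::

        def action(self):
            # this command's output belongs in the reference files
            process.system('./synctest 100 10')
            # this informational command should not pollute them
            process.system('dmesg', allow_output_check='none')
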
This process also works fine with simple tests, which are programs or shell
scripts that return 0 (PASS) or != 0 (FAIL). Let's consider our bogus
example::

    $ cat output_record.sh
    #!/bin/bash
    echo "Hello, world!"

Let's record the output for this one::

    $ scripts/avocado run output_record.sh --output-check-record all
    JOB ID    : 25c4244dda71d0570b7f849319cd71fe1722be8b
    JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.49-25c4244/job.log
    TESTS     : 1
    (1/1) home/$USER/Code/avocado/output_record.sh: PASS (0.01 s)
    PASS      : 1
    ERROR     : 0
    FAIL      : 0
    SKIP      : 0
    WARN      : 0
    TIME      : 0.01 s

After this is done, you'll notice that the test data directory appeared at
the same level as our shell script, containing 2 files::

    $ ls output_record.sh.data/
    stderr.expected  stdout.expected

Let's look what's in each of them::

    $ cat output_record.sh.data/stdout.expected
    Hello, world!
    $ cat output_record.sh.data/stderr.expected
    $

Now, every time this test runs, it'll take into account the expected files that
were recorded, no need to do anything else but run the test. Let's see what
happens if we change the ``stdout.expected`` file contents to ``Hello, avocado!``::

    $ scripts/avocado run output_record.sh
    JOB ID    : f0521e524face93019d7cb99c5765aedd933cb2e
    JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
    TESTS     : 1
    (1/1) home/$USER/Code/avocado/output_record.sh: FAIL (0.02 s)
    PASS      : 0
    ERROR     : 0
    FAIL      : 1
    SKIP      : 0
    WARN      : 0
    TIME      : 0.02 s

Verifying the failure reason::

    $ cat $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
    20:52:38 test       L0163 INFO | START home/$USER/Code/avocado/output_record.sh
    20:52:38 test       L0164 DEBUG|
    20:52:38 test       L0165 DEBUG| Test instance parameters:
    20:52:38 test       L0173 DEBUG|
    20:52:38 test       L0176 DEBUG| Default parameters:
    20:52:38 test       L0180 DEBUG|
    20:52:38 test       L0181 DEBUG| Test instance params override defaults whenever available
    20:52:38 test       L0182 DEBUG|
    20:52:38 process    L0242 INFO | Running '$HOME/Code/avocado/output_record.sh'
    20:52:38 process    L0310 DEBUG| [stdout] Hello, world!
    20:52:38 test       L0565 INFO | Command: $HOME/Code/avocado/output_record.sh
    20:52:38 test       L0565 INFO | Exit status: 0
    20:52:38 test       L0565 INFO | Duration: 0.00313782691956
    20:52:38 test       L0565 INFO | Stdout:
    20:52:38 test       L0565 INFO | Hello, world!
    20:52:38 test       L0565 INFO |
    20:52:38 test       L0565 INFO | Stderr:
    20:52:38 test       L0565 INFO |
    20:52:38 test       L0060 ERROR|
    20:52:38 test       L0063 ERROR| Traceback (most recent call last):
    20:52:38 test       L0063 ERROR|   File "$HOME/Code/avocado/avocado/test.py", line 397, in check_reference_stdout
    20:52:38 test       L0063 ERROR|     self.assertEqual(expected, actual, msg)
    20:52:38 test       L0063 ERROR|   File "/usr/lib64/python2.7/unittest/case.py", line 551, in assertEqual
    20:52:38 test       L0063 ERROR|     assertion_func(first, second, msg=msg)
    20:52:38 test       L0063 ERROR|   File "/usr/lib64/python2.7/unittest/case.py", line 544, in _baseAssertEqual
    20:52:38 test       L0063 ERROR|     raise self.failureException(msg)
    20:52:38 test       L0063 ERROR| AssertionError: Actual test sdtout differs from expected one:
    20:52:38 test       L0063 ERROR| Actual:
    20:52:38 test       L0063 ERROR| Hello, world!
    20:52:38 test       L0063 ERROR|
    20:52:38 test       L0063 ERROR| Expected:
    20:52:38 test       L0063 ERROR| Hello, avocado!
    20:52:38 test       L0063 ERROR|
    20:52:38 test       L0064 ERROR|
    20:52:38 test       L0529 ERROR| FAIL home/$USER/Code/avocado/output_record.sh -> AssertionError: Actual test sdtout differs from expected one:
    Actual:
    Hello, world!

    Expected:
    Hello, avocado!

    20:52:38 test       L0516 INFO |

As expected, the test failed because we changed its expectations.

Test log, stdout and stderr in native avocado modules
=====================================================

If needed, you can write directly to the expected stdout and stderr files
from the native test scope. It is important to make the distinction between
the following entities:

* The test logs
* The test expected stdout
* The test expected stderr

The first one is used for debugging and informational purposes. The
framework machinery uses logs to give you more detailed info about your
test, so they are not the most reliable source to compare stdout/err
against. You may log something into the test logs using the methods of the
:attr:`avocado.test.Test.log` class attribute. Consider the example::

    class output_test(test.Test):

        def action(self):
            self.log.info('This goes to the log and it is only informational')

If you need to write directly to the test stdout and stderr streams, there
are two other class attributes for that, :attr:`avocado.test.Test.stdout_log`
and :attr:`avocado.test.Test.stderr_log`, which have the exact same methods
as the log object. So if you want to add content to your expected stdout and
stderr streams, you can do something like::

    class output_test(test.Test):

        def action(self):
            self.log.info('This goes to the log and it is only informational')
            self.stdout_log.info('This goes to the test stdout (will be recorded)')
            self.stderr_log.info('This goes to the test stderr (will be recorded)')

Each one of the last 2 statements will go to ``stdout.expected`` and
``stderr.expected``, should you choose ``--output-check-record all``, and
will be written to the files ``stdout`` and ``stderr`` of the job results
dir every time that test is executed.

Avocado tests run in a separate process
=======================================

In order to avoid tests messing with the environment used by the main
avocado runner process, tests are run in a forked subprocess. This allows
for more robustness (tests cannot easily mess with/break avocado) and some
nifty features, such as setting test timeouts.

Setting a Test Timeout
======================

Sometimes your test suite/test might get stuck forever, and this might
impact your test grid. You can account for that possibility and set up a
``timeout`` parameter for your test. The test timeout can be set by two
means, in the following order of precedence:

* Multiplex variable parameters. You may just set the timeout parameter, as
  in the following simplistic example:

::

    variants:
        - sleeptest:
            sleep_length = 5
            sleep_length_type = float
            timeout = 3
            timeout_type = float

::

    $ avocado run sleeptest --multiplex /tmp/sleeptest-example.mplx
    JOB ID : 6d5a2ff16bb92395100fbc3945b8d253308728c9
    JOB LOG: $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
    TESTS  : 1
    (1/1) sleeptest.1: ERROR (2.97 s)
    PASS : 0
    ERROR: 1
    FAIL : 0
    SKIP : 0
    WARN : 0
    TIME : 2.97 s

::

    $ cat $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
    15:52:51 test       L0143 INFO | START sleeptest.1
    15:52:51 test       L0144 DEBUG|
    15:52:51 test       L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/sleeptest.1/test.log
    15:52:51 test       L0146 DEBUG| Test instance parameters:
    15:52:51 test       L0153 DEBUG|     _name_map_file = {'sleeptest-example.mplx': 'sleeptest'}
    15:52:51 test       L0153 DEBUG|     _short_name_map_file = {'sleeptest-example.mplx': 'sleeptest'}
    15:52:51 test       L0153 DEBUG|     dep = []
    15:52:51 test       L0153 DEBUG|     id = sleeptest
    15:52:51 test       L0153 DEBUG|     name = sleeptest
    15:52:51 test       L0153 DEBUG|     shortname = sleeptest
    15:52:51 test       L0153 DEBUG|     sleep_length = 5.0
    15:52:51 test       L0153 DEBUG|     sleep_length_type = float
    15:52:51 test       L0153 DEBUG|     timeout = 3.0
    15:52:51 test       L0153 DEBUG|     timeout_type = float
    15:52:51 test       L0154 DEBUG|
    15:52:51 test       L0157 DEBUG| Default parameters:
    15:52:51 test       L0159 DEBUG|     sleep_length = 1.0
    15:52:51 test       L0161 DEBUG|
    15:52:51 test       L0162 DEBUG| Test instance params override defaults whenever available
    15:52:51 test       L0163 DEBUG|
    15:52:51 test       L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15670 to end
    15:52:51 test       L0170 INFO |
    15:52:51 sleeptest  L0035 DEBUG| Sleeping for 5.00 seconds
    15:52:54 test       L0057 ERROR|
    15:52:54 test       L0060 ERROR| Traceback (most recent call last):
    15:52:54 test       L0060 ERROR|   File "$HOME/Code/avocado/tests/sleeptest.py", line 36, in action
    15:52:54 test       L0060 ERROR|     time.sleep(self.params.sleep_length)
    15:52:54 test       L0060 ERROR|   File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
    15:52:54 test       L0060 ERROR|     raise exceptions.TestTimeoutError(e_msg)
    15:52:54 test       L0060 ERROR| TestTimeoutError: Timeout reached waiting for sleeptest to end
    15:52:54 test       L0061 ERROR|
    15:52:54 test       L0400 ERROR| ERROR sleeptest.1 -> TestTimeoutError: Timeout reached waiting for sleeptest to end
    15:52:54 test       L0387 INFO |


If you pass that multiplex file to the runner multiplexer, it will register
a timeout of 3 seconds before avocado ends the test forcefully by sending a
:data:`signal.SIGTERM` to the test, making it raise an
:class:`avocado.core.exceptions.TestTimeoutError`.

* Default params attribute. Consider the following example:

::

    import time

    from avocado import test
    from avocado import job


    class TimeoutTest(test.Test):

        """
        Functional test for avocado. Throw a TestTimeoutError.
        """
        default_params = {'timeout': 3.0,
                          'sleep_time': 5.0}

        def action(self):
            """
            This should throw a TestTimeoutError.
            """
            self.log.info('Sleeping for %.2f seconds (2 more than the timeout)',
                          self.params.sleep_time)
            time.sleep(self.params.sleep_time)


    if __name__ == "__main__":
        job.main()

This accomplishes a similar effect to the multiplex setup defined above.

::

    $ avocado run timeouttest
    JOB ID : d78498a54504b481192f2f9bca5ebb9bbb820b8a
    JOB LOG: $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
    TESTS  : 1
    (1/1) timeouttest.1: ERROR (2.97 s)
    PASS : 0
    ERROR: 1
    FAIL : 0
    SKIP : 0
    WARN : 0
    TIME : 2.97 s


::

    $ cat $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
    15:54:28 test       L0143 INFO | START timeouttest.1
    15:54:28 test       L0144 DEBUG|
    15:54:28 test       L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/timeouttest.1/test.log
    15:54:28 test       L0146 DEBUG| Test instance parameters:
    15:54:28 test       L0153 DEBUG|     id = timeouttest
    15:54:28 test       L0154 DEBUG|
    15:54:28 test       L0157 DEBUG| Default parameters:
    15:54:28 test       L0159 DEBUG|     sleep_time = 5.0
    15:54:28 test       L0159 DEBUG|     timeout = 3.0
    15:54:28 test       L0161 DEBUG|
    15:54:28 test       L0162 DEBUG| Test instance params override defaults whenever available
    15:54:28 test       L0163 DEBUG|
    15:54:28 test       L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15759 to end
    15:54:28 test       L0170 INFO |
    15:54:28 timeouttes L0036 INFO | Sleeping for 5.00 seconds (2 more than the timeout)
    15:54:31 test       L0057 ERROR|
    15:54:31 test       L0060 ERROR| Traceback (most recent call last):
    15:54:31 test       L0060 ERROR|   File "$HOME/Code/avocado/tests/timeouttest.py", line 37, in action
    15:54:31 test       L0060 ERROR|     time.sleep(self.params.sleep_time)
    15:54:31 test       L0060 ERROR|   File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
    15:54:31 test       L0060 ERROR|     raise exceptions.TestTimeoutError(e_msg)
    15:54:31 test       L0060 ERROR| TestTimeoutError: Timeout reached waiting for timeouttest to end
    15:54:31 test       L0061 ERROR|
    15:54:31 test       L0400 ERROR| ERROR timeouttest.1 -> TestTimeoutError: Timeout reached waiting for timeouttest to end
    15:54:31 test       L0387 INFO |


Environment Variables for Simple Tests
======================================

Avocado exports avocado variables and multiplexed variables as BASH
environment variables to the running test. Those variables are interesting
to simple tests, because unlike native tests they cannot make use of the
avocado Python API directly, nor can they modify the test parameters.

Here are the current variables that Avocado exports to the tests:

+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| Environment Variable    | Meaning                               | Example                                                                                             |
+=========================+=======================================+=====================================================================================================+
| AVOCADO_VERSION         | Version of Avocado test runner        | 0.12.0                                                                                              |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_BASEDIR    | Base directory of Avocado tests       | $HOME/Downloads/avocado-source/avocado                                                              |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_DATADIR    | Data directory for the test           | $AVOCADO_TEST_BASEDIR/my_test.sh.data                                                               |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_WORKDIR    | Work directory for the test           | /var/tmp/avocado_Bjr_rd/my_test.sh                                                                  |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_SRCDIR     | Source directory for the test         | /var/tmp/avocado_Bjr_rd/my-test.sh/src                                                              |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_LOGDIR     | Log directory for the test            | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1                 |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_LOGFILE    | Log file for the test                 | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/debug.log       |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_OUTPUTDIR  | Output directory for the test         | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/data            |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_SYSINFODIR | The system information directory      | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/sysinfo         |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| *                       | All variables from --multiplex-file   | TIMEOUT=60; IO_WORKERS=10; VM_BYTES=512M; ...                                                       |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+

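As an illustration, a simple test written as an executable Python script (a
sketch; remember to make the file executable) could consume those variables::

    #!/usr/bin/python
    import os
    import sys

    # simple tests report status purely via their exit code
    print "Running under Avocado %s" % os.environ.get('AVOCADO_VERSION', 'unknown')
    print "Test workdir: %s" % os.environ.get('AVOCADO_TEST_WORKDIR', '')
    sys.exit(0)
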
Wrap Up
=======

We recommend that you take a look at the example tests present in the
``examples/tests`` directory, which contains a few samples to take
inspiration from. That directory, besides containing examples, is also used
by the avocado self test suite to do functional testing of avocado itself.

It is also recommended that you take a look at the
:doc:`API documentation <api/modules>` for more possibilities.