.. _writing-tests:

=====================
Writing Avocado Tests
=====================

Test Resolution in Avocado - simple tests vs instrumented tests
===============================================================

What is a test in the Avocado context? Either one of:

* An executable file that returns exit code 0 (PASS) or != 0 (FAIL). This
  is known as a simple test, in Avocado terminology.
* A Python module containing a class derived from :class:`avocado.Test`.
  This is known as an instrumented test, in Avocado terminology. The term
  instrumented is used because the Avocado Python test classes give you
  access to more features for your test, such as logging facilities and more
  sophisticated test APIs.

When you use the Avocado runner, you'll frequently provide paths to files,
which will be inspected and acted upon depending on their contents. The
diagram below shows how Avocado analyzes a file and decides what to do with
it:

.. figure:: diagram.png

Now that we have covered how Avocado resolves tests, let's get down to business.
This section is concerned with writing an Avocado test. The process is not
hard: all you need to do is create a test module, which is a Python file
with a class that inherits from :class:`avocado.Test`. This class only
really needs to implement a method called ``runTest``, which represents the
actual sequence of test operations.

Simple example
==============

Let's re-create an old-time favorite, ``sleeptest``, which is a functional
test for Avocado (old because we also use such a test for autotest). It does
nothing but ``time.sleep([number-seconds])``::

        #!/usr/bin/python

        import time

        from avocado import Test
        from avocado import main


        class SleepTest(Test):

            """
            Example test for Avocado.
            """

            def runTest(self):
                """
                Sleep for length seconds.
                """
                sleep_length = self.params.get('sleep_length', default=1)
                self.log.debug("Sleeping for %.2f seconds", sleep_length)
                time.sleep(sleep_length)


        if __name__ == "__main__":
            main()

This is about the simplest test you can write for Avocado (at least, one using
the Avocado APIs). An Avocado test is basically a class that inherits from
:class:`avocado.Test` and can have any name you like (we'll trust
you'll choose a good name, although we do recommend that the name uses the
CamelCase convention, for PEP8 consistency).

Note that the test object provides you with a number of convenience
attributes, such as ``self.log``, which lets you log debug, info, error and
warning messages. Also note the parameter passing system that Avocado
provides: we frequently want to pass parameters to tests, and we can do that
through what we call a `multiplex file`, which is a configuration file that
not only allows you to provide params to your test, but also lets you easily
create a validation matrix in a concise way. You can find more about the
multiplex file format in :doc:`MultiplexConfig`.

Saving test generated (custom) data
===================================

Each test instance provides a so called ``whiteboard``, which can be accessed
through ``self.whiteboard``. This whiteboard is simply a string that will be
automatically saved to the test results (as long as the output format supports it).
If you choose to save binary data to the whiteboard, it's your responsibility to
encode it first (base64 is the obvious choice).
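
For instance, a sketch of a test that stores binary data (the payload below is
just illustrative, not one of the bundled Avocado examples) could encode it
before assigning it to the whiteboard::

    import base64

    def runTest(self):
        # read a small binary payload (illustrative source of binary data)
        with open('/dev/urandom', 'rb') as source:
            payload = source.read(16)
        # the whiteboard is a string, so encode the raw bytes first
        self.whiteboard = base64.b64encode(payload)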

Building on the previously demonstrated ``sleeptest``, suppose that you want to save the
sleep length to be used by some other script or data analysis tool::

        def runTest(self):
            """
            Sleep for length seconds.
            """
            sleep_length = self.params.get('sleep_length', default=1)
            self.log.debug("Sleeping for %.2f seconds", sleep_length)
            time.sleep(sleep_length)
            self.whiteboard = "%.2f" % sleep_length

The whiteboard can and should be exposed by files generated by the available test result
plugins. The ``results.json`` file already includes the whiteboard for each test.
Additionally, we save a raw copy of the whiteboard contents in a file named
``whiteboard``, at the same level as the ``results.json`` file, for your convenience
(maybe you want to use the result of a benchmark directly with your custom-made scripts
to analyze that particular benchmark result).
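
For example (the job directory name here is just illustrative), another script
could consume the raw whiteboard contents directly::

    $ cat $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/whiteboard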

Accessing test parameters
=========================

Each test has a set of parameters that can be accessed through
``self.params.get($name, $path=None, $default=None)``.
Avocado finds and populates ``self.params`` with all parameters you define in
a multiplex config file (see :doc:`MultiplexConfig`). As an example, consider
the following multiplex file for sleeptest::

    sleeptest:
        type: "builtin"
        short:
            sleep_length: 0.5
        medium:
            sleep_length: 1
        long:
            sleep_length: 5

When running this example with ``avocado run $test --multiplex $file.yaml``,
three variants are executed and the content is injected into the ``/run`` namespace
(see :doc:`MultiplexConfig` for details). Every variant contains the variables
"type" and "sleep_length". To obtain the current value, you need the name
("sleep_length") and its path. The path differs for each variant, so you need
to use the most suitable portion of the path; in this example,
"/run/sleeptest/*" or perhaps "sleeptest/*" might be enough. It depends on what
your setup looks like.

The default value is optional, but always keep in mind to handle it nicely.
Someone might execute your test with different params, or without any params
at all, and it should still work fine.

So the complete example of how to access "sleep_length" would be::

    self.params.get("sleep_length", "/*/sleeptest/*", 1)

There is a way to make this even simpler: it's possible to define the
resolution order, so that for simple queries you can simply omit the path::

    self.params.get("sleep_length", None, 1)
    self.params.get("sleep_length", '*', 1)
    self.params.get("sleep_length", default=1)

One should always try to avoid param clashes (multiple matching keys for a given
path with different origins). If that's not possible (eg. when
you use multiple yaml files) you can modify the resolution order by adjusting
``--mux-entry``. What this option does is slice the params and iterate through the
paths one by one: when there is a match in the first slice, it is returned
without trying the other slices. Note that relative queries only match
within the ``--mux-entry`` slices.

There are many ways to use paths to separate clashing params, or just to make
clearer what you query for. Usually in tests the usage of '*' is sufficient
and the namespacing is not necessary, but it helps make advanced usage
clearer and easier to follow.

When thinking of the path, always think about your users. It's common for them
to extend the default config with additional variants, or to combine it with
different ones to generate just the right scenarios they need. They might
simply inject the values elsewhere (eg. `/run/sleeptest` =>
`/upstream/sleeptest`) or merge another clashing file into the
default path, which won't generate a clash, but will return their values
instead. Then you need to narrow the path (eg. `'*'` => `sleeptest/*`).
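
As a sketch of that kind of disambiguation, using the same query API shown
above::

    # broad query: may also match values that users injected elsewhere
    self.params.get("sleep_length", '*', 1)
    # narrower query: only matches nodes whose path contains "sleeptest"
    self.params.get("sleep_length", 'sleeptest/*', 1)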

More details on that are in :doc:`MultiplexConfig`.

Using a multiplex file
======================

You may use the Avocado runner with a multiplex file to provide params and matrix
generation for sleeptest, just like this::

    $ avocado run sleeptest --multiplex examples/tests/sleeptest.py.data/sleeptest.yaml
    JOB ID    : d565e8dec576d6040f894841f32a836c751f968f
    JOB LOG   : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/job.log
    JOB HTML  : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/html/results.html
    TESTS     : 3
    (1/3) sleeptest: PASS (0.50 s)
    (2/3) sleeptest.1: PASS (1.01 s)
    (3/3) sleeptest.2: PASS (5.01 s)
    PASS      : 3
    ERROR     : 0
    FAIL      : 0
    SKIP      : 0
    WARN      : 0
    INTERRUPT : 0
    TIME      : 6.52 s

The ``--multiplex`` option accepts either just ``$FILE_LOCATION`` or ``$INJECT_TO:$FILE_LOCATION``.
As explained in :doc:`MultiplexConfig`, without any path the content gets
injected into ``/run`` in order to be in the default relative path location.
The ``$INJECT_TO`` can be either a relative path, in which case it's injected into
the ``/run/$INJECT_TO`` location, or an absolute path (starting with ``'/'``), in which
case it's injected directly into the specified path and it's up to the test/framework
developer to get the value from this location (using the path, or adding the path to
``mux-entry``). To understand the difference, execute these commands::

    $ avocado multiplex -t examples/tests/sleeptest.py.data/sleeptest.yaml
    $ avocado multiplex -t duration:examples/tests/sleeptest.py.data/sleeptest.yaml
    $ avocado multiplex -t /my/location:examples/tests/sleeptest.py.data/sleeptest.yaml

Note that, as your multiplex file specifies all parameters for sleeptest, you
can't leave the test ID empty::

    $ scripts/avocado run --multiplex examples/tests/sleeptest/sleeptest.yaml
    Empty test ID. A test path or alias must be provided

You can also execute multiple tests with the same multiplex file::

    ./scripts/avocado run sleeptest synctest --multiplex examples/tests/sleeptest.py.data/sleeptest.yaml
    JOB ID     : 72166988c13fec26fcc9c2e504beec8edaad4761
    JOB LOG    : /home/medic/avocado/job-results/job-2015-05-15T11.02-7216698/job.log
    JOB HTML   : /home/medic/avocado/job-results/job-2015-05-15T11.02-7216698/html/results.html
    TESTS      : 8
    (1/8) sleeptest.py: PASS (1.00 s)
    (2/8) sleeptest.py.1: PASS (1.00 s)
    (3/8) sleeptest.py.2: PASS (1.00 s)
    (4/8) sleeptest.py.3: PASS (1.00 s)
    (5/8) synctest.py: PASS (1.31 s)
    (6/8) synctest.py.1: PASS (1.48 s)
    (7/8) synctest.py.2: PASS (3.36 s)
    (8/8) synctest.py.3: PASS (3.59 s)
    PASS       : 8
    ERROR      : 0
    FAIL       : 0
    SKIP       : 0
    WARN       : 0
    INTERRUPT  : 0
    TIME       : 13.76 s

Avocado tests are also unittests
================================

Since Avocado tests inherit from :class:`unittest.TestCase`, you can use all
the ``assert*`` class methods in your tests. Some silly examples::

    class RandomExamples(Test):
        def runTest(self):
            self.log.debug("Verifying some random math...")
            four = 2 * 2
            four_ = 2 + 2
            self.assertEqual(four, four_, "something is very wrong here!")

            self.log.debug("Verifying if a variable is set to True...")
            variable = True
            self.assertTrue(variable)

            self.log.debug("Verifying if this test is an instance of avocado.Test")
            self.assertIsInstance(self, Test)

The reason why we have a shebang at the beginning of the test is that
Avocado tests, similarly to unittests, can use an entry point, called
:func:`avocado.main`, that calls the avocado libs to look for test classes and
execute their main entry point. This is an optional, but fairly handy feature.
In case you want to use it, don't forget to ``chmod +x`` your test.

Executing an Avocado test gives::

    $ examples/tests/sleeptest.py
    JOB ID    : de6c1e4c227c786dc4d926f6fca67cda34d96276
    JOB LOG   : $HOME/avocado/job-results/job-2014-08-12T15.48-de6c1e4c/job.log
    JOB HTML  : $HOME/avocado/job-results/job-2014-08-12T15.48-de6c1e4c/html/results.html
    TESTS     : 1
    (1/1) sleeptest.1: PASS (1.00 s)
    PASS      : 1
    ERROR     : 0
    FAIL      : 0
    SKIP      : 0
    WARN      : 0
    INTERRUPT : 0
    TIME      : 1.00 s

Running tests with nosetests
============================

`nose <https://nose.readthedocs.org/>`__ is a Python testing framework with
similar goals to Avocado, except that Avocado also intends to provide tools to
assemble a fully automated test grid, plus a richer test API for tests on the
Linux platform. Regardless, since an Avocado test class is also a unittest
class, you can run them with the ``nosetests`` application::

    $ nosetests examples/tests/sleeptest.py
    .
    ----------------------------------------------------------------------
    Ran 1 test in 1.004s

    OK

Setup and cleanup methods
=========================

If you need to perform setup actions before/after your test, you may do so
in the ``setUp`` and ``tearDown`` methods, respectively. A minimal sketch is
shown below; the following section gives a fuller, real-world example.
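
Here is such a sketch (illustrative only, not one of the bundled Avocado
examples; the temporary file just stands in for whatever resource your test
needs)::

    import os
    import tempfile

    from avocado import Test
    from avocado import main


    class SetupCleanupTest(Test):

        """
        Illustrate setUp and tearDown.
        """

        def setUp(self):
            # create a scratch file before the test body runs
            handle, self.scratch_file = tempfile.mkstemp()
            os.close(handle)

        def runTest(self):
            # the resource prepared in setUp is available here
            self.assertTrue(os.path.exists(self.scratch_file))

        def tearDown(self):
            # clean up even if runTest failed
            os.remove(self.scratch_file)


    if __name__ == "__main__":
        main()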

Running third party test suites
===============================

It is very common in test automation workloads to use test suites developed
308
by third parties. By wrapping the execution code inside an Avocado test module,
309 310 311 312 313
you gain access to the facilities and API provided by the framework. Let's
say you want to pick up a test suite written in C that it is in a tarball,
uncompress it, compile the suite code, and then executing the test. Here's
an example that does that::

    #!/usr/bin/python

    import os

    from avocado import Test
    from avocado import main
    from avocado.utils import archive
    from avocado.utils import build
    from avocado.utils import process


    class SyncTest(Test):

        """
        Execute the synctest test suite.
        """
        default_params = {'sync_tarball': 'synctest.tar.bz2',
                          'sync_length': 100,
                          'sync_loop': 10}

        def setUp(self):
            """
            Set default params and build the synctest suite.
            """
            # Build the synctest suite
            self.cwd = os.getcwd()
            tarball_path = self.get_data_path(self.params.sync_tarball)
            archive.extract(tarball_path, self.srcdir)
            self.srcdir = os.path.join(self.srcdir, 'synctest')
            build.make(self.srcdir)

        def runTest(self):
            """
            Execute synctest with the appropriate params.
            """
            os.chdir(self.srcdir)
            cmd = ('./synctest %s %s' %
                   (self.params.sync_length, self.params.sync_loop))
            process.system(cmd)
            os.chdir(self.cwd)


    if __name__ == "__main__":
        main()

Here we have an example of the ``setUp`` method in action: we get the
location of the test suite code (tarball) through
:func:`avocado.Test.get_data_path`, then uncompress the tarball through
:func:`avocado.utils.archive.extract`, an API that will
decompress the suite tarball, followed by ``build.make``, which will build the
suite.

In this example, the ``runTest`` method just changes into the base directory of
the compiled suite and executes the ``./synctest`` command, with appropriate
parameters, using :func:`avocado.utils.process.system`.

Test Output Check and Output Record Mode
========================================

On many occasions, you want to go simpler: just check if the output of a
given application matches an expected output. In order to help with this common
use case, we offer the option ``--output-check-record [mode]`` to the test runner::

      --output-check-record OUTPUT_CHECK_RECORD
                            Record output streams of your tests to reference files
                            (valid options: none (do not record output streams),
                            all (record both stdout and stderr), stdout (record
                            only stdout), stderr (record only stderr). Default:
                            none

If this option is used, it will store the stdout or stderr of the process being
executed (or both, if you specified ``all``) to reference files: ``stdout.expected``
and ``stderr.expected``. Those files will be recorded in the test data dir. The
data dir is in the same directory as the test source file, named
``[source_file_name.data]``. Let's take as an example the test ``synctest.py``. In a
fresh checkout of Avocado, you can see::

        examples/tests/synctest.py.data/stderr.expected
        examples/tests/synctest.py.data/stdout.expected

From those 2 files, only ``stdout.expected`` is non-empty::

    $ cat examples/tests/synctest.py.data/stdout.expected
    PAR : waiting
    PASS : sync interrupted

The output files were originally obtained by running the test runner with the
option ``--output-check-record all``::

    $ scripts/avocado run --output-check-record all synctest
    JOB ID    : bcd05e4fd33e068b159045652da9eb7448802be5
    JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.20-bcd05e4/job.log
    TESTS     : 1
    (1/1) synctest.py: PASS (2.20 s)
    PASS      : 1
    ERROR     : 0
    FAIL      : 0
    SKIP      : 0
    WARN      : 0
    TIME      : 2.20 s


After the reference files are added, the check process is transparent, in the sense
that you do not need to provide special flags to the test runner.
Now, every time the test is executed, after it is done running, it will check
if the outputs are exactly right before considering the test as PASSed. If you want
to override the default behavior and skip the output check entirely, you may provide
the flag ``--output-check=off`` to the test runner.
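
For example (the test name here is just illustrative)::

    $ avocado run synctest.py --output-check=off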

The :mod:`avocado.utils.process` APIs have a parameter ``allow_output_check`` (defaults to ``all``), so that you
can select which process outputs will go to the reference files, should you choose to record them. You may choose
``all``, for both stdout and stderr, ``stdout``, for stdout only, ``stderr``, for stderr only, or ``none``,
to allow neither of them to be recorded and checked.
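
As an illustration, a sketch of a test step that runs a command whose output
should not be recorded or checked (assuming the process API in use accepts the
``allow_output_check`` parameter described above) could look like::

    from avocado.utils import process

    def runTest(self):
        # this command's stdout/stderr will not go to the reference files
        process.system('ls -l /tmp', allow_output_check='none')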

This process also works fine with simple tests, which are programs or shell scripts
that return 0 (PASS) or != 0 (FAIL). Let's consider our bogus example::

    $ cat output_record.sh
    #!/bin/bash
    echo "Hello, world!"

Let's record the output for this one::

    $ scripts/avocado run output_record.sh --output-check-record all
    JOB ID    : 25c4244dda71d0570b7f849319cd71fe1722be8b
    JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.49-25c4244/job.log
    TESTS     : 1
    (1/1) home/$USER/Code/avocado/output_record.sh: PASS (0.01 s)
    PASS      : 1
    ERROR     : 0
    FAIL      : 0
    SKIP      : 0
    WARN      : 0
    TIME      : 0.01 s

After this is done, you'll notice that the test data directory
appeared at the same level as our shell script, containing 2 files::

    $ ls output_record.sh.data/
    stderr.expected  stdout.expected

Let's look at what's in each of them::

    $ cat output_record.sh.data/stdout.expected
    Hello, world!
    $ cat output_record.sh.data/stderr.expected
    $

Now, every time this test runs, it'll take into account the expected files that
were recorded; there is no need to do anything else but run the test. Let's see what
happens if we change the ``stdout.expected`` file contents to ``Hello, Avocado!``::

    $ scripts/avocado run output_record.sh
    JOB ID    : f0521e524face93019d7cb99c5765aedd933cb2e
    JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
    TESTS     : 1
    (1/1) home/$USER/Code/avocado/output_record.sh: FAIL (0.02 s)
    PASS      : 0
    ERROR     : 0
    FAIL      : 1
    SKIP      : 0
    WARN      : 0
    TIME      : 0.02 s

Verifying the failure reason::

    $ cat $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
    20:52:38 test       L0163 INFO | START home/$USER/Code/avocado/output_record.sh
    20:52:38 test       L0164 DEBUG|
    20:52:38 test       L0165 DEBUG| Test instance parameters:
    20:52:38 test       L0173 DEBUG|
    20:52:38 test       L0176 DEBUG| Default parameters:
    20:52:38 test       L0180 DEBUG|
    20:52:38 test       L0181 DEBUG| Test instance params override defaults whenever available
    20:52:38 test       L0182 DEBUG|
    20:52:38 process    L0242 INFO | Running '$HOME/Code/avocado/output_record.sh'
    20:52:38 process    L0310 DEBUG| [stdout] Hello, world!
    20:52:38 test       L0565 INFO | Command: $HOME/Code/avocado/output_record.sh
    20:52:38 test       L0565 INFO | Exit status: 0
    20:52:38 test       L0565 INFO | Duration: 0.00313782691956
    20:52:38 test       L0565 INFO | Stdout:
    20:52:38 test       L0565 INFO | Hello, world!
    20:52:38 test       L0565 INFO |
    20:52:38 test       L0565 INFO | Stderr:
    20:52:38 test       L0565 INFO |
    20:52:38 test       L0060 ERROR|
    20:52:38 test       L0063 ERROR| Traceback (most recent call last):
    20:52:38 test       L0063 ERROR|   File "$HOME/Code/avocado/avocado/test.py", line 397, in check_reference_stdout
    20:52:38 test       L0063 ERROR|     self.assertEqual(expected, actual, msg)
    20:52:38 test       L0063 ERROR|   File "/usr/lib64/python2.7/unittest/case.py", line 551, in assertEqual
    20:52:38 test       L0063 ERROR|     assertion_func(first, second, msg=msg)
    20:52:38 test       L0063 ERROR|   File "/usr/lib64/python2.7/unittest/case.py", line 544, in _baseAssertEqual
    20:52:38 test       L0063 ERROR|     raise self.failureException(msg)
    20:52:38 test       L0063 ERROR| AssertionError: Actual test sdtout differs from expected one:
    20:52:38 test       L0063 ERROR| Actual:
    20:52:38 test       L0063 ERROR| Hello, world!
    20:52:38 test       L0063 ERROR|
    20:52:38 test       L0063 ERROR| Expected:
    20:52:38 test       L0063 ERROR| Hello, Avocado!
    20:52:38 test       L0063 ERROR|
    20:52:38 test       L0064 ERROR|
    20:52:38 test       L0529 ERROR| FAIL home/$USER/Code/avocado/output_record.sh -> AssertionError: Actual test sdtout differs from expected one:
    Actual:
    Hello, world!

    Expected:
    Hello, Avocado!

    20:52:38 test       L0516 INFO |

As expected, the test failed because we changed its expectations.

Test log, stdout and stderr in native Avocado modules
=====================================================

If needed, you can write directly to the expected stdout and stderr files
from the native test scope. It is important to make the distinction between
the following entities:

* The test logs
* The test expected stdout
* The test expected stderr

The first one is used for debugging and informational purposes. Additionally,
writing to ``self.log.warning`` causes the test to be marked as dirty and, when
everything else goes well, the test ends with WARN. This means that the test
passed, but there were unrelated unexpected situations described in the warning
log.

You may log something into the test logs using the methods of the
:attr:`avocado.Test.log` class attribute. Consider the example::

    class output_test(Test):

        def runTest(self):
            self.log.info('This goes to the log and it is only informational')
            self.log.warn('Oh, something unexpected, non-critical happened, '
                          'but we can continue.')
            self.log.error("Describe the error here and don't forget to raise "
                           "an exception yourself. Writing to self.log.error "
                           "won't do that for you.")
            self.log.debug('Everybody look, I had a good lunch today...')

If you need to write directly to the test stdout and stderr streams, there
are another 2 class attributes for that, :attr:`avocado.Test.stdout_log`
and :attr:`avocado.Test.stderr_log`, which have the exact same methods
as the log object. So if you want to add stuff to your expected stdout and
stderr streams, you can do something like::

    class output_test(Test):

        def runTest(self):
            self.log.info('This goes to the log and it is only informational')
            self.stdout_log.info('This goes to the test stdout (will be recorded)')
            self.stderr_log.info('This goes to the test stderr (will be recorded)')

Each one of the last 2 statements will go to ``stdout.expected`` and
``stderr.expected`` respectively, should you choose ``--output-check-record all``,
and will be written to the ``stdout`` and ``stderr`` files of the job results dir
every time that test is executed.

Avocado Tests run in a separate process
=======================================

To avoid tests messing with the environment used by the main
Avocado runner process, tests are run in a forked subprocess. This allows
for more robustness (tests are not easily able to mess with/break Avocado) and
some nifty features, such as setting test timeouts.

Setting a Test Timeout
======================

Sometimes your test suite/test might get stuck forever, and this might
impact your test grid. You can account for that possibility and set up a
``timeout`` parameter for your test. The test timeout can be set through
2 means, in the following order of precedence:

* Multiplex variable parameters. You may just set the timeout parameter, like
  in the following simplistic example:

::

    sleeptest:
        sleep_length: 5.0
        sleep_length_type: float
        timeout: 3.0
        timeout_type: float

::

    $ avocado run sleeptest --multiplex /tmp/sleeptest-example.yaml
    JOB ID    : 6d5a2ff16bb92395100fbc3945b8d253308728c9
    JOB LOG   : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
    JOB HTML  : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/html/results.html
    TESTS     : 1
    (1/1) sleeptest.1: ERROR (2.97 s)
    PASS      : 0
    ERROR     : 1
    FAIL      : 0
    SKIP      : 0
    WARN      : 0
    INTERRUPT : 0
    TIME      : 2.97 s

::

    $ cat $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
    15:52:51 test       L0143 INFO | START sleeptest.1
    15:52:51 test       L0144 DEBUG|
    15:52:51 test       L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/sleeptest.1/test.log
    15:52:51 test       L0146 DEBUG| Test instance parameters:
    15:52:51 test       L0153 DEBUG|     _name_map_file = {'sleeptest-example.yaml': 'sleeptest'}
    15:52:51 test       L0153 DEBUG|     _short_name_map_file = {'sleeptest-example.yaml': 'sleeptest'}
    15:52:51 test       L0153 DEBUG|     dep = []
    15:52:51 test       L0153 DEBUG|     id = sleeptest
    15:52:51 test       L0153 DEBUG|     name = sleeptest
    15:52:51 test       L0153 DEBUG|     shortname = sleeptest
    15:52:51 test       L0153 DEBUG|     sleep_length = 5.0
    15:52:51 test       L0153 DEBUG|     sleep_length_type = float
    15:52:51 test       L0153 DEBUG|     timeout = 3.0
    15:52:51 test       L0153 DEBUG|     timeout_type = float
    15:52:51 test       L0154 DEBUG|
    15:52:51 test       L0157 DEBUG| Default parameters:
    15:52:51 test       L0159 DEBUG|     sleep_length = 1.0
    15:52:51 test       L0161 DEBUG|
    15:52:51 test       L0162 DEBUG| Test instance params override defaults whenever available
    15:52:51 test       L0163 DEBUG|
    15:52:51 test       L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15670 to end
    15:52:51 test       L0170 INFO |
    15:52:51 sleeptest  L0035 DEBUG| Sleeping for 5.00 seconds
    15:52:54 test       L0057 ERROR|
    15:52:54 test       L0060 ERROR| Traceback (most recent call last):
    15:52:54 test       L0060 ERROR|   File "$HOME/Code/avocado/tests/sleeptest.py", line 36, in action
    15:52:54 test       L0060 ERROR|     time.sleep(self.params.sleep_length)
    15:52:54 test       L0060 ERROR|   File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
    15:52:54 test       L0060 ERROR|     raise exceptions.TestTimeoutError(e_msg)
    15:52:54 test       L0060 ERROR| TestTimeoutError: Timeout reached waiting for sleeptest to end
    15:52:54 test       L0061 ERROR|
    15:52:54 test       L0400 ERROR| ERROR sleeptest.1 -> TestTimeoutError: Timeout reached waiting for sleeptest to end
    15:52:54 test       L0387 INFO |


If you pass that multiplex file to the runner multiplexer, this will register
a timeout of 3 seconds before Avocado ends the test forcefully by sending a
:class:`signal.SIGTERM` to the test, making it raise a
:class:`avocado.core.exceptions.TestTimeoutError`.

* Default params attribute. Consider the following example:

::

    import time

    from avocado import Test
    from avocado import main


    class TimeoutTest(Test):

        """
        Functional test for Avocado. Throw a TestTimeoutError.
        """
        default_params = {'timeout': 3.0,
                          'sleep_time': 5.0}

        def runTest(self):
            """
            This should throw a TestTimeoutError.
            """
            self.log.info('Sleeping for %.2f seconds (2 more than the timeout)',
                          self.params.sleep_time)
            time.sleep(self.params.sleep_time)


    if __name__ == "__main__":
        main()

This accomplishes a similar effect to the multiplex setup defined above.

::

    $ avocado run timeouttest
    JOB ID    : d78498a54504b481192f2f9bca5ebb9bbb820b8a
    JOB LOG   : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
    JOB HTML  : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/html/results.html
    TESTS     : 1
    (1/1) timeouttest.1: ERROR (2.97 s)
    PASS      : 0
    ERROR     : 1
    FAIL      : 0
    SKIP      : 0
    WARN      : 0
    INTERRUPT : 0
    TIME      : 2.97 s

::

    $ cat $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
    15:54:28 test       L0143 INFO | START timeouttest.1
    15:54:28 test       L0144 DEBUG|
    15:54:28 test       L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/timeouttest.1/test.log
    15:54:28 test       L0146 DEBUG| Test instance parameters:
    15:54:28 test       L0153 DEBUG|     id = timeouttest
    15:54:28 test       L0154 DEBUG|
    15:54:28 test       L0157 DEBUG| Default parameters:
    15:54:28 test       L0159 DEBUG|     sleep_time = 5.0
    15:54:28 test       L0159 DEBUG|     timeout = 3.0
    15:54:28 test       L0161 DEBUG|
    15:54:28 test       L0162 DEBUG| Test instance params override defaults whenever available
    15:54:28 test       L0163 DEBUG|
    15:54:28 test       L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15759 to end
    15:54:28 test       L0170 INFO |
    15:54:28 timeouttes L0036 INFO | Sleeping for 5.00 seconds (2 more than the timeout)
    15:54:31 test       L0057 ERROR|
    15:54:31 test       L0060 ERROR| Traceback (most recent call last):
    15:54:31 test       L0060 ERROR|   File "$HOME/Code/avocado/tests/timeouttest.py", line 37, in action
    15:54:31 test       L0060 ERROR|     time.sleep(self.params.sleep_time)
    15:54:31 test       L0060 ERROR|   File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
    15:54:31 test       L0060 ERROR|     raise exceptions.TestTimeoutError(e_msg)
    15:54:31 test       L0060 ERROR| TestTimeoutError: Timeout reached waiting for timeouttest to end
    15:54:31 test       L0061 ERROR|
    15:54:31 test       L0400 ERROR| ERROR timeouttest.1 -> TestTimeoutError: Timeout reached waiting for timeouttest to end
    15:54:31 test       L0387 INFO |

Environment Variables for Simple Tests
======================================

Avocado exports Avocado variables and multiplexed variables as BASH environment
variables to the running test. These variables are interesting to simple tests,
because they cannot make use of the Avocado Python API directly, like native
tests can; the exported environment is how they get access to Avocado
information and to the multiplexed test parameters.

Here are the current variables that Avocado exports to the tests:

+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| Environment Variable    | Meaning                               | Example                                                                                             |
+=========================+=======================================+=====================================================================================================+
| AVOCADO_VERSION         | Version of Avocado test runner        | 0.12.0                                                                                              |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_BASEDIR    | Base directory of Avocado tests       | $HOME/Downloads/avocado-source/avocado                                                              |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_DATADIR    | Data directory for the test           | $AVOCADO_TEST_BASEDIR/my_test.sh.data                                                               |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_WORKDIR    | Work directory for the test           | /var/tmp/avocado_Bjr_rd/my_test.sh                                                                  |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_SRCDIR     | Source directory for the test         | /var/tmp/avocado_Bjr_rd/my-test.sh/src                                                              |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_LOGDIR     | Log directory for the test            | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1                 |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_LOGFILE    | Log file for the test                 | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/debug.log       |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_OUTPUTDIR  | Output directory for the test         | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/data            |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_SYSINFODIR | The system information directory      | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/sysinfo         |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| *                       | All variables from --multiplex-file   | TIMEOUT=60; IO_WORKERS=10; VM_BYTES=512M; ...                                                       |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
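
As an illustration, a simple test (a shell script) could use some of these
variables; the commands below are just a sketch, not one of the bundled
Avocado examples::

    #!/bin/bash
    # a simple test: the exit code decides PASS/FAIL
    echo "Running under Avocado version: $AVOCADO_VERSION"
    # save custom data where the test results are collected
    uname -a > "$AVOCADO_TEST_OUTPUTDIR/kernel_version"
    # use the per-test scratch area for temporary files
    cd "$AVOCADO_TEST_WORKDIR" || exit 1
    exit 0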


Simple Tests BASH extensions
============================

To enhance simple tests one can use a supported set of libraries we created. The
only requirement is to use::

    PATH=$(avocado "exec-path"):$PATH

which injects the path to the Avocado utils into the shell PATH. Take a look at
``avocado exec-path`` to see the list of available functions, and take a look at
``examples/tests/simplewarning.sh`` for inspiration.


Wrap Up
=======

We recommend you take a look at the example tests present in the
``examples/tests`` directory, which contains a few samples to take some
inspiration from. That directory, besides containing examples, is also used by
the Avocado self test suite to do functional testing of Avocado itself.

It is also recommended that you take a look at the
:doc:`API documentation <api/modules>` for more possibilities.