.. _writing-tests:

=====================
Writing Avocado Tests
=====================

We are going to write an Avocado test in Python, inheriting from
:class:`avocado.Test`. This makes it a so-called instrumented test.

Basic example
=============

Let's re-create an old-time favorite, ``sleeptest`` [#f1]_. It is so simple
that it does nothing besides sleeping for a while::

        import time

        from avocado import Test

        class SleepTest(Test):

            def test(self):
                sleep_length = self.params.get('sleep_length', default=1)
                self.log.debug("Sleeping for %.2f seconds", sleep_length)
                time.sleep(sleep_length)

This is about the simplest test you can write for Avocado, while still
leveraging its API power.

What is an Avocado Test
-----------------------

As can be seen in the example above, an Avocado test is a method that
starts with ``test`` in a class that inherits from :class:`avocado.Test`.

Multiple tests and naming conventions
-------------------------------------

You can have multiple tests in a single class.

To do so, just give the methods names that start with ``test``, say
``test_foo``, ``test_bar`` and so on. We recommend you follow this naming
style, as defined in the `PEP8 Function Names`_ section.

For the class name, you can pick any name you like, but we also recommend
that it follows the CamelCase convention, also known as CapWords, defined
in the PEP 8 document under `Class Names`_.
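
For illustration, a minimal sketch following both conventions could look like
this (the class and method bodies are made up for the example)::

    from avocado import Test

    class MultipleTests(Test):

        def test_foo(self):
            self.assertTrue(True)

        def test_bar(self):
            self.assertEqual(2 + 2, 4)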

Convenience Attributes
----------------------

Note that the test class provides you with a number of convenience attributes:

* A ready-to-use log mechanism for your test, which can be accessed by means
  of ``self.log``. It lets you log debug, info, error and warning messages.
* A parameter passing system (and fetching system) that can be accessed by
  means of ``self.params``. This is hooked to the Multiplexer, about which
  you can find more information at :doc:`MultiplexConfig`.

Saving test generated (custom) data
===================================

Each test instance provides a so called ``whiteboard``. It can be accessed
through ``self.whiteboard``. This whiteboard is simply a string that will be
automatically saved to test results (as long as the output format supports it).
If you choose to save binary data to the whiteboard, it's your responsibility to
encode it first (base64 is the obvious choice).
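
As a minimal sketch of that (the binary payload here is made up for the
example)::

    import base64

    from avocado import Test

    class BinaryWhiteboard(Test):

        def test(self):
            binary_blob = b'\x00\x01\x02'  # pretend the test produced this
            # encode first, so the whiteboard remains a plain string
            self.whiteboard = base64.b64encode(binary_blob)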

Building on the previously demonstrated ``sleeptest``, suppose that you want to save the
sleep length to be used by some other script or data analysis tool::

        def test(self):
            sleep_length = self.params.get('sleep_length', default=1)
            self.log.debug("Sleeping for %.2f seconds", sleep_length)
            time.sleep(sleep_length)
            self.whiteboard = "%.2f" % sleep_length

The whiteboard can and should be exposed by files generated by the available test result
plugins. The ``results.json`` file already includes the whiteboard for each test.
Additionally, we'll save a raw copy of the whiteboard contents in a file named
``whiteboard``, at the same level as the ``results.json`` file, for your convenience
(maybe you want to use the result of a benchmark directly with your custom scripts
to analyze that particular benchmark result).
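
For instance, assuming the default job results location and a ``latest``
symlink pointing to the most recent job (details may differ on your setup),
the raw contents saved by the sleeptest example above could be inspected
with::

    $ cat $HOME/avocado/job-results/latest/whiteboard
    1.00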

Accessing test parameters
=========================

Each test has a set of parameters that can be accessed through
``self.params.get($name, $path=None, $default=None)``.
Avocado finds and populates ``self.params`` with all parameters you define in
a Multiplex Config file (see :doc:`MultiplexConfig`). As an example, consider
the following multiplex file for sleeptest::

    sleeptest: !mux
        type: "builtin"
        short:
            sleep_length: 0.5
        medium:
            sleep_length: 1
        long:
            sleep_length: 5

When running this example with ``avocado run $test --multiplex $file.yaml``,
three variants are executed and the content is injected into the ``/run``
namespace (see :doc:`MultiplexConfig` for details). Every variant contains the
variables "type" and "sleep_length". To obtain the current value, you need the
name ("sleep_length") and its path. The path differs for each variant, so you
need to use the most suitable portion of the path; in this example,
"/run/sleeptest/*" or perhaps "sleeptest/*" might be enough. It depends on
what your setup looks like.

The default value is optional, but always keep in mind to handle it gracefully.
Someone might be executing your test with different params or without any
params at all; it should still work fine.

So the complete example of how to access "sleep_length" would be::

    self.params.get("sleep_length", "/*/sleeptest/*", 1)

There is a way to make this even simpler: it's possible to define the
resolution order, and then for simple queries you can simply omit the path::

    self.params.get("sleep_length", None, 1)
    self.params.get("sleep_length", '*', 1)
    self.params.get("sleep_length", default=1)

One should always try to avoid param clashes (multiple matching keys for a
given path with different origins). If that's not possible (e.g. when you use
multiple yaml files), you can adjust the default paths with the ``--mux-path``
option. It slices the params and iterates through the path slices one by one;
when there is a match in the first slice, it is returned without trying the
other slices. Note that relative queries only match inside the ``--mux-path``
slices.

There are many ways to use paths to separate clashing params or just to make
clearer what you are querying for. Usually in tests the usage of '*' is
sufficient and the namespacing is not necessary, but it helps make advanced
usage clearer and easier to follow.
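
As an illustration, suppose two yaml files both define a ``timeout`` key, one
under ``/run/sleeptest`` and another under a hypothetical ``/run/synctest``
namespace. A more specific path resolves the clash::

    # ambiguous if more than one namespace defines 'timeout'
    self.params.get("timeout", '*', default=60)
    # an explicit path picks the sleeptest value
    self.params.get("timeout", "/run/sleeptest/*", default=60)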

When thinking of the path, always think about your users. It's common for them
to extend the default config with additional variants, or to combine it with
different ones to generate just the right scenarios they need. They might
simply inject the values elsewhere (e.g. ``/run/sleeptest`` =>
``/upstream/sleeptest``), or they can merge another clashing file into the
default path, which won't generate a clash, but would return their values
instead. Then you need to make the path more specific (e.g. ``'*'`` =>
``sleeptest/*``).

More details on that are in :doc:`MultiplexConfig`.

Using a multiplex file
======================

You may use the Avocado runner with a multiplex file to provide params and
matrix generation for sleeptest, like this::

    $ avocado run sleeptest --multiplex examples/tests/sleeptest.py.data/sleeptest.yaml
    JOB ID    : d565e8dec576d6040f894841f32a836c751f968f
    JOB LOG   : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/job.log
    TESTS     : 3
     (1/3) sleeptest: PASS (0.50 s)
     (2/3) sleeptest.1: PASS (1.01 s)
     (3/3) sleeptest.2: PASS (5.01 s)
    RESULTS    : PASS 3 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
    JOB HTML  : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/html/results.html
    TIME : 6.52 s

The ``--multiplex`` option accepts either just ``$FILE_LOCATION`` or ``$INJECT_TO:$FILE_LOCATION``.
As explained in :doc:`MultiplexConfig`, without any path the content gets
injected into ``/run``, in order to be in the default relative path location.
``$INJECT_TO`` can be either a relative path, in which case it's injected into
the ``/run/$INJECT_TO`` location, or an absolute path (starting with ``'/'``),
in which case it's injected directly into the specified path and it's up to the
test/framework developer to get the value from this location (using a path
query or adding the path to ``--mux-path``). To understand the difference,
execute these commands::

    $ avocado multiplex -t examples/tests/sleeptest.py.data/sleeptest.yaml
    $ avocado multiplex -t duration:examples/tests/sleeptest.py.data/sleeptest.yaml
    $ avocado multiplex -t /my/location:examples/tests/sleeptest.py.data/sleeptest.yaml

Note that, as your multiplex file specifies all parameters for sleeptest, you
can't leave the test ID empty::

    $ scripts/avocado run --multiplex examples/tests/sleeptest/sleeptest.yaml
    Empty test ID. A test path or alias must be provided

You can also execute multiple tests with the same multiplex file::

    ./scripts/avocado run sleeptest synctest --multiplex examples/tests/sleeptest.py.data/sleeptest.yaml
    JOB ID     : 72166988c13fec26fcc9c2e504beec8edaad4761
    JOB LOG    : /home/medic/avocado/job-results/job-2015-05-15T11.02-7216698/job.log
    TESTS      : 8
     (1/8) sleeptest.py: PASS (1.00 s)
     (2/8) sleeptest.py.1: PASS (1.00 s)
     (3/8) sleeptest.py.2: PASS (1.00 s)
     (4/8) sleeptest.py.3: PASS (1.00 s)
     (5/8) synctest.py: PASS (1.31 s)
     (6/8) synctest.py.1: PASS (1.48 s)
     (7/8) synctest.py.2: PASS (3.36 s)
     (8/8) synctest.py.3: PASS (3.59 s)
    RESULTS    : PASS 8 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
    JOB HTML   : /home/medic/avocado/job-results/job-2015-05-15T11.02-7216698/html/results.html
    TIME       : 13.76 s

:class:`unittest.TestCase` heritage
===================================

Since an Avocado test inherits from :class:`unittest.TestCase`, you
can use all the assertion methods provided by its parent class.

The code example below uses :meth:`assertEqual
<unittest.TestCase.assertEqual>`, :meth:`assertTrue
<unittest.TestCase.assertTrue>` and :meth:`assertIsInstance
<unittest.TestCase.assertIsInstance>`::

    from avocado import Test

    class RandomExamples(Test):
        def test(self):
            self.log.debug("Verifying some random math...")
            four = 2 * 2
            four_ = 2 + 2
            self.assertEqual(four, four_, "something is very wrong here!")

            self.log.debug("Verifying if a variable is set to True...")
            variable = True
            self.assertTrue(variable)

            self.log.debug("Verifying if this test is an instance of test.Test")
            self.assertIsInstance(self, test.Test)

Running tests under other :mod:`unittest` runners
-------------------------------------------------

`nose <https://nose.readthedocs.org/>`__ is another Python testing framework
that is also compatible with :mod:`unittest`.

Because of that, you can run Avocado tests with the ``nosetests`` application::

    $ nosetests examples/tests/sleeptest.py
    .
    ----------------------------------------------------------------------
    Ran 1 test in 1.004s

    OK

Conversely, you can also use the standard :func:`unittest.main` entry point to run an
Avocado test. Check out the following code, to be saved as ``dummy.py``::

   from avocado import Test
   from unittest import main

   class Dummy(Test):
       def test(self):
           self.assertTrue(True)

   if __name__ == '__main__':
       main()

It can be run by::

   $ python dummy.py
   .
   ----------------------------------------------------------------------
   Ran 1 test in 0.000s

   OK

Setup and cleanup methods
=========================

If you need to perform setup actions before/after your test, you may do so
in the ``setUp`` and ``tearDown`` methods, respectively. We'll give examples
in the following section.
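
Before that, here is a minimal sketch of the idea (the temporary file logic is
made up for the example)::

    import os
    import tempfile

    from avocado import Test

    class SetupCleanup(Test):

        def setUp(self):
            # runs before the test method
            handle, self.tmpfile = tempfile.mkstemp()
            os.close(handle)

        def test(self):
            self.assertTrue(os.path.exists(self.tmpfile))

        def tearDown(self):
            # runs after the test method, even when the test fails
            os.remove(self.tmpfile)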

Running third party test suites
===============================

It is very common in test automation workloads to use test suites developed
by third parties. By wrapping the execution code inside an Avocado test module,
you gain access to the facilities and API provided by the framework. Let's
say you want to pick up a test suite written in C that is distributed in a
tarball, uncompress it, compile the suite code, and then execute the test.
Here's an example that does that::

    #!/usr/bin/python

    import os

    from avocado import Test
    from avocado import main
    from avocado.utils import archive
    from avocado.utils import build
    from avocado.utils import process


    class SyncTest(Test):

        """
        Execute the synctest test suite.
        """
        default_params = {'sync_tarball': 'synctest.tar.bz2',
                          'sync_length': 100,
                          'sync_loop': 10}

        def setUp(self):
            """
            Set default params and build the synctest suite.
            """
            # Build the synctest suite
            self.cwd = os.getcwd()
            tarball_path = self.get_data_path(self.params.sync_tarball)
            archive.extract(tarball_path, self.srcdir)
            self.srcdir = os.path.join(self.srcdir, 'synctest')
            build.make(self.srcdir)

        def test(self):
            """
            Execute synctest with the appropriate params.
            """
            os.chdir(self.srcdir)
            cmd = ('./synctest %s %s' %
                   (self.params.sync_length, self.params.sync_loop))
            process.system(cmd)
            os.chdir(self.cwd)


    if __name__ == "__main__":
        main()

Here we have an example of the ``setUp`` method in action: we get the
location of the test suite code (tarball) through
:func:`avocado.Test.get_data_path`, then uncompress the tarball through
:func:`avocado.utils.archive.extract`, an API that will
decompress the suite tarball, and finally build the suite with ``build.make``.

The ``setUp`` method is the only place in Avocado where you are allowed to
call the ``skip`` method, given that, once a test has started to execute, by
definition it can't be skipped anymore. Avocado will do its best to enforce
this boundary, so that if you use ``skip`` outside ``setUp``, the test will,
upon execution, be marked with the ``ERROR`` status, and the error message
will instruct you to fix your test's code.
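
As a sketch of a legitimate use of ``skip`` (the precondition check is made up
for the example)::

    import os

    from avocado import Test

    class SkipInSetUp(Test):

        def setUp(self):
            if not os.path.exists('/proc'):
                # calling skip is only allowed here, in setUp
                self.skip('This test requires a mounted procfs')

        def test(self):
            pass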

Back in the synctest example, the ``test`` method just changes into the base
directory of the compiled suite and executes the ``./synctest`` command, with
appropriate parameters, using :func:`avocado.utils.process.system`.

Test Output Check and Output Record Mode
========================================

On many occasions, you want to go simpler: just check if the output of a
given application matches an expected output. In order to help with this common
use case, we offer the option ``--output-check-record [mode]`` to the test runner::

      --output-check-record OUTPUT_CHECK_RECORD
                            Record output streams of your tests to reference files
                            (valid options: none (do not record output streams),
                            all (record both stdout and stderr), stdout (record
                            only stdout), stderr (record only stderr)). Default:
                            none

If this option is used, it will store the stdout or stderr of the process
being executed (or both, if you specified ``all``) to reference files:
``stdout.expected`` and ``stderr.expected``. Those files will be recorded in
the test data dir. The data dir is in the same directory as the test source
file, named ``[source_file_name.data]``. Let's take as an example the test
``synctest.py``. In a
fresh checkout of Avocado, you can see::

        examples/tests/synctest.py.data/stderr.expected
        examples/tests/synctest.py.data/stdout.expected

From those two files, only ``stdout.expected`` is non-empty::

    $ cat examples/tests/synctest.py.data/stdout.expected
    PAR : waiting
    PASS : sync interrupted

The output files were originally obtained by running the test with the
option ``--output-check-record all``::

    $ scripts/avocado run --output-check-record all synctest
    JOB ID    : bcd05e4fd33e068b159045652da9eb7448802be5
    JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.20-bcd05e4/job.log
    TESTS     : 1
     (1/1) synctest.py: PASS (2.20 s)
    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
    TIME      : 2.20 s


After the reference files are added, the check process is transparent, in the
sense that you do not need to provide special flags to the test runner. From
then on, every time the test is executed, after it is done running, it will
check if the outputs match the expected ones before considering the test as
PASSed. If you want to override the default behavior and skip the output check
entirely, you may provide the flag ``--output-check=off`` to the test runner.

The :mod:`avocado.utils.process` APIs have a parameter ``allow_output_check``
(defaults to ``all``), so that you can select which process outputs will go to
the reference files, should you choose to record them. You may choose ``all``,
for both stdout and stderr, ``stdout``, for the stdout only, ``stderr``, for
the stderr only, or ``none``, to allow neither of them to be recorded and
checked.
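
As a brief sketch of that parameter (the commands are arbitrary examples)::

    from avocado.utils import process

    # streams recorded and checked against the reference files (the default)
    process.run('sync')
    # streams excluded from recording and checking
    process.run('echo not checked', allow_output_check='none')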

This process also works fine with simple tests, which are programs or shell
scripts that return 0 (PASS) or != 0 (FAIL). Let's consider our bogus example::

    $ cat output_record.sh
    #!/bin/bash
    echo "Hello, world!"

Let's record the output for this one::

    $ scripts/avocado run output_record.sh --output-check-record all
    JOB ID    : 25c4244dda71d0570b7f849319cd71fe1722be8b
    JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.49-25c4244/job.log
    TESTS     : 1
     (1/1) home/$USER/Code/avocado/output_record.sh: PASS (0.01 s)
    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
    TIME      : 0.01 s

After this is done, you'll notice that the test data directory
appeared at the same level as our shell script, containing 2 files::

    $ ls output_record.sh.data/
    stderr.expected  stdout.expected

Let's look at what's in each of them::

    $ cat output_record.sh.data/stdout.expected
    Hello, world!
    $ cat output_record.sh.data/stderr.expected
    $

Now, every time this test runs, it'll take into account the expected files that
were recorded; there is no need to do anything else but run the test. Let's see what
happens if we change the ``stdout.expected`` file contents to ``Hello, Avocado!``::

    $ scripts/avocado run output_record.sh
    JOB ID    : f0521e524face93019d7cb99c5765aedd933cb2e
    JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
    TESTS     : 1
     (1/1) home/$USER/Code/avocado/output_record.sh: FAIL (0.02 s)
    RESULTS    : PASS 0 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
    TIME      : 0.02 s

Verifying the failure reason::

    $ cat $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
    20:52:38 test       L0163 INFO | START home/$USER/Code/avocado/output_record.sh
    20:52:38 test       L0164 DEBUG|
    20:52:38 test       L0165 DEBUG| Test instance parameters:
    20:52:38 test       L0173 DEBUG|
    20:52:38 test       L0176 DEBUG| Default parameters:
    20:52:38 test       L0180 DEBUG|
    20:52:38 test       L0181 DEBUG| Test instance params override defaults whenever available
    20:52:38 test       L0182 DEBUG|
    20:52:38 process    L0242 INFO | Running '$HOME/Code/avocado/output_record.sh'
    20:52:38 process    L0310 DEBUG| [stdout] Hello, world!
    20:52:38 test       L0565 INFO | Command: $HOME/Code/avocado/output_record.sh
    20:52:38 test       L0565 INFO | Exit status: 0
    20:52:38 test       L0565 INFO | Duration: 0.00313782691956
    20:52:38 test       L0565 INFO | Stdout:
    20:52:38 test       L0565 INFO | Hello, world!
    20:52:38 test       L0565 INFO |
    20:52:38 test       L0565 INFO | Stderr:
    20:52:38 test       L0565 INFO |
    20:52:38 test       L0060 ERROR|
    20:52:38 test       L0063 ERROR| Traceback (most recent call last):
    20:52:38 test       L0063 ERROR|   File "$HOME/Code/avocado/avocado/test.py", line 397, in check_reference_stdout
    20:52:38 test       L0063 ERROR|     self.assertEqual(expected, actual, msg)
    20:52:38 test       L0063 ERROR|   File "/usr/lib64/python2.7/unittest/case.py", line 551, in assertEqual
    20:52:38 test       L0063 ERROR|     assertion_func(first, second, msg=msg)
    20:52:38 test       L0063 ERROR|   File "/usr/lib64/python2.7/unittest/case.py", line 544, in _baseAssertEqual
    20:52:38 test       L0063 ERROR|     raise self.failureException(msg)
    20:52:38 test       L0063 ERROR| AssertionError: Actual test sdtout differs from expected one:
    20:52:38 test       L0063 ERROR| Actual:
    20:52:38 test       L0063 ERROR| Hello, world!
    20:52:38 test       L0063 ERROR|
    20:52:38 test       L0063 ERROR| Expected:
    20:52:38 test       L0063 ERROR| Hello, Avocado!
    20:52:38 test       L0063 ERROR|
    20:52:38 test       L0064 ERROR|
    20:52:38 test       L0529 ERROR| FAIL home/$USER/Code/avocado/output_record.sh -> AssertionError: Actual test sdtout differs from expected one:
    Actual:
    Hello, world!

    Expected:
    Hello, Avocado!

    20:52:38 test       L0516 INFO |

As expected, the test failed because we changed its expectations.

Test log, stdout and stderr in native Avocado modules
=====================================================

If needed, you can write directly to the expected stdout and stderr files
from the native test scope. It is important to make the distinction between
the following entities:

* The test logs
* The test expected stdout
* The test expected stderr

The first one is used for debugging and informational purposes. Additionally,
writing to ``self.log.warning`` causes the test to be marked as dirty and,
when everything else goes well, the test ends with WARN. This means that the
test passed, but unrelated unexpected situations occurred and are described
in the warning log.

You may log something into the test logs using the methods of the
:attr:`avocado.Test.log` class attribute. Consider the example::

    class output_test(Test):

        def test(self):
            self.log.info('This goes to the log and it is only informational')
            self.log.warn('Oh, something unexpected, non-critical happened, '
                          'but we can continue.')
            self.log.error("Describe the error here and don't forget to raise "
                           "an exception yourself. Writing to self.log.error "
                           "won't do that for you.")
            self.log.debug('Everybody look, I had a good lunch today...')

If you need to write directly to the test stdout and stderr streams, there
are another 2 class attributes for that, :attr:`avocado.Test.stdout_log`
and :attr:`avocado.Test.stderr_log`, which have the exact same methods
of the log object. So if you want to add stuff to your expected stdout and
stderr streams, you can do something like::

    class output_test(Test):

        def test(self):
            self.log.info('This goes to the log and it is only informational')
            self.stdout_log.info('This goes to the test stdout (will be recorded)')
            self.stderr_log.info('This goes to the test stderr (will be recorded)')

Each one of the last 2 statements will go to ``stdout.expected`` and
``stderr.expected`` respectively, should you choose ``--output-check-record
all``, and will be written to the ``stdout`` and ``stderr`` files of the job
results dir every time that test is executed.

Avocado Tests run in a separate process
=======================================

In order to avoid tests messing with the environment used by the main
Avocado runner process, tests are run in a forked subprocess. This allows
for more robustness (tests cannot easily disrupt or break Avocado) and
some nifty features, such as setting test timeouts.

Setting a Test Timeout
======================

Sometimes your test suite/test might get stuck forever, and this might
impact your test grid. You can account for that possibility and set up a
``timeout`` parameter for your test. The test timeout can be set through
two means, in the following order of precedence:

* Multiplex variable parameters. You may just set the timeout parameter, like
  in the following simplistic example:

::

    sleep_length = 5
    sleep_length_type = float
    timeout = 3
    timeout_type = float

::

    $ avocado run sleeptest --multiplex /tmp/sleeptest-example.yaml
    JOB ID    : 6d5a2ff16bb92395100fbc3945b8d253308728c9
    JOB LOG   : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
    JOB HTML  : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/html/results.html
    TESTS     : 1
     (1/1) sleeptest.1: ERROR (2.97 s)
    RESULTS    : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
    TIME      : 2.97 s

::

    $ cat $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
    15:52:51 test       L0143 INFO | START sleeptest.1
    15:52:51 test       L0144 DEBUG|
    15:52:51 test       L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/sleeptest.1/test.log
    15:52:51 test       L0146 DEBUG| Test instance parameters:
    15:52:51 test       L0153 DEBUG|     _name_map_file = {'sleeptest-example.yaml': 'sleeptest'}
    15:52:51 test       L0153 DEBUG|     _short_name_map_file = {'sleeptest-example.yaml': 'sleeptest'}
    15:52:51 test       L0153 DEBUG|     dep = []
    15:52:51 test       L0153 DEBUG|     id = sleeptest
    15:52:51 test       L0153 DEBUG|     name = sleeptest
    15:52:51 test       L0153 DEBUG|     shortname = sleeptest
    15:52:51 test       L0153 DEBUG|     sleep_length = 5.0
    15:52:51 test       L0153 DEBUG|     sleep_length_type = float
    15:52:51 test       L0153 DEBUG|     timeout = 3.0
    15:52:51 test       L0153 DEBUG|     timeout_type = float
    15:52:51 test       L0154 DEBUG|
    15:52:51 test       L0157 DEBUG| Default parameters:
    15:52:51 test       L0159 DEBUG|     sleep_length = 1.0
    15:52:51 test       L0161 DEBUG|
    15:52:51 test       L0162 DEBUG| Test instance params override defaults whenever available
    15:52:51 test       L0163 DEBUG|
    15:52:51 test       L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15670 to end
    15:52:51 test       L0170 INFO |
    15:52:51 sleeptest  L0035 DEBUG| Sleeping for 5.00 seconds
    15:52:54 test       L0057 ERROR|
    15:52:54 test       L0060 ERROR| Traceback (most recent call last):
    15:52:54 test       L0060 ERROR|   File "$HOME/Code/avocado/tests/sleeptest.py", line 36, in action
    15:52:54 test       L0060 ERROR|     time.sleep(self.params.sleep_length)
    15:52:54 test       L0060 ERROR|   File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
    15:52:54 test       L0060 ERROR|     raise exceptions.TestTimeoutError(e_msg)
    15:52:54 test       L0060 ERROR| TestTimeoutError: Timeout reached waiting for sleeptest to end
    15:52:54 test       L0061 ERROR|
    15:52:54 test       L0400 ERROR| ERROR sleeptest.1 -> TestTimeoutError: Timeout reached waiting for sleeptest to end
    15:52:54 test       L0387 INFO |


If you pass that multiplex file to the runner multiplexer, this will register
a timeout of 3 seconds before Avocado ends the test forcefully by sending a
:data:`signal.SIGTERM` to the test, making it raise a
:class:`avocado.core.exceptions.TestTimeoutError`.

* Default params attribute. Consider the following example:

::

    import time

    from avocado import Test
    from avocado import main


    class TimeoutTest(Test):

        """
        Functional test for Avocado. Throw a TestTimeoutError.
        """
        default_params = {'timeout': 3.0,
                          'sleep_time': 5.0}

        def test(self):
            """
            This should throw a TestTimeoutError.
            """
            self.log.info('Sleeping for %.2f seconds (2 more than the timeout)',
                          self.params.sleep_time)
            time.sleep(self.params.sleep_time)


    if __name__ == "__main__":
        main()

This accomplishes a similar effect to the multiplex setup defined above.

::

    $ avocado run timeouttest
    JOB ID    : d78498a54504b481192f2f9bca5ebb9bbb820b8a
    JOB LOG   : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
    JOB HTML  : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/html/results.html
    TESTS     : 1
     (1/1) timeouttest.1: ERROR (2.97 s)
    RESULTS    : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
    TIME      : 2.97 s


::

    $ cat $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
    15:54:28 test       L0143 INFO | START timeouttest.1
    15:54:28 test       L0144 DEBUG|
    15:54:28 test       L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/timeouttest.1/test.log
    15:54:28 test       L0146 DEBUG| Test instance parameters:
    15:54:28 test       L0153 DEBUG|     id = timeouttest
    15:54:28 test       L0154 DEBUG|
    15:54:28 test       L0157 DEBUG| Default parameters:
    15:54:28 test       L0159 DEBUG|     sleep_time = 5.0
    15:54:28 test       L0159 DEBUG|     timeout = 3.0
    15:54:28 test       L0161 DEBUG|
    15:54:28 test       L0162 DEBUG| Test instance params override defaults whenever available
    15:54:28 test       L0163 DEBUG|
    15:54:28 test       L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15759 to end
    15:54:28 test       L0170 INFO |
    15:54:28 timeouttes L0036 INFO | Sleeping for 5.00 seconds (2 more than the timeout)
    15:54:31 test       L0057 ERROR|
    15:54:31 test       L0060 ERROR| Traceback (most recent call last):
    15:54:31 test       L0060 ERROR|   File "$HOME/Code/avocado/tests/timeouttest.py", line 37, in action
    15:54:31 test       L0060 ERROR|     time.sleep(self.params.sleep_time)
    15:54:31 test       L0060 ERROR|   File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
    15:54:31 test       L0060 ERROR|     raise exceptions.TestTimeoutError(e_msg)
    15:54:31 test       L0060 ERROR| TestTimeoutError: Timeout reached waiting for timeouttest to end
    15:54:31 test       L0061 ERROR|
    15:54:31 test       L0400 ERROR| ERROR timeouttest.1 -> TestTimeoutError: Timeout reached waiting for timeouttest to end
    15:54:31 test       L0387 INFO |


Environment Variables for Simple Tests
======================================

Avocado exports Avocado variables and multiplexed variables as BASH environment
variables to the running test. Those variables are interesting to simple tests
because, unlike native tests, they can neither use the Avocado Python API
directly nor modify the test parameters.

Here are the current variables that Avocado exports to the tests:

+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| Environment Variable    | Meaning                               | Example                                                                                             |
+=========================+=======================================+=====================================================================================================+
| AVOCADO_VERSION         | Version of Avocado test runner        | 0.12.0                                                                                              |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_BASEDIR    | Base directory of Avocado tests       | $HOME/Downloads/avocado-source/avocado                                                              |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_DATADIR    | Data directory for the test           | $AVOCADO_TEST_BASEDIR/my_test.sh.data                                                               |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_WORKDIR    | Work directory for the test           | /var/tmp/avocado_Bjr_rd/my_test.sh                                                                  |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_SRCDIR     | Source directory for the test         | /var/tmp/avocado_Bjr_rd/my-test.sh/src                                                              |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_LOGDIR     | Log directory for the test            | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1                 |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_LOGFILE    | Log file for the test                 | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/debug.log       |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_OUTPUTDIR  | Output directory for the test         | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/data            |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_SYSINFODIR | The system information directory      | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/sysinfo         |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| *                       | All variables from --multiplex-file   | TIMEOUT=60; IO_WORKERS=10; VM_BYTES=512M; ...                                                       |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
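
Since simple tests can be written in any language, here is a sketch of a
Python simple test consuming a couple of these variables (remember that a
simple test reports its result through the exit status alone)::

    #!/usr/bin/python
    import os
    import sys

    print('Avocado version: %s' % os.environ.get('AVOCADO_VERSION'))
    print('Test workdir: %s' % os.environ.get('AVOCADO_TEST_WORKDIR'))
    sys.exit(0)  # exit status 0 means PASS for a simple test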


Simple Tests BASH extensions
============================

To enhance simple tests, one can use the supported set of libraries we created.
The only requirement is to use::

    PATH=$(avocado "exec-path"):$PATH

which injects the path to the Avocado utils into the shell PATH. Take a look
at ``avocado exec-path`` to see the list of available functions, and take a
look at ``examples/tests/simplewarning.sh`` for inspiration.


Wrap Up
=======

We recommend you take a look at the example tests present in the
``examples/tests`` directory, which contains a few samples to take some
inspiration from. That directory, besides containing examples, is also used by
the Avocado self test suite to do functional testing of Avocado itself.

It is also recommended that you take a look at the :ref:`api-reference`
for more possibilities.

.. [#f1] sleeptest is a functional test for Avocado. It's "old" because we
   have also had such a test for `Autotest`_ for a long time.

.. _Autotest: http://autotest.github.io
.. _Class Names: https://www.python.org/dev/peps/pep-0008/#class-names
.. _PEP8 Function Names: https://www.python.org/dev/peps/pep-0008/#function-names