.. _writing-tests:
=====================
Writing Avocado Tests
=====================
Test Resolution in avocado - simple tests vs instrumented tests
===============================================================
What is a test in the avocado context? Either one of:
* An executable file that returns exit code 0 (PASS) or != 0 (FAIL). This
  is known as a SimpleTest, in avocado terminology (a trivial example is
  sketched right after this list).
* A python module containing a class derived from :mod:`avocado.test.Test`.
This is known as an instrumented test, in avocado terminology. The term
instrumented is used because the avocado python test classes allow you to
get more features for your test, such as logging facilities and more
sophisticated test APIs.
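For illustration, a SimpleTest can be as small as the following hypothetical
executable script (any language works, as long as the file is executable and
its exit code reflects the result)::

    #!/usr/bin/python
    # A trivial SimpleTest: exit code 0 means PASS, anything else means FAIL
    import sys

    if __name__ == "__main__":
        sys.exit(0 if 2 + 2 == 4 else 1)
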
When you use the avocado runner, you'll frequently provide paths to files,
which will be inspected and acted upon depending on their contents. The
diagram below shows how avocado analyzes a file and decides what to do with
it:
.. figure:: diagram.png
Now that we have covered how avocado resolves tests, let's get down to business.
This section is concerned with writing an avocado test. The process is not
hard: all you need to do is to create a test module, which is a python file
with a class that inherits from :class:`avocado.test.Test`. This class only
really needs to implement a method called ``runTest``, which represents the
actual sequence of test operations.
Simple example
==============
Let's re-create an old-time favorite, ``sleeptest``, which is a functional
test for avocado (old because we also use such a test for autotest). It does
nothing but ``time.sleep([number-seconds])``::
    #!/usr/bin/python

    import time

    from avocado import test
    from avocado import main


    class SleepTest(test.Test):

        """
        Example test for avocado.
        """
        default_params = {'sleep_length': 1.0}

        def runTest(self):
            """
            Sleep for length seconds.
            """
            self.log.debug("Sleeping for %.2f seconds", self.params.sleep_length)
            time.sleep(self.params.sleep_length)


    if __name__ == "__main__":
        main()
This is about the simplest test you can write for avocado (at least, one using
the avocado APIs). An avocado test is basically a class that inherits from
:class:`avocado.test.Test` and can have any name you like (we trust you'll
choose a good name, although we do recommend that the name follows the
CamelCase convention, for PEP8 consistency).
Note that the test object provides you with a number of convenience
attributes, such as ``self.log``, that lets you log debug, info, error and
warning messages. Also note the parameter passing system that avocado
provides: we frequently want to pass parameters to tests, and we can do that
through what we call a `multiplex file`, a configuration file that not only
allows you to provide params to your test, but also lets you easily create a
validation matrix in a concise way. You can find more about the multiplex
file format in :doc:`MultiplexConfig`.
Saving test generated (custom) data
===================================
Each test instance provides a so called ``whiteboard``, which can be accessed
through ``self.whiteboard``. This whiteboard is simply a string that will be
automatically saved to the test results (as long as the output format supports it).
If you choose to save binary data to the whiteboard, it's your responsibility to
encode it first (base64 is the obvious choice).
Building on the previously demonstrated sleeptest, suppose that you want to save the
sleep length to be used by some other script or data analysis tool::
    def runTest(self):
        """
        Sleep for length seconds.
        """
        self.log.debug("Sleeping for %.2f seconds", self.params.sleep_length)
        time.sleep(self.params.sleep_length)
        self.whiteboard = "%.2f" % self.params.sleep_length
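
If what you have is binary data, encode it before assigning it to the whiteboard,
as mentioned above. A minimal sketch, where ``raw_payload`` merely stands in for
whatever bytes your test actually produced::

    import base64

    def runTest(self):
        raw_payload = b"\x00\x01\x02\x03"  # stands in for real binary results
        self.whiteboard = base64.b64encode(raw_payload)
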
The whiteboard can and should be exposed by the files generated by the available
test result plugins. The ``results.json`` file already includes the whiteboard for
each test. Additionally, we'll save a raw copy of the whiteboard contents in a file
named ``whiteboard``, at the same level as the ``results.json`` file, for your
convenience (maybe you want to use the result of a benchmark directly with your
custom made scripts to analyze that particular benchmark result).
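
A hypothetical post-processing script could then read that raw ``whiteboard`` file
directly. A minimal sketch, where ``JOB_RESULTS_DIR`` stands in for the job results
directory printed by the runner (the directory containing ``results.json``)::

    import os

    # replace with the actual job results directory of your run
    JOB_RESULTS_DIR = '/path/to/avocado/job-results/job-XXXX'

    with open(os.path.join(JOB_RESULTS_DIR, 'whiteboard')) as wb:
        sleep_length = float(wb.read())

    print('recorded sleep length: %.2f seconds' % sleep_length)
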
Accessing test parameters
=========================
Each test has a set of parameters that can be accessed through ``self.params.[param-name]``.
Avocado finds and populates ``self.params`` with all parameters you define on a Multiplex
Config file (see :doc:`MultiplexConfig`), in a way that they are available as attributes,
not just dict keys. This has the advantage of reducing the boilerplate code necessary to
access those parameters. As an example, consider the following multiplex file for sleeptest::
    variants:
        - sleeptest:
            sleep_length_type = float
            variants:
                - short:
                    sleep_length = 0.5
                - medium:
                    sleep_length = 1
                - long:
                    sleep_length = 5
You may notice a few things here: there is one test parameter for sleeptest,
called ``sleep_length``. We could have named it simply ``length``, but the prefix
creates a param namespace of sorts. Then we defined ``sleep_length_type``, which is
used by the config system to convert a value (by default a :class:`basestring`) to
an appropriate type (in this case, we need to pass a :class:`float` to
:func:`time.sleep` anyway). Note that this is an optional feature, and you can
always use :func:`float` to convert the string value coming from the configuration
yourself.

Another important design detail is that sometimes we might not want to use the
config system at all (for example, when we run an avocado test as a standalone
test). To account for this case, we have to specify a ``default_params``
dictionary that contains the default values for when we are not providing config
from a multiplex file.
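
Putting the two together, a minimal sketch of how parameter access looks from
inside a test (the class name is hypothetical, and the values simply mirror the
sleeptest example)::

    import time

    from avocado import test


    class SleepExample(test.Test):

        # used when no multiplex file provides the parameter
        default_params = {'sleep_length': 1.0}

        def runTest(self):
            # the value comes either from default_params or from the multiplex
            # file; convert explicitly if you did not declare sleep_length_type
            sleep_length = float(self.params.sleep_length)
            time.sleep(sleep_length)
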
Using a multiplex file
======================
You may use the avocado runner with a multiplex file to provide params and matrix
generation for sleeptest, like this::
$ avocado run sleeptest --multiplex examples/tests/sleeptest.py.data/sleeptest.yaml
JOB ID : d565e8dec576d6040f894841f32a836c751f968f
JOB LOG : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/job.log
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/html/results.html
TESTS : 3
(1/3) sleeptest.short: PASS (0.50 s)
(2/3) sleeptest.medium: PASS (1.01 s)
(3/3) sleeptest.long: PASS (5.01 s)
PASS : 3
ERROR : 0
FAIL : 0
SKIP : 0
WARN : 0
INTERRUPT : 0
TIME : 6.52 s
Note that, as your multiplex file specifies all parameters for sleeptest, you
can't leave the test ID empty::
$ scripts/avocado run --multiplex examples/tests/sleeptest/sleeptest.yaml
Empty test ID. A test path or alias must be provided
If you want to run some tests that don't require params set by the multiplex file, you can::
$ avocado run sleeptest synctest --multiplex examples/tests/sleeptest.py.data/sleeptest.yaml
JOB ID : dd91ea5f8b42b2f084702315688284f7e8aa220a
JOB LOG : $HOME/avocado/job-results/job-2014-08-12T15.49-dd91ea5f/job.log
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.49-dd91ea5f/html/results.html
TESTS : 4
(1/4) sleeptest.short: PASS (0.50 s)
(2/4) sleeptest.medium: PASS (1.01 s)
(3/4) sleeptest.long: PASS (5.01 s)
(4/4) synctest.1: ERROR (1.85 s)
PASS : 3
ERROR : 1
FAIL : 0
SKIP : 0
WARN : 0
INTERRUPT : 0
TIME : 8.69 s
Avocado tests are also unittests
================================
Since avocado tests inherit from :class:`unittest.TestCase`, you can use all
the ``assert*`` methods on your tests. Some silly examples::
    class RandomExamples(test.Test):

        def runTest(self):
            self.log.debug("Verifying some random math...")
            four = 2 * 2
            four_ = 2 + 2
            self.assertEqual(four, four_, "something is very wrong here!")

            self.log.debug("Verifying if a variable is set to True...")
            variable = True
            self.assertTrue(variable)

            self.log.debug("Verifying if this test is an instance of test.Test")
            self.assertIsInstance(self, test.Test)
The reason why we have a shebang at the beginning of the test is that avocado
tests, similarly to unittests, can use an entry point, :func:`avocado.main`,
which calls the avocado libs to look for test classes and execute them. This is
an optional, but fairly handy feature. In case you want to use it, don't forget
to ``chmod +x`` your test.
Executing an avocado test gives::
$ examples/tests/sleeptest.py
JOB ID : de6c1e4c227c786dc4d926f6fca67cda34d96276
JOB LOG : $HOME/avocado/job-results/job-2014-08-12T15.48-de6c1e4c/job.log
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.48-de6c1e4c/html/results.html
TESTS : 1
(1/1) sleeptest.1: PASS (1.00 s)
PASS : 1
ERROR : 0
FAIL : 0
SKIP : 0
WARN : 0
INTERRUPT : 0
TIME : 1.00 s
Running tests with nosetests
============================
``nose`` is a python testing framework with
similar goals as avocado, except that avocado also intends to provide tools to
assemble a fully automated test grid, plus a richer test API for tests on the
Linux platform. Regardless, since an avocado test class is also a unittest
class, you can run them with the ``nosetests`` application::
$ nosetests examples/tests/sleeptest.py
.
----------------------------------------------------------------------
Ran 1 test in 1.004s
OK
Setup and cleanup methods
=========================
If you need to perform setup actions before/after your test, you may do so
in the ``setUp`` and ``tearDown`` methods, respectively. A minimal sketch of
both hooks follows, and the next section shows ``setUp`` in a more realistic
scenario.
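
As a minimal sketch, assuming the usual unittest-style ordering (``setUp`` runs
before the test method and ``tearDown`` after it), with a purely illustrative
class name and cleanup action::

    import os
    import tempfile

    from avocado import test


    class HooksExample(test.Test):

        def setUp(self):
            # prepare resources the test method will need
            self.scratch_dir = tempfile.mkdtemp()

        def runTest(self):
            self.assertTrue(os.path.isdir(self.scratch_dir))

        def tearDown(self):
            # clean up whatever setUp created
            os.rmdir(self.scratch_dir)
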
Running third party test suites
===============================
It is very common in test automation workloads to use test suites developed
by third parties. By wrapping the execution code inside an avocado test module,
you gain access to the facilities and API provided by the framework. Let's
say you want to pick up a test suite written in C that is distributed as a
tarball, uncompress it, compile the suite code, and then execute the test.
Here's an example that does just that::
    #!/usr/bin/python

    import os

    from avocado import test
    from avocado import main
    from avocado.utils import archive
    from avocado.utils import build
    from avocado.utils import process


    class SyncTest(test.Test):

        """
        Execute the synctest test suite.
        """
        default_params = {'sync_tarball': 'synctest.tar.bz2',
                          'sync_length': 100,
                          'sync_loop': 10}

        def setUp(self):
            """
            Set default params and build the synctest suite.
            """
            # Build the synctest suite
            self.cwd = os.getcwd()
            tarball_path = self.get_data_path(self.params.sync_tarball)
            archive.extract(tarball_path, self.srcdir)
            self.srcdir = os.path.join(self.srcdir, 'synctest')
            build.make(self.srcdir)

        def runTest(self):
            """
            Execute synctest with the appropriate params.
            """
            os.chdir(self.srcdir)
            cmd = ('./synctest %s %s' %
                   (self.params.sync_length, self.params.sync_loop))
            process.system(cmd)
            os.chdir(self.cwd)


    if __name__ == "__main__":
        main()
Here we have an example of the ``setUp`` method in action: we get the
location of the test suite code (tarball) through
:func:`avocado.test.Test.get_data_path`, then uncompress it through
:func:`avocado.utils.archive.extract`, followed by ``build.make``, which
builds the suite.

The ``runTest`` method just changes into the base directory of
the compiled suite and executes the ``./synctest`` command, with appropriate
parameters, using :func:`avocado.utils.process.system`.
Test Output Check and Output Record Mode
========================================
On many occasions, you want to go simpler: just check if the output of a
given application matches an expected output. In order to help with this common
use case, we offer the option ``--output-check-record [mode]`` to the test runner::
--output-check-record OUTPUT_CHECK_RECORD
Record output streams of your tests to reference files
(valid options: none (do not record output streams),
all (record both stdout and stderr), stdout (record
only stdout), stderr (record only stderr). Default:
none
If this option is used, it will store the stdout or stderr (or both, if you
specified ``all``) of the process being executed to reference files:
``stdout.expected`` and ``stderr.expected``. Those files will be recorded in the
test data dir. The data dir is in the same directory as the test source file,
named ``[source_file_name].data``. Let's take as an example the test
``synctest.py``. In a fresh checkout of avocado, you can see::
examples/tests/synctest.py.data/stderr.expected
examples/tests/synctest.py.data/stdout.expected
From those 2 files, only ``stdout.expected`` is non-empty::
$ cat examples/tests/synctest.py.data/stdout.expected
PAR : waiting
PASS : sync interrupted
The output files were originally obtained by running the test runner with the
option ``--output-check-record all``::
$ scripts/avocado run --output-check-record all synctest
JOB ID : bcd05e4fd33e068b159045652da9eb7448802be5
JOB LOG : $HOME/avocado/job-results/job-2014-09-25T20.20-bcd05e4/job.log
TESTS : 1
(1/1) synctest.py: PASS (2.20 s)
PASS : 1
ERROR : 0
FAIL : 0
SKIP : 0
WARN : 0
TIME : 2.20 s
After the reference files are added, the check process is transparent, in the sense
that you do not need to provide special flags to the test runner.
Now, every time the test is executed, after it is done running, it will check
if the outputs are exactly right before considering the test as PASSed. If you want to override the default
behavior and skip output check entirely, you may provide the flag ``--output-check=off`` to the test runner.
The :mod:`avocado.utils.process` APIs have a parameter ``allow_output_check`` (defaults to ``all``), so that you
can select which process outputs will go to the reference files, should you choose to record them. You may choose
``all``, for both stdout and stderr, ``stdout``, for the stdout only, ``stderr``, for the stderr only, or ``none``,
to allow neither of them to be recorded and checked.
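
For instance, a test wrapping a tool that prints noisy diagnostics on stderr could
record and check only its stdout. A minimal sketch, assuming ``allow_output_check``
is accepted as a keyword argument by :func:`avocado.utils.process.system` as
described above (``./noisy_tool`` is a hypothetical command)::

    from avocado import test
    from avocado.utils import process


    class NoisyToolTest(test.Test):

        def runTest(self):
            # only the stdout of this command is recorded/checked; its stderr
            # noise will not end up in stderr.expected
            process.system('./noisy_tool --verbose',
                           allow_output_check='stdout')
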
This process also works fine with simple tests, which are programs or shell scripts
that return 0 (PASS) or != 0 (FAIL). Let's consider our bogus example::
$ cat output_record.sh
#!/bin/bash
echo "Hello, world!"
Let's record the output for this one::
$ scripts/avocado run output_record.sh --output-check-record all
JOB ID : 25c4244dda71d0570b7f849319cd71fe1722be8b
JOB LOG : $HOME/avocado/job-results/job-2014-09-25T20.49-25c4244/job.log
TESTS : 1
(1/1) home/$USER/Code/avocado/output_record.sh: PASS (0.01 s)
PASS : 1
ERROR : 0
FAIL : 0
SKIP : 0
WARN : 0
TIME : 0.01 s
After this is done, you'll notice that the test data directory
appeared at the same level as our shell script, containing 2 files::
$ ls output_record.sh.data/
stderr.expected stdout.expected
Let's look at what's in each of them::
$ cat output_record.sh.data/stdout.expected
Hello, world!
$ cat output_record.sh.data/stderr.expected
$
Now, every time this test runs, it'll take into account the expected files that
were recorded, no need to do anything else but run the test. Let's see what
happens if we change the ``stdout.expected`` file contents to ``Hello, avocado!``::
$ scripts/avocado run output_record.sh
JOB ID : f0521e524face93019d7cb99c5765aedd933cb2e
JOB LOG : $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
TESTS : 1
(1/1) home/$USER/Code/avocado/output_record.sh: FAIL (0.02 s)
PASS : 0
ERROR : 0
FAIL : 1
SKIP : 0
WARN : 0
TIME : 0.02 s
Verifying the failure reason::
$ cat $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
20:52:38 test L0163 INFO | START home/$USER/Code/avocado/output_record.sh
20:52:38 test L0164 DEBUG|
20:52:38 test L0165 DEBUG| Test instance parameters:
20:52:38 test L0173 DEBUG|
20:52:38 test L0176 DEBUG| Default parameters:
20:52:38 test L0180 DEBUG|
20:52:38 test L0181 DEBUG| Test instance params override defaults whenever available
20:52:38 test L0182 DEBUG|
20:52:38 process L0242 INFO | Running '$HOME/Code/avocado/output_record.sh'
20:52:38 process L0310 DEBUG| [stdout] Hello, world!
20:52:38 test L0565 INFO | Command: $HOME/Code/avocado/output_record.sh
20:52:38 test L0565 INFO | Exit status: 0
20:52:38 test L0565 INFO | Duration: 0.00313782691956
20:52:38 test L0565 INFO | Stdout:
20:52:38 test L0565 INFO | Hello, world!
20:52:38 test L0565 INFO |
20:52:38 test L0565 INFO | Stderr:
20:52:38 test L0565 INFO |
20:52:38 test L0060 ERROR|
20:52:38 test L0063 ERROR| Traceback (most recent call last):
20:52:38 test L0063 ERROR| File "$HOME/Code/avocado/avocado/test.py", line 397, in check_reference_stdout
20:52:38 test L0063 ERROR| self.assertEqual(expected, actual, msg)
20:52:38 test L0063 ERROR| File "/usr/lib64/python2.7/unittest/case.py", line 551, in assertEqual
20:52:38 test L0063 ERROR| assertion_func(first, second, msg=msg)
20:52:38 test L0063 ERROR| File "/usr/lib64/python2.7/unittest/case.py", line 544, in _baseAssertEqual
20:52:38 test L0063 ERROR| raise self.failureException(msg)
20:52:38 test L0063 ERROR| AssertionError: Actual test sdtout differs from expected one:
20:52:38 test L0063 ERROR| Actual:
20:52:38 test L0063 ERROR| Hello, world!
20:52:38 test L0063 ERROR|
20:52:38 test L0063 ERROR| Expected:
20:52:38 test L0063 ERROR| Hello, avocado!
20:52:38 test L0063 ERROR|
20:52:38 test L0064 ERROR|
20:52:38 test L0529 ERROR| FAIL home/$USER/Code/avocado/output_record.sh -> AssertionError: Actual test sdtout differs from expected one:
Actual:
Hello, world!
Expected:
Hello, avocado!
20:52:38 test L0516 INFO |
As expected, the test failed because we changed its expectations.
Test log, stdout and stderr in native avocado modules
=====================================================
If needed, you can write directly to the expected stdout and stderr files
from the native test scope. It is important to make the distinction between
the following entities:
* The test logs
* The test expected stdout
* The test expected stderr
The first one is used for debugging and informational purposes. Additionally,
writing to ``self.log.warning`` causes the test to be marked as dirty: when
everything else goes well, the test ends with WARN. This means that the test
passed, but unrelated, unexpected situations occurred and are described in the
warning log.
You may log something into the test logs using the methods available in the
:mod:`avocado.test.Test.log` class attribute. Consider the example::
    class output_test(test.Test):

        def runTest(self):
            self.log.info('This goes to the log and it is only informational')
            self.log.warn('Oh, something unexpected, non-critical happened, '
                          'but we can continue.')
            self.log.error("Describe the error here and don't forget to raise "
                           "an exception yourself. Writing to self.log.error "
                           "won't do that for you.")
            self.log.debug('Everybody look, I had a good lunch today...')
If you need to write directly to the test stdout and stderr streams, there
are another 2 class attributes for that, :mod:`avocado.test.Test.stdout_log`
and :mod:`avocado.test.Test.stderr_log`, which have the exact same methods
as the log object. So if you want to add content to your expected stdout and
stderr streams, you can do something like::
    class output_test(test.Test):

        def runTest(self):
            self.log.info('This goes to the log and it is only informational')
            self.stdout_log.info('This goes to the test stdout (will be recorded)')
            self.stderr_log.info('This goes to the test stderr (will be recorded)')
Each one of the last 2 statements will go to ``stdout.expected`` and
``stderr.expected`` respectively, should you choose ``--output-check-record all``,
and will be written to the files ``stdout`` and ``stderr`` of the job results dir
every time that test is executed.
Avocado Tests run in a separate process
=======================================
In order to prevent tests from messing with the environment used by the main
avocado runner process, tests are run in a forked subprocess. This allows
for more robustness (tests cannot easily mess with or break avocado) and
some nifty features, such as setting test timeouts.
Setting a Test Timeout
======================
Sometimes your test suite/test might get stuck forever, and this might
impact your test grid. You can account for that possibility and set up a
``timeout`` parameter for your test. The test timeout can be set through
2 means, in the following order of precedence:
* Multiplex variable parameters. You may just set the timeout parameter, like
in the following simplistic example:
::
    variants:
        - sleeptest:
            sleep_length = 5
            sleep_length_type = float
            timeout = 3
            timeout_type = float
::
$ avocado run sleeptest --multiplex /tmp/sleeptest-example.mplx
JOB ID : 6d5a2ff16bb92395100fbc3945b8d253308728c9
JOB LOG : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/html/results.html
TESTS : 1
(1/1) sleeptest.1: ERROR (2.97 s)
PASS : 0
ERROR : 1
FAIL : 0
SKIP : 0
WARN : 0
INTERRUPT : 0
TIME : 2.97 s
::
$ cat $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
15:52:51 test L0143 INFO | START sleeptest.1
15:52:51 test L0144 DEBUG|
15:52:51 test L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/sleeptest.1/test.log
15:52:51 test L0146 DEBUG| Test instance parameters:
15:52:51 test L0153 DEBUG| _name_map_file = {'sleeptest-example.mplx': 'sleeptest'}
15:52:51 test L0153 DEBUG| _short_name_map_file = {'sleeptest-example.mplx': 'sleeptest'}
15:52:51 test L0153 DEBUG| dep = []
15:52:51 test L0153 DEBUG| id = sleeptest
15:52:51 test L0153 DEBUG| name = sleeptest
15:52:51 test L0153 DEBUG| shortname = sleeptest
15:52:51 test L0153 DEBUG| sleep_length = 5.0
15:52:51 test L0153 DEBUG| sleep_length_type = float
15:52:51 test L0153 DEBUG| timeout = 3.0
15:52:51 test L0153 DEBUG| timeout_type = float
15:52:51 test L0154 DEBUG|
15:52:51 test L0157 DEBUG| Default parameters:
15:52:51 test L0159 DEBUG| sleep_length = 1.0
15:52:51 test L0161 DEBUG|
15:52:51 test L0162 DEBUG| Test instance params override defaults whenever available
15:52:51 test L0163 DEBUG|
15:52:51 test L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15670 to end
15:52:51 test L0170 INFO |
15:52:51 sleeptest L0035 DEBUG| Sleeping for 5.00 seconds
15:52:54 test L0057 ERROR|
15:52:54 test L0060 ERROR| Traceback (most recent call last):
15:52:54 test L0060 ERROR| File "$HOME/Code/avocado/tests/sleeptest.py", line 36, in action
15:52:54 test L0060 ERROR| time.sleep(self.params.sleep_length)
15:52:54 test L0060 ERROR| File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
15:52:54 test L0060 ERROR| raise exceptions.TestTimeoutError(e_msg)
15:52:54 test L0060 ERROR| TestTimeoutError: Timeout reached waiting for sleeptest to end
15:52:54 test L0061 ERROR|
15:52:54 test L0400 ERROR| ERROR sleeptest.1 -> TestTimeoutError: Timeout reached waiting for sleeptest to end
15:52:54 test L0387 INFO |
If you pass that multiplex file to the runner multiplexer, this will register
a timeout of 3 seconds; after that, avocado ends the test forcefully by sending a
:class:`signal.SIGTERM` to the test, making it raise a
:class:`avocado.core.exceptions.TestTimeoutError`.
* Default params attribute. Consider the following example:
::
    import time

    from avocado import test
    from avocado import main


    class TimeoutTest(test.Test):

        """
        Functional test for avocado. Throw a TestTimeoutError.
        """
        default_params = {'timeout': 3.0,
                          'sleep_time': 5.0}

        def runTest(self):
            """
            This should throw a TestTimeoutError.
            """
            self.log.info('Sleeping for %.2f seconds (2 more than the timeout)',
                          self.params.sleep_time)
            time.sleep(self.params.sleep_time)


    if __name__ == "__main__":
        main()
This accomplishes an effect similar to the multiplex setup described above.
::
$ avocado run timeouttest
JOB ID : d78498a54504b481192f2f9bca5ebb9bbb820b8a
JOB LOG : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/html/results.html
TESTS : 1
(1/1) timeouttest.1: ERROR (2.97 s)
PASS : 0
ERROR : 1
FAIL : 0
SKIP : 0
WARN : 0
INTERRUPT : 0
TIME : 2.97 s
::
$ cat $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
15:54:28 test L0143 INFO | START timeouttest.1
15:54:28 test L0144 DEBUG|
15:54:28 test L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/timeouttest.1/test.log
15:54:28 test L0146 DEBUG| Test instance parameters:
15:54:28 test L0153 DEBUG| id = timeouttest
15:54:28 test L0154 DEBUG|
15:54:28 test L0157 DEBUG| Default parameters:
15:54:28 test L0159 DEBUG| sleep_time = 5.0
15:54:28 test L0159 DEBUG| timeout = 3.0
15:54:28 test L0161 DEBUG|
15:54:28 test L0162 DEBUG| Test instance params override defaults whenever available
15:54:28 test L0163 DEBUG|
15:54:28 test L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15759 to end
15:54:28 test L0170 INFO |
15:54:28 timeouttes L0036 INFO | Sleeping for 5.00 seconds (2 more than the timeout)
15:54:31 test L0057 ERROR|
15:54:31 test L0060 ERROR| Traceback (most recent call last):
15:54:31 test L0060 ERROR| File "$HOME/Code/avocado/tests/timeouttest.py", line 37, in action
15:54:31 test L0060 ERROR| time.sleep(self.params.sleep_time)
15:54:31 test L0060 ERROR| File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
15:54:31 test L0060 ERROR| raise exceptions.TestTimeoutError(e_msg)
15:54:31 test L0060 ERROR| TestTimeoutError: Timeout reached waiting for timeouttest to end
15:54:31 test L0061 ERROR|
15:54:31 test L0400 ERROR| ERROR timeouttest.1 -> TestTimeoutError: Timeout reached waiting for timeouttest to end
15:54:31 test L0387 INFO |
Environment Variables for Simple Tests
======================================
Avocado exports avocado variables and multiplexed variables as environment
variables to the running test. Those variables are interesting to simple tests,
because, unlike native tests, they can not make use of the Avocado Python API
directly; the exported variables still give them access to job information and
to the test parameters (a sketch of reading them follows the table below).
Here are the current variables that Avocado exports to the tests:
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| Environemnt Variable | Meaning | Example |
+=========================+=======================================+=====================================================================================================+
| AVOCADO_VERSION | Version of Avocado test runner | 0.12.0 |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_BASEDIR | Base directory of Avocado tests | $HOME/Downloads/avocado-source/avocado |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_DATADIR | Data directory for the test | $AVOCADO_TEST_BASEDIR/my_test.sh.data |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_WORKDIR | Work directory for the test | /var/tmp/avocado_Bjr_rd/my_test.sh |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_SRCDIR | Source directory for the test | /var/tmp/avocado_Bjr_rd/my-test.sh/src |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_LOGDIR | Log directory for the test | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1 |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_LOGFILE | Log file for the test | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/debug.log |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_OUTPUTDIR | Output directory for the test | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/data |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| AVOCADO_TEST_SYSINFODIR | The system information directory | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/sysinfo |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
| * | All variables from --multiplex-file | TIMEOUT=60; IO_WORKERS=10; VM_BYTES=512M; ... |
+-------------------------+---------------------------------------+-----------------------------------------------------------------------------------------------------+
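
As an illustration, a simple test written in Python could read these variables
straight from its environment. A minimal sketch (``TIMEOUT`` is just an example
of a multiplexed variable, as in the table above)::

    #!/usr/bin/python
    # A hypothetical simple test: reads the variables Avocado exported into
    # its environment and exits 0 (PASS) or != 0 (FAIL).
    import os
    import sys

    def main():
        version = os.environ.get('AVOCADO_VERSION', 'unknown')
        workdir = os.environ.get('AVOCADO_TEST_WORKDIR', '/tmp')
        timeout = os.environ.get('TIMEOUT', '60')
        print('Running under avocado %s, workdir %s, timeout %s' %
              (version, workdir, timeout))
        sys.exit(0)

    if __name__ == '__main__':
        main()
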
Simple Tests BASH extensions
============================
To enhance simple tests, one can use the supported set of libraries we created. The
only requirement is to use::

    PATH=$(avocado "exec-path"):$PATH

which injects the path to the avocado utilities into the shell ``PATH``. Take a look
at ``avocado exec-path`` to see the list of available functions, and take a look at
``examples/tests/simplewarning.sh`` for inspiration.
Wrap Up
=======
We recommend you take a look at the example tests present in the
``examples/tests`` directory, which contains a few samples to take some
inspiration from. That directory, besides containing examples, is also used by
the avocado self test suite to do functional testing of avocado itself.

It is also recommended that you take a look at the API documentation for more
possibilities.