Commit dfac839e authored by Lucas Meneghel Rodrigues

man: Add recording test output documentation

Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
Parent 34fc7286
while you are debugging it, avocado has no way to know about its status.
Avocado will automatically send a `continue` command to the debugger
when you disconnect from and exit gdb.

RECORDING TEST REFERENCE OUTPUT
===============================

As a tester, you may want to check if the output of a given application matches
an expected output. To help with this common use case, we offer the option
``--output-check-record [mode]`` to the test runner. If this option is used,
the stdout, the stderr, or both outputs (mode ``all``) of the process being
executed will be stored in reference files: ``stdout.expected`` and
``stderr.expected``.

Those files will be recorded in the test data dir. The data dir sits in the
same directory as the test source file and is named after it, with a ``.data``
suffix. Let's take the test ``synctest.py`` as an example. In a fresh checkout
of avocado, you can see::

    examples/tests/synctest.py.data/stderr.expected
    examples/tests/synctest.py.data/stdout.expected

Of those two files, only ``stdout.expected`` is non-empty::

    $ cat examples/tests/synctest.py.data/stdout.expected
    PAR : waiting
    PASS : sync interrupted

The reference files above were originally obtained by passing the option
``--output-check-record all`` to the test runner::

    $ avocado run --output-check-record all examples/tests/synctest.py
    JOB ID : <id>
    JOB LOG : /home/<user>/avocado/job-results/job-<date>-<shortid>/job.log
    TESTS : 1
    (1/1) examples/tests/synctest.py: PASS (2.20 s)
    PASS : 1
    ERROR : 0
    FAIL : 0
    SKIP : 0
    WARN : 0
    NOT FOUND : 0
    TIME : 2.20 s

After the reference files are added, the check process is transparent, in the
sense that you do not need to provide special flags to the test runner.
From then on, every time the test is executed, once it is done running, its
outputs are compared to the reference files before the test is considered
PASSed. If you want to override the default behavior and skip the output check
entirely, you may pass the flag ``--disable-output-check`` to the test runner.
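
Conceptually, the check step is a byte-for-byte comparison between what the
test produced and the recorded reference file. The following is a minimal
sketch of that idea; ``output_check`` is a hypothetical helper written for
illustration, not Avocado's actual implementation:

```python
import os
import tempfile

def output_check(actual, expected_path):
    # Hypothetical helper: the test only PASSes if the produced stream
    # is identical to the recorded reference file.
    with open(expected_path) as ref:
        return ref.read() == actual

# Mimic synctest.py's recorded stdout in a throwaway data dir:
datadir = tempfile.mkdtemp(suffix=".data")
expected = os.path.join(datadir, "stdout.expected")
with open(expected, "w") as f:
    f.write("PAR : waiting\nPASS : sync interrupted\n")

# Matching output passes; anything else fails the check.
assert output_check("PAR : waiting\nPASS : sync interrupted\n", expected)
assert not output_check("unexpected output\n", expected)
```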

The ``avocado.utils.process`` APIs have a parameter ``allow_output_check``
(defaulting to ``all``), so that you can select which process outputs will go
to the reference files, should you choose to record them. You may choose
``all``, for both stdout and stderr, ``stdout``, for stdout only, ``stderr``,
for stderr only, or ``none``, to record and check neither of them.
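
The selection logic can be sketched without Avocado itself. The helper
``run_and_record`` below is a hypothetical stand-in, assuming
``allow_output_check`` simply decides which streams get written as reference
files:

```python
import os
import subprocess
import sys
import tempfile

def run_and_record(cmd, datadir, allow_output_check="all"):
    # Hypothetical sketch: only the streams selected by allow_output_check
    # ("all", "stdout", "stderr" or "none") become reference files.
    os.makedirs(datadir, exist_ok=True)
    result = subprocess.run(cmd, capture_output=True, text=True)
    if allow_output_check in ("all", "stdout"):
        with open(os.path.join(datadir, "stdout.expected"), "w") as f:
            f.write(result.stdout)
    if allow_output_check in ("all", "stderr"):
        with open(os.path.join(datadir, "stderr.expected"), "w") as f:
            f.write(result.stderr)
    return result

datadir = os.path.join(tempfile.mkdtemp(), "demo.py.data")
run_and_record([sys.executable, "-c", "print('hi')"], datadir,
               allow_output_check="stdout")
# With "stdout", only stdout.expected is written:
assert os.path.exists(os.path.join(datadir, "stdout.expected"))
assert not os.path.exists(os.path.join(datadir, "stderr.expected"))
```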

This process also works fine with dropin tests (arbitrary executables that
return 0 for PASS or non-zero for FAIL). Let's consider our bogus example::

    $ cat output_record.sh
    #!/bin/bash
    echo "Hello, world!"

Let's record the output (both stdout and stderr) for this one::

    $ avocado run output_record.sh --output-check-record all
    JOB ID : <id>
    JOB LOG : /home/<user>/avocado/job-results/job-<date>-<shortid>/job.log
    TESTS : 1
    (1/1) home/lmr/Code/avocado.lmr/output_record.sh: PASS (0.01 s)
    PASS : 1
    ERROR : 0
    FAIL : 0
    SKIP : 0
    WARN : 0
    NOT FOUND : 0
    TIME : 0.01 s

After this is done, you'll notice that a test data directory appeared at the
same level as our shell script, containing 2 files::

    $ ls output_record.sh.data/
    stderr.expected  stdout.expected

Let's look at what's in each of them::

    $ cat output_record.sh.data/stdout.expected
    Hello, world!
    $ cat output_record.sh.data/stderr.expected
    $

Now, every time this test runs, it'll take the recorded reference files into
account; there's no need to do anything else but run the test.
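
The whole record-then-check flow for a dropin test can be simulated with plain
``subprocess`` calls; this sketch stands in for the avocado runner and assumes
a ``/bin/bash`` interpreter is available:

```python
import os
import stat
import subprocess
import tempfile

# Re-create the bogus dropin test from the example above.
workdir = tempfile.mkdtemp()
script = os.path.join(workdir, "output_record.sh")
with open(script, "w") as f:
    f.write('#!/bin/bash\necho "Hello, world!"\n')
os.chmod(script, os.stat(script).st_mode | stat.S_IEXEC)

datadir = script + ".data"
os.makedirs(datadir)

# "Record" run: store both streams as reference files, as
# --output-check-record all would.
rec = subprocess.run([script], capture_output=True, text=True)
for name, stream in (("stdout.expected", rec.stdout),
                     ("stderr.expected", rec.stderr)):
    with open(os.path.join(datadir, name), "w") as f:
        f.write(stream)

# "Check" run: a later execution PASSes only if it exits 0 and both
# streams match the recorded references.
chk = subprocess.run([script], capture_output=True, text=True)
passed = (chk.returncode == 0
          and chk.stdout == open(os.path.join(datadir, "stdout.expected")).read()
          and chk.stderr == open(os.path.join(datadir, "stderr.expected")).read())
print("PASS" if passed else "FAIL")  # prints "PASS"
```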

FILES
=====