Commit 8aa56d58 authored by Cleber Rosa

Output Check: documentation update

The documentation and examples on the output check feature are
outdated or inaccurate.

For instance, the synctest.py example would produce different output
than recorded in the accompanying reference files because the API
parameters were not being respected.

Signed-off-by: Cleber Rosa <crosa@redhat.com>
Parent 11f3b0df

@@ -134,13 +134,31 @@ class Run(CLICmd):
         out_check = parser.add_argument_group('output check arguments')
         out_check.add_argument('--output-check-record',
-                               choices=('none', 'all', 'stdout', 'stderr',
-                                        'both', 'combined'),
-                               help="Record output streams of your tests "
-                               "to reference files (valid options: none (do "
-                               "not record output streams), all (record both "
-                               "stdout and stderr), stdout (record only "
-                               "stderr), stderr (record only stderr). ")
+                               choices=('none', 'stdout', 'stderr',
+                                        'both', 'combined', 'all'),
+                               help="Record the output produced by each test "
+                               "(from stdout and stderr) into both the "
+                               "current executing result and into "
+                               "reference files. Reference files are "
+                               "used on subsequent runs to determine if "
+                               "the test produced the expected output or "
+                               "not, and the current executing result is "
+                               "used to check against a previously "
+                               "recorded reference file. Valid values: "
+                               "'none' (to explicitly disable all "
+                               "recording), 'stdout' (to record standard "
+                               "output *only*), 'stderr' (to record "
+                               "standard error *only*), 'both' (to record"
+                               " standard output and error in separate "
+                               "files), 'combined' (for standard output "
+                               "and error in a single file). 'all' is "
+                               "also a valid but deprecated option that "
+                               "is a synonym of 'both'. This option "
+                               "does not have a default value, but the "
+                               "Avocado test runner will record the "
+                               "test under execution in the most suitable"
+                               " way unless it's explicitly disabled with"
+                               " value 'none'")
         out_check.add_argument('--output-check', choices=('on', 'off'),
                                default='on',
...
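
Note: the summary below is not part of the commit; it is an illustrative
Python sketch of how each ``--output-check-record`` mode maps to
reference files, based on the help text above and the documentation
hunk below::

    # Reference files produced per mode, per this commit's documentation
    # ('all' is kept as a deprecated synonym of 'both').
    REFERENCE_FILES = {
        'none': [],                                      # recording disabled
        'stdout': ['stdout.expected'],                   # fd 1 only
        'stderr': ['stderr.expected'],                   # fd 2 only
        'both': ['stdout.expected', 'stderr.expected'],  # separate files
        'all': ['stdout.expected', 'stderr.expected'],   # deprecated synonym
        'combined': ['output.expected'],                 # fds 1 and 2, one file
    }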
@@ -752,52 +752,88 @@ Test Output Check and Output Record Mode
 ========================================
 
 In a lot of occasions, you want to go simpler: just check if the output of a
-given application matches an expected output. In order to help with this common
-use case, we offer the option ``--output-check-record [mode]`` to the test runner::
-
-    --output-check-record OUTPUT_CHECK_RECORD
-                          Record output streams of your tests to reference files
-                          (valid options: none (do not record output streams),
-                          all (record both stdout and stderr), stdout (record
-                          only stderr), stderr (record only stderr). Default:
-                          none
-
-If this option is used, it will store the stdout or stderr of the process (or
-both, if you specified ``all``) being executed to reference files: ``stdout.expected``
-and ``stderr.expected``. Those files will be recorded in the first (most specific)
-test's data dir (:ref:`accessing-test-data-files`). Let's take as an example the test
-``synctest.py``. In a fresh checkout of Avocado, you can see::
-
-    examples/tests/synctest.py.data/stderr.expected
-    examples/tests/synctest.py.data/stdout.expected
-
-From those 2 files, only stdout.expected is non empty::
-
-    $ cat examples/tests/synctest.py.data/stdout.expected
-    PAR : waiting
-    PASS : sync interrupted
-
-The output files were originally obtained using the test runner and passing the
-option --output-check-record all to the test runner::
-
-    $ scripts/avocado run --output-check-record all synctest.py
-    JOB ID     : bcd05e4fd33e068b159045652da9eb7448802be5
-    JOB LOG    : $HOME/avocado/job-results/job-2014-09-25T20.20-bcd05e4/job.log
-     (1/1) synctest.py:SyncTest.test: PASS (2.20 s)
-    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
-    JOB TIME   : 2.30 s
-
-After the reference files are added, the check process is transparent, in the sense
-that you do not need to provide special flags to the test runner.
-Now, every time the test is executed, after it is done running, it will check
-if the outputs are exactly right before considering the test as PASSed. If you want to override the default
-behavior and skip output check entirely, you may provide the flag ``--output-check=off`` to the test runner.
-
-The :mod:`avocado.utils.process` APIs have a parameter ``allow_output_check`` (defaults to ``all``), so that you
-can select which process outputs will go to the reference files, should you chose to record them. You may choose
-``all``, for both stdout and stderr, ``stdout``, for the stdout only, ``stderr``, for only the stderr only, or ``none``,
-to allow neither of them to be recorded and checked.
+given test matches an expected output. In order to help with this common
+use case, Avocado provides the ``--output-check-record`` option::
+
+    --output-check-record {none,stdout,stderr,both,combined,all}
+                          Record the output produced by each test (from stdout
+                          and stderr) into both the current executing result and
+                          into reference files. Reference files are used on
+                          subsequent runs to determine if the test produced the
+                          expected output or not, and the current executing
+                          result is used to check against a previously recorded
+                          reference file. Valid values: 'none' (to explicitly
+                          disable all recording), 'stdout' (to record standard
+                          output *only*), 'stderr' (to record standard error
+                          *only*), 'both' (to record standard output and error
+                          in separate files), 'combined' (for standard output
+                          and error in a single file). 'all' is also a valid but
+                          deprecated option that is a synonym of 'both'. This
+                          option does not have a default value, but the Avocado
+                          test runner will record the test under execution in
+                          the most suitable way unless it's explicitly disabled
+                          with value 'none'
+
+If this option is used, Avocado will store the content generated by
+the test in the standard (POSIX) streams, that is, ``STDOUT`` and
+``STDERR``. Depending on the option chosen, you may end up with different
+files recorded (into what we call "reference files"):
+
+* ``stdout`` will produce a file named ``stdout.expected`` with the
+  contents from the test process standard output stream (file
+  descriptor 1)
+
+* ``stderr`` will produce a file named ``stderr.expected`` with the
+  contents from the test process standard error stream (file
+  descriptor 2)
+
+* ``both`` will produce both a file named ``stdout.expected`` and a
+  file named ``stderr.expected``
+
+* ``combined`` will produce a single file named ``output.expected``,
+  with the content from both test process standard output and error
+  streams (file descriptors 1 and 2)
+
+* ``none`` will explicitly disable all recording of test-generated
+  output and the generation of reference files with that content
+
+The reference files will be recorded in the first (most specific)
+test's data dir (:ref:`accessing-test-data-files`). Let's take as an
+example the test ``synctest.py``. In a fresh checkout of the Avocado
+source code you can find the following reference files::
+
+    examples/tests/synctest.py.data/stderr.expected
+    examples/tests/synctest.py.data/stdout.expected
+
+From those 2 files, only stdout.expected has some content::
+
+    $ cat examples/tests/synctest.py.data/stdout.expected
+    PAR : waiting
+    PASS : sync interrupted
+
+This means that during a previous test execution, output was recorded
+with option ``--output-check-record both`` and content was generated
+on the ``STDOUT`` stream only::
+
+    $ avocado run --output-check-record both synctest.py
+    JOB ID     : b6306504351b037fa304885c0baa923710f34f4a
+    JOB LOG    : $JOB_RESULTS_DIR/job-2017-11-26T16.42-b630650/job.log
+     (1/1) examples/tests/synctest.py:SyncTest.test: PASS (2.03 s)
+    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
+    JOB TIME   : 2.26 s
+
+After the reference files are added, the check process is transparent,
+in the sense that you do not need to provide special flags to the test
+runner. From this point on, after such a test (one with a reference
+file recorded) has finished running, Avocado will check if the
+generated output matches the reference file(s) content. If they don't
+match, the test will finish with a ``FAIL`` status.
+
+You can disable this automatic check when a reference file exists by
+passing ``--output-check=off`` to the test runner.
+
+.. tip:: The :mod:`avocado.utils.process` APIs have a parameter called
+         ``allow_output_check`` that lets you individually select the
+         output that will be part of the test output and recorded
+         reference files. Some other APIs built on top of
+         :mod:`avocado.utils.process`, such as the ones in
+         :mod:`avocado.utils.build`, also provide the same parameter.
 
 This process also works fine with simple tests, which are programs or shell scripts
 that return 0 (PASSed) or != 0 (FAILed). Let's consider our bogus example::
...
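
Note: the sketch below is not part of the commit; it shows how a test
might use the ``allow_output_check`` parameter described in the tip
above. ``avocado.utils.process.run`` accepts this parameter; the
commands used here are arbitrary examples::

    from avocado import Test
    from avocado.utils import process


    class OutputCheckSketch(Test):

        def test(self):
            # This command's output takes part in the output check (and,
            # with --output-check-record, ends up in reference files).
            process.run('sync')
            # This command's output is excluded from the output check,
            # the same pattern the synctest.py change below applies to
            # its build step.
            process.run('uptime', allow_output_check='none')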
@@ -35,9 +35,11 @@ class SyncTest(Test):
         if self.params.get('debug_symbols', default=True):
             build.make(srcdir,
                        env={'CFLAGS': '-g -O0'},
-                       extra_args='synctest')
+                       extra_args='synctest',
+                       allow_output_check='none')
         else:
-            build.make(srcdir)
+            build.make(srcdir,
+                       allow_output_check='none')
 
     def test(self):
         """
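
Note: the explanation below is not part of the commit. This hunk is the
fix the commit message refers to: without ``allow_output_check='none'``,
the output of the ``make`` step was apparently recorded alongside the
test's own output, so the reference files no longer matched. The new
call, annotated (comments are not in the commit)::

    # Build the helper binary, but keep make's output out of the output
    # check, so that only what synctest itself prints is compared
    # against stdout.expected and stderr.expected.
    build.make(srcdir,
               env={'CFLAGS': '-g -O0'},
               extra_args='synctest',
               allow_output_check='none')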