Commit dfac839e
Authored Oct 01, 2014 by Lucas Meneghel Rodrigues
man: Add recording test output documentation
Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
Parent: 34fc7286
Showing 1 changed file with 92 additions and 0 deletions (+92 -0)

man/avocado.rst
...
@@ -200,6 +200,98 @@ while you are debugging it, avocado has no way to know about its status.
Avocado will automatically send a `continue` command to the debugger
when you disconnect from and exit gdb.
RECORDING TEST REFERENCE OUTPUT
===============================
As a tester, you may want to check if the output of a given application matches
an expected output. In order to help with this common use case, we offer the
option ``--output-check-record [mode]`` to the test runner. When this option is
used, it will store the stdout, the stderr, or both (if you specify ``all``) of
the process being executed into the reference files ``stdout.expected`` and
``stderr.expected``.
Those files will be recorded in the test data dir. The data dir lives in the
same directory as the test source file and is named ``[source_file_name].data``.
Let's take the test ``synctest.py`` as an example. In a fresh checkout of
avocado, you can see::
examples/tests/synctest.py.data/stderr.expected
examples/tests/synctest.py.data/stdout.expected
Of those 2 files, only ``stdout.expected`` is non-empty::
$ cat examples/tests/synctest.py.data/stdout.expected
PAR : waiting
PASS : sync interrupted
The output files were originally obtained by passing the option
``--output-check-record all`` to the test runner::
$ avocado run --output-check-record all examples/tests/synctest.py
JOB ID : <id>
JOB LOG : /home/<user>/avocado/job-results/job-<date>-<shortid>/job.log
TESTS : 1
(1/1) examples/tests/synctest.py: PASS (2.20 s)
PASS : 1
ERROR : 0
FAIL : 0
SKIP : 0
WARN : 0
NOT FOUND : 0
TIME : 2.20 s
After the reference files are added, the check process is transparent, in the
sense that you do not need to provide special flags to the test runner.
Now, every time the test is executed, after it is done running, the actual
output will be compared with the recorded reference files before the test is
considered PASSed. If you want to override the default behavior and skip the
output check entirely, you may provide the flag ``--disable-output-check`` to
the test runner.
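For example, the ``synctest.py`` run shown above could be repeated with the
check turned off (only the flag itself comes from this change; the rest of the
invocation is merely illustrative)::

    $ avocado run --disable-output-check examples/tests/synctest.py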
The ``avocado.utils.process`` APIs have a parameter ``allow_output_check``
(defaults to ``all``), so that you can select which process outputs will go to
the reference files, should you choose to record them. You may choose ``all``,
for both stdout and stderr, ``stdout``, for stdout only, ``stderr``, for stderr
only, or ``none``, to allow neither of them to be recorded and checked.
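As a sketch of how this looks from inside a test, the snippet below runs a few
commands with different ``allow_output_check`` values; the imports, the
``test.Test``/``action()``/``job.main()`` boilerplate and the ``uname``
commands follow the bundled example tests and are illustrative assumptions,
not part of this change::

    from avocado import job
    from avocado import test
    from avocado.utils import process


    class outputcheck(test.Test):

        """
        Illustrative test: each process started through avocado.utils.process
        is eligible for the reference files according to allow_output_check.
        """

        def action(self):
            # Default ('all'): stdout and stderr are both eligible for
            # recording and checking.
            process.run('uname -s')
            # Only the stdout of this command is considered.
            process.run('uname -r', allow_output_check='stdout')
            # The output of this command is neither recorded nor checked.
            process.run('uname -m', allow_output_check='none')


    if __name__ == "__main__":
        job.main()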
This process also works with dropin tests (arbitrary executables that
return 0 (PASSed) or != 0 (FAILed)). Let's consider our bogus example::
$ cat output_record.sh
#!/bin/bash
echo "Hello, world!"
Let's record the output (both stdout and stderr) for this one::
$ avocado run output_record.sh --output-check-record all
JOB ID : <id>
JOB LOG : /home/<user>/avocado/job-results/job-<date>-<shortid>/job.log
TESTS : 1
(1/1) home/lmr/Code/avocado.lmr/output_record.sh: PASS (0.01 s)
PASS : 1
ERROR : 0
FAIL : 0
SKIP : 0
WARN : 0
NOT FOUND : 0
TIME : 0.01 s
After this is done, you'll notice that a test data directory
appeared at the same level as our shell script, containing 2 files::
$ ls output_record.sh.data/
stderr.expected stdout.expected
Let's look at what's in each of them::
$ cat output_record.sh.data/stdout.expected
Hello, world!
$ cat output_record.sh.data/stderr.expected
$
Now, every time this test runs, it'll take into account the expected files that
were recorded; there is no need to do anything else but run the test.
FILES
=====
...