Commit 754c0a85 authored by Cleber Rosa

Tests time accounting: be more precise about what is being shown

Avocado has until now shown very simplified time information for jobs,
which some could even call imprecise or just plain incorrect.

The fact is that the time displayed in the UI (HumanResult), and also
in the various other supported formats, is the sum of the run times of
the individual tests.

The problem with that is that a job may include other actions that
take time, from little to substantial amounts.  This change makes it
clear that a job that, say, took 30 seconds to run (although that
figure is not displayed at this point) spent only 20 seconds actually
running tests.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Parent 101bf36a
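The distinction the commit message draws can be sketched in Python. This is a hypothetical illustration, not Avocado's actual code: the displayed figure is the sum of per-test run times, while the job's wall-clock time also covers other job actions (setup, result writing, and so on).

```python
import time


def run_job(tests):
    """Run a list of test callables; return (tests_time, job_time).

    Hypothetical sketch: tests_time is the cumulative per-test run
    time, job_time is the job's wall-clock time, which also includes
    any bookkeeping done between and around the tests.
    """
    job_start = time.monotonic()
    tests_time = 0.0
    for test in tests:
        start = time.monotonic()
        test()
        tests_time += time.monotonic() - start
    job_time = time.monotonic() - job_start
    # Job-level work outside the tests guarantees job_time >= tests_time.
    return tests_time, job_time
```

A job reporting only `tests_time` (as the old `TIME` label did) therefore understates how long the job as a whole took, which is what the rename to `TESTS TIME` is meant to make explicit.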
......@@ -46,7 +46,7 @@
<td>Results Dir</td><td><a href="{{results_dir}}"><tt>{{results_dir_basename}}</tt></a></td>
</tr>
<tr>
-<td>Execution time</td><td>{{execution_time}} s</td>
+<td>Cumulative test time</td><td>{{execution_time}} s</td>
</tr>
<tr>
<td>Stats</td><td>From {{total}} tests executed, {{passed}} passed (pass rate of {{pass_rate}}%)</td>
......
......@@ -231,7 +231,7 @@ class HumanTestResult(TestResult):
logdir = os.path.dirname(self.logfile)
html_file = os.path.join(logdir, 'html', 'results.html')
self.log.info("JOB HTML : %s", html_file)
-self.log.info("TIME : %.2f s", self.total_time)
+self.log.info("TESTS TIME : %.2f s", self.total_time)
def start_test(self, state):
super(HumanTestResult, self).start_test(state)
......
......@@ -81,7 +81,7 @@ recognizable name::
(1/1) /bin/true: PASS (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.39-381b849a/html/results.html
-TIME : 0.01 s
+TESTS TIME : 0.01 s
You probably noticed that we used ``/bin/true`` as a test, and in accordance with our
expectations, it passed! These are known as `simple tests`, but there is also another
......@@ -101,7 +101,7 @@ using the ``--dry-run`` argument::
(1/1) /bin/true: SKIP
RESULTS : PASS 0 | ERROR 0 | FAIL 0 | SKIP 1 | WARN 0 | INTERRUPT 0
JOB HTML : /tmp/avocado-dry-runSeWniM/job-2015-10-16T15.46-0000000/html/results.html
-TIME : 0.00 s
+TESTS TIME : 0.00 s
which supports all ``run`` arguments, simulates the run and even lists the test params.
......@@ -191,7 +191,7 @@ instrumented and simple tests::
(6/6) /tmp/simple_test.sh.1: PASS (0.02 s)
RESULTS : PASS 2 | ERROR 2 | FAIL 2 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.42-86911e49/html/results.html
-TIME : 5.88 s
+TESTS TIME : 5.88 s
.. _running-external-runner:
......@@ -234,7 +234,7 @@ files with shell code could be considered tests::
(2/2) /tmp/fail: FAIL (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : /home/<user>/avocado/job-results/job-<date>-<shortid>/html/results.html
-TIME : 0.01 s
+TESTS TIME : 0.01 s
This example is pretty obvious, and could be achieved by giving
`/tmp/pass` and `/tmp/fail` shell "shebangs" (`#!/bin/sh`), making
......@@ -252,7 +252,7 @@ But now consider the following example::
(2/2) http://remote-avocado-server:9405/jobs/: FAIL (3.02 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : /home/<user>/avocado/job-results/job-<date>-<shortid>/html/results.html
-TIME : 3.04 s
+TESTS TIME : 3.04 s
This effectively makes `/bin/curl` an "external test runner", responsible for
trying to fetch those URLs, and reporting PASS or FAIL for each of them.
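The "external test runner" idea above can be sketched in a few lines of Python, assuming (as the `/bin/curl` example suggests) that the runner's exit status decides the outcome: zero means PASS, anything else FAIL. This is an illustration of the concept, not Avocado's implementation.

```python
import subprocess


def run_external(runner, argument):
    """Run an external program on one argument and map its exit
    status to a test result: 0 -> PASS, nonzero -> FAIL.

    Hypothetical sketch of the external-runner concept described
    above; 'runner' and 'argument' play the roles of /bin/curl and
    a URL in the documentation's example.
    """
    result = subprocess.run([runner, argument], capture_output=True)
    return 'PASS' if result.returncode == 0 else 'FAIL'
```

With `runner='/bin/curl'`, each URL passed as `argument` would be reported as PASS or FAIL exactly as in the console output above.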
......
......@@ -20,7 +20,7 @@ Let's see an example. First, running a simple job with two urls::
(2/2) /bin/false: FAIL (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : $HOME/avocado/job-results/job-2016-01-11T16.14-825b860/html/results.html
-TIME : 0.02 s
+TESTS TIME : 0.02 s
Now we can replay the job by running::
......@@ -33,7 +33,7 @@ Now we can replay the job by running::
(2/2) /bin/false: FAIL (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : $HOME/avocado/job-results/job-2016-01-11T16.18-55a0d10/html/results.html
-TIME : 0.01 s
+TESTS TIME : 0.01 s
The replay feature will retrieve the original job urls, the multiplex
tree and the configuration. Let's see another example, now using
......@@ -93,7 +93,7 @@ multiplex file::
(48/48) /bin/false;24: FAIL (0.01 s)
RESULTS : PASS 24 | ERROR 0 | FAIL 24 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : $HOME/avocado/job-results/job-2016-01-11T21.56-bd6aa3b/html/results.html
-TIME : 0.29 s
+TESTS TIME : 0.29 s
We can replay the job as is, using ``$ avocado run --replay bd6aa3b``,
or replay the job ignoring the multiplex file, as below::
......@@ -108,7 +108,7 @@ or replay the job ignoring the multiplex file, as below::
(2/2) /bin/false: FAIL (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : $HOME/avocado/job-results/job-2016-01-11T22.01-d5a4618/html/results.html
-TIME : 0.02 s
+TESTS TIME : 0.02 s
Also, it is possible to replay only the variants that faced a given
result, using the option ``--replay-test-status``. Using the same job
......@@ -169,7 +169,7 @@ result, using the option ``--replay-test-status``. Using the same job
(48/48) /bin/false;24: FAIL (0.01 s)
RESULTS : PASS 0 | ERROR 0 | FAIL 24 | SKIP 24 | WARN 0 | INTERRUPT 0
JOB HTML : $HOME/avocado/job-results/job-2016-01-12T00.38-2e1dc41/html/results.html
-TIME : 0.19 s
+TESTS TIME : 0.19 s
To be able to replay a job, avocado records the job data in the same
job results directory, inside a subdirectory named ``replay``. If a
......@@ -184,7 +184,7 @@ below::
(1/1) /bin/true: PASS (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : /tmp/avocado_results/job-2016-01-11T22.10-f1b1c87/html/results.html
-TIME : 0.01 s
+TESTS TIME : 0.01 s
Trying to replay the job, it fails::
......@@ -201,4 +201,4 @@ In this case, we have to inform where the job results dir is located::
(1/1) /bin/true: PASS (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : $HOME/avocado/job-results/job-2016-01-11T22.15-19c76ab/html/results.html
-TIME : 0.01 s
+TESTS TIME : 0.01 s
......@@ -30,7 +30,7 @@ that is, the job and its test(s) results are constantly updated::
(3/3) synctest.py:SyncTest.test: PASS (1.98 s)
RESULTS : PASS 1 | ERROR 1 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.57-5ffe4792/html/results.html
-TIME : 3.17 s
+TESTS TIME : 3.17 s
The most important thing is to remember that programs should never need to parse
human output to figure out what happened to a test job run.
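The point above, that programs should consume a machine-readable result rather than scrape the human console output, can be illustrated with a small sketch. The key names here are an assumption modeled on a typical `results.json`-style layout, not a guaranteed schema.

```python
import json


def summarize(results_json):
    """Return the set of test statuses from a JSON results document.

    Hypothetical sketch: assumes a top-level "tests" list whose
    entries carry a "status" field. Parsing structured data like
    this is robust to cosmetic changes in the human output (such
    as the TIME -> TESTS TIME label rename in this very commit).
    """
    data = json.loads(results_json)
    return {test['status'] for test in data.get('tests', [])}
```

A script that had instead matched the literal string `TIME :` in the human output would have silently broken with this commit, which is exactly the failure mode the documentation warns against.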
......
......@@ -60,7 +60,7 @@ Once the remote machine is properly setup, you may run your test. Example::
(1/2) examples/tests/sleeptest.py: PASS (1.00 s)
(2/2) examples/tests/failtest.py: FAIL (0.00 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 1.01 s
+TESTS TIME : 1.01 s
As you can see, Avocado will copy the tests you have to your remote machine and
execute them. A bit of extra logging information is added to your job summary,
......@@ -140,7 +140,7 @@ Once the virtual machine is properly setup, you may run your test. Example::
(1/2) examples/tests/sleeptest.py:SleepTest.test: PASS (1.00 s)
(2/2) examples/tests/failtest.py:FailTest.test: FAIL (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 1.01 s
+TESTS TIME : 1.01 s
As you can see, Avocado will copy the tests you have to your libvirt domain and
execute them. A bit of extra logging information is added to your job summary,
......
......@@ -163,7 +163,7 @@ generation for sleeptest just like::
(3/3) sleeptest.py:SleepTest.test;3: PASS (5.00 s)
RESULTS : PASS 3 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/html/results.html
-TIME : 6.50 s
+TESTS TIME : 6.50 s
The ``--multiplex`` accepts either only ``$FILE_LOCATION`` or ``$INJECT_TO:$FILE_LOCATION``.
As explained in :doc:`MultiplexConfig` without any path the content gets
......@@ -200,7 +200,7 @@ You can also execute multiple tests with the same multiplex file::
(8/8) synctest.py:SyncTest.test;4: PASS (2.45 s)
RESULTS : PASS 8 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : $HOME/avocado/job-results/job-2016-05-04T09.25-cd20fc8/html/results.html
-TIME : 26.26 s
+TESTS TIME : 26.26 s
Advanced logging capabilities
......@@ -275,7 +275,7 @@ The outcome should be similar to::
PASS
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML : /home/cleber/avocado/job-results/job-2016-03-18T10.29-af786f8/html/results.html
-TIME : 7.01 s
+TESTS TIME : 7.01 s
The custom ``progress`` stream is combined with the application output, which
may or may not suit your needs or preferences. If you want the ``progress``
......@@ -580,7 +580,7 @@ option --output-check-record all to the test runner::
TESTS : 1
(1/1) synctest.py:SyncTest.test: PASS (2.20 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 2.20 s
+TESTS TIME : 2.20 s
After the reference files are added, the check process is transparent, in the sense
......@@ -609,7 +609,7 @@ Let's record the output for this one::
TESTS : 1
(1/1) output_record.sh: PASS (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 0.01 s
+TESTS TIME : 0.01 s
After this is done, you'll notice that the test data directory
appeared at the same level as our shell script, containing 2 files::
......@@ -634,7 +634,7 @@ happens if we change the ``stdout.expected`` file contents to ``Hello, Avocado!`
TESTS : 1
(1/1) output_record.sh: FAIL (0.02 s)
RESULTS : PASS 0 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 0.02 s
+TESTS TIME : 0.02 s
Verifying the failure reason::
......@@ -777,7 +777,7 @@ impact your test grid. You can account for that possibility and set up a
TESTS : 1
(1/1) sleeptest.py:SleepTest.test: ERROR (2.97 s)
RESULTS : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 2.97 s
+TESTS TIME : 2.97 s
::
......@@ -863,7 +863,7 @@ This accomplishes a similar effect to the multiplex setup defined in there.
TESTS : 1
(1/1) timeouttest.py:TimeoutTest.test: INTERRUPTED (3.04 s)
RESULTS : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 3.04 s
+TESTS TIME : 3.04 s
::
......
......@@ -79,7 +79,7 @@ directories. The output should be similar to::
TESTS : 1
(1/1) sleeptest.py: PASS (1.00 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 1.00 s
+TESTS TIME : 1.00 s
The test directories will vary depending on your system and the
installation method used. Still, it's pretty easy to find that out as shown
......@@ -303,7 +303,7 @@ And the output should look like::
SKIP : 0
WARN : 0
INTERRUPT : 0
-TIME : 16.52 s
+TESTS TIME : 16.52 s
The `multiplex` plugin and the test runner supports two kinds of global
filters, through the command line options `--filter-only` and `--filter-out`.
......@@ -489,7 +489,7 @@ files with shell code could be considered tests::
(1/2) /tmp/pass: PASS (0.01 s)
(2/2) /tmp/fail: FAIL (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 0.01 s
+TESTS TIME : 0.01 s
This example is pretty obvious, and could be achieved by giving
`/tmp/pass` and `/tmp/fail` shell "shebangs" (`#!/bin/sh`), making
......@@ -507,7 +507,7 @@ But now consider the following example::
(1/2) http://local-avocado-server:9405/jobs/: PASS (0.02 s)
(2/2) http://remote-avocado-server:9405/jobs/: FAIL (3.02 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 3.04 s
+TESTS TIME : 3.04 s
This effectively makes `/bin/curl` an "external test runner", responsible for
trying to fetch those URLs, and reporting PASS or FAIL for each of them.
......@@ -545,7 +545,7 @@ option --output-check-record all to the test runner::
TESTS : 1
(1/1) examples/tests/synctest.py: PASS (2.20 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 2.20 s
+TESTS TIME : 2.20 s
After the reference files are added, the check process is transparent, in the
sense that you do not need to provide special flags to the test runner.
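The transparent check step described above can be sketched as follows. This is a minimal illustration assuming reference files named like `stdout.expected`, as shown in these docs; the real runner's comparison logic may differ.

```python
import os


def check_output(produced, expected_path):
    """Compare produced output against a recorded reference file.

    Hypothetical sketch of the output-check step: if no reference
    file was recorded there is nothing to check (PASS); otherwise
    the output must match the recorded contents exactly.
    """
    if not os.path.exists(expected_path):
        return 'PASS'  # nothing recorded, nothing to check
    with open(expected_path) as reference:
        return 'PASS' if produced == reference.read() else 'FAIL'
```

This mirrors the behavior shown earlier: after editing `stdout.expected` to say `Hello, Avocado!`, the unchanged script's output no longer matches and the test FAILs.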
......@@ -576,7 +576,7 @@ Let's record the output (both stdout and stderr) for this one::
TESTS : 1
(1/1) home/$USER/Code/avocado/output_record.sh: PASS (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 0.01 s
+TESTS TIME : 0.01 s
After this is done, you'll notice that the test data directory
appeared at the same level as our shell script, containing 2 files::
......@@ -620,7 +620,7 @@ The output should look like::
TESTS : 1
(1/1) sleeptest.py: PASS (1.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
-TIME : 1.01 s
+TESTS TIME : 1.01 s
For more information, please consult the topic Remote Machine Plugin
on Avocado's online documentation.
......
......@@ -85,7 +85,7 @@ class InterruptTest(unittest.TestCase):
# We have to actually wait 2 seconds until the ignore window is over
time.sleep(2)
proc.sendline('\x03')
-proc.read_until_last_line_matches('TIME : %d s')
+proc.read_until_last_line_matches('TESTS TIME : %d s')
wait.wait_for(lambda: not proc.is_alive(), timeout=1)
# Make sure the bad test will be really gone from the process table
......@@ -140,7 +140,7 @@ class InterruptTest(unittest.TestCase):
proc = aexpect.Expect(command=cmd_line, linesep='')
proc.read_until_last_line_matches(os.path.basename(good_test.path))
proc.sendline('\x03')
-proc.read_until_last_line_matches('TIME : %d s')
+proc.read_until_last_line_matches('TESTS TIME : %d s')
wait.wait_for(lambda: not proc.is_alive(), timeout=1)
# Make sure the good test will be really gone from the process table
......