avocado: Shorter app output

Instead of printing each result counter on a separate line,
condense the test results summary into a single line. The new
output looks like:

$ avocado run passtest
JOB ID     : f2f5060440bd57cba646c1f223ec8c40d03f539b
JOB LOG    : /home/user/avocado/job-results/job-2015-07-27T17.13-f2f5060/job.log
JOB HTML   : /home/user/avocado/job-results/job-2015-07-27T17.13-f2f5060/html/results.html
TESTS      : 1
(1/1) passtest.py:PassTest.test: PASS (0.00 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TIME       : 0.00 s

A few unittests were updated so that they no longer depend on
the layout of the human-readable output: unless a test is
specifically exercising the human output behavior, it should
rely on machine-readable output instead. The documentation was
also updated to reflect the new output layout.
Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
Parent c9e11fa1
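For reference, the condensed summary line shown above can be sketched in a few lines of Python. This is a hypothetical illustration of the formatting only, not the avocado code itself (the real implementation appears in the diff of `HumanTestResult` below):

```python
# Hypothetical sketch (not the actual avocado implementation): build the
# condensed one-line summary from per-status counters, joined by " | ".
counts = [("PASS", 1), ("ERROR", 0), ("FAIL", 0),
          ("SKIP", 0), ("WARN", 0), ("INTERRUPT", 0)]
line = "RESULTS    : " + " | ".join("%s %d" % (k, v) for k, v in counts)
print(line)
# → RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
```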
......@@ -277,13 +277,14 @@ class HumanTestResult(TestResult):
         Called once after all tests are executed.
         """
         self._reconcile()
-        self.stream.notify(event="message", msg="PASS       : %d" % len(self.passed))
-        self.stream.notify(event="message", msg="ERROR      : %d" % len(self.errors))
-        self.stream.notify(event="message", msg="FAIL       : %d" % len(self.failed))
-        self.stream.notify(event="message", msg="SKIP       : %d" % len(self.skipped))
-        self.stream.notify(event="message", msg="WARN       : %d" % len(self.warned))
-        self.stream.notify(event="message", msg="INTERRUPT  : %d" % len(self.interrupted))
-        self.stream.notify(event="message", msg="TIME       : %.2f s" % self.total_time)
+        self.stream.notify(event="message",
+                           msg="RESULTS    : PASS %d | ERROR %d | FAIL %d | "
+                               "SKIP %d | WARN %d | INTERRUPT %s" %
+                               (len(self.passed), len(self.errors),
+                                len(self.failed), len(self.skipped),
+                                len(self.warned), len(self.interrupted)))
+        self.stream.notify(event="message",
+                           msg="TIME       : %.2f s" % self.total_time)
 
     def start_test(self, state):
         """
......
......@@ -78,12 +78,7 @@ To do so, please run ``avocado`` with the ``run`` sub-command and the chosen tes
     JOB HTML   : $HOME/avocado/job-results/job-2014-08-12T15.39-381b849a/html/results.html
     TESTS      : 1
     (1/1) /bin/true: PASS (0.01 s)
-    PASS       : 1
-    ERROR      : 0
-    FAIL       : 0
-    SKIP       : 0
-    WARN       : 0
-    INTERRUPT  : 0
+    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 0.01 s
 
 You probably noticed that we used ``/bin/true`` as a test, and in accordance with our
......@@ -181,12 +176,7 @@ instrumented and simple tests::
     (4/6) failtest.2: FAIL (0.00 s)
     (5/6) synctest.2: ERROR (0.01 s)
     (6/6) /tmp/simple_test.sh.1: PASS (0.02 s)
-    PASS       : 2
-    ERROR      : 2
-    FAIL       : 2
-    SKIP       : 0
-    WARN       : 0
-    INTERRUPT  : 0
+    RESULTS    : PASS 2 | ERROR 2 | FAIL 2 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 1.04 s
 
 Debugging tests
......
......@@ -29,12 +29,7 @@ that is, the job and its test(s) results are constantly updated::
     (1/3) sleeptest.1: PASS (1.01 s)
     (2/3) failtest.1: FAIL (0.00 s)
     (3/3) synctest.1: PASS (1.98 s)
-    PASS       : 1
-    ERROR      : 1
-    FAIL       : 1
-    SKIP       : 0
-    WARN       : 0
-    INTERRUPT  : 0
+    RESULTS    : PASS 1 | ERROR 1 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 3.17 s
 
 The most important thing is to remember that programs should never need to parse
......
......@@ -60,11 +60,7 @@ Once the remote machine is properly setup, you may run your test. Example::
     TESTS      : 2
     (1/2) examples/tests/sleeptest.py: PASS (1.00 s)
     (2/2) examples/tests/failtest.py: FAIL (0.00 s)
-    PASS       : 1
-    ERROR      : 0
-    FAIL       : 1
-    SKIP       : 0
-    WARN       : 0
+    RESULTS    : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 1.01 s
 
 As you can see, Avocado will copy the tests you have to your remote machine and
......@@ -139,11 +135,7 @@ Once the virtual machine is properly setup, you may run your test. Example::
     TESTS      : 2
     (1/2) examples/tests/sleeptest.py: PASS (1.00 s)
     (2/2) examples/tests/failtest.py: FAIL (0.00 s)
-    PASS       : 1
-    ERROR      : 0
-    FAIL       : 1
-    SKIP       : 0
-    WARN       : 0
+    RESULTS    : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 1.01 s
 
 As you can see, Avocado will copy the tests you have to your libvirt domain and
......
......@@ -161,12 +161,7 @@ generation for sleeptest just like::
     (1/3) sleeptest: PASS (0.50 s)
     (2/3) sleeptest.1: PASS (1.01 s)
     (3/3) sleeptest.2: PASS (5.01 s)
-    PASS       : 3
-    ERROR      : 0
-    FAIL       : 0
-    SKIP       : 0
-    WARN       : 0
-    INTERRUPT  : 0
+    RESULTS    : PASS 3 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 6.52 s
 
 The ``--multiplex`` accepts either only ``$FILE_LOCATION`` or ``$INJECT_TO:$FILE_LOCATION``.
......@@ -393,11 +388,7 @@ option --output-check-record all to the test runner::
     JOB LOG    : $HOME/avocado/job-results/job-2014-09-25T20.20-bcd05e4/job.log
     TESTS      : 1
     (1/1) synctest.py: PASS (2.20 s)
-    PASS       : 1
-    ERROR      : 0
-    FAIL       : 0
-    SKIP       : 0
-    WARN       : 0
+    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 2.20 s
......@@ -426,11 +417,7 @@ Let's record the output for this one::
     JOB LOG    : $HOME/avocado/job-results/job-2014-09-25T20.49-25c4244/job.log
     TESTS      : 1
     (1/1) home/$USER/Code/avocado/output_record.sh: PASS (0.01 s)
-    PASS       : 1
-    ERROR      : 0
-    FAIL       : 0
-    SKIP       : 0
-    WARN       : 0
+    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 0.01 s
After this is done, you'll notice that a the test data directory
......@@ -455,11 +442,7 @@ happens if we change the ``stdout.expected`` file contents to ``Hello, Avocado!`
     JOB LOG    : $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
     TESTS      : 1
     (1/1) home/$USER/Code/avocado/output_record.sh: FAIL (0.02 s)
-    PASS       : 0
-    ERROR      : 0
-    FAIL       : 1
-    SKIP       : 0
-    WARN       : 0
+    RESULTS    : PASS 0 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 0.02 s
 
 Verifying the failure reason::
Verifying the failure reason::
......@@ -593,12 +576,7 @@ impact your test grid. You can account for that possibility and set up a
     JOB HTML   : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/html/results.html
     TESTS      : 1
     (1/1) sleeptest.1: ERROR (2.97 s)
-    PASS       : 0
-    ERROR      : 1
-    FAIL       : 0
-    SKIP       : 0
-    WARN       : 0
-    INTERRUPT  : 0
+    RESULTS    : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 2.97 s
 
 ::
......@@ -684,12 +662,7 @@ This accomplishes a similar effect to the multiplex setup defined in there.
     JOB HTML   : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/html/results.html
     TESTS      : 1
     (1/1) timeouttest.1: ERROR (2.97 s)
-    PASS       : 0
-    ERROR      : 1
-    FAIL       : 0
-    SKIP       : 0
-    WARN       : 0
-    INTERRUPT  : 0
+    RESULTS    : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 2.97 s
......
......@@ -79,12 +79,7 @@ directories. The output should be similar to::
     JOB LOG    : /home/<user>/avocado/job-results/job-<date>-<shortid>/job.log
     TESTS      : 1
     (1/1) sleeptest.py: PASS (1.00 s)
-    PASS       : 1
-    ERROR      : 0
-    FAIL       : 0
-    SKIP       : 0
-    WARN       : 0
-    INTERRUPT  : 0
+    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 1.00 s
 
 The test directories will vary depending on you system and
......@@ -490,12 +485,7 @@ option --output-check-record all to the test runner::
     JOB LOG    : /home/<user>/avocado/job-results/job-<date>-<shortid>/job.log
     TESTS      : 1
     (1/1) examples/tests/synctest.py: PASS (2.20 s)
-    PASS       : 1
-    ERROR      : 0
-    FAIL       : 0
-    SKIP       : 0
-    WARN       : 0
-    INTERRUPT  : 0
+    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 2.20 s
 
 After the reference files are added, the check process is transparent, in the
After the reference files are added, the check process is transparent, in the
......@@ -526,12 +516,7 @@ Let's record the output (both stdout and stderr) for this one::
     JOB LOG    : /home/<user>/avocado/job-results/job-<date>-<shortid>/job.log
     TESTS      : 1
     (1/1) home/$USER/Code/avocado/output_record.sh: PASS (0.01 s)
-    PASS       : 1
-    ERROR      : 0
-    FAIL       : 0
-    SKIP       : 0
-    WARN       : 0
-    INTERRUPT  : 0
+    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 0.01 s
After this is done, you'll notice that a the test data directory
......@@ -575,12 +560,7 @@ The output should look like::
     JOB LOG    : /home/<user>/avocado/job-results/job-<date>-<shortid>/job.log
     TESTS      : 1
     (1/1) sleeptest.py: PASS (1.01 s)
-    PASS       : 1
-    ERROR      : 0
-    FAIL       : 0
-    SKIP       : 0
-    WARN       : 0
-    INTERRUPT  : 0
+    RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
     TIME       : 1.01 s
 
 For more information, please consult the topic Remote Machine Plugin
......
 import os
 import sys
 import unittest
 import tempfile
 import shutil
+import xml.dom.minidom
 
 if sys.version_info[:2] == (2, 6):
     import unittest2 as unittest
......@@ -37,6 +37,10 @@ class Dummy(Test):
     """
 
 
+class ParseXMLError(Exception):
+    pass
+
+
 class JobTimeOutTest(unittest.TestCase):
 
     def setUp(self):
......@@ -53,32 +57,65 @@ class JobTimeOutTest(unittest.TestCase):
         self.tmpdir = tempfile.mkdtemp()
         os.chdir(basedir)
 
+    def run_and_check(self, cmd_line, e_rc, e_ntests, e_nerrors, e_nfailures,
+                      e_nskip):
+        os.chdir(basedir)
+        result = process.run(cmd_line, ignore_status=True)
+        xml_output = result.stdout
+        self.assertEqual(result.exit_status, e_rc,
+                         "Avocado did not return rc %d:\n%s" %
+                         (e_rc, result))
+        try:
+            xunit_doc = xml.dom.minidom.parseString(xml_output)
+        except Exception, detail:
+            raise ParseXMLError("Failed to parse content: %s\n%s" %
+                                (detail, xml_output))
+        testsuite_list = xunit_doc.getElementsByTagName('testsuite')
+        self.assertEqual(len(testsuite_list), 1, 'More than one testsuite tag')
+        testsuite_tag = testsuite_list[0]
+        self.assertEqual(len(testsuite_tag.attributes), 7,
+                         'The testsuite tag does not have 7 attributes. '
+                         'XML:\n%s' % xml_output)
+        n_tests = int(testsuite_tag.attributes['tests'].value)
+        self.assertEqual(n_tests, e_ntests,
+                         "Unexpected number of executed tests, "
+                         "XML:\n%s" % xml_output)
+        n_errors = int(testsuite_tag.attributes['errors'].value)
+        self.assertEqual(n_errors, e_nerrors,
+                         "Unexpected number of test errors, "
+                         "XML:\n%s" % xml_output)
+        n_failures = int(testsuite_tag.attributes['failures'].value)
+        self.assertEqual(n_failures, e_nfailures,
+                         "Unexpected number of test failures, "
+                         "XML:\n%s" % xml_output)
+        n_skip = int(testsuite_tag.attributes['skip'].value)
+        self.assertEqual(n_skip, e_nskip,
+                         "Unexpected number of test skips, "
+                         "XML:\n%s" % xml_output)
+
     def test_sleep_longer_timeout(self):
         cmd_line = ('./scripts/avocado run --job-results-dir %s --sysinfo=off '
-                    '--job-timeout=5 %s examples/tests/passtest.py' % (self.tmpdir, self.script.path))
-        result = process.run(cmd_line, ignore_status=True)
-        self.assertEqual(result.exit_status, 0)
-        self.assertIn('PASS       : 2', result.stdout)
-        self.assertIn('ERROR      : 0', result.stdout)
-        self.assertIn('SKIP       : 0', result.stdout)
+                    '--xunit - --job-timeout=5 %s examples/tests/passtest.py' %
+                    (self.tmpdir, self.script.path))
+        self.run_and_check(cmd_line, 0, 2, 0, 0, 0)
 
     def test_sleep_short_timeout(self):
         cmd_line = ('./scripts/avocado run --job-results-dir %s --sysinfo=off '
-                    '--job-timeout=1 %s examples/tests/passtest.py' % (self.tmpdir, self.script.path))
-        result = process.run(cmd_line, ignore_status=True)
-        self.assertEqual(result.exit_status, 1)
-        self.assertIn('PASS       : 0', result.stdout)
-        self.assertIn('ERROR      : 1', result.stdout)
-        self.assertIn('SKIP       : 1', result.stdout)
+                    '--xunit - --job-timeout=1 %s examples/tests/passtest.py' %
+                    (self.tmpdir, self.script.path))
+        self.run_and_check(cmd_line, 1, 2, 1, 0, 1)
 
     def test_sleep_short_timeout_with_test_methods(self):
         cmd_line = ('./scripts/avocado run --job-results-dir %s --sysinfo=off '
-                    '--job-timeout=1 %s' % (self.tmpdir, self.py.path))
-        result = process.run(cmd_line, ignore_status=True)
-        self.assertEqual(result.exit_status, 1)
-        self.assertIn('PASS       : 0', result.stdout)
-        self.assertIn('ERROR      : 1', result.stdout)
-        self.assertIn('SKIP       : 2', result.stdout)
+                    '--xunit - --job-timeout=1 %s' %
+                    (self.tmpdir, self.py.path))
+        self.run_and_check(cmd_line, 1, 3, 1, 0, 2)
 
     def test_invalid_values(self):
         cmd_line = ('./scripts/avocado run --job-results-dir %s --sysinfo=off '
......
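The selftests above now verify results through the xunit XML (`--xunit -`) instead of grepping the human-readable summary. A minimal standalone sketch of that kind of check, using a made-up xunit snippet rather than real avocado output:

```python
import xml.dom.minidom

# Made-up xunit snippet standing in for `avocado run --xunit -` output.
xml_output = ('<testsuite name="avocado" tests="2" errors="1" failures="0" '
              'skip="1" time="1.00" timestamp="2015-07-27T17:13:00"/>')

doc = xml.dom.minidom.parseString(xml_output)
suite = doc.getElementsByTagName('testsuite')[0]
# Read the counters off the <testsuite> attributes, as run_and_check does,
# instead of parsing lines of human output.
counts = dict((key, int(suite.attributes[key].value))
              for key in ('tests', 'errors', 'failures', 'skip'))
print(counts)
```

Because the counters are structured attributes, a renamed or reflowed human summary line can no longer break the assertions.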