1. 26 October 2019 (9 commits)
  2. 25 October 2019 (2 commits)
  3. 23 October 2019 (1 commit)
  4. 22 October 2019 (8 commits)
    • tests/qapi-schema: Cover feature documentation comments · 79598c8a
      Markus Armbruster authored
      Commit 8aa3a33e "tests/qapi-schema: Test for good feature lists in
      structs" neglected to cover documentation comments, and the previous
      commit followed its example.  Make up for them.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20191018081454.21369-5-armbru@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
    • tests: qapi: Test 'features' of commands · 2e2e0df2
      Peter Krempa authored
      Signed-off-by: Peter Krempa <pkrempa@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20191018081454.21369-4-armbru@redhat.com>
    • qapi: Add feature flags to commands · 23394b4c
      Peter Krempa authored
      Similarly to features for struct types, introduce feature flags for
      commands as well.  This will allow notifying management layers of
      fixes and compatible changes in the behaviour of a command which may
      not be detectable any other way.
      
      The changes were heavily inspired by commit 6a8c0b51.
      Signed-off-by: Peter Krempa <pkrempa@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20191018081454.21369-3-armbru@redhat.com>
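      As a rough illustration of the schema syntax this enables (the command
      and feature names below are made up for this sketch, they are not part
      of the series), a command carrying a feature flag, with the feature
      documented in its doc comment, looks roughly like:

        ##
        # @example-command:
        #
        # Features:
        # @fast-path: The command no longer blocks while flushing to disk.
        ##
        { 'command': 'example-command',
          'data': { 'arg': 'str' },
          'features': [ 'fast-path' ] }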
    • tests/qapi-schema: Tidy up test output indentation · 758f272b
      Markus Armbruster authored
      Command and event details are indented three spaces, everything else
      four.  Messed up in commit 156402e5.  Use four spaces consistently.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Peter Krempa <pkrempa@redhat.com>
      Message-Id: <20191018081454.21369-2-armbru@redhat.com>
    • qapi: Split up scripts/qapi/common.py · e6c42b96
      Markus Armbruster authored
      The QAPI code generator clocks in at some 3100 SLOC in 8 source files.
      Almost 60% of the code is in qapi/common.py.  Split it into more
      focused modules:
      
      * Move QAPISchemaPragma and QAPISourceInfo to qapi/source.py.
      
      * Move QAPIError and its sub-classes to qapi/error.py.
      
      * Move QAPISchemaParser and QAPIDoc to parser.py.  Use the opportunity
        to put QAPISchemaParser first.
      
      * Move check_expr() & friends to qapi/expr.py.  Use the opportunity to
        put the code into a more sensible order.
      
      * Move QAPISchema & friends to qapi/schema.py
      
      * Move QAPIGen and its sub-classes, ifcontext,
        QAPISchemaMonolithicCVisitor, and QAPISchemaModularCVisitor to qapi/gen.py
      
      * Delete camel_case(), it's unused since commit e98859a9 "qapi:
        Clean up after recent conversions to QAPISchemaVisitor"
      
      A number of helper functions remain in qapi/common.py.  I considered
      moving the code generator helpers to qapi/gen.py, but decided not to.
      Perhaps we should rewrite them as methods of QAPIGen some day.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20191018074345.24034-7-armbru@redhat.com>
      [Add "# -*- coding: utf-8 -*-" lines]
    • qapi: Speed up frontend tests · f01338cc
      Markus Armbruster authored
      "make check-qapi-schema" takes around 10s user + system time for me.
      With -j, it takes a bit over 3s real time.  We have worse tests.  It's
      still annoying when you work on the QAPI generator.
      
      Some 1.4s user + system time is consumed by make figuring out what to
      do, measured by making a target that does nothing.  There's nothing I
      can do about that right now.  But let's see what we can do about the
      other 8s.
      
      Almost 7s are spent running test-qapi.py for every test case, the rest
      normalizing and diffing test-qapi.py output.  We have 190 test cases.
      
      If I downgrade to python2, it's 4.5s, but python2 is a goner.
      
      Hacking up test-qapi.py to exit(0) without doing anything makes it
      only marginally faster.  The problem is Python startup overhead.
      
      Our configure puts -B into $(PYTHON).  Running without -B is faster:
      4.4s.
      
      We could improve the Makefile to run test cases only when the test
      case or the generator changed.  But I'm after improvement in the case
      where the generator changed.
      
      test-qapi.py is designed to be the simplest possible building block
      for a shell script to do the complete job (it's actually a Makefile,
      not a shell script; no real difference).  Python is just not meant for
      that.  It's for bigger blocks.
      
      Move the post-processing and diffing into test-qapi.py, and make it
      capable of testing multiple schema files.  Set executable bits while
      there.
      
      Running it once per test case now takes slightly longer than 8s.  But
      running it once for all of them takes under 0.2s.
      
      Messing with the Makefile to run it only on the tests that need
      retesting is clearly not worth the bother.
      
      Expected error output changes because the new normalization strips off
      $(SRCDIR)/tests/qapi-schema/ instead of just $(SRCDIR)/.
      
      The .exit files go away, because there is no exit status to test
      anymore.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20191018074345.24034-5-armbru@redhat.com>
    • qapi: Don't suppress doc generation without pragma doc-required · f3d4aa5a
      Markus Armbruster authored
      Commit bc52d03f "qapi: Make doc comments optional where we don't
      need them" made scripts/qapi2texi.py fail[*] unless the schema had
      pragma 'doc-required': true.  The stated reason was inability to cope
      with incomplete documentation.
      
      When commit fb0bc835 "qapi-gen: New common driver for code and doc
      generators" folded scripts/qapi2texi.py into scripts/qapi-gen.py, it
      turned the failure into silent suppression.
      
      The doc generator can cope with incomplete documentation now.  I don't
      know since when, or what the problem was, or even whether it ever
      existed.
      
      Drop the silent suppression.
      
      [*] The fail part was broken, fixed in commit e8ba07ea.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20191018074345.24034-2-armbru@redhat.com>
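      For reference, the pragma in question is enabled by a schema stanza of
      this shape (minimal sketch):

        { 'pragma': { 'doc-required': true } }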
    • tests/migration: fix a typo in comment · 81864c2e
      Mao Zhongyi authored
      Signed-off-by: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Laurent Vivier <laurent@vivier.eu>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Message-Id: <1d0aa8142a10edf735dac0a3330c46e98b06e8eb.1570208781.git.maozhongyi@cmss.chinamobile.com>
      Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  5. 18 October 2019 (3 commits)
  6. 16 October 2019 (4 commits)
    • tests: cpu-plug-test: fix device_add for pc/q35 machines · 021a007e
      Igor Mammedov authored
      Commit bc1fb850 silently broke the device_add test for CPU hotplug,
      which resulted in the test passing even though it wasn't actually run.
      Fix it by making sure that all non-present CPUs reported
      by "query-hotpluggable-cpus" are hotplugged, instead of making up
      and hardcoding values.
      
      Using query-hotpluggable-cpus also allows consolidating the device_add
      CPU test cases and reusing the same test function for all targets.
      
      While at it also add a check that at least one CPU was hotplugged,
      to avoid silent breakage in the future.
      
      Fixes: bc1fb850 (vl.c deprecate incorrect CPUs topology)
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Message-Id: <20190830110723.15096-3-imammedo@redhat.com>
      Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
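      A rough sketch of the approach (illustrative only, not the actual test
      code; it relies on the qtest_qmp_device_add_qdict() helper from the
      entry below and assumes the usual libqtest/QDict helpers):

        /* Hot-plug every CPU that "query-hotpluggable-cpus" reports as not
         * yet present, reusing the properties returned by the query instead
         * of hard-coding socket/core/thread values. */
        static void hotplug_missing_cpus(QTestState *qts)
        {
            QDict *resp = qtest_qmp(qts, "{'execute': 'query-hotpluggable-cpus'}");
            QList *cpus = qdict_get_qlist(resp, "return");
            const QListEntry *e;
            int hotplugged = 0;

            QLIST_FOREACH_ENTRY(cpus, e) {
                QDict *cpu = qobject_to(QDict, qlist_entry_obj(e));

                if (qdict_haskey(cpu, "qom-path")) {
                    continue;            /* this CPU is already present */
                }
                /* "props" carries the topology IDs device_add expects. */
                qtest_qmp_device_add_qdict(qts, qdict_get_str(cpu, "type"),
                                           qdict_get_qdict(cpu, "props"));
                hotplugged++;
            }
            g_assert(hotplugged > 0);    /* guard against silent breakage */
            qobject_unref(resp);
        }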
    • tests: add qtest_qmp_device_add_qdict() helper · b4510bb4
      Igor Mammedov authored
      Add an API that takes a QDict directly, so users can skip the steps
      of first building a JSON dictionary and having it converted back to a
      QDict, as in the existing qtest_qmp_device_add(), and can instead use
      the QDict directly without intermediate conversion.
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Message-Id: <20190830110723.15096-2-imammedo@redhat.com>
      Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
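      A minimal usage sketch, assuming the helper takes the driver name plus
      a QDict of device_add arguments and leaves ownership of the dict with
      the caller (the device and properties below are just an example):

        static void add_example_dimm(QTestState *qts)
        {
            QDict *args = qdict_new();

            qdict_put_str(args, "id", "dimm0");
            qdict_put_str(args, "memdev", "mem0");
            qtest_qmp_device_add_qdict(qts, "pc-dimm", args);
            qobject_unref(args);
        }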
    • tests/ptimer-test: Switch to transaction-based ptimer API · 91b37aea
      Peter Maydell authored
      Convert the ptimer test cases to the transaction-based ptimer API,
      by changing to ptimer_init(), dropping the now-unused QEMUBH
      variables, and surrounding each set of changes to the ptimer
      state in ptimer_transaction_begin/commit calls.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20191008171740.9679-4-peter.maydell@linaro.org
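      The shape of the converted code, roughly (a sketch inside a test
      function rather than a verbatim hunk; the callback name is
      illustrative):

        /* The timer is now created with a callback instead of a QEMUBH, and
         * every group of state changes is wrapped in begin/commit. */
        ptimer_state *ptimer = ptimer_init(ptimer_trigger_cb, NULL,
                                           PTIMER_POLICY_DEFAULT);

        ptimer_transaction_begin(ptimer);
        ptimer_set_period(ptimer, 2000000);   /* tick period in ns */
        ptimer_set_count(ptimer, 10);
        ptimer_run(ptimer, 0);                /* 0 = periodic mode */
        ptimer_transaction_commit(ptimer);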
    • ptimer: Rename ptimer_init() to ptimer_init_with_bh() · b0142262
      Peter Maydell authored
      Currently the ptimer design uses a QEMU bottom-half as its
      mechanism for calling back into the device model using the
      ptimer when the timer has expired. Unfortunately this design
      is fatally flawed, because it means that there is a lag
      between the ptimer updating its own state and the device
      callback function updating device state, and guest accesses
      to device registers between the two can return inconsistent
      device state.
      
      We want to replace the bottom-half design with one where
      the guest device's callback is called either immediately
      (when the ptimer triggers by timeout) or when the device
      model code closes a transaction-begin/end section (when the
      ptimer triggers because the device model changed the
      ptimer's count value or other state). As the first step,
      rename ptimer_init() to ptimer_init_with_bh(), to free up
      the ptimer_init() name for the new API. We can then convert
      all the ptimer users away from ptimer_init_with_bh() before
      removing it entirely.
      
      (Commit created with
       git grep -l ptimer_init | xargs sed -i -e 's/ptimer_init/ptimer_init_with_bh/'
      and three overlong lines folded by hand.)
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20191008171740.9679-2-peter.maydell@linaro.org
  7. 14 October 2019 (4 commits)
    • iotests: Test large write request to qcow2 file · a1406a92
      Max Reitz authored
      Without HEAD^, the following happens when you attempt a large write
      request to a qcow2 file such that the number of bytes covered by all
      clusters involved in a single allocation will exceed INT_MAX:
      
      (A) handle_alloc_space() decides to fill the whole area with zeroes and
          fails because bdrv_co_pwrite_zeroes() fails (the request is too
          large).
      
      (B) If handle_alloc_space() does not do anything, but merge_cow()
          decides that the requests can be merged, it will create a too long
          IOV that later cannot be written.
      
      (C) Otherwise, all parts will be written separately, so those requests
          will work.
      
      In either B or C, though, qcow2_alloc_cluster_link_l2() will have an
      overflow: We use an int (i) to iterate over nb_clusters, and then
      calculate the L2 entry based on "i << s->cluster_bits" -- which will
      overflow if the range covers more than INT_MAX bytes.  This then leads
      to image corruption because the L2 entry will be wrong (it will be
      recognized as a compressed cluster).
      
      Even if that were not the case, the .cow_end area would be empty
      (because handle_alloc() will cap avail_bytes and nb_bytes at INT_MAX, so
      their difference (which is the .cow_end size) will be 0).
      
      So this test checks that on such large requests, the image will not be
      corrupted.  Unfortunately, we cannot check whether COW will be handled
      correctly, because that data is discarded when it is written to null-co
      (but we have to use null-co, because writing 2 GB of data in a test is
      not quite reasonable).
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
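      The overflow itself is easy to see in isolation (a standalone
      illustration, not the qcow2 code):

        /* With 64 KiB clusters, an allocation spanning 32768 or more
         * clusters covers more than INT_MAX bytes, so a plain int shift
         * wraps (strictly, signed overflow is undefined behaviour): */
        int i = 40000;                                 /* cluster index   */
        int cluster_bits = 16;                         /* 64 KiB clusters */
        uint64_t bad  = i << cluster_bits;             /* overflows as int */
        uint64_t good = (uint64_t)i << cluster_bits;   /* widen first      */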
    • iotests/028: Fix for long $TEST_DIRs · 48c8d3ce
      Max Reitz authored
      For long test image paths, the order of the "Formatting" line and the
      "(qemu)" prompt after a drive_backup HMP command may be reversed.  In
      fact, the interaction between the prompt and the line may lead to the
      "Formatting" not being greppable at all after "read"-ing it (if the
      prompt injects an IFS character into the "Formatting" string).
      
      So just wait until we get a prompt.  At that point, the block job must
      have been started, so "info block-jobs" will only return "No active
      jobs" once it is done.
      Reported-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: John Snow <jsnow@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • block: Reject misaligned write requests with BDRV_REQ_NO_FALLBACK · f2208fdc
      Alberto Garcia authored
      The BDRV_REQ_NO_FALLBACK flag means that an operation should only be
      performed if it can be offloaded or otherwise performed efficiently.
      
      However, a misaligned write request requires an RMW, so we should
      return an error and let the caller decide how to proceed.
      
      This hits an assertion since commit c8bb23cb if the required
      alignment is larger than the cluster size:
      
      qemu-img create -f qcow2 -o cluster_size=2k img.qcow2 4G
      qemu-io -c "open -o driver=qcow2,file.align=4k blkdebug::img.qcow2" \
              -c 'write 0 512'
      qemu-io: block/io.c:1127: bdrv_driver_pwritev: Assertion `!(flags & BDRV_REQ_NO_FALLBACK)' failed.
      Aborted
      
      The reason is that when writing to an unallocated cluster we try to
      skip the copy-on-write part and zeroize it using BDRV_REQ_NO_FALLBACK
      instead, resulting in a write request that is too small (2KB cluster
      size vs 4KB required alignment).
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
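      The fix boils down to a check of this shape in the write path (a
      simplified sketch, not the exact hunk):

        /* A padded (misaligned) request implies a read-modify-write cycle,
         * which is precisely the fallback BDRV_REQ_NO_FALLBACK forbids. */
        if ((flags & BDRV_REQ_NO_FALLBACK) &&
            !QEMU_IS_ALIGNED(offset | bytes, bs->bl.request_alignment)) {
            return -ENOTSUP;
        }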
    • test-bdrv-drain: fix iothread_join() hang · 69de4844
      Stefan Hajnoczi authored
      tests/test-bdrv-drain can hang in tests/iothread.c:iothread_run():
      
        while (!atomic_read(&iothread->stopping)) {
            aio_poll(iothread->ctx, true);
        }
      
      The iothread_join() function works as follows:
      
        void iothread_join(IOThread *iothread)
        {
            iothread->stopping = true;
            aio_notify(iothread->ctx);
            qemu_thread_join(&iothread->thread);
      
      If iothread_run() checks iothread->stopping before the iothread_join()
      thread sets stopping to true, then aio_notify() may be optimized away
      and iothread_run() hangs forever in aio_poll().
      
      The correct way to change iothread->stopping is from a BH that executes
      within iothread_run().  This ensures that iothread->stopping is checked
      after we set it to true.
      
      This was already fixed for ./iothread.c (note this is a different source
      file!) by commit 2362a28e ("iothread:
      fix iothread_stop() race condition"), but not for tests/iothread.c.
      
      Fixes: 0c330a73 ("aio: introduce aio_co_schedule and aio_co_wake")
      Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20191003100103.331-1-stefanha@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
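      The shape of the fix, roughly (a sketch of the idea rather than the
      exact patch):

        /* Set ->stopping from a bottom half that runs inside the iothread's
         * own AioContext, so iothread_run() re-checks the flag right after
         * the aio_poll() call that dispatched the BH. */
        static void iothread_stop_bh(void *opaque)
        {
            IOThread *iothread = opaque;

            iothread->stopping = true;
        }

        void iothread_join(IOThread *iothread)
        {
            aio_bh_schedule_oneshot(iothread->ctx, iothread_stop_bh, iothread);
            qemu_thread_join(&iothread->thread);
            /* ... remaining cleanup unchanged ... */
        }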
  8. 12 October 2019 (1 commit)
    • migration: Support gtree migration · 9a85e4b8
      Eric Auger authored
      Introduce support for GTree migration. A custom save/restore
      is implemented. Each item consists of a key and a value.
      
      If the key is a pointer to an object, 2 VMSDs are passed into
      the GTree VMStateField.
      
      When putting the items, the tree is traversed in sorted order by
      g_tree_foreach.
      
      On the get() path, gtrees must be allocated using the proper
      key compare, key destroy and value destroy. This must be handled
      beforehand, for example in a pre_load method.
      
      Tests are added to exercise save/dump of structs containing gtrees,
      including the virtio-iommu domain/mappings scenario.
      Signed-off-by: Eric Auger <eric.auger@redhat.com>
      
      Message-Id: <20191011121724.433-1-eric.auger@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
        uintptr_t fixup for test on 32bit
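      A sketch of the pre_load requirement mentioned above (the device type,
      field and helper names are hypothetical):

        /* The destination must create the GTree with the same key compare
         * and key/value destroy callbacks before incoming items are put. */
        static gint key_cmp(gconstpointer a, gconstpointer b, gpointer opaque)
        {
            const uint64_t *ka = a, *kb = b;

            return *ka < *kb ? -1 : *ka > *kb ? 1 : 0;
        }

        static int mydev_pre_load(void *opaque)
        {
            MyDevState *s = opaque;          /* hypothetical device state */

            s->mappings = g_tree_new_full(key_cmp, NULL,
                                          g_free,     /* key destroy   */
                                          g_free);    /* value destroy */
            return 0;
        }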
  9. 10 October 2019 (8 commits)