1. 16 Oct 2019, 6 commits
    • tests/ptimer-test: Switch to transaction-based ptimer API · 91b37aea
      Authored by Peter Maydell
      Convert the ptimer test cases to the transaction-based ptimer API,
      by changing to ptimer_init(), dropping the now-unused QEMUBH
      variables, and surrounding each set of changes to the ptimer
      state in ptimer_transaction_begin/commit calls.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20191008171740.9679-4-peter.maydell@linaro.org
      91b37aea
    • ptimer: Provide new transaction-based API · 78b6eaa6
      Authored by Peter Maydell
      Provide the new transaction-based API. If a ptimer is created
      using ptimer_init() rather than ptimer_init_with_bh(), then
      instead of providing a QEMUBH, it provides a pointer to the
      callback function directly, and has opted into the transaction
      API. All calls to functions which modify ptimer state:
       - ptimer_set_period()
       - ptimer_set_freq()
       - ptimer_set_limit()
       - ptimer_set_count()
       - ptimer_run()
       - ptimer_stop()
      must be between matched calls to ptimer_transaction_begin()
      and ptimer_transaction_commit(). When ptimer_transaction_commit()
      is called it will evaluate the state of the timer after all the
      changes in the transaction, and call the callback if necessary.
      
      In the old API the individual update functions generally would
      call ptimer_trigger() immediately, which would schedule the QEMUBH.
      In the new API the update functions will instead defer the
      "set s->next_event and call ptimer_reload()" work to
      ptimer_transaction_commit().
      
      Because ptimer_trigger() can now immediately call into the
      device code which may then call other ptimer functions that
      update ptimer_state fields, we must be more careful in
      ptimer_reload() not to cache fields from ptimer_state across
      the ptimer_trigger() call. (This was harmless with the QEMUBH
      mechanism as the BH would not be invoked until much later.)
      
      We use assertions to check that:
       * the functions modifying ptimer state are not called outside
         a transaction block
       * ptimer_transaction_begin() and _commit() calls are paired
       * the transaction API is not used with a QEMUBH ptimer
      
      There is some slight repetition of code:
       * most of the set functions have similar looking "if s->bh
         call ptimer_reload, otherwise set s->need_reload" code
       * ptimer_init() and ptimer_init_with_bh() have similar code
      We deliberately don't try to avoid this repetition, because
      it will all be deleted when the QEMUBH version of the API
      is removed.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20191008171740.9679-3-peter.maydell@linaro.org
      78b6eaa6
    • ptimer: Rename ptimer_init() to ptimer_init_with_bh() · b0142262
      Authored by Peter Maydell
      Currently the ptimer design uses a QEMU bottom-half as its
      mechanism for calling back into the device model using the
      ptimer when the timer has expired. Unfortunately this design
      is fatally flawed, because it means that there is a lag
      between the ptimer updating its own state and the device
      callback function updating device state, and guest accesses
      to device registers between the two can return inconsistent
      device state.
      
      We want to replace the bottom-half design with one where
      the guest device's callback is called either immediately
      (when the ptimer triggers by timeout) or when the device
      model code closes a transaction-begin/end section (when the
      ptimer triggers because the device model changed the
      ptimer's count value or other state). As the first step,
      rename ptimer_init() to ptimer_init_with_bh(), to free up
      the ptimer_init() name for the new API. We can then convert
      all the ptimer users away from ptimer_init_with_bh() before
      removing it entirely.
      
      (Commit created with
       git grep -l ptimer_init | xargs sed -i -e 's/ptimer_init/ptimer_init_with_bh/'
      and three overlong lines folded by hand.)
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20191008171740.9679-2-peter.maydell@linaro.org
      b0142262
    • ARM: KVM: Check KVM_CAP_ARM_IRQ_LINE_LAYOUT_2 for smp_cpus > 256 · fff9f555
      Authored by Eric Auger
      Host kernels within [4.18, 5.3] report an erroneous KVM_MAX_VCPUS=512
      for ARM. The actual capability to instantiate more than 256 vcpus
      was fixed in 5.4 with the upgrade of the KVM_IRQ_LINE ABI to support
      a vcpu id encoded in 12 bits instead of 8, and a redistributor
      consuming a single KVM I/O device instead of two.
      
      So let's check this capability when attempting to use more than 256
      vcpus within any ARM kvm accelerated machine.
      Signed-off-by: Eric Auger <eric.auger@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Message-id: 20191003154640.22451-4-eric.auger@redhat.com
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      fff9f555
    • intc/arm_gic: Support IRQ injection for more than 256 vcpus · f6530926
      Authored by Eric Auger
      Host kernels that expose the KVM_CAP_ARM_IRQ_LINE_LAYOUT_2 capability
      allow injection of interrupts along with vcpu ids larger than 255.
      Let's encode the vcpu id in 12 bits according to the upgraded
      KVM_IRQ_LINE ABI when needed.

      Given that we have two callsites that need to assemble the value
      for kvm_set_irq(), a new helper routine, kvm_arm_set_irq(), is
      introduced.

      Without this patch, QEMU exits with a "kvm_set_irq: Invalid
      argument" message.
      Signed-off-by: Eric Auger <eric.auger@redhat.com>
      Reported-by: Zenghui Yu <yuzenghui@huawei.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Message-id: 20191003154640.22451-3-eric.auger@redhat.com
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      f6530926
    • linux headers: update against v5.4-rc1 · f363d039
      Authored by Eric Auger
      Update the headers against commit:
      0f1a7b3fac05 ("timer-of: don't use conditional expression
      with mixed 'void' types")
      Signed-off-by: Eric Auger <eric.auger@redhat.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Message-id: 20191003154640.22451-2-eric.auger@redhat.com
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      f363d039
  2. 15 Oct 2019, 6 commits
  3. 14 Oct 2019, 21 commits
    • iotests: Test large write request to qcow2 file · a1406a92
      Authored by Max Reitz
      Without HEAD^, the following happens when you attempt a large write
      request to a qcow2 file such that the number of bytes covered by all
      clusters involved in a single allocation will exceed INT_MAX:
      
      (A) handle_alloc_space() decides to fill the whole area with zeroes and
          fails because bdrv_co_pwrite_zeroes() fails (the request is too
          large).
      
      (B) If handle_alloc_space() does not do anything, but merge_cow()
          decides that the requests can be merged, it will create a too long
          IOV that later cannot be written.
      
      (C) Otherwise, all parts will be written separately, so those requests
          will work.
      
      In either B or C, though, qcow2_alloc_cluster_link_l2() will have an
      overflow: We use an int (i) to iterate over nb_clusters, and then
      calculate the L2 entry based on "i << s->cluster_bits" -- which will
      overflow if the range covers more than INT_MAX bytes.  This then leads
      to image corruption because the L2 entry will be wrong (it will be
      recognized as a compressed cluster).
      
      Even if that were not the case, the .cow_end area would be empty
      (because handle_alloc() will cap avail_bytes and nb_bytes at INT_MAX, so
      their difference (which is the .cow_end size) will be 0).
      
      So this test checks that on such large requests, the image will not be
      corrupted.  Unfortunately, we cannot check whether COW will be handled
      correctly, because that data is discarded when it is written to null-co
      (but we have to use null-co, because writing 2 GB of data in a test is
      not quite reasonable).
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      a1406a92
    • qcow2: Limit total allocation range to INT_MAX · d1b9d19f
      Authored by Max Reitz
      When the COW areas are included, the size of an allocation can exceed
      INT_MAX.  This is kind of limited by handle_alloc() in that it already
      caps avail_bytes at INT_MAX, but the number of clusters still reflects
      the original length.
      
      This can have all sorts of effects, ranging from the storage layer write
      call failing to image corruption.  (If there were no image corruption,
      then I suppose there would be data loss because the .cow_end area is
      forced to be empty, even though there might be something we need to
      COW.)
      
      Fix all of it by limiting nb_clusters so the equivalent number of bytes
      will not exceed INT_MAX.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      d1b9d19f
    • qemu-nbd: Support help options for --object · 495bf893
      Authored by Kevin Wolf
      Instead of parsing help options as normal object properties and
      returning an error, provide the same help functionality as the system
      emulator in qemu-nbd, too.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      495bf893
    • qemu-img: Support help options for --object · c6e5cdfd
      Authored by Kevin Wolf
      Instead of parsing help options as normal object properties and
      returning an error, provide the same help functionality as the system
      emulator in qemu-img, too.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      c6e5cdfd
    • qemu-io: Support help options for --object · 4fa1f0dc
      Authored by Kevin Wolf
      Instead of parsing help options as normal object properties and
      returning an error, provide the same help functionality as the system
      emulator in qemu-io, too.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      4fa1f0dc
    • vl: Split off user_creatable_print_help() · 3e9297f3
      Authored by Kevin Wolf
      Printing help for --object is something that we not only want in the
      system emulator, but also in tools that support --object. Move it into a
      separate function in qom/object_interfaces.c to make the code accessible
      for tools.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      3e9297f3
    • iotests/028: Fix for long $TEST_DIRs · 48c8d3ce
      Authored by Max Reitz
      For long test image paths, the order of the "Formatting" line and the
      "(qemu)" prompt after a drive_backup HMP command may be reversed.  In
      fact, the interaction between the prompt and the line may prevent
      "Formatting" from being greppable at all after "read"-ing it (if the
      prompt injects an IFS character into the "Formatting" string).
      
      So just wait until we get a prompt.  At that point, the block job must
      have been started, so "info block-jobs" will only return "No active
      jobs" once it is done.
      Reported-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: John Snow <jsnow@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      48c8d3ce
    • block: Reject misaligned write requests with BDRV_REQ_NO_FALLBACK · f2208fdc
      Authored by Alberto Garcia
      The BDRV_REQ_NO_FALLBACK flag means that an operation should only be
      performed if it can be offloaded or otherwise performed efficiently.
      
      However, a misaligned write request requires a read-modify-write
      cycle, so we should return an error and let the caller decide how
      to proceed.
      
      This hits an assertion since commit c8bb23cb if the required
      alignment is larger than the cluster size:
      
      qemu-img create -f qcow2 -o cluster_size=2k img.qcow2 4G
      qemu-io -c "open -o driver=qcow2,file.align=4k blkdebug::img.qcow2" \
              -c 'write 0 512'
      qemu-io: block/io.c:1127: bdrv_driver_pwritev: Assertion `!(flags & BDRV_REQ_NO_FALLBACK)' failed.
      Aborted
      
      The reason is that when writing to an unallocated cluster we try to
      skip the copy-on-write part and zeroize it using BDRV_REQ_NO_FALLBACK
      instead, resulting in a write request that is too small (2KB cluster
      size vs 4KB required alignment).
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      f2208fdc
    • replay: add BH oneshot event for block layer · e4ec5ad4
      Authored by Pavel Dovgalyuk
      Replay is capable of recording normal BH events, but sometimes there
      are single-use callbacks scheduled with the aio_bh_schedule_oneshot()
      function. This patch enables recording and replaying such callbacks.
      The block layer uses these events for calling the completion function.
      Replaying these calls makes the execution deterministic.
      Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
      Acked-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      e4ec5ad4
    • replay: finish record/replay before closing the disks · ae25dccb
      Authored by Pavel Dovgalyuk
      After recent updates, block devices cannot be closed on QEMU exit.
      This happens due to block request polling when replay is not finished.
      Therefore we now stop recording the execution before closing the
      block devices.
      Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      ae25dccb
    • replay: don't drain/flush bdrv queue while RR is working · c8aa7895
      Authored by Pavel Dovgalyuk
      In record/replay mode the bdrv queue is controlled by the replay
      mechanism. It does not allow saving or loading snapshots while the
      bdrv queue is not empty. Stopping the VM is not blocked by a
      non-empty queue, but flushing the queue there is still impossible,
      because it may cause deadlocks in replay mode.
      This patch disables bdrv_drain_all and bdrv_flush_all in
      record/replay mode.

      Stopping the machine while I/O requests are not finished is needed
      for debugging. E.g., a breakpoint may be set at a specified step,
      and forcing the I/O requests to finish may break the determinism
      of the execution.
      Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
      Acked-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      c8aa7895
    • replay: update docs for record/replay with block devices · de499eb6
      Authored by Pavel Dovgalyuk
      This patch updates the description of the command lines for using
      record/replay with attached block devices.
      Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      de499eb6
    • replay: disable default snapshot for record/replay · 25863975
      Authored by Pavel Dovgalyuk
      This patch disables the default '-snapshot' option in record/replay
      mode. This is needed for creating vmstates in record and replay
      modes.
      Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
      Acked-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      25863975
    • block: implement bdrv_snapshot_goto for blkreplay · 3c6c4348
      Authored by Pavel Dovgalyuk
      This patch enables making snapshots with blkreplay used in
      block devices.
      The function is required so that bdrv_snapshot_goto can work
      without calling .bdrv_open, which is not implemented.
      Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
      Acked-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      3c6c4348
    • block/vhdx: add check for truncated image files · 6caaad46
      Authored by Peter Lieven
      QEMU is currently not able to detect truncated VHDX image files.
      Add a basic check at open time that all allocated blocks are
      reachable, and report all errors during bdrv_co_check.
      Signed-off-by: Peter Lieven <pl@kamp.de>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      6caaad46
    • Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20191011a' into staging · c760cb77
      Authored by Peter Maydell
      Migration pull 2019-10-11
      
      Mostly cleanups and minor fixes
      
      [Note I'm seeing a hang on the aarch64 hosted x86-64 tcg migration
      test in xbzrle; but I'm seeing that on current head as well]
      
      # gpg: Signature made Fri 11 Oct 2019 20:14:31 BST
      # gpg:                using RSA key 45F5C71B4A0CB7FB977A9FA90516331EBC5BFDE7
      # gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>" [full]
      # Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A  9FA9 0516 331E BC5B FDE7
      
      * remotes/dgilbert/tags/pull-migration-20191011a: (21 commits)
        migration: Support gtree migration
        migration/multifd: pages->used would be cleared when attach to multifd_send_state
        migration/multifd: initialize packet->magic/version once at setup stage
        migration/multifd: use pages->allocated instead of the static max
        migration/multifd: fix a typo in comment of multifd_recv_unfill_packet()
        migration/postcopy: check PostcopyState before setting to POSTCOPY_INCOMING_RUNNING
        migration/postcopy: rename postcopy_ram_enable_notify to postcopy_ram_incoming_setup
        migration/postcopy: postpone setting PostcopyState to END
        migration/postcopy: mis->have_listen_thread check will never be touched
        migration: report SaveStateEntry id and name on failure
        migration: pass in_postcopy instead of check state again
        migration/postcopy: fix typo in mark_postcopy_blocktime_begin's comment
        migration/postcopy: map large zero page in postcopy_ram_incoming_setup()
        migration/postcopy: allocate tmp_page in setup stage
        migration: Don't try and recover return path in non-postcopy
        rcu: Use automatic rc_read unlock in core memory/exec code
        migration: Use automatic rcu_read unlock in rdma.c
        migration: Use automatic rcu_read unlock in ram.c
        migration: Fix missing rcu_read_unlock
        rcu: Add automatically released rcu_read_lock variants
        ...
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      c760cb77
    • Merge remote-tracking branch 'remotes/awilliam/tags/vfio-update-20191010.0' into staging · 22dbfdec
      Authored by Peter Maydell
      VFIO update 2019-10-10
      
       - Fix MSI error path double free (Evgeny Yakovlev)
      
      # gpg: Signature made Thu 10 Oct 2019 20:07:39 BST
      # gpg:                using RSA key 239B9B6E3BB08B22
      # gpg: Good signature from "Alex Williamson <alex.williamson@redhat.com>" [full]
      # gpg:                 aka "Alex Williamson <alex@shazbot.org>" [full]
      # gpg:                 aka "Alex Williamson <alwillia@redhat.com>" [full]
      # gpg:                 aka "Alex Williamson <alex.l.williamson@gmail.com>" [full]
      # Primary key fingerprint: 42F6 C04E 540B D1A9 9E7B  8A90 239B 9B6E 3BB0 8B22
      
      * remotes/awilliam/tags/vfio-update-20191010.0:
        hw/vfio/pci: fix double free in vfio_msi_disable
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      22dbfdec
    • Merge remote-tracking branch 'remotes/gkurz/tags/9p-next-2019-10-10' into staging · c8b2bc51
      Authored by Peter Maydell
      The most notable change is that we now detect cross-device setups in the
      host, since they may cause inode number collisions and mayhem in the
      guest. A new fsdev property is added for the user to choose the
      appropriate policy to handle that: either remap all inode numbers, fail
      I/O to other host devices, or just print out a warning (default
      behaviour).
      
      This is also my last PR as _active_ maintainer of 9pfs.
      
      # gpg: Signature made Thu 10 Oct 2019 12:14:07 BST
      # gpg:                using RSA key B4828BAF943140CEF2A3491071D4D5E5822F73D6
      # gpg: Good signature from "Greg Kurz <groug@kaod.org>" [full]
      # gpg:                 aka "Gregory Kurz <gregory.kurz@free.fr>" [full]
      # gpg:                 aka "[jpeg image of size 3330]" [full]
      # Primary key fingerprint: B482 8BAF 9431 40CE F2A3  4910 71D4 D5E5 822F 73D6
      
      * remotes/gkurz/tags/9p-next-2019-10-10:
        MAINTAINERS: Downgrade status of virtio-9p to "Odd Fixes"
        9p: Use variable length suffixes for inode remapping
        9p: stat_to_qid: implement slow path
        9p: Added virtfs option 'multidevs=remap|forbid|warn'
        9p: Treat multiple devices on one export as an error
        fsdev: Add return value to fsdev_throttle_parse_opts()
        9p: Simplify error path of v9fs_device_realize_common()
        9p: unsigned type for type, version, path
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      c8b2bc51
    • Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2019-10-10' into staging · 088d6709
      Authored by Peter Maydell
      Block patches:
      - Parallelized request handling for qcow2
      - Backup job refactoring to use a filter node instead of before-write
        notifiers
      - Add discard accounting information to file-posix nodes
      - Allow trivial reopening of nbd nodes
      - Some iotest fixes
      
      # gpg: Signature made Thu 10 Oct 2019 12:40:34 BST
      # gpg:                using RSA key 91BEB60A30DB3E8857D11829F407DB0061D5CF40
      # gpg:                issuer "mreitz@redhat.com"
      # gpg: Good signature from "Max Reitz <mreitz@redhat.com>" [full]
      # Primary key fingerprint: 91BE B60A 30DB 3E88 57D1  1829 F407 DB00 61D5 CF40
      
      * remotes/maxreitz/tags/pull-block-2019-10-10: (36 commits)
        iotests/162: Fix for newer Linux 5.3+
        tests: fix I/O test for hosts defaulting to LUKSv2
        nbd: add empty .bdrv_reopen_prepare
        block/backup: use backup-top instead of write notifiers
        block: introduce backup-top filter driver
        block/block-copy: split block_copy_set_callbacks function
        block/backup: move write_flags calculation inside backup_job_create
        block/backup: move in-flight requests handling from backup to block-copy
        iotests: Use stat -c %b in 125
        iotests: Disable 125 on broken XFS versions
        iotests: Fix 125 for growth_mode = metadata
        qapi: query-blockstat: add driver specific file-posix stats
        file-posix: account discard operations
        scsi: account unmap operations
        scsi: move unmap error checking to the complete callback
        scsi: store unmap offset and nb_sectors in request struct
        ide: account UNMAP (TRIM) operations
        block: add empty account cookie type
        qapi: add unmap to BlockDeviceStats
        qapi: group BlockDeviceStats fields
        ...
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      088d6709
    • Merge remote-tracking branch 'remotes/davidhildenbrand/tags/s390x-tcg-2019-10-10' into staging · cdfc44ac
      Authored by Peter Maydell
      - MMU DAT translation rewrite and cleanup
      - Implement more TCG CPU features related to the MMU (e.g., IEP)
      - Add the current instruction length to unwind data and clean up
      - Resolve one TODO for the MVCL instruction
      
      # gpg: Signature made Thu 10 Oct 2019 12:25:06 BST
      # gpg:                using RSA key 1BD9CAAD735C4C3A460DFCCA4DDE10F700FF835A
      # gpg:                issuer "david@redhat.com"
      # gpg: Good signature from "David Hildenbrand <david@redhat.com>" [unknown]
      # gpg:                 aka "David Hildenbrand <davidhildenbrand@gmail.com>" [full]
      # Primary key fingerprint: 1BD9 CAAD 735C 4C3A 460D  FCCA 4DDE 10F7 00FF 835A
      
      * remotes/davidhildenbrand/tags/s390x-tcg-2019-10-10: (31 commits)
        s390x/tcg: MVCL: Exit to main loop if requested
        target/s390x: Remove ILEN_UNWIND
        target/s390x: Remove ilen argument from trigger_pgm_exception
        target/s390x: Remove ilen argument from trigger_access_exception
        target/s390x: Remove ILEN_AUTO
        target/s390x: Rely on unwinding in s390_cpu_virt_mem_rw
        target/s390x: Rely on unwinding in s390_cpu_tlb_fill
        target/s390x: Simplify helper_lra
        target/s390x: Remove fail variable from s390_cpu_tlb_fill
        target/s390x: Return exception from translate_pages
        target/s390x: Return exception from mmu_translate
        target/s390x: Remove exc argument to mmu_translate_asce
        target/s390x: Return exception from mmu_translate_real
        target/s390x: Handle tec in s390_cpu_tlb_fill
        target/s390x: Push trigger_pgm_exception lower in s390_cpu_tlb_fill
        target/s390x: Use tcg_s390_program_interrupt in TCG helpers
        target/s390x: Remove ilen parameter from s390_program_interrupt
        target/s390x: Remove ilen parameter from tcg_s390_program_interrupt
        target/s390x: Add ilen to unwind data
        s390x/cpumodel: Add new TCG features to QEMU cpu model
        ...
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      cdfc44ac
    • test-bdrv-drain: fix iothread_join() hang · 69de4844
      Authored by Stefan Hajnoczi
      tests/test-bdrv-drain can hang in tests/iothread.c:iothread_run():
      
        while (!atomic_read(&iothread->stopping)) {
            aio_poll(iothread->ctx, true);
        }
      
      The iothread_join() function works as follows:
      
        void iothread_join(IOThread *iothread)
        {
            iothread->stopping = true;
            aio_notify(iothread->ctx);
            qemu_thread_join(&iothread->thread);
        }
      
      If iothread_run() checks iothread->stopping before the iothread_join()
      thread sets stopping to true, then aio_notify() may be optimized away
      and iothread_run() hangs forever in aio_poll().
      
      The correct way to change iothread->stopping is from a BH that executes
      within iothread_run().  This ensures that iothread->stopping is checked
      after we set it to true.
      
      This was already fixed for ./iothread.c (note this is a different source
      file!) by commit 2362a28e ("iothread:
      fix iothread_stop() race condition"), but not for tests/iothread.c.
      
      Fixes: 0c330a73
             ("aio: introduce aio_co_schedule and aio_co_wake")
      Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20191003100103.331-1-stefanha@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      69de4844
  4. 12 Oct 2019, 2 commits
  5. 11 Oct 2019, 5 commits