1. 12 Dec 2018, 5 commits
  2. 07 Dec 2018, 1 commit
  3. 06 Dec 2018, 1 commit
  4. 05 Dec 2018, 1 commit
  5. 04 Dec 2018, 10 commits
  6. 03 Dec 2018, 4 commits
    • iotests: simple mirror test with kvm on 1G image · db5e8210
      Vladimir Sementsov-Ogievskiy authored
      This test is broken without the previous commit, which fixes the
      deadlock in mirror.
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Acked-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • mirror: fix dead-lock · d12ade57
      Vladimir Sementsov-Ogievskiy authored
      Let's start from the beginning:
      
      Commit b9e413dd (in 2.9)
      "block: explicitly acquire aiocontext in aio callbacks that need it"
      added aio_context_acquire/release pairs to mirror_write_complete and
      mirror_read_complete, which at the time were aio callbacks for
      blk_aio_* calls.
      
      Then commit 2e1990b2 (in 3.0) "block/mirror: Convert to coroutines"
      dropped these blk_aio_* calls, so mirror_write_complete and
      mirror_read_complete are no longer callbacks and don't need the extra
      aio context acquisition. Worse, mirror_read_complete now calls
      blk_co_pwritev inside this aio_context_acquire/release pair, which
      leads to the following deadlock with mirror:
      
       (gdb) info thr
         Id   Target Id         Frame
         3    Thread (LWP 145412) "qemu-system-x86" syscall ()
         2    Thread (LWP 145416) "qemu-system-x86" __lll_lock_wait ()
       * 1    Thread (LWP 145411) "qemu-system-x86" __lll_lock_wait ()
      
       (gdb) bt
       #0  __lll_lock_wait ()
       #1  _L_lock_812 ()
       #2  __GI___pthread_mutex_lock
       #3  qemu_mutex_lock_impl (mutex=0x561032dce420 <qemu_global_mutex>,
           file=0x5610327d8654 "util/main-loop.c", line=236) at
           util/qemu-thread-posix.c:66
       #4  qemu_mutex_lock_iothread_impl
       #5  os_host_main_loop_wait (timeout=480116000) at util/main-loop.c:236
       #6  main_loop_wait (nonblocking=0) at util/main-loop.c:497
       #7  main_loop () at vl.c:1892
       #8  main
      
      Printing the contents of qemu_global_mutex shows "__owner = 145416",
      so thr1 is the main loop, and it now wants the BQL, which is owned by
      thr2.
      
       (gdb) thr 2
       (gdb) bt
       #0  __lll_lock_wait ()
       #1  _L_lock_870 ()
       #2  __GI___pthread_mutex_lock
       #3  qemu_mutex_lock_impl (mutex=0x561034d25dc0, ...
       #4  aio_context_acquire (ctx=0x561034d25d60)
       #5  dma_blk_cb
       #6  dma_blk_io
       #7  dma_blk_read
       #8  ide_dma_cb
       #9  bmdma_cmd_writeb
       #10 bmdma_write
       #11 memory_region_write_accessor
       #12 access_with_adjusted_size
       #15 flatview_write
       #16 address_space_write
       #17 address_space_rw
       #18 kvm_handle_io
       #19 kvm_cpu_exec
       #20 qemu_kvm_cpu_thread_fn
       #21 qemu_thread_start
       #22 start_thread
       #23 clone ()
      
      Printing the mutex in frame 2 shows "__owner = 145411", so thr2 wants
      the aio context mutex, which is owned by thr1. A classic deadlock.
      
      Now let's confirm that the aio context is held by the mirror
      coroutine: just print the coroutine stack of the first tracked request
      in the mirror job target:
      
       (gdb) [...]
       (gdb) qemu coroutine 0x561035dd0860
       #0  qemu_coroutine_switch
       #1  qemu_coroutine_yield
       #2  qemu_co_mutex_lock_slowpath
       #3  qemu_co_mutex_lock
       #4  qcow2_co_pwritev
       #5  bdrv_driver_pwritev
       #6  bdrv_aligned_pwritev
       #7  bdrv_co_pwritev
       #8  blk_co_pwritev
       #9  mirror_read_complete () at block/mirror.c:232
       #10 mirror_co_read () at block/mirror.c:370
       #11 coroutine_trampoline
       #12 __start_context
      
      Yes, it is mirror_read_complete calling blk_co_pwritev after acquiring
      the aio context (a minimal standalone illustration of this
      lock-ordering deadlock follows this entry).
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
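      The backtraces above are a textbook AB-BA deadlock: one thread holds
      the AioContext lock and wants the BQL, while the other holds the BQL
      and wants the AioContext lock. Below is a minimal, self-contained
      pthreads sketch of the same shape (illustrative only, not QEMU code;
      the mutex and thread names merely echo the roles above); running it
      simply hangs:

       #include <pthread.h>
       #include <stdio.h>
       #include <unistd.h>

       /* Illustrative stand-ins: "bql" for the big QEMU lock, "aio_ctx" for
        * the AioContext lock.  Not QEMU code. */
       static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;
       static pthread_mutex_t aio_ctx = PTHREAD_MUTEX_INITIALIZER;

       static void *main_loop_thread(void *arg)
       {
           (void)arg;
           pthread_mutex_lock(&aio_ctx);   /* like the mirror coroutine holding the AioContext */
           sleep(1);                       /* widen the race window */
           pthread_mutex_lock(&bql);       /* ...and then waiting for the BQL */
           pthread_mutex_unlock(&bql);
           pthread_mutex_unlock(&aio_ctx);
           return NULL;
       }

       static void *vcpu_thread(void *arg)
       {
           (void)arg;
           pthread_mutex_lock(&bql);       /* like the KVM vCPU thread holding the BQL */
           sleep(1);
           pthread_mutex_lock(&aio_ctx);   /* ...and then waiting for the AioContext */
           pthread_mutex_unlock(&aio_ctx);
           pthread_mutex_unlock(&bql);
           return NULL;
       }

       int main(void)
       {
           pthread_t t1, t2;
           pthread_create(&t1, NULL, main_loop_thread, NULL);
           pthread_create(&t2, NULL, vcpu_thread, NULL);
           pthread_join(t1, NULL);         /* never returns: each thread waits for the other's lock */
           pthread_join(t2, NULL);
           printf("not reached\n");
           return 0;
       }

      (Compile with -pthread; the program hangs by design. The commit message
      above implies the fix is simply to drop the now-unneeded
      acquire/release pair from the coroutine path, so the two locks are no
      longer taken in opposite orders.)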
    • i386: hvf: Fix overrun of _decode_tbl1 · 83ea23cd
      Roman Bolshakov authored
      Single-opcode instructions in the ff group were processed incorrectly
      because an overrun at _decode_tbl1[0xff] resulted in an access of
      _decode_tbl2[0x0], so decode_sldtgroup was called instead of
      decode_ffgroup:
        7d71: decode_sldtgroup: 1
        Unimplemented handler (7d71) for 108 (ff 0)

      While at it, correct the maximum length for _decode_tbl2 and
      _decode_tbl3 as well (a simplified illustration of the off-by-one
      follows this entry).
      Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
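      The symptom points to an off-by-one in the table size: a lookup table
      indexed by a full 8-bit opcode needs 0x100 entries, so a table one
      entry short leaves opcode 0xff reading past the end and into whatever
      object the linker placed next (here, the start of _decode_tbl2).
      Below is a simplified, self-contained sketch of the mistake and the
      fix; the names are illustrative, not the hvf decoder sources:

       #include <stdio.h>

       typedef void (*handler_fn)(void);

       static void handle_ff_group(void) { puts("ff group handler"); }

       /* BUG (sketch): 0xff entries cover opcodes 0x00..0xfe only, so indexing
        * with opcode 0xff is out of bounds and, depending on layout, may alias
        * the first entry of the next table:
        *
        *     static handler_fn decode_tbl1[0xff];
        *
        * FIX (sketch): one slot per possible 8-bit opcode. */
       static handler_fn decode_tbl1[0xff + 1];

       int main(void)
       {
           decode_tbl1[0xff] = handle_ff_group;   /* the last opcode now has its own slot */
           decode_tbl1[0xff]();
           return 0;
       }

      The same reasoning applies to the maximum lengths of _decode_tbl2 and
      _decode_tbl3 mentioned in the message.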
    • i2c: Add a length check to the SMBus write handling · 629457a1
      Corey Minyard authored
      Avoid an overflow in the SMBus write handling by checking the length
      of the incoming data (a simplified sketch of such a bounds check
      follows this entry).
      Signed-off-by: Corey Minyard <cminyard@mvista.com>
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Cc: QEMU Stable <qemu-stable@nongnu.org>
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
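      The commit message is terse, so here is a self-contained sketch of the
      general pattern: a fixed-size receive buffer must reject (or truncate)
      further bytes once it is full, otherwise a guest can overflow it. The
      structure name and buffer size are illustrative, not the QEMU SMBus
      code:

       #include <stdint.h>
       #include <stdio.h>
       #include <string.h>

       #define DATA_BUF_SIZE 34    /* illustrative size for an SMBus block transfer */

       struct smbus_sketch_dev {
           uint8_t data_buf[DATA_BUF_SIZE];
           int data_len;
       };

       /* Without the length check, a guest that keeps sending bytes would walk
        * past data_buf and corrupt whatever follows it. */
       static void receive_byte(struct smbus_sketch_dev *dev, uint8_t byte)
       {
           if (dev->data_len >= (int)sizeof(dev->data_buf)) {
               return;             /* drop excess bytes instead of overflowing */
           }
           dev->data_buf[dev->data_len++] = byte;
       }

       int main(void)
       {
           struct smbus_sketch_dev dev;

           memset(&dev, 0, sizeof(dev));
           for (int i = 0; i < 1000; i++) {    /* far more bytes than the buffer holds */
               receive_byte(&dev, (uint8_t)i);
           }
           printf("stored %d of 1000 bytes\n", dev.data_len);  /* stays capped at 34 */
           return 0;
       }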
  7. 01 Dec 2018, 3 commits
    • nbd/client: Send NBD_CMD_DISC if open fails after connect · c688e6ca
      Eric Blake authored
      If nbd_client_init() fails after we are already connected,
      then the server will spam the logs with:
      
      Disconnect client, due to: Unexpected end-of-file before all bytes were read
      
      unless we gracefully disconnect before closing the connection (the
      error-path ordering is sketched after this entry).
      
      Ways to trigger this:
      
      $ opts=driver=nbd,export=foo,server.type=inet,server.host=localhost,server.port=10809
      $  qemu-img map --output=json --image-opts $opts,read-only=off
      $  qemu-img map --output=json --image-opts $opts,x-dirty-bitmap=nosuch:
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20181130023232.3079982-4-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
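      The point of the fix is the ordering on the error path: once the
      connection is established, a failure during client setup should still
      send NBD_CMD_DISC before the socket is closed, so the server sees a
      clean goodbye instead of an unexpected EOF. Below is a self-contained
      sketch of that control flow; the helper functions are hypothetical
      stubs, not QEMU's NBD client API:

       #include <stdbool.h>
       #include <stdio.h>

       /* Hypothetical stubs standing in for the real connect/setup/teardown code. */
       static bool nbd_connect_stub(void)      { puts("connected");         return true;  }
       static bool nbd_client_setup_stub(void) { puts("setup failed");      return false; }
       static void nbd_send_disc_stub(void)    { puts("sent NBD_CMD_DISC");               }
       static void nbd_close_stub(void)        { puts("closed socket");                   }

       int main(void)
       {
           if (!nbd_connect_stub()) {
               return 1;                   /* not connected yet: nothing to tear down */
           }
           if (!nbd_client_setup_stub()) {
               nbd_send_disc_stub();       /* graceful disconnect: no server-side EOF spam */
               nbd_close_stub();
               return 1;
           }
           /* ... normal operation ... */
           nbd_close_stub();
           return 0;
       }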
    • nbd/client: Make x-dirty-bitmap more reliable · 47829c40
      Eric Blake authored
      The implementation of x-dirty-bitmap in qemu 3.0 (commit 216ee365)
      silently falls back to treating the server as not supporting
      NBD_CMD_BLOCK_STATUS if a requested meta_context name was not
      negotiated, which in turn means treating the _entire_ image as
      data. Since our hack relied on using 'qemu-img map' to view
      which portions of the image were dirty by seeing what the
      redirected bdrv_block_status() treats as holes, this means
      that our fallback treats the entire image as clean.  Better
      would have been to treat the entire image as dirty, or to fail
      to connect because the user's request for a specific context
      could not be honored. This patch goes with the latter.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20181130023232.3079982-3-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • nbd/server: Advertise all contexts in response to bare LIST · e31d8024
      Eric Blake authored
      The NBD spec, and even our own code comment, say that if the client
      asks for NBD_OPT_LIST_META_CONTEXT with 0 queries, then we should
      reply with (a possibly-compressed representation of) ALL contexts
      that we are willing to let them try.  But commit 3d068aff forgot
      to advertise qemu:dirty-bitmap:FOO (a schematic sketch of the intended
      behavior follows this entry).
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20181130023232.3079982-2-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
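      A schematic sketch of the rule being enforced (illustrative only, not
      QEMU's NBD server code): a bare NBD_OPT_LIST_META_CONTEXT, i.e. zero
      queries, must advertise every context the server is willing to
      negotiate, including any qemu:dirty-bitmap:FOO context, not just
      base:allocation. The function and parameter names below are
      hypothetical:

       #include <stdio.h>

       static void advertise(const char *context)
       {
           printf("advertise meta context: %s\n", context);
       }

       /* dirty_bitmap_ctx may be NULL when no bitmap is exported. */
       static void handle_list_meta_context(int nb_queries,
                                            const char *const *queries,
                                            const char *dirty_bitmap_ctx)
       {
           if (nb_queries == 0) {
               /* Bare LIST: advertise everything we would accept. */
               advertise("base:allocation");
               if (dirty_bitmap_ctx) {
                   advertise(dirty_bitmap_ctx);    /* e.g. "qemu:dirty-bitmap:FOO" */
               }
               return;
           }
           /* Otherwise only advertise contexts matching the client's queries... */
           (void)queries;
       }

       int main(void)
       {
           handle_list_meta_context(0, NULL, "qemu:dirty-bitmap:FOO");
           return 0;
       }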
  8. 29 Nov 2018, 1 commit
  9. 28 Nov 2018, 7 commits
  10. 27 Nov 2018, 7 commits