1. 30 Aug, 2014 3 commits
  2. 29 Aug, 2014 9 commits
    • curl: Don't deref NULL pointer in call to aio_poll. · a2f468e4
      Committed by Richard W.M. Jones
      In commit 63f0f45f the following
      mechanical change was made:
      
               if (!state) {
      -            qemu_aio_wait();
      +            aio_poll(state->s->aio_context, true);
               }
      
      The new code checks whether state is NULL and then dereferences it
      ('state->s'), which is obviously incorrect.
      
      This commit replaces state->s->aio_context with
      bdrv_get_aio_context(bs), fixing this problem.  The two other hunks
      are concerned with getting the BlockDriverState pointer bs to where it
      is needed.
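
      After the fix, the hunk reads as follows (with bs, the
      BlockDriverState, threaded through by the other two hunks):

               if (!state) {
                   aio_poll(bdrv_get_aio_context(bs), true);
               }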
      
      The original bug causes a segfault when using libguestfs to access a
      VMware vCenter Server and doing any kind of complex read-heavy
      operations.  With this commit the segfault goes away.
      Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • curl: Allow a cookie or cookies to be sent with http/https requests. · a94f83d9
      Committed by Richard W.M. Jones
      In order to access VMware ESX efficiently, we need to send a session
      cookie.  This patch is very simple and just allows you to send that
      session cookie.  It punts on the question of how you get the session
      cookie in the first place, but in practice you can just run a `curl'
      command against the server and extract the cookie that way.
      
      To use it, add file.cookie to the curl URL.  For example:
      
      $ qemu-img info 'json: {
          "file.driver":"https",
          "file.url":"https://vcenter/folder/Windows%202003/Windows%202003-flat.vmdk?dcPath=Datacenter&dsName=datastore1",
          "file.sslverify":"off",
          "file.cookie":"vmware_soap_session=\"52a01262-bf93-ccce-d379-8dabb3e55560\""}'
      image: [...]
      file format: raw
      virtual size: 8.0G (8589934592 bytes)
      disk size: unavailable
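
      Internally this plausibly reduces to a single libcurl call; a
      minimal sketch, assuming the option string is stored in the driver
      state as s->cookie (CURLOPT_COOKIE itself is a standard libcurl
      option):

          if (s->cookie) {
              curl_easy_setopt(state->curl, CURLOPT_COOKIE, s->cookie);
          }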
      Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • linux-aio: avoid deadlock in nested aio_poll() calls · 2cdff7f6
      Committed by Stefan Hajnoczi
      If two Linux AIO request completions are fetched in the same
      io_getevents() call, QEMU will deadlock if request A's callback waits
      for request B to complete using an aio_poll() loop.  This was reported
      to happen with the mirror blockjob.
      
      This patch moves completion processing into a BH and makes it resumable.
      Nested event loops can resume completion processing so that request B
      will complete and the deadlock will not occur.
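
      A sketch of the resumable-BH scheme, with state-field and helper
      names (event_idx, event_max, process_one_completion) assumed for
      illustration; the real code lives in block/linux-aio.c:

          static void qemu_laio_completion_bh(void *opaque)
          {
              struct qemu_laio_state *s = opaque;

              /* Fetch a new batch only when the previous batch has been
               * fully consumed; otherwise resume where we left off. */
              if (s->event_idx == s->event_max) {
                  struct timespec ts = { 0, 0 };
                  s->event_max = io_getevents(s->ctx, 0, MAX_EVENTS,
                                              s->events, &ts);
                  s->event_idx = 0;
                  if (s->event_max <= 0) {
                      s->event_max = 0;
                      return;   /* no events ready */
                  }
              }

              /* Advance the index before invoking each callback, so a
               * nested aio_poll() that re-enters this BH picks up the
               * next pending event instead of waiting on it forever. */
              while (s->event_idx < s->event_max) {
                  struct io_event *ev = &s->events[s->event_idx++];
                  process_one_completion(s, ev);
              }
          }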
      
      Cc: Kevin Wolf <kwolf@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Ming Lei <ming.lei@canonical.com>
      Cc: Marcin Gibuła <m.gibula@beyond.pl>
      Reported-by: Marcin Gibuła <m.gibula@beyond.pl>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Tested-by: Marcin Gibuła <m.gibula@beyond.pl>
    • sheepdog: fix a core dump while doing auto-reconnect · a780dea0
      Committed by Liu Yuan
      We should reinitialize local_err to NULL inside the while loop, or
      g_free() will report corruption and abort QEMU when the sheepdog
      driver tries to reconnect.
      
      This was broken in commit 356b4ca2.
      
      qemu-system-x86_64: failed to get the header, Resource temporarily unavailable
      qemu-system-x86_64: Failed to connect to socket: Connection refused
      qemu-system-x86_64: (null)
      [xcb] Unknown sequence number while awaiting reply
      [xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
      [xcb] Aborting, sorry about that.
      qemu-system-x86_64: ../../src/xcb_io.c:298: poll_for_response: Assertion `!xcb_xlib_threads_sequence_lost' failed.
      Aborted (core dumped)
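
      The shape of the fix is simply to give each loop iteration a fresh
      error pointer; a sketch of the pattern (simplified from the
      reconnect loop in block/sheepdog.c):

          while (s->fd < 0) {
              Error *local_err = NULL;   /* reinitialized every iteration */

              s->fd = get_sheep_fd(s, &local_err);
              if (local_err) {
                  error_report("%s", error_get_pretty(local_err));
                  error_free(local_err); /* safe: never a stale pointer */
              }
          }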
      
      Cc: qemu-devel@nongnu.org
      Cc: Markus Armbruster <armbru@redhat.com>
      Cc: Kevin Wolf <kwolf@redhat.com>
      Cc: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Liu Yuan <namei.unix@gmail.com>
      Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • aio-win32: add support for sockets · b493317d
      Committed by Paolo Bonzini
      Uses the same select/WSAEventSelect scheme as main-loop.c.
      WSAEventSelect() is edge-triggered, so it cannot be used
      directly, but it is still used as a way to exit from a
      blocking g_poll().
      
      Before g_poll() is called, we poll sockets with a non-blocking
      select() to achieve the level-triggered semantics we require:
      if a socket is ready, the g_poll() is made non-blocking too.
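
      A sketch of that trick using the standard select() and g_poll()
      calls (simplified fragment; the real logic is in aio-win32.c):

          fd_set rfds, wfds;
          struct timeval tv0 = { 0, 0 };
          int timeout = blocking ? -1 : 0;

          FD_ZERO(&rfds);
          FD_ZERO(&wfds);
          /* ... FD_SET() each socket we are interested in ... */

          /* A non-blocking select() gives level-triggered readiness:
           * if any socket is already ready, g_poll() must not block,
           * or we would sit on that readiness until the next edge. */
          if (select(0, &rfds, &wfds, NULL, &tv0) > 0) {
              timeout = 0;
          }
          g_poll(pollfds, npfd, timeout);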
      
      Based on a patch from Or Goshen.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • block/quorum: add simple read pattern support · a9db86b2
      Committed by Liu Yuan
      This patch adds a single-read pattern to the quorum driver; quorum
      vote remains the default pattern.
      
      For now we do a quorum vote on all reads. This is designed for
      unreliable underlying storage, such as non-redundant NFS, to ensure
      data integrity at the cost of read performance.
      
      Consider use cases like the following:
      
              VM
        --------------
        |            |
        v            v
        A            B
      
      Both A and B have hardware RAID storage that ensures data integrity
      on its own, so doing a single read instead of reading from all the
      nodes would help performance. Further, if we run the VM on either of
      the storage nodes, we can make a local read request for better
      performance.
      
      This patch generalizes the above 2-node case to N nodes. That is,
      the VM writes to all N nodes but reads from just one. If that single
      read fails, we try the next node in the FIFO order specified by the
      startup command.
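
      In FIFO mode the read path reduces to something like the following
      sketch (synchronous and simplified; the real quorum code issues
      asynchronous requests, and field names such as num_children are
      assumptions):

          for (i = 0; i < s->num_children; i++) {
              ret = bdrv_co_readv(s->bs[i], sector_num, nb_sectors, qiov);
              if (ret >= 0) {
                  return ret;        /* first successful child wins */
              }
          }
          return ret;                /* every child failed */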
      
      The 2-node case is very similar to DRBD[1], though for now it lacks
      auto-sync functionality after a single device/node failure. But
      compared with DRBD we still have some advantages:
      
      - Suppose we have 20 VMs running on one node (say A) of a 2-node
      DRBD-backed storage setup. If A crashes, we need to restart all the
      VMs on node B, but in practice we can't, because B might not have
      enough resources to set up 20 VMs at once. If we instead run our 20
      VMs with the quorum driver and scatter the replicated images over
      the data center, we can very likely restart all 20 VMs without any
      resource problem.
      
      After all, I think we can build more powerful replicated-image
      functionality on top of quorum and block jobs (block mirror) to meet
      various High Availability needs.
      
      E.g., enable the single-read pattern on 2 children:
      
      -drive driver=quorum,children.0.file.filename=0.qcow2,\
      children.1.file.filename=1.qcow2,read-pattern=fifo,vote-threshold=1
      
      [1] http://en.wikipedia.org/wiki/Distributed_Replicated_Block_Device
      
      [Dropped \n from an error_setg() error message
      --Stefan]
      
      Cc: Benoit Canet <benoit@irqsave.net>
      Cc: Eric Blake <eblake@redhat.com>
      Cc: Kevin Wolf <kwolf@redhat.com>
      Cc: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: Liu Yuan <namei.unix@gmail.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • sheepdog: improve error handling for a case of failed lock · 38890b24
      Committed by Hitoshi Mitake
      Recently, sheepdog revived its VDI locking functionality. This patch
      updates QEMU's sheepdog driver for that feature: it changes the
      error code returned when locking fails, since -EBUSY is the suitable
      one.
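      A sketch of the mapping; the result-code name is taken from the
      sheepdog protocol and, like the default branch, is assumed here
      purely for illustration:

          switch (rsp->result) {
          case SD_RES_VDI_LOCKED:
              ret = -EBUSY;      /* another client holds the VDI lock */
              break;
          default:
              ret = -EIO;
              break;
          }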
      Reported-by: Valerio Pachera <sirio81@gmail.com>
      Cc: Kevin Wolf <kwolf@redhat.com>
      Cc: Stefan Hajnoczi <stefanha@redhat.com>
      Cc: Liu Yuan <namei.unix@gmail.com>
      Cc: MORITA Kazutaka <morita.kazutaka@gmail.com>
      Signed-off-by: Hitoshi Mitake <mitake.hitoshi@lab.ntt.co.jp>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • sheepdog: adopt protocol update for VDI locking · 1dbfafed
      Committed by Hitoshi Mitake
      The update is required to support iSCSI multipath. It doesn't affect
      the behavior of the QEMU driver, but it requires adding a new field
      to the vdi request struct.
      
      Cc: Kevin Wolf <kwolf@redhat.com>
      Cc: Stefan Hajnoczi <stefanha@redhat.com>
      Cc: Liu Yuan <namei.unix@gmail.com>
      Cc: MORITA Kazutaka <morita.kazutaka@gmail.com>
      Signed-off-by: Hitoshi Mitake <mitake.hitoshi@lab.ntt.co.jp>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • block.curl: adding 'timeout' option · 212aefaa
      Committed by Daniel Henrique Barboza
      The hardcoded curl timeout (5 seconds) is sometimes not long enough,
      depending on the remote server configuration and network traffic.
      The user should be able to set how long they are willing to wait for
      the connection.
      
      Adding a new option to set this timeout gives the user this
      flexibility. The previous default timeout of 5 seconds will be
      used if this option is not present.
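
      A sketch of how the option plausibly flows to libcurl; the option
      and field names here are assumptions, while CURLOPT_TIMEOUT itself
      is a standard libcurl option:

          /* At open time: read the option, keeping the old default. */
          s->timeout = qemu_opt_get_number(opts, "timeout",
                                           5 /* old hardcoded default */);

          /* When setting up a connection, hand it to libcurl. */
          curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, (long)s->timeout);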
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
      Reviewed-by: Benoit Canet <benoit.canet@nodalink.com>
      Tested-by: Richard W.M. Jones <rjones@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  3. 28 Aug, 2014 1 commit
  4. 26 Aug, 2014 1 commit
  5. 22 Aug, 2014 4 commits
  6. 21 Aug, 2014 1 commit
  7. 20 Aug, 2014 10 commits
  8. 16 Aug, 2014 6 commits
  9. 15 Aug, 2014 5 commits