1. 13 August 2012, 7 commits
    • client rpc: Separate call creation from running IO loop · 400a5a92
      Committed by Jiri Denemark
      This makes it possible to create and queue new calls while the IO
      loop is running.
      (cherry picked from commit c57103e5)
      400a5a92
    • client rpc: Drop unused return value of virNetClientSendNonBlock · 419cb872
      Committed by Jiri Denemark
      As we never drop non-blocking calls, the return value that used to
      indicate a call was dropped is no longer needed.
      (cherry picked from commit ca9b13e3)
      419cb872
    • client rpc: Just queue non-blocking call if another thread has the buck · 4779cf0f
      Committed by Jiri Denemark
      As non-blocking calls are no longer dropped, we don't really need to
      care that much about their fate and wait for the thread with the buck
      to process them. If another thread has the buck, we can just push a
      non-blocking call to the queue and be done with it.
      (cherry picked from commit ef392614)
      4779cf0f
    • client rpc: Don't drop non-blocking calls · 5badf8c4
      Committed by Jiri Denemark
      So far, we were dropping non-blocking calls whenever sending them would
      block. In case a client is sending lots of stream calls (which are not
      supposed to generate any reply), the assumption that having other calls
      in a queue is sufficient to get a reply from the server doesn't work. I
      tried to fix this in b1e374a7 but
      failed and reverted that commit.
      
      With this patch, non-blocking calls are never dropped (unless the
      connection is being closed) and will always be sent.
      (cherry picked from commit 78602c4e)
      5badf8c4
    • client rpc: Use event loop for writing · 8cb0d089
      Committed by Jiri Denemark
      Normally, when every call has a thread associated with it, the thread
      may get the buck and be in charge of sending all calls until its own
      call is done. When we introduced non-blocking calls, we had to add
      special handling for new non-blocking calls. This patch uses the event
      loop to send data if there is no thread to get the buck, so that any
      non-blocking calls left in the queue are properly sent without having
      to handle them specially. It also avoids adding even more cruft to the
      client IO loop in the following patches.
      
      With this change in, non-blocking calls may see unpredictable delays in
      delivery when the client has no event loop registered. However, the only
      non-blocking calls we have are keepalives and we already require event
      loop for them, which makes this a non-issue until someone introduces new
      non-blocking calls.
      (cherry picked from commit 9e747e5c)
      8cb0d089
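      Taken together with the previous two patches, the send path for a
      non-blocking call roughly takes the following shape. This is an
      illustrative sketch only, not libvirt's actual code; the struct and
      helper names are made up:

        #include <stdbool.h>
        #include <stddef.h>

        struct call { struct call *next; /* payload elided */ };

        struct client {
            bool haveTheBuck;     /* another thread is driving socket I/O      */
            bool haveEventLoop;   /* an event loop can flush the queue for us  */
            struct call *queue;   /* calls waiting to be written to the socket */
        };

        /* Illustrative stubs. */
        static void queue_call(struct client *c, struct call *call)
        {
            struct call **tail = &c->queue;
            while (*tail)
                tail = &(*tail)->next;
            call->next = NULL;
            *tail = call;
        }
        static void enable_write_events(struct client *c) { (void)c; }
        static void try_flush_queue(struct client *c) { (void)c; }

        /* Called with the client lock held. */
        static void send_nonblocking(struct client *c, struct call *call)
        {
            queue_call(c, call);

            if (c->haveTheBuck)       /* the buck holder will send it for us */
                return;

            if (c->haveEventLoop) {   /* the event loop will flush the queue */
                enable_write_events(c);
                return;
            }

            /* Nobody will send it for us: try one non-blocking write pass now
             * (details elided); whatever is left stays queued, which is why
             * delivery may be delayed when no event loop is registered. */
            try_flush_queue(c);
        }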
    • client rpc: Improve debug messages in virNetClientIO · b1dcd198
      Committed by Jiri Denemark
      When analyzing our debug log, I'm always confused about what each of the
      pointers mean. Let's be explicit.
      (cherry picked from commit 71689f95)
      b1dcd198
    • keepalive: Add ability to disable keepalive messages · 6f429469
      Committed by Peter Krempa
      The docs for virConnectSetKeepAlive() advertise that this function
      should be able to disable keepalives when the interval is negative or
      zero.

      This patch removes the check that prohibited this and adds code to
      disable keepalives on a negative/zero interval.
      
      * src/libvirt.c: virConnectSetKeepAlive(): - remove check for negative
                                                   values
      * src/rpc/virnetclient.c
      * src/rpc/virnetclient.h: - add virNetClientKeepAliveStop() to disable
                                  keepalive messages
      * src/remote/remote_driver.c: remoteSetKeepAlive(): - add ability to
                                                             disable keepalives
      (cherry picked from commit 6446a9e2)
      6f429469
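      As a rough usage sketch of the public API this affects (error handling
      trimmed; the application is assumed to register and iterate the default
      event loop, which keepalives require):

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        int main(void)
        {
            if (virEventRegisterDefaultImpl() < 0)
                return 1;

            virConnectPtr conn = virConnectOpen("qemu:///system");
            if (!conn)
                return 1;

            /* Send a keepalive probe after 5 seconds of inactivity and give
             * up after 3 unanswered probes. */
            if (virConnectSetKeepAlive(conn, 5, 3) < 0)
                fprintf(stderr, "failed to enable keepalive\n");

            /* With the patch above, a zero or negative interval disables
             * keepalives again instead of being rejected. */
            if (virConnectSetKeepAlive(conn, -1, 0) < 0)
                fprintf(stderr, "failed to disable keepalive\n");

            virConnectClose(conn);
            return 0;
        }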
  2. 15 June 2012, 1 commit
    • Revert "rpc: Discard non-blocking calls only when necessary" · d4d87744
      Committed by Jiri Denemark
      This reverts commit b1e374a7, which was
      rather bad since I failed to consider all sides of the issue. The main
      things I didn't consider properly are:
      
      - a thread which sends a non-blocking call waits for the thread with
        the buck to process the call
      - the code doesn't expect non-blocking calls to remain in the queue
        unless they were already partially sent
      
      Thus, the reverted patch actually breaks more than it fixes, and
      clients (which may even be libvirtd during p2p migrations) will likely
      end up in a deadlock.
      (cherry picked from commit 63643f67)
      d4d87744
  3. 27 April 2012, 1 commit
    • rpc: Discard non-blocking calls only when necessary · 0129b9ac
      Committed by Jiri Denemark
      Currently, non-blocking calls are either sent immediately or discarded
      in case sending would block. This was implemented based on the
      assumption that the non-blocking keepalive call is not needed as there
      are other calls in the queue which would keep the connection alive.
      However, if those calls are no-reply calls (such as those carrying
      stream data), the remote party knows the connection is alive but since
      we don't get any reply from it, we think the connection is dead.
      
      This is most visible in tunnelled migration. If the migration takes
      longer than the keepalive timeout (30s by default), it may be
      unexpectedly aborted because the connection is considered to be dead.
      
      With this patch, we only discard non-blocking calls when the last call
      with a thread is completed and thus there is no thread left to keep
      sending the remaining non-blocking calls.
      0129b9ac
  4. 30 March 2012, 1 commit
  5. 05 March 2012, 1 commit
    • rpc: Fix client crash on connection close · 720bee30
      Committed by Jiri Denemark
      A multi-threaded client with an event loop may crash if one of its
      threads closes a connection while the event loop is in the middle of
      sending a keep-alive message (either request or response). The right
      place for it is inside virNetClientIOEventLoop(), between poll() and
      virNetClientLock(). We should only close a connection directly if no
      one is using it and defer the closing to the last user otherwise. So
      far we only did so when the close was initiated by a keep-alive
      timeout.
      720bee30
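      The "defer the close to the last user" idea can be sketched roughly as
      follows (illustrative only, not libvirt's implementation; the field and
      function names are made up):

        #include <pthread.h>
        #include <stdbool.h>

        struct client {
            pthread_mutex_t lock;
            int nusers;        /* threads currently inside the I/O code       */
            bool wantClose;    /* a close was requested while the I/O is busy */
        };

        static void close_socket(struct client *c) { (void)c; /* elided */ }

        /* Close request: only close directly if nobody is using the socket. */
        static void client_close(struct client *c)
        {
            pthread_mutex_lock(&c->lock);
            if (c->nusers == 0)
                close_socket(c);
            else
                c->wantClose = true;   /* the last user performs the close */
            pthread_mutex_unlock(&c->lock);
        }

        /* Called by an I/O thread when it leaves the poll()/dispatch section. */
        static void client_io_exit(struct client *c)
        {
            pthread_mutex_lock(&c->lock);
            if (--c->nusers == 0 && c->wantClose)
                close_socket(c);
            pthread_mutex_unlock(&c->lock);
        }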
  6. 12 January 2012, 1 commit
    • stream: Check for stream EOF · 833b901c
      Committed by Michal Privoznik
      If a client stream does not have any data to sink and has not received
      EOF, a dummy packet is sent to the daemon signalling that the client
      is ready to sink some data. However, after we added an event loop to
      the client, a race may occur:

      Thread 1 calls virNetClientStreamRecvPacket and, since no data is
      cached and the stream has not seen EOF, it decides to send a dummy
      packet to the server, which will send some data in turn. However,
      between this decision and the actual message exchange with the server -

      Thread 2 receives the last stream data from the server. Therefore EOF
      is set on the stream and, if there is a call waiting (which there is
      not yet), it is woken up. However, Thread 1 hasn't sent anything so
      far, so there is no call to be woken up. So Thread 1 sends the dummy
      packet to the daemon, which ignores it as no stream is associated with
      such a packet, and therefore no reply will ever come.

      This race causes the client to hang indefinitely.
      833b901c
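      One general way to close a race of this shape is to re-check the stream
      state under the lock after the dummy exchange, so an EOF that raced in
      is noticed before blocking. A sketch of that pattern (illustrative only,
      not the actual patch):

        #include <pthread.h>
        #include <stdbool.h>

        struct stream {
            pthread_mutex_t lock;
            pthread_cond_t cond;
            bool has_data;   /* incoming data is cached for the caller  */
            bool eof;        /* the server already sent the last packet */
        };

        static void send_dummy_request(struct stream *st) { (void)st; /* elided */ }

        /* Wait for data, tolerating an EOF that arrives while the dummy
         * "I am ready" packet is being exchanged with the daemon. */
        static bool stream_wait_for_data(struct stream *st)
        {
            pthread_mutex_lock(&st->lock);
            while (!st->has_data && !st->eof) {
                pthread_mutex_unlock(&st->lock);
                send_dummy_request(st);     /* may block on the socket */
                pthread_mutex_lock(&st->lock);
                /* Re-check both conditions: another thread may have seen the
                 * final packet (EOF) while we were talking to the daemon. */
                if (st->has_data || st->eof)
                    break;
                pthread_cond_wait(&st->cond, &st->lock);
            }
            bool got_data = st->has_data;
            pthread_mutex_unlock(&st->lock);
            return got_data;
        }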
  7. 08 December 2011, 2 commits
    • Fix updating of haveTheBuck in RPC client to be race-free · e9708637
      Committed by Daniel P. Berrange
      When one thread passes the buck to another thread, it uses
      virCondSignal to wake up the target thread. The variable
      'haveTheBuck' is not updated in a race-free manner when
      this occurs. The current thread sets it to false, and the
      woken up thread sets it to true. There is a window where
      a 3rd thread can come in and grab the buck.
      
      Even if this didn't lead to crashes & deadlocks, this would
      still result in unfairness in the buck-passing algorithm.
      
      A better solution is to *never* set haveTheBuck to false
      when we're passing the buck. Only set it to false when there
      is no further thread waiting for the buck.
      
      * src/rpc/virnetclient.c: Only set haveTheBuck to false
        if no thread is waiting
      e9708637
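      The fixed hand-over can be modelled like this (a simplified sketch, not
      the real virNetClientIOEventLoop; the point is that haveTheBuck is never
      cleared while a waiter is being woken, so no third thread can grab the
      buck in between):

        #include <pthread.h>
        #include <stdbool.h>

        struct call {
            bool done;            /* reply received, caller may return     */
            bool ownsBuck;        /* the buck has been handed to this call */
            pthread_cond_t cond;
            struct call *next;
        };

        struct client {
            pthread_mutex_t lock;
            bool haveTheBuck;     /* some thread is driving the socket   */
            struct call *waiters; /* calls queued behind the buck holder */
        };

        /* Called with the lock held by the thread giving up the buck. */
        static void pass_the_buck(struct client *c)
        {
            if (c->waiters) {
                /* Hand over directly: haveTheBuck stays true throughout. */
                c->waiters->ownsBuck = true;
                pthread_cond_signal(&c->waiters->cond);
            } else {
                /* Nobody is waiting; only now is the buck really free. */
                c->haveTheBuck = false;
            }
        }

        static void remove_waiter(struct client *c, struct call *call)
        {
            struct call **prev = &c->waiters;
            while (*prev && *prev != call)
                prev = &(*prev)->next;
            if (*prev)
                *prev = call->next;
        }

        /* Send one call and wait for its reply. */
        static void run_call(struct client *c, struct call *call)
        {
            pthread_mutex_lock(&c->lock);

            if (c->haveTheBuck) {
                /* Queue up (LIFO for brevity) and sleep until our reply
                 * arrives or the buck is handed to us. */
                call->next = c->waiters;
                c->waiters = call;
                while (!call->done && !call->ownsBuck)
                    pthread_cond_wait(&call->cond, &c->lock);
                remove_waiter(c, call);
                if (call->done) {
                    pthread_mutex_unlock(&c->lock);
                    return;
                }
            } else {
                c->haveTheBuck = true;
            }

            /* We hold the buck: drive socket I/O for all queued calls until
             * our own call completes (the poll()/read/write loop is elided). */
            call->done = true;

            pass_the_buck(c);
            pthread_mutex_unlock(&c->lock);
        }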
    • Revert fd066925 · 50a4f49c
      Committed by Daniel P. Berrange
      Commit fd066925 tried to fix
      a race condition in
      
        commit fa959500
        Author: Daniel P. Berrange <berrange@redhat.com>
        Date:   Fri Nov 11 15:28:41 2011 +0000
      
          Explicitly track whether the buck is held in remote client
      
      Unfortunately there is a second race condition whereby the
      event loop can trigger due to incoming data to read. Revert
      this fix so that a complete fix for the problem can be cleanly
      applied.
      
      * src/rpc/virnetclient.c: Revert fd066925
      50a4f49c
  8. 04 December 2011, 1 commit
    • maint: fix improper use of 'an' · 3a9ce767
      Committed by Eric Blake
      https://bugzilla.redhat.com/show_bug.cgi?id=648855 mentioned a
      misuse of 'an' where 'a' is proper; that has since been fixed,
      but a search found other problems (some were a spelling error for
      'and', while most were fixed by 'a').
      
      * daemon/stream.c: Fix grammar.
      * src/conf/domain_conf.c: Likewise.
      * src/conf/domain_event.c: Likewise.
      * src/esx/esx_driver.c: Likewise.
      * src/esx/esx_vi.c: Likewise.
      * src/rpc/virnetclient.c: Likewise.
      * src/rpc/virnetserverprogram.c: Likewise.
      * src/storage/storage_backend_fs.c: Likewise.
      * src/util/conf.c: Likewise.
      * src/util/dnsmasq.c: Likewise.
      * src/util/iptables.c: Likewise.
      * src/xen/xen_hypervisor.c: Likewise.
      * src/xen/xend_internal.c: Likewise.
      * src/xen/xs_internal.c: Likewise.
      * tools/virsh.c: Likewise.
      3a9ce767
  9. 02 December 2011, 1 commit
    • client: Check if other thread claims it has the buck before claiming it. · fd066925
      Committed by Peter Krempa
      Originally, the code checked whether another call was in the queue and
      inferred ownership of the buck from that. Commit fa959500
      added a separate variable to track the buck. That made it possible for
      a new call to enter and claim it has the buck while another thread was
      signalled to take the buck. This ends in two threads claiming they hold
      the buck and entering poll(). This happens due to a race on waking up
      threads on the client lock mutex.

      This caused multi-threaded clients to hang, most prominently visible
      and reproducible with Python-based clients such as virt-manager.

      This patch causes threads that have been signalled to take the buck to
      re-check whether the buck is held by another thread.
      fd066925
  10. 01 December 2011, 1 commit
  11. 29 November 2011, 1 commit
  12. 24 November 2011, 7 commits
  13. 16 November 2011, 7 commits
    • Don't return a fatal error if receiving unexpected stream data · a38710bd
      Committed by Daniel P. Berrange
      Due to the asynchronous nature of streams, we might continue to
      receive some stream packets from the server even after we have
      shut down the stream on the client side. These should be discarded
      silently, rather than raising an error in the RPC layer.
      
      * src/rpc/virnetclient.c: Discard stream data silently
      a38710bd
    • Allow non-blocking message sending on virNetClient · ff465ad2
      Committed by Daniel P. Berrange
      Add a new virNetClientSendNonBlock which returns 2 on
      full send, 1 on partial send, 0 on no send, -1 on error
      
      If a partial send occurs, then a subsequent call to any
      of the virNetClientSend* APIs will finish any outstanding
      I/O.
      
      TODO: the virNetClientEvent event handler could be used
      to speed up completion of partial sends if an event loop
      is present.
      
      * src/rpc/virnetsocket.h, src/rpc/virnetsocket.c: Add new
        virNetSocketHasPendingData() API to test for cached
        data pending send.
      * src/rpc/virnetclient.c, src/rpc/virnetclient.h: Add new
        virNetClientSendNonBlock() API for non-blocking sends
      ff465ad2
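      The return-value convention can be illustrated with a plain non-blocking
      socket write (a sketch under assumed semantics, not the real
      virNetClientSendNonBlock; the key point is that the offset lives with
      the message so a later call can finish the send):

        #include <errno.h>
        #include <stddef.h>
        #include <sys/socket.h>
        #include <sys/types.h>

        struct buffered_msg {
            const char *data;
            size_t length;
            size_t offset;   /* how much has already been written */
        };

        /* Returns 2 on full send, 1 on partial send, 0 if nothing could be
         * sent without blocking, -1 on a real error. */
        static int send_nonblock(int fd, struct buffered_msg *msg)
        {
            while (msg->offset < msg->length) {
                ssize_t n = send(fd, msg->data + msg->offset,
                                 msg->length - msg->offset, MSG_DONTWAIT);
                if (n < 0) {
                    if (errno == EAGAIN || errno == EWOULDBLOCK)
                        return msg->offset ? 1 : 0;  /* partial or no progress */
                    if (errno == EINTR)
                        continue;
                    return -1;
                }
                msg->offset += (size_t)n;
            }
            return 2;                                /* everything went out */
        }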
    • Refactor code for enabling/disabling I/O callback in remote client · b1962203
      Committed by Daniel P. Berrange
      * src/rpc/virnetclient.c: Add helper for setting I/O callback events
      b1962203
    • Split virNetClientSend into 2 methods · 5990f227
      Committed by Daniel P. Berrange
      Stop multiplexing virNetClientSend for two different purposes,
      instead add virNetClientSendWithReply and virNetClientSendNoReply
      
      * src/rpc/virnetclient.c, src/rpc/virnetclient.h: Replace
        virNetClientSend with virNetClientSendWithReply and
        virNetClientSendNoReply
      * src/rpc/virnetclientprogram.c, src/rpc/virnetclientstream.c:
        Update for new API names
      5990f227
    • Refactor code for passing the buck in the remote client · 9f28ad00
      Committed by Daniel P. Berrange
      Remove some duplication by pulling the code for passing the
      buck out into a helper method
      
      * src/rpc/virnetclient.c: Introduce virNetClientIOEventLoopPassTheBuck
      9f28ad00
    • Explicitly track whether the buck is held in remote client · fa959500
      Committed by Daniel P. Berrange
      Instead of inferring whether the buck is held from the waitDispatch
      pointer, use an explicit 'bool haveTheBuck' field
      
      * src/rpc/virnetclient.c: Explicitly track the buck
      fa959500
    • Remove all linked list handling from remote client event loop · 2501d27e
      Committed by Daniel P. Berrange
      Directly messing around with the linked list is potentially
      dangerous. Introduce some helper APIs to deal with manipulating
      the list.
      
      * src/rpc/virnetclient.c: Create linked list handlers
      2501d27e
  14. 07 November 2011, 1 commit
    • Fix sending/receiving of FDs when stream returns EAGAIN · b2c62316
      Committed by Daniel P. Berrange
      The code calling sendfd/recvfd was mistakenly assuming those
      calls would never block. They can in fact return EAGAIN and
      this is causing us to drop the client connection when blocking
      occurs while sending/receiving FDs.
      
      Fixing this is a little hairy on the incoming side, since at
      the point where we see the EAGAIN, we already thought we had
      finished receiving all data for the packet. So we play a little
      trick to reset bufferOffset again and go back into polling for
      more data.
      
      * src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Update
        virNetSocketSendFD/RecvFD to return 0 on EAGAIN, or 1
        on success
      * src/rpc/virnetclient.c: Move decoding of header & fds
        out of virNetClientCallDispatch and into virNetClientIOHandleInput.
        Handling blocking when sending/receiving FDs
      * src/rpc/virnetmessage.h: Add a 'donefds' field to track
        how many FDs we've sent / received
      * src/rpc/virnetserverclient.c: Handling blocking when
        sending/receiving FDs
      b2c62316
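      The "return 0 on EAGAIN, 1 on success" convention for FD passing can be
      sketched with an ordinary SCM_RIGHTS send (illustrative only, not
      libvirt's virNetSocketSendFD):

        #include <errno.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <sys/uio.h>

        /* Send one file descriptor over a UNIX socket.
         * Returns 1 on success, 0 if the call would block, -1 on error. */
        static int send_fd(int sock, int fd)
        {
            char byte = 0;            /* at least one data byte must be sent */
            struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
            union {
                char buf[CMSG_SPACE(sizeof(int))];
                struct cmsghdr align; /* ensures proper alignment */
            } control;
            struct msghdr msg = {
                .msg_iov = &iov,
                .msg_iovlen = 1,
                .msg_control = control.buf,
                .msg_controllen = sizeof(control.buf),
            };

            memset(control.buf, 0, sizeof(control.buf));
            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
            cmsg->cmsg_level = SOL_SOCKET;
            cmsg->cmsg_type = SCM_RIGHTS;
            cmsg->cmsg_len = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

            while (sendmsg(sock, &msg, 0) < 0) {
                if (errno == EAGAIN || errno == EWOULDBLOCK)
                    return 0;         /* retry later, e.g. from the event loop */
                if (errno == EINTR)
                    continue;
                return -1;
            }
            return 1;
        }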
  15. 28 October 2011, 1 commit
    • Add client side support for FD passing · 36a9c83d
      Committed by Daniel P. Berrange
      Extend the RPC client code to allow file descriptors to be sent
      to the server with calls, and received back with replies.
      
      * src/remote/remote_driver.c: Stub extra args
      * src/libvirt_private.syms, src/rpc/virnetclient.c,
        src/rpc/virnetclient.h, src/rpc/virnetclientprogram.c,
        src/rpc/virnetclientprogram.h: Extend APIs to allow
        FD passing
      36a9c83d
  16. 11 October 2011, 1 commit
    • Rewrite all the DTrace/SystemTAP probing · ddf3bd32
      Committed by Daniel P. Berrange
      The libvirtd daemon had a few crude SystemTap probes. Some of
      these were broken during the RPC rewrite. The new modular RPC
      code is structured in a way that allows much more effective
      tracing. Instead of trying to hook up the original probes,
      define a new set of probes for the RPC and event code.
      
      The master probes file is now src/probes.d.  This contains
      probes for virNetServerClientPtr, virNetClientPtr, virSocketPtr
      virNetTLSContextPtr and virNetTLSSessionPtr modules. Also add
      probes for the poll event loop.
      
      The src/dtrace2systemtap.pl script can convert the probes.d
      file into a libvirt_probes.stp file to make use from systemtap
      much simpler.
      
      The src/rpc/gensystemtap.pl script can generate a set of
      systemtap functions for translating RPC enum values into
      printable strings. This works for all RPC header enums (program,
      type, status, procedure) and also the authentication enum
      
      The PROBE macro will automatically generate a VIR_DEBUG
      statement, so any place with a PROBE can remove any existing
      manual DEBUG statements.
      
      * daemon/libvirtd.stp, daemon/probes.d: Remove obsolete probing
      * daemon/libvirtd.h: Remove probe macros
      * daemon/Makefile.am: Remove all probe buildings/install
      * daemon/remote.c: Update authentication probes
      * src/dtrace2systemtap.pl, src/rpc/gensystemtap.pl: Scripts
        to generate STP files
      * src/internal.h: Add probe macros
      * src/probes.d: Master list of probes
      * src/rpc/virnetclient.c, src/rpc/virnetserverclient.c,
        src/rpc/virnetsocket.c, src/rpc/virnettlscontext.c,
        src/util/event_poll.c: Insert probe points, removing any
        DEBUG statements that duplicate the info
      ddf3bd32
  17. 23 September 2011, 1 commit
    • Fix synchronous reading of stream data · cb610092
      Committed by Daniel P. Berrange
      commit 984840a2 removed the
      notification of waiting calls when VIR_NET_CONTINUE messages
      arrive. This was to fix the case of a virStreamAbort() call
      being prematurely notified of completion.
      
      The problem is that sometimes there are dummy calls from a
      virStreamRecv() call waiting that *do* need to be notified.
      
      These dummy calls should have a status VIR_NET_CONTINUE. So
      re-add the notification upon VIR_NET_CONTINUE, but only if
      the waiter also has a status of VIR_NET_CONTINUE.
      
      * src/rpc/virnetclient.c: Notify waiting call if stream data
        arrives
      * src/rpc/virnetclientstream.c:  Mark dummy stream read packet
        with status VIR_NET_CONTINUE
      cb610092
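      The resulting notification rule boils down to a condition of this shape
      (types are illustrative; the real code uses the VIR_NET_* status values
      mentioned above):

        #include <stdbool.h>
        #include <stddef.h>

        enum msg_status { NET_OK, NET_ERROR, NET_CONTINUE };

        struct call {
            enum msg_status status;   /* status of the message this call sent */
            bool complete;
        };

        /* A stream data packet (status CONTINUE) arrived: wake a waiting
         * call, but only if that call is itself a dummy stream read (also
         * CONTINUE), so a pending virStreamAbort() is not notified early. */
        static bool should_notify(enum msg_status incoming, const struct call *waiter)
        {
            return incoming == NET_CONTINUE &&
                   waiter != NULL && waiter->status == NET_CONTINUE;
        }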
  18. 18 August 2011, 1 commit
    • Ensure async packets never get marked for sync replies · 984840a2
      Committed by Daniel P. Berrange
      If a client had initiated a stream abort, it will have a call
      waiting for a reply in the queue. If more data continues to
      arrive on the stream, the abort command could mistakenly get
      signalled as complete. Remove the code from async data processing
      that looked for waiting calls. Add a sanity check to ensure no
      async call can ever be marked as needing a reply
      
      * src/rpc/virnetclient.c: Ensure async data packets can't
        trigger a reply
      984840a2
  19. 17 August 2011, 1 commit
  20. 15 August 2011, 1 commit
  21. 22 July 2011, 1 commit