1. 21 December 2012, 2 commits
  2. 02 November 2012, 1 commit
  3. 21 September 2012, 1 commit
  4. 10 September 2012, 1 commit
    • Fix unwanted closing of libvirt client connection · 164c03d3
      Committed by Christophe Fergeau
      e5a1bee0 introduced a regression in Boxes: when Boxes is left idle
      (it's still doing some libvirt calls in the background), the
      libvirt connection gets closed after a few minutes. What happens is
      that this code in virNetClientIOHandleOutput gets triggered:
      
      if (!thecall)
          return -1; /* Shouldn't happen, but you never know... */
      
      and after the changes in e5a1bee0, this causes the libvirt connection
      to be closed.
      
      Upon further investigation, what happens is that
      virNetClientIOHandleOutput is called from gvir_event_handle_dispatch
      in libvirt-glib, which is triggered because the client fd became
      writable. However, between the time gvir_event_handle_dispatch
      is called and the time the client lock is grabbed and
      virNetClientIOHandleOutput is called, another thread runs and
      completes the current call. 'thecall' is then NULL by the time the
      first thread gets to run virNetClientIOHandleOutput.
      
      After describing this situation on IRC, danpb suggested this:
      
      11:37 < danpb> In that case I think the correct thing would be to change
                     'return -1' above to 'return 0' since that's not actually an
                     error - its a rare, but expected event
      
      which is what this patch does. I've tested it against master
      libvirt and didn't get disconnected in ~10 minutes, whereas without
      this patch the disconnection happens in less than 5 minutes.
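      A minimal sketch of the shape of the fix (not the verbatim upstream diff):
      the check quoted above stops reporting an error when there is nothing
      left to send.
      
      if (!thecall)
          return 0; /* Rare but expected: another thread completed the call
                     * between the fd becoming writable and this thread
                     * taking the client lock, so there is nothing to send. */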
  5. 27 August 2012, 1 commit
  6. 22 August 2012, 1 commit
    • client: Change default location of known_hosts file for libssh2 layer · 225f2807
      Committed by Peter Krempa
      Unfortunately libssh2 doesn't support all types of host keys that can be
      saved in the known_hosts file, and it does not report that parsing of
      the file failed. This results in truncated known_hosts files when the
      standard client also stores keys in other formats (e.g.
      ecdsa-sha2-nistp256).
      
      This patch changes the default location of the known_hosts file to the
      libvirt private configuration directory, where it will only be written
      by the libssh2 layer itself. This prevents trashing the user's
      known_hosts file.
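      A rough sketch of the idea using libssh2's knownhost API; the helper
      name and the exact private path are illustrative assumptions, not
      libvirt's actual code.
      
      #include <libssh2.h>
      #include <stdio.h>
      
      /* Keep host keys in a libvirt-private file so that libssh2, which
       * cannot parse every key type, never rewrites (and truncates) the
       * user's ~/.ssh/known_hosts. */
      static LIBSSH2_KNOWNHOSTS *
      load_private_knownhosts(LIBSSH2_SESSION *session, const char *confdir)
      {
          char path[4096];
          LIBSSH2_KNOWNHOSTS *hosts = libssh2_knownhost_init(session);
      
          if (!hosts)
              return NULL;
      
          snprintf(path, sizeof(path), "%s/known_hosts", confdir);
          /* A missing file is fine on first use; real code would report
           * other errors. */
          libssh2_knownhost_readfile(hosts, path,
                                     LIBSSH2_KNOWNHOST_FILE_OPENSSH);
          return hosts;
      }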
  7. 21 August 2012, 1 commit
  8. 15 August 2012, 1 commit
  9. 09 August 2012, 1 commit
    • Fix errno check, prevent spurious errors under heavy load · bfa74ebe
      Committed by Peter Feiner
      According to poll(2), poll does not set errno to EAGAIN when interrupted;
      it sets errno to EINTR. Have libvirt retry on the appropriate errno.
      
      Under heavy load, a program of mine kept getting libvirt errors 'poll on
      socket failed: Interrupted system call'. The signals were SIGCHLD from
      processes forked by threads unrelated to those using libvirt.
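      A minimal, self-contained illustration of the retry-on-EINTR pattern
      described above (generic code, not libvirt's implementation):
      
      #include <errno.h>
      #include <poll.h>
      
      /* poll() interrupted by a signal (e.g. SIGCHLD) fails with errno set
       * to EINTR, never EAGAIN, so EINTR is the condition to retry on. */
      static int poll_retrying(struct pollfd *fds, nfds_t nfds, int timeout)
      {
          int ret;
      
          do {
              ret = poll(fds, nfds, timeout);
          } while (ret < 0 && errno == EINTR);
      
          return ret; /* negative only for real errors, never a mere signal */
      }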
  10. 07 August 2012, 5 commits
  11. 04 August 2012, 1 commit
  12. 30 July 2012, 3 commits
  13. 23 July 2012, 1 commit
    • Desert the FSF address in copyright · f9ce7dad
      Committed by Osier Yang
      Since the FSF address can change from time to time, GNU now recommends
      the following (http://www.gnu.org/licenses/gpl-howto.html):
      
        You should have received a copy of the GNU General Public License
        along with Foobar.  If not, see <http://www.gnu.org/licenses/>.
      
      This patch removes the explicit FSF address and uses the above text
      instead (with 'Lesser' inserted before 'General' where appropriate).
      
      Except for a handful of security driver files, all files are changed
      automatically; the copyright headers of the security files are not
      complete, which is why they are updated manually:
      
        src/security/security_selinux.h
        src/security/security_driver.h
        src/security/security_selinux.c
        src/security/security_apparmor.h
        src/security/security_apparmor.c
        src/security/security_driver.c
  14. 18 July 2012, 1 commit
  15. 13 June 2012, 10 commits
    • client rpc: Fix error checking after poll() · 5d490603
      Committed by Daniel P. Berrange
      First, poll() can't return EWOULDBLOCK, and second, we're checking errno
      so far away from the poll() call that we've probably already trashed the
      original errno value.
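      A small illustrative sketch of the pattern this fix implies (hypothetical
      helper, not the upstream patch): capture errno immediately after poll()
      returns, before any later call can clobber it, and check for EINTR rather
      than EWOULDBLOCK.
      
      #include <errno.h>
      #include <poll.h>
      #include <stdio.h>
      
      static int wait_readable(int fd, int timeout_ms)
      {
          struct pollfd pfd = { .fd = fd, .events = POLLIN };
          int ret = poll(&pfd, 1, timeout_ms);
          int saved_errno = errno;       /* save before it can be trashed */
      
          if (ret < 0) {
              if (saved_errno == EINTR)  /* poll() never sets EWOULDBLOCK */
                  return 0;              /* interrupted: caller may retry */
              fprintf(stderr, "poll on socket failed\n");
          }
          return ret;
      }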
    • client rpc: Send keepalive requests from IO event loop · 4d971dc7
      Committed by Jiri Denemark
      In addition to keepalive responses, we also need to send keepalive
      requests from the client IO loop to properly detect a dead connection
      in case a libvirt API is called from the main loop, which prevents any
      timers from being called.
    • rpc: Remove unused parameter in virKeepAliveStopInternal · 0ec514b3
      Committed by Jiri Denemark
      The previous commit removed the only usage of the 'all' parameter in
      virKeepAliveStopInternal, which was actually the only reason for having
      virKeepAliveStopInternal. This effectively reverts most of commit
      6446a9e2.
    • rpc: Do not use timer for sending keepalive responses · bb85f229
      Committed by Jiri Denemark
      When a libvirt API is called from the main event loop (which seems to be
      common in event-based glib apps), the client IO loop would properly
      handle keepalive requests sent by a server but would not actually send
      the responses because the main event loop is blocked in the API. This
      patch gets rid of the response timer; the thread that processes
      keepalive requests is now also responsible for queueing responses for
      delivery.
    • client rpc: Separate call creation from running IO loop · c57103e5
      Committed by Jiri Denemark
      This makes it possible to create and queue new calls while the IO loop
      is running.
    • client rpc: Drop unused return value of virNetClientSendNonBlock · ca9b13e3
      Committed by Jiri Denemark
      As we never drop non-blocking calls, the return value that used to
      indicate a call was dropped is no longer needed.
    • client rpc: Just queue non-blocking call if another thread has the buck · ef392614
      Committed by Jiri Denemark
      As non-blocking calls are no longer dropped, we don't really need to
      care that much about their fate or wait for the thread with the buck
      to process them. If another thread has the buck, we can just push a
      non-blocking call to the queue and be done with it.
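      A self-contained sketch of that rule with hypothetical, simplified
      structures (libvirt's real ones differ): when another thread already
      holds the buck, a non-blocking call is appended to the queue and the
      caller returns at once.
      
      #include <stdbool.h>
      #include <stddef.h>
      
      struct call {                  /* hypothetical, simplified */
          bool nonblock;
          struct call *next;
      };
      
      struct client {
          bool have_the_buck;        /* some thread is running the IO loop */
          struct call *head, *tail;
      };
      
      /* Called with the client lock held.  Returns true if the call was just
       * queued for the buck holder to send (fire and forget). */
      static bool queue_if_busy(struct client *c, struct call *call)
      {
          if (!call->nonblock || !c->have_the_buck)
              return false;          /* caller must run the IO loop itself */
      
          call->next = NULL;
          if (c->tail)
              c->tail->next = call;
          else
              c->head = call;
          c->tail = call;
          return true;               /* no need to wait for completion */
      }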
    • client rpc: Don't drop non-blocking calls · 78602c4e
      Committed by Jiri Denemark
      So far, we were dropping non-blocking calls whenever sending them would
      block. In case a client is sending lots of stream calls (which are not
      supposed to generate any reply), the assumption that having other calls
      in a queue is sufficient to get a reply from the server doesn't work. I
      tried to fix this in b1e374a7 but
      failed and reverted that commit.
      
      With this patch, non-blocking calls are never dropped (unless the
      connection is being closed) and will always be sent.
    • client rpc: Use event loop for writing · 9e747e5c
      Committed by Jiri Denemark
      Normally, when every call has a thread associated with it, the thread
      may get the buck and be in charge of sending all calls until its own
      call is done. When we introduced non-blocking calls, we had to add
      special handling of new non-blocking calls. This patch uses the event
      loop to send data if there is no thread to get the buck, so that any
      non-blocking calls left in the queue are properly sent without having
      to be handled specially. It also avoids adding even more cruft to the
      client IO loop in the following patches.
      
      With this change in place, non-blocking calls may see unpredictable
      delays in delivery when the client has no event loop registered.
      However, the only non-blocking calls we have are keepalives and we
      already require an event loop for them, which makes this a non-issue
      until someone introduces new non-blocking calls.
    • client rpc: Improve debug messages in virNetClientIO · 71689f95
      Committed by Jiri Denemark
      When analyzing our debug log, I'm always confused about what each of the
      pointers means. Let's be explicit.
  16. 05 June 2012, 1 commit
    • rpc: Switch to dynamically allocated message buffer · a2c304f6
      Committed by Michal Privoznik
      Currently, we allocate the buffer for RPC messages statically. This is
      not such a pain while the RPC limits are small. However, if we ever want
      to increase those limits, we need to allocate the buffer dynamically,
      based on the RPC message length (= the first 4 bytes). This decreases
      our memory usage in most cases while remaining flexible enough in corner
      cases.
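      A rough sketch of the idea (hypothetical helper, not the actual libvirt
      code), assuming the 4-byte length prefix is in network byte order as in
      XDR framing: the per-message buffer is sized from the prefix and still
      bounded by the configured RPC limit.
      
      #include <arpa/inet.h>   /* ntohl */
      #include <stdint.h>
      #include <stdlib.h>
      #include <string.h>
      
      /* Returns a buffer sized for one message, or NULL if the advertised
       * length is out of bounds.  The caller frees it once the message has
       * been processed, so memory use tracks actual message sizes. */
      static char *alloc_msg_buffer(const unsigned char lenbuf[4],
                                    size_t max_len, uint32_t *msg_len)
      {
          uint32_t len;
      
          memcpy(&len, lenbuf, 4);
          len = ntohl(len);
          if (len < 4 || len > max_len)
              return NULL;           /* still enforce the RPC limit */
      
          *msg_len = len;
          return malloc(len);
      }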
  17. 23 May 2012, 1 commit
    • Revert "rpc: Discard non-blocking calls only when necessary" · 63643f67
      Committed by Jiri Denemark
      This reverts commit b1e374a7, which was
      rather bad since I failed to consider all sides of the issue. The main
      things I didn't consider properly are:
      
      - a thread which sends a non-blocking call waits for the thread with
        the buck to process the call
      - the code doesn't expect non-blocking calls to remain in the queue
        unless they were already partially sent
      
      Thus, the reverted patch actually breaks more than it fixes, and clients
      (which may even be libvirtd during p2p migrations) will likely end up in
      a deadlock.
  18. 26 April 2012, 2 commits
    • rpc: Discard non-blocking calls only when necessary · b1e374a7
      Committed by Jiri Denemark
      Currently, non-blocking calls are either sent immediately or discarded
      when sending them would block. This was implemented based on the
      assumption that the non-blocking keepalive call is not needed as there
      are other calls in the queue which would keep the connection alive.
      However, if those calls are no-reply calls (such as those carrying
      stream data), the remote party knows the connection is alive but since
      we don't get any reply from it, we think the connection is dead.
      
      This is most visible in tunnelled migration. If it takes longer than the
      keepalive timeout (30s by default), it may be unexpectedly aborted
      because the connection is considered to be dead.
      
      With this patch, we only discard non-blocking calls when the last call
      with a thread is completed and thus there is no thread left to keep
      sending the remaining non-blocking calls.
    • keepalive: Add ability to disable keepalive messages · 6446a9e2
      Committed by Peter Krempa
      The docs for virConnectSetKeepAlive() advertise that this function
      should be able to disable keepalives when given a negative or zero
      interval.
      
      This patch removes the check that prohibited this and adds code to
      disable keepalives on a negative/zero interval.
      
      * src/libvirt.c: virConnectSetKeepAlive(): remove the check for
        negative values
      * src/rpc/virnetclient.c
      * src/rpc/virnetclient.h: add virNetClientKeepAliveStop() to disable
        keepalive messages
      * src/remote/remote_driver.c: remoteSetKeepAlive(): add the ability to
        disable keepalives
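      With that change, disabling keepalives from client code looks roughly
      like this (a minimal usage sketch; the connection URI is only an
      example):
      
      #include <libvirt/libvirt.h>
      #include <stdio.h>
      
      int main(void)
      {
          virConnectPtr conn;
      
          /* Keepalive handling normally relies on an event loop. */
          if (virEventRegisterDefaultImpl() < 0)
              return 1;
      
          conn = virConnectOpen("qemu:///system");
          if (!conn)
              return 1;
      
          /* A zero or negative interval now disables keepalive messages
           * instead of being rejected as invalid. */
          if (virConnectSetKeepAlive(conn, -1, 0) < 0)
              fprintf(stderr, "failed to disable keepalives\n");
      
          virConnectClose(conn);
          return 0;
      }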
  19. 30 March 2012, 1 commit
  20. 05 March 2012, 1 commit
    • rpc: Fix client crash on connection close · 720bee30
      Committed by Jiri Denemark
      A multi-threaded client with an event loop may crash if one of its
      threads closes a connection while the event loop is in the middle of
      sending a keep-alive message (either request or response). The window
      for this is inside virNetClientIOEventLoop(), between poll() and
      virNetClientLock(). We should only close a connection directly if no
      one is using it and defer the closing to the last user otherwise. So
      far we only did so if the close was initiated by the keep-alive
      timeout.
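      The deferral rule can be pictured with a small generic sketch
      (hypothetical names, not the actual virNetClient code): a close requested
      while threads are still inside the IO loop only marks the connection, and
      the last thread out performs the real close.
      
      #include <stdbool.h>
      #include <stdio.h>
      
      struct conn {
          int users;           /* threads currently using the connection */
          bool want_close;     /* a close was requested while in use */
      };
      
      static void close_now(struct conn *c)
      {
          (void)c;
          puts("closing connection");  /* stands in for the real teardown */
      }
      
      /* Both helpers are called with the connection lock held. */
      static void request_close(struct conn *c)
      {
          if (c->users == 0)
              close_now(c);            /* nobody is using it: close directly */
          else
              c->want_close = true;    /* defer to the last user */
      }
      
      static void release_user(struct conn *c)
      {
          if (--c->users == 0 && c->want_close)
              close_now(c);            /* last one out closes the connection */
      }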
  21. 12 January 2012, 1 commit
    • stream: Check for stream EOF · 833b901c
      Committed by Michal Privoznik
      If a client stream has no data to sink and has not received EOF either,
      a dummy packet is sent to the daemon signalling that the client is ready
      to sink some data. However, after we added an event loop to the client,
      a race may occur:
      
      Thread 1 calls virNetClientStreamRecvPacket and, since no data is cached
      and the stream has no EOF, it decides to send a dummy packet to the
      server, which will send some data in turn. However, between this
      decision and the actual message exchange with the server -
      
      Thread 2 receives the last stream data from the server. Therefore EOF is
      set on the stream and, if there were a call waiting (there isn't yet),
      it would be woken up. However, Thread 1 hasn't sent anything so far, so
      there is no call to be woken up. Thread 1 then sends its dummy packet to
      the daemon, which ignores it as no stream is associated with such a
      packet, and therefore no reply will ever come.
      
      This race causes the client to hang indefinitely.
  22. 08 December 2011, 2 commits
    • Fix updating of haveTheBuck in RPC client to be race-free · e9708637
      Committed by Daniel P. Berrange
      When one thread passes the buck to another thread, it uses
      virCondSignal to wake up the target thread. The variable
      'haveTheBuck' is not updated in a race-free manner when
      this occurs. The current thread sets it to false, and the
      woken up thread sets it to true. There is a window where
      a 3rd thread can come in and grab the buck.
      
      Even if this didn't lead to crashes & deadlocks, it would
      still result in unfairness in the buck-passing algorithm.
      
      A better solution is to *never* set haveTheBuck to false
      when we're passing the buck. Only set it to false when there
      is no further thread waiting for the buck.
      
      * src/rpc/virnetclient.c: Only set haveTheBuck to false
        if no thread is waiting
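      The race-free rule can be pictured with a generic pthread sketch
      (hypothetical names, not libvirt's actual code): the flag is cleared only
      when no waiter is left, so ownership of the buck moves together with the
      wakeup and no third thread can slip in between.
      
      #include <pthread.h>
      #include <stdbool.h>
      
      struct dispatcher {              /* hypothetical, simplified */
          pthread_cond_t cond;
          bool have_the_buck;
          int waiters;
      };
      
      /* Called with the lock held by the thread that currently has the buck. */
      static void pass_the_buck(struct dispatcher *d)
      {
          if (d->waiters > 0) {
              /* have_the_buck stays true: the woken waiter inherits it. */
              pthread_cond_signal(&d->cond);
          } else {
              d->have_the_buck = false;   /* nobody left to hand it to */
          }
      }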
    • Revert fd066925 · 50a4f49c
      Committed by Daniel P. Berrange
      Commit fd066925 tried to fix
      a race condition in
      
        commit fa959500
        Author: Daniel P. Berrange <berrange@redhat.com>
        Date:   Fri Nov 11 15:28:41 2011 +0000
      
          Explicitly track whether the buck is held in remote client
      
      Unfortunately there is a second race condition whereby the
      event loop can trigger due to incoming data to read. Revert
      this fix so that a complete fix for the problem can be cleanly
      applied.
      
      * src/rpc/virnetclient.c: Revert fd066925