1. 28 Jun 2017, 1 commit
    • docs: Add callback-related info to virStream{Abort,Finish} · f1096c02
      Committed by Martin Kletzander
      When one has a non-blocking stream and aborts or finishes it without
      removing the callback, any event loop invocation will trigger that
      callback, yet the callback can no longer be removed.  We cannot remove
      it automatically from the virStream{Abort,Finish} functions due to
      forward-compatibility.  So let's at least document this behaviour,
      because its cause is otherwise hard to track down (see the sketch
      after this entry).
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
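      A minimal sketch of the pattern the documented behaviour calls for
      (the helper name and error handling are illustrative, not part of
      the commit): with a non-blocking stream, unregister the event
      callback before finishing or aborting, because neither
      virStreamFinish() nor virStreamAbort() removes it, and it cannot
      be removed once they have run.

          #include <libvirt/libvirt.h>

          /* Illustrative helper: tear down a non-blocking stream whose
           * events were registered with virStreamEventAddCallback(). */
          static int
          shut_down_stream(virStreamPtr st)
          {
              /* Remove the callback first; after virStreamFinish() or
               * virStreamAbort() it could no longer be removed.  The
               * result is ignored in case no callback was registered. */
              virStreamEventRemoveCallback(st);

              if (virStreamFinish(st) < 0) {
                  virStreamAbort(st);
                  return -1;
              }
              return 0;
          }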
  2. 22 May 2017, 1 commit
    • virStreamSparseSendAll: Reset @want in each iteration · 9b991d02
      Committed by Michal Privoznik
      There's a slight problem with the current function. Assume we are
      currently in a data section and have, say, 42 bytes left until the
      next section. Just before (handler) is called to fill up the buffer
      with data, @want is changed to 42 to match the amount of data left
      in the current section. However, after the hole is processed we are
      back in a data section, but with a tiny @want size that nobody ever
      resets. The result is severe data fragmentation (see the sketch
      after this entry).
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
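      A simplified sketch of the fixed loop, assuming the public sparse
      stream API (it approximates the logic of virStreamSparseSendAll()
      and is not the verbatim libvirt source; the buffer size and
      function name are illustrative). The key point is that @want is
      reset to the full buffer size at the top of every iteration, so a
      small value computed for the tail of one data section no longer
      carries over to the next one.

          #include <stdlib.h>
          #include <libvirt/libvirt.h>

          static int
          sparse_send_all(virStreamPtr st,
                          virStreamSourceFunc handler,
                          virStreamSourceHoleFunc holeHandler,
                          virStreamSourceSkipFunc skipHandler,
                          void *opaque)
          {
              const size_t bufLen = 128 * 1024;  /* illustrative size */
              char *buf = malloc(bufLen);
              int ret = -1;

              if (!buf)
                  return -1;

              for (;;) {
                  size_t want = bufLen;  /* the fix: reset each iteration */
                  long long sectionLen;
                  int inData, got, offset = 0;

                  if (holeHandler(st, &inData, &sectionLen, opaque) < 0)
                      goto cleanup;

                  if (!inData && sectionLen > 0) {
                      /* In a hole: announce it to the peer and skip
                       * over it in the source. */
                      if (virStreamSendHole(st, sectionLen, 0) < 0 ||
                          skipHandler(st, sectionLen, opaque) < 0)
                          goto cleanup;
                      continue;
                  }

                  /* In a data section: never read past its end. */
                  if (sectionLen > 0 && (long long) want > sectionLen)
                      want = sectionLen;

                  if ((got = handler(st, buf, want, opaque)) < 0)
                      goto cleanup;
                  if (got == 0)
                      break;  /* end of input */

                  while (offset < got) {
                      int done = virStreamSend(st, buf + offset,
                                               got - offset);
                      if (done < 0)
                          goto cleanup;
                      offset += done;
                  }
              }
              ret = 0;

           cleanup:
              free(buf);
              return ret;
          }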
  3. 18 May 2017, 7 commits
  4. 02 May 2016, 1 commit
    • virStream{Recv,Send}All: Increase client buffer · 809d02ca
      Committed by Michal Privoznik
      These are wrappers over virStreamRecv and virStreamSend so that
      users need to care about nothing but writing data into / reading
      data from a sink (typically a file). Note that these wrappers are
      used exclusively on the client side, as the daemon takes a slightly
      different approach. The wrappers allocate a buffer and use it for
      intermediate storage until the data is passed to the stream to
      send, or to the client application. So far we have been using a
      64KB buffer. That works, but it is suboptimal because the server
      can send messages up to VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX bytes
      big (262120 B, roughly 256KB). If we make the buffer this big, a
      single message containing the data is sent instead of four, as is
      currently the case. This means lower overhead: each message
      carries a header that must be processed, each message takes
      roughly the same amount of time to process regardless of its size,
      fewer bytes need to be sent over the wire, and so on. Note that
      since the server will never send us a stream message bigger than
      VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX, there is no point in sizing
      the client buffer past this threshold (see the sketch after this
      entry).
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
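      A minimal sketch of a client-side receive loop sized to the new
      limit. VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX is internal to libvirt
      and not exported by the public headers, so the 262120-byte value
      from the commit message is redefined locally; the function name is
      illustrative.

          #include <stdio.h>
          #include <stdlib.h>
          #include <libvirt/libvirt.h>

          /* Mirrors libvirt's internal VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX
           * (262120 bytes per the commit message); redefined here since
           * the public headers do not export it. */
          #define CLIENT_BUF_LEN 262120

          /* With the buffer this big, a maximum-size server message is
           * consumed in one read instead of four 64KB ones. */
          static int
          recv_all_to_file(virStreamPtr st, FILE *sink)
          {
              char *buf = malloc(CLIENT_BUF_LEN);
              int ret = -1;

              if (!buf)
                  return -1;

              for (;;) {
                  int got = virStreamRecv(st, buf, CLIENT_BUF_LEN);
                  if (got < 0)
                      goto cleanup;
                  if (got == 0)
                      break;  /* end of stream */
                  if (fwrite(buf, 1, got, sink) != (size_t) got)
                      goto cleanup;
              }
              ret = 0;

           cleanup:
              free(buf);
              return ret;
          }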
  5. 24 Oct 2014, 1 commit