1. 23 Nov 2016, 1 commit
    • nbd: add multi-connection support · 9561a7ad
      Committed by Josef Bacik
      NBD can become contended on its single connection.  We have to serialize all
      writes and we can only process one read response at a time.  Fix this by
      allowing userspace to provide multiple connections to a single nbd device.
      This, coupled with blk-mq, drastically increases performance in multi-process
      cases (a hedged userspace sketch of the resulting ioctl usage follows this
      entry).
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      9561a7ad
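      For context, here is a minimal userspace sketch of how several connections
      might be handed to the ioctl interface.  It is a sketch, not the patch
      itself: the server address and port are assumptions, the NBD handshake a
      real client performs on each socket is elided, and device sizing
      (NBD_SET_BLKSIZE / NBD_SET_SIZE) is omitted for brevity.

        /* Hedged sketch: pass several already-negotiated TCP sockets to
         * /dev/nbd0.  The server at 127.0.0.1:10809 is hypothetical and the
         * per-socket NBD negotiation is elided. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/ioctl.h>
        #include <sys/socket.h>
        #include <linux/nbd.h>

        static int connect_to_server(void)
        {
            struct sockaddr_in addr = {
                .sin_family = AF_INET,
                .sin_port   = htons(10809),       /* assumed port */
            };
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            if (fd < 0)
                return -1;
            inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                close(fd);
                return -1;
            }
            /* ... NBD handshake on 'fd' would go here ... */
            return fd;
        }

        int main(void)
        {
            int nbd = open("/dev/nbd0", O_RDWR);
            int i;

            if (nbd < 0)
                return 1;

            /* With multi-connection support, NBD_SET_SOCK can be issued once
             * per connection; older kernels accept only a single socket. */
            for (i = 0; i < 4; i++) {
                int sock = connect_to_server();

                if (sock < 0 || ioctl(nbd, NBD_SET_SOCK, sock) < 0) {
                    perror("NBD_SET_SOCK");
                    return 1;
                }
            }

            /* NBD_DO_IT blocks here, servicing the device over the sockets. */
            if (ioctl(nbd, NBD_DO_IT) < 0)
                perror("NBD_DO_IT");

            close(nbd);
            return 0;
        }

      On kernels without this patch, the second NBD_SET_SOCK call is expected to
      fail because a socket is already attached; the loop above only makes sense
      once multi-connection support is present.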
  2. 06 May 2015, 1 commit
  3. 28 Feb 2013, 1 commit
    • nbd: support FLUSH requests · 75f187ab
      Committed by Alex Bligh
      Currently, the NBD device does not accept flush requests from the Linux
      block layer.  If the NBD server opened the target with neither O_SYNC nor
      O_DSYNC, however, the device will be effectively backed by a writeback
      cache.  Without issuing flushes properly, operation of the NBD device will
      not be safe against power losses.
      
      The NBD protocol has support for both a cache flush command and a FUA
      command flag; the server will also pass a flag to note its support for
      these features.  This patch adds support for the cache flush command and
      flag.  In the kernel, we receive the flags via the NBD_SET_FLAGS ioctl,
      and map NBD_FLAG_SEND_FLUSH to the argument of blk_queue_flush.  When the
      flag is active, the block layer will send REQ_FLUSH requests, which we
      translate to NBD_CMD_FLUSH commands (a hedged sketch of forwarding these
      flags from userspace follows this entry).
      
      FUA support is not included in this patch because all free software
      servers implement it with a full fdatasync; thus it has no advantage over
      supporting flush only.  Because I [Paolo] cannot really benchmark it in a
      realistic scenario, I cannot tell if it is a good idea or not.  It is also
      not clear if it is valid for an NBD server to support FUA but not flush.
      The Linux block layer gives a warning for this combination; the NBD
      protocol documentation says nothing about it.
      
      The patch also fixes a small problem in the handling of flags: nbd->flags
      must be cleared at the end of NBD_DO_IT, but the driver was not doing
      that.  The bug manifests itself as follows.  Suppose you use two different
      client/server pairs to start the NBD device.  Suppose also that the first
      client supports NBD_SET_FLAGS, and the first server sends
      NBD_FLAG_SEND_FLUSH; the second pair instead does neither of these two
      things.  Before this patch, the second invocation of NBD_DO_IT will use a
      stale value of nbd->flags, and the second server will issue an error every
      time it receives an NBD_CMD_FLUSH command.
      
      This bug is pre-existing, but it becomes much more important after this
      patch: flush failures make the device pretty much unusable.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Alex Bligh <alex@alex.org.uk>
      Acked-by: Paul Clements <Paul.Clements@steeleye.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      75f187ab
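      As a companion to the description above, here is a hedged userspace sketch
      of forwarding the server's transmission flags to the kernel.  The flags
      value is hard-coded as an assumption; a real client takes it from the NBD
      negotiation, and only a server-advertised NBD_FLAG_SEND_FLUSH should be
      forwarded.

        /* Hedged sketch: forward (assumed) server transmission flags via
         * NBD_SET_FLAGS so the block layer starts issuing flushes. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/nbd.h>

        int main(void)
        {
            int nbd = open("/dev/nbd0", O_RDWR);
            /* Assumed value: the server advertised flag support and flush. */
            unsigned long flags = NBD_FLAG_HAS_FLAGS | NBD_FLAG_SEND_FLUSH;

            if (nbd < 0) {
                perror("/dev/nbd0");
                return 1;
            }

            /* With NBD_FLAG_SEND_FLUSH set, the driver enables the block
             * layer's flush machinery, and the resulting flush requests are
             * translated into NBD_CMD_FLUSH commands on the wire. */
            if (ioctl(nbd, NBD_SET_FLAGS, flags) < 0)
                perror("NBD_SET_FLAGS");

            /* Socket setup and NBD_DO_IT would follow, as in a normal client. */
            close(nbd);
            return 0;
        }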
  4. 13 Oct 2012, 1 commit
  5. 06 Oct 2012, 2 commits
  6. 23 Aug 2010, 1 commit
  7. 03 Jun 2010, 1 commit
  8. 29 Apr 2008, 2 commits
  9. 09 Feb 2008, 1 commit
  10. 17 Oct 2007, 1 commit
    • NBD: allow hung network I/O to be cancelled · 7fdfd406
      Committed by Paul Clements
      Allow NBD I/O to be cancelled when a network outage occurs.  Previously, I/O
      would just hang, and if enough I/O was hung in nbd, the system (at least
      user-level) would completely hang until a TCP timeout (by default, 15 minutes)
      occurred.
      
      The patch introduces a new ioctl NBD_SET_TIMEOUT that allows a transmit
      timeout value (in seconds) to be specified.  Any network send that exceeds the
      timeout will be cancelled and the nbd connection will be shut down.  I've
      tested with various timeout values and 6 seconds seems to be a good choice for
      the timeout.  If the NBD_SET_TIMEOUT ioctl is not called, you get the old
      (I/O hang) behavior; a hedged sketch of using this ioctl follows this entry.
      Signed-off-by: Paul Clements <paul.clements@steeleye.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7fdfd406
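      Below is a hedged sketch of the new ioctl in use, with the 6-second value
      suggested above; everything else about bringing up the device (sockets,
      sizing, NBD_DO_IT) is elided.

        /* Hedged sketch: set a 6-second transmit timeout on /dev/nbd0 before
         * starting the device, so a hung network send is cancelled instead of
         * blocking until the TCP timeout. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/nbd.h>

        int main(void)
        {
            int nbd = open("/dev/nbd0", O_RDWR);
            unsigned long timeout_secs = 6;   /* value suggested in the commit message */

            if (nbd < 0) {
                perror("/dev/nbd0");
                return 1;
            }

            /* Without this call the old behaviour is preserved: I/O hangs on
             * a dead link until TCP itself gives up. */
            if (ioctl(nbd, NBD_SET_TIMEOUT, timeout_secs) < 0)
                perror("NBD_SET_TIMEOUT");

            /* NBD_SET_SOCK and NBD_DO_IT would follow as usual. */
            close(nbd);
            return 0;
        }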
  11. 08 Dec 2006, 1 commit
  12. 12 Oct 2006, 1 commit
  13. 26 Jun 2006, 1 commit
  14. 04 May 2006, 1 commit
  15. 23 Mar 2006, 1 commit
  16. 07 Jan 2006, 1 commit
    • [PATCH] nbd: fix TX/RX race condition · 4b2f0260
      Committed by Herbert Xu
      Janos Haar of First NetCenter Bt.  reported numerous crashes involving the
      NBD driver.  With his help, this was tracked down to bogus bio vectors
      which in turn were the result of a race condition between the
      receive/transmit routines in the NBD driver.
      
      The bug manifests itself like this:
      
      CPU0				CPU1
      do_nbd_request
      	add req to queuelist
      	nbd_send_request
      		send req head
      		for each bio
      			kmap
      			send
      				nbd_read_stat
      					nbd_find_request
      					nbd_end_request
      			kunmap
      
      When CPU1 finishes nbd_end_request, the request and all its associated
      bio's are freed.  So when CPU0 calls kunmap whose argument is derived from
      the last bio, it may crash.
      
      Under normal circumstances, the race occurs only on the last bio.  However,
      if an error is encountered on the remote NBD server (such as an incorrect
      magic number in the request), or if there were a bug in the server, it is
      possible for the nbd_end_request to occur any time after the request's
      addition to the queuelist.
      
      The following patch fixes this problem by making sure that requests are not
      added to the queuelist until after they have completed transmission.
      
      In order for the receiving side to be ready for responses involving
      requests still being transmitted, the patch introduces the concept of the
      active request.
      
      When a response matches the current active request, its processing is
      delayed until after the transmission has come to a stop (a hedged userspace
      analogue of this pattern follows this entry).
      
      This has been tested by Janos and it has been successful in curing this
      race condition.
      
      From: Herbert Xu <herbert@gondor.apana.org.au>
      
        Here is an updated patch which removes the active_req wait in
        nbd_clear_queue and the associated memory barrier.
      
        I've also clarified this in the comment.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: <djani22@dynamicweb.hu>
      Cc: Paul Clements <Paul.Clements@SteelEye.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4b2f0260
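      The "active request" pattern described above can be illustrated outside the
      kernel.  The sketch below is a userspace analogue (pthreads, hypothetical
      helper names), not the driver code: the sender marks the request it is
      currently transmitting, and the receive path delays completion of a
      matching response until that marker is cleared.

        /* Hedged userspace analogue of the driver's "active request" idea;
         * the function and variable names here are hypothetical. */
        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  active_done = PTHREAD_COND_INITIALIZER;
        static void *active_req;          /* request currently being transmitted */

        static void send_request(void *req)
        {
            pthread_mutex_lock(&lock);
            active_req = req;             /* mark as in transmission */
            pthread_mutex_unlock(&lock);

            /* ... write the request header and its bio pages to the socket ... */

            pthread_mutex_lock(&lock);
            active_req = NULL;            /* transmission finished */
            pthread_cond_broadcast(&active_done);
            /* Only now would the request be added to the queue the receiver
             * searches, so a reply can never complete it mid-transmission. */
            pthread_mutex_unlock(&lock);
        }

        static void handle_response(void *req)
        {
            pthread_mutex_lock(&lock);
            /* If the reply matches the request still being transmitted, delay
             * its completion until the sender has finished with it. */
            while (req == active_req)
                pthread_cond_wait(&active_done, &lock);
            pthread_mutex_unlock(&lock);

            /* ... look the request up on the queue and complete it safely ... */
        }

        int main(void)
        {
            int req = 42;                 /* stand-in for a block request */

            send_request(&req);
            handle_response(&req);
            printf("completed only after transmission finished\n");
            return 0;
        }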
  17. 17 Apr 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4