1. 17 Aug 2015, 7 commits
  2. 17 Jul 2015, 1 commit
  3. 20 May 2015, 1 commit
  4. 06 May 2015, 2 commits
  5. 03 Apr 2015, 7 commits
  6. 05 Mar 2015, 1 commit
  7. 05 Oct 2014, 1 commit
    • block: disable entropy contributions for nonrot devices · b277da0a
      Committed by Mike Snitzer
      Clear QUEUE_FLAG_ADD_RANDOM in all block drivers that set
      QUEUE_FLAG_NONROT.
      
      Historically, all block devices have automatically made entropy
      contributions.  But as previously stated in commit e2e1a148 ("block: add
      sysfs knob for turning off disk entropy contributions"):
          - On SSD disks, the completion times aren't as random as they
            are for rotational drives. So it's questionable whether they
            should contribute to the random pool in the first place.
          - Calling add_disk_randomness() has a lot of overhead.
      
      There are more reliable sources for randomness than non-rotational block
      devices.  From a security perspective it is better to err on the side of
      caution than to allow entropy contributions from unreliable "random"
      sources.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      b277da0a
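      A minimal sketch of the pattern this commit applies across block drivers,
      assuming the pre-4.17 queue-flag helpers; the driver setup function below
      is hypothetical:

          /* mark a queue non-rotational and opt it out of entropy
           * contributions, as described in the commit above */
          #include <linux/blkdev.h>

          static void mydrv_setup_queue(struct request_queue *q)
          {
                  queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
                  /* completion timing of non-rotational devices is too
                   * predictable to feed the entropy pool */
                  queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
          }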
  8. 07 Jun 2014, 1 commit
  9. 18 Apr 2014, 1 commit
  10. 02 Apr 2014, 1 commit
  11. 24 Nov 2013, 2 commits
    • block: Immutable bio vecs · 4550dd6c
      Committed by Kent Overstreet
      This adds a mechanism by which we can advance a bio by an arbitrary
      number of bytes without modifying the biovec: bio->bi_iter.bi_bvec_done
      indicates the number of bytes completed in the current bvec.
      
      Various driver code still needs to be updated to not refer to the bvec
      directly before we can use this for interesting things, like efficient
      bio splitting.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Lars Ellenberg <drbd-dev@lists.linbit.com>
      Cc: Paul Clements <Paul.Clements@steeleye.com>
      Cc: drbd-user@lists.linbit.com
      Cc: nbd-general@lists.sourceforge.net
      4550dd6c
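      A minimal sketch of what bi_bvec_done enables, assuming the bvec_iter
      layout this series introduces; the helper below is illustrative, not
      part of the patch:

          /* bi_bvec_done records how far into the current bvec the bio has
           * advanced, so code can address the unfinished part of a segment
           * without ever editing the biovec array itself */
          #include <linux/bio.h>

          static unsigned int remaining_in_current_bvec(const struct bio *bio)
          {
                  const struct bvec_iter *iter = &bio->bi_iter;
                  const struct bio_vec *bv = &bio->bi_io_vec[iter->bi_idx];

                  /* bytes of this bvec not yet completed */
                  return bv->bv_len - iter->bi_bvec_done;
          }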
    • block: Convert bio_for_each_segment() to bvec_iter · 7988613b
      Committed by Kent Overstreet
      More prep work for immutable biovecs: with immutable bvecs, drivers
      won't be able to use the biovec directly; they'll need to use helpers
      that take into account bio->bi_iter.bi_bvec_done.
      
      This updates callers for the new usage without changing the
      implementation yet.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: "Ed L. Cashin" <ecashin@coraid.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Lars Ellenberg <drbd-dev@lists.linbit.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Paul Clements <Paul.Clements@steeleye.com>
      Cc: Jim Paris <jim@jtan.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Yehuda Sadeh <yehuda@inktank.com>
      Cc: Sage Weil <sage@inktank.com>
      Cc: Alex Elder <elder@inktank.com>
      Cc: ceph-devel@vger.kernel.org
      Cc: Joshua Morris <josh.h.morris@us.ibm.com>
      Cc: Philip Kelleher <pjk1939@linux.vnet.ibm.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: linux390@de.ibm.com
      Cc: Nagalakshmi Nandigama <Nagalakshmi.Nandigama@lsi.com>
      Cc: Sreekanth Reddy <Sreekanth.Reddy@lsi.com>
      Cc: support@lsi.com
      Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Guo Chao <yan@linux.vnet.ibm.com>
      Cc: Asai Thambi S P <asamymuthupa@micron.com>
      Cc: Selvan Mani <smani@micron.com>
      Cc: Sam Bradshaw <sbradshaw@micron.com>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Stephen Hemminger <shemminger@vyatta.com>
      Cc: Quoc-Son Anh <quoc-sonx.anh@intel.com>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: linux-m68k@lists.linux-m68k.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: drbd-user@lists.linbit.com
      Cc: nbd-general@lists.sourceforge.net
      Cc: cbe-oss-dev@lists.ozlabs.org
      Cc: xen-devel@lists.xensource.com
      Cc: virtualization@lists.linux-foundation.org
      Cc: linux-raid@vger.kernel.org
      Cc: linux-s390@vger.kernel.org
      Cc: DL-MPTFusionLinux@lsi.com
      Cc: linux-scsi@vger.kernel.org
      Cc: devel@driverdev.osuosl.org
      Cc: linux-fsdevel@vger.kernel.org
      Cc: cluster-devel@redhat.com
      Cc: linux-mm@kvack.org
      Acked-by: Geoff Levand <geoff@infradead.org>
      7988613b
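      A minimal sketch of the converted interface, assuming the post-conversion
      bio_for_each_segment() signature; count_payload_bytes() is an illustrative
      caller, not code from the patch:

          /* the loop variable is now a struct bio_vec by value and the index
           * an opaque struct bvec_iter (the old form took a struct bio_vec
           * pointer plus an integer index) */
          #include <linux/bio.h>

          static unsigned int count_payload_bytes(struct bio *bio)
          {
                  struct bio_vec bvec;
                  struct bvec_iter iter;
                  unsigned int bytes = 0;

                  bio_for_each_segment(bvec, bio, iter)
                          bytes += bvec.bv_len;   /* helpers honour bi_bvec_done */

                  return bytes;
          }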
  12. 04 Jul 2013, 3 commits
  13. 01 May 2013, 1 commit
  14. 28 Feb 2013, 4 commits
    • nbd: fix sparse warning · 398eb085
      Committed by Alex Elder
      I just fixed this in "drivers/block/rbd.c" and I noticed that
      "drivers/block/nbd.c" has the same problem.  Fix a warning issued by
      sparse by adding some lockdep annotations to indicate the queue lock gets
      dropped (because it's held when do_nbd_request() is called) and
      re-acquired within the function.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Cc: Paul Clements <paul.clements@steeleye.com>
      Cc: Paul Clements <paul.clements@us.sios.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      398eb085
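      A minimal sketch of the annotation pattern the commit adds, assuming a
      request function entered with the queue lock held; the handler body is
      illustrative:

          /* tell sparse/lockdep that the function drops the lock it was
           * called with and re-takes it before returning */
          #include <linux/blkdev.h>

          static void my_request_fn(struct request_queue *q)
                  __releases(q->queue_lock) __acquires(q->queue_lock)
          {
                  spin_unlock_irq(q->queue_lock);
                  /* ... hand the request over to the transmit thread ... */
                  spin_lock_irq(q->queue_lock);
          }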
    • nbd: show read-only state in sysfs · a83e814b
      Committed by Paolo Bonzini
      Pass the read-only flag to set_device_ro, so that it will be visible to
      the block layer and in sysfs.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Paul Clements <Paul.Clements@steeleye.com>
      Cc: Alex Bligh <alex@alex.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a83e814b
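      A minimal sketch of the change described above, assuming the ioctl
      plumbing already carries a read-only flag from the client; the wrapper
      name is hypothetical:

          /* forward the flag to the block layer so it shows up in
           * /sys/block/<dev>/ro and is honoured on the block path */
          #include <linux/fs.h>

          static void nbd_set_read_only(struct block_device *bdev, bool read_only)
          {
                  set_device_ro(bdev, read_only ? 1 : 0);
          }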
    • nbd: fsync and kill block device on shutdown · 3a2d63f8
      Committed by Paolo Bonzini
      There are two problems with shutdown in the NBD driver.
      
      1: Receiving the NBD_DISCONNECT ioctl does not sync the filesystem.
      
         This patch adds the sync operation into __nbd_ioctl()'s
         NBD_DISCONNECT handler.  This is useful because BLKFLSBUF is restricted
         to processes that have CAP_SYS_ADMIN, and the NBD client may not
         possess it (fsync of the block device does not sync the filesystem,
         either).
      
      2: Once we clear the socket we have no guarantee that later reads will
         come from the same backing storage.
      
         The patch adds calls to kill_bdev() in __nbd_ioctl()'s socket
         clearing code so the page cache is cleaned; otherwise, reads that hit
         the page cache would return stale data from the previously-accessible disk.
      
      Example:
      
          # qemu-nbd -r -c/dev/nbd0 /dev/sr0
          # file -s /dev/nbd0
          /dev/stdin: # UDF filesystem data (version 1.5) etc.
          # qemu-nbd -d /dev/nbd0
          # qemu-nbd -r -c/dev/nbd0 /dev/sda
          # file -s /dev/nbd0
          /dev/stdin: # UDF filesystem data (version 1.5) etc.
      
      While /dev/sda has:
      
          # file -s /dev/sda
          /dev/sda: x86 boot sector; etc.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Acked-by: Paul Clements <Paul.Clements@steeleye.com>
      Cc: Alex Bligh <alex@alex.org.uk>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3a2d63f8
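      A minimal sketch of the two fixes, assuming the ioctl handler has the
      struct block_device at hand; the helper names are hypothetical and error
      paths are omitted:

          #include <linux/fs.h>

          /* 1: NBD_DISCONNECT - write out dirty data before dropping the link */
          static void nbd_sync_before_disconnect(struct block_device *bdev)
          {
                  fsync_bdev(bdev);
          }

          /* 2: socket clearing - invalidate cached pages so later reads cannot
           * be satisfied from the previously attached backing store */
          static void nbd_drop_stale_cache(struct block_device *bdev)
          {
                  kill_bdev(bdev);
          }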
    • nbd: support FLUSH requests · 75f187ab
      Committed by Alex Bligh
      Currently, the NBD device does not accept flush requests from the Linux
      block layer.  If the NBD server opened the target with neither O_SYNC nor
      O_DSYNC, however, the device will be effectively backed by a writeback
      cache.  Without issuing flushes properly, operation of the NBD device will
      not be safe against power losses.
      
      The NBD protocol has support for both a cache flush command and a FUA
      command flag; the server will also pass a flag to note its support for
      these features.  This patch adds support for the cache flush command and
      flag.  In the kernel, we receive the flags via the NBD_SET_FLAGS ioctl,
      and map NBD_FLAG_SEND_FLUSH to the argument of blk_queue_flush.  When the
      flag is active the block layer will send REQ_FLUSH requests, which we
      translate to NBD_CMD_FLUSH commands.
      
      FUA support is not included in this patch because all free software
      servers implement it with a full fdatasync; thus it has no advantage over
      supporting flush only.  Because I [Paolo] cannot really benchmark it in a
      realistic scenario, I cannot tell if it is a good idea or not.  It is also
      not clear if it is valid for an NBD server to support FUA but not flush.
      The Linux block layer gives a warning for this combination; the NBD
      protocol documentation says nothing about it.
      
      The patch also fixes a small problem in the handling of flags: nbd->flags
      must be cleared at the end of NBD_DO_IT, but the driver was not doing
      that.  The bug manifests itself as follows.  Suppose you use two different
      client/server pairs to start the NBD device.  Suppose also that the first
      client supports NBD_SET_FLAGS, and the first server sends
      NBD_FLAG_SEND_FLUSH; the second pair instead does neither of these two
      things.  Before this patch, the second invocation of NBD_DO_IT will use a
      stale value of nbd->flags, and the second server will issue an error every
      time it receives an NBD_CMD_FLUSH command.
      
      This bug is pre-existing, but it becomes much more important after this
      patch; flush failures make the device pretty much unusable, unlike
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Alex Bligh <alex@alex.org.uk>
      Acked-by: Paul Clements <Paul.Clements@steeleye.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      75f187ab
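      A minimal sketch of the two halves described above, assuming the 3.x-era
      blk_queue_flush()/REQ_FLUSH interface (later kernels use write-cache
      hints instead); the helper names are illustrative:

          #include <linux/blkdev.h>
          #include <linux/nbd.h>

          /* NBD_SET_FLAGS side: advertise a volatile write cache when the
           * server announced NBD_FLAG_SEND_FLUSH */
          static void nbd_apply_flags(struct request_queue *q, u32 flags)
          {
                  if (flags & NBD_FLAG_SEND_FLUSH)
                          blk_queue_flush(q, REQ_FLUSH);
          }

          /* request side: translate a block-layer flush into the wire command */
          static u32 nbd_request_to_cmd(struct request *req)
          {
                  if (req->cmd_flags & REQ_FLUSH)
                          return NBD_CMD_FLUSH;
                  return rq_data_dir(req) == WRITE ? NBD_CMD_WRITE : NBD_CMD_READ;
          }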
  15. 23 Feb 2013, 1 commit
  16. 06 Oct 2012, 2 commits
  17. 18 Sep 2012, 1 commit
    • nbd: clear waiting_queue on shutdown · fded4e09
      Committed by Paul Clements
      Fix a serious but uncommon bug in nbd which occurs when there is heavy
      I/O going to the nbd device while, at the same time, a failure (server,
      network) or manual disconnect of the nbd connection occurs.
      
      There is a small window between the time that the nbd_thread is stopped
      and the socket is shutdown where requests can continue to be queued to
      nbd's internal waiting_queue.  When this happens, those requests are
      never completed or freed.
      
      The fix is to clear the waiting_queue on shutdown of the nbd device, in
      the same way that the nbd request queue (queue_head) is already being
      cleared.
      Signed-off-by: Paul Clements <paul.clements@steeleye.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fded4e09
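      A minimal sketch of the cleanup described above, assuming it runs where
      the existing queue_head clearing does, i.e. with the nbd thread stopped
      so nothing is still appending to the list; nbd_end_request() stands for
      the driver's own completion helper of that era:

          #include <linux/blkdev.h>
          #include <linux/nbd.h>

          static void nbd_clear_waiting_queue(struct nbd_device *nbd)
          {
                  struct request *req;

                  while (!list_empty(&nbd->waiting_queue)) {
                          req = list_entry(nbd->waiting_queue.next,
                                           struct request, queuelist);
                          list_del_init(&req->queuelist);
                          req->errors++;          /* fail the request... */
                          nbd_end_request(req);   /* ...and complete it */
                  }
          }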
  18. 01 Aug 2012, 1 commit
    • nbd: set SOCK_MEMALLOC for access to PFMEMALLOC reserves · 7f338fe4
      Committed by Mel Gorman
      Set SOCK_MEMALLOC on the NBD socket to allow access to PFMEMALLOC reserves
      so pages backed by NBD, particularly if swap related, can be cleaned to
      prevent the machine being deadlocked.  It is still possible that the
      PFMEMALLOC reserves get depleted resulting in deadlock but this can be
      resolved by the administrator by increasing min_free_kbytes.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: David Miller <davem@davemloft.net>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Eric B Munson <emunson@mgebm.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f338fe4
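      A minimal sketch of the change described above, assuming the socket
      pointer the driver keeps for its connection; the real patch also wraps
      transmission in PF_MEMALLOC:

          /* tag the connection so its allocations may dip into the
           * PFMEMALLOC reserves while cleaning pages under memory pressure */
          #include <net/sock.h>

          static void nbd_allow_memalloc(struct socket *sock)
          {
                  sk_set_memalloc(sock->sk);
          }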
  19. 31 Jul 2012, 1 commit
  20. 29 Mar 2012, 1 commit