1. 08 Jun 2016, 4 commits
  2. 19 May 2016, 1 commit
  3. 17 May 2016, 3 commits
    • block: Update blkdev_dax_capable() for consistency · a8078b1f
      Authored by Toshi Kani
      blkdev_dax_capable() is similar to bdev_dax_supported(), but needs
      to remain a separate interface for checking the dax capability of
      a raw block device.
      
      Rename and relocate blkdev_dax_capable() so that the two are
      maintained consistently, and call bdev_direct_access() for the dax
      capability check.
      
      There is no change in the behavior.
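
      A minimal sketch of the kind of check this describes, assuming the
      blk_dax_ctl/bdev_direct_access() interface that appears further down
      in this log (the function name here is illustrative):

        static bool blkdev_dax_capable_sketch(struct block_device *bdev)
        {
                struct blk_dax_ctl dax = {
                        .sector = 0,
                        .size = PAGE_SIZE,
                };

                /* bdev_direct_access() returns the number of bytes available
                 * at the requested offset, or a negative errno when the
                 * device cannot do dax. */
                return bdev_direct_access(bdev, &dax) >= 0;
        }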
      
      Link: https://lkml.org/lkml/2016/5/9/950
      Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Andreas Dilger <adilger.kernel@dilger.ca>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Boaz Harrosh <boaz@plexistor.com>
      Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
    • block: Add bdev_dax_supported() for dax mount checks · 2d96afc8
      Authored by Toshi Kani
      DAX imposes additional requirements on a device.  Add
      bdev_dax_supported(), which performs all the precondition checks
      necessary for a filesystem to mount the device with the dax option.
      
      Also add a new check to verify that a partition is aligned to 4KB.
      When a partition is unaligned, any dax read/write access fails,
      except for metadata updates.
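
      One plausible way to express that 4KB alignment check (the helper
      name is illustrative, not the actual function):

        static bool partition_dax_aligned(struct block_device *bdev)
        {
                /* get_start_sect() returns the partition offset in 512-byte
                 * sectors; dax needs the byte offset PAGE_SIZE aligned. */
                return ((get_start_sect(bdev) << 9) & (PAGE_SIZE - 1)) == 0;
        }
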
      Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Andreas Dilger <adilger.kernel@dilger.ca>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Boaz Harrosh <boaz@plexistor.com>
      Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
    • block: Add vfs_msg() interface · 2af3a815
      Authored by Toshi Kani
      In preparation for moving DAX capability checks from filesystem code
      to the block layer, add a VFS message interface that aligns with the
      filesystems' message format.
      
      For instance, a vfs_msg() message followed by XFS messages in case
      of a dax mount error may look like:
      
        VFS (pmem0p1): error: unaligned partition for dax
        XFS (pmem0p1): DAX unsupported by block device. Turning off DAX.
        XFS (pmem0p1): Mounting V5 Filesystem
         :
      
      vfs_msg() is largely based on ext4_msg().
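
      A minimal sketch of such a helper, modeled on the ext4_msg() pattern
      (the signature of the real vfs_msg() may differ slightly):

        void vfs_msg_sketch(struct super_block *sb, const char *prefix,
                            const char *fmt, ...)
        {
                struct va_format vaf;
                va_list args;

                va_start(args, fmt);
                vaf.fmt = fmt;
                vaf.va = &args;
                /* sb->s_id carries the block device name, e.g. "pmem0p1" */
                printk("%sVFS (%s): %pV\n", prefix, sb->s_id, &vaf);
                va_end(args);
        }
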
      Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Andreas Dilger <adilger.kernel@dilger.ca>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Boaz Harrosh <boaz@plexistor.com>
      Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
  4. 02 May 2016, 1 commit
  5. 14 Apr 2016, 1 commit
  6. 13 Apr 2016, 3 commits
  7. 05 Apr 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Authored by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to
      implement the page cache with bigger chunks than PAGE_SIZE.
      
      This promise never materialized, and likely never will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it's a constant source of confusion whether the
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too
      much breakage to be doable.
      
      Let's stop pretending that pages in the page cache are special.  They
      are not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      script below.  For some reason, coccinelle doesn't patch header files.
      I've called spatch for them manually.
      
      The only adjustment after coccinelle is a revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 05 Mar 2016, 1 commit
  9. 04 Mar 2016, 3 commits
  10. 19 Feb 2016, 1 commit
    • block: Add blk_set_runtime_active() · d07ab6d1
      Authored by Mika Westerberg
      If a block device is left runtime suspended during system suspend, the
      driver's resume hook typically corrects the runtime PM status of the
      device back to "active" after it is resumed.  However, this is not
      enough, since the queue's runtime PM status is still "suspended".  As
      long as it is in this state, blk_pm_peek_request() returns NULL and
      thus prevents new requests from being processed.
      
      Add a new function, blk_set_runtime_active(), that can be used to
      force the queue status back to "active" as needed.
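
      A sketch of how a driver's system-resume path might use the new helper
      (the driver structure and field names here are hypothetical):

        static int mydrv_resume(struct device *dev)
        {
                struct mydrv *md = dev_get_drvdata(dev);

                /* ... restore hardware state ... */

                /* correct the device's runtime PM status ... */
                pm_runtime_disable(dev);
                pm_runtime_set_active(dev);
                pm_runtime_enable(dev);

                /* ... and the queue's, so that blk_pm_peek_request()
                 * starts handing out requests again */
                blk_set_runtime_active(md->queue);
                return 0;
        }
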
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Acked-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  11. 05 Feb 2016, 1 commit
  12. 16 Jan 2016, 2 commits
    • mm, dax, pmem: introduce pfn_t · 34c0fd54
      Authored by Dan Williams
      For the purpose of communicating the optional presence of a 'struct
      page' for the pfn returned from ->direct_access(), introduce a type that
      encapsulates a page-frame-number plus flags.  These flags contain the
      historical "page_link" encoding for a scatterlist entry, but can also
      denote "device memory".  Where "device memory" is a set of pfns that are
      not part of the kernel's linear mapping by default, but are accessed via
      the same memory controller as ram.
      
      The motivation for this new type is large capacity persistent memory
      that needs struct page entries in the 'memmap' to support 3rd party DMA
      (i.e.  O_DIRECT I/O with a persistent memory source/target).  However,
      we also need it in support of maintaining a list of mapped inodes which
      need to be unmapped at driver teardown or freeze_bdev() time.
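
      A sketch of the idea (the flag bit positions here are illustrative,
      not the kernel's actual encoding):

        typedef struct {
                u64 val;        /* pfn in the low bits, flags in the high bits */
        } pfn_t_sketch;

        #define PFN_SKETCH_DEV  (1ULL << 62)    /* "device memory", not in the linear map */
        #define PFN_SKETCH_MAP  (1ULL << 63)    /* a struct page exists for this pfn */

        static inline pfn_t_sketch pfn_to_pfn_t_sketch(unsigned long pfn, u64 flags)
        {
                return (pfn_t_sketch){ .val = pfn | flags };
        }

        static inline bool pfn_t_has_page_sketch(pfn_t_sketch p)
        {
                return (p.val & PFN_SKETCH_MAP) != 0;
        }
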
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dax: fix lifetime of in-kernel dax mappings with dax_map_atomic() · b2e0d162
      Authored by Dan Williams
      The DAX implementation needs to protect new calls to ->direct_access()
      and usage of its return value against the driver for the underlying
      block device being disabled.  Use blk_queue_enter()/blk_queue_exit() to
      hold off blk_cleanup_queue() from proceeding, or otherwise fail new
      mapping requests if the request_queue is being torn down.
      
      This also introduces blk_dax_ctl to simplify the interface from fs/dax.c
      through dax_map_atomic() to bdev_direct_access().
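
      A condensed sketch of the pattern described above (error handling
      trimmed; note that the second argument of blk_queue_enter() has
      changed form across kernel versions):

        static long dax_map_atomic_sketch(struct block_device *bdev,
                                          struct blk_dax_ctl *dax)
        {
                struct request_queue *q = bdev->bd_queue;
                long rc;

                if (blk_queue_enter(q, GFP_NOWAIT) != 0)
                        return -ENXIO;          /* queue is being torn down */

                rc = bdev_direct_access(bdev, dax);
                if (rc < 0)
                        blk_queue_exit(q);      /* drop the reference on error... */
                return rc;                      /* ...keep it while the mapping is in use */
        }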
      
      [willy@linux.intel.com: fix read() of a hole]
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Cc: Jan Kara <jack@suse.com>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Matthew Wilcox <willy@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 29 Dec 2015, 1 commit
  14. 23 Dec 2015, 1 commit
    • block: defer timeouts to a workqueue · 287922eb
      Authored by Christoph Hellwig
      Timer context is not very useful for drivers to perform any meaningful
      abort action from.  So instead of calling the driver from this useless
      context, defer it to a workqueue as soon as possible.
      
      Note that while a delayed_work item would seem the right thing here I didn't
      dare to use it due to the magic in blk_add_timer that pokes deep into timer
      internals.  But maybe this encourages Tejun to add a sensible API for that to
      the workqueue API and we'll all be fine in the end :)
      
      Contains a major update from Keith Busch:
      
      "This patch removes synchronizing the timeout work so that the timer can
       start a freeze on its own queue. The timer enters the queue, so timer
       context can only start a freeze, but not wait for frozen."
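
      A sketch of the resulting shape (simplified; the timer callback
      signature shown is the pre-4.15 style in use at the time, and the
      timeout work item is assumed to hang off the request queue as this
      patch arranges):

        static void blk_rq_timed_out_timer_sketch(unsigned long data)
        {
                struct request_queue *q = (struct request_queue *)data;

                /* Timer (softirq) context: do nothing here except kick the
                 * timeout work, which does the real handling in process
                 * context where drivers can take meaningful abort action. */
                kblockd_schedule_work(&q->timeout_work);
        }
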
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  15. 02 Dec 2015, 1 commit
  16. 30 Nov 2015, 1 commit
    • block: Always check queue limits for cloned requests · bf4e6b4e
      Authored by Hannes Reinecke
      When a cloned request is retried on other queues it always needs
      to be checked against the queue limits of that queue.
      Otherwise the calculations for nr_phys_segments might be wrong,
      leading to a crash in scsi_init_sgtable().
      
      To clarify this, the patch renames blk_rq_check_limits()
      to blk_cloned_rq_check_limits() and removes the symbol
      export, as the new function should only be used for
      cloned requests and never exported.
      
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Ewan Milne <emilne@redhat.com>
      Cc: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Hannes Reinecke <hare@suse.de>
      Fixes: e2a60da7 ("block: Clean up special command handling logic")
      Cc: stable@vger.kernel.org # 3.7+
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  17. 26 Nov 2015, 1 commit
    • block/sd: Fix device-imposed transfer length limits · ca369d51
      Authored by Martin K. Petersen
      Commit 4f258a46 ("sd: Fix maximum I/O size for BLOCK_PC requests")
      had the unfortunate side-effect of removing an implicit clamp to
      BLK_DEF_MAX_SECTORS for REQ_TYPE_FS requests in the block layer
      code. This caused problems for some SMR drives.
      
      Debugging this issue revealed a few problems with the existing
      infrastructure since the block layer didn't know how to deal with
      device-imposed limits, only limits set by the I/O controller.
      
       - Introduce a new queue limit, max_dev_sectors, which is used by the
         ULD to signal the maximum sectors for a REQ_TYPE_FS request.
      
       - Ensure that max_dev_sectors is correctly stacked and taken into
         account when overriding max_sectors through sysfs.
      
       - Rework sd_read_block_limits() so it saves the max_xfer and opt_xfer
         values for later processing.
      
       - In sd_revalidate() set the queue's max_dev_sectors based on the
         MAXIMUM TRANSFER LENGTH value in the Block Limits VPD. If this value
         is not reported, fall back to a cap based on the CDB TRANSFER LENGTH
         field size.
      
       - In sd_revalidate(), use OPTIMAL TRANSFER LENGTH from the Block Limits
         VPD--if reported and sane--to signal the preferred device transfer
         size for FS requests. Otherwise use BLK_DEF_MAX_SECTORS.
      
       - blk_limits_max_hw_sectors() is no longer used and can be removed.
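
      A sketch of the clamping the list above describes (the helper name is
      illustrative):

        static unsigned int fs_request_max_sectors(const struct queue_limits *lim)
        {
                /* honor both the controller limit and the new device limit,
                 * then apply the default cap for filesystem requests */
                unsigned int max = min(lim->max_hw_sectors, lim->max_dev_sectors);

                return min_t(unsigned int, max, BLK_DEF_MAX_SECTORS);
        }
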
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=93581
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: sweeneygj@gmx.com
      Tested-by: Arzeets <anatol.pomozov@gmail.com>
      Tested-by: David Eisner <david.eisner@oriel.oxon.org>
      Tested-by: Mario Kicherer <dev@kicherer.org>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
  18. 20 Nov 2015, 1 commit
    • block: protect rw_page against device teardown · 2e6edc95
      Authored by Dan Williams
      Fix use after free crashes like the following:
      
       general protection fault: 0000 [#1] SMP
       Call Trace:
        [<ffffffffa0050216>] ? pmem_do_bvec.isra.12+0xa6/0xf0 [nd_pmem]
        [<ffffffffa0050ba2>] pmem_rw_page+0x42/0x80 [nd_pmem]
        [<ffffffff8128fd90>] bdev_read_page+0x50/0x60
        [<ffffffff812972f0>] do_mpage_readpage+0x510/0x770
        [<ffffffff8128fd20>] ? I_BDEV+0x20/0x20
        [<ffffffff811d86dc>] ? lru_cache_add+0x1c/0x50
        [<ffffffff81297657>] mpage_readpages+0x107/0x170
        [<ffffffff8128fd20>] ? I_BDEV+0x20/0x20
        [<ffffffff8128fd20>] ? I_BDEV+0x20/0x20
        [<ffffffff8129058d>] blkdev_readpages+0x1d/0x20
        [<ffffffff811d615f>] __do_page_cache_readahead+0x28f/0x310
        [<ffffffff811d6039>] ? __do_page_cache_readahead+0x169/0x310
        [<ffffffff811c5abd>] ? pagecache_get_page+0x2d/0x1d0
        [<ffffffff811c76f6>] filemap_fault+0x396/0x530
        [<ffffffff811f816e>] __do_fault+0x4e/0xf0
        [<ffffffff811fce7d>] handle_mm_fault+0x11bd/0x1b50
      
      Cc: <stable@vger.kernel.org>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Reported-by: kbuild test robot <lkp@intel.com>
      Acked-by: Matthew Wilcox <willy@linux.intel.com>
      [willy: symmetry fixups]
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  19. 08 Nov 2015, 2 commits
  20. 22 Oct 2015, 7 commits
    • block: add an API for Persistent Reservations · bbd3e064
      Authored by Christoph Hellwig
      This commit adds a driver API and ioctls for controlling Persistent
      Reservations generically at the block layer.  Persistent Reservations
      are supported by SCSI and NVMe and allow controlling who gets access
      to a device in a shared storage setup.
      
      Note that we add a pr_ops structure to struct block_device_operations
      instead of adding the members directly to avoid bloating all instances
      of devices that will never support Persistent Reservations.
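
      A sketch of the shape of this (the member list is abbreviated and
      illustrative; see the real struct pr_ops for the full set of
      operations):

        struct pr_ops_sketch {
                int (*pr_register)(struct block_device *bdev, u64 old_key,
                                   u64 new_key, u32 flags);
                int (*pr_reserve)(struct block_device *bdev, u64 key,
                                  enum pr_type type, u32 flags);
                int (*pr_release)(struct block_device *bdev, u64 key,
                                  enum pr_type type);
                /* ... preempt, clear, ... */
        };

        struct block_device_operations_sketch {
                /* ... existing members ... */
                const struct pr_ops_sketch *pr_ops;     /* NULL if PR unsupported */
        };
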
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • block: move blk_integrity to request_queue · ac6fc48c
      Authored by Dan Williams
      A trace like the following precedes a crash in bio_integrity_process()
      when it goes to use an already freed blk_integrity profile.
      
       BUG: unable to handle kernel paging request at ffff8800d31b10d8
       IP: [<ffff8800d31b10d8>] 0xffff8800d31b10d8
       PGD 2f65067 PUD 21fffd067 PMD 80000000d30001e3
       Oops: 0011 [#1] SMP
       Dumping ftrace buffer:
       ---------------------------------
          ndctl-2222    2.... 44526245us : disk_release: pmem1s
       systemd--2223    4.... 44573945us : bio_integrity_endio: pmem1s
          <...>-409     4.... 44574005us : bio_integrity_process: pmem1s
       ---------------------------------
      [..]
        Call Trace:
        [<ffffffff8144e0f9>] ? bio_integrity_process+0x159/0x2d0
        [<ffffffff8144e4f6>] bio_integrity_verify_fn+0x36/0x60
        [<ffffffff810bd2dc>] process_one_work+0x1cc/0x4e0
      
      Given that a request_queue is pinned while i/o is in flight and that a
      gendisk is allowed to have a shorter lifetime, move blk_integrity to
      request_queue to satisfy requests arriving after the gendisk has been
      torn down.
      
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Martin K. Petersen <martin.petersen@oracle.com>
      [martin: fix the CONFIG_BLK_DEV_INTEGRITY=n case]
      Tested-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • block: generic request_queue reference counting · 3ef28e83
      Authored by Dan Williams
      Allow pmem, and other synchronous/bio-based block drivers, to fall back
      on a per-cpu reference count managed by the core for tracking queue
      live/dead state.
      
      The existing per-cpu reference count for the blk_mq case is promoted to
      be used in all block i/o scenarios.  This involves initializing it by
      default, waiting for it to drop to zero at exit, and holding a live
      reference over the invocation of q->make_request_fn() in
      generic_make_request().  The blk_mq code continues to take its own
      reference per blk_mq request and retains the ability to freeze the
      queue, but the check that the queue is frozen is moved to
      generic_make_request().
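
      A condensed sketch of the submission path this describes (signatures
      simplified to the era of this commit):

        void generic_make_request_sketch(struct bio *bio)
        {
                struct request_queue *q = bdev_get_queue(bio->bi_bdev);

                if (blk_queue_enter(q, GFP_NOIO) != 0) {
                        bio_io_error(bio);      /* queue is dying or dead */
                        return;
                }

                q->make_request_fn(q, bio);     /* e.g. pmem_make_request() */
                blk_queue_exit(q);              /* release the live reference */
        }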
      
      This fixes crash signatures like the following:
      
       BUG: unable to handle kernel paging request at ffff880140000000
       [..]
       Call Trace:
        [<ffffffff8145e8bf>] ? copy_user_handle_tail+0x5f/0x70
        [<ffffffffa004e1e0>] pmem_do_bvec.isra.11+0x70/0xf0 [nd_pmem]
        [<ffffffffa004e331>] pmem_make_request+0xd1/0x200 [nd_pmem]
        [<ffffffff811c3162>] ? mempool_alloc+0x72/0x1a0
        [<ffffffff8141f8b6>] generic_make_request+0xd6/0x110
        [<ffffffff8141f966>] submit_bio+0x76/0x170
        [<ffffffff81286dff>] submit_bh_wbc+0x12f/0x160
        [<ffffffff81286e62>] submit_bh+0x12/0x20
        [<ffffffff813395bd>] jbd2_write_superblock+0x8d/0x170
        [<ffffffff8133974d>] jbd2_mark_journal_empty+0x5d/0x90
        [<ffffffff813399cb>] jbd2_journal_destroy+0x24b/0x270
        [<ffffffff810bc4ca>] ? put_pwq_unlocked+0x2a/0x30
        [<ffffffff810bc6f5>] ? destroy_workqueue+0x225/0x250
        [<ffffffff81303494>] ext4_put_super+0x64/0x360
        [<ffffffff8124ab1a>] generic_shutdown_super+0x6a/0xf0
      
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • block: Inline blk_integrity in struct gendisk · 25520d55
      Authored by Martin K. Petersen
      Up until now the integrity profile has been dynamically allocated and
      attached to struct gendisk after the disk has been made active.
      
      This causes problems because NVMe devices need to register the profile
      prior to the partition table being read due to a mandatory metadata
      buffer requirement. In addition, DM goes through hoops to deal with
      preallocating, but not initializing integrity profiles.
      
      Since the integrity profile is small (4 bytes + a pointer), Christoph
      suggested moving it to struct gendisk proper. This requires several
      changes:
      
       - Moving the blk_integrity definition to genhd.h.
      
       - Inlining blk_integrity in struct gendisk.
      
       - Removing the dynamic allocation code.
      
       - Adding helper functions which allow gendisk to set up and tear down
         the integrity sysfs dir when a disk is added/deleted.
      
       - Adding a blk_integrity_revalidate() callback for updating the stable
         pages bdi setting.
      
       - The calls that depend on whether a device has an integrity profile or
         not now key off of the bi->profile pointer.
      
       - Simplifying the integrity support routines in DM (Mike Snitzer).
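
      Structurally, the end result looks roughly like this (other gendisk
      members elided; a sketch, not the exact definition):

        struct gendisk_sketch {
                /* ... */
        #ifdef CONFIG_BLK_DEV_INTEGRITY
                struct kobject integrity_kobj;  /* backs /sys/block/<disk>/integrity */
                struct blk_integrity integrity; /* inlined, no dynamic allocation */
        #endif
                /* ... */
        };
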
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Reported-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • block: Reduce the size of struct blk_integrity · a48f041d
      Authored by Martin K. Petersen
      The per-device properties in the blk_integrity structure were previously
      unsigned short. However, most of the values fit inside a char. The only
      exception is the data interval size and we can work around that by
      storing it as a power of two.
      
      This cuts the size of the dynamic portion of blk_integrity in half.
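
      A sketch of the resulting per-device structure (field names follow the
      description above and are illustrative):

        struct blk_integrity_sketch {
                const struct blk_integrity_profile *profile;   /* static template */
                unsigned char   flags;
                unsigned char   tuple_size;
                unsigned char   interval_exp;   /* interval = 1 << interval_exp */
                unsigned char   tag_size;
        };

        static inline unsigned int bi_interval_bytes(const struct blk_integrity_sketch *bi)
        {
                return 1U << bi->interval_exp;
        }
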
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Reported-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • block: Consolidate static integrity profile properties · 0f8087ec
      Authored by Martin K. Petersen
      We previously made a complete copy of a device's data integrity profile
      even though several of the fields inside the blk_integrity struct are
      pointers to fixed template entries in t10-pi.c.
      
      Split the static and per-device portions so that we can reference the
      template directly.
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Reported-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • block: Move integrity kobject to struct gendisk · aff34e19
      Authored by Martin K. Petersen
      The integrity kobject purely exists to support the integrity
      subdirectory in sysfs and doesn't really have anything to do with the
      blk_integrity data structure. Move the kobject to struct gendisk where
      it belongs.
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Reported-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  21. 30 Sep 2015, 1 commit
    • blk-mq: fix sysfs registration/unregistration race · 4593fdbe
      Authored by Akinobu Mita
      There is a race between cpu hotplug handling and adding/deleting
      gendisk for blk-mq, where both are trying to register and unregister
      the same sysfs entries.
      
      null_add_dev
          --> blk_mq_init_queue
              --> blk_mq_init_allocated_queue
                  --> add to 'all_q_list' (*)
          --> add_disk
              --> blk_register_queue
                  --> blk_mq_register_disk (++)
      
      null_del_dev
          --> del_gendisk
              --> blk_unregister_queue
                  --> blk_mq_unregister_disk (--)
          --> blk_cleanup_queue
              --> blk_mq_free_queue
                  --> del from 'all_q_list' (*)
      
      blk_mq_queue_reinit
          --> blk_mq_sysfs_unregister (-)
          --> blk_mq_sysfs_register (+)
      
      While the request queue is added to 'all_q_list' (*),
      blk_mq_queue_reinit() can be called for the queue at any time by the
      CPU hotplug callback.  But blk_mq_sysfs_unregister (-) and
      blk_mq_sysfs_register (+) in blk_mq_queue_reinit must not be called
      before blk_mq_register_disk (++) has finished or after
      blk_mq_unregister_disk (--) has finished, because '/sys/block/*/mq/'
      does not exist then.
      
      There is already a BLK_MQ_F_SYSFS_UP flag in hctx->flags that can be
      used to track this sysfs state, but it only fixes the issue partially.
      
      In order to fix it completely, we need a per-queue flag instead of a
      per-hctx flag, with appropriate locking.  So this introduces
      q->mq_sysfs_init_done, which is properly protected by all_q_mutex.
      
      Also, we need to ensure that blk_mq_map_swqueue() is called with
      all_q_mutex held.  Since hctx->nr_ctx is reset temporarily and then
      updated in blk_mq_map_swqueue(), we should avoid blk_mq_register_hctx()
      seeing the temporary hctx->nr_ctx value during CPU hotplug handling or
      while adding/deleting a gendisk.
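
      A sketch of how the per-queue guard can be used in the reinit-time
      (un)registration path (simplified, not the full function):

        static void blk_mq_sysfs_unregister_sketch(struct request_queue *q)
        {
                struct blk_mq_hw_ctx *hctx;
                int i;

                if (!q->mq_sysfs_init_done)     /* set/cleared under all_q_mutex */
                        return;                 /* /sys/block/*/mq/ is not there */

                queue_for_each_hw_ctx(q, hctx, i)
                        kobject_del(&hctx->kobj);
        }
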
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Reviewed-by: Ming Lei <tom.leiming@gmail.com>
      Cc: Ming Lei <tom.leiming@gmail.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  22. 13 Sep 2015, 1 commit
    • blk: rq_data_dir() should not return a boolean · 10fbd36e
      Authored by Linus Torvalds
      rq_data_dir() returns either READ or WRITE (0 == READ, 1 == WRITE), not
      a boolean value.
      
      Now, admittedly the "!= 0" doesn't really change the value (0 stays as
      zero, 1 stays as one), but it's not only redundant, it confuses gcc, and
      causes gcc to warn about the construct
      
          switch (rq_data_dir(req)) {
              case READ:
                  ...
              case WRITE:
                  ...
      
      that we have in a few drivers.
      
      Now, the gcc warning is silly and stupid (it seems to warn not about the
      switch value having a different type from the case statements, but about
      _any_ boolean switch value), but in this case the code itself is silly
      and stupid too, so let's just change it, and get rid of warnings like
      this:
      
        drivers/block/hd.c: In function ‘hd_request’:
        drivers/block/hd.c:630:11: warning: switch condition has boolean value [-Wswitch-bool]
           switch (rq_data_dir(req)) {
      
      The odd '!= 0' came in when "cmd_flags" got turned into a "u64" in
      commit 5953316d ("block: make rq->cmd_flags be 64-bit") and is
      presumably because the old code (that just did a logical 'and' with 1)
      would then end up making the type of rq_data_dir() be u64 too.
      
      But if we want to retain the old regular integer type, let's just cast
      the result to 'int' rather than use that rather odd '!= 0'.
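
      The change boils down to something like this (the macro names here are
      just labels for the before/after forms):

        /* before: forces a boolean result and trips -Wswitch-bool */
        #define rq_data_dir_old(rq)     (((rq)->cmd_flags & 1) != 0)

        /* after: keep a plain int, 0 == READ, 1 == WRITE */
        #define rq_data_dir_new(rq)     ((int)((rq)->cmd_flags & 1))
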
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  23. 11 Sep 2015, 1 commit