1. 23 Sep 2014 (15 commits)
  2. 10 Sep 2014 (2 commits)
    • blk-mq: scale depth and rq map appropriate if low on memory · a5164405
      Jens Axboe authored
      If we are running in a kdump environment, resources are scarce.
      For some SCSI setups with a huge set of shared tags, we run out
      of memory allocating what the driver is asking for. So implement
      scale-back logic to reduce the tag depth in those cases, allowing
      the driver to load successfully.
      
      We should extend this to detect low-memory situations in general, and
      implement a sane fallback for those (1 queue, 64 tags, or something like that).
      Tested-by: Robert Elliott <elliott@hp.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • Block: fix unbalanced bypass-disable in blk_register_queue · df35c7c9
      Alan Stern authored
      When a queue is registered, the block layer turns off the bypass
      setting (because bypass is enabled when the queue is created).  This
      doesn't work well for queues that are unregistered and then registered
      again; we get a WARNING because of the unbalanced calls to
      blk_queue_bypass_end().
      
      This patch fixes the problem by making blk_register_queue() call
      blk_queue_bypass_end() only the first time the queue is registered.
      Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
      Acked-by: Tejun Heo <tj@kernel.org>
      CC: James Bottomley <James.Bottomley@HansenPartnership.com>
      CC: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  3. 09 Sep 2014 (1 commit)
    • block, bdi: an active gendisk always has a request_queue associated with it · ff9ea323
      Tejun Heo authored
      bdev_get_queue() returns the request_queue associated with the
      specified block_device.  blk_get_backing_dev_info() makes use of
      bdev_get_queue() to determine the associated bdi given a block_device.
      
      All the callers of bdev_get_queue(), including
      blk_get_backing_dev_info(), assume that bdev_get_queue() may return
      NULL and implement NULL handling; however, bdev_get_queue() requires
      that the passed-in block_device be opened and attached to its gendisk.
      Because an active gendisk always has a valid request_queue associated
      with it, bdev_get_queue() can never return NULL and neither can
      blk_get_backing_dev_info().
      
      Make it clear that neither of the two functions can return NULL and
      remove NULL handling from all the callers.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  4. 08 Sep 2014 (1 commit)
    • blkcg: remove blkcg->id · f4da8072
      Tejun Heo authored
      blkcg->id is a unique id given to each blkcg; however, the
      cgroup_subsys_state which each blkcg embeds already has ->serial_nr
      which can be used for the same purpose.  Drop blkcg->id and replace
      its uses with blkcg->css.serial_nr.  Rename cfq_cgroup->blkcg_id to
      ->blkcg_serial_nr and @id in check_blkcg_changed() to @serial_nr for
      consistency.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  5. 04 Sep 2014 (2 commits)
    • block: Fix dev_t minor allocation lifetime · 2da78092
      Keith Busch authored
      Release the dev_t minor only when all references are closed, to prevent
      another device from acquiring the same major/minor.
      
      Since the partition's release may be invoked from call_rcu's soft-irq
      context, the ext_dev_idr mutex had to be replaced with a spinlock so
      as not to sleep.
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Cc: stable@kernel.org
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • blk-mq: cleanup after blk_mq_init_rq_map failures · 5676e7b6
      Robert Elliott authored
      In blk-mq.c blk_mq_alloc_tag_set, if
      	set->tags = kmalloc_node()
      succeeds but one of the blk_mq_init_rq_map() calls fails, the
      	goto out_unwind;
      path needs to free set->tags so the caller is not obligated
      to do so.  None of the current callers (null_blk, virtio_blk,
      or the forthcoming scsi-mq) do so.
      
      set->tags needs to be set to NULL after doing so,
      so other tag cleanup logic doesn't try to free
      a stale pointer later.  Also set it to NULL
      in blk_mq_free_tag_set.
      
      Tested with error injection on the forthcoming
      scsi-mq + hpsa combination.
      Signed-off-by: Robert Elliott <elliott@hp.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  6. 03 Sep 2014 (1 commit)
  7. 29 Aug 2014 (2 commits)
  8. 28 Aug 2014 (1 commit)
  9. 27 Aug 2014 (2 commits)
    • block,scsi: verify return pointer from blk_get_request · eb571eea
      Joe Lawrence authored
      The blk-core dead-queue checks introduce an error scenario in which
      blk_get_request returns NULL if the request queue has been shut down.
      This changes the behavior for __GFP_WAIT callers, who must now verify
      the return value before dereferencing it.
      Signed-off-by: Joe Lawrence <joe.lawrence@stratus.com>
      Acked-by: Jiri Kosina <jkosina@suse.cz> [for pktdvd]
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • cfq-iosched: Fix wrong children_weight calculation · e15693ef
      Toshiaki Makita authored
      cfq_group_service_tree_add() is applying new_weight at the beginning of
      the function via cfq_update_group_weight().
      This actually allows the weight to change between adding it to and
      subtracting it from children_weight, which triggers the WARN_ON_ONCE() in
      cfq_group_service_tree_del(), or even causes an oops via a divide error
      during vfr calculation in cfq_group_service_tree_add().
      
      The detailed scenario is as follows:
      1. Create blkio cgroup X, and cgroup Y as a child of X.
         Set X's weight to 500 and perform some I/O to apply new_weight.
         X's I/O completes before Y's I/O starts.
      2. Y starts I/O and cfq_group_service_tree_add() is called with Y.
      3. cfq_group_service_tree_add() walks up the tree during children_weight
         calculation and adds parent X's weight (500) to children_weight of root.
         children_weight becomes 500.
      4. Set X's weight to 1000.
      5. X starts I/O and cfq_group_service_tree_add() is called with X.
      6. cfq_group_service_tree_add() applies its new_weight (1000).
      7. I/O of Y completes and cfq_group_service_tree_del() is called with Y.
      8. I/O of X completes and cfq_group_service_tree_del() is called with X.
      9. cfq_group_service_tree_del() subtracts X's weight (1000) from
         children_weight of root. children_weight becomes -500.
         This triggers WARN_ON_ONCE().
      10. Set X's weight to 500.
      11. X starts I/O and cfq_group_service_tree_add() is called with X.
      12. cfq_group_service_tree_add() applies its new_weight (500) and adds it
          to children_weight of root. children_weight becomes 0. Calculation of
          vfr triggers an oops by divide error.
      
      The weight should be updated right before it is added to children_weight.
      Reported-by: Ruki Sekiya <sekiya.ruki@lab.ntt.co.jp>
      Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Jens Axboe <axboe@fb.com>
  10. 26 Aug 2014 (1 commit)
  11. 23 Aug 2014 (2 commits)
  12. 22 Aug 2014 (6 commits)
  13. 16 Aug 2014 (1 commit)
  14. 06 Aug 2014 (1 commit)
  15. 02 Aug 2014 (1 commit)
    • block: use kmalloc alignment for bio slab · 6a241483
      Mikulas Patocka authored
      Various subsystems can ask the bio subsystem to create a bio slab cache
      with some free space before the bio.  This free space can be used for any
      purpose.  Device mapper uses this per-bio-data feature to place some
      target-specific and device-mapper specific data before the bio, so that
      the target-specific data doesn't have to be allocated separately.
      
      This per-bio-data mechanism is used in place of kmalloc, so we need the
      allocated slab to have the same memory alignment as memory allocated
      with kmalloc.
      
      Change bio_find_or_create_slab() so that it uses ARCH_KMALLOC_MINALIGN
      alignment when creating the slab cache.  This is needed so that dm-crypt
      can use per-bio-data for encryption - the crypto subsystem assumes this
      data will have the same alignment as kmalloc'ed memory.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Jens Axboe <axboe@fb.com>
  16. 16 Jul 2014 (1 commit)
    • blkcg: don't call into policy draining if root_blkg is already gone · 2a1b4cf2
      Tejun Heo authored
      While a queue is being destroyed, all the blkgs are destroyed and its
      ->root_blkg pointer is set to NULL.  If someone else starts to drain
      while the queue is in this state, the following oops happens.
      
        NULL pointer dereference at 0000000000000028
        IP: [<ffffffff8144e944>] blk_throtl_drain+0x84/0x230
        PGD e4a1067 PUD b773067 PMD 0
        Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
        Modules linked in: cfq_iosched(-) [last unloaded: cfq_iosched]
        CPU: 1 PID: 537 Comm: bash Not tainted 3.16.0-rc3-work+ #2
        Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
        task: ffff88000e222250 ti: ffff88000efd4000 task.ti: ffff88000efd4000
        RIP: 0010:[<ffffffff8144e944>]  [<ffffffff8144e944>] blk_throtl_drain+0x84/0x230
        RSP: 0018:ffff88000efd7bf0  EFLAGS: 00010046
        RAX: 0000000000000000 RBX: ffff880015091450 RCX: 0000000000000001
        RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
        RBP: ffff88000efd7c10 R08: 0000000000000000 R09: 0000000000000001
        R10: ffff88000e222250 R11: 0000000000000000 R12: ffff880015091450
        R13: ffff880015092e00 R14: ffff880015091d70 R15: ffff88001508fc28
        FS:  00007f1332650740(0000) GS:ffff88001fa80000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
        CR2: 0000000000000028 CR3: 0000000009446000 CR4: 00000000000006e0
        Stack:
         ffffffff8144e8f6 ffff880015091450 0000000000000000 ffff880015091d80
         ffff88000efd7c28 ffffffff8144ae2f ffff880015091450 ffff88000efd7c58
         ffffffff81427641 ffff880015091450 ffffffff82401f00 ffff880015091450
        Call Trace:
         [<ffffffff8144ae2f>] blkcg_drain_queue+0x1f/0x60
         [<ffffffff81427641>] __blk_drain_queue+0x71/0x180
         [<ffffffff81429b3e>] blk_queue_bypass_start+0x6e/0xb0
         [<ffffffff814498b8>] blkcg_deactivate_policy+0x38/0x120
         [<ffffffff8144ec44>] blk_throtl_exit+0x34/0x50
         [<ffffffff8144aea5>] blkcg_exit_queue+0x35/0x40
         [<ffffffff8142d476>] blk_release_queue+0x26/0xd0
         [<ffffffff81454968>] kobject_cleanup+0x38/0x70
         [<ffffffff81454848>] kobject_put+0x28/0x60
         [<ffffffff81427505>] blk_put_queue+0x15/0x20
         [<ffffffff817d07bb>] scsi_device_dev_release_usercontext+0x16b/0x1c0
         [<ffffffff810bc339>] execute_in_process_context+0x89/0xa0
         [<ffffffff817d064c>] scsi_device_dev_release+0x1c/0x20
         [<ffffffff817930e2>] device_release+0x32/0xa0
         [<ffffffff81454968>] kobject_cleanup+0x38/0x70
         [<ffffffff81454848>] kobject_put+0x28/0x60
         [<ffffffff817934d7>] put_device+0x17/0x20
         [<ffffffff817d11b9>] __scsi_remove_device+0xa9/0xe0
         [<ffffffff817d121b>] scsi_remove_device+0x2b/0x40
         [<ffffffff817d1257>] sdev_store_delete+0x27/0x30
         [<ffffffff81792ca8>] dev_attr_store+0x18/0x30
         [<ffffffff8126f75e>] sysfs_kf_write+0x3e/0x50
         [<ffffffff8126ea87>] kernfs_fop_write+0xe7/0x170
         [<ffffffff811f5e9f>] vfs_write+0xaf/0x1d0
         [<ffffffff811f69bd>] SyS_write+0x4d/0xc0
         [<ffffffff81d24692>] system_call_fastpath+0x16/0x1b
      
      776687bc ("block, blk-mq: draining can't be skipped even if
      bypass_depth was non-zero") made it easier to trigger this bug by
      making blk_queue_bypass_start() drain even when it loses the first
      bypass test to blk_cleanup_queue(); however, the bug has always been
      there even before the commit as blk_queue_bypass_start() could race
      against queue destruction, win the initial bypass test but perform the
      actual draining after blk_cleanup_queue() already destroyed all blkgs.
      
      Fix it by skipping the call into policy draining if all the blkgs
      are already gone.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Shirish Pargaonkar <spargaonkar@suse.com>
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Reported-by: Jet Chen <jet.chen@intel.com>
      Cc: stable@vger.kernel.org
      Tested-by: Shirish Pargaonkar <spargaonkar@suse.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>