1. 15 May, 2018: 9 commits
  2. 14 May, 2018: 5 commits
  3. 11 May, 2018: 7 commits
  4. 09 May, 2018: 11 commits
  5. 08 May, 2018: 3 commits
    • block: Shorten interrupt disabled regions · 50864670
      Thomas Gleixner authored
      Commit 9c40cef2 ("sched: Move blk_schedule_flush_plug() out of
      __schedule()") moved the blk_schedule_flush_plug() call out of the
      interrupt/preempt disabled region in the scheduler. This allows
      replacing local_irq_save/restore(flags) with
      local_irq_disable/enable() in blk_flush_plug_list().
      
      But it makes more sense to disable interrupts explicitly when the
      request queue is locked and reenable them when the request queue is
      unlocked. This shortens the interrupt disabled section, which is
      important when the plug list contains requests for more than one
      queue. The comment claiming that interrupts must be disabled around
      the loop is misleading, as the called functions can reenable
      interrupts unconditionally anyway, and it obfuscates the scope badly:
      
       local_irq_save(flags);
         spin_lock(q->queue_lock);
         ...
         queue_unplugged(q...);
           scsi_request_fn();
             spin_unlock_irq(q->queue_lock);
      
      -------------------^^^ ????
      
             spin_lock_irq(q->queue_lock);
           spin_unlock(q->queue_lock);
       local_irq_restore(flags);
      
      Aside from that, the detached interrupt disabling is a constant pain
      for PREEMPT_RT, as it requires patching and special casing when RT is
      enabled, while with the spin_*_irq() variants this happens
      automatically.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
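      The locking-scope argument above can be sketched as a userspace toy
      model (nothing here is kernel code; irq_disable(), work() and both
      flush functions are hypothetical stand-ins that merely count work
      units done inside one continuous "interrupts off" region):

```c
#include <assert.h>

/* Toy model: track the longest continuous "interrupts disabled" span. */
static int irq_off;    /* 1 = interrupts disabled */
static int cur_span;   /* work units in the current irq-off region */
static int max_span;   /* longest irq-off region seen so far */

static void irq_disable(void) { irq_off = 1; cur_span = 0; }

static void irq_enable(void)
{
    irq_off = 0;
    if (cur_span > max_span)
        max_span = cur_span;
}

static void work(int units) { if (irq_off) cur_span += units; }

/* Old shape: one local_irq_save() around the whole plug-list loop. */
static int flush_long(int nr_queues, int units_per_queue)
{
    max_span = 0;
    irq_disable();                      /* local_irq_save(flags)    */
    for (int q = 0; q < nr_queues; q++)
        work(units_per_queue);          /* per-queue critical section */
    irq_enable();                       /* local_irq_restore(flags) */
    return max_span;
}

/* New shape: interrupts disabled only while each queue lock is held. */
static int flush_short(int nr_queues, int units_per_queue)
{
    max_span = 0;
    for (int q = 0; q < nr_queues; q++) {
        irq_disable();                  /* spin_lock_irq(q->queue_lock)   */
        work(units_per_queue);
        irq_enable();                   /* spin_unlock_irq(q->queue_lock) */
    }
    return max_span;
}
```

      With several queues of equal work, the old shape keeps interrupts off
      for the entire loop, while the new shape's longest disabled region is
      a single queue's critical section, exactly the improvement the commit
      message describes.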
    • block: Remove redundant WARN_ON() · 656cb6d0
      Anna-Maria Gleixner authored
      Commit 2fff8a92 ("block: Check locking assumptions at runtime") added a
      lockdep_assert_held(q->queue_lock) which makes the WARN_ON() redundant
      because lockdep will detect and warn about context violations.
      
      The unconditional WARN_ON() does not provide real additional value, so it
      can be removed.
      Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
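      The redundancy argument can be sketched as a userspace toy (all names
      hypothetical, not the kernel API): once every entry path runs through
      a held-lock assertion equivalent to lockdep_assert_held(), a manual
      check of the same condition can never fire on its own, so it reports
      nothing the assertion would not already have caught:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy queue whose lock state is tracked by hand. */
struct toy_queue { bool lock_held; };

static int lockdep_splats;  /* times the assertion tripped */
static int warn_on_hits;    /* times the redundant check tripped */

/* Stand-in for lockdep_assert_held(q->queue_lock). */
static void toy_assert_held(struct toy_queue *q)
{
    if (!q->lock_held)
        lockdep_splats++;   /* lockdep would warn here */
}

/* Shape of the function before the cleanup: both checks present. */
static void start_queue_old(struct toy_queue *q)
{
    toy_assert_held(q);
    if (!q->lock_held)      /* WARN_ON(): always preceded by the
                             * assertion above, so it can only repeat it */
        warn_on_hits++;
}
```

      Every violation trips the assertion first, so the second check adds
      no coverage, which is why the commit removes it.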
    • block: don't disable interrupts during kmap_atomic() · f3a1075e
      Sebastian Andrzej Siewior authored
      bounce_copy_vec() disables interrupts around kmap_atomic(). This is a
      leftover from the old kmap_atomic() implementation which relied on fixed
      mapping slots, so the caller had to make sure that the same slot could not
      be reused from an interrupting context.
      
      kmap_atomic() was changed to dynamic slots long ago, and commit 1ec9c5dd
      ("include/linux/highmem.h: remove the second argument of k[un]map_atomic()")
      removed the slot assignments, but the callers were not checked for
      now-redundant interrupt disabling.
      
      Remove the conditional interrupt disable.
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
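      The cleaned-up copy helper can be sketched in userspace (toy stand-ins
      for struct bio_vec and kmap_atomic()/kunmap_atomic(), not the kernel
      API): with dynamic per-context mapping slots there is nothing left for
      the caller to protect, so the body is simply map, memcpy, unmap, with
      no local_irq_save()/restore() around it:

```c
#include <assert.h>
#include <string.h>

/* Toy stand-in for struct bio_vec: a "page" plus offset and length. */
struct toy_bvec {
    unsigned char page[64];
    unsigned int  bv_offset;
    unsigned int  bv_len;
};

/* Toy kmap_atomic()/kunmap_atomic(): in the kernel these map a highmem
 * page into a per-context slot; here the "page" is directly addressable. */
static unsigned char *toy_kmap_atomic(struct toy_bvec *v) { return v->page; }
static void toy_kunmap_atomic(unsigned char *p) { (void)p; }

/* Shape of bounce_copy_vec() after the patch: no interrupt disabling
 * around the mapping, since slots are per-context rather than fixed. */
static void toy_bounce_copy_vec(struct toy_bvec *to, const unsigned char *vfrom)
{
    unsigned char *vto = toy_kmap_atomic(to);
    memcpy(vto + to->bv_offset, vfrom, to->bv_len);
    toy_kunmap_atomic(vto);
}
```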
  6. 26 April, 2018: 2 commits
  7. 25 April, 2018: 2 commits
  8. 19 April, 2018: 1 commit