1. 18 May 2011, 1 commit
  2. 14 Apr 2011, 1 commit
  3. 03 Mar 2011, 1 commit
    • block: Move blk_throtl_exit() call to blk_cleanup_queue() · da527770
      Vivek Goyal authored
      Move blk_throtl_exit() into blk_cleanup_queue(), as blk_throtl_exit() is
      written in such a way that it needs the queue lock. In blk_release_queue()
      there is no guarantee that ->queue_lock is still around.
      
      Initially blk_throtl_exit() was in blk_cleanup_queue() but Ingo reported
      one problem.
      
        https://lkml.org/lkml/2010/10/23/86
      
        And a quick fix moved blk_throtl_exit() to blk_release_queue().
      
              commit 7ad58c02
              Author: Jens Axboe <jaxboe@fusionio.com>
              Date:   Sat Oct 23 20:40:26 2010 +0200
      
              block: fix use-after-free bug in blk throttle code
      
      This patch reverts the above change and does not try to shut down the
      throtl work in blk_sync_queue(). By avoiding the call to
      throtl_shutdown_timer_wq() from blk_sync_queue(), we should also avoid
      the problem reported by Ingo.
      
      blk_sync_queue() seems to be used only by the md driver, which uses it to
      make sure q->unplug_fn is not called, as md registers its own unplug
      functions and is about to free up the data structures used by
      unplug_fn(). Block throttle does not call back into unplug_fn() or into
      md, so there is no need to cancel blk throttle work.
      
      In fact, cancelling block throttle work would be bad: some bios may be
      throttled and scheduled to be dispatched later by the pending work, and
      if that work is cancelled, these bios might never be dispatched.
      
      The block layer also uses blk_sync_queue() at blk_cleanup_queue() and
      blk_release_queue() time. That should be safe, as we are also calling
      blk_throtl_exit(), which makes sure all the throttling-related data
      structures are cleaned up.
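      
      Roughly, the resulting teardown order looks like this (a simplified
      sketch, not the literal diff; unrelated teardown steps omitted):
      
        void blk_cleanup_queue(struct request_queue *q)
        {
                /* syncing no longer cancels pending throttle work */
                blk_sync_queue(q);
        
                /* runs while ->queue_lock is still guaranteed to exist */
                blk_throtl_exit(q);
        
                /* ... remaining queue teardown ... */
        }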
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      da527770
  4. 17 Dec 2010, 1 commit
    • block: Deprecate QUEUE_FLAG_CLUSTER and use queue_limits instead · e692cb66
      Martin K. Petersen authored
      When stacking devices, a request_queue is not always available. This
      forced us to have a no_cluster flag in the queue_limits that could be
      used as a carrier until the request_queue had been set up for a
      metadevice.
      
      There were several problems with that approach. First of all, it was up
      to the stacking device to remember to set the queue flag after stacking
      had completed. Also, the queue flag and the queue limits had to be kept
      in sync at all times. We got that wrong, which could lead to us issuing
      commands that went beyond the max scatterlist limit set by the driver.
      
      The proper fix is to avoid having two flags for tracking the same thing.
      We deprecate QUEUE_FLAG_CLUSTER and use the queue limit directly in the
      block layer merging functions. The queue_limit 'no_cluster' is turned
      into 'cluster' to avoid double negatives and to ease stacking.
      Clustering defaults to being enabled as before. The queue flag logic is
      removed from the stacking function, and explicitly setting the cluster
      flag is no longer necessary in DM and MD.
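      
      The gist, as a minimal sketch (assuming a blk_queue_cluster() helper is
      what the merging code now consults instead of the queue flag):
      
        static inline unsigned int blk_queue_cluster(struct request_queue *q)
        {
                /* replaces test_bit(QUEUE_FLAG_CLUSTER, &q->queue_flags) */
                return q->limits.cluster;
        }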
      Reported-by: Ed Lin <ed.lin@promise.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      e692cb66
  5. 24 Oct 2010, 1 commit
  6. 11 Sep 2010, 1 commit
  7. 23 Aug 2010, 1 commit
  8. 08 Aug 2010, 2 commits
  9. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Tejun Heo authored
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the following
      script is used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h (see the small before/after sketch
        after this list).
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
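      
      For illustration, a before/after sketch of a typical converted .c file
      that calls kmalloc()/kfree() (file contents are hypothetical):
      
        /* before: relied on slab.h arriving implicitly via percpu.h */
        #include <linux/module.h>
      
        /* after: the slab user includes the header it actually needs */
        #include <linux/module.h>
        #include <linux/slab.h>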
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the tests on
      step 6, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers, which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  10. 15 Mar 2010, 1 commit
  11. 08 Mar 2010, 1 commit
  12. 29 Jan 2010, 1 commit
    • block: Added in stricter no merge semantics for block I/O · 488991e2
      Alan D. Brunelle authored
      Updated the 'nomerges' tunable to accept a value of '2', indicating that
      _no_ merges at all are to be attempted (not even the simple one-hit cache).
      
      The following table illustrates the additional benefit - 5-minute runs of
      a random I/O load were applied to a dozen devices on a 16-way x86_64 system.
      
      nomerges        Throughput      %System         Improvement (tput / %sys)
      --------        ------------    -----------     -------------------------
      0               12.45 MB/sec    0.669365609
      1               12.50 MB/sec    0.641519199     0.40% / 2.71%
      2               12.52 MB/sec    0.639849750     0.56% / 2.96%
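      
      As a usage illustration (device name is hypothetical; the sysfs attribute
      is the standard queue/nomerges file):
      
        #include <stdio.h>
      
        int main(void)
        {
                /* 0 = all merges, 1 = only the one-hit cache, 2 = no merges at all */
                FILE *f = fopen("/sys/block/sda/queue/nomerges", "w");
      
                if (!f)
                        return 1;
                fputs("2\n", f);
                fclose(f);
                return 0;
        }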
      Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      488991e2
  13. 03 Dec 2009, 1 commit
    • block: Allow devices to indicate whether discarded blocks are zeroed · 98262f27
      Martin K. Petersen authored
      The discard ioctl is used by mkfs utilities to clear a block device
      prior to putting metadata down.  However, not all devices return zeroed
      blocks after a discard.  Some drives return stale data, potentially
      containing old superblocks.  It is therefore important to know whether
      discarded blocks are properly zeroed.
      
      Both ATA and SCSI drives have configuration bits that indicate whether
      zeroes are returned after a discard operation.  Implement a block level
      interface that allows this information to be bubbled up the stack and
      queried via a new block device ioctl.
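      
      A userspace sketch of the query, assuming the new ioctl is the
      BLKDISCARDZEROES request from <linux/fs.h> (device name hypothetical):
      
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/fs.h>
      
        int main(void)
        {
                unsigned int zeroes = 0;
                int fd = open("/dev/sda", O_RDONLY);
      
                if (fd < 0 || ioctl(fd, BLKDISCARDZEROES, &zeroes) < 0)
                        return 1;
                printf("discard zeroes data: %u\n", zeroes);
                close(fd);
                return 0;
        }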
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      98262f27
  14. 10 Nov 2009, 1 commit
  15. 02 Oct 2009, 1 commit
    • Add missing blk_trace_remove_sysfs to be in pair with blk_trace_init_sysfs · 48c0d4d4
      Zdenek Kabelac authored
      Add the missing blk_trace_remove_sysfs to pair with blk_trace_init_sysfs
      introduced in commit 1d54ad6d.
      Also release the kobject in the case where request_fn is NULL.
      
      The problem was noticed via a kmemleak backtrace when some sysfs entries
      were not properly destroyed during device removal:
      
      unreferenced object 0xffff88001aa76640 (size 80):
        comm "lvcreate", pid 2120, jiffies 4294885144
        hex dump (first 32 bytes):
          01 00 00 00 00 00 00 00 f0 65 a7 1a 00 88 ff ff  .........e......
          90 66 a7 1a 00 88 ff ff 86 1d 53 81 ff ff ff ff  .f........S.....
        backtrace:
          [<ffffffff813f9cc6>] kmemleak_alloc+0x26/0x60
          [<ffffffff8111d693>] kmem_cache_alloc+0x133/0x1c0
          [<ffffffff81195891>] sysfs_new_dirent+0x41/0x120
          [<ffffffff81194b0c>] sysfs_add_file_mode+0x3c/0xb0
          [<ffffffff81197c81>] internal_create_group+0xc1/0x1a0
          [<ffffffff81197d93>] sysfs_create_group+0x13/0x20
          [<ffffffff810d8004>] blk_trace_init_sysfs+0x14/0x20
          [<ffffffff8123f45c>] blk_register_queue+0x3c/0xf0
          [<ffffffff812447e4>] add_disk+0x94/0x160
          [<ffffffffa00d8b08>] dm_create+0x598/0x6e0 [dm_mod]
          [<ffffffffa00de951>] dev_create+0x51/0x350 [dm_mod]
          [<ffffffffa00de823>] ctl_ioctl+0x1a3/0x240 [dm_mod]
          [<ffffffffa00de8f2>] dm_compat_ctl_ioctl+0x12/0x20 [dm_mod]
          [<ffffffff81177bfd>] compat_sys_ioctl+0xcd/0x4f0
          [<ffffffff81036ed8>] sysenter_dispatch+0x7/0x2c
          [<ffffffffffffffff>] 0xffffffffffffffff
      Signed-off-by: Zdenek Kabelac <zkabelac@redhat.com>
      Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      48c0d4d4
  16. 14 Sep 2009, 1 commit
  17. 02 Sep 2009, 1 commit
    • block: Allow changing max_sectors_kb above the default 512 · c295fc05
      Nikanth Karthikesan authored
      The patch "block: Use accessor functions for queue limits"
      (ae03bf63) changed queue_max_sectors_store()
      to use blk_queue_max_sectors() instead of directly assigning the value.
      
      But blk_queue_max_sectors() differs a bit:
      1. It sets both max_sectors_kb and max_hw_sectors_kb.
      2. It never allows one to change max_sectors_kb above BLK_DEF_MAX_SECTORS.
      If one specifies a greater value, max_hw_sectors is set to that value but
      max_sectors is capped at BLK_DEF_MAX_SECTORS.
      
      I am not sure whether blk_queue_max_sectors() should be changed, as it
      seems to have been that way for a long time, and there may be callers
      dependent on that behaviour.
      
      This patch simply reverts to the older way of directly assigning the value to
      max_sectors as it was before.
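      
      The core of the revert, roughly (a simplified sketch of the sysfs store
      path; the surrounding bounds checks are omitted):
      
        spin_lock_irq(q->queue_lock);
        q->limits.max_sectors = max_sectors_kb << 1;    /* KB -> 512-byte sectors */
        spin_unlock_irq(q->queue_lock);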
      Signed-off-by: Nikanth Karthikesan <knikanth@suse.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      c295fc05
  18. 17 Jul 2009, 1 commit
  19. 23 May 2009, 4 commits
  20. 24 Apr 2009, 1 commit
  21. 16 Apr 2009, 1 commit
    • blktrace: add trace/ to /sys/block/sda · 1d54ad6d
      Li Zefan authored
      Impact: allow ftrace-plugin blktrace to trace device-mapper devices
      
      To trace a single partition:
        # echo 1 > /sys/block/sda/sda1/enable
      
      To trace the whole sda instead:
        # echo 1 > /sys/block/sda/enable
      
      Thus we also fix an issue reported by Ted, that ftrace-plugin blktrace
      can't be used to trace device-mapper devices.
      
      Now:
      
        # echo 1 > /sys/block/dm-0/trace/enable
        echo: write error: No such device or address
        # mount -t ext4 /dev/dm-0 /mnt
        # echo 1 > /sys/block/dm-0/trace/enable
        # echo blk > /debug/tracing/current_tracer
      Reported-by: Theodore Tso <tytso@mit.edu>
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Acked-by: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Shawn Du <duyuyang@gmail.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      LKML-Reference: <49E42665.6020506@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1d54ad6d
  22. 15 Apr 2009, 1 commit
  23. 07 Apr 2009, 1 commit
  24. 06 Apr 2009, 1 commit
  25. 30 Jan 2009, 2 commits
  26. 29 Dec 2008, 1 commit
  27. 09 Oct 2008, 2 commits
    • block: add support for IO CPU affinity · c7c22e4d
      Jens Axboe authored
      This patch adds support for controlling the IO completion CPU of
      either all requests on a queue, or on a per-request basis. We export
      a sysfs variable (rq_affinity) which, if set, migrates completions
      of requests to the CPU that originally submitted them. A bio helper
      (bio_set_completion_cpu()) is also added, so that queuers can ask
      for completion on that specific CPU.
      
      In testing, this has been shown to cut the system time by as much
      as 20-40% on synthetic workloads where CPU affinity is desired.
      
      This requires a little help from the architecture, so it'll only
      work as designed for archs that are using the new generic smp
      helper infrastructure.
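      
      A sketch of the submit-side helper mentioned above (the bi_comp_cpu
      field name is an assumption, not quoted from the patch):
      
        static inline void bio_set_completion_cpu(struct bio *bio, unsigned int cpu)
        {
                /* remember the submitting CPU so completion can be steered back to it */
                bio->bi_comp_cpu = cpu;
        }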
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      c7c22e4d
    • block: implement and use {disk|part}_to_dev() · ed9e1982
      Tejun Heo authored
      Implement {disk|part}_to_dev() and use them to access the generic device
      instead of directly dereferencing {disk|part}->dev.  To make sure no
      user is left behind, rename the generic device fields to __dev.
      
      This is in preparation for unifying partition 0 handling with other
      partitions.
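      
      Roughly, the accessors look like this (a sketch based on the renamed
      __dev field described above):
      
        #define disk_to_dev(disk)       (&(disk)->__dev)
        #define part_to_dev(part)       (&(part)->__dev)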
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      ed9e1982
  28. 07 May 2008, 1 commit
  29. 29 Apr 2008, 1 commit
    • block: Skip I/O merges when disabled · ac9fafa1
      Alan D. Brunelle authored
      The block I/O + elevator + I/O scheduler code spends a lot of time trying
      to merge I/Os -- rightfully so under "normal" circumstances. However,
      if one knows that the incoming I/O stream is /very/ random in nature,
      those cycles are wasted.
      
      This patch adds a per-request_queue tunable that (when set) disables
      merge attempts (beyond the simple one-hit cache check), thus freeing up
      a non-trivial amount of CPU cycles.
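      
      Roughly, the effect in the merge path (a sketch; per the description, the
      one-hit cache check still runs before this bail-out):
      
        /* in elv_merge(), after the q->last_merge one-hit cache check */
        if (blk_queue_nomerges(q))
                return ELEVATOR_NO_MERGE;       /* skip the full elevator merge lookup */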
      Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      ac9fafa1
  30. 21 Apr 2008, 1 commit
  31. 01 Feb 2008, 1 commit
  32. 30 Jan 2008, 2 commits