1. 10 September 2010, 8 commits
    • block: pass gfp_mask and flags to sb_issue_discard · 2cf6d26a
      Committed by Christoph Hellwig
      We'll need to get rid of the BLKDEV_IFL_BARRIER flag; to facilitate
      that, and to make the interface less confusing, pass all flags
      explicitly.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
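      A minimal sketch of a call site under the new explicit-flags signature;
      the discard_extent() wrapper, the GFP_NOFS mask and the BLKDEV_IFL_WAIT
      flag choice are illustrative, not taken from the patch:

          #include <linux/fs.h>
          #include <linux/blkdev.h>

          /* Discard nr_blocks filesystem blocks starting at `block`, passing
           * the allocation mask and behaviour flags explicitly instead of
           * relying on defaults hidden inside the helper. */
          static int discard_extent(struct super_block *sb, sector_t block,
                                    sector_t nr_blocks)
          {
                  return sb_issue_discard(sb, block, nr_blocks, GFP_NOFS,
                                          BLKDEV_IFL_WAIT);
          }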
    • block: implement REQ_FLUSH/FUA based interface for FLUSH/FUA requests · 4fed947c
      Committed by Tejun Heo
      Now that the backend conversion is complete, export sequenced
      FLUSH/FUA capability through REQ_FLUSH/FUA flags.  REQ_FLUSH means the
      device cache should be flushed before executing the request.  REQ_FUA
      means that the data in the request should be on non-volatile media on
      completion.
      
      The block layer will choose the correct way of implementing the
      semantics and execute it.  The request may be passed to the device
      directly if the device can handle it; otherwise, it will be sequenced
      using one or more proxy requests.  Devices will never see REQ_FLUSH
      and/or REQ_FUA requests they don't support.
      
      Also, unlike the original REQ_HARDBARRIER, REQ_FLUSH/FUA requests are
      never failed with -EOPNOTSUPP.  If the underlying device doesn't
      support FLUSH/FUA, the block layer simply makes them no-ops.  In other
      words, it no longer distinguishes between a writeback cache which
      doesn't support cache flush and writethrough/no cache.  Devices which
      have a WB cache without flush are very difficult to come by these days,
      and there's not much we can do about them anyway, so it doesn't make
      sense to require everyone to implement -EOPNOTSUPP handling.  This will
      simplify filesystems and block drivers, as they can drop the
      -EOPNOTSUPP retry logic for barriers.
      
      * QUEUE_ORDERED_* are removed and QUEUE_FSEQ_* are moved into
        blk-flush.c.
      
      * REQ_FLUSH w/o data can also be directly passed to drivers without
        sequencing, but some drivers assume that zero-length requests don't
        have rq->bio, which isn't true for these requests, so proxy requests
        are still used for them.
      
      * REQ_COMMON_MASK now includes REQ_FLUSH | REQ_FUA so that they are
        copied from bio to request.
      
      * WRITE_BARRIER is marked deprecated and WRITE_FLUSH, WRITE_FUA and
        WRITE_FLUSH_FUA are added.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
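      A minimal sketch of how a filesystem might issue a durable commit write
      with the new flags; the submit_commit_bio() helper and its
      caller-supplied bio are hypothetical:

          #include <linux/fs.h>
          #include <linux/bio.h>

          /* Issue a journal commit block: REQ_FLUSH flushes the device cache
           * before the write, REQ_FUA makes the write itself durable on
           * completion.  If the device supports neither, the block layer now
           * degrades this to a plain write instead of failing with
           * -EOPNOTSUPP. */
          static void submit_commit_bio(struct bio *bio)
          {
                  submit_bio(WRITE_FLUSH_FUA, bio);
          }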
    • block: rename barrier/ordered to flush · dd4c133f
      Committed by Tejun Heo
      With ordering requirements dropped, barrier and ordered are misnomers.
      Now all the block layer does is sequence FLUSH and FUA.  Rename them to
      flush.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: drop barrier ordering by queue draining · 28e7d184
      Committed by Tejun Heo
      Filesystems will take all the responsibility for ordering requests
      around commit writes and will only indicate how the commit writes
      themselves should be handled by block layers.  This patch drops
      barrier ordering by queue draining from the block layer.  The
      ordering-by-draining implementation was somewhat invasive to request
      handling.  A list of notable changes follows.
      
      * Each queue has a 1-bit color which is flipped on each barrier issue.
        This is used to track whether a given request is issued before the
        current barrier or not.  The REQ_ORDERED_COLOR flag and the coloring
        implementation in __elv_add_request() are removed.
      
      * Requests which shouldn't be processed yet for draining were stalled
        by returning -EAGAIN from blk_do_ordered() according to the test
        result between blk_ordered_req_seq() and blk_ordered_cur_seq().
        This logic is removed.
      
      * Draining completion logic in elv_completed_request() removed.
      
      * All barrier sequence requests were queued to the request queue and
        then trickled to the lower layer according to progress, so
        maintaining request order during requeue was necessary.  This is
        replaced by queueing the next request in the barrier sequence only
        after the current one completes, from blk_ordered_complete_seq(),
        which removes the need for multiple proxy requests in struct
        request_queue and for the request sorting logic in the
        ELEVATOR_INSERT_REQUEUE path of elv_insert().
      
      * As barriers no longer have ordering constraints, there's no need to
        dump the whole elevator onto the dispatch queue on each barrier.
        Insert barriers at the front instead.
      
      * If other barrier requests come to the front of the dispatch queue
        while one is already in progress, they are stored in
        q->pending_barriers and restored to the dispatch queue one by one
        after each barrier completion, from blk_ordered_complete_seq().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: misc cleanups in barrier code · dd831006
      Committed by Tejun Heo
      Make the following cleanups in preparation for the barrier/flush update.
      
      * blk_do_ordered() declaration is moved from include/linux/blkdev.h to
        block/blk.h.
      
      * blk_do_ordered() now returns a pointer to struct request, with %NULL
        meaning "try the next request" and ERR_PTR(-EAGAIN) meaning "try
        again later".  The third case will be dropped by further changes.
      
      * In the initialization of the proxy barrier request, the data
        direction is already set by init_request_from_bio().  Drop the
        unnecessary explicit REQ_WRITE setting and move
        init_request_from_bio() above the REQ_FUA flag setting.
      
      * add_request() is collapsed into __make_request().
      
      These changes don't make any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
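      A sketch of a caller honoring the new return convention; the
      surrounding dispatch loop is condensed and the elv_next_candidate()
      helper is hypothetical:

          struct request *rq = elv_next_candidate(q);    /* hypothetical */

          rq = blk_do_ordered(q, rq);
          if (IS_ERR(rq))         /* ERR_PTR(-EAGAIN): try again later */
                  return NULL;
          if (rq == NULL)         /* %NULL: try the next request */
                  goto next_request;
          /* otherwise rq is ready to be dispatched */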
    • block: deprecate barrier and replace blk_queue_ordered() with blk_queue_flush() · 4913efe4
      Committed by Tejun Heo
      Barrier is deemed too heavy and will soon be replaced by FLUSH/FUA
      requests.  Deprecate barrier.  All REQ_HARDBARRIERs are failed with
      -EOPNOTSUPP, and blk_queue_ordered() is replaced with the simpler
      blk_queue_flush().
      
      blk_queue_flush() takes combinations of REQ_FLUSH and REQ_FUA.  If a
      device has a write cache and can flush it, it should set REQ_FLUSH.
      If the device can also handle FUA writes, it should set REQ_FUA.
      
      All blk_queue_ordered() users are converted.
      
      * ORDERED_DRAIN is mapped to 0, which is the default value.
      * ORDERED_DRAIN_FLUSH is mapped to REQ_FLUSH.
      * ORDERED_DRAIN_FLUSH_FUA is mapped to REQ_FLUSH | REQ_FUA.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Boaz Harrosh <bharrosh@panasas.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Alasdair G Kergon <agk@redhat.com>
      Cc: Pierre Ossman <drzeus@drzeus.cx>
      Cc: Stefan Weinhuber <wein@de.ibm.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
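      A minimal sketch of the converted driver pattern, assuming a device
      with a flushable write cache and native FUA support; mydrv_setup_queue()
      is a placeholder:

          #include <linux/blkdev.h>

          static void mydrv_setup_queue(struct request_queue *q)
          {
                  /* Write cache that can be flushed, plus FUA writes.  A
                   * write-through device would pass 0, the default. */
                  blk_queue_flush(q, REQ_FLUSH | REQ_FUA);
          }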
    • block: kill QUEUE_ORDERED_BY_TAG · 6958f145
      Committed by Tejun Heo
      Nobody is making meaningful use of ORDERED_BY_TAG now, and queue
      draining for barrier requests will be removed soon, which will render
      the advantage of tag ordering moot.  Kill ORDERED_BY_TAG.  The
      following users are affected.
      
      * brd: converted to ORDERED_DRAIN.
      * virtio_blk: ORDERED_TAG path was already marked deprecated.  Removed.
      * xen-blkfront: ORDERED_TAG case dropped.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • ide: remove unnecessary blk_queue_flushing() test in do_ide_request() · 0da2f509
      Committed by Tejun Heo
      Unplugging from a request function doesn't really help much (it's
      already in the request_fn), and the block layer will soon be updated
      to mix barrier sequences with other commands, so there's no need to
      treat queue flushing any differently.
      
      ide was the only user of blk_queue_flushing().  Remove it.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  2. 12 August 2010, 1 commit
  3. 08 August 2010, 10 commits
  4. 01 June 2010, 1 commit
  5. 22 May 2010, 1 commit
    • block,ide: simplify bdops->set_capacity() to ->unlock_native_capacity() · c3e33e04
      Committed by Tejun Heo
      bdops->set_capacity() is unnecessarily generic.  All that's required
      is a simple one-way notification telling the lower-level driver to try
      to unlock the native capacity.  There's no reason to pass in a target
      capacity or return the new capacity.  The former is always the
      inherent native capacity, and the latter can be handled via the usual
      device resize / revalidation path.  In fact, the current API is always
      used that way.
      
      Replace ->set_capacity() with ->unlock_native_capacity(), which takes
      only @disk and doesn't return anything.  IDE, the only current user of
      the API, is converted accordingly.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
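      A sketch of the new hook from a driver's point of view; the mydrv_*
      names are illustrative, not the actual IDE conversion:

          #include <linux/module.h>
          #include <linux/genhd.h>
          #include <linux/blkdev.h>

          /* One-way notification: just try to unlock the native capacity.
           * Any resulting size change is picked up later through the normal
           * device resize / revalidation path, so nothing is returned. */
          static void mydrv_unlock_native_capacity(struct gendisk *disk)
          {
                  /* e.g. ask the hardware to disable its Host Protected Area */
          }

          static const struct block_device_operations mydrv_fops = {
                  .owner                  = THIS_MODULE,
                  .unlock_native_capacity = mydrv_unlock_native_capacity,
          };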
  6. 19 May 2010, 1 commit
  7. 11 May 2010, 1 commit
    • block: allow initialization of previously allocated request_queue · 01effb0d
      Committed by Mike Snitzer
      blk_init_queue() allocates the request_queue structure and then
      initializes it as needed (request_fn, elevator, etc).
      
      Split the initialization out into blk_init_allocated_queue_node, and
      introduce a blk_init_allocated_queue wrapper function to match the
      existing blk_init_queue and blk_init_queue_node interfaces.
      
      Export elv_register_queue to allow a newly added elevator to be
      registered with sysfs.  Export elv_unregister_queue for symmetry.
      
      These changes allow DM to initialize a device's request_queue with more
      precision.  In particular, DM no longer unconditionally initializes a
      full request_queue (elevator et al).  It only does so for a
      request-based DM device.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
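      A sketch of the allocate-now, initialize-later flow this enables; the
      mydev_* names are placeholders:

          #include <linux/blkdev.h>
          #include <linux/spinlock.h>

          static DEFINE_SPINLOCK(mydev_lock);
          static void mydev_request_fn(struct request_queue *q);

          static struct request_queue *mydev_create_queue(void)
          {
                  /* Allocate the bare queue first... */
                  struct request_queue *q = blk_alloc_queue(GFP_KERNEL);

                  if (!q)
                          return NULL;

                  /* ...and attach a request_fn and elevator only once we
                   * know the device really is request-based, as DM now
                   * does. */
                  return blk_init_allocated_queue(q, mydev_request_fn,
                                                  &mydev_lock);
          }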
  8. 29 April 2010, 2 commits
  9. 21 April 2010, 1 commit
    • blkio: Fix blkio crash during rq stat update · 7f1dc8a2
      Committed by Vivek Goyal
      blkio + cfq was crashing even when two sequential readers were put in
      two separate cgroups (group_isolation=0).
      
      The reason is that a cfqq can migrate across groups based on whether it
      is sync-noidle or not, so it can happen that at request insertion time
      the cfqq belonged to one cfqg while at request dispatch time it belongs
      to the root group.  In this case the per-cgroup request stats can go
      wrong, and it also runs into a BUG_ON().
      
      This patch implements stashing a cfq group pointer in the rq instead of
      relying on the cfqq->cfqg pointer alone for rq stat accounting.
      
      [   65.163523] ------------[ cut here ]------------
      [   65.164301] kernel BUG at block/blk-cgroup.c:117!
      [   65.164301] invalid opcode: 0000 [#1] SMP
      [   65.164301] last sysfs file: /sys/devices/pci0000:00/0000:00:05.0/0000:60:00.1/host9/rport-9:0-0/target9:0:0/9:0:0:2/block/sde/stat
      [   65.164301] CPU 1
      [   65.164301] Modules linked in: dm_round_robin dm_multipath qla2xxx scsi_transport_fc dm_zero dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
      [   65.164301]
      [   65.164301] Pid: 4505, comm: fio Not tainted 2.6.34-rc4-blk-for-35 #34 0A98h/HP xw8600 Workstation
      [   65.164301] RIP: 0010:[<ffffffff8121924f>]  [<ffffffff8121924f>] blkiocg_update_io_remove_stats+0x5b/0xaf
      [   65.164301] RSP: 0018:ffff8800ba5a79e8  EFLAGS: 00010046
      [   65.164301] RAX: 0000000000000096 RBX: ffff8800bb268d60 RCX: 0000000000000000
      [   65.164301] RDX: ffff8800bb268eb8 RSI: 0000000000000000 RDI: ffff8800bb268e00
      [   65.164301] RBP: ffff8800ba5a7a08 R08: 0000000000000064 R09: 0000000000000001
      [   65.164301] R10: 0000000000079640 R11: ffff8800a0bd5bf0 R12: ffff8800bab4af01
      [   65.164301] R13: ffff8800bab4af00 R14: ffff8800bb1d8928 R15: 0000000000000000
      [   65.164301] FS:  00007f18f75056f0(0000) GS:ffff880001e40000(0000) knlGS:0000000000000000
      [   65.164301] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [   65.164301] CR2: 000000000040e7f0 CR3: 00000000ba52b000 CR4: 00000000000006e0
      [   65.164301] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [   65.164301] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      [   65.164301] Process fio (pid: 4505, threadinfo ffff8800ba5a6000, task ffff8800ba45ae80)
      [   65.164301] Stack:
      [   65.164301]  ffff8800ba5a7a08 ffff8800ba722540 ffff8800bab4af68 ffff8800bab4af68
      [   65.164301] <0> ffff8800ba5a7a38 ffffffff8121d814 ffff8800ba722540 ffff8800bab4af68
      [   65.164301] <0> ffff8800ba722540 ffff8800a08f6800 ffff8800ba5a7a68 ffffffff8121d8ca
      [   65.164301] Call Trace:
      [   65.164301]  [<ffffffff8121d814>] cfq_remove_request+0xe4/0x116
      [   65.164301]  [<ffffffff8121d8ca>] cfq_dispatch_insert+0x84/0xe1
      [   65.164301]  [<ffffffff8121e833>] cfq_dispatch_requests+0x767/0x8e8
      [   65.164301]  [<ffffffff8120e524>] ? submit_bio+0xc3/0xcc
      [   65.164301]  [<ffffffff810ad657>] ? sync_page_killable+0x0/0x35
      [   65.164301]  [<ffffffff8120ea8d>] blk_peek_request+0x191/0x1a7
      [   65.164301]  [<ffffffffa000109c>] ? dm_get_live_table+0x44/0x4f [dm_mod]
      [   65.164301]  [<ffffffffa0002799>] dm_request_fn+0x38/0x14c [dm_mod]
      [   65.164301]  [<ffffffff810ad657>] ? sync_page_killable+0x0/0x35
      [   65.164301]  [<ffffffff8120f600>] __generic_unplug_device+0x32/0x37
      [   65.164301]  [<ffffffff8120f8a0>] generic_unplug_device+0x2e/0x3c
      [   65.164301]  [<ffffffffa00011a6>] dm_unplug_all+0x42/0x5b [dm_mod]
      [   65.164301]  [<ffffffff8120b063>] blk_unplug+0x29/0x2d
      [   65.164301]  [<ffffffff8120b079>] blk_backing_dev_unplug+0x12/0x14
      [   65.164301]  [<ffffffff81108a82>] block_sync_page+0x35/0x39
      [   65.164301]  [<ffffffff810ad64e>] sync_page+0x41/0x4a
      [   65.164301]  [<ffffffff810ad665>] sync_page_killable+0xe/0x35
      [   65.164301]  [<ffffffff81589027>] __wait_on_bit_lock+0x46/0x8f
      [   65.164301]  [<ffffffff810ad52d>] __lock_page_killable+0x66/0x6d
      [   65.164301]  [<ffffffff81055fd4>] ? wake_bit_function+0x0/0x33
      [   65.164301]  [<ffffffff810ad560>] lock_page_killable+0x2c/0x2e
      [   65.164301]  [<ffffffff810aebfd>] generic_file_aio_read+0x361/0x4f0
      [   65.164301]  [<ffffffff810e906c>] do_sync_read+0xcb/0x108
      [   65.164301]  [<ffffffff811e32a3>] ? security_file_permission+0x16/0x18
      [   65.164301]  [<ffffffff810e96d3>] vfs_read+0xab/0x108
      [   65.164301]  [<ffffffff810e97f0>] sys_read+0x4a/0x6e
      [   65.164301]  [<ffffffff81002b5b>] system_call_fastpath+0x16/0x1b
      [   65.164301] Code: 00 74 1c 48 8b 8b 60 01 00 00 48 85 c9 75 04 0f 0b eb fe 48 ff c9 48 89 8b 60 01 00 00 eb 1a 48 8b 8b 58 01 00 00 48 85 c9 75 04 <0f> 0b eb fe 48 ff c9 48 89 8b 58 01 00 00 45 84 e4 74 16 48 8b
      [   65.164301] RIP  [<ffffffff8121924f>] blkiocg_update_io_remove_stats+0x5b/0xaf
      [   65.164301]  RSP <ffff8800ba5a79e8>
      [   65.164301] ---[ end trace 1b2b828753032e68 ]---
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
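      A sketch of the stashing idea; the elevator_private3 slot and the
      helper names are stand-ins, not necessarily what the patch uses:

          #include <linux/blkdev.h>

          struct cfq_group;

          /* Remember at insertion time which group the request was accounted
           * against, instead of re-deriving it from cfqq->cfqg at dispatch
           * or completion time (the cfqq may have migrated groups in
           * between). */
          static inline void cfq_rq_set_group(struct request *rq,
                                              struct cfq_group *cfqg)
          {
                  rq->elevator_private3 = cfqg;   /* assumed per-rq slot */
          }

          static inline struct cfq_group *cfq_rq_group(struct request *rq)
          {
                  return rq->elevator_private3;   /* stats use this pointer */
          }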
  10. 09 April 2010, 1 commit
    • blkio: Changes to IO controller additional stats patches · 84c124da
      Committed by Divyesh Shah
      These changes include some minor fixes and address all review comments.
      
      Changelog: (mostly based on Vivek Goyal's comments)
      o renamed blkiocg_reset_write to blkiocg_reset_stats
      o more clarification in the documentation on io_service_time and io_wait_time
      o Initialize blkg->stats_lock
      o rename io_add_stat to blkio_add_stat and declare it static
      o use bool for direction and sync
      o derive direction and sync info from existing rq methods
      o use 12 for major:minor string length
      o define io_service_time better to cover the NCQ case
      o add a separate reset_stats interface
      o make the indexed stats a 2d array to simplify macro and function pointer code
      o blkio.time now exports in jiffies as before
      o Added stats description in patch description and
        Documentation/cgroup/blkio-controller.txt
      o Prefix all stats functions with blkio and make them static as applicable
      o replace IO_TYPE_MAX with IO_TYPE_TOTAL
      o Moved #define constant to top of blk-cgroup.c
      o Pass dev_t around instead of char *
      o Add note to documentation file about resetting stats
      o use BLK_CGROUP_MODULE in addition to BLK_CGROUP config option in #ifdef
        statements
      o Avoid struct request specific knowledge in blk-cgroup.  blk-cgroup.h
        now has rq_direction() and rq_sync() functions which are used by CFQ;
        when using the io-controller at a higher level, bio_* functions can
        be added (see the sketch below).
      
      Signed-off-by: Divyesh Shah <dpshah@google.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
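      A sketch of what the rq_direction()/rq_sync() helpers might look like,
      assuming they wrap the standard rq_data_dir() and rq_is_sync()
      accessors; the actual bodies in the patch may differ:

          #include <linux/blkdev.h>

          /* CFQ passes these two booleans into the blk-cgroup stat helpers,
           * keeping blk-cgroup itself free of struct request internals. */
          static inline bool rq_direction(struct request *rq)
          {
                  return rq_data_dir(rq) == WRITE;        /* true for writes */
          }

          static inline bool rq_sync(struct request *rq)
          {
                  return rq_is_sync(rq);
          }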
  11. 02 April 2010, 1 commit
  12. 19 March 2010, 1 commit
  13. 15 March 2010, 1 commit
  14. 26 February 2010, 4 commits
  15. 23 February 2010, 1 commit
  16. 29 January 2010, 1 commit
    • block: Added in stricter no merge semantics for block I/O · 488991e2
      Committed by Alan D. Brunelle
      Updated the 'nomerges' tunable to accept a value of '2', indicating
      that no merges at all are to be attempted (not even the simple one-hit
      cache).
      
      The following table illustrates the additional benefit: 5-minute runs
      of a random I/O load were applied to a dozen devices on a 16-way
      x86_64 system.
      
      nomerges        Throughput      %System         Improvement (tput / %sys)
      --------        ------------    -----------     -------------------------
      0               12.45 MB/sec    0.669365609
      1               12.50 MB/sec    0.641519199     0.40% / 2.71%
      2               12.52 MB/sec    0.639849750     0.56% / 2.96%
      Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
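      A sketch of the resulting gating in the elevator merge path, condensed
      from how elv_merge() is typically structured; details may differ from
      the patch:

          /* Three levels of merging:
           *   nomerges = 2: no merge attempts at all
           *   nomerges = 1: only the one-hit last_merge cache is tried
           *   nomerges = 0: full merge lookups (default)
           */
          if (blk_queue_nomerges(q))
                  return ELEVATOR_NO_MERGE;

          /* simple one-hit cache try */
          if (q->last_merge && elv_rq_merge_ok(q->last_merge, bio)) {
                  *req = q->last_merge;
                  return ELEVATOR_BACK_MERGE;   /* simplified: merge type
                                                 * check elided */
          }

          if (blk_queue_noxmerges(q))
                  return ELEVATOR_NO_MERGE;

          /* fall through to the full hash / rbtree merge lookups */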
  17. 11 January 2010, 3 commits
  18. 30 December 2009, 1 commit