1. 29 Oct 2013, 1 commit
    • blk-mq: fix for flush deadlock · 3228f48b
      By Christoph Hellwig
      The flush state machine takes in a struct request, which is then
      submitted multiple times to the underlying driver.  The old block code
      reuses the same request for each of those steps, so it does not have an
      issue with tapping into the request pool.  The new one, on the other hand,
      allocates a new request for each of the actual steps of the flush
      sequence. If we have already allocated all of the tags for IO, we will
      fail to allocate the flush request.
      
      Set aside a reserved request just for flushes.
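      A minimal sketch of the idea, using the current blk-mq names
      (blk_mq_tag_set.reserved_tags, BLK_MQ_REQ_RESERVED) rather than the
      exact 2013 interface, which differed; the my_* names are hypothetical:

        #include <linux/blk-mq.h>

        /* Reserve one tag at init so a flush request can always be
         * allocated, even when all regular tags are held by in-flight IO. */
        static int my_init_tags(struct blk_mq_tag_set *set)
        {
                set->queue_depth   = 64;
                set->reserved_tags = 1;   /* set aside one request for flushes */
                return blk_mq_alloc_tag_set(set);
        }

        /* Allocating from the reserved pool cannot fail for lack of tags. */
        static struct request *my_get_flush_rq(struct request_queue *q)
        {
                return blk_mq_alloc_request(q, REQ_OP_FLUSH, BLK_MQ_REQ_RESERVED);
        }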
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      3228f48b
  2. 25 Oct 2013, 3 commits
    • blk-mq: new multi-queue block IO queueing mechanism · 320ae51f
      By Jens Axboe
      Linux currently has two models for block devices:
      
      - The classic request_fn based approach, where drivers use struct
        request units for IO. The block layer provides various helper
        functionalities to let drivers share code, things like tag
        management, timeout handling, queueing, etc.
      
      - The "stacked" approach, where a driver squeezes in between the
        block layer and IO submitter. Since this bypasses the IO stack,
        drivers generally have to manage everything themselves.
      
      With drivers being written for new high IOPS devices, the classic
      request_fn based driver doesn't work well enough. The design dates
      back to when both SMP and high IOPS were rare. It has problems with
      scaling to bigger machines, and runs into scaling issues even on
      smaller machines when you have IOPS in the hundreds of thousands
      per device.
      
      The stacked approach is then most often selected as the model
      for the driver. But this means that everybody has to re-invent
      everything, and along with that we get all the problems again
      that the shared approach solved.
      
      This commit introduces blk-mq, block multi queue support. The
      design is centered around per-cpu queues for queueing IO, which
      then funnel down into x number of hardware submission queues.
      We might have a 1:1 mapping between the two, or it might be
      an N:M mapping. That all depends on what the hardware supports.
      
      blk-mq provides various helper functions, which include:
      
      - Scalable support for request tagging. Most devices need to
        be able to uniquely identify a request both in the driver and
        to the hardware. The tagging uses per-cpu caches for freed
        tags, to enable cache hot reuse.
      
      - Timeout handling without tracking requests on a per-device
        basis. Basically, the driver should be able to get a notification
        if a request happens to fail.
      
      - Optional support for non 1:1 mappings between issue and
        submission queues. blk-mq can redirect IO completions to the
        desired location.
      
      - Support for per-request payloads. Drivers almost always need
        to associate a request structure with some driver private
        command structure. Drivers can tell blk-mq this at init time,
        and then any request handed to the driver will have the
        required size of memory associated with it.
      
      - Support for merging of IO, and plugging. The stacked model
        gets neither of these. Even for high IOPS devices, merging
        sequential IO reduces per-command overhead and thus
        increases bandwidth.
      
      For now, this is provided as a potential 3rd queueing model, with
      the hope being that, as it matures, it can replace both the classic
      and stacked model. That would get us back to having just 1 real
      model for block devices, leaving the stacked approach to dm/md
      devices (as it was originally intended).
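      As a rough illustration of the driver-facing model, here is a minimal
      skeleton using today's blk-mq API names (the original patch used
      struct blk_mq_reg; all my_* names are hypothetical):

        #include <linux/blk-mq.h>

        struct my_cmd {
                int status;                     /* driver-private payload */
        };

        static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
                                        const struct blk_mq_queue_data *bd)
        {
                struct my_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);

                cmd->status = 0;
                blk_mq_start_request(bd->rq);
                /* Hand bd->rq to hardware here; this sketch completes it
                 * immediately. */
                blk_mq_end_request(bd->rq, BLK_STS_OK);
                return BLK_STS_OK;
        }

        static const struct blk_mq_ops my_mq_ops = {
                .queue_rq = my_queue_rq,
        };

        static struct blk_mq_tag_set my_tag_set = {
                .ops            = &my_mq_ops,
                .nr_hw_queues   = 4,            /* hardware submission queues */
                .queue_depth    = 64,           /* tags per hardware queue */
                .cmd_size       = sizeof(struct my_cmd), /* per-request payload */
                .numa_node      = NUMA_NO_NODE,
        };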
      
      Contributions in this patch from the following people:
      
      Shaohua Li <shli@fusionio.com>
      Alexander Gordeev <agordeev@redhat.com>
      Christoph Hellwig <hch@infradead.org>
      Mike Christie <michaelc@cs.wisc.edu>
      Matias Bjorling <m@bjorling.me>
      Jeff Moyer <jmoyer@redhat.com>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      320ae51f
    • block: remove request ref_count · 71fe07d0
      By Christoph Hellwig
      This reference count has been around since before git history, but the only
      place where it's used is in blk_execute_rq, and there it is entirely useless,
      as it is incremented before submitting the request and decremented in the
      end_io handler before waking up the submitter thread.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      71fe07d0
    • block: make rq->cmd_flags be 64-bit · 5953316d
      By Jens Axboe
      We have officially run out of flags in a 32-bit space. Extend it
      to 64-bit even on 32-bit archs.
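      Illustratively, the widening looks like this (a sketch, not the
      exact diff):

        struct request {
                /* ... */
                u64 cmd_flags;          /* was: unsigned int cmd_flags */
                /* ... */
        };

        /* Flag definitions must be built as 64-bit values as well, e.g.: */
        #define REQ_WRITE       (1ULL << __REQ_WRITE)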
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      5953316d
  3. 12 Sep 2013, 1 commit
  4. 24 Aug 2013, 2 commits
  5. 01 Jul 2013, 1 commit
  6. 17 May 2013, 1 commit
  7. 15 May 2013, 1 commit
    • block: queue work on power efficient wq · 695588f9
      By Viresh Kumar
      The block layer uses workqueues for multiple purposes. There is no real
      dependency on running them on the CPU that scheduled them.
      
      On an idle system, it is observed that an idle CPU wakes up many times just to
      service this work. It would be better if we could schedule it on a CPU the
      scheduler believes to be the most appropriate one.
      
      This patch replaces normal workqueues with power efficient versions.
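      A sketch of the conversion for the block layer's own workqueue;
      WQ_POWER_EFFICIENT makes the workqueue unbound when
      CONFIG_WQ_POWER_EFFICIENT is enabled, letting the scheduler pick the
      CPU (the my_* wrapper is illustrative):

        #include <linux/workqueue.h>

        static struct workqueue_struct *kblockd_workqueue;

        static int my_blk_dev_init(void)
        {
                kblockd_workqueue = alloc_workqueue("kblockd",
                                WQ_MEM_RECLAIM | WQ_POWER_EFFICIENT, 0);
                return kblockd_workqueue ? 0 : -ENOMEM;
        }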
      
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      695588f9
  8. 19 Apr 2013, 1 commit
  9. 24 Mar 2013, 2 commits
    • block: Add bio_end_sector() · f73a1c7d
      By Kent Overstreet
      Just a little convenience macro. The main reason to add it now is to
      prepare for immutable bio vecs; it'll reduce the size of the patch that
      puts bi_sector/bi_size/bi_idx into a struct bvec_iter.
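      The added helper is a one-liner over the pre-bvec_iter bio fields
      (a sketch of the definition; bio_sectors() is bi_size >> 9):

        /* First sector past the end of the bio: where the next
         * sequential bio would start. */
        #define bio_end_sector(bio)     ((bio)->bi_sector + bio_sectors(bio))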
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      CC: Jens Axboe <axboe@kernel.dk>
      CC: Lars Ellenberg <drbd-dev@lists.linbit.com>
      CC: Jiri Kosina <jkosina@suse.cz>
      CC: Alasdair Kergon <agk@redhat.com>
      CC: dm-devel@redhat.com
      CC: Neil Brown <neilb@suse.de>
      CC: Martin Schwidefsky <schwidefsky@de.ibm.com>
      CC: Heiko Carstens <heiko.carstens@de.ibm.com>
      CC: linux-s390@vger.kernel.org
      CC: Chris Mason <chris.mason@fusionio.com>
      CC: Steven Whitehouse <swhiteho@redhat.com>
      Acked-by: Steven Whitehouse <swhiteho@redhat.com>
      f73a1c7d
    • block: Refactor blk_update_request() · f79ea416
      By Kent Overstreet
      Converts it to use bio_advance(), simplifying it quite a bit in the
      process.
      
      Note that req_bio_endio() now always calls bio_advance(), which means
      it always loops over the biovec, not just on partial completions. I
      don't expect it to affect performance, but it is worth noting.
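      A sketch of the simplified shape of req_bio_endio() after the
      conversion, with the error bookkeeping elided:

        static void req_bio_endio(struct request *rq, struct bio *bio,
                                  unsigned int nbytes, int error)
        {
                /* bio_advance() now does the biovec/segment bookkeeping
                 * that used to be open-coded here for partial completions. */
                bio_advance(bio, nbytes);

                /* Don't actually finish the bio if it's part of a flush
                 * sequence. */
                if (bio->bi_size == 0 && !(rq->cmd_flags & REQ_FLUSH_SEQ))
                        bio_endio(bio, error);
        }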
      
      Tested by forcing partial updates and dumping the various bio/bvec
      fields before and after each partial update.
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      CC: Jens Axboe <axboe@kernel.dk>
      f79ea416
  10. 23 Mar 2013, 2 commits
  11. 22 Feb 2013, 1 commit
    • block: optionally snapshot page contents to provide stable pages during write · ffecfd1a
      By Darrick J. Wong
      This is a band-aid to provide stable page writes on jbd without
      needing to backport the fixed locking and page writeback bit handling
      schemes of jbd2.  The band-aid works by using bounce buffers to snapshot
      page contents instead of waiting.
      
      For those wondering about the ext3 bandage -- fixing the jbd locking
      (which was done as part of ext4dev years ago) is a lot of surgery, and
      setting PG_writeback on data pages when we actually hold the page lock
      dropped ext3 performance by nearly an order of magnitude.  If we're
      going to migrate iscsi and raid to use stable page writes, the
      complaints about high latency will likely return.  We might as well
      centralize the page snapshotting in one place.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Tested-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Artem Bityutskiy <dedekind1@gmail.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Eric Van Hensbergen <ericvh@gmail.com>
      Cc: Ron Minnich <rminnich@sandia.gov>
      Cc: Latchesar Ionkov <lucho@ionkov.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ffecfd1a
  12. 14 Jan 2013, 2 commits
    • block: add @req to bio_{front|back}_merge tracepoints · 8c1cf6bb
      By Tejun Heo
      bio_{front|back}_merge tracepoints report a bio merging into an
      existing request but don't specify which request the bio is being
      merged into.  Add @req to them.  This makes it impossible to share the
      event template with block_bio_queue, so split it out.
      
      @req isn't used or exported to userland at this point and there is no
      userland visible behavior change.  Later changes will make use of the
      extra parameter.
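      A sketch of a call site with the new signature, the request being
      merged into now passed along (simplified from the block core):

        static bool bio_attempt_back_merge(struct request_queue *q,
                                           struct request *req, struct bio *bio)
        {
                /* ... merge eligibility checks elided ... */
                trace_block_bio_backmerge(q, req, bio);  /* @req now included */
                /* ... append bio to req, update counters ... */
                return true;
        }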
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      8c1cf6bb
    • block: add missing block_bio_complete() tracepoint · 3a366e61
      By Tejun Heo
      bio completion didn't trigger the block_bio_complete TP; only dm was
      explicitly triggering the TP on IO completion.  This makes the
      block_bio_complete TP useless for tracers which want to know about
      bios, and all other bio-based drivers skip generating blktrace
      completion events.
      
      This patch makes all bio completions via bio_endio() generate
      block_bio_complete TP.
      
      * Explicit trace_block_bio_complete() invocation removed from dm and
        the trace point is unexported.
      
      * @rq dropped from trace_block_bio_complete().  bios may fly around
        without an associated queue.  Verifying and accessing the associated
        queue belongs in TP probes.
      
      * blktrace now gets both request and bio completions.  Make it ignore
        bio completions if the request completion path is handling them.
      
      This makes all bio based drivers generate blktrace completion events
      properly and makes the block_bio_complete TP actually useful.
      
      v2: With this change, the block_bio_complete TP could be invoked on sg
          commands whose bios have a %NULL bi_bdev.  Update the TP
          assignment code to check whether bio->bi_bdev is %NULL before
          dereferencing it.
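      A sketch of the resulting hookup: every bio completion now fires the
      TP, and the probe (not the caller) copes with a missing queue:

        void bio_endio(struct bio *bio, int error)
        {
                /* ... BIO_UPTODATE bookkeeping elided ... */

                trace_block_bio_complete(bio, error);

                if (bio->bi_end_io)
                        bio->bi_end_io(bio, error);
        }

        /* In the TP assignment (v2): guard the dereference, since sg
         * commands may carry bios with a NULL bi_bdev. */
        __entry->dev = bio->bi_bdev ? bio->bi_bdev->bd_dev : 0;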
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Original-patch-by: Namhyung Kim <namhyung@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: dm-devel@redhat.com
      Cc: Neil Brown <neilb@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      3a366e61
  13. 11 Jan 2013, 1 commit
  14. 15 Dec 2012, 1 commit
  15. 06 Dec 2012, 5 commits
    • block: Make blk_cleanup_queue() wait until request_fn finished · 24faf6f6
      By Bart Van Assche
      Some request_fn implementations, e.g. scsi_request_fn(), unlock
      the queue lock internally. This may result in multiple threads
      executing request_fn for the same queue simultaneously. Keep
      track of the number of active request_fn calls and make sure that
      blk_cleanup_queue() waits until all active request_fn invocations
      have finished. A block driver may start cleaning up resources
      needed by its request_fn as soon as blk_cleanup_queue() has finished,
      so blk_cleanup_queue() must wait for all outstanding request_fn
      invocations to finish.
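      A sketch of the counting scheme: request_fn invocations go through a
      helper that maintains q->request_fn_active, and draining waits for it
      to drop to zero:

        /* Called with the queue lock held; request_fn may drop and
         * reacquire it, so several calls can be active at once. */
        inline void __blk_run_queue_uncond(struct request_queue *q)
        {
                if (unlikely(blk_queue_dead(q)))
                        return;

                q->request_fn_active++;
                q->request_fn(q);
                q->request_fn_active--;
        }

        /* The drain loop in blk_cleanup_queue() then also requires
         * q->request_fn_active == 0 before it may terminate. */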
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Reported-by: Chanho Min <chanho.min@lge.com>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      24faf6f6
    • block: Avoid scheduling delayed work on a dead queue · 70460571
      By Bart Van Assche
      Running a queue must continue after it has been marked dying until
      it has been marked dead. So the function blk_run_queue_async() must
      not schedule delayed work after blk_cleanup_queue() has marked a queue
      dead. Hence add a test for that queue state in blk_run_queue_async()
      and make sure that queue_unplugged() invokes that function with the
      queue lock held. This prevents the queue state from changing after
      it has been tested and before mod_delayed_work() is invoked. Drop
      the queue dying test in queue_unplugged() since it is now
      superfluous: __blk_run_queue() already tests whether or not the
      queue is dead.
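      The resulting check (a sketch):

        void blk_run_queue_async(struct request_queue *q)
        {
                if (likely(!blk_queue_stopped(q) && !blk_queue_dead(q)))
                        mod_delayed_work(kblockd_workqueue, &q->delay_work, 0);
        }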
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      70460571
    • block: Avoid that request_fn is invoked on a dead queue · c246e80d
      By Bart Van Assche
      A block driver may start cleaning up resources needed by its
      request_fn as soon as blk_cleanup_queue() has finished, so request_fn
      must not be invoked after draining has finished. This is important
      when blk_run_queue() is invoked without any requests in progress.
      As an example, if blk_drain_queue() and scsi_run_queue() run in
      parallel, blk_drain_queue() may have finished all requests after
      scsi_run_queue() has taken a SCSI device off the starved list but
      before that last function has had a chance to run the queue.
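      With this change, __blk_run_queue() refuses to call into the driver
      once the queue is dead (a sketch consistent with this series):

        void __blk_run_queue(struct request_queue *q)
        {
                if (unlikely(blk_queue_dead(q)))
                        return;

                __blk_run_queue_uncond(q);
        }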
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Chanho Min <chanho.min@lge.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      c246e80d
    • block: Let blk_drain_queue() caller obtain the queue lock · 807592a4
      By Bart Van Assche
      Let the caller of blk_drain_queue() obtain the queue lock to improve
      readability of the patch called "Avoid that request_fn is invoked on
      a dead queue".
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Chanho Min <chanho.min@lge.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      807592a4
    • block: Rename queue dead flag · 3f3299d5
      By Bart Van Assche
      QUEUE_FLAG_DEAD is used to indicate that queuing new requests must
      stop. After this flag has been set queue draining starts. However,
      during the queue draining phase it is still safe to invoke the
      queue's request_fn, so QUEUE_FLAG_DYING is a better name for this
      flag.
      
      This patch has been generated by running the following command
      over the kernel source tree:
      
      git grep -lEw 'blk_queue_dead|QUEUE_FLAG_DEAD' |
          xargs sed -i.tmp -e 's/blk_queue_dead/blk_queue_dying/g'      \
              -e 's/QUEUE_FLAG_DEAD/QUEUE_FLAG_DYING/g';                \
      sed -i.tmp -e "s/QUEUE_FLAG_DYING$(printf \\t)*5/QUEUE_FLAG_DYING$(printf \\t)5/g" \
          include/linux/blkdev.h;                                       \
      sed -i.tmp -e 's/ DEAD/ DYING/g' -e 's/dead queue/a dying queue/' \
          -e 's/Dead queue/A dying queue/' block/blk-core.c
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Chanho Min <chanho.min@lge.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      3f3299d5
  16. 10 Nov 2012, 1 commit
  17. 26 Oct 2012, 1 commit
    • block: Add blk_rq_pos(rq) to sort rq when flushing · 975927b9
      By Jianpeng Ma
      My workload is a raid5 with 16 disks, using our filesystem to write
      in direct-io mode.
      
      I used blktrace and found these messages:
      8,16   0     6647     2.453665504  2579  M   W 7493152 + 8 [md0_raid5]
      8,16   0     6648     2.453672411  2579  Q   W 7493160 + 8 [md0_raid5]
      8,16   0     6649     2.453672606  2579  M   W 7493160 + 8 [md0_raid5]
      8,16   0     6650     2.453679255  2579  Q   W 7493168 + 8 [md0_raid5]
      8,16   0     6651     2.453679441  2579  M   W 7493168 + 8 [md0_raid5]
      8,16   0     6652     2.453685948  2579  Q   W 7493176 + 8 [md0_raid5]
      8,16   0     6653     2.453686149  2579  M   W 7493176 + 8 [md0_raid5]
      8,16   0     6654     2.453693074  2579  Q   W 7493184 + 8 [md0_raid5]
      8,16   0     6655     2.453693254  2579  M   W 7493184 + 8 [md0_raid5]
      8,16   0     6656     2.453704290  2579  Q   W 7493192 + 8 [md0_raid5]
      8,16   0     6657     2.453704482  2579  M   W 7493192 + 8 [md0_raid5]
      8,16   0     6658     2.453715016  2579  Q   W 7493200 + 8 [md0_raid5]
      8,16   0     6659     2.453715247  2579  M   W 7493200 + 8 [md0_raid5]
      8,16   0     6660     2.453721730  2579  Q   W 7493208 + 8 [md0_raid5]
      8,16   0     6661     2.453721974  2579  M   W 7493208 + 8 [md0_raid5]
      8,16   0     6662     2.453728202  2579  Q   W 7493216 + 8 [md0_raid5]
      8,16   0     6663     2.453728436  2579  M   W 7493216 + 8 [md0_raid5]
      8,16   0     6664     2.453734782  2579  Q   W 7493224 + 8 [md0_raid5]
      8,16   0     6665     2.453735019  2579  M   W 7493224 + 8 [md0_raid5]
      8,16   0     6666     2.453741401  2579  Q   W 7493232 + 8 [md0_raid5]
      8,16   0     6667     2.453741632  2579  M   W 7493232 + 8 [md0_raid5]
      8,16   0     6668     2.453748148  2579  Q   W 7493240 + 8 [md0_raid5]
      8,16   0     6669     2.453748386  2579  M   W 7493240 + 8 [md0_raid5]
      8,16   0     6670     2.453851843  2579  I   W 7493144 + 104 [md0_raid5]
      8,16   0        0     2.453853661     0  m   N cfq2579 insert_request
      8,16   0     6671     2.453854064  2579  I   W 7493120 + 24 [md0_raid5]
      8,16   0        0     2.453854439     0  m   N cfq2579 insert_request
      8,16   0     6672     2.453854793  2579  U   N [md0_raid5] 2
      8,16   0        0     2.453855513     0  m   N cfq2579 Not idling.st->count:1
      8,16   0        0     2.453855927     0  m   N cfq2579 dispatch_insert
      8,16   0        0     2.453861771     0  m   N cfq2579 dispatched a request
      8,16   0        0     2.453862248     0  m   N cfq2579 activate rq,drv=1
      8,16   0     6673     2.453862332  2579  D   W 7493120 + 24 [md0_raid5]
      8,16   0        0     2.453865957     0  m   N cfq2579 Not idling.st->count:1
      8,16   0        0     2.453866269     0  m   N cfq2579 dispatch_insert
      8,16   0        0     2.453866707     0  m   N cfq2579 dispatched a request
      8,16   0        0     2.453867061     0  m   N cfq2579 activate rq,drv=2
      8,16   0     6674     2.453867145  2579  D   W 7493144 + 104 [md0_raid5]
      8,16   0     6675     2.454147608     0  C   W 7493120 + 24 [0]
      8,16   0        0     2.454149357     0  m   N cfq2579 complete rqnoidle 0
      8,16   0     6676     2.454791505     0  C   W 7493144 + 104 [0]
      8,16   0        0     2.454794803     0  m   N cfq2579 complete rqnoidle 0
      8,16   0        0     2.454795160     0  m   N cfq schedule dispatch
      
      From the messages above, we can see that rq[W 7493144 + 104] and
      rq[W 7493120 + 24] do not merge.
      This is because the bio order is:
        8,16   0     6638     2.453619407  2579  Q   W 7493144 + 8 [md0_raid5]
        8,16   0     6639     2.453620460  2579  G   W 7493144 + 8 [md0_raid5]
        8,16   0     6640     2.453639311  2579  Q   W 7493120 + 8 [md0_raid5]
        8,16   0     6641     2.453639842  2579  G   W 7493120 + 8 [md0_raid5]
      bio(7493144) comes first and bio(7493120) later, so the subsequent
      bios are divided into two parts. When flushing the plug list,
      elv_attempt_insert_merge() only supports back merging, not front
      merging, so rq[7493120 + 24] can't merge with rq[7493144 + 104].
      
      In my tests, this situation accounts for about 25% of requests on our
      system. With this patch, it no longer occurs.
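      The fix sorts the plug list by queue and then by start sector before
      flushing, so adjacent requests arrive in ascending order and can
      back-merge (a sketch close to the patch):

        static int plug_rq_cmp(void *priv, struct list_head *a,
                               struct list_head *b)
        {
                struct request *rqa = container_of(a, struct request, queuelist);
                struct request *rqb = container_of(b, struct request, queuelist);

                return !(rqa->q < rqb->q ||
                         (rqa->q == rqb->q && blk_rq_pos(rqa) < blk_rq_pos(rqb)));
        }

        /* In blk_flush_plug_list():
         *      list_sort(NULL, &plug->list, plug_rq_cmp);
         */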
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      CC: Shaohua Li <shli@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      975927b9
  18. 21 Sep 2012, 2 commits
    • block: fix request_queue->flags initialization · 60ea8226
      By Tejun Heo
      A queue newly allocated with blk_alloc_queue_node() has only
      QUEUE_FLAG_BYPASS set.  For request-based drivers,
      blk_init_allocated_queue() is called and q->queue_flags is overwritten
      with QUEUE_FLAG_DEFAULT which doesn't include BYPASS even though the
      initial bypass is still in effect.
      
      In blk_init_allocated_queue(), OR QUEUE_FLAG_DEFAULT into q->queue_flags
      instead of overwriting it.
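      The one-line fix, illustratively:

        /* Before: clobbers QUEUE_FLAG_BYPASS set at allocation time. */
        q->queue_flags = QUEUE_FLAG_DEFAULT;

        /* After: preserve the flags that are already set. */
        q->queue_flags |= QUEUE_FLAG_DEFAULT;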
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: stable@vger.kernel.org
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      60ea8226
    • block: lift the initial queue bypass mode on blk_register_queue() instead of blk_init_allocated_queue() · 749fefe6
      By Tejun Heo
      
      b82d4b19 ("blkcg: make request_queue bypassing on allocation") made
      request_queues bypassed on allocation to avoid switching on and off
      bypass mode on a queue being initialized.  Some drivers allocate and
      then destroy a lot of queues without fully initializing them, and
      incurring the bypass latency on each of them could add up to
      significant overhead.
      
      Unfortunately, blk_init_allocated_queue() is never used by queues of
      bio-based drivers, which means that all bio-based driver queues are in
      bypass mode even after initialization and registration complete
      successfully.
      
      Due to the limited way request_queues are used by bio drivers, this
      problem is hidden pretty well but it shows up when blk-throttle is
      used in combination with a bio-based driver.  Trying to configure
      (echoing to cgroupfs file) blk-throttle for a bio-based driver hangs
      indefinitely in blkg_conf_prep() waiting for bypass mode to end.
      
      This patch moves the initial blk_queue_bypass_end() call from
      blk_init_allocated_queue() to blk_register_queue(), which is called for
      any userland-visible queue regardless of its type.
      
      I believe this is correct because I don't think there is any block
      driver which needs or wants working elevator and blk-cgroup on a queue
      which isn't visible to userland.  If there are such users, we need a
      different solution.
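      A sketch of where the call now lives:

        int blk_register_queue(struct gendisk *disk)
        {
                struct request_queue *q = disk->queue;

                /* ... sysfs registration elided ... */

                /* Initialization must be complete by now; finish the
                 * initial bypass from queue allocation, for bio-based
                 * queues as well. */
                blk_queue_bypass_end(q);
                return 0;
        }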
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Joseph Glanville <joseph.glanville@orionvm.com.au>
      Cc: stable@vger.kernel.org
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      749fefe6
  19. 20 Sep 2012, 3 commits
  20. 09 Sep 2012, 4 commits
  21. 31 Aug 2012, 1 commit
    • block: rate-limit the error message from failing commands · 37d7b34f
      By Yi Zou
      When performing a cable pull test w/ active stress I/O using fio over
      a dual port Intel 82599 FCoE CNA, w/ 256LUNs on one port and about 32LUNs
      on the other, it is observed that the system becomes unusable because
      scsi-ml is busy printing error messages for all the failing commands.
      I don't believe this problem is specific to FCoE, and these commands
      are failing anyway because the link is down (DID_NO_CONNECT), so just
      rate-limit the messages here to solve this issue.
      
      v2->v1: use __ratelimit() as Tomas Henzl mentioned as the proper way for
      rate-limit per function. However, in this case, the failed i/o gets to
      blk_end_request_err() and then blk_update_request(), which also has to
      be rate-limited, as added in the v2 of this patch.
      
      v3-v2: resolved conflict to apply on current 3.6-rc3 upstream tip.
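      A sketch of the per-call-site pattern with __ratelimit(); the my_*
      wrapper is illustrative:

        #include <linux/ratelimit.h>

        static void my_report_rq_error(struct request *req)
        {
                static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
                                              DEFAULT_RATELIMIT_BURST);

                if (__ratelimit(&rs))
                        pr_err("end_request: I/O error, sector %llu\n",
                               (unsigned long long)blk_rq_pos(req));
        }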
      Signed-off-by: Yi Zou <yi.zou@intel.com>
      Cc: www.Open-FCoE.org <devel@open-fcoe.org>
      Cc: Tomas Henzl <thenzl@redhat.com>
      Cc: <linux-scsi@vger.kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      37d7b34f
  22. 22 Aug 2012, 2 commits
    • workqueue: deprecate __cancel_delayed_work() · 136b5721
      By Tejun Heo
      Now that cancel_delayed_work() can be safely called from IRQ handlers,
      there's no reason to use __cancel_delayed_work().  Use
      cancel_delayed_work() instead of __cancel_delayed_work() and mark the
      latter deprecated.
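      The conversion is mechanical (a sketch):

        /* Before: IRQ-context callers needed the raw variant. */
        __cancel_delayed_work(&q->delay_work);

        /* After: cancel_delayed_work() is now safe in IRQ handlers. */
        cancel_delayed_work(&q->delay_work);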
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Jens Axboe <axboe@kernel.dk>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Roland Dreier <roland@kernel.org>
      Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
      136b5721
    • workqueue: use mod_delayed_work() instead of __cancel + queue · e7c2f967
      By Tejun Heo
      Now that mod_delayed_work() is safe to call from IRQ handlers,
      __cancel_delayed_work() followed by queue_delayed_work() can be
      replaced with mod_delayed_work().
      
      Most conversions are straightforward, except for the following.
      
      * net/core/link_watch.c: linkwatch_schedule_work() was doing quite an
        elaborate dance around its delayed_work.  Collapse it such that
        linkwatch_work is queued for immediate execution if LW_URGENT and
        the existing timer is kept otherwise.
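      The common conversion pattern (a sketch):

        /* Before: two racy steps to adjust a pending timeout. */
        __cancel_delayed_work(&dwork);
        queue_delayed_work(wq, &dwork, delay);

        /* After: one call that modifies the timer if the work is
         * pending and queues it otherwise. */
        mod_delayed_work(wq, &dwork, delay);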
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Tomi Valkeinen <tomi.valkeinen@ti.com> 
      e7c2f967
  23. 31 Jul 2012, 1 commit