1. 27 Aug 2008, 1 commit
    • block: move cmdfilter from gendisk to request_queue · abf54393
      FUJITA Tomonori authored
      cmd_filter works only for the block layer SG_IO with SCSI block
      devices. It breaks scsi/sg.c, bsg, and the block layer SG_IO with SCSI
      character devices (such as st). We hit a kernel crash with them.
      
      The problem is that the cmd_filter code accesses the gendisk (which
      holds struct blk_scsi_cmd_filter) via inode->i_bdev->bd_disk. That
      works only for SCSI block device files. With character device files,
      inode->i_bdev leads you to a struct cdev instead, so dereferencing
      inode->i_bdev->bd_disk->blk_scsi_cmd_filter isn't safe.
      
      SCSI ULDs don't expose their gendisk; they keep it private. bsg needs
      to be independent of any particular protocol. We shouldn't change the
      ULDs just to expose their gendisks.
      
      This patch moves struct blk_scsi_cmd_filter from the gendisk to the
      request_queue, a common object that everyone can access.
      
      The user interface doesn't change; users can still adjust the filters
      via /sys/block/. The gendisk has a pointer to its request_queue, so
      the cmd_filter code can still reach struct blk_scsi_cmd_filter from
      there. A simplified sketch of the old and new lookup paths follows
      this entry.
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
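      Below is a minimal user-space sketch of the lookup change, not the
      actual kernel code: the structs are simplified stand-ins and the
      helper names (filter_via_gendisk, filter_via_queue) are invented for
      illustration. It contrasts reaching the filter through the gendisk,
      which only exists for block device files, with reaching it through
      the request_queue that every SG_IO issuer already has.

      #include <stdio.h>
      #include <stddef.h>

      /* Simplified stand-ins for the kernel structures discussed above. */
      struct blk_scsi_cmd_filter { int allowed; };

      struct request_queue {
          struct blk_scsi_cmd_filter cmd_filter;   /* new home (this patch) */
      };

      struct gendisk {
          struct request_queue *queue;
          struct blk_scsi_cmd_filter cmd_filter;   /* old home (pre-patch) */
      };

      /* Old lookup: only valid when the file really is a SCSI block device;
       * for character devices (sg, st) there is no usable gendisk here. */
      static struct blk_scsi_cmd_filter *filter_via_gendisk(struct gendisk *disk)
      {
          return disk ? &disk->cmd_filter : NULL;
      }

      /* New lookup: every issuer (block SG_IO, sg, st, bsg) has a queue. */
      static struct blk_scsi_cmd_filter *filter_via_queue(struct request_queue *q)
      {
          return &q->cmd_filter;
      }

      int main(void)
      {
          struct request_queue q = { { 0 } };
          struct gendisk disk = { &q, { 0 } };

          printf("old path: %p\nnew path: %p\n",
                 (void *)filter_via_gendisk(&disk),
                 (void *)filter_via_queue(disk.queue));
          return 0;
      }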
  2. 02 Aug 2008, 1 commit
  3. 16 Jul 2008, 1 commit
  4. 03 Jul 2008, 2 commits
  5. 28 May 2008, 1 commit
  6. 25 May 2008, 1 commit
    • Remove argument from open_softirq which is always NULL · 962cf36c
      Carlos R. Mafra authored
      As git-grep shows, open_softirq() is always called with the last argument
      being NULL
      
      block/blk-core.c:       open_softirq(BLOCK_SOFTIRQ, blk_done_softirq, NULL);
      kernel/hrtimer.c:       open_softirq(HRTIMER_SOFTIRQ, run_hrtimer_softirq, NULL);
      kernel/rcuclassic.c:    open_softirq(RCU_SOFTIRQ, rcu_process_callbacks, NULL);
      kernel/rcupreempt.c:    open_softirq(RCU_SOFTIRQ, rcu_process_callbacks, NULL);
      kernel/sched.c: open_softirq(SCHED_SOFTIRQ, run_rebalance_domains, NULL);
      kernel/softirq.c:       open_softirq(TASKLET_SOFTIRQ, tasklet_action, NULL);
      kernel/softirq.c:       open_softirq(HI_SOFTIRQ, tasklet_hi_action, NULL);
      kernel/timer.c: open_softirq(TIMER_SOFTIRQ, run_timer_softirq, NULL);
      net/core/dev.c: open_softirq(NET_TX_SOFTIRQ, net_tx_action, NULL);
      net/core/dev.c: open_softirq(NET_RX_SOFTIRQ, net_rx_action, NULL);
      
      This observation has already been made by Matthew Wilcox in June 2002
      (http://www.cs.helsinki.fi/linux/linux-kernel/2002-25/0687.html)
      
      "I notice that none of the current softirq routines use the data element
      passed to them."
      
      and the situation hasn't changed since then. So it appears we can
      safely remove that extra argument and save 128 (54) bytes of kernel
      data (text). A user-space sketch of the reduced signature follows
      this entry.
      Signed-off-by: Carlos R. Mafra <crmafra@ift.unesp.br>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
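      For illustration, a compilable user-space sketch of what the reduced
      registration looks like; the softirq names and struct softirq_action
      mirror the kernel's, but the types are cut down and the main() driver
      is invented for this sketch.

      #include <stdio.h>

      enum { HI_SOFTIRQ, TIMER_SOFTIRQ, BLOCK_SOFTIRQ, NR_SOFTIRQS };

      struct softirq_action {
          void (*action)(struct softirq_action *);  /* void *data field is gone */
      };

      static struct softirq_action softirq_vec[NR_SOFTIRQS];

      /* After the patch: just the vector index and the handler, no data arg. */
      static void open_softirq(int nr, void (*action)(struct softirq_action *))
      {
          softirq_vec[nr].action = action;
      }

      static void blk_done_softirq(struct softirq_action *h)
      {
          (void)h;
          printf("BLOCK_SOFTIRQ handler ran\n");
      }

      int main(void)
      {
          open_softirq(BLOCK_SOFTIRQ, blk_done_softirq);   /* was: ..., NULL */
          softirq_vec[BLOCK_SOFTIRQ].action(&softirq_vec[BLOCK_SOFTIRQ]);
          return 0;
      }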
  7. 15 May 2008, 1 commit
    • Remove blkdev warning triggered by using md · e7e72bf6
      Neil Brown authored
      As setting and clearing queue flags now requires that we hold a spinlock
      on the queue, and as blk_queue_stack_limits is called without that lock,
      get the lock inside blk_queue_stack_limits.
      
      For blk_queue_stack_limits to be able to find the right lock, each md
      personality needs to set q->queue_lock to point to the appropriate
      lock. Those personalities which didn't previously use a spinlock now
      use q->__queue_lock, so that lock is always initialised when the
      queue is allocated.
      
      With this in place, setting/clearing of the QUEUE_FLAG_PLUGGED bit
      will no longer cause warnings, as it will be clear that the proper
      lock is held; a toy model of the locking pattern is sketched after
      this entry.
      
      Thanks to Dan Williams for review and fixing the silly bugs.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Alistair John Strachan <alistair@devzero.co.uk>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Jacek Luczak <difrost.kernel@gmail.com>
      Cc: Prakash Punnoor <prakash@punnoor.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
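      A rough user-space model of the locking pattern described above, with
      a pthread mutex standing in for the queue spinlock; QUEUE_FLAG_EXAMPLE
      and the flag-clearing body are illustrative placeholders, not the real
      limit-stacking logic.

      #include <pthread.h>
      #include <stdio.h>

      #define QUEUE_FLAG_EXAMPLE 0x1UL   /* hypothetical flag bit */

      /* Simplified stand-in for struct request_queue. */
      struct request_queue {
          unsigned long queue_flags;
          pthread_mutex_t __queue_lock;  /* embedded default lock */
          pthread_mutex_t *queue_lock;   /* lock in use; md may point it elsewhere */
      };

      static void queue_init(struct request_queue *q)
      {
          q->queue_flags = 0;
          pthread_mutex_init(&q->__queue_lock, NULL);
          q->queue_lock = &q->__queue_lock;  /* always initialised, as the patch does */
      }

      /* The flag update now happens under the queue's own lock, inside the
       * helper, because callers such as md invoke it without any lock held. */
      static void blk_queue_stack_limits(struct request_queue *t,
                                         struct request_queue *b)
      {
          if (!(b->queue_flags & QUEUE_FLAG_EXAMPLE)) {
              pthread_mutex_lock(t->queue_lock);
              t->queue_flags &= ~QUEUE_FLAG_EXAMPLE;
              pthread_mutex_unlock(t->queue_lock);
          }
      }

      int main(void)
      {
          struct request_queue md_q, disk_q;

          queue_init(&md_q);
          queue_init(&disk_q);
          md_q.queue_flags = QUEUE_FLAG_EXAMPLE;   /* set on the stacked queue */

          blk_queue_stack_limits(&md_q, &disk_q);  /* clears it under md_q's lock */
          printf("top queue flags: 0x%lx\n", md_q.queue_flags);
          return 0;
      }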
  8. 07 May 2008, 2 commits
    • block: avoid duplicate calls to get_part() in disk stat code · 28f13702
      Jens Axboe authored
      get_part() is fairly expensive, as it loops over the partitions
      (O(N)) to find the right one. To make matters worse, in lots of
      normal IO paths we end up looking the partition up twice. Change the
      stat add code to accept a passed-in partition instead; a toy sketch
      of the shape of this change follows the entry.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
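      A toy sketch of the shape of the change, with simplified structs and
      an invented disk_stat_add() signature (the real kernel accounting
      macros look different): look the partition up once and hand it to the
      accounting code instead of resolving it a second time.

      #include <stdio.h>

      struct hd_struct { unsigned long start_sect, nr_sects, ios; };

      struct gendisk {
          int nr_parts;
          struct hd_struct part[4];
      };

      /* O(N) scan over the partition table -- the cost the patch avoids
       * paying twice per IO. */
      static struct hd_struct *get_part(struct gendisk *disk, unsigned long sector)
      {
          for (int i = 0; i < disk->nr_parts; i++) {
              struct hd_struct *p = &disk->part[i];
              if (sector >= p->start_sect && sector < p->start_sect + p->nr_sects)
                  return p;
          }
          return NULL;
      }

      /* After the change: accounting takes the partition the caller found. */
      static void disk_stat_add(struct hd_struct *part)
      {
          if (part)
              part->ios++;
      }

      int main(void)
      {
          struct gendisk disk = { 2, { { 0, 100, 0 }, { 100, 100, 0 } } };
          unsigned long sector = 150;

          struct hd_struct *part = get_part(&disk, sector);  /* look it up once */
          disk_stat_add(part);                               /* ... and reuse it */
          printf("ios on second partition: %lu\n", disk.part[1].ios);
          return 0;
      }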
    • block: optimize generic_unplug_device() · dbaf2c00
      Jens Axboe authored
      Original patch from Mikulas Patocka <mpatocka@redhat.com>
      
      Mike Anderson was doing an OLTP benchmark on a computer with 48 physical
      disks mapped to one logical device via device mapper.
      
      He found a slowdown on request_queue->lock in generic_unplug_device.
      The slowdown is caused by the fact that when some code calls unplug
      on the device mapper device, device mapper calls unplug on all of the
      physical disks. Each of these unplug calls takes the lock, finds that
      the queue is already unplugged, releases the lock and exits.
      
      With the patch below, performance of the benchmark increased by 18%
      (for the whole OLTP application, not just block layer
      microbenchmarks).
      
      So I'm submitting this patch for upstream. I think the patch is
      correct, because when several threads call plug and unplug
      simultaneously, it is unspecified whether the queue is or isn't
      plugged (so the patch can't make this worse), and the caller that
      plugged the queue should unplug it anyway (if it doesn't, there's a
      3ms timeout). A compilable toy model of the unlocked check follows
      this entry.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
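      A compilable toy model of the optimization, with a pthread mutex in
      place of the queue spinlock and a bool in place of the plugged queue
      flag: test the plugged state before taking the lock so the common
      already-unplugged case stays lock-free.

      #include <pthread.h>
      #include <stdbool.h>
      #include <stdio.h>

      struct request_queue {
          bool plugged;
          pthread_mutex_t lock;     /* stands in for request_queue->queue_lock */
      };

      /* Must be called with the lock held; does the real unplug work. */
      static void __generic_unplug_device(struct request_queue *q)
      {
          if (q->plugged) {
              q->plugged = false;
              printf("queue unplugged, requests dispatched\n");
          }
      }

      static void generic_unplug_device(struct request_queue *q)
      {
          /* Cheap unlocked check first.  It is racy, but plug state is already
           * unspecified under concurrent plug/unplug, the plugger is expected
           * to unplug, and the ~3ms timeout backs it up, so skipping is safe. */
          if (!q->plugged)
              return;

          pthread_mutex_lock(&q->lock);
          __generic_unplug_device(q);
          pthread_mutex_unlock(&q->lock);
      }

      int main(void)
      {
          struct request_queue q = { .plugged = false };

          pthread_mutex_init(&q.lock, NULL);
          generic_unplug_device(&q);   /* common case: returns without locking */
          q.plugged = true;
          generic_unplug_device(&q);   /* takes the lock and really unplugs */
          return 0;
      }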
  9. 01 May 2008, 1 commit
  10. 29 Apr 2008, 5 commits
  11. 04 Mar 2008, 3 commits
  12. 19 Feb 2008, 2 commits
    • block: add request->raw_data_len · 6b00769f
      Tejun Heo authored
      With padding and draining moved into it, the block layer may now
      extend requests as directed by queue parameters, so a request now has
      two sizes: the original request size and the extended size, which
      matches the size of the area pointed to by the bios and later by the
      sgs. The latter size is what lower layers are primarily interested in
      when allocating, filling up DMA tables and setting up the controller.
      
      Both padding and draining extend the data area to accommodate
      controller characteristics.  As any controller which speaks SCSI can
      handle underflows, feeding a larger data area is safe.
      
      So, this patch makes the primary data length field, request->data_len,
      indicate the size of the full data area, and adds a separate length
      field, request->raw_data_len, for the unmodified request size. The
      latter is used when reporting to the higher layer (userland) and
      wherever the original request size should be fed to the controller or
      device. A small illustrative sketch of the two fields follows this
      entry.
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
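      A small illustrative sketch of the two sizes: struct request is cut
      down to just these fields, and map_user_buffer() is a hypothetical
      helper with invented numbers, not the kernel's mapping code.
      data_len grows with padding and draining while raw_data_len keeps the
      size that was originally requested.

      #include <stdio.h>

      /* Simplified stand-in for the request fields this commit adds/uses. */
      struct request {
          unsigned int data_len;      /* extended size: what DMA/sg setup sees  */
          unsigned int raw_data_len;  /* original size: reported back to userland */
      };

      static void map_user_buffer(struct request *rq, unsigned int user_len,
                                  unsigned int pad, unsigned int drain)
      {
          rq->raw_data_len = user_len;               /* unmodified request size   */
          rq->data_len     = user_len + pad + drain; /* area covered by bios/sgs  */
      }

      int main(void)
      {
          struct request rq;

          /* e.g. a 510-byte transfer padded to a 4-byte boundary plus a drain page */
          map_user_buffer(&rq, 510, 2, 4096);
          printf("controller/DMA sees %u bytes, userland is told %u bytes\n",
                 rq.data_len, rq.raw_data_len);
          return 0;
      }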
    • make blk-core.c:request_cachep static again · 5ece6c52
      Adrian Bunk authored
      request_cachep needlessly became global.
      Signed-off-by: Adrian Bunk <bunk@kernel.org>
      Signed-off-by: Jens Axboe <axboe@carl.home.kernel.dk>
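      In plain C terms the fix is just an internal-linkage change; below is
      a trivial stand-alone illustration (opaque struct, not the real slab
      code or the actual diff).

      #include <stddef.h>
      #include <stdio.h>

      struct kmem_cache;                  /* opaque, as in the kernel headers */

      /* static = internal linkage: only this translation unit
       * (blk-core.c in the kernel) can see the symbol now. */
      static struct kmem_cache *request_cachep = NULL;

      int main(void)
      {
          printf("request_cachep is %s\n", request_cachep ? "set" : "unset");
          return 0;
      }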
  13. 08 Feb 2008, 3 commits
  14. 01 Feb 2008, 2 commits
  15. 30 Jan 2008, 6 commits
  16. 28 Jan 2008, 8 commits