1. 23 Aug 2010, 1 commit
  2. 08 Aug 2010, 1 commit
    •
      block: push down BKL into .open and .release · 6e9624b8
      Authored by Arnd Bergmann
      The open and release block_device_operations are currently
      called with the BKL held. In order to change that, we must
      first make sure that all drivers that currently rely
      on this have no regressions.
      
      This blindly pushes the BKL into all .open and .release
      operations for all block drivers to prepare for the
      next step. The drivers can subsequently replace the BKL
      with their own locks or remove it completely when it can
      be shown that it is not needed.
      
      The functions blkdev_get and blkdev_put are the only
      remaining users of the big kernel lock in the block
      layer, besides a few uses in the ioctl code, none
      of which need to serialize with blkdev_{get,put}.
      
      Most of the code in these two functions is also under the protection
      of bdev->bd_mutex, including the actual calls to
      ->open and ->release, and the common code does not
      access any global data structures that need the BKL.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      6e9624b8
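
      As a rough illustration of what "pushing down the BKL" means here (a sketch, not the
      literal diff from this commit; the foo_* names and do-nothing bodies are placeholders,
      headers roughly as of the 2.6.35 era), each converted driver ends up wrapping its
      open/release bodies like this:

      #include <linux/smp_lock.h>   /* lock_kernel() / unlock_kernel() */
      #include <linux/blkdev.h>
      #include <linux/genhd.h>

      static int foo_open(struct block_device *bdev, fmode_t mode)
      {
              int rc;

              lock_kernel();        /* previously taken by blkdev_get() on the driver's behalf */
              rc = 0;               /* the driver's original open logic goes here */
              unlock_kernel();
              return rc;
      }

      static int foo_release(struct gendisk *disk, fmode_t mode)
      {
              lock_kernel();        /* previously taken by blkdev_put() on the driver's behalf */
              /* the driver's original release logic goes here */
              unlock_kernel();
              return 0;
      }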
  3. 26 Feb 2010, 2 commits
  4. 14 Jan 2010, 1 commit
  5. 19 Dec 2009, 1 commit
  6. 07 Dec 2009, 2 commits
  7. 14 Oct 2009, 1 commit
    •
      [S390] tape390: Fix request queue handling in block driver · 03cadd36
      Authored by Michael Holzheu
      When setting a channel-attached tape online under Linux 2.6.31, the
      "vol_id" process from udev hangs in sync_page():
       2 sync_page+144 [0x1dfaac]
       3 __wait_on_bit_lock+194 [0x58c23e]
       4 __lock_page+116 [0x1df9dc]
       5 truncate_inode_pages_range+728 [0x1ed7cc]
       6 __blkdev_put+244 [0x25f738]
       7 __fput+300 [0x229c4c]
       8 filp_close+122 [0x225a3a]
      
      The reason for that is an error in the request queue handling. It can
      happen that we fetch a request, but do not process it further because
      the number of queued requests exceeds TAPEBLOCK_MIN_REQUEUE.
      To fix this, we should call blk_peek_request() instead of
      blk_fetch_request() in the while condition and fetch the request in
      the loop body afterwards.
      
      This bug was introduced with the patch "block: implement and enforce
      request peek/start/fetch" (9934c8c0)
      Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      03cadd36
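
      A sketch of the corrected loop shape described above (illustrative: only the
      blk_peek_request()/blk_fetch_request() calls match the description; nr_queued and
      the function name are placeholders, and TAPEBLOCK_MIN_REQUEUE is the tape driver's
      requeue threshold, defined in its own headers):

      #include <linux/blkdev.h>

      static void tapeblock_requeue_sketch(struct request_queue *queue)
      {
              struct request *req;
              int nr_queued = 0;

              /* peek in the condition: the request stays on the queue if the
               * TAPEBLOCK_MIN_REQUEUE limit stops the loop */
              while ((req = blk_peek_request(queue)) &&
                     nr_queued < TAPEBLOCK_MIN_REQUEUE) {
                      /* fetch (dequeue and start) only once we commit to processing it */
                      req = blk_fetch_request(queue);
                      nr_queued++;
                      /* hand req over to the tape subsystem here */
              }
      }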
  8. 22 Sep 2009, 1 commit
  9. 11 Sep 2009, 2 commits
  10. 23 May 2009, 1 commit
  11. 11 May 2009, 2 commits
    •
      block: implement and enforce request peek/start/fetch · 9934c8c0
      Authored by Tejun Heo
      Until now, the block layer allowed two separate modes of request execution.
      A request is always acquired from the request queue via
      elv_next_request().  After that, drivers are free to either dequeue it
      or process it without dequeueing.  Dequeueing allows elv_next_request()
      to return the next request so that multiple requests can be in flight.
      
      Executing requests without dequeueing has its merits mostly in
      allowing drivers for simpler devices which can't do scatter-gather to deal
      with segments only, without considering request boundaries.  However, the
      benefit this brings is dubious and declining, while the cost of the API
      ambiguity is increasing.  Segment-based drivers are usually for very
      old or limited devices, and as converting to the dequeueing model isn't
      difficult, it doesn't justify the API overhead it puts on the block layer
      and its more modern users.
      
      Previous patches converted all block low-level drivers to the dequeueing
      model.  This patch completes the API transition by...
      
      * renaming elv_next_request() to blk_peek_request()
      
      * renaming blkdev_dequeue_request() to blk_start_request()
      
      * adding blk_fetch_request() which is combination of peek and start
      
      * disallowing completion of queued (not started) requests
      
      * applying new API to all LLDs
      
      The renamings are for consistency and to break out-of-tree code so that
      it's apparent that out-of-tree drivers need updating.
      
      [ Impact: block request issue API cleanup, no functional change ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Mike Miller <mike.miller@hp.com>
      Cc: unsik Kim <donari75@gmail.com>
      Cc: Paul Clements <paul.clements@steeleye.com>
      Cc: Tim Waugh <tim@cyberelk.net>
      Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Laurent Vivier <Laurent@lvivier.info>
      Cc: Jeff Garzik <jgarzik@pobox.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Grant Likely <grant.likely@secretlab.ca>
      Cc: Adrian McMenamin <adrian@mcmen.demon.co.uk>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
      Cc: Borislav Petkov <petkovbb@googlemail.com>
      Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
      Cc: Alex Dubov <oakad@yahoo.com>
      Cc: Pierre Ossman <drzeus@drzeus.cx>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
      Cc: Stefan Weinhuber <wein@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Pete Zaitcev <zaitcev@redhat.com>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      9934c8c0
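
      A minimal sketch of the renamed API from a low-level driver's point of view
      (the example_request_fn name is a placeholder):

      #include <linux/blkdev.h>

      static void example_request_fn(struct request_queue *q)
      {
              struct request *rq;

              /* blk_fetch_request() == blk_peek_request() + blk_start_request():
               * the request is dequeued and marked started in one step */
              while ((rq = blk_fetch_request(q)) != NULL) {
                      /* issue rq to the hardware; under the new rules only
                       * started (fetched) requests may be completed */
              }
      }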
    •
      block: convert to pos and nr_sectors accessors · 83096ebf
      Authored by Tejun Heo
      With recent cleanups, there is no place where a low-level driver
      directly manipulates request fields.  This means that the 'hard'
      request fields always equal the !hard fields.  Convert all
      rq->sectors, nr_sectors and current_nr_sectors references to
      accessors.
      
      While at it, drop the superfluous blk_rq_pos() < 0 test in swim.c.
      
      [ Impact: use pos and nr_sectors accessors ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
      Tested-by: Grant Likely <grant.likely@secretlab.ca>
      Acked-by: Grant Likely <grant.likely@secretlab.ca>
      Tested-by: Adrian McMenamin <adrian@mcmen.demon.co.uk>
      Acked-by: Adrian McMenamin <adrian@mcmen.demon.co.uk>
      Acked-by: Mike Miller <mike.miller@hp.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
      Cc: Borislav Petkov <petkovbb@googlemail.com>
      Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
      Cc: Eric Moore <Eric.Moore@lsi.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Pete Zaitcev <zaitcev@redhat.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Paul Clements <paul.clements@steeleye.com>
      Cc: Tim Waugh <tim@cyberelk.net>
      Cc: Jeff Garzik <jgarzik@pobox.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Alex Dubov <oakad@yahoo.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Dario Ballabio <ballabio_dario@emc.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: unsik Kim <donari75@gmail.com>
      Cc: Laurent Vivier <Laurent@lvivier.info>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      83096ebf
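
      For illustration, the accessor style the drivers are converted to looks roughly
      like this (hypothetical driver function; only the blk_rq_*() helpers are the
      block-layer accessors referred to above):

      #include <linux/blkdev.h>

      static void example_issue(struct request *rq)
      {
              sector_t pos     = blk_rq_pos(rq);         /* starting sector of the request */
              unsigned int nr  = blk_rq_sectors(rq);     /* sectors left in the whole request */
              unsigned int cur = blk_rq_cur_sectors(rq); /* sectors in the current segment */

              /* program the hardware using pos/nr/cur instead of touching
               * the rq fields directly */
              (void)pos; (void)nr; (void)cur;
      }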
  12. 28 Apr 2009, 1 commit
    •
      block: implement and use [__]blk_end_request_all() · 40cbbb78
      Authored by Tejun Heo
      There are many [__]blk_end_request() call sites which call it with the
      full request length and expect full completion.  Many of them ensure
      that the request actually completes by calling BUG_ON() on the return
      value, which is awkward and error-prone.
      
      This patch adds [__]blk_end_request_all(), which takes @rq and @error
      and fully completes the request.  A BUG_ON() is added to ensure that
      this actually happens.
      
      Most conversions are simple but there are a few noteworthy ones.
      
      * cdrom/viocd: viocd_end_request() replaced with direct calls to
        __blk_end_request_all().
      
      * s390/block/dasd: dasd_end_request() replaced with direct calls to
        __blk_end_request_all().
      
      * s390/char/tape_block: tapeblock_end_request() replaced with direct
        calls to blk_end_request_all().
      
      [ Impact: cleanup ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Mike Miller <mike.miller@hp.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Jeff Garzik <jgarzik@pobox.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Alex Dubov <oakad@yahoo.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      40cbbb78
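
      A sketch of the call-site conversion this patch performs (the surrounding code is
      illustrative; note that __blk_end_request_all() must be called with the queue lock held):

      #include <linux/blkdev.h>

      static void finish_request(struct request *rq, int error)
      {
              /* before: complete the full length and assert full completion, e.g.
               *     if (__blk_end_request(rq, error, blk_rq_bytes(rq)))
               *             BUG();
               * after: one helper that completes the whole request and BUG()s
               * internally if anything were left pending */
              __blk_end_request_all(rq, error);
      }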
  13. 26 Mar 2009, 1 commit
  14. 28 Oct 2008, 1 commit
  15. 21 Oct 2008, 2 commits
    •
      [PATCH] switch tape_block · 4e999af9
      Authored by Al Viro
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      4e999af9
    •
      [PATCH] beginning of methods conversion · d4430d62
      Authored by Al Viro
      To keep the size of changesets sane we split the switch by drivers;
      to keep the damn thing bisectable we do the following:
      	1) rename the affected methods, add ones with correct
      prototypes, make the (few) callers handle both.  That's this changeset.
      	2) for each driver, convert to the new methods.  *ALL* drivers
      are converted in this series.
      	3) kill the old (renamed) methods.
      
      Note that it _is_ a flag day; all in-tree drivers are converted and by the
      end of this series no trace of the old methods remains.  The only reason why
      we do it this way is to keep the damn thing bisectable and allow per-driver
      debugging if anything goes wrong.
      
      New methods:
      	open(bdev, mode)
      	release(disk, mode)
      	ioctl(bdev, mode, cmd, arg)		/* Called without BKL */
      	compat_ioctl(bdev, mode, cmd, arg)
      	locked_ioctl(bdev, mode, cmd, arg)	/* Called with BKL, legacy */
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      d4430d62
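
      To make the new prototypes concrete, a converted driver's operations table
      would look roughly like this after the series (the foo_* names are placeholders;
      struct block_device_operations lived in <linux/fs.h> at that time):

      #include <linux/fs.h>
      #include <linux/genhd.h>
      #include <linux/module.h>

      static int foo_open(struct block_device *bdev, fmode_t mode);
      static int foo_release(struct gendisk *disk, fmode_t mode);
      static int foo_ioctl(struct block_device *bdev, fmode_t mode,
                           unsigned int cmd, unsigned long arg);

      static struct block_device_operations foo_ops = {
              .owner   = THIS_MODULE,
              .open    = foo_open,      /* open(bdev, mode) */
              .release = foo_release,   /* release(disk, mode) */
              .ioctl   = foo_ioctl,     /* called without the BKL */
      };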
  16. 11 Oct 2008, 1 commit
  17. 30 May 2008, 1 commit
    •
      [S390] tape: Fix race condition in tape block device driver · f71ad62a
      Authored by Michael Holzheu
      Due to an incorrect function call sequence it can happen that a tape block
      request is finished before the request is taken from the block request queue.
      
      The following sequence leads to that condition:
       * tapeblock_start_request() -> start CCW program
       * Request finishes -> IO interrupt
       * tapeblock_end_request()
       * end_that_request_last()
      
      If blkdev_dequeue_request() has not been called before end_that_request_last(),
      a kernel bug is triggered in end_that_request_last() because the request is
      still queued.  To solve the problem, blkdev_dequeue_request() has to be called
      before starting the CCW program.
      Signed-off-by: Michael Holzheu <holzheu@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      f71ad62a
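
      In terms of the block API of that time, the corrected ordering reads roughly like
      this (a sketch; the function name and the CCW details are placeholders):

      #include <linux/blkdev.h>

      static void tapeblock_start_request_sketch(struct request *req)
      {
              /* dequeue first: once the CCW program runs, the interrupt handler
               * may complete the request at any time, and end_that_request_last()
               * must never see a still-queued request */
              blkdev_dequeue_request(req);

              /* ...build and start the channel (CCW) program for req here... */
      }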
  18. 28 Jan 2008, 1 commit
  19. 24 Jul 2007, 1 commit
  20. 06 Feb 2007, 1 commit
  21. 08 Dec 2006, 1 commit
  22. 01 Jul 2006, 1 commit
  23. 11 Apr 2006, 1 commit
  24. 01 Apr 2006, 1 commit
  25. 07 Jan 2006, 1 commit
  26. 06 Jan 2006, 1 commit
    •
      [BLOCK] add @uptodate to end_that_request_last() and @error to rq_end_io_fn() · 8ffdc655
      Authored by Tejun Heo
      Add an @uptodate argument to end_that_request_last() and an @error
      argument to rq_end_io_fn().  There's no generic way to pass an error code
      to the request completion function, making generic error handling
      of non-fs requests difficult (rq->errors is driver-specific and
      each driver uses it differently).  This patch adds @uptodate
      to end_that_request_last() and @error to rq_end_io_fn().
      
      For fs requests this doesn't really matter, so just using the
      same uptodate argument used in the last call to
      end_that_request_first() should suffice.  IMHO, this can also
      help the generic command-carrying request Jens is working on.
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Signed-off-by: Jens Axboe <axboe@suse.de>
      8ffdc655
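
      A sketch of a driver completion path after this change (illustrative code of that
      era; the point is the extra uptodate argument to end_that_request_last()):

      #include <linux/blkdev.h>

      static void complete_request(struct request *req, int uptodate)
      {
              /* finish the data part first, then end the request, passing the
               * same uptodate value to both as suggested above */
              if (!end_that_request_first(req, uptodate, req->hard_nr_sectors))
                      end_that_request_last(req, uptodate);
      }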
  27. 17 Apr 2005, 1 commit
    •
      Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4