1. 27 February 2010: 2 commits
  2. 25 February 2010: 2 commits
  3. 30 November 2009: 1 commit
  4. 26 November 2009: 1 commit
    • block: add helpers to run flush_dcache_page() against a bio and a request's pages · 2d4dc890
      Committed by Ilya Loginov
      The mtdblock driver doesn't call flush_dcache_page() for the pages in a
      request, which causes problems on architectures where the icache doesn't
      fill from the dcache, or with dcache aliases.  The patch fixes this.
      
      The ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE symbol was introduced to avoid
      pointless empty cache-thrashing loops on architectures for which
      flush_dcache_page() is a no-op, and every architecture was provided
      with it.  The new helpers flush a request's (or a bio's) pages on
      architectures where ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE is 1 and do
      nothing otherwise.
      
      See "fix mtd_blkdevs problem with caches on some architectures" discussion
      on LKML for more information.
      Signed-off-by: Ilya Loginov <isloginov@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Peter Horton <phorton@bitbox.co.uk>
      Cc: "Ed L. Cashin" <ecashin@coraid.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      2d4dc890
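      A minimal hedged sketch of how a PIO-style driver can use the two helpers
      this commit introduces; the surrounding function names, and the idea that
      data was just copied by the CPU, are illustrative and not taken from the
      patch itself.

      #include <linux/bio.h>
      #include <linux/blkdev.h>

      /*
       * Hedged sketch, not the literal mtd_blkdevs hunk: after copying data
       * into a request's pages with the CPU, flush them all in one call.  On
       * architectures where ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE is 0 the helper
       * is a no-op, so there is no cost where flushing isn't needed.
       */
      static void sketch_after_pio_read(struct request *req)
      {
              /* ...device data has just been copied into the request's pages... */
              rq_flush_dcache_pages(req);
      }

      /* The bio-level counterpart, for a make_request-style driver. */
      static void sketch_after_bio_read(struct bio *bio)
      {
              bio_flush_dcache_pages(bio);
      }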
  5. 20 October 2009: 1 commit
  6. 02 October 2009: 2 commits
    • block: use normal I/O path for discard requests · c15227de
      Committed by Christoph Hellwig
      prepare_discard_fn() was being called in a place where memory allocation
      was effectively impossible.  This makes it inappropriate for all but
      the most trivial translations of Linux's DISCARD operation to the block
      command set.  Additionally, adding a payload there makes the ownership
      of the bio backing unclear, as it is now allocated by the device driver
      and not the submitter as usual.
      
      It is replaced with QUEUE_FLAG_DISCARD, which is used to indicate
      whether the queue supports discard operations.  blkdev_issue_discard
      now allocates a one-page, sector-length payload, which is the right
      thing for the common ATA and SCSI implementations.
      
      The mtd implementation of prepare_discard_fn() is replaced with a simple
      check for the request being a discard.
      
      Largely based on a previous patch from Matthew Wilcox <matthew@wil.cx>,
      which covered the prepare_discard_fn change but not yet the different
      payload allocation.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      c15227de
    • block: use normal I/O path for discard requests · 1122a26f
      Committed by Christoph Hellwig
      prepare_discard_fn() was being called in a place where memory allocation
      was effectively impossible.  This makes it inappropriate for all but
      the most trivial translations of Linux's DISCARD operation to the block
      command set.  Additionally, adding a payload there makes the ownership
      of the bio backing unclear, as it is now allocated by the device driver
      and not the submitter as usual.
      
      It is replaced with QUEUE_FLAG_DISCARD, which is used to indicate
      whether the queue supports discard operations.  blkdev_issue_discard
      now allocates a one-page, sector-length payload, which is the right
      thing for the common ATA and SCSI implementations.
      
      The mtd implementation of prepare_discard_fn() is replaced with a simple
      check for the request being a discard.
      
      Largely based on a previous patch from Matthew Wilcox <matthew@wil.cx>,
      which covered the prepare_discard_fn change but not yet the different
      payload allocation.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      1122a26f
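      A hedged sketch of the pattern these two commits establish for a driver
      like mtd_blkdevs: advertise discard support on the queue and recognise
      discard requests in the normal request path.  The sketch_* helpers are
      invented for illustration, and the era's blk_discard_rq() test is an
      assumption about the contemporaneous API.

      #include <linux/blkdev.h>

      /* Made-up stand-in for telling the device the sectors are unwanted. */
      static int sketch_discard(sector_t start, unsigned int nr_sects)
      {
              return 0;
      }

      static void sketch_init_queue(struct request_queue *q, bool can_discard)
      {
              /* instead of installing a driver prepare_discard_fn */
              if (can_discard)
                      queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
      }

      static int sketch_do_request(struct request *req)
      {
              /* the payload was built by blkdev_issue_discard, not by us */
              if (blk_discard_rq(req))
                      return sketch_discard(blk_rq_pos(req), blk_rq_sectors(req));

              /* ...normal read/write handling... */
              return 0;
      }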
  7. 22 September 2009: 1 commit
  8. 03 August 2009: 1 commit
  9. 26 May 2009: 1 commit
  10. 23 May 2009: 1 commit
  11. 11 May 2009: 4 commits
    • block: implement and enforce request peek/start/fetch · 9934c8c0
      Committed by Tejun Heo
      Till now the block layer has allowed two separate modes of request
      execution.  A request is always acquired from the request queue via
      elv_next_request(); after that, drivers are free to either dequeue it
      or process it without dequeueing.  Dequeueing allows elv_next_request()
      to return the next request so that multiple requests can be in flight.
      
      Executing requests without dequeueing has its merits, mostly in
      allowing drivers for simpler devices which can't do scatter-gather to
      deal only with segments, without considering request boundaries.
      However, the benefit this brings is dubious and declining, while the
      cost of the API ambiguity is increasing.  Segment-based drivers are
      usually for very old or limited devices, and as converting to the
      dequeueing model isn't difficult, it doesn't justify the API overhead
      it puts on the block layer and its more modern users.
      
      Previous patches converted all block low level drivers to the
      dequeueing model.  This patch completes the API transition by:
      
      * renaming elv_next_request() to blk_peek_request()
      
      * renaming blkdev_dequeue_request() to blk_start_request()
      
      * adding blk_fetch_request() which is combination of peek and start
      
      * disallowing completion of queued (not started) requests
      
      * applying new API to all LLDs
      
      The renamings are for consistency and to break out-of-tree code, so
      that it is apparent that out-of-tree drivers need updating.
      
      [ Impact: block request issue API cleanup, no functional change ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Mike Miller <mike.miller@hp.com>
      Cc: unsik Kim <donari75@gmail.com>
      Cc: Paul Clements <paul.clements@steeleye.com>
      Cc: Tim Waugh <tim@cyberelk.net>
      Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Laurent Vivier <Laurent@lvivier.info>
      Cc: Jeff Garzik <jgarzik@pobox.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Grant Likely <grant.likely@secretlab.ca>
      Cc: Adrian McMenamin <adrian@mcmen.demon.co.uk>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
      Cc: Borislav Petkov <petkovbb@googlemail.com>
      Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
      Cc: Alex Dubov <oakad@yahoo.com>
      Cc: Pierre Ossman <drzeus@drzeus.cx>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
      Cc: Stefan Weinhuber <wein@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Pete Zaitcev <zaitcev@redhat.com>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      9934c8c0
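      A hedged sketch of the renamed issue API in a request_fn-style driver
      (the sketch_hw_* hooks are invented): peek at the head of the queue,
      explicitly start (dequeue) the request, then hand it to the hardware;
      blk_fetch_request() is simply the two steps combined.

      #include <linux/blkdev.h>

      /* Made-up hardware hooks, for illustration only. */
      static bool sketch_hw_ready(void) { return true; }
      static void sketch_hw_issue(struct request *rq) { }

      /* request_fn is invoked with the queue lock held, as both calls require. */
      static void sketch_request_fn(struct request_queue *q)
      {
              struct request *rq;

              while ((rq = blk_peek_request(q)) != NULL) {  /* was elv_next_request() */
                      if (!sketch_hw_ready())
                              break;                /* leave rq queued for later */

                      blk_start_request(rq);        /* was blkdev_dequeue_request() */
                      sketch_hw_issue(rq);          /* complete later, e.g. blk_end_request_all() */
              }
      }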
    • mtd_blkdevs: dequeue in-flight request · 1498ada7
      Committed by Tejun Heo
      mtd_blkdevs processes requests one-by-one synchronously from a kthread
      and can easily be converted to the dequeueing model.  Convert it.
      
      [ Impact: dequeue in-flight request ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      1498ada7
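      A hedged sketch in the spirit of this conversion, simplified and not the
      actual mtd_blktrans_thread(): a synchronous kthread that uses
      blk_fetch_request(), so the request it works on is already dequeued and
      in flight when it is processed and ended.

      #include <linux/blkdev.h>
      #include <linux/kthread.h>
      #include <linux/sched.h>

      /* Made-up stand-in for the synchronous device transfer. */
      static int sketch_transfer(struct request *req)
      {
              return 0;
      }

      static int sketch_request_thread(void *arg)
      {
              struct request_queue *q = arg;
              struct request *req;
              int err;

              spin_lock_irq(q->queue_lock);
              while (!kthread_should_stop()) {
                      req = blk_fetch_request(q);   /* peek + start in one call */
                      if (!req) {
                              set_current_state(TASK_INTERRUPTIBLE);
                              spin_unlock_irq(q->queue_lock);
                              schedule();
                              spin_lock_irq(q->queue_lock);
                              continue;
                      }

                      spin_unlock_irq(q->queue_lock);
                      err = sketch_transfer(req);   /* runs without the queue lock */
                      spin_lock_irq(q->queue_lock);

                      /* req is already dequeued (in flight), so just end it */
                      __blk_end_request_all(req, err);
              }
              spin_unlock_irq(q->queue_lock);
              return 0;
      }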
    • block: blk_rq_[cur_]_{sectors|bytes}() usage cleanup · 1011c1b9
      Committed by Tejun Heo
      With the previous changes, the following are now guaranteed for all
      requests in any valid state.
      
      * blk_rq_sectors() == blk_rq_bytes() >> 9
      * blk_rq_cur_sectors() == blk_rq_cur_bytes() >> 9
      
      Clean up accessor usages.  Notable changes are
      
      * nbd,i2o_block: end_all used instead of explicit byte count
      * scsi_lib: unnecessary conditional on request type removed
      
      [ Impact: cleanup ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Paul Clements <paul.clements@steeleye.com>
      Cc: Pete Zaitcev <zaitcev@redhat.com>
      Cc: Alex Dubov <oakad@yahoo.com>
      Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Boaz Harrosh <bharrosh@panasas.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      1011c1b9
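      A tiny hedged illustration of the guarantee and of the kind of call it
      simplifies; the driver context is invented.

      #include <linux/blkdev.h>

      /* Called with the queue lock held, as __blk_end_request() requires. */
      static void sketch_complete_segment(struct request *rq)
      {
              /* guaranteed for any request in a valid state: */
              WARN_ON(blk_rq_sectors(rq)     != blk_rq_bytes(rq)     >> 9);
              WARN_ON(blk_rq_cur_sectors(rq) != blk_rq_cur_bytes(rq) >> 9);

              /* so byte counts need not be derived from sector counts by hand */
              __blk_end_request(rq, 0, blk_rq_cur_bytes(rq));
      }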
    • block: convert to pos and nr_sectors accessors · 83096ebf
      Committed by Tejun Heo
      With recent cleanups, there is no place where a low level driver
      directly manipulates request fields.  This means that the 'hard'
      request fields always equal the !hard fields.  Convert all
      rq->sectors, nr_sectors and current_nr_sectors references to
      accessors.
      
      While at it, drop the superfluous blk_rq_pos() < 0 test in swim.c.
      
      [ Impact: use pos and nr_sectors accessors ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
      Tested-by: Grant Likely <grant.likely@secretlab.ca>
      Acked-by: Grant Likely <grant.likely@secretlab.ca>
      Tested-by: Adrian McMenamin <adrian@mcmen.demon.co.uk>
      Acked-by: Adrian McMenamin <adrian@mcmen.demon.co.uk>
      Acked-by: Mike Miller <mike.miller@hp.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
      Cc: Borislav Petkov <petkovbb@googlemail.com>
      Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
      Cc: Eric Moore <Eric.Moore@lsi.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Pete Zaitcev <zaitcev@redhat.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Paul Clements <paul.clements@steeleye.com>
      Cc: Tim Waugh <tim@cyberelk.net>
      Cc: Jeff Garzik <jgarzik@pobox.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Alex Dubov <oakad@yahoo.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Dario Ballabio <ballabio_dario@emc.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: unsik Kim <donari75@gmail.com>
      Cc: Laurent Vivier <Laurent@lvivier.info>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      83096ebf
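      A hedged before/after sketch of the conversion; this is illustrative
      driver code, not a hunk from the patch.

      #include <linux/blkdev.h>
      #include <linux/kernel.h>

      static void sketch_log_request(struct request *rq)
      {
              /*
               * Old style, now gone:
               *     start = rq->sector;
               *     count = rq->current_nr_sectors;
               */
              sector_t start     = blk_rq_pos(rq);          /* first sector */
              unsigned int total = blk_rq_sectors(rq);      /* whole request */
              unsigned int cur   = blk_rq_cur_sectors(rq);  /* current segment */

              pr_debug("req at sector %llu: %u sectors (%u in current segment)\n",
                       (unsigned long long)start, total, cur);
      }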
  12. 28 April 2009: 1 commit
    • block: replace end_request() with [__]blk_end_request_cur() · f06d9a2b
      Committed by Tejun Heo
      end_request() has been kept around for backward compatibility;
      however, it's about time for it to go away.
      
      * There aren't too many users left.
      
      * Its use of @uptodate is pretty confusing.
      
      * In some cases, newer code ends up using a mixture of end_request()
        and [__]blk_end_request[_all](), which is way too confusing.
      
      So, add [__]blk_end_request_cur() and replace end_request() with it.
      Most conversions are straightforward.  Noteworthy ones are...
      
      * paride/pcd: next_request() updated to take 0/-errno instead of 1/0.
      
      * paride/pf: pf_end_request() and next_request() updated to take
        0/-errno instead of 1/0.
      
      * xd: xd_readwrite() updated to return 0/-errno instead of 1/0.
      
      * mtd/mtd_blkdevs: blktrans_discard_request() updated to return
        0/-errno instead of 1/0.  Unnecessary local variable res
        initialization removed from mtd_blktrans_thread().
      
      [ Impact: cleanup ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Joerg Dorchain <joerg@dorchain.net>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Grant Likely <grant.likely@secretlab.ca>
      Acked-by: Laurent Vivier <Laurent@lvivier.info>
      Cc: Tim Waugh <tim@cyberelk.net>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Pete Zaitcev <zaitcev@redhat.com>
      Cc: unsik Kim <donari75@gmail.com>
      f06d9a2b
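      A hedged sketch of the mechanical part of the conversion; the
      error-propagation changes in the drivers listed above are the
      noteworthy part, and the helper name below is illustrative.

      #include <linux/blkdev.h>

      /* Called with the queue lock held, like end_request() was. */
      static void sketch_finish_current_segment(struct request *req, int error)
      {
              /* old: end_request(req, error ? 0 : 1);   "uptodate" flag      */
              /* new: 0 or -errno, still completes just the current segment   */
              __blk_end_request_cur(req, error);
      }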
  13. 04 April 2009: 1 commit
    • [MTD] driver model updates · 1f24b5a8
      Committed by David Brownell
      Update driver model support in the MTD framework, so it fits
      better into the current udev-based hotplug framework:
      
       - Each mtd_info now has a device node.  MTD drivers should set
         the dev.parent field to point to the physical device, before
         setting up partitions or otherwise declaring MTDs.
      
       - Those device nodes always map to /sys/class/mtdX device nodes,
         which no longer depend on MTD_CHARDEV.
      
       - Those mtdX sysfs nodes have a "starter set" of attributes;
         it's not yet sufficient to replace /proc/mtd.
      
       - Enabling MTD_CHARDEV provides /sys/class/mtdXro/ nodes and the
         /sys/class/mtd*/dev attributes (for udev, mdev, etc).
      
       - Include a MODULE_ALIAS_CHARDEV_MAJOR macro.  It'll work with
         udev creating the /dev/mtd* nodes, not just a static rootfs.
      
      So the sysfs structure is pretty much what you'd expect, except
      that readonly chardev nodes are a bit quirky.
      Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      1f24b5a8
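      A hedged sketch of what an MTD driver's probe path is expected to do
      after this change.  The function shape is invented and the era's
      add_mtd_device() registration call is assumed; the point is the
      dev.parent assignment and its ordering before registration.

      #include <linux/mtd/mtd.h>
      #include <linux/platform_device.h>

      static int sketch_register_mtd(struct platform_device *pdev,
                                     struct mtd_info *mtd)
      {
              /* parent the new per-MTD device node at the physical device... */
              mtd->dev.parent = &pdev->dev;

              /* ...before declaring the MTD (or its partitions) as usual */
              if (add_mtd_device(mtd))
                      return -ENODEV;
              return 0;
      }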
  14. 03 April 2009: 1 commit
  15. 21 October 2008: 2 commits
    • [PATCH] switch mtd_blkdevs · af0e2a0a
      Committed by Al Viro
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      af0e2a0a
    • [PATCH] beginning of methods conversion · d4430d62
      Committed by Al Viro
      To keep the size of changesets sane we split the switch by drivers;
      to keep the damn thing bisectable we do the following:
      	1) rename the affected methods, add ones with correct prototypes,
      	   and make the (few) callers handle both.  That's this changeset.
      	2) for each driver, convert to the new methods.  *ALL* drivers
      	   are converted in this series.
      	3) kill the old (renamed) methods.
      
      Note that it _is_ a flag day; all in-tree drivers are converted and by
      the end of this series no trace of the old methods remains.  The only
      reason we do it this way is to keep the damn thing bisectable and to
      allow per-driver debugging if anything goes wrong.
      
      New methods:
      	open(bdev, mode)
      	release(disk, mode)
      	ioctl(bdev, mode, cmd, arg)		/* Called without BKL */
      	compat_ioctl(bdev, mode, cmd, arg)
      	locked_ioctl(bdev, mode, cmd, arg)	/* Called with BKL, legacy */
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      d4430d62
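      A hedged sketch of a driver on the new prototypes; the names are
      invented, and the point is the signatures: open and ioctl get the
      block_device plus an fmode_t, release gets the gendisk.

      #include <linux/blkdev.h>
      #include <linux/fs.h>
      #include <linux/module.h>

      static int sketch_open(struct block_device *bdev, fmode_t mode)
      {
              return 0;
      }

      static int sketch_release(struct gendisk *disk, fmode_t mode)
      {
              return 0;
      }

      /* called without the BKL; locked_ioctl keeps the old BKL behaviour */
      static int sketch_ioctl(struct block_device *bdev, fmode_t mode,
                              unsigned int cmd, unsigned long arg)
      {
              return -ENOTTY;
      }

      static const struct block_device_operations sketch_fops = {
              .owner   = THIS_MODULE,
              .open    = sketch_open,
              .release = sketch_release,
              .ioctl   = sketch_ioctl,
      };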
  16. 09 October 2008: 1 commit
  17. 05 June 2008: 2 commits
  18. 03 December 2007: 1 commit
    • [MTD] Always initialise mutex in new mtd_blktrans_dev. · ce37ab42
      Committed by David Woodhouse
      We were only initialising the mutex in the case where the new device
      was automatically allocated the highest minor number.  If the caller
      specified a minor number, or if the device filled a free slot left by a
      previously deregistered device, the mutex wouldn't get initialised when
      we jumped out of the loop.
      
      Reported by Monte Copeland <catboat@texas.net>
      Signed-off-by: David Woodhouse <dwmw2@infradead.org>
      ce37ab42
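      A hedged sketch of the shape of the fix, using an illustrative stand-in
      struct rather than the real mtd_blktrans_dev: the mutex is initialised
      after the minor-number search, whichever branch picked the slot.

      #include <linux/list.h>
      #include <linux/mutex.h>

      /* Illustrative stand-in for the transport device; only relevant fields. */
      struct sketch_blktrans_dev {
              struct list_head list;
              int devnum;
              struct mutex lock;
      };

      static void sketch_add_dev(struct sketch_blktrans_dev *new,
                                 struct list_head *devs)
      {
              /* ...search 'devs': reuse a freed slot, honour a caller-supplied
               * devnum, or append with the next highest number... */
              list_add_tail(&new->list, devs);

              /* previously only the append-at-the-end branch reached this */
              mutex_init(&new->lock);
      }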
  19. 18 July 2007: 1 commit
    • Freezer: make kernel threads nonfreezable by default · 83144186
      Committed by Rafael J. Wysocki
      Currently, the freezer treats all tasks as freezable, except for the kernel
      threads that explicitly set the PF_NOFREEZE flag for themselves.  This
      approach is problematic, since it requires every kernel thread to either
      set PF_NOFREEZE explicitly, or call try_to_freeze(), even if it doesn't
      care for the freezing of tasks at all.
      
      It seems better to only require the kernel threads that want to or need to
      be frozen to use some freezer-related code and to remove any
      freezer-related code from the other (nonfreezable) kernel threads, which is
      done in this patch.
      
      The patch causes all kernel threads to be nonfreezable by default (i.e.
      to have PF_NOFREEZE set by default) and introduces the set_freezable()
      function, which should be called by freezable kernel threads in order
      to unset PF_NOFREEZE.  It also makes all of the currently freezable
      kernel threads call set_freezable(), so it shouldn't introduce any
      (intentional) change of behaviour.  Additionally, it updates the
      documentation to describe the freezing of tasks more accurately.
      
      [akpm@linux-foundation.org: build fixes]
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Nigel Cunningham <nigel@nigel.suspend2.net>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      83144186
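      A hedged sketch of a kthread opting back into the freezer under the new
      default; the loop body is illustrative.

      #include <linux/freezer.h>
      #include <linux/kthread.h>

      static int sketch_freezable_worker(void *unused)
      {
              set_freezable();                /* clear the default PF_NOFREEZE */

              while (!kthread_should_stop()) {
                      try_to_freeze();        /* park here while tasks are frozen */

                      /* ...do one unit of work, then sleep or reschedule... */
              }
              return 0;
      }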
  20. 29 June 2007: 2 commits
  21. 09 May 2007: 1 commit
  22. 23 April 2007: 1 commit
  23. 20 April 2007: 1 commit
  24. 29 November 2006: 2 commits
  25. 02 October 2006: 1 commit
    • [MTD] fix printk warning · 9a292308
      Committed by Jeff Garzik
      gcc spits out this warning:
      
      drivers/mtd/mtd_blkdevs.c: In function ‘do_blktrans_request’:
      drivers/mtd/mtd_blkdevs.c:72: warning: format ‘%ld’ expects type ‘long int’, but argument 2 has type ‘unsigned int’
      
      This could be fixed any number of ways, including use of BUG().
      rq_data_dir() only returns 0 or 1, so this entire case is superfluous.
      I did the simplest fix.
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      Signed-off-by: David Woodhouse <dwmw2@infradead.org>
      9a292308
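      A hedged sketch of the reasoning, not the exact hunk that was applied:
      rq_data_dir() only ever yields READ or WRITE, so no default case needs
      to print the direction at all.

      #include <linux/blkdev.h>

      static int sketch_do_transfer(struct request *req)
      {
              switch (rq_data_dir(req)) {
              case READ:
                      return 0;       /* ...read path... */
              case WRITE:
                      return 0;       /* ...write path... */
              default:
                      return -EIO;    /* unreachable; kept only to quiet the compiler */
              }
      }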
  26. 01 October 2006: 1 commit
    • [PATCH] Split struct request ->flags into two parts · 4aff5e23
      Committed by Jens Axboe
      Right now ->flags is a bit of a mess: some bits are request types, and
      others are just modifiers.  Clean this up by splitting it into
      ->cmd_type and ->cmd_flags.  This allows the introduction of generic
      Linux block message types, useful for sending generic Linux commands
      to block devices.
      Signed-off-by: Jens Axboe <axboe@suse.de>
      4aff5e23
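      A hedged sketch of what the split means for a driver (the check below is
      illustrative): the kind of request now lives in rq->cmd_type, while
      modifier bits stay in rq->cmd_flags.

      #include <linux/blkdev.h>

      static int sketch_check_request(struct request *rq)
      {
              if (rq->cmd_type != REQ_TYPE_FS)        /* not normal fs I/O */
                      return -EIO;

              /* barriers, sync hints, etc. remain bits in rq->cmd_flags */
              return 0;
      }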
  27. 22 September 2006: 2 commits
  28. 01 April 2006: 1 commit
  29. 27 March 2006: 1 commit