1. 06 May 2013 (1 commit)
  2. 21 Nov 2012 (2 commits)
  3. 27 Mar 2012 (1 commit)
  4. 10 Jan 2012 (1 commit)
    • mtd: mtd_blkdevs: don't increase 'open' count on error path · 342ff28f
      Authored by Brian Norris
      Some error paths in mtd_blkdevs were fixed in the following commit:
      
          commit 94735ec4
          mtd: mtd_blkdevs: fix error path in blktrans_open
      
      But on these error paths, the block device's `dev->open' count is
      already incremented before we check for errors. This means that, while
      the error path is handled correctly the first time through
      blktrans_open(), the device is erroneously considered already open on
      the second time through (see the sketch of the corrected ordering at
      the end of this entry).
      
      This problem can be seen, for instance, when a UBI volume is
      simultaneously mounted as a UBIFS partition and read through its
      corresponding gluebi mtdblockX device. This results in blktrans_open()
      passing its error checks (with `dev->open > 0') without actually having
      a handle on the device. Here's a summarized log of the actions and
      results with nandsim:
      
          # modprobe nandsim
          # modprobe mtdblock
          # modprobe gluebi
          # modprobe ubifs
          # ubiattach /dev/ubi_ctrl -m 0
          ...
          # ubimkvol /dev/ubi0 -N test -s 16MiB
          ...
          # mount -t ubifs ubi0:test /mnt
          # ls /dev/mtdblock*
          /dev/mtdblock0  /dev/mtdblock1
          # cat /dev/mtdblock1 > /dev/null
          cat: can't open '/dev/mtdblock4': Device or resource busy
          # cat /dev/mtdblock1 > /dev/null
      
          CPU 0 Unable to handle kernel paging request at virtual address
          fffffff0, epc == 8031536c, ra == 8031f280
          Oops[#1]:
          ...
          Call Trace:
          [<8031536c>] ubi_leb_read+0x14/0x164
          [<8031f280>] gluebi_read+0xf0/0x148
          [<802edba8>] mtdblock_readsect+0x64/0x198
          [<802ecfe4>] mtd_blktrans_thread+0x330/0x3f4
          [<8005be98>] kthread+0x88/0x90
          [<8000bc04>] kernel_thread_helper+0x10/0x18
      
      Cc: stable@kernel.org [3.0+]
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      342ff28f
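      A minimal sketch of the ordering the fix enforces (names such as
      example_open() and acquire_underlying_device() are hypothetical, not
      the actual mtd_blkdevs code): the open count is bumped only after every
      step that can fail has succeeded, so a failed open no longer leaves
      `dev->open' incremented.

          #include <linux/mutex.h>

          struct example_dev {                /* stand-in for the blktrans device */
                  struct mutex lock;
                  int open;                   /* count of current openers */
          };

          /* Placeholders for the steps that can fail during open. */
          extern int acquire_underlying_device(struct example_dev *dev);
          extern int driver_specific_open(struct example_dev *dev);
          extern void release_underlying_device(struct example_dev *dev);

          static int example_open(struct example_dev *dev)
          {
                  int ret = 0;

                  mutex_lock(&dev->lock);

                  if (dev->open) {            /* already open: take another reference */
                          dev->open++;
                          goto unlock;
                  }

                  ret = acquire_underlying_device(dev);   /* may fail, e.g. -EBUSY */
                  if (ret)
                          goto unlock;                    /* dev->open stays untouched */

                  ret = driver_specific_open(dev);
                  if (ret) {
                          release_underlying_device(dev);
                          goto unlock;
                  }

                  dev->open++;                /* counted only once everything succeeded */
          unlock:
                  mutex_unlock(&dev->lock);
                  return ret;
          }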
  5. 14 Oct 2011 (1 commit)
  6. 25 May 2011 (1 commit)
  7. 11 Mar 2011 (4 commits)
  8. 30 Oct 2010 (2 commits)
  9. 25 Oct 2010 (3 commits)
  10. 05 Oct 2010 (1 commit)
    • block: autoconvert trivial BKL users to private mutex · 2a48fc0a
      Authored by Arnd Bergmann
      The block device drivers have all gained new lock_kernel
      calls from a recent pushdown, and some of the drivers
      were already using the BKL before.
      
      This turns the BKL into a set of per-driver mutexes; the conversion was
      applied with the script below. Whether this is safe to do still needs
      to be checked (a before/after sketch of the resulting locking pattern
      appears at the end of this entry).
      
      file=$1
      name=$2
      if grep -q lock_kernel ${file} ; then
          if grep -q 'include.*linux.mutex.h' ${file} ; then
                  sed -i '/include.*<linux\/smp_lock.h>/d' ${file}
          else
                  sed -i 's/include.*<linux\/smp_lock.h>.*$/include <linux\/mutex.h>/g' ${file}
          fi
          sed -i ${file} \
              -e "/^#include.*linux.mutex.h/,$ {
                      1,/^\(static\|int\|long\)/ {
                           /^\(static\|int\|long\)/istatic DEFINE_MUTEX(${name}_mutex);
      
      } }"  \
          -e "s/\(un\)*lock_kernel\>[ ]*()/mutex_\1lock(\&${name}_mutex)/g" \
          -e '/[      ]*cycle_kernel_lock();/d'
      else
          sed -i -e '/include.*\<smp_lock.h\>/d' ${file}  \
                      -e '/cycle_kernel_lock()/d'
      fi
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      2a48fc0a
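      A hedged before/after sketch of the pattern the script generates
      (foo_mutex and foo_do_open() are placeholder names, not code from the
      patch): each converted driver gets its own DEFINE_MUTEX, and every
      lock_kernel()/unlock_kernel() pair becomes mutex_lock()/mutex_unlock()
      on that private mutex.

          #include <linux/fs.h>
          #include <linux/mutex.h>

          /* After the conversion the driver owns a private lock instead of the BKL. */
          static DEFINE_MUTEX(foo_mutex);

          extern int foo_do_open(struct block_device *bdev, fmode_t mode); /* placeholder */

          static int foo_open(struct block_device *bdev, fmode_t mode)
          {
                  int ret;

                  /* Before: lock_kernel(); ... unlock_kernel();
                   * After:  the same critical section runs under the driver's
                   *         own mutex, keeping the driver-local serialization.
                   */
                  mutex_lock(&foo_mutex);
                  ret = foo_do_open(bdev, mode);
                  mutex_unlock(&foo_mutex);

                  return ret;
          }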
  11. 09 Aug 2010 (1 commit)
  12. 08 Aug 2010 (3 commits)
  13. 06 Aug 2010 (1 commit)
  14. 02 Aug 2010 (1 commit)
  15. 09 Mar 2010 (1 commit)
  16. 27 Feb 2010 (4 commits)
  17. 25 Feb 2010 (2 commits)
  18. 30 Nov 2009 (1 commit)
  19. 26 Nov 2009 (1 commit)
    • block: add helpers to run flush_dcache_page() against a bio and a request's pages · 2d4dc890
      Authored by Ilya Loginov
      The mtdblock driver doesn't call flush_dcache_page() for the pages in a
      request, which causes problems on architectures where the icache doesn't
      fill from the dcache, or which have dcache aliases.  The patch fixes this.

      The ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE symbol was introduced to avoid
      pointless empty cache-thrashing loops on architectures for which
      flush_dcache_page() is a no-op.  Every architecture provides this symbol;
      the new helpers flush the pages on architectures where
      ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE equals 1 and do nothing otherwise (a
      sketch of such a helper appears at the end of this entry).
      
      See "fix mtd_blkdevs problem with caches on some architectures" discussion
      on LKML for more information.
      Signed-off-by: Ilya Loginov <isloginov@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Peter Horton <phorton@bitbox.co.uk>
      Cc: "Ed L. Cashin" <ecashin@coraid.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      2d4dc890
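      A sketch of what such a request-level helper looks like, modeled on the
      rq_flush_dcache_pages() helper this commit describes (the exact upstream
      implementation may differ slightly; the iteration uses the
      struct bio_vec * style of kernels from this era): it walks every bio_vec
      backing the request and flushes the corresponding page, and compiles
      away entirely where ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE is 0.

          #include <linux/blkdev.h>
          #include <linux/highmem.h>      /* flush_dcache_page() */

          #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
          /* Keep the d-cache coherent for every page the request touches. */
          static void example_rq_flush_dcache_pages(struct request *rq)
          {
                  struct req_iterator iter;
                  struct bio_vec *bvec;

                  rq_for_each_segment(bvec, rq, iter)
                          flush_dcache_page(bvec->bv_page);
          }
          #else
          /* No-op on architectures whose caches need no explicit flushing. */
          static inline void example_rq_flush_dcache_pages(struct request *rq) { }
          #endif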
  20. 20 Oct 2009 (1 commit)
  21. 02 Oct 2009 (2 commits)
    • block: use normal I/O path for discard requests · c15227de
      Authored by Christoph Hellwig
      prepare_discard_fn() was being called in a place where memory allocation
      was effectively impossible.  This makes it inappropriate for all but
      the most trivial translations of Linux's DISCARD operation to the block
      command set.  Additionally, adding a payload there makes ownership of
      the backing bio unclear, as it is then allocated by the device driver
      and not by the submitter as usual.

      It is replaced with QUEUE_FLAG_DISCARD, which indicates whether the
      queue supports discard operations.  blkdev_issue_discard() now
      allocates a one-page, sector-length payload, which is the right thing
      for the common ATA and SCSI implementations.

      The mtd implementation of prepare_discard_fn() is replaced with simply
      checking whether the request is a discard (a sketch of the new flag and
      check appears at the end of this entry).

      Largely based on a previous patch from Matthew Wilcox <matthew@wil.cx>,
      which handled the prepare_discard_fn() side but not yet the different
      payload allocation.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      c15227de
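      A hedged sketch of both sides of the new scheme (the example_* names are
      placeholders and the hunks are simplified, not copied from the patch):
      the driver marks its queue with QUEUE_FLAG_DISCARD at setup time, the
      submitter can test blk_queue_discard() before building a discard, and
      the driver's request handler just checks whether an incoming request is
      a discard instead of supplying a prepare_discard_fn().

          #include <linux/blkdev.h>

          /* Driver side, at queue setup: advertise discard support. */
          static void example_setup_queue(struct request_queue *q)
          {
                  queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
          }

          /* Submitter side: refuse early if the queue cannot discard. */
          static int example_issue_discard(struct request_queue *q)
          {
                  if (!blk_queue_discard(q))
                          return -EOPNOTSUPP;
                  /* ... build and submit the discard bio with its payload ... */
                  return 0;
          }

          /* Driver side, per request: a discard is now just another request. */
          extern int example_discard(struct request_queue *q, sector_t start,
                                     unsigned int nsect);         /* placeholder */

          static int example_do_request(struct request_queue *q, struct request *req)
          {
                  if (req->cmd_flags & REQ_DISCARD)   /* replaces prepare_discard_fn() */
                          return example_discard(q, blk_rq_pos(req),
                                                 blk_rq_sectors(req));

                  /* ... normal read/write handling ... */
                  return 0;
          }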
    • block: use normal I/O path for discard requests · 1122a26f
      Authored by Christoph Hellwig
      prepare_discard_fn() was being called in a place where memory allocation
      was effectively impossible.  This makes it inappropriate for all but
      the most trivial translations of Linux's DISCARD operation to the block
      command set.  Additionally, adding a payload there makes ownership of
      the backing bio unclear, as it is then allocated by the device driver
      and not by the submitter as usual.

      It is replaced with QUEUE_FLAG_DISCARD, which indicates whether the
      queue supports discard operations.  blkdev_issue_discard() now
      allocates a one-page, sector-length payload, which is the right thing
      for the common ATA and SCSI implementations.

      The mtd implementation of prepare_discard_fn() is replaced with simply
      checking whether the request is a discard.

      Largely based on a previous patch from Matthew Wilcox <matthew@wil.cx>,
      which handled the prepare_discard_fn() side but not yet the different
      payload allocation.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      1122a26f
  22. 22 Sep 2009 (1 commit)
  23. 03 Aug 2009 (1 commit)
  24. 26 May 2009 (1 commit)
  25. 23 May 2009 (1 commit)
  26. 11 May 2009 (1 commit)
    • block: implement and enforce request peek/start/fetch · 9934c8c0
      Authored by Tejun Heo
      Until now, the block layer allowed two separate modes of request
      execution.  A request is always acquired from the request queue via
      elv_next_request().  After that, drivers are free either to dequeue it
      or to process it without dequeueing.  Dequeueing allows
      elv_next_request() to return the next request so that multiple requests
      can be in flight.
      
      Executing requests without dequeueing has its merits, mostly in
      allowing drivers for simpler devices which can't do scatter/gather to
      deal with segments only, without considering request boundaries.
      However, the benefit this brings is dubious and declining, while the
      cost of the API ambiguity is increasing.  Segment-based drivers are
      usually for very old or limited devices, and as converting to the
      dequeueing model isn't difficult, it doesn't justify the API overhead
      it puts on the block layer and its more modern users.
      
      Previous patches converted all block low-level drivers to the
      dequeueing model.  This patch completes the API transition by...
      
      * renaming elv_next_request() to blk_peek_request()
      
      * renaming blkdev_dequeue_request() to blk_start_request()
      
      * adding blk_fetch_request() which is combination of peek and start
      
      * disallowing completion of queued (not started) requests
      
      * applying new API to all LLDs
      
      Renamings are for consistency and to break out-of-tree code so that
      it's apparent that out-of-tree drivers need updating (a sketch of a
      converted request loop appears at the end of this entry).
      
      [ Impact: block request issue API cleanup, no functional change ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Mike Miller <mike.miller@hp.com>
      Cc: unsik Kim <donari75@gmail.com>
      Cc: Paul Clements <paul.clements@steeleye.com>
      Cc: Tim Waugh <tim@cyberelk.net>
      Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Laurent Vivier <Laurent@lvivier.info>
      Cc: Jeff Garzik <jgarzik@pobox.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Grant Likely <grant.likely@secretlab.ca>
      Cc: Adrian McMenamin <adrian@mcmen.demon.co.uk>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
      Cc: Borislav Petkov <petkovbb@googlemail.com>
      Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
      Cc: Alex Dubov <oakad@yahoo.com>
      Cc: Pierre Ossman <drzeus@drzeus.cx>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
      Cc: Stefan Weinhuber <wein@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Pete Zaitcev <zaitcev@redhat.com>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      9934c8c0
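      A hedged sketch of a converted low-level driver's request loop under the
      new API (the example_* names are placeholders; locking and error
      handling are trimmed): blk_fetch_request() combines peeking and
      starting, and only a started request may be completed.

          #include <linux/blkdev.h>

          extern int example_transfer(struct request *req);  /* placeholder for real I/O */

          /* Called with the queue lock held, as a request_fn always is. */
          static void example_request_fn(struct request_queue *q)
          {
                  struct request *req;

                  /* blk_fetch_request() == blk_peek_request() + blk_start_request() */
                  while ((req = blk_fetch_request(q)) != NULL) {
                          int err = -EIO;

                          if (blk_fs_request(req))        /* only handle fs requests */
                                  err = example_transfer(req);

                          /* A request must be started before it may be completed. */
                          __blk_end_request_all(req, err);
                  }
          }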