1. 17 Aug 2015, 1 commit
  2. 17 Jul 2015, 1 commit
  3. 18 Jun 2015, 1 commit
    • mmc: queue: prevent soft lockups on PREEMPT=n · a8c27c0b
      Authored by Rabin Vincent
      On systems with CONFIG_PREEMPT=n, under certain circumstances, mmcqd
      can continuously process requests for several seconds without blocking,
      triggering the soft lockup watchdog.  For example, this can happen if
      mmcqd runs on the CPU which services the controller's interrupt, and
      a process on a different CPU continuously writes to the MMC block
      device.
      
       NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mmcqd/0:664]
       CPU: 0 PID: 664 Comm: mmcqd/0 Not tainted 4.1.0-rc7+ #4
       PC is at _raw_spin_unlock_irqrestore+0x24/0x28
       LR is at mmc_start_request+0x104/0x134
       ...
       [<805112a8>] (_raw_spin_unlock_irqrestore) from [<803db664>] (mmc_start_request+0x104/0x134)
       [<803db664>] (mmc_start_request) from [<803dc008>] (mmc_start_req+0x274/0x394)
       [<803dc008>] (mmc_start_req) from [<803eb2c4>] (mmc_blk_issue_rw_rq+0xd0/0xb98)
       [<803eb2c4>] (mmc_blk_issue_rw_rq) from [<803ebe8c>] (mmc_blk_issue_rq+0x100/0x470)
       [<803ebe8c>] (mmc_blk_issue_rq) from [<803ecab8>] (mmc_queue_thread+0xd0/0x170)
       [<803ecab8>] (mmc_queue_thread) from [<8003fd14>] (kthread+0xe0/0xfc)
       [<8003fd14>] (kthread) from [<8000f768>] (ret_from_fork+0x14/0x2c)
      
      Fix it by adding a cond_resched() in the request handling loop so that
      other processes get a chance to run (a minimal sketch of the loop follows
      this entry).
      Signed-off-by: Rabin Vincent <rabin.vincent@axis.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      a8c27c0b
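      A minimal sketch of the fixed request loop, loosely following the shape of
      mmc_queue_thread() in drivers/mmc/card/queue.c of that era (field and
      helper names are abbreviated for illustration, not a verbatim excerpt):

       /* Request-handling kthread: fetch and issue requests until stopped. */
       do {
               struct request *req;

               spin_lock_irq(q->queue_lock);
               set_current_state(TASK_INTERRUPTIBLE);
               req = blk_fetch_request(q);
               mq->mqrq_cur->req = req;
               spin_unlock_irq(q->queue_lock);

               if (req || mq->mqrq_prev->req) {
                       set_current_state(TASK_RUNNING);
                       mq->issue_fn(mq, req);
                       cond_resched();  /* the fix: give other tasks a chance
                                         * to run even with CONFIG_PREEMPT=n */
               } else {
                       if (kthread_should_stop()) {
                               set_current_state(TASK_RUNNING);
                               break;
                       }
                       schedule();
               }
       } while (1);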
  4. 15 Jun 2015, 1 commit
  5. 06 May 2015, 1 commit
    • mmc: card: Don't access RPMB partitions for normal read/write · 4e93b9a6
      Authored by Chuanxiao Dong
      During kernel boot, the block layer reads the first logical sectors of
      each block device node to look for a partition table.

      The RPMB partition is special and cannot be accessed with normal eMMC
      read/write commands, so this scan produces the error messages below
      during boot:
      ...
       mmc0: Got data interrupt 0x00000002 even though no data operation was in progress.
       mmcblk0rpmb: error -110 transferring data, sector 0, nr 32, cmd response 0x900, card status 0xb00
       mmcblk0rpmb: retrying using single block read
       mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
       mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
       mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
       mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
       mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
       mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
       end_request: I/O error, dev mmcblk0rpmb, sector 0
       Buffer I/O error on device mmcblk0rpmb, logical block 0
       end_request: I/O error, dev mmcblk0rpmb, sector 8
       Buffer I/O error on device mmcblk0rpmb, logical block 1
       end_request: I/O error, dev mmcblk0rpmb, sector 16
       Buffer I/O error on device mmcblk0rpmb, logical block 2
       end_request: I/O error, dev mmcblk0rpmb, sector 24
       Buffer I/O error on device mmcblk0rpmb, logical block 3
      ...
      
      This patch discards a request from the eMMC queue if it targets the
      RPMB partition, which avoids triggering the error messages above (a
      sketch of the check follows this entry).
      
      Fixes: 090d25fe ("mmc: core: Expose access to RPMB partition")
      Signed-off-by: Yunpeng Gao <yunpeng.gao@intel.com>
      Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
      Tested-by: Michael Shigorin <mike@altlinux.org>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      4e93b9a6
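      A minimal sketch of the idea, not the exact diff: fail normal block
      requests whose backing partition is the RPMB area before they reach the
      read/write path (md->area_type and MMC_BLK_DATA_AREA_RPMB exist in
      drivers/mmc/card/block.c of that era; the placement here is illustrative):

       /* RPMB is only reachable through the authenticated ioctl interface,
        * so complete plain reads/writes against it with an error instead of
        * issuing them to the card. */
       struct mmc_blk_data *md = mq->data;

       if (req && (md->area_type & MMC_BLK_DATA_AREA_RPMB)) {
               blk_end_request_all(req, -EIO);   /* end the whole request */
               return 0;                         /* nothing issued */
       }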
  6. 05 Dec 2014, 1 commit
  7. 05 Oct 2014, 1 commit
    • block: disable entropy contributions for nonrot devices · b277da0a
      Authored by Mike Snitzer
      Clear QUEUE_FLAG_ADD_RANDOM in all block drivers that set
      QUEUE_FLAG_NONROT (the pattern is sketched after this entry).
      
      Historically, all block devices have automatically made entropy
      contributions.  But as previously stated in commit e2e1a148 ("block: add
      sysfs knob for turning off disk entropy contributions"):
          - On SSD disks, the completion times aren't as random as they
            are for rotational drives. So it's questionable whether they
            should contribute to the random pool in the first place.
          - Calling add_disk_randomness() has a lot of overhead.
      
      There are more reliable sources for randomness than non-rotational block
      devices.  From a security perspective it is better to err on the side of
      caution than to allow entropy contributions from unreliable "random"
      sources.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      b277da0a
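      For a driver such as the MMC block queue this amounts to one extra flag
      clear next to the existing non-rotational flag. A minimal sketch of the
      pattern, assuming the queue-flag helpers of that kernel generation
      (mq->queue is the driver's request queue):

       /* Queue setup: mark the device non-rotational and opt it out of
        * entropy contributions. */
       queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
       queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, mq->queue);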
  8. 24 Sep 2014, 1 commit
  9. 18 Feb 2014, 1 commit
  10. 31 Oct 2013, 1 commit
  11. 27 May 2013, 1 commit
    • mmc: card: Adding support for sanitize in eMMC 4.5 · 775a9362
      Authored by Maya Erez
      Sanitize support is added as a user-space ioctl and removed from the
      block-device request path, since it is meant to be invoked directly by
      a user rather than through the filesystem.

      The feature erases the unmapped memory regions of the eMMC card by
      writing to a specific field in the EXT_CSD register (a sketch follows
      this entry).

      An unmapped region is one that was previously marked for deletion by an
      erase, trim or discard operation.

      To avoid timeouts when sanitizing high-capacity cards, the sanitize
      operation uses a 240-second timeout.
      Signed-off-by: Yaniv Gardi <ygardi@codeaurora.org>
      Signed-off-by: Maya Erez <merez@codeaurora.org>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      775a9362
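      A minimal sketch of the sanitize trigger itself, assuming the EXT_CSD
      field name and mmc_switch() signature of that kernel generation (the
      ioctl plumbing around it is omitted):

       /* Start sanitize: the card then physically erases all unmapped
        * regions. 240 s gives high-capacity cards enough time. */
       #define MMC_SANITIZE_REQ_TIMEOUT 240000  /* ms */

       err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
                        EXT_CSD_SANITIZE_START, 1,
                        MMC_SANITIZE_REQ_TIMEOUT);
       if (err)
               pr_err("%s: sanitize failed (%d)\n",
                      mmc_hostname(card->host), err);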
  12. 23 Mar 2013, 1 commit
    • mmc: block: fix the host's claim-release in special request · ef3a69c7
      Authored by Seungwon Jeon
      For a normal request, mmc_blk_issue_rq is called twice as part of the
      asynchronous transfer (once for the current and once for the previous
      request), and the host can be claimed and released on each call. Special
      requests, however, are currently excluded from asynchronous transfer, so
      after a special request finishes and no new request arrives,
      mmc_release_host is never called from mmc_blk_issue_rq (a sketch of the
      corrected release condition follows this entry). The problem was found
      during mmc_suspend, which then blocks in __mmc_claim_host:
      
      [<c0541124>] (__schedule+0x0/0x78c) from [<c05419e8>] (schedule+0x38/0x78)
      [<c05419b0>] (schedule+0x0/0x78) from [<c03a843c>] (__mmc_claim_host+0xac/0x1b4)
      [<c03a8390>] (__mmc_claim_host+0x0/0x1b4) from [<c03ac98c>] (mmc_suspend+0x28/0x9c)
      [<c03ac964>] (mmc_suspend+0x0/0x9c) from [<c03aad24>] (mmc_suspend_host+0xb4/0x194)
      ...
      Reported-by: Johan Rudholm <jrudholm@gmail.com>
      Signed-off-by: Seungwon Jeon <tgih.jun@samsung.com>
      Tested-by: Johan Rudholm <johan.rudholm@stericsson.com>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      ef3a69c7
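      A sketch of the corrected release condition at the end of
      mmc_blk_issue_rq(), assuming a special-request mask covering discard and
      flush (the mask name and the exact condition are illustrative, not the
      verbatim diff):

       #define MMC_REQ_SPECIAL_MASK  (REQ_DISCARD | REQ_FLUSH)  /* assumed */

       out:
               if (!req || (req->cmd_flags & MMC_REQ_SPECIAL_MASK))
                       /* Release the host when there are no more requests, and
                        * also right after a special (discard/flush) request,
                        * which never re-enters with mqrq_prev->req set. */
                       mmc_release_host(card->host);
               return ret;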
  13. 25 Feb 2013, 1 commit
    • mmc: support packed write command for eMMC4.5 devices · ce39f9d1
      Authored by Seungwon Jeon
      This patch supports the packed write command of eMMC 4.5 devices.
      Several writes can be grouped into a packed command, and the data of all
      the individual commands can be sent in a single transfer on the bus. One
      large transfer is handled more efficiently by the eMMC's internal write
      path than several small ones, so packed commands improve write
      throughput. The following tables show the results of packed writes; a
      sketch of the packed header follows this entry.
      
      Type A:
      test     none |  packed
      iozone   25.8 |  31
      tiotest  27.6 |  31.2
      lmdd     31.2 |  35.4
      
      Type B:
      test     none |  packed
      iozone   44.1 |  51.1
      tiotest  47.9 |  52.5
      lmdd     51.6 |  59.2
      
      Type C:
      test     none |  packed
      iozone   19.5 |  32
      tiotest  19.9 |  34.5
      lmdd     22.8 |  40.7
      Signed-off-by: Seungwon Jeon <tgih.jun@samsung.com>
      Reviewed-by: Maya Erez <merez@codeaurora.org>
      Reviewed-by: Namjae Jeon <linkinjeon@gmail.com>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      ce39f9d1
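      A rough sketch of how a packed WRITE is described to the card: a header
      block carries the version, the direction and the entry count, followed by
      the CMD23 (block count) and CMD25 (start address) arguments of each
      grouped write. The field packing below is simplified and illustrative,
      not the exact driver layout:

       /* Fill a packed-write header block (array of little-endian u32). */
       u32 *hdr = packed_cmd_hdr;           /* assumed 512-byte header buffer */
       int idx = 2;

       hdr[0] = (nr_entries << 16) | (PACKED_CMD_WR << 8) | PACKED_CMD_VER;

       list_for_each_entry(prq, &packed_list, queuelist) {
               hdr[idx++] = blk_rq_sectors(prq);  /* CMD23 argument */
               hdr[idx++] = blk_rq_pos(prq);      /* CMD25 argument
                                                   * (sector-addressed card) */
       }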
  14. 12 Feb 2013, 2 commits
  15. 28 Jan 2013, 2 commits
  16. 07 Dec 2012, 1 commit
  17. 09 May 2012, 2 commits
  18. 21 Apr 2012, 1 commit
  19. 12 Jan 2012, 1 commit
  20. 27 Oct 2011, 3 commits
  21. 21 Jul 2011, 4 commits
  22. 26 Jun 2011, 2 commits
  23. 25 May 2011, 1 commit
  24. 10 Mar 2011, 1 commit
  25. 23 Oct 2010, 3 commits
  26. 10 Sep 2010, 1 commit
    • block: deprecate barrier and replace blk_queue_ordered() with blk_queue_flush() · 4913efe4
      Authored by Tejun Heo
      Barrier is deemed too heavy and will soon be replaced by FLUSH/FUA
      requests.  Deprecate barrier.  All REQ_HARDBARRIERs are failed with
      -EOPNOTSUPP and blk_queue_ordered() is replaced with simpler
      blk_queue_flush().
      
      blk_queue_flush() takes combinations of REQ_FLUSH and REQ_FUA.  If a
      device has a write cache and can flush it, it should set REQ_FLUSH.  If
      the device can handle FUA writes, it should also set REQ_FUA.

      All blk_queue_ordered() users are converted (a conversion sketch follows
      this entry).
      
      * ORDERED_DRAIN is mapped to 0 which is the default value.
      * ORDERED_DRAIN_FLUSH is mapped to REQ_FLUSH.
      * ORDERED_DRAIN_FLUSH_FUA is mapped to REQ_FLUSH | REQ_FUA.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Boaz Harrosh <bharrosh@panasas.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Alasdair G Kergon <agk@redhat.com>
      Cc: Pierre Ossman <drzeus@drzeus.cx>
      Cc: Stefan Weinhuber <wein@de.ibm.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      4913efe4
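      A minimal sketch of the driver-side conversion for a device that has a
      write cache and supports FUA writes (the queue pointer name is
      illustrative; a cache-less device simply passes 0, the default):

       /* Previously advertised via blk_queue_ordered() with an
        * ORDERED_DRAIN_FLUSH_FUA-style mode; now flush/FUA support is
        * declared directly on the request queue. */
       blk_queue_flush(q, REQ_FLUSH | REQ_FUA);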
  27. 12 Aug 2010, 2 commits
  28. 08 Aug 2010, 1 commit