1. 25 Aug 2021, 1 commit
  2. 24 Aug 2021, 1 commit
  3. 01 Jul 2021, 1 commit
  4. 30 Mar 2021, 1 commit
  5. 08 Feb 2021, 1 commit
  6. 01 Feb 2021, 2 commits
  7. 15 Jan 2021, 1 commit
  8. 09 Oct 2020, 1 commit
    • mmc: core: don't set limits.discard_granularity as 0 · 42432191
      Committed by Coly Li
      In mmc_queue_setup_discard() the mmc driver queue's discard_granularity
      may be set to 0 (when card->pref_erase > max_discard) even though the mmc
      device still declares support for the discard operation. This is buggy
      and triggers the following kernel warning:
      
      WARNING: CPU: 0 PID: 135 at __blkdev_issue_discard+0x200/0x294
      CPU: 0 PID: 135 Comm: f2fs_discard-17 Not tainted 5.9.0-rc6 #1
      Hardware name: Google Kevin (DT)
      pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
      pc : __blkdev_issue_discard+0x200/0x294
      lr : __blkdev_issue_discard+0x54/0x294
      sp : ffff800011dd3b10
      x29: ffff800011dd3b10 x28: 0000000000000000 x27: ffff800011dd3cc4
      x26: ffff800011dd3e18 x25: 000000000004e69b x24: 0000000000000c40
      x23: ffff0000f1deaaf0 x22: ffff0000f2849200 x21: 00000000002734d8
      x20: 0000000000000008 x19: 0000000000000000 x18: 0000000000000000
      x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
      x14: 0000000000000394 x13: 0000000000000000 x12: 0000000000000000
      x11: 0000000000000000 x10: 00000000000008b0 x9 : ffff800011dd3cb0
      x8 : 000000000004e69b x7 : 0000000000000000 x6 : ffff0000f1926400
      x5 : ffff0000f1940800 x4 : 0000000000000000 x3 : 0000000000000c40
      x2 : 0000000000000008 x1 : 00000000002734d8 x0 : 0000000000000000
      Call trace:
      __blkdev_issue_discard+0x200/0x294
      __submit_discard_cmd+0x128/0x374
      __issue_discard_cmd_orderly+0x188/0x244
      __issue_discard_cmd+0x2e8/0x33c
      issue_discard_thread+0xe8/0x2f0
      kthread+0x11c/0x120
      ret_from_fork+0x10/0x1c
      ---[ end trace e4c8023d33dfe77a ]---
      
      This patch fixes the issue by setting discard_granularity to SECTOR_SIZE
      instead of 0 when (card->pref_erase > max_discard) is true. With this
      change, __blkdev_issue_discard() no longer complains about an improper
      discard granularity value.
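
      A minimal sketch of the relevant branch in mmc_queue_setup_discard()
      (drivers/mmc/core/queue.c), assuming the surrounding discard setup is
      otherwise unchanged (this is a sketch, not a verbatim diff):

          static void mmc_queue_setup_discard(struct request_queue *q,
                                              struct mmc_card *card)
          {
                  unsigned int max_discard;

                  max_discard = mmc_calc_max_discard(card);
                  if (!max_discard)
                          return;

                  blk_queue_max_discard_sectors(q, max_discard);
                  q->limits.discard_granularity = card->pref_erase << 9;

                  /*
                   * The granularity must not be greater than max_discard, and
                   * must never be left at 0, or __blkdev_issue_discard() warns.
                   */
                  if (card->pref_erase > max_discard)
                          q->limits.discard_granularity = SECTOR_SIZE;

                  /* ... remaining discard/erase setup elided ... */
          }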
      
      This issue was exposed by commit b35fd742 ("block: check queue's
      limits.discard_granularity in __blkdev_issue_discard()"); a "Fixes:" tag
      is also added for that commit to make sure people won't miss this patch
      after applying the change to __blkdev_issue_discard().
      
      Fixes: e056a1b5 ("mmc: queue: let host controllers specify maximum discard timeout")
      Fixes: b35fd742 ("block: check queue's limits.discard_granularity in __blkdev_issue_discard()")
      Reported-and-tested-by: Vicente Bergas <vicencb@gmail.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Ulf Hansson <ulf.hansson@linaro.org>
      Link: https://lore.kernel.org/r/20201002013852.51968-1-colyli@suse.de
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
  9. 25 Sep 2020, 1 commit
  10. 13 Jul 2020, 1 commit
  11. 08 May 2020, 2 commits
  12. 24 Mar 2020, 1 commit
    • mmc: Add MMC host software queue support · 511ce378
      Committed by Baolin Wang
      Currently the MMC read/write stack always waits for the previous request
      to complete in mmc_blk_rw_wait() before sending a new request to the
      hardware, or queues a work item to complete the request. This brings
      context-switching overhead and spends extra time polling the card for
      busy completion (by sending CMD13) for I/O writes, which hurts I/O
      performance, especially at high I/O-per-second rates.
      
      Thus this patch introduces an MMC software queue interface built on the
      hardware command queue engine's interfaces and following the same idea,
      which removes the context switching. The default queue depth for the
      software queue is set to 64, which allows more requests to be prepared,
      merged and inserted into the I/O scheduler to improve performance, but
      only 2 requests are allowed in flight; that is enough to let the irq
      handler always trigger the next request without a context switch, while
      also avoiding long latency.
      
      Moreover, the host controller should support HW busy detection for I/O
      operations when enabling the host software queue. That means the host
      controller must not complete a data transfer request until the card
      stops signaling busy.
      
      From the fio testing data in the cover letter, we can see the software
      queue improves performance with a 4K block size: about 16% for random
      read and about 90% for random write, though there is no obvious
      improvement for sequential read and write.
      
      Moreover, the software queue interface can be extended to support MMC
      packed requests or packed commands in the future.
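
      To illustrate the idea (an illustrative sketch only, not the actual
      mmc_hsq driver code; the slot count, in-flight limit and names below
      are assumptions for the example): prepared requests are parked in a
      fixed-size slot ring, and at most two are issued to the hardware at a
      time, so the completion handler can immediately kick off the next
      prepared request without a context switch.

          #define SWQ_NUM_SLOTS  64   /* assumed software queue depth */
          #define SWQ_IN_FLIGHT   2   /* assumed in-flight limit */

          struct sw_queue {
                  void *slot[SWQ_NUM_SLOTS];   /* prepared requests */
                  unsigned int head, tail;     /* ring indices */
                  unsigned int in_flight;      /* requests issued to hardware */
          };

          /* Issue prepared requests until the in-flight limit is reached. */
          static void swq_pump(struct sw_queue *q, void (*issue)(void *req))
          {
                  while (q->in_flight < SWQ_IN_FLIGHT && q->head != q->tail) {
                          issue(q->slot[q->head++ % SWQ_NUM_SLOTS]);
                          q->in_flight++;
                  }
          }

          /* Completion path: retire one request, then refill the pipeline. */
          static void swq_complete(struct sw_queue *q, void (*issue)(void *req))
          {
                  q->in_flight--;
                  swq_pump(q, issue);
          }
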
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
      Signed-off-by: Baolin Wang <baolin.wang7@gmail.com>
      Link: https://lore.kernel.org/r/4409c1586a9b3ed20d57ad2faf6c262fc3ccb6e2.1581478568.git.baolin.wang7@gmail.com
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
  13. 12 Sep 2019, 1 commit
  14. 03 Sep 2019, 1 commit
  15. 22 Jul 2019, 1 commit
    • mmc: mmc_spi: Enable stable writes · 3a6ffb3c
      Committed by Andreas Koop
      While using the mmc_spi driver, errors like this occasionally popped up:
      
      mmcblk0: error -84 transferring data end_request: I/O error, dev mmcblk0, sector 581756
      
      I looked on the Internet for occurrences of the same problem and came
      across a helpful post [1]. It includes source code to reproduce the bug
      and an analysis of the cause: the data in the supplied buffer is
      modified during transmission, so the previously calculated checksum is
      no longer correct.
      
      After some digging I found out that device drivers are supposed to
      report that they need stable writes. To fix this I set the appropriate
      flag at queue initialization if CRC checksumming is enabled for that
      SPI host.
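
      A minimal sketch of that change, assuming it lives in the block queue
      setup in drivers/mmc/core/queue.c (a sketch, not a verbatim diff):

          /*
           * When the SPI host uses CRC checksumming, tell the block layer
           * that pages must not be modified while a write is in flight.
           */
          if (mmc_host_is_spi(host) && host->use_spi_crc)
                  mq->queue->backing_dev_info->capabilities |=
                          BDI_CAP_STABLE_WRITES;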
      
      [1]
      https://groups.google.com/forum/#!msg/sim1/gLlzWeXGFr8/KevXinUXfc8J
      Signed-off-by: Andreas Koop <andreas.koop@zf.com>
      [shihpo: Rebase on top of v5.3-rc1]
      Signed-off-by: ShihPo Hung <shihpo.hung@sifive.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
  16. 10 Jul 2019, 1 commit
  17. 19 Jun 2019, 1 commit
  18. 06 Jun 2019, 1 commit
  19. 06 May 2019, 1 commit
  20. 28 Feb 2019, 1 commit
  21. 15 Feb 2019, 1 commit
  22. 17 Nov 2018, 1 commit
  23. 16 Nov 2018, 2 commits
  24. 21 Aug 2018, 1 commit
  25. 29 May 2018, 1 commit
  26. 09 Mar 2018, 1 commit
  27. 11 Dec 2017, 7 commits
  28. 30 Oct 2017, 2 commits
  29. 04 Oct 2017, 1 commit
    • mmc: Delete bounce buffer handling · de3ee99b
      Committed by Linus Walleij
      In May, Steven sent a patch deleting the bounce buffer handling
      and the CONFIG_MMC_BLOCK_BOUNCE option.
      
      I chose the less invasive path of making it a runtime config
      option, and we merged that successfully for kernel v4.12.
      
      The code is however just standing in the way and taking up
      space for seemingly no gain on any systems in wide use today.
      
      Pierre says the code was there to improve speed on TI SDHCI
      controllers on certain HP laptops and possibly some Ricoh
      controllers as well. Early SDHCI controllers lacked the
      scatter-gather feature, which made software bounce buffers
      a significant speed boost.
      
      We are clearly talking about the list of SDHCI PCI-based
      MMC/SD card readers found in the pci_ids[] list in
      drivers/mmc/host/sdhci-pci-core.c.
      
      The TI SDHCI derivative is not supported by the upstream
      kernel. This leaves the Ricoh.
      
      What we can however notice is that the x86 defconfigs in the
      kernel did not enable the CONFIG_MMC_BLOCK_BOUNCE option, which
      means that any such laptop would need a custom-configured kernel
      to actually take advantage of this bounce buffer speed-up. It
      simply seems like there was a speed optimization for the Ricoh
      controllers that no one was using. (I have not checked the distro
      defconfigs, but I am pretty sure the situation is the same there.)
      
      Bounce buffers increased performance on the OMAP HSMMC at one
      point, and were part of the original submission in commit
      a45c6cb8 ("[ARM] 5369/1: omap mmc: Add new omap hsmmc controller
      for 2430 and 34xx, v3").
      
      This optimization was removed in commit 0ccd76d4 ("omap_hsmmc:
      Implement scatter-gather emulation"), which found that
      scatter-gather emulation provided even better performance.
      
      The same was introduced for SDHCI in
      commit 2134a922 ("sdhci: scatter-gather (ADMA) support")
      
      I am pretty positively convinced that software
      scatter-gather emulation will do for any host controller what
      the bounce buffers were doing. Essentially, the bounce buffer
      was a reimplementation of software scatter-gather-emulation in
      the MMC subsystem, and it should be done away with.
      
      Cc: Pierre Ossman <pierre@ossman.eu>
      Cc: Juha Yrjola <juha.yrjola@solidboot.com>
      Cc: Steven J. Hill <Steven.Hill@cavium.com>
      Cc: Shawn Lin <shawn.lin@rock-chips.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Suggested-by: Steven J. Hill <Steven.Hill@cavium.com>
      Suggested-by: Shawn Lin <shawn.lin@rock-chips.com>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
  30. 08 Sep 2017, 1 commit