1. 30 October 2017, 1 commit
    • mmc: block: Delete mmc_access_rpmb() · 14f4ca7e
      Committed by Linus Walleij
      This function is used by the block layer queue to bail out of
      requests if the current request is towards an RPMB
      "block device".
      
      This was done to avoid boot time scanning of this "block
      device" which was never really a block device, thus duct-taping
      over the fact that it was badly engineered.
      
      This problem is now gone as we removed the offending RPMB block
      device in another patch and replaced it with a character
      device.
      
      Cc: Tomas Winkler <tomas.winkler@intel.com>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      14f4ca7e
  2. 04 October 2017, 1 commit
    • mmc: Delete bounce buffer handling · de3ee99b
      Committed by Linus Walleij
      In May, Steven sent a patch deleting the bounce buffer handling
      and the CONFIG_MMC_BLOCK_BOUNCE option.
      
      I chose the less invasive path of making it a runtime config
      option, and we merged that successfully for kernel v4.12.
      
      The code is however just standing in the way and taking up
      space for seemingly no gain on any systems in wide use today.
      
      Pierre says the code was there to improve speed on TI SDHCI
      controllers on certain HP laptops and possibly some Ricoh
      controllers as well. Early SDHCI controllers lacked the
      scatter-gather feature, which made software bounce buffers
      a significant speed boost.
      
      We are clearly talking about the list of SDHCI PCI-based
      MMC/SD card readers found in the pci_ids[] list in
      drivers/mmc/host/sdhci-pci-core.c.
      
      The TI SDHCI derivative is not supported by the upstream
      kernel. This leaves the Ricoh.
      
      What we can however notice is that the x86 defconfigs in the
      kernel did not enable the CONFIG_MMC_BLOCK_BOUNCE option, which
      means that any such laptop would have needed a custom-configured
      kernel to actually take advantage of this bounce buffer
      speed-up. It simply seems like there was a speed optimization
      for the Ricoh controllers that no one was using. (I have not
      checked the distro defconfigs, but I am pretty sure the
      situation is the same there.)
      
      Bounce buffers increased performance on the OMAP HSMMC at one
      point, and were part of the original submission in commit
      a45c6cb8 ("[ARM] 5369/1: omap mmc: Add new omap hsmmc
      controller for 2430 and 34xx, v3").
      
      This optimization was removed in commit 0ccd76d4
      ("omap_hsmmc: Implement scatter-gather emulation"), which
      found that scatter-gather emulation provided even better
      performance.
      
      The same was introduced for SDHCI in
      commit 2134a922 ("sdhci: scatter-gather (ADMA) support")
      
      I am pretty positively convinced that software
      scatter-gather emulation will do for any host controller what
      the bounce buffers were doing. Essentially, the bounce buffer
      was a reimplementation of software scatter-gather-emulation in
      the MMC subsystem, and it should be done away with.
      
      Cc: Pierre Ossman <pierre@ossman.eu>
      Cc: Juha Yrjola <juha.yrjola@solidboot.com>
      Cc: Steven J. Hill <Steven.Hill@cavium.com>
      Cc: Shawn Lin <shawn.lin@rock-chips.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Suggested-by: Steven J. Hill <Steven.Hill@cavium.com>
      Suggested-by: Shawn Lin <shawn.lin@rock-chips.com>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      de3ee99b
  3. 08 September 2017, 1 commit
  4. 28 June 2017, 1 commit
  5. 20 June 2017, 3 commits
    • mmc: block: remove req back pointer · 67e69d52
      Committed by Linus Walleij
      Just as we can use blk_mq_rq_to_pdu() to get the per-request
      state container from a request, we can use blk_mq_rq_from_pdu()
      to get the request back from that container. Introduce a static
      inline helper so it is clear what is happening.
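
      A minimal sketch of such a helper (the function name used here
      is an assumption for illustration, not necessarily what the
      patch picks):

      /* Sketch: recover the owning struct request from the
       * per-request mmc_queue_req instead of keeping a back pointer. */
      static inline struct request *mmc_queue_req_to_req(struct mmc_queue_req *mqr)
      {
              return blk_mq_rq_from_pdu(mqr);
      }
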
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      67e69d52
    • mmc: core: Allocate per-request data using the block layer core · 304419d8
      Committed by Linus Walleij
      The mmc_queue_req is a per-request state container the MMC core
      uses to carry bounce buffers, pointers to asynchronous requests
      and so on. Currently it is allocated as a static array of
      objects; as a request comes in, an mmc_queue_req is assigned to
      it and used during the lifetime of the request.
      
      This is backwards compared to how other block layer drivers
      work: they usually let the block core provide a per-request
      struct that gets allocated right behind the struct request,
      and which can be obtained
      using the blk_mq_rq_to_pdu() helper. (The _mq_ infix in this function
      name is misleading: it is used by both the old and the MQ block
      layer.)
      
      The per-request struct gets allocated with the size stored in
      the queue field .cmd_size, initialized using the .init_rq_fn()
      callback and cleaned up using .exit_rq_fn().
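
      A rough sketch of how a driver hooks into this mechanism (the
      callback names and the setup ordering here are assumptions for
      illustration):

      /* Sketch only: let the block core allocate the per-request
       * state right behind each struct request. */
      static int mmc_init_request(struct request_queue *q, struct request *req,
                                  gfp_t gfp)
      {
              struct mmc_queue_req *mq_rq = blk_mq_rq_to_pdu(req);

              /* allocate per-request resources (sg lists etc.) for mq_rq */
              return 0;
      }

      static void mmc_exit_request(struct request_queue *q, struct request *req)
      {
              /* free whatever mmc_init_request() allocated */
      }

      /* during queue setup, before blk_init_allocated_queue(): */
      mq->queue->cmd_size = sizeof(struct mmc_queue_req);
      mq->queue->init_rq_fn = mmc_init_request;
      mq->queue->exit_rq_fn = mmc_exit_request;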
      
      Make the MMC core rely on this block layer mechanism to
      allocate the per-request mmc_queue_req state container.

      Doing this makes a lot of complicated queue handling go away.
      We only need to keep the .qcnt counter that keeps track of how
      many requests are currently being processed by the MMC layer.
      The MQ block layer will also replace this once we transition
      to it.
      
      Doing this refactoring is necessary to move the ioctl() operations
      into custom block layer requests tagged with REQ_OP_DRV_[IN|OUT]
      instead of the custom code using the BigMMCHostLock that we have
      today: those require that per-request data be obtainable easily from
      a request after creating a custom request with e.g.:
      
      struct request *rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
      struct mmc_queue_req *mq_rq = req_to_mq_rq(rq);
      
      And this is not possible with the current construction, as the request
      is not immediately assigned the per-request state container, but
      instead it gets assigned when the request finally enters the MMC
      queue, which is way too late for custom requests.
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      [Ulf: Folded in the fix to drop a call to blk_cleanup_queue()]
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      Tested-by: Heiner Kallweit <hkallweit1@gmail.com>
      304419d8
    • mmc: core: Delete bounce buffer Kconfig option · c3dccb74
      Committed by Linus Walleij
      This option is activated by all multiplatform configs and
      whatnot, so we almost always have it turned on, and the memory
      it saves is negligible, even more so moving forward. The actual
      bounce buffer only gets allocated when used; the only thing the
      ifdefs save is a little bit of code.
      
      It is highly improper to have this as a Kconfig option that
      gets turned on by the defconfigs; make it a pure runtime
      decision and let the host decide whether we use bounce buffers.
      We add a new property "disable_bounce" to the host struct.
      
      Notice that mmc_queue_calc_bouncesz() already disables the
      bounce buffers if host->max_segs != 1, so any arch that has a
      maximum number of segments higher than 1 will have bounce
      buffers disabled.
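
      A minimal sketch of what that runtime decision can look like,
      assuming the new disable_bounce flag lives in struct mmc_host:

      #define MMC_QUEUE_BOUNCESZ 65536

      static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
      {
              unsigned int bouncesz = MMC_QUEUE_BOUNCESZ;

              /* Bounce buffers only ever help hosts limited to one
               * segment, and the host may now opt out at runtime. */
              if (host->max_segs != 1 || host->disable_bounce)
                      return 0;

              if (bouncesz > host->max_req_size)
                      bouncesz = host->max_req_size;
              if (bouncesz > host->max_seg_size)
                      bouncesz = host->max_seg_size;
              if (bouncesz > host->max_blk_count * 512)
                      bouncesz = host->max_blk_count * 512;

              if (bouncesz <= 512)
                      return 0;

              return bouncesz;
      }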
      
      The option CONFIG_MMC_BLOCK_BOUNCE is default y, so the
      majority of platforms in the kernel already have it on, and it
      then gets turned off at runtime since most of these have
      host->max_segs > 1. The few exceptions that have
      host->max_segs == 1 and still end up without bounce buffering
      are those that disable CONFIG_MMC_BLOCK_BOUNCE in their
      defconfig.
      
      Those are the following:
      
      arch/arm/configs/colibri_pxa300_defconfig
      arch/arm/configs/zeus_defconfig
      - Uses MMC_PXA, drivers/mmc/host/pxamci.c
      - Sets host->max_segs = NR_SG, which is 1
      - This needs its bounce buffer deactivated so we set
        host->disable_bounce to true in the host driver
      
      arch/arm/configs/davinci_all_defconfig
      - Uses MMC_DAVINCI, drivers/mmc/host/davinci_mmc.c
      - This driver sets host->max_segs to MAX_NR_SG, which is 16
      - That means this driver disables bounce buffers anyway
      - No special action needed for this platform
      
      arch/arm/configs/lpc32xx_defconfig
      arch/arm/configs/nhk8815_defconfig
      arch/arm/configs/u300_defconfig
      - Uses MMC_ARMMMCI, drivers/mmc/host/mmci.[c|h]
      - This driver by default sets host->max_segs to NR_SG,
        which is 128, unless a DMA engine is used, and in that case
        the number of segments is also > 1
      - That means this driver already disables bounce buffers
      - No special action needed for these platforms
      
      arch/arm/configs/sama5_defconfig
      - Uses MMC_SDHCI, MMC_SDHCI_PLTFM, MMC_SDHCI_OF_AT91, MMC_ATMELMCI
      - Uses drivers/mmc/host/sdhci.c
      - Normally sets host->max_segs to SDHCI_MAX_SEGS which is 128 and
        thus disables bounce buffers
      - Sets host->max_segs to 1 if SDHCI_USE_SDMA is set
      - SDHCI_USE_SDMA is only set by SDHCI on PCI adapters
      - That means that for this platform bounce buffers are already
        disabled at runtime
      - No special action needed for this platform
      
      arch/blackfin/configs/CM-BF533_defconfig
      arch/blackfin/configs/CM-BF537E_defconfig
      - Uses MMC_SPI (a simple MMC card connected on SPI pins)
      - Uses drivers/mmc/host/mmc_spi.c
      - Sets host->max_segs to MMC_SPI_BLOCKSATONCE which is 128
      - That means this platform already disables bounce buffers at
        runtime
      - No special action needed for these platforms
      
      arch/mips/configs/cavium_octeon_defconfig
      - Uses MMC_CAVIUM_OCTEON, drivers/mmc/host/cavium.c
      - Sets host->max_segs to 16 or 1
      - We set host->disable_bounce to be sure for the 1-segment case
      
      arch/mips/configs/qi_lb60_defconfig
      - Uses MMC_JZ4740, drivers/mmc/host/jz4740_mmc.c
      - This sets host->max_segs to 128 so bounce buffers are
        already runtime disabled
      - No action needed for this platform
      
      It would be interesting to come up with a list of the platforms
      that actually end up using bounce buffers. I have not been
      able to infer such a list, but it occurs when
      host->max_segs == 1 and the bounce buffering is not explicitly
      disabled.
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      c3dccb74
  6. 09 June 2017, 1 commit
    • block: introduce new block status code type · 2a842aca
      Committed by Christoph Hellwig
      Currently we use normal Linux errno values in the block layer,
      and while we accept any error a few have overloaded magic
      meanings. This patch instead introduces a new blk_status_t
      value that holds block layer specific status codes and
      explicitly explains their meaning. Helpers to convert from and
      to the previous special meanings are provided for now, but I
      suspect we want to get rid of them in the long run - those
      drivers that have an errno input (e.g. networking) usually get
      errnos that don't know about the special block layer overloads,
      and similarly returning them to userspace will usually return
      something that strictly speaking isn't correct for file system
      operations, but that's left as an exercise for later.
      
      For now the set of errors is a very limited set that closely
      corresponds to the previous overloaded errno values, but there
      is some low-hanging fruit to improve it.
      
      blk_status_t (ab)uses the sparse __bitwise annotations to allow for sparse
      typechecking, so that we can easily catch places passing the wrong values.
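
      For reference, a sketch of what such a definition looks like
      (the list of codes is abbreviated here and the exact values are
      assumptions):

      typedef u8 __bitwise blk_status_t;

      #define BLK_STS_OK       0
      #define BLK_STS_NOTSUPP  ((__force blk_status_t)1)
      #define BLK_STS_TIMEOUT  ((__force blk_status_t)2)
      #define BLK_STS_MEDIUM   ((__force blk_status_t)7)
      #define BLK_STS_IOERR    ((__force blk_status_t)10)

      /* conversion helpers kept around for the transition */
      int blk_status_to_errno(blk_status_t status);
      blk_status_t errno_to_blk_status(int errno);
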
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      2a842aca
  7. 25 April 2017, 2 commits
  8. 09 April 2017, 1 commit
  9. 13 February 2017, 4 commits
  10. 01 February 2017, 1 commit
  11. 12 December 2016, 1 commit
    • mmc: block: Move files to core · f397c8d8
      Committed by Ulf Hansson
      Once upon a time it made sense to keep the mmc block device
      driver and its related code in its own directory called card.
      Over time, more and more functions/structures have become
      shared through generic mmc header files between the core and
      the card directory. In other words, the relationship between
      them has become closer.
      
      By sharing functions/structures via generic header files, it becomes easy
      for outside users to abuse them. To help avoid that from
      happening, let's move the files from the card directory into
      the core directory, as it enables
      us to move definitions of functions/structures into mmc core specific
      header files.
      
      Note, this is only the first step in providing a cleaner mmc interface for
      outside users. Following changes will do the actual cleanup, as that is not
      part of this change.
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      f397c8d8
  12. 05 December 2016, 7 commits
  13. 29 November 2016, 2 commits
    • mmc: block: delete packed command support · 03d640ae
      Committed by Linus Walleij
      I've had it with this code now.
      
      The packed command support is a complex hurdle in the MMC/SD block
      layer, around 500+ lines of code which was introduced in 2013 in
      
      commit ce39f9d1 ("mmc: support packed write command for eMMC4.5
      devices")
      commit abd9ac14 ("mmc: add packed command feature of eMMC4.5")
      
      ...and since then it has been rotting. The original author of the
      code has disappeared from the community and the mail address is
      bouncing.
      
      For the code to be exercised, the host must flag that it
      supports packed commands, so in mmc_blk_prep_packed_list(),
      which is called for every single request, the following
      construction appears:
      
      u8 max_packed_rw = 0;
      
      if ((rq_data_dir(cur) == WRITE) &&
          mmc_host_packed_wr(card->host))
              max_packed_rw = card->ext_csd.max_packed_writes;
      
      if (max_packed_rw == 0)
          goto no_packed;
      
      This has the following logical deductions:
      
      - Only WRITE commands can really be packed, so the solution is
        only half-done: we support packed WRITE but not packed READ.
        The packed command support has not been finalized by supporting
        reads in three years!
      
      - mmc_host_packed_wr() is just a static inline that checks
        host->caps2 & MMC_CAP2_PACKED_WR (see the sketch after this
        list). The problem with this is that NO upstream host sets
        this capability flag! No driver in the kernel is using it,
        and we can't test it. Packed command may be supported in
        out-of-tree code, but I doubt it. I doubt that the code is
        even working anymore due to other refactorings in the MMC
        block layer; who would notice if patches affecting it broke
        packed commands? No one.
      
      - There is no Device Tree binding or code to mark a host as
        supporting packed read or write commands, just this flag
        in caps2, so for sure there are not any DT systems using
        it either.
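
      For reference, the check boils down to something like this
      (a reconstructed sketch, not quoted from the tree):

      static inline int mmc_host_packed_wr(struct mmc_host *host)
      {
              /* true only if the host driver claims packed-write support */
              return host->caps2 & MMC_CAP2_PACKED_WR;
      }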
      
      It has other problems as well: mmc_blk_prep_packed_list() is
      speculatively picking requests out of the request queue with
      blk_fetch_request(), making the MMC/SD stack harder to convert
      to the multiqueue block layer. By deleting it we get rid of an
      obstacle.
      
      The way I see it this is just cruft littering the MMC/SD
      stack.
      
      Cc: Namjae Jeon <namjae.jeon@samsung.com>
      Cc: Maya Erez <qca_merez@qca.qualcomm.com>
      Acked-by: Jaehoon Chung <jh80.chung@samsung.com>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      03d640ae
    • mmc: block: move packed command struct init · e01071dd
      Committed by Linus Walleij
      By moving mmc_packed_init() and mmc_packed_clean() into the
      only file in the kernel where they are used, we save two
      exported functions and can make them static in block.c.
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      e01071dd
  14. 28 October 2016, 1 commit
  15. 27 September 2016, 1 commit
    • mmc: card: do away with indirection pointer · 29eb7bd0
      Committed by Linus Walleij
      We have enough vtables in the kernel as it is; we don't need
      this one creating even more artificial separation of concerns.
      
      As is proved by the Makefile:
      
      obj-$(CONFIG_MMC_BLOCK)         += mmc_block.o
      mmc_block-objs                  := block.o queue.o
      
      block.c and queue.c are baked into the same mmc_block.o object.
      So why would one of these objects access a function in the
      other object by dereferencing a pointer?
      
      Create a new block.h header file for the single shared function
      from block to queue and remove the function pointer and just
      call the queue request function.
      
      Apart from making the code more readable, this also makes link
      optimizations possible and probably speeds up the call as well.
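
      A sketch of roughly what such a header carries (the exact
      prototype is an assumption):

      /* block.h - sketch: the one function queue.c needs from block.c,
       * replacing the issue_fn function pointer. */
      #ifndef _MMC_BLOCK_H
      #define _MMC_BLOCK_H

      struct mmc_queue;
      struct request;

      int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);

      #endif
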
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      29eb7bd0
  16. 26 August 2016, 1 commit
  17. 16 August 2016, 1 commit
  18. 09 June 2016, 1 commit
  19. 08 June 2016, 1 commit
  20. 17 August 2015, 1 commit
  21. 17 July 2015, 1 commit
  22. 18 June 2015, 1 commit
    • mmc: queue: prevent soft lockups on PREEMPT=n · a8c27c0b
      Committed by Rabin Vincent
      On systems with CONFIG_PREEMPT=n, under certain circumstances, mmcqd
      can continuously process requests for several seconds without blocking,
      triggering the soft lockup watchdog.  For example, this can happen if
      mmcqd runs on the CPU which services the controller's interrupt, and
      a process on a different CPU continuously writes to the MMC block
      device.
      
       NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mmcqd/0:664]
       CPU: 0 PID: 664 Comm: mmcqd/0 Not tainted 4.1.0-rc7+ #4
       PC is at _raw_spin_unlock_irqrestore+0x24/0x28
       LR is at mmc_start_request+0x104/0x134
       ...
       [<805112a8>] (_raw_spin_unlock_irqrestore) from [<803db664>] (mmc_start_request+0x104/0x134)
       [<803db664>] (mmc_start_request) from [<803dc008>] (mmc_start_req+0x274/0x394)
       [<803dc008>] (mmc_start_req) from [<803eb2c4>] (mmc_blk_issue_rw_rq+0xd0/0xb98)
       [<803eb2c4>] (mmc_blk_issue_rw_rq) from [<803ebe8c>] (mmc_blk_issue_rq+0x100/0x470)
       [<803ebe8c>] (mmc_blk_issue_rq) from [<803ecab8>] (mmc_queue_thread+0xd0/0x170)
       [<803ecab8>] (mmc_queue_thread) from [<8003fd14>] (kthread+0xe0/0xfc)
       [<8003fd14>] (kthread) from [<8000f768>] (ret_from_fork+0x14/0x2c)
      
      Fix it by adding a cond_resched() in the request handling loop so that
      other processes get a chance to run.
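
      A condensed sketch of where such a call sits in the mmcqd loop
      (locking and wakeup details are trimmed and partly assumed):

      static int mmc_queue_thread(void *d)
      {
              struct mmc_queue *mq = d;
              struct request_queue *q = mq->queue;

              do {
                      struct request *req;

                      /* Let other runnable tasks in on PREEMPT=n
                       * kernels, even when requests keep arriving. */
                      cond_resched();

                      spin_lock_irq(q->queue_lock);
                      req = blk_fetch_request(q);
                      spin_unlock_irq(q->queue_lock);

                      if (req)
                              mq->issue_fn(mq, req);
                      else if (kthread_should_stop())
                              break;
              } while (1);

              return 0;
      }
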
      Signed-off-by: Rabin Vincent <rabin.vincent@axis.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      a8c27c0b
  23. 15 June 2015, 1 commit
  24. 06 May 2015, 1 commit
    • mmc: card: Don't access RPMB partitions for normal read/write · 4e93b9a6
      Committed by Chuanxiao Dong
      During kernel boot, the kernel tries to read some logical
      sectors of each block device node, looking for a possible
      partition table.

      But since the RPMB partition is special and cannot be accessed
      by normal eMMC read/write commands, this causes the error
      messages below during boot:
      ...
       mmc0: Got data interrupt 0x00000002 even though no data operation was in progress.
       mmcblk0rpmb: error -110 transferring data, sector 0, nr 32, cmd response 0x900, card status 0xb00
       mmcblk0rpmb: retrying using single block read
       mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
       mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
       mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
       mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
       mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
       mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
       end_request: I/O error, dev mmcblk0rpmb, sector 0
       Buffer I/O error on device mmcblk0rpmb, logical block 0
       end_request: I/O error, dev mmcblk0rpmb, sector 8
       Buffer I/O error on device mmcblk0rpmb, logical block 1
       end_request: I/O error, dev mmcblk0rpmb, sector 16
       Buffer I/O error on device mmcblk0rpmb, logical block 2
       end_request: I/O error, dev mmcblk0rpmb, sector 24
       Buffer I/O error on device mmcblk0rpmb, logical block 3
      ...
      
      This patch discards the access request in the eMMC queue if it
      is an RPMB partition access request. This way, it avoids
      triggering the error messages above.
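
      A minimal sketch of the kind of check this adds (it matches the
      mmc_access_rpmb() helper that the first entry in this log later
      removes; the surrounding queue-loop details are assumed):

      static inline bool mmc_access_rpmb(struct mmc_queue *mq)
      {
              struct mmc_blk_data *md = mq->data;

              /* Is this queue backed by the RPMB partition? */
              return md && md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB;
      }

      /* in the request handling loop: */
      if (req && mmc_access_rpmb(mq)) {
              /* RPMB cannot be read with normal CMDs; fail the request */
              spin_lock_irq(q->queue_lock);
              __blk_end_request_all(req, -EIO);
              spin_unlock_irq(q->queue_lock);
              continue;
      }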
      
      Fixes: 090d25fe ("mmc: core: Expose access to RPMB partition")
      Signed-off-by: Yunpeng Gao <yunpeng.gao@intel.com>
      Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
      Tested-by: Michael Shigorin <mike@altlinux.org>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      4e93b9a6
  25. 05 December 2014, 1 commit
  26. 05 October 2014, 1 commit
    • block: disable entropy contributions for nonrot devices · b277da0a
      Committed by Mike Snitzer
      Clear QUEUE_FLAG_ADD_RANDOM in all block drivers that set
      QUEUE_FLAG_NONROT.
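
      For the MMC queue this amounts to something like the following
      (a sketch, assuming the legacy queue-flag helpers of that era):

      /* Mark the queue non-rotational and stop it from feeding the
       * entropy pool; flash completion times are fairly regular. */
      queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
      queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, mq->queue);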
      
      Historically, all block devices have automatically made entropy
      contributions.  But as previously stated in commit e2e1a148 ("block: add
      sysfs knob for turning off disk entropy contributions"):
          - On SSD disks, the completion times aren't as random as they
            are for rotational drives. So it's questionable whether they
            should contribute to the random pool in the first place.
          - Calling add_disk_randomness() has a lot of overhead.
      
      There are more reliable sources for randomness than non-rotational block
      devices.  From a security perspective it is better to err on the side of
      caution than to allow entropy contributions from unreliable "random"
      sources.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      b277da0a
  27. 24 September 2014, 1 commit