- 30 October 2017, 1 commit
-
-
Committed by Linus Walleij
This function is used by the block layer queue to bail out of requests if the current request is towards an RPMB "block device". This was done to avoid boot-time scanning of this "block device", which was never really a block device, thus duct-taping over the fact that it was badly engineered. This problem is now gone, as we removed the offending RPMB block device in another patch and replaced it with a character device. Cc: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
- 04 October 2017, 1 commit
-
-
Committed by Linus Walleij
In May, Steven sent a patch deleting the bounce buffer handling and the CONFIG_MMC_BLOCK_BOUNCE option. I chose the less invasive path of making it a runtime config option, and we merged that successfully for kernel v4.12.

The code is however just standing in the way and taking up space for seemingly no gain on any systems in wide use today. Pierre says the code was there to improve speed on TI SDHCI controllers on certain HP laptops and possibly some Ricoh controllers as well. Early SDHCI controllers lacked the scatter-gather feature, which made software bounce buffers a significant speed boost. We are clearly talking about the list of SDHCI PCI-based MMC/SD card readers found in the pci_ids[] list in drivers/mmc/host/sdhci-pci-core.c.

The TI SDHCI derivative is not supported by the upstream kernel. This leaves the Ricoh. What we can however notice is that the x86 defconfigs in the kernel did not enable the CONFIG_MMC_BLOCK_BOUNCE option, which means that any such laptop would have to have a custom-configured kernel to actually take advantage of this bounce buffer speed-up. It simply seems like there was a speed optimization for the Ricoh controllers that no one was using. (I have not checked the distro defconfigs, but I am pretty sure the situation is the same there.)

Bounce buffers increased performance on the OMAP HSMMC at one point, and were part of the original submission in commit a45c6cb8 ("[ARM] 5369/1: omap mmc: Add new omap hsmmc controller for 2430 and 34xx, v3"). This optimization was removed in commit 0ccd76d4 ("omap_hsmmc: Implement scatter-gather emulation"), which found that scatter-gather emulation provided even better performance. The same was introduced for SDHCI in commit 2134a922 ("sdhci: scatter-gather (ADMA) support").

I am positively convinced that software scatter-gather emulation will do for any host controller what the bounce buffers were doing. Essentially, the bounce buffer was a reimplementation of software scatter-gather emulation in the MMC subsystem, and it should be done away with. Cc: Pierre Ossman <pierre@ossman.eu> Cc: Juha Yrjola <juha.yrjola@solidboot.com> Cc: Steven J. Hill <Steven.Hill@cavium.com> Cc: Shawn Lin <shawn.lin@rock-chips.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Suggested-by: Steven J. Hill <Steven.Hill@cavium.com> Suggested-by: Shawn Lin <shawn.lin@rock-chips.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
- 08 September 2017, 1 commit
-
-
Committed by Adrian Hunter
mmc_init_request() depends on card->bouncesz, so it must be calculated before blk_init_allocated_queue() starts allocating requests. Reported-by: Seraphime Kirkovski <kirkseraph@gmail.com> Fixes: 304419d8 ("mmc: core: Allocate per-request data using the..") Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Tested-by: Seraphime Kirkovski <kirkseraph@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Pavel Machek <pavel@ucw.cz>
-
- 28 June 2017, 1 commit
-
-
Committed by Christoph Hellwig
BLK_BOUNCE_ANY is the default now, so the call is superfluous. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 20 June 2017, 3 commits
-
-
Committed by Linus Walleij
Just as we can use blk_mq_rq_to_pdu() to get the per-request "tag" (the per-request data) from a request, we can use blk_mq_rq_from_pdu() to get a request back from such a tag. Introduce a static inline helper so it is clear what is happening. Suggested-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
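A sketch of such a helper (the helper name here is illustrative; blk_mq_rq_from_pdu() simply steps back from the per-request data to the enclosing struct request):

    #include <linux/blk-mq.h>

    struct mmc_queue_req;   /* the MMC per-request state container ("tag") */

    /* mirror of blk_mq_rq_to_pdu(): recover the request from its tag */
    static inline struct request *mmc_queue_req_to_req(struct mmc_queue_req *mqr)
    {
            return blk_mq_rq_from_pdu(mqr);
    }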
-
Committed by Linus Walleij
The mmc_queue_req is a per-request state container the MMC core uses to carry bounce buffers, pointers to asynchronous requests and so on. It is currently allocated as a static array of objects; as a request comes in, a mmc_queue_req is assigned to it and used during the lifetime of the request.

This is backwards compared to how other block layer drivers work: they usually let the block core provide a per-request struct that gets allocated right behind the struct request, and which can be obtained using the blk_mq_rq_to_pdu() helper. (The _mq_ infix in this function name is misleading: it is used by both the old and the MQ block layer.) The per-request struct gets allocated to the size stored in the queue variable .cmd_size, initialized using .init_rq_fn() and cleaned up using .exit_rq_fn().

This change makes the MMC core rely on this block layer mechanism to allocate the per-request mmc_queue_req state container. Doing this makes a lot of complicated queue handling go away. We only need to keep the .qcnt that keeps count of how many requests are currently being processed by the MMC layer. The MQ block layer will replace this as well once we transition to it.

Doing this refactoring is necessary to move the ioctl() operations into custom block layer requests tagged with REQ_OP_DRV_[IN|OUT] instead of the custom code using the BigMMCHostLock that we have today: those require that per-request data be easily obtainable from a request after creating a custom request with e.g.:

    struct request *rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
    struct mmc_queue_req *mq_rq = req_to_mq_rq(rq);

And this is not possible with the current construction, as the request is not immediately assigned the per-request state container; instead it gets assigned when the request finally enters the MMC queue, which is way too late for custom requests. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> [Ulf: Folded in the fix to drop a call to blk_cleanup_queue()] Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Heiner Kallweit <hkallweit1@gmail.com>
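A minimal sketch of the legacy (pre-blk-mq) allocation mechanism described above; the struct contents and callback bodies are trimmed down to the mechanics and are illustrative, not the actual MMC code:

    #include <linux/blkdev.h>
    #include <linux/blk-mq.h>
    #include <linux/errno.h>
    #include <linux/scatterlist.h>
    #include <linux/slab.h>

    /* per-request state the block core allocates right behind struct request */
    struct mmc_queue_req {
            struct scatterlist *sg;
    };

    static int mmc_init_request(struct request_queue *q, struct request *req,
                                gfp_t gfp)
    {
            struct mmc_queue_req *mq_rq = blk_mq_rq_to_pdu(req);

            mq_rq->sg = kmalloc_array(128, sizeof(*mq_rq->sg), gfp);
            return mq_rq->sg ? 0 : -ENOMEM;
    }

    static void mmc_exit_request(struct request_queue *q, struct request *req)
    {
            struct mmc_queue_req *mq_rq = blk_mq_rq_to_pdu(req);

            kfree(mq_rq->sg);
    }

    static int mmc_setup_queue(struct request_queue *q)
    {
            q->cmd_size = sizeof(struct mmc_queue_req);   /* size of the pdu */
            q->init_rq_fn = mmc_init_request;
            q->exit_rq_fn = mmc_exit_request;
            return blk_init_allocated_queue(q);
    }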
-
Committed by Linus Walleij
This option is activated by all multiplatform configs and whatnot, so we almost always have it turned on, and the memory it saves is negligible, even more so moving forward. The actual bounce buffer only gets allocated when used; the only thing the ifdefs are saving is a little bit of code.

It is highly improper to have this as a Kconfig option that gets turned on by Kconfig; make this a pure runtime thing and let the host decide whether we use bounce buffers. We add a new property "disable_bounce" to the host struct.

Notice that mmc_queue_calc_bouncesz() already disables the bounce buffers if host->max_segs != 1, so any arch that has a maximum number of segments higher than 1 will have bounce buffers disabled.

The option CONFIG_MMC_BLOCK_BOUNCE is default y, so the majority of platforms in the kernel already have it on, and it then gets turned off at runtime since most of these have host->max_segs > 1. The few exceptions that have host->max_segs == 1 and still turn off the bounce buffering are those that disable it in their defconfig. Those are the following:

arch/arm/configs/colibri_pxa300_defconfig
arch/arm/configs/zeus_defconfig
- Uses MMC_PXA, drivers/mmc/host/pxamci.c
- Sets host->max_segs = NR_SG, which is 1
- This needs its bounce buffer deactivated, so we set host->disable_bounce to true in the host driver

arch/arm/configs/davinci_all_defconfig
- Uses MMC_DAVINCI, drivers/mmc/host/davinci_mmc.c
- This driver sets host->max_segs to MAX_NR_SG, which is 16
- That means this driver anyway disables bounce buffers
- No special action needed for this platform

arch/arm/configs/lpc32xx_defconfig
arch/arm/configs/nhk8815_defconfig
arch/arm/configs/u300_defconfig
- Uses MMC_ARMMMCI, drivers/mmc/host/mmci.[c|h]
- This driver by default sets host->max_segs to NR_SG, which is 128, unless a DMA engine is used, and in that case the number of segments is also > 1
- That means this driver already disables bounce buffers
- No special action needed for these platforms

arch/arm/configs/sama5_defconfig
- Uses MMC_SDHCI, MMC_SDHCI_PLTFM, MMC_SDHCI_OF_AT91, MMC_ATMELMCI
- Uses drivers/mmc/host/sdhci.c
- Normally sets host->max_segs to SDHCI_MAX_SEGS, which is 128, and thus disables bounce buffers
- Sets host->max_segs to 1 if SDHCI_USE_SDMA is set
- SDHCI_USE_SDMA is only set by SDHCI on PCI adapters
- That means that for this platform bounce buffers are already disabled at runtime
- No special action needed for this platform

arch/blackfin/configs/CM-BF533_defconfig
arch/blackfin/configs/CM-BF537E_defconfig
- Uses MMC_SPI (a simple MMC card connected on SPI pins)
- Uses drivers/mmc/host/mmc_spi.c
- Sets host->max_segs to MMC_SPI_BLOCKSATONCE, which is 128
- That means this platform already disables bounce buffers at runtime
- No special action needed for these platforms

arch/mips/configs/cavium_octeon_defconfig
- Uses MMC_CAVIUM_OCTEON, drivers/mmc/host/cavium.c
- Sets host->max_segs to 16 or 1
- Setting host->disable_bounce to be sure for the 1 case

arch/mips/configs/qi_lb60_defconfig
- Uses MMC_JZ4740, drivers/mmc/host/jz4740_mmc.c
- This sets host->max_segs to 128, so bounce buffers are already runtime disabled
- No action needed for this platform

It would be interesting to come up with a list of the platforms that actually end up using bounce buffers. I have not been able to infer such a list, but it occurs when host->max_segs == 1 and the bounce buffering is not explicitly disabled. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
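A rough sketch of the resulting runtime decision (mmc_queue_calc_bouncesz() and the new host->disable_bounce property are named in the text above; the exact limits applied here are illustrative):

    #include <linux/mmc/host.h>

    #define MMC_QUEUE_BOUNCESZ 65536

    static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
    {
            unsigned int bouncesz = MMC_QUEUE_BOUNCESZ;

            /* hosts that can do scatter-gather, or that opt out, get no bounce buffer */
            if (host->max_segs != 1 || host->disable_bounce)
                    return 0;

            /* never bounce more than the host can take in a single request */
            if (bouncesz > host->max_req_size)
                    bouncesz = host->max_req_size;
            if (bouncesz > host->max_seg_size)
                    bouncesz = host->max_seg_size;
            if (bouncesz > host->max_blk_count * 512)
                    bouncesz = host->max_blk_count * 512;

            return bouncesz >= 512 ? bouncesz : 0;
    }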
-
- 09 June 2017, 1 commit
-
-
Committed by Christoph Hellwig
Currently we use normal Linux errno values in the block layer, and while we accept any error, a few have overloaded magic meanings. This patch instead introduces a new blk_status_t value that holds block layer specific status codes and explicitly explains their meaning. Helpers to convert from and to the previous special meanings are provided for now, but I suspect we want to get rid of them in the long run - those drivers that have an errno input (e.g. networking) usually get errnos that don't know about the special block layer overloads, and similarly returning them to userspace will usually return something that strictly speaking isn't correct for file system operations, but that's left as an exercise for later. For now the set of errors is a very limited set that closely corresponds to the previous overloaded errno values, but there is some low-hanging fruit to improve it. blk_status_t (ab)uses the sparse __bitwise annotations to allow for sparse typechecking, so that we can easily catch places passing the wrong values. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
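For orientation, a condensed sketch of what such a type and its transitional helpers look like (modelled on the description above; the real header defines a richer set of BLK_STS_* codes and conversion tables):

    #include <linux/types.h>
    #include <linux/errno.h>

    /* sparse-checked status type: plain errnos no longer convert silently */
    typedef u8 __bitwise blk_status_t;

    #define BLK_STS_OK      0
    #define BLK_STS_IOERR   ((__force blk_status_t)10)

    static inline blk_status_t errno_to_blk_status(int errno)
    {
            return errno ? BLK_STS_IOERR : BLK_STS_OK;      /* real mapping is larger */
    }

    static inline int blk_status_to_errno(blk_status_t status)
    {
            return status == BLK_STS_OK ? 0 : -EIO;         /* real mapping is larger */
    }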
-
- 25 April 2017, 2 commits
-
-
Committed by Adrian Hunter
eMMC can have multiple internal partitions that are represented as separate disks / queues. However, switching between partitions is only done when the queue is empty. Consequently, the array of mmc requests that are queued can be shared between partitions, saving memory. Keep a pointer to the mmc request queue on the card, and use that instead of allocating a new one for each partition. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Committed by Adrian Hunter
Change from viewing the requests in progress as 'current' and 'previous' to viewing them as a queue. The current request is allocated to the first free slot. The presence of incomplete requests is determined from the count (mq->qcnt) of entries in the queue. Non-read-write requests (i.e. discards and flushes) are not added to the queue at all and require no special handling. Also, no special handling is needed for the MMC_BLK_NEW_REQUEST case. As well as allowing an arbitrarily sized queue, the queue thread function is significantly simpler. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
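A sketch of the slot-based bookkeeping this describes, with mq->qcnt counting the slots in use (structures trimmed down; details are illustrative):

    #include <linux/blkdev.h>

    struct mmc_queue_req {
            struct request *req;
    };

    struct mmc_queue {
            int qdepth;                 /* number of slots */
            int qcnt;                   /* slots currently in use */
            struct mmc_queue_req *mqrq;
    };

    /* the current request simply takes the first free slot */
    static struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
                                                    struct request *req)
    {
            int i;

            for (i = 0; i < mq->qdepth; i++) {
                    if (!mq->mqrq[i].req) {
                            mq->mqrq[i].req = req;
                            mq->qcnt++;
                            return &mq->mqrq[i];
                    }
            }
            return NULL;
    }

    static void mmc_queue_req_free(struct mmc_queue *mq, struct mmc_queue_req *mqrq)
    {
            mqrq->req = NULL;
            mq->qcnt--;
    }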
-
- 09 April 2017, 1 commit
-
-
Committed by Christoph Hellwig
mmc only supports discarding on large alignments, so the zeroing code would always fall back to explicit writes of zeroes. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 13 February 2017, 4 commits
-
-
Committed by Linus Walleij
Instead of masking and setting two bits in the "flags" field of the mmc_queue, just use two bools named "suspended" and "new_request". The masking and setting would likely have race conditions anyway; it is better to use simple members like these. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Committed by Ulf Hansson
A significant number of functions and other definitions are available through the public mmc card.h header file. Let's slim down this public mmc interface, so as to prevent users from abusing it, by moving some of the functions/definitions to private mmc header files. This change concentrates on moving the functions into private mmc headers; following changes may continue with additional clean-ups. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Shawn Lin <shawn.lin@rock-chips.com>
-
Committed by Ulf Hansson
A significant number of functions are available through the public mmc core.h header file. Let's slim down this public mmc interface, so as to prevent users from abusing it, by moving some of the functions to private mmc header files. This change concentrates on moving the functions into private mmc headers; following changes may continue with additional clean-ups - as an example, some functions can be made static. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Shawn Lin <shawn.lin@rock-chips.com>
-
Committed by Markus Elfring
* A multiplication for the size determination of a memory allocation indicated that an array data structure should be processed. Thus use the corresponding function "kmalloc_array". This issue was detected by using the Coccinelle software.
* Replace the specification of a data structure by a pointer dereference to make the corresponding size determination a bit safer, according to the Linux coding style convention.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Shawn Lin <shawn.lin@rock-chips.com> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
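A small before/after illustration of both points (the allocation site is made up for the example):

    #include <linux/scatterlist.h>
    #include <linux/slab.h>

    static struct scatterlist *alloc_sg_list(unsigned int max_segs)
    {
            struct scatterlist *sg;

            /* before: sg = kmalloc(sizeof(struct scatterlist) * max_segs, GFP_KERNEL); */
            sg = kmalloc_array(max_segs, sizeof(*sg), GFP_KERNEL);
            return sg;
    }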
-
- 01 February 2017, 1 commit
-
-
Committed by Christoph Hellwig
The block layer won't send requests the driver isn't asking for, so remove this check. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 12 December 2016, 1 commit
-
-
Committed by Ulf Hansson
Once upon a time it made sense to keep the mmc block device driver and its related code in its own directory called card. Over time, more and more functions/structures have become shared through generic mmc header files between the core and the card directory. In other words, the relationship between them has become closer. By sharing functions/structures via generic header files, it becomes easy for outside users to abuse them. As a way of avoiding that, let's move the files from the card directory into the core directory, as it enables us to move definitions of functions/structures into mmc core specific header files. Note, this is only the first step in providing a cleaner mmc interface for outside users. Following changes will do the actual cleanup, as that is not part of this change. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
-
- 05 December 2016, 7 commits
-
-
Committed by Shawn Lin
bounce_sg for mqrq_cur and mqrq_prev is properly allocated when initializing the queue and will not be freed before the queue is explicitly cleaned up. So from the code itself we can be quite confident in removing this check. If that BUG_ON were ever to trigger, it is most likely because memory is being randomly corrupted. Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Committed by Adrian Hunter
Add a mmc_queue member to record the size of the queue, which currently supports 2 requests on the go at a time. Instead of allocating resources for 2 slots in the queue, allow for an arbitrary number. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Committed by Adrian Hunter
In preparation for supporting a queue of requests, factor out mmc_queue_reqs_free_bufs(). Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Committed by Adrian Hunter
In preparation for supporting a queue of requests, factor out mmc_queue_alloc_sgs(). Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Committed by Adrian Hunter
In preparation for supporting a queue of requests, factor out mmc_queue_alloc_bounce_sgs(). Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org> [Ulf: Fixed compiler warning] Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Committed by Adrian Hunter
In preparation for supporting a queue of requests, factor out mmc_queue_alloc_bounce_bufs(). Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org> [Ulf: Fixed compiler warning] Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Committed by Adrian Hunter
The only time the driver sleeps expecting to be woken upon the arrival of a new request is when the dispatch queue is empty. The only time that it is known whether the dispatch queue is empty is after NULL is returned from blk_fetch_request() while under the queue lock. Recognizing those facts, simplify the synchronization between the queue thread and the request function. A couple of flags tell the request function what to do, and the queue lock and the barriers associated with wake-ups ensure synchronization. The result is simpler and allows the removal of the context_info lock. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Harjani Ritesh <riteshh@codeaurora.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
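A simplified sketch of that arrangement (the flag and member names are illustrative, not the exact mainline code):

    #include <linux/blkdev.h>
    #include <linux/kthread.h>
    #include <linux/sched.h>
    #include <linux/spinlock.h>

    struct mmc_queue {
            struct request_queue *queue;
            struct task_struct *thread;
            bool asleep;
            void (*issue_fn)(struct mmc_queue *mq, struct request *req);
    };

    static int mmc_queue_thread(void *d)
    {
            struct mmc_queue *mq = d;
            struct request_queue *q = mq->queue;

            while (!kthread_should_stop()) {
                    struct request *req;

                    spin_lock_irq(q->queue_lock);
                    set_current_state(TASK_INTERRUPTIBLE);
                    req = blk_fetch_request(q);
                    mq->asleep = !req;      /* decided while under the queue lock */
                    spin_unlock_irq(q->queue_lock);

                    if (req) {
                            set_current_state(TASK_RUNNING);
                            mq->issue_fn(mq, req);
                    } else {
                            schedule();     /* woken by the request function */
                    }
            }
            return 0;
    }

    /* called by the block layer with the queue lock held */
    static void mmc_request_fn(struct request_queue *q)
    {
            struct mmc_queue *mq = q->queuedata;

            if (mq->asleep)
                    wake_up_process(mq->thread);
    }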
-
- 29 November 2016, 2 commits
-
-
Committed by Linus Walleij
I've had it with this code now. The packed command support is a complex hurdle in the MMC/SD block layer, around 500+ lines of code which were introduced in 2013 in commit ce39f9d1 ("mmc: support packed write command for eMMC4.5 devices") and commit abd9ac14 ("mmc: add packed command feature of eMMC4.5"), and since then it has been rotting. The original author of the code has disappeared from the community and the mail address is bouncing.

For the code to be exercised, the host must flag that it supports packed commands, so in mmc_blk_prep_packed_list(), which is called for every single request, the following construction appears:

    u8 max_packed_rw = 0;

    if ((rq_data_dir(cur) == WRITE) && mmc_host_packed_wr(card->host))
            max_packed_rw = card->ext_csd.max_packed_writes;

    if (max_packed_rw == 0)
            goto no_packed;

This has the following logical deductions:

- Only WRITE commands can really be packed, so the solution is only half-done: we support packed WRITE but not packed READ. The packed command support has not been finalized by supporting reads in three years!

- mmc_host_packed_wr() is just a static inline that checks host->caps2 & MMC_CAP2_PACKED_WR. The problem with this is that NO upstream host sets this capability flag! No driver in the kernel is using it, and we can't test it. Packed command may be supported in out-of-tree code, but I doubt it. I doubt that the code is even working anymore due to other refactorings in the MMC block layer; who would notice if patches affecting it broke packed commands? No one.

- There is no Device Tree binding or code to mark a host as supporting packed read or write commands, just this flag in caps2, so for sure there are not any DT systems using it either.

It has other problems as well: mmc_blk_prep_packed_list() is speculatively picking requests out of the request queue with blk_fetch_request(), making the MMC/SD stack harder to convert to the multiqueue block layer. By removing it we get rid of an obstacle.

The way I see it, this is just cruft littering the MMC/SD stack. Cc: Namjae Jeon <namjae.jeon@samsung.com> Cc: Maya Erez <qca_merez@qca.qualcomm.com> Acked-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Committed by Linus Walleij
By moving mmc_packed_init() and mmc_packed_clean() into the only file in the kernel where they are used, we save two exported functions and can make them static in block.c. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
- 28 October 2016, 1 commit
-
-
Committed by Christoph Hellwig
A lot of the REQ_* flags are only used on struct requests, and only of use to the block layer and a few drivers that dig into struct request internals. This patch adds a new req_flags_t rq_flags field to struct request for them, and thus dramatically shrinks the number of common request flags. It also removes the unfortunate situation where we have to fit the fields from the same enum into 32 bits for struct bio and 64 bits for struct request. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com> Signed-off-by: Jens Axboe <axboe@fb.com>
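How the split looks from a driver's point of view (the particular flag tested is just an example):

    #include <linux/blkdev.h>

    static bool quiet_write(struct request *rq)
    {
            /* the operation and bio-visible flags still come via cmd_flags / req_op() */
            bool is_write = req_op(rq) == REQ_OP_WRITE;

            /* request-only state now lives in the separate rq_flags field (RQF_*) */
            bool quiet = rq->rq_flags & RQF_QUIET;

            return is_write && quiet;
    }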
-
- 27 September 2016, 1 commit
-
-
Committed by Linus Walleij
We have enough vtables in the kernel as it is; we don't need this one to create even more artificial separation of concerns. As is proved by the Makefile:

    obj-$(CONFIG_MMC_BLOCK) += mmc_block.o
    mmc_block-objs := block.o queue.o

block.c and queue.c are baked into the same mmc_block.o object. So why would one of these objects access a function in the other object by dereferencing a pointer? Create a new block.h header file for the single shared function from block to queue, remove the function pointer, and just call the queue request function directly. Apart from making the code more readable, this also makes link optimizations possible and probably speeds up the call as well. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
- 26 August 2016, 1 commit
-
-
Committed by Adrian Hunter
We call mmc_req_is_special() after having processed a request, but by then the request may already have been freed. Check it ahead of time instead, and use the cached value. Reported-by: Hans de Goede <hdegoede@redhat.com> Tested-by: Hans de Goede <hdegoede@redhat.com> Fixes: c2df40df ("drivers: use req op accessor") Signed-off-by: Jens Axboe <axboe@fb.com>
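The shape of the fix, sketched (mmc_req_is_special() and mmc_blk_issue_rq() existed in the MMC code of this era; the wrapper and the final helper name are illustrative):

    /* cache anything derived from the request before issuing it,
     * because issuing may complete and free the request */
    static void mmc_issue_and_finish(struct mmc_queue *mq, struct request *req)
    {
            bool req_is_special = mmc_req_is_special(req);

            mmc_blk_issue_rq(mq, req);              /* may end/free req */

            if (req_is_special)                     /* safe: cached value only */
                    mmc_queue_finish_special(mq);   /* illustrative name */
    }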
-
- 16 August 2016, 1 commit
-
-
Committed by Adrian Hunter
Commit 288dab8a ("block: add a separate operation type for secure erase") split REQ_OP_SECURE_ERASE from REQ_OP_DISCARD without considering all the places REQ_OP_DISCARD was being used to mean either. Fix those. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Fixes: 288dab8a ("block: add a separate operation type for secure erase") Signed-off-by: Jens Axboe <axboe@fb.com>
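A sketch of the kind of check that had to be widened (where exactly the call sites live in the MMC code is omitted here):

    #include <linux/blkdev.h>

    /* after the split, "discard-like" handling must cover both operations */
    static inline bool mmc_req_is_discard_like(struct request *req)
    {
            return req_op(req) == REQ_OP_DISCARD ||
                   req_op(req) == REQ_OP_SECURE_ERASE;
    }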
-
- 09 June 2016, 1 commit
-
-
Committed by Christoph Hellwig
Add a separate operation type for secure erase instead of overloading the discard support with the REQ_SECURE flag. Use the opportunity to rename the queue flag as well, and remove the dead checks for this flag in the RAID 1 and RAID 10 drivers, which don't claim support for secure erase. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 08 June 2016, 1 commit
-
-
Committed by Mike Christie
The request operation REQ_OP is now separated from the rq_flag_bits definition. This converts the block layer drivers to use req_op() to get the op from the request struct. Signed-off-by: Mike Christie <mchristi@redhat.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@fb.com>
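What the conversion looks like at a call site (the pre-conversion form is kept in the comment):

    #include <linux/blkdev.h>

    static bool rq_is_discard(struct request *req)
    {
            /* before: return req->cmd_flags & REQ_DISCARD; */
            return req_op(req) == REQ_OP_DISCARD;
    }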
-
- 17 August 2015, 1 commit
-
-
Committed by Dan Williams
Signed-off-by: Dan Williams <dan.j.williams@intel.com> [hch: split from a larger patch by Dan] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 17 July 2015, 1 commit
-
-
Committed by Jens Axboe
Some drivers use it now, others just set the limits field manually. But in preparation for splitting this into a hard and soft limit, ensure that they all call the proper function for setting the hw limit for discards. Reviewed-by: Jeff Moyer <jmoyer@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
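In practice the change boils down to preferring the accessor over poking the limits struct directly:

    #include <linux/blkdev.h>

    static void setup_discard_limit(struct request_queue *q, unsigned int max_sectors)
    {
            /* before: q->limits.max_discard_sectors = max_sectors; */
            blk_queue_max_discard_sectors(q, max_sectors);
    }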
-
- 18 June 2015, 1 commit
-
-
Committed by Rabin Vincent
On systems with CONFIG_PREEMPT=n, under certain circumstances, mmcqd can continuously process requests for several seconds without blocking, triggering the soft lockup watchdog. For example, this can happen if mmcqd runs on the CPU which services the controller's interrupt, and a process on a different CPU continuously writes to the MMC block device.

    NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mmcqd/0:664]
    CPU: 0 PID: 664 Comm: mmcqd/0 Not tainted 4.1.0-rc7+ #4
    PC is at _raw_spin_unlock_irqrestore+0x24/0x28
    LR is at mmc_start_request+0x104/0x134
    ...
    [<805112a8>] (_raw_spin_unlock_irqrestore) from [<803db664>] (mmc_start_request+0x104/0x134)
    [<803db664>] (mmc_start_request) from [<803dc008>] (mmc_start_req+0x274/0x394)
    [<803dc008>] (mmc_start_req) from [<803eb2c4>] (mmc_blk_issue_rw_rq+0xd0/0xb98)
    [<803eb2c4>] (mmc_blk_issue_rw_rq) from [<803ebe8c>] (mmc_blk_issue_rq+0x100/0x470)
    [<803ebe8c>] (mmc_blk_issue_rq) from [<803ecab8>] (mmc_queue_thread+0xd0/0x170)
    [<803ecab8>] (mmc_queue_thread) from [<8003fd14>] (kthread+0xe0/0xfc)
    [<8003fd14>] (kthread) from [<8000f768>] (ret_from_fork+0x14/0x2c)

Fix it by adding a cond_resched() in the request handling loop so that other processes get a chance to run. Signed-off-by: Rabin Vincent <rabin.vincent@axis.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
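Roughly what the fix amounts to inside the mmcqd loop (the loop scaffolding and helper name here are illustrative; the cond_resched() call is the point):

    #include <linux/sched.h>

    static void mmc_issue_pending(struct mmc_queue *mq)
    {
            struct request *req;

            while ((req = mmc_fetch_next_request(mq)) != NULL) {    /* illustrative helper */
                    mq->issue_fn(mq, req);

                    /*
                     * With CONFIG_PREEMPT=n this thread could otherwise hog the
                     * CPU for seconds; explicitly offer to reschedule between
                     * requests.
                     */
                    cond_resched();
            }
    }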
-
- 15 June 2015, 1 commit
-
-
Committed by Fabian Frederick
Use the kernel.h macro definition. Thanks to Julia Lawall for Coccinelle scripting support. Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
- 06 May 2015, 1 commit
-
-
Committed by Chuanxiao Dong
During kernel boot, the kernel will try to read some logical sectors of each block device node to look for a possible partition table. But since the RPMB partition is special and cannot be accessed by normal eMMC read / write CMDs, this causes the below error messages during kernel boot:

    ...
    mmc0: Got data interrupt 0x00000002 even though no data operation was in progress.
    mmcblk0rpmb: error -110 transferring data, sector 0, nr 32, cmd response 0x900, card status 0xb00
    mmcblk0rpmb: retrying using single block read
    mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
    mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
    mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
    mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
    mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
    mmcblk0rpmb: timed out sending r/w cmd command, card status 0x400900
    end_request: I/O error, dev mmcblk0rpmb, sector 0
    Buffer I/O error on device mmcblk0rpmb, logical block 0
    end_request: I/O error, dev mmcblk0rpmb, sector 8
    Buffer I/O error on device mmcblk0rpmb, logical block 1
    end_request: I/O error, dev mmcblk0rpmb, sector 16
    Buffer I/O error on device mmcblk0rpmb, logical block 2
    end_request: I/O error, dev mmcblk0rpmb, sector 24
    Buffer I/O error on device mmcblk0rpmb, logical block 3
    ...

This patch discards the access request in the eMMC queue if it is an RPMB partition access request. This way, it avoids triggering the above error messages. Fixes: 090d25fe ("mmc: core: Expose access to RPMB partition") Signed-off-by: Yunpeng Gao <yunpeng.gao@intel.com> Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com> Tested-by: Michael Shigorin <mike@altlinux.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
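A sketch of the kind of check this adds in the queue path (the helper and error handling are illustrative; note that the very first commit at the top of this log later removes the function again in favour of a character device):

    #include <linux/blkdev.h>
    #include <linux/errno.h>

    /* bail out of block-layer I/O aimed at the RPMB "disk": it cannot be
     * served by normal eMMC read/write commands */
    static bool mmc_reject_rpmb_request(struct mmc_queue *mq, struct request *req)
    {
            if (!mmc_queue_is_rpmb(mq))             /* illustrative helper */
                    return false;

            blk_end_request_all(req, -EIO);         /* errno-based legacy API of that era */
            return true;
    }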
-
- 05 December 2014, 1 commit
-
-
Committed by Bhuvanesh Surachari
Allocation of the previous bounce buffer in mmc_init_queue() when the current bounce buffer allocation fails was leading to a crash later in __blk_segment_map_sg(). Error handling is improved by allocating the previous bounce buffer only if the current bounce buffer allocation succeeds. Signed-off-by: Bhuvanesh Surachari <bhuvanesh_surachari@mentor.com> Signed-off-by: Harish Jenny K N <harish_kandiga@mentor.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
- 05 October 2014, 1 commit
-
-
Committed by Mike Snitzer
Clear QUEUE_FLAG_ADD_RANDOM in all block drivers that set QUEUE_FLAG_NONROT. Historically, all block devices have automatically made entropy contributions. But as previously stated in commit e2e1a148 ("block: add sysfs knob for turning off disk entropy contributions"):

- On SSD disks, the completion times aren't as random as they are for rotational drives. So it's questionable whether they should contribute to the random pool in the first place.
- Calling add_disk_randomness() has a lot of overhead.

There are more reliable sources of randomness than non-rotational block devices. From a security perspective it is better to err on the side of caution than to allow entropy contributions from unreliable "random" sources. Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
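In the MMC queue setup this amounts to something like the following (using the unlocked queue-flag helpers of that kernel generation):

    #include <linux/blkdev.h>

    static void mmc_setup_queue_flags(struct request_queue *q)
    {
            queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
            /* non-rotational completion times are poor entropy; opt out */
            queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
    }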
-
- 24 September 2014, 1 commit
-
-
Committed by Joe Perches
Use the much more common pr_warn instead of pr_warning. Other miscellanea:
o Coalesce formats
o Realign arguments
o Remove extra spaces when coalescing formats
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
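The kind of mechanical change this describes (the message text is just an example):

    #include <linux/printk.h>

    static void warn_no_bounce(const char *name)
    {
            /*
             * before:
             *      pr_warning("%s: unable to allocate "
             *                 "bounce buffer\n", name);
             */
            pr_warn("%s: unable to allocate bounce buffer\n", name);
    }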
-