- 09 May 2012 (2 commits)
Committed by Venkatraman S
There is no need to memset these fields: they are pointers and are assigned proper values on the next line anyway.
Signed-off-by: Venkatraman S <svenkatr@ti.com>
Reviewed-by: Namjae Jeon <linkinjeon@gmail.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
-
Committed by Venkatraman S
The name mmc_request is used for both the issue function and a data structure, which creates conflicts in symbol lookups in editors. Rename the function to mmc_request_fn.
Signed-off-by: Venkatraman S <svenkatr@ti.com>
Reviewed-by: Namjae Jeon <linkinjeon@gmail.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
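A minimal sketch of the collision and the rename (declarations shown are illustrative; the function lives in drivers/mmc/card/queue.c):

    /* "mmc_request" previously named both a type and a function */
    struct mmc_request;                       /* the core data structure */

    /* before: static void mmc_request(struct request_queue *q); */
    static void mmc_request_fn(struct request_queue *q);  /* after the rename */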
-
- 21 April 2012 (1 commit)
Committed by Adrian Hunter
The eMMC v4.5 discard operation is significantly different from the existing trim operation, because it is not guaranteed to work with the new sanitize operation. Consequently mmc_can_trim() is separated from mmc_can_discard(). Also, the new discard operation does not result in the sectors being set to all-zeros, so discard_zeroes_data must not be set. In addition, the new discard has the same timeout as trim, but from v4.5 trim is defined to use the hc timeout. The timeout calculation is adjusted accordingly. Fixes apply to Linux 3.2 onward.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: <stable@vger.kernel.org>
Acked-by: Jaehoon Chung <jh80.chung@samsung.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
-
- 12 January 2012 (1 commit)
Committed by Sujit Reddy Thumma
Kill block requests when the host realizes that the card has been removed from the slot and is sure that subsequent requests are bound to fail. Do this silently so that the block layer doesn't output unnecessary error messages.
Signed-off-by: Sujit Reddy Thumma <sthumma@codeaurora.org>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
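A minimal sketch of failing pending requests quietly, assuming the pre-blk-mq request API of this era (the helper name and locking details are illustrative):

    static void mmc_queue_fail_all(struct request_queue *q)
    {
            struct request *req;

            spin_lock_irq(q->queue_lock);
            while ((req = blk_fetch_request(q)) != NULL) {
                    /* REQ_QUIET keeps the block layer from logging I/O errors */
                    req->cmd_flags |= REQ_QUIET;
                    __blk_end_request_all(req, -EIO);
            }
            spin_unlock_irq(q->queue_lock);
    }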
-
- 27 October 2011 (3 commits)
Committed by Kyungmin Park
In eMMC v4.5 there is no secure erase and trim support; it supports the sanitize feature instead.
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
-
Committed by Girish K S
In all files in the mmc driver, printk calls used for displaying kernel messages have been replaced with the corresponding pr_*() macros.
Signed-off-by: Girish K S <girish.shivananjappa@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
-
Committed by Venkatraman S
Fix the sparse warning "drivers/mmc/card/queue.c:111:20: warning: symbol 'mmc_alloc_sg' was not declared. Should it be static?"
Signed-off-by: Venkatraman S <svenkatr@ti.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
-
- 21 July 2011 (4 commits)
Committed by Per Forlin
Change mmc_blk_issue_rw_rq() to become asynchronous. The execution flow looks like this:
* The mmc-queue calls issue_rw_rq(), which sends the request to the host and returns back to the mmc-queue.
* The mmc-queue calls issue_rw_rq() again with a new request.
* This new request is prepared in issue_rw_rq(), then it waits for the active request to complete before pushing it to the host.
* When the mmc-queue is empty, it calls issue_rw_rq() with a NULL req to finish off the active request without starting a new one.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
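A condensed sketch of that flow; every helper name here is a hypothetical stand-in for logic spread across mmc_blk_issue_rw_rq() and the core:

    /* sketch: pipeline a new request behind the one already in flight */
    static void issue_rw_rq_sketch(struct mmc_queue *mq, struct request *req)
    {
            if (req)
                    prepare_req_sketch(mq, req);     /* build sg/DMA while HW is busy */

            if (mq->active_req)
                    wait_for_req_done_sketch(mq);    /* complete the in-flight request */

            if (req) {
                    start_req_sketch(mq, req);       /* hand the prepared request to the host */
                    mq->active_req = req;
            } else {
                    mq->active_req = NULL;           /* NULL req: drain without restarting */
            }
    }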
-
Committed by Per Forlin
Add an additional mmc queue request instance to make way for two active block requests. One request may be active while the other request is being prepared.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
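A sketch of the resulting double-buffering arrangement (field names modelled on the mmc_queue of this period, shown illustratively):

    struct mmc_queue {
            /* ... */
            struct mmc_queue_req    mqrq[2];    /* one slot active, one being prepared */
            struct mmc_queue_req    *mqrq_cur;  /* request being prepared */
            struct mmc_queue_req    *mqrq_prev; /* request in flight */
    };

    /* after issuing, swap roles so the completed slot can be reused */
    swap(mq->mqrq_cur, mq->mqrq_prev);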
-
Committed by Per Forlin
The way the request data is organized in the mmc queue struct only allows processing of one request at a time. This patch adds a new struct to hold mmc queue request data such as the sg list, request, blk request and bounce buffers, and updates any functions depending on the mmc queue struct. This prepares for using multiple active requests in one mmc queue.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
-
Committed by Adrian Hunter
Some host controllers will not operate without a hardware timeout that is limited in value. However, large discards require large timeouts, so there needs to be a way to specify the maximum discard size. A host controller driver may now specify the maximum discard timeout possible so that max_discard_sectors can be calculated. However, for eMMC, when the High Capacity Erase Group Size is not in use, the timeout calculation depends on the clock rate, which may change. For that case the Preferred Erase Size is used instead.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
-
- 26 June 2011 (2 commits)
Committed by Adrian Hunter
SCSI defines discard alignment as the offset to the first optimal discard. In the case of SD/MMC, that is always zero, which is the default. SCSI defines discard granularity as a hint of an optimal discard size. That is much better expressed by the MMC "preferred erase size" (pref_erase) field.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
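A sketch of how those queue limits can be derived from the card (expressions modelled on mmc_queue's limit setup; treat the details as illustrative):

    /* pref_erase is in 512-byte sectors; block-layer limits are in bytes */
    mq->queue->limits.discard_alignment   = 0;                      /* first optimal discard at 0 */
    mq->queue->limits.discard_granularity = card->pref_erase << 9;  /* preferred erase size */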
-
Committed by Adrian Hunter
For example, an eMMC with 2 boot partitions will have 3 threads. The names change from:
40 ?        00:00:00 mmcqd/0
41 ?        00:00:00 mmcqd/0
42 ?        00:00:00 mmcqd/0
to:
40 ?        00:00:00 mmcqd/0
41 ?        00:00:00 mmcqd/0boot0
42 ?        00:00:00 mmcqd/0boot1
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andrei Warkentin <andreiw@motorola.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
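A sketch of the naming scheme (the thread is created in mmc_init_queue(); the subname handling shown is illustrative):

    /* subname is e.g. "boot0"/"boot1" for boot partitions, NULL for the main area */
    mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s",
                             host->index, subname ? subname : "");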
-
- 25 May 2011 (1 commit)
Committed by John Ogness
There is no need to disable irqs when using the sg_copy_*_buffer() functions, because those functions do that already. There are also no races on the mmc_queue struct here that would require irqs to be disabled before calling sg_copy_*_buffer().
Signed-off-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Chris Ball <cjb@laptop.org>
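An illustrative before/after ("len" stands in for the actual copy length); sg_copy_to_buffer() handles any needed interrupt disabling internally:

    /* before: redundant irq fencing around the copy */
    unsigned long flags;
    local_irq_save(flags);
    sg_copy_to_buffer(mq->bounce_sg, mq->bounce_sg_len, mq->bounce_buf, len);
    local_irq_restore(flags);

    /* after: call the helper directly */
    sg_copy_to_buffer(mq->bounce_sg, mq->bounce_sg_len, mq->bounce_buf, len);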
-
- 10 March 2011 (1 commit)
Committed by Jens Axboe
Code has been converted over to the new explicit on-stack plugging, and delay users have been converted to use the new API for that. So let's kill off the old plugging along with aops->sync_page().
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
- 23 October 2010 (3 commits)
Committed by Ethan Du
Usually there are multiple mmc host controllers; rename the mmc queue thread by host index so we can easily identify which controller it belongs to.
Signed-off-by: Ethan Du <ethan.too@gmail.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
-
Committed by Thomas Gleixner
Get rid of init_MUTEX[_LOCKED]() and use sema_init() instead.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mmc@vger.kernel.org
Signed-off-by: Chris Ball <cjb@laptop.org>
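The conversion is mechanical; init_MUTEX() was just a semaphore initialised to 1 (a sketch of the queue.c call site):

    /* before */
    init_MUTEX(&mq->thread_sem);

    /* after: an explicit binary semaphore */
    sema_init(&mq->thread_sem, 1);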
-
Committed by Martin K. Petersen
We have deprecated the distinction between hardware and physical segments in the block layer. Consolidate the two limits into one in drivers/mmc/.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
-
- 10 September 2010 (1 commit)
Committed by Tejun Heo
Barrier is deemed too heavy and will soon be replaced by FLUSH/FUA requests. Deprecate barrier. All REQ_HARDBARRIERs are failed with -EOPNOTSUPP and blk_queue_ordered() is replaced with the simpler blk_queue_flush(). blk_queue_flush() takes combinations of REQ_FLUSH and REQ_FUA. If a device has a write cache and can flush it, it should set REQ_FLUSH. If the device can handle FUA writes, it should also set REQ_FUA. All blk_queue_ordered() users are converted:
* ORDERED_DRAIN is mapped to 0, which is the default value.
* ORDERED_DRAIN_FLUSH is mapped to REQ_FLUSH.
* ORDERED_DRAIN_FLUSH_FUA is mapped to REQ_FLUSH | REQ_FUA.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Boaz Harrosh <bharrosh@panasas.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Alasdair G Kergon <agk@redhat.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Cc: Stefan Weinhuber <wein@de.ibm.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
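For a driver that had declared drain-only ordering, the conversion is a one-liner (a sketch; mmc_block had no write cache to flush at this point):

    /* before: drain-only ordering */
    blk_queue_ordered(q, QUEUE_ORDERED_DRAIN);

    /* after: ORDERED_DRAIN maps to 0, the default, so no flush flags are set */
    blk_queue_flush(q, 0);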
-
- 12 August 2010 (2 commits)
Committed by Adrian Hunter
Secure discard is implemented by Secure Trim if the discard is unaligned, or by Secure Erase otherwise.
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Acked-by: Jens Axboe <axboe@kernel.dk>
Cc: Kyungmin Park <kmpark@infradead.org>
Cc: Madhusudhan Chikkature <madhu.cr@ti.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ben Gardiner <bengardiner@nanometrics.ca>
Cc: <linux-mmc@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Adrian Hunter
Enable MMC to service discard requests. In the case of SD and MMC cards that do not support trim, discards become erases. In the case of cards (MMC) that only allow erases in multiples of the erase group size, round to the nearest completely discarded erase group.
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Acked-by: Jens Axboe <axboe@kernel.dk>
Cc: Kyungmin Park <kmpark@infradead.org>
Cc: Madhusudhan Chikkature <madhu.cr@ti.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ben Gardiner <bengardiner@nanometrics.ca>
Cc: <linux-mmc@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 August 2010 (2 commits)
Committed by FUJITA Tomonori
This removes q->prepare_flush_fn completely (and changes the blk_queue_ordered() API accordingly).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
Committed by Christoph Hellwig
Remove all the trivial wrappers for the cmd_type and cmd_flags fields in struct request. This allows much easier grepping for different request types instead of unwinding through macros.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
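A representative call-site change; blk_fs_request() was one of the removed wrappers:

    /* before: opaque wrapper macro */
    if (!blk_fs_request(req))
            goto out;

    /* after: the field test is spelled out and greppable */
    if (req->cmd_type != REQ_TYPE_FS)
            goto out;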
-
- 30 March 2010 (1 commit)
Committed by Tejun Heo
include cleanup: update gfp.h and slab.h includes to prepare for breaking the implicit slab.h inclusion from percpu.h.
percpu.h is included by sched.h and module.h, and thus ends up being included when building most .c files. percpu.h includes slab.h, which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.
The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script was used as the basis of the conversion: http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following:
* Scans files for gfp and slab usages and updates includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.
* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered: alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.
The conversion was done in the following steps:
1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.
2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.
5. The script was run on all .h files, but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles), and a few more options had to be turned off depending on the arch to make things build (like ipr on powerpc/64, which failed due to missing writeq).
   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that they could be applied as a separate patch and serve as a bisection point.
Given the fact that I had only a couple of failures from the tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers, which should be easily discoverable on most builds of the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
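In drivers/mmc/card/queue.c itself, the sweep amounts to one added include (a sketch of the typical per-file change):

    #include <linux/slab.h>   /* queue.c calls kmalloc()/kfree() directly */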
-
- 26 February 2010 (2 commits)
Committed by Martin K. Petersen
Except for SCSI, no device drivers distinguish between physical and hardware segment limits. Consolidate the two into a single segment limit.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
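A representative conversion at an mmc call site (a sketch; the mmc_host field names shown are those of this era):

    /* before: two limits, almost always set to the same value */
    blk_queue_max_hw_segments(mq->queue, host->max_hw_segs);
    blk_queue_max_phys_segments(mq->queue, host->max_phys_segs);

    /* after: one consolidated limit */
    blk_queue_max_segments(mq->queue, host->max_hw_segs);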
-
Committed by Martin K. Petersen
The block layer calling convention is blk_queue_<limit name>(). blk_queue_max_sectors() predates this practice, leading to some confusion. Rename the function to appropriately reflect that its intended use is to set max_hw_sectors. Also introduce a temporary wrapper for backwards compatibility, which can be removed after the merge window is closed.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
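A sketch of the rename at a call site (the argument shown is illustrative):

    /* before */
    blk_queue_max_sectors(mq->queue, host->max_req_size / 512);

    /* after: same limit, convention-conforming name */
    blk_queue_max_hw_sectors(mq->queue, host->max_req_size / 512);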
-
- 12 January 2010 (1 commit)
Committed by Adrian Hunter
The main bug was that blk_cleanup_queue() was called while the block device could still be in use, for example because the card was removed while files were still open. In addition, to be sure that mmc_request() will get called for all new requests (so it can error them out), the queue is emptied during cleanup. This is done after the worker thread is stopped, to avoid racing with it. Finally, it is not a device error for this to be happening, so quiet the (sometimes very many) error messages.
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Cc: <linux-mmc@vger.kernel.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 11 May 2009 (2 commits)
Committed by Tejun Heo
Till now the block layer allowed two separate modes of request execution. A request is always acquired from the request queue via elv_next_request(). After that, drivers are free to either dequeue it or process it without dequeueing. Dequeueing allows elv_next_request() to return the next request so that multiple requests can be in flight. Executing requests without dequeueing has its merits, mostly in allowing drivers for simpler devices which can't do sg to deal with segments only, without considering request boundaries. However, the benefit this brings is dubious and declining, while the cost of the API ambiguity is increasing. Segment-based drivers are usually for very old or limited devices, and as converting to the dequeueing model isn't difficult, it doesn't justify the API overhead it puts on the block layer and its more modern users. Previous patches converted all block low-level drivers to the dequeueing model. This patch completes the API transition by:
* renaming elv_next_request() to blk_peek_request()
* renaming blkdev_dequeue_request() to blk_start_request()
* adding blk_fetch_request(), which is a combination of peek and start
* disallowing completion of queued (not started) requests
* applying the new API to all LLDs
Renamings are for consistency and to break out-of-tree code so that it's apparent that out-of-tree drivers need updating.
[ Impact: block request issue API cleanup, no functional change ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Mike Miller <mike.miller@hp.com>
Cc: unsik Kim <donari75@gmail.com>
Cc: Paul Clements <paul.clements@steeleye.com>
Cc: Tim Waugh <tim@cyberelk.net>
Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Laurent Vivier <Laurent@lvivier.info>
Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: Stefan Weinhuber <wein@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
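The relationship between the new helpers, in brief (the combined helper is exactly peek plus start):

    struct request *req = blk_peek_request(q);  /* look at the head of the queue */
    if (req)
            blk_start_request(req);             /* dequeue: the request is now in flight */

    /* equivalent shorthand */
    req = blk_fetch_request(q);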
-
Committed by Tejun Heo
plat-omap/mailbox, floppy, viocd, mspro_block, i2o_block and mmc/card/queue are already pretty close to the dequeueing model and can be converted with simple changes. Convert them. While at it:
* xen-blkfront: the !fs check is moved downwards to share the dequeue call with the normal path.
* mspro_block: __blk_end_request(..., blk_rq_cur_bytes()) converted to __blk_end_request_cur().
* mmc/card/queue: a loop of __blk_end_request() converted to __blk_end_request_all().
[ Impact: dequeue in-flight request ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
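For mmc/card/queue the simplification looks roughly like this (a sketch):

    /* before: complete the in-flight request piecewise in a loop */
    do {
            ret = __blk_end_request(req, -EIO, blk_rq_cur_bytes(req));
    } while (ret);

    /* after: fail the whole request in one call */
    __blk_end_request_all(req, -EIO);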
-
- 15 October 2008 (1 commit)
Committed by Pierre Ossman
Signed-off-by: Pierre Ossman <drzeus@drzeus.cx>
-
- 12 October 2008 (3 commits)
Committed by Pierre Ossman
We do not support PC (SCSI) commands, so don't pretend we do by letting them through.
Signed-off-by: Pierre Ossman <drzeus@drzeus.cx>
-
Committed by Pierre Ossman
The MMC block driver services requests one at a time and in strict order. Indicate this to the block layer so that it can handle barriers in an efficient manner.
Signed-off-by: Pierre Ossman <drzeus@drzeus.cx>
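At the time this was a single declaration in queue setup; a sketch using the pre-2.6.37 barrier API (the three-argument form, with no prepare_flush_fn, matches this era):

    /* requests complete in order, so simple drain ordering suffices */
    blk_queue_ordered(mq->queue, QUEUE_ORDERED_DRAIN, NULL);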
-
Committed by Pierre Ossman
Make sure we consider the maximum block count when we tell the block layer about the maximum sector count. That way we don't have to chop up the request ourselves.
Signed-off-by: Pierre Ossman <drzeus@drzeus.cx>
-
- 23 July 2008 (1 commit)
Committed by Pierre Ossman
Support highmem pages in the bounce buffer code by using the sg_copy_from/to_buffer() functions.
Signed-off-by: Pierre Ossman <drzeus@drzeus.cx>
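These helpers kmap each page as they copy, so highmem sg entries need no up-front kernel mapping; a sketch of the bounce path (field names follow the mmc_queue of this era):

    /* copy the scatterlist into the contiguous bounce buffer before a write */
    sg_copy_to_buffer(mq->bounce_sg, mq->bounce_sg_len,
                      mq->bounce_buf, mq->sg[0].length);

    /* ...and copy back out of it after a read completes */
    sg_copy_from_buffer(mq->bounce_sg, mq->bounce_sg_len,
                        mq->bounce_buf, mq->sg[0].length);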
-
- 28 January 2008 (1 commit)
Committed by Kiyoshi Ueda
This patch converts mmc to use the blk_end_request interfaces. Related 'uptodate' arguments are converted to 'error'.
Cc: Pierre Ossman <drzeus-mmc@drzeus.cx>
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
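The argument flips sense in the conversion: 'uptodate' (1 on success) becomes 'error' (0 on success, a negative errno on failure). An illustrative before/after, with "bytes" standing in for the completed byte count:

    /* before: end_that_request-style, uptodate flag */
    end_that_request_chunk(req, 1, bytes);   /* 1 == success */

    /* after: blk_end_request interface, error code */
    __blk_end_request(req, 0, bytes);        /* 0 == success, e.g. -EIO on failure */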
-
- 27 November 2007 (1 commit)
Committed by Haavard Skinnemoen
mmc_init_queue() only initializes the scatterlists with sg_init_table() when using a bounce buffer. This leads to a BUG() when CONFIG_DEBUG_SG is set.
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
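The fix is to initialise the regular (non-bounce) scatterlist too; sg_init_table() zeroes the entries and sets the end marker that CONFIG_DEBUG_SG verifies. A sketch of the allocation path:

    mq->sg = kmalloc(sizeof(struct scatterlist) * host->max_phys_segs,
                     GFP_KERNEL);
    if (!mq->sg) {
            ret = -ENOMEM;
            goto cleanup_queue;
    }
    sg_init_table(mq->sg, host->max_phys_segs);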
-
- 08 November 2007 (1 commit)
Committed by Roland Dreier
Commit 45711f1a ("[SG] Update drivers to use sg helpers") had the following bogus change in drivers/mmc/card/queue.c:
> -    src_buf = page_address(src->page) + src->offset;
> +    src_buf = sg_virt(dst);
(Notice that "src" is converted to "dst".) Turn this "dst" back into the intended "src".
Signed-off-by: Roland Dreier <roland@digitalvampire.org>
Tested-by: Romano Giannetti <romano.giannetti@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 23 October 2007 (1 commit)
Committed by Jens Axboe
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 16 October 2007 (1 commit)
Committed by Jens Axboe
Otherwise we could have junk in the sg fields, fooling the sg chaining into thinking ->page is valid.
Acked-by: Pierre Ossman <drzeus@drzeus.cx>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 09 August 2007 (1 commit)
Committed by Pierre Ossman
Reorganize the code that initializes mmc_block's bounce buffer in order to avoid warnings when MMC_BLOCK_BOUNCE isn't used.
Signed-off-by: Pierre Ossman <drzeus@drzeus.cx>
-