- 03 Jul 2008, 2 commits
-
By Martin K. Petersen
Some block devices support verifying the integrity of requests by way of checksums or other protection information that is submitted along with the I/O. This patch implements support for generating and verifying integrity metadata, as well as correctly merging, splitting and cloning bios and requests that have this extra information attached. See Documentation/block/data-integrity.txt for more information.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
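A rough sketch of how a driver might hook into this. The callback names and the 8-byte per-sector tuple are hypothetical; only blk_integrity_register() and the struct blk_integrity fields are from the patch:

    #include <linux/blkdev.h>
    #include <linux/genhd.h>

    /* Hypothetical callbacks: generate and verify 8 bytes of protection
     * information per sector, exchanged via struct blk_integrity_exchg. */
    static void my_pi_generate(struct blk_integrity_exchg *bix)
    {
            /* compute guard/ref tags from bix->data_buf into bix->prot_buf */
    }

    static int my_pi_verify(struct blk_integrity_exchg *bix)
    {
            /* recompute and compare; return 0 on success, -EIO on mismatch */
            return 0;
    }

    static struct blk_integrity my_pi = {
            .name           = "HYPOTHETICAL-PI",
            .generate_fn    = my_pi_generate,
            .verify_fn      = my_pi_verify,
            .tuple_size     = 8,    /* bytes of protection info per sector */
            .tag_size       = 0,
    };

    /* in the driver's probe path, once the gendisk exists: */
    static int my_enable_pi(struct gendisk *disk)
    {
            return blk_integrity_register(disk, &my_pi);
    }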
-
By Jens Axboe
Everything was moved to struct request_queue a few kernel revisions ago, maintaining the deprecated typedef to avoid breaking things. Now the time has come to get rid of that typedef.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 30 Apr 2008, 2 commits
-
By Jens Axboe
spin_is_locked() doesn't work on UP without spinlock debugging. Make it safer and just return 1 on UP, so we don't get false positives. The plan is to kill this debug function during the -rc cycle.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Linus Torvalds
The new queue_flag_set/clear() functions verify that the queue is locked, but in doing so they will actually oops if the queue lock hasn't been initialized at all. So fix the lock debug test to consider the "no lock" case to be unlocked. This way you get a nice WARN_ON_ONCE() instead of a fatal oops. Bug introduced by commit 75ad23bc ("block: make queue flags non-atomic").
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
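Taken together with the UP fix above, the lock-debug helper ends up looking roughly like this (a sketch, not necessarily the exact kernel code):

    static inline int queue_is_locked(struct request_queue *q)
    {
    #ifdef CONFIG_SMP
            spinlock_t *lock = q->queue_lock;

            /* an uninitialized queue lock counts as unlocked, so callers
             * get a WARN_ON_ONCE() instead of a NULL-pointer oops */
            return lock ? spin_is_locked(lock) : 0;
    #else
            return 1;       /* spin_is_locked() is useless on UP: assume locked */
    #endif
    }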
-
- 29 Apr 2008, 4 commits
-
By Alan D. Brunelle
The block I/O + elevator + I/O scheduler code spends a lot of time trying to merge I/Os -- rightfully so under "normal" circumstances. However, if one were to know that the incoming I/O stream was /very/ random in nature, those cycles are wasted. This patch adds a per-request_queue tunable that (when set) disables merge attempts (beyond the simple one-hit cache check), thus freeing up a non-trivial amount of CPU cycles.
Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
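A minimal sketch of what the tunable buys in the I/O path; the flag is flipped through the queue's sysfs nomerges attribute, and the surrounding function here is illustrative only:

    #include <linux/blkdev.h>

    /* bail out of merge scanning early when merging is disabled per-queue */
    static int elv_try_merge_sketch(struct request_queue *q, struct bio *bio)
    {
            if (blk_queue_nomerges(q))
                    return 0;       /* skip the elevator merge scan entirely */

            /* ... normal merge attempt would go here ... */
            return 1;
    }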
-
By FUJITA Tomonori
This patch changes rq->cmd from a static array to a pointer, to support large commands. We rarely handle large commands, so as an optimization a struct request still has a static array for a command; rq_init sets the rq->cmd pointer to that static array.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By FUJITA Tomonori
This renames rq_init() to blk_rq_init() and exports it. Any path that hands a request to the block layer needs to call it to initialize the request. This is a preparation for large command support, which needs to initialize the request in a proper way (that is, just doing a memset() will not work).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
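A sketch of the resulting driver-side pattern for a request built by hand (the CDB byte below is just an example):

    #include <linux/blkdev.h>

    static void my_setup_request(struct request_queue *q, struct request *rq)
    {
            /* a plain memset() is no longer enough: blk_rq_init() also points
             * rq->cmd at the request's inline command buffer */
            blk_rq_init(q, rq);

            rq->cmd[0] = 0x00;      /* e.g. TEST UNIT READY */
            rq->cmd_len = 6;
    }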
-
By Nick Piggin
We can save some atomic ops in the IO path if we clearly define the rules for how to modify the queue flags.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
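The resulting helpers look roughly like this: plain non-atomic bitops, valid because every modifier now holds the queue lock (a sketch incorporating the later lock-debug fixes above):

    static inline void queue_flag_set(unsigned int flag, struct request_queue *q)
    {
            WARN_ON_ONCE(!queue_is_locked(q));
            __set_bit(flag, &q->queue_flags);       /* non-atomic: lock held */
    }

    static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
    {
            WARN_ON_ONCE(!queue_is_locked(q));
            __clear_bit(flag, &q->queue_flags);
    }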
-
- 21 Apr 2008, 2 commits
-
By Andi Kleen
Only noticed this while hacking something else, no test case.

blk_max_low_pfn is initialized once at bootup by the block layer from max_low_pfn. But max_low_pfn is not necessarily constant over the runtime of the system when you consider memory hotplug. What could happen is that someone adds memory later, the block layer doesn't get updated, and it then starts bouncing memory unnecessarily.

Also, on 64bit blk_max_low_pfn isn't actually needed, because it essentially just disables bouncing and there is no highmem. And nobody can pass pfns > max_low_pfn to the block layer, because those wouldn't have a struct page, and I suspect the block layer wouldn't be very happy without that.

So set BLK_BOUNCE_HIGH to infinity (-1ULL) on 64bit. That avoids the problem of having to update it on memory hotadd. On 32bit I kept the same behaviour, because at least on i386 memory hotadd only adds HIGHMEM, never lowmem. BLK_BOUNCE_ANY is always set to infinity on both 32 and 64bit.
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Jens Axboe <jens.axboe@oracle.com>
Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
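The resulting bounce limits are roughly:

    #if BITS_PER_LONG == 32
    #define BLK_BOUNCE_HIGH         ((u64)blk_max_low_pfn << PAGE_SHIFT)
    #else
    #define BLK_BOUNCE_HIGH         (-1ULL) /* no highmem on 64bit: never bounce */
    #endif
    #define BLK_BOUNCE_ANY          (-1ULL)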
-
By FUJITA Tomonori
blk_rq_map_user adjusts bi_size of the last bio. It breaks the rule that req->data_len (the true data length) is equal to sum(bio). It broke the scsi command completion code.

Commit e97a294e was introduced to fix the above issue. However, the partial completion code doesn't work with it. The commit is also a layer violation (the scsi mid-layer should not know about the block layer's padding).

This patch moves the padding adjustment to blk_rq_map_sg (suggested by James). The padding works like the drain buffer. This patch breaks the rule that req->data_len is equal to sum(sg); however, the drain buffer already broke it. So this patch just restores the rule that req->data_len is equal to sum(bio) without breaking anything new.

Now when a low level driver needs padding, blk_rq_map_user and blk_rq_map_user_iov guarantee there's enough room for padding, and blk_rq_map_sg can safely extend the last entry of a scatter list. blk_rq_map_sg must extend the last entry of a scatter list only for a request that got through bio_copy_user_iov, so this patch introduces a new REQ_COPY_USER flag.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
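A sketch of the padding step this adds at the tail of blk_rq_map_sg(). The helper name is invented for illustration; dma_pad_mask and extra_len are the queue/request fields from the related padding patches below:

    #include <linux/blkdev.h>
    #include <linux/scatterlist.h>

    static void pad_last_sg_entry(struct request_queue *q, struct request *rq,
                                  struct scatterlist *last_sg)
    {
            /* only requests that went through bio_copy_user_iov() are
             * guaranteed to have room past the data, hence the flag check */
            if ((rq->cmd_flags & REQ_COPY_USER) &&
                (rq->data_len & q->dma_pad_mask)) {
                    unsigned int pad_len =
                            (q->dma_pad_mask & ~rq->data_len) + 1;

                    last_sg->length += pad_len;     /* sum(sg) grows ...    */
                    rq->extra_len += pad_len;       /* ... data_len doesn't */
            }
    }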
-
- 04 Mar 2008, 2 commits
-
By Tejun Heo
Block layer alignment was used for two different purposes - memory alignment and padding. This causes problems in lower layers, because drivers which only require memory alignment end up with an adjusted rq->data_len. Separate out padding such that padding occurs iff the driver explicitly requests it.

Tomo: restore the code to update bio in blk_rq_map_user introduced by commit 40b01b9b according to padding alignment.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
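After this separation a driver has to opt into padding explicitly; a sketch, assuming a controller that wants transfers in 4-byte multiples:

    #include <linux/blkdev.h>

    static void my_configure_queue(struct request_queue *q)
    {
            blk_queue_update_dma_alignment(q, 3);   /* buffers 4-byte aligned   */
            blk_queue_dma_pad(q, 3);                /* lengths padded to 4 bytes */
    }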
-
By FUJITA Tomonori
The meaning of rq->data_len was changed to the length of an allocated buffer instead of the true data length. It breaks SG_IO friends and bsg. This patch restores the meaning of rq->data_len to the true data length and adds rq->extra_len to store the extended length (due to the drain buffer and padding).

This patch also removes the code to update bio in blk_rq_map_user introduced by commit 40b01b9b. That commit adjusts bio according to memory alignment (queue_dma_alignment). However, memory alignment is NOT padding alignment. This adjustment also breaks SG_IO friends and bsg. Padding alignment needs to be fixed in a proper way (by a separate patch).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <axboe@carl.home.kernel.dk>
-
- 19 Feb 2008, 2 commits
-
By Tejun Heo
Draining shouldn't be done for commands where overflow may indicate data integrity issues. Add a dma_drain_needed callback to request_queue. The drain buffer is appended iff this function returns non-zero.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
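A sketch of selective draining with the new callback; the predicate and the 512-byte drain size are made up for illustration:

    #include <linux/blkdev.h>

    /* drain only for SCSI pass-through commands, for example */
    static int my_drain_needed(struct request *rq)
    {
            return blk_pc_request(rq);
    }

    static int my_setup_drain(struct request_queue *q, void *drain_buf)
    {
            /* the buffer is appended to the sg list iff my_drain_needed()
             * returns non-zero for the request being mapped */
            return blk_queue_dma_drain(q, my_drain_needed, drain_buf, 512);
    }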
-
By Tejun Heo
With padding and draining moved into it, the block layer may now extend requests as directed by queue parameters, so a request has two sizes - the original request size and the extended size, which matches the size of the area pointed to by bios and later by sgs. The latter size is what lower layers are primarily interested in when allocating, filling up DMA tables and setting up the controller.

Both padding and draining extend the data area to accommodate controller characteristics. As any controller which speaks SCSI can handle underflows, feeding it the larger data area is safe.

So, this patch makes the primary data length field, request->data_len, indicate the size of the full data area, and adds a separate length field, request->raw_data_len, for the unmodified request size. The latter is used to report to higher layers (userland) and wherever the original request size should be fed to the controller or device.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 08 Feb 2008, 1 commit
-
By Jens Axboe
Rearrange fields in cache order and initialize some fields that we didn't previously init. Remove the init of ->completion_data, it's part of a union with ->hash. Luckily, clearing the rb node is the same as setting it to NULL!
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 01 Feb 2008, 2 commits
-
By Jens Axboe
It blindly copies everything in the io_context, including the lock. That doesn't work so well for either lock ordering or lockdep. There seems to be zero point in swapping io contexts on a request-to-request merge, so the best course of action is to just remove it.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe
No point in passing signed integers as the byte count, they can never be negative.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 30 Jan 2008, 1 commit
-
By Jens Axboe
struct io_context was not defined, just add an empty forward decl.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 28 Jan 2008, 10 commits
-
By James Bottomley
These DMA drain buffer implementations in drivers are pretty horrible to do in terms of manipulating the scatterlist. Plus they're being done in at least drivers/ide and drivers/ata, so we now have code duplication.

The one use case for this, as I understand it, is AHCI controllers doing PIO mode to mmc devices but translating this to DMA at the controller level.

So, what about adding a callback to the block layer that permits the adding of the drain buffer for the problem devices? The idea is that you'd do this in slave_configure after you find one of these devices.

The beauty of doing it in the block layer is that it quietly adds the drain buffer to the end of the sg list, so it automatically gets mapped (and unmapped) without anything unusual having to be done to the scatterlist in drivers/scsi or drivers/ata and without any alteration to the transfer length.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe
Detach task state from the ioc; instead keep track of how many processes are accessing the ioc.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe
This is where it belongs, and then it doesn't take up space for a process that doesn't do IO.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Kiyoshi Ueda
This patch converts the 'uptodate' arguments of the no longer exported interfaces, end_that_request_first/last, to 'error', and removes the internal conversions for it in the blk_end_request interfaces. Also, this patch removes the no longer needed end_io_error().
Cc: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Kiyoshi Ueda
This patch removes the following functions:
o end_that_request_first()
o end_that_request_chunk()
and stops exporting the function below:
o end_that_request_last()
Cc: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Kiyoshi Ueda
This patch adds a variant of the interface, blk_end_bidi_request(), which completes a bidi request. A bidi request must be completed as a whole, both rq and rq->next_rq at once, so the interface takes 2 arguments for the completion size.

As for ->end_io, only rq->end_io is called (rq->next_rq->end_io is not called). So if special completion handling is needed, the handler must be set on rq->end_io, and the handler must take care of freeing next_rq too, since the interface doesn't take care of it if rq->end_io is not NULL.
Cc: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
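A sketch of a whole-request bidi completion; blk_rq_bytes() is the byte-size helper added elsewhere in this series:

    #include <linux/blkdev.h>

    static void my_complete_bidi(struct request *rq, int error)
    {
            /* both directions finish at once; rq->end_io, if set, must also
             * free rq->next_rq */
            blk_end_bidi_request(rq, error, blk_rq_bytes(rq),
                                 blk_rq_bytes(rq->next_rq));
    }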
-
By Kiyoshi Ueda
This patch adds a variant of the interface, blk_end_request_callback(), which has a driver callback feature. Drivers may need to do special work between end_that_request_first() and end_that_request_last(). For such drivers, blk_end_request_callback() allows them to pass a callback function which is called between end_that_request_first() and end_that_request_last().

This interface is only a fallback from the other blk_end_request interfaces. Drivers should avoid such tricky behaviors and use the other interfaces as much as possible. Currently, only one driver, ide-cd, needs this interface, so this interface should/will be removed after the driver removes such tricky behaviors.

o ide-cd (cdrom_newpc_intr())
In PIO mode, cdrom_newpc_intr() needs to defer end_that_request_last() until the device clears DRQ_STAT and raises an interrupt after end_that_request_first(). So end_that_request_first() and end_that_request_last() are called separately in cdrom_newpc_intr(). This means blk_end_request_callback() has to return without completing the request, even if there is no leftover in the request. To satisfy this requirement, the callback function has a return value so that drivers can tell blk_end_request_callback() to return without completing the request.
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Kiyoshi Ueda
This patch adds/exports functions to get the size of a request in bytes. They are useful because the blk_end_request interfaces take bytes as a completed I/O size instead of sectors.
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
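Roughly what the helpers do, as a sketch based on the request fields of the time:

    /* total remaining size of the request, in bytes */
    unsigned int blk_rq_bytes(struct request *rq)
    {
            if (blk_fs_request(rq))
                    return rq->hard_nr_sectors << 9;

            return rq->data_len;
    }

    /* size of the current segment being completed, in bytes */
    unsigned int blk_rq_cur_bytes(struct request *rq)
    {
            if (blk_fs_request(rq))
                    return rq->current_nr_sectors << 9;

            return rq->data_len;
    }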
-
By Kiyoshi Ueda
This patch adds 2 new interfaces for request completion:
o blk_end_request()   : called without queue lock
o __blk_end_request() : called with queue lock held

blk_end_request takes 'error' as an argument instead of the 'uptodate' which the current end_that_request_* take. The meanings of the values are below, and the value is used when the bio is completed:
    0 : success
  < 0 : error

Some device drivers call the generic functions below between end_that_request_{first/chunk} and end_that_request_last():
o add_disk_randomness()
o blk_queue_end_tag()
o blkdev_dequeue_request()
These are called in the blk_end_request interfaces as a part of generic request completion, so all device drivers now get these calls through the interfaces.

To decide whether to call blkdev_dequeue_request(), blk_end_request uses list_empty(&rq->queuelist) (a blk_queued_rq() macro is added for it). So drivers must re-initialize it using list_init() or so before calling blk_end_request, if they use it for their own specific purpose. (Currently, there is no driver which completes a request without re-initializing the queuelist after using it, so rq->queuelist can be used for the purpose above.)

"Normal" drivers can be converted to use blk_end_request() in a standard way, shown below:

a) end_that_request_{chunk/first}
   spin_lock_irqsave()
   (add_disk_randomness(), blk_queue_end_tag(), blkdev_dequeue_request())
   end_that_request_last()
   spin_unlock_irqrestore()
   => blk_end_request()

b) spin_lock_irqsave()
   end_that_request_{chunk/first}
   (add_disk_randomness(), blk_queue_end_tag(), blkdev_dequeue_request())
   end_that_request_last()
   spin_unlock_irqrestore()
   => spin_lock_irqsave()
      __blk_end_request()
      spin_unlock_irqrestore()

c) spin_lock_irqsave()
   (add_disk_randomness(), blk_queue_end_tag(), blkdev_dequeue_request())
   end_that_request_last()
   spin_unlock_irqrestore()
   => blk_end_request()
      or
      spin_lock_irqsave()
      __blk_end_request()
      spin_unlock_irqrestore()
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
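The case (a) conversion above, as a sketch of a typical driver's completion path (function names here are illustrative):

    /* before: three steps, with the queue lock taken by the driver */
    static void my_old_complete(struct request *rq, int uptodate,
                                unsigned int nr_bytes)
    {
            unsigned long flags;

            if (end_that_request_chunk(rq, uptodate, nr_bytes))
                    return;         /* leftover: request not done yet */

            spin_lock_irqsave(rq->q->queue_lock, flags);
            blkdev_dequeue_request(rq);
            end_that_request_last(rq, uptodate);
            spin_unlock_irqrestore(rq->q->queue_lock, flags);
    }

    /* after: dequeue and final completion happen inside the block layer */
    static void my_new_complete(struct request *rq, int error,
                                unsigned int nr_bytes)
    {
            blk_end_request(rq, error, nr_bytes);
    }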
-
By Pete Wyckoff
Let queue_dma_alignment return 0 if it was specifically set to 0. This permits devices with no particular alignment restrictions to use arbitrary user space buffers without copying.
Signed-off-by: Pete Wyckoff <pw@osc.edu>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
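Roughly the resulting helper: a dma_alignment explicitly set to 0 now passes through, instead of being treated as "use the 511 default":

    static inline int queue_dma_alignment(struct request_queue *q)
    {
            return q ? q->dma_alignment : 511;
    }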
-
- 27 Jan 2008, 1 commit
-
By Bartlomiej Zolnierkiewicz
Based on the earlier work by Tejun Heo. All users are gone so we can finally remove it.
Cc: Tejun Heo <htejun@gmail.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
-
- 26 Jan 2008, 1 commit
-
By Bartlomiej Zolnierkiewicz
Based on the earlier work by Tejun Heo. All users are gone so we can finally remove it.
Cc: Tejun Heo <htejun@gmail.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
-
- 12 Jan 2008, 1 commit
-
By James Bottomley
The purpose of this is to allow stacked alignment settings, with the ultimate queue alignment being set to the largest alignment requirement in the stack. The reason for this is so that the SCSI mid-layer can relax the default alignment requirements (which are basically causing a lot of superfluous copying to go on in the SG_IO interface) while allowing transports, devices or HBAs to add stricter limits if they need them.
Acked-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
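A sketch of the stacking behaviour: an update can only tighten the alignment, never relax what another layer already required:

    void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
    {
            BUG_ON(mask > PAGE_SIZE);

            if (mask > q->dma_alignment)
                    q->dma_alignment = mask;
    }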
-
- 09 Nov 2007, 1 commit
-
By Alan D. Brunelle
Added the blk_unplug interface, allowing all invocations of unplugs to result in a generated blktrace UNPLUG.
Signed-off-by: Alan D. Brunelle <Alan.Brunelle@hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 29 Oct 2007, 1 commit
-
By Jens Axboe
For the locking to work, only the tag map and tag bit map may be shared (incidentally, I was just explaining this to Nick yesterday, but I apparently didn't review the code well enough myself). But we also share the busy list! The busy_list must be queue private, or we need a block_queue_tag covering lock as well. So we have to move the busy_list to the queue.

This'll work fine, and it'll actually also fix a problem with blk_queue_invalidate_tags(), which would invalidate tags across all shared queues. This is a bit confusing: the low level driver should call it for each queue separately, since otherwise you cannot kill tags on just a single queue for e.g. a hard drive that stops responding. Since the function has no callers currently, it's not an issue.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 16 Oct 2007, 3 commits
-
By Jens Axboe
Then we can get rid of ->issue_flush_fn() and all the driver private implementations of that.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
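Callers then go through the one generic entry point; a sketch:

    #include <linux/blkdev.h>

    /* flush a device's write cache via the generic path, with no
     * driver-private ->issue_flush_fn() involved */
    static int my_flush_device(struct block_device *bdev)
    {
            sector_t error_sector;

            return blkdev_issue_flush(bdev, &error_sector);
    }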
-
By Jens Axboe
This implements functionality to pass down or insert a barrier in a queue, without having data attached to it. The ->prepare_flush_fn() infrastructure from data barriers is reused to provide this functionality.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe
We can use this helper in the elevator core for BLKPREP_KILL, and it'll also be useful for the empty barrier patch.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 12 Oct 2007, 1 commit
-
By Jens Axboe
We need it even if CONFIG_BLOCK is disabled, so move it outside of the block layer include system.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 10 Oct 2007, 3 commits
-
By NeilBrown
The entire function of flush_dry_bio_endio is to undo the effects of bio_endio (when called on a barrier request). So remove the function and the call to bio_endio. This allows us to remove "bi_size" from "struct request_queue".

### Diffstat output
 ./block/ll_rw_blk.c      | 39 ++-------------------------------------
 ./include/linux/blkdev.h |  1 -
 2 files changed, 2 insertions(+), 38 deletions(-)
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe
Hide everything in blkdev.h when CONFIG_BLOCK isn't set, and fix up the (few) files that fail to build because they were relying on blkdev.h pulling in extra includes for them.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By NeilBrown
blk_rq_bio_prep is exported for use in exactly one place. That place can benefit from using the new blk_rq_append_bio instead. So:
- change dm-emc to call blk_rq_append_bio,
- stop exporting blk_rq_bio_prep, and
- initialise rq_disk in blk_rq_bio_prep, as dm-emc needs it.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-