- 04 March 2008, 2 commits
-
-
By FUJITA Tomonori

The meaning of rq->data_len was changed to the length of an allocated buffer instead of the true data length. This breaks SG_IO and friends, and bsg. This patch restores the meaning of rq->data_len to the true data length and adds rq->extra_len to store the extended length (due to the drain buffer and padding).

This patch also removes the code to update the bio in blk_rq_map_user introduced by commit 40b01b9b. That commit adjusts the bio according to memory alignment (queue_dma_alignment). However, memory alignment is NOT padding alignment. This adjustment also breaks SG_IO and friends, and bsg. Padding alignment needs to be fixed properly (by a separate patch).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <axboe@carl.home.kernel.dk>
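For reference, a minimal sketch of the two fields as this fix defines them (field names from the commit text; the real struct request has many more members, omitted here):

```c
/* Sketch only -- surrounding members of struct request omitted. */
struct request {
	/* ... */
	unsigned int data_len;	/* true data length of the request */
	unsigned int extra_len;	/* extension: drain buffer + padding */
	/* ... */
};
```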
-
By Randy Dunlap

kernel-doc for block/:
- add missing parameters
- fix one function's parameter list (remove blank line)
- add 2 source files to DocBook for non-exported kernel-doc functions

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 19 February 2008, 2 commits
-
-
By Tejun Heo

With padding and draining moved into it, the block layer may now extend requests as directed by queue parameters, so a request has two sizes: the original request size and the extended size, which matches the size of the area pointed to by bios and later by sgs. The latter size is what lower layers are primarily interested in when allocating, filling up DMA tables and setting up the controller.

Both padding and draining extend the data area to accommodate controller characteristics. As any controller which speaks SCSI can handle underflows, feeding a larger data area is safe. So this patch makes the primary data length field, request->data_len, indicate the size of the full data area, and adds a separate length field, request->raw_data_len, for the unmodified request size. The latter is used for reporting to the higher layer (userland) and wherever the original request size should be fed to the controller or device.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Adrian Bunk

request_cachep needlessly became global.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Jens Axboe <axboe@carl.home.kernel.dk>
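Presumably the fix simply restores internal linkage to the cache, along these lines (a sketch, not the verified diff):

```c
#include <linux/slab.h>

/* block/blk-core.c: re-mark the request slab cache as file-local */
static struct kmem_cache *request_cachep;
```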
-
- 08 February 2008, 3 commits
-
-
By Jerome Marchand

Removes the now-unused old partition statistics code.

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jerome Marchand

Updates the enhanced partition statistics in the generic block layer alongside the disk statistics.

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe

Rearrange fields in cache order and initialize some fields that we didn't previously init. Remove the init of ->completion_data; it's part of a union with ->hash. Luckily, clearing the rb node is the same as setting it to NULL!

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 01 February 2008, 2 commits
-
-
By Jens Axboe

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe

No point in passing signed integers as the byte count; they can never be negative.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 30 January 2008, 6 commits
-
-
By Jens Axboe

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe

Adds files for barrier handling, rq execution, io context handling, mapping data to requests, and queue settings.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe

Separates the tag and sysfs handling from ll_rw_blk.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe

That way we retain history in blk-core.c.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 28 January 2008, 12 commits
-
-
By James Bottomley

These DMA drain buffer implementations in drivers are pretty horrible to do in terms of manipulating the scatterlist. Plus, they're being done at least in drivers/ide and drivers/ata, so we now have code duplication.

The one use case for this, as I understand it, is AHCI controllers doing PIO mode to mmc devices but translating this to DMA at the controller level. So what about adding a callback to the block layer that permits adding the drain buffer for the problem devices? The idea is that you'd do this in slave_configure after you find one of these devices.

The beauty of doing it in the block layer is that it quietly adds the drain buffer to the end of the sg list, so it automatically gets mapped (and unmapped) without anything unusual having to be done to the scatterlist in drivers/scsi or drivers/ata, and without any alteration to the transfer length.

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
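A hedged sketch of how a SCSI LLD might use such a hook from slave_configure. The helper name blk_queue_dma_drain and the predicate are assumptions drawn from the commit description, not a verified API:

```c
#include <linux/blkdev.h>
#include <linux/slab.h>
#include <scsi/scsi_device.h>

#define MY_DRAIN_SIZE	4096	/* illustrative drain buffer size */

/* hypothetical predicate: does this device need a drain buffer? */
static int my_device_needs_drain(struct scsi_device *sdev)
{
	return 0;	/* placeholder; match the problem device here */
}

/* Hypothetical slave_configure hook: once a problem device is found,
 * hand the block layer a drain buffer to quietly append to the sg list. */
static int my_slave_configure(struct scsi_device *sdev)
{
	void *buf;

	if (!my_device_needs_drain(sdev))
		return 0;

	buf = kmalloc(MY_DRAIN_SIZE, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* helper name/signature assumed from the commit description */
	return blk_queue_dma_drain(sdev->request_queue, buf, MY_DRAIN_SIZE);
}
```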
-
By Jens Axboe

The io context sharing introduced a per-ioc spinlock that would protect the cfq io context lookup. That is a regression from the original, since we never needed any locking there because the ioc/cic were process private.

The cic lookup is changed from an rbtree construct to a radix tree, which lets us use RCU to make the reader side lockless. That is the performance-critical path; modifying the radix tree is only done on process creation (when that process first does IO, actually) and on process exit (if that process has done IO). As it happens, radix trees are also much faster for this type of lookup, where the key is a pointer. It's a very sparse tree.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
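A minimal sketch of the lockless lookup pattern being described, assuming an ioc-embedded radix tree keyed on a pointer value; the names here are illustrative, not the exact cfq code:

```c
#include <linux/radix-tree.h>
#include <linux/rcupdate.h>

struct my_ioc {
	struct radix_tree_root cic_root;	/* illustrative field name */
};

/* Reader side: no spinlock, just RCU. A pointer used as the key makes
 * for a very sparse but fast radix tree lookup. */
static void *cic_lookup(struct my_ioc *ioc, void *key)
{
	void *cic;

	rcu_read_lock();
	cic = radix_tree_lookup(&ioc->cic_root, (unsigned long)key);
	rcu_read_unlock();

	return cic;
}
```

The writer side (radix_tree_insert/delete under a lock) only runs at process creation and exit, which is why moving the cost there pays off.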
-
By Jens Axboe

Detach task state from the ioc; instead, keep track of how many processes are accessing the ioc.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe

This is where it belongs, and this way it doesn't take up space for a process that doesn't do IO.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Kiyoshi Ueda

This patch merges complete_request() into end_that_request_last() for cleanup. complete_request() was introduced by an earlier part of this patch set so as not to break the existing users of end_that_request_last(). Since all users have been converted to the blk_end_request interfaces and end_that_request_last() is no longer exported, the code can be merged into end_that_request_last().

Cc: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Kiyoshi Ueda

This patch converts the 'uptodate' arguments of the no-longer-exported interfaces, end_that_request_first/last, to 'error', and removes the internal conversions for it in the blk_end_request interfaces. It also removes the no-longer-needed end_io_error().

Cc: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
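For context, the internal conversion being removed mapped the old tri-state 'uptodate' onto an errno-style 'error' roughly like this (a sketch of the convention, not the verbatim kernel code):

```c
#include <linux/errno.h>

/* old convention: uptodate > 0 = success, 0 = generic failure,
 * < 0 = specific error code */
static inline int uptodate_to_error(int uptodate)
{
	if (uptodate > 0)
		return 0;
	return uptodate ? uptodate : -EIO;
}
```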
-
By Kiyoshi Ueda

This patch removes the following functions:
o end_that_request_first()
o end_that_request_chunk()

and stops exporting the function below:
o end_that_request_last()

Cc: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Kiyoshi Ueda

This patch adds a variant of the interface, blk_end_bidi_request(), which completes a bidi request. A bidi request must be completed as a whole, both rq and rq->next_rq at once, so the interface takes two arguments for the completion sizes.

As for ->end_io, only rq->end_io is called (rq->next_rq->end_io is not). So if special completion handling is needed, the handler must be set on rq->end_io, and that handler must take care of freeing next_rq too, since the interface doesn't do so when rq->end_io is not NULL.

Cc: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
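A hedged usage sketch, completing both halves of a bidi pair at once as described; blk_rq_bytes() is the byte-size helper added elsewhere in this series, and the argument order is assumed from the text:

```c
#include <linux/blkdev.h>

/* Complete rq and its paired rq->next_rq as a whole; only rq->end_io
 * runs, so any special handler must also free next_rq. */
static void my_finish_bidi(struct request *rq, int error)
{
	if (blk_end_bidi_request(rq, error, blk_rq_bytes(rq),
				 blk_rq_bytes(rq->next_rq)))
		printk(KERN_WARNING "bidi request not fully completed\n");
}
```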
-
By Kiyoshi Ueda

This patch adds a variant of the interface, blk_end_request_callback(), which has a driver-callback feature. Drivers may need to do special work between end_that_request_first() and end_that_request_last(). For such drivers, blk_end_request_callback() allows them to pass a callback function which is called between end_that_request_first() and end_that_request_last().

This interface is only a fallback for the other blk_end_request interfaces. Drivers should avoid such tricky behaviors and use the other interfaces as much as possible. Currently, only one driver, ide-cd, needs this interface, so it should/will be removed once that driver drops the tricky behavior:

o ide-cd (cdrom_newpc_intr())
  In PIO mode, cdrom_newpc_intr() needs to defer end_that_request_last() until the device clears DRQ_STAT and raises an interrupt after end_that_request_first(), so end_that_request_first() and end_that_request_last() are called separately there. This means blk_end_request_callback() has to be able to return without completing the request, even if there is no leftover in the request. To satisfy this requirement, the callback function has a return value so that drivers can tell blk_end_request_callback() to return without completing the request.

Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
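A hedged sketch of the callback contract: a non-zero return tells blk_end_request_callback() to bail out before end_that_request_last(), which is exactly the deferral ide-cd needs. The argument order here is assumed from the other interfaces in this series:

```c
#include <linux/blkdev.h>

/* Returning non-zero makes blk_end_request_callback() return without
 * completing the request, even when nothing is left over. */
static int my_defer_completion(struct request *rq)
{
	return 1;	/* e.g. device still asserts DRQ_STAT */
}

static void my_end_io(struct request *rq, int error, unsigned int nr_bytes)
{
	blk_end_request_callback(rq, error, nr_bytes, my_defer_completion);
}
```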
-
By Kiyoshi Ueda

This patch converts the core parts of the block layer to use the blk_end_request interfaces. The related 'uptodate' arguments are converted to 'error'.

The 'dequeue' argument was originally introduced for end_dequeued_request(), where no attempt should be made to dequeue the request, as it's already dequeued. However, it's not necessary, since this can be checked with list_empty(&rq->queuelist): a dequeued request has an empty list and a queued request doesn't. That check is done inside the blk_end_request interfaces.

As a result of this patch, end_queued_request() and end_dequeued_request() become identical. A future patch will merge and rename them and change their users.

Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
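The queued-vs-dequeued test described above boils down to a one-liner (the blk_queued_rq() helper named in a related entry below):

```c
#include <linux/list.h>

/* A queued request sits on the queue's list; a dequeued one has an
 * empty (re-initialized) list node. */
#define blk_queued_rq(rq)	(!list_empty(&(rq)->queuelist))
```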
-
By Kiyoshi Ueda

This patch adds/exports functions to get the size of a request in bytes. They are useful because the blk_end_request interfaces take bytes, rather than sectors, as the completed I/O size.

Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
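A sketch of such a helper, matching the kernel of this era as far as I can tell (hard_nr_sectors is in 512-byte units, hence the shift):

```c
#include <linux/blkdev.h>

/* Size of the whole request in bytes: sector-based for fs requests,
 * data_len for everything else. */
unsigned int blk_rq_bytes(struct request *rq)
{
	if (blk_fs_request(rq))
		return rq->hard_nr_sectors << 9;

	return rq->data_len;
}
```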
-
By Kiyoshi Ueda

This patch adds 2 new interfaces for request completion:
o blk_end_request()   : called without the queue lock
o __blk_end_request() : called with the queue lock held

blk_end_request takes 'error' as an argument instead of the 'uptodate' that the current end_that_request_* take. The meaning of the value, which is used when the bio is completed, is:
0   : success
< 0 : error

Some device drivers call the generic functions below between end_that_request_{first/chunk} and end_that_request_last():
o add_disk_randomness()
o blk_queue_end_tag()
o blkdev_dequeue_request()

These are called inside the blk_end_request interfaces as part of generic request completion, so all device drivers effectively come to call the above functions.

To decide whether to call blkdev_dequeue_request(), blk_end_request uses list_empty(&rq->queuelist) (a blk_queued_rq() macro is added for it). So drivers must re-initialize it, using list_init() or similar, before calling blk_end_request if they use it for their own purposes. (Currently, no driver completes a request without re-initializing the queuelist after using it, so rq->queuelist can be used for the purpose above.)

"Normal" drivers can be converted to use blk_end_request() in the standard ways shown below.

a) end_that_request_{chunk/first}
   spin_lock_irqsave()
   (add_disk_randomness(), blk_queue_end_tag(), blkdev_dequeue_request())
   end_that_request_last()
   spin_unlock_irqrestore()
   => blk_end_request()

b) spin_lock_irqsave()
   end_that_request_{chunk/first}
   (add_disk_randomness(), blk_queue_end_tag(), blkdev_dequeue_request())
   end_that_request_last()
   spin_unlock_irqrestore()
   => spin_lock_irqsave()
      __blk_end_request()
      spin_unlock_irqrestore()

c) spin_lock_irqsave()
   (add_disk_randomness(), blk_queue_end_tag(), blkdev_dequeue_request())
   end_that_request_last()
   spin_unlock_irqrestore()
   => blk_end_request()
   or
   => spin_lock_irqsave()
      __blk_end_request()
      spin_unlock_irqrestore()

Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
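A hedged before/after sketch of conversion pattern (a) above, using the byte-based end_that_request_chunk() on the old side:

```c
#include <linux/blkdev.h>

/* Before: multi-step completion with explicit locking around the
 * dequeue and final completion. */
static void old_style(struct request_queue *q, struct request *rq,
		      int uptodate, unsigned int nr_bytes)
{
	unsigned long flags;

	if (end_that_request_chunk(rq, uptodate, nr_bytes))
		return;		/* leftover -- request not done yet */

	spin_lock_irqsave(q->queue_lock, flags);
	blkdev_dequeue_request(rq);
	end_that_request_last(rq, uptodate);
	spin_unlock_irqrestore(q->queue_lock, flags);
}

/* After: one call, no queue lock taken by the caller. */
static void new_style(struct request *rq, int error, unsigned int nr_bytes)
{
	blk_end_request(rq, error, nr_bytes);
}
```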
-
- 25 January 2008, 4 commits
-
-
By Greg Kroah-Hartman

Now that the old kobject_init() function is gone, rename kobject_init_ng() to kobject_init() to clean up the namespace.

Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
By Greg Kroah-Hartman

Now that the old kobject_add() function is gone, rename kobject_add_ng() to kobject_add() to clean up the namespace.

Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
By Greg Kroah-Hartman

This converts the code to use the new kobject functions, cleaning up the logic in doing so.

Cc: Jens Axboe <axboe@kernel.dk>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
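A hedged sketch of the new-style kobject pattern this series converts to: initialize with a ktype, then add with a parent and name; on failure, drop the reference. The ktype, parent, and name here are illustrative:

```c
#include <linux/kobject.h>

static int register_queue_kobj(struct kobject *kobj, struct kobj_type *ktype,
			       struct kobject *parent)
{
	int ret;

	kobject_init(kobj, ktype);		/* ktype set at init time now */
	ret = kobject_add(kobj, parent, "%s", "queue");
	if (ret)
		kobject_put(kobj);		/* init took a reference */
	return ret;
}
```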
-
By Kay Sievers

This moves the block devices to /sys/class/block. It will create a flat list of all block devices, with the disks and partitions in one directory. For compatibility, /sys/block is created and contains symlinks to the disks:

/sys/class/block
|-- sda -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda
|-- sda1 -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda1
|-- sda10 -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda10
|-- sda5 -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda5
|-- sda6 -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda6
|-- sda7 -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda7
|-- sda8 -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda8
|-- sda9 -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda9
`-- sr0 -> ../../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sr0

/sys/block/
|-- sda -> ../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda
`-- sr0 -> ../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sr0

Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 12 January 2008, 1 commit
-
-
By James Bottomley

The purpose of this is to allow stacked alignment settings, with the ultimate queue alignment being set to the largest alignment requirement in the stack. The reason is so that the SCSI mid-layer can relax the default alignment requirements (which are basically causing a lot of superfluous copying to go on in the SG_IO interface) while allowing transports, devices or HBAs to add stricter limits if they need them.

Acked-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
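A sketch of the stacking rule being described, keeping the strictest (largest) mask seen so far; this mirrors the helper as I understand it:

```c
#include <linux/blkdev.h>

void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
{
	BUG_ON(mask > PAGE_SIZE);

	/* only ever grow the alignment requirement */
	if (mask > q->dma_alignment)
		q->dma_alignment = mask;
}
```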
-
- 27 November 2007, 1 commit
-
-
By Jens Axboe

This was a temporary debugging aid for sg chaining testing; revert it now, as it has served its purpose. This reverts commit 563063a8.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 09 November 2007, 2 commits
-
-
By Alan D. Brunelle

Added the blk_unplug interface, allowing all invocations of unplugs to result in a generated blktrace UNPLUG event.

Signed-off-by: Alan D. Brunelle <Alan.Brunelle@hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
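A hedged sketch of such a wrapper: emit the blktrace UNPLUG event, then invoke the queue's unplug hook. The trace helper and its arguments follow the blktrace API of this era as I recall it:

```c
#include <linux/blkdev.h>
#include <linux/blktrace_api.h>

void blk_unplug(struct request_queue *q)
{
	/* not all queues have an unplug function */
	if (q->unplug_fn) {
		blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
				      q->rq.count[READ] + q->rq.count[WRITE]);
		q->unplug_fn(q);
	}
}
```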
-
By Jens Axboe

Credit goes to juergen.kadidlo@exasol.com for diagnosing this issue and supplying the initial patch. blk_queue_invalidate_tags() must use the proper requeueing paths instead of open-coding the re-add of the request; otherwise we bug out in rq accounting. Just switch to using blk_requeue_request(), which takes care of end-tag handling as well and also emits the blktrace REQUEUE notify event, which is appropriate here too.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
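The fixed path, roughly, walks the busy list and hands each request to blk_requeue_request() instead of open-coding a list re-add (a sketch of my reading of the fix):

```c
#include <linux/blkdev.h>

void blk_queue_invalidate_tags(struct request_queue *q)
{
	struct list_head *tmp, *n;

	/* blk_requeue_request() handles end-tag accounting and emits
	 * the blktrace REQUEUE event for us */
	list_for_each_safe(tmp, n, &q->tag_busy_list)
		blk_requeue_request(q, list_entry_rq(tmp));
}
```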
-
- 02 November 2007, 2 commits
-
-
By Jens Axboe

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe

sg_mark_end() overwrites the page_link information, but all users want the __sg_mark_end() behaviour, where we just set the end bit. That is the most natural way to use the sg list, since you fill it in and then mark the end point. So change sg_mark_end() to only set the termination bit. Add an sg_magic debug check as well, and clear a chain pointer if one is set.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
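A sketch of the revised helper as described: set the termination bit, clear any chain bit, with a debug-only magic check (bit encodings per the scatterlist page_link scheme; renamed here to avoid clashing with the real function):

```c
#include <linux/scatterlist.h>

static inline void my_sg_mark_end(struct scatterlist *sg)
{
#ifdef CONFIG_DEBUG_SG
	BUG_ON(sg->sg_magic != SG_MAGIC);
#endif
	/* set termination bit (0x02), clear a potential chain bit (0x01) */
	sg->page_link |= 0x02;
	sg->page_link &= ~0x01;
}
```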
-
- 29 October 2007, 3 commits
-
-
By Jens Axboe

For the locking to work, only the tag map and tag bit map may be shared (incidentally, I was just explaining this to Nick yesterday, but I apparently didn't review the code well enough myself). But we also share the busy list! The busy_list must be queue private, or we need a blk_queue_tag-covering lock as well. So we have to move the busy_list to the queue.

This works fine, and it also fixes a problem with blk_queue_invalidate_tags(), which would invalidate tags across all shared queues. That is a bit confusing: the low-level driver should call it for each queue separately, since otherwise you cannot kill tags on just a single queue for, e.g., a hard drive that stops responding. Since the function currently has no callers, it's not an issue.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Nick Piggin

The block queue tag map can use lock bitops.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
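A hedged sketch of the pattern: test_and_set_bit_lock()/clear_bit_unlock() give the tag map acquire/release semantics without extra explicit barriers. The helper names here are illustrative:

```c
#include <linux/bitops.h>

/* Allocate a free tag from the map; returns -1 if the map is full.
 * The loop retries if another CPU races us to the same bit. */
static int my_get_tag(unsigned long *tag_map, int max_depth)
{
	int tag;

	do {
		tag = find_first_zero_bit(tag_map, max_depth);
		if (tag >= max_depth)
			return -1;
	} while (test_and_set_bit_lock(tag, tag_map));

	return tag;
}

/* The release pairs with the acquire above. */
static void my_put_tag(unsigned long *tag_map, int tag)
{
	clear_bit_unlock(tag, tag_map);
}
```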
-
By Oleg Nesterov

blk_sync_queue() cancels the timer, but forgets to cancel the work.

Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
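The fixed function, as I read the description, cancels both the unplug timer and the unplug work it may already have queued (sketch):

```c
#include <linux/blkdev.h>

void blk_sync_queue(struct request_queue *q)
{
	del_timer_sync(&q->unplug_timer);
	kblockd_flush_work(&q->unplug_work);	/* the missing piece */
}
```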
-