- 29 December 2008, 1 commit
Committed by Jens Axboe
__blk_queue_bounce() relies on a zeroed bio_vec list, since it looks up arbitrary indexes in the allocated bio. The block layer only guarantees that added entries are valid, so clear the memory after allocation.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
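A minimal sketch of the kind of fix described above, assuming the bounce path allocates its bio with bio_alloc() and fills bio_vec slots by the original bio's segment index (variable names are illustrative, not the literal patch):

    /* inside __blk_queue_bounce(); bio_orig points at the bio being bounced */
    struct bio *bio;
    unsigned int cnt = (*bio_orig)->bi_vcnt;

    /* bio_alloc() does not zero the bio_vec array, but the bounce code
     * indexes it by the original segment number, so clear it explicitly */
    bio = bio_alloc(GFP_NOIO, cnt);
    memset(bio->bi_io_vec, 0, cnt * sizeof(struct bio_vec));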
-
- 26 November 2008, 2 commits
Committed by Ingo Molnar
Port to the new tracepoints API: split DEFINE_TRACE() and DECLARE_TRACE() sites. Spread them out to the usage sites, as suggested by Mathieu Desnoyers.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
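A hedged sketch of what the split looks like for one block tracepoint; the exact header locations and the spelling of the proto/args wrapper macros varied across kernel versions of that era:

    /* declaration, visible to every user of the tracepoint (header file) */
    DECLARE_TRACE(block_bio_bounce,
            TP_PROTO(struct request_queue *q, struct bio *bio),
            TP_ARGS(q, bio));

    /* definition, emitted in exactly one .c file */
    DEFINE_TRACE(block_bio_bounce);

    /* call site, e.g. in the bounce code */
    trace_block_bio_bounce(q, *bio_orig);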
-
Committed by Arnaldo Carvalho de Melo
This was a forward port of work done by Mathieu Desnoyers. I changed it to encode the 'what' parameter in the tracepoint name, so that one can register interest in specific events rather than in a class of events that must then be filtered on the 'what' parameter.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
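Roughly, the change replaces one generic hook discriminated by a 'what' code with one tracepoint per event, so a tracer can attach only to the events it cares about. A hedged before/after sketch (my_bounce_probe is a made-up probe name):

    /* before: one class of bio events, discriminated by a 'what' code */
    blk_add_trace_bio(q, bio, BLK_TA_BOUNCE);

    /* after: the event is encoded in the tracepoint name */
    trace_block_bio_bounce(q, bio);

    /* a tracer can now register for just this one event;
     * my_bounce_probe(struct request_queue *q, struct bio *bio) is illustrative */
    register_trace_block_bio_bounce(my_bounce_probe);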
-
- 09 October 2008, 1 commit
Committed by Jens Axboe
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 16 October 2007, 1 commit
Committed by Jens Axboe
This implements functionality to pass down or insert a barrier in a queue, without having any data attached to it. The ->prepare_flush_fn() infrastructure from data barriers is reused to provide this functionality.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
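A hedged sketch of what this enables: issuing a barrier with no data attached, roughly the shape of blkdev_issue_flush() in kernels of that era (error handling trimmed; the exact flag spelling for a barrier write varied by version):

    static void empty_barrier_end_io(struct bio *bio, int err)
    {
            complete(bio->bi_private);
    }

    static int issue_empty_barrier(struct block_device *bdev)
    {
            DECLARE_COMPLETION_ONSTACK(wait);
            struct bio *bio = bio_alloc(GFP_KERNEL, 0);  /* no data pages */
            int ret = 0;

            bio->bi_bdev    = bdev;
            bio->bi_end_io  = empty_barrier_end_io;
            bio->bi_private = &wait;

            submit_bio(WRITE_BARRIER, bio);              /* barrier, no payload */
            wait_for_completion(&wait);

            if (bio_flagged(bio, BIO_EOPNOTSUPP))
                    ret = -EOPNOTSUPP;                   /* device has no barrier support */
            bio_put(bio);
            return ret;
    }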
-
- 10 October 2007, 1 commit
Committed by NeilBrown
As bi_end_io is only called once, when the request is complete, the 'size' argument is now redundant. Remove it. There is then no need for bio_endio() to subtract the completed size from bi_size, so don't do that either. While we are at it, change bi_end_io to return void.

Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
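What the change means for a driver's completion callback, as a hedged before/after sketch (my_end_io is an illustrative name, not from the patch):

    /* before: could be called for partial completions; returning 1 meant
     * "not done yet", and bi_size shrank as chunks completed */
    static int my_end_io(struct bio *bio, unsigned int bytes_done, int err)
    {
            if (bio->bi_size)
                    return 1;
            /* ... final completion work ... */
            return 0;
    }

    /* after: the same callback under the new API, called exactly once
     * when the whole request is complete */
    static void my_end_io(struct bio *bio, int err)
    {
            /* ... final completion work ... */
    }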
-
- 24 July 2007, 1 commit
Committed by Jens Axboe
Some of the code has been gradually transitioned to using the proper struct request_queue, but there's lots left. So do a full sweep of the kernel, get rid of this typedef and replace its uses with the proper type.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
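The mechanical shape of the conversion, as a hedged example (my_request_fn is illustrative, not from the patch):

    /* before: the typedef */
    static void my_request_fn(request_queue_t *q);

    /* after: spell out the struct */
    static void my_request_fn(struct request_queue *q);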
-
- 27 March 2007, 1 commit
Committed by Vasily Tarasov
There is a small problem in handling page bounce.

At the moment blk_max_pfn equals max_pfn, which is in fact not the maximum possible _number_ of a page frame, but the _amount_ of page frames. For example, for a 32-bit x86 node with 4 GB of RAM, max_pfn = 0x100000, not 0xFFFFF.

The request_queue structure has a member q->bounce_pfn, and the queue needs bounce pages for the pages _above_ this limit. This is handled by blk_queue_bounce(), where the following check is performed:

	if (q->bounce_pfn >= blk_max_pfn)
		return;

Assume that a driver has set q->bounce_pfn to 0xFFFF, but blk_max_pfn equals 0x10000. In that situation the check above fails, and for each bio we always fall through to iterating over the pages tied to the bio.

Note that for quite a large range of device drivers (ide, md, ...) this problem does not happen, because they use BLK_BOUNCE_ANY for bounce_pfn. BLK_BOUNCE_ANY is defined as blk_max_pfn << PAGE_SHIFT, so the check above does not fail. But for other drivers, which obtain the required value from the device, it fails. For example, sata_nv uses ATA_DMA_MASK or dev->dma_mask.

I propose to use (max_pfn - 1) for blk_max_pfn, and the same for blk_max_low_pfn. The patch also cleans up some checks related to bounce_pfn.

Signed-off-by: Vasily Tarasov <vtaras@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
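A hedged sketch of the off-by-one, using the 4 GB example above (4 KiB pages) and a driver that derives bounce_pfn from a 32-bit DMA mask, as sata_nv does; values and variable names follow the commit message, not the literal patch:

    /*
     * 4 GiB of RAM, 4 KiB pages:
     *   max_pfn           = 0x100000   (count of page frames)
     *   highest valid PFN = 0x0FFFFF
     * A driver deriving bounce_pfn from a 32-bit DMA mask gets
     *   q->bounce_pfn = 0xFFFFFFFF >> PAGE_SHIFT = 0xFFFFF.
     */

    /* before: blk_max_pfn is one past the last PFN */
    blk_max_pfn = max_pfn;                   /* 0x100000 */
    if (q->bounce_pfn >= blk_max_pfn)        /* 0xFFFFF >= 0x100000 -> false */
            return;                          /* early exit never taken, every bio scanned */

    /* after: blk_max_pfn is the highest valid PFN */
    blk_max_pfn = max_pfn - 1;               /* 0xFFFFF */
    if (q->bounce_pfn >= blk_max_pfn)        /* 0xFFFFF >= 0xFFFFF -> true */
            return;                          /* no page can need bouncing, skip the scan */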
-
- 13 January 2007, 1 commit
Committed by Jens Axboe
Currently we issue a bounce trace when blk_queue_bounce() is called, but that merely means that the device has a DMA mask lower than the highest pages in the system. The bio itself may still consist only of lower pages. So move the bounce trace into __blk_queue_bounce(), where we know there will actually be page bouncing.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
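A hedged sketch of the move (structure heavily simplified; blk_add_trace_bio()/BLK_TA_BOUNCE are the pre-tracepoint blktrace helpers of that era):

    static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
                                   mempool_t *pool)
    {
            struct bio *bio = NULL;
            struct bio_vec *from;
            int i;

            bio_for_each_segment(from, *bio_orig, i) {
                    if (page_to_pfn(from->bv_page) <= q->bounce_pfn)
                            continue;
                    /* page sits above the bounce limit: build a bounce bio */
                    if (!bio)
                            bio = bio_alloc(GFP_NOIO, (*bio_orig)->bi_vcnt);
                    /* ... allocate a bounce page and copy the data ... */
            }

            if (!bio)
                    return;         /* nothing actually needed bouncing */

            /* previously traced unconditionally from blk_queue_bounce();
             * now the trace only fires when pages really get bounced */
            blk_add_trace_bio(q, *bio_orig, BLK_TA_BOUNCE);

            /* ... finish the bounce bio and chain it to the original ... */
    }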
-
- 01 October 2006, 1 commit
Committed by David Howells
Move the bounce buffer code from mm/highmem.c to mm/bounce.c so that it can be more easily disabled when the block layer is disabled.

!!!NOTE!!! There may be a bug in this code: should init_emergency_pool() be contingent on CONFIG_HIGHMEM?

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-