- 06 Feb 2006, 1 commit
-
-
Submitted by Eric Dumazet

percpu_data blindly allocates bootmem memory to store NR_CPUS instances of cpudata, instead of allocating memory only for possible cpus. As a preparation for changing that, we need to convert various 0 -> NR_CPUS loops to use for_each_cpu(). (The above only applies to users of asm-generic/percpu.h. powerpc has gone it alone and is presently only allocating memory for present CPUs, so it's currently corrupting memory.)

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Jens Axboe <axboe@suse.de>
Cc: Anton Blanchard <anton@samba.org>
Acked-by: William Irwin <wli@holomorphy.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
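A minimal before/after sketch of the loop conversion this prepares for; the setup helper is hypothetical, and in kernels of this era for_each_cpu() iterated over the possible-CPU map:

```c
int i;

/* Before: walks every slot up to NR_CPUS, even CPUs that can never exist. */
for (i = 0; i < NR_CPUS; i++)
	setup_cpu_data(i);	/* hypothetical per-cpu setup helper */

/* After: walks only the CPUs that can ever come up on this system. */
for_each_cpu(i)
	setup_cpu_data(i);
```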
-
- 02 Feb 2006, 1 commit
-
-
Submitted by Jun'ichi "Nick" Nomura

Record I/O timing statistics. The start time is added to struct dm_io, an existing structure allocated privately within dm and attached to each incoming bio. We export disk_round_stats() from block/ll_rw_blk.c instead of creating a private clone.

Signed-off-by: Jun'ichi "Nick" Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
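A rough sketch of the scheme; the helper names and the final stats update are invented for illustration, and only the start-time field and the exported disk_round_stats() come from the patch:

```c
/* Stamp the io when the bio enters dm ... */
static void start_io_acct(struct dm_io *io)	/* hypothetical helper */
{
	io->start_time = jiffies;		/* the new field, roughly */
}

/* ... and fold the elapsed time into the disk statistics on completion,
 * reusing the now-exported disk_round_stats() rather than a private copy. */
static void end_io_acct(struct dm_io *io)	/* hypothetical helper */
{
	unsigned long duration = jiffies - io->start_time;

	disk_round_stats(dm_disk(io->md));
	update_dm_io_stats(io, duration);	/* hypothetical stats update */
}
```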
-
- 31 Jan 2006, 1 commit
-
-
Submitted by Jens Axboe
Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 24 Jan 2006, 3 commits
-
-
Submitted by Tetsuo Takata

This makes XFS barrier mounts succeed on my SCSI system.

Signed-off-by: Tetsuo Takata <takatatt@intellilink.co.jp>
Signed-off-by: Jens Axboe <axboe@suse.de>
-
Submitted by Jens Axboe

It can legally be called with interrupts/preemption enabled.

Signed-off-by: Jens Axboe <axboe@suse.de>
-
Submitted by Jens Axboe

IDE lba48 can support a full 64k request size, which overflows the max_hw_sectors variable.

Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 09 Jan 2006, 3 commits
-
-
Submitted by Jens Axboe

Request completion can be a quite heavy process, since it needs to iterate through the entire request and complete the bios it holds. This patch adds blk_complete_request(), which moves this processing into a dedicated block softirq.

Signed-off-by: Jens Axboe <axboe@suse.de>
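A hedged sketch of the resulting driver pattern; the device and helper names are made up, and only blk_complete_request() is from the patch:

```c
/* Hard-IRQ completion path: do the minimum and defer the bio iteration. */
static void mydev_irq_done(struct mydev *dev)		 /* hypothetical driver */
{
	struct request *rq = mydev_current_request(dev); /* hypothetical helper */

	/* Instead of walking the request's bios here via
	 * end_that_request_first()/end_that_request_last(), hand the
	 * request to the dedicated block softirq for completion: */
	blk_complete_request(rq);
}
```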
-
Submitted by Jens Axboe

It's a broken interface; it's done way too late. And apparently it triggers slab problems in recent kernels as well (most likely after the generic dispatch code was merged). So kill it; ide-cd is the only user of it.

Signed-off-by: Jens Axboe <axboe@suse.de>
-
Submitted by Nicolas Kaiser

linux/blkdev.h was included twice.

Signed-off-by: Nicolas Kaiser <nikai@nikai.net>
Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 06 Jan 2006, 5 commits
-
-
Submitted by Tejun Heo

Reimplement handling of barrier requests:
* Flexible handling to deal with various capabilities of target devices.
* Retry support for falling back.
* Tagged queues which don't support ordered tag can do ordered.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
-
Submitted by Tejun Heo

Separate out the bio initialization part from __make_request. It will be used by the following blk_ordered_reimpl.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
-
Submitted by Tejun Heo

Add an @uptodate argument to end_that_request_last() and an @error argument to rq_end_io_fn(). There's no generic way to pass an error code to the request completion function, making generic error handling of non-fs requests difficult (rq->errors is driver-specific and each driver uses it differently). This patch adds @uptodate to end_that_request_last() and @error to rq_end_io_fn(). For fs requests, this doesn't really matter, so just using the same uptodate argument used in the last call to end_that_request_first() should suffice. IMHO, this can also help the generic command-carrying request Jens is working on.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
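The changed interfaces, roughly as described above (prototypes approximate, reconstructed from the commit text):

```c
/* Completion now carries the io status ... */
void end_that_request_last(struct request *rq, int uptodate);

/* ... and the end_io callback carries an error code, so non-fs request
 * errors no longer have to be decoded from driver-specific rq->errors. */
typedef void (rq_end_io_fn)(struct request *rq, int error);
```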
-
Submitted by Arjan van de Ven

The patch below marks various read-only variables in block/* as const, so that gcc can optimize their use; e.g. gcc will now replace a use by the value directly and will even remove the memory backing these variables.

Signed-off-by: Arjan van de Ven <arjan@infradead.org>
Signed-off-by: Jens Axboe <axboe@suse.de>
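Illustrative only (these names are invented, not taken from the patch) of the kind of annotation being added:

```c
/* Without const this table lives in writable .data; with const, gcc can
 * place it in .rodata and fold simple scalar values straight into users. */
static const int default_quantum = 4;		/* hypothetical tunable */
static const char *const state_name[] = {	/* hypothetical table */
	"idle", "active", "draining",
};
```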
-
Submitted by Jens Axboe

Originally from: Nick Piggin <nickpiggin@yahoo.com.au>

Move current_io_context out of the get_request fastpath. Also try to streamline a few other things in this area.

Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 16 Dec 2005, 1 commit
-
-
Submitted by Mike Christie

- Export __blk_put_request and blk_execute_rq_nowait, needed for async REQ_BLOCK_PC requests.
- Separate max_hw_sectors and max_sectors for the block/scsi_ioctl.c and SG_IO bio.c helpers, per Jens's last comments. Since block/scsi_ioctl.c SG_IO was already testing against max_sectors, and SCSI-ml was setting max_sectors and max_hw_sectors to the same value, this does not change any SCSI SG_IO behavior. It only prepares ll_rw_blk.c, scsi_ioctl.c and bio.c for when SCSI-ml begins to set a valid max_hw_sectors for all LLDs. Today, if an LLD does not set it, SCSI-ml sets it to a safe default, and some LLDs set it to an artificially low value to overcome memory and feedback issues.

Note: since we now cap max_sectors to BLK_DEF_MAX_SECTORS, which is 1024, drivers that used to call blk_queue_max_sectors with a larger value of max_sectors will now see fs requests capped to BLK_DEF_MAX_SECTORS.

Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
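A simplified sketch of the capping behaviour described in the note; the real function carries more checks, so treat this as the shape of the idea rather than the exact code:

```c
#define BLK_DEF_MAX_SECTORS 1024	/* default fs request cap, per the note */

void blk_queue_max_sectors(request_queue_t *q, unsigned short max_sectors)
{
	/* max_hw_sectors records what the hardware can actually take ... */
	q->max_hw_sectors = max_sectors;
	/* ... while fs requests are additionally capped at the default. */
	q->max_sectors = min_t(unsigned short, max_sectors,
			       BLK_DEF_MAX_SECTORS);
}
```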
-
- 15 Dec 2005, 1 commit
-
-
Submitted by Mike Christie

To send async requests we need these two functions exported.

Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
-
- 19 Nov 2005, 1 commit
-
-
Submitted by Coywolf Qi Hunt

Some leftover comments still refer to drivers/block/, which is now block/. They don't add any information we don't already have, so kill them.

Signed-off-by: Coywolf Qi Hunt <qiyong@fc-cn.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 04 Nov 2005, 1 commit
-
-
Submitted by Jens Axboe

drivers/block/ is right now a mix of core and driver parts. Let's move the core parts to a new top level directory. Al will move the fs/ related block parts to block/ next.

Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 01 Nov 2005, 2 commits
-
-
Submitted by Jens Axboe

Instead of having ->read_sectors and ->write_sectors, combine the two into ->sectors[2], and similarly for the other fields. This saves a branch in several places in the io path, since we don't have to care what the actual io direction is. On my x86-64 box, that's 200 bytes less text in just the core (not counting the various drivers).

Signed-off-by: Jens Axboe <axboe@suse.de>
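A minimal sketch of the transformation; field access is simplified here (the real code goes through the disk_stat_* accounting macros):

```c
/* Before: a branch on the io direction at every accounting site. */
if (rw == READ)
	disk->read_sectors += nr_sectors;
else
	disk->write_sectors += nr_sectors;

/* After: READ is 0 and WRITE is 1, so the direction is just an index. */
disk->sectors[rw] += nr_sectors;
```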
-
Submitted by Jens Axboe

Right now we do it at queueing time, which works alright for reads (since they are usually sync), but not for async writes, since we can queue io a lot faster than we can complete it. This makes the vmstat output look extremely bursty.

Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 28 Oct 2005, 6 commits
-
-
Submitted by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Jens Axboe

- A 100msec sleep is a little excessive; lots of requests can complete in that timeframe. Use 10msec instead.
- Rename QUEUE_FLAG_BYPASS to QUEUE_FLAG_ELVSWITCH to indicate what is going on.

Signed-off-by: Jens Axboe <axboe@suse.de>
-
Submitted by Tejun Heo

This patch reimplements elevator switching. It assumes the generic dispatch queue patchset is applied.
* Each request is tagged with the REQ_ELVPRIV flag if it has its elevator private data set.
* Requests which don't have REQ_ELVPRIV set never enter the iosched; they are always directly back-inserted into the dispatch queue. elevator_put_req_fn is, of course, called only for requests which have REQ_ELVPRIV set.
* The request queue maintains the current number of requests which have their elevator data set (elevator_set_req_fn called) in q->rq.elvpriv.
* If a request queue has QUEUE_FLAG_BYPASS set, elevator private data is not allocated for new requests.

To switch to another iosched, we set QUEUE_FLAG_BYPASS and wait until elvpriv goes to zero; then we attach the new iosched and clear QUEUE_FLAG_BYPASS. The new implementation is much simpler and the main code paths are less cluttered, IMHO.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
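A condensed, hedged sketch of the switch sequence described above; locking and error handling are omitted and attach_new_elevator() is a stand-in, not a real kernel function:

```c
static int elevator_switch_sketch(request_queue_t *q,
				  struct elevator_type *new_e)
{
	/* 1. Stop allocating elevator-private data for new requests. */
	set_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);

	/* 2. Wait for requests that already hold private data to drain. */
	while (q->rq.elvpriv)
		msleep(10);

	/* 3. Attach the new iosched and let requests flow through it again. */
	attach_new_elevator(q, new_e);		/* stand-in helper */
	clear_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);
	return 0;
}
```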
-
Submitted by Tejun Heo

Implements a generic dispatch queue which can replace all the dispatch queues implemented by each iosched. This reduces code duplication, eases enforcing semantics over the dispatch queue, and simplifies specific ioscheds.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
-
Submitted by Chen, Kenneth W

Update the disk stat only when "now" is different from disk->stamp. Otherwise, we are again needlessly adding zero to the stats.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
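Roughly what the guarded update looks like; a sketch reconstructed from this and the related stamp cleanup below, using the gendisk stats fields and macros of this era, not the verbatim function:

```c
void disk_round_stats(struct gendisk *disk)
{
	unsigned long now = jiffies;

	/* Nothing to account if no time has passed since the last round. */
	if (now == disk->stamp)
		return;

	/* Only touch the stats when there is something non-zero to add. */
	if (disk->in_flight) {
		__disk_stat_add(disk, time_in_queue,
				disk->in_flight * (now - disk->stamp));
		__disk_stat_add(disk, io_ticks, (now - disk->stamp));
	}
	disk->stamp = now;
}
```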
-
Submitted by Chen, Kenneth W

struct gendisk has these two fields: stamp and stamp_idle. Updates to stamp_idle are always in sync with stamp, and the two are always the same, so having two fields tracking the same timestamp adds no value; remove it. Also, we should only update gendisk stats with a non-zero value, so that we don't needlessly calculate a memory address and then add zero to its contents.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 22 Sep 2005, 1 commit
-
-
Submitted by Christoph Hellwig

This function was removed a while ago, but crept in again via a recent scsi merge.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 08 Sep 2005, 1 commit
-
-
Submitted by Stuart McLaren

Per-queue parameters should be updated using the appropriate blk_queue_xxx functions.

Signed-off-by: Stuart McLaren <stuart.mclaren@hp.com>
Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 06 Aug 2005, 1 commit
-
-
Submitted by Tejun Heo

My patch in commit fa72b903 incorrectly removed blk_queue_tag->real_max_depth. The original resize implementation was incorrect in the following points:
* The actual allocation size of tag_index was shorter than real_max_depth, but was assumed to be of the same size, possibly causing memory access beyond the allocated area.
* Bits in tag_map between max_depth and real_max_depth were initialized to 1's, making those tags permanently reserved.

In an attempt to fix the above two bugs, I had removed the allocation optimization in init_tag_map and real_max_depth. Tag map/index were allocated and freed immediately during resize. Unfortunately, I wasn't considering that tag map/index can be resized dynamically with tags beyond new_depth active. This led to accessing a freed area after shrinking tags, and led to the following bug reporting thread on linux-scsi:

http://marc.theaimsgroup.com/?l=linux-scsi&m=112319898111885&w=2

To fix the problem, I've revived real_max_depth without the allocation optimization in init_tag_map, and Andrew Vasquez confirmed that the problem was fixed. As Jens is not going to be available for a week, he asked me to make sure that this patch reaches you:

http://marc.theaimsgroup.com/?l=linux-scsi&m=112325778530886&w=2

Also, a comment was added to make clear that real_max_depth is needed for dynamic shrinking.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 29 Jun 2005, 5 commits
-
-
Submitted by Hugh Dickins

get_request is now expected to be holding on to queue_lock, with interrupts disabled, when it returns NULL; but one path forgot that, causing all kinds of nastiness under swap load: badness backtraces, strange failures, BUGs.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Nick Piggin

get_io_context needlessly turned off interrupts and checked for racing io context creations. Neither is needed, because the io context can only be created while in the process context of the current process.

Also, split the function in two. A light version, current_io_context, does not elevate the reference count specifically, but can be used when in process context, because the process holds a reference itself.

Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
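The rough shape of the split, reconstructed from the description rather than copied from the patch:

```c
/* Process context only: the task's own reference keeps the io context
 * alive, so no extra reference is taken and no IRQ games are needed. */
struct io_context *current_io_context(int gfp_flags);

/* General form: wraps the light version and takes a reference for
 * callers that may hold the context beyond the current task's scope. */
struct io_context *get_io_context(int gfp_flags)
{
	struct io_context *ret = current_io_context(gfp_flags);

	if (ret)
		atomic_inc(&ret->refcount);
	return ret;
}
```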
-
Submitted by Nick Piggin

Change the locking around a bit, resulting in one or two fewer spin lock/unlock pairs in the request submission paths.

Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Nick Piggin

In the case where the request is not able to be merged by the elevator, don't retake the lock and retry the merge mechanism after allocating a new request. Instead assume that the chance of a merge remains slim, and now that we've done most of the work of allocating a request we may as well just go with it.

Also be rid of the GFP_ATOMIC allocation: we've got working mempools for the block layer now, so let's save atomic memory for things like networking.

Lastly, in get_request_wait, do an initial get_request call before going into the waitqueue. This is reported to help efficiency.

Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Jens Axboe

Currently we cap request allocations at q->nr_requests, but we allow a batching io context to allocate up to 32 more (the default setting). This can flood the queue with request allocations with only a few batching processes. The real fix would be to limit the number of batchers, but as that isn't currently tracked, I suggest we just cap the maximum number of allocated requests to, e.g., 50% over the limit.

This was observed in real life; users typically see it as vmstat bo numbers going off the wall, with seconds of no queueing afterwards. Behaviour this bursty is not beneficial.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
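The cap amounts to a guard of this shape in the request allocator (a sketch, with rl and rw as in the request-list code):

```c
/* Refuse further allocations once batchers have pushed the allocated
 * count for this direction to 150% of the configured limit. */
if (rl->count[rw] >= (3 * q->nr_requests / 2))
	goto out;	/* no request for you */
```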
-
- 28 Jun 2005, 1 commit
-
-
Submitted by Jens Axboe

This updates the CFQ io scheduler to the new time-sliced design (CFQ v3). It provides full process fairness, while giving excellent aggregate system throughput even for many competing processes. It supports io priorities, either inherited from the cpu nice value or set directly with the ioprio_get/set syscalls; the latter closely mimic set/getpriority. This import is based on my latest from -mm.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
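A hedged userspace sketch of setting an io priority for CFQ. The constants follow the kernel's ioprio definitions; at the time glibc had no wrapper, so the raw syscall is used and __NR_ioprio_set must be known to your headers:

```c
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_CLASS_BE		2	/* best-effort class */
#define IOPRIO_WHO_PROCESS	1

int main(void)
{
	/* Best-effort class, level 4 (0 is highest, 7 is lowest). */
	int ioprio = (IOPRIO_CLASS_BE << IOPRIO_CLASS_SHIFT) | 4;

	/* 0 as the "who" argument means the calling process. */
	return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, ioprio) ? 1 : 0;
}
```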
-
- 26 Jun 2005, 2 commits
-
-
Submitted by Nikita Danilov

ll_merge_requests_fn() assigns total_{phys,hw}_segments twice. Fix this and a typo.

Signed-off-by: Nikita Danilov <nikita@clusterfs.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Adrian Bunk

This patch contains the following cleanups:
- make needlessly global code static
- remove the following unused global functions:
  - blkdev_scsi_issue_flush_fn
  - __blk_attempt_remerge
- remove the following unused EXPORT_SYMBOL's:
  - blk_phys_contig_segment
  - blk_hw_contig_segment
  - blkdev_scsi_issue_flush_fn
  - __blk_attempt_remerge

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Acked-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 24 Jun 2005, 3 commits
-
-
Submitted by Nick Piggin

get_request_wait needn't unplug the device immediately.

Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by Nick Piggin

Sprinkle around a few branch hints in the block layer.

Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
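Illustrative of the kind of hint being sprinkled; the call site is invented:

```c
struct request *rq = get_request(q, rw, gfp_mask);	/* hypothetical site */

/* Tell gcc the failure path is cold, so the hot path is laid out straight. */
if (unlikely(!rq))
	goto out_fail;
```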
-
Submitted by Nick Piggin

This memory barrier is not needed because the waitqueue will only get waiters on it in the following situations:
- rq->count has exceeded the threshold. However, all manipulations of ->count are performed under the runqueue lock, and so we will correctly pick up any waiter.
- Memory allocation for the request fails. In this case, there is no additional help provided by the memory barrier. We are guaranteed to eventually wake up waiters because the request allocation mempool guarantees that if the memory allocation for a request fails, there must be some requests in flight; they will wake up waiters when they are retired.

Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-