- 23 June 2006, 2 commits
-
-
Committed by Jens Axboe
A process flag to indicate whether we are doing sync io is incredibly ugly. It also causes performance problems when one does a lot of async io and then proceeds to sync it: part of the io goes out as async and the rest as sync, which causes a disconnect between the previously submitted io and the synced io. For io schedulers such as CFQ, this causes lost merges and suboptimal scheduling behaviour. Remove PF_SYNCWRITE completely from the fsync/msync paths, and let the O_DIRECT path directly indicate that the writes are sync by using WRITE_SYNC instead. Signed-off-by: Jens Axboe <axboe@suse.de>
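A minimal sketch of the idea, assuming the 2006-era two-argument submit_bio() interface; the helper and its arguments are illustrative, not the actual fs/direct-io.c code:

```c
#include <linux/bio.h>
#include <linux/fs.h>

/*
 * Tag an O_DIRECT write as synchronous on the request itself via WRITE_SYNC,
 * rather than flagging the whole task with PF_SYNCWRITE. The io scheduler
 * then sees the sync hint per submitted io, not per process.
 */
static void dio_submit_write(struct bio *bio, int is_sync)
{
	submit_bio(is_sync ? WRITE_SYNC : WRITE, bio);
}
```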
-
Committed by Jens Axboe
We cannot update them if the user changes nr_requests, so don't set them in the first place. The gains are pretty questionable as well. The batching loss has been shown to decrease throughput. Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 21 June 2006, 1 commit
-
-
Committed by Linus Torvalds
The color is now in the low bits of the parent pointer, and initializing it to 0 happens as part of the whole memset above, so just remove the unnecessary RB_CLEAR_COLOR. Signed-off-by: Linus Torvalds <torvalds@osdl.org>
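As a sketch of why the memset is sufficient, assuming the lib/rbtree encoding of that era, where the colour occupies the low bit of rb_parent_color and RB_RED is 0:

```c
#include <linux/rbtree.h>
#include <linux/string.h>

/*
 * Zeroing the node already yields rb_parent_color == 0, i.e. a NULL parent
 * and colour RB_RED, so an explicit RB_CLEAR_COLOR after a memset of the
 * containing structure is redundant.
 */
static void reset_rb_node(struct rb_node *node)
{
	memset(node, 0, sizeof(*node));
}
```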
-
- 15 June 2006, 1 commit
-
-
Committed by Jens Axboe
We don't clear the seek stat values in cfq_alloc_io_context(), and if ->seek_mean is unlucky enough to be set to -36 by chance, the first invocation of cfq_update_io_seektime() will oops with a divide by zero in do_div(). Just memset the entire cic instead of filling individual values independently. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
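A rough sketch of the shape of the fix; the allocation flow, signature, and field handling here are illustrative rather than the exact cfq code:

```c
#include <linux/slab.h>
#include <linux/string.h>

static struct kmem_cache *cfq_ioc_pool;	/* slab cache for cfq_io_context */

static struct cfq_io_context *cfq_alloc_io_context(gfp_t gfp_mask)
{
	struct cfq_io_context *cic;

	cic = kmem_cache_alloc(cfq_ioc_pool, gfp_mask);
	if (cic) {
		/*
		 * Zero the whole structure, seek stats included, so the first
		 * cfq_update_io_seektime() never divides by leftover garbage.
		 */
		memset(cic, 0, sizeof(*cic));
		/* ...then re-initialise list heads, timestamps, etc. ... */
	}
	return cic;
}
```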
-
- 09 June 2006, 1 commit
-
-
Committed by Jens Axboe
There's a race between shutting down one io scheduler and firing up the next, in which a new io could enter and cause the io scheduler to be invoked with bad or NULL data. To fix this, we need to maintain the queue lock for a bit longer. Unfortunately we cannot do that, since the elevator init needs to run without the lock held. This isn't easily fixable without also changing the mempool API. So split the initialization into two parts: an alloc-init operation and an attach operation. Then we can preallocate the io scheduler and related structures, and run the attach inside the lock after we detach the old one. This patch has survived 30 minutes of 1 second io scheduler switching with a very busy io load. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
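A very rough sketch of the two-phase pattern described above; the function names, types, and simplified sequence are illustrative only (the real switch also quiesces and drains the queue first):

```c
/*
 * Phase 1: allocate and initialise the new scheduler's data with no locks
 * held (it may sleep). Phase 2: swap it in under the queue lock, so no new
 * io can ever see a half-initialised elevator.
 */
static int switch_io_scheduler(request_queue_t *q, struct elevator_type *new_type)
{
	elevator_t *new_eq = elevator_alloc(new_type);	/* may sleep, no lock held */

	if (!new_eq)
		return -ENOMEM;

	spin_lock_irq(q->queue_lock);
	detach_old_elevator(q);			/* hypothetical helper */
	elevator_attach(q, new_type, new_eq);	/* wire up the preallocated one */
	spin_unlock_irq(q->queue_lock);
	return 0;
}
```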
-
- 02 June 2006, 1 commit
-
-
Committed by Jens Axboe
Now that we select busy_rr for possible service, insert entries at the back of that list instead of at the front. Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 01 June 2006, 4 commits
-
-
Committed by Jens Axboe
There's a small window between when the timer handler is entered and when we grab the queue lock, in which cfq_set_active_queue() could be rearming the timer for us. Seen in the wild on a 12-way ppc box. Fix this by just using mod_timer(), which will do the right thing for us. Signed-off-by: Jens Axboe <axboe@suse.de>
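A sketch of why mod_timer() closes the window; the wrapper and timer argument are illustrative:

```c
#include <linux/timer.h>
#include <linux/jiffies.h>

/*
 * If another path (e.g. cfq_set_active_queue()) re-arms the timer between a
 * del_timer() and a following add_timer(), the add_timer() can hit an
 * already pending timer. mod_timer() updates the expiry under the timer
 * base lock whether or not the timer is currently pending, so it is safe
 * against concurrent re-arming.
 */
static void rearm_idle_timer(struct timer_list *idle_timer, unsigned long delay)
{
	mod_timer(idle_timer, jiffies + delay);
}
```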
-
Committed by Jens Axboe
If the hardware is doing real queueing, decide that it isn't worth idling the hardware: it does reasonable simultaneous io in that case anyway, and the idling hurts some workloads. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
If we are anticipating a sync request from this process and, while waiting for it, an async request comes in, expire that slice and move on. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
With just one busy queue (such as async write out), we often overlooked that we could queue more io and decided we were idle instead. This costs us quite a bit of performance. Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 31 May 2006, 1 commit
-
-
Committed by Jens Axboe
- Drop the cic from the list when it is seen as dead.
- Fix up the locking; just use a simple spinlock.
Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 21 April 2006, 1 commit
-
-
Committed by David Woodhouse
They were abusing the rb_color field to mark nodes which weren't currently on the tree. Fix that to use the same method eventpoll did -- setting the parent pointer to point back to the node itself. And use the appropriate accessor macros for setting and reading the parent. Signed-off-by: David Woodhouse <dwmw2@infradead.org>
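A sketch of the marking scheme, using the parent accessors of that era's rbtree implementation; the helper names are illustrative:

```c
#include <linux/rbtree.h>

/*
 * Mark a node as "not on any tree" by making its parent pointer refer to
 * itself -- the eventpoll trick -- instead of abusing the colour bits.
 */
static inline void mark_off_tree(struct rb_node *node)
{
	rb_set_parent(node, node);
}

static inline int is_on_tree(struct rb_node *node)
{
	return rb_parent(node) != node;
}
```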
-
- 19 April 2006, 1 commit
-
-
Committed by OGAWA Hirofumi
In the current code, we re-read cic->key after the dead-key check. So, in theory, we may really re-read it *after* cfq_exit_queue() has set it to NULL. To avoid the race, copy it to the stack and use that copy. With this change, gcc should keep the value in a register or on the stack, and it won't be re-read. Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp> Signed-off-by: Jens Axboe <axboe@suse.de>
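A sketch of the pattern, with the helper name invented for illustration:

```c
#include <linux/compiler.h>

/*
 * Read cic->key exactly once into a local before the NULL check, so the
 * compiler cannot legally reload it after cfq_exit_queue() has cleared it.
 */
static void *cic_live_key(struct cfq_io_context *cic)
{
	void *key = cic->key;

	if (unlikely(!key))
		return NULL;	/* queue already exited: treat the cic as dead */

	return key;		/* callers must use this copy, never cic->key */
}
```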
-
- 18 April 2006, 2 commits
-
-
Committed by OGAWA Hirofumi
When a queue dies, we set cic->key = NULL as a dead mark. So, when we traverse the rbtree, we must check whether each key is still valid; if it has been invalidated, drop the entry and restart the traversal from the top. Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp> Signed-off-by: Jens Axboe <axboe@suse.de>
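A sketch of the restart loop over the per-ioc cic rbtree; the drop helper is hypothetical and the field names simply follow the description above:

```c
#include <linux/rbtree.h>

static void prune_dead_cics(struct io_context *ioc)
{
	struct rb_node *n;

restart:
	n = rb_first(&ioc->cic_root);
	while (n) {
		struct cfq_io_context *cic =
			rb_entry(n, struct cfq_io_context, rb_node);

		if (!cic->key) {			/* queue died: stale entry */
			drop_dead_cic(ioc, cic);	/* hypothetical: unlink and free */
			goto restart;			/* tree changed, start over */
		}
		n = rb_next(n);
	}
}
```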
-
Committed by OGAWA Hirofumi
On the rmmod path, cfq/as waits to make sure all io contexts have been freed. However, it's using complete(), not wait_for_completion(). I think barrier() is not enough here. To avoid the following case, this patch replaces barrier() with smp_wmb().

    cpu0                        visibility                      cpu1
                                [ioc_gone=NULL, ioc_count=1]
    ioc_gone = &all_gone        NULL, ioc_count=1
    atomic_read(&ioc_count)     NULL, ioc_count=1
    wait_for_completion()       NULL, ioc_count=0               atomic_sub_and_test()
                                NULL, ioc_count=0               if (... && ioc_gone)
                                                                [ioc_gone==NULL, so complete() is never called]
                                &all_gone, ioc_count=0

Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp> Signed-off-by: Jens Axboe <axboe@suse.de>
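A sketch of the intended ordering on the unload path, using the names from the message (simplified; not the exact cfq/as exit code):

```c
#include <linux/completion.h>
#include <asm/atomic.h>

static DECLARE_COMPLETION(all_gone);
static struct completion *ioc_gone;
static atomic_t ioc_count = ATOMIC_INIT(0);

/* rmmod side: publish ioc_gone before sampling the count */
static void cfq_wait_for_contexts(void)
{
	ioc_gone = &all_gone;
	smp_wmb();	/* the barrier this patch adds in place of barrier() */
	if (atomic_read(&ioc_count))
		wait_for_completion(&all_gone);
}

/* io-context release side: the last put wakes the waiter */
static void cfq_put_io_context_last(void)
{
	if (atomic_dec_and_test(&ioc_count) && ioc_gone)
		complete(ioc_gone);
}
```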
-
- 28 March 2006, 3 commits
-
-
Committed by Jens Axboe
Detect whether a given process is seeky and, if so, disable (mostly) the idle window for it. We still allow just a little idle time, enough to let that process submit a new request; that is needed to maintain fairness across priority groups. In some cases, we could set up several async queues. This is not optimal from a performance POV, since we want all async io in one queue so we can do good sorting on it. It also impacted sync queues, as async io got too much slice time. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Andreas Mohr
This is a small optimization to cfq_choose_req() in the CFQ I/O scheduler (this function is a fairly often invoked candidate in an oprofile log): by using a bit mask variable, we can use a simple switch() to check the various cases instead of having to query two variables for each check. Benefit: 251 vs. 285 bytes footprint of cfq_choose_req(). Also, the common case 0 (no request wrapping) is now checked first in the code. Signed-off-by: Andreas Mohr <andi@lisas.de> Signed-off-by: Jens Axboe <axboe@suse.de>
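A sketch of the wrap-mask idea; the constants, distance variables, and tie-breaking below are illustrative, not the exact cfq code:

```c
#include <linux/blkdev.h>

#define RQ1_WRAP 0x01	/* request 1 lies behind the current head position */
#define RQ2_WRAP 0x02	/* request 2 lies behind the current head position */

/*
 * Pick between two candidates given their signed distances from the last
 * serviced sector: one switch on a two-bit mask replaces repeated tests of
 * both variables, and the common "no wrap" case is checked first.
 */
static struct request *choose_req(struct request *rq1, struct request *rq2,
				  long long d1, long long d2)
{
	unsigned int wrap = 0;

	if (d1 < 0)
		wrap |= RQ1_WRAP;
	if (d2 < 0)
		wrap |= RQ2_WRAP;

	switch (wrap) {
	case 0:			/* neither wraps: take the closer one */
		return d1 <= d2 ? rq1 : rq2;
	case RQ1_WRAP:		/* only rq1 is behind the head: prefer rq2 */
		return rq2;
	case RQ2_WRAP:
		return rq1;
	default:		/* both behind: take the one less far back */
		return d1 >= d2 ? rq1 : rq2;
	}
}
```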
-
Committed by Jens Axboe
On setups with many disks, we spend a considerable amount of time looking up the process-disk mapping on each io we queue. In testing with a null block driver, this lookup costs a 40-50% reduction in throughput with 1000 disks. Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 27 March 2006, 1 commit
-
-
Committed by Matthew Dobson
Modify well over a dozen mempool users to call mempool_create_slab_pool() rather than calling mempool_create() with extra arguments, saving about 30 lines of code and increasing readability. Signed-off-by: Matthew Dobson <colpatch@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
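For example, a conversion looks roughly like this; the cache, pool size, and wrapper function are placeholders:

```c
#include <linux/mempool.h>
#include <linux/slab.h>

static struct kmem_cache *my_cachep;	/* placeholder slab cache */
#define MY_MIN_NR 4			/* placeholder pool reserve */

static mempool_t *create_my_pool(void)
{
	/*
	 * before: the generic constructor with the slab helpers spelled out:
	 *   mempool_create(MY_MIN_NR, mempool_alloc_slab,
	 *                  mempool_free_slab, my_cachep);
	 * after: the slab-specific wrapper does the same thing in one call.
	 */
	return mempool_create_slab_pool(MY_MIN_NR, my_cachep);
}
```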
-
- 19 March 2006, 14 commits
-
-
Committed by Al Viro
-
Committed by Al Viro
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
We don't need to pin ->key down; ->cfqq->cfqd will do that for us. Incidentally, that stops the leak we had - that reference was never dropped. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
If somebody does a hash lookup for a cfq_queue while the ioprio of an async queue is elevated, they shouldn't end up stuck with the lowered ioprio when we go back. The fix is to use ->org_ioprio{,class} in hash lookups. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 28 February 2006, 1 commit
-
-
Committed by Jens Axboe
During testing of SLES10, we encountered a hang in the CFQ io scheduler. It turns out the deferred slice expiry logic is buggy, so remove it for now: we could be left with an idle queue that would never wake up. Always expire immediately instead. Also fix a potential timer race condition. The patch looks bigger than it is because it moves a function. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 06 January 2006, 1 commit
-
-
Committed by Arjan van de Ven
The patch below marks various read-only variables in block/* as const, so that gcc can optimize their use; e.g. gcc will now substitute the value directly and can even drop the memory backing these variables. Signed-off-by: Arjan van de Ven <arjan@infradead.org> Signed-off-by: Jens Axboe <axboe@suse.de>
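For illustration, with a hypothetical variable name and value:

```c
/*
 * before (writable, so it lives in .data and is reloaded at each use):
 *	static int cfq_example_quantum = 4;
 * after (never written, so gcc can fold the literal into each use and
 * discard the object entirely):
 */
static const int cfq_example_quantum = 4;
```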
-
- 19 November 2005, 1 commit
-
-
Committed by Coywolf Qi Hunt
Some leftover comments still refer to drivers/block, which is now block/. They don't add any information we don't already have, so kill them. Signed-off-by: Coywolf Qi Hunt <qiyong@fc-cn.com> Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 12 November 2005, 2 commits
-
-
Committed by Tejun Heo
When a cfq slice expires, the remainder of the slice is calculated and stored in cfqq->slice_left. The current code calculates the opposite of the remainder - how many jiffies the cfqq has run past the slice end. This patch fixes the bug. Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Jens Axboe <axboe@suse.de>
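A sketch of the corrected arithmetic; the helper is illustrative and the surrounding expiry logic is omitted:

```c
#include <linux/jiffies.h>

/*
 * Keep the *remaining* slice time at expiry, not the overrun: the buggy
 * version effectively stored jiffies - slice_end instead.
 */
static unsigned long slice_remainder(unsigned long slice_end)
{
	if (time_before(jiffies, slice_end))
		return slice_end - jiffies;	/* time the queue is still owed */

	return 0;				/* slice fully used up */
}
```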
-
Committed by Tejun Heo
cfq forced dispatching might not return all requests on the queue. This bug can hang elevator switching and corrupt request ordering during the flush sequence. Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 04 November 2005, 1 commit
-
-
Committed by Jens Axboe
drivers/block/ is right now a mix of core and driver parts. Let's move the core parts to a new top-level directory. Al will move the fs/-related block parts to block/ next. Signed-off-by: Jens Axboe <axboe@suse.de>
-
- 31 October 2005, 1 commit
-
-
Committed by Jens Axboe
Don't clear ->elevator_data on exit; if we are switching queues, we would be overwriting the data of the new io scheduler. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-