- 01 October 2006, 40 commits
-
Committed by David Howells
Move common FS-specific ioctls from linux/ext2_fs.h to linux/fs.h as FS_IOC_* and FS_IOC32_* and have the users of them use those as a base. Also move the GETFLAGS/SETFLAGS flags to linux/fs.h as FS_*_FL macros, and then have the other users use them as a base. Signed-Off-By: David Howells <dhowells@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
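A minimal userspace sketch of the relocated interface, assuming only the FS_IOC_GETFLAGS/FS_IOC_SETFLAGS ioctls and the FS_*_FL flag macros now exported from <linux/fs.h>; the file path and the chosen flag are illustrative:

/* Hedged sketch: toggle the no-atime attribute flag on a file using the
 * FS_IOC_* ioctls and FS_*_FL macros provided by <linux/fs.h>. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	int fd, flags;

	if (argc < 2)
		return 1;

	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0)
		return 1;

	flags |= FS_NOATIME_FL;		/* illustrative flag choice */
	if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0)
		return 1;

	close(fd);
	return 0;
}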
-
Committed by David Howells
Move the loop device ioctl compat stuff from fs/compat_ioctl.c to the loop driver so that the loop header file doesn't need to be included. Signed-Off-By: David Howells <dhowells@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by David Howells
Move __invalidate_device() from fs/inode.c to fs/block_dev.c so that it can more easily be disabled when the block layer is disabled. Signed-Off-By: David Howells <dhowells@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by David Howells
Dissociate the generic_writepages() function from the mpage stuff, moving its declaration to linux/mm.h and actually emitting a full implementation into mm/page-writeback.c. The implementation is a partial duplicate of mpage_writepages() with all BIO references removed. It is used by NFS to do writeback. Signed-Off-By: David Howells <dhowells@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
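As a hedged illustration (the "examplefs" names are hypothetical), a filesystem without a BIO-based writeback path can route its ->writepages through the new generic helper:

/* Hedged sketch: wire ->writepages to generic_writepages(), which walks
 * the dirty pages and calls ->writepage on each, i.e. the BIO-free
 * counterpart of mpage_writepages() described above. */
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/writeback.h>

static int examplefs_writepage(struct page *page, struct writeback_control *wbc);

static int examplefs_writepages(struct address_space *mapping,
				struct writeback_control *wbc)
{
	return generic_writepages(mapping, wbc);
}

static const struct address_space_operations examplefs_aops = {
	.writepage	= examplefs_writepage,
	.writepages	= examplefs_writepages,
};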
-
Committed by David Howells
Move the blockdev_superblock extern declaration from fs/fs-writeback.c to a header file and remove the direct dependence on it by wrapping it in a macro. Signed-Off-By: David Howells <dhowells@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by David Howells
Create a new header file, fs/internal.h, for common definitions local to the sources in the fs/ directory. Move extern definitions that should be in header files from fs/*.c to fs/internal.h or other main header files where they span directories. Signed-Off-By: David Howells <dhowells@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by David Howells
The AFS filesystem no longer needs to override its sync_page() op. Signed-Off-By: David Howells <dhowells@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by David Howells
Move the bounce buffer code from mm/highmem.c to mm/bounce.c so that it can be more easily disabled when the block layer is disabled. !!!NOTE!!! There may be a bug in this code: Should init_emergency_pool() be contingent on CONFIG_HIGHMEM? Signed-Off-By: David Howells <dhowells@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by David Howells
Stop fallback_migrate_page() from using page_has_buffers() since that might not be available. Use PagePrivate() instead since that's more general. Signed-Off-By: David Howells <dhowells@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by David Howells
Remove the duplicate declaration of exit_io_context() from linux/sched.h. Signed-Off-By: David Howells <dhowells@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by David Howells
Move some functions out of the buffering code that aren't strictly buffering specific. This is a precursor to being able to disable the block layer.
(*) Moved some stuff out of fs/buffer.c:
    (*) The file sync and general sync stuff moved to fs/sync.c.
    (*) The superblock sync stuff moved to fs/super.c.
    (*) do_invalidatepage() moved to mm/truncate.c.
    (*) try_to_release_page() moved to mm/filemap.c.
(*) Moved some related declarations between header files:
    (*) declarations for do_invalidatepage() and try_to_release_page() moved to linux/mm.h.
    (*) __set_page_dirty_buffers() moved to linux/buffer_head.h.
Signed-Off-By: David Howells <dhowells@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Martin Peschke
This patch kills a few lines of code in blktrace by making use of on_each_cpu(). Signed-off-by: Martin Peschke <mp3@de.ibm.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Oleg Nesterov
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Oleg Nesterov
We don't need to disable irqs to clear current->io_context, it is protected by ->alloc_lock. Even IF it was possible to submit I/O from IRQ on behalf of current this irq_disable() can't help: current_io_context() will re-instantiate ->io_context after irq_enable(). We don't need task_lock() or local_irq_disable() to clear ioc->task. This can't prevent other CPUs from playing with our io_context anyway. Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Signed-off-by: Jens Axboe <axboe@kernel.dk>
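A hedged sketch of the resulting exit-path locking (illustrative function name, not the exact exit_io_context() body): task_lock() alone is enough to detach the pointer.

/* Hedged sketch: detach current->io_context under ->alloc_lock only;
 * no local_irq_disable() around the clear, per the reasoning above. */
#include <linux/sched.h>
#include <linux/blkdev.h>

static void detach_io_context(struct task_struct *tsk)
{
	struct io_context *ioc;

	task_lock(tsk);
	ioc = tsk->io_context;
	tsk->io_context = NULL;
	task_unlock(tsk);

	if (ioc)
		put_io_context(ioc);
}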
-
Committed by Jens Axboe
It is a leftover from before the softirq completion was migrated to the block layer. Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Jens Axboe
Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
Give meta data reads preference over regular reads, as the process often needs to get that out of the way to do the io it was actually interested in. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
We can use this information for making more intelligent priority decisions, and it will also be useful for blktrace. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
It can make sense to set read-ahead larger than a single request. We should not be enforcing such policy on the user. Additionally, using the BLKRASET ioctl doesn't impose such a restriction, so we now expose identical behaviour through the two. Issue also reported by Anton <cbou@mail.ru> Signed-off-by: Jens Axboe <axboe@suse.de>
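A hedged userspace sketch of the two interfaces the message refers to (device path and values are illustrative): BLKRASET/BLKRAGET work in 512-byte sectors, while the sysfs read_ahead_kb attribute works in KiB.

/* Hedged sketch: set/read block-device read-ahead via the BLKRASET and
 * BLKRAGET ioctls (units: 512-byte sectors) and via the sysfs
 * read_ahead_kb attribute (units: KiB). Paths and sizes are examples. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(void)
{
	long ra_sectors;
	FILE *sysfs;
	int fd;

	fd = open("/dev/sda", O_RDONLY);
	if (fd < 0)
		return 1;

	/* Set read-ahead to 1024 sectors (512 KiB), then read it back. */
	if (ioctl(fd, BLKRASET, 1024UL) < 0 ||
	    ioctl(fd, BLKRAGET, &ra_sectors) < 0)
		return 1;
	printf("read-ahead: %ld sectors\n", ra_sectors);
	close(fd);

	/* The same tunable, exposed through sysfs in KiB. */
	sysfs = fopen("/sys/block/sda/queue/read_ahead_kb", "w");
	if (sysfs) {
		fprintf(sysfs, "512\n");
		fclose(sysfs);
	}
	return 0;
}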
-
Committed by Jens Axboe
Don't touch the current queues, just make sure that the wanted queue is selected next. Simplifies the logic. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
CFQ implements this on its own now, but it's really block layer knowledge. Tells a device queue to start dispatching requests to the driver, taking care to unplug if needed. Also fixes the issue where as/cfq will invoke a stopped queue, which we really don't want. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
No point in having a place holder list just for empty queues, so remove it. It's not used for anything other than to keep ->cfq_list busy. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
Currently it scales with number of processes in that priority group, which is potentially not very nice as it's called quite often. Basically we always need to do tail inserts, except for the case of a new process. So just mark/detect a queue as such. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
Some were kmalloc_node(), some were still kmalloc(). Change them all to kmalloc_node(). Signed-off-by: Jens Axboe <axboe@suse.de>
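A minimal hedged sketch of the pattern being standardised on (struct and function names are illustrative): allocate per-queue data on the node the queue belongs to.

/* Hedged sketch: NUMA-aware allocation with kmalloc_node() instead of a
 * plain kmalloc(), so the memory lands on the queue's home node. */
#include <linux/slab.h>

struct example_queue_data {
	int foo;
};

static struct example_queue_data *example_alloc(gfp_t gfp_mask, int node)
{
	return kmalloc_node(sizeof(struct example_queue_data), gfp_mask, node);
}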
-
Committed by Jens Axboe
Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
Kill a few inlines that bring in too much code to more than one location. Shrinks kernel text by about 300 bytes on 32-bit x86. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
It's ok if the read path is a lot more costly, as long as inc/dec is really cheap. The inc/dec will happen for each created/freed io context, while the reading only happens when a disk queue exits. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
It's ok if the read path is a lot more costly, as long as inc/dec is really cheap. The inc/dec will happen for each created/freed io context, while the reading only happens when a disk queue exits. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
None of the in-kernel primitives for handling "atomic" counting seem to be a good fit. We need something that is essentially free for incrementing/decrementing, while the read side may be more expensive as we only ever need to do that when a device is removed from the kernel. Use a per-cpu variable for maintaining a per-cpu ioc count and define a reading mechanism that just sums up the values. Signed-off-by: Jens Axboe <axboe@suse.de>
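A hedged sketch of the counting idiom described (variable and helper names are illustrative, not the actual elevator symbols): increments and decrements touch only the local CPU's slot, and the rare read sums all slots.

/* Hedged sketch of the per-cpu counting scheme: inc/dec are per-cpu and
 * cheap; the read side sums every CPU's slot and only runs at queue exit. */
#include <linux/percpu.h>
#include <linux/cpumask.h>

static DEFINE_PER_CPU(unsigned long, example_ioc_count);

static inline void example_ioc_inc(void)
{
	get_cpu_var(example_ioc_count)++;
	put_cpu_var(example_ioc_count);
}

static inline void example_ioc_dec(void)
{
	get_cpu_var(example_ioc_count)--;
	put_cpu_var(example_ioc_count);
}

static unsigned long example_ioc_total(void)
{
	unsigned long total = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		total += per_cpu(example_ioc_count, cpu);
	return total;
}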
-
Committed by Jens Axboe
cfq_exit_lock is protecting two things now: - The per-ioc rbtree of cfq_io_contexts - The per-cfqd linked list of cfq_io_contexts The per-cfqd linked list can be protected by the queue lock, as it is (by definition) per cfqd as the queue lock is. The per-ioc rbtree is mainly used and updated by the process itself only. The only outside use is the io priority changing. If we move the priority changing to not browsing the rbtree, we can remove any locking from the rbtree updates and lookup completely. Let the sys_ioprio syscall just mark processes as having the iopriority changed and lazily update the private cfq io contexts the next time io is queued, and we can remove this locking as well. Signed-off-by: Jens Axboe <axboe@suse.de>
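As a hedged illustration of the lazy scheme (field and function names are invented here, not the exact cfq ones): the syscall path only marks the context, and the change is folded in the next time I/O is queued.

/* Hedged sketch: the priority syscall flags the context without taking
 * elevator locks; the submission path applies the change lazily. */
struct example_io_context {
	unsigned short ioprio;
	int ioprio_changed;		/* set by the ioprio syscall path */
};

static void example_set_ioprio(struct example_io_context *ioc,
			       unsigned short prio)
{
	ioc->ioprio = prio;
	ioc->ioprio_changed = 1;
}

static void example_apply_ioprio(struct example_io_context *ioc)
{
	if (!ioc->ioprio_changed)
		return;
	/* re-derive the per-queue scheduling class/priority from ioc->ioprio */
	ioc->ioprio_changed = 0;
}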
-
Committed by Jens Axboe
A collection of little fixes and cleanups:
- We don't use the 'queued' sysfs exported attribute, since the may_queue() logic was rewritten. So kill it.
- Remove dead defines.
- cfq_set_active_queue() can be rewritten cleaner with else if conditions.
- Several places had cfq_exit_cfqq() like logic, abstract that out and use that.
- Annotate the cfqq kmem_cache_alloc() so the allocator knows that this is a repeat allocation if it fails with __GFP_WAIT set. Allows the allocator to start freeing some memory, if needed. CFQ already loops for this condition, so might as well pass the hint down.
- Remove cfqd->rq_starved logic. It's not needed anymore after we dropped the crq allocation in cfq_set_request().
- Remove unneeded parameter passing.
Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
Move some members around and unionize completion_data and rb_node since they cannot ever be used at the same time. Signed-off-by: Jens Axboe <axboe@suse.de>
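A hedged, simplified sketch of the space-saving union (not the full struct request layout): the rb_node is only used while the request sits in an elevator tree, and completion_data only once the request is being completed, so the two can share storage.

/* Hedged, simplified sketch of the overlap described above. */
#include <linux/rbtree.h>

struct example_request {
	union {
		struct rb_node rb_node;	/* sort/lookup while queued in the elevator */
		void *completion_data;	/* only used once the request completes */
	};
	/* ... remaining request fields ... */
};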
-
Committed by Jens Axboe
- Don't assign variables that are only used once. - Kill spin_lock() prefetching, it's opportunistic at best. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
It's not needed for anything, so kill the bio passing. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
After Christoph's SCSI change, the only usage left is RQ_ACTIVE and RQ_INACTIVE. The block layer sets RQ_INACTIVE right before freeing the request, so any check for RQ_INACTIVE in a driver is a bug and indicates use-after-free. So kill/clean the remaining users, straightforward. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
It is always identical to &q->rq, and we only use it for detecting whether this request came out of our mempool or not. So replace it with an additional ->flags bit flag. Signed-off-by: Jens Axboe <axboe@suse.de>
-
Committed by Jens Axboe
As the comments in blkdev.h indicate, we can fold it into ->end_io_data usage as that is really what ->waiting is. Fixup the users of blk_end_sync_rq(). Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Jens Axboe
Get rid of the as_rq request type. With the added elevator_private2, we have enough room in struct request to get rid of any arq allocation/free for each request. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Nick Piggin <npiggin@suse.de>
-
Committed by Jens Axboe
Get rid of the cfq_rq request type. With the added elevator_private2, we have enough room in struct request to get rid of any crq allocation/free for each request. Signed-off-by: Jens Axboe <axboe@suse.de>
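A hedged sketch of what the two commits above rely on (hook shape simplified, helper names hypothetical): with a second opaque pointer available in struct request, an I/O scheduler can stash its per-request bookkeeping directly instead of allocating a wrapper object (arq/crq) per request.

/* Hedged sketch: reuse the request's two private slots rather than
 * kmalloc'ing a per-request wrapper in set_request and freeing it later. */
#include <linux/blkdev.h>

/* hypothetical lookups for the scheduler's own per-queue / per-ioc state */
static void *example_lookup_queue_state(struct request_queue *q, struct request *rq);
static void *example_lookup_ioc_state(struct request_queue *q);

static int example_set_request(struct request_queue *q, struct request *rq,
			       gfp_t gfp_mask)
{
	/* no per-request allocation: hang the bookkeeping off the request */
	rq->elevator_private  = example_lookup_queue_state(q, rq);
	rq->elevator_private2 = example_lookup_ioc_state(q);
	return 0;
}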
-