- 09 August 2007, 13 commits
-
-
Submitted by Ingo Molnar
eliminate rq_clock() use by changing it to:

    update_rq_clock(rq)
    now = rq->clock;

identity transformation - no change in behavior.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
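
For illustration, a minimal self-contained sketch of the call-site transformation described above; struct rq and the timing code are simplified stand-ins, not the real scheduler types:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Simplified stand-in for the scheduler's runqueue; hypothetical. */
    struct rq {
        uint64_t clock;
    };

    /* Refresh the cached timestamp (models update_rq_clock()). */
    static void update_rq_clock(struct rq *rq)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        rq->clock = (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    int main(void)
    {
        struct rq rq = { 0 };

        /* Old pattern:  now = rq_clock(&rq);   (update and read in one helper)
         * New pattern, per the commit: update explicitly, then read the field. */
        update_rq_clock(&rq);
        uint64_t now = rq.clock;

        printf("now = %llu ns\n", (unsigned long long)now);
        return 0;
    }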
-
Submitted by Ingo Molnar
add the [__]update_rq_clock(rq) functions. (No change in functionality, just reorganization to prepare for elimination of the heavy 64-bit timestamp-passing in the scheduler.)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Williams
There are two problems with balance_tasks() and how it is used:

1. The variables best_prio and best_prio_seen (inherited from the old move_tasks()) were only required to handle problems caused by the active/expired arrays, the order in which they were processed and the possibility that the task with the highest priority could be on either. These issues are no longer present and the extra overhead associated with their use is unnecessary (and possibly wrong).

2. In the absence of CONFIG_FAIR_GROUP_SCHED being set, the same this_best_prio variable needs to be used by all scheduling classes or there is a risk of moving too much load. E.g. if the highest priority task on this_rq at the beginning is a fairly low priority task and the rt class migrates a task (during its turn) then that moved task becomes the new highest priority task on this_rq, but when the sched_fair class initializes its copy of this_best_prio it will get the priority of the original highest priority task as, due to the run queue locks being held, the reschedule triggered by pull_task() will not have taken place. This could result in inappropriate overriding of skip_for_load and excessive load being moved.

The attached patch addresses these problems by deleting all reference to best_prio and best_prio_seen and making this_best_prio a reference parameter to the various functions involved.

load_balance_fair() has also been modified so that this_best_prio is only reset (in the loop) if CONFIG_FAIR_GROUP_SCHED is set. This should preserve the effect of helping spread groups' higher priority tasks around the available CPUs while improving system performance when CONFIG_FAIR_GROUP_SCHED isn't set.

Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Alexey Dobriyan
kernel.sched_domain hierarchy is under CTL_UNNUMBERED and thus unreachable to sysctl(2). Generating .ctl_number's in such a situation is not useful.

Signed-off-by: Alexey Dobriyan <adobriyan@sw.ru>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar
small delta_exec accounting fix: increase delta_exec and increase sum_exec_runtime even if the task is not on the runqueue anymore.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar
cleanup: delta_mine is an unsigned value.

no code impact:

       text    data     bss     dec     hex filename
      27823    2726      16   30565    7765 sched.o.before
      27823    2726      16   30565    7765 sched.o.after

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar
speed up schedule(): share the 'now' parameter that deactivate_task() was calculating internally.

(This also fixes the small accounting window between the deactivate call and the pick_next_task() call.)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar
uninline rq_clock() to save 263 bytes of code:

       text    data     bss     dec     hex filename
      39561    3642      24   43227    a8db sched.o.before
      39298    3642      24   42964    a7d4 sched.o.after

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Josh Triplett
sched_fair.c defines print_cfs_stats, and sched_debug.c uses it, but sched.c includes both sched_fair.c and sched_debug.c, so all the references to print_cfs_stats occur in the same compilation unit. Thus, mark print_cfs_stats static.

Eliminates a sparse warning:

    warning: symbol 'print_cfs_stats' was not declared. Should it be static?

Signed-off-by: Josh Triplett <josh@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ulrich Drepper
Here's another tiny cleanup. The generated code is not affected (gcc is smart enough), but for people looking over the code it is just irritating to have the extra conditional.

Signed-off-by: Ulrich Drepper <drepper@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Williams
The move_tasks() function is currently multiplexed with two distinct capabilities:

1. attempt to move a specified amount of weighted load from one run queue to another; and
2. attempt to move a specified number of tasks from one run queue to another.

The first of these capabilities is used in two places, load_balance() and load_balance_idle(), and in both of these cases the return value of move_tasks() is used purely to decide if tasks/load were moved and no notice of the actual number of tasks moved is taken.

The second capability is used in exactly one place, active_load_balance(), to attempt to move exactly one task and, as before, the return value is only used as an indicator of success or failure.

This multiplexing of sched_task() was introduced, by me, as part of the smpnice patches and was motivated by the fact that the alternative, one function to move specified load and one to move a single task, would have led to two functions of roughly the same complexity as the old move_tasks() (or the new balance_tasks()). However, the new modular design of the new CFS scheduler allows a simpler solution to be adopted and this patch addresses that solution by:

1. adding a new function, move_one_task(), to be used by active_load_balance(); and
2. making move_tasks() a single purpose function that tries to move a specified weighted load and returns 1 for success and 0 for failure.

One of the consequences of these changes is that neither move_one_task() nor the new move_tasks() cares how many tasks sched_class.load_balance() moves, and this enables its interface to be simplified by returning the amount of load moved as its result and removing the load_moved pointer from the argument list. This helps simplify the new move_tasks() and slightly reduces the amount of work done in each of sched_class.load_balance()'s implementations.

Further simplification, e.g. changes to balance_tasks(), is possible but (slightly) complicated by the special needs of load_balance_fair(), so I've left that to a later patch (if this one gets accepted).

NB: since move_tasks() gets called with two run queue locks held, even small reductions in overhead are worthwhile.

[ mingo@elte.hu ] this change also reduces code size nicely:

       text    data     bss     dec     hex filename
      39216    3618      24   42858    a76a sched.o.before
      39173    3618      24   42815    a73f sched.o.after

Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar
Peter Williams suggested flipping the order of the update_cpu_load(rq) and ->task_tick() calls. This is a NOP for the current scheduler (the two functions are independent of each other), but ->task_tick() might create some state for update_cpu_load() in the future (or in PlugSched).

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar
batch up the sleeper bonus sum a bit more. Anything below sched-granularity is too small to make a practical difference anyway.

This optimization reduces the math in high-frequency scheduling scenarios.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 07 August 2007, 1 commit
-
-
Submitted by Al Viro
C99 6.10.3[11]: preprocessing directive within the argument list of a macro invocation => undefined behaviour. Don't do that...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
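
For illustration, a self-contained example of the forbidden construct together with a well-defined rewrite; the SHOW macro and the USE_ONE option are made up for this sketch:

    #include <stdio.h>

    #define SHOW(x) printf("%d\n", (x))

    int main(void)
    {
        /* Undefined behaviour per C99 6.10.3p11: a preprocessing directive
         * appears inside the argument list of a macro invocation. */
        SHOW(
    #ifdef USE_ONE
            1
    #else
            2
    #endif
        );

        /* Well-defined alternative: keep the conditional outside the invocation. */
    #ifdef USE_ONE
        SHOW(1);
    #else
        SHOW(2);
    #endif
        return 0;
    }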
-
- 04 August 2007, 2 commits
-
-
Submitted by Oleg Nesterov
There are a couple of subtle checks which were needed to handle ptracing from the same thread group. This was deprecated long ago; imho this code just complicates the understanding.

And the "->parent->signal->flags & SIGNAL_GROUP_EXIT" check in exit_notify() is not right. SIGNAL_GROUP_EXIT can mean exec(), not exit_group(). This means the ptracer can lose a ptraced zombie on exec(). A minor problem, but still a bug.

Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Acked-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Daniel Ritz
The early setup function serial8250_console_early_setup() can be called from non-__init code (eg. hotpluggable serial ports like serial_cs), so remove the __init from the call chain to avoid crashes.

Signed-off-by: Daniel Ritz <daniel.ritz@gmx.ch>
Cc: Yinghai Lu <yinghai.lu@sun.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 02 August 2007, 11 commits
-
-
Submitted by Ingo Molnar
move the rest of the debugging/instrumentation code to under CONFIG_SCHEDSTATS too. This reduces code size and speeds code up:

       text    data     bss     dec     hex filename
      33044    4122      28   37194    914a sched.o.before
      32708    4122      28   36858    8ffa sched.o.after

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar
make use of the new schedstat_set() API to eliminate two #ifdef sections.

No functional changes:

       text    data     bss     dec     hex filename
      29009    4122      28   33159    8187 sched.o.before
      29009    4122      28   33159    8187 sched.o.after

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar
add the schedstat_set() API, to allow the reduction of CONFIG_SCHEDSTAT related #ifdefs. No code changed.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
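
For illustration, a self-contained sketch of how such a helper collapses #ifdef pairs at call sites; the macro body and the stats struct here are guesses for the sketch, not the exact kernel definitions:

    #include <stdio.h>

    /* When statistics are compiled out, the assignment disappears entirely. */
    #ifdef CONFIG_SCHEDSTATS
    # define schedstat_set(var, val)    do { (var) = (val); } while (0)
    #else
    # define schedstat_set(var, val)    do { } while (0)
    #endif

    struct stats { unsigned long wait_start; };

    int main(void)
    {
        struct stats s = { 0 };

        /* One unconditional line replaces an #ifdef/#endif pair at the call site. */
        schedstat_set(s.wait_start, 42);

        printf("wait_start = %lu\n", s.wait_start);
        return 0;
    }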
-
Submitted by Ingo Molnar
move load-calculation functions so that they can use the per-policy declarations and methods.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar
make sched_class.task_new == NULL a 'default method'; this allows the removal of task_rt_new.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
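
For illustration, a minimal self-contained model of the "NULL hook means use the default" pattern this relies on; the types, names and behaviour are invented for the sketch, not taken from the scheduler:

    #include <stdio.h>

    struct sched_class {
        void (*task_new)(const char *name);    /* may be NULL */
    };

    static void fair_task_new(const char *name)
    {
        printf("fair: class-specific new-task path for %s\n", name);
    }

    static void generic_new_task(const char *name)
    {
        printf("generic: default new-task path for %s\n", name);
    }

    static void wake_up_new_task(const struct sched_class *class, const char *name)
    {
        if (class->task_new)
            class->task_new(name);     /* class provides its own hook */
        else
            generic_new_task(name);    /* NULL means "use the default" */
    }

    int main(void)
    {
        struct sched_class fair = { .task_new = fair_task_new };
        struct sched_class rt   = { .task_new = NULL };   /* no per-class hook needed */

        wake_up_new_task(&fair, "A");
        wake_up_new_task(&rt, "B");
        return 0;
    }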
-
Submitted by Ingo Molnar
uninline inc_nr_running() and dec_nr_running():

       text    data     bss     dec     hex filename
      29039    4162      24   33225    81c9 sched.o.before
      29027    4162      24   33213    81bd sched.o.after

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar
uninline calc_delta_mine():

       text    data     bss     dec     hex filename
      29162    4162      24   33348    8244 sched.o.before
      29039    4162      24   33225    81c9 sched.o.after

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar
use fixed limit in calc_delta_mine() - this saves an instruction :)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Williams
1. The only place that RTPRIO_TO_LOAD_WEIGHT() is used is in the call to move_tasks() in the function active_load_balance(), and its purpose here is just to make sure that the load to be moved is big enough to ensure that exactly one task is moved (if there's one available). This can be accomplished by using ULONG_MAX instead and this allows RTPRIO_TO_LOAD_WEIGHT() to be deleted.

2. This, in turn, allows PRIO_TO_LOAD_WEIGHT() to be deleted.

3. This allows load_weight() to be deleted, which allows TIME_SLICE_NICE_ZERO to be deleted along with the comment above it.

Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar
remove the last unused remains of cache_hot_time.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Thomas Gleixner
Marcin Slusarz reported a ne2k-pci "hung network interface" regression.

Delayed disable relies on the ability to re-trigger the interrupt in the case that a real interrupt happens after the software disable was set. In this case we actually disable the interrupt on the hardware level _after_ it occurred. On enable_irq, we need to re-trigger the interrupt. On i386 this relies on a hardware resend mechanism (send_IPI_self()).

Actually we only need the resend for edge type interrupts. Level type interrupts come back once enable_irq() re-enables the interrupt line.

I assume that the interrupt in question is level triggered because it is shared and above the legacy irqs 0-15:

    17:   12   IO-APIC-fasteoi   eth1, eth0

Looking into the IO_APIC code, the resend via send_IPI_self() happens unconditionally. So the resend is done for level and edge interrupts. This makes the problem more mysterious.

The code in question, lib8390.c, does:

    disable_irq();
    fiddle_with_the_network_card_hardware();
    enable_irq();

The fiddle_with_the_network_card_hardware() might cause interrupts, which are cleared in the same code path again. Marcin found that when he disables the irq line on the hardware level (removing the delayed disable) the card is kept alive.

So the difference is that we can get a resend on enable_irq when an interrupt happens during the time where we are in the disabled region.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 01 August 2007, 6 commits
-
-
Submitted by Jesper Juhl
Coverity spotted what looks like a real possible case of using a variable after it has been freed. The problem is in kernel/relay.c::relay_open_buf().

If the code hits "goto free_buf;" it ends up in this code:

    free_buf:
        relay_destroy_buf(buf);   <--- calls kfree() on 'buf'
    free_name:
        kfree(tmpname);
    end:
        return buf;               <--- use after free of 'buf'

I read through the callers and they all handle a NULL return from this function as an error (and hitting the 'free_buf' label only happens on failure of chan->cb->create_buf_file(), so that looks like a clear error to me).

The patch simply sets 'buf' to NULL after the call to relay_destroy_buf(buf); as far as I can see that should take care of the problem.

The patch also corrects a reference to a documentation file while I was at it.

Note from Mathieu: the documentation reference change should have been done in a separate patch, but I guess no one will really care.

Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
Acked-by: "David J. Wilder" <wilder@us.ibm.com>
Tested-by: "David J. Wilder" <wilder@us.ibm.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Tom Zanussi <zanussi@us.ibm.com>
Cc: Karim Yaghmour <karim@opersys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
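
For illustration, a self-contained userspace model of the error path and the one-line fix described above; the helper names and allocation sizes are made up:

    #include <stdio.h>
    #include <stdlib.h>

    static int create_backing(void) { return -1; }   /* simulate the failing callback */

    static char *open_buf(void)
    {
        char *buf = malloc(64);
        char *tmpname = malloc(32);

        if (!buf || !tmpname)
            goto free_buf;
        if (create_backing() < 0)
            goto free_buf;

        free(tmpname);
        return buf;

    free_buf:
        free(buf);
        buf = NULL;    /* the fix: never hand back a pointer to freed memory */
        free(tmpname);
        return buf;    /* NULL signals failure to the caller */
    }

    int main(void)
    {
        char *b = open_buf();
        printf("open_buf() returned %s\n", b ? "a buffer" : "NULL (failure)");
        free(b);
        return 0;
    }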
-
Submitted by Satyam Sharma
The warning

    WARNING: kernel/built-in.o(.text+0x16910): Section mismatch: reference to .init.text: (between 'kthreadd' and 'init_waitqueue_head')

comes about because kernel/kthread.c:kthreadd() is not __init but calls kthreadd_setup(), which is __init. But this is ok, because kthreadd_setup() is only ever called at init time, and then kthreadd() proceeds into its "for (;;)" loop.

We could mark kthreadd __init_refok, but kthreadd_setup(), with just one callsite and 4 lines in it (it's been that small since 10ab825b), doesn't need to be a separate function at all -- so let's just move those four lines to the beginning of kthreadd() itself.

Signed-off-by: Satyam Sharma <ssatyam@cse.iitk.ac.in>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Andreas Schwab
The fourth argument of sys_futex is ignored when op == FUTEX_WAKE_OP, but futex_wake_op expects it as its nr_wake2 parameter. The only user of this operation in glibc is always passing 1, so this bug had no consequences so far.

Signed-off-by: Andreas Schwab <schwab@suse.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Ulrich Drepper <drepper@redhat.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Alexey Dobriyan
Signed-off-by: Alexey Dobriyan <adobriyan@sw.ru>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Alexey Dobriyan
On every open/close one struct seq_operations leaks. Kudos to /proc/slab_allocators.

Signed-off-by: Alexey Dobriyan <adobriyan@sw.ru>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Randy Dunlap
Fix kernel-doc warnings in sched.c:

    Warning(linux-2623-rc1g4//kernel/sched.c:1685): No description found for parameter 'notifier'
    Warning(linux-2623-rc1g4//kernel/sched.c:1696): No description found for parameter 'notifier'
    Warning(linux-2623-rc1g4//kernel/sched.c:1750): No description found for parameter 'prev'

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
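
For context, kernel-doc emits this warning when a function's documentation block lacks an "@name:" line for a parameter; a sketch of the expected comment shape follows (the function shown is assumed for illustration, not confirmed as the one fixed here):

    struct preempt_notifier;

    /**
     * preempt_notifier_register - register a notifier for preemption events
     * @notifier: the notifier structure to hook up
     *
     * Every parameter needs a matching "@name:" line, otherwise kernel-doc
     * prints "No description found for parameter 'name'".
     */
    void preempt_notifier_register(struct preempt_notifier *notifier);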
-
- 31 July 2007, 1 commit
-
-
Submitted by Greg Kroah-Hartman
This helps people when debugging problems like the ones that were in the recent -mm releases.

Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 30 July 2007, 3 commits
-
-
Submitted by Len Brown
Restore the 2.6.22 CONFIG_ACPI_SLEEP build option, but now shadowing the new CONFIG_PM_SLEEP option.

Signed-off-by: Len Brown <len.brown@intel.com>
[ Modified to work with the PM config setup changes. ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Rafael J. Wysocki
Introduce CONFIG_SUSPEND representing the ability to enter system sleep states, such as the ACPI S3 state, and allow the user to choose SUSPEND and HIBERNATION independently of each other.

Make HOTPLUG_CPU be selected automatically if SUSPEND or HIBERNATION has been chosen and the kernel is intended for SMP systems.

Also, introduce CONFIG_PM_SLEEP which is automatically selected if CONFIG_SUSPEND or CONFIG_HIBERNATION is set and use it to select the code needed for both suspend and hibernation.

The top-level power management headers and the ACPI code related to suspend and hibernation are modified to use the new definitions (the changes in drivers/acpi/sleep/main.c are, mostly, moving code to reduce the number of ifdefs).

There are many other files in which CONFIG_PM can be replaced with CONFIG_PM_SLEEP or even with CONFIG_SUSPEND, but they can be updated in the future.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Rafael J. Wysocki
Replace CONFIG_SOFTWARE_SUSPEND with CONFIG_HIBERNATION to avoid confusion (among other things, with CONFIG_SUSPEND introduced in the next patch).

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 29 July 2007, 2 commits
-
-
Submitted by Peter Zijlstra
copy_from_user() returns the number of bytes not copied, hence 0 is the expected output on success.

axi->mm might not be valid anymore when it is not equal to current->mm; do not dereference it before checking that - thanks to Al for spotting that.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Steve Grubb <sgrubb@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 27 July 2007, 1 commit
-
-
Submitted by Rafael J. Wysocki
Commit bd804eba ("PM: Introduce pm_power_off_prepare") caused problems in the poweroff path, as reported by YOSHIFUJI Hideaki / 吉藤英明.

Generally, sysdev_shutdown() should be called after the ACPI preparation for powering the system off. To make it happen, we can separate sysdev_shutdown() from device_shutdown() and call it directly wherever necessary.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Tested-by: YOSHIFUJI Hideaki / 吉藤英明 <yoshfuji@linux-ipv6.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-