- 13 January 2006, 21 commits
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by akpm@osdl.org
From: Al Viro <viro@ftp.linux.org.uk> task_pt_regs() needs the same offset-by-8 to match copy_thread(). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by akpm@osdl.org
From: Al Viro <viro@ftp.linux.org.uk> Rename alpha_task_regs() to task_pt_regs() and switch open-coded instances to use of the helper. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Al Viro
Use task_stack_page() for accesses to the stack page of a task in alpha-specific parts of the tree. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Al Viro
Use task_thread_info() for accesses to the thread_info of a task in arch/alpha and include/asm-alpha. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Al Viro
This patchset annotates arch/* uses of ->thread_info. Uses that really are about access to the thread_info of a given process are simply switched to task_thread_info(task); uses that deal with access to objects on the stack are switched to a new helper, task_stack_page(). A _lot_ of the latter are actually open-coded instances of "find where pt_regs are"; those are consolidated into task_pt_regs(task) (many architectures already have such a helper).

Note that these annotations are not mandatory - any code not converted to these helpers still works. However, they clean up a lot of places and have actually caught a number of bugs, so converting out-of-tree ports would be a good idea...

As an example of breakage caught by this work, see the i386 pt_regs mess - we used to have it open-coded in a bunch of places, and when Stas fixed a bug in copy_thread() back in April, the rest had been left out of sync. That required two follow-up patches (the latest just before 2.6.15) _and_ still left the /proc/*/stat eip field broken. Try ps -eo eip on i386 and watch the junk...

This patch: new helper - task_stack_page(task). Returns a pointer to the memory object containing the task stack; usually the thread_info of the task sits at the beginning of that object.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
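For orientation, a minimal sketch of how such helpers typically fit together; the struct layouts, THREAD_SIZE value and macro bodies here are illustrative assumptions, not the actual per-architecture kernel definitions:

struct thread_info { unsigned long flags; };	/* placeholder layout */
struct pt_regs { unsigned long regs[32]; };	/* placeholder layout */

struct task_struct {
	struct thread_info *thread_info;
	/* ... */
};

#define THREAD_SIZE 8192	/* hypothetical stack size for this sketch */

/* thread_info of a given task */
#define task_thread_info(tsk)	((tsk)->thread_info)

/* the memory object holding the task's kernel stack */
#define task_stack_page(tsk)	((void *)(tsk)->thread_info)

/* saved user registers, conventionally at the top of the stack page */
#define task_pt_regs(tsk) \
	(((struct pt_regs *)((char *)task_stack_page(tsk) + THREAD_SIZE)) - 1)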
-
Committed by akpm@osdl.org
From: Nick Piggin <nickpiggin@yahoo.com.au> Track the last waker CPU, and only consider wakeup-balancing if there's a match between the current waker CPU and the previous waker CPU. This ensures that there is some correlation between two subsequent wakeup events before we move the task. Should help random-wakeup workloads on large SMP systems by reducing the migration attempts by a factor of nr_cpus. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by akpm@osdl.org
From: Ingo Molnar <mingo@elte.hu>

This is the latest version of the scheduler cache-hot-auto-tune patch.

The first problem was that detection time scaled with O(N^2), which is unacceptable on larger SMP and NUMA systems. To solve this:

- I've added a 'domain distance' function, which is used to cache measurement results. Each distance is only measured once. This means that e.g. on NUMA, distances of 0, 1 and 2 might be measured; on HT, distances 0 and 1; and on SMP only distance 0 is measured. The code walks the domain tree to determine the distance, so it automatically follows whatever hierarchy an architecture sets up. This cuts down on the boot time significantly and removes the O(N^2) limit. The only assumption is that migration costs can be expressed as a function of domain distance - this covers the overwhelming majority of existing systems, and is a good guess even for more asymmetric systems.

[ People hacking systems that have asymmetries that break this assumption (e.g. different CPU speeds) should experiment a bit with the cpu_distance() function. Adding a ->migration_distance factor to the domain structure would be one possible solution - but let's first see the problem systems, if they exist at all. Let's not overdesign. ]

Another problem was that only a single cache size was used for measuring the cost of migration, and most architectures didn't set that variable up. Furthermore, a single cache size does not fit NUMA hierarchies with L3 caches and does not fit HT setups, where different CPUs will often have different 'effective cache sizes'. To solve this problem:

- Instead of relying on a single cache size provided by the platform and sticking to it, the code now auto-detects the 'effective migration cost' between two measured CPUs by iterating through a wide range of cache sizes. The code searches for the maximum migration cost, which occurs when the working set of the test workload falls just below the 'effective cache size'. I.e. a real-life optimized search is done for the maximum migration cost, between two real CPUs.

This, amongst other things, has the positive effect that if e.g. two CPUs share an L2/L3 cache, a different (and accurate) migration cost will be found than between two CPUs on the same system that don't share any caches. (The reliable measurement of migration costs is tricky - see the source for details.)

Furthermore, I've added various boot-time options to override/tune migration behavior.

Firstly, there's a blanket override for autodetection: migration_cost=1000,2000,3000 will override the depth 0/1/2 values with 1msec/2msec/3msec values.

Secondly, there's a global factor that can be used to increase (or decrease) the autodetected values: migration_factor=120 will increase the autodetected values by 20%. This option is useful to tune things in a workload-dependent way - e.g. if a workload is cache-insensitive then CPU utilization can be maximized by specifying migration_factor=0.

I've tested the autodetection code quite extensively on x86, on 3 different systems, and the autodetected values look pretty good:

Dual Celeron (128K L2 cache):

 ---------------------
 migration cost matrix (max_cache_size: 131072, cpu: 467 MHz):
 ---------------------
           [00]    [01]
 [00]:       -    1.7(1)
 [01]:    1.7(1)    -
 ---------------------
 cacheflush times [2]: 0.0 (0) 1.7 (1784008)
 ---------------------

Here the slow memory subsystem dominates system performance, and even though caches are small, the migration cost is 1.7 msecs.

Dual HT P4 (512K L2 cache):

 ---------------------
 migration cost matrix (max_cache_size: 524288, cpu: 2379 MHz):
 ---------------------
           [00]    [01]    [02]    [03]
 [00]:       -    0.4(1)  0.0(0)  0.4(1)
 [01]:    0.4(1)    -     0.4(1)  0.0(0)
 [02]:    0.0(0)  0.4(1)    -     0.4(1)
 [03]:    0.4(1)  0.0(0)  0.4(1)    -
 ---------------------
 cacheflush times [2]: 0.0 (33900) 0.4 (448514)
 ---------------------

Here it can be seen that there is no migration cost between two HT siblings (CPU#0/2 and CPU#1/3 are separate physical CPUs). A fast memory system makes inter-physical-CPU migration pretty cheap: 0.4 msecs.

8-way P3/Xeon [2MB L2 cache]:

 ---------------------
 migration cost matrix (max_cache_size: 2097152, cpu: 700 MHz):
 ---------------------
           [00]    [01]    [02]    [03]    [04]    [05]    [06]    [07]
 [00]:       -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
 [01]:   19.2(1)    -     19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
 [02]:   19.2(1) 19.2(1)    -     19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
 [03]:   19.2(1) 19.2(1) 19.2(1)    -     19.2(1) 19.2(1) 19.2(1) 19.2(1)
 [04]:   19.2(1) 19.2(1) 19.2(1) 19.2(1)    -     19.2(1) 19.2(1) 19.2(1)
 [05]:   19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -     19.2(1) 19.2(1)
 [06]:   19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -     19.2(1)
 [07]:   19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -
 ---------------------
 cacheflush times [2]: 0.0 (0) 19.2 (19281756)
 ---------------------

This one has huge caches and a relatively slow memory subsystem - so the migration cost is 19 msecs.

Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Cc: <wilder@us.ibm.com> Signed-off-by: John Hawkes <hawkes@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
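To make the "measure each domain distance only once" idea concrete, here is a rough sketch assuming hypothetical cpu_distance() and measure_migration_cost() helpers; it is not the kernel's actual implementation:

#define MAX_DOMAIN_DISTANCE 8

static unsigned long long migration_cost_ns[MAX_DOMAIN_DISTANCE];
static int cost_measured[MAX_DOMAIN_DISTANCE];

/* assumed helpers for this sketch */
extern int cpu_distance(int cpu1, int cpu2);				/* walk the domain tree */
extern unsigned long long measure_migration_cost(int c1, int c2);	/* expensive benchmark */

static unsigned long long get_migration_cost(int cpu1, int cpu2)
{
	int d = cpu_distance(cpu1, cpu2);

	/* Benchmark one CPU pair per distance; every later pair at the same
	 * distance reuses the cached result, avoiding the O(N^2) boot cost. */
	if (!cost_measured[d]) {
		migration_cost_ns[d] = measure_migration_cost(cpu1, cpu2);
		cost_measured[d] = 1;
	}
	return migration_cost_ns[d];
}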
-
Committed by Ingo Molnar
Add per-arch sched_cacheflush(), which is a write-back cacheflush used by the migration-cost calibration code at bootup time. Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
Fixes bugzilla.kernel.org bug 2903. Cc: <tim@cyberelk.net> Cc: <andrea@suse.de> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Greg Ungerer
Remove an unnecessary page++ from the memmap_init_zone loop. Signed-off-by: Greg Ungerer <gerg@uclinux.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
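A simplified sketch of the pattern in question; the helper names below are stand-ins for the kernel's pfn_to_page()/early_pfn_valid(), not the real memmap_init_zone(). When the page pointer is recomputed from the pfn at the top of each iteration, an extra page++ in the loop header is dead work:

struct page;
extern struct page *lookup_page(unsigned long pfn);	/* stand-in for pfn_to_page() */
extern int pfn_is_valid(unsigned long pfn);		/* stand-in for early_pfn_valid() */
extern void init_one_page(struct page *page, unsigned long zone, int nid, unsigned long pfn);

static void init_zone_pages(unsigned long start_pfn, unsigned long end_pfn,
			    unsigned long zone, int nid)
{
	struct page *page;
	unsigned long pfn;

	/* the loop header previously also advanced the page pointer;
	 * that increment was redundant */
	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		if (!pfn_is_valid(pfn))
			continue;
		page = lookup_page(pfn);	/* derived from pfn every pass */
		init_one_page(page, zone, nid, pfn);
	}
}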
-
Committed by Neil Brown
For example, the sx8 driver uses names like sx8/0. This would make an md component dev name like /sys/block/md0/md/dev-sx8/0, which is not allowed. So we change the '/' to '!', just as fs/partitions/check.c (register_disk) does. Signed-off-by: Neil Brown <neilb@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
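A sketch of the renaming rule (illustrative only, not the md driver's code): sysfs entry names cannot contain '/', so each '/' in the device name is rewritten as '!':

static void sanitize_sysfs_name(char *name)
{
	char *p;

	for (p = name; *p; p++)
		if (*p == '/')
			*p = '!';	/* "dev-sx8/0" becomes "dev-sx8!0" */
}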
-
Committed by Catalin Marinas
Adapt tiny-shmem.c to the new do_truncate() prototype. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Matt Mackall <mpm@selenic.com> Acked-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Christoph Lameter
This ensures that reserved pages are not migrated. Reserved pages currently cause the WARN_ON to trigger in migrate_page_add(). Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
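The shape of such a guard, sketched with invented names (the actual check lives in the mempolicy page-gathering path that feeds migrate_page_add()):

struct page;
extern int page_is_reserved(struct page *page);		/* stand-in for PageReserved() */
extern void queue_page_for_migration(struct page *page);	/* hypothetical helper */

static void maybe_queue_for_migration(struct page *page)
{
	/* Reserved pages (kernel text, mem_map, firmware areas) must stay put. */
	if (page_is_reserved(page))
		return;

	queue_page_for_migration(page);
}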
-
Committed by Tejun Heo
If ordered tag isn't supported, request ordering for barrier sequencing is performed by queue draining, which basically hangs the request queue until elv_completed_request() reports completion of all previous fs requests. The condition check in elv_completed_request() was only performed for fs requests. If a special request is queued between the last to-be-drained request and the barrier sequence, draining is never completed and the queue is stalled forever. This patch moves the end-of-draining condition check such that it's performed for all requests. Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
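A simplified sketch of the fix, with invented structure and helper names rather than the real block-layer elevator code:

struct queue_stub {
	int in_flight;	/* requests issued but not yet completed */
	int draining;	/* a barrier sequence is waiting for the drain */
};

extern void kick_barrier_sequence(struct queue_stub *q);	/* hypothetical */

static void on_request_completed(struct queue_stub *q, int is_fs_request)
{
	(void)is_fs_request;	/* after the fix, the check no longer depends on this */

	q->in_flight--;

	/*
	 * Before the fix this check only ran for fs requests, so a completing
	 * special request could never finish the drain and the queue stalled.
	 */
	if (q->draining && q->in_flight == 0)
		kick_barrier_sequence(q);
}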
-
- 12 January 2006, 19 commits
-
Committed by Linus Torvalds
-
Committed by Vivek Goyal
o This fix was posted for i386 long back. Posting it for x86_64. http://marc.theaimsgroup.com/?l=linux-kernel&m=110380103229830&w=2
o This patch fixes the problem of secondary CPU boot-up. This situation is faced when the kernel is built for default locations like 16MB and onwards. In this configuration, only the primary CPU (BP) comes up and the secondary CPUs don't boot.
o The problem occurs because, in the trampoline code, lgdt is not able to load the GDT as it happens to be situated beyond 16MB. This is due to the fact that the CPU is still in real mode and the default operand size is 16 bits.
o This patch uses lgdtl instead of lgdt to force the operand size to 32 instead of 16.
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
Handling common prefixes is tricky. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Jan Beulich
... reducing the amount of changes Xen has to do. Signed-Off-By: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Jan Beulich
The explicit and implicit calls to setup_early_printk() were passing inconsistent arguments. Signed-Off-By: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
It has no business being elsewhere, and x86-64 doesn't need/want it. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
Previously they would only be allocated before the kernel text at 1MB. This limited the maximum supported memory to 128GB. Now allow the e820 allocator to put them everywhere. Try to put them beyond any DMA zones to avoid filling those up. This should free some GFP_DMA memory compared to earlier kernels. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
hard_smp_processor_id() would return the local APIC ID instead of the Linux processor ID. On big systems they are often not identical. safe_smp_processor_id() is just a wrapper around it that does the necessary conversion. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
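Roughly what that conversion looks like; the table name, its type and the fallback are assumptions for illustration, not the exact kernel code:

#define NR_CPUS 64

extern unsigned int read_local_apic_id(void);	/* stand-in for hard_smp_processor_id() */
static unsigned char cpu_to_apicid[NR_CPUS];	/* filled in during boot */

static int current_linux_cpu(void)
{
	unsigned int apicid = read_local_apic_id();
	int cpu;

	/* Linux CPU numbers are dense 0..N-1; local APIC IDs need not be. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu_to_apicid[cpu] == apicid)
			return cpu;

	return 0;	/* fall back to the boot CPU if nothing matches */
}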
-
Committed by Andi Kleen
Remove support for obsolete hardware and clean up.
- Remove checks for non-integrated APICs
- Replace apic_write_around with apic_write
- Remove apic_read_around
- Remove APIC version reads used by old workarounds
- Remove old workaround for Simics
- Fix indentation
Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Jan Beulich
When building in a separate objtree, file names produced by BUG() & Co. can get fairly long; printing only the first 50 characters may thus result in (almost) no useful information. The following change makes it so that the last 50 characters of the filename get printed instead. Signed-Off-By: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
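A minimal illustration of the idea (not the kernel's actual BUG() reporting code): when the path exceeds the print budget, keep its tail, which carries the source file name:

#include <stdio.h>
#include <string.h>

#define FILE_PRINT_LEN 50

static void print_bug_location(const char *file, int line)
{
	size_t len = strlen(file);

	if (len > FILE_PRINT_LEN)
		file += len - FILE_PRINT_LEN;	/* keep the last 50 characters */

	printf("kernel BUG at %s:%d!\n", file, line);
}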
-
Committed by Jan Beulich
Especially under Xen, where the console cannot be adjusted to more than 25 lines, it is fairly important that the information displayed during a panic is as compact as possible. The adjustments below work towards that. Signed-Off-By: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Jan Beulich
Due to a broken condition, the body of the loop that is intended to wait for the Update-In-Progress bit to get set and then cleared again was never entered; in fact, the entire loop was optimized out by the compiler. Here is a change to fix the condition (and to also move the initialization of locals out of the spinlock-protected region). Signed-Off-By: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
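For context, a sketch of what a set-then-cleared wait on the RTC Update-In-Progress bit looks like; the register constants and cmos_read() helper are assumptions for illustration, not the code that was actually patched:

#define RTC_REG_A	0x0a
#define RTC_UIP		0x80

extern unsigned char cmos_read(unsigned char reg);	/* assumed helper */

static void wait_for_rtc_update_edge(void)
{
	unsigned long timeout = 1000000;
	int seen_uip = 0;

	/* Wait until UIP is observed set, then until it clears again, so the
	 * following RTC reads see a consistent time.  A mistyped condition
	 * here can let the compiler discard the loop entirely. */
	while (timeout--) {
		if (cmos_read(RTC_REG_A) & RTC_UIP)
			seen_uip = 1;
		else if (seen_uip)
			break;		/* set-then-cleared edge observed */
	}
}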
-
Committed by Andi Kleen
It was only needed for APM. Pointed out by Jan Beulich. Cc: jbeulich@novell.com Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
X86_FEATURE_K8_C was a synthetic Linux CPUID flag that was used for some code optimizations in Opteron C stepping or later. But support for pre-C-stepping optimizations has been removed, so this isn't needed anymore. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
early_cpu_detect only runs on the BP, but this code needs to run on all CPUs. Looks like a mismerge somewhere. Also add a warning comment. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Stephen Hemminger
Fix some trivial sparse warnings in x86_64 code. Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
Saves about 18K of .text in defconfig. There would be more optimization potential, but that's for later. Suggestion originally from Bill Irwin. Fix from Andy Whitcroft. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
They used to be used by the reboot code, but not anymore. Noticed by Jan Beulich. Cc: JBeulich@novell.com Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Vivek Goyal
o Currently, during kexec reboot, the IOAPIC is re-programmed back to virtual wire mode if there was an i8259 connected to it. This enables getting timer interrupts in the second kernel in legacy mode.
o After being put into virtual wire mode, the IOAPIC delivers the i8259 interrupts to CPU0. This works well for kexec but not for kdump, as we might crash on a different CPU and the second kernel will not see timer interrupts.
o This patch modifies the redirection table entry to deliver the timer interrupts to the CPU we are rebooting on (instead of hardcoding it to zero). This ensures that the second kernel receives timer interrupts even on a non-boot CPU.
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-