- 17 December 2005, 3 commits
-
-
Committed by Christoph Lameter
sparc64, i386 and x86_64 have support for a special data section dedicated to rarely updated data that is frequently read. The section was created to avoid false sharing of those rarely read data with frequently written kernel data. This patch creates such a data section for ia64 and will group rarely written data into this section. Signed-off-by: NChristoph Lameter <clameter@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
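A minimal sketch of what such an annotation can look like (the macro and section name follow the existing i386/x86_64 convention and are shown as an assumption, not the exact ia64 hunk):

    /* Mark a variable as rarely written but frequently read, so the linker
     * groups it away from hot, frequently written kernel data and its cache
     * lines are not falsely shared. */
    #define __read_mostly __attribute__((__section__(".data.read_mostly")))

    /* Written once at boot, read on every fast-path lookup afterwards. */
    static unsigned long machine_cache_line_size __read_mostly;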
-
Committed by Jes Sorensen
Use raw_smp_processor_id() instead of get_cpu() as we don't need the extra features of get_cpu(). Signed-off-by: NJes Sorensen <jes@trained-monkey.org> Signed-off-by: NTony Luck <tony.luck@intel.com>
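Roughly the distinction being exploited (a sketch, not the actual hunk):

    #include <linux/kernel.h>
    #include <linux/smp.h>

    static void report_cpu(void)
    {
        int cpu;

        /* get_cpu() disables preemption and must be paired with put_cpu();
         * that guarantee only matters if the code must stay on this CPU. */
        cpu = get_cpu();
        /* ... work that must not migrate ... */
        put_cpu();

        /* When the id is purely informational, the cheaper read is enough. */
        cpu = raw_smp_processor_id();
        printk(KERN_INFO "running on cpu %d\n", cpu);
    }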
-
Committed by John Hawkes
The udelay() inline for ia64 uses the ITC. If CONFIG_PREEMPT is enabled and the platform has unsynchronized ITCs and the calling task migrates to another CPU while doing the udelay loop, then the effective delay may be too short or very, very long. This patch disables preemption around 100 usec chunks of the overall desired udelay time. This minimizes preemption-holdoffs. udelay() is now too big to be inline, move it out of line and export it. Signed-off-by: NJohn Hawkes <hawkes@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
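A sketch of the chunking idea, assuming a hypothetical __itc_busy_wait() helper in place of the real ITC spin loop:

    #include <linux/kernel.h>
    #include <linux/preempt.h>

    void udelay(unsigned long usecs)
    {
        /* Spin in chunks of at most 100 usec with preemption disabled, so a
         * migration between chunks (on a box with unsynchronized ITCs) cannot
         * corrupt one long ITC-based wait, while each preempt-off window
         * stays short. */
        while (usecs > 0) {
            unsigned long chunk = min(usecs, 100UL);

            preempt_disable();
            __itc_busy_wait(chunk);    /* hypothetical: busy-wait on the local ITC */
            preempt_enable();

            usecs -= chunk;
        }
    }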
-
- 15 December 2005, 1 commit
-
-
Committed by Robin Holt
Missed this when fixing the SET_PERSONALITY change. Signed-off-by: NRobin Holt <holt@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
-
- 13 December 2005, 1 commit
-
-
Committed by Keshavamurthy Anil S
When multiple probes are registered at the same address and, due to some recursion (a probe getting triggered within a probe handler), we skip calling the pre_handlers and just increment the nmissed field. The patch below makes sure the whole list is walked in the multiple-probes case; without it we get incorrect nmissed counts when several probes share an address. Signed-off-by: NAnil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
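A sketch of the fix in the recursion path, assuming p is the aggregate kprobe registered at the address (field names as in the 2.6-era struct kprobe):

    #include <linux/kprobes.h>
    #include <linux/list.h>

    static void count_missed_probes(struct kprobe *p)
    {
        struct kprobe *kp;

        /* Bump nmissed for every probe chained at this address, not just the
         * head probe, when recursion forces the pre_handlers to be skipped. */
        list_for_each_entry(kp, &p->list, list)
            kp->nmissed++;
    }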
-
- 07 December 2005, 2 commits
-
-
Committed by Venkatesh Pallipadi
What is the value shown in "cpu MHz" of /proc/cpuinfo when CPUs are capable of changing frequency? Today the answer is: it depends. On i386: an SMP kernel always shows the boot frequency; a UP kernel scales with the frequency changes and shows whatever was last set. On x86_64: there is one single variable cpu_khz that gets written by all the CPUs, so the frequency set by the last CPU is seen in /proc/cpuinfo for every CPU in the system. What you see also depends on whether the CPU is constant_tsc capable. On ia64: it is always the boot-time frequency of a particular CPU that gets displayed. The patch below changes this to show the last known frequency of the particular CPU when cpufreq is present; if the CPU does not support changing its frequency through cpufreq, the boot frequency is shown. The patch affects the i386, x86_64 and ia64 architectures. Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: NDave Jones <davej@redhat.com>
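A sketch of how the frequency can be picked in show_cpuinfo(); cpufreq_quick_get() returns the last known frequency in kHz, or 0 when cpufreq does not manage that CPU, and the boot-frequency fallback name here is illustrative:

    #include <linux/cpufreq.h>
    #include <linux/seq_file.h>

    static void print_cpu_mhz(struct seq_file *m, unsigned int cpu,
                              unsigned int boot_khz)
    {
        unsigned int khz = cpufreq_quick_get(cpu);  /* 0 if cpufreq is absent */

        if (!khz)
            khz = boot_khz;                         /* boot-time frequency */

        seq_printf(m, "cpu MHz\t\t: %u.%03u\n", khz / 1000, khz % 1000);
    }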
-
Committed by Robin Holt
We have a customer application which trips a bug. The problem arises when a driver attempts to call do_munmap on an area which is mapped, but because current->thread.task_size has been set to 0xC0000000, the call to do_munmap fails, thinking it is an unmap beyond the user's address space. The comment in fs/binfmt_elf.c in load_elf_library(), before the call to SET_PERSONALITY(), indicates that task_size must not be changed for the running application until flush_thread(), but it was being changed for ia64 executing ia32 binaries. This patch moves the setting of task_size from SET_PERSONALITY() to flush_thread() as indicated. The customer application is no longer able to trip the bug. Signed-off-by: NRobin Holt <holt@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
-
- 06 December 2005, 1 commit
-
-
Committed by Keith Owens
Return -EINTR instead of -ERESTARTSYS when signals are delivered during a blocked read of /proc/sal/*/event. This allows salinfo_decode to detect signals when it is blocked on a read of those files. Signed-off-by: NKeith Owens <kaos@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
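A minimal sketch of the behavioural change in the blocked-read path (the semaphore name is illustrative):

    #include <linux/errno.h>
    #include <asm/semaphore.h>

    static int wait_for_event(struct semaphore *sem)
    {
        /* If a signal arrives while blocked, report -EINTR (visible to
         * userspace) rather than -ERESTARTSYS, which the kernel may
         * transparently restart and so hide the signal from salinfo_decode. */
        if (down_interruptible(sem))
            return -EINTR;    /* previously: -ERESTARTSYS */
        return 0;
    }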
-
- 30 November 2005, 2 commits
-
-
Committed by Keshavamurthy Anil S
break.b always sets cr.iim to 0, so the current code tries to get the break_num by decoding the instruction. However, there seems to be a race condition while reading regs->cr_iip: on another cpu the break.b at regs->cr_iip might have been replaced with the original instruction as a result of unregister_kprobe(), in which case decoding the instruction to obtain break_num yields the wrong value. Also includes changes to kprobes.c, which now has to handle break number zero. Signed-off-by: NAnil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
-
Committed by Dean Roe
A single SGI Altix system can be divided into multiple partitions, each running its own instance of the Linux kernel. pfn_valid() is currently not optimal for any but the first partition, since it does not compare the pfn with min_low_pfn before calling the more costly ia64_pfn_valid(). Signed-off-by: NDean Roe <roe@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
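A sketch of the cheap pre-check (the exact macro in the ia64 headers may differ slightly):

    /* Reject page frame numbers below the partition's first valid pfn before
     * paying for the much more expensive ia64_pfn_valid() lookup. */
    #define pfn_valid(pfn) \
        (((pfn) >= min_low_pfn) && ((pfn) < max_low_pfn) && ia64_pfn_valid(pfn))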
-
- 24 November 2005, 1 commit
-
-
Committed by Jim Keniston
Fix a bug in kprobes that can cause an Oops or even a crash when a return probe is installed on one of the following functions: sys_execve, do_execve, load_*_binary, flush_old_exec, or flush_thread. The fix is to remove the call to kprobe_flush_task() in flush_thread(). This fix has been tested on all architectures for which the return-probes feature has been implemented (i386, x86_64, ppc64, ia64). Please apply. BACKGROUND Up to now, we have called kprobe_flush_task() under two situations: when a task exits, and when it execs. Flushing kretprobe_instances on exit is correct because (a) do_exit() doesn't return, and (b) one or more return-probed functions may be active when a task calls do_exit(). Neither is the case for sys_execve() and its callees. Initially, the mistaken call to kprobe_flush_task() on exec was harmless because we put the "real" return address of each active probed function back in the stack, just to be safe, when we recycled its kretprobe_instance. When support for ppc64 and ia64 was added, this safety measure couldn't be employed, and was eventually dropped even for i386 and x86_64. sys_execve() and its callees were informally blacklisted for return probes until this fix was developed. Acked-by: NPrasanna S Panchamukhi <prasanna@in.ibm.com> Signed-off-by: NJim Keniston <jkenisto@us.ibm.com> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
-
- 18 November 2005, 2 commits
-
-
Committed by Chen, Kenneth W
Polish the comments specifically in vhpt_miss and nested_dtlb_miss handlers. I think it's better to explicitly name each page table level with its name instead of numerically name them. i.e., use pgd, pud, pmd, and pte instead of referring as L1, L2, L3 etc. Along the line, remove some magic number in the comments like: "PTA + (((IFA(61,63) << 7) | IFA(33,39))*8)". No code change at all, pure comment update. Feel free to shoot anything you have, darts or tomahawk cruise missile. I will duck behind a bunker ;-) Signed-off-by: NKen Chen <kenneth.w.chen@intel.com> Acked-by: NRobin Holt <holt@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
-
Committed by Chen, Kenneth W
From source code inspection, I think there is a bug in the 4-level page table path of the vhpt_miss handler. In the code path that rechecks the page table entry against the previously read value after the tlb insertion, the *pte value in register r18 was overwritten with the value newly read from the pud pointer, rendering the check of the new *pte against the previous *pte completely wrong. The bug is not fatal (the penalty is to purge the entry and retry), but for functional correctness it should be fixed. The fix is to use a different register so the new *pud doesn't trash *pte. (By the way, the comment in the cmp statement is wrong as well, which I will address in the next patch.) Signed-off-by: NKen Chen <kenneth.w.chen@intel.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
-
- 16 November 2005, 1 commit
-
-
Committed by Chen, Kenneth W
Our performance validation on 2.6.15-rc1 caught a disastrous performance regression on ia64 with netperf (-98%) and volanomark (-58%) compared to the previous kernel version 2.6.14-git7. See the following chart (result groups 1 & 2): http://kernel-perf.sourceforge.net/results.machine_id=26.html We have root-caused it to commit 64c7c8f8. This changeset broke the ia64 task resched notification. In sched.c:resched_task(), a reschedule IPI is conditioned upon TIF_POLLING_NRFLAG. However, the above changeset unconditionally set the polling thread flag for idle tasks regardless of whether pal_halt_light is in use or not. As a result, the resched IPI is not sent from resched_task(). And since the default behavior on ia64 is to use pal_halt_light, we end up delaying the rescheduling of the task until the next timer tick, which causes the performance regression. This fixes the performance bug. I'm glad our performance suite is turning up bad performance bugs like this in time. Signed-off-by: NKen Chen <kenneth.w.chen@intel.com> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
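The dependency being described, roughly (a sketch of the relevant resched_task() logic, not the fix itself):

    #include <linux/sched.h>

    static void kick_task(struct task_struct *p)
    {
        /* After marking the task as needing a reschedule, an IPI is only sent
         * when the task is NOT advertising TIF_POLLING_NRFLAG.  If the idle
         * task sets the polling flag but actually sleeps in PAL_HALT_LIGHT,
         * the IPI is skipped and the wakeup waits for the next timer tick. */
        set_tsk_thread_flag(p, TIF_NEED_RESCHED);
        if (!test_tsk_thread_flag(p, TIF_POLLING_NRFLAG))
            smp_send_reschedule(task_cpu(p));
    }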
-
- 12 November 2005, 1 commit
-
-
Committed by Robin Holt
This patch introduces 4-level page tables to ia64. I have run some benchmarks and found nothing interesting. Performance has consistently fallen within the noise range. It also introduces a config option (setting the default to 3 levels). The config option prevents having 4 level page tables with 64k base page size. Signed-off-by: NRobin Holt <holt@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
-
- 11 November 2005, 1 commit
-
-
Committed by Panagiotis Issaris
Convert kcalloc(1, ...) calls to kzalloc(). Signed-off-by: NPanagiotis Issaris <takis@issaris.org> Signed-off-by: NTony Luck <tony.luck@intel.com>
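The shape of the conversion, as a sketch:

    /* before:
     *     ptr = kcalloc(1, sizeof(*ptr), GFP_KERNEL);
     * after: kzalloc() returns the same zeroed memory and states the intent
     * (a single object) directly. */
    ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
    if (!ptr)
        return -ENOMEM;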
-
- 09 November 2005, 6 commits
-
-
Committed by Nick Piggin
Make some changes to NEED_RESCHED and POLLING_NRFLAG to reduce confusion, and make their semantics rigid. Improves efficiency of resched_task and some cpu_idle routines.

* In resched_task:
  - TIF_NEED_RESCHED is only cleared with the task's runqueue lock held, and as we hold it during resched_task, there is no need for an atomic test and set there. The only other time this should be set is when the task's quantum expires, in the timer interrupt - this is protected against because the rq lock is irq-safe.
  - If TIF_NEED_RESCHED is set, then we don't need to do anything. It won't get unset until the task gets schedule()d off.
  - If we are running on the same CPU as the task we resched, then set TIF_NEED_RESCHED and no further action is required.
  - If we are running on another CPU, and TIF_POLLING_NRFLAG is *not* set after TIF_NEED_RESCHED has been set, then we need to send an IPI.
  Using these rules, we are able to remove the test and set operation in resched_task, and make clear the previously vague semantics of POLLING_NRFLAG.

* In idle routines:
  - Enter cpu_idle with preempt disabled. When the need_resched() condition becomes true, explicitly call schedule(). This makes things a bit clearer (IMO), but not all architectures have been updated yet.
  - Many do a test and clear of TIF_NEED_RESCHED for some reason. According to the resched_task rules, this isn't needed (and actually breaks the assumption that TIF_NEED_RESCHED is only cleared with the runqueue lock held). So remove that. Generally one less locked memory op when switching to the idle thread.
  - Many idle routines clear TIF_POLLING_NRFLAG, and only set it in the innermost polling idle loops. The above resched_task semantics allow it to be set until just before the last time need_resched() is checked before going into a halt requiring interrupt wakeup. Many idle routines simply never enter such a halt, so POLLING_NRFLAG can always be left set, completely eliminating resched IPIs when rescheduling the idle task. The window in which POLLING_NRFLAG is set can be widened, to reduce the chance of resched IPIs.

Signed-off-by: NNick Piggin <npiggin@suse.de> Cc: Ingo Molnar <mingo@elte.hu> Cc: Con Kolivas <kernel@kolivas.org> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
-
Committed by Nick Piggin
Run idle threads with preempt disabled. Also corrected a bug in arm26's cpu_idle (make it actually call schedule()). How did it ever work before? Might fix the CPU hotplugging hang which Nigel Cunningham noted. We think the bug hits if the idle thread is preempted after checking need_resched() and before going to sleep, and the CPU is then offlined. After calling stop_machine_run, the CPU eventually returns from preemption into the idle thread and goes to sleep. The CPU will continue executing the previous idle loop and have no chance to call play_dead. By disabling preemption until we are ready to explicitly schedule, this bug is fixed and the idle threads generally become more robust. From: alexs <ashepard@u.washington.edu> PPC build fix From: Yoichi Yuasa <yuasa@hh.iij4u.or.jp> MIPS build fix Signed-off-by: NNick Piggin <npiggin@suse.de> Signed-off-by: NYoichi Yuasa <yuasa@hh.iij4u.or.jp> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
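Roughly the pattern the idle loops converge on (a sketch; default_idle() stands in for each architecture's halt routine):

    /* cpu_idle() is entered with preemption disabled and only re-enables it
     * briefly around the explicit schedule(), so the idle thread can never be
     * preempted between checking need_resched() and halting. */
    void cpu_idle(void)
    {
        for (;;) {
            while (!need_resched())
                default_idle();

            preempt_enable_no_resched();
            schedule();
            preempt_disable();
        }
    }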
-
Committed by Russ Anderson
When a page has a memory uncorrectable ECC error, the recovery code wants to prevent the page from being reused. This change bumps the reference count to prevent the page from getting back on the free list. Signed-off-by: Russ Anderson (rja@sgi.com) Signed-off-by: NTony Luck <tony.luck@intel.com>
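The mechanism, in sketch form:

    /* Take (and deliberately never drop) an extra reference on the bad page,
     * so its refcount can never reach zero and the page can never be handed
     * back to the allocator's free lists. */
    get_page(page);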
-
Committed by Russ Anderson
paddr needs to be shifted by PAGE_SHIFT to be valid input for pfn_valid(). Signed-off-by: NRuss Anderson <rja@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
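The fix, in sketch form:

    /* pfn_valid() takes a page frame number, not a physical address, so the
     * physical address must be shifted down by PAGE_SHIFT first. */
    if (!pfn_valid(paddr >> PAGE_SHIFT))
        return;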
-
Committed by Russ Anderson
The determination of whether an MCA is recoverable or not must be based on the bits set in the PSP (Processor State Parameter). The specific bits are shown in the Intel IA-64 Architecture Software Developer's Manual, Vol 2, Table 11-6 Software Recovery Bits in Processor State Parameter. Those bits should be consistent across the entire IA-64 family of processors. Signed-off-by: NRuss Anderson <rja@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
-
Committed by David Mosberger-Tang
At the moment, attempting to invoke a signal-handler on the normal stack is guaranteed to fail if the stack-pointer happens not to be 16-byte aligned. This is because the signal-trampoline will attempt to store fp-regs with stf.spill instructions, which will trap for misaligned addresses. This isn't terribly useful behavior. It's better to just always align the signal frame to the next lower 16-byte boundary. Signed-off-by: NDavid Mosberger-Tang <David.Mosberger@acm.org> Signed-off-by: NTony Luck <tony.luck@intel.com>
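A sketch of the alignment adjustment when the signal frame is set up (variable names are illustrative):

    /* Round the frame address down to the next 16-byte boundary so the
     * stf.spill instructions in the signal trampoline never fault on a
     * misaligned address. */
    unsigned long frame = (sp - sizeof(struct sigframe)) & ~15UL;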
-
- 08 November 2005, 1 commit
-
-
Committed by Keith Owens
notify_die() added for MCA_{MONARCH,SLAVE,RENDEZVOUS}_{ENTER,PROCESS,LEAVE} and INIT_{MONARCH,SLAVE}_{ENTER,PROCESS,LEAVE}. We need multiple notification points for these events because they can take many seconds to run which has nasty effects on the behaviour of the rest of the system. DIE_SS replaced by a generic DIE_FAULT which checks the vector number, to allow interception of faults other than SS. DIE_MACHINE_{HALT,RESTART} added to allow last minute close down processing, especially when the halt/restart routines are called from error handlers. DIE_OOPS added. The check for kprobe's break numbers has been moved from traps.c to kprobes.c, allowing DIE_BREAK to be used for any additional break numbers, i.e. it is no longer kprobes specific. Hooks for kernel debuggers and kernel dumpers added, ENTER and LEAVE. Both of these disable the system for long periods which impact on watchdogs and heartbeat systems in general. More patches to come that use these events to reset watchdogs and heartbeats. unregister_die_notifier() added and both routines exported. Requested by Dean Nelson. Lock removed from {un,}register_die_notifier. notifier_chain_register() already takes a lock. Also the generic notifier chain locking is being reworked to distinguish between callbacks that can block and those that cannot, the lock in {un,}register_die_notifier would interfere with that change. http://marc.theaimsgroup.com/?l=linux-kernel&m=113018709002036&w=2 Leading white space removed from arch/ia64/kernel/kprobes.c. Typo in mca.c in original version of this patch found & fixed by Dean Nelson. Signed-off-by: NKeith Owens <kaos@sgi.com> Acked-by: NDean Nelson <dcn@sgi.com> Acked-by: NAnil Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
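A sketch of the bracketing notifications around one long-running event (the die_val names are inferred from the description and may differ in detail):

    /* Tell interested parties (kernel debuggers, dumpers, watchdog resets)
     * that this CPU is entering, processing, and leaving an MCA, since the
     * whole sequence can take many seconds. */
    notify_die(DIE_MCA_MONARCH_ENTER, "MCA", regs, 0, 0, 0);
    /* ... rendezvous the other CPUs, walk the error records ... */
    notify_die(DIE_MCA_MONARCH_PROCESS, "MCA", regs, 0, 0, 0);
    /* ... recovery or panic decision ... */
    notify_die(DIE_MCA_MONARCH_LEAVE, "MCA", regs, 0, 0, 0);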
-
- 07 November 2005, 6 commits
-
-
Committed by Jesper Juhl
This is the arch/ part of the big kfree cleanup patch. Remove pointless checks for NULL prior to calling kfree() in arch/. Signed-off-by: NJesper Juhl <jesper.juhl@gmail.com> Acked-by: NGrant Grundler <grundler@parisc-linux.org> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
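The whole cleanup, in sketch form:

    /* kfree(NULL) is defined to be a no-op, so the guard is dead weight:
     *
     *     if (ptr)
     *         kfree(ptr);
     *
     * becomes simply: */
    kfree(ptr);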
-
Reorganize the preempt_disable/enable calls to eliminate the extra preempt depth. Changes based on Paul McKenney's review suggestions for the kprobes RCU changeset. Signed-off-by: NAnanth N Mavinakayanahalli <ananth@in.ibm.com> Signed-off-by: NAnil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
-
Changes to the arch kprobes infrastructure to take advantage of the locking changes introduced by usage of RCU for synchronization. All handlers are now run without any locks held, so they have to be re-entrant or provide their own synchronization. Signed-off-by: NAnanth N Mavinakayanahalli <ananth@in.ibm.com> Signed-off-by: NAnil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
-
IA64 changes to track kprobe execution on a per-cpu basis. We now track the kprobe state machine independently on each cpu using an arch specific kprobe control block. Signed-off-by: NAnanth N Mavinakayanahalli <ananth@in.ibm.com> Signed-off-by: NAnil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
-
The following set of patches are aimed at improving kprobes scalability. We currently serialize kprobe registration, unregistration and handler execution using a single spinlock - kprobe_lock. With these changes, kprobe handlers can run without any locks held. It also allows for simultaneous kprobe handler executions on different processors as we now track kprobe execution on a per processor basis. It is now necessary that the handlers be re-entrant since handlers can run concurrently on multiple processors. All changes have been tested on i386, ia64, ppc64 and x86_64, while sparc64 has been compile tested only. The patches can be viewed as 3 logical chunks: patch 1: Reorder preempt_(dis/en)able calls patches 2-7: Introduce per_cpu data areas to track kprobe execution patches 8-9: Use RCU to synchronize kprobe (un)registration and handler execution. Thanks to Maneesh Soni, James Keniston and Anil Keshavamurthy for their review and suggestions. Thanks again to Anil, Hien Nguyen and Kevin Stafford for testing the patches. This patch: Reorder preempt_disable/enable() calls in arch kprobes files in preparation to introduce locking changes. No functional changes introduced by this patch. Signed-off-by: NAnanth N Mavinakayahanalli <ananth@in.ibm.com> Signed-off-by: NAnil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
-
Committed by John W. Linville
The current ia64 implementation of dma_get_cache_alignment does not work for modules because it relies on a symbol which is not exported. Direct access to a global is a little ugly anyway, so this patch re-implements dma_get_cache_alignment in a manner similar to what is currently used for x86_64. Signed-off-by: NJohn W. Linville <linville@tuxdriver.com> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
-
- 01 November 2005, 1 commit
-
-
Committed by Peter Keilty
Corrects the very inefficient method of finding free context_ids in get_mmu_context(). Instead of walking the task_list of all processes, two bitmaps are used to efficiently store and look up state: in-use and needs-flushing. The entire rid address space is now used before calling wrap_mmu_context and doing a global tlb flush. Special thanks to Ken and Rohit for their review and modifications in using a bit flushmap. Signed-off-by: NPeter Keilty <peter.keilty@hp.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
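A minimal sketch of the bitmap idea (sizes and names are illustrative, not the actual mmu_context code):

    #include <linux/bitmap.h>
    #include <linux/bitops.h>

    #define NR_CTX    (1UL << 18)    /* illustrative size of the rid space */

    static DECLARE_BITMAP(ctx_inuse, NR_CTX);

    /* Find the next free context id at or after 'hint' instead of walking
     * every task in the system; returns 0 when the space is exhausted and
     * the caller must wrap and flush the TLB. */
    static unsigned long alloc_ctx(unsigned long hint)
    {
        unsigned long ctx = find_next_zero_bit(ctx_inuse, NR_CTX, hint);

        if (ctx >= NR_CTX)
            return 0;

        __set_bit(ctx, ctx_inuse);
        return ctx;
    }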
-
- 31 October 2005, 2 commits
-
-
Committed by Tim Schmielau
I recently picked up my older work to remove unnecessary #includes of sched.h, starting from a patch by Dave Jones to not include sched.h from module.h. This reduces the number of indirect includes of sched.h by ~300. Another ~400 pointless direct includes can be removed after this disentangling (patch to follow later). However, quite a few indirect includes need to be fixed up for this. In order to feed the patches through -mm with as little disturbance as possible, I've split out the fixes I accumulated up to now (complete for i386 and x86_64, more archs to follow later) and post them before the real patch. This way this large part of the patch is kept simple with only adding #includes, and all hunks are independent of each other. So if any hunk rejects or gets in the way of other patches, just drop it. My scripts will pick it up again in the next round. Signed-off-by: NTim Schmielau <tim@physik3.uni-rostock.de> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
-
Committed by Thomas Gleixner
Define jiffies_64 in kernel/timer.c rather than having 24 duplicated defines in each architecture. Signed-off-by: NThomas Gleixner <tglx@linutronix.de> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
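The shared definition now lives in kernel/timer.c; roughly:

    #include <linux/jiffies.h>
    #include <linux/cache.h>

    /* One definition for every architecture instead of two dozen copies. */
    u64 jiffies_64 __cacheline_aligned_in_smp = INITIAL_JIFFIES;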
-
- 30 October 2005, 1 commit
-
-
Committed by Hugh Dickins
The original vm_stat_account has fallen into disuse, with only one user, and only one user of vm_stat_unaccount. It's easier to keep track if we convert them all to __vm_stat_account, then free it from its __shackles. Signed-off-by: NHugh Dickins <hugh@veritas.com> Signed-off-by: NAndrew Morton <akpm@osdl.org> Signed-off-by: NLinus Torvalds <torvalds@osdl.org>
-
- 29 October 2005, 1 commit
-
-
Committed by Tony Luck
4ac0068f forgot to delete the declaration of this variable which is no longer used. Signed-off-by: NTony Luck <tony.luck@intel.com>
-
- 28 October 2005, 1 commit
-
-
Committed by Cliff Wickman
In arch/ia64/kernel/ptrace.c there is a test for a peek or poke of a register image (in register backing storage). The test can be unnecessarily long (and occurs while holding the tasklist_lock). Especially long on a large system with thousands of active tasks. The ptrace caller (presumably a debugger) specifies the pid of its target and an address to peek or poke. But the debugger could be attached to several tasks. The idea of find_thread_for_addr() is to find whether the target address is in the RBS for any of those tasks. Currently it searches the thread-list of the target pid. If that search does not find a match, and the shared mm-struct's user count indicates that there are other tasks sharing this address space (a rare occurrence), a search is made of all the tasks in the system. Another approach can drastically shorten this procedure. It depends upon the fact that in order to peek or poke from/to any task, the debugger must first attach to that task. And when it does, the attached task is made a child of the debugger (is chained to its children list). Therefore we can search just the debugger's children list. Signed-off-by: NCliff Wickman <cpw@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
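A sketch of the narrowed search (the RBS range check is a hypothetical helper; the list field names follow the 2.6-era task_struct):

    #include <linux/list.h>
    #include <linux/sched.h>

    /* A debugger must ptrace-attach before it can peek or poke, and attaching
     * reparents the target under the debugger, so only the caller's children
     * need to be scanned for a matching register backing store address. */
    static struct task_struct *thread_for_rbs_addr(unsigned long addr)
    {
        struct task_struct *child;

        list_for_each_entry(child, &current->children, sibling) {
            if (rbs_contains(child, addr))    /* hypothetical helper */
                return child;
        }
        return NULL;
    }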
-
- 26 October 2005, 4 commits
-
-
Committed by hawkes@sgi.com
In arch/ia64 change the explicit use of a for-loop using NR_CPUS into the general for_each_online_cpu() construct. This widens the scope of potential future optimizations of the general constructs, as well as takes advantage of the existing optimizations of first_cpu() and next_cpu(), which is advantageous when the true CPU count is much smaller than NR_CPUS. Signed-off-by: NJohn Hawkes <hawkes@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
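The shape of the conversion, as a sketch:

    /* before:
     *     for (cpu = 0; cpu < NR_CPUS; cpu++) {
     *         if (!cpu_online(cpu))
     *             continue;
     *         ...
     *     }
     * after: the generic iterator skips offline CPUs and benefits from the
     * optimized first_cpu()/next_cpu() when far fewer than NR_CPUS exist. */
    for_each_online_cpu(cpu) {
        /* ... per-cpu work ... */
    }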
-
Committed by hawkes@sgi.com
In arch/ia64 change the explicit use of for-loops and NR_CPUS into the general for_each_cpu() or for_each_online_cpu() constructs, as appropriate. This widens the scope of potential future optimizations of the general constructs, as well as takes advantage of the existing optimizations of first_cpu() and next_cpu(). Signed-off-by: NJohn Hawkes <hawkes@sgi.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
-
Committed by H. J. Lu
The new ia64 assembler uses slot 1 for the offset of a long (2-slot) instruction and the old assembler uses slot 2. The 2.6 kernel assumes slot 2 and won't boot when the new assembler is used: http://sources.redhat.com/bugzilla/show_bug.cgi?id=1433 This patch will work with either slot 1 or 2. Patch provided by H.J. Lu Signed-off-by: NTony Luck <tony.luck@intel.com>
-
Committed by Siddha, Suresh B
Fix the "siblings" field value in /proc/cpuinfo so that it now shows the number of siblings as seen by OS, instead of what is available from hardware perspective. Signed-off-by: NSuresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
-
- 07 October 2005, 1 commit
-
-
Committed by Bryan Sutula
I've noticed a kernel hang during a storm of CMC interrupts, which was tracked down to the continual execution of the interrupt handler. There's code in the CMC handler that's supposed to disable CMC interrupts and switch to polling mode when it sees a bunch of CMCs. Because disabling CMCs across all CPUs isn't safe in interrupt context, the disable is done with a schedule_work(). But with continual CMC interrupts, the schedule_work() never gets executed. The following patch immediately disables CMC interrupts for the current CPU. This then allows (at least) one CPU to ignore CMC interrupts, execute the schedule_work() code, and disable CMC interrupts on the rest of the CPUs. Acked-by: NKeith Owens <kaos@sgi.com> Signed-off-by: NBryan Sutula <Bryan.Sutula@hp.com> Signed-off-by: NTony Luck <tony.luck@intel.com>
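A sketch of the fix (the per-CPU disable helper and the work item name are assumptions based on the description):

    /* From the CMC interrupt handler itself: mask the CMC vector on this CPU
     * right away, so at least one CPU stops taking the interrupt storm and
     * the scheduled work that switches everyone to polling can finally run. */
    ia64_mca_cmc_vector_disable(NULL);    /* disable CMC interrupts locally */
    schedule_work(&cmc_disable_work);     /* then move all CPUs to polling mode */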
-