- 30 June 2006, 2 commits
-
-
Committed by Ingo Molnar
Add ->retrigger() irq op to consolidate hw_irq_resend() implementations. (Most architectures had it defined to NOP anyway.) NOTE: ia64 needs testing. i386 and x86_64 tested. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Adrian Bunk
Fix sgivwfb link errors:

drivers/built-in.o: In function `sgivwfb_set_par':
sgivwfb.c:(.text+0x88583): undefined reference to `sgivwfb_mem_phys'
sgivwfb.c:(.text+0x88596): undefined reference to `sgivwfb_mem_phys'
sgivwfb.c:(.text+0x885a8): undefined reference to `sgivwfb_mem_phys'
drivers/built-in.o: In function `sgivwfb_check_var':
sgivwfb.c:(.text+0x88ad0): undefined reference to `sgivwfb_mem_size'
drivers/built-in.o: In function `sgivwfb_mmap':
sgivwfb.c:(.text+0x88c75): undefined reference to `sgivwfb_mem_size'
sgivwfb.c:(.text+0x88c7f): undefined reference to `sgivwfb_mem_phys'
drivers/built-in.o: In function `sgivwfb_probe':
sgivwfb.c:(.init.text+0x4060): undefined reference to `sgivwfb_mem_size'
sgivwfb.c:(.init.text+0x4065): undefined reference to `sgivwfb_mem_phys'
sgivwfb.c:(.init.text+0x4076): undefined reference to `sgivwfb_mem_phys'
sgivwfb.c:(.init.text+0x409c): undefined reference to `sgivwfb_mem_size'
sgivwfb.c:(.init.text+0x410e): undefined reference to `sgivwfb_mem_size'
sgivwfb.c:(.init.text+0x4113): undefined reference to `sgivwfb_mem_phys'
sgivwfb.c:(.init.text+0x4162): undefined reference to `sgivwfb_mem_size'
sgivwfb.c:(.init.text+0x4168): undefined reference to `sgivwfb_mem_phys'
make: *** [.tmp_vmlinux1] Error 1

Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 28 June 2006, 5 commits
-
-
Committed by Siddha, Suresh B
sysfs entries 'sched_mc_power_savings' and 'sched_smt_power_savings' in /sys/devices/system/cpu/ control the MC/SMT power savings policy for the scheduler. Based on the values (1-enable, 0-disable) for these controls, sched groups cpu power will be determined for different domains. When the power savings policy is enabled and under light load conditions, the scheduler will minimize the number of physical packages/cpu cores carrying the load, thus conserving power (with a perf impact based on the workload characteristics... see the OLS 2005 CMP kernel scheduler paper for more details). Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Cc: Con Kolivas <kernel@kolivas.org> Cc: "Chen, Kenneth W" <kenneth.w.chen@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
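These sysfs controls are plain text files, so toggling the policy needs nothing beyond writing "0" or "1" to them. A minimal userspace sketch, assuming a kernel carrying this patch on multi-core hardware; the path comes from the commit text, everything else is illustrative:

```c
/* Hedged sketch: toggle the scheduler's multi-core power-savings policy
 * via the sysfs file named in the commit message. Assumes the file exists
 * (i.e. a kernel with this patch on multi-core hardware). */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const char *path = "/sys/devices/system/cpu/sched_mc_power_savings";
    const char *value = (argc > 1) ? argv[1] : "1";   /* 1 = enable, 0 = disable */
    FILE *f = fopen(path, "w");

    if (!f) {
        perror(path);
        return EXIT_FAILURE;
    }
    fprintf(f, "%s\n", value);
    fclose(f);
    printf("wrote %s to %s\n", value, path);
    return EXIT_SUCCESS;
}
```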
-
Committed by Ingo Molnar
Move the i386 VDSO down into a vma and thus randomize it. Besides the security implications, this feature also helps debuggers, which can COW a vma-backed VDSO just like a normal DSO and can thus do single-stepping and other debugging features. It's good for hypervisors (Xen, VMWare) too, which typically live in the same high-mapped address space as the VDSO, hence whenever the VDSO is used, they get lots of guest pagefaults and have to fix such guest accesses up - which slows things down instead of speeding things up (the primary purpose of the VDSO).

There's a new CONFIG_COMPAT_VDSO (default=y) option, which provides support for older glibcs that still rely on a prelinked high-mapped VDSO. Newer distributions (using glibc 2.3.3 or later) can turn this option off. Turning it off is also recommended for security reasons: attackers cannot use the predictable high-mapped VDSO page as a syscall trampoline anymore. There is a new vdso=[0|1] boot option as well, and a runtime /proc/sys/vm/vdso_enabled sysctl switch that allows the VDSO to be turned on/off.

(This version of the VDSO-randomization patch also has working ELF coredumping; the previous patch crashed in the coredumping code.)

This code is a combined work of the exec-shield VDSO randomization code and Gerd Hoffmann's hypervisor-centric VDSO patch. Rusty Russell started this patch and i completed it.

[akpm@osdl.org: cleanups] [akpm@osdl.org: compile fix] [akpm@osdl.org: compile fix 2] [akpm@osdl.org: compile fix 3] [akpm@osdl.org: revert MAXMEM change] Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Arjan van de Ven <arjan@infradead.org> Cc: Gerd Hoffmann <kraxel@suse.de> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Zachary Amsden <zach@vmware.com> Cc: Andi Kleen <ak@muc.de> Cc: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
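One way to observe the randomization from userspace is to print the vDSO base address the kernel advertises in the ELF auxiliary vector; with a vma-backed, randomized vDSO it changes between runs, while the old fixed high mapping stayed constant. A hedged sketch, not part of the patch; note that getauxval() is a later glibc convenience (2.16 and newer):

```c
/* Hedged illustration: print the vDSO load address the kernel advertises
 * via the ELF auxiliary vector. With a randomized, vma-backed vDSO this
 * value differs from run to run. getauxval() is a glibc >= 2.16 helper,
 * newer than this patch. */
#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
    unsigned long vdso = getauxval(AT_SYSINFO_EHDR);

    if (vdso)
        printf("vDSO mapped at 0x%lx\n", vdso);
    else
        printf("no vDSO advertised in the auxiliary vector\n");
    return 0;
}
```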
-
Committed by Chuck Ebbert
Using C code for current_thread_info() lets the compiler optimize it. With gcc 4.0.2, the kernel is smaller:

   text     data     bss      dec     hex  filename
3645212   555556  312024  4512792  44dc18  2.6.17-rc6-nb-post/vmlinux
3647276   555556  312024  4514856  44e428  2.6.17-rc6-nb/vmlinux
-------
  -2064

Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
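For context, current_thread_info() on i386 of this era finds thread_info by masking the stack pointer down to the base of the (aligned) kernel stack. The sketch below is a userspace illustration of that masking trick only, with an assumed 8 KiB THREAD_SIZE and a made-up struct layout; it is not the kernel's code.

```c
/* Userspace illustration of the i386 current_thread_info() trick:
 * thread_info sits at the bottom of an aligned stack area, so any
 * address within the stack ANDed with ~(THREAD_SIZE - 1) finds it.
 * THREAD_SIZE = 8 KiB is an assumption for the demo. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define THREAD_SIZE 8192UL

struct thread_info {          /* stand-in, not the kernel's layout */
    int cpu;
    unsigned long status;
};

static struct thread_info *info_from(void *addr_in_stack)
{
    return (struct thread_info *)((uintptr_t)addr_in_stack & ~(THREAD_SIZE - 1));
}

int main(void)
{
    void *stack;

    /* Simulate an aligned kernel stack with thread_info at its base. */
    if (posix_memalign(&stack, THREAD_SIZE, THREAD_SIZE))
        return 1;
    struct thread_info *ti = stack;
    ti->cpu = 3;

    /* Any pointer into the stack area recovers the same thread_info. */
    void *somewhere = (char *)stack + 5000;
    printf("recovered cpu = %d (expected 3)\n", info_from(somewhere)->cpu);
    free(stack);
    return 0;
}
```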
-
Committed by Rohit Seth
Move the phys_core_id and cpu_core_id to the cpuinfo_x86 structure. A similar patch for x86_64 was already accepted by Andi earlier this week. [akpm@osdl.org: fix warning] Signed-off-by: Rohit Seth <rohitseth@google.com> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Yasunori Goto
When a new node becomes enabled by hot-add, new sysfs files must be created for it. So, if a new node is enabled by add_memory(), register_one_node() is called to create them. In addition, i386's arch_register_node() and part of powerpc's register_nodes() are consolidated into register_one_node() as generic code. This was tested on Tiger4 (IPF) with node hot-plug emulation. Signed-off-by: Keiichiro Tokunaga <tokuanga.keiich@jp.fujitsu.com> Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 27 June 2006, 17 commits
-
-
Committed by Venkatesh Pallipadi
Intel now has support for Architectural Performance Monitoring Counters (refer to the IA-32 Intel Architecture Software Developer's Manual, http://www.intel.com/design/pentium4/manuals/253669.htm). This feature is present starting from Intel Core Duo and Intel Core Solo processors. What this means is that the performance monitoring counters and some performance monitoring events are now defined in an architectural way (using cpuid), and there is no need to check for family/model etc. for these architectural events. Below is the patch to use these performance counters in the nmi watchdog driver. The patch handles both i386 and x86-64 kernels. Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
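"Defined in an architectural way (using cpuid)" refers to CPUID leaf 0xA, which reports a perfmon version and the number of general-purpose counters, so the watchdog no longer needs family/model tables. A hedged userspace sketch of that query; the field layout is my reading of the cited SDM, not code from the patch:

```c
/* Hedged sketch: query architectural performance monitoring via CPUID
 * leaf 0xA, the mechanism this commit relies on instead of family/model
 * checks. Field layout follows my reading of the Intel SDM. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(0x0a, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 0xA not supported (no architectural perfmon)\n");
        return 0;
    }

    unsigned int version      = eax & 0xff;          /* EAX[7:0]   */
    unsigned int num_counters = (eax >> 8) & 0xff;   /* EAX[15:8]  */
    unsigned int width        = (eax >> 16) & 0xff;  /* EAX[23:16] */

    printf("arch perfmon version %u, %u general-purpose counters, %u bits wide\n",
           version, num_counters, width);
    return 0;
}
```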
-
Committed by Keith Owens
On some i386/x86_64 systems, sending an NMI IPI as a broadcast will reset the system. This seems to be a BIOS bug which affects machines where one or more cpus are not under OS control. It occurs on HT systems running a version of the OS that is compiled without HT support. It also occurs when a system is booted with max_cpus=n where 2 <= n < cpus known to the BIOS. The fix is to always send the NMI IPI as a mask instead of as a broadcast. Signed-off-by: Keith Owens <kaos@sgi.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Keith Owens
x86_64 and i386 behave inconsistently when sending an IPI on vector 2 (NMI_VECTOR). Make both behave the same, so IPI 2 is sent as NMI. The crash code was abusing send_IPI_allbutself() by passing a code instead of a vector; it only worked because crash knew about the internal code of send_IPI_allbutself(). Change crash to use NMI_VECTOR instead, and remove the comment about how crash was abusing the function. This patch is a pre-requisite for fixing the problem where sending an IPI as NMI would reboot some Dell Xeon systems. I cannot fix that problem while crash continues to abuse send_IPI_allbutself(). It also removes the inconsistency between i386 and x86_64 for NMI_VECTOR. That will simplify all the RAS code that needs to bring all the cpus to a clean stop, even when one or more cpus are spinning disabled. Signed-off-by: Keith Owens <kaos@sgi.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
When a process changes CPUs while doing the non-atomic cpu_local_* operations, it might operate on the local_t of a different CPU. Fix that by disabling preemption. Pointed out by Christoph Lameter. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
During some profiling I noticed that default_idle causes a lot of memory traffic. I think that is caused by the atomic operations to clear/set the polling flag in thread_info. There is actually no reason to make this atomic - only the idle thread does it to itself, other CPUs only read it. So I moved it into ti->status. Converted i386/x86-64/ia64 for now because that was the easiest way to fix ACPI, which also manipulates these flags in its idle function. Cc: Nick Piggin <npiggin@novell.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Len Brown <len.brown@intel.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Jan Beulich
If no unwinding is possible at all for a certain exception instance, fall back to the old-style call trace instead of not showing any trace at all. Also, allow setting the stack trace mode on the command line. Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Jan Beulich
To increase the usefulness of reliable stack unwinding, this adds CFI unwind annotations to many low-level i386 routines. Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Jan Beulich
These are the i386-specific pieces to enable reliable stack traces. This is going to be even more useful once CFI annotations get added to the assembly code, namely to entry.S. Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Don Zickus
Misc header cleanup for the nmi watchdog. Signed-off-by: Don Zickus <dzickus@redhat.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
- Factor out the duplicated access/cache code into a single file
  * Shared between i386/x86-64.
- Share flush code between AGP and IOMMU
  * Fix a bug: AGP didn't wait for end of flush before
- Drop 8 northbridges limit and allocate dynamically
- Add lock to serialize AGP and IOMMU GART flushes
- Add PCI ID for next AMD northbridge
- Random related cleanups

The old K8 NUMA discovery code is unchanged. New systems should all use SRAT for this. Cc: "Navin Boppuri" <navin.boppuri@newisys.com> Cc: Dave Jones <davej@redhat.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Gerd Hoffmann
Changes are largely identical to the i386 version:
* alternative #defines are moved to the new alternative.h file.
* one new elf section with pointers to the lock prefixes which can be nop'ed out for non-smp.
* two new elf sections similar to the "classic" alternatives to replace SMP code with simpler UP code.
* fixup headers to use alternative.h instead of defining their own LOCK / LOCK_PREFIX macros.

The patch reuses the i386 version of the alternatives code to avoid code duplication. The code in alternatives.c was shuffled around a bit to reduce the number of #ifdefs needed. It also got some tweaks needed for x86_64 (vsyscall page handling) and new features (the noreplacement option, which was x86_64-only up to now). Debug printk's are changed from compile-time to runtime. Loosely based on an early version from Bastian Blank <waldi@debian.org>. Signed-off-by: Gerd Hoffmann <kraxel@suse.de> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Andi Kleen
Intel systems report the cache level data from CPUID 4 in sysfs. Add a CPUID 4 emulation for AMD CPUs to report the same information for them. This allows programs to read this information in a uniform way. The AMD way to report this is less flexible, so some assumptions are hardcoded (e.g. no L3). Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
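The uniform interface in question is CPUID leaf 4 ("deterministic cache parameters"), which the sysfs code consumes and which this patch emulates on AMD. The hedged sketch below enumerates it directly from userspace to show what such an emulation has to provide; the bit layout is my reading of the Intel SDM, not code from the patch.

```c
/* Hedged sketch: walk CPUID leaf 4 ("deterministic cache parameters"),
 * the interface that sysfs reports and that this patch emulates on AMD.
 * Field layout follows my reading of the Intel SDM. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    if (__get_cpuid_max(0, NULL) < 4) {
        printf("CPUID leaf 4 not available on this CPU\n");
        return 0;
    }

    for (unsigned int i = 0; ; i++) {
        unsigned int eax, ebx, ecx, edx;

        __cpuid_count(4, i, eax, ebx, ecx, edx);
        unsigned int type = eax & 0x1f;           /* 0 = no more caches */
        if (type == 0)
            break;

        unsigned int level     = (eax >> 5) & 0x7;
        unsigned int line_size = (ebx & 0xfff) + 1;
        unsigned int parts     = ((ebx >> 12) & 0x3ff) + 1;
        unsigned int ways      = ((ebx >> 22) & 0x3ff) + 1;
        unsigned int sets      = ecx + 1;

        printf("L%u %s cache: %u KB (%u-way, %u-byte lines)\n",
               level, type == 1 ? "data" : type == 2 ? "instruction" : "unified",
               ways * parts * line_size * sets / 1024, ways, line_size);
    }
    return 0;
}
```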
-
Committed by Anil S Keshavamurthy
With this patch, kprobes now registers for page fault notifications only when there is an active probe registered. Once all the active probes are unregistered, there is no need to be notified of page faults, and kprobes unregisters itself from the page fault notifications. Hence we will have ZERO side effects when no probes are active. Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Anil S Keshavamurthy
Overloading the page fault notification onto notify_die() has performance issues (since the only components interested in page faults are kprobes and/or kdb), and hence this patch introduces a new notifier call chain exclusively for page fault notifications, thereby avoiding notifying unnecessary components in the do_page_fault() code path. Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
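For readers unfamiliar with notifier call chains, the module sketch below shows the stock atomic-notifier pattern that such a dedicated chain builds on. It uses only the generic notifier API; the chain and callback names are invented for illustration, and this is not the page-fault chain API added by the patch.

```c
/* Hedged sketch of the generic atomic notifier-chain pattern that a
 * dedicated page-fault chain builds on. Names are invented for the
 * example; this is not the API added by this patch. */
#include <linux/module.h>
#include <linux/notifier.h>

static ATOMIC_NOTIFIER_HEAD(example_chain);

static int example_event(struct notifier_block *nb, unsigned long action,
                         void *data)
{
    printk(KERN_INFO "example notifier called, action=%lu\n", action);
    return NOTIFY_DONE;      /* not interested: let other subscribers look */
}

static struct notifier_block example_nb = {
    .notifier_call = example_event,
};

static int __init example_init(void)
{
    atomic_notifier_chain_register(&example_chain, &example_nb);
    /* The subsystem owning the chain would call this on its event path: */
    atomic_notifier_call_chain(&example_chain, 0, NULL);
    return 0;
}

static void __exit example_exit(void)
{
    atomic_notifier_chain_unregister(&example_chain, &example_nb);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");
```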
-
Committed by john stultz
This converts the i386 arch to use the generic timeofday subsystem. It enables the GENERIC_TIME option, disables the timer_opts code and other arch-specific timekeeping code, and reworks the delay code. While this patch enables the generic timekeeping, please note that it does not provide any i386 clocksource; thus only the jiffies clocksource will be available. To get full replacements for the code being disabled here, the timeofday-clocks-i386 patch will be needed. Signed-off-by: John Stultz <johnstul@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by john stultz
As part of the i386 conversion to the generic timekeeping infrastructure, this introduces a new tsc.c file. The code in this file replaces the TSC initialization, management and access code currently in timer_tsc.c (which will be removed) that we want to preserve. The code also introduces the following functionality:
o tsc_khz: like cpu_khz but stores the TSC frequency on systems that do not change TSC frequency with CPU frequency
o check/mark_tsc_unstable: accessor/modifier flag for TSC timekeeping usability
o minor cleanups to calibration math

This patch also includes a one-line __cpuinitdata fix from Zwane Mwaikambo. Signed-off-by: John Stultz <johnstul@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
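A userspace illustration of the calibration math behind tsc_khz: count TSC cycles across a known interval and convert to kHz. This is a hedged sketch (clock_gettime plus a rough 100 ms sleep); the kernel itself calibrates against the PIT, not like this.

```c
/* Hedged userspace illustration of TSC-frequency calibration, the kind of
 * math tsc.c consolidates: count TSC cycles over a known interval. The
 * kernel calibrates against the PIT; this demo just uses clock_gettime
 * and a ~100 ms sleep, so the result is only an approximation. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>
#include <x86intrin.h>

static uint64_t ns_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
    uint64_t t0 = ns_now(), c0 = __rdtsc();
    usleep(100 * 1000);                        /* ~100 ms */
    uint64_t t1 = ns_now(), c1 = __rdtsc();

    /* cycles per millisecond == kHz, the unit tsc_khz uses */
    uint64_t tsc_khz = (c1 - c0) * 1000000ull / (t1 - t0);
    printf("estimated tsc_khz = %llu\n", (unsigned long long)tsc_khz);
    return 0;
}
```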
-
Committed by Andreas Mohr
Spelling fixes:
acquired (aquired)
contiguous (contigious)
successful (succesful, succesfull)
surprise (suprise)
whether (weather)
some other misspellings
Signed-off-by: Andreas Mohr <andi@lisas.de> Signed-off-by: Adrian Bunk <bunk@stusta.de>
-
- 26 June 2006, 4 commits
-
-
Committed by NeilBrown
As described in a previous patch and documented in mm/filemap.h, copy_from_user_inatomic* shouldn't zero out the tail of the buffer after an incomplete copy. This patch implements that change for i386. For the _nocache version, a new __copy_user_intel_nocache is defined, similar to copy_user_zeroio_intel_nocache, and this is ultimately used for the copy. For the regular version, __copy_from_user_ll_nozero is defined, which uses __copy_user and __copy_user_intel - the latter needs casts to reposition the __user annotations. If copy_from_user_atomic is given a constant length of 1, 2, or 4, then we do still zero the destination on failure. This didn't seem worth the effort of fixing, as the places where it is used really don't care. Signed-off-by: Neil Brown <neilb@suse.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: William Lee Irwin III <wli@holomorphy.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by NeilBrown
The problem is that when we write to a file, the copy from userspace to pagecache is first done with preemption disabled, so if the source address is not immediately available the copy fails *and* *zeros* *the* *destination*. This is a problem because a concurrent read (which admittedly is an odd thing to do) might see zeros rather than what was there before the write, or what was there after, or some mixture of the two (any of these being a reasonable thing to see). If the copy did fail, it will immediately be retried with preemption re-enabled, so any transient problem with accessing the source won't cause an error. The first copy does not need to zero any uncopied bytes, and doing so causes the problem. It uses copy_from_user_atomic rather than copy_from_user, so the simple expedient is to change copy_from_user_atomic to *not* zero out bytes on failure.

The first of these two patches prepares for the change by fixing two places which assume copy_from_user_atomic does zero the tail. The two usages are very similar pieces of code which copy from a userspace iovec into one or more page-cache pages. These are changed to remove the assumption. The second patch changes __copy_from_user_inatomic* to not zero the tail. Once these are accepted, I will look at similar patches for other architectures where this is important (ppc, mips and sparc being the ones I can find).

This patch: There is a problem with __copy_from_user_inatomic zeroing the tail of the buffer in the case of an error. As it is called in atomic context, the error may be transient, so it results in zeros being written where maybe they shouldn't be. In the usage in filemap, this opens a window for a well-timed read to see data (zeros) which is not consistent with any ordering of reads and writes. In most cases where __copy_from_user_inatomic is called, a failure results in __copy_from_user being called immediately. As long as the latter zeros the tail, the former doesn't need to. However, in the *copy_from_user_iovec implementations (in both filemap and ntfs/file), it is assumed that copy_from_user_inatomic will zero the tail. This patch removes that assumption, so that after this patch it will be safe for copy_from_user_inatomic to not zero the tail. This patch also adds some commentary to filemap.h and asm-i386/uaccess.h. After this patch, all architectures that might disable preempt when kmap_atomic is called need to have their __copy_from_user_inatomic* "fixed". This includes powerpc, i386, mips and sparc.

Signed-off-by: Neil Brown <neilb@suse.de> Cc: David Howells <dhowells@redhat.com> Cc: Anton Altaparmakov <aia21@cantab.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: William Lee Irwin III <wli@holomorphy.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Matt Mackall
The floppy driver is already calling add_disk_randomness as it should, so this was redundant. Signed-off-by: Matt Mackall <mpm@selenic.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Jeremy Fitzhardinge
Clean up and refactor i386 sub-architecture setup. This change moves all the code from the asm-i386/mach-*/setup_arch_pre/post.h headers into arch/i386/mach-*/setup.c. mach-*/setup_arch_pre.h is renamed to setup_arch.h and contains only things which should be in header files. It is purely code motion; there should be no functional changes at all. Several functions in arch/i386/kernel/setup.c needed to be made non-static so that they're visible to the code in mach-*/setup.c. asm-i386/setup.h is used to hold the prototypes for these functions. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Cc: Zachary Amsden <zach@vmware.com> Cc: Chris Wright <chrisw@sous-sol.org> Cc: Christian Limpach <Christian.Limpach@cl.cam.ac.uk> Cc: Martin Bligh <mbligh@google.com> Cc: James Bottomley <James.Bottomley@steeleye.com> Cc: Andrey Panin <pazke@donpac.ru> Cc: Dave Hansen <haveblue@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 23 June 2006, 9 commits
-
-
Committed by Kirill Smelkov
Compile fix: <asm-i386/alternative.h> needs <asm/types.h> for 'u8' -- just look at struct alt_instr. My module includes <asm/bitops.h> as the first header, and as of 2.6.17 this leads to compilation errors. Signed-off-by: Kirill Smelkov <kirr@mns.spb.ru> Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Michal Ludvig
New CPU flags for the next generation of the crypto engine as found in VIA C7 processors. Signed-off-by: Michal Ludvig <michal@logix.cz> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
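The new flags describe the VIA PadLock engines, which the CPU advertises through the Centaur extended CPUID leaf. A hedged userspace sketch of that probe; the leaf number and bit positions below are my recollection of VIA's PadLock documentation, not values taken from this patch.

```c
/* Hedged sketch: probe the VIA/Centaur extended CPUID leaf that exposes
 * the PadLock crypto engines behind the new CPU flags. Leaf number and
 * bit positions are assumptions from memory of VIA's PadLock docs. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Max Centaur extended leaf; anything below 0xC0000001 means no PadLock info. */
    __cpuid(0xC0000000, eax, ebx, ecx, edx);
    if ((eax & 0xffff0000) != 0xC0000000 || eax < 0xC0000001) {
        printf("no Centaur/VIA extended CPUID leaves on this CPU\n");
        return 0;
    }

    __cpuid(0xC0000001, eax, ebx, ecx, edx);
    printf("ACE (AES) present/enabled: %u/%u\n", (edx >> 6) & 1, (edx >> 7) & 1);
    printf("PHE (SHA) present/enabled: %u/%u\n", (edx >> 10) & 1, (edx >> 11) & 1);
    printf("PMM (Montgomery multiplier) present/enabled: %u/%u\n",
           (edx >> 12) & 1, (edx >> 13) & 1);
    return 0;
}
```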
-
Committed by Roman Zippel
An immediate operand can't be the destination of the cmpl instruction, so exclude it. Signed-off-by: Roman Zippel <zippel@linux-m68k.org> Cc: Mattia Dongili <malattia@linux.it> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Alexey Dobriyan
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Alexey Dobriyan
Only drm, framebuffer, mtrr parts + misc files here and there. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Adrian Bunk
Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Hiro Yoshioka
Use the x86 cache-bypassing copy instructions for copy_from_user(). Some performance data follow.

Total of GLOBAL_POWER_EVENTS (CPU cycle samples):
2.6.12.4.orig 1921587
2.6.12.4.nt   1599424
1599424/1921587 = 83.23% (16.77% reduction)

BSQ_CACHE_REFERENCE (L3 cache miss):
2.6.12.4.orig 57427
2.6.12.4.nt   20858
20858/57427 = 36.32% (63.7% reduction)

L3 cache miss reduction of __copy_from_user_ll:
samples  %
37408    65.1412  vmlinux  __copy_from_user_ll
23        0.1103  vmlinux  __copy_user_zeroing_intel_nocache
23/37408 = 0.061% (99.94% reduction)

Top 5 of 2.6.12.4.nt
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) with a unit mask of 0x01 (mandatory), count 100000
samples  %       app name  symbol name
128392   8.0274  vmlinux   __copy_user_zeroing_intel_nocache
64206    4.0143  vmlinux   journal_add_journal_head
59746    3.7355  vmlinux   do_get_write_access
47674    2.9807  vmlinux   journal_put_journal_head
46021    2.8774  vmlinux   journal_dirty_metadata
pattern9-0-cpu4-0-09011728/summary.out

Counted BSQ_CACHE_REFERENCE events (cache references seen by the bus unit) with a unit mask of 0x3f (multiple flags), count 3000
samples  %       app name  symbol name
69755    4.2861  vmlinux   __copy_user_zeroing_intel_nocache
55685    3.4215  vmlinux   journal_add_journal_head
52371    3.2179  vmlinux   __find_get_block
45504    2.7960  vmlinux   journal_put_journal_head
36005    2.2123  vmlinux   journal_stop
pattern9-0-cpu4-0-09011744/summary.out

Counted BSQ_CACHE_REFERENCE events (cache references seen by the bus unit) with a unit mask of 0x200 (read 3rd level cache miss), count 3000
samples  %       app name  symbol name
1147     5.4994  vmlinux   journal_add_journal_head
881      4.2240  vmlinux   journal_dirty_data
872      4.1809  vmlinux   blk_rq_map_sg
734      3.5192  vmlinux   journal_commit_transaction
617      2.9582  vmlinux   radix_tree_delete
pattern9-0-cpu4-0-09011731/summary.out

iozone results:
original 2.6.12.4 CPU time = 207.768 sec
cache aware       CPU time = 184.783 sec
(three runs each)
184.783/207.768 = 88.94% (11.06% reduction)

original:
pattern9-0-cpu4-0-08191720/iozone.out: CPU Utilization: Wall time 45.997  CPU time 64.527  CPU utilization 140.28 %
pattern9-0-cpu4-0-08191741/iozone.out: CPU Utilization: Wall time 46.878  CPU time 71.933  CPU utilization 153.45 %
pattern9-0-cpu4-0-08191743/iozone.out: CPU Utilization: Wall time 45.152  CPU time 71.308  CPU utilization 157.93 %

cache aware:
pattern9-0-cpu4-0-09011728/iozone.out: CPU Utilization: Wall time 44.842  CPU time 62.465  CPU utilization 139.30 %
pattern9-0-cpu4-0-09011731/iozone.out: CPU Utilization: Wall time 44.718  CPU time 59.273  CPU utilization 132.55 %
pattern9-0-cpu4-0-09011744/iozone.out: CPU Utilization: Wall time 44.367  CPU time 63.045  CPU utilization 142.10 %

Signed-off-by: Hiro Yoshioka <hyoshiok@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
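The cache-bypassing idea can be illustrated in userspace with SSE2 non-temporal stores. This is a hedged sketch in the spirit of __copy_user_zeroing_intel_nocache, not its actual unrolled, fault-handling assembly; compile with SSE2 enabled.

```c
/* Hedged userspace illustration of a cache-bypassing copy using SSE2
 * non-temporal stores, the same idea as the kernel's *_nocache copy
 * routines (which are hand-written assembly with fault handling). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <emmintrin.h>

static void copy_nocache(void *dst, const void *src, size_t len)
{
    char *d = dst;
    const char *s = src;

    /* Stream 16-byte chunks straight to memory, bypassing the cache.
     * Assumes dst is 16-byte aligned (true for the buffers below). */
    while (len >= 16) {
        __m128i v = _mm_loadu_si128((const __m128i *)s);
        _mm_stream_si128((__m128i *)d, v);
        s += 16; d += 16; len -= 16;
    }
    while (len--)                  /* byte-wise tail */
        *d++ = *s++;
    _mm_sfence();                  /* make the streaming stores visible */
}

int main(void)
{
    size_t len = 1 << 20;
    void *src = NULL, *dst = NULL;

    if (posix_memalign(&src, 16, len) || posix_memalign(&dst, 16, len))
        return 1;
    memset(src, 0xab, len);
    copy_nocache(dst, src, len);
    printf("copies match: %s\n", memcmp(src, dst, len) == 0 ? "yes" : "no");
    free(src); free(dst);
    return 0;
}
```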
-
Committed by Christoph Lameter
sys_move_pages() support for 32-bit (i386 plus the x86_64 compat layer). Add support for move_pages() on i386 and also add the compat functions necessary to run 32-bit binaries on x86_64. Add compat_sys_move_pages to the x86_64 32-bit binary layer. Note that it is not up to date, so I added the missing pieces. Not sure if this is done the right way. [akpm@osdl.org: compile fix] Signed-off-by: Christoph Lameter <clameter@sgi.com> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
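A hedged userspace sketch of the syscall this wires up for i386: with nodes set to NULL, move_pages() only reports which NUMA node each page currently lives on, which makes a harmless smoke test. The raw syscall is used so no libnuma is needed; behaviour depends on kernel NUMA support.

```c
/* Hedged sketch: query which NUMA node a page currently lives on via the
 * move_pages() syscall. Passing nodes = NULL asks only for status (no
 * migration). pid 0 means the calling process. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);
    void *page = NULL;

    if (posix_memalign(&page, page_size, page_size))
        return 1;
    memset(page, 0, page_size);            /* fault the page in first */

    void *pages[1] = { page };
    int status[1] = { -1 };

    long ret = syscall(SYS_move_pages, 0, 1UL, pages, NULL, status, 0);
    if (ret < 0)
        perror("move_pages");
    else
        printf("page %p is on node %d\n", page, status[0]);

    free(page);
    return 0;
}
```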
-
Committed by Bjorn Helgaas
VGA_MAP_MEM translates to ioremap() on some architectures. It makes sense to do this to vga_vram_base, because we're going to access memory between vga_vram_base and vga_vram_end. But it doesn't really make sense to map starting at vga_vram_end, because we aren't going to access memory starting there. On ia64, which always has to be different, ioremapping vga_vram_end gives you something completely incompatible with ioremapped vga_vram_start, so vga_vram_size ends up being nonsense. As a bonus, we often know the size up front, so we can use ioremap() correctly rather than giving it a zero size. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: "Antonino A. Daplas" <adaplas@pol.net> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 22 June 2006, 2 commits
-
-
Committed by bibo,mao
On the IA64 platform, the MSI driver does not use the irq_vector variable, and on the x86 platform LAST_DEVICE_VECTOR should be one before FIRST_SYSTEM_VECTOR; this patch modifies that. Signed-off-by: bibo, mao <bibo.mao@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Committed by Mark Maule
Abstract portions of the MSI core for platforms that do not use standard APIC interrupt controllers. This is implemented through a new arch-specific msi setup routine and a set of msi ops which can be set on a per-platform basis. Signed-off-by: Mark Maule <maule@sgi.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 09 May 2006, 1 commit
-
-
Committed by Kimball Murray
The patch addresses a problem with the ACPI SCI interrupt entry, which gets re-used, and the IRQ is assigned to another unrelated device. The patch corrects the code such that the SCI IRQ is skipped and the duplicate entry is avoided. A second issue came up with a VIA chipset: the problem was caused by the original patch assigning IRQs starting at 16 and up. The VIA chipset uses a 4-bit IRQ register for internal interrupt routing, and therefore cannot handle the higher IRQ numbers assigned to its devices. The patch corrects this problem by allowing PCI IRQs below 16. Cc: len.brown@intel.com Signed-off-by: Natalie Protasevich <Natalie.Protasevich@unisys.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-