- 13 Feb 2007, 40 commits
-
Committed by Andreas Herrmann
mtrr: fix size_or_mask and size_and_mask. This fixes two bugs in the /proc/mtrr interface: (1) if the physical address size crosses the 44-bit boundary, size_or_mask is evaluated incorrectly; (2) size_and_mask limits the width of an MTRR's physical base address to less than 44 bits. TBD: a later patch had one more change, but I think that was bogus. TBD: need to double check. Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com> Signed-off-by: Andi Kleen <ak@suse.de>
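The masks in question encode which physical-address bits an MTRR may use. As a rough illustration, here is a minimal user-space sketch of deriving such masks from the CPU's reported physical address width, assuming 4 KiB pages; the width value and the constants are illustrative and the real mtrr code differs in detail:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
        unsigned int phys_bits = 48;   /* e.g. CPUID 0x80000008, bits 7:0 */

        /* Keep the computation in 64 bits: widths above 44 bits overflow
         * a narrower intermediate, which is the kind of bug fixed here. */
        uint64_t size_or_mask  = ~((1ULL << (phys_bits - PAGE_SHIFT)) - 1);
        uint64_t size_and_mask = ~size_or_mask;

        printf("size_or_mask  = %#llx\n", (unsigned long long)size_or_mask);
        printf("size_and_mask = %#llx\n", (unsigned long long)size_and_mask);
        return 0;
    }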
-
Committed by Alexey Dobriyan
Byte-for-byte identical /proc/apm here. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Josef 'Jeff' Sipek
The old code was legal standard C, but apparently not sparse-C. Signed-off-by: Josef 'Jeff' Sipek <jsipek@cs.sunysb.edu> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Alexey Dobriyan
It will execute cpuid only on the CPU we need. Signed-off-by: Alexey Dobriyan <adobriyan@openvz.org> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Alexey Dobriyan
It will execute rdmsr and wrmsr only on the CPU we need. Signed-off-by: Alexey Dobriyan <adobriyan@openvz.org> Signed-off-by: Andi Kleen <ak@suse.de>
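Roughly, such helpers bounce the MSR access to the target CPU with a cross-CPU function call. A hedged sketch of the pattern (struct layout and error handling are illustrative, and smp_call_function_single()'s signature has varied across kernel versions):

    struct msr_info {
        u32 msr_no;
        u32 l, h;
    };

    static void __rdmsr_on_cpu(void *info)
    {
        struct msr_info *rv = info;

        rdmsr(rv->msr_no, rv->l, rv->h);        /* runs on the target CPU */
    }

    int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
    {
        struct msr_info rv = { .msr_no = msr_no };
        int err;

        err = smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 1);
        *l = rv.l;
        *h = rv.h;
        return err;
    }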
-
Committed by Nicolas Kaiser
Fix some typos in Kconfig. Signed-off-by: Nicolas Kaiser <nikai@nikai.net> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
Remove an outdated comment and use cpu_relax() in a busy loop. Signed-off-by: Andi Kleen <ak@suse.de>
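For context, a generic example (illustrative only, not taken from this patch) of the kind of busy-wait that should use cpu_relax(), which compiles to PAUSE on x86 and eases pressure on the pipeline and a sibling hyperthread:

    static void wait_until_ready(void __iomem *status_reg, u32 ready_bit)
    {
        while (!(readl(status_reg) & ready_bit))
            cpu_relax();        /* polite spin: PAUSE on x86 */
    }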
-
Committed by Jan Beulich
The function is dead. Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
When a machine check event is detected (including an AMD RevF threshold overflow event), allow a "trigger" program to be run. This lets user space react to such events sooner. The trigger is configured using a new trigger entry in the machinecheck sysfs interface; it is currently shared between all CPUs. I also fixed the AMD threshold handler to run the machine check polling code immediately, to actually log any events that might have caused the threshold interrupt. Also added some documentation for the mce sysfs interface. Signed-off-by: Andi Kleen <ak@suse.de>
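One plausible shape for launching such a trigger from kernel context is the usermode-helper API; the sketch below is illustrative only (the buffer name, the sysfs plumbing, and the exact wait flag are assumptions, not the actual mce code):

    static char trigger_path[128];      /* filled in by the sysfs "trigger" store hook */

    static void run_mce_trigger(void)
    {
        char *argv[] = { trigger_path, NULL };
        char *envp[] = { "HOME=/", "PATH=/sbin:/usr/sbin:/bin:/usr/bin", NULL };

        if (trigger_path[0])
            call_usermodehelper(trigger_path, argv, envp, UMH_NO_WAIT);
    }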
-
Committed by Jan Beulich
While debugging an unrelated problem in Xen, I noticed odd reads from non-existent MSRs. Having now found time to look at why these happen, I came up with the patch below, which (1) prevents accessing MCi_MISCj with j > 0 when the block pointer in MCi_MISC0 is zero, (2) accesses only contiguous MCi_MISCj until a non-implemented one is found, (3) doesn't touch unimplemented blocks in mce_threshold_interrupt() at all, and (4) gives names to two bits previously derived from MASK_VALID_HI (it took me some time to understand the code without this). The first three items, besides being apparently closer to the spec, should help cut down on the time mce_threshold_interrupt() takes. Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Jan Beulich
Remove all parameters from this function that aren't really variable. Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Randy Dunlap
List the x86_64 quilt tree in MAINTAINERS. Signed-off-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Randy Dunlap
Fix typos. Lots of whitespace changes for readability and consistency. Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Venkatesh Pallipadi
Handle these 32-bit perfmon counter MSR writes cleanly in oprofile. Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Venkatesh Pallipadi
Change the i386 NMI handler to handle 32-bit perfmon counter MSR writes cleanly. Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Venkatesh Pallipadi
P6 CPUs, and Core/Core 2 CPUs with the 'architectural perf mon' feature, only support writes of the low 32 bits of the Performance Monitoring Counters. Bits 32..39 are sign-extended from bit 31, and bits 40..63 are reserved and should be zero. This patch changes the x86_64 NMI handler to handle this case cleanly. Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Andi Kleen <ak@suse.de>
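In practice this means the watchdog reload must be written as a negative 32-bit value, letting the hardware sign-extend it; a minimal sketch of that rule (the function name is illustrative, not the actual nmi.c code):

    static void write_perfctr_32bit(unsigned int msr, unsigned int count)
    {
        /* Only the low 32 bits are writable; bit 31 is sign-extended into
         * bits 32..39, so -count still overflows after "count" events. */
        wrmsr(msr, (u32)(-(int)count), 0);
    }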
-
This is a tiny cleanup to increase readability. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Unlike i386, x86_64 already passes arguments in registers. The use of the regparm attribute makes no difference in the produced code, and the use of fastcall just bloats the code. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Rohit Seth
This patch resolves the issue of running with numa=fake=X on the kernel command line on x86_64 machines that have a big IO hole. While calculating the size of each node, we now look at the total hole size in that range. Previously there were nodes that contained only IO holes, causing kernel boot problems. We now use NODE_MIN_SIZE (64MB) as the minimum amount of memory that any node must have. We reduce the number of allocated nodes if the number of nodes specified on the kernel command line would result in any node getting less memory than NODE_MIN_SIZE. The extra memory is handed out in NODE_MIN_SIZE granules and distributed as uniformly as possible among as many nodes (called big nodes) as possible. [akpm@osdl.org: build fix] Signed-off-by: David Rientjes <reintjes@google.com> Signed-off-by: Paul Menage <menage@google.com> Signed-off-by: Rohit Seth <rohitseth@google.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
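The arithmetic can be illustrated with a small stand-alone sketch: divide the usable memory (holes excluded) into 64MB granules, cap the node count, and give the first "big" nodes one extra granule each. The numbers and the exact rounding are illustrative, not the real numa=fake code:

    #include <stdio.h>

    #define NODE_MIN_SIZE (64ULL << 20)         /* 64 MB */

    int main(void)
    {
        unsigned long long usable = 3500ULL << 20;  /* memory minus IO holes */
        int requested_nodes = 8;

        unsigned long long granules = usable / NODE_MIN_SIZE;
        int nodes = requested_nodes;

        if (granules < (unsigned long long)nodes)
            nodes = (int)granules;              /* fewer nodes than requested */

        unsigned long long per_node = granules / nodes; /* whole granules per node */
        int big_nodes = (int)(granules % nodes);        /* these get one extra granule */

        for (int i = 0; i < nodes; i++) {
            unsigned long long sz = (per_node + (i < big_nodes)) * NODE_MIN_SIZE;
            printf("node %d: %llu MB\n", i, sz >> 20);
        }
        return 0;
    }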
-
Committed by Rene Herman
Use adding __init to romsignature() (it's only called from probe_roms(), which is itself __init) as an excuse to submit a pedantic cleanup. Signed-off-by: Rene Herman <rene.herman@gmail.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Ingo Molnar
Clean up sched_clock() on i686: it will use the TSC if available and fall back to jiffies only if the user asked for it to be disabled via notsc or the CPU calibration code didn't figure out the right cpu_khz. This generally makes the scheduler timestamps more fine-grained, on all hardware. (The current scheduler is pretty resistant to asynchronous sched_clock() values on different CPUs; it will allow at most up to a jiffy of jitter.) Also simplify sched_clock()'s check for TSC availability: propagate the desire and ability to use the TSC into the tsc_disable flag; previously this flag only indicated whether the notsc option was passed. This makes the rare low-res sched_clock() codepath a single branch off a read-mostly flag. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
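A hedged sketch of the resulting fast/slow split (helper names such as rdtscll() and cycles_2_ns() follow the i386 TSC code of that era; treat the details as illustrative rather than the exact patch):

    unsigned long long sched_clock(void)
    {
        unsigned long long cycles;

        /* Rare slow path: notsc was passed or calibration failed. */
        if (unlikely(tsc_disable))
            return (unsigned long long)jiffies * (1000000000 / HZ);

        /* Common path: one read-mostly flag test, then the TSC. */
        rdtscll(cycles);
        return cycles_2_ns(cycles);
    }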
-
Committed by Stephane Eranian
Add a notifier mechanism to the low-level idle loop. You can register a callback function which gets invoked on entry to and exit from the low-level idle loop. The low-level idle loop is defined as the polling loop, the low-power call, or the mwait instruction. Interrupts processed by the idle thread are not considered part of the low-level loop. The notifier can be used to measure precisely how much time is spent in useless execution (or low-power mode). The perfmon subsystem uses it to turn monitoring on/off. Signed-off-by: Stephane Eranian <eranian@hpl.hp.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
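A sketch of how a client might hook in, assuming an x86-64-style idle_notifier_register() API with IDLE_START/IDLE_END events (those names are assumptions here, not quoted from the patch):

    static int my_idle_notify(struct notifier_block *nb, unsigned long action,
                              void *data)
    {
        if (action == IDLE_START) {
            /* CPU is entering the low-level idle loop */
        } else if (action == IDLE_END) {
            /* CPU resumed useful work */
        }
        return NOTIFY_OK;
    }

    static struct notifier_block my_idle_nb = {
        .notifier_call = my_idle_notify,
    };

    static int __init my_module_init(void)
    {
        idle_notifier_register(&my_idle_nb);
        return 0;
    }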
-
Committed by Adrian Bunk
Every file should include the headers containing the prototypes for its global functions. Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
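The point of the rule is that the compiler can then check the definition against its own prototype; a trivial illustrative example (file and function names are made up):

    /* foo.h */
    int foo_compute(int x);

    /* foo.c */
    #include "foo.h"    /* any prototype/definition mismatch now becomes a compile error */

    int foo_compute(int x)
    {
        return x * 2;
    }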
-
Committed by Vivek Goyal
init() is a non-__init function in the .text section, but it calls many functions which are in the .init.text section. Hence MODPOST generates lots of cross-reference warnings on i386 if compiled with CONFIG_RELOCATABLE=y: WARNING: vmlinux - Section mismatch: reference to .init.text:smp_prepare_cpus from .text between 'init' (at offset 0xc0101049) and 'rest_init'; WARNING: vmlinux - Section mismatch: reference to .init.text:migration_init from .text between 'init' (at offset 0xc010104e) and 'rest_init'; WARNING: vmlinux - Section mismatch: reference to .init.text:spawn_ksoftirqd from .text between 'init' (at offset 0xc0101053) and 'rest_init'. This patch breaks init() down into two parts: one part which can go in the .init.text section and be freed, and another part which has to remain non-__init (init_post()). Now init() calls init_post(), and init_post() does not call any functions present in .init sections, which gets rid of the warnings. Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
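A simplified sketch of the split (not the exact init/main.c code): the __init half is discarded after boot and may freely call other __init functions; the resident half must not.

    static int noinline init_post(void)
    {
        /* must not reference anything in .init.text / .init.data */
        run_init_process("/sbin/init");
        panic("No init found.");
    }

    static int __init init(void *unused)
    {
        /* boot-time setup; calls into .init.text are fine from here */
        do_basic_setup();
        return init_post();
    }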
-
Committed by Vivek Goyal
The startup_32 entry was in the .text section, but it was also accessing some init data, which prompts MODPOST to generate compilation warnings: WARNING: vmlinux - Section mismatch: reference to .init.data:boot_params from .text between '_text' (at offset 0xc0100029) and 'startup_32_smp'; WARNING: vmlinux - Section mismatch: reference to .init.data:boot_params from .text between '_text' (at offset 0xc0100037) and 'startup_32_smp'; WARNING: vmlinux - Section mismatch: reference to .init.data:init_pg_tables_end from .text between '_text' (at offset 0xc0100099) and 'startup_32_smp'. We can't move startup_32 to .init.text, as this entry point has to be at the start of bzImage. Hence startup_32 is moved to a new section, .text.head, and MODPOST is instructed not to generate warnings when init data is accessed from the .text.head section. This code has been audited. The SMP boot-up code (startup_32_smp) can go into .init.text if CPU hotplug is not supported; otherwise it generates more warnings: WARNING: vmlinux - Section mismatch: reference to .init.data:new_cpu_data from .text between 'checkCPUtype' (at offset 0xc0100126) and 'is486'; WARNING: vmlinux - Section mismatch: reference to .init.data:new_cpu_data from .text between 'checkCPUtype' (at offset 0xc0100130) and 'is486'. Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Zachary Amsden
Deliberate register clobbering around performance-critical inline code is great for testing but bad to leave on by default. Many people ship with DEBUG_KERNEL turned on, so stop making DEBUG_PARAVIRT default to on. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Zachary Amsden
Because the timer code moves around, and we might eventually move our init to a late_time_init hook, save and restore IRQs around this code, because it is definitely not interrupt-safe. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Zachary Amsden
Kprobes bugfix for paravirt compatibility: the RPL on the CS when inserting breakpoints must match the running kernel. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> CC: Eric Biederman <ebiederm@xmission.com>
-
Committed by Zachary Amsden
profile_pc() was broken when using paravirtualization, because the assumption that the kernel was running at CPL 0 was violated, causing bad logic to read a random value off the stack. The only way to be in kernel lock functions is to be in kernel code, so validate that assumption explicitly by checking the CS value. We don't want to be fooled by BIOS / APM segments and try to read those stacks, so only match KERNEL_CS. I moved some stuff in segment.h to make it prettier. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Zachary Amsden
VMI timer code. It works by taking over the local APIC clock when the APIC is configured, which requires a couple of hooks into the APIC code. The backend timer code could be commonized into the timer infrastructure, but there are some pieces missing (stolen time, in particular), and the exact semantics of when to do accounting for NO_IDLE need to be shared between different hypervisors as well. So for now, the VMI timer is a separate module. [Adrian Bunk: cleanups] Subject: VMI timer patches Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Zachary Amsden
Fairly straightforward implementation of the VMI backend for paravirt-ops. [Adrian Bunk: some cleanups] Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Zachary Amsden
Add the VMI SMP boot hook. We emulate a regular boot sequence and use the same APIC IPI initiation; we just poke magic values to load into the CPU state when the startup IPI is received, rather than having to jump through a real-mode trampoline. This is all that was needed to get SMP to work. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Zachary Amsden
I found a clever way to make the extra IOPL switching invisible to non-paravirt compiles: kernel_rpl is statically defined to be zero there, and only kernels running with a non-zero RPL have a problem restoring IOPL, as popf does not restore the IOPL flags unless run at CPL 0. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Zachary Amsden
The VMI ROM has a mode where hypercalls can be queued and batched. This turns out to be a significant win during context switches, but it must be done at a specific point before side effects to CPU state are visible to subsequent instructions. This is similar to the MMU batching hooks already provided. The same hooks could be used by the Xen backend to implement a context-switch multicall. To explain a bit more about lazy modes in the paravirt patches: basically, the idea is that only one of lazy CPU or MMU mode can be active at any given time. Lazy MMU mode is similar to this lazy CPU mode and allows batching of multiple PTE updates (say, inside a remap loop), but to avoid keeping some kind of state machine about when to flush CPU or MMU updates, we just allow one or the other to be active. Although there is no real reason a more comprehensive scheme could not be implemented, there is also no demonstrated need for this extra complexity. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Zachary Amsden
The VMI backend uses explicit page-type notification to track shadow page tables. The allocation of page table roots is especially tricky. We need to clone the root for non-PAE mode while it is protected under the pgd lock to correctly copy the shadow. We don't need to allocate pgds in PAE mode (PDPs in Intel terminology), as they only have 4 entries and are cached entirely by the processor, which makes shadowing them rather simple. For base page table level allocation, pmd_populate provides the exact hook point we need. Also, we need to allocate pages when splitting a large page, and we must release pages before returning the page to any free pool. Despite being required with these slightly odd semantics for VMI, Xen also uses these hooks to determine the exact moment when page tables are created or released. AK: All nops for other architectures. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Adrian Bunk
Every file should #include the headers containing the prototypes for its global functions. Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Catalin Marinas
It makes more sense to end the stack trace with ULONG_MAX only if nr_entries < max_entries. Otherwise, we lose one entry in long stack traces and cannot know whether the trace was complete or not. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
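A sketch of the termination rule, using the stacktrace API's struct stack_trace fields (the function name is illustrative and the frame walk is elided):

    void save_stack_trace_example(struct stack_trace *trace)
    {
        /* ... walk the frames, appending to trace->entries ... */

        /* Append the sentinel only when there is room left; a completely
         * full buffer therefore signals possible truncation. */
        if (trace->nr_entries < trace->max_entries)
            trace->entries[trace->nr_entries++] = ULONG_MAX;
    }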
-
Committed by Karsten Weiss
- Add SWIOTLB config help text. - Mention Documentation/x86_64/boot-options.txt in Documentation/kernel-parameters.txt. - Remove the duplication of the iommu kernel parameter documentation. - Better explain some of the iommu kernel parameter options. - "32MB<<order" instead of "32MB^order". - Mention the default "order" value. - List the four existing PCI-DMA mapping implementations of arch x86_64. - Group the iommu= option keywords by PCI-DMA mapping implementation. - Distinguish iommu= option keywords from number arguments. - Explain the meaning of DAC and SAC. Signed-off-by: Karsten Weiss <knweiss@science-computing.de> Signed-off-by: Andi Kleen <ak@suse.de> Acked-by: Muli Ben-Yehuda <muli@il.ibm.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Eric Dumazet
ARCH_HAVE_XTIME_LOCK is used by the x86_64 arch. This arch needs to place a read-only copy of xtime_lock into the vsyscall page. This read-only copy is named __xtime_lock, and xtime_lock is defined in arch/x86_64/kernel/vmlinux.lds.S as an alias. So the declaration of xtime_lock in kernel/timer.c was guarded by the ARCH_HAVE_XTIME_LOCK define, which is defined to true on x86_64. We can get the same result with __attribute__((weak)) in the declaration; the linker should do the job. Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
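The mechanism at work, as a minimal sketch (the seqlock initializer shown is the old-style SEQLOCK_UNLOCKED of that era; treat the exact spelling as an assumption): a weak definition in generic code loses to any strong definition, or to a linker-script alias like the x86_64 vsyscall copy.

    /* kernel/timer.c (generic, weak): */
    __attribute__((weak)) seqlock_t xtime_lock = SEQLOCK_UNLOCKED;

    /* An architecture can override this with a strong definition, or, as
     * x86_64 does, alias the symbol from its linker script so the real
     * storage lives in the vsyscall page. */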
-
Committed by OGAWA Hirofumi
This is just a cleanup. It moves the e820 check into pci_mmcfg_reject_broken(). Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp> Signed-off-by: Andi Kleen <ak@suse.de>
-