- 07 December 2006, 40 commits
-
Committed by Chuck Ebbert
Add a sysctl for kstack_depth_to_print. This lets users change the amount of raw stack data printed in dump_stack() without having to reboot.
Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Signed-off-by: Andi Kleen <ak@suse.de>
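For illustration, a minimal user-space sketch of poking the new knob; it assumes the entry appears as kernel.kstack_depth_to_print under /proc/sys (the usual convention for an integer entry in the kernel sysctl table), and writing it requires root:

    #include <stdio.h>

    int main(void)
    {
        const char *path = "/proc/sys/kernel/kstack_depth_to_print";
        FILE *f = fopen(path, "r");
        int depth = 0;

        if (!f) { perror(path); return 1; }
        fscanf(f, "%d", &depth);
        fclose(f);
        printf("current kstack_depth_to_print: %d\n", depth);

        f = fopen(path, "w");              /* needs root */
        if (!f) { perror(path); return 1; }
        fprintf(f, "%d\n", 32);            /* print 32 words of stack from now on */
        fclose(f);
        return 0;
    }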
-
Committed by Dave Jones
We do the exact same printk about a dozen lines above, with no intermediate printks.
Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Artiom Myaskouvskey
When using the memmap kernel parameter with EFI boot, we should also add the memory regions of the runtime services to the memory map, so that they can be mapped later.
AK: merged and cleaned up the patch
Signed-off-by: Artiom Myaskouvskey <artiom.myaskouvskey@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Karsten Wiese
Read/write APIC_LVTPC and APIC_LVTTHMR only if get_maxlvt() returns certain values. This is done like everywhere else in i386/kernel/apic.c, so I guess it's correct. Suspend/resume to disk works fine, and this eliminates an smp_error_interrupt() here on a K8.
AK: ported to x86-64 too
Signed-off-by: Karsten Wiese <fzu@wemgehoertderstaat.de>
Signed-off-by: Andi Kleen <ak@suse.de>
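A sketch of the guarded-access pattern this describes, using the helpers named above (get_maxlvt(), apic_read()); the helper below and its name are purely illustrative, and the thresholds follow the usual apic.c rule that LVTPC only exists with at least 4 LVT entries and LVTTHMR with at least 5:

    /* illustrative only: snapshot the two LVT registers when present */
    static void lvt_snapshot(unsigned int *lvtpc, unsigned int *lvtthmr)
    {
        int maxlvt = get_maxlvt();

        if (maxlvt >= 4)                        /* LVTPC register present */
            *lvtpc = apic_read(APIC_LVTPC);
        if (maxlvt >= 5)                        /* thermal LVT present */
            *lvtthmr = apic_read(APIC_LVTTHMR);
    }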
-
Committed by Stephane Eranian
Here is a small patch for i386 which adds a cpufeature flag and detection code for Intel's Branch Trace Store (BTS) feature. This feature can be found on Intel P4 and Core 2 processors, among others. It can also be used by perfmon.
Changelog:
- add CPU_FEATURE_BTS
- add Branch Trace Store detection
Signed-off-by: Stephane Eranian <eranian@hpl.hp.com>
Signed-off-by: Andi Kleen <ak@suse.de>
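A sketch of the detection idea (bit positions per Intel's documentation, not copied from the patch): the DS feature bit says a Debug Store is implemented, and IA32_MISC_ENABLE bit 11, when set, reports that BTS is unavailable on this part. Inside the Intel CPU setup path, with c being the cpuinfo_x86 under initialization, this would look roughly like:

    if (cpu_has(c, X86_FEATURE_DS)) {
        unsigned int lo, hi;

        rdmsr(MSR_IA32_MISC_ENABLE, lo, hi);
        if (!(lo & (1 << 11)))                  /* bit 11: BTS unavailable */
            set_bit(X86_FEATURE_BTS, c->x86_capability);
    }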
-
Committed by Adrian Bunk
irq_vector[] can now become static.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Eric W. Biederman <ebiederm@xmission.com>
Acked-by: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Adrian Bunk
The Coverity checker noted that bad things might happen if find_isa_irq_apic() returned -1.
[akpm@osdl.org: add debugging checks]
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Acked-by: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Artiom Myaskouvskey
The function efi_get_time is called not only during the kernel init phase but also during suspend (from get_cmos_time). When it is called from get_cmos_time, the corresponding runtime service should be called in virtual rather than physical mode.
Signed-off-by: Artiom Myaskouvskey <artiom.myaskouvskey@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: "Narayanan, Chandramouli" <chandramouli.narayanan@intel.com>
Cc: "Jiossy, Rami" <rami.jiossy@intel.com>
Cc: "Satt, Shai" <shai.satt@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Matt Domsch <Matt_Domsch@dell.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Siddha, Suresh B
Move the irqbalance quirks for E7320/E7520/E7525 (Errata 23 in http://download.intel.com/design/chipsets/specupdt/30304203.pdf) to early quirks. And add a PCI quirk for these platforms to check (which happens very late during boot) whether the APIC routing is indeed set to default flat mode. This fixes the breakage (on x86-64) of this quirk due to CPU hotplug, which selects physical mode instead of logical flat (as needed for this errata workaround).
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Cc: "Li, Shaohua" <shaohua.li@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Siddha, Suresh B
Change the 'no_control' field in the cpu struct to a more positive and better term, 'hotpluggable', and change (and clean up) the logic accordingly.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Cc: "Li, Shaohua" <shaohua.li@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Siddha, Suresh B
Add an 'enable_cpu_hotplug' flag; when it is cleared, the hotplug control file ("online") will not be added under /sys/devices/system/cpu/cpuX/. The next patch, doing PCI quirks, will use this.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Cc: "Li, Shaohua" <shaohua.li@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Vivek Goyal
o Convert more absolute symbols to section-relative, to keep the theme in the vmlinux.lds.S file and to avoid problems if the kernel is relocated.
o Also add a message so that in the future people can be aware of it and avoid introducing absolute symbols.
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Adrian Bunk
Make the needlessly global alloc_gdt() static. (Against pda-percpu-init.)
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@muc.de>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Jan Beulich
Until not so long ago, there were system log messages pointing to inconsistent MTRR setup of the video frame buffer, caused by the way vesafb and X worked. While vesafb was fixed meanwhile, I believe fixing it there only hides a shortcoming in the MTRR code itself, in that the code is not symmetric with respect to the ordering of attempts to set up two (or more) regions where one contains the other. In its current shape, it permits only setting up sub-regions of pre-existing ones. The patch below makes this symmetric. While working on that I noticed a few more inconsistencies in that code, namely:
- use of 'unsigned int' for sizes in many, but not all, places (the patch converts this to use 'unsigned long' everywhere, which specifically might be necessary for x86-64 once a processor supporting more than 44 physical address bits becomes available)
- the code to correct inconsistent settings during secondary processor startup tried (if necessary) to correct, among other things, the value in IA32_MTRR_DEF_TYPE; however, the newly computed value would never get used (i.e. stored in the respective MSR)
- the generic range validation code checked that the end of the to-be-added range is above 1MB; the value checked should have been the start of the range
- when contained regions are detected, previously this was allowed only when the old region was uncacheable; this can be symmetric (i.e. the new region can also be uncacheable), and even further, per Intel's documentation write-through and write-back for either region is also compatible with the respective opposite in the other
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andi Kleen <ak@suse.de>
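The containment-compatibility rule spelled out in the last bullet can be stated compactly; the sketch below is self-contained (the memory-type encodings follow Intel's MTRR definition) and is an illustration of the rule, not the patch itself:

    /* MTRR memory type encodings (Intel SDM): UC=0, WC=1, WT=4, WP=5, WB=6 */
    enum { MTRR_UC = 0, MTRR_WC = 1, MTRR_WT = 4, MTRR_WP = 5, MTRR_WB = 6 };

    /* Two contained/overlapping regions are acceptable if either is
     * uncacheable, or if one is write-through and the other write-back. */
    static int mtrr_types_compatible(unsigned char a, unsigned char b)
    {
        return a == MTRR_UC || b == MTRR_UC ||
               (a == MTRR_WT && b == MTRR_WB) ||
               (a == MTRR_WB && b == MTRR_WT);
    }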
-
Committed by Jan Beulich
Avoid inclusion of code that's dead for x86-64.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Jan Beulich
Just like on x86-64, don't touch foreign CPUs' memory if the watchdog isn't enabled at all.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Zachary Amsden
Add a way to disable the timer IRQ routing check via a boot option. The VMI timer code uses this to avoid triggering the pester Mingo code, which probes for some very unusual and broken motherboard routings. It fires 100% of the time when using a paravirtual delay mechanism instead of a real-time delay, since there is no elapsed real time and the 4 timer IRQs have not yet been delivered. In addition, it is entirely possible, though improbable, that this bug could surface on real hardware which picks a particularly bad time to enter SMM mode, causing a long latency during one of the timer IRQs. While here, make check_timer __init.
Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andi Kleen <ak@suse.de>
[chrisw: use no_timer_check to bring it in line with x86_64, as per Andi's request]
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
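For reference, a sketch of how such a boot option is typically wired up on i386/x86-64; the flag name follows the no_timer_check convention mentioned in the bracketed note, and the details are illustrative rather than lifted from the patch:

    int no_timer_check;                  /* set from the kernel command line */

    static int __init notimercheck(char *str)
    {
        no_timer_check = 1;
        return 1;
    }
    __setup("no_timer_check", notimercheck);

    /* later, at the top of check_timer():  if (no_timer_check) return;  */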
-
Committed by Rusty Russell
BIOS ROM areas may not be mapped into the guest address space, so be careful when touching those addresses to make sure they appear to be mapped.
[akpm@osdl.org: fix unused var warning]
AK: Changed __get_user to probe_kernel_address
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
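A sketch of the careful-read idiom the AK note refers to: probe_kernel_address() returns non-zero when the location cannot be read, so an unmapped ROM window is treated the same as a missing ROM signature (0xaa55). The helper below is illustrative, not lifted from the patch:

    #include <linux/uaccess.h>

    static int __init romsignature(const unsigned char *rom)
    {
        const unsigned short *ptr = (const unsigned short *)rom;
        unsigned short sig;

        return probe_kernel_address(ptr, sig) == 0 && sig == 0xaa55;
    }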
-
Committed by Rusty Russell
Add the three bare TLB accessor functions to paravirt-ops. Most amusingly, flush_tlb is redefined on SMP, so I can't call the paravirt op flush_tlb. Instead, I chose to indicate the actual flush type: kernel (global) vs. user (non-global). Global in this sense means using the global bit in the page table entry, which makes TLB entries persistent across CR3 reloads, not global as in the SMP sense of invoking remote shootdowns, so the term is confusingly overloaded.
AK: folded in fix from Zach for PAE compilation
Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
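Sketched as ops-table members, the three accessors described above would look roughly like this (member names follow the description; treat them as illustrative):

    struct paravirt_ops {
        /* ... */
        void (*flush_tlb_user)(void);                  /* non-global entries only */
        void (*flush_tlb_kernel)(void);                /* also drops global entries */
        void (*flush_tlb_single)(unsigned long addr);  /* one page */
        /* ... */
    };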
-
Committed by Rusty Russell
Add APIC accessors to paravirt-ops. Unfortunately, we need two write functions, as some older broken hardware requires workarounds for Pentium APIC errata; this is the purpose of apic_write_atomic.
AK: replaced __inline with inline
Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
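The reason for the second write function is easiest to see from the native side: a plain MOV to the APIC window can be insufficient on the affected Pentium parts, so the atomic variant uses a locked exchange instead. A minimal native-side sketch (APIC_BASE as mapped via the fixmap; illustrative, not the patch):

    static inline void native_apic_write(unsigned long reg, unsigned long v)
    {
        *((volatile unsigned long *)(APIC_BASE + reg)) = v;
    }

    static inline void native_apic_write_atomic(unsigned long reg, unsigned long v)
    {
        xchg((volatile unsigned long *)(APIC_BASE + reg), v);  /* locked access */
    }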
-
Committed by Rusty Russell
Two legacy power management modes are much easier to just explicitly disable when running in paravirtualized mode: neither APM nor PnP is still relevant. The status of ACPI is still debatable, and noacpi is still a common enough boot parameter that it is not necessary to explicitly disable ACPI.
Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Andi Kleen
They don't work together, and this way even glibc still works.
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Rusty Russell
Allow selected bug checks to be skipped by paravirt kernels. The two most important are the F00F workaround (which is either done by the hypervisor, or not required) and the 'hlt' instruction check, which can break under some hypervisors.
Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Rusty Russell
1) Each hypervisor writes a probe function to detect whether we are running under that hypervisor. paravirt_probe() registers this function.
2) If vmlinux is booted with ring != 0, we call all the probe functions (with registers, except %esp, intact) in link order: the winner will not return.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
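An illustrative shape of the registration described in 1) and 2); everything below (the helper names and the exact probe signature) is an assumption for the sketch, not taken from the patch:

    static void __init my_hv_probe(void)
    {
        if (!running_on_my_hv())   /* hypothetical detection check */
            return;                /* not us: the next probe in link order runs */
        my_hv_start();             /* hypothetical: takes over and never returns */
    }
    paravirt_probe(my_hv_probe);   /* registers the probe, as described above */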
-
Committed by Rusty Russell
Both lhype and Xen want to call the core of the x86 cpu detect code before calling start_kernel. (Extracted from a larger patch.)
AK: folded in start_kernel header patch
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Rusty Russell
It turns out that the most-called ops, by several orders of magnitude, are the interrupt manipulation ops. These are obvious candidates for patching, so mark them up and create infrastructure for it. The method used is that the ops structure has a patch function, which is called for each place which needs to be patched: this returns a number of instructions (the rest are NOP-padded). Usually we can spare a register (%eax) for the binary-patched code to use, but in a couple of critical places in entry.S we can't: we make the clobbers explicit at the call site, and manually clobber the allowed registers in debug mode as an extra check.
And: Don't abuse CONFIG_DEBUG_KERNEL, add CONFIG_DEBUG_PARAVIRT.
And: AK: Fix warnings in x86-64 alternative.c build
And: AK: Fix compilation with defconfig
And: From: Andrew Morton <akpm@osdl.org>: Some binutils versions still like to emit references to __stop_parainstructions and __start_parainstructions.
And: AK: Fix warnings about unused variables when PARAVIRT is disabled.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
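A toy illustration of the padding contract described above (plain C, not kernel code): the backend's patch routine writes up to len bytes of replacement code, reports how many it used, and the caller fills the remainder with one-byte NOPs:

    #include <string.h>

    static unsigned patch_site(unsigned char *site, unsigned len,
                               const unsigned char *repl, unsigned repl_len)
    {
        unsigned used = repl_len < len ? repl_len : len;

        memcpy(site, repl, used);               /* emit the replacement code */
        memset(site + used, 0x90, len - used);  /* 0x90 = NOP padding */
        return used;
    }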
-
Committed by Rusty Russell
Create a paravirt.h header for all the critical operations which need to be replaced with hypervisor calls, and include that instead of defining native operations, when CONFIG_PARAVIRT. This patch does the dumbest possible replacement of paravirtualized instructions: calls through a "paravirt_ops" structure. Currently these are function implementations of native hardware: hypervisors will override the ops structure with their own variants. All the pv-ops functions are declared "fastcall" so that a specific register-based ABI is used, to make inlining assembler easier.
And: From: Andy Whitcroft <apw@shadowen.org>: The paravirt ops introduce a 'weak' attribute onto memory_setup(). Code ordering leads to the following warning on x86:
    arch/i386/kernel/setup.c:651: warning: weak declaration of `memory_setup' after first use results in unspecified behavior
Move memory_setup() to avoid this.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
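A minimal sketch of the scheme, with a much-reduced and purely illustrative member set: an ops table whose default entries are the native implementations, each using the fastcall register ABI as described. This is a sketch under those assumptions, not the header itself:

    #include <linux/linkage.h>      /* for fastcall */

    struct paravirt_ops {
        void (fastcall *irq_disable)(void);
        void (fastcall *irq_enable)(void);
        unsigned long (fastcall *read_cr2)(void);
    };

    static fastcall void native_irq_disable(void) { asm volatile("cli"); }
    static fastcall void native_irq_enable(void)  { asm volatile("sti"); }
    static fastcall unsigned long native_read_cr2(void)
    {
        unsigned long val;
        asm volatile("movl %%cr2,%0" : "=r" (val));
        return val;
    }

    /* hypervisors override these defaults with their own variants */
    struct paravirt_ops paravirt_ops = {
        .irq_disable = native_irq_disable,
        .irq_enable  = native_irq_enable,
        .read_cr2    = native_read_cr2,
    };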
-
Committed by Nicolas Kaiser
Fix double #includes in arch/i386.
Signed-off-by: Nicolas Kaiser <nikai@nikai.net>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Chuck Ebbert
IOPL is implicitly saved and restored on task switch, so the explicit check is no longer needed.
Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
An interrupt could happen between setting the IO-APIC entry and setting its interrupt data. Pointed out by Linus.
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by bibo,mao
This patch moves the e820 memory map print and memmap boot parameter parsing functions from setup.c to e820.c, and also adds limit_regions and print_memory_map declarations to the header file.
Signed-off-by: bibo,mao <bibo.mao@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>

 arch/i386/kernel/e820.c  | 152 ++++++++++++++++++++++++++++++++++++++++++++++
 arch/i386/kernel/setup.c | 153 -----------------------------------------------
 include/asm-i386/e820.h  |   2
 3 files changed, 155 insertions(+), 152 deletions(-)
-
Committed by bibo,mao
This patch moves the e820/EFI memmap table walking function from setup.c to e820.c, and also adds an extern declaration to the header file.
Signed-off-by: bibo,mao <bibo.mao@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>

 arch/i386/kernel/e820.c  | 115 +++++++++++++++++++++++++++++++++++++++++++++
 arch/i386/kernel/setup.c | 118 -----------------------------------------------
 include/asm-i386/e820.h  |   2
 3 files changed, 117 insertions(+), 118 deletions(-)
-
Committed by bibo,mao
Move more code from setup.c into e820.c.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by bibo,mao
This patch moves the BIOS e820 map sanitize and copy functions from setup.c to e820.c.
Signed-off-by: bibo,mao <bibo.mao@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>

 arch/i386/kernel/e820.c  | 252 +++++++++++++++++++++++++++++++++++++++++++++++
 arch/i386/kernel/setup.c | 240 --------------------------------------------
 2 files changed, 252 insertions(+), 240 deletions(-)
-
Committed by bibo,mao
This patch creates a new file named e820.c to handle the standard I/O and memory resources, moving the request_standard_resources function from setup.c to e820.c. It also modifies the Makefile to compile e820.c.
Signed-off-by: bibo,mao <bibo.mao@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>

 arch/i386/kernel/Makefile |   2
 arch/i386/kernel/e820.c   | 289 ++++++++++++++++++++++++++++++++++++++++++++++
 arch/i386/kernel/setup.c  | 276 -------------------------------------------
 3 files changed, 293 insertions(+), 274 deletions(-)
-
Committed by Andi Kleen
Makes the intention of the code clearer to read and avoids a potential deadlock on mmap_sem. Also change the types of the arguments to not include __user, because they're really not user addresses.
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
Also report it in /proc/cpuinfo, similar to x86-64. Needed for a follow-on patch.
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Joe Korty
The entry.S code at work_notifysig is surely wrong. It drops into unrelated code if the branch to work_notifysig_v86 is taken and CONFIG_VM86=n:
    [PATCH] Make vm86 support optional
    tree 9b5daef5280800a0006343a17f63072658d91a1d
    pushed to git Jan 8, 2006, and first appears in 2.6.16
The 'fix' here is to also compile out the vm86 test & branch when CONFIG_VM86=n.
Signed-off-by: Joe Korty <joe.korty@ccur.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Vivek Goyal
o Now CONFIG_PHYSICAL_START is being replaced with CONFIG_PHYSICAL_ALIGN. Hardcoding the kernel physical start value creates a problem in the relocatable kernel context due to boot loader limitations. For example, if somebody compiles a relocatable kernel to be run from address 4MB, that kernel will actually run from location 1MB, as grub loads the kernel at physical address 1MB. The kernel thinks "I am a relocatable kernel and I should run from the address I have been loaded at." So somebody wanting to run the kernel from a 4MB-aligned location (for improved performance reasons) can't do that.
o Hence, Eric proposed that CONFIG_PHYSICAL_ALIGN will probably make more sense in the relocatable kernel context. At run time the kernel will move itself to a physical address which meets the user-specified alignment restrictions.
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
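The run-time relocation described in the second point boils down to rounding the load address up to the configured alignment; a small self-contained illustration (the 4MB value is just an example):

    #include <stdio.h>

    #define PHYSICAL_ALIGN 0x400000UL   /* example: 4 MB alignment */

    static unsigned long align_up(unsigned long addr, unsigned long align)
    {
        return (addr + align - 1) & ~(align - 1);
    }

    int main(void)
    {
        /* grub loads at 1 MB; the kernel moves itself up to the next 4 MB boundary */
        printf("loaded at 0x%lx, runs from 0x%lx\n",
               0x100000UL, align_up(0x100000UL, PHYSICAL_ALIGN));
        return 0;
    }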
-
Committed by Eric W. Biederman
Defining __PHYSICAL_START and __KERNEL_START in asm-i386/page.h works, but it triggers a full kernel rebuild for the silliest of reasons. This modifies the users to directly use CONFIG_PHYSICAL_START and linux/config.h, which prevents the full-rebuild problem and makes the code much more maintainer friendly and, hopefully, user friendly.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-