- 13 Jan 2011, 1 commit

By Thomas Renninger

Having four variables for the same thing: idle_halt, idle_nomwait, force_mwait and boot_option_idle_override is rather confusing and unnecessarily complex. If an idle= boot param is passed, only set up one variable: boot_option_idle_override.

This introduces the following functional changes/fixes:

- The intel_idle driver does not register if any idle=xy boot param is passed.
- processor_idle.c will also not register a cpuidle driver and get active if idle=halt is passed. Before, a cpuidle driver with one (C1, halt) state got registered. Now the default_idle function will be used, which finally uses the same idle call to enter sleep state (safe_halt()), but without registering a whole cpuidle driver.

That means the idle= param will always keep cpuidle drivers from registering, with one exception (same behavior as before): idle=nomwait may still register the acpi_idle cpuidle driver, but C1 will not use mwait, but hlt. This can be a workaround for IO based deeper sleep states where C1 mwait causes problems.

Signed-off-by: Thomas Renninger <trenn@suse.de>
cc: x86@kernel.org
Signed-off-by: Len Brown <len.brown@intel.com>
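
A minimal sketch of the single-override approach the commit describes; the enum values and the early_param hook are assumptions drawn from the commit text, not the literal patch:

```c
/* Sketch: one override enum replaces the idle_halt/idle_nomwait/
 * force_mwait flags. Names are illustrative. */
enum idle_boot_override { IDLE_NO_OVERRIDE = 0, IDLE_HALT, IDLE_NOMWAIT,
			  IDLE_POLL, IDLE_FORCE_MWAIT };

enum idle_boot_override boot_option_idle_override = IDLE_NO_OVERRIDE;

static int __init idle_setup(char *str)
{
	if (!strcmp(str, "halt"))
		boot_option_idle_override = IDLE_HALT;
	else if (!strcmp(str, "nomwait"))
		boot_option_idle_override = IDLE_NOMWAIT;
	else if (!strcmp(str, "poll"))
		boot_option_idle_override = IDLE_POLL;
	else
		return -1;
	return 0;
}
early_param("idle", idle_setup);
```

Drivers such as intel_idle can then bail out of registration with a single `if (boot_option_idle_override != IDLE_NO_OVERRIDE) return -ENODEV;` style check.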

- 30 Dec 2010, 1 commit

By Tejun Heo

Replace all uses of current_cpu_data with this_cpu operations on the per cpu structure cpu_info. The scalar accesses are replaced with the matching this_cpu ops, which results in smaller and more efficient code.

In the long run, it might be a good idea to remove the cpu_data() macro too and use the per_cpu macro directly.

tj: updated description

Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
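
An illustrative before/after pair (the field choice is hypothetical; the mechanism is what the commit describes):

```c
unsigned int model;

/* Before: current_cpu_data expands to cpu_data(smp_processor_id()),
 * which computes the address of this CPU's cpu_info and dereferences it. */
model = current_cpu_data.x86_model;

/* After: a this_cpu op compiles down to a single segment-prefixed
 * memory access, with no explicit smp_processor_id() lookup. */
model = this_cpu_read(cpu_info.x86_model);
```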

- 02 Nov 2010, 1 commit

By Uwe Kleine-König

Fix comment typos of the following words across the tree: "gadget", "through", "command", "maintain", "maintain", "controller", "address", "between", "initiali[zs]e", "instead", "function", "select", "already", "equal", "access", "management", "hierarchy", "registration", "interest", "relative", "memory", "offset", "already".

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>

- 02 Oct 2010, 1 commit

By Andreas Herrmann

Get compute unit information from CPUID Fn8000_001E_EBX. (See the AMD CPUID Specification - publication #25481, revision 2.34, September 2010.)

Note that each core on a compute unit still has a core_id of its own.

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
LKML-Reference: <20100930123857.GE20545@loge.amd.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
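
A sketch of the decode; the bit layout (EBX[7:0] = ComputeUnitId, EBX[9:8] = CoresPerComputeUnit - 1) is taken from the cited spec revision, so treat the exact field widths as an assumption:

```c
unsigned int eax, ebx, ecx, edx;
unsigned int cu_id, cores_per_cu;

cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
cu_id        = ebx & 0xff;             /* which compute unit this core sits on */
cores_per_cu = ((ebx >> 8) & 0x3) + 1; /* cores sharing that unit */
```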

- 18 Sep 2010, 1 commit

By H. Peter Anvin

The code in native_play_dead() has a number of problems:

1. We should use MWAIT when available, to put ourselves into a deeper sleep state.
2. We use the existence of CLFLUSH to determine if WBINVD is safe, but that is totally bogus -- WBINVD is 486+, whereas CLFLUSH is a much later addition.
3. We should do WBINVD inside the loop, just in case of something like setting an A bit on page tables. Pointed out by Arjan van de Ven.

This code is based in part on a previous patch by Venki Pallipadi, but unlike that patch this one keeps all the detection code local instead of pre-caching a bunch of information. We're shutting down the CPU; there is absolutely no hurry.

This patch moves all the code to C and deletes the global wbinvd_halt(), which is broken anyway.

Originally-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Venkatesh Pallipadi <venki@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090522232230.162239000@intel.com>
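
A condensed sketch of the resulting offline loop; the helpers named here are real kernel APIs, but the arrangement is paraphrased from the commit text (the real code also probes CPUID leaf 5 for the deepest MWAIT hint):

```c
static void play_dead_sketch(void)
{
	unsigned int dummy = 0;
	unsigned int hint = 0;	/* deepest C-state hint, found via CPUID leaf 5 */

	if (boot_cpu_has(X86_FEATURE_MWAIT)) {
		for (;;) {
			wbinvd();	/* inside the loop: catches e.g. A-bit stores */
			__monitor(&dummy, 0, 0);
			mb();
			__mwait(hint, 0);
		}
	}
	for (;;) {
		wbinvd();	/* WBINVD is 486+, no CLFLUSH check needed */
		native_halt();
	}
}
```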

- 10 Sep 2010, 1 commit

By Brian Gerst

%cr4 is 64-bit in 64-bit mode (although the upper 32 bits are currently reserved). Use unsigned long for the temporary variable to get the right size.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Acked-by: Pekka Enberg <penberg@kernel.org>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1283563039-3466-2-git-send-email-brgerst@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

- 02 Aug 2010, 1 commit

By Michal Schmidt

Accommodate the original C1E-aware idle routine to the different times during boot when the BIOS enables C1E. While at it, remove the synthetic CPUID flag in favor of a single global setting which denotes C1E status on the system.

[ hpa: changed c1e_enabled to be a bool; clarified cpu bit 3:21 comment ]

Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
LKML-Reference: <20100727165335.GA11630@aftab>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>

- 29 Jul 2010, 3 commits

By Hans Rosenfeld

Use the AMD errata checking framework instead of open-coding the test.

Signed-off-by: Hans Rosenfeld <hans.rosenfeld@amd.com>
LKML-Reference: <1280336972-865982-3-git-send-email-hans.rosenfeld@amd.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

By Hans Rosenfeld

Remove check_c1e_idle() and use the new AMD errata checking framework instead.

Signed-off-by: Hans Rosenfeld <hans.rosenfeld@amd.com>
LKML-Reference: <1280336972-865982-2-git-send-email-hans.rosenfeld@amd.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

By Hans Rosenfeld

Errata are defined using the AMD_LEGACY_ERRATUM() or AMD_OSVW_ERRATUM() macros. The latter is intended for newer errata that have an OSVW id assigned, which it takes as its first argument. Both take a variable number of family-specific model-stepping ranges created by AMD_MODEL_RANGE().

Iff an erratum has an OSVW id, OSVW is available on the CPU, and the OSVW id is known to the hardware, it is used to determine whether an erratum is present. Otherwise, the model-stepping ranges are matched against the current CPU to find out whether the erratum applies.

For certain special errata, the code using this framework might have to conduct further checks to make sure an erratum is really (not) present.

Signed-off-by: Hans Rosenfeld <hans.rosenfeld@amd.com>
LKML-Reference: <1280336972-865982-1-git-send-email-hans.rosenfeld@amd.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
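
As an example of the framework in use, the definition of AMD erratum 400 (the C1E erratum, OSVW id 1, with family 0xf/0x10 model-stepping ranges) looks roughly like this; the exact range encodings are quoted from memory and should be checked against arch/x86/kernel/cpu/amd.c:

```c
/* Erratum 400 (C1E): OSVW id 1, plus legacy ranges for OSVW-less CPUs. */
const int amd_erratum_400[] =
	AMD_OSVW_ERRATUM(1, AMD_MODEL_RANGE(0xf, 0x41, 0x2, 0xff, 0xf),
			    AMD_MODEL_RANGE(0x10, 0x2, 0x1, 0xff, 0xf));

/* Callers then test the current CPU with a single call: */
if (cpu_has_amd_erratum(amd_erratum_400))
	pr_info("C1E erratum workaround engaged\n");
```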

- 21 May 2010, 1 commit

By Jan Kiszka

Allow the x86 arch to have early exception processing for the purpose of debugging via kgdb.

Signed-off-by: Jan Kiszka <jan.kiszka@web.de>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>

- 11 May 2010, 1 commit

By Avi Kivity

Currently all fpu state access is through tsk->thread.xstate. Since we wish to generalize fpu access to non-task contexts, wrap the state in a new 'struct fpu' and convert existing access to use an fpu API.

Signal frame handlers are not converted to the API since they will remain task-context-only things.

Signed-off-by: Avi Kivity <avi@redhat.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-3-git-send-email-avi@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
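
The shape of the wrapper, as described; a sketch, where only struct fpu and its embedding in thread_struct follow the commit text and the accessor is a hypothetical example of the new API style:

```c
/* The state moves behind a wrapper so non-task users can own one too. */
struct fpu {
	union thread_xstate *state;
};

struct thread_struct {
	/* ... other fields ... */
	struct fpu fpu;		/* was: union thread_xstate *xstate; */
};

/* Hypothetical accessor in the style of the new fpu API. */
static inline struct xsave_struct *fpu_xsave_state(struct fpu *fpu)
{
	return &fpu->state->xsave;
}
```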

- 08 May 2010, 1 commit

By H. Peter Anvin

Clean up the hypervisor layer and the hypervisor drivers, using an ops structure instead of an enumeration with if statements. The identity of the hypervisor, if needed, can be tested by testing the pointer value in x86_hyper.

The MS-HyperV private state is moved into a normal global variable (it's per-system state, not per-CPU state). Being a normal bss variable, it will be left all zero on non-HyperV platforms, and so can generally be tested for HyperV-specific features without additional qualification.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Acked-by: Greg KH <greg@kroah.com>
Cc: Hank Janssen <hjanssen@microsoft.com>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Ky Srinivasan <ksrinivasan@novell.com>
LKML-Reference: <4BE49778.6060800@zytor.com>
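
The ops structure is along these lines; the member list is reconstructed from the description, so treat it as a sketch of arch/x86/include/asm/hypervisor.h at the time rather than an exact copy:

```c
struct hypervisor_x86 {
	const char *name;		/* e.g. "VMware", "Microsoft HyperV" */
	bool (*detect)(void);		/* CPUID/MSR probe */
	void (*set_cpu_features)(struct cpuinfo_x86 *);
	void (*init_platform)(void);
};

/* NULL when no hypervisor was detected; identity checks compare pointers,
 * e.g.: if (x86_hyper == &x86_hyper_ms_hyperv) ... */
extern const struct hypervisor_x86 *x86_hyper;
```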

- 07 May 2010, 1 commit

By Ky Srinivasan

This patch integrates HyperV detection within the framework currently used by VMware. With this patch, we can avoid having to replicate the HyperV detection code in each of the Microsoft HyperV drivers.

Reworked and tweaked by Greg K-H to build properly.

Signed-off-by: K. Y. Srinivasan <ksrinivasan@novell.com>
LKML-Reference: <20100506190841.GA1605@kroah.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Vadim Rozenfeld <vrozenfe@redhat.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: "K.Prasad" <prasad@linux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Alan Cox <alan@linux.intel.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Hank Janssen <hjanssen@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
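
Detection in this framework boils down to a CPUID signature probe; a sketch assuming the standard hypervisor leaf 0x40000000 and the "Microsoft Hv" vendor signature (the minimum-leaf constant is an assumption):

```c
static bool __init ms_hyperv_platform(void)
{
	u32 eax;
	u32 hyp_signature[3];

	if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

	/* Leaf 0x40000000: max hypervisor leaf in EAX, vendor ID in EBX/ECX/EDX. */
	cpuid(0x40000000, &eax, &hyp_signature[0], &hyp_signature[1],
	      &hyp_signature[2]);

	return eax >= 0x40000005 &&
	       !memcmp("Microsoft Hv", hyp_signature, 12);
}
```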

- 26 Mar 2010, 2 commits

By Peter Zijlstra

Implement ptrace-block-step using TIF_BLOCKSTEP, which will set DEBUGCTLMSR_BTF when set for a task while preserving any other DEBUGCTLMSR bits.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100325135414.017536066@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
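
One way to do that at context-switch time, matching the description (a sketch of the __switch_to_xtra() logic; get_debugctlmsr() and update_debugctlmsr() are the existing helpers):

```c
/* Only touch the MSR when the flag differs between prev and next,
 * and only flip the BTF bit, leaving bits owned by perf untouched. */
if (test_tsk_thread_flag(prev_p, TIF_BLOCKSTEP) ^
    test_tsk_thread_flag(next_p, TIF_BLOCKSTEP)) {
	unsigned long debugctl = get_debugctlmsr();

	debugctl &= ~DEBUGCTLMSR_BTF;
	if (test_tsk_thread_flag(next_p, TIF_BLOCKSTEP))
		debugctl |= DEBUGCTLMSR_BTF;

	update_debugctlmsr(debugctl);
}
```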

By Peter Zijlstra

Support for the PMU's BTS features has been upstreamed in v2.6.32, but we still have the old and disabled ptrace-BTS, as Linus noticed not so long ago.

It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without regard for other uses (perf) and doesn't provide the flexibility needed for perf either.

Its users are ptrace-block-step and ptrace-bts; since ptrace-bts was never used and ptrace-block-step can be implemented using a much simpler approach, axe all 3000 lines of it. That includes the *locked_memory*() APIs in mm/mlock.c as well.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Roland McGrath <roland@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Markus Metzger <markus.t.metzger@intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <20100325135413.938004390@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 20 Feb 2010, 1 commit

By Frederic Weisbecker

When the user enables breakpoints through dr7, he can choose between "local" or "global" enable bits, but given how Linux is implemented, both have the same effect.

That said, we don't keep track of how the user enabled the breakpoints, so when the user requests the dr7 value, we only translate the "enabled" status using the global enable bits. It means that if the user enabled a breakpoint using the local enable bit, reading back dr7 will set the global bit and clear the local one.

Apps like Wine expect a full dr7 POKEUSER/PEEKUSER match for emulated software that implements old reverse-engineering protection schemes.

We fix that by keeping track of the whole dr7 value given by the user in the thread structure, to drop this bug. We'll think about something more proper later.

This fixes a 2.6.32 -> 2.6.33-x ptrace regression.

Reported-and-tested-by: Michael Stefaniuc <mstefani@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: K.Prasad <prasad@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Maneesh Soni <maneesh@linux.vnet.ibm.com>
Cc: Alexandre Julliard <julliard@winehq.org>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Maciej Rutecki <maciej.rutecki@gmail.com>
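
For reference, the enable-bit layout in dr7 and the shape of the fix; the ptrace_dr7 field name follows the patch, while the macros and function names here are illustrative:

```c
/* dr7 enable bits for breakpoint slot i: bit 2*i is the local enable,
 * bit 2*i+1 the global enable. Linux treats them identically, so the
 * readback used to report only the global form. */
#define DR7_LOCAL_ENABLE(i)	(1UL << ((i) * 2))
#define DR7_GLOBAL_ENABLE(i)	(1UL << ((i) * 2 + 1))

/* Fix (sketch): store the user's dr7 verbatim on POKEUSER... */
void ptrace_set_dr7(struct thread_struct *t, unsigned long data)
{
	t->ptrace_dr7 = data;
}

/* ...and echo it back on PEEKUSER instead of re-deriving it. */
unsigned long ptrace_get_dr7(struct thread_struct *t)
{
	return t->ptrace_dr7;
}
```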

- 17 Dec 2009, 1 commit

By Suresh Siddha

xsave_cntxt_init() does something like:

    cpuid(0xd, ..);  // find out what features FP/SSE/.. etc are supported
    xsetbv();        // enable the features known to the OS
    cpuid(0xd, ..);  // find out the size of the context for the enabled features

Depending on what features get enabled in xsetbv(), the value of cpuid.eax=0xd.ecx=0.ebx changes correspondingly (representing the size of the context that is enabled).

As we don't have the volatile keyword for native_cpuid(), gcc 4.1.2 optimizes away the second cpuid, and the kernel continues to use the cpuid information obtained before xsetbv(), ultimately leading to a kernel crash on processors supporting more state than the legacy FP/SSE. Add "volatile" to native_cpuid().

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1261009542.2745.55.camel@sbs-t61.sc.intel.com>
Cc: stable@kernel.org
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
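
The fixed helper, for reference (this matches the in-tree definition; asm volatile is what stops gcc from merging the two cpuid calls):

```c
static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
				unsigned int *ecx, unsigned int *edx)
{
	/* ecx is often an input as well as an output. */
	asm volatile("cpuid"
		     : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
		     : "0" (*eax), "2" (*ecx));
}
```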

- 08 Nov 2009, 1 commit

By Frederic Weisbecker

This patch rebases the implementation of the breakpoints API on top of perf event instances. Each breakpoint is now a perf event that handles the register scheduling, thread/cpu attachment, etc.

The new layering is now made as follows:

           ptrace       kgdb      ftrace   perf syscall
              \          |          /         /
               \         |         /         /
                                             /
                Core breakpoint API         /
                         |                 /
                         |                /
                  Breakpoints perf events
                         |
                         |
                   Breakpoints PMU ---- Debug Register constraints handling
                                        (Part of core breakpoint API)
                         |
                         |
                 Hardware debug registers

Reasons for this rewrite:

- Use the centralized/optimized pmu registers scheduling, implying an easier arch integration
- More powerful register handling: perf attributes (pinned/flexible events, exclusive/non-exclusive, tunable period, etc...)

Impact:

- New perf ABI: the hardware breakpoints counters
- Ptrace breakpoints setting remains tricky and still needs some per-thread breakpoints references.

Todo (in the order):

- Support breakpoints perf counter events for perf tools (ie: implement perf_bpcounter_event())
- Support from perf tools

Changes in v2:

- Follow the perf "event" rename
- The ptrace regression has been fixed (ptrace breakpoint perf events weren't released when a task ended)
- Drop the struct hw_breakpoint and store generic fields in perf_event_attr.
- Separate core and arch specific headers, drop asm-generic/hw_breakpoint.h and create linux/hw_breakpoint.h
- Use new generic len/type for breakpoint
- Handle off case: when breakpoints api is not supported by an arch

Changes in v3:

- Fix broken CONFIG_KVM: we need to propagate the breakpoint api changes to kvm when we exit the guest and restore the bp registers to the host.

Changes in v4:

- Drop the hw_breakpoint_restore() stub as it is only used by KVM
- EXPORT_SYMBOL_GPL hw_breakpoint_restore() as KVM can be built as a module
- Restore the breakpoints unconditionally on kvm guest exit: TIF_DEBUG_THREAD doesn't anymore cover every case of running breakpoints and vcpu->arch.switch_db_regs might not always be set when the guest used debug registers. (Waiting for a reliable optimization)

Changes in v5:

- Split the asm-generic/hw-breakpoint.h move to linux/hw_breakpoint.h into a separate patch
- Optimize the breakpoint restore while switching from kvm guest to host: we only want to restore the state if we have active breakpoints on the host, otherwise we don't care about messed-up address registers.
- Add asm/hw_breakpoint.h to Kbuild
- Fix bad breakpoint type in trace_selftest.c

Changes in v6:

- Fix wrong header inclusion in trace.h (triggered a build error with CONFIG_FTRACE_SELFTEST)

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Prasad <prasad@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jan Kiszka <jan.kiszka@web.de>
Cc: Jiri Slaby <jirislaby@gmail.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Paul Mundt <lethal@linux-sh.org>

- 04 Nov 2009, 1 commit

By Stefani Seibold

This patch fixes two issues in the procfs stack information on x86-64 Linux:

- The 32-bit loader compat_do_execve did not store the stack start (this was figured out by Alexey Dobriyan).
- The stack information on an x86_64 kernel always shows 0 kbyte stack usage, because of a missing implementation of the KSTK_ESP macro, which always returned -1. The new implementation now returns the right value.

Signed-off-by: Stefani Seibold <stefani@seibold.net>
Cc: Americo Wang <xiyou.wangcong@gmail.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <1257240160.4889.24.camel@wall-e>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 16 Sep 2009, 1 commit

By Andreas Herrmann

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>

- 15 Sep 2009, 1 commit

By Peter Zijlstra

Move some of the aperf/mperf code out from the cpufreq driver thingy so that other people can enjoy it too.

Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: cpufreq@vger.kernel.org
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
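
What the shared code boils down to: sampling two MSRs and comparing deltas (a sketch; rdmsrl() and the MSR constants are existing kernel names):

```c
u64 aperf, mperf;

/* APERF counts at the actual clock, MPERF at the reference (TSC) clock;
 * across an interval, delta_aperf / delta_mperf gives the average
 * frequency relative to the base frequency. */
rdmsrl(MSR_IA32_APERF, aperf);
rdmsrl(MSR_IA32_MPERF, mperf);
```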

- 11 Sep 2009, 1 commit

By Ben Hutchings

As reported in <http://bugs.debian.org/511703> and <http://bugs.debian.org/515982>, kernels with paravirt-alternatives enabled crash in text_poke_early() on at least some 486-class processors.

The problem is that text_poke_early() itself uses inline functions affected by paravirt-alternatives and so will modify instructions that have already been prefetched. Pentium and later processors will invalidate the prefetched instructions in this case, but 486-class processors do not.

Change sync_core() to limit prefetching on 486-class (and 386-class) processors, and move the call to sync_core() above the call to the modifiable local_irq_restore().

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
LKML-Reference: <1252547631.3423.134.camel@localhost>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
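
The resulting sync_core() is roughly this (a sketch: on pre-586 parts a jump is enough to flush the prefetch queue, everywhere else CPUID serializes):

```c
static inline void sync_core(void)
{
	int tmp;

#if defined(CONFIG_M386) || defined(CONFIG_M486)
	if (boot_cpu_data.x86 < 5)
		/* No speculative execution on these parts; a jmp
		 * invalidates the prefetch queue. */
		asm volatile("jmp 1f\n1:\n" ::: "memory");
	else
#endif
		/* CPUID is a serializing instruction. */
		asm volatile("cpuid"
			     : "=a" (tmp)
			     : "0" (1)
			     : "ebx", "ecx", "edx", "memory");
}
```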

- 04 Sep 2009, 2 commits

By Jeremy Fitzhardinge

Pack aligned things together into a special section to minimize padding holes.

Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Tejun Heo <tj@kernel.org>
LKML-Reference: <4AA035C0.9070202@goop.org>
[ queued up in tip:x86/asm because it depends on this commit: x86/i386: Make sure stack-protector segment base is cache aligned ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>

By Jeremy Fitzhardinge

The Intel Optimization Reference Guide says:

    In Intel Atom microarchitecture, the address generation unit
    assumes that the segment base will be 0 by default. Non-zero
    segment base will cause load and store operations to experience
    a delay.
    - If the segment base isn't aligned to a cache line boundary,
      the max throughput of memory operations is reduced to one
      [e]very 9 cycles.
    [...]
    Assembly/Compiler Coding Rule 15. (H impact, ML generality)
    For Intel Atom processors, use segments with base set to 0
    whenever possible; avoid non-zero segment base address that is
    not aligned to cache line boundary at all cost.

We can't avoid having a non-zero base for the stack-protector segment, but we can make it cache-aligned.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: <stable@kernel.org>
LKML-Reference: <4AA01893.6000507@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 03 Jun 2009, 1 commit

By K.Prasad

The generic hardware breakpoint interface provides an abstraction of hardware breakpoints in front of specific arch implementations for both kernel and user side breakpoints. This includes execution breakpoints and read/write breakpoints, also known as "watchpoints".

This patch introduces header files containing constants, structure definitions and declarations of functions used by the hardware breakpoint core and x86 specific code. It also introduces an array-based storage for the debug-register values in 'struct thread_struct', while modifying all users of the debugreg<n> members in the structure.

[ Impact: add headers for new hardware breakpoint interface ]

Original-patch-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
Reviewed-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
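
The storage change amounts to this (a sketch of the thread_struct fields; HBP_NUM is the number of address registers on x86):

```c
#define HBP_NUM 4	/* %db0..%db3 */

struct thread_struct {
	/* ... */
	/* Was: unsigned long debugreg0 ... debugreg3 as separate fields. */
	unsigned long debugreg[HBP_NUM];	/* breakpoint addresses */
	unsigned long debugreg6;		/* %db6: status  */
	unsigned long debugreg7;		/* %db7: control */
	/* ... */
};
```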

- 19 May 2009, 1 commit

By Jaswinder Singh Rajput

Remove the double declaration of:

    extern void init_scattered_cpuid_features(struct cpuinfo_x86 *c);
    extern unsigned int init_intel_cacheinfo(struct cpuinfo_x86 *c);
    extern unsigned short num_cache_leaves;

They are already defined in the same file.

[ Impact: cleanup ]

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
LKML-Reference: <1242733021.3377.1.camel@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 12 May 2009, 1 commit

By H. Peter Anvin

A long time ago, in days of yore, it all began with a god named Thor. There were vikings and boats and some plans for a Linux kernel header.

Unfortunately, a single 8-bit field was used for bootloader type and version. This has generally worked without *too* much pain, but we're getting close to flat running out of ID fields.

Add extension fields for both type and version. The type will be extended if the old field is 0xE; the version is a simple MSB extension.

Keep /proc/sys/kernel/bootloader_type containing (type << 4) + (ver & 0xf) for backwards compatibility, but also add /proc/sys/kernel/bootloader_version which contains the full version number.

[ Impact: new feature to support more bootloaders ]

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
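
A sketch of the decode described, using the boot_params.hdr field names; the exact composition is paraphrased from the commit text, not copied from the patch:

```c
u8 type = boot_params.hdr.type_of_loader >> 4;
u8 ver  = boot_params.hdr.type_of_loader & 0xf;

if (type == 0xe)	/* 0xE escapes to the extended type field */
	type = 0x10 + boot_params.hdr.ext_loader_type;

bootloader_type    = type;
bootloader_version = (boot_params.hdr.ext_loader_ver << 4) + ver;
```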

- 11 May 2009, 3 commits

By Alexey Dobriyan

struct thread_struct::ip isn't used on x86_64; struct pt_regs::ip is used instead. kgdb should be reading 0 always, but I can't check it.

[ Impact: (potentially) reduce thread_struct size on 64-bit ]

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: containers@lists.linux-foundation.org
LKML-Reference: <20090503233015.GJ16631@x200.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

By Alexey Dobriyan

After commit 464d1a78 aka "[PATCH] i386: Convert i386 PDA code to use %fs", the %fs value saved during context switch moved from thread_struct to pt_regs, and the value in thread_struct became unused.

[ Impact: reduce thread_struct size on 32-bit ]

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: containers@lists.linux-foundation.org
LKML-Reference: <20090503232952.GI16631@x200.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

By Yinghai Lu

setup_force_cpu_cap() only has one user (Xen guest code), but it should not reuse cleared_cpu_caps, otherwise it will have problems on SMP. We need a separate cpu_caps_set array too, for forced-on flags, beyond the forced-off flags. Also need to set up the handling before all CPU caps are combined.

[ Impact: fix the forced-set CPU feature flag logic ]

Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Yinghai Lu <yinghai.lu@kernel.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
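
With the separate array, the macro becomes roughly the following (this is believed to match the post-fix definition in cpufeature.h, give or take formatting):

```c
extern __u32 cpu_caps_cleared[NCAPINTS];	/* forced-off flags */
extern __u32 cpu_caps_set[NCAPINTS];		/* forced-on flags  */

/* Record the forced bit both in boot_cpu_data and in cpu_caps_set,
 * so it survives when secondary CPUs' capabilities are combined. */
#define setup_force_cpu_cap(bit) do {			\
	set_cpu_cap(&boot_cpu_data, bit);		\
	set_bit(bit, (unsigned long *)cpu_caps_set);	\
} while (0)
```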

- 22 Apr 2009, 1 commit

By David Howells

In non-SMP mode, the variable section attribute specified by DECLARE_PER_CPU() does not agree with that specified by DEFINE_PER_CPU(). This means that architectures that have a small data section referenced relative to a base register may throw up linkage errors due to too great a displacement between where the base register points and the per-CPU variable.

On FRV, the .h declaration says that the variable is in the .sdata section, but the .c definition says it's actually in the .data section. The linker throws up the following errors:

    kernel/built-in.o: In function `release_task':
    kernel/exit.c:78: relocation truncated to fit: R_FRV_GPREL12 against symbol `per_cpu__process_counts' defined in .data section in kernel/built-in.o
    kernel/exit.c:78: relocation truncated to fit: R_FRV_GPREL12 against symbol `per_cpu__process_counts' defined in .data section in kernel/built-in.o

To fix this, DECLARE_PER_CPU() should simply apply the same section attribute as does DEFINE_PER_CPU(). However, this is made slightly more complex by virtue of the fact that there are several variants on DEFINE, so these need to be matched by variants on DECLARE.

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 12 Apr 2009, 1 commit

By Suresh Siddha

Impact: save/restore Intel-AVX state properly between tasks

Intel Advanced Vector Extensions (AVX) introduce 256-bit vector processing capability. More about AVX at http://software.intel.com/sites/avx

Add OS support for YMM state management using the xsave/xrstor infrastructure to support AVX.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1239402084.27006.8057.camel@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 07 Apr 2009, 3 commits

By Markus Metzger

Add a warning in case a debug store context is not removed before the task it is attached to is freed. Remove the old warning at thread exit; it is too early.

Declare the debug store context field in thread_struct unconditionally. Remove ds_copy_thread() and ds_exit_thread() and do the work directly in process*.c.

Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
Cc: roland@redhat.com
Cc: eranian@googlemail.com
Cc: oleg@redhat.com
Cc: juan.villacis@intel.com
Cc: ak@linux.jf.intel.com
LKML-Reference: <20090403144601.254472000@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

By Markus Metzger

Add functions to get and set the debugctlmsr on different cpus.

Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
Cc: roland@redhat.com
Cc: eranian@googlemail.com
Cc: oleg@redhat.com
Cc: juan.villacis@intel.com
Cc: ak@linux.jf.intel.com
LKML-Reference: <20090403144554.738772000@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

By Markus Metzger

When a ptraced task is unlinked, we need to stop branch tracing for that task. Since the unlink is called with interrupts disabled, and we need interrupts enabled to stop branch tracing, we defer the work.

Collect all branch tracing related stuff in a branch tracing context.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: roland@redhat.com
Cc: eranian@googlemail.com
Cc: juan.villacis@intel.com
Cc: ak@linux.jf.intel.com
LKML-Reference: <20090403144550.712401000@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 18 Mar 2009, 1 commit

By Rusty Russell

Impact: fix CPU offline when CONFIG_MAXSMP=y

Changeset bc9b83dd "cpumask: convert c1e_mask in arch/x86/kernel/process.c to cpumask_var_t" contained a bug: c1e_mask is manipulated even if C1E isn't detected (and hence the mask is not allocated).

This is simply fixed by checking for NULL (which gcc optimizes out anyway if CONFIG_CPUMASK_OFFSTACK=n, since it knows c1e_mask can never be NULL).

In addition, fix a leak where select_idle_routine re-allocates (and re-clears) c1e_mask on every cpu init.

Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Mike Travis <travis@sgi.com>
LKML-Reference: <200903171450.34549.rusty@rustcorp.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
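
The shape of the fix (a sketch of the offline path; with CONFIG_CPUMASK_OFFSTACK=n a cpumask_var_t is an array, so the NULL test is compile-time true and disappears):

```c
/* c1e_mask is only allocated once C1E is detected; guard the offline path. */
void c1e_remove_cpu(int cpu)
{
	if (c1e_mask != NULL)
		cpumask_clear_cpu(cpu, c1e_mask);
}
```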

- 14 Mar 2009, 1 commit

By Jaswinder Singh Rajput

- fix various style problems
- declare variables before they get used
- introduce clear_all_debug_regs
- fix header file issues

LKML-Reference: <1237009789.4387.2.camel@localhost.localdomain>
Signed-off-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 13 Mar 2009, 2 commits

By Rusty Russell

Impact: reduce kernel memory usage when CONFIG_CPUMASK_OFFSTACK=y

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

By Jan Beulich

Impact: 32/64-bit consolidation

In a first step, this allows fixing phys_addr_valid() for PAE (which until now reported all addresses to be valid). Subsequently, this will also allow simplifying some MTRR handling code.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <49B9101E.76E4.0078.0@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
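
A sketch of the width-aware check this consolidation enables; boot_cpu_data.x86_phys_bits is the consolidated field, while the helper's exact in-tree form may differ in detail:

```c
/* Valid iff the address fits within the CPU's physical address width;
 * under PAE this is no longer unconditionally true. */
static inline int phys_addr_valid(u64 addr)
{
	return !(addr >> boot_cpu_data.x86_phys_bits);
}
```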