- 27 August 2010, 3 commits
-
-
Submitted by Konrad Rzeszutek Wilk
We are using a very simple sort routine which sorts the .iommu_table array in the order of dependencies. Specifically, each iommu_table_entry structure has a 'depend' field containing a function pointer to the IOMMU detection routine that MUST run before us. We sort the array of structures so that the entries with no 'depend' field come first, followed by the ones whose 'depend' function has already been invoked (in other words, the ones that precede us).

Using the kernel's generic 'sort' (a heapsort) would be feasible, but it would require the comparison operator to scan the array recursively to satisfy the "heapify" process and set the levels properly. The end result would be much more complex than it needs to be, and it is simply easier to use this small sort routine.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
LKML-Reference: <1282845485-8991-4-git-send-email-konrad.wilk@oracle.com>
CC: H. Peter Anvin <hpa@zytor.com>
CC: Fujita Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
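To make the ordering requirement concrete, below is a minimal user-space sketch of such a dependency sort. The structure layout and names loosely follow the description above (an iommu_table_entry with a 'depend' pointer to the detect routine that must run earlier), but this is an illustration under those assumptions, not the kernel's actual routine.

  /*
   * Minimal sketch of a dependency sort over an iommu_table-like array.
   * Entries whose dependency appears later in the array are pulled in
   * front of them; entries with no dependency end up first.
   */
  #include <stdio.h>

  typedef int (*detect_fn)(void);

  struct iommu_table_entry {
      detect_fn detect;
      detect_fn depend;       /* detect routine that MUST run before us */
      const char *name;
  };

  static int detect_swiotlb(void) { return 1; }
  static int detect_gart(void)    { return 0; }
  static int detect_amd_vi(void)  { return 0; }

  static void sort_by_dependency(struct iommu_table_entry *start,
                                 struct iommu_table_entry *end)
  {
      struct iommu_table_entry *p, *q, tmp;

      for (p = start; p < end; p++) {
          for (q = p + 1; q < end; q++) {
              if (p->depend == q->detect) {
                  /* Pull the dependency in front of its dependent. */
                  tmp = *q;
                  for (; q > p; q--)
                      *q = *(q - 1);
                  *p = tmp;
              }
          }
      }
  }

  int main(void)
  {
      struct iommu_table_entry table[] = {
          { detect_amd_vi,  detect_gart,    "AMD-Vi"  },
          { detect_gart,    detect_swiotlb, "GART"    },
          { detect_swiotlb, NULL,           "SWIOTLB" },
      };
      size_t n = sizeof(table) / sizeof(table[0]);

      sort_by_dependency(table, table + n);

      for (size_t i = 0; i < n; i++)
          printf("%s\n", table[i].name);  /* SWIOTLB, GART, AMD-Vi */
      return 0;
  }

Because the inner loop re-scans from the current slot after each move, a chained dependency (AMD-Vi needs GART, GART needs SWIOTLB) also ends up in the right order.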
-
Submitted by Konrad Rzeszutek Wilk
We return 1 if the IOMMU has been detected, and zero or an error number if we failed to find it. This is in preparation for using IOMMU_INIT, so that we can detect whether an IOMMU is present. I have not tested this for regressions on Calgary, nor on AMD-Vi chipsets, as I don't have that hardware.

CC: Muli Ben-Yehuda <muli@il.ibm.com>
CC: "Jon D. Mason" <jdmason@kudzu.us>
CC: "Darrick J. Wong" <djwong@us.ibm.com>
CC: Jesse Barnes <jbarnes@virtuousgeek.org>
CC: David Woodhouse <David.Woodhouse@intel.com>
CC: Chris Wright <chrisw@sous-sol.org>
CC: Yinghai Lu <yinghai@kernel.org>
CC: Joerg Roedel <joerg.roedel@amd.com>
CC: H. Peter Anvin <hpa@zytor.com>
CC: Fujita Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
LKML-Reference: <1282845485-8991-3-git-send-email-konrad.wilk@oracle.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Konrad Rzeszutek Wilk
This patch set adds a mechanism to "modularize" the IOMMUs we have on x86. Currently there are up to six IOMMUs, and they have a complex relationship that requires careful execution order. 'pci_iommu_alloc' handles that today, but most folks are unhappy with how it does it. This patch set addresses that and also paves the way for jettisoning unused IOMMUs at run-time. For the discussion that sparked this, please refer to: http://lkml.org/lkml/2010/8/2/282

The first solution that comes to mind is to convert the IOMMU detection routines wholesale to be called during the initcall time frame. Unfortunately that misses the dependency relationship some of the IOMMUs have (for example: for the AMD-Vi IOMMU to work, GART detection MUST run first, and before all of that SWIOTLB MUST run).

The second solution would be to introduce a registration call wherein each IOMMU provides its detection/init routines, as well as what MUST run before it. That would work, except that 'pci_iommu_alloc', which would walk this list, is called during mem_init. That means we don't have a memory allocator yet, and it is so early that we haven't started running through the initcall_t list.

This solution borrows concepts from the second idea and from how MODULE_INIT works. A macro is provided that each IOMMU uses to define its detect function and early_init (run before the memory allocator is active), as well as which other IOMMU MUST run before it. Since most IOMMUs depend on SWIOTLB running first ("pci_swiotlb_detect"), a convenience macro for that dependency is also provided. The macro is similar in design to the MODULE_PARAM macro: we set up a .iommu_table section and populate it with values matching a struct iommu_table_entry. During bootup we sort the array so that the IOMMUs that MUST run before us come first, and then we simply iterate through them, calling the detection routine and, if appropriate, the init routines.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
LKML-Reference: <1282845485-8991-2-git-send-email-konrad.wilk@oracle.com>
CC: H. Peter Anvin <hpa@zytor.com>
CC: Fujita Tomonori <fujita.tomonori@lab.ntt.co.jp>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
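As a rough user-space analogue of the table mechanism described above, the sketch below registers entries into a dedicated ELF section and walks them at startup. The section name, macro name and field set here are assumptions made for illustration (the real kernel section is ".iommu_table" and the real structure carries additional fields); it builds with GCC and GNU ld, which provide the __start_/__stop_ bounds for sections whose names are valid C identifiers.

  #include <stdio.h>

  struct iommu_table_entry {
      int  (*detect)(void);       /* returns nonzero if this IOMMU is present */
      int  (*depend)(void);       /* detect routine that must run before us   */
      void (*early_init)(void);   /* runs before the memory allocator is up   */
  };

  /* Drop an entry into the "iommu_table" section, MODULE_PARAM-style. */
  #define IOMMU_INIT(det, dep, init)                                  \
      static const struct iommu_table_entry __entry_##det             \
      __attribute__((used, section("iommu_table"))) =                 \
          { .detect = det, .depend = dep, .early_init = init }

  static int  swiotlb_detect(void) { return 1; }
  static void swiotlb_init(void)   { puts("swiotlb init"); }
  static int  gart_detect(void)    { return 0; }
  static void gart_init(void)      { puts("gart init"); }

  IOMMU_INIT(swiotlb_detect, NULL, swiotlb_init);
  IOMMU_INIT(gart_detect, swiotlb_detect, gart_init);

  /* GNU ld generates these section bounds automatically. */
  extern const struct iommu_table_entry __start_iommu_table[];
  extern const struct iommu_table_entry __stop_iommu_table[];

  int main(void)
  {
      /* In the kernel this walk happens at boot, after sorting by 'depend'. */
      for (const struct iommu_table_entry *e = __start_iommu_table;
           e < __stop_iommu_table; e++)
          if (e->detect())
              e->early_init();
      return 0;
  }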
-
- 20 August 2010, 2 commits
-
-
Submitted by Daniel Kiper
Fix a boot crash when apic=debug is used and the APIC is not properly initialized. This issue appears during Xen Dom0 kernel boot, but the fix is generic and the crash could occur on real hardware as well.

Signed-off-by: Daniel Kiper <dkiper@net-space.pl>
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: jeremy@goop.org
Cc: <stable@kernel.org> # .35.x, .34.x, .33.x, .32.x
LKML-Reference: <20100819224616.GB9967@router-fw-old.local.net-space.pl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Borislav Petkov
When testing CPU hotplug code on 32-bit we kept hitting the "CPU%d: Stuck ??" message, caused by multiple cores concurrently accessing cpu_callin_mask, among others. These codepaths are not protected from concurrent access, and there is no sane reason to make already complex code unnecessarily more complex, since we hit the issue only when insanely switching cores off- and online. So serialize core hotplugging at the sysfs level and be done with it.

[ v2.1: fix !HOTPLUG_CPU build ]

Cc: <stable@kernel.org>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <20100819181029.GC17171@aftab>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 19 August 2010, 3 commits
-
-
Submitted by KUMANO Syuhei
Fix the return address of subsequent kretprobes when multiple kretprobes are set on the same function. For example:

# cd /sys/kernel/debug/tracing
# echo "r:event1 sys_symlink" > kprobe_events
# echo "r:event2 sys_symlink" >> kprobe_events
# echo 1 > events/kprobes/enable
# ln -s /tmp/foo /tmp/bar

(without this patch)

# cat trace
ln-897 [000] 20404.133727: event1: (kretprobe_trampoline+0x0/0x4c <- sys_symlink)
ln-897 [000] 20404.133747: event2: (system_call_fastpath+0x16/0x1b <- sys_symlink)

(with this patch)

# cat trace
ln-740 [000] 13799.491076: event1: (system_call_fastpath+0x16/0x1b <- sys_symlink)
ln-740 [000] 13799.491096: event2: (system_call_fastpath+0x16/0x1b <- sys_symlink)

Signed-off-by: KUMANO Syuhei <kumano.prog@gmail.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
LKML-Reference: <1281853084.3254.11.camel@camp10-laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Joerg Roedel
This patch fixes machine crashes which occur when heavily exercising the CPU hotplug codepaths on a 32-bit kernel. These crashes are caused by AMD Erratum 383 and result in a fatal machine check exception. Here's the scenario:

1. On 32-bit, the swapper_pg_dir page table is used as the initial page table for booting a secondary CPU.

2. To make this work, swapper_pg_dir needs a direct mapping of physical memory in it (the low mappings). By adding those low, large-page (2M) mappings (PAE kernel), we create the necessary conditions for Erratum 383 to occur.

3. Other CPUs which do not participate in the off- and onlining game may use swapper_pg_dir while the low mappings are present (when leave_mm is called). For all steps below, the CPU referred to is a CPU that is using swapper_pg_dir, not the CPU which is being onlined.

4. The presence of the low mappings in swapper_pg_dir can result in TLB entries for addresses below __PAGE_OFFSET being established speculatively. These TLB entries are marked global and large.

5. When the CPU with such a TLB entry switches to another page table, the entry remains because it is global.

6. The process then generates an access to an address covered by the above TLB entry, but there is a permission mismatch - the TLB entry covers a large global page not accessible to userspace.

7. Due to this permission mismatch a new 4kb, user TLB entry gets established. Further, Erratum 383 provides for a small window of time where both TLB entries are present. This results in an uncorrectable machine check exception signalling a TLB multimatch, which panics the machine.

There are two ways to fix this issue:

1. Always do a global TLB flush when a new cr3 is loaded and the old page table was swapper_pg_dir. I consider this a hack that is hard to understand and has performance implications.

2. Do not use swapper_pg_dir to boot secondary CPUs, like 64-bit does.

This patch implements solution 2. It introduces a trampoline_pg_dir which has the same layout as swapper_pg_dir with low mappings. This page table is used as the initial page table of the booting CPU. Later in the bringup process, the CPU switches to swapper_pg_dir and does a global TLB flush. This fixes the crashes in our test cases.

-v2: switch to swapper_pg_dir right after entering start_secondary() so that we are able to access percpu data which might not be mapped in the trampoline page table.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
LKML-Reference: <20100816123833.GB28147@aftab>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Submitted by Hans Rosenfeld
A bug in the family-model-stepping matching code caused the presence of errata to go undetected when OSVW was not used. This causes hangs on some K8 systems because the E400 workaround is not enabled.

Signed-off-by: Hans Rosenfeld <hans.rosenfeld@amd.com>
LKML-Reference: <1282141190-930137-1-git-send-email-hans.rosenfeld@amd.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 18 August 2010, 2 commits
-
-
Submitted by Zhang, Yanmin
Fix the errata AAK100/AAP53/BD53 workaround. The officially documented workaround, which we implemented in commit 11164cd4 ("perf, x86: Add Nehelem PMU programming errata workaround"), doesn't actually work fully and causes a stuck PMU state under load and non-functioning perf profiling. A functional workaround was found by trial & error. Affects all Nehalem-class Intel PMUs.

Signed-off-by: Zhang Yanmin <yanmin_zhang@linux.intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1281073148.2125.63.camel@ymzhang.sh.intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <stable@kernel.org> # .35.x
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by David Howells
Make do_execve() take a const filename pointer so that kernel_execve() compiles correctly on ARM:

arch/arm/kernel/sys_arm.c:88: warning: passing argument 1 of 'do_execve' discards qualifiers from pointer target type

This also requires the argv and envp arguments to be consted twice, once for the pointer array and once for the strings the array points to, because do_execve() passes a pointer to the filename (now const) to copy_strings_kernel(). A simpler alternative would be to cast the filename pointer in do_execve() when it's passed to copy_strings_kernel().

do_execve() may not change any of the strings it is passed as part of the argv or envp lists, as some of them are in .rodata, so marking these strings as const should be fine. Further, kernel_execve() and sys_execve() need to be changed to match.

This has been test-built on x86_64, frv, arm and mips.

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
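The "consted twice" point is easy to see in a small stand-alone example (hypothetical function and variable names, not kernel code): both the pointer array and the strings it points to are const-qualified, so neither may be modified through the parameter.

  #include <stdio.h>

  static void show_args(const char *const argv[])
  {
      for (int i = 0; argv[i]; i++)
          printf("argv[%d] = %s\n", i, argv[i]);
      /* argv[0] = "other";   would not compile: the pointer array is const */
      /* argv[0][0] = 'x';    would not compile: the strings are const too  */
  }

  int main(void)
  {
      const char *const args[] = { "/bin/true", "arg1", NULL };
      show_args(args);
      return 0;
  }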
-
- 17 August 2010, 1 commit
-
-
Submitted by Namhyung Kim
breakinfo->pev is a pointer to a percpu pointer but was missing the __percpu markup. Add it.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
-
- 15 August 2010, 1 commit
-
-
Submitted by Xiaotian Feng
fpu.state is allocated from task_xstate_cachep, and the size of task_xstate_cachep is xstate_size. xstate_size is set from the cpuid instruction and is often smaller than sizeof(struct xsave_struct). KVM uses sizeof(struct xsave_struct) to fill in/out fpu.state.xsave; since what we allocated for fpu.state is only xstate_size bytes, the kernel writes out of bounds and triggers poison/redzone/padding-overwritten warnings.

Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Reviewed-by: Sheng Yang <sheng@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Sheng Yang <sheng@linux.intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
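The sketch below illustrates the size mismatch in plain user-space terms; the sizes are made up and the structure is a stand-in, but the point carries over: copies into a buffer allocated with the CPU-reported xstate_size must be bounded by that size, not by sizeof() of the largest possible structure.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct xsave_like { unsigned char bytes[832]; };   /* stand-in for struct xsave_struct */

  int main(void)
  {
      size_t xstate_size = 512;                     /* what CPUID reported (often smaller) */
      unsigned char *state = malloc(xstate_size);   /* what actually gets allocated        */
      struct xsave_like src = { { 0 } };

      if (!state)
          return 1;

      /* Wrong: memcpy(state, &src, sizeof(src)) writes past the allocation. */
      memcpy(state, &src, xstate_size);             /* Right: bound by the allocated size  */

      printf("copied %zu of %zu bytes\n", xstate_size, sizeof(src));
      free(state);
      return 0;
  }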
-
- 14 August 2010, 1 commit
-
-
Submitted by David Howells
Mark arguments to certain system calls as being const where they should be but aren't. The list includes:

(*) The filename arguments of various stat syscalls, execve(), various utimes syscalls and some mount syscalls.

(*) The filename arguments of some syscall helpers relating to the above.

(*) The buffer argument of various write syscalls.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 August 2010, 3 commits
-
-
Submitted by Namhyung Kim
acpi_perf_data is a percpu pointer but was missing the __percpu markup. Add it.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Dave Jones <davej@redhat.com>
-
Submitted by Chris Wilson
The current computation of FSEC_PER_SEC, introduced with commit f12a15be as the multiplication (FSEC_PER_NSEC * NSEC_PER_SEC), is performed only with 32-bit integers on small machines, resulting in an overflow and *very* short intervals being programmed. An interrupt storm follows. Note that we also have to specify FSEC_PER_SEC as long long to overcome the same limitations.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
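A stand-alone illustration of the overflow (constant names follow the commit, values are the real SI ratios): forcing the multiplication into 32-bit arithmetic wraps the result, while widening one operand first, which is what declaring FSEC_PER_SEC as long long achieves, preserves the full 10^15 value.

  #include <stdio.h>
  #include <stdint.h>
  #include <inttypes.h>

  #define FSEC_PER_NSEC  1000000
  #define NSEC_PER_SEC   1000000000

  int main(void)
  {
      /* Both constants fit in 32 bits, so a 32-bit multiply wraps around. */
      uint32_t wrong = (uint32_t)FSEC_PER_NSEC * (uint32_t)NSEC_PER_SEC;

      /* Widening one operand first keeps the full 64-bit result (10^15). */
      uint64_t right = (uint64_t)FSEC_PER_NSEC * NSEC_PER_SEC;

      printf("32-bit product: %" PRIu32 "\n", wrong);
      printf("64-bit product: %" PRIu64 "\n", right);
      return 0;
  }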
-
Submitted by Namhyung Kim
pcc_cpu_info is a percpu pointer but was missing the __percpu markup. Add it.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Dave Jones <davej@redhat.com>
-
- 11 August 2010, 1 commit
-
-
Submitted by Linus Torvalds
As pointed out by Jiri Slaby: when I resolved the 32-bit x86 system call entry tables for prlimit (due to the conflict with fanotify), I forgot to add the numbering in comments that we do for every fifth entry.

Reported-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 10 August 2010, 1 commit
-
-
Submitted by Suresh Siddha
Workqueues are now initialized as part of the early_initcall(), so they are available for use during the cold boot process as well.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tejun Heo <tj@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 09 August 2010, 2 commits
-
-
Submitted by Cyrill Gorcunov
If the last active performance counter has not overflowed at the moment an NMI is triggered by another counter, the irq statistics may miss an update stage. As a more serious consequence, the APIC quirk may not be triggered, so the APIC LVT entry stays masked.

Tested-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20100805150917.GA6311@lenovo>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Huang Ying
The abbreviation of severity should be SEV instead of SER, so the CPER severity constants are renamed accordingly. The GHES severity constants are renamed in the same way too.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 06 August 2010, 1 commit
-
-
Submitted by Eric W. Biederman
This fixes a regression in 2.6.35 from 2.6.34 that is present for select models of Intel CPUs when people are using an MP table.

Commit cf7500c0 "x86, ioapic: In mpparse use mp_register_ioapic" started calling mp_register_ioapic from MP_ioapic_info, an extremely simple change that was obviously correct. Unfortunately mp_register_ioapic does just a little more than the previous hand-crafted code, and so we gained this call path. The problem call path is:

MP_ioapic_info()
  mp_register_ioapic()
    io_apic_unique_id()
      io_apic_get_unique_id()
        get_physical_broadcast()
          modern_apic()
            lapic_get_version()
              apic_read(APIC_LVR)

This turned out to be a problem because the local apic was not mapped at that point, unlike the similar point in the ACPI parsing code. The problem is fixed by mapping the local apic when parsing the mptable, as soon as we reasonably can.

Looking at the number of places we set up the fixmap for the local apic, I see some serious simplification opportunities. For the moment, except for not duplicating the setting up of the fixmap in init_apic_mappings, I have not acted on them.

The regression from 2.6.34 is tracked in bug https://bugzilla.kernel.org/show_bug.cgi?id=16173

Cc: <stable@kernel.org> 2.6.35
Reported-by: David Hill <hilld@binarystorm.net>
Reported-by: Tvrtko Ursulin <tvrtko.ursulin@sophos.com>
Tested-by: Tvrtko Ursulin <tvrtko.ursulin@sophos.com>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
LKML-Reference: <m1eiee86jg.fsf_-_@fess.ebiederm.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 05 August 2010, 3 commits
-
-
Submitted by Dongdong Deng
Use the macros provided by the HW breakpoint API.

Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
-
Submitted by Andi Kleen
Found by gcc 4.6's new warnings.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
-
Submitted by Jason Wessel
Implement the ability to individually get and set registers for kdb and kgdb for x86.

Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
CC: Ingo Molnar <mingo@redhat.com>
CC: x86@kernel.org
-
- 04 August 2010, 16 commits
-
-
Submitted by Fenghua Yu
The power limit notification feature is documented in the Intel 64 and IA-32 Architectures SDM, Vol. 3A, section 14.5.6 "Power Limit Notification". It is implemented first on the Intel Sandy Bridge platform.

The patch handles the notification interrupt. The interrupt handler dumps power limit information into log_buf, logs the event in the MCE log, and increases the event counters (core_power_limit and package_power_limit). Upper-level applications could use the data to detect system health or diagnose functionality/performance issues. In the future, the event could be handled in a more elaborate way.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
LKML-Reference: <1280448826-12004-5-git-send-email-fenghua.yu@intel.com>
Reviewed-by: Len Brown <len.brown@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Fenghua Yu
Add package-level thermal throttle interrupt support. The interrupt handler increases the package-level thermal throttle count and also logs the event in the MCE log.

The package-level thermal throttle interrupt happens across threads in a package. Each thread handles the interrupt individually. User-level applications are expected to retrieve the correct event count and log based on package/thread topology. The same situation applies to the core-level interrupt handler. In the future, the interrupt may be reported only per package or per core.

core_throttle_count and package_throttle_count are used for the user interface. Previously only throttle_count was used for the core throttle count. If you think the new core_throttle_count name breaks the user interface, I can change this part.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
LKML-Reference: <1280448826-12004-4-git-send-email-fenghua.yu@intel.com>
Reviewed-by: Len Brown <len.brown@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Dave Jones
The only machines this triggers on should be supported by acpi-cpufreq or ACPI's internal throttling.

Signed-off-by: Dave Jones <davej@redhat.com>
-
Submitted by Holger Freyther
Use __cpuinit instead of __init for the cpufreq_driver init function, like it is done in powernow-k8.c. This removes the warning generated when compiling with the CONFIG_DEBUG_SECTION_MISMATCH=y option.

Signed-off-by: Holger Hans Peter Freyther <holger@moiji-mobile.com>
Signed-off-by: Dave Jones <davej@redhat.com>
-
Submitted by Holger Freyther
Use __cpuinit instead of __init for the cpufreq_driver init function, like it is done in powernow-k8.c, and use __cpuinitdata for data used by the routines marked as __cpuinit. This removes the warning generated when compiling with the CONFIG_DEBUG_SECTION_MISMATCH=y option.

Signed-off-by: Holger Hans Peter Freyther <holger@moiji-mobile.com>
Signed-off-by: Dave Jones <davej@redhat.com>
-
Submitted by Holger Freyther
Use __cpuinit instead of __init for the cpufreq_driver init function, like it is done in powernow-k8.c. This removes the warning generated when compiling with the CONFIG_DEBUG_SECTION_MISMATCH=y option.

Signed-off-by: Holger Hans Peter Freyther <holger@moiji-mobile.com>
Signed-off-by: Dave Jones <davej@redhat.com>
-
Submitted by Borislav Petkov
rdmsr() takes the lower 32 bits as its second argument and the high 32 bits as its third. Fix the variable names accordingly, since they were swapped. There should be no functional change resulting from this patch.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Dave Jones <davej@redhat.com>
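For reference, rdmsr() splits a 64-bit MSR value so that the second argument receives bits 31:0 and the third bits 63:32. The user-space stand-in below (a hypothetical helper, not the kernel macro) shows why the variable names matter:

  #include <stdio.h>
  #include <stdint.h>
  #include <inttypes.h>

  /* Mimics the lo/hi split that the kernel's rdmsr(msr, lo, hi) performs. */
  static void rdmsr_like(uint64_t msr_value, uint32_t *lo, uint32_t *hi)
  {
      *lo = (uint32_t)(msr_value & 0xffffffffu);
      *hi = (uint32_t)(msr_value >> 32);
  }

  int main(void)
  {
      uint32_t lo, hi;

      rdmsr_like(0x0000000200000001ull, &lo, &hi);
      /* prints lo=0x00000001 hi=0x00000002 */
      printf("lo=0x%08" PRIx32 " hi=0x%08" PRIx32 "\n", lo, hi);
      return 0;
  }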
-
Submitted by Peter Huewe
This patch converts pci_table entries where .subvendor=PCI_ANY_ID, .subdevice=PCI_ANY_ID, .class=0 and .class_mask=0 to use the PCI_VDEVICE macro, which improves readability.

Signed-off-by: Peter Huewe <peterhuewe@gmx.de>
Signed-off-by: Dave Jones <davej@redhat.com>
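A hedged before/after sketch of such a conversion is shown below, with a made-up vendor/device pair; it is kernel-style code that would only build inside a kernel tree, and real driver tables use their own IDs.

  #include <linux/pci.h>

  static const struct pci_device_id example_ids[] = {
      /* open-coded: vendor, device, subvendor, subdevice, class, class_mask, data */
      { PCI_VENDOR_ID_INTEL, 0x1234, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
      /* equivalent entry using the PCI_VDEVICE() helper */
      { PCI_VDEVICE(INTEL, 0x1234), 0 },
      { }   /* terminating entry */
  };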
-
Submitted by Kulikov Vasiliy
Use for_each_pci_dev() to simplify the code.

Signed-off-by: Kulikov Vasiliy <segooon@gmail.com>
Signed-off-by: Dave Jones <davej@redhat.com>
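As a rough sketch (kernel-style code, buildable only in a kernel tree; the printout is purely for illustration), the helper collapses the usual pci_get_device() loop:

  #include <linux/kernel.h>
  #include <linux/pci.h>

  /* Before: open-coded walk over every PCI device in the system. */
  static void walk_open_coded(void)
  {
      struct pci_dev *dev = NULL;

      while ((dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev)) != NULL)
          pr_info("found %04x:%04x\n", dev->vendor, dev->device);
  }

  /* After: the same walk using the for_each_pci_dev() helper. */
  static void walk_helper(void)
  {
      struct pci_dev *dev = NULL;

      for_each_pci_dev(dev)
          pr_info("found %04x:%04x\n", dev->vendor, dev->device);
  }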
-
Submitted by Thomas Renninger
and fix the broken case where a core's frequency depends on others.

trace_power_frequency was implemented in a rather ungeneric way, in the acpi-cpufreq driver's target() function only.

-> Move the call to trace_power_frequency to cpufreq.c:cpufreq_notify_transition(), where the CPUFREQ_POSTCHANGE notifier is triggered. This makes power frequency tracing available to all cpufreq drivers.

trace_power_frequency did not trace frequency changes correctly when the userspace governor was used or when CPU cores' frequencies depend on each other.

-> Moving this into the CPUFREQ_POSTCHANGE notifier and passing the CPU which gets switched automatically fixes this.

Robert Schoene provided some important fixes on top of my initial quick-shot version, which are integrated in this patch:
- Forgot some changes in the power_end trace (TP_printk/variable names)
- The variable dummy in power_end must now be cpu_id
- Use a static 64-bit variable instead of unsigned int for cpu_id

Signed-off-by: Thomas Renninger <trenn@suse.de>
CC: davej@redhat.com
CC: arjan@infradead.org
CC: linux-kernel@vger.kernel.org
CC: robert.schoene@tu-dresden.de
Tested-by: robert.schoene@tu-dresden.de
Signed-off-by: Dave Jones <davej@redhat.com>
-
Submitted by Thomas Renninger
Signed-off-by: Thomas Renninger <trenn@suse.de>
CC: venki@google.com
CC: davej@redhat.com
CC: arjan@infradead.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Dave Jones <davej@redhat.com>
-
Submitted by Marti Raudsepp
On Wed, 2010-01-20 at 16:56 +0100, Thomas Renninger wrote:
> But most often this happens if people upgrade their CPU and do not
> update their BIOS.
> Or the vendor does not recognise the new CPU even if the BIOS got
> updated.

Maybe some of those people just didn't realize it was disabled in the BIOS? If you tell users that it's a firmware bug then they'll probably just give up.

> The message itself might be an enhancement, IMO it's not worth a patch.

Why do you think so? I spent an hour hunting down the BIOS upgrade, only to find that it didn't improve anything. It was a day later that I realized that it might be a BIOS option; and the option was literally the _last_ option in the whole BIOS setup. :) This message would have saved the day.

> But do not revert the FW_BUG part!

Sure, you have a point here. How about this patch?
-
Submitted by Borislav Petkov
The P-state transition latency check was added for broken F10h BIOSes which wrongly contain a value of 0 for transition and bus master latency. Fam11h and later, however, (will) have similar transition latency, so extend that behavior to them too.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Dave Jones <davej@redhat.com>
-
Submitted by Matthew Garrett
The PCC cpufreq driver unmaps the mailbox address range if any CPUs fail to initialise, but doesn't do anything to remove the registered CPUs from the cpufreq core, resulting in failures further down the line. We're better off simply returning a failure - the cpufreq core will unregister us cleanly if we end up with no successfully registered CPUs. Tidy up the failure path and also add a sanity check to ensure that the firmware gives us a realistic frequency - the core deals badly with that being set to 0.

Signed-off-by: Matthew Garrett <mjg@redhat.com>
Cc: Naga Chumbalkar <nagananda.chumbalkar@hp.com>
Signed-off-by: Dave Jones <davej@redhat.com>
-
Submitted by Daniel J Blueman
Prevent double freeing on the error path.

Signed-off-by: Daniel J Blueman <daniel.blueman@gmail.com>
Signed-off-by: Dave Jones <davej@redhat.com>
-
Submitted by Matthew Garrett
The PCC specification documents an _OSC method that's incompatible with the one defined as part of the ACPI spec. This shouldn't be a problem, as both are supposed to be guarded with a UUID. Unfortunately approximately nobody (including HP, who wrote this spec) properly checks the UUID on entry to the _OSC call. Right now this could result in surprising behaviour if the PCC driver performs an _OSC call on a machine that doesn't implement the PCC specification. Check whether the PCCH method exists first in order to reduce this probability.

Signed-off-by: Matthew Garrett <mjg@redhat.com>
Cc: Naga Chumbalkar <nagananda.chumbalkar@hp.com>
Signed-off-by: Dave Jones <davej@redhat.com>
-