- 17 January 2012 (7 commits)
-
-
Committed by Randy Dunlap

Move the ZONE_DMA kconfig symbol under a menu item instead of having it listed before everything else in "make {xconfig | gconfig | nconfig | menuconfig}". This drops the first line of the top-level kernel config menu (shown below as of 3.2) and moves it under "Processor type and features":

	[*] DMA memory allocation support
	    General setup  --->
	[*] Enable loadable module support  --->
	[*] Enable the block layer  --->
	    Processor type and features  --->
	    Power management and ACPI options  --->
	    Bus options (PCI etc.)  --->
	    Executable file formats / Emulations  --->

Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/4F14811E.6090107@xenotime.net
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: David Rientjes <rientjes@google.com>
-
Committed by Cliff Wickman

This patch adds separate accounting of UV2 message "strong nacks" in the BAU statistics.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Link: http://lkml.kernel.org/r/20120116212238.GF5767@sgi.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Cliff Wickman

This patch moves the ack of the BAU interrupt to the beginning of the interrupt handler, so that there is less chance of a lost interrupt or of a slow response to a shootdown message.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Link: http://lkml.kernel.org/r/20120116212146.GE5767@sgi.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Cliff Wickman

This patch removes an unnecessary test for a no-destination-resources-available condition that looks like a destination timeout on UV1 but is separately distinguishable on UV2.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Link: http://lkml.kernel.org/r/20120116212050.GD5767@sgi.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Cliff Wickman

This patch implements a workaround for a UV2 hardware bug. The bug is a non-atomic update of a memory-mapped register: when hardware message delivery and software message acknowledge occur simultaneously, the pending message acknowledge for the arriving message may be lost. This causes the sender's message status to stay busy.

Part of the workaround is to not acknowledge a completed message until it is verified that no other message is actually using the resource that is mistakenly recorded in the completed message. Another part is to test for a long elapsed time in such a busy condition, then handle it by using a spare sending descriptor. The stay-busy condition is eventually timed out by hardware, and then the original sending descriptor can be re-used. Most of that logic change is in keeping track of the current descriptor and the state of the spares.

Occurrences of the workaround are added to the BAU statistics.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Link: http://lkml.kernel.org/r/20120116211947.GC5767@sgi.com
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Cliff Wickman

Move the call to enable_timeouts() forward so that BAU_MISC_CONTROL is initialized before calculate_destination_timeout() uses it. Also fix the calculation of the BAU destination timeout for UV2 (in calculate_destination_timeout()).

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Link: http://lkml.kernel.org/r/20120116211848.GB5767@sgi.com
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Cliff Wickman

Switch the use of the Broadcast Assist Unit on SGI Altix UV2 to native UV2 mode on new hardware, rather than the legacy mode. UV2 native mode has a different format for a broadcast message. We also need quick differentiation between UV1 and UV2.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Link: http://lkml.kernel.org/r/20120116211750.GA5767@sgi.com
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 16 January 2012 (1 commit)
-
-
Committed by Anton Vorontsov

This very noisy sparse warning appears on almost every file in the kernel:

	CHECK   init/main.c
	arch/x86/include/asm/thread_info.h:43:55: error: dubious one-bit signed bitfield
	arch/x86/include/asm/thread_info.h:44:46: error: dubious one-bit signed bitfield

Sparse is right: this patch changes the sig_on_uaccess_error and uaccess_err flags to an unsigned type, which fixes the warning.

Signed-off-by: Anton Vorontsov <cbouatmailru@gmail.com>
Acked-by: Andy Lutomirski <luto@mit.edu>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: Dan Carpenter <error27@gmail.com>
Link: http://lkml.kernel.org/r/20120111011146.GA30428@oksana.dev.rtsoft.ru
Signed-off-by: Ingo Molnar <mingo@elte.hu>
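For readers unfamiliar with the warning: with gcc, a signed one-bit bitfield can only hold 0 and -1, so assigning 1 to it silently reads back as -1. A minimal userspace illustration (generic field names here; the commit's actual fields are sig_on_uaccess_error and uaccess_err in struct thread_info):

	#include <stdio.h>

	struct flags {
		int          s:1;	/* dubious: value range is 0 and -1 */
		unsigned int u:1;	/* the fix: value range is 0 and 1  */
	};

	int main(void)
	{
		struct flags f = { .s = 1, .u = 1 };

		printf("signed: %d, unsigned: %u\n", f.s, f.u);	/* prints -1 and 1 */
		return 0;
	}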
-
- 10 January 2012 (1 commit)
-
-
Committed by H. Peter Anvin

atomic64_read() doesn't actually write anything (as far as the C environment is concerned... the CPU does actually write, but that's an implementation quirk), so it should take a const pointer. This does NOT mean that it is safe to use atomic64_read() on an object in read-only storage (it will trap!).

Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Link: http://lkml.kernel.org/r/20120109165859.1879abda.akpm@linux-foundation.org
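A sketch of what the interface change amounts to, with atomic64_t reduced to a bare struct for illustration (this is not the kernel's cmpxchg8b-based 32-bit implementation, which is exactly why the CPU writes under the hood and why read-only storage traps):

	typedef struct { long long counter; } atomic64_t;	/* stand-in type */

	/* before: static inline long long atomic64_read(atomic64_t *v);       */
	/* after - the pointer parameter is constified:                         */
	static inline long long atomic64_read(const atomic64_t *v)
	{
		return v->counter;	/* simplified body for illustration */
	}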
-
- 08 January 2012 (1 commit)
-
-
Committed by Jack Steiner

SGI UV systems print a message during boot:

	UV: Found <num> blades

Due to packaging changes, the blade count is not accurate on the next generation of the platform. This patch corrects the count.

Signed-off-by: Jack Steiner <steiner@sgi.com>
Cc: <stable@kernel.org>
Link: http://lkml.kernel.org/r/20120106191900.GA19772@sgi.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 04 January 2012 (2 commits)
-
-
Committed by Eric Dumazet

It appears that almost all functions in arch/x86/lib/atomic64_cx8_32.S are wrong in case cmpxchg8b must be restarted, because the LOCK_PREFIX macro defines a label "1" that clashes with other local labels:

	1:	some_instructions
		LOCK_PREFIX
		cmpxchg8b (%ebp)
		jne 1b		/* jumps to the beginning of LOCK_PREFIX! */

A possible fix is to use a magic label "672" in the LOCK_PREFIX asm definition, similar to the "671" one we defined in LOCK_PREFIX_HERE.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Jan Beulich <JBeulich@suse.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1325608540.2320.103.camel@edumazet-HP-Compaq-6005-Pro-SFF-PC
Signed-off-by: Ingo Molnar <mingo@elte.hu>
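The failure mode reproduces readily in userspace, because GNU as binds "1b" to the nearest preceding "1:". Below is a runnable toy (x86-64, gcc) with a deliberately simplified LOCK_PREFIX stand-in, not the kernel's real alternatives-based macro:

	#include <stdio.h>

	/* simplified stand-in: like the kernel macro, it emits a "1:" label */
	#define LOCK_PREFIX "\n1:\n\tlock; "

	int main(void)
	{
		long n = 0;

		asm volatile("1:\n\t"			/* intended restart point */
			     "addq $100, %0\n\t"	/* the "some_instructions" */
			     LOCK_PREFIX "addq $1, %0\n\t"
			     "cmpq $300, %0\n\t"
			     "jl 1b\n\t"		/* binds to the macro's "1:",
						 	 * so addq $100 never re-runs */
			     : "+m" (n));

		printf("n = %ld (a loop restarting at the caller's label would give 303)\n", n);
		return 0;
	}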
-
Committed by Jan Beulich

Just like the per-CPU ones, they had several problems/shortcomings:

Only the first memory operand was mentioned in the asm() operands, and the 2x64-bit version didn't have a memory clobber while the 2x32-bit one did. The former allowed the compiler to not recognize the need to re-load the data in case it had it cached in some register, while the latter was overly destructive.

The types of the local copies of the old and new values were incorrect (the types of the pointed-to variables should be used here, to make sure the respective old/new variable types are compatible).

The __dummy/__junk variables were pointless, given that local copies of the inputs already existed (and can hence be used for discarded outputs).

The 32-bit variant of cmpxchg_double_local() referenced cmpxchg16b_local().

At once also:
- change the return value type to what it really is: 'bool'
- unify 32- and 64-bit variants
- abstract out the common part of the 'normal' and 'local' variants

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/4F01F12A020000780006A19B@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 26 December 2011 (1 commit)
-
-
Committed by Jan Kiszka

Unlike all of the other cpuid bits, the TSC deadline timer bit is set unconditionally, regardless of what userspace wants. This is broken in several ways:

- if userspace doesn't use KVM_CREATE_IRQCHIP and doesn't emulate the TSC deadline timer feature, a guest that uses the feature will break
- live migration to older host kernels that don't support the TSC deadline timer will pull the feature from under the guest's feet, breaking it
- guests that are broken wrt the feature will fail

Fix by not enabling the feature automatically; instead, report it to userspace. Because the feature depends on KVM_CREATE_IRQCHIP, which we cannot guarantee will be called, we expose it via KVM_CAP_TSC_DEADLINE_TIMER and not KVM_GET_SUPPORTED_CPUID.

This fixes the Illumos guest kernel, which uses the TSC deadline timer feature.

[avi: add the KVM_CAP + documentation]

Reported-by: Alexey Zaytsev <alexey.zaytsev@gmail.com>
Tested-by: Alexey Zaytsev <alexey.zaytsev@gmail.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
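With the capability exposed, userspace can probe for it before deciding whether to advertise the CPUID bit to its guest. A minimal check, assuming kernel headers recent enough to define KVM_CAP_TSC_DEADLINE_TIMER:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int main(void)
	{
		int kvm = open("/dev/kvm", O_RDWR);

		if (kvm < 0) {
			perror("/dev/kvm");
			return 1;
		}
		/* KVM_CHECK_EXTENSION returns > 0 when the capability exists */
		int cap = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_TSC_DEADLINE_TIMER);
		printf("TSC deadline timer: %s\n", cap > 0 ? "supported" : "not supported");
		return 0;
	}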
-
- 25 December 2011 (1 commit)
-
-
Committed by Jan Kiszka

User space may create the PIT and forget about setting up the irqchips. In that case, firing PIT IRQs will crash the host:

	BUG: unable to handle kernel NULL pointer dereference at 0000000000000128
	IP: [<ffffffffa10f6280>] kvm_set_irq+0x30/0x170 [kvm]
	...
	Call Trace:
	 [<ffffffffa11228c1>] pit_do_work+0x51/0xd0 [kvm]
	 [<ffffffff81071431>] process_one_work+0x111/0x4d0
	 [<ffffffff81071bb2>] worker_thread+0x152/0x340
	 [<ffffffff81075c8e>] kthread+0x7e/0x90
	 [<ffffffff815a4474>] kernel_thread_helper+0x4/0x10

Prevent this by checking the irqchip mode before starting a timer. We can't deny creating the PIT if the irqchips aren't set up yet, as current user land expects this order to work.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
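The shape of the fix, per the commit text, is a guard on the timer-arming path. A hedged, self-contained sketch with stand-in types - not the actual KVM code:

	#include <stdbool.h>
	#include <stddef.h>

	struct kvm { void *irq_routing; };	/* stand-in for the real struct */

	static bool irqchip_in_kernel(struct kvm *kvm)
	{
		return kvm->irq_routing != NULL;	/* stand-in for the real check */
	}

	static void create_pit_timer(struct kvm *kvm)
	{
		/* without an in-kernel irqchip, kvm_set_irq() would chase a
		 * NULL routing table, so don't arm the timer at all */
		if (!irqchip_in_kernel(kvm))
			return;

		/* arm the periodic/one-shot timer here (elided in this sketch) */
	}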
-
- 24 December 2011 (6 commits)
-
-
Committed by Suresh Siddha

If x2apic mode is disabled (for reasons such as interrupt remapping not being available), we need to skip the logical bringup of CPUs with APIC IDs >= 255. Otherwise, since the platform is in xapic mode, INIT/startup IPIs will consider only the low 8 bits of the APIC ID, and there is a possibility of re-sending INIT/startup IPIs to a logical CPU that is already online. Skipping such CPUs avoids potential reboots and other unpredictable behavior.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/20111222014632.702932458@sbsiddha-desk.sc.intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Yinghai Lu

Currently the "nox2apic" boot parameter keeps x2apic mode from being enabled when the CPU and kernel are both capable of it and the OS handover happened in xapic mode. If the BIOS enabled x2apic prior to OS handover, however, "nox2apic" had no effect.

If the boot CPU's APIC ID is < 255, let "nox2apic" disable the x2apic mode set up by the BIOS. The kernel then falls back to xapic mode and brings up only the CPUs that have APIC IDs < 255.

-v2: fix a patch error and two compile warnings; make disable_x2apic __init

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/CAE9FiQUeB-3uxJAMiHsz=uPWoFv5Hg1pVepz7aU6YtqOxMC-=Q@mail.gmail.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Yinghai Lu

On some recent Intel SNB platforms, the BIOS by default pre-enables x2apic mode on the CPU without setting up interrupt remapping. This was causing the kernel to panic: the CPU is already in x2apic mode, but the OS could not enable interrupt remapping (a prerequisite for using the x2apic capability). On these platforms all APIC IDs are < 255, so the kernel can fall back to xapic mode when the BIOS has not enabled interrupt remapping (which is mostly the case if the BIOS has not exported interrupt-remapping tables to the OS).

Reported-by: Berck E. Nash <flyboy@gmail.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/20111222014632.600418637@sbsiddha-desk.sc.intel.com
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Yinghai Lu

If the x2apic feature is not present (either because the CPU is not capable of it or because the user has disabled the feature via a boot parameter, etc.), ignore the x2apic MADT and SRAT entries provided by the ACPI tables.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/20111222014632.540896503@sbsiddha-desk.sc.intel.com
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Yinghai Lu

Currently we start with the default apic_flat mode and switch to some other apic model depending on the apic drivers' acpi_madt_oem_check() routines, followed later by the apic drivers' probe() routines. Once a non-flat mode was selected, there was no case in which we fell back to flat mode again.

Upcoming changes allow BIOS-enabled x2apic mode to be disabled by the OS if interrupt remapping etc. is not set up properly by the BIOS. We now have a case where the apic falls back to legacy flat mode during the apic driver probe() sequence. Add a simple flat_probe() which allows apic_flat mode to be the last fallback option.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/20111222014632.484984298@sbsiddha-desk.sc.intel.com
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
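Given that flat mode is meant to be the probe order's last resort, the new probe can simply match unconditionally. A sketch of what it plausibly reduces to (inferred from the commit text, not quoted from the patch):

	static int flat_probe(void)
	{
		return 1;	/* always matches: apic_flat sits at the end of
				 * the probe sequence, making it the fallback */
	}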
-
Committed by Robert Richter

Use raw_spin_unlock_irqrestore() as the counterpart to raw_spin_lock_irqsave().

Signed-off-by: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1324646665-13334-1-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 21 December 2011 (3 commits)
-
-
Committed by Stephane Eranian

Add event maps for Intel x86 processors (with architected PMU v2 or later). On AMD there is frequency scaling but no Turbo, and there is no core cycle event that is not subject to frequency scaling, so we do not provide a mapping there.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1323559734-3488-4-git-send-email-eranian@google.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Stephane Eranian

This patch adds the encoding and definitions necessary for the unhalted_reference_cycles event, available since Intel Core 2 processors.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1323559734-3488-2-git-send-email-eranian@google.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Kevin Winchester

Several fields in struct cpuinfo_x86 were not defined for the !SMP case, likely to save space. However, those fields still have some meaning for UP, and keeping them allows some #ifdef removal from other files. The additional size of the UP kernel from this change is not significant enough to justify keeping up the distinction:

	   text	   data	    bss	    dec	    hex	filename
	4737168	 506459	 972040	6215667	 5ed7f3	vmlinux.o.before
	4737444	 506459	 972040	6215943	 5ed907	vmlinux.o.after

That is a difference of 276 bytes for an example UP config. If someone wants those 276 bytes back badly, then it should be implemented in a cleaner way.

Signed-off-by: Kevin Winchester <kjwinchester@gmail.com>
Cc: Steffen Persvold <sp@numascale.com>
Link: http://lkml.kernel.org/r/1324428742-12498-1-git-send-email-kjwinchester@gmail.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 20 December 2011 (2 commits)
-
-
Committed by Clemens Ladisch

When printing the code bytes in show_registers(), the markers around the byte at the fault address could make the printk() format string look like a valid log level and facility code. This would prevent that byte from being printed and would result in a spurious newline:

	[ 7555.765589] Code: 8b 32 e9 94 00 00 00 81 7d 00 ff 00 00 00 0f 87 96 00 00 00 48 8b 83 c0 00 00 00 44 89 e2 44 89 e6 48 89 df 48 8b 80 d8 02 00 00
	[ 7555.765683]  8b 48 28 48 89 d0 81 e2 ff 0f 00 00 48 c1 e8 0c 48 c1 e0 04

Add KERN_CONT where needed, and elsewhere in show_registers() for consistency.

Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Link: http://lkml.kernel.org/r/4EEFA7AE.9020407@ladisch.de
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
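The pattern of the fix: every continuation fragment is tagged KERN_CONT, so a fragment beginning with the fault-byte marker cannot be parsed as a new record's log level. A sketch of the byte-dumping loop (kernel-context code; printk and KERN_CONT are the real API, but the loop shape is assumed, not quoted from the patch):

	static void print_code_bytes(const unsigned char *code, int len, int fault)
	{
		int i;

		printk(KERN_EMERG "Code: ");
		for (i = 0; i < len; i++) {
			if (i == fault)
				printk(KERN_CONT "<%02x> ", code[i]);
			else
				printk(KERN_CONT "%02x ", code[i]);
		}
		printk(KERN_CONT "\n");
	}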
-
Committed by Markus Kötter

The x86 jump instruction size is 2 or 5 bytes (near/long jump), not 2 or 6 bytes. As a result, when a conditional jump is followed by a long jump, the conditional jump target ends up one byte past the start of the target instruction.

Signed-off-by: Markus Kötter <nepenthesdev@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
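For reference, the relevant x86 branch encodings (standard ISA facts; the macro names below are illustrative, not from the patch):

	/* EB rel8      short jmp - 2 bytes                      */
	/* E9 rel32     near jmp  - 5 bytes (the commit's point) */
	/* 7x rel8      short jcc - 2 bytes                      */
	/* 0F 8x rel32  near jcc  - 6 bytes                      */
	#define JMP_SHORT_LEN	2
	#define JMP_NEAR_LEN	5	/* not 6 */
	#define JCC_SHORT_LEN	2
	#define JCC_NEAR_LEN	6

An emitter that sizes every long jump at 6 bytes computes conditional-jump targets one byte past the start of the following instruction, which is exactly the off-by-one described above.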
-
- 18 December 2011 (3 commits)
-
-
Committed by Fernando Luis Vazquez Cao

LAPIC-related statistics are grouped inside the per-cpu structure irq_stat, so there is no need for icr_read_retry_count to be a standalone per-cpu variable. This patch moves icr_read_retry_count to where it belongs.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
Cc: Jörn Engel <joern@logfs.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Jesper Juhl

gcc noticed (when using -Wempty-body) that our use of lock_cmos() and unlock_cmos() in arch/x86/include/asm/mach_traps.h is potentially problematic:

	arch/x86/include/asm/mach_traps.h:32:15: warning: suggest braces around empty body in an 'else' statement [-Wempty-body]
	arch/x86/include/asm/mach_traps.h:40:16: warning: suggest braces around empty body in an 'else' statement [-Wempty-body]

Let's just use the standard 'do {} while (0)' solution. That shuts up gcc and also prevents future problems if the macros end up being used in a similar situation elsewhere.

Signed-off-by: Jesper Juhl <jj@chaosbits.net>
Link: http://lkml.kernel.org/r/alpine.LNX.2.00.1112180103130.21784@swampdragon.chaosbits.net
Signed-off-by: Ingo Molnar <mingo@elte.hu>
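Why 'do { } while (0)' rather than expanding to nothing: the macro then behaves as a single statement, so if/else bodies stay well-formed and -Wempty-body stays quiet. A compilable illustration (names echo the commit's unlock_cmos(); the surrounding logic is a stand-in):

	#include <stdio.h>

	#define unlock_cmos_empty()			/* expands to nothing */
	#define unlock_cmos()	do { } while (0)	/* a single statement */

	int main(void)
	{
		int nmi_enabled = 1;

		if (nmi_enabled)
			printf("handling NMI\n");
		else
			unlock_cmos();	/* with the empty variant, this line
					 * becomes a bare ";" and gcc's
					 * -Wempty-body complains */
		return 0;
	}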
-
Committed by Jesper Juhl

If one builds the kernel with -Wempty-body, one gets this warning:

	mm/memory.c:3432:46: warning: suggest braces around empty body in an 'if' statement [-Wempty-body]

due to the fact that 'flush_tlb_fix_spurious_fault' is a macro that can sometimes be defined to nothing.

Signed-off-by: Jesper Juhl <jj@chaosbits.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: linux-mm@kvack.org
Cc: Michel Lespinasse <walken@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Link: http://lkml.kernel.org/r/alpine.LNX.2.00.1112180128070.21784@swampdragon.chaosbits.net
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 17 December 2011 (1 commit)
-
-
Committed by Chen Gong

mce-inject provides a mechanism to simulate errors so that test scripts can check for correct operation of the kernel without requiring any specialized hardware to create rare events. The existing code can simulate events in normal process context and also in NMI context, but not in IRQ context. This patch fills that gap.

Link: https://lkml.org/lkml/2011/12/7/537
Signed-off-by: Chen Gong <gong.chen@linux.intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 16 December 2011 (4 commits)
-
-
Committed by David Howells

fls(N), ffs(N) and fls64(N) can be optimised on x86_64.

Currently they use a CMOV instruction after the BSR/BSF to set the destination register to -1 if the value to be scanned was 0 (in which case BSR/BSF set the Z flag).

Instead, according to the AMD64 specification, we can make use of the fact that BSR/BSF doesn't modify its output register if its input is 0. By preloading the output with -1 and incrementing the result, we achieve the desired result without the need for a conditional check.

The Intel x86_64 specification, however, says that the result of BSR/BSF in such a case is undefined. That said, when queried, one of the Intel CPU architects said that the behaviour on all Intel CPUs is that:

 (1) with BSRQ/BSFQ, the 64-bit destination register is written with its original value if the source is 0, thus, in essence, giving the effect we want. And,

 (2) with BSRL/BSFL, the lower half of the 64-bit destination register is written with its original value if the source is 0, and the upper half is cleared, thus giving us the effect we want (we return a 4-byte int).

Further, it was indicated that they (Intel) are unlikely to get away with changing the behaviour.

It might be possible to optimise the 32-bit versions of these functions, but there's a lot more variation, and so the effective non-destructive property of BSRL/BSFL cannot be relied on. [ hpa: specifically, some 486 chips are known to NOT have this property. ]

I have benchmarked these functions on my Core2 Duo test machine using the following program:

	#include <stdlib.h>
	#include <stdio.h>

	#ifndef __x86_64__
	#error
	#endif

	#define PAGE_SHIFT 12
	#define __always_inline inline __attribute__((always_inline))
	#define noinline __attribute__((noinline))

	typedef unsigned long long __u64, u64;
	typedef unsigned int __u32, u32;

	static __always_inline int fls64(__u64 x)
	{
		long bitpos = -1;

		asm("bsrq %1,%0" : "+r" (bitpos) : "rm" (x));
		return bitpos + 1;
	}

	static inline unsigned long __fls(unsigned long word)
	{
		asm("bsr %1,%0" : "=r" (word) : "rm" (word));
		return word;
	}

	static __always_inline int old_fls64(__u64 x)
	{
		if (x == 0)
			return 0;
		return __fls(x) + 1;
	}

	static noinline /* __attribute__((const)) */ int old_get_order(unsigned long size)
	{
		int order;

		size = (size - 1) >> (PAGE_SHIFT - 1);
		order = -1;
		do {
			size >>= 1;
			order++;
		} while (size);
		return order;
	}

	static inline __attribute__((const)) int get_order_old_fls64(unsigned long size)
	{
		int order;

		size--;
		size >>= PAGE_SHIFT;
		order = old_fls64(size);
		return order;
	}

	static inline __attribute__((const)) int get_order(unsigned long size)
	{
		int order;

		size--;
		size >>= PAGE_SHIFT;
		order = fls64(size);
		return order;
	}

	unsigned long prevent_optimise_out;

	static noinline unsigned long test_old_get_order(void)
	{
		unsigned long n, total = 0;
		long rep, loop;

		for (rep = 1000000; rep > 0; rep--) {
			for (loop = 0; loop <= 16384; loop += 4) {
				n = 1UL << loop;
				total += old_get_order(n);
			}
		}
		return total;
	}

	static noinline unsigned long test_get_order_old_fls64(void)
	{
		unsigned long n, total = 0;
		long rep, loop;

		for (rep = 1000000; rep > 0; rep--) {
			for (loop = 0; loop <= 16384; loop += 4) {
				n = 1UL << loop;
				total += get_order_old_fls64(n);
			}
		}
		return total;
	}

	static noinline unsigned long test_get_order(void)
	{
		unsigned long n, total = 0;
		long rep, loop;

		for (rep = 1000000; rep > 0; rep--) {
			for (loop = 0; loop <= 16384; loop += 4) {
				n = 1UL << loop;
				total += get_order(n);
			}
		}
		return total;
	}

	int main(int argc, char **argv)
	{
		unsigned long total;

		switch (argc) {
		case 1:
			total = test_old_get_order();
			break;
		case 2:
			total = test_get_order_old_fls64();
			break;
		default:
			total = test_get_order();
			break;
		}
		prevent_optimise_out = total;
		return 0;
	}

This allows me to test the use of the old fls64() implementation and the new fls64() implementation, and also to contrast these with the out-of-line loop-based implementation of get_order(). The results were:

	warthog> time ./get_order
	real    1m37.191s
	user    1m36.313s
	sys     0m0.861s
	warthog> time ./get_order x
	real    0m16.892s
	user    0m16.586s
	sys     0m0.287s
	warthog> time ./get_order x x
	real    0m7.731s
	user    0m7.727s
	sys     0m0.002s

Using the current upstream fls64() as a basis for an inlined get_order() [the second result above] is much faster than using the current out-of-line loop-based get_order() [the first result above]. Using my optimised inline fls64()-based get_order() [the third result above] is faster still.

[ hpa: changed the selection of 32 vs 64 bits to use CONFIG_X86_64 instead of comparing BITS_PER_LONG, updated comments, rebased manually on top of 83d99df7 x86, bitops: Move fls64.h inside __KERNEL__ ]

Signed-off-by: David Howells <dhowells@redhat.com>
Link: http://lkml.kernel.org/r/20111213145654.14362.39868.stgit@warthog.procyon.org.uk
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by H. Peter Anvin

We would include <asm-generic/bitops/fls64.h> even without __KERNEL__, but that doesn't make sense, as:

1. That file provides fls64(), but the corresponding function fls() is not exported to user space.
2. The implementation of fls64.h uses kernel-only symbols.
3. fls64.h is not exported to user space.

This appears to have been a bug introduced in checkin:

	d57594c2 bitops: use __fls for fls64 on 64-bit archs

Cc: Stephen Hemminger <shemminger@vyatta.com>
Cc: Alexander van Heukelum <heukelum@mailshack.com>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Link: http://lkml.kernel.org/r/4EEA77E1.6050009@zytor.com
-
Committed by Ian Campbell

d312ae87 "xen: use maximum reservation to limit amount of usable RAM" clamped the total amount of RAM to the current maximum reservation. This is correct for dom0 but is not correct for guest domains. In order to boot a guest "pre-ballooned" (e.g. with memory=1G but maxmem=2G), so as to allow for future memory expansion, the guest must derive max_pfn from the e820 provided by the toolstack and not from the current maximum reservation (which reflects only the current maximum, not the guest lifetime max). The existing algorithm already behaves correctly if we do not artificially limit the maximum number of pages in the guest case.

For a guest booted with maxmem=512, memory=128 this results in:

	 [    0.000000] BIOS-provided physical RAM map:
	 [    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
	 [    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
	-[    0.000000]  Xen: 0000000000100000 - 0000000008100000 (usable)
	-[    0.000000]  Xen: 0000000008100000 - 0000000020800000 (unusable)
	+[    0.000000]  Xen: 0000000000100000 - 0000000020800000 (usable)
	 ...
	 [    0.000000] NX (Execute Disable) protection: active
	 [    0.000000] DMI not present or invalid.
	 [    0.000000] e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved)
	 [    0.000000] e820 remove range: 00000000000a0000 - 0000000000100000 (usable)
	-[    0.000000] last_pfn = 0x8100 max_arch_pfn = 0x1000000
	+[    0.000000] last_pfn = 0x20800 max_arch_pfn = 0x1000000
	 [    0.000000] initial memory mapped : 0 - 027ff000
	 [    0.000000] Base memory trampoline at [c009f000] 9f000 size 4096
	-[    0.000000] init_memory_mapping: 0000000000000000-0000000008100000
	-[    0.000000]  0000000000 - 0008100000 page 4k
	-[    0.000000] kernel direct mapping tables up to 8100000 @ 27bb000-27ff000
	+[    0.000000] init_memory_mapping: 0000000000000000-0000000020800000
	+[    0.000000]  0000000000 - 0020800000 page 4k
	+[    0.000000] kernel direct mapping tables up to 20800000 @ 26f8000-27ff000
	 [    0.000000] xen: setting RW the range 27e8000 - 27ff000
	 [    0.000000] 0MB HIGHMEM available.
	-[    0.000000] 129MB LOWMEM available.
	-[    0.000000]   mapped low ram: 0 - 08100000
	-[    0.000000]   low ram: 0 - 08100000
	+[    0.000000] 520MB LOWMEM available.
	+[    0.000000]   mapped low ram: 0 - 20800000
	+[    0.000000]   low ram: 0 - 20800000

With this change "xl mem-set <domain> 512M" will successfully increase the guest RAM (by reducing the balloon). There is no change for dom0.

Reported-and-Tested-by: George Shuklin <george.shuklin@gmail.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: stable@kernel.org
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Committed by Timo Teräs

My box with the following cpuinfo still needs the cx8 enabling:

	vendor_id	: CentaurHauls
	cpu family	: 6
	model		: 13
	model name	: VIA Eden Processor 1200MHz
	stepping	: 0
	cpu MHz		: 1199.940
	cache size	: 128 KB

This fixes valgrind to work on my box (it requires and checks cx8 from cpuinfo).

Signed-off-by: Timo Teräs <timo.teras@iki.fi>
Link: http://lkml.kernel.org/r/1323961888-10223-1-git-send-email-timo.teras@iki.fi
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 15 December 2011 (3 commits)
-
-
Committed by Jan Beulich

They had several problems/shortcomings:

Only the first memory operand was mentioned in the 2x32-bit asm() operands, and the 2x64-bit version had a memory clobber. The first allowed the compiler to not recognize the need to re-load the data in case it had it cached in some register, and the second was overly destructive.

The memory operand in the 2x32-bit asm() was declared to be an output only.

The types of the local copies of the old and new values were incorrect (as in other per-CPU ops, the types of the per-CPU variables accessed should be used here, to make sure the respective types are compatible).

The __dummy variable was pointless (and needlessly initialized in the 2x32-bit case), given that local copies of the inputs already exist.

The 2x64-bit variant forced the address of the first object into %rsi, even though this is needed only for the call to the emulation function; the real cmpxchg16b can operate on any memory operand.

At once also change the return value type to what it really is: 'bool'.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/4EE86D6502000078000679FE@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Joerg Roedel

Add the flags to get rid of the [9] and [10] feature names in cpuinfo's 'power management' field and replace them with meaningful names.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Link: http://lkml.kernel.org/r/1323875574-17881-1-git-send-email-joerg.roedel@amd.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Fenghua Yu

Thermal throttle and power limit events are not defined as MCE errors in the x86 architecture and should not generate MCE errors in mcelog. The current kernel generates fake, software-defined MCE errors for these events. This may confuse users, who may think the machine has real MCE errors when only thermal throttle or power limit events have happened. To make it worse, buggy firmware on some platforms may falsely generate the events, and the kernel then reports what users take for real hardware errors. Although the firmware bugs should be fixed, the kernel should not report MCE errors either, so mcelog is not a good mechanism for reporting these events.

Instead, count the events in their respective counters (core_power_limit_count, package_power_limit_count, core_throttle_count, and package_throttle_count) in /sys/devices/system/cpu/cpu#/thermal_throttle/. Users can check the counters for each event on each CPU. Please note that all CPUs in one package report duplicate counters; it is the user application's responsibility to derive a package-level counter for a package.

This patch doesn't report package-level power limit, core-level power limit, or package-level thermal throttle events in mcelog. When these events happen, they are only counted in the respective sysfs counters. Since core-level thermal throttle has been legacy code in the kernel for a while and users have accepted it as an MCE error in mcelog, core-level thermal throttle is still reported in mcelog; in the meantime, the event is counted in a sysfs counter as well.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Borislav Petkov <bp@amd64.org>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/20111215001945.GA21009@linux-os.sc.intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
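Reading one of the new counters from userspace is a plain sysfs file read; a small example using the directory and counter name given above (cpu0 chosen arbitrarily):

	#include <stdio.h>

	int main(void)
	{
		const char *path =
			"/sys/devices/system/cpu/cpu0/thermal_throttle/core_throttle_count";
		unsigned long long count;
		FILE *f = fopen(path, "r");

		if (!f) {
			perror(path);
			return 1;
		}
		if (fscanf(f, "%llu", &count) == 1)
			printf("cpu0 core throttle events: %llu\n", count);
		fclose(f);
		return 0;
	}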
-
- 14 December 2011 (4 commits)
-
-
Committed by Borislav Petkov

Add a function which drains any MCEs that were logged during boot, before the decoder chains were registered.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
Committed by Borislav Petkov

No functional change; this is done so that, in a follow-on patch, all queued-up MCEs can be decoded after registering on the chain.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
Committed by Borislav Petkov

Add Andreas and me as the current maintainers.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
Committed by Borislav Petkov

Once we've found and validated the ucode patch for the current CPU, there's no need to iterate over the remaining patches in the binary image. Exit then and save ourselves a bunch of cycles.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-