- 16 May 2014, 3 commits
-
Committed by Thomas Gleixner

No more users outside of itanic. Confine it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Tony Luck <tony.luck@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/20140507154338.700598389@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Thomas Gleixner

Just stumbled over it when staring into ia64 irq handling.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Tony Luck <tony.luck@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/20140507154336.566531793@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Thomas Gleixner

ia64 and x86 share this driver. x86 is moving to a different irq allocation and ia64 keeps its private irq_create/destroy stuff. Use macros to redirect to one or the other. Yes, macros to avoid include hell.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Joerg Roedel <joro@8bytes.org>
Cc: x86@kernel.org
Cc: linux-ia64@vger.kernel.org
Cc: iommu@lists.linux-foundation.org
Link: http://lkml.kernel.org/r/20140507154336.372289825@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
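
A sketch of the redirection described above, assuming the dmar_alloc_hwirq()/dmar_free_hwirq() names used by the DMAR driver and the new x86 hwirq allocator; the exact mapping is illustrative, not the verbatim patch:

	/* include/linux/dmar.h -- hypothetical sketch */
	#ifdef CONFIG_IA64
	/* ia64 keeps its private allocator */
	#define dmar_alloc_hwirq()	create_irq()
	#define dmar_free_hwirq(irq)	destroy_irq(irq)
	#else
	/* x86 moves to the generic hwirq allocation */
	#define dmar_alloc_hwirq()	irq_alloc_hwirq(-1)
	#define dmar_free_hwirq(irq)	irq_free_hwirq(irq)
	#endif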
-
- 26 April 2014, 1 commit
-
Committed by Linus Torvalds

The mmu-gather operation 'tlb_flush_mmu()' has done two things: the actual tlb flush operation, and the batched freeing of the pages that the TLB entries pointed at.

This splits the operation into separate phases, so that the forced batched flushing done by zap_pte_range() can now do the actual TLB flush while still holding the page table lock, but delay the batched freeing of all the pages to after the lock has been dropped.

This in turn allows us to avoid a race condition between set_page_dirty() (as called by zap_pte_range() when it finds a dirty shared memory pte) and page_mkclean(): because we now flush all the dirty page data from the TLB's while holding the pte lock, page_mkclean() will be held up walking the (recently cleaned) page tables until after the TLB entries have been flushed from all CPU's.

Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Dave Hansen <dave.hansen@intel.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
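
A minimal sketch of the resulting two-phase split, assuming the tlb_flush_mmu_tlbonly()/tlb_flush_mmu_free() helper names; the bodies are illustrative:

	static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
	{
		tlb_flush(tlb);			/* invalidate the hardware TLBs */
		tlb->need_flush = 0;
	}

	static void tlb_flush_mmu_free(struct mmu_gather *tlb)
	{
		struct mmu_gather_batch *batch;

		for (batch = &tlb->local; batch; batch = batch->next) {
			free_pages_and_swap_cache(batch->pages, batch->nr);
			batch->nr = 0;
		}
		tlb->active = &tlb->local;
	}

	void tlb_flush_mmu(struct mmu_gather *tlb)
	{
		tlb_flush_mmu_tlbonly(tlb);	/* may run under the pte lock */
		tlb_flush_mmu_free(tlb);	/* deferred until the lock drops */
	}

zap_pte_range() can then call the tlbonly phase while still holding the page table lock and leave the freeing phase for after the lock is released.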
-
- 17 April 2014, 1 commit
-
Committed by Tony Luck

The April 2014 Itanium processor specification update:

  http://www.intel.com/content/www/us/en/processors/itanium/itanium-specification-update.html

describes this erratum:

=========================================================================
237. Under a complex set of conditions, store to load forwarding for a
sub 8-byte load may complete incorrectly

Problem: A load instruction may complete incorrectly when a code sequence
using 4-byte or smaller load and store operations to the same address is
executed in combination with specific timing of all the following
concurrent conditions: store to load forwarding, alignment checking
enabled, a mis-predicted branch, and complex cache utilization activity.

Implication: The affected sub 8-byte instruction may complete incorrectly,
resulting in unpredictable system behavior. There is an extremely low
probability of exposure due to the significant number of complex
microarchitectural concurrent conditions required to encounter the
erratum.

Workaround: Set PSR.ac = 0 to completely avoid the erratum. Disabling
Hyper-Threading will significantly reduce exposure to the conditions that
contribute to encountering the erratum.

Status: See the Summary Table of Changes for the affected steppings.
=========================================================================

[The table of changes essentially lists all models from McKinley to Tukwila.]

The PSR.ac bit controls whether the processor will always generate an unaligned reference trap (0x5a00) for a misaligned data access (when PSR.ac=1), or whether it will let the access succeed when running on a cpu that implements logic to handle some unaligned accesses.

Way back in 2008, in commit b704882e ("[IA64] Rationalize kernel mode alignment checking"), we made the decision to always enable strict checking. We were already doing so in trap/interrupt context because the common preamble code set this bit - but the rest of supervisor code (and by inheritance user code) ran with PSR.ac=0.

We now reverse that decision and set PSR.ac=0 everywhere in the kernel (also inherited by user processes). This avoids the erratum using the method described in the Itanium specification update. The net effect for users is that the processor will handle unaligned accesses when it can (typically with a tiny performance bubble in the pipeline ... but much less invasive than taking a trap and having the OS perform the access).

Signed-off-by: Tony Luck <tony.luck@intel.com>
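
For illustration only -- a hypothetical misaligned access of the kind PSR.ac governs (the cast below is undefined behaviour in portable C; it is shown purely to produce a misaligned 4-byte load):

	int read_misaligned(const char *buf)
	{
		/*
		 * With PSR.ac = 1 this load always raises the unaligned
		 * reference trap (0x5a00) and the OS must emulate it;
		 * with PSR.ac = 0 the hardware may complete it directly,
		 * at worst with a small pipeline bubble.
		 */
		return *(const int *)(buf + 1);
	}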
-
- 08 April 2014, 1 commit
-
Committed by Josh Triplett

Fix breakage which will be exposed by the patch "kconfig: make allnoconfig disable options behind EMBEDDED and EXPERT".

arch/ia64/kernel/unaligned.c uses tty_write_message to print an unaligned access exception to the TTY of the current user process. Enable TTY to prevent a build error.

This is a minimal fix, on the basis that few people on ia64 will care deeply enough about kernel size to turn off TTY. Ideally, I'd instead suggest dropping the tty_write_message entirely and just leaving the printk. Bonus: no need to sprintf first.

Signed-off-by: Josh Triplett <josh@joshtriplett.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 29 March 2014, 1 commit
-
Committed by Kees Cook

The buffer being sent to printk has already had its format strings resolved. The string should not be reinterpreted again, to avoid unintended format strings leaking into printk.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
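
The classic pattern this guards against, sketched with a hypothetical buffer and input (the exact ia64 call site may differ):

	char buf[128];

	snprintf(buf, sizeof(buf), "user data: %s", untrusted);
	printk(buf);		/* BAD:  any '%' in untrusted is parsed again */
	printk("%s", buf);	/* GOOD: buf is emitted verbatim */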
-
- 20 March 2014, 5 commits
-
Committed by AKASHI Takahiro

Currently AUDITSYSCALL has a long list of architecture dependencies:

	depends on AUDIT && (X86 || PARISC || PPC || S390 || IA64 || UML ||
			     SPARC64 || SUPERH || (ARM && AEABI && !OABI_COMPAT) || ALPHA)

The purpose of this patch is to replace that list with HAVE_ARCH_AUDITSYSCALL for simplicity.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com> (arm)
Acked-by: Richard Guy Briggs <rgb@redhat.com> (audit)
Acked-by: Matt Turner <mattst88@gmail.com> (alpha)
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Signed-off-by: Eric Paris <eparis@redhat.com>
-
Committed by Srivatsa S. Bhat

Subsystems that want to register CPU hotplug callbacks, as well as perform initialization for the CPUs that are already online, often do it as shown below:

	get_online_cpus();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	register_cpu_notifier(&foobar_cpu_notifier);

	put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently with CPU hotplug operations).

Instead, the correct and race-free way of performing the callback registration is:

	cpu_notifier_register_begin();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	/* Note the use of the double underscored version of the API */
	__register_cpu_notifier(&foobar_cpu_notifier);

	cpu_notifier_register_done();

Fix the error injection code in ia64 by using this latter form of callback registration.

Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Srivatsa S. Bhat

Subsystems that want to register CPU hotplug callbacks, as well as perform initialization for the CPUs that are already online, often do it as shown below:

	get_online_cpus();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	register_cpu_notifier(&foobar_cpu_notifier);

	put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently with CPU hotplug operations).

Instead, the correct and race-free way of performing the callback registration is:

	cpu_notifier_register_begin();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	/* Note the use of the double underscored version of the API */
	__register_cpu_notifier(&foobar_cpu_notifier);

	cpu_notifier_register_done();

Fix the topology code in ia64 by using this latter form of callback registration.

Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Srivatsa S. Bhat

Subsystems that want to register CPU hotplug callbacks, as well as perform initialization for the CPUs that are already online, often do it as shown below:

	get_online_cpus();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	register_cpu_notifier(&foobar_cpu_notifier);

	put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently with CPU hotplug operations).

Instead, the correct and race-free way of performing the callback registration is:

	cpu_notifier_register_begin();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	/* Note the use of the double underscored version of the API */
	__register_cpu_notifier(&foobar_cpu_notifier);

	cpu_notifier_register_done();

Fix the palinfo code in ia64 by using this latter form of callback registration.

Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Srivatsa S. Bhat

Subsystems that want to register CPU hotplug callbacks, as well as perform initialization for the CPUs that are already online, often do it as shown below:

	get_online_cpus();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	register_cpu_notifier(&foobar_cpu_notifier);

	put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently with CPU hotplug operations).

Instead, the correct and race-free way of performing the callback registration is:

	cpu_notifier_register_begin();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	/* Note the use of the double underscored version of the API */
	__register_cpu_notifier(&foobar_cpu_notifier);

	cpu_notifier_register_done();

Fix the salinfo code in ia64 by using this latter form of callback registration.

Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 13 March 2014, 1 commit
-
Committed by Gabriel L. Somlo

Both QEMU and KVM have already accumulated a significant number of optimizations based on the hard-coded assumption that ioapic polarity will always use the ActiveHigh convention, where the logical and physical states of level-triggered irq lines always match (i.e., active(asserted) == high == 1, inactive == low == 0).

QEMU guests are expected to follow the directions given via ACPI and configure the ioapic with polarity 0 (ActiveHigh). However, even when misbehaving guests (e.g. OS X <= 10.9) set the ioapic polarity to 1 (ActiveLow), QEMU will still use the ActiveHigh signaling convention when interfacing with KVM.

This patch modifies KVM to completely ignore the ioapic polarity set by the guest OS, enabling misbehaving guests to work alongside those which comply with the ActiveHigh polarity specified by QEMU's ACPI tables.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Gabriel L. Somlo <somlo@cmu.edu>
[Move documentation to KVM_IRQ_LINE, add ia64. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 12 March 2014, 1 commit
-
Committed by Thomas Gleixner

The [user space] interface does not filter out offline cpus. It merely guarantees that the mask contains at least one online cpu. So the selector in the irq chip implementation needs to make sure to pick only an online cpu, because otherwise:

	Offline core 1
	Set affinity to 0xe (valid, because it overlaps the online mask 0xd)
	cpumask_first() picks core 1, which is offline

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: ia64 <linux-ia64@vger.kernel.org>
Link: http://lkml.kernel.org/r/20140304203100.650414633@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
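
A sketch of the defensive selection, using the standard cpumask helpers; the hypothetical pick_target_cpu() stands in for the real ia64 chip callback, which may differ in detail:

	static unsigned int pick_target_cpu(const struct cpumask *mask)
	{
		unsigned int cpu;

		/* consider only cpus that are both requested and online */
		cpu = cpumask_any_and(mask, cpu_online_mask);
		if (cpu >= nr_cpu_ids)
			cpu = cpumask_first(cpu_online_mask);	/* fallback */
		return cpu;
	}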
-
- 11 March 2014, 2 commits
-
Committed by Bjorn Helgaas

Remove mc_capable() and smt_capable(). Neither is used.

Both were added by commit 5c45bf27 ("sched: mc/smt power savings sched policy"). Uses of both were removed by commit 8e7fbcbc ("sched: Remove stale power aware scheduling remnants and dysfunctional knobs").

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Link: http://lkml.kernel.org/r/20140304210737.16893.54289.stgit@bhelgaas-glaptop.roam.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Johannes Weiner

GFP_THISNODE is for callers that implement their own clever fallback to remote nodes. It restricts the allocation to the specified node and does not invoke reclaim, assuming that the caller will take care of it when the fallback fails, e.g. through a subsequent allocation request without GFP_THISNODE set.

However, many current GFP_THISNODE users only want the node-exclusive aspect of the flag, without actually implementing their own fallback or triggering reclaim if necessary. This results in things like page migration failing prematurely even when there is easily reclaimable memory available, unless kswapd happens to be running already or a concurrent allocation attempt triggers the necessary reclaim.

Convert all callsites that don't implement their own fallback strategy to __GFP_THISNODE. This restricts the allocation to a single node too, but at the same time allows the allocator to enter the slowpath, wake kswapd, and invoke direct reclaim if necessary, to make the allocation happen when memory is full.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Jan Stancek <jstancek@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
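
A sketch of the conversion at a hypothetical call site (the flag names are the real ones; GFP_THISNODE at the time expanded to __GFP_THISNODE | __GFP_NORETRY | __GFP_NOWARN):

	static struct page *grab_local_page(int nid)
	{
		/* before: node-exclusive and no reclaim -- the caller
		 * must implement its own fallback when this fails:
		 *
		 *	return alloc_pages_node(nid, GFP_THISNODE, 0);
		 */

		/* after: still node-exclusive, but the allocator may
		 * wake kswapd and enter direct reclaim before failing */
		return alloc_pages_node(nid, GFP_KERNEL | __GFP_THISNODE, 0);
	}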
-
- 07 March 2014, 2 commits
-
Committed by Jiri Slaby

As the data parameter is not really used by any ftrace_dyn_arch_init(), remove it from ftrace_dyn_arch_init(). This also removes the now-unused addr local variable from ftrace_init().

Note the documentation was imprecise, as it did not suggest setting (*data) to 0.

Link: http://lkml.kernel.org/r/1393268401-24379-4-git-send-email-jslaby@suse.cz
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-arch@vger.kernel.org
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Jiri Slaby

No architecture uses the "data" parameter in ftrace_dyn_arch_init() in any way; each one just sets the value to 0. That value is then used as a return value in the caller -- ftrace_init(), which just checks the retval against zero. Note there is also a "return 0" in every ftrace_dyn_arch_init(). So it is enough to check the retval and remove all the indirect setting of data on all archs.

Link: http://lkml.kernel.org/r/1393268401-24379-3-git-send-email-jslaby@suse.cz
Cc: linux-arch@vger.kernel.org
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
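
The resulting signature change, sketched (per-arch bodies vary, but all reduce to the same shape):

	/* before: every arch had to zero a pointless out-parameter */
	int ftrace_dyn_arch_init(void *data)
	{
		*(unsigned long *)data = 0;
		return 0;
	}

	/* after: the parameter and the indirect zeroing are gone */
	int ftrace_dyn_arch_init(void)
	{
		return 0;
	}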
-
- 05 March 2014, 3 commits
-
Committed by Michael Opdenacker

This patch removes the IRQF_DISABLED flag from the ia64 architecture code. It has been a NOOP since 2.6.35, and it will be removed one day.

Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Cc: paul.gortmaker@windriver.com
Cc: viro@zeniv.linux.org.uk
Cc: srivatsa.bhat@linux.vnet.ibm.com
Cc: andriy.shevchenko@linux.intel.com
Cc: fenghua.yu@intel.com
Cc: tony.luck@intel.com
Link: http://lkml.kernel.org/r/1393964953-17002-1-git-send-email-michael.opdenacker@free-electrons.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Thomas Gleixner

Let the core do the irq_desc resolution. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: ia64 <linux-ia64@vger.kernel.org>
Link: http://lkml.kernel.org/r/20140223212738.099977064@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Matt Fleming

There's no good reason to keep efi_enabled() under CONFIG_X86 anymore, since nothing about the implementation is specific to x86.

Set EFI feature flags in the ia64 boot path instead of claiming to support all features. The old behaviour was actually buggy, since efi.memmap never points to a valid memory map, so we shouldn't have been claiming to support EFI_MEMMAP. Fortunately, this bug was never triggered because EFI_MEMMAP isn't used outside of arch/x86 currently, but that may not always be the case.

Reviewed-and-tested-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
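
A sketch of the feature-flag style this refers to, using flag names from the common EFI code; the exact set the ia64 boot path claims is illustrative:

	/* boot path: declare only the features that actually work */
	set_bit(EFI_BOOT, &efi.flags);
	set_bit(EFI_CONFIG_TABLES, &efi.flags);
	set_bit(EFI_RUNTIME_SERVICES, &efi.flags);
	/* deliberately no EFI_MEMMAP: efi.memmap is not a valid map here */

	/* consumers test features instead of hiding behind CONFIG_X86 */
	if (efi_enabled(EFI_RUNTIME_SERVICES)) {
		/* safe to use EFI runtime services */
	}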
-
- 01 March 2014, 2 commits
-
Committed by Jiang Liu

Now that the ACPI container and ACPI PCI hotplug drivers have been converted to built-in modules, reflect these changes in the IA64 defconfig file.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Jiang Liu

Fix the section mismatch warning by removing the __init annotation from ioc_iova_init(), ioc_init() and acpi_sba_ioc_add(), because they may be called at runtime:

	WARNING: vmlinux.o(.data+0x66ee0): Section mismatch in reference
	from the variable acpi_sba_ioc_handler to the function
	.init.text:acpi_sba_ioc_add()
	The variable acpi_sba_ioc_handler references the function __init
	acpi_sba_ioc_add()

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 19 February 2014, 2 commits
-
Committed by Hanjun Guo

BAD_MADT_ENTRY() is arch independent and will be used by all architectures that parse the MADT, so move it to linux/acpi.h to reduce code duplication.

Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
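
The macro in question, approximately as it appeared in the per-arch copies being consolidated (reproduced from memory, so treat it as a sketch):

	#define BAD_MADT_ENTRY(entry, end) (				      \
		(!entry) || (unsigned long)entry + sizeof(*entry) > end ||    \
		((struct acpi_subtable_header *)entry)->length < sizeof(*entry))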
-
Committed by Paul Bolle

Nothing cares about ACPI_PROCFS. This has been the case since v2.6.38. This Kconfig symbol serves no purpose and its help text is now misleading, so it can safely be removed. If the symbol is needed again in the future, it can be re-added in a commit that adds code which actually uses it.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 15 February 2014, 1 commit
-
Committed by Sander Eikelenboom

Setting the IORESOURCE_ROM_SHADOW flag on a VGA card other than the primary prevents it from reading its own ROM. It would get the contents of the shadow ROM at C000 instead, which belongs to the primary VGA card, and the driver of the secondary card would bail out.

Fix this by checking whether the arch code or the VGA arbiter has already determined the vga_default_device; if so, only apply the fix to that primary video device, and let the comment reflect this.

[bhelgaas: add subject, split x86 & ia64 into separate patches, include vgaarb.h]
Signed-off-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
- 10 February 2014, 2 commits
-
Committed by Tim Chen

This patch allows each architecture to add its own assembly-optimized arch_mcs_spin_lock_contended and arch_mcs_spin_unlock_contended for the MCS lock and unlock functions.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: George Spelvin <linux@horizon.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Alex Shi <alex.shi@linaro.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: "Figo.zhang" <figo1802@gmail.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Waiman Long <waiman.long@hp.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew R Wilcox <matthew.r.wilcox@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1390347382.3138.67.camel@schen9-DESK
Signed-off-by: Ingo Molnar <mingo@kernel.org>
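
For context, the generic fallbacks these arch hooks override look roughly like this (sketched from the mcs_spinlock code of that era):

	#ifndef arch_mcs_spin_lock_contended
	#define arch_mcs_spin_lock_contended(l)			\
	do {							\
		while (!(smp_load_acquire(l)))			\
			arch_mutex_cpu_relax();			\
	} while (0)
	#endif

	#ifndef arch_mcs_spin_unlock_contended
	#define arch_mcs_spin_unlock_contended(l)		\
		smp_store_release((l), 1)
	#endif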
-
Committed by Tim Chen

Clean up the Kbuild files in each architecture, ordering the entries in each Kbuild alphabetically by running the script below:

	for i in arch/*/include/asm/Kbuild
	do
		cat $i | gawk '/^generic-y/ {
			i = 3;
			do {
				for (; i <= NF; i++) {
					if ($i == "\\") {
						getline;
						i = 1;
						continue;
					}
					if ($i != "")
						hdr[$i] = $i;
				}
				break;
			} while (1);
			next;
		}
		// {
			print $0;
		}
		END {
			n = asort(hdr);
			for (i = 1; i <= n; i++)
				print "generic-y += " hdr[i];
		}' > ${i}.sorted;
		mv ${i}.sorted $i;
	done

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Matthew R Wilcox <matthew.r.wilcox@intel.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "Figo.zhang" <figo1802@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Waiman Long <waiman.long@hp.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Alex Shi <alex.shi@linaro.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: George Spelvin <linux@horizon.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
[ Fixed build bug. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 04 February 2014, 2 commits
-
Committed by Bjorn Helgaas

The IOMMU, LSAPIC, IOSAPIC, and PCI host bridge code doesn't care about _PXM values directly; it only needs to know which NUMA node the hardware is on. Use acpi_get_node() directly and remove the _PXM handling.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Bjorn Helgaas

MAX_NUMNODES is typically used for sizing arrays. NUMA_NO_NODE is the usual value for "we don't know what node this is on"; e.g., it is the error return from acpi_get_node(). Change the ioc->node value for unknown nodes from MAX_NUMNODES to NUMA_NO_NODE.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 29 January 2014, 1 commit
-
Committed by Tony Luck

New syscalls for v3.14.

Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 24 January 2014, 2 commits
-
Committed by Ard Biesheuvel

This patch makes a couple of changes to the SMBIOS/DMI scanning code so it can be used on other archs (such as ARM and arm64):

 (a) wrap the calls to ioremap()/iounmap(); this allows the use of a flavor of ioremap() more suitable for random unaligned access;
 (b) allow the non-EFI fallback probe into hardcoded physical address 0xF0000 to be disabled.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Grant Likely <grant.likely@linaro.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Joe Perches

Add #include <linux/cache.h> to define __read_mostly.

Convert cache.h to use uapi/linux/kernel.h instead of linux/kernel.h to avoid recursive #includes. Convert the ALIGN macro to __ALIGN_KERNEL.

printk_once only sets its bool variable once, so mark the variable __read_mostly. Neaten the alignment so it matches the rest of the pr_<level>_once #defines too.

Signed-off-by: Joe Perches <joe@perches.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
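
The resulting macro, approximately (a sketch of the include/linux/printk.h definition after this change):

	#define printk_once(fmt, ...)				\
	({							\
		static bool __print_once __read_mostly;		\
								\
		if (!__print_once) {				\
			__print_once = true;			\
			printk(fmt, ##__VA_ARGS__);		\
		}						\
	})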
-
- 22 January 2014, 1 commit
-
Committed by Mel Gorman

Commit 4b59e6c4 ("mm, show_mem: suppress page counts in non-blockable contexts") introduced SHOW_MEM_FILTER_PAGE_COUNT to suppress PFN walks on large memory machines. Commit c78e9363 ("mm: do not walk all of system memory during show_mem") avoided a PFN walk in the generic show_mem helper, which removes the requirement for SHOW_MEM_FILTER_PAGE_COUNT in that case.

This patch removes the PFN walkers from the arch-specific implementations that report on a per-node or per-zone granularity. ARM and unicore32 still do a PFN walk, as they report memory usage per bank, a much finer granularity at which the debugging information may still be of use. As the remaining arches doing PFN walks have relatively small amounts of memory, this patch simply removes SHOW_MEM_FILTER_PAGE_COUNT.

[akpm@linux-foundation.org: fix parisc]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: James Bottomley <jejb@parisc-linux.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 19 January 2014, 1 commit
-
Committed by Michal Sekletar

For user space packet capturing libraries such as libpcap, there is currently only one way to check which BPF extensions are supported by the kernel; that is, commit aa1113d9 ("net: filter: return -EINVAL if BPF_S_ANC* operation is not supported"). For querying all extensions at once this is rather inconvenient.

Therefore, this patch introduces a new option which can be used as an argument to getsockopt(), and allows one to obtain information about which BPF extensions are supported by the current kernel. As David Miller suggests, we do not need to define any bits right now, and the status quo can just return 0, in order to state that this version supports SKF_AD_PROTOCOL up to SKF_AD_PAY_OFFSET. Later additions to the BPF extensions need to add their bits to the bpf_tell_extensions() function, as documented in the comment.

Signed-off-by: Michal Sekletar <msekleta@redhat.com>
Cc: David Miller <davem@davemloft.net>
Reviewed-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
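
A hedged userspace usage sketch; SO_BPF_EXTENSIONS is the option this change adds (the fallback value 48 matches the asm-generic socket headers, but check your own):

	#include <stdio.h>
	#include <sys/socket.h>

	#ifndef SO_BPF_EXTENSIONS
	#define SO_BPF_EXTENSIONS 48
	#endif

	int query_bpf_extensions(int sock)
	{
		int val = 0;
		socklen_t len = sizeof(val);

		if (getsockopt(sock, SOL_SOCKET, SO_BPF_EXTENSIONS,
			       &val, &len) < 0)
			return -1;	/* old kernel: option unsupported */

		/* 0 currently means SKF_AD_PROTOCOL..SKF_AD_PAY_OFFSET work */
		printf("BPF extension bits: %#x\n", val);
		return 0;
	}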
-
- 12 January 2014, 1 commit
-
Committed by Peter Zijlstra

A number of situations currently require the heavyweight smp_mb(), even though there is no need to order prior stores against later loads. Many architectures have much cheaper ways to handle these situations, but the Linux kernel currently has no portable way to make use of them.

This commit therefore supplies smp_load_acquire() and smp_store_release() to remedy the situation. The new smp_load_acquire() primitive orders the specified load against any subsequent reads or writes, while the new smp_store_release() primitive orders the specified store against any prior reads or writes. These primitives allow array-based circular FIFOs to be implemented without an smp_mb(), and also allow a theoretical hole in rcu_assign_pointer() to be closed at no additional expense on most architectures.

In addition, the RCU experience transitioning from explicit smp_read_barrier_depends() and smp_wmb() to rcu_dereference() and rcu_assign_pointer(), respectively, resulted in substantial improvements in readability. It therefore seems likely that replacing other explicit barriers with smp_load_acquire() and smp_store_release() will provide similar benefits. It appears that roughly half of the explicit barriers in core kernel code might be so replaced.

[Changelog by PaulMck]
Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Victor Kaplansky <VICTORK@il.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Link: http://lkml.kernel.org/r/20131213150640.908486364@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
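
A minimal single-producer/single-consumer ring sketch of the kind the changelog mentions -- illustrative, not the kernel's kfifo:

	#define RING_SIZE 16	/* power of two, for cheap index wrap */

	struct ring {
		unsigned int	head;	/* written only by the producer */
		unsigned int	tail;	/* written only by the consumer */
		void		*slot[RING_SIZE];
	};

	static bool ring_put(struct ring *r, void *item)	/* producer */
	{
		unsigned int head = r->head;

		/* acquire pairs with the consumer's release of tail */
		if (head - smp_load_acquire(&r->tail) >= RING_SIZE)
			return false;				/* full */
		r->slot[head % RING_SIZE] = item;
		/* release publishes the slot before the new head is seen */
		smp_store_release(&r->head, head + 1);
		return true;
	}

	static void *ring_get(struct ring *r)			/* consumer */
	{
		unsigned int tail = r->tail;
		void *item;

		/* acquire pairs with the producer's release of head */
		if (tail == smp_load_acquire(&r->head))
			return NULL;				/* empty */
		item = r->slot[tail % RING_SIZE];
		/* release lets the producer safely reuse the slot */
		smp_store_release(&r->tail, tail + 1);
		return item;
	}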
-
- 08 January 2014, 1 commit
-
Committed by Lv Zheng

This change adds a runtime option that forces ACPICA to use the RSDT instead of the XSDT. Although the ACPI spec requires that the XSDT be used instead of the RSDT, the XSDT has been found to be corrupt or ill-formed on some machines. This option already exists in the Linux kernel; when it was back-ported to ACPICA, the code was rewritten to follow the ACPICA coding style. This patch is the result of that integration.

Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 03 January 2014, 1 commit
-
Committed by Mark Salter

Architectures which might use an i8042 for serial IO to a keyboard, mouse, etc. should select ARCH_MIGHT_HAVE_PC_SERIO.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
-
- 18 December 2013, 1 commit
-
Committed by David S. Miller

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 December 2013, 1 commit
-
Committed by Takuya Yoshikawa

Since commit 15ad7146 ("KVM: Use the scheduler preemption notifiers to make kvm preemptible"), the remaining code in this function is a simple cond_resched() call with an extra need_resched() check, which was there to avoid dropping VCPUs unnecessarily. Now it is meaningless.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-