- July 5, 2010 (1 commit)

By Peter Zijlstra
Reimplement augmented RB-trees without sprinkling extra branches all over the RB-tree code (which lives in the scheduler hot path). This approach is 'borrowed' from Fabio's BFQ implementation and relies on traversing the rebalance path after the RB-tree op to correct the heap property for insertion/removal and make up for the damage done by the tree rotations. For insertion the rebalance path is trivially that from the new node upwards to the root; for removal it is that from the deepest node in the path from the to-be-removed node that will still be around after the removal.

[ This patch also fixes a video driver regression reported by Ali Gholami Rudi - the memtype->subtree_max_end was updated incorrectly. ]

Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Ali Gholami Rudi <ali@rudi.ir>
Cc: Fabio Checconi <fabio@gandalf.sssup.it>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1275414172.27810.27961.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
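To make the mechanism concrete, here is a minimal, hedged sketch of the propagation step described above (illustrative types and names, not the kernel's actual rbtree code): after the tree op and rotations, walk from the affected node up toward the root and recompute the augmented per-node value until it stops changing.

```c
#include <linux/types.h>
#include <linux/rbtree.h>

struct memtype_like {
	struct rb_node rb;
	u64 start, end;
	u64 subtree_max_end;	/* max 'end' in this node's subtree */
};

/* Recompute one node's augmented value from its own range and its children. */
static u64 compute_subtree_max_end(struct memtype_like *n)
{
	u64 max = n->end;
	struct memtype_like *child;

	if (n->rb.rb_left) {
		child = rb_entry(n->rb.rb_left, struct memtype_like, rb);
		if (child->subtree_max_end > max)
			max = child->subtree_max_end;
	}
	if (n->rb.rb_right) {
		child = rb_entry(n->rb.rb_right, struct memtype_like, rb);
		if (child->subtree_max_end > max)
			max = child->subtree_max_end;
	}
	return max;
}

/* Walk the rebalance path upward, restoring the heap property. */
static void propagate_subtree_max_end(struct rb_node *node)
{
	while (node) {
		struct memtype_like *n = rb_entry(node, struct memtype_like, rb);
		u64 new_max = compute_subtree_max_end(n);

		if (n->subtree_max_end == new_max)
			break;	/* ancestors above cannot change either */
		n->subtree_max_end = new_max;
		node = rb_parent(node);
	}
}
```

Stopping as soon as a node's value is unchanged is what keeps the fixup cheap: an unchanged node cannot affect its ancestors, so no extra branches are needed inside the core RB-tree routines.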

- July 3, 2010 (1 commit)

By Vince Weaver
While doing some performance counter validation tests on some assembly language programs I noticed that the "branches:u" count was very wrong on AMD machines. It looks like the wrong event was selected.

Signed-off-by: Vince Weaver <vweaver1@eecs.utk.edu>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: <stable@kernel.org>
LKML-Reference: <alpine.DEB.2.00.1007011526010.23160@cl320.eecs.utk.edu>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- July 1, 2010 (1 commit)

By Darrick J. Wong
The x3950 family can have as many as 256 PCI buses in a single system, so change the limits to the maximum. Since there can only be 256 PCI buses in one domain, we no longer need the BUG_ON check.

Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
LKML-Reference: <20100701004519.GQ15515@tux1.beaverton.ibm.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>

- June 30, 2010 (1 commit)

By Frederic Weisbecker
Before we had a generic breakpoint layer, x86 used to send a SIGTRAP for any debug event that happened in userspace, except if it was caused by lazy dr7 switches. Currently we only send such a signal for single-step or breakpoint events.

However, there are three other kinds of debug exceptions:

- Debug register access detected: trigger an exception if the next instruction touches the debug registers. We don't use it.
- Task switch, but we don't use the TSS.
- icebp/int01 trap. This instruction (0xf1) is undocumented and generates an int 1 exception. Unlike single-stepping through the TF flag, it doesn't set the single-step origin of the exception in dr6.

icebp used to be reported in userspace using trap signals, but this has been incidentally broken by the new breakpoint code. Re-enable it. Since this is the only debug event that doesn't set anything in dr6, this is all we have to check.

This fixes a regression in Wine where World of Warcraft got broken, as it uses this for software protection checks. Other apps probably do as well.

Reported-and-tested-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Prasad <prasad@linux.vnet.ibm.com>
Cc: 2.6.33.x 2.6.34.x <stable@kernel.org>
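A hedged sketch of the resulting check (an illustrative handler fragment, not the kernel's actual do_debug() code): because icebp leaves no status bits behind in dr6, "nothing we recognize is set" becomes its signature, and the handler falls back to delivering SIGTRAP.

```c
#include <linux/sched.h>	/* current, send_sig() */
#include <linux/ptrace.h>	/* struct pt_regs, user_mode() */

#define ILL_DR_TRAP_BITS	0x0000000fUL	/* B0-B3: hw breakpoint hits */
#define ILL_DR_STEP		0x00004000UL	/* BS: TF single-step        */

/* Illustrative #DB dispatch, assuming dr6 was read and cleared earlier. */
static void demo_handle_debug(struct pt_regs *regs, unsigned long dr6)
{
	if (dr6 & ILL_DR_TRAP_BITS) {
		/* hardware breakpoint: hand off to the hw-breakpoint layer */
	} else if (dr6 & ILL_DR_STEP) {
		/* TF single-step: existing SIGTRAP path for ptrace etc. */
	} else if (user_mode(regs)) {
		/*
		 * No status bits at all: the only remaining user-visible
		 * cause is icebp (0xf1), so report it as a trap signal.
		 */
		send_sig(SIGTRAP, current, 0);
	}
}
```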

- June 25, 2010 (1 commit)

By Darrick J. Wong
Newer systems (x3950M2) can have 48 PHBs per chassis and 8 chassis, so bump the limits up and provide an explanation of the requirements for each class.

Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: Corinna Schultz <cschultz@linux.vnet.ibm.com>
Cc: <stable@kernel.org>
LKML-Reference: <20100624212647.GI15515@tux1.beaverton.ibm.com>
[ v2: Fixed build bug, added back PHBS_PER_CALGARY == 4 ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- June 20, 2010 (1 commit)

By Thomas Backlund
Dell Precision WorkStation T7400 freezes on reboot unless reboot=b is used.

Reference: https://qa.mandriva.com/show_bug.cgi?id=58017

Signed-off-by: Thomas Backlund <tmb@mandriva.org>
LKML-Reference: <4C1CC6E9.6000701@mandriva.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- June 19, 2010 (1 commit)

By Andi Kleen
This fixes the "-Os breaks with gcc 4.5" bug: rdtsc_barrier needs to be force-inlined, otherwise user space will jump into kernel space and kill init.

I believe this also addresses http://gcc.gnu.org/bugzilla/show_bug.cgi?id=44129

Signed-off-by: Andi Kleen <ak@linux.intel.com>
LKML-Reference: <20100618210859.GA10913@basil.fritz.box>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@kernel.org>
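The essence of the fix, sketched with an illustrative function (the real rdtsc_barrier body differs and uses the kernel's alternatives machinery): under -Os a plain `inline` is only a hint, so the annotation has to be the forcing variant.

```c
#include <linux/compiler.h>

/*
 * Illustrative only: the important change is the attribute. With plain
 * "inline", gcc 4.5 at -Os could emit an out-of-line copy, which breaks
 * callers (such as the vDSO/vsyscall path) that must never make a real
 * call into the kernel image. __always_inline forbids any out-of-line copy.
 */
static __always_inline void demo_rdtsc_barrier(void)
{
	asm volatile("mfence" ::: "memory");	/* stand-in serializing fence */
}
```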

- June 12, 2010 (2 commits)

By Venkatesh Pallipadi
subtree_max_end that was recently added to struct memtype was not getting properly initialized, resulting in:

WARNING: kmemcheck: Caught 64-bit read from uninitialized memory in memtype_rb_augment_cb()

reported here: https://bugzilla.kernel.org/show_bug.cgi?id=16092

This change fixes the problem.

Reported-by: Christian Casteyde <casteyde.christian@free.fr>
Tested-by: Christian Casteyde <casteyde.christian@free.fr>
Signed-off-by: Venkatesh Pallipadi <venki@google.com>
LKML-Reference: <1276217101-11515-1-git-send-email-venki@google.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>

By Yinghai Lu
Yannick found that video does not work with 2.6.34. The cause of this bug was that the BIOS had assigned the wrong range to the PCI bridge above the video device. Before 2.6.34 the kernel would have shrunk the size of the bridge window, but since d65245c3 ("PCI: don't shrink bridge resources") the kernel will avoid shrinking BIOS ranges.

So zero out the old range if we fail to claim it at boot time; this will cause us to allocate a new range at startup, restoring the 2.6.34 behavior.

Fixes regression https://bugzilla.kernel.org/show_bug.cgi?id=16009

Reported-by: Yannick <yannick.roehlly@free.fr>
Acked-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
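A rough sketch of the "zero out the old range" idea (the helper name is hypothetical; the real change lives in the x86 PCI resource-claiming path): if a BIOS-assigned bridge window cannot be claimed, clear the resource so the allocator assigns a fresh range later in boot.

```c
#include <linux/pci.h>
#include <linux/ioport.h>

/* Illustrative: give up on a bridge window we could not claim at boot. */
static void demo_clear_unclaimable_window(struct pci_dev *bridge, int idx)
{
	struct resource *res = &bridge->resource[idx];

	if (pci_claim_resource(bridge, idx) < 0) {
		dev_info(&bridge->dev,
			 "releasing BIOS-assigned window %pR\n", res);
		res->start = 0;
		res->end = 0;
		res->flags = 0;	/* empty resource: forces reallocation later */
	}
}
```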

- June 11, 2010 (2 commits)

By Andi Kleen
Catch missing conversion to the register structure "glove box" scheme. Found by gcc 4.6's new warnings.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
LKML-Reference: <20100610111040.F1781B1A2B@basil.firstfloor.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>

By Andi Kleen
Avoid hundreds of warnings with a gcc 4.6 -Wall build.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- June 10, 2010 (3 commits)

By Matthew Garrett
Saving platform non-volatile state may be required for suspend to RAM as well as hibernation. Move it to more generic code.

Signed-off-by: Matthew Garrett <mjg@redhat.com>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Tested-by: Maxim Levitsky <maximlevitsky@gmail.com>
Signed-off-by: Len Brown <len.brown@intel.com>

By Stephane Eranian
Based on Intel Vol3b (March 2010), the event SNOOPQ_REQUEST_OUTSTANDING is restricted to counters 0,1, so update the event table for Intel Westmere accordingly.

Signed-off-by: Stephane Eranian <eranian@google.com>
Cc: peterz@infradead.org
Cc: paulus@samba.org
Cc: davem@davemloft.net
Cc: fweisbec@gmail.com
Cc: perfmon2-devel@lists.sf.net
Cc: eranian@gmail.com
Cc: <stable@kernel.org> # .34.x
LKML-Reference: <4c10cb56.5120e30a.2eb4.ffffc3de@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

By Eric W. Biederman
When I introduced the global variable gsi_end I thought gsi_end on io_apics was one past the end of the gsi range for the io_apic. After it was pointed out that the range on io_apics was inclusive, I changed my global variable to match. That was a big mistake: inclusive semantics without a range start cannot describe the case when no gsi's are allocated. Describing the case where no gsi's are allocated is important in sfi.c and mpparse.c so that we can assign gsi numbers instead of blindly copying the gsi assignments the BIOS has done, as we do in the acpi case.

To keep from getting the global variable confused with the gsi range end, rename it gsi_top. To allow describing the case where no gsi's are allocated, have gsi_top be one past the highest gsi number seen in the system.

This fixes an off-by-one bug in sfi.c:
Reported-by: jacob pan <jacob.jun.pan@linux.intel.com>

This fixes the same off-by-one bug in mpparse.c.

This fixes an off-by-one bug in acpi/boot.c:irq_to_gsi:
Reported-by: Yinghai <yinghai.lu@oracle.com>

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
LKML-Reference: <m17hm9jre7.fsf_-_@fess.ebiederm.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
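A tiny standalone illustration of why an exclusive "top" (one past the highest GSI seen) is easier to reason about than an inclusive "end" (variable names are hypothetical): with the exclusive form, zero naturally means "nothing allocated yet".

```c
#include <stdio.h>

/*
 * Hypothetical example: gsi_top == 0 means no GSIs registered, whereas an
 * inclusive "gsi_end" cannot distinguish "nothing" from "only GSI 0 exists".
 */
int main(void)
{
	unsigned int gsi_top = 0;	/* nothing registered yet */
	unsigned int seen[] = { 15, 23, 47 };

	for (unsigned int i = 0; i < 3; i++)
		if (seen[i] + 1 > gsi_top)
			gsi_top = seen[i] + 1;

	/* New GSIs can now be handed out starting at gsi_top. */
	printf("next free gsi = %u\n", gsi_top);
	return 0;
}
```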

- June 9, 2010 (4 commits)

By Avi Kivity
If cr0.wp=0, we have to allow the guest kernel access to a page with pte.w=0. We do that by setting spte.w=1, since the host cr0.wp must remain set so the host can write-protect pages. Once we allow write access, we must remove user access, otherwise we mistakenly allow the user to write the page.

Reviewed-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
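The permission adjustment described above, as a hedged sketch (illustrative bit names and helper, not KVM's actual shadow-MMU code):

```c
/*
 * Illustrative only. With guest cr0.wp = 0 the guest kernel may write
 * through a pte with W=0, so the shadow pte must gain write access; to
 * keep guest *user* mode from also writing it, user access is dropped.
 */
static unsigned long demo_adjust_spte_for_wp0(unsigned long spte,
					      int guest_pte_writable)
{
	const unsigned long SPTE_W = 1UL << 1;	/* x86 PTE R/W bit */
	const unsigned long SPTE_U = 1UL << 2;	/* x86 PTE U/S bit */

	if (!guest_pte_writable) {
		spte |= SPTE_W;		/* let the guest kernel write */
		spte &= ~SPTE_U;	/* but take away user access  */
	}
	return spte;
}
```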

By Marcelo Tosatti
Always invalidate spte and flush TLBs when changing page size, to make sure different sized translations for the same address are never cached in a CPU's TLB. Currently the only case where this occurs is when a non-leaf spte pointer is overwritten by a leaf, large spte entry. This can happen after dirty logging is disabled on a memslot, for example.

Noticed by Andrea.

KVM-Stable-Tag
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>

By Joerg Roedel
This patch implements a workaround for AMD erratum 383 into KVM. Without this erratum fix it is possible for a guest to kill the host machine. This patch implements the suggested workaround for hypervisors, which will be published by the next revision guide update.

[jan: fix overflow warning on i386]
[xiao: fix unused variable warning]

Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>

By Joerg Roedel
This patch moves handling of the MC vmexits to an earlier point in the vmexit. The handle_exit function is too late because the vcpu might already have changed its physical cpu.

Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>

- June 8, 2010 (2 commits)

By Andres Salomon
As Ingo pointed out in a separate patch, we should be using __ASSEMBLY__. Make that the case in pgtable headers.

Signed-off-by: Andres Salomon <dilinger@queued.net>
LKML-Reference: <20100605114042.35ac69c1@dev.queued.net>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

By Ondrej Zary
Save/restore the MISC_ENABLE register on suspend/resume. This fixes an OOPS (invalid opcode) on resume from STR on Asus P4P800-VM, which wakes up with MWAIT disabled.

Fixes https://bugzilla.kernel.org/show_bug.cgi?id=15385

Signed-off-by: Ondrej Zary <linux@rainbow-software.org>
Tested-by: Alan Stern <stern@rowland.harvard.edu>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>

- June 3, 2010 (1 commit)

By Ian Campbell
The core suspend/resume code is run from stop_machine on CPU0, but parts of the suspend/resume machinery (including xen_arch_resume) are run on whichever CPU happened to schedule the xenwatch kernel thread. As part of the non-core resume code, xen_arch_resume is called in order to restart the timer tick on non-boot processors. The boot processor itself is taken care of by core timekeeping code.

xen_arch_resume uses smp_call_function, which does not call the given function on the current processor. This means that we can end up with one CPU not receiving timer ticks if the xenwatch thread happened to be scheduled on CPU > 0.

Use on_each_cpu instead of smp_call_function to ensure the timer tick is resumed everywhere.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Stable Kernel <stable@kernel.org> # .32.x
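The API difference that matters here, sketched with the real SMP primitives (the callback body is a placeholder):

```c
#include <linux/smp.h>

static void demo_resume_local_timer(void *unused)
{
	/* re-arm the per-cpu tick on whichever CPU runs this */
}

static void demo_resume_timers_everywhere(void)
{
	/*
	 * smp_call_function(fn, info, wait) runs fn on every CPU *except*
	 * the calling one, so the CPU running xenwatch would be skipped:
	 *
	 *	smp_call_function(demo_resume_local_timer, NULL, 1);
	 *
	 * on_each_cpu() also runs the function on the current CPU.
	 */
	on_each_cpu(demo_resume_local_timer, NULL, 1);
}
```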

- June 2, 2010 (1 commit)

By Borislav Petkov
Percpu initialization happens now after booting the cores on the machine, and this causes them all to be displayed as belonging to node 0:

Jun 8 05:57:21 kepek kernel: [ 0.106999] Booting Node 0, Processors #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 #16 #17 #18 #19 #20 #21 #22 #23 Ok.

Use early_cpu_to_node() to get the correct node of each core instead.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Cc: Mike Travis <travis@sgi.com>
LKML-Reference: <20100601190455.GA14237@aftab>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- June 1, 2010 (2 commits)

By Joerg Roedel
This patch implements a fallback to the GART IOMMU if this is possible and the AMD IOMMU initialization failed. Otherwise the fallback would be nommu, which is very problematic on machines with more than 4GB of memory, or swiotlb, which hurts IO performance.

Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

By Joerg Roedel
When request_mem_region fails, the error path tries to disable the IOMMUs. This accesses the mmio region which was not allocated, leading to a kernel crash. This patch fixes the issue.

Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

- May 31, 2010 (3 commits)

By Akinobu Mita
The DBG() macro for CONFIG_DEBUG_PER_CPU_MAPS is unused.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
LKML-Reference: <1274706291-13554-1-git-send-email-akinobu.mita@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

By Stephane Eranian
The transactional API patch between the generic and model-specific code introduced several important bugs with event scheduling, at least on x86. If you had pinned events, e.g., watchdog, and were over-committing the PMU, you would get bogus counts. The bug was showing up on Intel CPUs because events would move around more often than on AMD. But the problem also existed on AMD, though it was harder to expose.

The issues were:

- group_sched_in() was missing a cancel_txn() in the error path.
- cpuc->n_added was not properly maintained, leading to missing actions in hw_perf_enable(), i.e., n_running being 0. You cannot update n_added until you know the transaction has succeeded. In case of a failed transaction, n_added was not adjusted back.
- In case of failed transactions, event_sched_out() was called and eventually invoked x86_disable_event() to touch the HW reg. But with transactions, on x86, event_sched_in() does not touch HW registers; it simply collects events into a list. Thus, you could end up calling x86_disable_event() on a counter which did not correspond to the current event when idx != -1.

The patch modifies the generic and x86 code to avoid all those problems. First, we keep track of the number of events added last. In case the transaction fails, we subtract them from n_added. This approach is necessary (as opposed to delaying updates to n_added) because not all event updates use the transaction API, e.g., single events.

Second, we encapsulate the event_sched_in() and event_sched_out() in group_sched_in() inside the transaction. That makes the operations symmetrical, and you can also detect that you are inside a transaction and skip the HW reg access by checking cpuc->group_flag.

With this patch, you can now overcommit the PMU even with pinned system-wide events present and still get valid counts.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1274796225.5882.1389.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
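A schematic of the bookkeeping the fix introduces (field and function names are simplified stand-ins, not the actual x86 perf code): remember how many events the open transaction added, and roll only those back on cancel.

```c
#define TXN_STARTED	0x1

/* Illustrative transaction bookkeeping, not the real struct cpu_hw_events. */
struct demo_cpu_events {
	int n_events;		/* total events collected                     */
	int n_added;		/* events added since the last hw enable pass */
	int n_txn;		/* events added inside the open transaction   */
	unsigned int group_flag;
};

static void demo_txn_start(struct demo_cpu_events *c)
{
	c->group_flag |= TXN_STARTED;
	c->n_txn = 0;
}

static void demo_txn_add_event(struct demo_cpu_events *c)
{
	/* With the flag set, callers know to skip touching HW registers. */
	c->n_events++;
	c->n_added++;
	c->n_txn++;
}

static void demo_txn_cancel(struct demo_cpu_events *c)
{
	/* Roll back only what this transaction added. */
	c->n_added -= c->n_txn;
	c->n_events -= c->n_txn;
	c->group_flag &= ~TXN_STARTED;
}

static void demo_txn_commit(struct demo_cpu_events *c)
{
	c->group_flag &= ~TXN_STARTED;
}
```

Keeping a separate per-transaction counter is what makes the scheme safe for callers that bypass the transaction API entirely, such as single-event additions.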

By Linus Torvalds
This reverts commit 0ac0c0d0, which caused cross-architecture build problems for all the wrong reasons. IA64 already added its own version of __node_random(), but the fact is, there is nothing architectural about the function, and the original commit was just badly done. Revert it, since no fix is forthcoming.

Requested-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- May 28, 2010 (11 commits)

By Len Brown
TS_POLLING set tells the scheduler that an idle_task will poll need_resched() to look for work.

TS_POLLING clear tells resched_task() and wake_up_idle_cpu() that the remote CPU's idle_task is now sleeping in idle, and thus requires a reschedule interrupt to notice new work.

Update the description of TS_POLLING to reflect how it works: "idle task polling need_resched, skip sending interrupt".

Wordsmithing-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>

By Florian Fainelli
This file is replaced by a cleaner version with the addition of an MFD driver for the southbridge.

Signed-off-by: Florian Fainelli <florian@openwrt.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>

By H. Peter Anvin
gcc 3 is too braindamaged to be able to compile static_cpu_has() -- apparently it can't tell that a constant passed to an inline function is still a constant -- so if we're using gcc 3, just use the dynamic test. This is bad for performance, but if you care about performance, don't use an ancient, known-to-optimize-poorly compiler.

Reported-and-tested-by: Eric Dumazet <eric.dumazet@gmail.com>
LKML-Reference: <4BF2FF82.7090005@zytor.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

By Lee Schermerhorn
x86 arch-specific changes to use the generic numa_node_id() based on the generic percpu variable infrastructure. Back out x86's custom version of numa_node_id().

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Nick Piggin <npiggin@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Eric Whitney <eric.whitney@hp.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

By Lee Schermerhorn
Rework the generic version of the numa_node_id() function to use the new generic percpu variable infrastructure.

Guard the new implementation with a new config option: CONFIG_USE_PERCPU_NUMA_NODE_ID. Archs which support this new implementation will default this option to 'y' when NUMA is configured. This config option could be removed if/when all archs switch over to the generic percpu implementation of numa_node_id().

Arch support involves (see the sketch after this commit):

1) Converting any existing per-cpu variable implementations to use this implementation. x86_64 is an instance of such an arch.
2) Archs that don't use a per-cpu variable for numa_node_id() will need to initialize the new per-cpu variable "numa_node" as cpus are brought on-line. ia64 is an example.
3) Defining USE_PERCPU_NUMA_NODE_ID in arch-dependent Kconfig -- e.g., when NUMA is configured. This is required because I have retained the old implementation by default to allow archs to be modified incrementally, as desired.

Subsequent patches will convert x86_64 and ia64 to use this implementation.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Nick Piggin <npiggin@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Eric Whitney <eric.whitney@hp.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
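A sketch of the shape such a generic per-cpu implementation takes (simplified and illustrative, not the exact header; the guard macro and variable name follow the description above):

```c
#include <linux/percpu.h>

/* One integer per CPU, set by the arch as each CPU is brought online. */
DECLARE_PER_CPU(int, numa_node);

#ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID
static inline int numa_node_id(void)
{
	return __this_cpu_read(numa_node);
}

static inline void set_numa_node(int node)
{
	__this_cpu_write(numa_node, node);
}
#endif
```

Because the accessor is a single per-cpu read, callers on hot paths pay no more than the old arch-specific versions did, while archs without a per-cpu variable only need to call set_numa_node() from their CPU-online path.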

By FUJITA Tomonori
There are more architectures that don't support ARCH_HAS_SG_CHAIN than those that support it. This removes ARCH_HAS_SG_CHAIN from asm-generic/scatterlist.h and lets architectures define it. It's clearer than defining ARCH_HAS_SG_CHAIN in asm-generic/scatterlist.h and undefining it in architectures that don't support it.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

By Andrew Morton
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

By FUJITA Tomonori
There are only two ways to define sg_dma_len(): use sg->dma_length or sg->length. This patch introduces NEED_SG_DMA_LENGTH, which enables architectures to choose sg->dma_length or sg->length.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
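The mechanism, condensed into an illustrative header fragment (the struct is a stand-in, not the real struct scatterlist): the accessor is chosen at compile time based on whether the architecture selects the new option.

```c
/* Illustrative condensation of the idea in asm-generic/scatterlist.h. */
struct demo_scatterlist {
	unsigned long	page_link;
	unsigned int	offset;
	unsigned int	length;
	unsigned long	dma_address;
#ifdef CONFIG_NEED_SG_DMA_LENGTH
	unsigned int	dma_length;	/* present only when the arch needs it */
#endif
};

#ifdef CONFIG_NEED_SG_DMA_LENGTH
#define demo_sg_dma_len(sg)	((sg)->dma_length)
#else
#define demo_sg_dma_len(sg)	((sg)->length)
#endif
```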

By FUJITA Tomonori
The sync_single_range_for_cpu and sync_single_range_for_device hooks in swiotlb_dma_ops are unnecessary because sync_single_for_cpu and sync_single_for_device are used there.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

By Akinobu Mita
With the previous modification, the cpu notifier can return an encapsulated errno value. This converts the cpu notifiers for msr, cpuid, and therm_throt.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

By Jack Steiner
Some workloads that create a large number of small files tend to assign too many pages to node 0 (multi-node systems). Part of the reason is that the rotor (in cpuset_mem_spread_node()) used to assign nodes starts at node 0 for newly created tasks. This patch changes the rotor to be initialized to a random node number of the cpuset.

[akpm@linux-foundation.org: fix layout]
[Lee.Schermerhorn@hp.com: Define stub numa_random() for !NUMA configuration]

Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Paul Menage <menage@google.com>
Cc: Jack Steiner <steiner@sgi.com>
Cc: Robin Holt <holt@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
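The gist, sketched with a hypothetical helper (the actual patch adds a node_random()-style function; this only illustrates picking a random node from the cpuset's nodemask to seed the rotor):

```c
#include <linux/random.h>
#include <linux/nodemask.h>

/* Hypothetical: pick a random node from 'mask' to seed the spread rotor. */
static int demo_pick_random_spread_node(const nodemask_t *mask)
{
	int w = nodes_weight(*mask);
	int target, node;

	if (w == 0)
		return 0;	/* empty mask: fall back to node 0 */

	target = get_random_int() % w;
	for_each_node_mask(node, *mask)
		if (target-- == 0)
			return node;
	return 0;
}
```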

- May 27, 2010 (2 commits)

By Julia Lawall
Add a spin_unlock missing on the error path. The locks and unlocks are balanced in other functions, so it seems that the same should be the case here.

The semantic match that finds this problem is as follows (http://coccinelle.lip6.fr/):

// <smpl>
@@
expression E1;
@@
* spin_lock(E1,...);
  <+... when != E1
  if (...) {
    ... when != E1
*   return ...;
  }
  ...+>
* spin_unlock(E1,...);
// </smpl>

Cc: stable@kernel.org
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

By Xiaotian Feng
reserve_memtype will allocate memory for a new memtype, but in free_memtype, after the memtype is erased from the rbtree, the memory is not freed.

Changes since V1: make rbt_memtype_erase return the erased memtype so that it can be freed in free_memtype.

[ hpa: not for -stable: 2.6.34 and earlier not affected ]

Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
LKML-Reference: <1274838670-8731-1-git-send-email-dfeng@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Jack Steiner <steiner@sgi.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
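The fix pattern, sketched with simplified stand-in names (not the real pat_rbtree.c helpers): the erase helper hands back the node it unlinked so the caller can free it, closing the leak.

```c
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/slab.h>

/* Simplified stand-ins for the real memtype helpers. */
struct demo_memtype;
struct demo_memtype *demo_rbt_memtype_erase(u64 start, u64 end);

static int demo_free_memtype(u64 start, u64 end)
{
	struct demo_memtype *entry;

	/* The erase helper now returns the unlinked node... */
	entry = demo_rbt_memtype_erase(start, end);
	if (!entry)
		return -EINVAL;

	/* ...so the allocation made at reserve time can be released. */
	kfree(entry);
	return 0;
}
```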