- 03 April 2014, 4 commits
-
-
Submitted by Martin Schwidefsky
The zEC12 machines introduced the local-clearing control for the IDTE and IPTE instructions. If the control is set, only the TLB of the local CPU is cleared of entries: either all entries of a single address space for IDTE, or the entry for a single page-table entry for IPTE. Without the local-clearing control the TLB flush is broadcast to all CPUs in the configuration, which is expensive. Resetting the bit mask of the CPUs that need flushing after a non-local IDTE is tricky. As TLB entries for an address space remain in the TLB even after the address space is detached, a new bit field is required to keep track of attached CPUs vs. CPUs in need of a flush. After a non-local flush with IDTE, the bit field of attached CPUs is copied to the bit field of CPUs in need of a flush. The ordering of operations on cpu_attach_mask, attach_count and mm_cpumask(mm) is such that an under-indication in mm_cpumask(mm) is prevented, but an over-indication in mm_cpumask(mm) is possible.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
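For illustration, the bookkeeping described above boils down to something like the sketch below; field and helper names are taken from the description and may not match the kernel source exactly:

    static void __tlb_flush_full_sketch(struct mm_struct *mm)
    {
            if (cpumask_equal(mm_cpumask(mm), cpumask_of(smp_processor_id()))) {
                    __tlb_flush_local();            /* local-clearing IDTE/IPTE */
            } else {
                    __tlb_flush_global();           /* broadcast flush */
                    /* every attached CPU may again hold entries for this
                     * space, so re-mark all of them as needing a flush */
                    cpumask_copy(mm_cpumask(mm), &mm->context.cpu_attach_mask);
            }
    }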
-
Submitted by Martin Schwidefsky
The Principles of Operation states that the CPU is allowed to create TLB entries for an address space at any time while an ASCE is loaded to the control register. This is true even if the CPU is running in the kernel and the user address space is not (actively) accessed. In theory this can affect two aspects of the TLB flush logic. For full-mm flushes the ASCE of the dying process is still attached, so the approach of flushing first with IDTE and then simply freeing all page tables can in theory leave stale TLB entries behind; use the batched freeing of page tables for the full-mm flushes as well. For operations that can have a stale ASCE in the control register, e.g. a delayed update_user_asce in switch_mm, load the kernel ASCE to prevent invalid TLB entries from being created.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Thomas Huth
Use the new defines for external interruption codes to get rid of "magic" numbers in the s390 source code. While at it, also rename the (un-)register_external_interrupt functions to something shorter so that the changed lines do not exceed 80 columns all over the place.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Thomas Huth
Introduce defines for external interruption codes so that we can get rid of some "magic" numbers in the s390 source code.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
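For illustration, the kind of defines this introduces looks roughly as follows; the numeric values are the architected s390 external-interruption codes, but the exact macro names in the header may differ:

    #define EXT_IRQ_CLK_COMP        0x1004  /* clock comparator  */
    #define EXT_IRQ_CPU_TIMER       0x1005  /* CPU timer         */
    #define EXT_IRQ_EMERGENCY_SIG   0x1201  /* emergency signal  */
    #define EXT_IRQ_EXTERNAL_CALL   0x1202  /* external call     */
    #define EXT_IRQ_SERVICE_SIG     0x2401  /* service signal    */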
-
- 01 April 2014, 2 commits
-
-
Submitted by Christian Borntraeger
We need to reset the usage state of the pages on kexec/kdump, which use subcodes 0 and 1. We will only do the CMMA reset in the kernel; everything else is done in userspace as before.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Heiko Carstens
When reworking the bitops and atomic ops I missed that the instructions that got atomic behaviour only perform a "specific-operand-serialization" instead of a full "serialization". The compare-and-swap instruction used before performs a full serialization before and after the instruction is executed, which means it has full memory barrier semantics. In order to give the new bitops and atomic ops functions full memory barrier semantics as well, add a "bcr 14,0" before and after each of those new instructions, which performs a full serialization too. This restores memory barrier semantics for bitops and atomic ops functions which return values, like e.g. atomic_add_return(), but not for functions which do not return a value, like e.g. atomic_add(). This is consistent with other architectures and with what common code requires.
Cc: stable@vger.kernel.org # v3.13+
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
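A minimal sketch of the resulting pattern for a value-returning atomic; __interlocked_add_sketch() stands in for the real interlocked-update helper and is hypothetical:

    static inline int atomic_add_return_sketch(int i, atomic_t *v)
    {
            int old;

            asm volatile("bcr 14,0" : : : "memory");   /* full serialization before */
            old = __interlocked_add_sketch(i, &v->counter);
            asm volatile("bcr 14,0" : : : "memory");   /* full serialization after  */
            return old + i;
    }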
-
- 31 March 2014, 1 commit
-
-
Submitted by Geert Uytterhoeven
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
-
- 29 March 2014, 3 commits
-
-
Submitted by Heiko Carstens
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Submitted by Kees Cook
The buffer being sent to printk has already had its format strings resolved. The string should not be reinterpreted again, to avoid any unintended format strings from leaking into printk.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
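The pattern in question, as a minimal illustration:

    /* unsafe: if buf contains '%' sequences, printk re-interprets them */
    printk(buf);

    /* safe: buf is emitted purely as data */
    printk("%s", buf);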
-
Submitted by Artem Fetishev
On x86 uniprocessor systems topology_physical_package_id() returns -1, which causes rapl_cpu_prepare() to leave the rapl_pmu variable uninitialized, which in turn leads to a GPF in rapl_pmu_init(). See arch/x86/kernel/cpu/perf_event_intel_rapl.c. It turns out that physical_package_id and core_id can actually be retrieved for uniprocessor systems too. Enabling them also fixes the rapl_pmu code.
Signed-off-by: Artem Fetishev <artem_fetishev@epam.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 28 March 2014, 1 commit
-
-
Submitted by Jason Wang
This patch bypasses the timer_irq_works() check for Hyper-V guests since:
- It was guaranteed to work.
- timer_irq_works() may sometimes fail because the lpj calibration was inaccurate in a Hyper-V guest or on a buggy host.
In the future, we should get the TSC frequency from the hypervisor and use a preset lpj instead.
[ hpa: I would prefer to not defer things to "the future" in the future... ]
Cc: K. Y. Srinivasan <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: <stable@vger.kernel.org>
Acked-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Link: http://lkml.kernel.org/r/1393558229-14755-1-git-send-email-jasowang@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
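One way such a bypass can be expressed is sketched below; x86_hyper and x86_hyper_ms_hyperv are the hypervisor-detection symbols of that kernel generation, but the exact hook the patch adds may differ:

    /* sketch only: when running as a Hyper-V guest, skip the
     * lpj-dependent probe and trust that the timer IRQ works */
    static bool trust_timer_irq_sketch(void)
    {
            return x86_hyper == &x86_hyper_ms_hyperv;
    }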
-
- 27 March 2014, 1 commit
-
-
Submitted by Matt Fleming
The ARM EFI boot stub doesn't need to care about the efi_early infrastructure that x86 requires in order to do mixed-mode thunking. So wrap everything up in an efi_call_early() macro. This allows x86 to do the necessary indirection jumps to call whatever firmware interface is necessary (native or mixed mode), but also allows the ARM folks to mask the fact that they don't support relocation in the boot stub and need to pass 'sys_table_arg' to every function.
[ hpa: there are no object code changes from this patch ]
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Link: http://lkml.kernel.org/r/20140326091011.GB2958@console-pimps.org
Cc: Roy Franz <roy.franz@linaro.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
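The shape of the abstraction, roughly (a sketch, simplified: the real macros carry extra casts, and on x86 the call goes through the efi_early indirection table):

    /* x86: indirect through a table so native and mixed-mode firmware
     * calls share one call site (sketch) */
    #define efi_call_early(f, ...)  efi_early->call(efi_early->f, __VA_ARGS__)

    /* ARM: no thunking needed, but sys_table_arg has to be threaded
     * through every call (sketch) */
    #define efi_call_early(f, ...)  sys_table_arg->boottime->f(__VA_ARGS__)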
-
- 26 March 2014, 1 commit
-
-
Update SATA AHCI support to make use of the new ahci_da850 host driver (instead of the generic ahci_platform one) and remove the deprecated ahci_platform_data code.
Cc: Sekhar Nori <nsekhar@ti.com>
Cc: Kevin Hilman <khilman@deeprootsystems.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 25 March 2014, 3 commits
-
-
Submitted by David Vrabel
This reverts commit a9c8e4be. PTEs in Xen PV guests must contain machine addresses if _PAGE_PRESENT is set, and pseudo-physical addresses if _PAGE_PRESENT is clear. This is because during a domain save/restore (migration) the page table entries are "canonicalised" and "uncanonicalised", i.e. MFNs are converted to PFNs during domain save so that on a restore the page table entries may be rewritten with the new MFNs on the destination. This canonicalisation is only done for PTEs that are present. This change resulted in writing PTEs with MFNs if _PAGE_PROTNONE (or _PAGE_NUMA) was set but _PAGE_PRESENT was clear. These PTEs would be migrated as-is, which would result in unexpected behaviour in the destination domain. Either a) the MFN would be translated to the wrong PFN/page; b) setting the _PAGE_PRESENT bit would clear the PTE because the MFN is no longer owned by the domain; or c) the present bit would not get set. Symptoms include "Bad page" reports when munmapping after migrating a domain.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: <stable@vger.kernel.org> [3.12+]
-
Submitted by David S. Miller
In arch_cpu_idle() we must enable %pil-based interrupts before potentially invoking the hypervisor cpu yield call. As per the Hypervisor API documentation for cpu_yield: interrupts which are blocked by some mechanism other than pstate.ie (for example %pil) are not guaranteed to cause a return from this service. It seems that only first-generation Niagara chips are hit by this bug. My best guess is that later chips implement this in hardware and wake up anyway from %pil events, whereas in first-generation chips the yield is implemented completely in hypervisor code and requires %pil to be enabled in order to wake properly from this call.
Fixes: 87fa05ae ("sparc: Use generic idle loop")
Reported-by: Fabio M. Di Nitto <fabbione@fabbione.net>
Reported-by: Jan Engelhardt <jengelh@inai.de>
Tested-by: Jan Engelhardt <jengelh@inai.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Kees Cook
There was a potential lock-ordering problem with the module kASLR patch ("x86, kaslr: randomize module base load address"). This patch removes the usage of module_mutex and creates a new mutex to protect the module base address offset value.
Chain exists of:
  text_mutex --> kprobe_insn_slots.mutex --> module_mutex
[ 0.515561]  Possible unsafe locking scenario:
[ 0.515561]
[ 0.515561]        CPU0                    CPU1
[ 0.515561]        ----                    ----
[ 0.515561]   lock(module_mutex);
[ 0.515561]                                lock(kprobe_insn_slots.mutex);
[ 0.515561]                                lock(module_mutex);
[ 0.515561]   lock(text_mutex);
[ 0.515561]
[ 0.515561]  *** DEADLOCK ***
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Andy Honig <ahonig@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
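A sketch of the replacement locking, assuming a dedicated mutex guarding the randomized offset (names are illustrative, not necessarily those used by the patch):

    static unsigned long module_load_offset;        /* protected by the new mutex  */
    static DEFINE_MUTEX(module_kaslr_mutex);        /* replaces module_mutex here  */

    static unsigned long get_module_load_offset_sketch(void)
    {
            mutex_lock(&module_kaslr_mutex);
            if (module_load_offset == 0)
                    module_load_offset =
                            (get_random_int() % 1024 + 1) * PAGE_SIZE;
            mutex_unlock(&module_kaslr_mutex);
            return module_load_offset;
    }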
-
- 24 March 2014, 4 commits
-
-
Submitted by Catalin Marinas
Since this macro is identical to pgprot_writecombine() and is only used in a single place, remove it completely to avoid confusion. On ARMv7+ processors, the coherent DMA mapping must be Normal NonCacheable (a.k.a. writecombine) to avoid mismatched hardware attribute aliases (with the kernel linear mapping being Normal Cacheable).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Laura Abbott
DMA_ATTR_WRITE_COMBINE is currently ignored. Set the pgprot appropriately for non-coherent operations.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
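In essence the change amounts to something like the sketch below; struct dma_attrs and dma_get_attr() are the attribute API of that kernel generation, and the helper name is illustrative:

    static pgprot_t __get_dma_pgprot_sketch(struct dma_attrs *attrs,
                                            pgprot_t prot, bool coherent)
    {
            /* non-coherent mappings are remapped non-cacheable anyway; for
             * coherent ones honour an explicit WRITE_COMBINE request */
            if (!coherent || dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs))
                    return pgprot_writecombine(prot);
            return prot;
    }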
-
Submitted by Laura Abbott
The current dma_ops do not specify an mmap function, so mapping falls back to the default implementation. There are at least two issues with using the default implementation:
1) The pgprot is always pgprot_noncached (strongly ordered) memory, even with coherent operations.
2) dma_common_mmap calls virt_to_page on the remapped non-coherent address, which leads to invalid memory being mapped.
Fix both these issues by implementing a custom mmap function which correctly accounts for remapped addresses and sets vm_page_prot appropriately.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[catalin.marinas@arm.com: replaced "arm64_" with "__" prefix for consistency]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
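A sketch of such an mmap callback (simplified: error handling and the coherent/non-coherent split are omitted, and dma_to_phys() is assumed as the dma_addr-to-physical translation):

    static int __dma_common_mmap_sketch(struct device *dev,
                                        struct vm_area_struct *vma,
                                        void *cpu_addr, dma_addr_t dma_addr,
                                        size_t size)
    {
            unsigned long nr_vma_pages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
            unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
            unsigned long pfn = dma_to_phys(dev, dma_addr) >> PAGE_SHIFT;
            unsigned long off = vma->vm_pgoff;
            int ret = -ENXIO;

            /* map the real backing pages, not virt_to_page() of a remapped
             * address, and with the vm_page_prot chosen for these attrs */
            if (off < nr_pages && nr_vma_pages <= (nr_pages - off))
                    ret = remap_pfn_range(vma, vma->vm_start, pfn + off,
                                          vma->vm_end - vma->vm_start,
                                          vma->vm_page_prot);
            return ret;
    }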
-
Submitted by Will Deacon
Now that the arch_{spin,read,write}_relax macros default to cpu_relax(), remove the redundant definitions for parisc.
Cc: Helge Deller <deller@gmx.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Helge Deller <deller@gmx.de>
-
- 23 March 2014, 2 commits
-
-
Submitted by Helge Deller
We seem to be nearly the only platform which does not provide the sys_utimes syscall. Adding it now makes our life much easier with userspace applications (like dietlibc and e2fsprogs), since we then behave like all other platforms and don't need extra patches which are hard to get upstream anyway because we are not a mainstream architecture.
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org # v3.13
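Once the syscall is wired up, userspace on parisc can use the ordinary libc wrapper like on any other platform, for example (illustrative values):

    #include <sys/time.h>

    /* set access and modification time of a file explicitly */
    struct timeval times[2] = {
            { .tv_sec = 1395532800 },       /* atime */
            { .tv_sec = 1395532800 },       /* mtime */
    };
    utimes("/tmp/example", times);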
-
Submitted by John David Anglin
The attached change removes the unused and experimental CONFIG_PARISC_TMPALIAS code. It doesn't work and I don't believe it will ever be used.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
-
- 21 March 2014, 6 commits
-
-
Submitted by Dominik Dingel
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Dominik Dingel
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Dominik Dingel
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Chris Bainbridge
Many Pentium M systems disable PAE but may have a functionally usable PAE implementation. This adds the "forcepae" parameter, which bypasses the boot check for PAE and sets the CPU as being PAE capable. Using this parameter will taint the kernel with TAINT_CPU_OUT_OF_SPEC.
Signed-off-by: Chris Bainbridge <chris.bainbridge@gmail.com>
Link: http://lkml.kernel.org/r/20140307114040.GA4997@localhost
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Submitted by Dave Jones
Rename TAINT_UNSAFE_SMP to TAINT_CPU_OUT_OF_SPEC, so we can repurpose the flag to encompass a wider range of ways of pushing the CPU beyond its warranty.
Signed-off-by: Dave Jones <davej@fedoraproject.org>
Link: http://lkml.kernel.org/r/20140226154949.GA770@redhat.com
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Submitted by Christopher Covington
Without this, the following scenario is incorrectly determined to be invalid:
  addr       0x7f_ffffe000
  size       8192
  addr_limit 0x80_00000000
This behavior was observed while trying to vmsplice the stack as part of a CRIU dump of a process on a system started with the norandmaps kernel parameter.
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
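Spelling out the arithmetic (an illustrative comment on the range test, not the literal macro):

    /*
     * addr       = 0x7f_ffffe000
     * size       = 8192 (0x2000)
     * addr_limit = 0x80_00000000
     *
     * addr + size == addr_limit: the access ends exactly at the limit and
     * is therefore valid.  A range test of the form
     *         size < limit && addr < limit - size
     * rejects it, while
     *         size <= limit && addr <= limit - size
     * accepts it, so scenarios like the one above are treated as valid.
     */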
-
- 20 March 2014, 6 commits
-
-
Submitted by Jim Quinlan
For non-mipsr2 processors, local_irq_disable contains an mfc0-mtc0 pair with instructions in between. With preemption enabled, this sequence may get preempted and write back a stale value of CP0_STATUS when executing the mtc0 instruction. This commit avoids that scenario by incrementing the preempt count before the mfc0 and decrementing it after the mtc0.
[ralf@linux-mips.org: This patch is sorting out the parts that were missed by e97c5b60 [MIPS: Make irqflags.h functions preempt-safe for non-mipsr2 cpus.] I also re-enabled the inclusion of <asm/asm-offsets.h> at the top of <asm/asmmacro.h>.]
Signed-off-by: Jim Quinlan <jim2101024@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: cernekee@gmail.com
Patchwork: https://patchwork.linux-mips.org/patch/6164/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
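The resulting shape of the non-mipsr2 path, roughly (a sketch only; the kernel version also inserts the appropriate irq-disable hazard barrier after the mtc0):

    notrace void arch_local_irq_disable_sketch(void)
    {
            preempt_disable();              /* keep mfc0..mtc0 on one CPU */
            __asm__ __volatile__(
            "       .set    push                            \n"
            "       .set    noat                            \n"
            "       mfc0    $1, $12         # CP0_STATUS    \n"
            "       ori     $1, 0x1f                        \n"
            "       xori    $1, 0x1f        # clear IE etc. \n"
            "       mtc0    $1, $12                         \n"
            "       .set    pop                             \n"
            : : : "memory");
            preempt_enable();
    }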
-
Submitted by Jan Beulich
Minor cleanups:
- simplify switch statement
- add __init annotation to setup_arch_fast_hash()
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/530F09CE020000780011FBEF@nat28.tlf.novell.com
Cc: Francesco Fusco <ffusco@redhat.com>
Cc: Thomas Graf <tgraf@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Jan Beulich
... to match the function's parameters. While reportedly commutative, using the proper order allows for leveraging the instruction form that permits the source operand to be in memory.
[ hpa: This code originated in the dpdk toolkit. This was a bug in dpdk which has recently been fixed in part due to an earlier version of this patch. ]
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/530F09B6020000780011FBEB@nat28.tlf.novell.com
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Francesco Fusco <ffusco@redhat.com>
Cc: Thomas Graf <tgraf@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Jan Beulich
Just like for other ISA extension instruction uses, we should check whether the assembler actually supports them. The fallback here simply is to encode an instruction with fixed operands (%eax and %ecx).
[ hpa: tagging for -stable as a build fix ]
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/530F0996020000780011FBE7@nat28.tlf.novell.com
Cc: Francesco Fusco <ffusco@redhat.com>
Cc: Thomas Graf <tgraf@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org> # v3.14
-
Submitted by Andreas Herrmann
Starting with commit 3da52787 (of/irq: Rework of_irq_count()) the following warning is triggered on octeon cn3xxx:
[ 0.887281] WARNING: CPU: 0 PID: 1 at drivers/of/platform.c:171 of_device_alloc+0x228/0x230()
[ 0.895642] Modules linked in:
[ 0.898689] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.14.0-rc7-00012-g9ae51f2-dirty #41
[ 0.906860] Stack : c8b439581166d96e ffffffff816b0000 0000000040808000 ffffffff81185ddc
[ 0.906860]         0000000000000000 0000000000000000 0000000000000000 000000000000000b
[ 0.906860]         000000000000000a 000000000000000a 0000000000000000 0000000000000000
[ 0.906860]         ffffffff81740000 ffffffff81720000 ffffffff81615900 ffffffff816b0177
[ 0.906860]         ffffffff81727d10 800000041f868fb0 0000000000000001 0000000000000000
[ 0.906860]         0000000000000000 0000000000000038 0000000000000001 ffffffff81568484
[ 0.906860]         800000041f86faa8 ffffffff81145ddc 0000000000000000 ffffffff811873f4
[ 0.906860]         800000041f868b88 800000041f86f9c0 0000000000000000 ffffffff81569c9c
[ 0.906860]         0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 0.906860]         0000000000000000 ffffffff811205e0 0000000000000000 0000000000000000
[ 0.906860]         ...
[ 0.971695] Call Trace:
[ 0.974139] [<ffffffff811205e0>] show_stack+0x68/0x80
[ 0.979183] [<ffffffff81569c9c>] dump_stack+0x8c/0xe0
[ 0.984196] [<ffffffff81145efc>] warn_slowpath_common+0x84/0xb8
[ 0.990110] [<ffffffff81436888>] of_device_alloc+0x228/0x230
[ 0.995726] [<ffffffff814368d8>] of_platform_device_create_pdata+0x48/0xd0
[ 1.002593] [<ffffffff81436a94>] of_platform_bus_create+0x134/0x1e8
[ 1.008837] [<ffffffff81436af8>] of_platform_bus_create+0x198/0x1e8
[ 1.015064] [<ffffffff81436cc4>] of_platform_bus_probe+0xa4/0x100
[ 1.021149] [<ffffffff81100570>] do_one_initcall+0xd8/0x128
[ 1.026701] [<ffffffff816e2a10>] kernel_init_freeable+0x144/0x210
[ 1.032753] [<ffffffff81564bc4>] kernel_init+0x14/0x110
[ 1.037973] [<ffffffff8111bb44>] ret_from_kernel_thread+0x14/0x1c
With this commit the kernel starts mapping the interrupts listed for the gpio-controller node. The irq_domain_ops for CIU (octeon_irq_ciu_map and octeon_irq_ciu_xlat) refuse to handle the GPIO lines (returning -EINVAL), and this is causing the above warning in of_device_alloc(). Modify the irq_domain_ops for CIU and CIU2 to "gracefully handle" GPIO lines (neither return an error code nor call octeon_irq_set_ciu_mapping for them). This should avoid the warning. (As before, the real setup for GPIO lines will happen using the irq_domain_ops of the gpio-controller.) This patch is based on Wei's patch v2 (see http://marc.info/?l=linux-mips&m=139511814813247).
Signed-off-by: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Reported-by: Yang Wei <wei.yang@windriver.com>
Acked-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6624/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Submitted by Viller Hsiao
Due to a name collision in the ftrace safe_load and safe_store macros, these macros cannot take expressions as operands. For example, the compiler will complain about a macro call like the following:
  safe_store_code(new_code2, ip + 4, faulted);
arch/mips/include/asm/ftrace.h:61:6: note: in definition of macro 'safe_store'
   : [dst] "r" (dst), [src] "r" (src)\
     ^
arch/mips/kernel/ftrace.c:118:2: note: in expansion of macro 'safe_store_code'
  safe_store_code(new_code2, ip + 4, faulted);
  ^
arch/mips/kernel/ftrace.c:118:32: error: undefined named operand 'ip + 4'
  safe_store_code(new_code2, ip + 4, faulted);
                                ^
arch/mips/include/asm/ftrace.h:61:6: note: in definition of macro 'safe_store'
   : [dst] "r" (dst), [src] "r" (src)\
     ^
arch/mips/kernel/ftrace.c:118:2: note: in expansion of macro 'safe_store_code'
  safe_store_code(new_code2, ip + 4, faulted);
  ^
This build error is triggered by a4671094 [MIPS: ftrace: Fix icache flush range error]. Tweak the variable naming in those macros to allow flexible operands.
Signed-off-by: Viller Hsiao <villerhsiao@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: rostedt@goodmis.org
Cc: fweisbec@gmail.com
Cc: mingo@redhat.com
Cc: Qais.Yousef@imgtec.com
Patchwork: https://patchwork.linux-mips.org/patch/6622/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 19 March 2014, 3 commits
-
-
Submitted by Mark Brown
Probably due to rebasing over the lengthy time it took to get the patch merged, commit addea9ef (cpufreq: enable ARM drivers on arm64) added a duplicate "Power management options" section. Add CPUfreq to the CPU power management section and remove the duplicate include of the main power section.
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Rafał Miłecki
Broadcom boards support 32 GPIOs, and NVRAM may have entries for the higher ones too. Example:
  gpio23=wombo_reset
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Acked-by: Hauke Mehrtens <hauke@hauke-m.de>
Cc: linux-mips@linux-mips.org
Cc: Rafał Miłecki <zajec5@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/6547/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Submitted by Bjorn Helgaas
This reverts commit 56dd669a, which made the GART visible in /proc/iomem. This fixes a regression: e501b3d8 ("agp: Support 64-bit APBASE") exposed an existing problem with a conflict between the GART region and a PCI BAR region. The GART addresses are bus addresses, not CPU addresses, and therefore should not be inserted in iomem_resource. On many machines, the GART region is addressable by the CPU as well as by an AGP master, but CPU addressability is not required by the spec. On some of these machines, the GART is mapped by a PCI BAR, and in that case, the PCI core automatically inserts it into iomem_resource, just as it does for all BARs. Inserting it here means we'll have a conflict if the PCI core later tries to claim the GART region, so let's drop the insertion here. The conflict indirectly causes X failures, as reported by Jouni in the bugzilla below. We detected the conflict even before e501b3d8, but after it the AGP code (fix_northbridge()) uses the PCI resource (which is zeroed because of the conflict) instead of reading the BAR again.
Conflicts: arch/x86_64/kernel/aperture.c
Fixes: e501b3d8 ("agp: Support 64-bit APBASE")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=72201
Reported-and-tested-by: Jouni Mettälä <jtmettala@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
- 18 March 2014, 3 commits
-
-
Submitted by Alex Smith
If CONFIG_TRANSPARENT_HUGEPAGE is enabled but CONFIG_HUGETLB_PAGE is not, it is possible to end up with a configuration that fails to build with the following error:
  include/linux/huge_mm.h:125:2: error: #error "hugepages can't be allocated by the buddy allocator"
This is due to CONFIG_FORCE_MAX_ZONEORDER defaulting to 11. It already has ranges that change the valid values when HUGETLB_PAGE is enabled, but this is not done for TRANSPARENT_HUGEPAGE. Fix by changing the HUGETLB_PAGE dependencies to MIPS_HUGE_TLB_SUPPORT, which covers both TRANSPARENT_HUGEPAGE and HUGETLB_PAGE.
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6391/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Submitted by Matt Fleming
In the thunk patches the 'attr' argument was dropped from query_variable_info(). Restore it, otherwise the firmware will return EFI_INVALID_PARAMETER.
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Submitted by Matt Fleming
Dan reported that phys_efi_get_time() is doing kmalloc(..., GFP_KERNEL) under a spinlock, which is very clearly a bug. Since phys_efi_get_time() has no users, let's just delete it instead of trying to fix it. Note that since there are no users of phys_efi_get_time(), it is not possible to actually trigger a GFP_KERNEL allocation under the spinlock.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Nathan Zimmer <nzimmer@sgi.com>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Jan Beulich <JBeulich@suse.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
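For context, the class of bug being removed looks like this generic illustration (not the EFI code itself):

    static DEFINE_SPINLOCK(some_lock);              /* illustrative */

    void broken_pattern(size_t len)
    {
            char *buf;

            spin_lock(&some_lock);
            buf = kmalloc(len, GFP_KERNEL);         /* BUG: may sleep under a spinlock;
                                                     * atomic context would need GFP_ATOMIC */
            /* ... use buf ... */
            kfree(buf);
            spin_unlock(&some_lock);
    }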
-