- 25 Nov 2014, 1 commit
-
-
Committed by Laszlo Ersek
To allow handling of incoherent memslots in a subsequent patch, this patch adds a parameter 'ipa_uncached' to cache_coherent_guest_page() so that we can instruct it to flush the page's contents to DRAM even if the guest has caching globally enabled. Signed-off-by: Laszlo Ersek <lersek@redhat.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
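A minimal sketch of the decision the new flag enables, assuming the helper keeps its existing cache-enabled check (the flush helper name below is illustrative, not the kernel's):

```c
/* Hedged sketch: flush to DRAM when the guest runs with caches globally
 * disabled OR when the memslot itself is known to be uncached. */
static void cache_coherent_guest_page(struct kvm_vcpu *vcpu, pfn_t pfn,
				      unsigned long size, bool ipa_uncached)
{
	if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
		flush_guest_page_to_poc(pfn, size);	/* illustrative helper */
}
```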
-
- 14 Oct 2014, 1 commit
-
-
Committed by Christoffer Dall
This patch adds the necessary support for all host kernel PGSIZE and VA_SPACE configuration options for both EL2 and the Stage-2 page tables. However, for 40bit and 42bit PARange systems, the architecture mandates that VTCR_EL2.SL0 is maximum 1, resulting in fewer levels of stage-2 page tables than levels of host kernel page tables. At the same time, on systems with a PARange > 42bit, we limit the IPA range by always setting VTCR_EL2.T0SZ to 24. To solve the situation with different levels of page tables for Stage-2 translation than the host kernel page tables, we allocate a dummy PGD with pointers to our actual initial level Stage-2 page table, in order for us to reuse the kernel pgtable manipulation primitives. Reproducing all these in KVM does not look pretty and unnecessarily complicates the 32-bit side. Systems with a PARange < 40bits are not yet supported. [ I have reworked this patch from its original form submitted by Jungseok to take the architecture constraints into consideration. There were too many changes from the original patch for me to preserve the authorship. Thanks to Catalin Marinas for his help in figuring out a good solution to this challenge. I have also fixed various bugs and missing error code handling from the original patch. - Christoffer ] Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 10 Oct 2014, 5 commits
-
-
Committed by Ard Biesheuvel
Now that we support read-only memslots, we need to make sure that pass-through device mappings are not mapped writable if the guest has requested them to be read-only. The existing implementation already honours this by calling kvm_set_s2pte_writable() on the new pte in case of writable mappings, so all we need to do is define the default pgprot_t value used for devices to be PTE_S2_RDONLY. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Ard Biesheuvel
Add support for read-only MMIO passthrough mappings by adding a 'writable' parameter to kvm_phys_addr_ioremap. For the moment, mappings will be read-write even if 'writable' is false, but once the definition of PAGE_S2_DEVICE gets changed, those mappings will be created read-only. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
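A plausible shape of the updated prototype, inferred from the description above (treat the exact parameter order as an assumption):

```c
/* Callers of the MMIO mapping helper now state write permission explicitly. */
int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
			  phys_addr_t pa, unsigned long size, bool writable);
```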
-
Committed by Steve Capper
Activate the RCU fast_gup for ARM. We also need to force THP splits to broadcast an IPI so that we block in the fast_gup page walker. As THP splits are comparatively rare, this should not lead to a noticeable performance degradation. Some pre-requisite functions pud_write and pud_page are also added. Signed-off-by: Steve Capper <steve.capper@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Dann Frazier <dann.frazier@canonical.com> Cc: Hugh Dickins <hughd@google.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Will Deacon <will.deacon@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Steve Capper
In order to implement fast_get_user_pages we need to ensure that the page table walker is protected from page table pages being freed from under it. This patch enables HAVE_RCU_TABLE_FREE; any page table pages belonging to address spaces with multiple users will be freed via call_rcu_sched, meaning that disabling interrupts will block the free and protect the fast gup page walker. Signed-off-by: Steve Capper <steve.capper@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Dann Frazier <dann.frazier@canonical.com> Cc: Hugh Dickins <hughd@google.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Will Deacon <will.deacon@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Steve Capper
We need a mechanism to tag ptes as being special; this indicates that no attempt should be made to access the underlying struct page * associated with the pte. This is used by the fast_gup when operating on ptes, as it has no means to access VMAs (which also contain this information) locklessly. The L_PTE_SPECIAL bit is already allocated for LPAE; this patch modifies pte_special and pte_mkspecial to make use of it, and defines __HAVE_ARCH_PTE_SPECIAL. This patch also excludes special ptes from the icache/dcache sync logic. Signed-off-by: Steve Capper <steve.capper@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Dann Frazier <dann.frazier@canonical.com> Cc: Hugh Dickins <hughd@google.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Will Deacon <will.deacon@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
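A minimal sketch of what the two helpers can look like once L_PTE_SPECIAL is wired up on LPAE (the exact accessor style used in the kernel header may differ):

```c
#define __HAVE_ARCH_PTE_SPECIAL

/* Hedged sketch: report/tag the software "special" bit on LPAE ptes. */
static inline int pte_special(pte_t pte)
{
	return !!(pte_val(pte) & L_PTE_SPECIAL);
}

static inline pte_t pte_mkspecial(pte_t pte)
{
	pte_val(pte) |= L_PTE_SPECIAL;
	return pte;
}
```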
-
- 03 Oct 2014, 1 commit
-
-
Committed by Pranith Kumar
Use the much more reader-friendly ACCESS_ONCE() instead of the cast to volatile. This is purely a stylistic change. Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no> Acked-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: linux-arch@vger.kernel.org Link: http://lkml.kernel.org/r/1411482607-20948-1-git-send-email-bobby.prani@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
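For illustration, the change is only a different spelling of the same volatile access; the variable and spin-wait below are hypothetical, not lines the patch touches:

```c
#include <linux/compiler.h>	/* ACCESS_ONCE() */
#include <asm/processor.h>	/* cpu_relax() */

static int flag;	/* hypothetical variable, for illustration only */

static void wait_for_flag_old(void)
{
	while (*(volatile int *)&flag == 0)	/* before: open-coded cast */
		cpu_relax();
}

static void wait_for_flag_new(void)
{
	while (ACCESS_ONCE(flag) == 0)		/* after: intent is explicit */
		cpu_relax();
}
```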
-
- 01 Oct 2014, 1 commit
-
-
Committed by Liviu Dudau
This is needed for calls into OF code that parses PCI ranges. It signals support for memory-mapped PCI I/O accesses that are described by device trees. Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Arnd Bergmann <arnd@arndb.de> CC: Russell King <linux@arm.linux.org.uk> CC: Rob Herring <robh+dt@kernel.org>
-
- 30 Sep 2014, 2 commits
-
-
Committed by Nathan Lynch
Joachim Eastwood reports that commit fbfb872f "ARM: 8148/1: flush TLS and thumbee register state during exec" causes a boot-time crash on a Cortex-M4 nommu system: Freeing unused kernel memory: 68K (281e5000 - 281f6000) Unhandled exception: IPSR = 00000005 LR = fffffff1 CPU: 0 PID: 1 Comm: swapper Not tainted 3.17.0-rc6-00313-gd2205fa30aa7 #191 task: 29834000 ti: 29832000 task.ti: 29832000 PC is at flush_thread+0x2e/0x40 LR is at flush_thread+0x21/0x40 pc : [<2800954a>] lr : [<2800953d>] psr: 4100000b sp : 29833d60 ip : 00000000 fp : 00000001 r10: 00003cf8 r9 : 29b1f000 r8 : 00000000 r7 : 29b0bc00 r6 : 29834000 r5 : 29832000 r4 : 29832000 r3 : ffff0ff0 r2 : 29832000 r1 : 00000000 r0 : 282121f0 xPSR: 4100000b CPU: 0 PID: 1 Comm: swapper Not tainted 3.17.0-rc6-00313-gd2205fa30aa7 #191 [<2800afa5>] (unwind_backtrace) from [<2800a327>] (show_stack+0xb/0xc) [<2800a327>] (show_stack) from [<2800a963>] (__invalid_entry+0x4b/0x4c) The problem is that set_tls is attempting to clear the TLS location in the kernel-user helper page, which isn't set up on V7M. Fix this by guarding the write to the kuser helper page with a CONFIG_KUSER_HELPERS ifdef. Fixes: fbfb872f ("ARM: 8148/1: flush TLS and thumbee register state during exec") Reported-by: Joachim Eastwood <manabian@gmail.com> Tested-by: Joachim Eastwood <manabian@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
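A minimal sketch of the guarded helper, assuming set_tls() has roughly the shape described above (tls_emu/has_tls_reg are the existing ARM feature flags, and 0xffff0ff0 is the TLS slot in the kuser helper page):

```c
static inline void set_tls(unsigned long val)
{
	current_thread_info()->tp_value[0] = val;

	if (tls_emu)
		return;

	if (has_tls_reg) {
		/* V6K/V7: write TPIDRURO directly. */
		asm("mcr p15, 0, %0, c13, c0, 3" : : "r" (val));
	} else {
#ifdef CONFIG_KUSER_HELPERS
		/*
		 * Older CPUs publish the TLS value via the kuser helper
		 * page; V7M has no such page, so this write must be
		 * compiled out when CONFIG_KUSER_HELPERS is not set.
		 */
		*((unsigned int *)0xffff0ff0) = val;
#endif
	}
}
```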
-
Committed by Krzysztof Kozlowski
This fixes build breakage of platsmp.c if ARMv6 was chosen for compile-time options (e.g. by building allmodconfig): $ make allmodconfig $ make CC arch/arm/mach-exynos/platsmp.o /tmp/ccdQM0Eg.s: Assembler messages: /tmp/ccdQM0Eg.s:432: Error: selected processor does not support ARM mode `isb ' /tmp/ccdQM0Eg.s:437: Error: selected processor does not support ARM mode `isb ' /tmp/ccdQM0Eg.s:438: Error: selected processor does not support ARM mode `dsb ' make[1]: *** [arch/arm/mach-exynos/platsmp.o] Error 1 The error was introduced in commit "ARM: EXYNOS: Move code from hotplug.c to platsmp.c". Previously, code using the v7_exit_coherency_flush() macro was built with the '-march=armv7-a' flag, but this flag disappeared during the move. Fix this by annotating the v7_exit_coherency_flush() asm code with the armv7-a architecture. Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Reported-by: Mark Brown <broonie@kernel.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Kukjin Kim <kgene.kim@samsung.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
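A hedged sketch of the kind of annotation this refers to: prefixing the inline-assembly block with an .arch directive so the assembler accepts ARMv7-only barriers regardless of the per-file -march (illustrative macro, not the kernel's v7_exit_coherency_flush() itself):

```c
#define v7_barriers_sketch()						\
	asm volatile(							\
		".arch	armv7-a\n\t"	/* accept the v7-only barriers */\
		"isb\n\t"						\
		"dsb\n\t"						\
		: : : "memory")
```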
-
- 29 Sep 2014, 2 commits
-
-
Committed by Nathan Lynch
The arch_timer_evtstrm_enable hooks in arm and arm64 are substantially similar, the only difference being a CONFIG_COMPAT-conditional section which is relevant only for arm64. Copy the arm64 version to the driver, removing the arch-specific hooks. Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com>
-
Committed by Nathan Lynch
The only difference between arm and arm64's implementations of arch_counter_set_user_access is that 32-bit ARM does not enable user access to the virtual counter. We want to enable this access for the 32-bit ARM VDSO, so copy the arm64 version to the driver itself, and remove the arch-specific implementations. Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com>
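A minimal sketch of the unified hook. The CNTKCTL accessor names are assumptions; the access-enable bit names are the ones the arch timer code uses:

```c
static void arch_counter_set_user_access(void)
{
	u32 cntkctl = arch_timer_get_cntkctl();		/* assumed accessor */

	/* Disable user access to the timers and the physical counter. */
	cntkctl &= ~(ARCH_TIMER_USR_PT_ACCESS_EN
			| ARCH_TIMER_USR_VT_ACCESS_EN
			| ARCH_TIMER_USR_PCT_ACCESS_EN);

	/* Enable user access to the virtual counter (what the VDSO needs). */
	cntkctl |= ARCH_TIMER_USR_VCT_ACCESS_EN;

	arch_timer_set_cntkctl(cntkctl);		/* assumed accessor */
}
```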
-
- 26 Sep 2014, 3 commits
-
-
Committed by Behan Webster
With compilers which follow the C99 standard (like modern versions of gcc and clang), "extern inline" does the wrong thing (emits code for an externally linkable version of the inline function). In this case, using static inline and removing the NULL version of return_address in return_address.c does the right thing. Signed-off-by: Behan Webster <behanw@converseincode.com> Reviewed-by: Mark Charlebois <charlebm@gmail.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
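A compact illustration of the C99 semantics at issue; the prototype mirrors the function named above, but treat it as a sketch rather than the exact header change:

```c
/*
 * Before (problematic under C99 -- gcc/clang may emit an additional,
 * externally visible out-of-line copy that clashes with the fallback
 * definition in return_address.c):
 *
 *     extern inline void *return_address(unsigned int level);
 *
 * After: the definition stays local to each translation unit, and the
 * separate NULL fallback is no longer needed.
 */
static inline void *return_address(unsigned int level)
{
	return NULL;
}
```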
-
Committed by Joe Perches
Use the more common pr_warn. Other miscellanea: coalesce formats; realign arguments. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
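For example (the module name, message, and variables here are hypothetical, shown only to illustrate the conversion and format coalescing):

```c
#include <linux/printk.h>

static void report_state(int state, unsigned int cpu)
{
	/* Before: explicit log level, format string split across lines */
	printk(KERN_WARNING "mymodule: "
	       "unexpected state %d on CPU %u\n", state, cpu);

	/* After: pr_warn(), with the format coalesced onto one line */
	pr_warn("mymodule: unexpected state %d on CPU %u\n", state, cpu);
}
```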
-
Committed by Christoffer Dall
When we catch something that's not a permission fault or a translation fault, we log the unsupported FSC in the kernel log, but we were masking off the bottom bits of the FSC, which was not very helpful. Also correctly report the FSC for data and instruction faults rather than telling people it was a DFCS, which doesn't exist in the ARM ARM. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 25 Sep 2014, 1 commit
-
-
Committed by Carlo Caione
Add the UART definitions needed to support earlyprintk for MesonX SoCs on UARTAO. Signed-off-by: Carlo Caione <carlo@caione.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 24 Sep 2014, 2 commits
-
-
Committed by Tang Chen
This will be used to let the guest run while the APIC access page is not pinned. Because subsequent patches will fill in the function for x86, place the (still empty) x86 implementation in the x86.c file instead of adding an inline function in kvm_host.h. Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Andres Lagar-Cavilla
1. We were calling clear_flush_young_notify in unmap_one, but we are within an mmu notifier invalidate range scope. The spte exists no more (due to range_start) and the accessed bit info has already been propagated (due to kvm_pfn_set_accessed). Simply call clear_flush_young. 2. We clear_flush_young on a primary MMU PMD, but this may be mapped as a collection of PTEs by the secondary MMU (e.g. during log-dirty). This required expanding the interface of the clear_flush_young mmu notifier, so a lot of code has been trivially touched. 3. In the absence of shadow_accessed_mask (e.g. EPT A bit), we emulate the access bit by blowing the spte. This requires proper synchronization with MMU notifier consumers, like every other removal of spte's does. Signed-off-by: Andres Lagar-Cavilla <andreslc@google.com> Acked-by: Rik van Riel <riel@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 19 Sep 2014, 1 commit
-
-
Committed by Marc Zyngier
In order to make the number of interrupts configurable, use the new fancy device management API to add KVM_DEV_ARM_VGIC_GRP_NR_IRQS as a VGIC configurable attribute. Userspace can now specify the exact size of the GIC (in increments of 32 interrupts). Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
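A hedged userspace sketch of how such an attribute is typically programmed through the KVM device API (the VGIC device fd comes from KVM_CREATE_DEVICE; error handling omitted):

```c
#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Ask the in-kernel VGIC for nr_irqs interrupts (a multiple of 32). */
static int vgic_set_nr_irqs(int vgic_fd, __u32 nr_irqs)
{
	struct kvm_device_attr attr = {
		.group	= KVM_DEV_ARM_VGIC_GRP_NR_IRQS,
		.attr	= 0,
		.addr	= (__u64)(unsigned long)&nr_irqs,
	};

	return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
}
```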
-
- 18 Sep 2014, 1 commit
-
-
Committed by Florian Fainelli
Broadcom BCM63xx DSL SoCs have a different UART implementation, for which we need specially crafted low-level debug assembly code. Add support for this using the standard definitions provided in include/linux/serial_bcm63xx.h (shared with their MIPS counterparts). Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
-
- 16 Sep 2014, 1 commit
-
-
Committed by Nathan Lynch
The TPIDRURO and TPIDRURW registers need to be flushed during exec; otherwise TLS information is potentially leaked. TPIDRURO in particular needs careful treatment. Since flush_thread basically needs the same code used to set the TLS in arm_syscall, pull that into a common set_tls helper in tls.h and use it in both places. Similarly, TEEHBR needs to be cleared during exec as well. Clearing its save slot in thread_info isn't right, as there is no guarantee that a thread switch will occur before the new program runs. Just setting the register directly is sufficient. Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com> Acked-by: Will Deacon <will.deacon@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 Sep 2014, 2 commits
-
-
Committed by Frederic Weisbecker
ARM irq work IPI support depends on SMP support. That information is partly known at early boot time. Let's implement arch_irq_work_has_interrupt() accordingly. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
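A minimal sketch of what the ARM override can look like, assuming the runtime SMP check is the usual is_smp() helper:

```c
/* arch/arm/include/asm/irq_work.h -- sketch */
#ifndef __ASM_ARM_IRQ_WORK_H
#define __ASM_ARM_IRQ_WORK_H

#include <linux/types.h>
#include <asm/smp_plat.h>

static inline bool arch_irq_work_has_interrupt(void)
{
	/* Only an SMP kernel running on SMP hardware can raise the IPI. */
	return is_smp();
}

#endif /* __ASM_ARM_IRQ_WORK_H */
```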
-
Committed by Peter Zijlstra
The nohz full code needs irq work to trigger its own interrupt so that the subsystem can work even when the tick is stopped. Let's introduce arch_irq_work_has_interrupt(), which archs can override to tell about their support for this ability. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
-
- 13 Sep 2014, 1 commit
-
-
Committed by Victor Kamensky
Commit e38361d0 'ARM: 8091/2: add get_user() support for 8 byte types' broke the V7 BE get_user call when the target variable size is 64 bit but the '*ptr' size is 32 bit or smaller. e38361d0 changed the type of __r2 from 'register unsigned long' to 'register typeof(x) __r2 asm("r2")', i.e. before the change, even when the target variable size was 64 bit, __r2 was still 32 bit. But after the e38361d0 commit, for a target variable of 64 bit size, __r2 became 64 bit and now occupies 2 registers, r2 and r3. The issue in the BE case is that the r3 register is the least significant word of __r2 and the r2 register is the most significant word of __r2. But __get_user_4 still copies its result into r2 (the most significant word of __r2). Subsequent code copies from __r2 into x, but for the situation described it will pick up only garbage from the r3 register. Special __get_user_64t_(124) functions are introduced. They are similar to the corresponding __get_user_(124) functions but store their result in the r3 register (the lsw in case of a 64 bit __r2 in a BE image). Those functions are used by the get_user macro in the BE case when the target variable size is 64 bit. Also changed the __get_user_lo8 name into __get_user_32t_8 to get consistent naming across all cases. Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org> Suggested-by: Daniel Thompson <daniel.thompson@linaro.org> Reviewed-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 Sep 2014, 2 commits
-
-
Committed by Stefano Stabellini
Remove the rbtree used to keep track of machine-to-physical mappings: the frontend can grant the same page multiple times, leading to errors inserting or removing entries from the mach_to_phys tree. Linux only needed to know the physical address corresponding to a given machine address in swiotlb-xen. Now that swiotlb-xen can call the xen_dma_* functions passing the machine address directly, we can remove it. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Tested-by: Denis Schneider <v1ne2go@gmail.com>
-
Committed by Stefano Stabellini
xen_dma_unmap_page, xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device are currently implemented by calling into the corresponding generic ARM implementation of these functions. In order to do this, first the dma_addr_t handle, which on Xen is a machine address, needs to be translated into a physical address. The operation is expensive and inaccurate, given that a single machine address can correspond to multiple physical addresses in one domain, because the same page can be granted multiple times by the frontend. To avoid this problem, we introduce a Xen-specific implementation of xen_dma_unmap_page, xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device that can operate on machine addresses directly. The new implementation relies on the fact that the hypervisor creates a second p2m mapping of any grant pages at physical address == machine address of the page for dom0. Therefore we can access memory at physical address == dma_addr_t handle and perform the cache flushing there. Some cache maintenance operations require a virtual address. Instead of using ioremap_cache, which is not safe in interrupt context, we allocate a per-cpu PAGE_KERNEL scratch page and manually update the pte for it. arm64 doesn't need cache maintenance operations on unmap for now. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Tested-by: Denis Schneider <v1ne2go@gmail.com>
-
- 11 Sep 2014, 1 commit
-
-
Committed by Ard Biesheuvel
The ISS encoding for an exception from a Data Abort has a WnR bit[6] that indicates whether the Data Abort was caused by a read or a write instruction. While there are several fields in the encoding that are only valid if the ISV bit[24] is set, WnR is not one of them, so we can read it unconditionally. Instead of fixing both implementations of kvm_is_write_fault() in place, reimplement it just once using kvm_vcpu_dabt_iswrite(), which already does the right thing with respect to the WnR bit. Also fix up the callers to pass 'vcpu'. Acked-by: Laszlo Ersek <lersek@redhat.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
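The shared implementation can be as small as the following sketch, assuming only the helpers named in the description above:

```c
static bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
{
	/* Instruction aborts are never writes. */
	if (kvm_vcpu_trap_is_iabt(vcpu))
		return false;

	/* For data aborts, WnR is valid even when ISV is clear. */
	return kvm_vcpu_dabt_iswrite(vcpu);
}
```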
-
- 03 Sep 2014, 1 commit
-
-
Committed by Haojian Zhuang
Add the CONFIG_MCPM_QUAD_CLUSTER configuration to enlarge the cluster number from 2 to 4. Signed-off-by: Haojian Zhuang <haojian.zhuang@linaro.org> Reviewed-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Wei Xu <xuwei5@hisilicon.com>
-
- 29 Aug 2014, 3 commits
-
-
Committed by Radim Krčmář
In the beginning was on_each_cpu(), which required an unused argument to kvm_arch_ops.hardware_{en,dis}able, but this was soon forgotten. Remove unnecessary arguments that stem from this. Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
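The resulting interface change is just dropping the dead parameter; a sketch of the arch hook prototypes before and after (return types per the KVM arch interface of that era, so treat the details as assumptions):

```c
/*
 * Before: the argument only existed because on_each_cpu() passes one.
 *
 *     int  kvm_arch_hardware_enable(void *garbage);
 *     void kvm_arch_hardware_disable(void *garbage);
 */

/* After: the hooks take no argument at all. */
int  kvm_arch_hardware_enable(void);
void kvm_arch_hardware_disable(void);
```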
-
Committed by Radim Krčmář
Using static inline is going to save a few bytes and cycles. For example on powerpc, the difference is 700 B after stripping (5 kB before). This patch also deals with two overlooked empty functions: kvm_arch_flush_shadow was not removed from arch/mips/kvm/mips.c by 2df72e9b ("KVM: split kvm_arch_flush_shadow"), and kvm_arch_sched_in never made it into arch/ia64/kvm/kvm-ia64.c after e790d9ef ("KVM: add kvm_arch_sched_in"). Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Opaque KVM structs are useful for prototypes in asm/kvm_host.h, to avoid "'struct foo' declared inside parameter list" warnings (and consequent breakage due to conflicting types). Move them from individual files to a generic place in linux/kvm_types.h. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
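For illustration, a few of the forward declarations such a shared header can carry so that asm/kvm_host.h may use the types in prototypes without their full definitions (the exact list is an assumption):

```c
/* include/linux/kvm_types.h -- sketch (a subset, for illustration) */
struct kvm;
struct kvm_vcpu;
struct kvm_run;
struct kvm_interrupt;
struct kvm_memory_slot;
```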
-
- 28 Aug 2014, 2 commits
-
-
Committed by Will Deacon
Sparse kicks up about a type mismatch for kvm_target_cpu: arch/arm64/kvm/guest.c:271:25: error: symbol 'kvm_target_cpu' redeclared with different type (originally declared at ./arch/arm64/include/asm/kvm_host.h:45) - different modifiers so fix this by adding the missing const attribute to the function declaration. Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
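In other words, the declaration must carry the same const function attribute as the definition; a sketch of the shape of the fix (signature details inferred from the sparse output above, so treat them as assumptions):

```c
#include <linux/compiler.h>

/*
 * arch/arm64/include/asm/kvm_host.h -- before the fix:
 *
 *     int kvm_target_cpu(void);
 */

/* After: matches the definition in arch/arm64/kvm/guest.c. */
int __attribute_const__ kvm_target_cpu(void);
```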
-
Committed by Christoffer Dall
When userspace loads code and data in read-only memory regions, KVM needs to be able to handle this on arm and arm64. Specifically this is used when running code directly from a read-only flash device; the common scenario is a UEFI blob loaded with the -bios option in QEMU. Note that the MMIO exit on writes to read-only memory is ABI and can be used to emulate block-erase style flash devices. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 27 Aug 2014, 4 commits
-
-
Committed by Geert Uytterhoeven
After becoming a mandatory function, boot_secondary() is no longer used outside arch/arm/kernel/smp.c. Hence remove its public prototype and, as suggested by Arnd, let it be absorbed by its single caller. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Juri Lelli
Commit af040ffc ("ARM: make it easier to check the CPU part number correctly") changed the ARM_CPU_PART_X masks and the way they are returned and checked against. Usage of read_cpuid_part_number() is now deprecated, and calling places were updated accordingly. This actually broke cpuidle-big_little initialization, as bl_idle_driver_init() performs a check using a hardcoded mask on cpu_id. Create an interface to perform the check (which is now even easier to read). Also define a proper mask (ARM_CPU_PART_MASK) that makes this kind of check cleaner and helps prevent bugs in the future. Update usage accordingly. Signed-off-by: Juri Lelli <juri.lelli@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
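One plausible shape of such a check helper, shown only to illustrate the idea of hiding the mask in one place (the helper name is illustrative, not necessarily the kernel's):

```c
#include <linux/types.h>
#include <asm/cputype.h>	/* ARM_CPU_PART_MASK, ARM_CPU_PART_* */

static inline bool cpu_id_part_matches(unsigned int cpu_id, unsigned int part)
{
	return (cpu_id & ARM_CPU_PART_MASK) == part;
}
```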
-
Committed by Mark Rutland
On revisions of Cortex-A15 prior to r3p3, a CLREX instruction at PL1 may falsely trigger a watchpoint exception, leading to potential data aborts during exception return and/or livelock. This patch resolves the issue in the following ways: - Replacing our uses of CLREX with a dummy STREX sequence instead (as we did for v6 CPUs). - Removing the clrex code from v7_exit_coherency_flush and derivatives, since this only exists as a minor performance improvement when non-cached exclusives are in use (Linux doesn't use these). Benchmarking on a variety of ARM cores revealed no measurable performance difference with this change applied, so the change is performed unconditionally and no new Kconfig entry is added. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: stable@vger.kernel.org Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Andrey Ryabinin
A kernel module built with GCOV profiling fails to load with the following error: $ insmod test_module.ko test_module: unknown relocation: 38 insmod: can't insert 'test_module.ko': invalid module format This happens because constructor pointers in the .init_array section use the R_ARM_TARGET1 relocation type, which was not supported. The documentation (ELF for the ARM Architecture) says: "The relocation must be processed either in the same way as R_ARM_REL32 or as R_ARM_ABS32: a virtual platform must specify which method is used." Since the kernel expects to see absolute addresses in .init_array, the R_ARM_TARGET1 relocation type should be treated the same way as R_ARM_ABS32. Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
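A hedged sketch of the corresponding change in the module relocation handler: letting R_ARM_TARGET1 fall through to the existing absolute-relocation case. This assumes R_ARM_TARGET1 (type 38) is defined alongside the other relocation constants; the wrapper function below is illustrative, since the real logic lives in apply_relocate() in arch/arm/kernel/module.c:

```c
#include <linux/elf.h>
#include <asm/elf.h>

static void apply_one_reloc(Elf32_Rel *rel, Elf32_Sym *sym, u32 *loc)
{
	switch (ELF32_R_TYPE(rel->r_info)) {
	case R_ARM_NONE:
		break;

	case R_ARM_ABS32:
	case R_ARM_TARGET1:	/* per AAELF: handle like R_ARM_ABS32 */
		*loc += sym->st_value;
		break;
	}
}
```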
-
- 26 Aug 2014, 1 commit
-
-
Committed by Thierry Reding
Provide an implementation for dma_{alloc,free,mmap}_writecombine() when the architecture supports DMA attributes. Signed-off-by: Thierry Reding <treding@nvidia.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
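A sketch of how the alloc side can be expressed on top of the DMA attributes API of that era (DEFINE_DMA_ATTRS/dma_set_attr); treat it as an illustration rather than the exact header hunk:

```c
#include <linux/dma-mapping.h>
#include <linux/dma-attrs.h>

static inline void *dma_alloc_writecombine(struct device *dev, size_t size,
					   dma_addr_t *dma_addr, gfp_t gfp)
{
	DEFINE_DMA_ATTRS(attrs);

	/* Write-combining becomes just another DMA attribute. */
	dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
	return dma_alloc_attrs(dev, size, dma_addr, gfp, &attrs);
}
```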
-
- 14 Aug 2014, 1 commit
-
-
Committed by Peter Zijlstra
Many of the atomic op implementations are the same except for one instruction; fold the lot into a few CPP macros and reduce LoC. This also prepares for easy addition of new ops. Requires the asm_op because of eor. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Chen Gang <gang.chen@asianux.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Nicolas Pitre <nico@linaro.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Albin Tonnerre <albin.tonnerre@arm.com> Cc: Victor Kamensky <victor.kamensky@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/20140508135851.939725247@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
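A simplified sketch of the macro pattern. The real ARM macros expand to an ldrex/strex loop built from the asm_op mnemonic; the plain C body below only shows how the three parameters are consumed:

```c
#include <linux/types.h>	/* atomic_t */

/*
 * c_op is the C spelling of the operation, asm_op the mnemonic emitted
 * inside the real ldrex/strex loop -- asm_op is needed because "eor"
 * has no C operator token of the same name.
 */
#define ATOMIC_OP(op, c_op, asm_op)					\
static inline void atomic_##op(int i, atomic_t *v)			\
{									\
	v->counter c_op i;	/* real version: ldrex/strex loop */	\
}

ATOMIC_OP(add, +=, add)
ATOMIC_OP(sub, -=, sub)

#undef ATOMIC_OP
```

With this in place, adding a new operation is a one-line ATOMIC_OP() instantiation rather than another hand-written assembly routine.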
-