- 30 Jan 2017, 1 commit
-
-
Committed by Vijaya Kumar K
VGICv3 Distributor and Redistributor registers are accessed using the KVM_DEV_ARM_VGIC_GRP_DIST_REGS and KVM_DEV_ARM_VGIC_GRP_REDIST_REGS groups with the KVM_SET_DEVICE_ATTR and KVM_GET_DEVICE_ATTR ioctls. These registers are accessed as 32-bit values, and the CPU MPIDR value passed along with the register offset is used to identify the CPU for Redistributor register accesses. This version of the VGIC v3 specification is documented in Documentation/virtual/kvm/devices/arm-vgic-v3.txt. Also update arch/arm/include/uapi/asm/kvm.h so that it compiles for AArch32 mode.

Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 14 Jan 2017, 1 commit
-
-
Committed by Nicolas Dichtel
Due to the way kbuild works, this header was unintentionally exported back in 2013 when it was created, despite it not being in a uapi/ directory. This is very non-intuitive behaviour by Kbuild. However, we've had this include exported to userland for almost four years, and searching google for "ARM types.h __UINTPTR_TYPE__" gives no hint that anyone has complained about it. So, let's make it officially exported in this state.

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
- 13 Jan 2017, 1 commit
-
-
Committed by Jintack Lim
The current KVM world switch code unintentionally sets the wrong bits in CNTHCTL_EL2 when E2H == 1, which may allow the guest OS to access the physical timer. The bit positions in CNTHCTL_EL2 change depending on the HCR_EL2.E2H bit: EL1PCEN and EL1PCTEN are bits 1 and 0 when E2H is not set, but bits 11 and 10 respectively when E2H is set.

In fact, on VHE we only need to set those bits once, not on every world switch. This is because the host kernel runs in EL2 with HCR_EL2.TGE == 1, which makes those bits have no effect on host kernel execution. So we just set those bits once for guests, and that's it.

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
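As a rough sketch of the layout dependency (the bit values mirror the description above, but the helper and its usage here are illustrative, not the merged patch):

    /* Illustrative sketch only: the physical timer/counter trap
     * controls in CNTHCTL_EL2 move when HCR_EL2.E2H == 1. */
    static void init_cnthctl_for_guest(bool vhe)
    {
        u64 val = read_sysreg(cnthctl_el2);

        if (!vhe) {
            /* E2H == 0: EL1PCTEN is bit 0, EL1PCEN is bit 1 */
            val |=  (1 << 0);   /* guest can read the physical counter */
            val &= ~(1 << 1);   /* trap guest physical timer accesses */
        } else {
            /* E2H == 1: the same controls sit at bits 10 and 11, and
             * only need to be programmed once, not per world switch. */
            val |=  (1 << 10);
            val &= ~(1 << 11);
        }
        write_sysreg(val, cnthctl_el2);
    }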
-
- 11 Jan 2017, 2 commits
-
-
Committed by Mark Rutland
On APQ8060, the kernel crashes in arch_hw_breakpoint_init, taking an undefined instruction trap within write_wb_reg. This is because Scorpion CPUs erroneously appear to set DBGPRSR.SPD when WFI is issued, even if the core is not powered down. When DBGPRSR.SPD is set, breakpoint and watchpoint registers are treated as undefined.

It's possible to trigger similar crashes later on from userspace, by requesting the kernel to install a breakpoint or watchpoint, as we can go idle at any point between the reset of the debug registers and their later use. This has always been the case.

Given that this has always been broken, no-one has complained until now, and there is no clear workaround, disable hardware breakpoints and watchpoints on Scorpion to avoid these issues.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Rabin Vincent
ARM has a few system calls (most notably mmap) for which the names of the functions which are referenced in the syscall table do not match the names of the syscall tracepoints. As a consequence of this, these tracepoints are not made available. Implement arch_syscall_match_sym_name to fix this and allow tracing even these system calls.

Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
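A minimal sketch of what such a hook can look like (ARCH_HAS_SYSCALL_MATCH_SYM_NAME is the standard ftrace opt-in; the mmap alias follows from the description, other aliases in the merged patch may differ):

    /* arch/arm/include/asm/ftrace.h (sketch) */
    #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME

    static inline bool arch_syscall_match_sym_name(const char *sym,
                                                   const char *name)
    {
        if (!strcmp(sym, "sys_mmap2"))
            sym = "sys_mmap_pgoff";  /* table entry vs. tracepoint name */

        return !strcmp(sym, name);
    }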
-
- 13 Dec 2016, 3 commits
-
-
Committed by Aneesh Kumar K.V
Now that we check for a page size change early in the loop, we can partially revert e9d55e15 ("mm: change the interface for __tlb_remove_page"). This simplifies the code considerably by removing the need to track the last address with which we adjusted the range. We also go back to the older way of filling the mmu_gather array, i.e. we add an entry and then check whether the gather batch is full.

Link: http://lkml.kernel.org/r/20161026084839.27299-6-aneesh.kumar@linux.vnet.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Aneesh Kumar K.V
With commit e77b0852 ("mm/mmu_gather: track page size with mmu gather and force flush if page size change") we added the ability to force a tlb flush when the page size changes in a mmu_gather loop. We did that by checking for a page size change every time we added a page to the mmu_gather for lazy flush/remove. We can improve that by moving the page size change check earlier and not doing it every time we add a page.

This also helps us do a tlb flush when invalidating a range covering a dax mapping. With dax mappings we don't have a backing struct page and hence we don't call tlb_remove_page, which previously forced the tlb flush on page size change. Moving the page size change check earlier means we will do the same even for dax mappings.

We also avoid doing this check on architectures other than powerpc. In a later patch we will remove the page size check from tlb_remove_page().

Link: http://lkml.kernel.org/r/20161026084839.27299-5-aneesh.kumar@linux.vnet.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Aneesh Kumar K.V
This adds tlb_remove_hugetlb_entry, similar to tlb_remove_pmd_tlb_entry.

Link: http://lkml.kernel.org/r/20161026084839.27299-4-aneesh.kumar@linux.vnet.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
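A plausible shape for the helper, following the tlb_remove_pmd_tlb_entry naming convention (a sketch only; the merged version also needs to account for huge_page_size(h) when adjusting the flush range):

    /* include/asm-generic/tlb.h (sketch) */
    #define tlb_remove_hugetlb_entry(h, tlb, ptep, address)     \
        do {                                                    \
            __tlb_adjust_range(tlb, address);                   \
            __tlb_remove_tlb_entry(tlb, ptep, address);         \
        } while (0)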
-
- 03 Dec 2016, 1 commit
-
-
Committed by Marc Zyngier
ARM and arm64 Xen ports share a number of headers, leading to packaging issues when these headers need to be exported, as it breaks the reasonable requirement that an architecture port has self-contained headers. Fix the issue by moving the 5 header files to include/xen/arm, and keep local placeholders that include the relevant files.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
-
- 29 Nov 2016, 1 commit
-
-
Committed by Vladimir Murzin
Wire up flush_dcache and the readq- and writeq-like gic-v3-its accessors, so the GICv3 ITS gets all it needs to be built and run.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 23 Nov 2016, 1 commit
-
-
Committed by Russell King
This reverts commit 4dd1837d. Moving the exports for assembly code into the assembly files breaks KSYM trimming, but also breaks modversions. While fixing the KSYM trimming is trivial, fixing modversions brings us to a technically worse position than we had prior to the above change:

- We end up with the prototype definitions divorced from everything else, which means that adding or removing assembly-level ksyms exports becomes more fragile:
  * if adding a new assembly ksyms export, a missed prototype in asm-prototypes.h results in a successful build if no module in the selected configuration makes use of the symbol.
  * when removing a ksyms export, asm-prototypes.h will get forgotten; with armksyms.c, you'll get a build error if you forget to touch the file.
- We end up with the same amount of include files and prototypes, they're just in a header file instead of a .c file with their exports.

As for lines of code, we don't get much of a size reduction:

    (original commit)            47 files changed, 131 insertions(+), 208 deletions(-)
    (fix for ksyms trimming)      7 files changed,  18 insertions(+),   5 deletions(-)
    (two fixes for modversions)   1 file changed,   34 insertions(+)
                                  3 files changed,    7 insertions(+),   2 deletions(-)

which results in a net total of only 25 lines deleted.

As there does not seem to be much benefit from this change of approach, revert the change.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
- 17 Nov 2016, 1 commit
-
-
Committed by Christian Borntraeger
No need to duplicate the same define everywhere. Since the only user is stop-machine and the only provider is s390, we can use a default implementation of cpu_relax_yield() in sched.h.

Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Noam Camus <noamc@ezchip.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: kvm@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-s390 <linux-s390@vger.kernel.org>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: sparclinux@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1479298985-191589-1-git-send-email-borntraeger@de.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
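The generic fallback is the usual define-unless-overridden idiom; roughly (a sketch, the exact guard in the merged patch may differ):

    /* include/linux/sched.h (sketch): every arch except s390 just spins */
    #ifndef CONFIG_S390
    #define cpu_relax_yield() cpu_relax()
    #endif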
-
- 16 Nov 2016, 2 commits
-
-
Committed by Christian Borntraeger
As there are no users left, we can remove cpu_relax_lowlatency() implementations from every architecture.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Noam Camus <noamc@ezchip.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Cc: <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1477386195-32736-6-git-send-email-borntraeger@de.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Christian Borntraeger
For spinning loops people often use barrier() or cpu_relax(). For most architectures cpu_relax and barrier are the same, but on some architectures cpu_relax can add some latency. For example, on power, sparc64 and arc, cpu_relax can shift the CPU towards other hardware threads in an SMT environment. On s390 cpu_relax does even more: it uses a hypercall to the hypervisor to give up the timeslice. In contrast to the SMT yielding, this can result in larger latencies.

In some places this latency is unwanted, so another variant, "cpu_relax_lowlatency", was introduced. Before this is used in more and more places, let's revert the logic and provide a cpu_relax_yield that can be called in places where yielding is more important than latency. By default this is the same as cpu_relax on all architectures.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Noam Camus <noamc@ezchip.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1477386195-32736-2-git-send-email-borntraeger@de.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 14 Nov 2016, 1 commit
-
-
Committed by Vladimir Murzin
This patch allows the vGICv3 ITS to be built and used in 32-bit mode.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 13 Nov 2016, 1 commit
-
-
Committed by Lukas Wunner
We already have a macro to invoke boot services which on x86 adapts automatically to the bitness of the EFI firmware: efi_call_early(). The macro allows sharing of functions across arches and bitness variants as long as those functions only call boot services. However in practice functions in the EFI stub contain a mix of boot services calls and protocol calls.

Add an efi_call_proto() macro for bitness-agnostic protocol calls to allow sharing more code across arches as well as deduplicating 32 bit and 64 bit code paths. On x86, implement it using a new efi_table_attr() macro for bitness-agnostic table lookups. Refactor efi_call_early() to make use of the same macro. (The resulting object code remains identical.)

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Andreas Noever <andreas.noever@gmail.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Jones <pjones@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20161112213237.8804-8-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
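The x86 side of this boils down to casting the protocol pointer to its 32-bit or 64-bit struct layout before making the call; approximately (a sketch, not the verbatim macros):

    /* x86 bitness dispatch (sketch) */
    #define efi_table_attr(table, attr, instance)                            \
        (efi_early->is64 ? ((table##_64_t *)(unsigned long)(instance))->attr \
                         : ((table##_32_t *)(unsigned long)(instance))->attr)

    #define efi_call_proto(protocol, f, instance, ...)                       \
        efi_early->call(efi_table_attr(protocol, f, instance),               \
                        instance, ##__VA_ARGS__)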
-
- 05 Nov 2016, 1 commit
-
-
Committed by Marc Zyngier
Architecturally, TLBs are private to the (physical) CPU they're associated with. But when multiple vcpus from the same VM are being multiplexed on the same CPU, the TLBs are not private to the vcpus (and are actually shared across the VMID).

Let's consider the following scenario:

- vcpu-0 maps PA to VA
- vcpu-1 maps PA' to VA

If run on the same physical CPU, vcpu-1 can hit TLB entries generated by vcpu-0 accesses, and access the wrong physical page.

The solution to this is to keep a per-VM map of which vcpu ran last on each given physical CPU, and invalidate local TLBs when switching to a different vcpu from the same VM.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
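A sketch of the vcpu-load check implied by the description (the field and helper names here are assumptions for illustration, not necessarily those of the merged patch):

    /* on vcpu load: if another vcpu of this VM ran last on this
     * physical CPU, invalidate the local TLBs for the VMID */
    static void flush_tlbs_if_vcpu_migrated(struct kvm_vcpu *vcpu, int cpu)
    {
        int *last_ran = per_cpu_ptr(vcpu->kvm->arch.last_vcpu_ran, cpu);

        if (*last_ran != vcpu->vcpu_id) {
            kvm_call_hyp(__kvm_tlb_flush_local_vmid, vcpu);
            *last_ran = vcpu->vcpu_id;
        }
    }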
-
- 01 Nov 2016, 1 commit
-
-
Committed by Christoph Hellwig
No need for it - we only use struct bio_vec in prototypes and already have forward declarations for it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 25 Oct 2016, 1 commit
-
-
Committed by Peter Zijlstra
It's all generic atomic_long_t stuff now.

Tested-by: Jason Low <jason.low2@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 19 Oct 2016, 4 commits
-
-
Committed by Nicolas Pitre
Explain where the values for UDELAY_MULT and UDELAY_SHIFT come from. Also fix/clarify some comments pertaining to their usage in the assembly code.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Wire up the new pkey syscalls for ARM.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
Committed by Russell King
Convert ARM to use a similar mechanism to x86 to generate the unistd.h system call numbers and the various kernel system call tables. This means that rather than having to edit three places (asm/unistd.h for the total number of system calls, uapi/asm/unistd.h for the system call numbers, and arch/arm/kernel/calls.S for the call table) we have only one place to edit, making the process much simpler.

The scripts have knowledge of the table padding requirements, so there's no need to worry about __NR_syscalls not fitting within the immediate constant field of ALU instructions anymore.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
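The single place to edit is then a table file in the x86 style; an illustrative excerpt (the row values here are examples, not a verbatim copy of the ARM table):

    # arch/arm/tools/syscall.tbl (illustrative)
    # <num> <abi>   <name>            <entry point>
    0       common  restart_syscall   sys_restart_syscall
    1       common  exit              sys_exit
    2       common  fork              sys_fork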
-
Committed by Russell King
Arrange for mach-types.h to be directly generated in the relevant path, so we don't need a one-liner file in arch/arm/include/asm/.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
- 12 Oct 2016, 1 commit
-
-
Committed by Masahiro Yamada
Kernel source files need not include <linux/kconfig.h> explicitly because the top Makefile forces it to be included with:

    -include $(srctree)/include/linux/kconfig.h

This commit removes explicit includes except the following:

* arch/s390/include/asm/facilities_src.h
* tools/testing/radix-tree/linux/kernel.h

These two are used for host programs.

Link: http://lkml.kernel.org/r/1473656164-11929-1-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 Oct 2016, 1 commit
-
-
Committed by Chris Metcalf
Patch series "improvements to the nmi_backtrace code" v9.

This patch series modifies the trigger_xxx_backtrace() NMI-based remote backtracing code to make it more flexible, and makes a few small improvements along the way.

The motivation comes from the task isolation code, where there are scenarios where we want to be able to diagnose a case where some cpu is about to interrupt a task-isolated cpu. It can be helpful to see both where the interrupting cpu is, and also an approximation of where the cpu that is being interrupted is. The nmi_backtrace framework allows us to discover the stack of the interrupted cpu.

I've tested that the change works as desired on tile, and build-tested x86, arm, mips, and sparc64. For x86 I confirmed that the generic cpuidle stuff as well as the architecture-specific routines are in the new cpuidle section. For arm, mips, and sparc I just build-tested it and made sure the generic cpuidle routines were in the new cpuidle section, but I didn't attempt to figure out which the platform-specific idle routines might be. That might be more usefully done by someone with platform experience in follow-up patches.

This patch (of 4):

Currently you can only request a backtrace of either all cpus, or all cpus but yourself. It can also be helpful to request a remote backtrace of a single cpu, and since we want that, the logical extension is to support a cpumask as the underlying primitive.

This change modifies the existing lib/nmi_backtrace.c code to take a cpumask as its basic primitive, and modifies the linux/nmi.h code to use the new "cpumask" method instead.

The existing clients of nmi_backtrace (arm and x86) are converted to using the new cpumask approach in this change. The other users of the backtracing API (sparc64 and mips) are converted to use the cpumask approach rather than the all/allbutself approach. The mips code ignored the "include_self" boolean but with this change it will now also dump a local backtrace if requested.

Link: http://lkml.kernel.org/r/1472487169-14923-2-git-send-email-cmetcalf@mellanox.com
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
Reviewed-by: Aaron Tomlin <atomlin@redhat.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
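The resulting primitive takes a cpumask plus an arch-supplied IPI raiser; a sketch of the shape (treat the exact signatures and the ARM raise_nmi name as assumptions):

    /* lib/nmi_backtrace.c (sketch): the single cpumask-based primitive */
    void nmi_trigger_cpumask_backtrace(const cpumask_t *mask,
                                       bool exclude_self,
                                       void (*raise)(cpumask_t *mask));

    /* arch wrappers become thin shims, e.g. on ARM: */
    static void arch_trigger_cpumask_backtrace(const cpumask_t *mask,
                                               bool exclude_self)
    {
        nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_nmi);
    }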
-
- 06 Oct 2016, 1 commit
-
-
Committed by Russell King
Commit 215e362d ("ARM: 8306/1: loop_udelay: remove bogomips value limitation") tried to increase the bogomips limitation, but in doing so messed up udelay such that it always gives about a 5% error in the delay, even if we use a timer.

The calculation is:

    loops = UDELAY_MULT * us_delay * ticks_per_jiffy >> UDELAY_SHIFT

Originally, UDELAY_MULT was ((UL(2199023) * HZ) >> 11) and UDELAY_SHIFT 30. Assuming HZ=100, us_delay of 1000 and ticks_per_jiffy of 1660000 (e.g. a 166MHz timer, 1ms delay) this would calculate:

    ((UL(2199023) * HZ) >> 11) * 1000 * 1660000 >> 30
        => 165999

With the new values of 2047 * HZ + 483648 * HZ / 1000000 and 31, we get:

    (2047 * HZ + 483648 * HZ / 1000000) * 1000 * 1660000 >> 31
        => 158269

which is incorrect. This is due to a typo - correcting it gives:

    (2147 * HZ + 483648 * HZ / 1000000) * 1000 * 1660000 >> 31
        => 165999

in other words, the original value.

Fixes: 215e362d ("ARM: 8306/1: loop_udelay: remove bogomips value limitation")
Cc: <stable@vger.kernel.org>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
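For reference, the corrected constants consistent with the arithmetic above would read (a sketch of the relevant defines; exact formatting in the tree may differ):

    /* arch/arm/include/asm/delay.h (sketch) */
    #define UDELAY_MULT    (2147 * HZ + 483648 * HZ / 1000000)
    #define UDELAY_SHIFT   31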
-
- 29 Sep 2016, 1 commit
-
-
Committed by Roger Quadros
Since commit 6ce0d200 ("ARM: dma: Use dma_pfn_offset for dma address translation"), dma_to_pfn() already returns the PFN with the physical memory start offset, so we don't need to add it again. This fixes a USB mass storage lock-up problem on systems that can't do DMA over the entire physical memory range, e.g. Keystone 2 systems with 4GB RAM can only do DMA over the first 2GB [K2E-EVM].

What happens there is that without this patch the SCSI layer sets a wrong bounce buffer limit in scsi_calculate_bounce_limit() for the USB mass storage device. dma_max_pfn() evaluates to 0x8fffff and bounce_limit is set to 0x8fffff000, whereas the maximum DMA'ble physical memory on Keystone 2 is 0x87fffffff. This results in non-DMA'ble pages being given to the USB controller and hence the lock-up.

NOTE: in the above case, the USB-SCSI device's dma_pfn_offset was showing as 0. It should really have been 0x780000, as on K2E LOWMEM_START is 0x80000000 and HIGHMEM_START is 0x800000000. The DMA zone is 2GB, so dma_max_pfn should be 0x87ffff. The incorrect dma_pfn_offset for the USB storage device is because USB devices are not correctly inheriting the dma_pfn_offset from the USB host controller. This will be fixed by a separate patch.

Fixes: 6ce0d200 ("ARM: dma: Use dma_pfn_offset for dma address translation")
Cc: stable@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Olof Johansson <olof@lixom.net>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Reported-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
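Given that dma_to_pfn() already folds in the offset, the fix plausibly reduces dma_max_pfn() to a single conversion (a sketch, assuming the pre-fix body added the physical memory start offset on top):

    /* arch/arm/include/asm/dma-mapping.h (sketch) */
    static inline unsigned long dma_max_pfn(struct device *dev)
    {
        /* dma_to_pfn() already accounts for dma_pfn_offset; do not
         * add the physical memory start offset a second time. */
        return dma_to_pfn(dev, *dev->dma_mask);
    }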
-
- 24 Sep 2016, 1 commit
-
-
Committed by Scott Wood
Instead of comparing the name to a magic string, use archdata to explicitly communicate whether the arch timer is suitable for direct vdso access.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Scott Wood <oss@buserror.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
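On ARM the archdata flag plausibly lives in the arch clocksource data and replaces the name comparison (a sketch; names beyond those in the text are assumptions):

    /* arch/arm/include/asm/clocksource.h (sketch) */
    struct arch_clocksource_data {
        bool vdso_direct;    /* usable for direct VDSO access? */
    };

    /* arch/arm/kernel/vdso.c (sketch): test the flag, not the name */
    static bool tk_is_cntvct(const struct timekeeper *tk)
    {
        return tk->tkr_mono.clock->archdata.vdso_direct;
    }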
-
- 23 Sep 2016, 1 commit
-
-
Committed by Marc Zyngier
A new accessor for gic_write_bpr1 is added to arch_gicv3.h in 4.9, whilst the CP15 accessors are redefined in a separate branch. This leads to a horrible clash, where the new accessor ends up with a crap "asm volatile" definition. Work around this by carrying our own definition of gic_write_bpr1, creating a small conflict which will be obvious to resolve.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 22 Sep 2016, 5 commits
-
-
Committed by Vladimir Murzin
This patch allows vgic-v3 to be built and used in 32-bit mode. Unfortunately, it cannot be split into several steps without extra stubs to keep the patches independent and bisectable. For instance, virt/kvm/arm/vgic/vgic-v3.c uses functions from vgic-v3-sr.c, and handling access to the GICv3 cpu interface from the guest requires vgic_v3.vgic_sre to be already defined.

This is how support has been done:

* handle SGI requests from the guest
* report the configured SRE on access to the GICv3 cpu interface from the guest
* required vgic-v3 macros are provided via uapi.h
* static keys are used to select the GIC backend
* to make vgic-v3 build, the KVM_ARM_VGIC_V3 guard is removed along with the static inlines

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Vladimir Murzin
The vgic-v3 save/restore routines are written in such a way that they map nicely onto the arm64 system register naming, but this does not fit the arm world. To keep virt/kvm/arm/hyp/vgic-v3-sr.c untouched, we create a mapping with a function for each register, mapping the 32-bit accessors to the 64-bit ones. Please note that the 64-bit wide ICH_LR is split in two 32-bit halves (ICH_LR and ICH_LRC) accessed independently.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Vladimir Murzin
The headers linux/irqchip/arm-gic-v3.h and arch/arm/include/asm/kvm_hyp.h are both included in virt/kvm/arm/hyp/vgic-v3-sr.c, and both define macros called __ACCESS_CP15 and __ACCESS_CP15_64, which obviously creates a conflict. These macros were introduced independently for GIC and KVM and, in fact, do the same thing.

As an option we could add prefixes to the KVM and GIC versions of the macros so they won't clash, but that would introduce code duplication. Alternatively, we could keep the macros in, say, the GIC header and include it in the KVM one (or vice versa), but such a dependency would not look any nicer. So we follow the arm64 way (it handles this via sysreg.h) and move the single set of macros to asm/cp15.h.

Cc: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
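The shared accessor macros follow the usual coprocessor pattern; sketched from what asm/cp15.h ends up carrying (details may vary):

    /* arch/arm/include/asm/cp15.h (sketch) */
    #define __ACCESS_CP15(CRn, Op1, CRm, Op2) \
        "mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), u32
    #define __ACCESS_CP15_64(Op1, CRm) \
        "mrrc", "mcrr", __stringify(p15, Op1, %Q0, %R0, CRm), u64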
-
Committed by Vladimir Murzin
The vgic-v3 driver uses the architecture-specific MPIDR_LEVEL_SHIFT macro to encode the affinity in a form compatible with the ICC_SGI* registers. Unfortunately, that macro is missing on ARM, so let's add it.

Cc: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
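On arm64 this macro encodes eight bits of affinity per level, and the 32-bit addition presumably mirrors that (a sketch):

    /* arch/arm/include/asm/cputype.h (sketch) */
    #define MPIDR_LEVEL_BITS          8
    #define MPIDR_LEVEL_SHIFT(level)  (MPIDR_LEVEL_BITS * (level))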
-
Committed by Russell King
Provide a nicer to_sa1111_device macro to convert a struct device to a sa1111_dev. We will need this for drivers when converting them to dev_pm_ops, or removing shutdown methods.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
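Such a cast helper is conventionally a container_of() wrapper (a sketch):

    /* arch/arm/include/asm/hardware/sa1111.h (sketch) */
    #define to_sa1111_device(x)  container_of(x, struct sa1111_dev, dev)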
-
- 20 Sep 2016, 1 commit
-
-
Committed by Russell King
Add a helper function to get the irq number for a device.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
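A sketch of what such a helper can look like, assuming the sa1111_dev carries a small per-device array of interrupts (the field name is an assumption):

    /* arch/arm/include/asm/hardware/sa1111.h (sketch) */
    static inline int sa1111_get_irq(struct sa1111_dev *sadev, unsigned num)
    {
        if (num >= ARRAY_SIZE(sadev->irq))
            return -EINVAL;
        return sadev->irq[num];
    }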
-
- 16 Sep 2016, 1 commit
-
-
Committed by Al Viro
adjust copy_from_user(), obviously

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 13 Sep 2016, 1 commit
-
-
Committed by Daniel Thompson
Currently, when running on FVP, CPU 0 boots up with its BPR changed from the reset value. This renders it impossible to (preemptively) prioritize interrupts on CPU 0.

This is harmless on normal systems since Linux typically does not support preemptive interrupts. It does however cause problems in systems with additional changes (such as patches for NMI simulation).

Many thanks to Andrew Thoelke for suggesting the BPR as having the potential to harm preemption.

Suggested-by: Andrew Thoelke <andrew.thoelke@arm.com>
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 12 Sep 2016, 1 commit
-
-
Committed by Stefan Agner
The cachepolicy variable gets initialized using a masked pmd value. So far, the pmd has been masked with flags valid for the 2-page-table format, but the 3-page-table format requires a different mask. On LPAE, this led to a wrong assumption about which initial cache policy had been used. Later, a check forces the cache policy to writealloc and prints the following warning:

    Forcing write-allocate cache policy for SMP

This patch introduces a new definition, PMD_SECT_CACHE_MASK, for both page table formats, which masks in all cache flags in both cases.

Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
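The two masks plausibly look like this (a sketch: the LPAE variant covers the memory-attribute index bits, the classic variant the TEX/C/B bits; check the merged patch for the exact values):

    /* pgtable-3level-hwdef.h (LPAE, sketch) */
    #define PMD_SECT_CACHE_MASK  (_AT(pmdval_t, 7) << 2)  /* AttrIndx[2:0] */

    /* pgtable-2level-hwdef.h (classic, sketch) */
    #define PMD_SECT_CACHE_MASK  (PMD_SECT_TEX(1) | PMD_SECT_BUFFERABLE | \
                                  PMD_SECT_CACHEABLE)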
-
- 08 Sep 2016, 2 commits
-
-
Committed by Marc Zyngier
An asynchronous abort can also be triggered whilst running at EL2. But instead of making that a new error code, we need to communicate it to the rest of KVM together with the exit reason. So let's hijack a single bit that allows the exception code to be tagged with "pending Abort" information.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Marc Zyngier
Now that we're able to context switch the HCR.VA bit, let's introduce a helper that injects an Abort into a vcpu.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-