- 20 May 2014, 2 commits
-
-
Submitted by Michal Simek
Enable ARCH_SUPPORTS_BIG_ENDIAN in Kconfig. zynq_secondary_trampoline is the first function called on the secondary CPU. Reference: "ARM: mcpm: fix big endian issue in mcpm startup code" (sha1: 519ceb9f). Fix early printk support. Based on: "ARM: pl01x debug code endian fix" (sha1: 76e3faf1). Signed-off-by: Michal Simek <michal.simek@xilinx.com>
-
Submitted by Michal Simek
Virtual addresses have to have the same offset within a 2MB-aligned section of the virtual/physical address space. Fix the uart0 virtual address to be aligned with the physical one. Also remove UART_SIZE, which is completely unused. Reported-by: Russ Smith <russells@google.com> Signed-off-by: Michal Simek <michal.simek@xilinx.com>
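To illustrate the constraint, here is a minimal sketch of a compile-time check; the addresses are placeholders and not necessarily the real Zynq values:

```c
#include <linux/bug.h>		/* BUILD_BUG_ON() */
#include <linux/sizes.h>	/* SZ_2M */

#define UART0_PHYS	0xe0000000UL	/* assumed physical UART base */
#define UART0_VIRT	0xf0800000UL	/* assumed virtual address of the debug map */

static inline void check_debug_uart_alignment(void)
{
	/* virt and phys must share the same offset within a 2 MiB section,
	 * because the early debug UART is mapped with a single section mapping */
	BUILD_BUG_ON((UART0_PHYS & (SZ_2M - 1)) != (UART0_VIRT & (SZ_2M - 1)));
}
```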
-
- 09 Apr 2014, 3 commits
-
-
Submitted by Catalin Marinas
The patch adds asm macros for inc_preempt_count and dec_preempt_count_ti (which also gets the current thread_info) instead of open-coding them in the arch/arm/vfp/*.S files. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Arun KS <getarunks@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Catalin Marinas
asm/assembler.h is a better place for this macro, since it is used by asm files outside arch/arm/kernel/. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Arun KS <getarunks@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Chao Xie Linux
The patch adds cpu_is_pj4 to arch/arm/include/asm/cputype.h. PJ4 has some differences from V7, for example the coprocessor, so cpu_is_pj4 is needed to distinguish this kind of situation. Signed-off-by: Chao Xie <chao.xie@marvell.com> Reviewed-by: Kevin Hilman <khilman@linaro.org> Tested-by: Kevin Hilman <khilman@linaro.org> Tested-by: Stephen Warren <swarren@nvidia.com> Reviewed-by: Stephen Warren <swarren@nvidia.com> Tested-by: Matt Porter <mporter@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
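As a rough sketch, a cpu_is_pj4()-style helper can be built on top of the main ID register; the mask and match value below are assumptions for illustration, not the exact constants used in cputype.h:

```c
#include <asm/cputype.h>	/* read_cpuid_id() */

/* Minimal sketch: detect a Marvell PJ4 by the MIDR implementer field.
 * 0x56 is Marvell's implementer code; the real helper matches the part
 * number more specifically than this. */
static inline int cpu_is_pj4_sketch(void)
{
	unsigned int id = read_cpuid_id();

	return (id & 0xff000000) == 0x56000000;
}
```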
-
- 04 Apr 2014, 1 commit
-
-
Submitted by Russell King
virt_to_page() is incredibly inefficient when virt-to-phys patching is enabled. This is because we end up with this calculation:

  page = &mem_map[asm virt_to_phys(addr) >> 12 - __pv_phys_offset >> 12]

in assembly. The asm virt_to_phys() is equivalent to this operation:

  addr - PAGE_OFFSET + __pv_phys_offset

and we can see that, because this is assembly, the compiler has no chance to optimise some of that away. This should reduce down to:

  page = &mem_map[(addr - PAGE_OFFSET) >> 12]

for the common cases. Permit the compiler to make this optimisation by giving it more of the information it needs - do this by providing a virt_to_pfn() macro. Another issue which makes this more complex is that __pv_phys_offset is a 64-bit type on all platforms. This is needlessly wasteful - if we store the physical offset as a PFN, we can save a lot of work having to deal with 64-bit values, which sometimes ends up producing incredibly horrid code:

  a4c: e3009000  movw  r9, #0
       a4c: R_ARM_MOVW_ABS_NC  __pv_phys_offset
  a50: e3409000  movt  r9, #0      ; r9 = &__pv_phys_offset
       a50: R_ARM_MOVT_ABS  __pv_phys_offset
  a54: e3002000  movw  r2, #0
       a54: R_ARM_MOVW_ABS_NC  __pv_phys_offset
  a58: e3402000  movt  r2, #0      ; r2 = &__pv_phys_offset
       a58: R_ARM_MOVT_ABS  __pv_phys_offset
  a5c: e5999004  ldr   r9, [r9, #4]  ; r9 = high word of __pv_phys_offset
  a60: e3001000  movw  r1, #0
       a60: R_ARM_MOVW_ABS_NC  mem_map
  a64: e592c000  ldr   ip, [r2]      ; ip = low word of __pv_phys_offset

Reviewed-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
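A minimal sketch of the virt_to_pfn()/virt_to_page() shape described above, assuming a flat mem_map; the in-tree definitions may differ in detail:

```c
/* Sketch: expose the virt->pfn arithmetic to the compiler so that
 * virt_to_page() reduces to simple pointer math.  PAGE_OFFSET,
 * PAGE_SHIFT and PHYS_PFN_OFFSET are the usual ARM memory-layout
 * constants. */
#define virt_to_pfn(kaddr) \
	((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
	 PHYS_PFN_OFFSET)

#define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
```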
-
- 22 Mar 2014, 2 commits
-
-
Submitted by Arnd Bergmann
In a combined ARMv6/v7 kernel, we cannot use the movt/movw instructions to load an immediate, as they are not valid on ARMv6. This changes the file to use an indirect load instead, as lots of other implementations do. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Tested-by: Stephen Warren <swarren@wwwdotorg.org> Acked-by: Stephen Warren <swarren@wwwdotorg.org> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: linux-tegra@vger.kernel.org
-
Submitted by Arnd Bergmann
Building an SMP kernel for the sunxi platform with THUMB2 instructions currently fails with this error:

  headsmp.S:7: Error: Thumb encoding does not support an immediate here -- `msr cpsr_fsxc,#0xd3'

Since the generic secondary_startup function already does the same thing in a safe way, we can just drop the private sunxi implementation and jump straight to secondary_startup. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
-
- 20 Mar 2014, 3 commits
-
-
Submitted by Eric Paris
The syscall.h headers were including linux/audit.h but really only needed uapi/linux/audit.h to get the requisite defines. Switch to the uapi headers. Signed-off-by: Eric Paris <eparis@redhat.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@linux-mips.org Cc: linux-s390@vger.kernel.org Cc: x86@kernel.org
-
Submitted by Eric Paris
Every caller of syscall_get_arch() uses current for the task, and no implementors of the function need the args. So just get rid of both of those things. Admittedly, since these are inline functions we aren't wasting stack space, but it just makes the prototypes better. Signed-off-by: Eric Paris <eparis@redhat.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@linux-mips.org Cc: linux390@de.ibm.com Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-s390@vger.kernel.org Cc: linux-arch@vger.kernel.org
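For illustration, the before/after shape on ARM might look like the sketch below, assuming a little-endian build; the exact in-tree code may differ:

```c
#include <uapi/linux/audit.h>	/* AUDIT_ARCH_ARM */

/*
 * Before (sketch): int syscall_get_arch(struct task_struct *task,
 *                                       struct pt_regs *regs);
 * Every caller passed 'current' and no implementation used 'regs'.
 */

/* After: the helper reports the audit arch of 'current' directly. */
static inline int syscall_get_arch(void)
{
	return AUDIT_ARCH_ARM;	/* assumes a little-endian ARM build */
}
```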
-
Submitted by Christopher Covington
The kcmp system call was ported to ARM in commit 3f7d1fe1 "ARM: 7665/1: Wire up kcmp syscall". Fixes: 3f7d1fe1 ("ARM: 7665/1: Wire up kcmp syscall") Signed-off-by: Christopher Covington <cov@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 Mar 2014, 8 commits
-
-
Submitted by David A. Long
Using Rabin Vincent's ARM uprobes patches as a base, enable uprobes support on ARM. Caveat: Thumb is not supported. Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: David A. Long <dave.long@linaro.org>
-
Submitted by David A. Long
Because the common underlying code for ARM kprobes and uprobes needs to share a common architecture-specific context structure, and because the generic kprobes include file insists on defining this to a dummy structure when kprobes is not configured, a new common structure is required which can exist when uprobes is configured without kprobes. In this case kprobes will define a dummy structure, but without the define aliasing the two structure tags it will not affect uprobes and the shared probes code. Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Jon Medhurst <tixy@linaro.org>
-
Submitted by David A. Long
Any remaining ARM kprobes/uprobes symbols which have "kprobe" in the name must be changed to the more generic "probes" or another non-kprobes-specific symbol. Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Jon Medhurst <tixy@linaro.org>
-
Submitted by David A. Long
In preparation for sharing the ARM kprobes instruction-interpreting code with uprobes, make the symbol names less kprobes-specific. Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Jon Medhurst <tixy@linaro.org>
-
Submitted by David A. Long
Change the generic ARM probes code to pass in the opcode and the architecture-specific structure separately instead of using struct kprobe, so we do not pollute code that is used only for uprobes or other non-kprobes instruction interpretation. Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Jon Medhurst <tixy@linaro.org>
-
Submitted by David A. Long
Move the ARM version of the kprobes instruction-parsing code into more generic files, from where it can be used by uprobes and possibly other subsystems. The symbol names will be made more generic in a subsequent part of this patchset. Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Jon Medhurst <tixy@linaro.org>
-
Submitted by David A. Long
Separate the kprobes-only definitions from the definitions needed by both kprobes and uprobes. Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Jon Medhurst <tixy@linaro.org>
-
Submitted by David A. Long
Make sure includes in the ARM kprobes sources are done explicitly. Do not rely on includes from other includes. Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Jon Medhurst <tixy@linaro.org>
-
- 18 Mar 2014, 1 commit
-
-
Submitted by Zoltan Kiss
The grant mapping API does m2p_override unnecessarily: only gntdev needs it; for blkback and future netback patches it just causes lock contention, as those pages never go to userspace. Therefore this series does the following:
- the bulk of the original function (everything after the mapping hypercall) is moved to the arch-dependent set/clear_foreign_p2m_mapping
- the "if (xen_feature(XENFEAT_auto_translated_physmap))" branch goes to ARM
- therefore the ARM function could be much smaller, and the m2p_override stubs could also be removed
- on x86 the set_phys_to_machine calls were moved up to this new function from the m2p_override functions
- the m2p_override functions are only called when there is a kmap_ops param
It also removes a stray space from arch/x86/include/asm/xen/page.h. Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com> Suggested-by: Anthony Liguori <aliguori@amazon.com> Suggested-by: David Vrabel <david.vrabel@citrix.com> Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
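For reference, the new arch-dependent hooks have roughly the following shape; the prototypes are as I recall them, so treat the details as an assumption rather than the exact in-tree declarations:

```c
/* Assumed prototypes of the arch hooks introduced by this series.  The
 * x86 versions do the m2p bookkeeping after the mapping hypercall, while
 * the ARM versions can stay much simpler, since auto-translated guests
 * have no M2P table to maintain. */
int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
			    struct gnttab_map_grant_ref *kmap_ops,
			    struct page **pages, unsigned int count);

int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
			      struct gnttab_map_grant_ref *kmap_ops,
			      struct page **pages, unsigned int count);
```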
-
- 17 Mar 2014, 1 commit
-
-
Submitted by Michal Simek
Add the missing waituart implementation. Signed-off-by: Michal Simek <michal.simek@xilinx.com>
-
- 12 Mar 2014, 1 commit
-
-
Submitted by Michael Opdenacker
This patch removes the use of the IRQF_DISABLED flag in arch/arm/include/asm/floppy.h. It has been a no-op since 2.6.35 and will be removed one day. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 11 Mar 2014, 1 commit
-
-
Submitted by Bjorn Helgaas
Remove mc_capable() and smt_capable(). Neither is used. Both were added by 5c45bf27 ("sched: mc/smt power savings sched policy"). Uses of both were removed by 8e7fbcbc ("sched: Remove stale power aware scheduling remnants and dysfunctional knobs"). Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Link: http://lkml.kernel.org/r/20140304210737.16893.54289.stgit@bhelgaas-glaptop.roam.corp.google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 08 Mar 2014, 1 commit
-
-
Submitted by Russell King
With noMMU, CONFIG_PAGE_OFFSET was not being set correctly. As there's no MMU, PAGE_OFFSET should be equal to PHYS_OFFSET in all cases. This commit makes that explicit. Since we do this, we don't need to mess around in asm/memory.h with ifdefs to sort this out, so let's get rid of that, and there's no point offering the "Memory split" option for noMMU as that's meaningless there. Fixes: b9b32bf7 ("ARM: use linker magic for vectors and vector stubs") Cc: <stable@vger.kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 Mar 2014, 7 commits
-
-
Submitted by Marc Zyngier
In order to be able to detect the point where the guest enables its MMU and caches, trap all the VM-related system registers. Once we see the guest enabling both the MMU and the caches, we can go back to a saner mode of operation, which is to leave these registers in complete control of the guest. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier
HCR.TVM traps (among other things) accesses to AMAIR0 and AMAIR1. In order to minimise the amount of surprise a guest could generate by trying to access these registers with caches off, add them to the list of registers we switch/handle. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Marc Zyngier
So far, KVM/ARM used a fixed HCR configuration per guest, except for the VI/VF/VA bits used to control interrupts in the absence of the VGIC. With the upcoming need to dynamically reconfigure trapping, it becomes necessary to allow the HCR to be changed on a per-vcpu basis. The fix here is to mimic what KVM/arm64 already does: a per-vcpu HCR field, initialized at setup time. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
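A minimal sketch of the shape of this change; the field and macro names follow what I recall of KVM/ARM and should be treated as assumptions:

```c
/* Sketch: keep the HYP Configuration Register value per vcpu instead of
 * using a global constant, so it can be adjusted later (e.g. to set
 * HCR.TVM while the guest still runs with caches off). */
struct kvm_vcpu_arch {
	/* ... existing fields ... */
	unsigned long hcr;		/* per-vcpu HCR value */
};

static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
{
	vcpu->arch.hcr = HCR_GUEST_MASK;	/* default trap configuration */
}
```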
-
Submitted by Marc Zyngier
In order for a guest with caches disabled to observe data written to a given page, we need to make sure that page is committed to memory, and not just hanging in the cache (as guest accesses completely bypass the cache until the guest decides to enable it). For this purpose, hook into the coherent_cache_guest_page function and flush the region if the guest SCTLR register doesn't show the MMU and caches as being enabled. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
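The idea, as a rough sketch (helper names assumed, not the exact in-tree code):

```c
/* Sketch: only force the page out to the point of coherency when the
 * guest is still running with its MMU/caches disabled, as seen in its
 * SCTLR; once caches are on, normal coherency rules apply. */
static void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
				      unsigned long size)
{
	if (!vcpu_has_cache_enabled(vcpu))
		kvm_flush_dcache_to_poc((void *)hva, size);

	/* ... existing icache maintenance continues here ... */
}
```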
-
Submitted by Marc Zyngier
When the guest runs with caches disabled (in an early boot sequence, for example), all the writes go directly to RAM, bypassing the caches altogether. Once the MMU and caches are enabled, whatever sits in the cache becomes suddenly visible, which isn't what the guest expects. A way to avoid this potential disaster is to invalidate the cache when the MMU is being turned on. For this, we hook into the SCTLR_EL1 trapping code and scan the stage-2 page tables, invalidating the pages/sections that have already been mapped in. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier
The use of p*d_addr_end with stage-2 translation is slightly dodgy, as the IPA is 40 bits while all the p*d_addr_end helpers take an unsigned long (arm64 is fine with that, as unsigned long is 64-bit there). The fix is to introduce 64-bit-clean versions of the same helpers and use them in the stage-2 page table code. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
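A 64-bit-clean helper of the kind described can look like this sketch; the kernel's kvm_* variants may differ in naming and detail:

```c
/* Sketch: same logic as pgd_addr_end(), but computed on 64-bit values so
 * a 40-bit IPA is never truncated on 32-bit ARM. */
#define kvm_pgd_addr_end(addr, end)					\
({	u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;		\
	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
})
```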
-
Submitted by Marc Zyngier
In order for a guest with caches off to observe data written to a given page, we need to make sure that page is committed to memory, and not just hanging in the cache (as guest accesses completely bypass the cache until the guest decides to enable it). For this purpose, hook into the coherent_icache_guest_page function and flush the region if the guest SCTLR_EL1 register doesn't show the MMU and caches as being enabled. The function also gets renamed to coherent_cache_guest_page. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 28 Feb 2014, 2 commits
-
-
Submitted by Marek Szyprowski
The 'order' parameter of the IOMMU-aware dma-mapping implementation was introduced mainly as a hack to reduce the size of the bitmap used for tracking IO virtual address space. Since it is now possible to dynamically resize the bitmap, this hack is not needed and can be removed without any impact on client devices. This way the parameters of arm_iommu_create_mapping() become much easier to understand. The 'size' parameter now means the maximum supported IO address space size. The code will allocate (resize) the bitmap in chunks, ensuring that a single chunk is not larger than a single memory page, to avoid unreliable allocations larger than PAGE_SIZE in atomic context. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
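A hedged usage sketch of the simplified API; the IOVA base and size below are illustrative values, not taken from any particular driver:

```c
/* Sketch: after this change a driver passes only the IOVA base and the
 * maximum IO address-space size; there is no 'order' argument. */
struct dma_iommu_mapping *mapping;

mapping = arm_iommu_create_mapping(&platform_bus_type,
				   0x80000000,	/* IOVA base (illustrative) */
				   SZ_256M);	/* max IO address space size */
if (IS_ERR(mapping))
	return PTR_ERR(mapping);

if (arm_iommu_attach_device(dev, mapping)) {
	arm_iommu_release_mapping(mapping);
	return -ENODEV;
}
```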
-
Submitted by Andreas Herrmann
Instead of using just one bitmap to keep track of IO virtual addresses (handed out for IOMMU use), introduce an array of bitmaps. This allows us to extend existing mappings when running out of iova space in the initial mapping, etc. If there is not enough space in the mapping to service an IO virtual address allocation request, __alloc_iova() tries to extend the mapping -- by allocating another bitmap -- and makes another allocation attempt using the freshly allocated bitmap. This allows arm iommu drivers to start with a decent initial size when a dma_iommu_mapping is created and still avoid running out of IO virtual addresses for the mapping. Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com> [mszyprow: removed the extensions parameter to the arm_iommu_create_mapping() function, which will be modified in the next patch anyway, also some debug messages about extending the bitmap] Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 25 Feb 2014, 7 commits
-
-
Submitted by Ard Biesheuvel
This allocates feature bits 0-4 in HWCAP2 for the crypto and CRC extensions introduced in ARMv8. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Ard Biesheuvel
This enables AT_HWCAP2 for ARM. The generic support for this new ELF auxv entry was added in commit 2171364d (powerpc: Add HWCAP2 aux entry). Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
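From user space, the new auxv entry can be consumed roughly like this; a small example assuming glibc's getauxval() and the ARM uapi hwcap header:

```c
/* Example: detect the ARMv8 CRC32 extension via AT_HWCAP2. */
#include <stdio.h>
#include <sys/auxv.h>		/* getauxval(), AT_HWCAP2 */
#include <asm/hwcap.h>		/* HWCAP2_* bits on ARM */

int main(void)
{
	unsigned long hwcap2 = getauxval(AT_HWCAP2);

	if (hwcap2 & HWCAP2_CRC32)
		printf("CRC32 instructions available\n");

	return 0;
}
```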
-
Submitted by Will Deacon
Looking at perf profiles of multi-threaded hackbench runs, a significant performance hit appears to come from the cmpxchg loop used to implement the 32-bit atomic_add_unless function. This can be mitigated by writing a direct implementation of __atomic_add_unless which doesn't require iteration outside of the atomic operation. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
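A sketch of the shape such a direct implementation can take, folding the "unless" test into the ldrex/strex loop instead of wrapping cmpxchg() in an outer C loop (close to, but not necessarily identical with, the in-tree code):

```c
static inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
	int oldval, newval;
	unsigned long tmp;

	smp_mb();

	__asm__ __volatile__ ("@ atomic_add_unless\n"
"1:	ldrex	%0, [%4]\n"		/* load-exclusive the counter */
"	teq	%0, %5\n"		/* bail out if it equals 'u' */
"	beq	2f\n"
"	add	%1, %0, %6\n"
"	strex	%2, %1, [%4]\n"		/* try to store the new value */
"	teq	%2, #0\n"
"	bne	1b\n"			/* retry if the store failed */
"2:"
	: "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter)
	: "r" (&v->counter), "r" (u), "r" (a)
	: "cc");

	if (oldval != u)
		smp_mb();

	return oldval;
}
```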
-
Submitted by Victor Kamensky
Renames the logical shift macros 'push' and 'pull', defined in arch/arm/include/asm/assembler.h, to 'lspush' and 'lspull'. That eliminates the name conflict between the 'push' logical shift macro and the 'push' instruction mnemonic, which in turn allows assembler.h to be included in .S files that use the 'push' instruction. Suggested-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by David Howells
Delete ARM's asm/system.h. It's the last holdout and should be got rid of. This builds for defconfig, lpc32xx_defconfig, exynos_defconfig + XEN, the previous changed to a Gemini system, and an omap3 config with TI_DAVINCI_EMAC. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Will Deacon
The pte_accessible macro can be used to identify page table entries capable of being cached by a TLB. In principle, this differs from pte_present, since PROT_NONE mappings are mapped using invalid entries identified as present, and ptes designated as `old' can use either invalid entries or those with the access flag cleared (guaranteed not to be in the TLB). However, there is a race to take care of, as described in 20841405 ("mm: fix TLB flush race between migration, and change_protection_range"), between a page being migrated and mprotected at the same time. In this case, we can check whether a TLB invalidation is pending for the mm and, if so, temporarily consider PROT_NONE mappings as valid. This patch implements a quick pte_accessible macro for ARM by simply checking if the pte is valid/present depending on the mm. For the classic MMU, these checks are identical and will generate some false positives for PROT_NONE mappings, but this is better than the current asm-generic definition of ((void)(pte),1). Finally, pte_present_user is moved to use pte_valid (and renamed appropriately) since we don't care about cache flushing for faulting mappings. Acked-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
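The ARM definition described here is essentially a one-liner; a sketch of its shape:

```c
/* Sketch: if a TLB flush is pending for this mm, any present pte may
 * still be cached in the TLB, so fall back to pte_present(); otherwise
 * only valid ptes can be TLB-resident. */
#define pte_accessible(mm, pte)	\
	(mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))
```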
-
Submitted by Will Deacon
After a bunch of benchmarking on the interaction between dmb and pldw, it turns out that issuing the pldw *after* the dmb instruction can give modest performance gains (~3% atomic_add_return improvement on a dual A15). This patch adds prefetchw invocations to our barriered atomic operations, including cmpxchg, test_and_xxx and futexes. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
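As an illustration of the pattern, a sketch modelled on the barriered cmpxchg path (details may differ from the in-tree code):

```c
static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
{
	int oldval;
	unsigned long res;

	smp_mb();
	prefetchw(&ptr->counter);	/* pldw issued after the dmb */

	do {
		__asm__ __volatile__("@ atomic_cmpxchg\n"
		"ldrex	%1, [%3]\n"
		"mov	%0, #0\n"
		"teq	%1, %4\n"
		"strexeq %0, %5, [%3]\n"
		    : "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
		    : "r" (&ptr->counter), "Ir" (old), "r" (new)
		    : "cc");
	} while (res);

	smp_mb();

	return oldval;
}
```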
-