- 29 May 2014, 1 commit
-
-
By Christopher Covington

Put architecture-specific assembly code where it belongs, allowing for support of additional architectures such as arm64 in the future.

Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 26 April 2014, 1 commit
-
-
By Linus Torvalds

The mmu-gather operation 'tlb_flush_mmu()' has done two things: the actual TLB flush operation, and the batched freeing of the pages that the TLB entries pointed at. This splits the operation into separate phases, so that the forced batched flushing done by zap_pte_range() can now do the actual TLB flush while still holding the page table lock, but delay the batched freeing of all the pages to after the lock has been dropped.

This in turn allows us to avoid a race condition between set_page_dirty() (as called by zap_pte_range() when it finds a dirty shared memory pte) and page_mkclean(): because we now flush all the dirty page data from the TLBs while holding the pte lock, page_mkclean() will be held up walking the (recently cleaned) page tables until after the TLB entries have been flushed from all CPUs.

Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Dave Hansen <dave.hansen@intel.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
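A minimal sketch of the resulting split, using the two helper names this change introduces; the bodies here are illustrative, not the full upstream implementation:

    /* flush hardware TLB entries; safe while the page table lock is held */
    static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
    {
            tlb_flush(tlb);
            /* ... reset the gathered address range ... */
    }

    /* free the batched pages; may run after the lock is dropped */
    static void tlb_flush_mmu_free(struct mmu_gather *tlb)
    {
            struct mmu_gather_batch *batch;

            for (batch = &tlb->local; batch; batch = batch->next)
                    free_pages_and_swap_cache(batch->pages, batch->nr);
    }

    void tlb_flush_mmu(struct mmu_gather *tlb)
    {
            tlb_flush_mmu_tlbonly(tlb);
            tlb_flush_mmu_free(tlb);
    }

zap_pte_range() can then call tlb_flush_mmu_tlbonly() under the pte lock and leave the freeing phase for later.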
-
- 25 April 2014, 1 commit
-
-
By Sebastian Hesselbarth

Commit fdb487f5 ("ARM: 8015/1: Add cpu_is_pj4 to distinguish PJ4 because it has some differences with V7") introduced a cpuid check for Marvell PJ4 processors to fix a regression caused by adding PJ4-based Marvell Dove into multi_v7. Unfortunately, this check is too narrow to catch the PJ4 used on Dove itself and breaks iWMMXt support. This patch therefore relaxes the cpuid mask to match both PJ4 and PJ4B. Also, rework the given comment about PJ4/PJ4B modifications to be a little more specific about the differences.

Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Tested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
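A sketch of the relaxed check, assuming the read_cpuid_id() helper from arch/arm/include/asm/cputype.h; the mask and ID constants below illustrate "ignore the fields that differ between PJ4 and PJ4B" and should not be read as authoritative values:

    static inline int cpu_is_pj4(void)
    {
            unsigned int id = read_cpuid_id();

            /* compare only the ID fields common to PJ4 and PJ4B */
            if ((id & 0xff0fff00) == 0x560f5800)
                    return 1;

            return 0;
    }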
-
- 23 April 2014, 3 commits
-
-
By Miklos Szeredi

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
[dropped arch/arm/include/asm/unistd.h changes --rmk]
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Nicolas Pitre

The switcher should not depend on MAX_CLUSTER to determine if it should be activated or not. In a multiplatform kernel binary it is possible to have dual-cluster and quad-cluster platforms configured in. In that case MAX_CLUSTER, which is a build-time limit, should be 4, and that shouldn't prevent the switcher from working if the kernel is booted on a b.L dual-cluster system.

In bL_switcher_halve_cpus() we already have a runtime validation check to make sure we're dealing with only two clusters, so booting on a quad-cluster system will be caught and switcher activation aborted. However, the b.L switcher must ensure the MCPM layer is initialized on the booted hardware before doing anything. The mcpm_is_available() function is added to that effect.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Abhilash Kesavan <kesavan.abhilash@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
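A minimal sketch of the new helper: MCPM counts as available once a platform has registered its operations (platform_ops is assumed here to be the MCPM registration state):

    bool mcpm_is_available(void)
    {
            return (platform_ops) ? true : false;
    }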
-
By Xiangyu Lu

In big-endian systems, "%1" gets the most significant part of the value, causing the instruction to produce the wrong result. When viewing ftrace records on big-endian ARM systems, we found that the timestamps were wrong:

    swapper-0    [001]  1325.970000:  0:120:R   ==> [001]   16:120:R events/1
    events/1-16  [001]  1325.970000: 16:120:S   ==> [001]    0:120:R swapper
    swapper-0    [000]  1325.1000000: 0:120:R   +   [000]   15:120:R events/0
    swapper-0    [000]  1325.1000000: 0:120:R   ==> [000]   15:120:R events/0
    swapper-0    [000]  1326.030000:  0:120:R   +   [000] 1150:120:R sshd
    swapper-0    [000]  1326.030000:  0:120:R   ==> [000] 1150:120:R sshd

Displaying ftrace records calls the do_div(n, base) function, which is implemented in arch/arm/include/asm/div64.h. When n = 10000000 and base = 1000000, do_div(n, base) executes "umull %Q0, %R0, %1, %Q2".

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Cc: <stable@vger.kernel.org> # 2.6.20+
Signed-off-by: Alex Wu <wuquanming@huawei.com>
Signed-off-by: Xiangyu Lu <luxiangyu@huawei.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
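An illustrative reduction of the multiply step (not the verbatim div64.h code), relying on GCC's ARM operand modifiers: %Q selects the least-significant word and %R the most-significant word of a 64-bit operand on either endianness, whereas a plain %1 names whichever register of the pair comes first - the high word on big-endian:

    #include <stdint.h>

    static inline uint64_t umull_lo(uint64_t m, uint64_t n)
    {
            uint64_t res;

            /* %Q1 explicitly takes the low word of m; "%1" would not */
            asm("umull %Q0, %R0, %Q1, %Q2"
                : "=&r" (res)
                : "r" (m), "r" (n));
            return res;
    }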
-
- 09 April 2014, 3 commits
-
-
By Catalin Marinas

The patch adds asm macros for inc_preempt_count and dec_preempt_count_ti (which also gets the current thread_info) instead of open-coding them in the arch/arm/vfp/*.S files.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Arun KS <getarunks@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
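A sketch of what such macros look like; the TI_PREEMPT offset is the usual thread_info convention, and the exact form here is illustrative:

        .macro  inc_preempt_count, ti, tmp
        ldr     \tmp, [\ti, #TI_PREEMPT]        @ get preempt count
        add     \tmp, \tmp, #1                  @ increment it
        str     \tmp, [\ti, #TI_PREEMPT]
        .endm

        .macro  dec_preempt_count, ti, tmp
        ldr     \tmp, [\ti, #TI_PREEMPT]        @ get preempt count
        sub     \tmp, \tmp, #1                  @ decrement it
        str     \tmp, [\ti, #TI_PREEMPT]
        .endm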
-
By Catalin Marinas

asm/assembler.h is a better place for this macro, since it is used by asm files outside arch/arm/kernel/.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Arun KS <getarunks@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Chao Xie

The patch adds cpu_is_pj4 to arch/arm/include/asm/cputype.h. PJ4 has some differences from V7, for example the coprocessor. To distinguish this kind of situation, cpu_is_pj4 is needed.

Signed-off-by: Chao Xie <chao.xie@marvell.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Matt Porter <mporter@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 April 2014, 1 commit
-
-
By Russell King

virt_to_page() is incredibly inefficient when virt-to-phys patching is enabled. This is because we end up with this calculation:

    page = &mem_map[asm virt_to_phys(addr) >> 12 - __pv_phys_offset >> 12]

in assembly. The asm virt_to_phys() is equivalent to this operation:

    addr - PAGE_OFFSET + __pv_phys_offset

and we can see that because this is assembly, the compiler has no chance to optimise some of that away. This should reduce down to:

    page = &mem_map[(addr - PAGE_OFFSET) >> 12]

for the common cases. Permit the compiler to make this optimisation by giving it more of the information it needs - do this by providing a virt_to_pfn() macro.

Another issue which makes this more complex is that __pv_phys_offset is a 64-bit type on all platforms. This is needlessly wasteful - if we store the physical offset as a PFN, we can save a lot of work having to deal with 64-bit values, which sometimes ends up producing incredibly horrid code:

    a4c: e3009000  movw  r9, #0
         a4c: R_ARM_MOVW_ABS_NC  __pv_phys_offset
    a50: e3409000  movt  r9, #0            ; r9 = &__pv_phys_offset
         a50: R_ARM_MOVT_ABS     __pv_phys_offset
    a54: e3002000  movw  r2, #0
         a54: R_ARM_MOVW_ABS_NC  __pv_phys_offset
    a58: e3402000  movt  r2, #0            ; r2 = &__pv_phys_offset
         a58: R_ARM_MOVT_ABS     __pv_phys_offset
    a5c: e5999004  ldr   r9, [r9, #4]      ; r9 = high word of __pv_phys_offset
    a60: e3001000  movw  r1, #0
         a60: R_ARM_MOVW_ABS_NC  mem_map
    a64: e592c000  ldr   ip, [r2]          ; ip = low word of __pv_phys_offset

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
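A sketch of the new macro and of virt_to_page() expressed in terms of it, assuming the usual PAGE_OFFSET/PHYS_PFN_OFFSET definitions:

    #define virt_to_pfn(kaddr) \
            ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
             PHYS_PFN_OFFSET)

    #define virt_to_page(kaddr)  pfn_to_page(virt_to_pfn(kaddr))

Because the subtraction and shift are now visible to the compiler as plain integer arithmetic, the constant parts fold away instead of being hidden behind patched assembly.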
-
- 22 March 2014, 2 commits
-
-
By Arnd Bergmann

In a combined ARMv6/v7 kernel, we cannot use the movt/movw instructions to load an immediate, as they are not valid on ARMv6. This changes the file to use an indirect load instead, as lots of other implementations do.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Stephen Warren <swarren@wwwdotorg.org>
Acked-by: Stephen Warren <swarren@wwwdotorg.org>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: linux-tegra@vger.kernel.org
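The two forms, side by side; UART_PHYS is a placeholder symbol, and in the second form the assembler places the constant in a literal pool and emits a pc-relative load:

    @ ARMv7-only: 16-bit immediate moves
            movw    \rp, #:lower16:UART_PHYS
            movt    \rp, #:upper16:UART_PHYS

    @ works on ARMv6 and ARMv7: indirect (literal pool) load
            ldr     \rp, =UART_PHYS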
-
By Arnd Bergmann

Building an SMP kernel for the sunxi platform with THUMB2 instructions fails with this error at the moment:

    headsmp.S:7: Error: Thumb encoding does not support an immediate here -- `msr cpsr_fsxc,#0xd3'

Since the generic secondary_startup function already does the same thing in a safe way, we can just drop the private sunxi implementation and jump straight to secondary_startup.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
-
- 20 March 2014, 3 commits
-
-
By Eric Paris

The syscall.h headers were including linux/audit.h but really only needed uapi/linux/audit.h to get the requisite defines. Switch to the uapi headers.

Signed-off-by: Eric Paris <eparis@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-mips@linux-mips.org
Cc: linux-s390@vger.kernel.org
Cc: x86@kernel.org
-
By Eric Paris

Every caller of syscall_get_arch() uses current for the task, and no implementors of the function need args. So just get rid of both of those things. Admittedly, since these are inline functions we aren't wasting stack space, but it just makes the prototypes better.

Signed-off-by: Eric Paris <eparis@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-mips@linux-mips.org
Cc: linux390@de.ibm.com
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-s390@vger.kernel.org
Cc: linux-arch@vger.kernel.org
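The shape of the change, sketched on the ARM implementation (the body is illustrative):

    /* before: arguments that nobody used */
    static inline int syscall_get_arch(struct task_struct *task,
                                       struct pt_regs *regs)
    {
            return AUDIT_ARCH_ARM;
    }

    /* after: implicitly operates on current */
    static inline int syscall_get_arch(void)
    {
            return AUDIT_ARCH_ARM;
    }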
-
By Christopher Covington

The kcmp system call was ported to ARM in commit 3f7d1fe1 ("ARM: 7665/1: Wire up kcmp syscall").

Fixes: 3f7d1fe1 ("ARM: 7665/1: Wire up kcmp syscall")
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 March 2014, 8 commits
-
-
By David A. Long

Using Rabin Vincent's ARM uprobes patches as a base, enable uprobes support on ARM.

Caveats:
- Thumb is not supported

Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: David A. Long <dave.long@linaro.org>
-
By David A. Long

Because the common underlying code for ARM kprobes and uprobes needs to share a common architecture-specific context structure, and because the generic kprobes include file insists on defining this to a dummy structure when kprobes is not configured, a new common structure is required which can exist when uprobes is configured without kprobes. In this case kprobes will define a dummy structure, but without the define aliasing the two structure tags it will not affect uprobes and the shared probes code.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
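A sketch of the shared structure; the field set follows the description above, and the names are illustrative of arch/arm/include/asm/probes.h rather than a verbatim copy:

    struct arch_probes_insn {
            probes_opcode_t                 *insn;
            probes_insn_handler_t           *insn_handler;
            probes_check_cc                 *insn_check_cc;
            probes_insn_singlestep_t        *insn_singlestep;
            probes_insn_fn_t                *insn_fn;
    };

When kprobes is configured, its own arch-specific structure can alias this one; when it is not, uprobes still has a real structure to work with.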
-
By David A. Long

Any remaining ARM kprobes/uprobes symbols which have "kprobe" in the name must be changed to the more generic "probes" or another non-kprobes-specific symbol.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
By David A. Long

In preparation for sharing the ARM kprobes instruction interpreting code with uprobes, make the symbol names less kprobes-specific.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
By David A. Long

Change the generic ARM probes code to pass in the opcode and the architecture-specific structure separately, instead of using struct kprobe, so we do not pollute code being used only for uprobes or other non-kprobes instruction interpretation.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
By David A. Long

Move the ARM version of the kprobes instruction parsing code into more generic files, from where it can be used by uprobes and possibly other subsystems. The symbol names will be made more generic in a subsequent part of this patchset.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
By David A. Long

Separate the kprobes-only definitions from the definitions needed by both kprobes and uprobes.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
By David A. Long

Make sure includes in the ARM kprobes sources are done explicitly. Do not rely on includes from other includes.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
- 18 March 2014, 1 commit
-
-
By Zoltan Kiss

The grant mapping API does m2p_override unnecessarily: only gntdev needs it; for blkback and the future netback patches it just causes lock contention, as those pages never go to userspace. Therefore this series does the following:

- the bulk of the original function (everything after the mapping hypercall) is moved to the arch-dependent set/clear_foreign_p2m_mapping
- the "if (xen_feature(XENFEAT_auto_translated_physmap))" branch goes to ARM
- therefore the ARM function could be much smaller, and the m2p_override stubs could also be removed
- on x86 the set_phys_to_machine calls were moved up into this new function from the m2p_override functions
- and the m2p_override functions are only called when there is a kmap_ops param

It also removes a stray space from arch/x86/include/asm/xen/page.h.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: Anthony Liguori <aliguori@amazon.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
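The arch hooks this series introduces look roughly like this; the signatures are reconstructed from the description and should be treated as a sketch:

    int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
                                struct gnttab_map_grant_ref *kmap_ops,
                                struct page **pages, unsigned int count);

    int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
                                  struct gnttab_map_grant_ref *kmap_ops,
                                  struct page **pages, unsigned int count);

On ARM the bodies can stay trivial, since auto-translated guests need no m2p override; on x86 they absorb the set_phys_to_machine work.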
-
- 17 March 2014, 1 commit
-
-
By Michal Simek

Add the missing waituart implementation.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
-
- 12 March 2014, 1 commit
-
-
By Michael Opdenacker

This patch removes the use of the IRQF_DISABLED flag in arch/arm/include/asm/floppy.h. It's a NOOP since 2.6.35 and it will be removed one day.

Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
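The change amounts to dropping the flag from the request_irq() call; the irq and handler names below are illustrative, not copied from floppy.h:

    /* before */
    request_irq(fd_irq, floppy_handler, IRQF_DISABLED, "floppy", NULL);

    /* after: IRQF_DISABLED has been a no-op since 2.6.35 */
    request_irq(fd_irq, floppy_handler, 0, "floppy", NULL);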
-
- 11 March 2014, 1 commit
-
-
By Bjorn Helgaas

Remove mc_capable() and smt_capable(). Neither is used. Both were added by 5c45bf27 ("sched: mc/smt power savings sched policy"). Uses of both were removed by 8e7fbcbc ("sched: Remove stale power aware scheduling remnants and dysfunctional knobs").

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Link: http://lkml.kernel.org/r/20140304210737.16893.54289.stgit@bhelgaas-glaptop.roam.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 08 March 2014, 1 commit
-
-
By Russell King

With noMMU, CONFIG_PAGE_OFFSET was not being set correctly. As there's no MMU, PAGE_OFFSET should be equal to PHYS_OFFSET in all cases. This commit makes that explicit. Since we do this, we don't need to mess around in asm/memory.h with ifdefs to sort this out, so let's get rid of that; and there's no point offering the "Memory split" option for noMMU, as that's meaningless there.

Fixes: b9b32bf7 ("ARM: use linker magic for vectors and vector stubs")
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
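A sketch of the resulting Kconfig logic: the VMSPLIT defaults are the long-standing ones, and the noMMU line is the fix described above:

    config PAGE_OFFSET
            hex
            default PHYS_OFFSET if !MMU
            default 0x40000000 if VMSPLIT_1G
            default 0x80000000 if VMSPLIT_2G
            default 0xC0000000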
-
- 03 March 2014, 7 commits
-
-
By Marc Zyngier

In order to be able to detect the point where the guest enables its MMU and caches, trap all the VM-related system registers. Once we see the guest enabling both the MMU and the caches, we can go back to a saner mode of operation, which is to leave these registers in complete control of the guest.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
-
By Marc Zyngier

HCR.TVM traps (among other things) accesses to AMAIR0 and AMAIR1. In order to minimise the amount of surprise a guest could generate by trying to access these registers with caches off, add them to the list of registers we switch/handle.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Marc Zyngier

So far, KVM/ARM used a fixed HCR configuration per guest, except for the VI/VF/VA bits used to control interrupts in the absence of the VGIC. With the upcoming need to dynamically reconfigure trapping, it becomes necessary to allow the HCR to be changed on a per-vcpu basis. The fix here is to mimic what KVM/arm64 already does: a per-vcpu HCR field, initialized at setup time.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
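A minimal sketch of the idea; the field and macro names follow the description and are illustrative:

    struct kvm_vcpu_arch {
            /* ... */
            u32 hcr;        /* HYP configuration register, now per vcpu */
            /* ... */
    };

    /* at vcpu setup time */
    vcpu->arch.hcr = HCR_GUEST_MASK;

The world-switch code then loads vcpu->arch.hcr instead of a compile-time constant, so individual trap bits (e.g. HCR.TVM) can be set and cleared at runtime.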
-
By Marc Zyngier

In order for a guest with caches disabled to observe data written to a given page, we need to make sure that page is committed to memory, and not just hanging in the cache (as guest accesses completely bypass the cache until the guest decides to enable it). For this purpose, hook into the coherent_cache_guest_page function and flush the region if the guest SCTLR register doesn't show the MMU and caches as being enabled.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
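The SCTLR test at the heart of this hook, sketched: bit 0 (M) is the MMU enable and bit 2 (C) the data-cache enable, so both must be set before the flush can be skipped. Treat the cp15 indexing as illustrative:

    static bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
    {
            return (vcpu->arch.cp15[c1_SCTLR] & 0b101) == 0b101;
    }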
-
By Marc Zyngier

When the guest runs with caches disabled (as in an early boot sequence, for example), all writes go directly to RAM, bypassing the caches altogether. Once the MMU and caches are enabled, whatever sits in the cache becomes suddenly visible, which isn't what the guest expects. A way to avoid this potential disaster is to invalidate the cache when the MMU is being turned on. For this, we hook into the SCTLR_EL1 trapping code and scan the stage-2 page tables, invalidating the pages/sections that have already been mapped in.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
-
By Marc Zyngier

The use of p*d_addr_end with stage-2 translation is slightly dodgy, as the IPA is 40 bits while all the p*d_addr_end helpers take an unsigned long (arm64 is fine with that, as unsigned long is 64-bit there). The fix is to introduce 64-bit clean versions of the same helpers, and use them in the stage-2 page table code.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
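A sketch of one such 64-bit clean helper, mirroring the standard pgd_addr_end() but computed in u64 so a 40-bit IPA cannot be truncated:

    #define kvm_pgd_addr_end(addr, end)                                 \
    ({      u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;        \
            (__boundary - 1 < (end) - 1) ? __boundary : (end);          \
    })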
-
By Marc Zyngier

In order for a guest with caches off to observe data written to a given page, we need to make sure that page is committed to memory, and not just hanging in the cache (as guest accesses completely bypass the cache until the guest decides to enable it). For this purpose, hook into the coherent_icache_guest_page function and flush the region if the guest SCTLR_EL1 register doesn't show the MMU and caches as being enabled. The function also gets renamed to coherent_cache_guest_page.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 28 February 2014, 2 commits
-
-
By Marek Szyprowski

The 'order' parameter of the IOMMU-aware dma-mapping implementation was introduced mainly as a hack to reduce the size of the bitmap used for tracking IO virtual address space. Since it is now possible to dynamically resize the bitmap, this hack is not needed and can be removed without any impact on the client devices. This way the parameters of arm_iommu_create_mapping() become much easier to understand. The 'size' parameter now means the maximum supported IO address space size. The code will allocate (resize) the bitmap in chunks, ensuring that a single chunk is not larger than a single memory page, to avoid unreliable allocations of size larger than PAGE_SIZE in atomic context.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
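Illustrative use of the simplified interface; the bus pointer and size are example values, not a recommendation:

    struct dma_iommu_mapping *mapping;

    /* base 0, up to 1 GiB of IO virtual address space */
    mapping = arm_iommu_create_mapping(&platform_bus_type, 0, SZ_1G);
    if (IS_ERR(mapping))
            return PTR_ERR(mapping);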
-
By Andreas Herrmann

Instead of using just one bitmap to keep track of IO virtual addresses (handed out for IOMMU use), introduce an array of bitmaps. This allows us to extend existing mappings when running out of iova space in the initial mapping, etc. If there is not enough space in the mapping to service an IO virtual address allocation request, __alloc_iova() tries to extend the mapping - by allocating another bitmap - and makes another allocation attempt using the freshly allocated bitmap. This allows ARM IOMMU drivers to start with a decent initial size when a dma_iommu_mapping is created and still avoid running out of IO virtual addresses for the mapping.

Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
[mszyprow: removed extensions parameter to arm_iommu_create_mapping() function, which will be modified in the next patch anyway, also some debug messages about extending bitmap]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 25 February 2014, 3 commits
-
-
By Ard Biesheuvel

This allocates feature bits 0-4 in HWCAP2 for the crypto and CRC extensions introduced in ARMv8.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
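The five bits line up with the ARMv8 crypto/CRC instruction groups; a sketch of the uapi definitions, following the conventional HWCAP2_* naming:

    #define HWCAP2_AES      (1 << 0)
    #define HWCAP2_PMULL    (1 << 1)
    #define HWCAP2_SHA1     (1 << 2)
    #define HWCAP2_SHA2     (1 << 3)
    #define HWCAP2_CRC32    (1 << 4)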
-
By Ard Biesheuvel

This enables AT_HWCAP2 for ARM. The generic support for this new ELF auxv entry was added in commit 2171364d ("powerpc: Add HWCAP2 aux entry").

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
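The ARM hook-up is small; a sketch, with the variable name assumed by analogy with elf_hwcap:

    extern unsigned int elf_hwcap2;
    #define ELF_HWCAP2      (elf_hwcap2)

With this in place, the ELF loader emits an AT_HWCAP2 auxv entry carrying the second capability mask to userspace.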
-
By Will Deacon

Looking at perf profiles of multi-threaded hackbench runs, a significant performance hit appears to manifest from the cmpxchg loop used to implement the 32-bit atomic_add_unless function. This can be mitigated by writing a direct implementation of __atomic_add_unless which doesn't require iteration outside of the atomic operation.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
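A sketch of the direct ldrex/strex version: the comparison against 'u' happens inside the exclusive sequence, so the function exits as soon as the value matches instead of retrying a full cmpxchg loop:

    static inline int __atomic_add_unless(atomic_t *v, int a, int u)
    {
            int oldval, newval;
            unsigned long tmp;

            smp_mb();
            __asm__ __volatile__ ("@ atomic_add_unless\n"
    "1:     ldrex   %0, [%4]\n"             /* load-exclusive old value */
    "       teq     %0, %5\n"               /* bail out if it equals u  */
    "       beq     2f\n"
    "       add     %1, %0, %6\n"
    "       strex   %2, %1, [%4]\n"         /* try to store new value   */
    "       teq     %2, #0\n"
    "       bne     1b\n"                   /* retry if we lost the reservation */
    "2:"
            : "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter)
            : "r" (&v->counter), "r" (u), "r" (a)
            : "cc");

            if (oldval != u)
                    smp_mb();

            return oldval;
    }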
-