- 25 Oct 2013, 4 commits
-
Committed by Mark Rutland

With the advent of CPU_HOTPLUG, the enable-method property for CPU0 may tell us something useful (i.e. how to hotplug it back on), so we must read it along with the enable-method for all the other CPUs. Even on UP the enable-method may tell us useful information (e.g. whether a core has some mechanism that might be usable for cpuidle), so we should always read it.

This patch factors out the reading of the enable method, and ensures that CPU0's enable method is read regardless of whether the kernel is built with SMP support.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland

The arm64 kernel has an internal holding pen, which is necessary for some systems where we can't bring CPUs online individually and must hold multiple CPUs in a safe area until the kernel is able to handle them. The current SMP infrastructure for arm64 is closely coupled to this holding pen, and alternative boot methods must launch CPUs into the pen, where they sit before they are launched into the kernel proper.

With PSCI (and possibly other future boot methods), we can bring CPUs online individually, and need not perform the secondary_holding_pen dance. Instead, this patch factors the holding pen management code out to the spin-table boot method code, as it is the only boot method requiring the pen.

A new entry point for secondaries, secondary_entry, is added for other boot methods to use, which bypasses the holding pen and its associated overhead when bringing CPUs online. The smp.pen.text section is also removed, as the pen can live in head.text without problem.

The cpu_operations structure is extended with two new functions, cpu_boot and cpu_postboot, for bringing a CPU into the kernel and performing any post-boot cleanup required by a boot method (e.g. resetting the secondary_holding_pen_release to INVALID_HWID). Documentation is added for cpu_operations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
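For orientation, a minimal sketch of the shape such an operations table takes; only cpu_boot and cpu_postboot are named by the commit, and the remaining fields are assumptions for illustration:

    /* Hedged sketch of a per-boot-method operations table, not the exact header. */
    struct device_node;

    struct cpu_operations {
        const char *name;                       /* matched against the DT enable-method */
        int  (*cpu_init)(struct device_node *dn, unsigned int cpu);
        int  (*cpu_prepare)(unsigned int cpu);  /* assumed early-prep hook */
        int  (*cpu_boot)(unsigned int cpu);     /* bring the CPU into the kernel */
        void (*cpu_postboot)(void);             /* e.g. reset secondary_holding_pen_release */
    };

A spin-table implementation would fill in cpu_boot to release a CPU from the pen, while a PSCI implementation would issue a CPU_ON call and could leave cpu_postboot unset.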
-
Committed by Mark Rutland

For hotplug support, we're going to want a place to store operations that do more than bring CPUs online, and it makes sense to group these with our current smp_enable_ops. For cpuidle support, we'll want to group additional functions, and we may want them even for UP kernels.

This patch renames smp_enable_ops to the more general cpu_operations, and pulls the definitions out of smp code such that they can be used in UP kernels. While we're at it, fix up instances of the cpu parameter to be an unsigned int, drop the init markings and rename the *_cpu functions to cpu_* to reduce future churn when cpu_operations is extended.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland

The functions in psci.c are only used from smp_psci.c, and smp_psci cannot function without psci.c. Additionally, psci.c is built when !SMP, where it's expected that cpu_suspend may be useful.

This patch unifies the two files, removing pointless duplication and paving the way for PSCI support in UP systems.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 24 Oct 2013, 3 commits
-
Committed by Will Deacon

This patch introduces cmpxchg64_relaxed for arm64 using the existing cmpxchg_local macro, which performs a cmpxchg operation (up to 64 bits) without barrier semantics.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
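To illustrate what "without barrier semantics" buys a caller, here is a generic relaxed 64-bit compare-and-swap retry loop in portable C11; this is a usage sketch, not the arm64 implementation:

    #include <stdatomic.h>
    #include <stdint.h>

    /* Relaxed CAS loop: fine when ordering is enforced elsewhere (or unneeded). */
    static void counter_add(_Atomic uint64_t *ctr, uint64_t delta)
    {
        uint64_t old = atomic_load_explicit(ctr, memory_order_relaxed);
        while (!atomic_compare_exchange_weak_explicit(ctr, &old, old + delta,
                                                      memory_order_relaxed,
                                                      memory_order_relaxed))
            ;   /* 'old' is refreshed on failure; just retry */
    }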
-
Committed by Will Deacon

Our spinlocks are only 32-bit (2x16-bit tickets) and our cmpxchg can deal with 8 bytes (as one would hope!). This patch wires up the cmpxchg-based lockless lockref implementation for arm64.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon

This patch introduces a ticket lock implementation for arm64, along the same lines as the implementation for arch/arm/.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
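As a rough picture of the ticket algorithm (the real arm64 code is assembly using exclusive accesses and wfe/sev, so treat this as a conceptual sketch):

    #include <stdatomic.h>
    #include <stdint.h>

    /* 2x16-bit ticket lock, matching the 32-bit lock word mentioned above. */
    struct ticket_lock {
        _Atomic uint16_t next;    /* ticket dispenser */
        _Atomic uint16_t owner;   /* ticket currently being served */
    };

    static void lock(struct ticket_lock *l)
    {
        uint16_t me = atomic_fetch_add_explicit(&l->next, 1, memory_order_relaxed);
        while (atomic_load_explicit(&l->owner, memory_order_acquire) != me)
            ;   /* spin; the arm64 version waits in wfe instead of busy-looping */
    }

    static void unlock(struct ticket_lock *l)
    {
        atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
    }

Tickets hand out FIFO ordering, which is the fairness property that motivated the arch/arm implementation this one mirrors.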
-
- 23 Oct 2013, 1 commit
-
Committed by AKASHI Takahiro

In ftrace_syscall_enter():

    syscall_get_arguments(..., 0, n, ...)
        if (i == 0) { <handle orig_x0> ...; n--; }
        memcpy(..., n * sizeof(args[0]));

If the number of arguments (n) is zero and the argument index (i) is also zero in syscall_get_arguments(), none of the arguments should be copied by memcpy(). Otherwise 'n--' underflows to a large positive number and an unexpected amount of data is copied. Tracing system calls which take no arguments, say sync(void), may hit this case and eventually corrupt the system.

This patch fixes the issue both in syscall_get_arguments() and syscall_set_arguments().

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
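The essence of the fix is bailing out before the decrement can underflow an unsigned count; a hedged sketch of the corrected pattern (not the exact kernel diff):

    #include <string.h>

    static void get_args_sketch(unsigned int i, unsigned int n,
                                unsigned long *dst, const unsigned long *regs,
                                unsigned long orig_x0)
    {
        if (n == 0)
            return;            /* nothing to copy: the guard the bug was missing */
        if (i == 0) {
            *dst++ = orig_x0;  /* argument 0 lives in orig_x0 */
            i++;
            n--;
            if (n == 0)
                return;        /* 'n--' above can no longer wrap to ~0 */
        }
        memcpy(dst, &regs[i], n * sizeof(dst[0]));
    }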
-
- 25 Sep 2013, 1 commit
-
Committed by AKASHI Takahiro

get_user() is defined as a function macro on arm64, and trace_get_user() calls it as follows:

    get_user(ch, ptr++);

Since the second parameter occurs twice in the definition, 'ptr++' is unexpectedly evaluated twice and trace_get_user() generates a bogus string from the user-provided one. As a result, some ftrace sysfs operations, like "echo FUNCNAME > set_ftrace_filter", hit this case and eventually fail.

This patch fixes the issue both in get_user() and put_user().

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
[catalin.marinas@arm.com: added __user type annotation and s/optr/__p/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
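The standard remedy is to evaluate the pointer argument exactly once inside the macro; a simplified sketch of the pattern (the real get_user() also performs access checking and exception-table fixups):

    /* Evaluate 'ptr' once, so get_user(ch, ptr++) advances ptr exactly once. */
    #define get_user_sketch(x, ptr)                             \
    ({                                                          \
        __typeof__(ptr) __p = (ptr);   /* single evaluation */  \
        (x) = *__p;                                             \
        0;                             /* 0 on success, as in the real API */ \
    })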
-
- 20 Sep 2013, 1 commit
-
Committed by Steve Capper

Under arm64, elf_hwcap is a 32-bit quantity, but it is stored in a 64-bit auxiliary ELF field and glibc reads hwcap as 64-bit. This patch widens elf_hwcap to be 64-bit.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 03 Sep 2013, 1 commit
-
Committed by Will Deacon

TCR.TBI0 can be used to cause hardware address translation to ignore the top byte of userspace virtual addresses. Whilst not especially useful in standard C programs, this can be used by JITs to 'tag' pointers with various pieces of metadata.

This patch enables this bit for AArch64 Linux, and adds a new file to Documentation/arm64/ which describes some potential caveats when using tagged virtual addresses.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
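For illustration, tagging and stripping the top byte of a userspace pointer as a JIT might; the only fact taken from the patch is that hardware ignores bits 63:56 on translation, and the helpers themselves are hypothetical:

    #include <stdint.h>

    /* With TCR.TBI0 set, bits 63:56 of a userspace VA are ignored by the MMU,
     * leaving the top byte free for metadata. */
    static inline void *tag_ptr(void *p, uint8_t tag)
    {
        uintptr_t v = (uintptr_t)p & ~((uintptr_t)0xff << 56);
        return (void *)(v | ((uintptr_t)tag << 56));
    }

    static inline uint8_t ptr_tag(const void *p)
    {
        return (uint8_t)((uintptr_t)p >> 56);
    }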
-
- 02 Sep 2013, 1 commit
-
Committed by Dan Aloni

Signed-off-by: Dan Aloni <alonid@stratoscale.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 22 Aug 2013, 1 commit
-
Committed by Chen Gang

We need to include "asm/types.h", just as arm does, or compilation fails. The related error:

    In file included from arch/arm64/include/asm/page.h:37:0,
                     from drivers/staging/lustre/include/linux/lnet/linux/lib-lnet.h:42,
                     from drivers/staging/lustre/include/linux/lnet/lib-lnet.h:44,
                     from drivers/staging/lustre/lnet/lnet/api-ni.c:38:
    arch/arm64/include/asm/pgtable-2level-types.h:19:1: error: unknown type name 'u64'
    arch/arm64/include/asm/pgtable-2level-types.h:20:1: error: unknown type name 'u64'

Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 Aug 2013, 1 commit
-
Committed by Ard Biesheuvel

Add <asm/neon.h> containing kernel_neon_begin/kernel_neon_end function declarations and corresponding definitions in fpsimd.c. These are needed to wrap uses of NEON in kernel mode. The names are identical to the ones used in arm/, so code using intrinsics or vectorized by GCC can be shared between arm and arm64.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
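Usage follows the arm convention: bracket any kernel-mode NEON work with the begin/end pair. A hedged sketch, where neon_xor_blocks() is a hypothetical NEON-accelerated helper:

    #include <linux/types.h>
    #include <asm/neon.h>

    void neon_xor_blocks(u8 *dst, const u8 *src, size_t len);  /* hypothetical */

    static void xor_step(u8 *dst, const u8 *src, size_t len)
    {
        kernel_neon_begin();            /* preserve user FP/SIMD state */
        neon_xor_blocks(dst, src, len); /* vectorized body runs here */
        kernel_neon_end();              /* restore state; don't sleep in between */
    }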
-
- 16 Aug 2013, 1 commit
-
Committed by Linus Torvalds

Ben Tebulin reported:

"Since v3.7.2 on two independent machines a very specific Git repository fails in 9/10 cases on git-fsck due to an SHA1/memory failures. This only occurs on a very specific repository and can be reproduced stably on two independent laptops. Git mailing list ran out of ideas and for me this looks like some very exotic kernel issue"

and bisected the failure to the backport of commit 53a59fc6 ("mm: limit mmu_gather batching to fix soft lockups on !CONFIG_PREEMPT").

That commit itself is not actually buggy, but what it does is to make it much more likely to hit the partial TLB invalidation case, since it introduces a new case in tlb_next_batch() that previously only ever happened when running out of memory.

The real bug is that the TLB gather virtual memory range setup is subtly buggered. It was introduced in commit 597e1c35 ("mm/mmu_gather: enable tlb flush range in generic mmu_gather"), and the range handling was already fixed at least once in commit e6c495a9 ("mm: fix the TLB range flushed when __tlb_remove_page() runs out of slots"), but that fix was not complete.

The problem with the TLB gather virtual address range is that it isn't set up by the initial tlb_gather_mmu() initialization (which didn't get the TLB range information), but it is set up ad-hoc later by the functions that actually flush the TLB. And so any such case that forgot to update the TLB range entries would potentially miss TLB invalidates.

Rather than try to figure out exactly which particular ad-hoc range setup was missing (I personally suspect it's the hugetlb case in zap_huge_pmd(), which didn't have the same logic as zap_pte_range() did), this patch just gets rid of the problem at the source: make the TLB range information available to tlb_gather_mmu(), and initialize it when initializing all the other tlb gather fields.

This makes the patch larger, but conceptually much simpler. And the end result is much more understandable; even if you want to play games with partial ranges when invalidating the TLB contents in chunks, now the range information is always there, and anybody who doesn't want to bother with it won't introduce subtle bugs.

Ben verified that this fixes his problem.

Reported-bisected-and-tested-by: Ben Tebulin <tebulin@googlemail.com>
Build-testing-by: Stephen Rothwell <sfr@canb.auug.org.au>
Build-testing-by: Richard Weinberger <richard.weinberger@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
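The shape of the change, sketched: the gather structure learns its range at initialization instead of relying on every flush path to fill it in later. Names below are simplifications for illustration, not the actual mm diff:

    /* Simplified sketch of range-aware mmu_gather setup. */
    struct mmu_gather_sketch {
        unsigned long start, end;   /* VA range covered by this gather */
        /* ... batching state elided ... */
    };

    static void tlb_gather_mmu_sketch(struct mmu_gather_sketch *tlb,
                                      unsigned long start, unsigned long end)
    {
        /* The range is known from the outset, so a flush path that forgets
         * to update it can no longer silently skip invalidations. */
        tlb->start = start;
        tlb->end = end;
    }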
-
- 09 Aug 2013, 2 commits
-
Committed by Chen Gang

'target' will be set to '-1' in kvm_arch_vcpu_init(), and kvm_vcpu_initialized() needs to check whether 'target' is less than zero. So 'target' needs to be defined as an 'int' instead of a 'u32', just as ARM has done. The related warning:

    arch/arm64/kvm/../../../arch/arm/kvm/arm.c:497:2: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]

Signed-off-by: Chen Gang <gang.chen@asianux.com>
[Marc: reformatted the Subject line to fit the series]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
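The warning stems from a comparison that is vacuously true for unsigned types; a minimal demonstration of the pitfall and the fix:

    /* With 'u32 target', the >= 0 check below is always true, so the
     * "uninitialized" sentinel of -1 could never be detected. */
    struct vcpu_sketch {
        int target;   /* was u32; -1 means "not yet initialized" */
    };

    static int vcpu_initialized(const struct vcpu_sketch *v)
    {
        return v->target >= 0;   /* meaningful only for a signed type */
    }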
-
Committed by Marc Zyngier

Not saving PAR_EL1 is an unfortunate oversight. If the guest performs an AT* operation and gets scheduled out before reading the result of the translation from PAR_EL1, it could become corrupted by another guest or the host.

Saving this register is made slightly more complicated as KVM also uses it on the permission fault handling path, leading to an ugly "stash and restore" sequence. Fortunately, this is already a slow path so we don't really care. Also, Linux doesn't do any AT* operation, so Linux guests are not impacted by this bug.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 01 Aug 2013, 2 commits
-
Committed by Stephen Boyd

We're going to introduce support to read and write the memory-mapped timer registers in the next patch, so push the cp15 read/write functions one level deeper. This simplifies the next patch and makes it clearer what's going on.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
-
Committed by Stephen Boyd

Using an enum for the register we wish to access allows newer compilers to determine if we've forgotten a case in our switch statement. This allows us to remove the BUILD_BUG() instances in the arm64 port, avoiding problems where optimizations may not happen.

To try and force better code generation we're currently marking the accessor functions as inline, but newer compilers can ignore the inline keyword unless it's marked __always_inline. Luckily on arm and arm64 inline is __always_inline, but let's make everything __always_inline to be explicit.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
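Roughly the pattern being described: a switch over an enum lets -Wswitch flag a missing case at compile time, and __always_inline keeps the switch constant-foldable at each call site. This is a kernel-context sketch with illustrative register names:

    #include <linux/types.h>
    #include <linux/compiler.h>

    enum timer_reg_sketch {
        TIMER_REG_CTRL,
        TIMER_REG_TVAL,
    };

    static __always_inline void timer_reg_write(enum timer_reg_sketch reg, u32 val)
    {
        switch (reg) {
        case TIMER_REG_CTRL:
            /* cp15 or MMIO write of the control register goes here */
            break;
        case TIMER_REG_TVAL:
            /* cp15 or MMIO write of the timer value goes here */
            break;
        }   /* no default: adding an enumerator without a case now warns */
    }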
-
- 26 Jul 2013, 1 commit
-
Committed by Feng Kan

Written by Catalin Marinas, tested by APM on the Storm platform. This is needed because of the failures encountered when running the SPECweb benchmark test.

Signed-off-by: Feng Kan <fkan@apm.com>
Acked-by: Kumar Sankaran <ksankaran@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 23 Jul 2013, 1 commit
-
Committed by Mark Rutland

Secondary CPUs write to __boot_cpu_mode with caches disabled, and thus a cached value of __boot_cpu_mode may be incoherent with that in memory. This could lead to a failure to detect mismatched boot modes.

This patch adds flushing to ensure that writes by secondaries to __boot_cpu_mode are made visible before we test against it.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 19 Jul 2013, 2 commits
-
Committed by Marc Zyngier

Commit 7b6d864b ("reboot: arm: change reboot_mode to use enum reboot_mode") changed the way reboot is handled on arm, which has a direct impact on arm64 as we share the reset driver on the VE platform. The obvious fix is to move arm64 to use the same infrastructure.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: removed reboot_mode = REBOOT_HARD default setting]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Chen Gang

If 'COMPAT' is not defined, aarch32_break_handler() fails to compile; it works independently of 'COMPAT', so remove the dummy definition. The related error:

    arch/arm64/kernel/debug-monitors.c:249:5: error: redefinition of 'aarch32_break_handler'
    In file included from arch/arm64/kernel/debug-monitors.c:29:0:
    /root/linux-next/arch/arm64/include/asm/debug-monitors.h:89:12: note: previous definition of 'aarch32_break_handler' was here

Signed-off-by: Chen Gang <gang.chen@asianux.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 15 Jul 2013, 1 commit
-
Committed by Paul Gortmaker

The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.

Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from arch-specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless.

This removes all the arch/arm64 uses of the __cpuinit macros from all C files. Currently arm64 does not have any __CPUINIT used in assembly files.

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 29 Jun 2013, 1 commit
-
Committed by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 21 Jun 2013, 1 commit
-
Committed by Vinayak Kale

This patch adds defines for the APM CPU implementer ID and APM CPU part numbers in asm/cputype.h.

Signed-off-by: Kumar Sankaran <ksankaran@apm.com>
Signed-off-by: Loc Ho <lho@apm.com>
Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 14 Jun 2013, 4 commits
-
Committed by Steve Capper

Bring Transparent HugePage support to arm64. The size of a transparent huge page depends on the normal page size. A transparent huge page is always represented as a pmd.

If PAGE_SIZE is 4KB, THPs are 2MB. If PAGE_SIZE is 64KB, THPs are 512MB.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Steve Capper

Add huge page support to ARM64; different huge page sizes are supported depending on the size of normal pages.

PAGE_SIZE is 4KB:
    2MB - (pmds) these can be allocated at any time.
    1024MB - (puds) usually allocated on bootup with the command line, with something like: hugepagesz=1G hugepages=6

PAGE_SIZE is 64KB:
    512MB - (pmds) usually allocated on bootup via the command line.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Steve Capper

Under ARM64, PTEs can be broadly categorised as follows:

- Present and valid: Bit #0 is set. The PTE is valid and memory access to the region may fault.
- Present and invalid: Bit #0 is clear and bit #1 is set. This represents present memory with PROT_NONE protection. The PTE is an invalid entry, and the user fault handler will raise a SIGSEGV.
- Not present (file or swap): Bits #0 and #1 are clear. The memory represented has been paged out. The PTE is an invalid entry, and the fault handler will try to re-populate the memory where necessary.

Huge PTEs are block descriptors that have bit #1 clear. If we wish to represent PROT_NONE huge PTEs we then run into a problem, as there is no way to distinguish between regular and huge PTEs if we set bit #1.

To resolve this ambiguity this patch moves PTE_PROT_NONE from bit #1 to bit #2 and moves PTE_FILE from bit #2 to bit #3. The number of swap/file bits is reduced by 1 as a consequence, leaving 60 bits for file and swap entries.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
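The resulting software-bit layout, sketched as macros; the values follow the description above, so treat this as illustrative rather than the exact header contents:

    #define PTE_VALID      (1UL << 0)  /* present and valid */
    #define PTE_PROT_NONE  (1UL << 2)  /* was bit #1: present, no access allowed */
    #define PTE_FILE       (1UL << 3)  /* was bit #2: not present, file-backed */
    /* Huge-page block descriptors keep bit #1 clear, so bit #1 can no longer
     * double as the PROT_NONE marker without ambiguity. */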
-
Committed by Steve Capper

If we consider the following code sequence:

    my_pte = pte_modify(entry, myprot);
    x = pte_write(my_pte);
    y = pte_exec(my_pte);

If myprot comes from a PROT_NONE page, then x and y will both be true, which is undesirable behaviour. This patch sets the no-execute and read-only bits for PAGE_NONE such that the code above will return false for both x and y.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 12 Jun 2013, 9 commits
-
Committed by Marc Zyngier

Wire up the init of a 32bit vcpu by allowing 32bit modes in pstate, and providing sensible defaults out of reset state. This feature is of course conditioned on the presence of 32bit capability on the physical CPU, and is checked via the KVM_CAP_ARM_EL1_32BIT capability.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier

Provide the necessary infrastructure to trap coprocessor accesses that occur when running 32bit guests. Also wire up SMC and HVC trapping in 32bit mode while we're at it.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier

As conditional instructions can trap on AArch32, add the thinnest possible emulation layer to keep 32bit guests happy.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier

Allow access to the 32bit register file through the usual API.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier

Define the 32bit specific registers (SPSRs, cp15...). Most CPU registers are directly mapped to a 64bit register (r0->x0...). Only the SPSRs have separate registers. cp15 registers are also mapped into their 64bit counterparts in most cases.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier

Wire the PSCI backend into the exit handling code.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Will Deacon

The software breakpoint handlers are hooked in directly from ptrace, which makes it difficult to add additional handlers for things like kprobes and kgdb. This patch moves the handling code into debug-monitors.c, where we can dispatch to different debug subsystems more easily.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon

When using an IOMMU for device mappings, it is necessary to keep a pointer between the device and the IOMMU to which it is attached in order to obtain the correct IOMMU when attaching the device to a domain. This patch adds an iommu pointer to the dev_archdata structure, in a similar manner to other architectures (ARM, PowerPC, x86, ...).

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
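The change amounts to one private field in the per-device arch data; a sketch of the shape (the CONFIG guard is an assumption based on how sibling architectures do it):

    /* Sketch of the dev_archdata extension described above. */
    struct dev_archdata {
    #ifdef CONFIG_IOMMU_API
        void *iommu;    /* private IOMMU data, set when the device is attached */
    #endif
    };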
-
Committed by Will Deacon

pte_index is a useful helper outside of arch/arm64, for things like the ARM SMMU driver, so rename __pte_index to pte_index to be consistent with both arch/arm/ and also the definitions of pmd_index and pgd_index.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
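For reference, the helper is a one-line index computation; a sketch consistent with its arm and arm64 siblings of the era (PAGE_SHIFT and PTRS_PER_PTE come from the kernel headers):

    /* Index of the PTE for 'addr' within its page table. */
    #define pte_index(addr)  (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))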
-
- 11 Jun 2013, 1 commit
-
Committed by Chen Gang

Under arm64, we will calibrate the delay loop statically using a known timer frequency, so delete read_current_timer(), or it will cause a compile failure with allmodconfig. The related errors:

    ERROR: "read_current_timer" [lib/rbtree_test.ko] undefined!
    ERROR: "read_current_timer" [lib/interval_tree_test.ko] undefined!
    ERROR: "read_current_timer" [fs/ext4/ext4.ko] undefined!
    ERROR: "read_current_timer" [crypto/tcrypt.ko] undefined!

Signed-off-by: Chen Gang <gang.chen@asianux.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-