- 09 Jan 2018 (1 commit)

By Marc Zyngier
Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 23 Dec 2017 (2 commits)

By Kristina Martsenko
Currently, when using VA_BITS < 48, if the ID map text happens to be placed in physical memory above VA_BITS, we increase the VA size (up to 48) and create a new table level, in order to map in the ID map text. This is okay because the system always supports 48 bits of VA.

This patch extends the code such that if the system supports 52 bits of VA, and the ID map text is placed that high up, then we increase the VA size accordingly, up to 52.

One difference from the current implementation is that so far the condition of VA_BITS < 48 has meant that the top level table is always "full", with the maximum number of entries, and an extra table level is always needed. Now, when VA_BITS = 48 (and using 64k pages), the top level table is not full, and we simply need to increase the number of entries in it, instead of creating a new table level.

Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[catalin.marinas@arm.com: reduce arguments to __create_hyp_mappings()]
[catalin.marinas@arm.com: reworked/renamed __cpu_uses_extended_idmap_level()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Kristina Martsenko
The top 4 bits of a 52-bit physical address are positioned at bits 2..5 in the TTBR registers. Introduce a couple of macros to move the bits there, and change all TTBR writers to use them.

Leave TTBR0 PAN code unchanged, to avoid complicating it. A system with 52-bit PA will have PAN anyway (because it's ARMv8.1 or later), and a system without 52-bit PA can only use up to 48-bit PAs. A later patch in this series will add a kconfig dependency to ensure PAN is configured.

In addition, when using 52-bit PA there is a special alignment requirement on the top-level table. We don't currently have any VA_BITS configuration that would violate the requirement, but one could be added in the future, so add a compile-time BUG_ON to check for it.

Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[catalin.marinas@arm.com: added TTBR_BADD_MASK_52 comment]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
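
As a rough illustration of the bit shuffling described above (a sketch, not necessarily the exact macros from the patch): bits [51:48] of the physical address have to land in bits [5:2] of the TTBR, which a single shift-and-OR achieves because the low bits stay in place:

    /* Sketch only; names follow the commit text, details may differ. */
    #define TTBR_BADDR_MASK_52  (((1ULL << 46) - 1) << 2)   /* bits [47:2] */

    /* Shifting PA bit 48 right by 46 places puts it at bit 2 (and so on
     * for bits 49..51); the mask then drops the original top bits. */
    #define phys_to_ttbr(addr)  (((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52)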

- 30 Nov 2017 (1 commit)

By Dan Williams
In response to compile breakage introduced by a series that added the pud_write helper to x86, Stephen notes:

  did you consider using the other paradigm:

  In arch include files:
  #define pud_write pud_write
  static inline int pud_write(pud_t pud)
  .....

  Then in include/asm-generic/pgtable.h:

  #ifndef pud_write
  static inline int pud_write(pud_t pud)
  {
      ....
  }
  #endif

  If you had, then the powerpc code would have worked ... ;-) and many
  of the other interfaces in include/asm-generic/pgtable.h are
  protected that way ...

Given that some architectures already define pmd_write() as a macro, it's a net reduction to drop the definition of __HAVE_ARCH_PMD_WRITE.

Link: http://lkml.kernel.org/r/151129126721.37405.13339850900081557813.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Suggested-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Oliver OHalloran <oliveroh@au1.ibm.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
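
The paradigm Stephen describes, reduced to a minimal sketch (the page-table bit test is illustrative, not any particular architecture's): the arch header defines both the function and a same-named macro, and the generic header only supplies a fallback when the macro is absent:

    /* In an arch header (sketch): */
    #define pud_write pud_write
    static inline int pud_write(pud_t pud)
    {
            return !!(pud_val(pud) & _PAGE_WRITE);  /* illustrative bit test */
    }

    /* In include/asm-generic/pgtable.h (sketch of the fallback): */
    #ifndef pud_write
    static inline int pud_write(pud_t pud)
    {
            BUG();          /* the arch was expected to provide this */
            return 0;
    }
    #endif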

- 29 Nov 2017 (2 commits)

By Alex Bennée
After emulating instructions we may want to return to user-space to handle single-step debugging. Introduce a helper function which, if single-step is enabled, sets up the run structure for the return and returns true.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
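
A hedged sketch of what such a helper can look like (field names approximate the KVM userspace ABI rather than quoting the patch verbatim):

    static bool kvm_arm_handle_step_debug(struct kvm_vcpu *vcpu,
                                          struct kvm_run *run)
    {
            if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
                    /* Tell userspace this was a single-step debug exit. */
                    run->exit_reason = KVM_EXIT_DEBUG;
                    run->debug.arch.hsr = ESR_ELx_EC_SOFTSTP_LOW << ESR_ELx_EC_SHIFT;
                    return true;
            }
            return false;
    }

Callers can then check the return value after emulating an instruction and bail out to userspace when it is true.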

By Marc Zyngier
VTTBR_BADDR_MASK is used to sanity-check the size and alignment of the VTTBR address. It seems to currently be off by one, thereby only allowing up to 39-bit addresses (instead of 40-bit) and also insufficiently checking the alignment. This patch fixes it.

This patch is the 32-bit counterpart of Kristina's arm64 fix, and she deserves the actual kudos for pinpointing that one.

Fixes: f7ed45be ("KVM: ARM: World-switch implementation")
Cc: <stable@vger.kernel.org> # 3.9
Reported-by: Kristina Martsenko <kristina.martsenko@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
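
The shape of the off-by-one, sketched (VTTBR_X is the arch-defined alignment exponent; the exact macro in the patch may differ):

    /* Before (sketch): shifting by VTTBR_X - 1 both halves the usable
     * address range (39 bits instead of 40) and lets an insufficiently
     * aligned low bit slip through the check. */
    #define VTTBR_BADDR_MASK  (((1ULL << (40 - VTTBR_X)) - 1) << (VTTBR_X - 1))

    /* After (sketch): full 40-bit range, aligned to 1 << VTTBR_X. */
    #define VTTBR_BADDR_MASK  (((1ULL << (40 - VTTBR_X)) - 1) << VTTBR_X)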

- 26 Nov 2017 (1 commit)

By Russell King
Detect if we are returning to usermode via the normal kernel exit paths but the saved PSR value indicates that we are in kernel mode. This could occur due to corrupted stack state, which has been observed with "ftracetest". This ensures that we catch the problem case before we get to user code.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

- 21 Nov 2017 (1 commit)

By Russell King
Ensure that get_user_pages_fast() is not able to access memory which has been mapped with PROT_NONE.

Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

- 16 Nov 2017 (2 commits)

By Sasha Levin
Convert all allocations that used a NOTRACK flag to stop using it.

Link: http://lkml.kernel.org/r/20171007030159.22241-3-alexander.levin@verizon.com
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tim Hansen <devtimhansen@gmail.com>
Cc: Vegard Nossum <vegardno@ifi.uio.no>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

By Sasha Levin
Patch series "kmemcheck: kill kmemcheck", v2.

As discussed at LSF/MM, kill kmemcheck. KASan is a replacement that is able to work without the limitations of kmemcheck (single CPU, slow). KASan is already upstream. We are also not aware of any users of kmemcheck (or users who don't consider KASan a suitable replacement).

The only objection was that since KASAN wasn't supported by all GCC versions provided by distros at that time, we should hold off for 2 years and try again. Now that 2 years have passed, and all distros provide gcc that supports KASAN, kill kmemcheck again for the very same reasons.

This patch (of 4):

Remove kmemcheck annotations, and calls to kmemcheck, from the kernel.

[alexander.levin@verizon.com: correctly remove kmemcheck call from dma_map_sg_attrs]
Link: http://lkml.kernel.org/r/20171012192151.26531-1-alexander.levin@verizon.com
Link: http://lkml.kernel.org/r/20171007030159.22241-2-alexander.levin@verizon.com
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tim Hansen <devtimhansen@gmail.com>
Cc: Vegard Nossum <vegardno@ifi.uio.no>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 06 Nov 2017 (3 commits)

By Dongjiu Geng
kvm_vcpu_dabt_isextabt() tries to match a full fault syndrome, but calls kvm_vcpu_trap_get_fault_type() which only returns the fault class, thus reducing the scope of the check. This doesn't cause any observable bug yet, as we end up matching a closely related syndrome for which we return the same value. Using kvm_vcpu_trap_get_fault() instead fixes it for good.

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>

By Marc Zyngier
Both arm and arm64 implementations are capable of injecting faults, and yet have completely divergent implementations, leading to different bugs and reduced maintainability. Let's elect the arm64 version as the canonical one and move it into aarch32.c, which is common to both architectures.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>

By Christoffer Dall
As we are about to be lazy with saving and restoring the timer registers, we prepare by moving all possible timer configuration logic out of the hyp code. All virtual timer registers can be programmed from EL1, and since the arch timer is always a level-triggered interrupt we can safely do this with interrupts disabled in the host kernel on the way to the guest, without taking vtimer interrupts in the host kernel (yet).

The downside is that the cntvoff register can only be programmed from hyp mode, so we jump into hyp mode and back to program it. This is also safe, because the host kernel doesn't use the virtual timer in the KVM code. It may add a small performance penalty, but only until following commits where we move this operation to vcpu load/put.

Signed-off-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>

- 03 Nov 2017 (1 commit)

By Dave Martin
Until KVM has full SVE support, guests must not be allowed to execute SVE instructions. This patch enables the necessary traps, and also ensures that the traps are disabled again on exit from the guest so that the host can still use SVE if it wants to.

On guest exit, high bits of the SVE Zn registers may have been clobbered as a side-effect of the execution of FPSIMD instructions in the guest. The existing KVM host FPSIMD restore code is not sufficient to restore these bits, so this patch explicitly marks the CPU as not containing cached vector state for any task, thus forcing a reload on the next return to userspace. This is an interim measure, in advance of adding full SVE awareness to KVM.

This marking of cached vector state in the CPU as invalid is done using __this_cpu_write(fpsimd_last_state, NULL) in fpsimd.c. Due to the repeated use of this rather obscure operation, it makes sense to factor it out as a separate helper with a clearer name. This patch factors it out as fpsimd_flush_cpu_state(), and ports all callers to use it.

As a side effect of this refactoring, a this_cpu_write() in fpsimd_cpu_pm_notifier() is changed to __this_cpu_write(). This should be fine, since cpu_pm_enter() is supposed to be called only with interrupts disabled.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
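
The factored-out helper is, in essence, just a named wrapper around the operation quoted above (a sketch; the per-CPU variable is the one named in the text):

    /*
     * Invalidate any cached SVE/FPSIMD state for this CPU, forcing a
     * reload from thread_struct on the next return to userspace.
     */
    static inline void fpsimd_flush_cpu_state(void)
    {
            __this_cpu_write(fpsimd_last_state, NULL);
    }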

- 02 Nov 2017 (1 commit)

By Greg Kroah-Hartman
Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license. By default all files without license information are under the default license of the kernel, which is GPL version 2.

Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boilerplate text.

This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne.

How this work was done:

Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:
 - file had no licensing information in it,
 - file was a */uapi/* one with no licensing information in it,
 - file was a */uapi/* one with existing licensing information.

Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords.

The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side-by-side results from the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few 1000 files.

The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file-by-file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.

Criteria used to select files for SPDX license identifier tagging was:
 - Files considered eligible had to be source code files.
 - Make and config files were included as candidates if they contained >5 lines of source.
 - File already had some variant of a license header in it (even if <5 lines).

All documentation files were explicitly excluded.

The following heuristics were used to determine which SPDX license identifiers to apply.

 - when both scanners couldn't find any license traces, the file was considered to have no license information in it, and the top level COPYING file license applied. For non */uapi/* files that summary was:

     SPDX license identifier                              # files
     -----------------------------------------------------|-------
     GPL-2.0                                                11139

   and resulted in the first patch in this series.

   If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note", otherwise it was "GPL-2.0". Results of that was:

     SPDX license identifier                              # files
     -----------------------------------------------------|-------
     GPL-2.0 WITH Linux-syscall-note                          930

   and resulted in the second patch in this series.

 - if a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or had no licensing in it (per prior point). Results summary:

     SPDX license identifier                              # files
     -----------------------------------------------------|------
     GPL-2.0 WITH Linux-syscall-note                          270
     GPL-2.0+ WITH Linux-syscall-note                         169
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
     LGPL-2.1+ WITH Linux-syscall-note                         15
     GPL-1.0+ WITH Linux-syscall-note                          14
     ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
     LGPL-2.0+ WITH Linux-syscall-note                          4
     LGPL-2.1 WITH Linux-syscall-note                           3
     ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
     ((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1

   and that resulted in the third patch in this series.

 - when the two scanners agreed on the detected license(s), that became the concluded license(s).
 - when there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred.
 - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).
 - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.
 - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.

In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there were new insights. The Windriver scanner is based on an older version of FOSSology in part, so they are related.

Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with the SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files.

In the initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.

Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with:
 - a full scancode scan run, collecting the matched texts, detected license ids and scores
 - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct
 - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct

This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified. These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types). Finally Greg ran the script using the .csv files to generate the patches.

Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

- 29 Oct 2017 (2 commits)

By Martin Blumenstingl
On Amlogic Meson8 / Meson8m2 (both Cortex-A9) and Meson8b (Cortex-A5) the CPU hotplug code needs to wait until the SCU status of the CPU that is being taken offline is SCU_PM_POWEROFF. Provide a utility function (which can be invoked for example from .cpu_kill()) which allows reading the SCU status of a CPU. While here, replace the magic number 0x3 with a preprocessor macro (SCU_CPU_STATUS_MASK) so we don't have to duplicate this magic number in the new function.

Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Kevin Hilman <khilman@baylibre.com>
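
A hedged sketch of such a utility (the status register offset and one-byte-per-CPU layout are assumed from the standard Cortex-A9 SCU, not quoted from the patch):

    #define SCU_CPU_STATUS          0x08
    #define SCU_CPU_STATUS_MASK     0x03    /* replaces the bare 0x3 */

    /* Read the SCU power status of one CPU; a .cpu_kill() implementation
     * can poll this until it reads SCU_PM_POWEROFF. */
    static int scu_get_cpu_power_mode(void __iomem *scu_base, unsigned int cpu)
    {
            unsigned int val = readb_relaxed(scu_base + SCU_CPU_STATUS + cpu);

            return val & SCU_CPU_STATUS_MASK;
    }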

By Martin Blumenstingl
To boot the secondary CPUs on the Amlogic Meson8/Meson8m2 (Cortex-A9) and Meson8b (Cortex-A5) SoCs we have to enable SCU mode SCU_PM_NORMAL, otherwise the secondary cores will not start. This patch adds a scu_cpu_power_enable() function which can be used to enable SCU_PM_NORMAL for a specific (logical) CPU. An internal helper function is also created, to avoid code duplication with scu_power_mode().

Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Kevin Hilman <khilman@baylibre.com>

- 25 Oct 2017 (1 commit)

By Mark Rutland
locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE()

Please do not apply this to mainline directly, instead please re-run the coccinelle script shown below and apply its output.

For several reasons, it is desirable to use {READ,WRITE}_ONCE() in preference to ACCESS_ONCE(), and new code is expected to use one of the former. So far, there's been no reason to change most existing uses of ACCESS_ONCE(), as these aren't harmful, and changing them results in churn.

However, for some features, the read/write distinction is critical to correct operation. To distinguish these cases, separate read/write accessors must be used. This patch migrates (most) remaining ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following coccinelle script:

----
// Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
// WRITE_ONCE()

// $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch

virtual patch

@ depends on patch @
expression E1, E2;
@@

- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)

@ depends on patch @
expression E;
@@

- ACCESS_ONCE(E)
+ READ_ONCE(E)
----

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: davem@davemloft.net
Cc: linux-arch@vger.kernel.org
Cc: mpe@ellerman.id.au
Cc: shuah@kernel.org
Cc: snitzer@redhat.com
Cc: thor.thayer@linux.intel.com
Cc: tj@kernel.org
Cc: viro@zeniv.linux.org.uk
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

- 24 Oct 2017 (2 commits)

By Will Deacon
linux/compiler.h is included indirectly by linux/types.h via uapi/linux/types.h -> uapi/linux/posix_types.h -> linux/stddef.h -> uapi/linux/stddef.h and is needed to provide a proper definition of offsetof.

Unfortunately, compiler.h requires a definition of smp_read_barrier_depends() for defining lockless_dereference(), and soon for defining READ_ONCE(), which means that all users of READ_ONCE() will need to include asm/barrier.h to avoid splats such as:

   In file included from include/uapi/linux/stddef.h:1:0,
                    from include/linux/stddef.h:4,
                    from arch/h8300/kernel/asm-offsets.c:11:
   include/linux/list.h: In function 'list_empty':
   >> include/linux/compiler.h:343:2: error: implicit declaration of function 'smp_read_barrier_depends' [-Werror=implicit-function-declaration]
      smp_read_barrier_depends(); /* Enforce dependency ordering from x */ \
      ^

A better alternative is to include asm/barrier.h in linux/compiler.h, but this requires a type definition for "bool" on some architectures (e.g. x86), which is defined later by linux/types.h. Type "bool" is also used directly in linux/compiler.h, so the whole thing is pretty fragile.

This patch splits compiler.h in two: compiler_types.h contains type annotations, definitions and the compiler-specific parts, whereas compiler.h #includes compiler_types.h and additionally defines macros such as {READ,WRITE,ACCESS}_ONCE(). uapi/linux/stddef.h and linux/linkage.h are then moved over to include linux/compiler_types.h, which fixes the build for h8 and blackfin.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1508840570-22169-2-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

By Arnd Bergmann
The asm-generic/unaligned.h header provides two different implementations for accessing unaligned variables: the access_ok.h version, used when CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set, pretends that all pointers are in fact aligned, while the le_struct.h version convinces gcc that the alignment of a pointer is '1', to make it issue the correct load/store instructions depending on the architecture flags.

On ARMv5 and older, we always use the second version, to let the compiler use byte accesses. On ARMv6 and newer, we currently use the access_ok.h version, so the compiler can use any instruction including stm/ldm and ldrd/strd that will cause an alignment trap. This trap can significantly impact performance when we have to do a lot of fixups and, worse, has led to crashes in the LZ4 decompressor code that does not have a trap handler.

This adds an ARM-specific version of asm/unaligned.h that uses the le_struct.h/be_struct.h implementation unconditionally. This should lead to essentially the same code on ARMv6+ as before, with the exception of using regular load/store instructions instead of the multi-register variants of the trapping instructions.

The crash in the LZ4 decompressor code was probably introduced by the patch replacing the LZ4 implementation, commit 4e1a33b1 ("lib: update LZ4 compressor module"), so linux-4.11 and higher would be affected most. However, we probably want to have this backported to all older stable kernels as well, to help with the performance issues.

There are two follow-ups that I think we should also work on, but not backport to stable kernels: first, to change the asm-generic version of the header to remove the ARM special case, and second, to review all other uses of CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to see if they might be affected by the same problem on ARM.

Cc: stable@vger.kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
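
The le_struct.h trick referred to above boils down to wrapping the access in a packed struct, so the compiler assumes alignment 1 and emits byte-safe loads; a minimal sketch:

    struct __una_le32 { __le32 x; } __attribute__((packed));

    static inline u32 get_unaligned_le32(const void *p)
    {
            const struct __una_le32 *ptr = p;

            /* The packed struct keeps the compiler from emitting ldm/ldrd
             * style accesses that could trap on unaligned addresses. */
            return le32_to_cpu(ptr->x);
    }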

- 23 Oct 2017 (5 commits)

By Vladimir Murzin
Currently, there is an assumption in the early MPU setup code that the kernel image is located in RAM, which is obviously not true for XIP. To run code from ROM we need to make sure that it is covered by the MPU. However, because we allocate regions (semi-)dynamically, we can run into the issue of trimming the region we are running from, in case the ROM spans several MPU regions. To help deal with that, we enforce minimum alignments for the start and end of the XIP address space of 1MB and 128KB respectively.

Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

By Vladimir Murzin
PMSAv7 defines curious alignment requirements for the regions:
 - size must be a power of 2, and
 - the region start must be aligned to the region size

Because of that, we currently adjust the lowmem bounds, plus we assign only one MPU region to cover memory; all of this means a significant amount of memory can be wasted.

As an example, consider 64MB of memory at 0x70000000 - it fits the alignment requirements nicely; now, imagine that 2MB of memory is reserved for coherent DMA allocation, so Linux is expected to see 62MB of memory... and here the annoying thing happens - memory gets truncated to 32MB (we've lost 30MB!), i.e. the MPU layout looks like:

   0: base 0x70000000, size 0x2000000

This patch tries to allocate as many MPU slots as possible to minimise the amount of truncated memory. Moreover, with this patch MPU subregions start to get used. MPU subregions allow us to reduce the number of MPU slots used. For the example given above, the MPU layout becomes:

   0: base 0x70000000, size 0x2000000
   1: base 0x72000000, size 0x1000000
   2: base 0x73000000, size 0x1000000, disable subreg 7 (0x73e00000 - 0x73ffffff)

Where without subregions we'd get:

   0: base 0x70000000, size 0x2000000
   1: base 0x72000000, size 0x1000000
   2: base 0x73000000, size 0x800000
   3: base 0x73800000, size 0x400000
   4: base 0x73c00000, size 0x200000

To achieve the better layout we first try to cover the specified memory as is (maybe with the help of subregions), and if we fail, we truncate the memory to fit the alignment requirements (so it occupies one MPU slot) and perform one more attempt with the remainder, and so on, until we either cover all memory or run out of MPU slots.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
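
The strategy in the last paragraph, reduced to hedged pseudocode (helper names are illustrative, not the patch's):

    /* Cover [base, base + size) with as few PMSAv7 slots as possible. */
    while (size && slots_left) {
            /* First try the whole chunk as one region, possibly using
             * subregion disables to shave off the unaligned tail. */
            if (pmsav7_try_cover(base, size))
                    break;

            /* Otherwise carve off the largest power-of-2 chunk whose
             * start is aligned to its size, spend one slot on it... */
            trunc = pmsav7_truncate(base, size);
            pmsav7_program_region(base, trunc);

            /* ...and retry with the remainder. */
            base += trunc;
            size -= trunc;
            slots_left--;
    }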

By Vladimir Murzin
This patch makes it possible to use the MPU with v7M cores.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

By Vladimir Murzin
Currently, there are several issues with how the MPU is set up:

 1. We won't boot if the MPU is missing.
 2. We won't boot if we use XIP.
 3. Further extension of the MPU setup requires asm skills.

The 1st point can be relaxed, so we can continue to boot on the boot CPU even if the MPU is missing, and fail the boot for secondaries only. To address the 2nd point we could create a region covering CONFIG_XIP_PHYS_ADDR - _end, and that might work for the first stage of MPU enablement, but due to the MPU's alignment requirement we could cover too much; IOW we need more flexibility in how we're partitioning memory regions... and that would be hardly possible to achieve because of the 3rd point.

This patch addresses the 1st and 3rd issues and paves the way for the 2nd and further improvements.

The most visible change introduced with this patch is that we start using the mpu_rgn_info array (as it was supposed to be used?), so changes in the MPU setup done by the boot CPU are recorded there and fed to the secondaries. This allows us to keep a minimal region setup for the boot CPU and do the rest in C. Since we start programming MPU regions in C, evaluation of the MPU constraints (number of regions supported and minimal region order) can be done once, which in turn opens the possibility to free up the "probe" region early.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

By Vladimir Murzin
Having the MPU handling code in a dedicated module makes it easier to enhance/maintain it.

Tested-by: Szemző András <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

- 19 Oct 2017 (1 commit)

By Shanker Donthineni
A new feature, Range Selector (RS), has been added to the GIC specification in order to support more than 16 CPUs at affinity level 0. New fields are introduced in the SGI system registers (ICC_SGI0R_EL1, ICC_SGI1R_EL1 and ICC_ASGI1R_EL1) to relax the artificial limit of 16 at level 0.

 - A new RSS field in ICC_CTLR_EL3, ICC_CTLR_EL1 and ICV_CTLR_EL1:
   [18] - Range Selector Support (RSS)
   0b0 = Targeted SGIs with affinity level 0 values of 0-15 are supported.
   0b1 = Targeted SGIs with affinity level 0 values of 0-255 are supported.

 - A new RS field in ICC_SGI0R_EL1, ICC_SGI1R_EL1 and ICC_ASGI1R_EL1:
   [47:44] - RangeSelector (RS), which selects the group of 16 that the TargetList field refers to: TargetList[n] represents aff0 value ((RS*16)+n).
   When ICC_CTLR_EL3.RSS==0 or ICC_CTLR_EL1.RSS==0, RS is RES0.

 - A new RSS field in GICD_TYPER:
   [26] - Range Selector Support (RSS)
   0b0 = Targeted SGIs with affinity level 0 values of 0-15 are supported.
   0b1 = Targeted SGIs with affinity level 0 values of 0-255 are supported.

Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
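
In practice, targeting an aff0 value above 15 means splitting it into a group-of-16 selector and a bit within that group; a hedged sketch (the shift follows the [47:44] layout quoted above):

    #define ICC_SGI1R_RS_SHIFT      44      /* RangeSelector, bits [47:44] */

    /* Build the RS/TargetList part of an SGI register value for a CPU
     * whose affinity-level-0 value is aff0 (0..255). Sketch only. */
    u64 sgi_val = BIT(aff0 % 16) | ((u64)(aff0 / 16) << ICC_SGI1R_RS_SHIFT);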

- 14 Oct 2017 (1 commit)

By Julien Thierry
The arch timer configuration for a CPU might get reset after suspending said CPU. In order to reliably use the event stream in the kernel (e.g. for delays), we keep track of the state where we can safely consider the event stream as properly configured. After writing to cntkctl, we issue an ISB to ensure that subsequent delay loops can rely on the event stream being enabled.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
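
The ordering requirement in the last sentence amounts to something like the following (arm64 flavour shown; a sketch, not the patch verbatim):

    static inline void arch_timer_set_cntkctl(u32 cntkctl)
    {
            write_sysreg(cntkctl, cntkctl_el1);
            /* Make sure later delay loops observe the event stream
             * configuration before they start relying on it. */
            isb();
    }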

- 10 Oct 2017 (3 commits)

By Will Deacon
The arch_{read,spin,write}_lock_flags() macros are simply mapped to the non-flags versions by the majority of architectures, so do this in core code and remove the dummy implementations. Also remove the implementation in spinlock_up.h, since all callers of do_raw_spin_lock_flags() call local_irq_save(flags) anyway.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1507055129-12300-4-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
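
The core-code mapping is essentially a set of fallback defines, roughly:

    /* In core locking code (sketch): map the _flags variants onto the
     * plain ones unless an architecture overrides them. */
    #ifndef arch_spin_lock_flags
    #define arch_spin_lock_flags(lock, flags)       arch_spin_lock(lock)
    #endif

    #ifndef arch_read_lock_flags
    #define arch_read_lock_flags(lock, flags)       arch_read_lock(lock)
    #endif

    #ifndef arch_write_lock_flags
    #define arch_write_lock_flags(lock, flags)      arch_write_lock(lock)
    #endif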

By Will Deacon
arch_{read,spin,write}_relax() are defined as cpu_relax() by the core code, so architectures that can't do better (i.e. most of them) don't need to bother with the dummy definitions.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1507055129-12300-3-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

By Will Deacon
Outside of the locking code itself, {read,spin,write}_can_lock() have no users in tree. Apparmor (the last remaining user of write_can_lock()) got moved over to lockdep by the previous patch. This patch removes the use of {read,spin,write}_can_lock() from the BUILD_LOCK_OPS macro, deferring to the trylock operation for testing the lock status, and subsequently removes the unused macros altogether. They aren't guaranteed to work in a concurrent environment and can give incorrect results in the case of qrwlock.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1507055129-12300-2-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

- 05 Oct 2017 (1 commit)

By Rafael J. Wysocki
None of the locomo drivers in the tree implements the suspend and resume callbacks from struct locomo_driver, so drop them and drop the corresponding callbacks from locomo_bus_type.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>

- 03 Oct 2017 (2 commits)

By Dietmar Eggemann
Commit 8cd5601c ("sched/fair: Convert arch_scale_cpu_capacity() from weak function to #define") changed the wiring, which now has to be done by associating arch_scale_cpu_capacity with the actual implementation provided by the architecture.

Define arch_scale_cpu_capacity to use the arch_topology "driver" function topology_get_cpu_scale() for the task scheduler's cpu-invariant accounting, instead of the default arch_scale_cpu_capacity() in kernel/sched/sched.h.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Reviewed-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
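
The wiring itself is a one-line association in the arch topology header, roughly:

    /* Sketch: point the scheduler's cpu-invariant accounting at the
     * arch_topology "driver" implementation. */
    #define arch_scale_cpu_capacity topology_get_cpu_scale

The frequency-invariant counterpart in the next commit follows the same pattern with topology_get_freq_scale().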

By Dietmar Eggemann
Commit dfbca41f ("sched: Optimize freq invariant accounting") changed the wiring, which now has to be done by associating arch_scale_freq_capacity with the actual implementation provided by the architecture.

Define arch_scale_freq_capacity to use the arch_topology "driver" function topology_get_freq_scale() for the task scheduler's frequency-invariant accounting, instead of the default arch_scale_freq_capacity() in kernel/sched/sched.h.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Reviewed-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

- 28 Sep 2017 (2 commits)

By Vladimir Murzin
There are no users of init_dma_coherent_pool_size() left due to commit 387870f2 ("mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls"), so remove it.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

By Vladimir Murzin
fixmap_page_table was removed by commit 836a2418 ("ARM: expand fixmap region to 3MB"), but some traces of it are still there - get rid of them.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

- 18 Sep 2017 (1 commit)

By Thomas Garnier
This reverts commit 73ac5d6a. The work pending loop can call set_fs after addr_limit_user_check removed the _TIF_FSCHECK flag. This may happen at any time, based on how ARM handles alignment exceptions, and leads to an infinite loop condition.

After discussion, it has been agreed that the generic approach is not tailored to the ARM architecture and any fix might not be complete. This patch will be replaced by an architecture-specific implementation. The work flag approach will be kept for other architectures.

Reported-by: Leonard Crestez <leonard.crestez@nxp.com>
Signed-off-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Will Drewry <wad@chromium.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: David Howells <dhowells@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-api@vger.kernel.org
Cc: Yonghong Song <yhs@fb.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1504798247-48833-3-git-send-email-keescook@chromium.org

- 11 Sep 2017 (4 commits)

By Nicolas Pitre
Provide the necessary changes to be able to execute ELF-FDPIC binaries on ARM systems with an MMU. The default for CONFIG_BINFMT_ELF_FDPIC is also set to n if the regular ELF loader is already configured, so as not to force FDPIC support on everyone. Given that CONFIG_BINFMT_ELF depends on CONFIG_MMU, this means CONFIG_BINFMT_ELF_FDPIC will still default to y when !MMU.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Mickael GUENE <mickael.guene@st.com>
Tested-by: Vincent Abriou <vincent.abriou@st.com>
Tested-by: Andras Szemzo <szemzo.andras@gmail.com>

By Nicolas Pitre
This includes the necessary code to recognise the FDPIC format on ARM and the ptrace command definitions used by the common ptrace code. Based on patches originally from Mickael Guene <mickael.guene@st.com>.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Mickael GUENE <mickael.guene@st.com>
Tested-by: Vincent Abriou <vincent.abriou@st.com>
Tested-by: Andras Szemzo <szemzo.andras@gmail.com>

By Nicolas Pitre
Signal handlers are not direct function pointers but pointers to function descriptors in that case. Therefore we must retrieve the actual function address and load the GOT value into r9 from the descriptor before branching to the actual handler.

If a restorer is provided, we also have to load its address and GOT from its descriptor. That descriptor address, and the code to load it, is pushed onto the stack to be executed as soon as the signal handler returns. However, to be compatible with NX stacks, the FDPIC bounce code is also copied to the signal page along with the other code stubs. Therefore this code must get at the descriptor address whether it executes from the stack or the signal page. To do so we use the stack pointer, which points at the signal stack frame where the descriptor address was stored.

Because the rt signal frame is different from the simpler frame, two versions of the bounce code are needed, and two variants (ARM and Thumb) as well. The asm-offsets facility is used to determine the actual offset in the signal frame for each version, meaning that struct sigframe and rt_sigframe had to be moved to a separate file.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Mickael GUENE <mickael.guene@st.com>
Tested-by: Vincent Abriou <vincent.abriou@st.com>
Tested-by: Andras Szemzo <szemzo.andras@gmail.com>
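
For reference, an FDPIC function descriptor pairs the entry point with the GOT value that must end up in r9; roughly:

    struct fdpic_func_descriptor {
            unsigned long text;     /* actual entry point of the handler */
            unsigned long GOT;      /* value to load into r9 before branching */
    };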

By Nicolas Pitre
The elf_fdpic binary format driver has to initialize extra registers beside the stack and program counter as required by the corresponding ABI. So reinstate them after the regs structure has been cleared. While at it let's get rid of start_thread_nommu().

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Mickael GUENE <mickael.guene@st.com>
Tested-by: Vincent Abriou <vincent.abriou@st.com>
Tested-by: Andras Szemzo <szemzo.andras@gmail.com>