- 22 October 2019, 2 commits
-
-
Committed by Steven Price

Implement the service call for configuring a shared structure between a VCPU and the hypervisor, in which the hypervisor can write the time stolen from the VCPU's execution time by other tasks on the host.

User space allocates memory which is placed at an IPA also chosen by user space. The hypervisor then updates the shared structure using kvm_put_guest() to ensure single-copy atomicity of the 64-bit value reporting the stolen time in nanoseconds. Whenever stolen time is enabled by the guest, the stolen time counter is reset.

The stolen time itself is retrieved from the sched_info structure maintained by the Linux scheduler code. We enable SCHEDSTATS when selecting KVM in Kconfig to ensure this value is meaningful.

Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
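A minimal sketch of the shared record and the host-side update described above (field layout follows the paravirtualized time spec; treat names and the kvm_put_guest() signature as illustrative, not the exact merged code):

----
/* Per-VCPU stolen-time record: a 64-bit nanosecond counter that the
 * host updates in guest memory. */
struct pvclock_vcpu_stolen_time {
	__le32 revision;
	__le32 attributes;
	__le64 stolen_time;	/* nanoseconds, single-copy atomic */
	u8 padding[48];		/* pad the record out to 64 bytes */
} __packed;

/* Host-side update, sketched: kvm_put_guest() writes the 64-bit value
 * to the IPA chosen by user space with single-copy atomicity. */
static void update_stolen_time(struct kvm_vcpu *vcpu, u64 steal)
{
	struct kvm *kvm = vcpu->kvm;
	u64 base = vcpu->arch.steal.base;	/* IPA set by user space */
	u64 offset = offsetof(struct pvclock_vcpu_stolen_time, stolen_time);

	kvm_put_guest(kvm, base + offset, cpu_to_le64(steal), u64);
}
----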
-
Committed by Steven Price

This provides a mechanism for querying which paravirtualized time features are available in this hypervisor. Also add the header file which defines the ABI for the paravirtualized time features we're about to add.

Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
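As a hypothetical guest-side illustration of the discovery ABI (the SMCCC function-ID macro names follow the kernel's conventions, but check the merged headers for the exact definitions):

----
#include <linux/arm-smccc.h>

/* Ask the hypervisor whether the stolen-time feature is implemented;
 * a sketch of the discovery sequence, not the exact merged code. */
static bool pv_steal_time_supported(void)
{
	struct arm_smccc_res res;

	arm_smccc_1_1_invoke(ARM_SMCCC_HV_PV_TIME_FEATURES,
			     ARM_SMCCC_HV_PV_TIME_ST, &res);

	return res.a0 == SMCCC_RET_SUCCESS;
}
----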
-
- 07 October 2019, 2 commits
-
-
Committed by Vincenzo Frascino

Older versions of binutils (prior to 2.24) do not support the "ISHLD" option for memory barrier instructions, which leads to a build failure when assembling the vdso32 library. Add a compile-time mechanism that detects whether binutils supports those instructions and configures the kernel accordingly.

Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Vincenzo Frascino

Moving over to the generic C implementation of the vDSO inadvertently left behind some stale files which are no longer used. Remove them.

Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 04 October 2019, 1 commit
-
-
Committed by Will Deacon

As of commit ac7c3e4f ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly"), inline functions are no longer annotated with '__always_inline', which allows the compiler to decide whether inlining is really a good idea or not. Although this is a great idea on paper, the reality is that AArch64 GCC prior to 9.1 has been shown to get confused when creating an out-of-line copy of a function that passes explicit 'register' variables into an inline assembly block:

  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91111

It's not clear whether this is specific to arm64 or not but, for now, ensure that all of our functions using 'register' variables are marked as '__always_inline' so that the old behaviour is effectively preserved. Hopefully other architectures are luckier with their compilers.

Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
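A minimal sketch of the pattern at issue (a hypothetical function, not taken from the kernel): the explicit register bindings are only reliable if the body is never emitted out of line.

----
/* Hypothetical SMCCC-style call: arguments are pinned to x0/x1 for the
 * inline asm. GCC < 9.1 can lose these bindings when it creates an
 * out-of-line copy (GCC PR 91111), so force inlining. */
static __always_inline unsigned long firmware_call(unsigned long fn,
						   unsigned long arg)
{
	register unsigned long x0 asm("x0") = fn;
	register unsigned long x1 asm("x1") = arg;

	asm volatile("hvc	#0"
		     : "+r" (x0)
		     : "r" (x1)
		     : "memory");

	return x0;
}
----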
-
- 02 October 2019, 1 commit
-
-
Committed by Juergen Gross

Today the EFI runtime functions are set up in architecture-specific code (x86 and arm), with the functions themselves living in drivers/xen as they are not architecture dependent. As the setup is exactly the same for arm and x86, move the setup to drivers/xen, too. This also removes the need to make the individual functions globally visible.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
[boris: "Dropped EXPORT_SYMBOL_GPL(xen_efi_runtime_setup)"]
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
-
- 27 September 2019, 1 commit
-
-
Committed by Mark Rutland

The naming of pgtable_page_{ctor,dtor}() seems to have confused a few people, and until recently arm64 used these erroneously/pointlessly for other levels of page table. To make it incredibly clear that these only apply to the PTE level, and to align with the naming of pgtable_pmd_page_{ctor,dtor}(), let's rename them to pgtable_pte_page_{ctor,dtor}().

These changes were generated with the following shell script:

----
git grep -lw 'pgtable_page_.tor' | while read FILE; do
    sed -i '{s/pgtable_page_ctor/pgtable_pte_page_ctor/}' $FILE;
    sed -i '{s/pgtable_page_dtor/pgtable_pte_page_dtor/}' $FILE;
done
----

... with the documentation re-flowed to remain under 80 columns, and whitespace fixed up in macros to keep backslashes aligned.

There should be no functional change as a result of this patch.

Link: http://lkml.kernel.org/r/20190722141133.3116-1-mark.rutland@arm.com
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 25 September 2019, 3 commits
-
-
Committed by Alexandre Ghiti

arm64 handles top-down mmap layout in a way that can easily be reused by other architectures, so make it available in mm. This introduces a new config option, ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT, that other architectures can select to benefit from those functions. Note that this new config depends on MMU being enabled; if it is selected without MMU support, a warning will be thrown.

Link: http://lkml.kernel.org/r/20190730055113.23635-5-alex@ghiti.fr
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Suggested-by: Christoph Hellwig <hch@infradead.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: James Hogan <jhogan@kernel.org>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Mike Rapoport

Both pgtable_cache_init() and pgd_cache_init() are used to initialize kmem caches for page table allocations on several architectures that do not use PAGE_SIZE tables for one or more levels of the page table hierarchy.

Most architectures do not implement these functions and use a __weak default NOP implementation of pgd_cache_init(). Since there is no such default for pgtable_cache_init(), its empty stub is duplicated among most architectures.

Rename the definitions of pgd_cache_init() to pgtable_cache_init() and drop the empty stubs of pgtable_cache_init().

Link: http://lkml.kernel.org/r/1566457046-22637-1-git-send-email-rppt@linux.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Will Deacon <will@kernel.org> [arm64]
Acked-by: Thomas Gleixner <tglx@linutronix.de> [x86]
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Nicholas Piggin

Patch series "mm: remove quicklist page table caches".

A while ago Nicholas proposed to remove quicklist page table caches [1]. I've rebased his patch on the current upstream and switched ia64 and sh to use generic versions of PTE allocation.

[1] https://lore.kernel.org/linux-mm/20190711030339.20892-1-npiggin@gmail.com

This patch (of 3):

Remove page table allocator "quicklists". These have been around for a long time, but have not got much traction in the last decade and are only used on the ia64 and sh architectures.

The numbers in the initial commit look interesting but probably don't apply anymore. If anybody wants to resurrect this it's in the git history, but it's unhelpful to have this code and divergent allocator behaviour for minor archs. Also, it might be better to instead make more general improvements to the page allocator if this is still so slow.

Link: http://lkml.kernel.org/r/1565250728-21721-2-git-send-email-rppt@linux.ibm.com
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 September 2019, 2 commits
-
-
Committed by Sami Tolvanen

Define a weak function in COND_SYSCALL instead of a weak alias to sys_ni_syscall(), which has an incompatible type. This fixes indirect call mismatches with Control-Flow Integrity (CFI) checking.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
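The shape of the change, sketched (simplified from the arm64 syscall wrappers; the exact macro lives in arch/arm64/include/asm/syscall_wrapper.h):

----
/* Old: a weak asm alias to sys_ni_syscall(), whose prototype does not
 * match the pt_regs-based wrapper type, so CFI flags the indirect call.
 * New: a weak C stub with the matching asmlinkage prototype. */
#define COND_SYSCALL(name)						\
	asmlinkage long __weak __arm64_sys_##name(const struct pt_regs *regs) \
	{								\
		return sys_ni_syscall();				\
	}
----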
-
Committed by Arnd Bergmann

When building for arm64 with clang, __cmpxchg_mb is sometimes not inlined when CONFIG_OPTIMIZE_INLINING is set. Clang then fails a compile-time assertion, because it cannot tell at compile time what the size of the argument is:

  mm/memcontrol.o: In function `__cmpxchg_mb':
  memcontrol.c:(.text+0x1a4c): undefined reference to `__compiletime_assert_175'
  memcontrol.c:(.text+0x1a4c): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `__compiletime_assert_175'

Mark all of the cmpxchg() style functions as __always_inline to ensure that the compiler can see the result.

Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/648
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Andrew Murray <andrew.murray@arm.com>
Tested-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will@kernel.org>
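A simplified sketch of why inlining matters here (helper names are illustrative): the BUILD_BUG() in the size dispatch is only provably dead when the compiler inlines the function and sees a constant size.

----
static __always_inline unsigned long __cmpxchg(volatile void *ptr,
					       unsigned long old,
					       unsigned long new,
					       int size)
{
	switch (size) {
	case 4:
		return __cmpxchg_case_mb_4(ptr, old, new);
	case 8:
		return __cmpxchg_case_mb_8(ptr, old, new);
	default:
		/* Unreachable for constant sizes once inlined; if the
		 * function is emitted out of line, this compile-time
		 * assertion (and the link error above) fires. */
		BUILD_BUG();
	}

	return old;
}
----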
-
- 11 September 2019, 3 commits
-
-
Committed by Christoph Hellwig

Now that the Xen special cases are gone, nothing worth mentioning is left in the arm64 <asm/dma-mapping.h> file, so switch to using the asm-generic version instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
-
Committed by Christoph Hellwig

Use the dma-noncoherent dev_is_dma_coherent() helper instead of the home-grown variant. Note that both are always initialized to the same value in arch_setup_dma_ops().

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
-
Committed by Christoph Hellwig

Share the duplicated arm/arm64 code in include/xen/arm/page-coherent.h.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
-
- 10 September 2019, 1 commit
-
-
Committed by Marc Zyngier

hyp_alternate_select() is now completely unused. Goodbye.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
-
- 04 September 2019, 1 commit
-
-
Committed by Christoph Hellwig

No need to indirect iounmap for arm64.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 01 September 2019, 1 commit
-
-
Committed by Steven Rostedt (VMware)

Most archs (well, at least x86) store the function call return address on the stack before storing the local variables for the function. The max stack tracer depends on this in its algorithm to display the stack size of each function it finds in the back trace.

Some archs (arm64) may store the return address (from the link register) just before calling a nested function. There's no reason to save the link register on leaf functions, as it won't be updated. This breaks the algorithm of the max stack tracer.

Add a new define, ARCH_FTRACE_SHIFT_STACK_TRACER, that an architecture may set if it stores the return address (link register) after it stores the function's local variables, and have the stack trace shift the values of the mapped stack size to the appropriate functions.

Link: 20190802094103.163576-1-jiping.ma2@windriver.com
Reported-by: Jiping Ma <jiping.ma2@windriver.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
- 30 August 2019, 4 commits
-
-
Committed by Will Deacon

The 'K' constraint is a documented AArch64 machine constraint supported by GCC for matching integer constants that can be used with a 32-bit logical instruction. Unfortunately, some released compilers erroneously accept the immediate '4294967295' for this constraint, which is later refused by GAS at assembly time. This has led us to avoid the use of the 'K' constraint altogether.

Instead, detect whether the compiler is up to the job when building the kernel and pass the 'K' constraint to our 32-bit atomic macros when it appears to be supported.

Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon

We use a bunch of internal macros when constructing our atomic and cmpxchg routines in order to save on boilerplate. Avoid exposing these directly to users of the header files.

Reviewed-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon

The contents of 'asm/atomic_arch.h' can be split across some of our other 'asm/' headers. Remove it.

Reviewed-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon

The 'alt_lse' assembly macro has been unused since commit 7c8fc35d ("locking/atomics/arm64: Replace our atomic/lock bitop implementations with asm-generic"). Remove it.

Reviewed-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 29 August 2019, 5 commits
-
-
Committed by Andrew Murray

Now that we have removed the out-of-line ll/sc atomics, we can give the compiler the freedom to choose its own register allocation. Remove the hard-coded use of x30.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Andrew Murray

When building for LSE atomics (CONFIG_ARM64_LSE_ATOMICS), if the hardware or toolchain doesn't support it, the existing code falls back to ll/sc atomics. It achieves this by branching from inline assembly to a function that is built with special compile flags. Further, this results in the clobbering of registers even when the fallback isn't used, increasing register pressure.

Improve this by providing inline implementations of both LSE and ll/sc and using a static key to select between them, which allows the compiler to generate better atomics code. Put the LL/SC fallback atomics in their own subsection to improve icache performance.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
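The dispatch pattern, close to what the patch introduces (simplified):

----
/* A static key, patched at boot, selects the LSE body when the CPUs
 * and toolchain support it; otherwise the LL/SC fallback (placed in
 * its own subsection) is used. */
static inline bool system_uses_lse_atomics(void)
{
	return static_branch_likely(&arm64_const_caps_ready) &&
	       static_branch_likely(&cpu_hwcap_keys[ARM64_HAS_LSE_ATOMICS]);
}

#define __lse_ll_sc_body(op, ...)					\
({									\
	system_uses_lse_atomics() ?					\
		__lse_##op(__VA_ARGS__) :				\
		__ll_sc_##op(__VA_ARGS__);				\
})
----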
-
Committed by Christoph Hellwig

Based on an email from Will Deacon.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
-
Committed by Christoph Hellwig

arch_dma_mmap_pgprot is used for two things:

1) to override the "normal" uncached page attributes for mapping memory coherent to devices that can't snoop the CPU caches

2) to provide the special DMA_ATTR_WRITE_COMBINE semantics on older arm systems and some mips platforms

Replace the first use with the pgprot_dmacoherent macro that is already provided by arm and much simpler to use, and lift the DMA_ATTR_WRITE_COMBINE handling to common code with an explicit arch opt-in.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
Acked-by: Paul Burton <paul.burton@mips.com> # mips
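On arm64 the macro boils down to a Normal Non-cacheable, never-executable attribute, approximately:

----
/* arm64's definition, roughly: Normal Non-cacheable memory, never
 * executable, for devices that cannot snoop the CPU caches. */
#define pgprot_dmacoherent(prot)					\
	__pgprot_modify(prot, PTE_ATTRINDX_MASK,			\
			PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN)
----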
-
Committed by Andrew Murray

The A64 ISA accepts distinct (but overlapping) ranges of immediates for:

* add arithmetic instructions ('I' machine constraint)
* sub arithmetic instructions ('J' machine constraint)
* 32-bit logical instructions ('K' machine constraint)
* 64-bit logical instructions ('L' machine constraint)

... but we currently use the 'I' constraint for many atomic operations using sub or logical instructions, which is not always valid.

When CONFIG_ARM64_LSE_ATOMICS is not set, this allows invalid immediates to be passed to instructions, potentially resulting in a build failure. When CONFIG_ARM64_LSE_ATOMICS is selected, the out-of-line ll/sc atomics always use a register as they have no visibility of the value passed by the caller.

This patch adds a constraint parameter to the ATOMIC_xx and __CMPXCHG_CASE macros so that we can pass appropriate constraints for each case, with uses updated accordingly; a sketch follows below. Unfortunately, prior to GCC 8.1.0 the 'K' constraint erroneously accepted '4294967295', so we must instead force the use of a register.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
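A condensed sketch of the resulting macro shape (the real LL/SC macros carry more operands): the immediate's machine constraint becomes a parameter instead of a hard-coded 'I'.

----
#define ATOMIC_OP(op, asm_op, constraint)				\
static inline void arch_atomic_##op(int i, atomic_t *v)		\
{									\
	unsigned long tmp;						\
	int result;							\
									\
	asm volatile("// atomic_" #op "\n"				\
	"1:	ldxr	%w0, %2\n"					\
	"	" #asm_op "	%w0, %w0, %w3\n"			\
	"	stxr	%w1, %w0, %2\n"					\
	"	cbnz	%w1, 1b"					\
	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
	: __stringify(constraint) "r" (i));				\
}

ATOMIC_OP(add, add, I)	/* arithmetic immediate */
ATOMIC_OP(and, and, K)	/* 32-bit logical immediate (register on old GCC) */
----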
-
- 28 August 2019, 5 commits
-
-
Committed by James Morse

Since commit 2f6ea23f ("arm64: KVM: Avoid marking pages as XN in Stage-2 if CTR_EL0.DIC is set"), KVM has stopped marking normal memory as execute-never at stage 2 when the system supports D->I coherency at the PoU. This avoids KVM taking a trap when the page is first executed, in order to clean it to the PoU.

The patch that added this change also wrapped PAGE_S2_DEVICE mappings up in this too. The upshot is, if your CPU caches support DIC ... you can execute devices.

Revert the PAGE_S2_DEVICE change so PTE_S2_XN is always used directly.

Fixes: 2f6ea23f ("arm64: KVM: Avoid marking pages as XN in Stage-2 if CTR_EL0.DIC is set")
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
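After the revert, the device mapping again encodes execute-never directly, roughly:

----
/* Stage-2 device mappings must always be execute-never, regardless of
 * CTR_EL0.DIC, so use PTE_S2_XN directly rather than the conditional
 * PAGE_S2_XN selector. */
#define PAGE_S2_DEVICE							\
	__pgprot(_PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) |	\
		 PTE_S2_RDONLY | PTE_S2_XN)
----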
-
Committed by Will Deacon

PAR_EL1 is a mysterious creature, but sometimes it's necessary to read it when translating addresses in situations where we cannot walk the page table directly. Add a couple of system register definitions for the fault indication field ('F') and the fault status code ('FST').

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
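The definitions amount to two field encodings (as described, modulo exact naming):

----
/* PAR_EL1: fault indication bit and fault status code field. */
#define SYS_PAR_EL1_F		BIT(0)
#define SYS_PAR_EL1_FST		GENMASK(6, 1)
----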
-
Committed by Will Deacon

Commit 6a4cbd63c25a ("Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}"") reintroduced ISB instructions to some of our page table setter functions in light of a recent clarification to the Armv8 architecture. Although set_pgd() isn't currently used to update a live page table, add the ISB instruction there too for consistency with the other macros and to provide some future-proofing if we use it on live tables in the future.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon

Commit 05f2d2f8 ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable") added a new TLB invalidation helper, which is used when freeing intermediate levels of page table used for kernel mappings, but is missing the required ISB instruction after completion of the TLBI instruction. Add the missing barrier.

Cc: <stable@vger.kernel.org>
Fixes: 05f2d2f8 ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
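The fixed helper, approximately:

----
static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
{
	unsigned long addr = __TLBI_VADDR(kaddr, 0);

	dsb(ishst);
	__tlbi(vaae1is, addr);
	dsb(ish);
	isb();		/* the barrier this patch adds */
}
----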
-
Committed by Will Deacon

This reverts commit 24fe1b0e.

Commit 24fe1b0e ("arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}") removed ISB instructions immediately following updates to the page table, on the grounds that they are not required by the architecture and a DSB alone is sufficient to ensure that subsequent data accesses use the new translation:

DDI0487E_a, B2-128:

| ... no instruction that appears in program order after the DSB
| instruction can alter any state of the system or perform any part of
| its functionality until the DSB completes other than:
|
| * Being fetched from memory and decoded
| * Reading the general-purpose, SIMD and floating-point,
|   Special-purpose, or System registers that are directly or indirectly
|   read without causing side-effects.

However, the same document also states the following:

DDI0487E_a, B2-125:

| DMB and DSB instructions affect reads and writes to the memory system
| generated by Load/Store instructions and data or unified cache
| maintenance instructions being executed by the PE. Instruction fetches
| or accesses caused by a hardware translation table access are not
| explicit accesses.

which appears to claim that the DSB alone is insufficient. Unfortunately, some CPU designers have followed the second clause above, whereas in Linux we've been relying on the first. This means that our mapping sequence:

	MOV	X0, <valid pte>
	STR	X0, [Xptep]	// Store new PTE to page table
	DSB	ISHST
	LDR	X1, [X2]	// Translates using the new PTE

can actually raise a translation fault on the load instruction, because the translation can be performed speculatively before the page table update and then marked as "faulting" by the CPU. For user PTEs, this is ok because we can handle the spurious fault, but for kernel PTEs and intermediate table entries this results in a panic().

Revert the offending commit to reintroduce the missing barriers.

Cc: <stable@vger.kernel.org>
Fixes: 24fe1b0e ("arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 23 August 2019, 1 commit
-
-
Committed by Hsin-Yi Wang

Currently on arm64, the FDT is mapped read-only before it is passed to early_init_dt_scan(). However, there may be code (e.g. commit "fdt: add support for rng-seed") that needs to modify the FDT during init. Map the FDT read-only only after the early fixups are done.

Signed-off-by: Hsin-Yi Wang <hsinyi@chromium.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 22 August 2019, 1 commit
-
-
Committed by James Morse

When taking an SError or Debug exception from EL0, we run the C handler for these exceptions before updating the context tracking code and unmasking lower priority interrupts. When booting with nohz_full, lockdep tells us we got this wrong:

| =============================
| WARNING: suspicious RCU usage
| 5.3.0-rc2-00010-gb4b5e9dcb11b-dirty #11271 Not tainted
| -----------------------------
| include/linux/rcupdate.h:643 rcu_read_unlock() used illegally wh!
|
| other info that might help us debug this:
|
| RCU used illegally from idle CPU!
| rcu_scheduler_active = 2, debug_locks = 1
| RCU used illegally from extended quiescent state!
| 1 lock held by a.out/432:
| #0: 00000000c7a79515 (rcu_read_lock){....}, at: brk_handler+0x00
|
| stack backtrace:
| CPU: 1 PID: 432 Comm: a.out Not tainted 5.3.0-rc2-00010-gb4b5e9d1
| Hardware name: ARM LTD ARM Juno Development Platform/ARM Juno De8
| Call trace:
|  dump_backtrace+0x0/0x140
|  show_stack+0x14/0x20
|  dump_stack+0xbc/0x104
|  lockdep_rcu_suspicious+0xf8/0x108
|  brk_handler+0x164/0x1b0
|  do_debug_exception+0x11c/0x278
|  el0_dbg+0x14/0x20

Moving the ct_user_exit calls to be before do_debug_exception() means they are also before trace_hardirqs_off() has been updated. Add a new ct_user_exit_irqoff macro to avoid the context-tracking code using irqsave/restore before we've updated trace_hardirqs_off(). To be consistent, do this everywhere.

The C helper is called enter_from_user_mode() to match x86, in the hope that we can merge them into kernel/context_tracking.c later.

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Fixes: 6c81fe79 ("arm64: enable context tracking")
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
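A sketch of the C helper named in the message:

----
/* Called from the ct_user_exit_irqoff asm macro with IRQs still
 * masked; named to match x86's context-tracking entry helper. */
asmlinkage void notrace enter_from_user_mode(void)
{
	CT_WARN_ON(ct_state() != CONTEXT_USER);
	user_exit_irqoff();
}
NOKPROBE_SYMBOL(enter_from_user_mode);
----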
-
- 15 August 2019, 3 commits
-
-
Committed by Sudeep Holla

The trusted OS may reject CPU_OFF calls to its resident CPU, so we must avoid issuing those. We never migrate a trusted OS and we already take care to prevent the CPU_OFF PSCI call. However, this is not reflected explicitly to userspace, so any user can attempt to hotplug the trusted OS's resident CPU. The entire motion of going through the various state transitions in the CPU hotplug state machine gets executed, and the PSCI layer finally refuses to make the CPU_OFF call. This results in unnecessary unwinding of the CPU hotplug state machine in the kernel.

Instead, we can mark the trusted OS's resident CPU as not available for hotplug, so that any user attempt or request to do so is immediately rejected.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Mark Brown

Strengthen the wording in the documentation for cpu_enable() to make it more obvious to readers not already familiar with the code when the core will call this callback and that this is intentional.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[will: minor tweak to emphasis in the comment]
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Mark Rutland

Prior to commit 14c127c9 ("arm64: mm: Flip kernel VA space"), VA_START described the start of the TTBR1 address space for a given VA size described by VA_BITS, where all kernel mappings began. Since that commit, VA_START has described a portion midway through the address space, where the linear map ends and other kernel mappings begin.

To avoid confusion, let's rename VA_START to PAGE_END, making it clear that it's not the start of the TTBR1 address space and implying that it's related to PAGE_OFFSET. Comments and other mnemonics are updated accordingly, along with a typo fix in the description of VMEMMAP_SIZE.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 14 August 2019, 3 commits
-
-
Committed by Will Deacon

Clean up memory.h so that the indentation is consistent, remove pointless line-wrapping and use consistent parameter names for different versions of the same macro.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon

Commenting the #endif of a multi-statement #ifdef block with the condition which guards it is useful and can save having to scroll back through the file to figure out which set of Kconfig options applies to a particular piece of code.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Will Deacon

There's no need for __tag_set() to be a complicated macro when CONFIG_KASAN_SW_TAGS=y and a simple static inline otherwise. Rewrite the thing as a common static inline function.

Tested-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
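The reworked helper, roughly as merged:

----
#ifdef CONFIG_KASAN_SW_TAGS
#define __tag_shifted(tag)	((u64)(tag) << 56)
#else
#define __tag_shifted(tag)	0UL
#endif

/* One static inline for both configurations: insert 'tag' into the
 * top byte of the address (a no-op when software tags are disabled). */
static inline const void *__tag_set(const void *addr, u8 tag)
{
	u64 __addr = (u64)addr & ~__tag_shifted(0xff);

	return (const void *)(__addr | __tag_shifted(tag));
}
----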
-