- 06 April 2019, 1 commit
-
-
Submitted by Vladimir Murzin

[ Upstream commit 72cd4064fccaae15ab84d40d4be23667402df4ed ]

ARMv8M introduces support for the Security extension to the M class; among other things it affects exception handling, especially the encoding of EXC_RETURN. New bits have been added:

	Bit [6]  Secure or Non-secure stack
	Bit [5]  Default callee register stacking
	Bit [0]  Exception Secure

which conflicts with the hard-coded value of EXC_RETURN. In fact, we only care about a few bits:

	Bit [3]  Mode (0 - Handler, 1 - Thread)
	Bit [2]  Stack pointer selection (0 - Main, 1 - Process)

We can toggle only those bits and leave the other bits as they were on exception entry. That is basically what the patch does: it saves EXC_RETURN when we transition from Thread to Handler mode (i.e. on the first svc), so the saved value is later used instead of EXC_RET_THREADMODE_PROCESSSTACK.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
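A minimal C sketch of the idea described above (illustrative only; the macro and function names here are invented for the example, the real kernel code is assembly in the exception-return path):

    /* Keep whatever EXC_RETURN bits were set on exception entry and only
     * force the two bits we actually care about. */
    #define EXC_RET_MODE_THREAD     (1UL << 3)   /* bit [3]: 1 = Thread mode   */
    #define EXC_RET_SPSEL_PROCESS   (1UL << 2)   /* bit [2]: 1 = Process stack */

    static unsigned long exc_return_for_thread(unsigned long saved_exc_return)
    {
            /* saved_exc_return was captured on the first svc (Thread -> Handler);
             * Secure-state, DCRS and ES bits remain exactly as hardware set them. */
            return saved_exc_return | EXC_RET_MODE_THREAD | EXC_RET_SPSEL_PROCESS;
    }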
-
- 24 March 2019, 2 commits
-
-
Submitted by Dietmar Eggemann

[ Upstream commit 1b5ba350784242eb1f899bcffd95d2c7cff61e84 ]

Arm TC2 fails the cpu hotplug stress test. This issue was tracked down to a missing copy of the new affinity cpumask for the vexpress-spc interrupt into struct irq_common_data.affinity when the interrupt is migrated in migrate_one_irq().

Fix it by replacing the arm-specific hotplug cpu migration with the generic irq code. This is the counterpart implementation to commit 217d453d ("arm64: fix a migrating irq bug when hotplug cpu").

Tested with the cpu hotplug stress test on Arm TC2 (multi_v7_defconfig plus CONFIG_ARM_BIG_LITTLE_CPUFREQ=y and CONFIG_ARM_VEXPRESS_SPC_CPUFREQ=y). The vexpress-spc interrupt (irq=22) on this board is affine to CPU0. Its affinity cpumask now changes correctly, e.g. from 0 to 1-4 when CPU0 is hotplugged out.

Suggested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Marc Zyngier

[ Upstream commit 358b28f09f0ab074d781df72b8a671edb1547789 ]

The current kvm_psci_vcpu_on implementation will directly try to manipulate the state of the VCPU to reset it. However, since this is not done on the thread that runs the VCPU, we can end up in a strangely corrupted state when the source and target VCPUs are running at the same time.

Fix this by factoring out all reset logic from the PSCI implementation and forwarding the required information along with a request to the target VCPU.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 20 February 2019, 8 commits
-
-
Submitted by Russell King

Commit 383fb3ee8024d596f488d2dbaf45e572897acbdb upstream.

In big.Little systems, some CPUs require the Spectre workarounds in paths such as the context switch, but other CPUs do not. In order to handle these differences, we need per-CPU vtables. We are unable to use the kernel's per-CPU variables to support this, as per-CPU is not initialised at times when we need access to the vtables, so we have to use an array indexed by logical CPU number. We use an array-of-pointers to avoid having function pointers in the kernel's read/write .data section.

Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
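A hedged sketch of the approach (the exact declarations and accessor macro in the real proc-fns.h differ in detail; the accessor name here is illustrative):

    /* One processor-vtable pointer per logical CPU. The array holds pointers,
     * so the vtables themselves need not live in writable .data. */
    extern struct processor *cpu_vtable[NR_CPUS];

    /* Pick the vtable belonging to the CPU we are currently running on. */
    #define PROC_VTABLE(f)  cpu_vtable[smp_processor_id()]->f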
-
Submitted by Russell King

Commit e209950fdd065d2cc46e6338e47e52841b830cba upstream.

Allow the way we access members of the processor vtable to be changed at compile time. We will need to move to per-CPU vtables to fix the Spectre variant 2 issues on big.Little systems.

However, we have a couple of calls that do not need the vtable treatment, and indeed cause a kernel warning due to the (later) use of smp_processor_id(), so also introduce the PROC_TABLE macro for these, which always uses CPU 0's function pointers.

Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Russell King

Commit 945aceb1db8885d3a35790cf2e810f681db52756 upstream.

Call the per-processor type check_bugs() method in the same way as we do other per-processor functions - move the "processor." detail into proc-fns.h.

Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Russell King

Commit 65987a8553061515b5851b472081aedb9837a391 upstream.

Split out the lookup of the processor type and the associated error handling from the rest of setup_processor() - we will need to use this in the secondary CPU bringup path for the big.Little Spectre variant 2 mitigation.

Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Julien Thierry

Commit afaf6838f4bc896a711180b702b388b8cfa638fc upstream.

Introduce C and asm helpers to sanitize user addresses, taking the address range they target into account. Use the asm helper for the existing sanitization in __copy_from_user().

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Julien Thierry

Commit e3aa6243434fd9a82e84bb79ab1abd14f2d9a5a7 upstream.

When Spectre mitigation is required, __put_user() needs to include check_uaccess. This is already the case for put_user(), so just make __put_user() an alias of put_user().

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
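Conceptually the aliasing amounts to the sketch below (hedged: the exact guard and surrounding header context in the real uaccess.h may differ):

    #ifdef CONFIG_CPU_SPECTRE
    /* With Spectre hardening, the "unchecked" variant simply becomes the
     * checked one, so both paths include the address check. */
    #define __put_user(x, ptr)      put_user(x, ptr)
    #endif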
-
Submitted by Julien Thierry

Commit 621afc677465db231662ed126ae1f355bf8eac47 upstream.

A mispredicted conditional call to set_fs could result in the wrong addr_limit being forwarded under speculation to a subsequent access_ok check, potentially forming part of a spectre-v1 attack using uaccess routines.

This patch prevents this forwarding from taking place by putting heavy barriers in set_fs after writing the addr_limit.

Porting of commit c2f0ad4f ("arm64: uaccess: Prevent speculative use of the current addr_limit").

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
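A hedged sketch of the barrier placement (the arm set_fs also handles the DACR domain, omitted here; exact barrier choice is as I understand it, not quoted from the patch):

    static inline void set_fs(mm_segment_t fs)
    {
            current_thread_info()->addr_limit = fs;

            /* Keep a speculated, stale addr_limit from being forwarded to a
             * later access_ok() check: barrier immediately after the write. */
            dsb(nsh);
            isb();
    }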
-
Submitted by Julien Thierry

Commit 3aa2df6ec2ca6bc143a65351cca4266d03a8bc41 upstream.

Use __copy_to_user() rather than __put_user_error() for individual members when saving VFP state. This has the benefit of disabling/enabling PAN once per copied struct instead of once per write.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David A. Long <dave.long@linaro.org>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 29 December 2018, 1 commit
-
-
Submitted by Martin Schwidefsky

[ Upstream commit a8874e7e8a8896f2b6c641f4b8e2473eafd35204 ]

Change the currently empty defines for __PAGETABLE_PMD_FOLDED, __PAGETABLE_PUD_FOLDED and __PAGETABLE_P4D_FOLDED to return 1. This makes it possible to use __is_defined() to test whether the preprocessor define exists.

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
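A hedged illustration of why the value matters: __is_defined() (from include/linux/kconfig.h) only evaluates to 1 when the macro expands to 1, so an empty define would not be detected. The helper function below is hypothetical, added only to show usage:

    #define __PAGETABLE_PMD_FOLDED 1    /* was: "#define __PAGETABLE_PMD_FOLDED" */

    static void report_folding(void)
    {
            /* usable from C code rather than only in #ifdef blocks */
            if (__is_defined(__PAGETABLE_PMD_FOLDED))
                    pr_debug("PMD level is folded in this configuration\n");
    }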
-
- 07 September 2018, 1 commit
-
-
Submitted by Marc Zyngier

kvm_unmap_hva is long gone, and we only have kvm_unmap_hva_range to deal with. Drop the now obsolete code.

Fixes: fb1522e0 ("KVM: update to new mmu_notifier semantic v2")
Cc: James Hogan <jhogan@kernel.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
-
- 12 August 2018, 1 commit
-
-
Submitted by Gustavo A. R. Silva

Return statements in functions returning bool should use true or false instead of an integer value. This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 03 August 2018, 5 commits
-
-
Submitted by Palmer Dabbelt

Converts the ARM interrupt code to use the recently added GENERIC_IRQ_MULTI_HANDLER, which is essentially just a copy of ARM's existing MULTI_IRQ_HANDLER. The only changes are:

* handle_arch_irq is now defined in a generic C file instead of an arm-specific assembly file.
* handle_arch_irq is now marked as __ro_after_init.

Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux@armlinux.org.uk
Cc: catalin.marinas@arm.com
Cc: Will Deacon <will.deacon@arm.com>
Cc: jonas@southpole.se
Cc: stefan.kristiansson@saunalahti.fi
Cc: shorne@gmail.com
Cc: jason@lakedaemon.net
Cc: marc.zyngier@arm.com
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: nicolas.pitre@linaro.org
Cc: vladimir.murzin@arm.com
Cc: keescook@chromium.org
Cc: jinb.park7@gmail.com
Cc: yamada.masahiro@socionext.com
Cc: alexandre.belloni@bootlin.com
Cc: pombredanne@nexb.com
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: kstewart@linuxfoundation.org
Cc: jhogan@kernel.org
Cc: mark.rutland@arm.com
Cc: ard.biesheuvel@linaro.org
Cc: james.morse@arm.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: openrisc@lists.librecores.org
Link: https://lkml.kernel.org/r/20180622170126.6308-3-palmer@sifive.com
-
Submitted by Russell King

Spectre variant 1 attacks are about this sequence of pseudo-code:

	index = load(user-manipulated pointer);
	access(base + index * stride);

In order for the cache side-channel to work, the access() must be made to memory for which userspace can detect whether cache lines have been loaded. On 32-bit ARM, this must be either user-accessible memory, or a kernel mapping of that same user-accessible memory.

The problem occurs when the load() speculatively loads privileged data, and the subsequent access() is made to user-accessible memory.

Any load() which makes use of a user-manipulated pointer is a potential problem if the data it has loaded is used in a subsequent access. This also applies to the access() if the data loaded by that access is used by a subsequent access.

Harden the get_user() accessors against Spectre attacks by forcing out-of-bounds addresses to a NULL pointer. This prevents get_user() being used as the load() step above. As a side effect, put_user() will also be affected even though it isn't implicated.

Also harden copy_from_user() by redoing the bounds check within the arm_copy_from_user() code, and NULLing the pointer if out of bounds.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
Submitted by Russell King

Fixing __get_user() for spectre variant 1 is not sane: we would have to add address space bounds checking in order to validate that the location should be accessed, and then zero the address if found to be invalid.

Since __get_user() is supposed to avoid the bounds check, and this is exactly what get_user() does, there's no point having two different implementations that do the same thing. So, when the Spectre workarounds are required, make __get_user() an alias of get_user().

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
Submitted by Russell King

Borrow the x86 implementation of __inttype() to use in get_user() to select an integer type suitable to temporarily hold the result value. This is necessary to avoid propagating the volatile nature of the result argument, which can cause the following warning:

	lib/iov_iter.c:413:5: warning: optimization may eliminate reads and/or writes to register variables [-Wvolatile-register-var]

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
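For reference, the borrowed x86-style helper looks roughly like this (a sketch from memory of the x86 header at the time, not quoted from the patch): it picks unsigned long long for 64-bit values and unsigned long otherwise, without inheriting qualifiers such as volatile from the argument.

    #define __inttype(x) \
            __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))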
-
Submitted by Russell King

__get_user_error() is used as a fast accessor to make copying structure members in the signal handling path as efficient as possible. However, with software PAN and the recent Spectre variant 1, the efficiency is reduced as these are no longer fast accessors.

In the case of software PAN, it has to switch the domain register around each access, and with Spectre variant 1, it would have to repeat the access_ok() check for each access.

Use __copy_from_user() rather than __get_user_err() for individual members when restoring VFP state.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
- 30 July 2018, 1 commit
-
-
Submitted by Nicolas Pitre

On ARMv5 and above, it is beneficial to use compiler built-ins such as __builtin_ffs() and __builtin_ctzl() to implement ffs(), __ffs(), fls() and __fls(). The compiler inlines the clz instruction, and even the rbit instruction when available, or provides a constant value when possible.

On ARMv4 the compiler calls out to helper functions for those built-ins, so it is best to keep the open-coded versions in that case.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
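A hedged sketch of what the builtin-based definitions can look like (the real arm header differs in details such as argument types and the __ffs/__fls variants):

    /* ffs(): 1-based index of the least significant set bit, 0 if none.
     * __builtin_ffs() has exactly these semantics. */
    static inline int ffs(int x)
    {
            return __builtin_ffs(x);
    }

    /* fls(): 1-based index of the most significant set bit, 0 if none.
     * __builtin_clz(0) is undefined, so handle zero explicitly. */
    static inline int fls(unsigned int x)
    {
            return x ? 32 - __builtin_clz(x) : 0;
    }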
-
- 25 July 2018, 1 commit
-
-
Submitted by Anders Roxell

Building on arm 32 with LPAE enabled, we don't include asm-generic/tlb.h, where tlb_flush_remove_tables_local and tlb_flush_remove_tables are defined. The build fails with:

	mm/memory.c: In function ‘tlb_remove_table_smp_sync’:
	mm/memory.c:339:2: error: implicit declaration of function ‘tlb_flush_remove_tables_local’; did you mean ‘tlb_remove_table’? [-Werror=implicit-function-declaration]
	...

This bug got introduced in:

	2ff6ddf1 ("x86/mm/tlb: Leave lazy TLB mode at page table free time")

To fix this issue we define them in arm 32's specific asm/tlb.h file as well.

Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dave.hansen@intel.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux@armlinux.org.uk
Cc: riel@surriel.com
Cc: songliubraving@fb.com
Fixes: 2ff6ddf1 ("x86/mm/tlb: Leave lazy TLB mode at page table free time")
Link: http://lkml.kernel.org/r/20180725095557.19668-1-anders.roxell@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
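A hedged sketch of the kind of stubs such a fix adds to arch/arm/include/asm/tlb.h (the arm side has no work to do for these hooks, so empty definitions suffice; the exact form in the real patch may differ):

    #define tlb_flush_remove_tables(mm)             do { } while (0)
    #define tlb_flush_remove_tables_local(mm)       do { } while (0)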
-
- 22 July 2018, 1 commit
-
-
Submitted by Lukas Wunner

There's one ARM, one x86_32 and one x86_64 version of efi_open_volume() which can be folded into a single shared version by masking their differences with the efi_call_proto() macro introduced by commit:

	3552fdf2 ("efi: Allow bitness-agnostic protocol calls")

To be able to dereference the device_handle attribute from the efi_loaded_image_t table in an arch- and bitness-agnostic manner, introduce the efi_table_attr() macro (which already exists for x86) to arm and arm64.

No functional change intended.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20180720014726.24031-7-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 21 July 2018, 1 commit
-
-
Submitted by James Morse

arm64's new use of KVM's get_events/set_events API calls isn't just for RAS; it allows an SError that has been made pending by KVM as part of its device emulation to be migrated.

Wire this up for 32bit too. We only need to read/write the HCR_VA bit, and check that no esr has been provided, as we don't yet support VDFSR.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Dongjiu Geng <gengdongjiu@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 20 July 2018, 1 commit
-
-
Submitted by Pavel Tatashin

read_boot_clock64() is deleted and replaced with read_persistent_wall_and_boot_offset(). The default implementation of read_persistent_wall_and_boot_offset() provides a better fallback than the current stubs for read_boot_clock64() that arm has with no users, so remove the old code.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: steven.sistare@oracle.com
Cc: daniel.m.jordan@oracle.com
Cc: linux@armlinux.org.uk
Cc: schwidefsky@de.ibm.com
Cc: heiko.carstens@de.ibm.com
Cc: john.stultz@linaro.org
Cc: sboyd@codeaurora.org
Cc: hpa@zytor.com
Cc: douly.fnst@cn.fujitsu.com
Cc: peterz@infradead.org
Cc: prarit@redhat.com
Cc: feng.tang@intel.com
Cc: pmladek@suse.com
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: linux-s390@vger.kernel.org
Cc: boris.ostrovsky@oracle.com
Cc: jgross@suse.com
Cc: pbonzini@redhat.com
Link: https://lkml.kernel.org/r/20180719205545.16512-19-pasha.tatashin@oracle.com
-
- 10 July 2018, 1 commit
-
-
Submitted by Arnd Bergmann

The asm/module.h header file cannot be included standalone, which breaks the module signing code after a recent change:

	In file included from kernel/module-internal.h:13,
	                 from kernel/module_signing.c:17:
	arch/arm/include/asm/module.h:37:27: error: 'struct module' declared inside parameter list will not be visible outside of this definition or declaration [-Werror]
	 u32 get_module_plt(struct module *mod, unsigned long loc, Elf32_Addr val);

This adds a forward declaration of struct module to make it all work.

Fixes: f314dfea ("modsign: log module name in the event of an error")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jessica Yu <jeyu@kernel.org>
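The fix itself is a one-liner; a sketch of the resulting header fragment (the prototype is the one quoted in the error message above):

    struct module;  /* forward declaration: the full definition is not needed here */

    u32 get_module_plt(struct module *mod, unsigned long loc, Elf32_Addr val);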
-
- 09 July 2018, 3 commits
-
-
Submitted by Marc Zyngier

Trapping blocking WFE is extremely beneficial in situations where the system is oversubscribed, as it allows another thread to run while the vcpu would otherwise be blocked. In a non-oversubscribed environment, it is the complete opposite, and trapping WFE is just unnecessary overhead.

Let's only enable WFE trapping if the CPU has more than a single task to run (that is, more than just the vcpu thread).

Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier

The {pmd,pud,pgd}_populate accessors' usage has always been a bit weird in KVM. We don't have a struct mm to pass (and neither does the kernel most of the time, but still...), and the 32bit code has all kinds of cache maintenance that doesn't make sense on ARMv7+ when MP extensions are mandatory (which is the case when the VEs are present).

Let's bite the bullet and provide our own implementations. The only bit of architectural code left has to do with building the table entry itself (arm64 having up to 52bit PA, arm lacking the PUD level).

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier

The arm and arm64 KVM page table accessors are pointlessly different between the two architectures, and likely both wrong one way or another: arm64 lacks a dsb(), and arm doesn't use WRITE_ONCE. Let's unify them.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
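A hedged sketch of what a unified accessor can look like once both concerns are addressed (the name mirrors KVM's kvm_set_pte, but the body here is illustrative rather than quoted from the patch):

    static inline void kvm_set_pte(pte_t *ptep, pte_t new_pte)
    {
            WRITE_ONCE(*ptep, new_pte);     /* arm previously missed WRITE_ONCE  */
            dsb(ishst);                     /* arm64 previously missed the dsb() */
    }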
-
- 06 July 2018, 1 commit
-
-
Submitted by Mark Rutland

Some code cares about the SPSR_ELx format for exceptions taken from AArch32, to inspect or manipulate the SPSR_ELx value, which is already in the SPSR_ELx format and not in the AArch32 PSR format. To separate these from cases where we care about the AArch32 PSR format, migrate these cases to use the PSR_AA32_* definitions rather than COMPAT_PSR_*.

There should be no functional change as a result of this patch.

Note that arm64 KVM does not support a compat KVM API, and always uses the SPSR_ELx format, even for AArch32 guests.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 26 June 2018, 3 commits
-
-
Submitted by Frederic Weisbecker

All architectures have implemented it, so we can now remove the poor weak version.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joel Fernandes <joel.opensrc@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/1529981939-8231-11-git-send-email-frederic@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Frederic Weisbecker

Migrate to the new API in order to remove arch_validate_hwbkpt_settings(), which clumsily mixes up architecture validation and commit.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joel Fernandes <joel.opensrc@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/1529981939-8231-6-git-send-email-frederic@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Frederic Weisbecker

We can't pass the breakpoint directly to arch_check_bp_in_kernelspace() anymore because its architecture-internal data (struct arch_hw_breakpoint) is not yet filled by the time we call the function, and most implementations need this backend to be up to date. So arrange for the function to take the probing struct instead.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joel Fernandes <joel.opensrc@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/1529981939-8231-3-git-send-email-frederic@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 21 June 2018, 8 commits
-
-
Submitted by Mark Rutland

The ifdeffery for atomic*_{fetch_,}andnot() is unlike that for all the other atomics. If atomic*_andnot() is not defined, the corresponding atomic*_fetch_andnot() is assumed not to be defined. Additionally, the fallbacks for the various ordering cases are written much later in atomic.h as static inlines.

This isn't problematic today, but gets in the way of scripting the generation of atomics. To prepare for scripting, this patch:

* Switches to separate ifdefs for atomic*_andnot() and atomic*_fetch_andnot(), updating implementations as appropriate.
* Moves the fallbacks into the standard ifdefs, as macro expansions rather than static inlines.
* Removes trivial andnot implementations from architectures, where these are superseded by core code.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-19-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Mark Rutland

The conditional inc/dec ops differ for atomic_t and atomic64_t:

- atomic_inc_unless_positive() is optional for atomic_t, and doesn't exist for atomic64_t.
- atomic_dec_unless_negative() is optional for atomic_t, and doesn't exist for atomic64_t.
- atomic_dec_if_positive() is optional for atomic_t, and is mandatory for atomic64_t.

Let's make these consistently optional for both. At the same time, let's clean up the existing fallbacks to use atomic_try_cmpxchg().

The instrumented atomics are updated accordingly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-18-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
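A hedged sketch of what an atomic_try_cmpxchg()-based fallback for one of these conditional ops can look like (close in spirit to the generic code, but simplified here):

    #ifndef atomic_dec_if_positive
    /* Decrement v only if the result would stay non-negative;
     * return the decremented value, or a negative value if nothing was done. */
    static inline int atomic_dec_if_positive(atomic_t *v)
    {
            int dec, c = atomic_read(v);

            do {
                    dec = c - 1;
                    if (unlikely(dec < 0))
                            break;
            } while (!atomic_try_cmpxchg(v, &c, dec));

            return dec;
    }
    #define atomic_dec_if_positive atomic_dec_if_positive
    #endif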
-
Submitted by Mark Rutland

Many of the inc/dec ops are mandatory, but for most architectures inc/dec are simply trivial wrappers around their corresponding add/sub ops.

Let's make all the inc/dec ops optional, so that we can get rid of these boilerplate wrappers.

The instrumented atomics are updated accordingly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-17-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Mark Rutland

Some of the atomics return the result of a test applied after the atomic operation, and almost all architectures implement these as trivial wrappers around the underlying atomic. Specifically:

* <atomic>_inc_and_test(v) is (<atomic>_inc_return(v) == 0)
* <atomic>_dec_and_test(v) is (<atomic>_dec_return(v) == 0)
* <atomic>_sub_and_test(i, v) is (<atomic>_sub_return(i, v) == 0)
* <atomic>_add_negative(i, v) is (<atomic>_add_return(i, v) < 0)

Rather than have these definitions duplicated in all architectures, with minor inconsistencies in formatting and documentation, let's make these operations optional, with default fallbacks as above (see the sketch after this entry). Implementations must now provide a preprocessor symbol.

The instrumented atomics are updated accordingly.

Both x86 and m68k have custom implementations, which are left as-is, given preprocessor symbols to avoid being overridden.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-16-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
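A hedged sketch of the default fallbacks listed above (illustrative; the real header also spells out the _relaxed/_acquire/_release ordering variants):

    #ifndef atomic_inc_and_test
    #define atomic_inc_and_test(v)          (atomic_inc_return(v) == 0)
    #endif

    #ifndef atomic_dec_and_test
    #define atomic_dec_and_test(v)          (atomic_dec_return(v) == 0)
    #endif

    #ifndef atomic_sub_and_test
    #define atomic_sub_and_test(i, v)       (atomic_sub_return((i), (v)) == 0)
    #endif

    #ifndef atomic_add_negative
    #define atomic_add_negative(i, v)       (atomic_add_return((i), (v)) < 0)
    #endif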
-
Submitted by Mark Rutland

As a step towards unifying the atomic/atomic64/atomic_long APIs, this patch converts the arch/arm implementation of atomic64_add_unless() into an implementation of atomic64_fetch_add_unless().

A wrapper in <linux/atomic.h> will build atomic64_add_unless() atop of this, provided it is given a preprocessor definition.

No functional change is intended as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-12-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
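Conceptually, the wrapper mentioned above is just the following (a sketch; the real <linux/atomic.h> gates it on the arch-provided preprocessor symbol):

    #ifdef atomic64_fetch_add_unless
    /* Add a to v unless v == u; evaluates to true if the add happened. */
    #define atomic64_add_unless(v, a, u) \
            (atomic64_fetch_add_unless((v), (a), (u)) != (u))
    #endif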
-
Submitted by Mark Rutland

Several architectures these days have a near-identical implementation based on atomic_read() and atomic_cmpxchg(), which we can instead define in <linux/atomic.h>, so let's do so, using something close to the existing x86 implementation with try_cmpxchg().

Where an architecture provides its own atomic_fetch_add_unless(), it must define a preprocessor symbol for it.

The instrumented atomics are updated accordingly.

Note that arch/arc's existing atomic_fetch_add_unless() had redundant barriers, as these are already present in its atomic_cmpxchg() implementation.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@synopsys.com>
Link: https://lore.kernel.org/lkml/20180621121321.4761-7-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
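A hedged sketch of such a try_cmpxchg()-based generic fallback (simplified relative to the actual <linux/atomic.h> code):

    #ifndef atomic_fetch_add_unless
    /* Add a to v unless v == u; return the value v held beforehand. */
    static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
    {
            int c = atomic_read(v);

            do {
                    if (unlikely(c == u))
                            break;
            } while (!atomic_try_cmpxchg(v, &c, c + a));

            return c;
    }
    #define atomic_fetch_add_unless atomic_fetch_add_unless
    #endif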
-
Submitted by Mark Rutland

We define a trivial fallback for atomic_inc_not_zero(), but don't do the same for atomic64_inc_not_zero(), leading most architectures to define the same boilerplate.

Let's add a fallback in <linux/atomic.h>, and remove the redundant implementations. Note that atomic64_add_unless() is always defined in <linux/atomic.h>, and promotes its arguments to the requisite types, so we need not do this explicitly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-6-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Mark Rutland

While __atomic_add_unless() was originally intended as a building-block for atomic_add_unless(), it's now used in a number of places around the kernel. It's the only common atomic operation named __atomic*(), rather than atomic_*(), and for consistency it would be better named atomic_fetch_add_unless().

This lack of consistency is slightly confusing, and gets in the way of scripting atomics. Given that, let's clean things up and promote it to an official part of the atomics API, in the form of atomic_fetch_add_unless().

This patch converts definitions and invocations over to the new name, including the instrumented version, using the following script:

----
git grep -w __atomic_add_unless | while read line; do
	sed -i '{s/\<__atomic_add_unless\>/atomic_fetch_add_unless/}' "${line%%:*}";
done
git grep -w __arch_atomic_add_unless | while read line; do
	sed -i '{s/\<__arch_atomic_add_unless\>/arch_atomic_fetch_add_unless/}' "${line%%:*}";
done
----

Note that we do not have atomic{64,_long}_fetch_add_unless(), which will be introduced by later patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-2-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-