- 04 December 2014, 1 commit
-
-
Committed by Zi Shen Lim
Earlier implementation assumed last instruction is BPF_EXIT. Since this is no longer a restriction in eBPF, we remove this limitation.

Per Alexei Starovoitov [1]:

> classic BPF has a restriction that last insn is always BPF_RET.
> eBPF doesn't have BPF_RET instruction and this restriction.
> It has BPF_EXIT insn which can appear anywhere in the program
> one or more times and it doesn't have to be last insn.

[1] https://lkml.org/lkml/2014/11/27/2

Fixes: e54bcde3 ("arm64: eBPF JIT compiler")
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
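For illustration, a minimal eBPF fragment built with the kernel's instruction macros from linux/filter.h, in which BPF_EXIT appears mid-program as well as at the end; this is not taken from the patch itself.

```c
#include <linux/filter.h>

/* if (r1 == 0) return 0; else return 1; -- the first BPF_EXIT is not last */
static const struct bpf_insn example_prog[] = {
	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 2),	/* r1 == 0 ? skip next 2 insns */
	BPF_MOV64_IMM(BPF_REG_0, 1),
	BPF_EXIT_INSN(),			/* exit in the middle of the program */
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
};
```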
-
- 03 December 2014, 1 commit
-
-
Committed by Jungseok Lee
By keeping read-mostly data together, we can avoid unnecessary cache line bouncing. Other architectures, such as ARM and x86, have adopted the same idea.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 December 2014, 1 commit
-
-
Committed by Vladimir Murzin
Update handling of cacheflush syscall with changes made in arch/arm counterpart:

- return error to userspace when flushing syscall fails
- split user cache-flushing into interruptible chunks
- don't bother rounding to nearest vma

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
[will: changed internal return value from -EINTR to 0 to match arch/arm/]
Signed-off-by: Will Deacon <will.deacon@arm.com>
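A rough sketch of the interruptible-chunk idea; the helper name, chunk size and return convention below are assumptions taken from the description (and the [will] note), not copied from the patch.

```c
/* Sketch: flush the user range one page at a time so a pending fatal signal
 * can stop a very large request early; an interrupted flush reports 0. */
static long compat_cache_flush_range(unsigned long start, unsigned long end)
{
	long ret;

	do {
		unsigned long chunk = min(end, start + PAGE_SIZE);

		if (fatal_signal_pending(current))
			return 0;

		ret = __flush_cache_user_range(start, chunk);	/* assumed helper */
		if (ret)
			return ret;

		start = chunk;
	} while (start < end);

	return 0;
}
```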
-
- 28 November 2014, 6 commits
-
-
Committed by AKASHI Takahiro
secure_computing() is called first in syscall_trace_enter() so that a system call is aborted quickly, without performing any subsequent syscall tracing, if the seccomp rules deny it. For compat tasks, the syscall numbers of the system calls allowed in seccomp mode 1 differ from those of native tasks, so the __NR_seccomp_xxx_32 values need to be redefined.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
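A minimal sketch of the ordering being described; the exact secure_computing() signature and the surrounding tracing helpers have varied between kernel versions, so treat the details as assumptions.

```c
asmlinkage int syscall_trace_enter(struct pt_regs *regs)
{
	/* seccomp runs first: a denied syscall is aborted before any
	 * further tracing work is done. */
	if (secure_computing() == -1)
		return -1;

	if (test_thread_flag(TIF_SYSCALL_TRACE))
		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);	/* assumed helper */

	/* audit / tracepoint entry hooks would follow here */

	return regs->syscallno;
}
```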
-
Committed by AKASHI Takahiro
SIGSYS is primarily used in secure computing to notify a tracer of syscall events. This patch allows a signal handler on a compat task to get correct information when this signal is delivered with SA_SIGINFO specified.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by AKASHI Takahiro
This patch allows a compat task to issue the seccomp() system call.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by AKASHI Takahiro
Those values (__NR_seccomp_*) are used solely in secure_computing() to identify mode 1 system calls. If compat system calls have different syscall numbers, asm/seccomp.h may override them.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by AKASHI Takahiro
If a tracer modifies the syscall number to -1, the traced system call should be skipped, with a return value specified in x0. This patch implements these semantics.

Please note: syscall entry tracing and syscall exit tracing (ftrace tracepoint and audit) are always executed, if enabled, even when a system call is skipped (that is, its number is -1). This way we avoid a potential bug where audit_syscall_entry() might be called without a matching audit_syscall_exit() for the previous system call, which would cause an oops in audit_syscall_entry().

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
[will: fixed up conflict with blr rework]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by AKASHI Takahiro
This regset is intended to be used to get and set a system call number while tracing. There was some discussion about possible approaches to do so:

(1) modify the x8 register with ptrace(PTRACE_SETREGSET) indirectly, and update regs->syscallno later on in syscall_trace_enter(), or
(2) define a dedicated regset for this purpose, as on s390, or
(3) support ptrace(PTRACE_SET_SYSCALL), as on arch/arm

Given that user_pt_regs doesn't expose 'syscallno' to the tracer, and that secure_computing() expects a changed syscall number (especially -1) to be visible before it returns in syscall_trace_enter(), (1) doesn't work well. We take (2) since it looks much cleaner.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
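A hypothetical tracer snippet showing how such a regset can be driven from userspace; the fallback #define mirrors the uapi value but should be checked against your headers.

```c
#include <elf.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef NT_ARM_SYSTEM_CALL
#define NT_ARM_SYSTEM_CALL 0x404	/* assumed uapi value if headers are old */
#endif

/* Rewrite the syscall number of a stopped tracee; setting it to -1 makes the
 * kernel skip the call and return whatever the tracer places in x0. */
static long set_syscall_nr(pid_t pid, int nr)
{
	struct iovec iov = {
		.iov_base = &nr,
		.iov_len  = sizeof(nr),
	};

	return ptrace(PTRACE_SETREGSET, pid, NT_ARM_SYSTEM_CALL, &iov);
}
```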
-
- 27 November 2014, 3 commits
-
-
Committed by Laura Abbott
The head.text section is intended to be run at early boot, before any of the regular kernel mappings have been set up. Parts of head.text may be freed back into the buddy allocator due to TEXT_OFFSET, so for security reasons this memory must not be executable. However, the suspend/resume/hotplug code path requires some of these head.S functions to run, which means they need to be executable. Support these conflicting requirements by moving the few head.text functions that need to be executable into the text section, which has the appropriate page table permissions.

Tested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
In the arm64 arch_static_branch implementation we place an A64 NOP into the instruction stream and log relevant details to a jump_entry in a __jump_table section. Later this may be replaced with an immediate branch without link to the code for the unlikely case.

At init time, the core calls arch_jump_label_transform_static to initialise the NOPs. On x86 this involves inserting the optimal NOP for a given microarchitecture, but on arm64 we only use the architectural NOP, and hence replace each NOP with the exact same NOP. This is somewhat pointless.

Additionally, at module load time we don't call jump_label_apply_nops to patch the optimal NOPs in, unlike other architectures, but get away with this because we only use the architectural NOP anyway. A later notifier will patch NOPs with branches as required.

Similarly to x86 commit 11570da1 (x86/jump-label: Do not bother updating NOPs if they are correct), we can avoid patching NOPs with identical NOPs. Given that we only use a single NOP encoding, this means we can NOP-out the body of arch_jump_label_transform_static entirely. As the default __weak arch_jump_label_transform_static implementation performs a patch, we must use an empty function to achieve this.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jiang Liu <liuj97@gmail.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
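The end result is essentially an empty override of the weak hook; a sketch of what the arm64 definition plausibly looks like after this change:

```c
void arch_jump_label_transform_static(struct jump_entry *entry,
				      enum jump_label_type type)
{
	/*
	 * arch_static_branch already emits the architected A64 NOP, so there
	 * is nothing to patch here. arch_jump_label_transform() will install
	 * the branch later if the key is enabled.
	 */
}
```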
-
Committed by Laura Abbott
In a similar manner to arm, it's useful to be able to dump the page tables to verify permissions and memory types. Add a debugfs file to check the page tables.

Acked-by: Steve Capper <steve.capper@linaro.org>
Tested-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[will: s/BUFFERABLE/NORMAL-NC/]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 26 November 2014, 2 commits
-
-
Committed by Laura Abbott
Every other architecture with permanent fixed addresses has FIX_HOLE as the first entry. This seems to be designed as a debugging aid, but there are a couple of side effects of not having FIX_HOLE:

- If the first fixed address is 0, fix_to_virt -> virt_to_fix triggers a BUG_ON for the virtual address being equal to FIXADDR_TOP
- fix_to_virt may return a value outside of FIXADDR_START and FIXADDR_TOP, which may look like a bug to a developer.

Match up with other architectures and make everything clearer by adding FIX_HOLE.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
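To see why, recall how the generic fixmap translation works. The enum below is a trimmed illustration (slot names other than FIX_HOLE are placeholders), paired with the asm-generic mapping macro:

```c
/* Index 0 is deliberately never mapped once FIX_HOLE exists. */
enum fixed_addresses {
	FIX_HOLE,
	FIX_EARLYCON_MEM_BASE,		/* example permanent slot */
	__end_of_permanent_fixed_addresses,
	/* ... temporary (early ioremap) slots follow ... */
	__end_of_fixed_addresses
};

/*
 * asm-generic/fixmap.h: higher indices sit at lower addresses, so with a
 * hole at index 0 no real slot ever evaluates to FIXADDR_TOP itself.
 */
#define __fix_to_virt(x)	(FIXADDR_TOP - ((x) << PAGE_SHIFT))
```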
-
Committed by Will Deacon
Consistently use the plural form for alternatives pr_fmt strings.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 25 November 2014, 16 commits
-
-
Committed by Will Deacon
.exit.* sections may be subject to patching by the new alternatives framework and so shouldn't be discarded at link-time. Without this patch, such a section will result in the following linker error:

  `.exit.text' referenced in section `.altinstructions' of drivers/built-in.o: defined in discarded section `.exit.text' of drivers/built-in.o

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Laura Abbott
The fixmap API was originally added for arm64 for early_ioremap purposes. It can be used for other purposes too, so move the initialization from ioremap to somewhere more generic. This makes it obvious where the fixmap is being set up and allows for a cleaner implementation of __set_fixmap.

Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Laura Abbott
The function cpu_resume currently lives in the .data section. There's no reason for it to be there since we can use relative instructions without a problem. Move a few cpu_resume data structures out of the assembly file so the .data annotation can be dropped completely and cpu_resume ends up in the read-only text section.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Laura Abbott
The hyp stub vectors are currently loaded using adr. This instruction has a +/- 1MB range for the loading address. If the alignment for sections is changed, the address may be more than 1MB away, resulting in relocation errors. Switch to using adrp for getting the address, to ensure we aren't affected by the location of __hyp_stub_vectors.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Laura Abbott
handle_arch_irq isn't actually text, it's just a function pointer. It doesn't need to be stored in the text section, and doing so causes problems if we ever want to make the kernel text read-only. Declare handle_arch_irq as a proper function pointer stored in the data section.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
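Conceptually the change boils down to a C-level pointer definition plus a setter; a sketch of the likely shape, not a verbatim copy of the patch:

```c
/* Lives in .data (or .bss) rather than being carved out of .text. */
void (*handle_arch_irq)(struct pt_regs *);

void __init set_handle_irq(void (*handle_irq)(struct pt_regs *))
{
	if (handle_arch_irq)
		return;			/* first registration wins */
	handle_arch_irq = handle_irq;
}
```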
-
Committed by Mark Rutland
While we currently expect self-hosted debug support to be identical across CPUs, we don't currently sanity check this. This patch adds logging of the ID_AA64DFR{0,1}_EL1 values and associated sanity checking code.

It's not clear to me whether we need to check PMUVer, TraceVer, and DebugVer, as we don't currently rely on these fields at all.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
A missing newline in the WARN_TAINT_ONCE string results in ugly and somewhat difficult to read output in the case of a sanity check failure, as the next print does not appear on a new line:

  Unsupported CPU feature variation.Modules linked in:

This patch adds the missing newline, fixing the output formatting.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
It seems that Cortex-A53 r0p4 added support for AIFSR and ADFSR, and ID_MMFR0.AuxReg has been updated accordingly to report this fact. As Cortex-A53 could be paired with CPUs which do not implement these registers (e.g. all current revisions of Cortex-A57), this may trigger a sanity check failure at boot.

The AuxReg value describes the availability of the ACTLR, AIFSR, and ADFSR registers, which are only of use to 32-bit guest OSs and have IMPLEMENTATION DEFINED contents. Given the nature of these registers it is likely that KVM will need to trap accesses regardless of whether the CPUs are heterogeneous. This patch masks out the ID_MMFR0.AuxReg value from the sanity checks, preventing spurious warnings at boot time.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Brown
The only requirement the scheduler has on cluster IDs is that they must be unique. When enumerating the topology based on MPIDR information, the kernel currently generates cluster IDs from the first level of affinity above the core ID (either level one or two, depending on whether the core has multiple threads). The ARMv8 architecture, however, allows for up to three levels of affinity. This means that an ARMv8 system may contain cores whose MPIDRs are identical other than affinity level three, which with the current code will cause us to report multiple cores with the same identification to the scheduler, in violation of its uniqueness requirement.

Ensure that we do not violate the scheduler requirements on systems that use all the affinity levels by incorporating both affinity levels two and three into the cluster ID when the cores are not threaded.

While no currently known hardware uses multi-level clusters, it is better to program defensively: this will help ease bringup of systems that have them and will ensure that things like distribution install media do not need to be respun to replace kernels in order to deploy such systems. In the worst case the system will work but perform suboptimally until a kernel modified to handle the new topology better is installed; in the best case this will be an adequate description of such topologies for the scheduler to perform well.

Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Andre Przywara
Not all of the errata we have workarounds for apply necessarily to all SoCs, so people compiling a kernel for one very specific SoC may not need to patch the kernel.

Introduce a new submenu in the "Platform selection" menu to allow people to turn off certain bugs if they are not affected. By default all of them are enabled.

Normal users or distribution kernels shouldn't bother to deselect any bugs here, since the alternatives framework will take care of patching them in only if needed.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[will: moved kconfig menu under `Kernel Features']
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Andre Przywara
The ARM erratum 832075 applies to certain revisions of Cortex-A57. One of the workarounds is to change device loads into loads with load-acquire semantics. This is achieved using the alternatives framework.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Andre Przywara
The ARM errata 819472, 826319, 827319 and 824069 define the same workaround for these hardware issues in certain Cortex-A53 parts. Use the new alternatives framework and the CPU MIDR detection to patch "cache clean" into "cache clean and invalidate" instructions if an affected CPU is detected at runtime.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[will: add __maybe_unused to squash gcc warning]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Andre Przywara
After each CPU has been started, we iterate through a list of CPU features or bugs to detect CPUs which need (or could benefit from) kernel code patches. For each feature/bug there is a function which checks if that particular CPU is affected. We will later provide some more generic functions for common things like testing for certain MIDR ranges.

We do this for every CPU to cover big.LITTLE systems properly as well. If a certain feature/bug has been detected, the capability bit will be set, so that later the call to apply_alternatives() will trigger the actual code patching.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Andre Przywara
With a blatant copy of some x86 bits we introduce the alternative runtime patching "framework" to arm64. This is quite basic for now and we only provide the functions we need at this time. This is connected to the newly introduced feature bits.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
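For orientation, the descriptor such a framework records for each patchable site looks roughly like the following; the field layout is reconstructed from the description of the x86-style approach, so treat it as an approximation:

```c
/* One entry per patch site, collected in an .altinstructions section. */
struct alt_instr {
	s32 orig_offset;	/* offset to the original instruction(s) */
	s32 alt_offset;		/* offset to the replacement instruction(s) */
	u16 cpufeature;		/* capability bit that enables this patch */
	u8  orig_len;		/* size of the original sequence, in bytes */
	u8  alt_len;		/* size of the replacement, at most orig_len */
};
```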
-
Committed by Andre Przywara
For taking note if at least one CPU in the system needs a bug workaround or would benefit from a code optimization, we create a new bitmap to hold (artificial) feature bits. Since elf_hwcap is part of the userland ABI, we keep it alone and introduce a new data structure for that (along with some accessors).

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
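A plausible shape for such a bitmap and its accessors; the identifier names and size below are illustrative, not checked against the patch:

```c
#define NR_ARM64_CAPS	8			/* illustrative capacity */

/* Kernel-internal capability bits, kept separate from elf_hwcap (user ABI). */
DECLARE_BITMAP(cpu_hwcaps, NR_ARM64_CAPS);

static inline bool cpus_have_cap(unsigned int num)
{
	if (num >= NR_ARM64_CAPS)
		return false;
	return test_bit(num, cpu_hwcaps);
}

static inline void cpus_set_cap(unsigned int num)
{
	if (num < NR_ARM64_CAPS)
		set_bit(num, cpu_hwcaps);
}
```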
-
Committed by Will Deacon
update_insn_emulation_mode() returns 0 on success, so we should be treating any non-zero values as failure, rather than the other way around. Otherwise, writes to the sysctl file controlling the emulation are ignored and immediately rolled back.

Reported-by: Gene Hackmann <ghackmann@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 21 November 2014, 8 commits
-
-
Committed by Will Deacon
Translation faults that occur due to the input address being outside of the address range mapped by the relevant base register are reported as level 0 faults in ESR.DFSC. If the faulting access cannot be resolved by the kernel (e.g. because it is not mapped by a vma), then we report "input address range fault" on the console. This was fine until we added support for 48-bit VAs, which actually place PGDs at level 0 and can trigger faults for invalid addresses that are within the range of the page tables.

This patch changes the string to report "level 0 translation fault", which is far less confusing.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon
Having the instruction emulation submenu underneath "platform selection" is a great way to hide options we don't want people to use, but somewhat confusing when you stumble across it there. Move the menuconfig option underneath "kernel features", where it makes a bit more sense.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Punit Agrawal
Introduce an event to trace the usage of emulated instructions. The trace event is intended to help identify and encourage the migration of legacy software using the emulation features. Use this event to trace usage of swp and CP15 barrier emulation.

Acked-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Punit Agrawal
The CP15 barrier instructions (CP15ISB, CP15DSB and CP15DMB) are deprecated in the ARMv7 architecture, superseded by the ISB, DSB and DMB instructions respectively. Some implementations may provide the ability to disable the CP15 barriers via the CP15BEN bit in SCTLR_EL1. If not enabled, the encodings for these instructions become undefined. To support legacy software using these instructions, this patch registers hooks to:

- emulate CP15 barriers and warn the user about their use
- toggle CP15BEN in SCTLR_EL1

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Punit Agrawal
The SWP instruction was deprecated in the ARMv6 architecture. The ARMv7 multiprocessing extensions mandate that SWP/SWPB instructions are treated as undefined from reset, with the ability to enable them through the System Control Register SW bit. With ARMv8, the option to enable these instructions through the System Control Register was dropped as well.

To support legacy applications using these instructions, port the emulation of the SWP and SWPB instructions from the arm port to arm64.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Punit Agrawal
Typically, providing support for legacy instructions requires emulating the behaviour of instructions whose encodings have become undefined. If the instructions haven't been removed from the architecture, there may be an option in the implementation to turn support for them on or off.

Create common infrastructure to support legacy instruction emulation. In addition to emulation, also provide an option to support hardware execution when available. The default execution mode (one of undef, emulate, or hw execution) depends on the state of the instruction (deprecated or obsolete) in the architecture and can be specified at the time of registering the instruction handlers. The runtime state of the emulation can be controlled by writing to individual nodes in sysctl. The expected default behaviour is documented as part of this patch.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
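As a usage illustration, a small userspace program that switches one of the per-instruction sysctl nodes; the /proc/sys/abi/swp path and the 0/1/2 value meanings are assumptions based on the description above:

```c
#include <stdio.h>

/* Assumed node: /proc/sys/abi/swp; values 0 = undef, 1 = emulate, 2 = hw. */
int main(void)
{
	FILE *f = fopen("/proc/sys/abi/swp", "w");

	if (!f) {
		perror("open swp sysctl");
		return 1;
	}
	fputs("1\n", f);	/* switch SWP/SWPB handling to emulation */
	return fclose(f) ? 1 : 0;
}
```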
-
Committed by Punit Agrawal
Port support for AArch32 instruction condition code checking from arm to arm64.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Punit Agrawal
Add support to register hooks for undefined instructions. The handlers will be called when the undefined instruction and the processor state (as contained in pstate) match criteria used at registration.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
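The hook descriptor plausibly pairs an instruction mask/value with a pstate mask/value and a handler, along the lines of the sketch below (a reconstruction from the description, not the patch itself):

```c
struct undef_hook {
	struct list_head node;
	u32 instr_mask;		/* which bits of the instruction to compare */
	u32 instr_val;		/* expected value of those bits */
	u64 pstate_mask;	/* which bits of pstate to compare (e.g. mode) */
	u64 pstate_val;
	int (*fn)(struct pt_regs *regs, u32 instr);	/* 0 = handled */
};

void register_undef_hook(struct undef_hook *hook);
void unregister_undef_hook(struct undef_hook *hook);
```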
-
- 20 November 2014, 2 commits
-
-
Committed by Steve Capper
The generic this_cpu operations disable interrupts to ensure that the requested operation is protected from pre-emption. For arm64, this is overkill and can hurt throughput and latency. This patch provides arm64-specific implementations for the this_cpu operations. Rather than disable interrupts, we use the exclusive monitor or atomic operations as appropriate.

The following operations are implemented: add, add_return, and, or, read, write, xchg. We also wire up a cmpxchg implementation from cmpxchg.h.

Testing was performed using the percpu_test module and hackbench on a Juno board running 3.18-rc4.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
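For context, callers keep using the generic this_cpu API unchanged; only the arch backend gets faster. A trivial usage sketch:

```c
#include <linux/percpu.h>

DEFINE_PER_CPU(unsigned long, pkt_count);

static void count_packet(void)
{
	/*
	 * With an arch-specific backend this no longer disables and
	 * re-enables interrupts; it maps onto an exclusive/atomic sequence.
	 */
	this_cpu_inc(pkt_count);
}

static unsigned long count_on_this_cpu(void)
{
	return this_cpu_read(pkt_count);
}
```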
-
Committed by Mark Rutland
We currently allocate different levels of page tables with a variety of differing flags, and the PGALLOC_GFP flags, intended for use when allocating any level of page table, are only used for ptes in pte_alloc_one. On x86, PGALLOC_GFP is used for all page table allocations.

Currently the major differences are:

* __GFP_NOTRACK -- needed to ensure page tables are always accessible in the presence of kmemcheck, to prevent recursive faults. Currently kmemcheck cannot be selected for arm64.
* __GFP_REPEAT -- causes the allocator to try to reclaim pages and retry upon a failure to allocate.
* __GFP_ZERO -- sometimes passed explicitly, sometimes zalloc variants are used.

While we've not encountered issues so far, it would be preferable to be consistent. This patch ensures all levels of table are allocated in the same manner, with PGALLOC_GFP.

Cc: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
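A sketch of the unified approach, under the assumption that the flag set simply combines the three flags discussed above with GFP_KERNEL; the allocator function shown is illustrative:

```c
#define PGALLOC_GFP	(GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO)

/* Every table level is now allocated the same way as pte_alloc_one(). */
static pmd_t *example_pmd_alloc(struct mm_struct *mm, unsigned long addr)
{
	return (pmd_t *)__get_free_page(PGALLOC_GFP);
}
```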
-