- 10 September 2021, 1 commit
-
-
Committed by Ard Biesheuvel
KVM in nVHE mode divides up its VA space into two equal halves, and picks the half that does not conflict with the HYP ID map to map its linear region. This worked fine when the kernel's linear map itself was guaranteed to cover precisely as many bits of VA space, but this was changed by commit f4693c27 ("arm64: mm: extend linear region for 52-bit VA configurations"). The result is that, depending on the placement of the ID map, kernel-VA to hyp-VA translations may produce addresses that either conflict with other HYP mappings (including the ID map itself) or generate addresses outside of the 52-bit addressable range, neither of which is likely to lead to anything useful. Given that 52-bit capable cores are guaranteed to implement VHE, this only affects configurations such as pKVM where we opt into non-VHE mode even if the hardware is VHE capable. So just for these configurations, let's limit the kernel linear map to 51 bits and work around the problem.

Fixes: f4693c27 ("arm64: mm: extend linear region for 52-bit VA configurations")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210826165613.60774-1-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
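To make the failure mode concrete, here is a small standalone sketch (plain C, not kernel code) of a toy "force the top bit" kernel-VA to hyp-VA translation. The real kern_hyp_va() masking is more involved, but the collision shown is the same reason the linear map has to stay within 51 bits once bit 51 is used to select a half of the hyp VA space.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Toy model: 52-bit hyp VA space, bit 51 selects which half is used. */
	const uint64_t half_select = 1ULL << 51;

	/* Two kernel linear-map addresses that differ only in bit 51, which can
	 * only happen if the linear map spans more than 51 bits of VA space. */
	uint64_t kva_a = 0x0000100000001000ULL;
	uint64_t kva_b = kva_a | half_select;

	/* A translation that simply forces the chosen half maps both kernel
	 * addresses onto the same hyp address, i.e. a conflict. */
	uint64_t hva_a = kva_a | half_select;
	uint64_t hva_b = kva_b | half_select;

	printf("hyp VAs collide: %s\n", hva_a == hva_b ? "yes" : "no");
	return 0;
}
```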
-
- 31 August 2021, 1 commit
-
-
Committed by Catalin Marinas
* tip/sched/arm64: (785 commits)
  Documentation: arm64: describe asymmetric 32-bit support
  arm64: Remove logic to kill 32-bit tasks on 64-bit-only cores
  arm64: Hook up cmdline parameter to allow mismatched 32-bit EL0
  arm64: Advertise CPUs capable of running 32-bit applications in sysfs
  arm64: Prevent offlining first CPU with 32-bit EL0 on mismatched system
  arm64: exec: Adjust affinity for compat tasks with mismatched 32-bit EL0
  arm64: Implement task_cpu_possible_mask()
  sched: Introduce dl_task_check_affinity() to check proposed affinity
  sched: Allow task CPU affinity to be restricted on asymmetric systems
  sched: Split the guts of sched_setaffinity() into a helper function
  sched: Introduce task_struct::user_cpus_ptr to track requested affinity
  sched: Reject CPU affinity changes based on task_cpu_possible_mask()
  cpuset: Cleanup cpuset_cpus_allowed_fallback() use in select_fallback_rq()
  cpuset: Honour task_cpu_possible_mask() in guarantee_online_cpus()
  cpuset: Don't use the cpu_possible_mask as a last resort for cgroup v1
  sched: Introduce task_cpu_possible_mask() to limit fallback rq selection
  sched: Cgroup SCHED_IDLE support
  sched/topology: Skip updating masks for non-online nodes
  Linux 5.14-rc6
  lib: use PFN_PHYS() in devmem_is_allowed()
  ...
-
- 26 August 2021, 5 commits
-
-
Committed by Catalin Marinas
* for-next/entry:
  : More entry.S clean-ups and conversion to C.
  arm64: entry: call exit_to_user_mode() from C
  arm64: entry: move bulk of ret_to_user to C
  arm64: entry: clarify entry/exit helpers
  arm64: entry: consolidate entry/exit helpers
-
Committed by Catalin Marinas
Merge branches 'for-next/mte', 'for-next/misc' and 'for-next/kselftest', remote-tracking branch 'arm64/for-next/perf' into for-next/core

* arm64/for-next/perf:
  arm64/perf: Replace '0xf' instances with ID_AA64DFR0_PMUVER_IMP_DEF

* for-next/mte:
  : Miscellaneous MTE improvements.
  arm64/cpufeature: Optionally disable MTE via command-line
  arm64: kasan: mte: remove redundant mte_report_once logic
  arm64: kasan: mte: use a constant kernel GCR_EL1 value
  arm64: avoid double ISB on kernel entry
  arm64: mte: optimize GCR_EL1 modification on kernel entry/exit
  Documentation: document the preferred tag checking mode feature
  arm64: mte: introduce a per-CPU tag checking mode preference
  arm64: move preemption disablement to prctl handlers
  arm64: mte: change ASYNC and SYNC TCF settings into bitfields
  arm64: mte: rename gcr_user_excl to mte_ctrl
  arm64: mte: avoid TFSRE0_EL1 related operations unless in async mode

* for-next/misc:
  : Miscellaneous updates.
  arm64: Do not trap PMSNEVFR_EL1
  arm64: mm: fix comment typo of pud_offset_phys()
  arm64: signal32: Drop pointless call to sigdelsetmask()
  arm64/sve: Better handle failure to allocate SVE register storage
  arm64: Document the requirement for SCR_EL3.HCE
  arm64: head: avoid over-mapping in map_memory
  arm64/sve: Add a comment documenting the binutils needed for SVE asm
  arm64/sve: Add some comments for sve_save/load_state()
  arm64: replace in_irq() with in_hardirq()
  arm64: mm: Fix TLBI vs ASID rollover
  arm64: entry: Add SYM_CODE annotation for __bad_stack
  arm64: fix typo in a comment
  arm64: move the (z)install rules to arch/arm64/Makefile
  arm64/sve: Make fpsimd_bind_task_to_cpu() static
  arm64: unnecessary end 'return;' in void functions
  arm64/sme: Document boot requirements for SME
  arm64: use __func__ to get function name in pr_err
  arm64: SSBS/DIT: print SSBS and DIT bit when printing PSTATE
  arm64: cpufeature: Use defined macro instead of magic numbers
  arm64/kexec: Test page size support with new TGRAN range values

* for-next/kselftest:
  : Kselftest additions for arm64.
  kselftest/arm64: signal: Add a TODO list for signal handling tests
  kselftest/arm64: signal: Add test case for SVE register state in signals
  kselftest/arm64: signal: Verify that signals can't change the SVE vector length
  kselftest/arm64: signal: Check SVE signal frame shows expected vector length
  kselftest/arm64: signal: Support signal frames with SVE register data
  kselftest/arm64: signal: Add SVE to the set of features we can check for
  kselftest/arm64: pac: Fix skipping of tests on systems without PAC
  kselftest/arm64: mte: Fix misleading output when skipping tests
  kselftest/arm64: Add a TODO list for floating point tests
  kselftest/arm64: Add tests for SVE vector configuration
  kselftest/arm64: Validate vector lengths are set in sve-probe-vls
  kselftest/arm64: Provide a helper binary and "library" for SVE RDVL
  kselftest/arm64: Ignore check_gcr_el1_cswitch binary
-
Committed by Alexandru Elisei
Commit 31c00d2a ("arm64: Disable fine grained traps on boot") zeroed the fine grained trap registers to prevent unwanted register traps from occurring. However, for the PMSNEVFR_EL1 register, the corresponding HDFG{R,W}TR_EL2.nPMSNEVFR_EL1 fields must be 1 to disable trapping. Set both fields to 1 if FEAT_SPEv1p2 is detected to disable read and write traps.

Fixes: 31c00d2a ("arm64: Disable fine grained traps on boot")
Cc: <stable@vger.kernel.org> # 5.13.x
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210824154523.906270-1-alexandru.elisei@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
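The subtlety is the inverted polarity of the "n"-prefixed fields: zero-initialising the register disables the ordinary positive-polarity traps but leaves the PMSNEVFR_EL1 trap armed. The standalone sketch below uses made-up bit positions (the real HDFG{R,W}TR_EL2 layouts are not given here) purely to illustrate that the nPMSNEVFR_EL1 field must be set to 1 when FEAT_SPEv1p2 is present.

```c
#include <stdint.h>
#include <stdio.h>

/* Placeholder bit positions, for illustration only. */
#define TRAP_SOMETHING_EL1   (UINT64_C(1) << 0)  /* positive polarity: 1 = trap       */
#define nPMSNEVFR_EL1        (UINT64_C(1) << 1)  /* negative polarity: 1 = don't trap */

int main(void)
{
	int feat_spe_v1p2 = 1;		/* pretend FEAT_SPEv1p2 was detected */
	uint64_t hdfgrtr = 0;		/* zeroing disables TRAP_SOMETHING_EL1 ... */

	/* ... but with nPMSNEVFR_EL1 == 0 the PMSNEVFR_EL1 trap is still armed,
	 * so the field has to be set explicitly to disable it. */
	if (feat_spe_v1p2)
		hdfgrtr |= nPMSNEVFR_EL1;

	printf("HDFGRTR_EL2 init value: %#llx\n", (unsigned long long)hdfgrtr);
	return 0;
}
```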
-
Committed by Xujun Leng
Fix a typo in the comment of macro pud_offset_phys().

Signed-off-by: Xujun Leng <lengxujun2007@126.com>
Link: https://lore.kernel.org/r/20210825150526.12582-1-lengxujun2007@126.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon
Commit 77097ae5 ("most of set_current_blocked() callers want SIGKILL/SIGSTOP removed from set") extended set_current_blocked() to remove SIGKILL and SIGSTOP from the new signal set and updated all callers accordingly. Unfortunately, this collided with the merge of the arm64 architecture, which duly removes these signals when restoring the compat sigframe, as this was what was previously done by arch/arm/. Remove the redundant call to sigdelsetmask() from compat_restore_sigframe().

Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210825093911.24493-1-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 24 August 2021, 5 commits
-
-
Committed by Mark Brown
Currently we "handle" failure to allocate the SVE register storage by doing a BUG_ON() and hoping for the best. This is obviously not great, and the memory allocation failure will already be loud enough without the BUG_ON(). As the comment says it is a corner case, but let's try to do a bit better: remove the BUG_ON() and add code to handle the failure in the callers. For the ptrace and signal code we can return -ENOMEM gracefully; however, we have no real error reporting path available to us for the SVE access trap, so instead generate a SIGKILL if the allocation fails there. This at least means that we won't try to soldier on and end up trying to access the nonexistent state, and while it's obviously not ideal for userspace, SIGKILL doesn't allow any handling, so it minimises the ABI impact, making it easier to improve the interface later if we come up with a better idea.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210824153417.18371-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
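A rough userspace-flavoured sketch of the failure handling described above (not the actual patch): the helper and its callers below are hypothetical, but they show the shape of it, namely propagating -ENOMEM where an error can reach the caller and falling back to SIGKILL in the access-trap path where it cannot.

```c
#include <errno.h>
#include <signal.h>
#include <stdlib.h>

struct toy_sve_state {
	void   *regs;
	size_t  size;
};

/* Hypothetical allocator: ptrace/signal-style callers just propagate -ENOMEM. */
static int toy_sve_alloc(struct toy_sve_state *st, size_t size)
{
	st->regs = calloc(1, size);
	if (!st->regs)
		return -ENOMEM;
	st->size = size;
	return 0;
}

/* Access-trap-style caller: no error path back to the user, so kill the task
 * instead of carrying on without register storage (the old code BUG()ed). */
static void toy_sve_access_trap(struct toy_sve_state *st, size_t size)
{
	if (toy_sve_alloc(st, size)) {
		raise(SIGKILL);
		return;
	}
	/* ... otherwise continue: enable SVE for the task and return ... */
}

int main(void)
{
	struct toy_sve_state st;

	toy_sve_access_trap(&st, 4096);	/* succeeds for any sane allocation */
	free(st.regs);
	return 0;
}
```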
-
Committed by Marc Zyngier
It is amazing that we never documented this absolutely basic requirement: if you boot the kernel at EL2, you'd better enable the HVC instruction from EL3. Really, just do it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210812190213.2601506-6-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland
The `compute_indices` and `populate_entries` macros operate on inclusive bounds, and thus the `map_memory` macro which uses them also operates on inclusive bounds. We pass `_end` and `_idmap_text_end` to `map_memory`, but these are exclusive bounds, and if one of these is sufficiently aligned (as a result of kernel configuration, physical placement, and KASLR), then:

* In `compute_indices`, the computed `iend` will be in the page/block *after* the final byte of the intended mapping.
* In `populate_entries`, an unnecessary entry will be created at the end of each level of table. At the leaf level, this entry will map up to SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not intend to map.

As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may violate the boot protocol and map physical addresses past the 2MiB-aligned end address we are permitted to map. As we map these with Normal memory attributes, this may result in further problems depending on what these physical addresses correspond to. The final entry at each level may require an additional table at that level. As EARLY_ENTRIES() calculates an inclusive bound, we allocate enough memory for this. Avoid the extraneous mapping by having map_memory convert the exclusive end address to an inclusive end address by subtracting one, and do likewise in EARLY_ENTRIES() when calculating the number of required tables. For clarity, comments are updated to more clearly document which boundaries the macros operate on. For consistency with the other macros, the comments in map_memory are also updated to describe `vstart` and `vend` as virtual addresses.

Fixes: 0370b31e ("arm64: Extend early page table code to allow for larger kernels")
Cc: <stable@vger.kernel.org> # 4.16.x
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210823101253.55567-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
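The off-by-one is easiest to see with concrete numbers. The standalone sketch below (assuming 2 MiB blocks, as with SWAPPER_BLOCK_SIZE on 4K pages) shows how treating a block-aligned exclusive end address as if it were inclusive yields one entry too many, and how subtracting one first, as the fix above describes, gives the intended count.

```c
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SHIFT 21	/* 2 MiB blocks */

/* Index of the block containing an *inclusive* end address. */
static uint64_t end_index(uint64_t end_excl)
{
	return (end_excl - 1) >> BLOCK_SHIFT;	/* the fix: convert to inclusive first */
}

int main(void)
{
	uint64_t start = 0x40000000;	/* block-aligned start */
	uint64_t end   = 0x40200000;	/* block-aligned exclusive end: exactly one block */

	uint64_t first   = start >> BLOCK_SHIFT;
	uint64_t buggy   = end >> BLOCK_SHIFT;	/* treats the exclusive end as inclusive */
	uint64_t correct = end_index(end);

	printf("entries (buggy):   %llu\n", (unsigned long long)(buggy   - first + 1)); /* 2 */
	printf("entries (correct): %llu\n", (unsigned long long)(correct - first + 1)); /* 1 */
	return 0;
}
```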
-
Committed by Mark Brown
At some point it would be nice to avoid the need to manually encode SVE instructions; add a note of the binutils version required, to save looking it up.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210816125024.8112-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Brown
The use of macros for the actual function bodies means legibility is always going to be a bit of a challenge, especially while we can't rely on SVE support in the toolchain, but this helps a little.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210812201143.35578-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 23 August 2021, 6 commits
-
-
Committed by Mark Brown
Note down a few gaps in our coverage.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210819134245.13935-7-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Brown
Currently this doesn't actually verify that the register contents do the right thing; it just verifies that an SVE context with the appropriate size appears.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210819134245.13935-6-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Brown
We do not support changing the SVE vector length as part of signal return; verify that this is the case if the system supports multiple vector lengths.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210819134245.13935-5-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Brown
As a basic check that the SVE signal frame is being set up correctly, verify that the vector length in the signal frame is the vector length that the process has.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210819134245.13935-4-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Brown
A signal frame with SVE may validly be either a bare struct sve_context or a struct sve_context followed by vector length dependent register data. Support either in the generic helpers for the signal tests, and while we're at it validate the SVE vector length reported.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210819134245.13935-3-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Brown
Allow testcases for SVE signal handling to flag the dependency and be skipped on systems without SVE support.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210819134245.13935-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 21 August 2021, 1 commit
-
-
Committed by Changbin Du
Replace the obsolete and ambiguous macro in_irq() with the new macro in_hardirq().

Signed-off-by: Changbin Du <changbin.du@gmail.com>
Link: https://lore.kernel.org/r/20210814005405.2658-1-changbin.du@gmail.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 August 2021, 21 commits
-
-
Committed by Mark Brown
The PAC tests check to see if the system supports the relevant PAC features, but instead of skipping the tests when they can't be executed they fail them, which makes things look like they're not working when they are.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210819165723.43903-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon
Document support for running 32-bit tasks on asymmetric 32-bit systems and its impact on the user ABI when enabled.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210730112443.23245-17-will@kernel.org
-
Committed by Will Deacon
The scheduler now knows enough about these braindead systems to place 32-bit tasks accordingly, so throw out the safety checks and allow the ret-to-user path to avoid do_notify_resume() if there is nothing to do.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-16-will@kernel.org
-
Committed by Will Deacon
Allow systems with mismatched 32-bit support at EL0 to run 32-bit applications based on a new kernel parameter.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-15-will@kernel.org
-
Committed by Will Deacon
Since 32-bit applications will be killed if they are caught trying to execute on a 64-bit-only CPU in a mismatched system, advertise the set of 32-bit capable CPUs to userspace in sysfs.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-14-will@kernel.org
-
Committed by Will Deacon
If we want to support 32-bit applications, then when we identify a CPU with mismatched 32-bit EL0 support we must ensure that we will always have an active 32-bit CPU available to us from then on. This is important for the scheduler, because is_cpu_allowed() will be constrained to 32-bit CPUs for compat tasks and forced migration due to a hotplug event will hang if no 32-bit CPUs are available. On detecting a mismatch, prevent offlining of either the mismatching CPU if it is 32-bit capable, or find the first active 32-bit capable CPU otherwise.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-13-will@kernel.org
-
Committed by Will Deacon
When exec'ing a 32-bit task on a system with mismatched support for 32-bit EL0, try to ensure that it starts life on a CPU that can actually run it. Similarly, when exec'ing a 64-bit task on such a system, try to restore the old affinity mask if it was previously restricted.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Reviewed-by: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20210730112443.23245-12-will@kernel.org
-
Committed by Will Deacon
Provide an implementation of task_cpu_possible_mask() so that we can prevent 64-bit-only cores being added to the 'cpus_mask' for compat tasks on systems with mismatched 32-bit support at EL0.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-11-will@kernel.org
-
Committed by Peter Zijlstra
-
Committed by Will Deacon
In preparation for restricting the affinity of a task during execve() on arm64, introduce a new dl_task_check_affinity() helper function to give an indication as to whether the restricted mask is admissible for a deadline task.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Link: https://lore.kernel.org/r/20210730112443.23245-10-will@kernel.org
-
Committed by Will Deacon
Asymmetric systems may not offer the same level of userspace ISA support across all CPUs, meaning that some applications cannot be executed by some CPUs. As a concrete example, upcoming arm64 big.LITTLE designs do not feature support for 32-bit applications on both clusters. Although userspace can carefully manage the affinity masks for such tasks, one place where it is particularly problematic is execve() because the CPU on which the execve() is occurring may be incompatible with the new application image. In such a situation, it is desirable to restrict the affinity mask of the task and ensure that the new image is entered on a compatible CPU. From userspace's point of view, this looks the same as if the incompatible CPUs have been hotplugged off in the task's affinity mask. Similarly, if a subsequent execve() reverts to a compatible image, then the old affinity is restored if it is still valid. In preparation for restricting the affinity mask for compat tasks on arm64 systems without uniform support for 32-bit applications, introduce {force,relax}_compatible_cpus_allowed_ptr(), which respectively restrict and restore the affinity mask for a task based on the compatible CPUs.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20210730112443.23245-9-will@kernel.org
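A toy model of the save/restore behaviour described above, using plain 64-bit bitmaps instead of cpumask_t; the function names echo the commit, but this is an illustration, not the kernel implementation.

```c
#include <stdint.h>
#include <stdio.h>

struct toy_task {
	uint64_t cpus_mask;	/* effective affinity */
	uint64_t saved_mask;	/* affinity before it was forcibly restricted, 0 = none */
};

/* execve() of an image only some CPUs can run: clamp affinity to the
 * compatible set, remembering what the task had before. */
static void toy_force_compatible(struct toy_task *p, uint64_t compatible)
{
	if (!p->saved_mask)
		p->saved_mask = p->cpus_mask;
	p->cpus_mask &= compatible;
	if (!p->cpus_mask)
		p->cpus_mask = compatible;	/* nothing left: any compatible CPU will do */
}

/* execve() of an image every CPU can run: restore the saved affinity, as if
 * the incompatible CPUs had come back from a hotplug. */
static void toy_relax_compatible(struct toy_task *p)
{
	if (p->saved_mask) {
		p->cpus_mask = p->saved_mask;
		p->saved_mask = 0;
	}
}

int main(void)
{
	struct toy_task p = { .cpus_mask = 0xff, .saved_mask = 0 };
	uint64_t cpus_32bit = 0x0f;	/* only CPUs 0-3 can run 32-bit code */

	toy_force_compatible(&p, cpus_32bit);	/* exec of a 32-bit image */
	printf("restricted: %#llx\n", (unsigned long long)p.cpus_mask);

	toy_relax_compatible(&p);		/* exec of a 64-bit image */
	printf("restored:   %#llx\n", (unsigned long long)p.cpus_mask);
	return 0;
}
```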
-
Committed by Will Deacon
In preparation for replaying user affinity requests using a saved mask, split sched_setaffinity() up so that the initial task lookup and security checks are only performed when the request is coming directly from userspace.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-8-will@kernel.org
-
Committed by Will Deacon
In preparation for saving and restoring the user-requested CPU affinity mask of a task, add a new cpumask_t pointer to 'struct task_struct'. If the pointer is non-NULL, then the mask is copied across fork() and freed on task exit.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-7-will@kernel.org
-
Committed by Will Deacon
Reject explicit requests to change the affinity mask of a task via set_cpus_allowed_ptr() if the requested mask is not a subset of the mask returned by task_cpu_possible_mask(). This ensures that the 'cpus_mask' for a given task cannot contain CPUs which are incapable of executing it, except in cases where the affinity is forced.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
Reviewed-by: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20210730112443.23245-6-will@kernel.org
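The rule being enforced is a plain subset test. A minimal sketch with flat bitmaps is shown below; in the kernel this kind of check is expressed with cpumask helpers such as cpumask_subset(), and it is skipped when the affinity change is forced.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Valid only if the request stays within what the task can possibly run on. */
static bool affinity_request_ok(uint64_t requested, uint64_t task_possible)
{
	return (requested & ~task_possible) == 0;
}

int main(void)
{
	uint64_t task_possible = 0x0f;	/* e.g. only CPUs 0-3 can run this task */

	printf("request 0x03: %s\n", affinity_request_ok(0x03, task_possible) ? "ok" : "rejected");
	printf("request 0x30: %s\n", affinity_request_ok(0x30, task_possible) ? "ok" : "rejected");
	return 0;
}
```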
-
Committed by Will Deacon
select_fallback_rq() only needs to recheck for an allowed CPU if the affinity mask of the task has changed since the last check. Return a 'bool' from cpuset_cpus_allowed_fallback() to indicate whether the affinity mask was updated, and use this to elide the allowed check when the mask has been left alone. No functional change.

Suggested-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-5-will@kernel.org
-
Committed by Will Deacon
Asymmetric systems may not offer the same level of userspace ISA support across all CPUs, meaning that some applications cannot be executed by some CPUs. As a concrete example, upcoming arm64 big.LITTLE designs do not feature support for 32-bit applications on both clusters. Modify guarantee_online_cpus() to take task_cpu_possible_mask() into account when trying to find a suitable set of online CPUs for a given task. This will avoid passing an invalid mask to set_cpus_allowed_ptr() during ->attach() and will subsequently allow the cpuset hierarchy to be taken into account when forcefully overriding the affinity mask for a task which requires migration to a compatible CPU.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
Link: https://lkml.kernel.org/r/20210730112443.23245-4-will@kernel.org
-
Committed by Will Deacon
If the scheduler cannot find an allowed CPU for a task, cpuset_cpus_allowed_fallback() will widen the affinity to cpu_possible_mask if cgroup v1 is in use. In preparation for allowing architectures to provide their own fallback mask, just return early if we're either using cgroup v1 or we're using cgroup v2 with a mask that contains invalid CPUs. This will allow select_fallback_rq() to figure out the mask by itself.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Quentin Perret <qperret@google.com>
Link: https://lkml.kernel.org/r/20210730112443.23245-3-will@kernel.org
-
Committed by Will Deacon
Asymmetric systems may not offer the same level of userspace ISA support across all CPUs, meaning that some applications cannot be executed by some CPUs. As a concrete example, upcoming arm64 big.LITTLE designs do not feature support for 32-bit applications on both clusters. On such a system, we must take care not to migrate a task to an unsupported CPU when forcefully moving tasks in select_fallback_rq() in response to a CPU hot-unplug operation. Introduce a task_cpu_possible_mask() hook which, given a task argument, allows an architecture to return a cpumask of CPUs that are capable of executing that task. The default implementation returns the cpu_possible_mask, since sane machines do not suffer from per-cpu ISA limitations that affect scheduling. The new mask is used when selecting the fallback runqueue as a last resort before forcing a migration to the first active CPU.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
Reviewed-by: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20210730112443.23245-2-will@kernel.org
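The shape of such a hook is the familiar "trivial default unless the architecture overrides it" pattern. A minimal sketch, assuming a macro-based override; the exact header and any companion helpers are not specified in the text above.

```c
/*
 * Default definition: any possible CPU can run any task. An architecture
 * with asymmetric ISA support (e.g. arm64 with mismatched 32-bit EL0)
 * provides its own task_cpu_possible_mask() before this point so that
 * fallback-runqueue selection never picks an incapable CPU.
 */
#ifndef task_cpu_possible_mask
# define task_cpu_possible_mask(p)	cpu_possible_mask
#endif
```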
-
Committed by Josh Don
This extends SCHED_IDLE to cgroups.

Interface: cgroup/cpu.idle.
  0: default behavior
  1: SCHED_IDLE

Extending SCHED_IDLE to cgroups means that we incorporate the existing aspects of SCHED_IDLE; a SCHED_IDLE cgroup will count all of its descendant threads towards the idle_h_nr_running count of all of its ancestor cgroups. Thus, sched_idle_rq() will work properly. Additionally, SCHED_IDLE cgroups are configured with minimum weight. There are two key differences between the per-task and per-cgroup SCHED_IDLE interface:

- The cgroup interface allows tasks within a SCHED_IDLE hierarchy to maintain their relative weights. The entity that is "idle" is the cgroup, not the tasks themselves.
- Since the idle entity is the cgroup, our SCHED_IDLE wakeup preemption decision is not made by comparing the current task with the woken task, but rather by comparing their matching sched_entity.

A typical use-case for this is a user that creates an idle and a non-idle subtree. The non-idle subtree will dominate competition vs the idle subtree, but the idle subtree will still be high priority vs other users on the system. The latter is accomplished via comparing matching sched_entity in the wakeup preemption path (this could also be improved by making the sched_idle_rq() decision dependent on the perspective of a specific task). For now, we maintain the existing SCHED_IDLE semantics. Future patches may make improvements that extend how we treat SCHED_IDLE entities. The per-task_group idle field is an integer that currently only holds either a 0 or a 1. This is explicitly typed as an integer to allow for further extensions to this API. For example, a negative value may indicate a highly latency-sensitive cgroup that should be preferred for preemption/placement/etc.

Signed-off-by: Josh Don <joshdon@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/20210730020019.1487127-2-joshdon@google.com
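For example, marking an existing cgroup as SCHED_IDLE is just a write of "1" to its cpu.idle file; in the sketch below, the cgroup v2 mount point and the group name "background" are assumptions made up for the illustration.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Assumed path: a pre-existing cgroup v2 group named "background". */
	const char *path = "/sys/fs/cgroup/background/cpu.idle";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open cpu.idle");
		return 1;
	}
	if (write(fd, "1", 1) != 1)	/* 0 = default behavior, 1 = SCHED_IDLE */
		perror("write cpu.idle");
	close(fd);
	return 0;
}
```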
-
Committed by Valentin Schneider
The scheduler currently expects NUMA node distances to be stable from init onwards, and as a consequence builds the related data structures once-and-for-all at init (see sched_init_numa()). Unfortunately, on some architectures node distance is unreliable for offline nodes and may very well change upon onlining. Skip over offline nodes during sched_init_numa(). Track nodes that have been onlined at least once, and trigger a build of a node's NUMA masks when it is first onlined post-init.

Reported-by: Geetika Moolchandani <Geetika.Moolchandani1@ibm.com>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210818074333.48645-1-srikar@linux.vnet.ibm.com
-
Committed by Mark Brown
When skipping the tests due to a lack of system support for MTE we currently print a message saying FAIL, which makes it look like the test failed even though the test did actually report KSFT_SKIP, creating some confusion. Change the error message to say SKIP instead so things are clearer.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210819172902.56211-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-