- 24 April 2019: 8 commits
-
-
Submitted by Andrew Murray

Enable/disable event counters as appropriate when entering and exiting the guest to enable support for guest-only or host-only event counting. For both VHE and non-VHE we switch the counters between host/guest at EL2. The PMU may be on when we change which counters are enabled; however, we avoid adding an ISB and instead rely on existing context synchronisation events: the eret to enter the guest (__guest_enter) and the eret in kvm_call_hyp for __kvm_vcpu_run_nvhe on returning.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Andrew Murray

Add support for the :G and :H attributes in perf by handling the exclude_host/exclude_guest event attributes. We notify KVM of the counters that we wish to be enabled or disabled on guest entry/exit, and thus refrain from starting or stopping events ourselves based on their event attributes. With !VHE we switch the counters between host/guest at EL2. We are able to eliminate counters counting host events on the boundaries of guest entry/exit when using :G by filtering out EL2 for exclude_host. When using !exclude_hv there is a small blackout window at guest entry/exit where host events are not captured.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
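At the perf_event_open() level, :G and :H correspond to the exclude_host and exclude_guest bits of struct perf_event_attr. A minimal userspace sketch of the guest-only (":G") case, for illustration only and not part of the patch:

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <string.h>

    /* Open a cycles counter that, like "cycles:G", does not count host execution. */
    static int open_guest_only_cycles(void)
    {
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_CPU_CYCLES;
        attr.exclude_host = 1;            /* ":G" - count guest only */
        /* attr.exclude_guest = 1 would be ":H" - count host only */

        /* current task, any CPU, no group leader, no flags */
        return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }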
-
Submitted by Andrew Murray

In order to efficiently switch the events_{guest,host} perf counters at guest entry/exit, we add bitfields to kvm_cpu_context for guest and host events, as well as accessors for updating them. A function is also provided which allows the PMU driver to determine whether a counter should start counting when it is enabled. With exclude_host, we may only start counting when entering the guest.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
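The shape being described is roughly the following; the structure and accessor names here are assumptions based on the commit text, not the actual code:

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical per-CPU-context bookkeeping of which counters run where. */
    struct kvm_pmu_events {
        uint32_t events_host;    /* counters that should count while in the host */
        uint32_t events_guest;   /* counters that should count while in the guest */
    };

    /* Record on which side(s) a counter (bit mask 'set') should be active. */
    static void kvm_set_pmu_events(struct kvm_pmu_events *pmu, uint32_t set,
                                   bool exclude_host, bool exclude_guest)
    {
        if (!exclude_host)
            pmu->events_host |= set;
        if (!exclude_guest)
            pmu->events_guest |= set;
    }

    /* Forget a counter entirely, e.g. when the perf event is destroyed. */
    static void kvm_clr_pmu_events(struct kvm_pmu_events *pmu, uint32_t clr)
    {
        pmu->events_host &= ~clr;
        pmu->events_guest &= ~clr;
    }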
-
Submitted by Andrew Murray

The virt/arm core allocates a kvm_cpu_context_t percpu; at present this is a typedef to kvm_cpu_context and is used to store host CPU context. The kvm_cpu_context structure is also used elsewhere to hold vcpu context. In order to use the percpu to hold additional host information in the future, we encapsulate kvm_cpu_context in a new structure and rename the typedef and percpu to match.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
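A rough sketch of the encapsulation being described; the structure and field names are assumptions based on the commit text, and the existing kvm_cpu_context definition from kvm_host.h is taken as given:

    struct kvm_host_data {
        struct kvm_cpu_context host_ctxt;   /* what the percpu used to hold directly */
        /* room for additional per-physical-CPU state, e.g. PMU event masks */
    };

    typedef struct kvm_host_data kvm_host_data_t;

    /* The percpu allocation then becomes something like:
     *     DEFINE_PER_CPU(kvm_host_data_t, kvm_host_data);
     */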
-
Submitted by Andrew Murray

The armv8pmu_enable_event_counter function issues an isb instruction after enabling a pair of counters; this doesn't provide any value and is inconsistent with armv8pmu_disable_event_counter. In any case, armv8pmu_enable_event_counter is always called with the PMU stopped, and starting the PMU with armv8pmu_start results in an isb instruction being issued prior to writing to PMCR_EL0. Let's remove the unnecessary isb instruction.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Amit Daniel Kachhap

This patch advertises the capabilities for two CPU features: address pointer authentication and generic pointer authentication. These capabilities depend upon system support for pointer authentication and VHE mode. The current arm64 KVM partially implements pointer authentication, and support for address and generic authentication is tied together. However, separate ABI requirements are added for each so that any future isolated implementation will not require ABI changes.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
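From userspace, such capabilities are queried with the standard KVM_CHECK_EXTENSION ioctl. The two capability names below are assumptions based on this description and may not match the final UAPI, hence the #ifdef guards:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);

        if (kvm < 0)
            return 1;
    #ifdef KVM_CAP_ARM_PTRAUTH_ADDRESS
        printf("address auth: %d\n",
               ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PTRAUTH_ADDRESS));
    #endif
    #ifdef KVM_CAP_ARM_PTRAUTH_GENERIC
        printf("generic auth: %d\n",
               ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PTRAUTH_GENERIC));
    #endif
        return 0;
    }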
-
Submitted by Amit Daniel Kachhap

Now that the building blocks of pointer authentication are present, let's add the userspace flags KVM_ARM_VCPU_PTRAUTH_ADDRESS and KVM_ARM_VCPU_PTRAUTH_GENERIC. These flags enable pointer authentication for the KVM guest on a per-vcpu basis through the KVM_ARM_VCPU_INIT ioctl. When set, they allow the KVM guest to handle pointer authentication instructions; when not set, such instructions are treated as undefined. The documentation is updated to reflect these changes.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
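A userspace sketch of opting in, assuming the new flags are feature-bit indices used the same way as the existing KVM_ARM_VCPU_* features:

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int vcpu_init_with_ptrauth(int vm_fd, int vcpu_fd)
    {
        struct kvm_vcpu_init init;

        memset(&init, 0, sizeof(init));
        if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init))
            return -1;
        init.features[0] |= 1u << KVM_ARM_VCPU_PTRAUTH_ADDRESS;
        init.features[0] |= 1u << KVM_ARM_VCPU_PTRAUTH_GENERIC;
        return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
    }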
-
Submitted by Mark Rutland

When pointer authentication is supported, a guest may wish to use it. This patch adds the necessary KVM infrastructure for this to work, with a semi-lazy context switch of the pointer auth state. The pointer authentication feature is only enabled when VHE is built into the kernel and present in the CPU implementation, so only VHE code paths are modified. When we schedule a vcpu, we disable guest usage of pointer authentication instructions and accesses to the keys. While these are disabled, we avoid context-switching the keys. When we trap the guest trying to use pointer authentication functionality, we change to eagerly context-switching the keys and enable the feature. The next time the vcpu is scheduled out/in, we start again. The host key save, however, is optimised and implemented inside the ptrauth instruction/register access trap. Pointer authentication consists of address authentication and generic authentication, and CPUs in a system might have varied support for either. Where support for either feature is not uniform, it is hidden from guests via ID register emulation, as a result of the cpufeature framework in the host. Unfortunately, address authentication and generic authentication cannot be trapped separately, as the architecture provides a single EL2 trap covering both. If we wish to expose one without the other, we cannot prevent a (badly-written) guest from intermittently using a feature which is not uniformly supported (when scheduled on a physical CPU which supports the relevant feature). Hence, this patch expects both types of authentication to be present in a CPU. The key switch is done from the guest enter/exit assembly in preparation for the upcoming in-kernel pointer authentication support; the key-switching routines are not implemented in C code as they may cause pointer authentication key signing errors in some situations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Only VHE, key switch in full assembly, vcpu_has_ptrauth checks, save host key in ptrauth exception trap]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
[maz: various fixups]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 23 April 2019: 1 commit
-
-
Submitted by Amit Daniel Kachhap

A per-vcpu flag is added to check whether pointer authentication is enabled for the vcpu. This flag may be enabled according to the necessary user policies and host capabilities. This patch also adds a helper to check the flag.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
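The shape being described is roughly as follows; the flag's bit position is invented for illustration, while vcpu_has_ptrauth is the helper name referenced later in this series:

    /* vcpu->arch.flags bit - illustrative value only */
    #define KVM_ARM64_GUEST_HAS_PTRAUTH    (1 << 7)

    #define vcpu_has_ptrauth(vcpu) \
        ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)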
-
- 19 April 2019: 14 commits
-
-
Submitted by Dave Martin

The existing documentation for which SVE register slice IDs are considered out-of-range, and what happens when userspace tries to access them, is cryptic. This patch rewords the text with the aim of making it a bit easier to understand. No functional change.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

The current error code documentation for KVM_GET_ONE_REG and KVM_SET_ONE_REG could be read as implying that all architectures implement these error codes, or that KVM guarantees which error code is returned in a particular situation. Because this is not really the case, this patch explicitly waters down the documentation to remove such guarantees. EPERM is marked as arm64-specific, since for now arm64 really is the only architecture that yields this error code for the finalization-required case. Keeping this as a distinct error code is, however, useful for debugging, due to the statefulness of the API in this instance. No functional change.

Suggested-by: Andrew Jones <drjones@redhat.com>
Fixes: 395f562f ("KVM: Document errors for KVM_GET_ONE_REG and KVM_SET_ONE_REG")
Fixes: 50036ad0 ("KVM: arm64/sve: Document KVM API extensions for SVE")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

Userspace is only supposed to use KVM_ARM_VCPU_FINALIZE when there is some vcpu feature that can actually be finalized. This means that documenting KVM_ARM_VCPU_FINALIZE as available or not depending on the capabilities present is not helpful. This patch amends the documentation to describe availability in terms of which capability is required for each finalizable feature instead. In any case, userspace sees the same error (EINVAL) regardless of whether the given feature is not present or KVM_ARM_VCPU_FINALIZE is not implemented at all. No functional change.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

Currently, the internal vcpu finalization functions use a different name ("what") for the feature parameter than the name ("feature") used in the documentation. To avoid future confusion, this patch converts everything to use the name "feature" consistently. No functional change.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

Correct virtualization of SVE relies on code in set_sve_vls() that verifies consistency between the set of vector lengths requested by userspace and the set of vector lengths available on the host. However, the purpose of this code is not obvious, and not likely to be apparent at all to people who do not have detailed knowledge of the SVE system-level architecture. This patch adds a suitable comment to explain what these checks are for. No functional change.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

A complicated DIV_ROUND_UP() expression is currently written out explicitly in multiple places in order to specify the size of the bitmap exchanged with userspace to represent the value of the KVM_REG_ARM64_SVE_VLS pseudo-register. Userspace currently has no direct way to work this out either: for documentation purposes, the size is just quoted as 8 u64s. To make this more intuitive, this patch replaces these with a single define, which is also exported to userspace as KVM_ARM64_SVE_VLS_WORDS. Since the number of words in a bitmap is just the index of the last word used + 1, this patch expresses the bound that way instead. This should make it clearer what is being expressed. For userspace convenience, the minimum and maximum possible vector lengths relevant to the KVM ABI are exposed to UAPI as KVM_ARM64_SVE_VQ_MIN and KVM_ARM64_SVE_VQ_MAX. Since the only direct use for these at present is manipulation of KVM_REG_ARM64_SVE_VLS, no corresponding _VL_ macros are defined. They could be added later if a need arises. Since use of DIV_ROUND_UP() was the only reason for including <linux/kernel.h> in guest.c, this patch also removes that #include.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
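Spelled out, the relationship described above is roughly the following; the exact UAPI spelling may differ, but it reproduces the 8 u64s previously quoted in the documentation:

    #define KVM_ARM64_SVE_VQ_MIN    1      /* VQ = vector length in 128-bit quanta */
    #define KVM_ARM64_SVE_VQ_MAX    512    /* architectural ceiling */

    /* index of the last bitmap word used, plus one */
    #define KVM_ARM64_SVE_VLS_WORDS \
        ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1)    /* == 8 */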
-
Submitted by Dave Martin

sve_reg_to_region() currently passes the result of vcpu_sve_state_size() to array_index_nospec(), effectively leading to a divide/modulo operation. Currently the code bails out and returns -EINVAL if vcpu_sve_state_size() turns out to be zero, in order to avoid going ahead and attempting to divide by zero. This is reasonable, but it should only happen if the kernel contains some other bug that allowed this code to be reached without the vcpu having been properly initialised. To make it clear that this is a defence against bugs rather than something that the user should be able to trigger, this patch marks the check with WARN_ON().

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

Currently, the way error codes are generated when processing the SVE register access ioctls is a bit haphazard. This patch refactors the code so that the behaviour is more consistent: now, -EINVAL should be returned only for unrecognised register IDs or when some other runtime error occurs. -ENOENT is returned for register IDs that are recognised, but whose corresponding register (or slice) does not exist for the vcpu. To this end, in {get,set}_sve_reg() we now delegate the vcpu_has_sve() check down into {get,set}_sve_vls() and sve_reg_to_region(). The KVM_REG_ARM64_SVE_VLS special case is picked off first, then sve_reg_to_region() plays the role of exhaustively validating or rejecting the register ID and (where accepted) computing the applicable register region as before. sve_reg_to_region() is rearranged so that -ENOENT or -EPERM is not returned prematurely, before checking whether reg->id is in a recognised range. -EPERM is now only returned when an attempt is made to access an actually existing register slice on an unfinalized vcpu.

Fixes: e1c9c983 ("KVM: arm64/sve: Add SVE support to register access ioctl interface")
Fixes: 9033bba4 ("KVM: arm64/sve: Add pseudo-register for the guest's vector lengths")
Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

* Remove a few redundant blank lines that are stylistically inconsistent with code already in guest.c and are just taking up space.
* Delete a couple of pointless empty default cases from switch statements whose behaviour is otherwise obvious anyway.
* Fix some typos and consolidate some redundantly duplicated comments.
* Respell the slice index check in sve_reg_to_region() as "> 0" to be more consistent with what is logically being checked here (i.e., "is the slice index too large"), even though we don't try to cope with multiple slices yet.

No functional change.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

Currently, the SVE register ID macros are not all defined in the same way, and advertise the fact that FFR maps onto the nonexistent predicate register P16. This is really just for kernel convenience, and may lead userspace into bad habits. Instead, this patch masks the ID macro arguments so that architecturally invalid register numbers will not be passed through any more, and uses a literal KVM_REG_ARM64_SVE_FFR_BASE macro to define KVM_REG_ARM64_SVE_FFR(), similarly to the way the _ZREG() and _PREG() macros are defined. Rather than plugging in magic numbers for the number of Z- and P-registers and the maximum possible number of register slices, this patch provides definitions for those too. Userspace is going to need them in any case, and it makes sense for them to come from <uapi/asm/kvm.h>. sve_reg_to_region() uses convenience constants that are defined in a different way, and also makes use of the fact that the FFR IDs are really contiguous with the P15 IDs, so this patch retains the existing convenience constants in guest.c, supplemented with a couple of sanity checks for consistency with the UAPI header.

Fixes: e1c9c983 ("KVM: arm64/sve: Add SVE support to register access ioctl interface")
Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

Because of the logic in kvm_arm_sys_reg_{get,set}_reg() and sve_id_visibility(), we should never call {get,set}_id_aa64zfr0_el1() for a vcpu where !vcpu_has_sve(vcpu). To avoid the code giving the impression that it is valid for these functions to be called in this situation, and to help the compiler make the right optimisation decisions, this patch adds WARN_ON() for these cases. Given the way the logic is spread out, this seems preferable to dropping the checks altogether.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

The vcpu finalization stubs kvm_arm_vcpu_finalize() and kvm_arm_vcpu_is_finalized() are currently #defines for ARM, which limits the type checking that the compiler can do. The only reason for them to be #defines was to avoid reliance on the definition of struct kvm_vcpu, which is not available here due to circular #include problems. However, because these are stubs containing no code, they don't need the definition of struct kvm_vcpu after all; only a declaration is needed (which is available already). So, in the interests of cleanliness, this patch converts them to inline functions. No functional change.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
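After the conversion, the arm-side stubs would look roughly like this (a sketch; only a declaration of struct kvm_vcpu is needed, since the stubs never dereference it):

    #include <linux/errno.h>
    #include <linux/types.h>

    struct kvm_vcpu;    /* declaration only - no definition required */

    static inline int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature)
    {
        return -EINVAL;    /* no finalizable features on 32-bit arm */
    }

    static inline bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
    {
        return true;       /* nothing to finalize */
    }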
-
Submitted by Dave Martin

The introduction of kvm_arm_init_arch_resources() looks like premature factoring, since nothing else uses this hook yet and it is not clear what will use it in the future. For now, let's not pretend that this is a general thing: this patch simply renames the function to kvm_arm_init_sve(), retaining the arm stub version under the new name.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

Currently the meanings of sve_vq_map and the ancillary helpers __bit_to_vq() and __vq_to_bit() are not clearly explained. This patch makes the explanatory comment clearer, and removes the duplicate comment from fpsimd.h. The WARN_ON() currently present in __bit_to_vq() confuses the intended use of this helper. Since these are low-level helpers not intended for general-purpose use anyway, it is better not to make guesses about how these functions will be used: rather, this patch removes the WARN_ON() and relies on callers to use the helpers sensibly.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
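For orientation, the helpers map between a vector quantum (VQ) and a bit position in sve_vq_map. A plausible shape is shown below; the exact ordering is an implementation detail, and SVE_VQ_MAX comes from the arm64 SVE UAPI headers:

    /* One bit per possible VQ; here bit 0 is taken to correspond to SVE_VQ_MAX. */
    static inline unsigned int __vq_to_bit(unsigned int vq)
    {
        return SVE_VQ_MAX - vq;
    }

    static inline unsigned int __bit_to_vq(unsigned int bit)
    {
        return SVE_VQ_MAX - bit;
    }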
-
- 02 April 2019: 1 commit
-
-
Submitted by Marc Zyngier

The introduction of the SVE registers to userspace started with a refactoring of the way we expose any register via the ONE_REG interface. Unfortunately, this change doesn't behave as expected when the number of registers is non-zero, as it considers any non-zero return value to be an error. The visible result is that QEMU barfs very early when creating vcpus. Make sure we only exit early when there is an actual error, rather than on a positive number of registers...

Fixes: be25bbb3 ("KVM: arm64: Factor out core register ID enumeration")
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
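The class of bug being fixed, sketched with illustrative names: a helper that returns a non-negative register count was checked as though any non-zero value were an error.

    /* before: a positive count of registers was mistaken for an error */
    ret = copy_core_reg_indices(vcpu, &uindices);
    if (ret)            /* wrong: ret > 0 just means "ret registers copied" */
        return ret;

    /* after: only genuine errors terminate the enumeration */
    ret = copy_core_reg_indices(vcpu, &uindices);
    if (ret < 0)
        return ret;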
-
- 29 March 2019: 16 commits
-
-
Submitted by Dave Martin

This patch adds sections to the KVM API documentation describing the extensions for supporting the Scalable Vector Extension (SVE) in guests.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

KVM_GET_ONE_REG and KVM_SET_ONE_REG return some error codes that are not documented (but hopefully not surprising either). To give an indication of what these may mean, this patch adds brief documentation.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

To provide a uniform way to check for KVM SVE support amongst other features, this patch adds a suitable capability, KVM_CAP_ARM_SVE, and reports it as present when SVE is available.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

Now that all the pieces are in place, this patch offers a new flag, KVM_ARM_VCPU_SVE, that userspace can pass to KVM_ARM_VCPU_INIT to turn on SVE for the guest, on a per-vcpu basis. As part of this, support for initialisation and reset of the SVE vector length set and registers is added in the appropriate places, as well as finally setting the KVM_ARM64_GUEST_HAS_SVE vcpu flag, to turn on the SVE support code. Allocation of the SVE register storage in vcpu->arch.sve_state is deferred until the SVE configuration is finalized, by which time the size of the registers is known. Setting the vector lengths supported by the vcpu is considered configuration of the emulated hardware rather than runtime configuration, so no support is offered for changing the vector lengths available to an existing vcpu across reset.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

This patch adds a new pseudo-register, KVM_REG_ARM64_SVE_VLS, to allow userspace to set and query the set of vector lengths visible to the guest. In the future, multiple register slices per SVE register may be visible through the ioctl interface. Once the set of slices has been determined we would not be able to allow the vector length set to be changed any more, in order to avoid userspace seeing inconsistent sets of registers. For this reason, this patch adds support for explicit finalization of the SVE configuration via the KVM_ARM_VCPU_FINALIZE ioctl. Finalization is the proper place to allocate the SVE register state storage in vcpu->arch.sve_state, so this patch adds that as appropriate. The data is freed via kvm_arch_vcpu_uninit(), which was previously a no-op on arm64. To simplify the logic for determining what vector lengths can be supported, some code is added to KVM init to work this out, in the kvm_arm_init_arch_resources() hook. The KVM_REG_ARM64_SVE_VLS pseudo-register is not exposed yet. Subsequent patches will allow SVE to be turned on for guest vcpus, making it visible.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
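A userspace sketch of how the pseudo-register is used once exposed; the bitmap is the 8 x u64 layout described in the SVE API documentation, with one bit per supported vector length:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int configure_sve_vls(int vcpu_fd)
    {
        __u64 vls[8];    /* 8 x u64, one bit per vector length */
        struct kvm_one_reg reg = {
            .id   = KVM_REG_ARM64_SVE_VLS,
            .addr = (__u64)(unsigned long)vls,
        };

        if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))    /* host-supported set */
            return -1;
        /* bits could be cleared here to hide vector lengths from the guest */
        return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg); /* must precede finalization */
    }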
-
Submitted by Dave Martin

Some aspects of vcpu configuration may be too complex to be completed inside KVM_ARM_VCPU_INIT. Thus, there may be a requirement for userspace to do some additional configuration before various other ioctls will work in a consistent way. In particular this will be the case for SVE, where userspace will need to negotiate the set of vector lengths to be made available to the guest before the vcpu becomes fully usable. In order to provide an explicit way for userspace to confirm that it has finished setting up a particular vcpu feature, this patch adds a new ioctl: KVM_ARM_VCPU_FINALIZE. When userspace has opted into a feature that requires finalization, typically by means of a feature flag passed to KVM_ARM_VCPU_INIT, a matching call to KVM_ARM_VCPU_FINALIZE is now required before KVM_RUN or KVM_GET_REG_LIST is allowed. Individual features may impose additional restrictions where appropriate. No existing vcpu features are affected by this, so current userspace implementations will continue to work exactly as before, with no need to issue KVM_ARM_VCPU_FINALIZE. As implemented in this patch, KVM_ARM_VCPU_FINALIZE is currently a placeholder: no finalizable features exist yet, so the ioctl is not required and will always yield EINVAL. Subsequent patches will add the finalization logic to make use of this ioctl for SVE. No functional change for existing userspace.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
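Once a finalizable feature exists (SVE, in later patches of this series), the expected userspace call sequence is roughly the following; the ioctl argument is a pointer to an int identifying the feature:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int finalize_sve(int vcpu_fd)
    {
        int feature = KVM_ARM_VCPU_SVE;    /* added by a later patch */

        /* call after KVM_ARM_VCPU_INIT and KVM_REG_ARM64_SVE_VLS configuration */
        return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
    }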
-
Submitted by Dave Martin

This patch adds a kvm_arm_init_arch_resources() hook to perform subarch-specific initialisation when starting up KVM. This will be used in a subsequent patch for global SVE-related setup on arm64. No functional change.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

KVM will need to interrogate the set of SVE vector lengths available on the system. This patch exposes the relevant bits to the kernel, along with a sve_vq_available() helper to check whether a particular vector length is supported. __vq_to_bit() and __bit_to_vq() are not intended for use outside these functions: now that they are exposed outside fpsimd.c, they are prefixed with __ in order to provide an extra hint that they are not intended for general-purpose use.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

This patch includes the SVE register IDs in the list returned by KVM_GET_REG_LIST, as appropriate. On a non-SVE-enabled vcpu, no new IDs are added. On an SVE-enabled vcpu, IDs for the FPSIMD V-registers are removed from the list, since userspace is required to access the Z-registers instead in order to access the V-register content. For the variably-sized SVE registers, the appropriate set of slice IDs are enumerated, depending on the maximum vector length for the vcpu. As it currently stands, the SVE architecture never requires more than one slice to exist per register, so this patch adds no explicit support for enumerating multiple slices. The code can be extended straightforwardly to support this in the future, if needed.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

This patch adds the following registers for access via the KVM_{GET,SET}_ONE_REG interface:

* KVM_REG_ARM64_SVE_ZREG(n, i) (n = 0..31) (in 2048-bit slices)
* KVM_REG_ARM64_SVE_PREG(n, i) (n = 0..15) (in 256-bit slices)
* KVM_REG_ARM64_SVE_FFR(i) (in 256-bit slices)

In order to adapt gracefully to future architectural extensions, the registers are logically divided up into slices as noted above: the i parameter denotes the slice index. This allows us to reserve space in the ABI for future expansion of these registers. However, as of today the architecture does not permit registers to be larger than a single slice, so no code is needed in the kernel to expose additional slices, for now. The code can be extended later as needed to expose them, up to a maximum of 32 slices (as carved out in the architecture itself), if they really exist someday. The registers are only visible for vcpus that have SVE enabled. They are not enumerated by KVM_GET_REG_LIST on vcpus that do not have SVE. Accesses to the FPSIMD registers via KVM_REG_ARM_CORE are not allowed for SVE-enabled vcpus: SVE-aware userspace can use the KVM_REG_ARM64_SVE_ZREG() interface instead to access the same register state. This avoids some complex and pointless emulation in the kernel to convert between the two views of these aliased registers.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
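For illustration, fetching slice 0 of Z0 from an SVE-enabled vcpu looks roughly like this from userspace (each Z-register slice is 2048 bits, i.e. 256 bytes):

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int read_sve_z0(int vcpu_fd, unsigned char buf[256])
    {
        struct kvm_one_reg reg = {
            .id   = KVM_REG_ARM64_SVE_ZREG(0, 0),    /* Z0, slice 0 */
            .addr = (__u64)(unsigned long)buf,
        };

        return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
    }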
-
Submitted by Dave Martin

In order to avoid the pointless complexity of maintaining two ioctl register access views of the same data, this patch blocks ioctl access to the FPSIMD V-registers on vcpus that support SVE. This will make it more straightforward to add SVE register access support. Since SVE is an opt-in feature for userspace, this will not affect existing users.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

In preparation for adding logic to filter out some KVM_REG_ARM_CORE registers from the KVM_GET_REG_LIST output, this patch factors out the core register enumeration into a separate function and rebuilds num_core_regs() on top of it. This may be a little more expensive (depending on how good a job the compiler does of specialising the code), but KVM_GET_REG_LIST is not a hot path. This will make it easier to consolidate ID filtering code in one place. No functional change.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

arch/arm64/kvm/guest.c uses the string functions, but the corresponding header is not included. We seem to get away with this for now, but for completeness this patch adds the #include, in preparation for adding yet more memset() calls.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

The Arm SVE architecture defines registers that are up to 2048 bits in size (with some possibility of further future expansion). In order to avoid the need for an excessively large number of ioctls when saving and restoring a vcpu's registers, this patch adds a #define to make support for individual 2048-bit registers through the KVM_{GET,SET}_ONE_REG ioctl interface official. This will allow each SVE register to be accessed in a single call. There are sufficient spare bits in the register id size field for this change, so there is no ABI impact, provided that KVM_GET_REG_LIST does not enumerate any 2048-bit register unless userspace explicitly opts in to the relevant architecture-specific features.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

In order to give each vcpu its own view of the SVE registers, this patch adds context storage via a new sve_state pointer in struct vcpu_arch. An additional member, sve_max_vl, is also added for each vcpu, to determine the maximum vector length visible to the guest and thus the value to be configured in ZCR_EL2.LEN while the vcpu is active. This also determines the layout and size of the storage in sve_state, which is read and written by the same backend functions that are used for context-switching the SVE state for host tasks. On SVE-enabled vcpus, SVE access traps are now handled by switching in the vcpu's SVE context and disabling the trap before returning to the guest. On other vcpus, the trap is not handled and an exit back to the host occurs, where the handle_sve() fallback path reflects an undefined instruction exception back to the guest, consistently with the behaviour of non-SVE-capable hardware (as was done unconditionally prior to this patch). No SVE handling is added on non-VHE-only paths, since VHE is an architectural and Kconfig prerequisite of SVE.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dave Martin

This patch adds the necessary support for context switching ZCR_EL1 for each vcpu. ZCR_EL1 is trapped alongside the FPSIMD/SVE registers, so it makes sense for it to be handled as part of the guest FPSIMD/SVE context for context switch purposes instead of handling it as a general system register. This means that it can be switched in lazily at the appropriate time. No effort is made to track host context for this register, since SVE requires VHE: thus the host's value for this register lives permanently in ZCR_EL2 and does not alias the guest's value at any time. The Hyp switch and fpsimd context handling code is extended appropriately. Accessors are added in sys_regs.c to expose the SVE system registers and ID register fields. Because these need to be conditionally visible based on the guest configuration, they are implemented separately for now rather than by use of the generic system register helpers. This may be abstracted better later on when/if there are more features requiring this model. ID_AA64ZFR0_EL1 is RO-RAZ for MRS/MSR when SVE is disabled for the guest, but for compatibility with non-SVE-aware KVM implementations the register should not be enumerated at all for KVM_GET_REG_LIST in this case. For consistency we also reject ioctl access to the register. This ensures that a non-SVE-enabled guest looks the same to userspace, irrespective of whether the kernel KVM implementation supports SVE.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-