- 11 Jul 2014, 3 commits
-
-
Committed by Kim Phillips
A userspace process can map device MMIO memory via VFIO or /dev/mem, e.g., for platform device passthrough support in QEMU. During early development, we found the PAGE_S2 memory type being used for MMIO mappings. This patch corrects that by using the more strongly ordered memory type for device MMIO mappings: PAGE_S2_DEVICE. Signed-off-by: Kim Phillips <kim.phillips@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
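In practice this boils down to the stage-2 fault path choosing the mapping attributes from the faulting pfn; a minimal sketch in C (the surrounding handler shape and the kvm_is_mmio_pfn() test reflect the mmu code of that era, not a quote of the patch):

    /* In the stage-2 fault handler: default to normal guest memory */
    pgprot_t mem_type = PAGE_S2;

    if (kvm_is_mmio_pfn(pfn))
            mem_type = PAGE_S2_DEVICE;      /* device-type, strongly ordered */

    pte_t new_pte = pfn_pte(pfn, mem_type);
    /* install new_pte into the stage-2 tables for fault_ipa */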
-
Committed by Eric Auger
Currently, when a KVM memory region is deleted or moved after the KVM_SET_USER_MEMORY_REGION ioctl, the corresponding intermediate physical memory is not unmapped. This patch corrects this and unmaps the region's IPA range in kvm_arch_commit_memory_region using unmap_stage2_range. Signed-off-by: Eric Auger <eric.auger@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
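Roughly, the commit hook only has to act on the DELETE and MOVE cases; a sketch, assuming the kvm_arch_commit_memory_region() prototype of that era:

    void kvm_arch_commit_memory_region(struct kvm *kvm,
                                       struct kvm_userspace_memory_region *mem,
                                       const struct kvm_memory_slot *old,
                                       enum kvm_mr_change change)
    {
            if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
                    phys_addr_t gpa  = old->base_gfn << PAGE_SHIFT;
                    phys_addr_t size = old->npages << PAGE_SHIFT;

                    spin_lock(&kvm->mmu_lock);
                    unmap_stage2_range(kvm, gpa, size); /* drop stage-2 PTEs */
                    spin_unlock(&kvm->mmu_lock);
            }
    }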
-
Committed by Christoffer Dall
unmap_range() was utterly broken, to quote Marc, and broke in all sorts of situations. It was also quite complicated to follow and didn't follow the usual scheme of having a separate iterating function for each level of page tables. Address this by refactoring the code and introducing a pgd_clear() function. Reviewed-by: Jungseok Lee <jays.lee@samsung.com> Reviewed-by: Mario Smarduch <m.smarduch@samsung.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
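The "separate iterating function per level" scheme reads roughly as follows (a condensed, illustrative sketch, not the patch itself; kvm_pmd_addr_end is the 64bit-clean boundary helper mentioned later in this log):

    static void unmap_ptes(struct kvm *kvm, pmd_t *pmd,
                           phys_addr_t addr, phys_addr_t end)
    {
            pte_t *pte = pte_offset_kernel(pmd, addr);
            do {
                    if (!pte_none(*pte))
                            kvm_set_pte(pte, __pte(0)); /* clear leaf entry */
            } while (pte++, addr += PAGE_SIZE, addr != end);
    }

    static void unmap_pmds(struct kvm *kvm, pud_t *pud,
                           phys_addr_t addr, phys_addr_t end)
    {
            pmd_t *pmd = pmd_offset(pud, addr);
            phys_addr_t next;
            do {
                    next = kvm_pmd_addr_end(addr, end);
                    if (!pmd_none(*pmd))
                            unmap_ptes(kvm, pmd, addr, next);
            } while (pmd++, addr = next, addr != end);
    }

    /* unmap_puds() and the top-level unmap_range() repeat the same
     * pattern, one level up. */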
-
- 30 Apr 2014, 8 commits
-
-
Committed by Anup Patel
We have PSCI v0.2 emulation available in KVM ARM/ARM64, hence advertise this to user space (i.e. QEMU or KVMTOOL) via the KVM_CHECK_EXTENSION ioctl. Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
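From user space the probe is a plain capability check; a minimal sketch (error handling omitted, KVM_CAP_ARM_PSCI_0_2 being the capability this patch advertises):

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* kvm_fd is an open /dev/kvm file descriptor */
    int has_psci_v0_2 = ioctl(kvm_fd, KVM_CHECK_EXTENSION,
                              KVM_CAP_ARM_PSCI_0_2) > 0;
    /* if set, it is safe to request PSCI v0.2 at VCPU init time */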
-
Committed by Anup Patel
This patch adds emulation of the PSCI v0.2 CPU_SUSPEND function call for KVM ARM/ARM64. This is a CPU-level function call which can suspend the current CPU or the current CPU cluster. We don't have VCPU clusters in KVM, so we only suspend the current VCPU. The CPU_SUSPEND emulation is not tested much, because there is currently no CPUIDLE driver in the Linux kernel that uses PSCI CPU_SUSPEND. The PSCI CPU_SUSPEND implementation in the ARM64 kernel was tested using a simple CPUIDLE driver which is not published due to unstable DT bindings for PSCI. (For more info, see http://lwn.net/Articles/574950/) For simplicity, we implement CPU_SUSPEND emulation similarly to WFI (wait-for-interrupt) emulation, and we also treat a power-down request the same as a stand-by request. This is consistent with sections 5.4.1 and 5.4.2 of the PSCI v0.2 specification. Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Anup Patel
As per PSCI v0.2, the source CPU provides the physical address of an "entry point" and a "context id" for starting a target CPU. Also, if the target CPU is already running, we should return ALREADY_ON. The current emulation of the CPU_ON function does not handle the "context id" parameter and returns INVALID_PARAMETERS if the target CPU is already running. This patch updates kvm_psci_vcpu_on() such that it works for both PSCI v0.1 and PSCI v0.2. Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Anup Patel
This patch adds emulation of the PSCI v0.2 MIGRATE, MIGRATE_INFO_TYPE, and MIGRATE_INFO_UP_CPU function calls for KVM ARM/ARM64. KVM ARM/ARM64 being a hypervisor (and not a Trusted OS), we cannot provide these functions, hence we emulate them as follows: 1. MIGRATE - returns "Not Supported" 2. MIGRATE_INFO_TYPE - returns 2, i.e. Trusted OS is not present 3. MIGRATE_INFO_UP_CPU - returns "Not Supported" Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Anup Patel
This patch adds emulation of the PSCI v0.2 AFFINITY_INFO function call for KVM ARM/ARM64. This is a VCPU-level function call which is used to determine the current state of a given affinity level. Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Anup Patel
The PSCI v0.2 SYSTEM_OFF and SYSTEM_RESET functions are system-level functions and hence cannot be fully emulated by the in-kernel PSCI emulation code. To tackle this, we forward PSCI v0.2 SYSTEM_OFF and SYSTEM_RESET function calls from the VCPU to user space (i.e. QEMU or KVMTOOL) via the kvm_run structure, using the KVM_EXIT_SYSTEM_EVENT exit reason. Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
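The forwarding amounts to filling in kvm_run before KVM_RUN returns; a sketch (the helper name is illustrative):

    static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type)
    {
            memset(&vcpu->run->system_event, 0,
                   sizeof(vcpu->run->system_event));
            /* type is KVM_SYSTEM_EVENT_SHUTDOWN or KVM_SYSTEM_EVENT_RESET */
            vcpu->run->system_event.type = type;
            vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
    }

User space then sees KVM_RUN return with exit_reason == KVM_EXIT_SYSTEM_EVENT and performs the actual power-off or reset of the machine model.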
-
Committed by Anup Patel
Currently, kvm_psci_call() returns 'true' or 'false' based on whether the PSCI function call was handled successfully or not. This does not help us emulate system-level PSCI functions, where the actual emulation work will be done by user space (QEMU or KVMTOOL). Examples of such system-level PSCI functions are PSCI v0.2 SYSTEM_OFF and SYSTEM_RESET. This patch updates kvm_psci_call() to return three types of values: 1) > 0 (success), 2) = 0 (success, but exit to user space), and 3) < 0 (error). Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
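This maps naturally onto the HVC exit handler's own contract, where a positive return resumes the guest and zero returns to user space; roughly:

    int ret = kvm_psci_call(vcpu);
    if (ret < 0) {
            /* error: treat the call as an undefined instruction */
            kvm_inject_undefined(vcpu);
            return 1;
    }
    return ret;     /* 0: exit to user space; > 0: back to the guest */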
-
Committed by Anup Patel
Currently, the in-kernel PSCI emulation provides a PSCI v0.1 interface to VCPUs. This patch extends the current in-kernel PSCI emulation to also provide a PSCI v0.2 interface to VCPUs. By default, ARM/ARM64 KVM will always provide the PSCI v0.1 interface, to keep the ABI backward-compatible. To select the PSCI v0.2 interface for VCPUs, user space (i.e. QEMU or KVMTOOL) has to set the KVM_ARM_VCPU_PSCI_0_2 feature when doing VCPU init using the KVM_ARM_VCPU_INIT ioctl. Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
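A minimal user-space sketch of opting in (the target value is just an example, and error handling is omitted):

    struct kvm_vcpu_init init = {
            .target = KVM_ARM_TARGET_CORTEX_A15,    /* example target */
    };
    init.features[0] |= 1 << KVM_ARM_VCPU_PSCI_0_2; /* request PSCI v0.2 */
    ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);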
-
- 28 Apr 2014, 1 commit
-
-
Committed by Mark Salter
The kvm/mmu code shared by arm and arm64 uses kmalloc() to allocate a bounce page (if the hypervisor init code crosses a page boundary) and the hypervisor PGDs. The problem is that kmalloc() does not guarantee the proper alignment. In the case of the bounce page, the page-sized buffer allocated may itself cross a page boundary, negating the purpose and leading to a hang during kvm initialization. Likewise, the PGDs allocated may not meet the minimum alignment requirements of the underlying MMU. This patch uses __get_free_page() to guarantee the worst-case alignment needs of the bounce page and PGDs on both arm and arm64. Cc: <stable@vger.kernel.org> # 3.10+ Signed-off-by: Mark Salter <msalter@redhat.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
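The gist, as a sketch (init_bounce_page follows the naming in the mmu code of that era; treat the snippet as illustrative):

    /* kmalloc(PAGE_SIZE, ...) only guarantees ARCH_KMALLOC_MINALIGN
     * alignment, so the buffer may straddle two pages.
     * __get_free_page() returns a naturally page-aligned page. */
    void *init_bounce_page = (void *)__get_free_page(GFP_KERNEL);
    if (!init_bounce_page)
            return -ENOMEM;
    /* ... use it, then release with: */
    free_page((unsigned long)init_bounce_page);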
-
- 26 Apr 2014, 1 commit
-
-
Committed by Will Deacon
KVM currently crashes and burns on big-endian hosts, so don't allow it to be selected until we've got that fixed. Cc: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 20 Mar 2014, 1 commit
-
-
Committed by Srivatsa S. Bhat
On 03/15/2014 12:40 AM, Christoffer Dall wrote:

> On Fri, Mar 14, 2014 at 11:13:29AM +0530, Srivatsa S. Bhat wrote:
>> On 03/13/2014 04:51 AM, Christoffer Dall wrote:
>>> On Tue, Mar 11, 2014 at 02:05:38AM +0530, Srivatsa S. Bhat wrote:
>>>> Subsystems that want to register CPU hotplug callbacks, as well as perform
>>>> initialization for the CPUs that are already online, often do it as shown
>>>> below:
>>>> [...]
>>> Just so we're clear, the existing code was simply racy as not prone to
>>> deadlocks, right?
>>>
>>> This makes it clear that the test above for compatible CPUs can be quite
>>> easily evaded by using CPU hotplug, but we don't really have a good
>>> solution for handling that yet... Hmmm, grumble grumble, I guess if you
>>> hotplug unsupported CPUs on a KVM/ARM system for now, stuff will break.
>>
>> In this particular case, there was no deadlock possibility, rather the
>> existing code had insufficient synchronization against CPU hotplug.
>>
>> init_hyp_mode() would invoke cpu_init_hyp_mode() on currently online CPUs
>> using on_each_cpu(). If a CPU came online after this point and before calling
>> register_cpu_notifier(), that CPU would remain uninitialized because this
>> subsystem would miss the hot-online event. This patch fixes this bug and
>> also uses the new synchronization method (instead of get/put_online_cpus())
>> to ensure that we don't deadlock with CPU hotplug.
>>
> Yes, that was my conclusion as well. Thanks for clarifying. (It could
> be noted in the commit message as well if you should feel so inclined.)

Please find the patch with updated changelog (and your Ack) below. (No changes in code.)

From: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Subject: [PATCH] arm, kvm: Fix CPU hotplug callback registration

Subsystems that want to register CPU hotplug callbacks, as well as perform initialization for the CPUs that are already online, often do it as shown below:

    get_online_cpus();

    for_each_online_cpu(cpu)
            init_cpu(cpu);

    register_cpu_notifier(&foobar_cpu_notifier);

    put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently with CPU hotplug operations). Instead, the correct and race-free way of performing the callback registration is:

    cpu_notifier_register_begin();

    for_each_online_cpu(cpu)
            init_cpu(cpu);

    /* Note the use of the double underscored version of the API */
    __register_cpu_notifier(&foobar_cpu_notifier);

    cpu_notifier_register_done();

In the existing arm kvm code, there is no synchronization with CPU hotplug to avoid missing the hotplug events that might occur after invoking init_hyp_mode() and before calling register_cpu_notifier(). Fix this bug and also use the new synchronization method (instead of get/put_online_cpus()) to ensure that we don't deadlock with CPU hotplug.

Cc: Gleb Natapov <gleb@kernel.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Ingo Molnar <mingo@kernel.org> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 03 Mar 2014, 9 commits
-
-
Committed by Marc Zyngier
Compiling with THP enabled leads to the following warning:

arch/arm/kvm/mmu.c: In function ‘unmap_range’:
arch/arm/kvm/mmu.c:177:39: warning: ‘pte’ may be used uninitialized in this function [-Wmaybe-uninitialized]
    if (kvm_pmd_huge(*pmd) || page_empty(pte)) {

Code inspection reveals that these two cases are mutually exclusive, so GCC is a bit overzealous here. Silence it anyway by initializing pte to NULL and testing it later on. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Marc Zyngier
In order to be able to detect the point where the guest enables its MMU and caches, trap all the VM-related system registers. Once we see the guest enabling both the MMU and the caches, we can go back to a saner mode of operation, which is to leave these registers in complete control of the guest. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Marc Zyngier
HCR.TVM traps (among other things) accesses to AMAIR0 and AMAIR1. In order to minimise the amount of surprise a guest could generate by trying to access these registers with caches off, add them to the list of registers we switch/handle. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier
So far, KVM/ARM used a fixed HCR configuration per guest, except for the VI/VF/VA bits used to control interrupts in the absence of a VGIC. With the upcoming need to dynamically reconfigure trapping, it becomes necessary to allow the HCR to be changed on a per-vcpu basis. The fix here is to mimic what KVM/arm64 already does: a per-vcpu HCR field, initialized at setup time. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier
Commit 240e99cb (ARM: KVM: Fix 64-bit coprocessor handling) added an ordering dependency for the 64bit registers. The order described is: CRn, CRm, Op1, Op2, 64bit-first. Unfortunately, the implementation is: CRn, 64bit-first, CRm... Move the 64bit test to be last, in order to match the documentation. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier
Commit 240e99cb (ARM: KVM: Fix 64-bit coprocessor handling) changed the way we match 64bit coprocessor accesses from user space, but didn't update the trap handler for the same set of registers. The effect is that a trapped 64bit access is never matched, leading to a fault being injected into the guest. This went unnoticed as we didn't really trap any 64bit register so far. Placing the CRm field of the access into the CRn field of the matching structure fixes the problem. Also update the debug feature to emit the expected string in case of a failed match. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier
When the guest runs with caches disabled (like in an early boot sequence, for example), all the writes are directly going to RAM, bypassing the caches altogether. Once the MMU and caches are enabled, whatever sits in the cache becomes suddenly visible, which isn't what the guest expects. A way to avoid this potential disaster is to invalidate the cache when the MMU is being turned on. For this, we hook into the SCTLR_EL1 trapping code, and scan the stage-2 page tables, invalidating the pages/sections that have already been mapped in. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Marc Zyngier
The use of p*d_addr_end with stage-2 translation is slightly dodgy, as the IPA is 40bits, while all the p*d_addr_end helpers take an unsigned long (arm64 is fine with that, as unsigned long is 64bit there). The fix is to introduce 64bit-clean versions of the same helpers, and use them in the stage-2 page table code. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
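Such a 64bit-clean helper can do the boundary computation in u64 so a 40bit IPA cannot wrap a 32bit unsigned long; a sketch modeled on the kvm_p*d_addr_end helpers:

    /* Unlike pgd_addr_end(), compare in 64bit space so a 40bit IPA
     * cannot overflow a 32bit unsigned long. */
    #define kvm_pgd_addr_end(addr, end)                                 \
    ({      u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;        \
            (__boundary - 1 < (end) - 1) ? __boundary : (end);          \
    })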
-
Committed by Marc Zyngier
In order for a guest with caches off to observe data written to a given page, we need to make sure that page is committed to memory, and not just hanging in the cache (as guest accesses completely bypass the cache until the guest decides to enable it). For this purpose, hook into the coherent_icache_guest_page function and flush the region if the guest SCTLR_EL1 register doesn't show the MMU and caches as being enabled. The function also gets renamed to coherent_cache_guest_page. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 28 Feb 2014, 1 commit
-
-
Committed by Marc Zyngier
Commit 1fcf7ce0 (arm: kvm: implement CPU PM notifier) added support for CPU power management, using a cpu_notifier to re-init KVM on a CPU that entered CPU idle. The code assumed that a CPU entering idle would actually be powered off, losing its state entirely, and would then need to be reinitialized. It turns out that this is not always the case, and some HW performs CPU PM without actually killing the core. In this case, we try to reinitialize KVM while it is still live. It ends up badly, as reported by Andre Przywara (using a Calxeda Midway):

[ 3.663897] Kernel panic - not syncing: unexpected prefetch abort in Hyp mode at: 0x685760
[ 3.663897] unexpected data abort in Hyp mode at: 0xc067d150
[ 3.663897] unexpected HVC/SVC trap in Hyp mode at: 0xc0901dd0

The trick here is to detect whether we've been through a full re-init or not by looking at HVBAR (VBAR_EL2 on arm64). This involves implementing the backend for __hyp_get_vectors in the main KVM HYP code (rather small), and checking the return value against the default one when the CPU notifier is called on CPU_PM_EXIT. Reported-by: Andre Przywara <osp@andrep.de> Tested-by: Andre Przywara <osp@andrep.de> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Rob Herring <rob.herring@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
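The resulting notifier logic is small; a sketch of the CPU_PM_EXIT path (close in shape to the patch, with hyp_default_vectors holding the stub vectors installed at init):

    static int hyp_init_cpu_pm_notifier(struct notifier_block *self,
                                        unsigned long cmd, void *v)
    {
            if (cmd == CPU_PM_EXIT &&
                __hyp_get_vectors() == hyp_default_vectors) {
                    /* HVBAR still holds the default vectors: the core
                     * really lost its state, so re-init HYP mode. */
                    cpu_init_hyp_mode(NULL);
                    return NOTIFY_OK;
            }
            return NOTIFY_DONE;
    }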
-
- 09 Jan 2014, 2 commits
-
-
Committed by Sachin Kamat
trace.h was included twice. Remove the duplicate inclusion. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Marc Zyngier
The THP code in KVM/ARM is a bit restrictive in not allowing a THP to be used if the VMA is not 2MB aligned. Actually, it is not so much the VMA that matters, but the associated memslot: a process can perfectly well mmap a region with no particular alignment restriction, and then pass a 2MB-aligned address to KVM. In this case, KVM will only use this 2MB-aligned region, and will ignore the range between vma->vm_start and memslot->userspace_addr. It can also choose to place this memslot at whatever alignment it wants in the IPA space. In the end, what matters is the relative alignment of the user space and IPA mappings with respect to a 2M page. They absolutely must be the same if you want to use THP. Cc: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
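Expressed as code, the check that matters is congruence modulo the 2MB section size rather than absolute VMA alignment; a sketch along the lines of the stage-2 fault path:

    /* Only use a THP mapping if the user space address and the IPA
     * are offset identically within a 2MB (PMD-sized) page. */
    if ((memslot->userspace_addr & ~PMD_MASK) !=
        ((memslot->base_gfn << PAGE_SHIFT) & ~PMD_MASK))
            force_pte = true;       /* fall back to 4K pages */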
-
- 22 Dec 2013, 7 commits
-
-
Committed by Christoffer Dall
The arch-generic KVM code expects the cpu field of a vcpu to be -1 if the vcpu is no longer assigned to a cpu. This is used for the optimized make_all_cpus_request path and will be used by the vgic code to check that no vcpus are running. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Christoffer Dall
Support setting the distributor and cpu interface base addresses in the VM physical address space through the KVM_{SET,GET}_DEVICE_ATTR API, in addition to the ARM-specific API. This has the added benefit of being able to share more code in user space and do things in a uniform manner. Also deprecate the older API at the same time, but backwards compatibility will be maintained. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Christoffer Dall
Support creating the ARM VGIC device through the KVM_CREATE_DEVICE ioctl, which can then later be leveraged to use KVM_{GET,SET}_DEVICE_ATTR; this is useful both for setting addresses in a more generic API than the ARM-specific one and for save/restore of VGIC state. Adds KVM_CAP_DEVICE_CTRL to ARM capabilities. Note that we change the check for creating a VGIC from bailing out if any VCPUs were created, to bailing out if any VCPUs were ever run. This is an important distinction that shouldn't break anything, but allows creating the VGIC after the VCPUs have been created. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
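From user space the flow is to create the device and then program the base addresses through the returned device fd; a minimal sketch (example address, no error handling):

    struct kvm_create_device create = {
            .type = KVM_DEV_TYPE_ARM_VGIC_V2,
    };
    ioctl(vm_fd, KVM_CREATE_DEVICE, &create);       /* fills create.fd */

    uint64_t dist_base = 0x2c001000;                /* example guest address */
    struct kvm_device_attr attr = {
            .group = KVM_DEV_ARM_VGIC_GRP_ADDR,
            .attr  = KVM_VGIC_V2_ADDR_TYPE_DIST,
            .addr  = (uint64_t)(unsigned long)&dist_base,
    };
    ioctl(create.fd, KVM_SET_DEVICE_ATTR, &attr);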
-
Committed by Christoffer Dall
Rework the VGIC initialization slightly to allow initialization of the vgic cpu-specific state even if the irqchip (the VGIC) hasn't been created by user space yet. This is safe, because the vgic data structures are already allocated when the CPU is allocated if VGIC support is compiled into the kernel. Further, the init process does not depend on any other information, and the sacrifice is a slight performance degradation for creating VMs in the no-VGIC case. The reason is that the new device control API doesn't mandate creating the VGIC before creating the VCPUs, and it is unreasonable to require user space to create the VGIC before creating the VCPUs. At the same time, move the irqchip_in_kernel check out of kvm_vcpu_first_run_init and into the init function to make the per-vcpu and global init functions symmetric, and add comments on the exported functions, making it a bit easier to understand the init flow by only looking at vgic.c. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Andre Przywara
For migration to work, we need to save (and later restore) the state of each core's virtual generic timer. Since this is per VCPU, we can use the [gs]et_one_reg ioctl and export the three needed registers (control, counter, compare value). Though they live in cp15 space, we don't use the existing list, since they need special accessor functions and the arch timer is optional. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Andre Przywara <andre.przywara@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
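On the user-space side, save/restore then becomes one KVM_GET/SET_ONE_REG call per register; a sketch of the save path (restore mirrors it with KVM_SET_ONE_REG):

    uint64_t cnt;
    struct kvm_one_reg reg = {
            .id   = KVM_REG_ARM_TIMER_CNT,  /* likewise _CTL and _CVAL */
            .addr = (uint64_t)(unsigned long)&cnt,
    };
    ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);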
-
Committed by Christoffer Dall
Initialize the cntvoff at kvm_init_vm time, not when running a VCPU for the first time, because doing it there would overwrite any potentially restored values from user space. Cc: Andre Przywara <andre.przywara@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Christoffer Dall
The current KVM implementation of PSCI returns INVALID_PARAMETERS if the waitqueue for the corresponding CPU is not active. This does not seem correct, since KVM should not care what the specific thread is doing; for example, user space may not have called KVM_RUN on this VCPU yet, or the thread may be busy looping to user space because it received a signal; this is really up to the user space implementation. Instead, we should check specifically that the CPU is marked as being turned off, regardless of the VCPU thread state, and if it is, we shall simply clear the pause flag on the CPU and wake up the thread if it happens to be blocked for us. Further, the implementation seems to be racy when executing multiple VCPU threads. There really isn't a reasonable user space programming scheme to ensure all secondary CPUs have reached kvm_vcpu_first_run_init before turning on the boot CPU. Therefore, set the pause flag on the vcpu at VCPU init time (which can reasonably be expected to be completed for all CPUs by user space before running any VCPUs), clear both this flag and the feature (in case the feature can somehow get set again in the future), and ping the waitqueue on turning on a VCPU using PSCI. Reported-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 17 Dec 2013, 1 commit
-
-
Committed by Lorenzo Pieralisi
Upon CPU shutdown and a consequent warm reboot, the hypervisor CPU state must be re-initialized. This patch implements a CPU PM notifier that, upon warm boot, calls a KVM hook to properly reinitialize the hypervisor state so that the CPU can be safely resumed. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
- 12 Dec 2013, 1 commit
-
-
Committed by Santosh Shilimkar
KVM initialisation fails on architectures implementing virt_to_idmap(), because virt_to_phys() on such architectures won't fetch you the correct idmap page. So update the KVM ARM code to use virt_to_idmap() to fix the issue. Since the KVM code is shared between arm and arm64, we create kvm_virt_to_phys() and handle the redirection in the respective headers. Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
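The redirection boils down to a per-arch define; a paraphrased sketch of the two headers:

    /* arch/arm/include/asm/kvm_mmu.h: the idmap may differ from the
     * linear mapping on these platforms */
    #define kvm_virt_to_phys(x)     virt_to_idmap((unsigned long)(x))

    /* arch/arm64/include/asm/kvm_mmu.h: no such quirk on arm64 */
    #define kvm_virt_to_phys(x)     __virt_to_phys((unsigned long)(x))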
-
- 17 Nov 2013, 1 commit
-
-
Committed by Christoffer Dall
Using virt_to_phys on percpu mappings is horribly wrong, as they may be backed by vmalloc. Introduce kvm_kaddr_to_phys, which translates both types of valid kernel addresses to the corresponding physical address. At the same time, this resolves a typing issue where we were storing the physical address as a 32-bit unsigned long (on arm), truncating the physical address for addresses above the 4GB limit. This caused breakage on Keystone. Cc: <stable@vger.kernel.org> [3.10+] Reported-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
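A sketch of such a helper, close in spirit to the patch: linear-map addresses go through __pa(), while vmalloc-backed (e.g. percpu) addresses are resolved page by page:

    static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
    {
            if (!is_vmalloc_addr(kaddr)) {
                    BUG_ON(!virt_addr_valid(kaddr));
                    return __pa(kaddr);     /* linear (lowmem) mapping */
            }
            /* vmalloc-backed (e.g. percpu) mapping */
            return page_to_phys(vmalloc_to_page(kaddr)) +
                   offset_in_page(kaddr);
    }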
-
- 08 Nov 2013, 2 commits
-
-
Committed by Marc Zyngier
When booting a vcpu using PSCI, make sure we start it with the endianness of the caller. Otherwise, secondaries can be pretty unhappy to execute a BE kernel in LE mode... This conforms to PSCI spec Rev B, 5.13.3. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier
Do the necessary byteswap when host and guest have different views of the universe. Actually, the only case we need to take care of is when the guest is BE. All the other cases are naturally handled. Also be careful about endianness when the data is being memcpy'd from/to the run buffer. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 29 Oct 2013, 1 commit
-
-
Committed by Christoph Lameter
This is the ARM part of Christoph's patchset cleaning up the various uses of __get_cpu_var across the tree. The idea is to convert __get_cpu_var into either an explicit address calculation using this_cpu_ptr() or into a use of this_cpu operations that use the offset. Thereby address calculations are avoided and fewer registers are used when code is generated. [will: fixed debug ref counting checks and pcpu array accesses] Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
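The shape of the conversion, illustrated on one of the per-cpu variables in the arm kvm code:

    /* before: implicit "this CPU" address calculation */
    __get_cpu_var(kvm_arm_running_vcpu) = vcpu;

    /* after: an explicit pointer via this_cpu_ptr() ... */
    *this_cpu_ptr(&kvm_arm_running_vcpu) = vcpu;
    /* ... or, equivalently, an offset-based this_cpu operation */
    this_cpu_write(kvm_arm_running_vcpu, vcpu);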
-
- 22 Oct 2013, 1 commit
-
-
Committed by Marc Zyngier
The KVM PSCI code blindly assumes that vcpu_id and MPIDR are the same thing. This is true when vcpus are organized as a flat topology, but is wrong when trying to emulate any other topology (such as A15 clusters). Change the KVM PSCI CPU_ON code to look at the MPIDR instead of the vcpu_id to pick a target CPU. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
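A sketch of the MPIDR-based target lookup (helper names follow the arm code of that era; treat the details as illustrative):

    struct kvm_vcpu *vcpu = NULL, *tmp;
    unsigned long cpu_id;
    int i;

    /* PSCI CPU_ON: r1 holds the target MPIDR, not a vcpu index */
    cpu_id = *vcpu_reg(source_vcpu, 1) & MPIDR_HWID_BITMASK;

    kvm_for_each_vcpu(i, tmp, source_vcpu->kvm) {
            if ((kvm_vcpu_get_mpidr(tmp) & MPIDR_HWID_BITMASK) == cpu_id) {
                    vcpu = tmp;
                    break;
            }
    }
    /* vcpu == NULL means no such core: return INVALID_PARAMETERS */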
-