1. 11 Jul 2014 · 3 commits
  2. 30 Apr 2014 · 8 commits
  3. 28 Apr 2014 · 1 commit
    • arm: KVM: fix possible misalignment of PGDs and bounce page · 5d4e08c4
      Authored by Mark Salter
      The kvm/mmu code shared by arm and arm64 uses kmalloc() to allocate
      a bounce page (if hypervisor init code crosses a page boundary) and
      hypervisor PGDs. The problem is that kmalloc() does not guarantee
      the proper alignment. In the case of the bounce page, the page sized
      buffer allocated may also cross a page boundary negating the purpose
      and leading to a hang during kvm initialization. Likewise the PGDs
      allocated may not meet the minimum alignment requirements of the
      underlying MMU. This patch uses __get_free_page() to guarantee the
      worst case alignment needs of the bounce page and PGDs on both arm
      and arm64.
      
      Cc: <stable@vger.kernel.org> # 3.10+
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
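      To make the alignment point above concrete, here is a minimal sketch (not the actual patch hunk): kmalloc() only guarantees its minimum allocation alignment, so a PAGE_SIZE buffer may still straddle a page boundary, whereas a page from __get_free_page() is always naturally page-aligned, which is what both the bounce page and the HYP PGDs need. The helper names are illustrative, not taken from the patch.
      	#include <linux/gfp.h>
      	
      	/*
      	 * Sketch only: a page obtained from the page allocator is always
      	 * page aligned, so it can never cross a page boundary and it also
      	 * satisfies the PGD alignment required by the underlying MMU.
      	 */
      	static void *alloc_hyp_page(void)
      	{
      		return (void *)__get_free_page(GFP_KERNEL);
      	}
      	
      	static void free_hyp_page(void *page)
      	{
      		free_page((unsigned long)page);
      	}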
  4. 26 Apr 2014 · 1 commit
  5. 20 Mar 2014 · 1 commit
    • arm, kvm: Fix CPU hotplug callback registration · 8146875d
      Authored by Srivatsa S. Bhat
      On 03/15/2014 12:40 AM, Christoffer Dall wrote:
      > On Fri, Mar 14, 2014 at 11:13:29AM +0530, Srivatsa S. Bhat wrote:
      >> On 03/13/2014 04:51 AM, Christoffer Dall wrote:
      >>> On Tue, Mar 11, 2014 at 02:05:38AM +0530, Srivatsa S. Bhat wrote:
      >>>> Subsystems that want to register CPU hotplug callbacks, as well as perform
      >>>> initialization for the CPUs that are already online, often do it as shown
      >>>> below:
      >>>>
      [...]
      >>> Just so we're clear, the existing code was simply racy as not prone to
      >>> deadlocks, right?
      
      >>> This makes it clear that the test above for compatible CPUs can be quite
      >>> easily evaded by using CPU hotplug, but we don't really have a good
      >>> solution for handling that yet...  Hmmm, grumble grumble, I guess if you
      >>> hotplug unsupported CPUs on a KVM/ARM system for now, stuff will break.
      
      >>
      >> In this particular case, there was no deadlock possibility, rather the
      >> existing code had insufficient synchronization against CPU hotplug.
      >>
      >> init_hyp_mode() would invoke cpu_init_hyp_mode() on currently online CPUs
      >> using on_each_cpu(). If a CPU came online after this point and before calling
      >> register_cpu_notifier(), that CPU would remain uninitialized because this
      >> subsystem would miss the hot-online event. This patch fixes this bug and
      >> also uses the new synchronization method (instead of get/put_online_cpus())
      >> to ensure that we don't deadlock with CPU hotplug.
      >>
      >
      > Yes, that was my conclusion as well.  Thanks for clarifying.  (It could
      > be noted in the commit message as well if you should feel so inclined).
      >
      
      Please find the patch with updated changelog (and your Ack) below.
      (No changes in code).
      
      From: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Subject: [PATCH] arm, kvm: Fix CPU hotplug callback registration
      
      Subsystems that want to register CPU hotplug callbacks, as well as perform
      initialization for the CPUs that are already online, often do it as shown
      below:
      
      	get_online_cpus();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	register_cpu_notifier(&foobar_cpu_notifier);
      
      	put_online_cpus();
      
      This is wrong, since it is prone to ABBA deadlocks involving the
      cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
      with CPU hotplug operations).
      
      Instead, the correct and race-free way of performing the callback
      registration is:
      
      	cpu_notifier_register_begin();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	/* Note the use of the double underscored version of the API */
      	__register_cpu_notifier(&foobar_cpu_notifier);
      
      	cpu_notifier_register_done();
      
      In the existing arm kvm code, there is no synchronization with CPU hotplug
      to avoid missing the hotplug events that might occur after invoking
      init_hyp_mode() and before calling register_cpu_notifier(). Fix this bug
      and also use the new synchronization method (instead of get/put_online_cpus())
      to ensure that we don't deadlock with CPU hotplug.
      
      Cc: Gleb Natapov <gleb@kernel.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ingo Molnar <mingo@kernel.org>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
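      For reference, a self-contained sketch of the race-free pattern shown above, fleshed out with the placeholder foobar/init_cpu() names from the changelog (they are not real kernel symbols); the notifier action handling is illustrative only.
      	#include <linux/init.h>
      	#include <linux/cpu.h>
      	#include <linux/notifier.h>
      	
      	static void init_cpu(int cpu)
      	{
      		/* placeholder: bring per-CPU state for @cpu up to date */
      	}
      	
      	static int foobar_cpu_notify(struct notifier_block *self,
      				     unsigned long action, void *hcpu)
      	{
      		if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE)
      			init_cpu((long)hcpu);	/* catch CPUs onlined later */
      		return NOTIFY_OK;
      	}
      	
      	static struct notifier_block foobar_cpu_notifier = {
      		.notifier_call = foobar_cpu_notify,
      	};
      	
      	static int __init foobar_init(void)
      	{
      		int cpu;
      	
      		cpu_notifier_register_begin();
      	
      		for_each_online_cpu(cpu)
      			init_cpu(cpu);		/* CPUs that are already online */
      	
      		/* double-underscore variant: registration lock is already held */
      		__register_cpu_notifier(&foobar_cpu_notifier);
      	
      		cpu_notifier_register_done();
      		return 0;
      	}
      The window between initializing the online CPUs and registering the notifier is closed because both happen under cpu_notifier_register_begin()/done(), so no hot-online event can be missed in between.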
  6. 03 Mar 2014 · 9 commits
  7. 28 Feb 2014 · 1 commit
    • arm/arm64: KVM: detect CPU reset on CPU_PM_EXIT · b20c9f29
      Authored by Marc Zyngier
      Commit 1fcf7ce0 (arm: kvm: implement CPU PM notifier) added
      support for CPU power-management, using a cpu_notifier to re-init
      KVM on a CPU that entered CPU idle.
      
      The code assumed that a CPU entering idle would actually be powered
      off, losing its state entirely, and would then need to be
      reinitialized. It turns out that this is not always the case, and
      some HW performs CPU PM without actually killing the core. In this
      case, we try to reinitialize KVM while it is still live. It ends up
      badly, as reported by Andre Przywara (using a Calxeda Midway):
      
      [    3.663897] Kernel panic - not syncing: unexpected prefetch abort in Hyp mode at: 0x685760
      [    3.663897] unexpected data abort in Hyp mode at: 0xc067d150
      [    3.663897] unexpected HVC/SVC trap in Hyp mode at: 0xc0901dd0
      
      The trick here is to detect if we've been through a full re-init or
      not by looking at HVBAR (VBAR_EL2 on arm64). This involves
      implementing the backend for __hyp_get_vectors in the main KVM HYP
      code (rather small), and checking the return value against the
      default one when the CPU notifier is called on CPU_PM_EXIT.
      Reported-by: Andre Przywara <osp@andrep.de>
      Tested-by: Andre Przywara <osp@andrep.de>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Rob Herring <rob.herring@linaro.org>
      Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
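      A sketch of the check described above, assuming the arm-side names __hyp_get_vectors(), hyp_default_vectors and cpu_init_hyp_mode() used by the KVM/ARM code of that period; treat it as an illustration of the idea rather than the exact diff.
      	#include <linux/cpu_pm.h>
      	#include <linux/notifier.h>
      	
      	static int hyp_init_cpu_pm_notifier(struct notifier_block *self,
      					    unsigned long cmd, void *v)
      	{
      		/*
      		 * Only re-initialize HYP mode if the core really lost its
      		 * state: a CPU that went through a full power-down comes
      		 * back with the default (stub) vectors in HVBAR/VBAR_EL2.
      		 */
      		if (cmd == CPU_PM_EXIT &&
      		    __hyp_get_vectors() == hyp_default_vectors) {
      			cpu_init_hyp_mode(NULL);
      			return NOTIFY_OK;
      		}
      	
      		return NOTIFY_DONE;
      	}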
  8. 09 Jan 2014 · 2 commits
  9. 22 Dec 2013 · 7 commits
    • arm/arm64: kvm: Set vcpu->cpu to -1 on vcpu_put · e9b152cb
      Authored by Christoffer Dall
      The arch-generic KVM code expects the cpu field of a vcpu to be -1 if
      the vcpu is no longer assigned to a cpu.  This is used for the optimized
      make_all_cpus_request path and will be used by the vgic code to check
      that no vcpus are running.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
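      The invariant is small enough to show directly; this is a trimmed sketch of the idea (kvm_arch_vcpu_put() does more than this on arm, and vcpu_is_loaded() is a hypothetical helper, not a kernel function).
      	#include <linux/kvm_host.h>
      	
      	void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
      	{
      		/* -1 means "not resident on any physical CPU" */
      		vcpu->cpu = -1;
      	}
      	
      	/* hypothetical helper showing how other code can rely on the invariant */
      	static bool vcpu_is_loaded(struct kvm_vcpu *vcpu)
      	{
      		return vcpu->cpu != -1;
      	}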
    • KVM: arm-vgic: Set base addr through device API · ce01e4e8
      Authored by Christoffer Dall
      Support setting the distributor and cpu interface base addresses in the
      VM physical address space through the KVM_{SET,GET}_DEVICE_ATTR API
      in addition to the ARM specific API.
      
      This has the added benefit of being able to share more code in user
      space and do things in a uniform manner.
      
      Also deprecate the older API at the same time, but backwards
      compatibility will be maintained.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
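      From user space the new path looks roughly like the sketch below, assuming a VGIC device fd obtained via KVM_CREATE_DEVICE (see the next entry); the wrapper name is illustrative.
      	#include <stdint.h>
      	#include <sys/ioctl.h>
      	#include <linux/kvm.h>
      	
      	/* vgic_fd: device fd returned by KVM_CREATE_DEVICE for the VGIC */
      	static int vgic_set_dist_addr(int vgic_fd, uint64_t dist_base)
      	{
      		struct kvm_device_attr attr = {
      			.group = KVM_DEV_ARM_VGIC_GRP_ADDR,
      			.attr  = KVM_VGIC_V2_ADDR_TYPE_DIST,
      			.addr  = (uint64_t)(unsigned long)&dist_base,
      		};
      	
      		/* the cpu interface base uses KVM_VGIC_V2_ADDR_TYPE_CPU instead */
      		return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
      	}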
    • KVM: arm-vgic: Support KVM_CREATE_DEVICE for VGIC · 7330672b
      Authored by Christoffer Dall
      Support creating the ARM VGIC device through the KVM_CREATE_DEVICE
      ioctl, which can then be leveraged to use KVM_{GET,SET}_DEVICE_ATTR;
      this is useful both for setting addresses through a more generic API
      than the ARM-specific one and for save/restore of VGIC state.
      
      Adds KVM_CAP_DEVICE_CTRL to ARM capabilities.
      
      Note that we change the check for creating a VGIC from bailing out if
      any VCPUs were created, to bailing out if any VCPUs were ever run.  This
      is an important distinction that shouldn't break anything, but allows
      creating the VGIC after the VCPUs have been created.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
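      A minimal user-space sketch of the new ioctl; error handling is trimmed and the wrapper name is illustrative.
      	#include <sys/ioctl.h>
      	#include <linux/kvm.h>
      	
      	static int vgic_create(int vm_fd)
      	{
      		struct kvm_create_device dev = {
      			.type = KVM_DEV_TYPE_ARM_VGIC_V2,
      		};
      	
      		/* on success the kernel fills in dev.fd with the new device fd */
      		if (ioctl(vm_fd, KVM_CREATE_DEVICE, &dev) < 0)
      			return -1;
      	
      		return dev.fd;
      	}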
    • ARM: KVM: Allow creating the VGIC after VCPUs · e1ba0207
      Authored by Christoffer Dall
      Rework the VGIC initialization slightly to allow initialization of the
      vgic cpu-specific state even if the irqchip (the VGIC) hasn't been
      created by user space yet.  This is safe, because the vgic data
      structures are already allocated when the CPU is allocated if VGIC
      support is compiled into the kernel.  Further, the init process does not
      depend on any other information and the sacrifice is a slight
      performance degradation for creating VMs in the no-VGIC case.
      
      The reason is that the new device control API doesn't mandate creating
      the VGIC before creating the VCPU and it is unreasonable to require user
      space to create the VGIC before creating the VCPUs.
      
      At the same time move the irqchip_in_kernel check out of
      kvm_vcpu_first_run_init and into the init function to make the per-vcpu
      and global init functions symmetric and add comments on the exported
      functions making it a bit easier to understand the init flow by only
      looking at vgic.c.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • ARM/KVM: save and restore generic timer registers · 39735a3a
      Authored by Andre Przywara
      For migration to work we need to save (and later restore) the state of
      each core's virtual generic timer.
      Since this is per VCPU, we can use the [gs]et_one_reg ioctl and export
      the three needed registers (control, counter, compare value).
      Though they live in cp15 space, we don't use the existing list, since
      they need special accessor functions and the arch timer is optional.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
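      A user-space sketch of saving one of the three registers through the one_reg interface (restore is the symmetric KVM_SET_ONE_REG call); the wrapper name is illustrative and the register ID is one of those this patch exports.
      	#include <stdint.h>
      	#include <sys/ioctl.h>
      	#include <linux/kvm.h>
      	
      	static int timer_get_cnt(int vcpu_fd, uint64_t *cnt)
      	{
      		struct kvm_one_reg reg = {
      			.id   = KVM_REG_ARM_TIMER_CNT,	/* also: _CTL and _CVAL */
      			.addr = (uint64_t)(unsigned long)cnt,
      		};
      	
      		return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
      	}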
    • arm/arm64: KVM: arch_timer: Initialize cntvoff at kvm_init · a1a64387
      Authored by Christoffer Dall
      Initialize the cntvoff at kvm_init_vm time, not just before running the
      VCPUs for the first time, because doing it then would overwrite any
      values potentially restored from user space.
      
      Cc: Andre Przywara <andre.przywara@linaro.org>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm: KVM: Don't return PSCI_INVAL if waitqueue is inactive · 478a8237
      Authored by Christoffer Dall
      The current KVM implementation of PSCI returns INVALID_PARAMETERS if the
      waitqueue for the corresponding CPU is not active.  This does not seem
      correct, since KVM should not care what the specific thread is doing,
      for example, user space may not have called KVM_RUN on this VCPU yet or
      the thread may be busy looping to user space because it received a
      signal; this is really up to the user space implementation.  Instead we
      should check specifically that the CPU is marked as being turned off,
      regardless of the VCPU thread state, and if it is, we shall
      simply clear the pause flag on the CPU and wake up the thread if it
      happens to be blocked for us.
      
      Further, the implementation seems to be racy when executing multiple
      VCPU threads.  There really isn't a reasonable user space programming
      scheme to ensure all secondary CPUs have reached kvm_vcpu_first_run_init
      before turning on the boot CPU.
      
      Therefore, set the pause flag on the vcpu at VCPU init time (which can
      reasonably be expected to be completed for all CPUs by user space before
      running any VCPUs) and clear both this flag and the feature (in case the
      feature can somehow get set again in the future) and ping the waitqueue
      on turning on a VCPU using PSCI.
      Reported-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
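      A sketch of the PSCI CPU_ON path described above; the arch.pause field and the kvm_arch_vcpu_wq() helper are assumptions based on the arm port of the time, not a verbatim quote of the patch.
      	#include <linux/kvm_host.h>
      	#include <linux/wait.h>
      	
      	static void vcpu_psci_cpu_on(struct kvm_vcpu *vcpu)
      	{
      		wait_queue_head_t *wq = kvm_arch_vcpu_wq(vcpu);
      	
      		vcpu->arch.pause = false;	/* mark the target vcpu as turned on */
      		smp_mb();			/* make the flag visible before the wake-up */
      	
      		/* kick the vcpu thread if it happens to be blocked on the waitqueue */
      		wake_up_interruptible(wq);
      	}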
  10. 17 Dec 2013 · 1 commit
  11. 12 Dec 2013 · 1 commit
  12. 17 Nov 2013 · 1 commit
  13. 08 Nov 2013 · 2 commits
  14. 29 Oct 2013 · 1 commit
  15. 22 Oct 2013 · 1 commit