1. 23 October 2015, 1 commit
    • arm/arm64: KVM: arch_timer: Only schedule soft timer on vcpu_block · d35268da
      Committed by Christoffer Dall
      We currently schedule a soft timer every time we exit the guest if the
      timer did not expire while running the guest.  This is really not
      necessary, because the only work we do in the timer work function is to
      kick the vcpu.
      
      Kicking the vcpu does two things:
      (1) If the vcpu thread is on a waitqueue, make it runnable and remove it
      from the waitqueue.
      (2) If the vcpu is running on a different physical CPU from the one
      doing the kick, it sends a reschedule IPI.
      
      The second case cannot happen, because the soft timer is only ever
      scheduled when the vcpu is not running.  The first case is only relevant
      when the vcpu thread is on a waitqueue, which is only the case when the
      vcpu thread has called kvm_vcpu_block().
      
      Therefore, we only need to make sure a timer is scheduled for
      kvm_vcpu_block(), which we do by encapsulating all calls to
      kvm_vcpu_block() with kvm_timer_{un}schedule calls.
      
      Additionally, we only schedule a soft timer if the timer is enabled and
      unmasked, since it is useless otherwise.
      
      Note that theoretically userspace can use the SET_ONE_REG interface to
      change registers that should cause the timer to fire, even if the vcpu
      is blocked without a scheduled timer, but this case was not supported
      before this patch and we leave it for future work for now.
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
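      The bracketing described above can be made concrete with a short sketch. Only kvm_vcpu_block() and the kvm_timer_schedule()/kvm_timer_unschedule() names come from the commit message; the hook names, struct fields, and helpers used below are assumptions for illustration, not the literal patch.

      ```c
      /*
       * Sketch only: arm a soft timer strictly around kvm_vcpu_block(), and
       * only when the virtual timer is enabled and unmasked.  Hook names and
       * helpers other than kvm_timer_schedule()/kvm_timer_unschedule() are
       * assumptions.
       */
      void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
      {
          kvm_timer_schedule(vcpu);    /* vcpu is about to wait on its waitqueue */
      }

      void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
      {
          kvm_timer_unschedule(vcpu);  /* back to running: the hardware timer takes over */
      }

      void kvm_timer_schedule(struct kvm_vcpu *vcpu)
      {
          struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;

          /* A disabled or masked timer can never fire, so there is nothing to wait for. */
          if (!(timer->cntv_ctl & ARCH_TIMER_CTRL_ENABLE) ||
              (timer->cntv_ctl & ARCH_TIMER_CTRL_IT_MASK))
              return;

          /* Program a host hrtimer for the time remaining until the guest deadline. */
          timer_arm(timer, kvm_timer_compute_delta(vcpu));  /* hypothetical helpers */
      }
      ```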
  2. 21 October 2015, 1 commit
    • arm/arm64: KVM: Fix arch timer behavior for disabled interrupts · cff9211e
      Committed by Christoffer Dall
      We have an interesting issue when the guest disables the timer interrupt
      on the VGIC, which happens when turning VCPUs off using PSCI, for
      example.
      
      The problem is that because the guest disables the virtual interrupt at
      the VGIC level, we never inject interrupts to the guest and therefore
      never mark the interrupt as active on the physical distributor.  The
      host also never takes the timer interrupt (we only use the timer device
      to trigger a guest exit and everything else is done in software), so the
      interrupt does not become active through normal means.
      
      The result is that we keep entering the guest with a programmed timer
      that will always fire as soon as we context switch the hardware timer
      state and run the guest, preventing forward progress for the VCPU.
      
      Since the active state on the physical distributor is really part of the
      timer logic, it is the job of our virtual arch timer driver to manage
      this state.
      
      The timer->map->active boolean field indicates whether we have signalled
      this interrupt to the vgic and if that interrupt is still pending or
      active.  As long as that is the case, the hardware doesn't have to
      generate physical interrupts and therefore we mark the interrupt as
      active on the physical distributor.
      
      We also have to restore the pending state of an interrupt that was
      queued to an LR but was later retired from that LR for some reason
      while its state was still pending.
      
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Reported-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
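      A hedged sketch of the guest-entry side of this change follows. irq_set_irqchip_state() is the generic kernel API for manipulating per-interrupt state such as the active bit; whether this exact commit drives the distributor through it, and the function and variable names used here, are assumptions.

      ```c
      /*
       * Sketch only: mirror timer->map->active into the active state of the
       * physical timer interrupt before entering the guest, so the hardware
       * timer cannot fire again immediately and livelock the VCPU.
       * host_vtimer_irq and the function name are assumptions.
       */
      static void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
      {
          struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;

          /*
           * timer->map->active means we have already signalled this interrupt
           * to the vgic and it is still pending or active there; in that case
           * keep it active on the physical distributor as well.
           */
          irq_set_irqchip_state(host_vtimer_irq, IRQCHIP_STATE_ACTIVE,
                                timer->map->active);
      }
      ```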
  3. 04 September 2015, 1 commit
  4. 12 August 2015, 1 commit
  5. 14 March 2015, 1 commit
    • arm/arm64: KVM: Fix migration race in the arch timer · 1a748478
      Committed by Christoffer Dall
      When a VCPU is no longer running, we currently check to see if it has a
      timer scheduled in the future, and if it does, we schedule a host
      hrtimer to notify us in case the timer expires while the VCPU is still
      not running.  When the hrtimer fires, we mask the guest's timer and
      inject the timer IRQ (still relying on the guest unmasking the timer when
      it receives the IRQ).
      
      This is all good and fine, but when migrating a VM (checkpoint/restore)
      this introduces a race.  It is unlikely, but possible, for the following
      sequence of events to happen:
      
       1. Userspace stops the VM
       2. Hrtimer for VCPU is scheduled
       3. Userspace checkpoints the VGIC state (no pending timer interrupts)
       4. The hrtimer fires, schedules work in a workqueue
       5. Workqueue function runs, masks the timer and injects timer interrupt
       6. Userspace checkpoints the timer state (timer masked)
      
      At restore time, you end up with a masked timer without any pending
      timer interrupts, and your guest halts, never receiving timer interrupts.
      
      Fix this by only kicking the VCPU in the workqueue function, and then
      sampling the expired state of the timer when entering the guest again,
      injecting the interrupt and masking the timer only at that point.
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
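      A sketch of the entry-time check this describes is shown below; the field and helper names mirror the arch timer code of this era but should be read as assumptions rather than the exact patch.

      ```c
      /*
       * Sketch only: the hrtimer/workqueue path merely kicks the VCPU; the
       * decision to inject is taken here, on guest entry, by sampling the
       * timer registers.
       */
      static bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
      {
          struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
          u64 cval, now;

          /* A disabled or masked timer cannot fire. */
          if (!(timer->cntv_ctl & ARCH_TIMER_CTRL_ENABLE) ||
              (timer->cntv_ctl & ARCH_TIMER_CTRL_IT_MASK))
              return false;

          cval = timer->cntv_cval;
          now  = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;

          return cval <= now;
      }

      /* Called on every guest entry: inject and mask only if truly expired. */
      void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
      {
          if (kvm_timer_should_fire(vcpu))
              kvm_timer_inject_irq(vcpu);  /* masks CNTV_CTL and raises the vgic IRQ */
      }
      ```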
  6. 31 December 2014, 1 commit
  7. 15 December 2014, 1 commit
    • arm/arm64: KVM: Require in-kernel vgic for the arch timers · 05971120
      Committed by Christoffer Dall
      It is currently possible to run a VM with architected timers support
      without creating an in-kernel VGIC, which will result in interrupts from
      the virtual timer going nowhere.
      
      To address this issue, move the architected timers initialization to the
      time when we run a VCPU for the first time, and then only initialize
      (and enable) the architected timers if we have a properly created and
      initialized in-kernel VGIC.
      
      When injecting interrupts from the virtual timer to the vgic, the
      current setup should ensure that this never calls an on-demand init of
      the VGIC, which is the only call path that could return an error from
      kvm_vgic_inject_irq(), so capture the return value and raise a warning
      if there's an error there.
      
      We also change kvm_timer_init() from returning an int to being a void
      function, since it always succeeds.
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
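      The two points above (gating on an initialized in-kernel VGIC, and warning if injection ever fails) could look roughly like the sketch below; the kvm_timer_enable() name, the enabled flag, and the overall structure are assumptions for illustration.

      ```c
      /*
       * Sketch only: enable the timer at first VCPU run, and only when a
       * properly initialized in-kernel VGIC exists; otherwise the virtual
       * timer interrupt would have nowhere to go.
       */
      int kvm_timer_enable(struct kvm_vcpu *vcpu)
      {
          struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;

          if (timer->enabled)
              return 0;

          if (!irqchip_in_kernel(vcpu->kvm) || !vgic_initialized(vcpu->kvm))
              return -ENODEV;

          timer->enabled = 1;
          return 0;
      }

      static void kvm_timer_inject_irq(struct kvm_vcpu *vcpu)
      {
          struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
          int ret;

          /* With an initialized vgic this should never fail, so just warn. */
          ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
                                    timer->irq->irq, timer->irq->level);
          WARN_ON(ret);
      }
      ```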
  8. 08 April 2014, 1 commit
  9. 22 December 2013, 1 commit
  10. 27 June 2013, 1 commit
  11. 19 May 2013, 1 commit
    • ARM: KVM: move GIC/timer code to a common location · 7275acdf
      Committed by Marc Zyngier
      As KVM/arm64 is looming on the horizon, it makes sense to move some
      of the common code to a single location in order to reduce duplication.
      
      The code could live anywhere. Actually, most of KVM is already built
      with a bunch of ugly ../../.. hacks in the various Makefiles, so we're
      not exactly talking about style here. But maybe it is time to start
      moving into a less ugly direction.
      
      The include files must be in a "public" location, as they are accessed
      from non-KVM files (arch/arm/kernel/asm-offsets.c).
      
      For this purpose, introduce two new locations:
      - virt/kvm/arm/ : x86 and ia64 already share the ioapic code in
        virt/kvm, so this could be seen as a (very ugly) precedent.
      - include/kvm/  : there is already an include/xen, and while the
        intent is slightly different, this seems as good a location as
        any
      
      Eventually, we should probably have independent Makefiles at every
      level (just like everywhere else in the kernel), but this is just
      the first step.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
  12. 29 April 2013, 1 commit
  13. 12 February 2013, 1 commit