1. 02 Apr, 2010 (1 commit)
  2. 30 Mar, 2010 (1 commit)
  3. 04 Mar, 2010 (2 commits)
    • KVM: Rework VCPU state writeback API · ea375f9a
      Authored by Jan Kiszka
      This grand cleanup drops all reset and vmsave/load related
      synchronization points in favor of four(!) generic hooks:
      
      - cpu_synchronize_all_states in qemu_savevm_state_complete
        (initial sync from kernel before vmsave)
      - cpu_synchronize_all_post_init in qemu_loadvm_state
        (writeback after vmload)
      - cpu_synchronize_all_post_init in main after machine init
      - cpu_synchronize_all_post_reset in qemu_system_reset
        (writeback after system reset)
      
      These writeback points, plus the existing one on VCPU exec after
      cpu_synchronize_state, map onto three levels of writeback:
      
      - KVM_PUT_RUNTIME_STATE (during runtime, other VCPUs continue to run)
      - KVM_PUT_RESET_STATE   (on synchronous system reset, all VCPUs stopped)
      - KVM_PUT_FULL_STATE    (on init or vmload, all VCPUs stopped as well)
      
      This level is passed to the arch-specific VCPU state writing function
      that will decide which concrete substates need to be written. That way,
      no writer of load, save or reset functions that interact with in-kernel
      KVM states will ever have to worry about synchronization again. That
      also means that many sources of races, segfaults and deadlocks are
      eliminated.
      
      cpu_synchronize_state remains untouched, just as Anthony suggested. We
      continue to need it before reading or writing of VCPU states that are
      also tracked by in-kernel KVM subsystems.
      
      Consequently, this patch removes many cpu_synchronize_state calls that
      are now redundant, as well as the remaining explicit register syncs.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      ea375f9a
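
      The three levels and the generic hooks can be pictured with the sketch
      below. It is only an illustration of the scheme described above, not the
      actual QEMU code: the VCPU struct, the stubbed arch writer and the hook
      names are simplified placeholders.

      ```c
      #include <stdbool.h>

      /* Writeback levels requested by the generic hooks; the arch-specific
       * writer decides which concrete substates each level touches. */
      typedef enum {
          KVM_PUT_RUNTIME_STATE,  /* during runtime, other VCPUs keep running    */
          KVM_PUT_RESET_STATE,    /* synchronous system reset, all VCPUs stopped */
          KVM_PUT_FULL_STATE      /* init or vmload, all VCPUs stopped           */
      } KVMPutLevel;

      typedef struct VCPU {
          bool dirty;             /* user-space copy newer than the kernel copy */
      } VCPU;

      /* placeholder for the per-arch state writer */
      static int kvm_arch_put_registers_stub(VCPU *cpu, KVMPutLevel level)
      {
          (void)cpu; (void)level;
          return 0;
      }

      /* called from qemu_system_reset: writeback of the reset state */
      void cpu_synchronize_post_reset(VCPU *cpu)
      {
          kvm_arch_put_registers_stub(cpu, KVM_PUT_RESET_STATE);
          cpu->dirty = false;
      }

      /* called after machine init and from qemu_loadvm_state */
      void cpu_synchronize_post_init(VCPU *cpu)
      {
          kvm_arch_put_registers_stub(cpu, KVM_PUT_FULL_STATE);
          cpu->dirty = false;
      }
      ```
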
    • KVM: Rework of guest debug state writing · b0b1d690
      Authored by Jan Kiszka
      So far we synchronized any dirty VCPU state back into the kernel before
      updating the guest debug state. This was a workaround for a deficiency
      in x86 kernels before 2.6.33. But as this is an arch-dependent issue, it
      is better handled in the x86 part of KVM, so the writeback point can be
      removed from the generic code. This also avoids overwriting the flushed
      state later on if user space decides to change some more registers
      before resuming the guest.
      
      We furthermore need to reinject guest exceptions via the appropriate
      mechanism: KVM_SET_GUEST_DEBUG for older kernels and KVM_SET_VCPU_EVENTS
      for recent ones. Using both mechanisms at the same time causes state
      corruption.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      b0b1d690
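
      A minimal sketch of choosing exactly one reinjection path, assuming a
      boolean flag set from a KVM_CAP_VCPU_EVENTS probe; both injection
      helpers below are placeholders, not the real QEMU functions.

      ```c
      #include <stdbool.h>

      static bool have_vcpu_events;   /* would be set from a KVM_CAP_VCPU_EVENTS probe */

      /* placeholders for the two injection paths named in the message above */
      static int put_vcpu_events(void *cpu)                 { (void)cpu; return 0; }
      static int inject_via_guest_debug(void *cpu, int exc) { (void)cpu; (void)exc; return 0; }

      /* pick one mechanism; never use both for the same exception */
      int reinject_guest_exception(void *cpu, int pending_exc)
      {
          if (have_vcpu_events) {
              return put_vcpu_events(cpu);                  /* recent kernels */
          }
          return inject_via_guest_debug(cpu, pending_exc);  /* older kernels  */
      }
      ```
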
  4. 23 Feb, 2010 (1 commit)
  5. 22 Feb, 2010 (2 commits)
  6. 11 Feb, 2010 (1 commit)
  7. 10 Feb, 2010 (2 commits)
  8. 04 Feb, 2010 (2 commits)
    • KVM: Move and rename regs_modified · 9ded2744
      Authored by Jan Kiszka
      Touching the user space representation of KVM's VCPU state is,
      naturally, a per-VCPU thing. So move the dirty flag into KVM_CPU_COMMON
      and take the chance to rename it so that it reflects its true meaning.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      9ded2744
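
      An illustrative declaration of the idea, assuming hypothetical struct
      and field names (the actual QEMU names may differ): the dirty flag sits
      alongside the other per-VCPU KVM bookkeeping rather than in a global.

      ```c
      #include <stdbool.h>

      /* hypothetical per-VCPU common block; names are illustrative only */
      typedef struct KVMCPUCommonSketch {
          int   kvm_fd;          /* this VCPU's fd from KVM_CREATE_VCPU      */
          void *kvm_run;         /* mmap'ed kvm_run area of this VCPU        */
          bool  kvm_vcpu_dirty;  /* user-space register copy needs writeback */
      } KVMCPUCommonSketch;
      ```
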
    • kvm: Flush coalesced MMIO buffer periodically · 62a2744c
      Authored by Sheng Yang
      The default behaviour of coalesced MMIO is to cache writes in a buffer until:
      1. the buffer is full, or
      2. the VCPU exits to QEMU for some other reason.
      
      But this can delay writes for a long time when:
      1. each MMIO write is small,
      2. the interval between writes is long, and
      3. the guest rarely needs input or access to other devices.
      
      This was observed on an experimental embedded system whose test image
      simply prints "test" once per second. The output under plain QEMU meets
      expectations, but under KVM it is delayed by several seconds.
      
      Per Avi's suggestion, flushing of the coalesced MMIO buffer is now hooked
      into the VGA update handler. This way, no explicit VCPU exit to QEMU is
      needed to handle the issue.
      Signed-off-by: Sheng Yang <sheng@linux.intel.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      62a2744c
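
      The flush itself can be sketched as draining a ring shared with the
      kernel; the structs below are simplified stand-ins for the
      kvm_coalesced_mmio layout in <linux/kvm.h>, and do_mmio_write() is a
      placeholder for the device dispatch. A periodic caller such as the VGA
      update handler keeps the buffer from sitting unflushed for seconds.

      ```c
      #include <stdint.h>

      #define RING_MAX 64

      struct mmio_entry { uint64_t phys_addr; uint32_t len; uint8_t data[8]; };
      struct mmio_ring  { uint32_t first, last; struct mmio_entry ent[RING_MAX]; };

      /* placeholder: a device model would dispatch the write here */
      static void do_mmio_write(uint64_t addr, const uint8_t *data, uint32_t len)
      {
          (void)addr; (void)data; (void)len;
      }

      /* drain all entries the kernel has queued since the last flush */
      void flush_coalesced_mmio(struct mmio_ring *ring)
      {
          while (ring->first != ring->last) {
              struct mmio_entry *e = &ring->ent[ring->first];
              do_mmio_write(e->phys_addr, e->data, e->len);
              ring->first = (ring->first + 1) % RING_MAX;
          }
      }
      ```
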
  9. 04 Dec, 2009 (2 commits)
  10. 17 Nov, 2009 (1 commit)
  11. 13 Nov, 2009 (1 commit)
  12. 12 Oct, 2009 (1 commit)
  13. 05 Oct, 2009 (2 commits)
  14. 02 Oct, 2009 (2 commits)
  15. 21 Sep, 2009 (1 commit)
  16. 12 Sep, 2009 (1 commit)
    • Fix sys-queue.h conflict for good · 72cf2d4f
      Authored by Blue Swirl
      Problem: Our file sys-queue.h is a copy of the BSD file, but there are
      some additions and it's not entirely compatible. Because of that, there
      have been conflicts with system headers on BSD systems. Hacks were
      introduced in commits 15cc9235, f40d7537, 96555a96 and 3990d09a, but
      those fixes were fragile.
      
      Solution: Avoid the conflict entirely by renaming the functions and the
      file. Revert the previous hacks.
      Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
      72cf2d4f
  17. 28 Aug, 2009 (1 commit)
  18. 28 Jul, 2009 (4 commits)
    • Revert "Fake dirty loggin when it's not there" · 6e489f3f
      Authored by Anthony Liguori
      This reverts commit bd836776.
      
      PPC should just implement dirty logging so we can avoid all the fallout
      from this changeset.
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      6e489f3f
    • Fix broken build · fc5d642f
      Authored by Luiz Capitulino
      The only caller of on_vcpu() is protected by #ifdef
      KVM_CAP_SET_GUEST_DEBUG, so protect on_vcpu() too, otherwise QEMU may
      fail to build.
      Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      fc5d642f
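
      The shape of the fix, in a simplified form (types reduced to void
      pointers): the helper lives under the same #ifdef as its only caller, so
      builds without KVM_CAP_SET_GUEST_DEBUG do not carry an unused helper
      (which can break the build when warnings are treated as errors).

      ```c
      /* simplified sketch; in QEMU on_vcpu() runs func(data) on the given VCPU */
      #ifdef KVM_CAP_SET_GUEST_DEBUG
      static void on_vcpu(void *env, void (*func)(void *data), void *data)
      {
          (void)env;
          func(data);
      }
      #endif
      ```
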
    • Use Little Endian for Dirty Log · 96c1606b
      Authored by Alexander Graf
      We currently use host endian long types to store information
      in the dirty bitmap.
      
      This works reasonably well on Little Endian targets, because the
      u32 after the first contains the next 32 bits. On Big Endian this
      breaks completely though, forcing us to be inventive here.
      
      So Ben suggested always using Little Endian, which looks reasonable.
      
      We only have the dirty bitmap implemented on Little Endian targets so
      far, and since PowerPC would be the first Big Endian platform, we might
      as well always use Little Endian; the switch takes little effort and
      does not break existing targets.
      
      This is the userspace part of the patch. It shouldn't change anything
      for existing targets, but it helps PowerPC.
      
      It replaces my older patch called "Use 64bit pointer for dirty log".
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      96c1606b
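
      A self-contained sketch of what "always little-endian, always 64-bit"
      means for the consumer: each bitmap word is byte-swapped on big-endian
      hosts before the dirty bits are walked. Function names here are
      illustrative, not QEMU's.

      ```c
      #include <stdint.h>
      #include <stddef.h>

      /* convert a little-endian 64-bit word to host byte order */
      static uint64_t le64_to_host(uint64_t le)
      {
      #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
          return __builtin_bswap64(le);
      #else
          return le;
      #endif
      }

      /* call fn(page_index) for every dirty page recorded in the bitmap */
      void for_each_dirty_page(const uint64_t *bitmap, size_t nwords,
                               void (*fn)(size_t page))
      {
          for (size_t w = 0; w < nwords; w++) {
              uint64_t word = le64_to_host(bitmap[w]);
              while (word) {
                  int bit = __builtin_ctzll(word);
                  fn(w * 64 + bit);
                  word &= word - 1;   /* clear lowest set bit */
              }
          }
      }
      ```
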
    • Use 64bit pointer for dirty log · 1c7936e3
      Authored by Alexander Graf
      Dirty logs currently get written with native "long" size. On little endian
      it doesn't matter if we use uint64_t instead though, because we'd still end
      up using the right bytes.
      
      On big endian, this does become a bigger problem, so we need to ensure that
      kernel and userspace talk the same language, which means getting rid of "long"
      and using a defined size instead.
      
      So I decided to use 64-bit types at all times. This doesn't break
      existing targets, and in conjunction with a patch I'll send to the KVM
      ML it makes dirty logs work with 32-bit userspace on a 64-bit kernel
      with big endian.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      1c7936e3
  19. 22 Jul, 2009 (4 commits)
  20. 30 Jun, 2009 (2 commits)
  21. 17 Jun, 2009 (1 commit)
    • kvm: Fix IRQ injection into full queue · 8c14c173
      Authored by Jan Kiszka
      User space may only inject interrupts during kvm_arch_pre_run if
      ready_for_interrupt_injection is set in kvm_run. But that field is
      updated on exit from KVM_RUN, so we must ensure that we enter the
      kernel after potentially queuing an interrupt, otherwise we risk losing
      one, as happens with the current code against the latest kernel modules
      (since kvm-86), which started to queue only a single interrupt.
      
      Fix the problem by reordering kvm_cpu_exec.
      
      Credit goes to Gleb Natapov for analyzing the issue in detail.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
      8c14c173
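
      The reordering can be pictured with the loop below, a sketch rather than
      the real kvm_cpu_exec: once the pre-run hook has queued an interrupt
      (only allowed while ready_for_interrupt_injection is set), the loop
      enters the kernel unconditionally before looking at exit conditions
      again, so the queued interrupt cannot be dropped. All helpers are
      placeholders.

      ```c
      #include <stdbool.h>

      struct run_state { bool ready_for_interrupt_injection; };

      static void pre_run(struct run_state *run)   { (void)run; } /* may queue an IRQ        */
      static int  enter_guest(void)                { return 1;  } /* stands in for KVM_RUN   */
      static void post_run(struct run_state *run)  { (void)run; }
      static bool needs_userspace_exit(int reason) { return reason != 0; }

      int cpu_exec_loop(struct run_state *run)
      {
          int reason;
          do {
              pre_run(run);            /* queue an interrupt if the kernel is ready */
              reason = enter_guest();  /* always enter the kernel right afterwards  */
              post_run(run);
          } while (!needs_userspace_exit(reason));
          return reason;
      }
      ```
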
  22. 07 Jun, 2009 (1 commit)
  23. 22 May, 2009 (4 commits)