1. 12 May 2010, 1 commit
  2. 05 May 2010, 1 commit
  3. 26 Apr 2010, 1 commit
  4. 18 Apr 2010, 1 commit
    • PPC: avoid function pointer type mismatch, spotted by clang · 7b13448f
      Committed by Blue Swirl
      Fixes clang errors:
        CC    ppc-softmmu/translate.o
      /src/qemu/target-ppc/translate.c:3748:13: error: comparison of distinct pointer types ('void (*)(void *, int, int)' and 'void *')
              if (likely(read_cb != SPR_NOACCESS)) {
      /src/qemu/target-ppc/translate.c:3748:28: note: instantiated from:
              if (likely(read_cb != SPR_NOACCESS)) {
      /src/qemu/target-ppc/translate.c:3903:13: error: comparison of distinct pointer types ('void (*)(void *, int, int)' and 'void *')
              if (likely(write_cb != SPR_NOACCESS)) {
      /src/qemu/target-ppc/translate.c:3903:29: note: instantiated from:
              if (likely(write_cb != SPR_NOACCESS)) {
      Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
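      The pattern behind the fix can be sketched as follows. The callback type matches the one in the error message, but the sentinel and helper below are illustrative, not QEMU's actual code: declaring the no-access sentinel with the same function pointer type makes the comparison involve identical types, so clang no longer flags it.

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Callback type from the error message: void (*)(void *, int, int). */
      typedef void (*spr_cb)(void *opaque, int sprn, int gprn);

      /* Illustrative sentinel: a real function of the same type, so that
       * 'read_cb != SPR_NOACCESS' compares identical function pointer
       * types instead of a function pointer against 'void *'. */
      static void spr_noaccess(void *opaque, int sprn, int gprn)
      {
          (void)opaque; (void)sprn; (void)gprn;
      }
      #define SPR_NOACCESS (&spr_noaccess)

      static int spr_is_readable(spr_cb read_cb)
      {
          return read_cb != SPR_NOACCESS; /* same-type comparison */
      }
      ```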
  5. 27 Mar 2010, 1 commit
  6. 17 Mar 2010, 1 commit
    • Large page TLB flush · d4c430a8
      Committed by Paul Brook
      QEMU uses a fixed page size for the CPU TLB.  If the guest uses large
      pages then we effectively split these into multiple smaller pages, and
      populate the corresponding TLB entries on demand.
      
      When the guest invalidates the TLB by virtual address we must invalidate
      all entries covered by the large page.  However the address used to
      invalidate the entry may not be present in the QEMU TLB, so we do not
      know which regions to clear.
      
      Implementing a full variable size TLB is hard and slow, so just keep a
      simple address/mask pair to record which addresses may have been mapped by
      large pages.  If the guest invalidates this region then flush the
      whole TLB.
      Signed-off-by: Paul Brook <paul@codesourcery.com>
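      The address/mask bookkeeping described above can be sketched roughly as follows. Names, widths, and the widening strategy are illustrative, not QEMU's actual implementation: the idea is to keep one region covering every large page seen, and fall back to a full TLB flush whenever an invalidated address may lie inside it.

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* One address/mask pair covering all large pages mapped so far. */
      static uint64_t lp_addr = (uint64_t)-1; /* -1: no large page yet */
      static uint64_t lp_mask = 0;

      /* Record a large page covering 'addr', with 'mask' selecting the
       * page-number bits (e.g. ~0xFFFFFFULL for a 16MB page). */
      static void record_large_page(uint64_t addr, uint64_t mask)
      {
          if (lp_addr == (uint64_t)-1) {
              lp_addr = addr & mask;
              lp_mask = mask;
              return;
          }
          /* Widen the mask until one region covers both addresses. */
          uint64_t m = lp_mask & mask;
          while ((lp_addr & m) != (addr & m)) {
              m <<= 1;
          }
          lp_addr &= m;
          lp_mask = m;
      }

      /* On invalidate-by-address: a full flush is needed only when the
       * address may fall inside the recorded large-page region. */
      static int needs_full_flush(uint64_t addr)
      {
          return (addr & lp_mask) == lp_addr;
      }
      ```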
  7. 13 Mar 2010, 3 commits
  8. 12 Mar 2010, 3 commits
  9. 04 Mar 2010, 1 commit
    • KVM: Rework VCPU state writeback API · ea375f9a
      Committed by Jan Kiszka
      This grand cleanup drops all reset and vmsave/load related
      synchronization points in favor of four(!) generic hooks:
      
      - cpu_synchronize_all_states in qemu_savevm_state_complete
        (initial sync from kernel before vmsave)
      - cpu_synchronize_all_post_init in qemu_loadvm_state
        (writeback after vmload)
      - cpu_synchronize_all_post_init in main after machine init
      - cpu_synchronize_all_post_reset in qemu_system_reset
        (writeback after system reset)
      
      These writeback points + the existing one of VCPU exec after
      cpu_synchronize_state map on three levels of writeback:
      
      - KVM_PUT_RUNTIME_STATE (during runtime, other VCPUs continue to run)
      - KVM_PUT_RESET_STATE   (on synchronous system reset, all VCPUs stopped)
      - KVM_PUT_FULL_STATE    (on init or vmload, all VCPUs stopped as well)
      
      This level is passed to the arch-specific VCPU state writing function
      that will decide which concrete substates need to be written. That way,
      no writer of load, save or reset functions that interact with in-kernel
      KVM states will ever have to worry about synchronization again. That
      also means that a lot of reasons for races, segfaults and deadlocks are
      eliminated.
      
      cpu_synchronize_state remains untouched, just as Anthony suggested. We
      continue to need it before reading or writing of VCPU states that are
      also tracked by in-kernel KVM subsystems.
      
      Consequently, this patch removes many cpu_synchronize_state calls that
      are now redundant, just like remaining explicit register syncs.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
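      The three levels can be sketched as a simple enum. The constant names follow the commit text; the helper and the strictly increasing ordering are illustrative of how an arch-specific writer might decide which substates to push:

      ```c
      #include <assert.h>

      /* Writeback levels named in the commit; each level writes back at
       * least as much state as the levels below it. */
      typedef enum {
          KVM_PUT_RUNTIME_STATE = 1, /* during runtime, other VCPUs keep running */
          KVM_PUT_RESET_STATE   = 2, /* synchronous system reset, all VCPUs stopped */
          KVM_PUT_FULL_STATE    = 3  /* init or vmload, all VCPUs stopped */
      } kvm_put_level;

      /* Illustrative arch hook: write a substate only if the current put
       * level reaches the minimum level that substate requires. */
      static int should_write(kvm_put_level level, kvm_put_level required)
      {
          return level >= required;
      }
      ```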
  10. 28 Feb 2010, 3 commits
  11. 27 Feb 2010, 2 commits
  12. 14 Feb 2010, 3 commits
    • PPC: Add timer when running KVM · c6a94ba5
      Committed by Alexander Graf
      For some odd reason we sometimes hang inside KVM forever. I'd guess it's
      a race condition where we actually have a level triggered interrupt, but
      the infrastructure can't expose that yet, so the guest ACKs it, goes to
      sleep and never gets notified that there's still an interrupt pending.
      
      As a quick workaround, let's just wake up every 500 ms. That way we can
      assure that we're always reinjecting interrupts in time.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
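      The workaround amounts to a self-rearming periodic timer. A minimal sketch, with a hypothetical callback in place of QEMU's timer API:

      ```c
      #include <assert.h>
      #include <stdint.h>

      enum { KVM_KICK_INTERVAL_MS = 500 };

      /* Hypothetical expiry callback for a periodic timer: wake the VCPU
       * so any pending interrupt gets reinjected, then return the next
       * deadline so the caller can rearm the timer. */
      static int64_t kvm_kick_expire(int64_t now_ms, int *kicks)
      {
          (*kicks)++;                           /* stand-in for kicking the VCPU */
          return now_ms + KVM_KICK_INTERVAL_MS; /* next expiry */
      }
      ```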
    • PPC: Fix large pages · b2eca445
      Committed by Alexander Graf
      We were masking 1TB SLB entries on the feature bit of 16 MB pages. Obviously
      that breaks, so let's just ignore 1TB SLB entries for now and instead do
      16MB pages correctly.
      
      This fixes PPC64 Linux boot with -m above 256.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    • PPC: tell the guest about the time base frequency · dc333cd6
      Committed by Alexander Graf
      Our guest systems need to know by how much the timebase increases every second,
      so there usually is a "timebase-frequency" property in the cpu leaf of the
      device tree.
      
      This property is missing in OpenBIOS.
      
      With qemu, Linux's fallback timebase speed and qemu's internal timebase speed
      match up. With KVM, that is no longer true. The guest is running at the same
      timebase speed as the host.
      
      This leads to massive timing problems. On my test machine, a "sleep 2" takes
      about 14 seconds with KVM enabled.
      
      This patch exports the timebase frequency to OpenBIOS, so it can put it
      into the device tree.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
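      The skew is simple arithmetic: the guest converts timebase ticks to wall time using the frequency it believes. A sketch with illustrative numbers (neither frequency is the actual host value):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Convert timebase ticks to milliseconds at a believed frequency. */
      static uint64_t tb_ticks_to_ms(uint64_t ticks, uint64_t tb_freq_hz)
      {
          return ticks * 1000 / tb_freq_hz;
      }
      ```

      If the host timebase really runs at, say, 512 MHz but the guest falls back to a much lower assumed frequency, the same tick count is reported as several times more elapsed time, which matches the "sleep 2 takes ~14 seconds" symptom above.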
  13. 07 Feb 2010, 1 commit
  14. 20 Jan 2010, 1 commit
  15. 14 Jan 2010, 4 commits
  16. 21 Dec 2009, 3 commits
  17. 19 Dec 2009, 1 commit
  18. 04 Dec 2009, 1 commit
  19. 17 Nov 2009, 1 commit
  20. 13 Nov 2009, 1 commit
  21. 07 Nov 2009, 3 commits
  22. 23 Oct 2009, 1 commit
  23. 18 Oct 2009, 2 commits