1. 16 Jun 2014, 3 commits
  2. 20 Mar 2014, 1 commit
  3. 05 Mar 2014, 1 commit
  4. 26 Oct 2013, 1 commit
  5. 29 Jul 2013, 1 commit
  6. 27 Apr 2013, 1 commit
    • PPC: Remove env->hreset_excp_prefix · 2cf3eb6d
      Committed by Fabien Chouteau
      This value is not needed if we use the MSR[IP] bit correctly.
      
      excp_prefix is always 0x00000000, except when the MSR[IP] bit is
      implemented and set to 1, in that case excp_prefix is 0xfff00000.
      
      The handling of MSR[IP] was already implemented but not used at reset
      because the value of env->msr was changed "manually".
      
      The patch uses the function hreg_store_msr() to set env->msr, this
      ensures a good handling of MSR[IP] at reset, and therefore a good value
      for excp_prefix.
      Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
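The rule this commit relies on can be sketched as follows. This is a minimal illustration, not QEMU's actual code: the `MSR_IP` bit position, the function name, and the `ip_implemented` flag are all assumptions of the sketch.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the reset rule described above: excp_prefix is 0x00000000
 * unless MSR[IP] is implemented and set, in which case it is 0xfff00000.
 * Bit position and names are illustrative assumptions. */
#define MSR_IP (1u << 6) /* assumed IP bit position */

static uint32_t excp_prefix_for(uint32_t msr, int ip_implemented)
{
    return (ip_implemented && (msr & MSR_IP)) ? 0xfff00000u : 0x00000000u;
}
```

Going through a single MSR store helper at reset means this rule is applied in exactly one place, which is the point of the patch.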
  7. 22 Mar 2013, 1 commit
    • target-ppc: Remove vestigial PowerPC 620 support · 9baea4a3
      Committed by David Gibson
      The PowerPC 620 was the very first 64-bit PowerPC implementation, but
      hardly anyone ever actually used the chips.  QEMU notionally supports the
      620, but since we don't actually have code to implement the segment table,
      the support is broken (quite likely in other ways too).
      
      This patch, therefore, removes all remaining pieces of 620 support, to
      stop it cluttering up the platforms we actually care about.  This includes
      removing support for the ASR register, used only on segment table based
      machines.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  8. 24 Feb 2013, 1 commit
  9. 19 Dec 2012, 1 commit
  10. 01 Nov 2012, 1 commit
    • target-ppc: Extend FPU state for newer POWER CPUs · 30304420
      Committed by David Gibson
      This patch adds some extra FPU state to CPUPPCState.  Specifically,
      fpscr is extended to target_ulong width, since some recent (64-bit)
      CPUs now have more status bits than fit inside 32 bits.  Also, we add
      the 32 VSR registers present on CPUs with VSX (these extend the
      standard FP regs, which together with the Altivec/VMX registers form a
      64 x 128-bit register file for VSX).
      
      We don't actually support the instructions using these extra registers
      in TCG yet, but we still need a place to store the state so we can
      sync it with KVM and savevm/loadvm it.  This patch updates the savevm
      code to not fail on the extended state, but also does not actually
      save it - that's a project for another patch.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
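The layout change described above can be pictured with a small struct. The type and field names here are assumptions for illustration, not QEMU's actual CPUPPCState definition; it only shows which pieces of state were added or widened.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative layout of the extra FPU state; names are assumptions.
 * On a 64-bit target, target_ulong is 64 bits wide. */
typedef uint64_t sketch_target_ulong;

typedef struct {
    uint64_t fpr[32];            /* standard FP registers */
    sketch_target_ulong fpscr;   /* widened from 32 bits to target_ulong */
    uint64_t vsr[32];            /* extra 64-bit halves added for VSX; with
                                    fpr[] and the 32 Altivec/VMX registers
                                    these form a 64 x 128-bit file */
} PPCFPUStateSketch;
```

Reserving the space now lets KVM sync and savevm/loadvm carry the state even before TCG implements the instructions that use it.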
  11. 04 Oct 2012, 1 commit
  12. 16 Apr 2012, 1 commit
  13. 15 Mar 2012, 1 commit
  14. 17 Jun 2011, 1 commit
    • PPC: move TLBs to their own arrays · 1c53accc
      Committed by Alexander Graf
      Until now, we've created a union over multiple different TLB types and
      allocated that union.  Besides wasting memory (and cache) by allocating
      the largest entry size even for TLB types that need only a little
      information, this approach causes another problem.
      
      With the new KVM API, we can now share the TLB between KVM and qemu, but
      for that to work both need to use the same layout.  We can't just
      stretch it to fit some different internal TLB representation.
      
      Hence this patch moves all TLB types to their own array, allowing us to only
      address and allocate exactly the boundaries required for the specific TLB
      type at hand.
      Signed-off-by: Alexander Graf <agraf@suse.de>
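The allocation change can be sketched as follows. The struct and function names are illustrative, not QEMU's: the point is simply allocating per-type entry sizes instead of an array of a worst-case union.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Sketch of the design change: rather than an array of a union sized for
 * the largest TLB type, allocate exactly the entry size the CPU's TLB
 * model needs.  Names are illustrative assumptions. */
typedef struct { uint32_t mas1, mas2, mas7_3; } booke206_tlb_sketch;
typedef struct { uint64_t pte0, pte1; } tlb6xx_sketch;

static void *alloc_tlb_array(size_t nb_entries, size_t entry_size)
{
    /* exact footprint per type; a layout that could be shared with KVM */
    return calloc(nb_entries, entry_size);
}
```

Because each array now has exactly the layout of one TLB type, the same memory can in principle back both QEMU's software MMU and the kernel's view of the TLB.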
  15. 02 Apr 2011, 1 commit
    • Parse SDR1 on mtspr instead of at translate time · bb593904
      Committed by David Gibson
      On ppc machines with hash table MMUs, the special purpose register SDR1
      contains both the base address and the encoded size of the hashed page
      tables.
      
      At present, we interpret the SDR1 value within the address translation
      path.  But because the encodings of the size for 32-bit and 64-bit are
      different this makes for a confusing branch on the MMU type with a bunch
      of curly shifts and masks in the middle of the translate path.
      
      This patch cleans things up by moving the interpretation of SDR1 into the
      helper function handling the write to the register.  This leaves a simple
      pre-sanitized base address and mask for the hash table in the CPUState
      structure which is easier to work with in the translation path.
      
      This makes the translation path more readable.  It addresses the FIXME
      comment currently in the mtsdr1 helper, by validating the SDR1 value during
      interpretation.  Finally it opens the way for emulating a pSeries-style
      partition where the hash table used for translation is not mapped into
      the guest's RAM.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
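The 32-bit decode moved into the mtspr helper can be sketched like this. It follows the 32-bit architecture's encoding (HTABORG in the high bits, a 9-bit HTABMASK in the low bits); the function and variable names are illustrative, not QEMU's.

```c
#include <assert.h>
#include <stdint.h>

/* Hedged sketch of the 32-bit SDR1 decode: high bits give the hash table
 * base, and the low 9 bits are expanded into a mask over the hashed
 * address bits.  Names are illustrative assumptions. */
static void decode_sdr1_32(uint32_t sdr1,
                           uint32_t *htab_base, uint32_t *htab_mask)
{
    *htab_base = sdr1 & 0xffff0000u;                /* HTABORG */
    *htab_mask = ((sdr1 & 0x1ffu) << 16) | 0xffffu; /* expanded HTABMASK */
}
```

Doing this once per mtspr leaves the hot translation path with a plain base-and-mask lookup instead of per-access shifting and branching on MMU type.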
  16. 04 Mar 2010, 1 commit
    • KVM: Rework VCPU state writeback API · ea375f9a
      Committed by Jan Kiszka
      This grand cleanup drops all reset and vmsave/load related
      synchronization points in favor of four(!) generic hooks:
      
      - cpu_synchronize_all_states in qemu_savevm_state_complete
        (initial sync from kernel before vmsave)
      - cpu_synchronize_all_post_init in qemu_loadvm_state
        (writeback after vmload)
      - cpu_synchronize_all_post_init in main after machine init
      - cpu_synchronize_all_post_reset in qemu_system_reset
        (writeback after system reset)
      
      These writeback points + the existing one of VCPU exec after
      cpu_synchronize_state map on three levels of writeback:
      
      - KVM_PUT_RUNTIME_STATE (during runtime, other VCPUs continue to run)
      - KVM_PUT_RESET_STATE   (on synchronous system reset, all VCPUs stopped)
      - KVM_PUT_FULL_STATE    (on init or vmload, all VCPUs stopped as well)
      
      This level is passed to the arch-specific VCPU state writing function
      that will decide which concrete substates need to be written. That way,
      no writer of load, save or reset functions that interact with in-kernel
      KVM states will ever have to worry about synchronization again. That
      also means that a lot of reasons for races, segfaults and deadlocks are
      eliminated.
      
      cpu_synchronize_state remains untouched, just as Anthony suggested. We
      continue to need it before reading or writing of VCPU states that are
      also tracked by in-kernel KVM subsystems.
      
      Consequently, this patch removes many cpu_synchronize_state calls that
      are now redundant, along with the remaining explicit register syncs.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
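The three writeback levels named above can be sketched as an enum plus the kind of decision an arch-specific put function might make. The constant names come from the commit text; their numeric ordering and the predicate are assumptions of this sketch.

```c
#include <assert.h>

/* The three writeback levels from the commit message; ordering is an
 * assumption of this sketch. */
typedef enum {
    KVM_PUT_RUNTIME_STATE = 1, /* runtime; other VCPUs keep running */
    KVM_PUT_RESET_STATE   = 2, /* synchronous reset; all VCPUs stopped */
    KVM_PUT_FULL_STATE    = 3  /* init or vmload; all VCPUs stopped */
} put_state_level;

static int put_invariant_state(put_state_level level)
{
    /* state that cannot change at runtime only needs the heavier levels */
    return level >= KVM_PUT_RESET_STATE;
}
```

Passing the level down lets each architecture decide which substates to write, so individual save/load/reset paths never reason about synchronization themselves.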
  17. 28 Aug 2009, 1 commit
  18. 04 Aug 2009, 1 commit
  19. 22 May 2009, 1 commit
  20. 21 May 2009, 1 commit
  21. 29 Apr 2009, 1 commit
  22. 03 Mar 2009, 1 commit
  23. 31 Dec 2008, 1 commit
  24. 16 Dec 2008, 1 commit
  25. 04 May 2008, 1 commit