1. 14 Mar 2014, 3 commits
  2. 05 Mar 2014, 1 commit
  3. 03 Sep 2013, 1 commit
  4. 02 Sep 2013, 1 commit
  5. 23 Jul 2013, 1 commit
  6. 10 Jul 2013, 3 commits
  7. 01 Jul 2013, 2 commits
  8. 06 May 2013, 1 commit
    • PPC: Add MMU type for 2.06 with AMR but no TB pages · 126a7930
      Authored by Alexander Graf
      When running -cpu on a POWER7 system with PR KVM, we mask out the 1TB
      MMU capability from the MMU type mask, but not the AMR bit.
      
      This leads to us having a new MMU type that we don't check for in our
      MMU management functions.
      
      Add the new type, so that we don't have to worry about breakage there.
      We're not going to use the TCG MMU management in that case anyway.
      
      The long term fix for this will be to move all these MMU management
      functions to class callbacks.
      Signed-off-by: Alexander Graf <agraf@suse.de>
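The mask-out problem the commit describes can be sketched as plain bit arithmetic. The flag names and values below are illustrative, not QEMU's actual `POWERPC_MMU_*` definitions:

```c
#include <stdint.h>

#define MMU_V2_06      0x00000003u  /* base ISA 2.06 hash MMU (illustrative) */
#define MMU_CAP_1T_SEG 0x00010000u  /* 1TB segments supported (illustrative) */
#define MMU_CAP_AMR    0x00020000u  /* Authority Mask Register (illustrative) */

/* Masking one capability bit out of the type mask yields a distinct
 * MMU type value, which every switch over MMU types must then handle
 * as its own case. */
static inline uint32_t mmu_without_1t(uint32_t mmu_type)
{
    return mmu_type & ~MMU_CAP_1T_SEG;
}
```

With the AMR bit left in, `MMU_V2_06 | MMU_CAP_AMR` becomes a new combination that did not previously exist, which is exactly the breakage the patch guards against.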
  9. 22 Mar 2013, 18 commits
    • target-ppc: Use QOM method dispatch for MMU fault handling · b632a148
      Authored by David Gibson
      After previous cleanups, the many scattered checks of env->mmu_model in
      the ppc MMU implementation have, at least for "classic" hash MMUs,
      been reduced (almost) to a single switch at the top of
      cpu_ppc_handle_mmu_fault().
      
      An explicit switch is still a pretty ugly way of handling this though.  Now
      that Andreas Färber's CPU QOM cleanups for ppc have gone in, it's quite
      straightforward to instead make the handle_mmu_fault function a QOM method
      on the CPU object.
      
      This patch implements such a scheme, initializing the method pointer at
      the same time as the mmu_model variable.  We need to keep the latter around
      for now, because of the MMU types (BookE, 4xx, et al) which haven't been
      converted to the new scheme yet, and also for a few other uses.  It would
      be good to clean those up eventually.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
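As a rough sketch of the dispatch pattern (the types and names here are invented for illustration, not QEMU's actual QOM classes), the fault handler becomes a method pointer initialised once per CPU class, replacing a switch on `env->mmu_model` at every fault:

```c
#include <stddef.h>

/* Hypothetical per-CPU-class structure carrying the method pointer,
 * set at the same time the mmu_model field is initialised. */
typedef struct CPUClassSketch {
    int (*handle_mmu_fault)(void *env, unsigned long addr, int rw);
} CPUClassSketch;

/* One concrete handler, e.g. for 64-bit hash MMUs. */
static int hash64_handle_mmu_fault(void *env, unsigned long addr, int rw)
{
    (void)env; (void)addr; (void)rw;
    return 0;  /* 0 = translation succeeded in this sketch */
}

/* Generic code dispatches through the class, with no MMU-type switch. */
static int dispatch_fault(CPUClassSketch *cc, void *env,
                          unsigned long addr, int rw)
{
    return cc->handle_mmu_fault(env, addr, rw);
}
```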
    • target-ppc: Move ppc tlb_fill implementation into mmu_helper.c · eb20c1c6
      Authored by David Gibson
      For softmmu builds the interface from the generic code to the target
      specific MMU implementation is through the tlb_fill() function.  For ppc
      this is currently in mem_helper.c, whereas it would make more sense in
      mmu_helper.c.  This patch moves it, which also allows
      cpu_ppc_handle_mmu_fault() to become a local function in mmu_helper.c
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Split user only code out of mmu_helper.c · cc8eae8a
      Authored by David Gibson
      mmu_helper.c is, for obvious reasons, almost entirely concerned with
      softmmu builds of qemu.  However, it does contain one stub function which
      is used when CONFIG_USER_ONLY=y - the user-only version of
      cpu_ppc_handle_mmu_fault, which always triggers an exception.  The entire
      rest of the file is surrounded by #if !defined(CONFIG_USER_ONLY).
      
      We clean this up by moving the user only stub into its own new file,
      removing the ifdefs and building mmu_helper.c only when CONFIG_SOFTMMU
      is set.  This also lets us remove the #define of cpu_handle_mmu_fault to
      cpu_ppc_handle_mmu_fault - that name is only used from generic code for
      user only - so we just name our split user version by the generic name.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: mmu_ctx_t should not be a global type · 5dc68eb0
      Authored by David Gibson
      mmu_ctx_t is currently defined in cpu.h.  However it is used for temporary
      information relating to mmu translation, and is only used in mmu_helper.c
      and (now) mmu-hash{32,64}.c.  Furthermore it contains information which
      should be specific to particular MMU types.  Therefore, move its definition
      to mmu_helper.c.  mmu-hash{32,64}.c are converted to use new data types
      private to the relevant MMUs (identical to mmu_ctx_t for now, but that will
      change in future patches).
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Disentangle BAT code for 32-bit hash MMUs · 98132796
      Authored by David Gibson
      The functions for looking up BATs (Block Address Translation - essentially
      a level 0 TLB) are shared between the classic 32-bit hash MMUs and the
      6xx style software loaded TLB implementations.
      
      This patch splits out a copy for the 32-bit hash MMUs, to facilitate
      cleaning it up.  The remaining version is left, but cleaned up slightly
      to no longer deal with PowerPC 601 peculiarities (601 has a hash MMU).
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Don't share get_pteg_offset() between 32 and 64-bit · 59191721
      Authored by David Gibson
      The get_pteg_offset() helper function is currently shared between 32-bit
      and 64-bit hash mmus, taking a parameter for the hash pte size.  In the
      64-bit paths, it's only called in one place, and it's a trivial
      calculation.  This patch, therefore, open codes it for 64-bit.  The
      remaining version, which is used in two places is made 32-bit only and
      moved to mmu-hash32.c.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
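The "trivial calculation" that gets open-coded for 64-bit is simple enough to sketch. The constant and names are assumptions for illustration, not QEMU's exact code:

```c
#include <stdint.h>

/* One PTEG (page table entry group) holds 8 PTEs of 16 bytes each
 * on 64-bit hash MMUs. */
#define HASH_PTEG_SIZE_64 128

/* The PTEG offset is the hash value scaled by the group size and
 * wrapped by the hash-table mask; this is the whole computation the
 * patch inlines at its single 64-bit call site. */
static inline uint64_t pteg_offset64(uint64_t hash, uint64_t htab_mask)
{
    return (hash * HASH_PTEG_SIZE_64) & htab_mask;
}
```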
    • target-ppc: Disentangle hash mmu helper functions · 496272a7
      Authored by David Gibson
      The newly separated paths for hash mmus rely on several helper functions
      which are still shared with 32-bit hash mmus: pp_check(), check_prot() and
      pte_update_flags().  While these don't have ugly ifdefs on the mmu type,
      they're not very well thought out, so sharing them impedes cleaning up the
      hash mmu paths.  For now, put near-duplicate versions into mmu-hash64.c and
      mmu-hash32.c, leaving the old version in mmu_helper.c for 6xx software
      loaded tlb implementations.  The hash 32 and software loaded
      implementations are simplified slightly, using the fact that no 32-bit CPUs
      implement the 3rd page protection bit.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Disentangle hash mmu versions of cpu_get_phys_page_debug() · f2ad6be8
      Authored by David Gibson
      cpu_get_phys_page_debug() is a trivial wrapper around
      get_physical_address().  But even the signature of
      get_physical_address() has some things we'd like to clean up on a
      per-mmu basis, so this patch moves the test on mmu model out to
      cpu_get_phys_page_debug(), moving the version for 64-bit hash MMUs out
      to mmu-hash64.c and the version for 32-bit hash MMUs to mmu-hash32.c
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Disentangle hash mmu paths for cpu_ppc_handle_mmu_fault · 25de24ab
      Authored by David Gibson
      cpu_ppc_handle_mmu_fault() calls get_physical_address() (whose behaviour
      depends on MMU type) then, if that fails, issues an appropriate exception
      - which again has a number of dependencies on MMU type.
      
      This patch starts converting cpu_ppc_handle_mmu_fault() to have a
      single switch on MMU type, calling MMU specific fault handler
      functions which deal with both translation and exception delivery
      appropriately for the MMU type.  We convert 32-bit and 64-bit hash
      MMUs to this new model, but the existing code is left in place for
      other MMU types for now.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Disentangle get_physical_address() paths · 629bd516
      Authored by David Gibson
      Depending on the MSR state, for 64-bit hash MMUs, get_physical_address
      can either call check_physical (which has further tests for mmu type)
      or get_segment64.  Similarly for 32-bit hash MMUs we can either call
      check_physical or get_bat() and get_segment32().
      
      This patch splits off the whole get_physical_addresss() path for hash
      MMUs into 32-bit and 64-bit versions, handling real mode correctly for
      such MMUs without going to check_physical and rechecking the mmu type.
      Correspondingly, the hash MMU specific paths in check_physical() are
      removed.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Rework get_physical_address() · 44bc9107
      Authored by David Gibson
      Currently get_physical_address() first checks to see if translation is
      enabled in the MSR, then in the translation on case switches on the mmu
      type.  Except that for BookE MMUs, translation is always on, and so it
      has to switch in the "translation off" case as well and do the same thing
      as the translation on path for those MMUs.  Plus, even translation off
      doesn't behave exactly the same on the various MMU types so there are
      further mmu type checks in the "translation off" path.
      
      As a first step to cleaning this up, this patch moves the switch on mmu
      type to the top level, then makes the translation on/off check just for
      those mmu types where it is meaningful.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
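The restructured control flow might look like this sketch: switch on MMU type first, and consult the MSR translation bit only for the types where "translation off" is meaningful. The enum values and return convention are invented for illustration:

```c
enum mmu_type_sketch { MMU_32B_HASH, MMU_64B_HASH, MMU_BOOKE };

/* Returns 1 if translation is performed, 0 for real-mode access,
 * -1 for an unhandled MMU type (illustrative convention). */
static int get_phys_sketch(enum mmu_type_sketch mmu_type, int msr_dr)
{
    switch (mmu_type) {
    case MMU_BOOKE:
        /* BookE: translation is always on, no MSR check needed. */
        return 1;
    case MMU_32B_HASH:
    case MMU_64B_HASH:
        /* Hash MMUs honour the MSR data-relocate bit. */
        return msr_dr ? 1 : 0;
    default:
        return -1;
    }
}
```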
    • target-ppc: Disentangle get_segment() · 0480884f
      Authored by David Gibson
      The poorly named get_segment() function handles most of the address
      translation logic for hash-based MMUs.  It has many ugly conditionals on
      whether the MMU is 32-bit or 64-bit.
      
      This patch splits the function into 32 and 64-bit versions, using the
      switch on mmu_type that's already in the caller
      (get_physical_address()) to select the right one.  Most of the
      original function remains in mmu_helper.c to support the 6xx software
      loaded TLB implementations (cleaning those up is a project for another
      day).
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Disentangle find_pte() · c69b6151
      Authored by David Gibson
      32-bit and 64-bit hash MMU implementations currently share a find_pte
      function.  This results in a whole bunch of ugly conditionals in the shared
      function, and not all that much actually shared code.
      
      This patch separates out the 32-bit and 64-bit versions, putting them
      in mmu-hash64.c and mmu-hash32.c, and removes the conditionals from
      both versions.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Disentangle pte_check() · 9d7c3f4a
      Authored by David Gibson
      Currently support for both 32-bit and 64-bit hash MMUs share an
      implementation of pte_check.  But there are enough differences that this
      means the shared function has several very ugly conditionals on "is_64b".
      
      This patch cleans things up by separating out the 64-bit version
      (putting it into mmu-hash64.c) and the 32-bit hash version (putting it
      in mmu-hash32.c).  Another copy remains in mmu_helper.c, which is used
      for the 6xx software loaded TLB paths.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Move SLB handling into mmu-hash64.c · 10b46525
      Authored by David Gibson
      As a first step to disentangling the handling for 64-bit hash MMUs from
      the rest, we move the code handling the Segment Lookaside Buffer (SLB)
      (which only exists on 64-bit hash MMUs) into a new mmu-hash64.c file.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Remove address check for logging · 8152ceaf
      Authored by David Gibson
      One LOG_MMU statement in mmu_helper.c has an odd check on the effective
      address being translated.  I can see no reason for this; I suspect it was
      a debugging hack from long ago.  This patch removes it.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Trivial cleanups in mmu_helper.c · 213c7180
      Authored by David Gibson
      This removes the never-used pte64_invalidate() function, and makes
      ppcmas_tlb_check() static, since it's only used within that file.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • target-ppc: Remove vestigial PowerPC 620 support · 9baea4a3
      Authored by David Gibson
      The PowerPC 620 was the very first 64-bit PowerPC implementation, but
      hardly anyone ever actually used the chips.  qemu notionally supports the
      620, but since we don't actually have code to implement the segment table,
      the support is broken (quite likely in other ways too).
      
      This patch, therefore, removes all remaining pieces of 620 support, to
      stop it cluttering up the platforms we actually care about.  This includes
      removing support for the ASR register, used only on segment table based
      machines.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  10. 02 Feb 2013, 1 commit
  11. 01 Feb 2013, 1 commit
  12. 19 Dec 2012, 1 commit
  13. 02 Nov 2012, 1 commit
  14. 29 Oct 2012, 1 commit
    • Drop unnecessary check of TARGET_PHYS_ADDR_SPACE_BITS · 21b2f13a
      Authored by Peter Maydell
      For all our PPC targets the physical address space is at least
      36 bits, so drop an unnecessary preprocessor conditional check
      on TARGET_PHYS_ADDR_SPACE_BITS (erroneously introduced as part
      of the change from target_phys_addr_t to hwaddr). This brings
      this bit of code into line with the way we handle the other
      cases which were originally checking TARGET_PHYS_ADDR_BITS in
      order to avoid compiler complaints about overflowing a 32 bit type.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  15. 23 Oct 2012, 1 commit
    • Rename target_phys_addr_t to hwaddr · a8170e5e
      Authored by Avi Kivity
      target_phys_addr_t is unwieldy, violates the C standard (_t suffixes are
      reserved) and its purpose doesn't match the name (most target_phys_addr_t
      addresses are not target specific).  Replace it with a finger-friendly,
      standards conformant hwaddr.
      
      Outstanding patchsets can be fixed up with the command
      
        git rebase -i --exec "find . -name '*.[ch]'
                              | xargs sed -i 's/target_phys_addr_t/hwaddr/g'" origin
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
  16. 05 Oct 2012, 1 commit
  17. 24 Jun 2012, 2 commits
    • ppc64: Rudimentary Support for extra page sizes on server CPUs · 4656e1f0
      Authored by Benjamin Herrenschmidt
      More recent Power server chips (i.e. based on the 64 bit hash MMU)
      support more than just the traditional 4k and 16M page sizes.  This
      can get quite complicated, because which page sizes are supported,
      which combinations are supported within an MMU segment and how these
      page sizes are encoded both in the SLB entry and the hash PTE can vary
      depending on the CPU model (they are not specified by the
      architecture).  In addition the firmware or hypervisor may not permit
      use of certain page sizes, for various reasons.  Whether various page
      sizes are supported on KVM, for example, depends on whether the PR or
      HV variant of KVM is in use, and on the page size of the memory
      backing the guest's RAM.
      
      This patch adds information to the CPUState and cpu defs to describe
      the supported page sizes and encodings.  Since TCG does not yet
      support any extended page sizes, we just set this to NULL in the
      static CPU definitions, expanding this to the default 4k and 16M page
      sizes when we initialize the cpu state.  When using KVM, however, we
      instead determine available page sizes using the new
      KVM_PPC_GET_SMMU_INFO call.  For old kernels without that call, we use
      some defaults, with some guesswork which should do the right thing for
      existing HV and PR implementations.  The fallback might not be correct
      for future versions, but that's ok, because they'll have
      KVM_PPC_GET_SMMU_INFO.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexander Graf <agraf@suse.de>
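The shape of the added page-size description might be sketched like this. The field names and the default table are assumptions for illustration, not QEMU's exact definitions:

```c
#include <stddef.h>
#include <stdint.h>

/* Each entry describes one supported base page size and the encoding
 * used for it in the hash PTE (hypothetical layout). */
typedef struct PageSizeInfoSketch {
    uint32_t page_shift;  /* log2 of the page size */
    uint32_t pte_enc;     /* PTE encoding for this size */
} PageSizeInfoSketch;

/* The TCG fallback described in the commit: only the traditional
 * 4k and 16M sizes, filled in when the cpu state is initialised. */
static const PageSizeInfoSketch default_sizes[] = {
    { 12, 0 },  /* 4k  */
    { 24, 0 },  /* 16M */
};

static size_t n_default_sizes(void)
{
    return sizeof(default_sizes) / sizeof(default_sizes[0]);
}
```

Under KVM the table would instead be populated from the KVM_PPC_GET_SMMU_INFO result rather than this static default.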
    • booke_206_tlbwe: Discard invalid bits in MAS2 · 77c2cf33
      Authored by Fabien Chouteau
      The size of EPN field in MAS2 depends on page size. This patch adds a
      mask to discard invalid bits in EPN field.
      
      Definition of EPN field from e500v2 RM:
      EPN Effective page number: Depending on page size, only the bits
      associated with a page boundary are valid. Bits that represent offsets
      within a page are ignored and should be cleared.
      
      There is a similar (but more complicated) definition in PowerISA V2.06.
      Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
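The masking described can be sketched as follows. The attribute-mask value is an assumption for illustration; the point is that offset-within-page bits of EPN are cleared while the low attribute bits of MAS2 are preserved:

```c
#include <stdint.h>

#define MAS2_ATTRIB_MASK 0x7fu  /* low attribute bits (W, I, M, G, E...), assumed */

/* Clear the EPN bits below the page boundary: they represent offsets
 * within a page and, per the e500v2 RM, should be ignored/cleared. */
static inline uint32_t mas2_sanitize(uint32_t mas2, uint32_t page_size)
{
    uint32_t epn_mask = ~(page_size - 1);
    return (mas2 & epn_mask) | (mas2 & MAS2_ATTRIB_MASK);
}
```

For a 4k page, any bits in the low 12 bits of EPN are dropped, while the attribute field survives unchanged.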