1. 14 Aug, 2013: 6 commits
  2. 08 Aug, 2013: 6 commits
  3. 01 Aug, 2013: 2 commits
    • powerpc: VPHN topology change updates all siblings · 3be7db6a
      Authored by Robert Jennings
      When an associativity level change is found for one thread, the
      sibling threads need to be updated as well.  This is done today
      for PRRN in stage_topology_update() but is missing for VPHN in
      update_cpu_associativity_changes_mask().  This patch correctly
      updates all thread siblings during a topology change.
      
      Without this patch, a topology update can leave a CPU stuck
      indefinitely in a loop inside init_sched_groups_power().
      
      This loop is built in build_sched_groups().  Because the thread has
      moved to a NUMA node separate from its siblings, the struct
      sched_group has its next pointer set to point to itself rather than
      to the sched_group of the next thread.  This happens because we
      have a domain without the SD_OVERLAP flag, which is correct, and a
      topology that does not conform to reality (threads on the same core
      assigned to different NUMA nodes).  When init_sched_groups_power()
      traverses this list it reaches that thread's sched_group structure
      and loops indefinitely; the CPU is stuck at this point.  (A minimal
      stand-alone sketch of this failure mode follows at the end of this
      entry.)
      
      The bug was exposed when VPHN was enabled in commit b7abef04 (v3.9).
      
      Cc: <stable@vger.kernel.org> [v3.9+]
      Reported-by: Jan Stancek <jstancek@redhat.com>
      Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
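
      The failure mode is easy to see in isolation.  Below is a minimal,
      self-contained C sketch (not kernel code; the struct and the loop
      are simplified stand-ins for the scheduler's circular sched_group
      list and the walk done by init_sched_groups_power()) showing why a
      group whose next pointer refers back to itself keeps a "walk until
      we return to the first group" loop from terminating.

      #include <stdio.h>

      /* Simplified stand-in for the kernel's circular sched_group list. */
      struct sched_group {
          struct sched_group *next;
          int id;
      };

      /* Walk the list starting at 'first' and stop when we come back to
       * it.  The 'limit' guard exists only so this demo terminates; the
       * kernel loop has no such guard. */
      static int count_groups(struct sched_group *first, int limit)
      {
          struct sched_group *sg = first;
          int n = 0;

          do {
              n++;
              sg = sg->next;
          } while (sg != first && n < limit);
          return n;
      }

      int main(void)
      {
          struct sched_group a = { .id = 0 }, b = { .id = 1 };

          /* Healthy topology: a -> b -> a, the walk terminates. */
          a.next = &b;
          b.next = &a;
          printf("healthy: visited %d groups\n", count_groups(&a, 100));

          /* Broken topology (sibling placed on the wrong node): b points
           * to itself, so the walk never returns to 'a'. */
          b.next = &b;
          printf("broken:  visited %d groups (stopped by demo limit)\n",
                 count_groups(&a, 100));
          return 0;
      }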
    • powerpc/perf: Export PERF_EVENT_CONFIG_EBB_SHIFT to userspace · 8d7c55d0
      Authored by Michael Ellerman
      We use bit 63 of the event code for userspace to request that the
      event be counted using EBB (Event Based Branches).  Export this
      value, making it part of the API, though only on processors that
      support EBB.  (A hedged usage sketch follows this entry.)
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
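
      Below is a hedged sketch of how userspace might consume the
      exported constant.  The raw event code is a placeholder, and the
      pinned/exclusive/task-bound settings reflect the usual EBB event
      requirements rather than anything stated in this commit.

      #define _GNU_SOURCE
      #include <linux/perf_event.h>
      #include <sys/syscall.h>
      #include <string.h>
      #include <stdint.h>
      #include <unistd.h>

      /* Bit 63 of attr.config, per this commit; defined here only in
       * case the installed headers predate the export. */
      #ifndef PERF_EVENT_CONFIG_EBB_SHIFT
      #define PERF_EVENT_CONFIG_EBB_SHIFT 63
      #endif

      /* Open a raw PMU event and ask for it to be counted via EBB.
       * 'raw_code' is whatever hardware event the caller wants; EBB
       * events are normally pinned, exclusive and bound to the current
       * task. */
      static int open_ebb_event(uint64_t raw_code)
      {
          struct perf_event_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.type = PERF_TYPE_RAW;
          attr.size = sizeof(attr);
          attr.config = raw_code | (1ULL << PERF_EVENT_CONFIG_EBB_SHIFT);
          attr.pinned = 1;
          attr.exclusive = 1;
          attr.exclude_kernel = 1;
          attr.exclude_hv = 1;

          /* pid == 0, cpu == -1: attach to the calling task, any CPU. */
          return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
      }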
  4. 31 Jul, 2013: 1 commit
  5. 24 Jul, 2013: 9 commits
  6. 02 Jul, 2013: 3 commits
  7. 01 Jul, 2013: 8 commits
  8. 30 Jun, 2013: 1 commit
    • KVM: PPC: Book3S PR: Allow guest to use 1TB segments · 0f296829
      Authored by Paul Mackerras
      With this, the guest can use 1TB segments as well as 256MB
      segments.  Since a single emulated guest segment can now correspond
      to multiple shadow segments (the shadow segments are still 256MB
      segments), this adds a new kvmppc_mmu_flush_segment() that scans
      for all shadow segments that need to be removed.
      
      This restructures the guest HPT (hashed page table) lookup code to
      use the correct hashing and matching functions for HPTEs within a
      1TB segment.  We use the standard hpt_hash() function instead of
      open-coding the hash calculation, and we use HPTE_V_COMPARE() with
      an AVPN value that has the B (segment size) field included.  The
      calculation of avpn is done a little earlier since it doesn't
      change in the loop starting at the do_second label.  (An
      illustrative sketch of the two hash formulas follows this entry.)
      
      The computation in kvmppc_mmu_book3s_64_esid_to_vsid() changes so
      that it returns a 256MB VSID even if the guest SLB entry is a 1TB
      entry, because the users of this function are creating 256MB SLB
      entries.  We set a new VSID_1T flag so that entries created from
      1TB segments don't collide with entries from 256MB segments.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
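
      For reference, an illustrative sketch of the two primary-hash
      formulas the rewritten lookup relies on.  The segment sizes come
      from the commit text (256MB and 1TB); the exact mask width and the
      25-bit VSID fold for 1TB segments are assumptions based on the
      usual Book3S HPT scheme, not a copy of the kernel's hpt_hash().

      #include <stdint.h>
      #include <stdio.h>

      #define SID_SHIFT     28              /* 256MB segment: 2^28 bytes */
      #define SID_SHIFT_1T  40              /* 1TB segment:   2^40 bytes */
      #define HASH_MASK     0x7fffffffffULL /* assumed hash width */

      /* Primary hash for an effective address in a 256MB segment. */
      static uint64_t hash_256m(uint64_t vsid, uint64_t ea, unsigned pshift)
      {
          uint64_t page = (ea & ((1ULL << SID_SHIFT) - 1)) >> pshift;
          return (vsid ^ page) & HASH_MASK;
      }

      /* Primary hash for an effective address in a 1TB segment: the
       * VSID is folded in twice (once shifted left by 25) so 1TB
       * entries are spread differently from 256MB ones. */
      static uint64_t hash_1t(uint64_t vsid, uint64_t ea, unsigned pshift)
      {
          uint64_t page = (ea & ((1ULL << SID_SHIFT_1T) - 1)) >> pshift;
          return ((vsid << 25) ^ vsid ^ page) & HASH_MASK;
      }

      int main(void)
      {
          uint64_t ea = 0x123456789000ULL;   /* arbitrary test address */

          printf("256MB hash: 0x%llx\n",
                 (unsigned long long)hash_256m(0xabcd, ea, 12));
          printf("1TB   hash: 0x%llx\n",
                 (unsigned long long)hash_1t(0xabcd, ea, 12));
          return 0;
      }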
  9. 29 Jun, 2013: 1 commit
  10. 26 Jun, 2013: 1 commit
  11. 25 Jun, 2013: 2 commits