1. 19 October 2016, 2 commits
  2. 12 October 2016, 1 commit
    • powerpc/mm/hash64: Fix might_have_hea() check · 08bf75ba
      Committed by Michael Ellerman
      In commit 2b4e3ad8 ("powerpc/mm/hash64: Don't test for machine type
      to detect HEA special case") we changed the logic in might_have_hea()
      to check FW_FEATURE_SPLPAR rather than machine_is(pseries).
      
      However the check was incorrectly negated, leading to crashes on
      machines with HEA adapters, such as:
      
        mm: Hashing failure ! EA=0xd000080080004040 access=0x800000000000000e current=NetworkManager
            trap=0x300 vsid=0x13d349c ssize=1 base psize=2 psize 2 pte=0xc0003cc033e701ae
        Unable to handle kernel paging request for data at address 0xd000080080004040
        Call Trace:
          .ehea_create_cq+0x148/0x340 [ehea] (unreliable)
          .ehea_up+0x258/0x1200 [ehea]
          .ehea_open+0x44/0x1a0 [ehea]
          ...
      
      Fix it by removing the negation.
      
      Fixes: 2b4e3ad8 ("powerpc/mm/hash64: Don't test for machine type to detect HEA special case")
      Cc: stable@vger.kernel.org # v4.8+
      Reported-by: Denis Kirjanov <kda@linux-powerpc.org>
      Reported-by: Jan Stancek <jstancek@redhat.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
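      A minimal sketch of the corrected check described in the commit above. Only firmware_has_feature() and FW_FEATURE_SPLPAR are taken from the commit text; the body is simplified relative to the in-tree helper:

      static inline bool might_have_hea(void)
      {
              /*
               * HEA adapters only exist on PowerVM (shared LPAR) machines,
               * so the HEA special case in the hash MMU setup is only
               * needed when the SPLPAR firmware feature is present.
               *
               * The broken version returned
               *         !firmware_has_feature(FW_FEATURE_SPLPAR);
               * i.e. the special case was skipped exactly where it was
               * needed, producing the hashing failures quoted above.
               */
              return firmware_has_feature(FW_FEATURE_SPLPAR);
      }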
  3. 29 September 2016, 1 commit
    • KVM: PPC: Book3S HV: Migrate pinned pages out of CMA · 2e5bbb54
      Committed by Balbir Singh
      When PCI device pass-through is enabled via VFIO, KVM-PPC will
      pin pages using get_user_pages_fast(). One downside of this
      pinning is that the page could be in the CMA region, which is
      also used for other allocations such as the hash page table.
      Ideally we want the pinned pages to come from outside the CMA
      region.

      This patch (currently only for KVM PPC with VFIO) forcefully
      migrates such pages out of CMA (huge pages are omitted for the
      moment). There are more efficient ways of doing this, but they
      would be more elaborate and would affect a wider audience than
      just the KVM PPC implementation.
      
      The magic is in new_iommu_non_cma_page(), which allocates the
      replacement page from a non-CMA region.
      
      I've tested the patches lightly at my end. The full solution
      requires migration of THP pages in the CMA region. That work
      will be done incrementally on top of this.
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      Acked-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      [mpe: Merged via powerpc tree as that's where the changes are]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
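      A minimal sketch of the allocation callback named in the commit above, assuming the three-argument new_page_t callback shape that migrate_pages() used around v4.8; the parameter names and the lack of huge-page and error handling are simplifications:

      static struct page *new_iommu_non_cma_page(struct page *page,
                                                 unsigned long private,
                                                 int **resultp)
      {
              /*
               * GFP_USER does not include __GFP_MOVABLE, so the
               * destination page cannot come from a CMA pageblock
               * (CMA only satisfies movable allocations).
               * migrate_pages() then moves the pinned mapping onto
               * this replacement page, leaving CMA free for uses
               * such as the hash page table.
               */
              return alloc_page(GFP_USER);
      }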
  4. 23 September 2016, 4 commits
  5. 19 September 2016, 1 commit
  6. 13 September 2016, 5 commits
  7. 09 September 2016, 1 commit
    • powerpc/mm: Speed up computation of base and actual page size for a HPTE · 0eeede0c
      Committed by Paul Mackerras
      This replaces a 2-D search through an array with a simple 8-bit table
      lookup for determining the actual and/or base page size for a HPT entry.
      
      The encoding in the second doubleword of the HPTE is designed to encode
      the actual and base page sizes without using any more bits than would be
      needed for a 4k page number, by using between 1 and 8 low-order bits of
      the RPN (real page number) field to encode the page sizes.  A single
      "large page" bit in the first doubleword indicates that these low-order
      bits are to be interpreted like this.
      
      We can determine the page sizes by using the low-order 8 bits of the RPN
      to look up a 256-entry table.  For actual page sizes less than 1MB, some
      of the upper bits of these 8 bits are going to be real address bits, but
      we can cope with that by replicating the entries for those smaller page
      sizes.
      
      While we're at it, let's move the hpte_page_size() and hpte_base_page_size()
      functions from a KVM-specific header to a header for 64-bit HPT systems,
      since this computation doesn't have anything specifically to do with KVM.
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
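      A minimal sketch of the 256-entry lookup described above. The structure, field names and function are illustrative only; the in-tree version uses a more compact per-entry encoding, and here the RPN and "large page" bit are assumed to have been extracted from the HPTE already:

      struct hpte_sizes {
              unsigned char base_shift;    /* log2 of the base page size */
              unsigned char actual_shift;  /* log2 of the actual page size */
      };

      /*
       * Indexed by the low-order 8 bits of the RPN when the "large page"
       * bit is set. Entries for page sizes below 1MB are replicated so
       * that stray real-address bits in the upper index bits don't
       * matter. Filled in at MMU init from the supported size list.
       */
      static struct hpte_sizes hpte_page_sizes[256];

      static unsigned long hpte_actual_page_size(unsigned long rpn, bool large)
      {
              if (!large)                     /* no encoding: plain 4k page */
                      return 1UL << 12;
              return 1UL << hpte_page_sizes[rpn & 0xff].actual_shift;
      }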
  8. 08 September 2016, 1 commit
    • powerpc/mm: Don't alias user region to other regions below PAGE_OFFSET · f077aaf0
      Committed by Paul Mackerras
      In commit c60ac569 ("powerpc: Update kernel VSID range", 2013-03-13)
      we lost a check on the region number (the top four bits of the effective
      address) for addresses below PAGE_OFFSET.  That commit replaced a check
      that the top 18 bits were all zero with a check that bits 46 - 59 were
      zero (performed for all addresses, not just user addresses).
      
      This means that userspace can access an address like 0x1000_0xxx_xxxx_xxxx
      and we will insert a valid SLB entry for it.  The VSID used will be the
      same as if the top 4 bits were 0, but the page size will be some random
      value obtained by indexing beyond the end of the mm_ctx_high_slices_psize
      array in the paca.  If that page size is the same as would be used for
      region 0, then userspace just has an alias of the region 0 space.  If the
      page size is different, then no HPTE will be found for the access, and
      the process will get a SIGSEGV (since hash_page_mm() will refuse to create
      a HPTE for the bogus address).
      
      The access beyond the end of the mm_ctx_high_slices_psize can be at most
      5.5MB past the array, and so will be in RAM somewhere.  Since the access
      is a load performed in real mode, it won't fault or crash the kernel.
      At most this bug could perhaps leak a little bit of information about
      blocks of 32 bytes of memory located at offsets of i * 512kB past the
      paca->mm_ctx_high_slices_psize array, for 1 <= i <= 11.
      
      Fixes: c60ac569 ("powerpc: Update kernel VSID range")
      Cc: stable@vger.kernel.org # v3.9+
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
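      A minimal sketch of the kind of region check the commit above restores for user addresses; the macro name is local to this illustration, and the actual fix lives in the low-level SLB miss code rather than in C:

      #define EA_REGION(ea)   ((ea) >> 60)    /* top 4 bits = region number */

      /*
       * For a user access only region 0 is legitimate; anything else,
       * e.g. 0x1000_0xxx_xxxx_xxxx, must not get an SLB entry, otherwise
       * the page size ends up being read from beyond the end of
       * mm_ctx_high_slices_psize as described above.
       */
      static inline bool user_region_ok(unsigned long ea)
      {
              return EA_REGION(ea) == 0;
      }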
  9. 07 September 2016, 1 commit
  10. 22 August 2016, 1 commit
  11. 08 August 2016, 1 commit
  12. 04 August 2016, 1 commit
    • powerpc/mm: Move register_process_table() out of ppc_md · eea8148c
      Committed by Michael Ellerman
      We want to initialise register_process_table() before ppc_md is set up,
      so that it can be called as part of MMU init (at least on Radix ATM).
      
      That no longer works because probe_machine() requires that ppc_md be
      empty before it's called, and we now do probe_machine() much later.
      
      So make register_process_table a global for now. It will probably move
      into a mmu_radix_ops struct at some point in the future.
      
      This was broken by me when applying commit 7025776e "powerpc/mm:
      Move hash table ops to a separate structure" due to conflicts with other
      patches.
      
      Fixes: 7025776e ("powerpc/mm: Move hash table ops to a separate structure")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
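      A rough sketch of the shape of the change above, with a simplified signature for illustration; the point is only that the hook becomes a plain global that early Radix MMU init can assign, instead of a ppc_md member that must stay empty until probe_machine():

      /*
       * Ordinary global function pointer, assignable during early MMU
       * init, before probe_machine() has populated ppc_md.
       */
      int (*register_process_table)(unsigned long base, unsigned long page_size,
                                    unsigned long tbl_size);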
  13. 03 August 2016, 1 commit
  14. 01 August 2016, 14 commits
  15. 28 July 2016, 1 commit
  16. 27 July 2016, 1 commit
  17. 26 July 2016, 2 commits
  18. 23 July 2016, 1 commit
    • powerpc/numa: Convert to hotplug state machine · bdab88e0
      Committed by Sebastian Andrzej Siewior
      Install the callbacks via the hotplug state machine. On the boot CPU the
      callback is invoked manually, because cpuhp is not up that early and
      everything must be preinitialized before additional CPUs are brought up.
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Christophe Jaillet <christophe.jaillet@wanadoo.fr>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: rt@linutronix.de
      Link: http://lkml.kernel.org/r/20160718140727.GA13132@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
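      A minimal sketch of the conversion pattern used in the commit above, with callback names and the state constant treated as assumptions rather than the exact symbols added by the patch:

      /* Assumed callback names; the real numa code's callbacks may differ. */
      static int ppc_numa_cpu_prepare(unsigned int cpu)
      {
              /* associate the incoming CPU with its NUMA node, update masks */
              return 0;
      }

      static int ppc_numa_cpu_dead(unsigned int cpu)
      {
              /* undo the prepare step when the CPU goes away */
              return 0;
      }

      static int __init numa_hotplug_init(void)
      {
              /*
               * _nocalls: register the callbacks without invoking them for
               * already-online CPUs; the boot CPU is handled by a direct
               * call elsewhere, since cpuhp is not up that early in boot.
               * CPUHP_POWER_NUMA_PREPARE stands in for whatever prepare-
               * stage state constant the subsystem registers.
               */
              return cpuhp_setup_state_nocalls(CPUHP_POWER_NUMA_PREPARE,
                                               "powerpc/numa:prepare",
                                               ppc_numa_cpu_prepare,
                                               ppc_numa_cpu_dead);
      }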