1. 25 Sep 2014, 1 commit
    • powerpc/ppc64: Clean up the boot-time settings display · bdce97e9
      Committed by Michael Ellerman
      At boot we display a bunch of low level settings which can be useful to
      know, and can help to spot bugs when things are fundamentally
      misconfigured.
      
      At the moment they are very widely spaced, so that we can accommodate
      the line:
      
        ppc64_caches.dcache_line_size = 0xYY
      
      But we only print that line when the cache line size is not 128, i.e.
      almost never, so it usually just makes the display look odd.
      
      The ppc64_caches prefix is redundant so remove it, which means we can
      align things a bit closer for the common case. While we're there
      replace the last use of camelCase (physicalMemorySize), and use
      phys_mem_size.
      
      Before:
        Starting Linux PPC64 #104 SMP Wed Aug 6 18:41:34 EST 2014
        -----------------------------------------------------
        ppc64_pft_size                = 0x1a
        physicalMemorySize            = 0x200000000
        ppc64_caches.dcache_line_size = 0xf0
        ppc64_caches.icache_line_size = 0xf0
        htab_address                  = 0xdeadbeef
        htab_hash_mask                = 0x7ffff
        physical_start                = 0xf000bar
        -----------------------------------------------------
      
      After:
        Starting Linux PPC64 #103 SMP Wed Aug 6 18:38:04 EST 2014
        -----------------------------------------------------
        ppc64_pft_size    = 0x1a
        phys_mem_size     = 0x200000000
        dcache_line_size  = 0xf0
        icache_line_size  = 0xf0
        htab_address      = 0xdeadbeef
        htab_hash_mask    = 0x7ffff
        physical_start    = 0xf000bar
        -----------------------------------------------------
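
      A minimal sketch of the kind of print helper this describes (illustrative
      only, assuming the kernel's pr_info interface; this is not the literal diff):

        static void print_system_settings(unsigned long pft_size,
                                          unsigned long long phys_mem_size,
                                          unsigned int dcache_line_size,
                                          unsigned int icache_line_size)
        {
                /* Short, fixed-width names so the common case lines up. */
                pr_info("ppc64_pft_size    = 0x%lx\n", pft_size);
                pr_info("phys_mem_size     = 0x%llx\n", phys_mem_size);

                /* Only mention the cache line sizes when they are unusual. */
                if (dcache_line_size != 128)
                        pr_info("dcache_line_size  = 0x%x\n", dcache_line_size);
                if (icache_line_size != 128)
                        pr_info("icache_line_size  = 0x%x\n", icache_line_size);
        }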
      
      This patch is final, no bike shedding ;)
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  2. 09 Aug 2014, 1 commit
  3. 30 Jul 2014, 1 commit
    • powerpc/e6500: Add support for hardware threads · e16c8765
      Committed by Andy Fleming
      The general idea is that each core will release all of its
      threads into the secondary thread startup code, which will
      eventually wait in the secondary core holding area, for the
      appropriate bit in the PACA to be set. The kick_cpu function
      pointer will set that bit in the PACA, and thus "release"
      the core/thread to boot. We also need to do a few things that
      U-Boot normally does for CPUs (like enable branch prediction).
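
      A hedged sketch of the release handshake described above (the structure
      and field names here are illustrative, not taken from the patch):

        /* Per-thread flag in the PACA; the boot CPU sets it to release the thread. */
        struct paca_sketch {
                volatile int cpu_start;
        };

        /* Secondary thread: park in the holding area until its PACA bit is set. */
        static void secondary_hold(struct paca_sketch *paca)
        {
                while (!paca->cpu_start)
                        ;               /* spin until kicked */
                /* ...then fall through into the normal secondary startup path */
        }

        /* The kick_cpu hook on the boot CPU just sets the bit for that thread. */
        static void kick_thread(struct paca_sketch *paca)
        {
                paca->cpu_start = 1;
        }
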
      Signed-off-by: Andy Fleming <afleming@freescale.com>
      [scottwood@freescale.com: various changes, including only enabling
       threads if Linux wants to kick them]
      Signed-off-by: Scott Wood <scottwood@freescale.com>
  4. 28 Jul 2014, 2 commits
  5. 05 Jun 2014, 1 commit
  6. 23 Apr 2014, 1 commit
  7. 13 Apr 2014, 1 commit
  8. 07 Apr 2014, 3 commits
  9. 20 Mar 2014, 2 commits
  10. 10 Jan 2014, 1 commit
    • powerpc/e6500: TLB miss handler with hardware tablewalk support · 28efc35f
      Committed by Scott Wood
      There are a few things that make the existing hw tablewalk handlers
      unsuitable for e6500:
      
       - Indirect entries go in TLB1 (though the resulting direct entries go in
         TLB0).
      
       - It has threads, but no "tlbsrx." -- so we need a spinlock and
         a normal "tlbsx".  Because we need this lock, hardware tablewalk
         is mandatory on e6500 unless we want to add spinlock+tlbsx to
         the normal bolted TLB miss handler.
      
       - TLB1 has no HES (nor next-victim hint), so we need software round robin
         for victim selection; a rough sketch of this follows below.
         (TODO: integrate this round robin data with hugetlb/KVM)
      
       - The existing tablewalk handlers map half of a page table at a time,
         because IBM hardware has a fixed 1MiB indirect page size.  e6500
         has variable size indirect entries, with a minimum of 2MiB.
         So we can't do the half-page indirect mapping, and even if we
         could it would be less efficient than mapping the full page.
      
       - Like on e5500, the linear mapping is bolted, so we don't need the
         overhead of supporting nested tlb misses.
      
      Note that hardware tablewalk does not work in rev1 of e6500.
      We do not expect to support e6500 rev1 in mainline Linux.
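
      For the round-robin point above, a hedged sketch of software victim
      selection for TLB1 (purely illustrative; the entry count and any
      per-core placement are assumptions, not the kernel's actual code):

        #define TLB1_SW_ENTRIES 64              /* assumed number of managed TLB1 slots */

        static unsigned int tlb1_next_victim;   /* per-core data in practice */

        static unsigned int tlb1_pick_victim(void)
        {
                unsigned int victim = tlb1_next_victim;

                /* Advance and wrap so every slot is reused in turn. */
                tlb1_next_victim = (victim + 1) % TLB1_SW_ENTRIES;
                return victim;
        }
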
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Cc: Mihai Caraman <mihai.caraman@freescale.com>
  11. 05 Dec 2013, 1 commit
  12. 02 Dec 2013, 1 commit
  13. 26 Nov 2013, 1 commit
  14. 30 Oct 2013, 1 commit
  15. 14 Aug 2013, 4 commits
  16. 08 Aug 2013, 1 commit
  17. 08 Jul 2013, 2 commits
  18. 01 Jul 2013, 1 commit
    • powerpc/smp: Section mismatch from smp_release_cpus to __initdata spinning_secondaries · 8246aca7
      Committed by Chen Gang
      smp_release_cpus() is a normal function and is called in normal (non-init)
      context, but it references spinning_secondaries, which is marked __initdata.
      spinning_secondaries needs to lose that annotation to match smp_release_cpus()
      (see the sketch after the warnings below).
      
      The related warning
      (the linker reports boot_paca.33377, but it actually refers to spinning_secondaries):
      
      -----------------------------------------------------------------------------
      
      WARNING: arch/powerpc/kernel/built-in.o(.text+0x23176): Section mismatch in reference from the function .smp_release_cpus() to the variable .init.data:boot_paca.33377
      The function .smp_release_cpus() references
      the variable __initdata boot_paca.33377.
      This is often because .smp_release_cpus lacks a __initdata
      annotation or the annotation of boot_paca.33377 is wrong.
      
      WARNING: arch/powerpc/kernel/built-in.o(.text+0x231fe): Section mismatch in reference from the function .smp_release_cpus() to the variable .init.data:boot_paca.33377
      The function .smp_release_cpus() references
      the variable __initdata boot_paca.33377.
      This is often because .smp_release_cpus lacks a __initdata
      annotation or the annotation of boot_paca.33377 is wrong.
      
      -----------------------------------------------------------------------------
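
      The fix the warning points at is simply dropping the __initdata annotation,
      roughly (a hedged before/after sketch, not the literal diff):

        /* Before: freed after boot, yet still referenced by smp_release_cpus() */
        /* u64 __initdata spinning_secondaries; */

        /* After: a plain definition that stays resident */
        u64 spinning_secondaries;
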
      Signed-off-by: Chen Gang <gang.chen@asianux.com>
      CC: <stable@vger.kernel.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  19. 30 Apr 2013, 1 commit
    • powerpc: Reduce PTE table memory wastage · 5c1f6ee9
      Committed by Aneesh Kumar K.V
      We allocate one page for the last level of the Linux page table. With THP and
      a large page size of 16MB, that would mean we are wasting a large part
      of that page. To map a 16MB area, we only need a PTE space of 2K with a 64K
      page size. This patch reduces the space wastage by sharing the page
      allocated for the last level of the Linux page table among multiple pmd
      entries. We call these smaller chunks PTE page fragments, and the
      allocated page a PTE page.
      
      In order to support systems which don't have 64K HPTE support, we also
      add another 2K to each PTE page fragment. The second half of the fragment
      is used for storing the slot and secondary bit information of an HPTE. With this
      we now have a 4K PTE fragment.
      
      We use a simple approach to share the PTE page. On allocation, we bump the
      PTE page refcount to 16 and share the PTE page with the next 16 pte alloc
      requests. This should help the node locality of the PTE page fragments,
      assuming that the immediate pte alloc requests will mostly come from the
      same NUMA node. We don't try to reuse a freed PTE page fragment, hence
      we could be wasting some space.
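
      A hedged, simplified sketch of the fragment scheme (conceptual only; the
      names and the absence of locking are simplifications, not the actual
      arch/powerpc allocator):

        #define PTE_PAGE_SIZE   (64 * 1024)
        #define PTE_FRAG_SIZE   (4 * 1024)      /* 2K of PTEs + 2K of HPTE slot info */
        #define PTE_FRAG_NR     (PTE_PAGE_SIZE / PTE_FRAG_SIZE)   /* 16 */

        struct pte_page_sketch {
                char *base;             /* the 64K page backing the fragments */
                unsigned int next;      /* next unused fragment index */
                unsigned int refcount;  /* pre-charged to PTE_FRAG_NR on allocation */
        };

        /* Hand out the next 4K fragment; NULL means a fresh PTE page is needed. */
        static char *pte_frag_alloc(struct pte_page_sketch *p)
        {
                if (p->next >= PTE_FRAG_NR)
                        return NULL;
                return p->base + (p->next++ * PTE_FRAG_SIZE);
        }

        /* Freeing a fragment only drops the refcount; a non-zero return means
         * the last fragment is gone and the backing page can be freed. */
        static int pte_frag_free(struct pte_page_sketch *p)
        {
                return --p->refcount == 0;
        }
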
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  20. 15 Feb 2013, 2 commits
  21. 15 Nov 2012, 1 commit
  22. 27 Sep 2012, 1 commit
  23. 29 Mar 2012, 1 commit
  24. 05 Mar 2012, 1 commit
  25. 07 Dec 2011, 1 commit
  26. 16 Nov 2011, 1 commit
  27. 01 Nov 2011, 1 commit
  28. 20 Sep 2011, 2 commits
  29. 12 Jul 2011, 1 commit
    • KVM: PPC: Allocate RMAs (Real Mode Areas) at boot for use by guests · aa04b4cc
      Committed by Paul Mackerras
      This adds infrastructure which will be needed to allow book3s_hv KVM to
      run on older POWER processors, including PPC970, which don't support
      the Virtual Real Mode Area (VRMA) facility, but only the Real Mode
      Offset (RMO) facility.  These processors require a physically
      contiguous, aligned area of memory for each guest.  When the guest does
      an access in real mode (MMU off), the address is compared against a
      limit value, and if it is lower, the address is ORed with an offset
      value (from the Real Mode Offset Register (RMOR)) and the result becomes
      the real address for the access.  The size of the RMA has to be one of
      a set of supported values, which usually includes 64MB, 128MB, 256MB
      and some larger powers of 2.
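
      As a hedged illustration of that translation (the function and parameter
      names are made up; the real check is done by hardware against the limit
      value and the RMOR register described above):

        /* Real-mode access below the limit: OR in the Real Mode Offset. */
        static unsigned long rmo_real_address(unsigned long addr,
                                              unsigned long rmo_limit,
                                              unsigned long rmor)
        {
                if (addr < rmo_limit)
                        return addr | rmor;     /* lands inside the guest's RMA */
                return addr;                    /* above the limit: not covered here */
        }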
      
      Since we are unlikely to be able to allocate 64MB or more of physically
      contiguous memory after the kernel has been running for a while, we
      allocate a pool of RMAs at boot time using the bootmem allocator.  The
      size and number of the RMAs can be set using the kvm_rma_size=xx and
      kvm_rma_count=xx kernel command line options.
      
      KVM exports a new capability, KVM_CAP_PPC_RMA, to signal the availability
      of the pool of preallocated RMAs.  The capability value is 1 if the
      processor can use an RMA but doesn't require one (because it supports
      the VRMA facility), or 2 if the processor requires an RMA for each guest.
      
      This adds a new ioctl, KVM_ALLOCATE_RMA, which allocates an RMA from the
      pool and returns a file descriptor which can be used to map the RMA.  It
      also returns the size of the RMA in the argument structure.
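
      A hedged userspace sketch of how the new ioctl might be driven (this
      assumes a uapi struct kvm_allocate_rma with a single rma_size field and
      the KVM_CAP_PPC_RMA / KVM_ALLOCATE_RMA definitions from kernels of that
      era; check the matching headers before relying on it):

        #include <stddef.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <linux/kvm.h>

        /* Allocate an RMA from the boot-time pool and map it into userspace. */
        static void *map_rma(int kvm_fd, int vm_fd, unsigned long long *size_out)
        {
                struct kvm_allocate_rma rma = { 0 };
                void *addr;
                int rma_fd;

                /* Capability value: 1 = RMA optional (VRMA supported), 2 = required. */
                if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_RMA) <= 0)
                        return NULL;            /* no preallocated RMA pool */

                rma_fd = ioctl(vm_fd, KVM_ALLOCATE_RMA, &rma);
                if (rma_fd < 0)
                        return NULL;

                /* The returned fd backs the contiguous area; its size comes back
                 * in the argument structure. */
                addr = mmap(NULL, rma.rma_size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, rma_fd, 0);
                if (addr == MAP_FAILED)
                        return NULL;

                *size_out = rma.rma_size;
                return addr;
        }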
      
      Having an RMA means we will get multiple KVM_SET_USER_MEMORY_REGION
      ioctl calls from userspace.  To cope with this, we now preallocate the
      kvm->arch.ram_pginfo array when the VM is created with a size sufficient
      for up to 64GB of guest memory.  Subsequently we will get rid of this
      array and use memory associated with each memslot instead.
      
      This moves most of the code that translates the user addresses into
      host pfns (page frame numbers) out of kvmppc_prepare_vrma up one level
      to kvmppc_core_prepare_memory_region.  Also, instead of having to look
      up the VMA for each page in order to check the page size, we now check
      that the pages we get are compound pages of 16MB.  However, if we are
      adding memory that is mapped to an RMA, we don't bother with calling
      get_user_pages_fast and instead just offset from the base pfn for the
      RMA.
      
      Typically the RMA gets added after vcpus are created, which makes it
      inconvenient to have the LPCR (logical partition control register) value
      in the vcpu->arch struct, since the LPCR controls whether the processor
      uses RMA or VRMA for the guest.  This moves the LPCR value into the
      kvm->arch struct and arranges for the MER (mediated external request)
      bit, which is the only bit that varies between vcpus, to be set in
      assembly code when going into the guest if there is a pending external
      interrupt request.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  30. 17 Jun 2011, 1 commit