1. 27 Apr 2011, 1 commit
  2. 20 Apr 2011, 1 commit
  3. 09 Dec 2010, 1 commit
  4. 18 Nov 2010, 1 commit
  5. 24 Aug 2010, 1 commit
  6. 05 Aug 2010, 1 commit
    • memblock: Remove rmo_size, burry it in arch/powerpc where it belongs · cd3db0c4
      Benjamin Herrenschmidt authored
      The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact
      server ppc64, though I hijack it on embedded ppc64 for similar purposes)
      and represents the area of memory that can be accessed in real mode
      (i.e. with the MMU off), or, on embedded, from the exception vectors
      (which are bolted into the TLB), which pretty much boils down to the
      same thing.
      
      We take that out of the generic MEMBLOCK data structure and move it into
      arch/powerpc where it belongs, renaming it to "RMA" while at it.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      cd3db0c4
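      A minimal sketch of how an arch-private real-mode limit gets used once it
      lives in arch/powerpc: early allocations that must be reachable with the
      MMU off are simply capped at it. The variable name, helper, and memblock
      call below are illustrative assumptions, not the exact code from this commit.

          extern unsigned long ppc64_rma_size;    /* assumed name; set from the device tree at boot */

          /* hypothetical helper: allocate early memory that stays inside the RMA */
          static void *alloc_within_rma(unsigned long size, unsigned long align)
          {
                  return __va(memblock_alloc_base(size, align, ppc64_rma_size));
          }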
  7. 31 Jul 2010, 1 commit
  8. 14 Jul 2010, 1 commit
  9. 09 Jul 2010, 1 commit
    • powerpc: Optimise per cpu accesses on 64bit · ae01f84b
      Anton Blanchard authored
      Now that we dynamically allocate the paca array, it takes an extra load
      whenever we want to access another cpu's paca. One place we do that a lot
      is per cpu variables. A simple example:
      
      DEFINE_PER_CPU(unsigned long, vara);
      unsigned long test4(int cpu)
      {
      	return per_cpu(vara, cpu);
      }
      
      This takes 4 loads, 5 if you include the actual load of the per cpu variable:
      
          ld r11,-32760(r30)  # load address of paca pointer
          ld r9,-32768(r30)   # load link address of percpu variable
          sldi r3,r29,9       # get offset into paca (each entry is 512 bytes)
          ld r0,0(r11)        # load paca pointer
          add r3,r0,r3        # paca + offset
          ld r11,64(r3)       # load paca[cpu].data_offset
      
          ldx r3,r9,r11       # load per cpu variable
      
      If we remove the ppc64 specific per_cpu_offset(), we get the generic one
      which indexes into a statically allocated array. This removes one load and
      one add:
      
          ld r11,-32760(r30)  # load address of __per_cpu_offset
          ld r9,-32768(r30)   # load link address of percpu variable
          sldi r3,r29,3       # get offset into __per_cpu_offset (each entry 8 bytes)
          ldx r11,r11,r3      # load __per_cpu_offset[cpu]
      
          ldx r3,r9,r11       # load per cpu variable
      
      Having all the offsets in one array also helps when iterating over a per cpu
      variable across a number of cpus, such as in the scheduler. Before we would
      need to load one paca cacheline when calculating each per cpu offset. Now we
      have 16 (128 / sizeof(long)) per cpu offsets in each cacheline.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ae01f84b
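      For reference, the generic per_cpu_offset() this commit falls back to is
      essentially a plain array lookup (simplified from include/asm-generic/percpu.h):

          extern unsigned long __per_cpu_offset[NR_CPUS];
          #define per_cpu_offset(x) (__per_cpu_offset[x])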
  10. 15 Jun 2010, 1 commit
  11. 21 May 2010, 2 commits
  12. 19 Mar 2010, 1 commit
  13. 09 Mar 2010, 1 commit
    • powerpc: Dynamically allocate pacas · 1426d5a3
      Michael Ellerman authored
      On 64-bit kernels we currently have a 512-byte struct paca_struct for
      each cpu (usually just called "the paca"). These are statically
      allocated, which means a kernel built for a large number of cpus will
      waste a lot of space if it's booted on a machine with few cpus.
      
      We can avoid that by only allocating the number of pacas we need at
      boot. However, this is complicated by the fact that we need to access
      the paca before we know how many cpus there are in the system.
      
      The solution is to dynamically allocate enough space for NR_CPUS pacas,
      but then later in boot when we know how many cpus we have, we free any
      unused pacas.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      1426d5a3
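      A hedged sketch of the allocate-then-trim idea described above; the
      early_alloc()/early_free() helpers stand in for the real boot-time
      allocator, and the details are illustrative rather than the commit's
      actual code.

          static struct paca_struct *paca;
          static unsigned long paca_size;

          void __init allocate_pacas(void)
          {
                  /* reserve the worst case before the cpu count is known */
                  paca_size = PAGE_ALIGN(sizeof(struct paca_struct) * NR_CPUS);
                  paca = early_alloc(paca_size, PAGE_SIZE);       /* hypothetical helper */
          }

          void __init free_unused_pacas(void)
          {
                  /* later in boot, once nr_cpu_ids is final, give back the tail */
                  unsigned long new_size = PAGE_ALIGN(sizeof(struct paca_struct) * nr_cpu_ids);

                  if (new_size < paca_size)
                          early_free((char *)paca + new_size, paca_size - new_size);     /* hypothetical helper */
                  paca_size = new_size;
          }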
  14. 10 Nov 2009, 1 commit
    • swiotlb: Defer swiotlb init printing, export swiotlb_print_info() · ad32e8cb
      FUJITA Tomonori authored
      This enables us to avoid printing swiotlb memory info when we
      initialize swiotlb. After swiotlb initialization, we may find
      that we don't need swiotlb after all.
      
      This patch removes the code that prints swiotlb memory info in
      swiotlb_init() and exports the function that does it.
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: chrisw@sous-sol.org
      Cc: dwmw2@infradead.org
      Cc: joerg.roedel@amd.com
      Cc: muli@il.ibm.com
      Cc: tony.luck@intel.com
      Cc: benh@kernel.crashing.org
      LKML-Reference: <1257849980-22640-9-git-send-email-fujita.tomonori@lab.ntt.co.jp>
      [ -v2: merge up conflict ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ad32e8cb
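      A hedged sketch of the intended usage after this change, assuming the
      post-patch interface is swiotlb_init(int verbose) plus an exported
      swiotlb_print_info(void); the arch hook and the needs-swiotlb check
      below are hypothetical.

          void __init arch_setup_dma(void)                /* hypothetical arch hook */
          {
                  swiotlb_init(0);                        /* initialise quietly */

                  if (arch_needs_swiotlb())               /* hypothetical decision */
                          swiotlb_print_info();           /* keep it and report its memory now */
                  else
                          release_swiotlb();              /* hypothetical: swiotlb turned out to be unneeded */
          }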
  15. 30 Oct 2009, 1 commit
  16. 27 Oct 2009, 1 commit
    • powerpc: Fix compile errors found by new ppc64e_defconfig · ce7a35c7
      Kumar Gala authored
      Fix the following 3 issues:
      
      arch/powerpc/kernel/process.c: In function 'arch_randomize_brk':
      arch/powerpc/kernel/process.c:1183: error: 'mmu_highuser_ssize' undeclared (first use in this function)
      arch/powerpc/kernel/process.c:1183: error: (Each undeclared identifier is reported only once
      arch/powerpc/kernel/process.c:1183: error: for each function it appears in.)
      arch/powerpc/kernel/process.c:1183: error: 'MMU_SEGSIZE_1T' undeclared (first use in this function)
      
      In file included from arch/powerpc/kernel/setup_64.c:60:
      arch/powerpc/include/asm/mmu-hash64.h:132: error: redefinition of 'struct mmu_psize_def'
      arch/powerpc/include/asm/mmu-hash64.h:159: error: expected identifier or '(' before numeric constant
      arch/powerpc/include/asm/mmu-hash64.h:396: error: conflicting types for 'mm_context_t'
      arch/powerpc/include/asm/mmu-book3e.h:184: error: previous declaration of 'mm_context_t' was here
      
      cc1: warnings being treated as errors
      arch/powerpc/kernel/pci_64.c: In function 'pcibios_unmap_io_space':
      arch/powerpc/kernel/pci_64.c:100: error: unused variable 'res'
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ce7a35c7
  17. 20 Aug 2009, 5 commits
  18. 14 Aug 2009, 1 commit
    • powerpc64: convert to dynamic percpu allocator · c2a7e818
      Tejun Heo authored
      Now that percpu allows arbitrary embedding of the first chunk,
      powerpc64 can easily be converted to the dynamic percpu allocator.
      Convert it.  powerpc supports several large page sizes.  Cap atom_size
      at 1M.  There isn't much to gain by going above that anyway.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      c2a7e818
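      A hedged sketch of the atom_size cap mentioned above; the page-size
      predicate is made up, and the chosen value is then handed to the generic
      first-chunk setup.

          size_t atom_size;

          if (linear_map_uses_4k_pages())         /* hypothetical predicate */
                  atom_size = PAGE_SIZE;
          else
                  atom_size = 1 << 20;            /* larger linear-map pages: cap at 1M */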
  19. 15 Jun 2009, 1 commit
  20. 09 Jun 2009, 2 commits
  21. 24 Mar 2009, 1 commit
  22. 11 Feb 2009, 1 commit
  23. 13 Jan 2009, 1 commit
  24. 21 Dec 2008, 1 commit
  25. 16 Dec 2008, 1 commit
  26. 03 Dec 2008, 1 commit
    • powerpc: Eliminate NULL test and memset after alloc_bootmem · 786b32f8
      Julia Lawall authored
      As noted by Akinobu Mita in commit b1fceac2 ("x86: remove unnecessary
      memset and NULL check after alloc_bootmem()"), alloc_bootmem and
      related functions never return NULL and always return a zeroed region
      of memory.  Thus a NULL test or memset after calls to these functions
      is unnecessary.
      
      This was fixed using the following semantic patch.
      (http://www.emn.fr/x-info/coccinelle/)
      
      // <smpl>
      @@
      expression E;
      statement S;
      @@
      
      E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
      ... when != E
      (
      - BUG_ON (E == NULL);
      |
      - if (E == NULL) S
      )
      
      @@
      expression E,E1;
      @@
      
      E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
      ... when != E
      - memset(E,0,E1);
      // </smpl>
      Signed-off-by: Julia Lawall <julia@diku.dk>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      786b32f8
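      An illustrative (hypothetical) before/after of the pattern the semantic
      patch rewrites; since alloc_bootmem() never returns NULL and already
      zeroes the region, both the check and the memset are dead code.

          /* before */
          tbl = alloc_bootmem(size);
          if (tbl == NULL)
                  panic("could not allocate table");
          memset(tbl, 0, size);

          /* after */
          tbl = alloc_bootmem(size);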
  27. 31 Oct 2008, 1 commit
  28. 16 Sep 2008, 1 commit
    • powerpc: Make it possible to move the interrupt handlers away from the kernel · 1f6a93e4
      Paul Mackerras authored
      This changes the way that the exception prologs transfer control to
      the handlers in 64-bit kernels with the aim of making it possible to
      have the prologs separate from the main body of the kernel.  Now,
      instead of computing the address of the handler by taking the top
      32 bits of the paca address (to get the 0xc0000000........ part) and
      ORing in something in the bottom 16 bits, we get the base address of
      the kernel by doing a load from the paca and add an offset.
      
      This also replaces an mfmsr and an ori to compute the MSR value for
      the handler with a load from the paca.  That makes it unnecessary to
      have a separate version of EXCEPTION_PROLOG_PSERIES that forces 64-bit
      mode.
      
      We can no longer use direct branches in the exception prolog code,
      which means that the SLB miss handlers can't branch directly to
      .slb_miss_realmode any more.  Instead we have to compute the address
      and do an indirect branch.  This is conditional on CONFIG_RELOCATABLE;
      for non-relocatable kernels we use a direct branch as before.  (A later
      change will allow CONFIG_RELOCATABLE to be set on 64-bit powerpc.)
      
      Since the secondary CPUs on pSeries start execution in the first 0x100
      bytes of real memory and then have to get to wherever the kernel is,
      we can't use a direct branch to get there.  Instead this changes
      __secondary_hold_spinloop from a flag to a function pointer.  When it
      is set to a non-NULL value, the secondary CPUs jump to the function
      pointed to by that value.
      
      Finally this eliminates one code difference between 32-bit and 64-bit
      by making __secondary_hold be the text address of the secondary CPU
      spinloop rather than a function descriptor for it.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      1f6a93e4
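      A hedged C rendering of the two address computations described above;
      the real code is assembler in the exception prologs, and the field and
      symbol names here are illustrative.

          /* before: assume the kernel sits at 0xc000000000000000 and rebuild the
           * handler address from the top half of the paca pointer */
          handler = (paca_address & 0xffffffff00000000UL) | handler_low_16_bits;

          /* after: load the kernel's base address from the paca and add the
           * handler's offset from the start of the kernel image */
          handler = paca->kernelbase + handler_offset;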
  29. 28 Jul 2008, 1 commit
  30. 03 Jul 2008, 1 commit
    • powerpc: Fixup lwsync at runtime · 2d1b2027
      Kumar Gala authored
      To allow for a single kernel image on e500 v1/v2/mc we need to fix up lwsync
      at runtime.  On e500v1/v2, lwsync causes an illegal instruction exception, so
      we need to patch up the code.  We default to 'sync' since that is always safe,
      and if the cpu is capable we replace 'sync' with 'lwsync'.
      
      We introduce CPU_FTR_LWSYNC as a way to determine at runtime if this is
      needed.  This flag could be moved elsewhere since we don't really use it
      for the normal CPU_FTR purpose.
      
      Finally we only store the relative offset in the fixup section to keep it
      as small as possible rather than using a full fixup_entry.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      2d1b2027
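      A hedged sketch of the runtime pass described above, rewriting each
      recorded site from 'sync' to 'lwsync' when the cpu supports it. The
      self-relative s32 entry layout follows the commit text; the instruction
      constant and patch helper are assumptions.

          void do_lwsync_fixups(unsigned long value, void *fixup_start, void *fixup_end)
          {
                  s32 *fcur, *fend;

                  if (!(value & CPU_FTR_LWSYNC))
                          return;         /* keep the always-safe 'sync' that was compiled in */

                  for (fcur = fixup_start, fend = fixup_end; fcur < fend; fcur++) {
                          /* each fixup entry is a relative offset back to the instruction */
                          unsigned int *insn = (unsigned int *)((unsigned long)fcur + *fcur);

                          patch_instruction(insn, PPC_INST_LWSYNC);       /* assumed helper and constant */
                  }
          }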
  31. 27 May 2008, 1 commit
    • ftrace: powerpc clean ups · ccbfac29
      Steven Rostedt authored
      This patch cleans up the ftrace code on PowerPC based on comments from
      Michael Ellerman.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Cc: proski@gnu.org
      Cc: a.p.zijlstra@chello.nl
      Cc: Pekka Paalanen <pq@iki.fi>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: linuxppc-dev@ozlabs.org
      Cc: Soeren Sandmann Pedersen <sandmann@redhat.com>
      Cc: paulus@samba.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      ccbfac29
  32. 24 May 2008, 1 commit
  33. 09 May 2008, 2 commits