1. 10 Nov 2009, 1 commit
    • swiotlb: Defer swiotlb init printing, export swiotlb_print_info() · ad32e8cb
      Committed by FUJITA Tomonori
      This enables us to avoid printing swiotlb memory info when we
      initialize swiotlb. After swiotlb initialization, we could find
      that we don't need swiotlb.
      
      This patch removes the code to print swiotlb memory info in
      swiotlb_init() and exports the function to do that.
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: chrisw@sous-sol.org
      Cc: dwmw2@infradead.org
      Cc: joerg.roedel@amd.com
      Cc: muli@il.ibm.com
      Cc: tony.luck@intel.com
      Cc: benh@kernel.crashing.org
      LKML-Reference: <1257849980-22640-9-git-send-email-fujita.tomonori@lab.ntt.co.jp>
      [ -v2: merge up conflict ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ad32e8cb
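      A hedged sketch of the resulting split (assumed simplified shapes, not the
      exact kernel code): the init path no longer prints unconditionally, and a
      caller that decides to keep swiotlb can print the memory info later via the
      newly exported helper.

      /* sketch only; real signatures and setup details differ */
      void swiotlb_print_info(void);          /* now exported for callers */

      void __init swiotlb_init(int verbose)
      {
              /* ... set up the bounce buffer pool ... */
              if (verbose)
                      swiotlb_print_info();   /* printing is now optional */
      }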
  2. 30 Oct 2009, 1 commit
  3. 27 Oct 2009, 1 commit
    • powerpc: Fix compile errors found by new ppc64e_defconfig · ce7a35c7
      Committed by Kumar Gala
      Fix the following 3 issues:
      
      arch/powerpc/kernel/process.c: In function 'arch_randomize_brk':
      arch/powerpc/kernel/process.c:1183: error: 'mmu_highuser_ssize' undeclared (first use in this function)
      arch/powerpc/kernel/process.c:1183: error: (Each undeclared identifier is reported only once
      arch/powerpc/kernel/process.c:1183: error: for each function it appears in.)
      arch/powerpc/kernel/process.c:1183: error: 'MMU_SEGSIZE_1T' undeclared (first use in this function)
      
      In file included from arch/powerpc/kernel/setup_64.c:60:
      arch/powerpc/include/asm/mmu-hash64.h:132: error: redefinition of 'struct mmu_psize_def'
      arch/powerpc/include/asm/mmu-hash64.h:159: error: expected identifier or '(' before numeric constant
      arch/powerpc/include/asm/mmu-hash64.h:396: error: conflicting types for 'mm_context_t'
      arch/powerpc/include/asm/mmu-book3e.h:184: error: previous declaration of 'mm_context_t' was here
      
      cc1: warnings being treated as errors
      arch/powerpc/kernel/pci_64.c: In function 'pcibios_unmap_io_space':
      arch/powerpc/kernel/pci_64.c:100: error: unused variable 'res'
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ce7a35c7
  4. 20 Aug 2009, 5 commits
  5. 14 Aug 2009, 1 commit
    • powerpc64: convert to dynamic percpu allocator · c2a7e818
      Committed by Tejun Heo
      Now that percpu allows arbitrary embedding of the first chunk,
      powerpc64 can easily be converted to the dynamic percpu allocator.
      Convert it.  powerpc supports several large page sizes.  Cap atom_size
      at 1M.  There isn't much to gain by going above that anyway.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      c2a7e818
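      A minimal illustration of the atom_size cap described above (assumed helper
      name; not the actual setup_per_cpu_areas() code):

      #include <stddef.h>

      #define SZ_1M (1024UL * 1024UL)

      /* pick the percpu allocator's atom size from the linear-mapping page
       * size, but never go above 1M, since larger atoms gain little */
      static size_t pcpu_pick_atom_size(size_t linear_page_size)
      {
              return linear_page_size > SZ_1M ? SZ_1M : linear_page_size;
      }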
  6. 15 Jun 2009, 1 commit
  7. 09 Jun 2009, 2 commits
  8. 24 Mar 2009, 1 commit
  9. 11 Feb 2009, 1 commit
  10. 13 Jan 2009, 1 commit
  11. 21 Dec 2008, 1 commit
  12. 16 Dec 2008, 1 commit
  13. 03 Dec 2008, 1 commit
    • powerpc: Eliminate NULL test and memset after alloc_bootmem · 786b32f8
      Committed by Julia Lawall
      As noted by Akinobu Mita in commit b1fceac2 ("x86: remove unnecessary
      memset and NULL check after alloc_bootmem()"), alloc_bootmem and
      related functions never return NULL and always return a zeroed region
      of memory.  Thus a NULL test or memset after calls to these functions
      is unnecessary.
      
      This was fixed using the following semantic patch.
      (http://www.emn.fr/x-info/coccinelle/)
      
      // <smpl>
      @@
      expression E;
      statement S;
      @@
      
      E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
      ... when != E
      (
      - BUG_ON (E == NULL);
      |
      - if (E == NULL) S
      )
      
      @@
      expression E,E1;
      @@
      
      E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
      ... when != E
      - memset(E,0,E1);
      // </smpl>
      Signed-off-by: Julia Lawall <julia@diku.dk>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      786b32f8
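      A concrete (hypothetical) before/after of what the semantic patch removes:
      alloc_bootmem() panics on failure and returns zeroed memory, so both the
      NULL check and the memset are dead code.

      /* before (hypothetical caller) */
      buf = alloc_bootmem(size);
      if (buf == NULL)              /* never taken: alloc_bootmem() panics */
              return -ENOMEM;
      memset(buf, 0, size);         /* redundant: memory is already zeroed */

      /* after */
      buf = alloc_bootmem(size);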
  14. 31 Oct 2008, 1 commit
  15. 16 Sep 2008, 1 commit
    • powerpc: Make it possible to move the interrupt handlers away from the kernel · 1f6a93e4
      Committed by Paul Mackerras
      This changes the way that the exception prologs transfer control to
      the handlers in 64-bit kernels with the aim of making it possible to
      have the prologs separate from the main body of the kernel.  Now,
      instead of computing the address of the handler by taking the top
      32 bits of the paca address (to get the 0xc0000000........ part) and
      ORing in something in the bottom 16 bits, we get the base address of
      the kernel by doing a load from the paca and add an offset.
      
      This also replaces an mfmsr and an ori to compute the MSR value for
      the handler with a load from the paca.  That makes it unnecessary to
      have a separate version of EXCEPTION_PROLOG_PSERIES that forces 64-bit
      mode.
      
      We can no longer use direct branches in the exception prolog code,
      which means that the SLB miss handlers can't branch directly to
      .slb_miss_realmode any more.  Instead we have to compute the address
      and do an indirect branch.  This is conditional on CONFIG_RELOCATABLE;
      for non-relocatable kernels we use a direct branch as before.  (A later
      change will allow CONFIG_RELOCATABLE to be set on 64-bit powerpc.)
      
      Since the secondary CPUs on pSeries start execution in the first 0x100
      bytes of real memory and then have to get to wherever the kernel is,
      we can't use a direct branch to get there.  Instead this changes
      __secondary_hold_spinloop from a flag to a function pointer.  When it
      is set to a non-NULL value, the secondary CPUs jump to the function
      pointed to by that value.
      
      Finally this eliminates one code difference between 32-bit and 64-bit
      by making __secondary_hold be the text address of the secondary CPU
      spinloop rather than a function descriptor for it.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      1f6a93e4
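      A C-level sketch of the address computation change (the real code is
      PowerPC assembly; the names below are purely descriptive):

      /* old scheme: take the top 32 bits of the paca address, which lives in
       * the 0xc000000000000000-based linear mapping, and OR in a small offset */
      static unsigned long handler_addr_old(unsigned long paca_addr,
                                            unsigned long low_bits)
      {
              return (paca_addr & 0xffffffff00000000UL) | low_bits;
      }

      /* new scheme: load the kernel base address from the paca and add the
       * handler's offset from the start of the kernel image, which keeps
       * working when the prologs live away from the kernel body */
      static unsigned long handler_addr_new(unsigned long kernel_base,
                                            unsigned long handler_offset)
      {
              return kernel_base + handler_offset;
      }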
  16. 28 Jul 2008, 1 commit
  17. 03 Jul 2008, 1 commit
    • powerpc: Fixup lwsync at runtime · 2d1b2027
      Committed by Kumar Gala
      To allow for a single kernel image on e500 v1/v2/mc we need to fix up lwsync
      at runtime.  On e500v1/v2, lwsync causes an illegal-instruction exception, so
      we need to patch up the code.  We default to 'sync' since that is always safe,
      and if the CPU is capable we replace 'sync' with 'lwsync'.
      
      We introduce CPU_FTR_LWSYNC as a way to determine at runtime if this is
      needed.  This flag could be moved elsewhere since we don't really use it
      for the normal CPU_FTR purpose.
      
      Finally we only store the relative offset in the fixup section to keep it
      as small as possible rather than using a full fixup_entry.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      2d1b2027
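      A simplified sketch of the runtime patching described above (assumed names
      and entry layout; icache flushing and error handling omitted):

      #include <stdint.h>

      #define PPC_INST_LWSYNC 0x7c2004acu     /* 'lwsync' encoding */

      /* each fixup entry holds a 32-bit offset from the entry itself to the
       * 'sync' instruction to patch, keeping the fixup section small */
      static void lwsync_fixup_sketch(int cpu_has_lwsync,
                                      int32_t *start, int32_t *end)
      {
              int32_t *entry;

              if (!cpu_has_lwsync)
                      return;                 /* keep the safe 'sync' default */

              for (entry = start; entry < end; entry++) {
                      uint32_t *insn = (uint32_t *)((char *)entry + *entry);
                      *insn = PPC_INST_LWSYNC;
                      /* the real code would also flush the icache here */
              }
      }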
  18. 27 May 2008, 1 commit
    • ftrace: powerpc clean ups · ccbfac29
      Committed by Steven Rostedt
      This patch cleans up the ftrace code in PowerPC based on the comments from
      Michael Ellerman.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Cc: proski@gnu.org
      Cc: a.p.zijlstra@chello.nl
      Cc: Pekka Paalanen <pq@iki.fi>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: linuxppc-dev@ozlabs.org
      Cc: Soeren Sandmann Pedersen <sandmann@redhat.com>
      Cc: paulus@samba.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      ccbfac29
  19. 24 May 2008, 1 commit
  20. 09 May 2008, 2 commits
  21. 30 Apr 2008, 1 commit
  22. 24 Apr 2008, 2 commits
    • [POWERPC] Raise the upper limit of NR_CPUS and move the pacas into the BSS · 90035fe3
      Committed by Tony Breeds
      This adds the required functionality to fill in all pacas at runtime.
      
      With NR_CPUS=1024
      text    data     bss     dec     hex filename
       137 1704032       0 1704169  1a00e9 arch/powerpc/kernel/paca.o :Before
       121 1179744  524288 1704153  1a00d9 arch/powerpc/kernel/paca.o :After
      
      Also remove unneeded #includes from arch/powerpc/kernel/paca.c
      Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      90035fe3
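      The shape of the change, as a hedged sketch (field and helper names are
      assumed, not the actual paca.c code): the statically initialised paca array
      that used to live in .data becomes a zeroed array in .bss, with each entry
      filled in early at boot.

      /* sketch only */
      struct paca_struct paca[NR_CPUS];       /* no initialiser: lives in .bss */

      void __init initialise_pacas(void)
      {
              int cpu;

              /* provide at runtime what the old static initialiser provided */
              for (cpu = 0; cpu < NR_CPUS; cpu++)
                      setup_one_paca(&paca[cpu], cpu);   /* hypothetical helper */
      }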
    • [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) · 37dd2bad
      Committed by Kumar Gala
      Added support to allow an 85xx kernel to be run from a non-zero physical
      address (useful for cooperative asymmetric multiprocessing situations and
      kdump).  The support can be configured at compile time by setting
      CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as
      desired.
      
      Alternatively, the kernel build can set CONFIG_RELOCATABLE.  Setting this
      config option causes the kernel to determine at runtime the physical
      addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START.  If
      CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning.
      However, CONFIG_PHYSICAL_START will always be used to set the LOAD program
      header physical address field in the resulting ELF image.
      
      Currently we are limited to running at a physical address that is a
      multiple of 256M.  This is due to how we map TLBs to cover
      lowmem.  This should be fixed to allow 64M or maybe even 16M alignment
      in the future.  It is considered an error to try to run a kernel at a
      non-aligned physical address.
      
      All the magic for this support is accomplished by proper initialization
      of the kernel memory subsystem and use of ARCH_PFN_OFFSET.
      
      The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings.
      ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET.
      
      /dev/mem continues to allow access to any physical address in the system
      regardless of how CONFIG_PHYSICAL_START is set.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      37dd2bad
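      A hedged sketch of how ARCH_PFN_OFFSET enters the picture for the linear
      mapping (simplified macros, not the exact powerpc definitions):

      #define PAGE_SHIFT        12
      /* where lowmem starts physically: 0 for a conventional kernel,
       * CONFIG_PHYSICAL_START (256M-aligned) when running relocated */
      #define MEMORY_START      CONFIG_PHYSICAL_START
      #define ARCH_PFN_OFFSET   ((unsigned long)(MEMORY_START >> PAGE_SHIFT))

      /* linear-mapping virtual address -> page frame number */
      static inline unsigned long lowmem_virt_to_pfn(unsigned long vaddr)
      {
              return ((vaddr - CONFIG_PAGE_OFFSET) >> PAGE_SHIFT)
                      + ARCH_PFN_OFFSET;
      }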
  23. 18 Apr 2008, 1 commit
  24. 17 Apr 2008, 1 commit
  25. 14 Feb 2008, 1 commit
  26. 08 Nov 2007, 1 commit
    • [POWERPC] Fix cache line vs. block size confusion · 20474abd
      Committed by Benjamin Herrenschmidt
      We had an historical confusion in the kernel between cache line
      and cache block size.  The former is an implementation detail of
      the L1 cache which can be useful for performance optimisations;
      the latter is the actual size on which the cache control
      instructions operate, which can be different.
      
      For some reason, we had a weird hack reading the right property
      on powermac and the wrong one on all other 64-bit machines (32-bit is
      unaffected as it only uses the cputable for cache block size
      info at this stage).
      
      This fixes the booting-without-of.txt documentation to mention
      the right properties, and fixes the 64-bit initialization code
      to look for the block size first, with a fallback to the line
      size if the property is missing.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      20474abd
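      A sketch of the lookup order the commit describes, using the standard
      device-tree property names (illustrative, not the exact setup_64.c code):

      /* prefer the cache *block* size, which is what the cache control
       * instructions (dcbst, dcbf, icbi, ...) actually operate on, and only
       * fall back to the line size if the block-size property is missing */
      const u32 *bsizep;

      bsizep = of_get_property(np, "d-cache-block-size", NULL);
      if (bsizep == NULL)
              bsizep = of_get_property(np, "d-cache-line-size", NULL);
      if (bsizep != NULL)
              dcache_bsize = *bsizep;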
  27. 17 Oct 2007, 2 commits
    • [POWERPC] Quieten cache information at boot · 9697add0
      Committed by Anton Blanchard
      After 6 years the ppc64 kernel still thinks it's important to tell me my
      cache line size is 0x80 bytes.  I think most people who care know that by
      now.  The rest probably can't even understand the hex output.
      
      Since we might have misconfigured firmware or CPUs that have a line size
      that isn't 128 bytes, I still print it out for those cases.  If people
      would prefer to remove it completely, let's do it.
      
      Also for lpar, remove the htab_address printout since it's not used.
      
      Anton
      ppc64 boot log usability expert
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      9697add0
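      In other words (a hedged sketch with assumed variable names), the boot-time
      print becomes conditional on the size being something other than the
      expected 128 bytes:

      /* only mention the line size when it is not the usual 0x80, i.e. when
       * the firmware or CPU tables look suspicious */
      if (ppc64_caches.dline_size != 0x80)
              printk(KERN_INFO "dcache line size = %u bytes\n",
                     ppc64_caches.dline_size);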
    • Convert cpu_sibling_map to be a per cpu variable · d5a7430d
      Committed by Mike Travis
      Convert cpu_sibling_map from a static array sized by NR_CPUS to a per_cpu
      variable.  This saves sizeof(cpumask_t) bytes per unused CPU.  Access is mostly
      from startup and CPU HOTPLUG functions.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d5a7430d
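      A minimal before/after sketch of the conversion (simplified declarations;
      the two forms obviously cannot coexist in one file):

      /* before: one cpumask_t for every possible CPU, used or not */
      cpumask_t cpu_sibling_map[NR_CPUS];

      /* after: a per-cpu variable, so storage exists only for CPUs that are
       * actually instantiated; accesses change from cpu_sibling_map[cpu] to
       * per_cpu(cpu_sibling_map, cpu) */
      DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);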
  28. 11 Oct 2007, 1 commit
  29. 13 Sep 2007, 1 commit
  30. 19 Jul 2007, 1 commit
  31. 25 Jun 2007, 1 commit
  32. 03 May 2007, 1 commit