1. 14 Oct 2008, 2 commits
  2. 13 Oct 2008, 1 commit
  3. 09 Oct 2008, 1 commit
  4. 03 Oct 2008, 1 commit
  5. 01 Oct 2008, 1 commit
    • powerpc/math-emu: Use kernel generic math-emu code · d2b194ed
      Committed by Kumar Gala
      The math emulation code is centered around a set of generic macros that
      provide the core of the emulation and are shared by the various
      architectures and other projects (like glibc).  Each arch implements its
      own sfp-machine.h to specify various arch-specific details.
      
      For historic reasons that are now lost, the powerpc math-emu code had
      its own version of the common headers.  This moves us to the kernel
      generic version, so we pick up fixes whenever those headers are updated.
      
      Also cleaned up exception/error reporting from the FP emulation functions.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      d2b194ed
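      The generic macros referred to above live in include/math-emu/ and
      mirror glibc's soft-fp. As a minimal sketch (not the kernel's verbatim
      code), an emulated double-precision add written against them looks
      roughly like this:

        #include <math-emu/soft-fp.h>  /* generic core shared with glibc soft-fp */
        #include <math-emu/double.h>   /* double-precision FP_*_D operations */

        /* Emulate "fadd frD,frA,frB" on raw 64-bit register images. */
        static int emulated_fadd(void *frD, void *frA, void *frB)
        {
            FP_DECL_D(A);
            FP_DECL_D(B);
            FP_DECL_D(R);
            FP_DECL_EX;                /* accumulates exception flags */

            FP_UNPACK_DP(A, frA);      /* raw bits -> sign/exponent/mantissa */
            FP_UNPACK_DP(B, frB);

            FP_ADD_D(R, A, B);         /* the arch-independent add itself */

            FP_PACK_DP(frD, R);        /* repack; may raise inexact/overflow */

            return FP_CUR_EXCEPTIONS;  /* caller maps these to FPSCR bits */
        }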
  6. 30 Sep 2008, 1 commit
  7. 25 Sep 2008, 7 commits
  8. 23 Sep 2008, 1 commit
    • signals: demultiplexing SIGTRAP signal · da654b74
      Committed by Srinivasa Ds
      Currently a SIGTRAP can denote any one of the reasons below:
      	- Breakpoint hit
      	- H/W debug register hit
      	- Single step
      	- Signal sent through kill() or raise()
      
      Architectures like powerpc and parisc provide infrastructure to
      demultiplex the SIGTRAP signal by passing down the reason for the
      SIGTRAP through the si_code field of the siginfo_t structure. This is
      an attempt to generalise that infrastructure by extending it to the
      x86 and x86_64 archs.
      Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: akpm@linux-foundation.org
      Cc: paulus@samba.org
      Cc: linuxppc-dev@ozlabs.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      da654b74
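      With si_code filled in, userspace can tell the SIGTRAP variants apart.
      A small self-contained demo (TRAP_BRKPT and TRAP_TRACE are the POSIX
      codes; TRAP_HWBKPT is newer and therefore guarded):

        #include <signal.h>
        #include <stdio.h>

        static void trap_handler(int sig, siginfo_t *si, void *uctx)
        {
            (void)sig; (void)uctx;
            /* printf() is not async-signal-safe; demo use only. */
            switch (si->si_code) {
            case TRAP_BRKPT:  printf("SIGTRAP: breakpoint hit\n");     break;
            case TRAP_TRACE:  printf("SIGTRAP: single step\n");        break;
        #ifdef TRAP_HWBKPT
            case TRAP_HWBKPT: printf("SIGTRAP: h/w debug register\n"); break;
        #endif
            case SI_USER:     printf("SIGTRAP: kill()/raise()\n");     break;
            default:          printf("SIGTRAP: si_code=%d\n", si->si_code);
            }
        }

        int main(void)
        {
            struct sigaction sa = { 0 };
            sa.sa_sigaction = trap_handler;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGTRAP, &sa, NULL);
            raise(SIGTRAP);            /* handler should report SI_USER */
            return 0;
        }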
  9. 18 Sep 2008, 1 commit
  10. 16 Sep 2008, 6 commits
    • powerpc: Separate the irq radix tree insertion and lookup · 967e012e
      Committed by Sebastien Dugue
      irq_radix_revmap() currently serves two purposes: irq mapping lookup
      and insertion, which happen in interrupt and process context
      respectively.
      
      Separate the function into its two components, one for lookup only and
      one for insertion only.
      
      Fix the only user of the revmap tree (XICS) to use the new functions.
      
      Also, move the insertion of those irqs that were requested before the
      tree was initialized into the tree initialization path itself.
      
      Mutual exclusion between the tree initialization and readers/writers is
      handled via a state variable (revmap_trees_allocated) set to 1 when the tree
      has been initialized and set to 2 after the already requested irqs have been
      inserted in the tree by the init path. This state is checked before any reader
      or writer access just like we used to check for tree.gfp_mask != 0 before.
      
      Finally, now that we no longer insert nodes into the radix tree in
      interrupt context, turn the GFP_ATOMIC allocations into GFP_KERNEL ones.
      Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      967e012e
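      The ordering protocol above is subtle, so here is a condensed,
      self-contained C analog (an array stands in for the radix tree, and
      the memory barriers the real code needs around the state transitions
      are omitted):

        /* 0 = tree not initialised, 1 = tree allocated,
         * 2 = irqs requested before init have been inserted */
        static volatile int revmap_trees_allocated;

        #define REVMAP_SIZE 1024
        static unsigned int revmap[REVMAP_SIZE];  /* stand-in for the tree */

        /* Lookup side: may run in interrupt context, must never allocate. */
        unsigned int revmap_lookup(unsigned int hwirq)
        {
            if (revmap_trees_allocated < 1)
                return 0;              /* tree not ready: caller falls back */
            return revmap[hwirq % REVMAP_SIZE];
        }

        /* Insert side: process context only, so allocations can sleep
         * (hence GFP_KERNEL rather than GFP_ATOMIC in the kernel code). */
        void revmap_insert(unsigned int hwirq, unsigned int virq)
        {
            if (revmap_trees_allocated < 2)
                return;                /* the init path will insert it later */
            revmap[hwirq % REVMAP_SIZE] = virq;
        }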
    • powerpc: Use sys_pause for 32-bit pause entry point · d6c93adb
      Committed by Christoph Hellwig
      sys32_pause is a useless copy of the generic sys_pause.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      d6c93adb
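      For reference, the generic implementation being switched to is only a
      few lines; from memory it looked roughly like this in kernel/signal.c
      at the time (treat the exact form as an approximation):

        asmlinkage long sys_pause(void)
        {
            current->state = TASK_INTERRUPTIBLE;
            schedule();                /* sleep until a signal arrives */
            return -ERESTARTNOHAND;    /* -EINTR once a handler has run */
        }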
    • powerpc: Make the 64-bit kernel as a position-independent executable · 549e8152
      Committed by Paul Mackerras
      This implements CONFIG_RELOCATABLE for 64-bit by building the kernel as
      a position-independent executable (PIE) when it is set.  This involves
      processing the dynamic relocations in the image in the early stages of
      booting, even if the kernel is being run at the address it is linked at,
      since the linker does not necessarily fill in words in the image for
      which there are dynamic relocations.  (In fact the linker does fill in
      such words for 64-bit executables, though not for 32-bit executables,
      so in principle we could avoid calling relocate() entirely when we're
      running a 64-bit kernel at the linked address.)
      
      The dynamic relocations are processed by a new function relocate(addr),
      where the addr parameter is the virtual address where the image will be
      run.  In fact we call it twice; once before calling prom_init, and again
      when starting the main kernel.  This means that reloc_offset() returns
      0 in prom_init (since it has been relocated to the address it is running
      at), which necessitated a few adjustments.
      
      This also changes __va and __pa to use an equivalent definition that is
      simpler.  With the relocatable kernel, PAGE_OFFSET and MEMORY_START are
      constants (for 64-bit) whereas PHYSICAL_START is a variable (and
      KERNELBASE ideally should be too, but isn't yet).
      
      With this, relocatable kernels still copy themselves down to physical
      address 0 and run there.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      549e8152
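      A stripped-down sketch of what such a relocate() pass does for the
      common R_PPC64_RELATIVE case (struct layout and relocation number are
      from the ELF64 ABI; the kernel's real loop, which finds .rela.dyn via
      the dynamic section, is more involved):

        #include <stdint.h>

        #define R_PPC64_RELATIVE 22    /* ELF64 PowerPC ABI */

        typedef struct {
            uint64_t r_offset;         /* link-time address to patch */
            uint64_t r_info;           /* relocation type (low 32 bits) */
            int64_t  r_addend;         /* link-time target address */
        } Elf64_Rela;

        /* Patch every R_PPC64_RELATIVE entry of an image linked at
         * link_addr but about to run at run_addr. */
        void relocate(uint64_t run_addr, uint64_t link_addr,
                      Elf64_Rela *rela, uint64_t count)
        {
            uint64_t delta = run_addr - link_addr;
            uint64_t i;

            for (i = 0; i < count; i++) {
                if ((uint32_t)rela[i].r_info != R_PPC64_RELATIVE)
                    continue;
                /* addend holds the link-time address; shift it by delta */
                *(uint64_t *)(rela[i].r_offset + delta) =
                    (uint64_t)rela[i].r_addend + delta;
            }
        }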
    • powerpc: Use LOAD_REG_IMMEDIATE only for constants on 64-bit · e31aa453
      Committed by Paul Mackerras
      Using LOAD_REG_IMMEDIATE to get the address of kernel symbols
      generates 5 instructions where LOAD_REG_ADDR can do it in one,
      and will generate R_PPC64_ADDR16_* relocations in the output when
      we get to making the kernel as a position-independent executable,
      which we'd rather not have to handle.  This changes various bits
      of assembly code to use LOAD_REG_ADDR when we need to get the
      address of a symbol, or to use suitable position-independent code
      for cases where we can't access the TOC for various reasons, or
      if we're not running at the address we were linked at.
      
      It also cleans up a few minor things; there's no reason to save and
      restore SRR0/1 around RTAS calls, __mmu_off can get the return
      address from LR more conveniently than the caller can supply it in
      R4 (and we already assume elsewhere that EA == RA if the MMU is on
      in early boot), and enable_64b_mode was using 5 instructions where
      2 would do.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      e31aa453
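      The five-versus-one difference comes from the shape of the two macros
      in asm/ppc_asm.h. Paraphrased from memory (take the exact operand
      syntax as an assumption):

        /* Build an arbitrary 64-bit constant: 5 instructions, and the
         * pieces become R_PPC64_ADDR16_* relocations when "expr" is a
         * symbol address. */
        #define LOAD_REG_IMMEDIATE(reg, expr)      \
            lis     reg,(expr)@highest;            \
            ori     reg,reg,(expr)@higher;         \
            rldicr  reg,reg,32,31;                 \
            oris    reg,reg,(expr)@h;              \
            ori     reg,reg,(expr)@l;

        /* Load a symbol's address through the TOC: 1 instruction, and
         * the reference goes via the GOT, which relocates cleanly. */
        #define LOAD_REG_ADDR(reg, name)           \
            ld      reg,name@got(r2)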
    • powerpc: Make it possible to move the interrupt handlers away from the kernel · 1f6a93e4
      Committed by Paul Mackerras
      This changes the way that the exception prologs transfer control to
      the handlers in 64-bit kernels with the aim of making it possible to
      have the prologs separate from the main body of the kernel.  Now,
      instead of computing the address of the handler by taking the top
      32 bits of the paca address (to get the 0xc0000000........ part) and
      ORing in something in the bottom 16 bits, we get the base address of
      the kernel by doing a load from the paca and add an offset.
      
      This also replaces an mfmsr and an ori to compute the MSR value for
      the handler with a load from the paca.  That makes it unnecessary to
      have a separate version of EXCEPTION_PROLOG_PSERIES that forces 64-bit
      mode.
      
      We can no longer use direct branches in the exception prolog code,
      which means that the SLB miss handlers can't branch directly to
      .slb_miss_realmode any more.  Instead we have to compute the address
      and do an indirect branch.  This is conditional on CONFIG_RELOCATABLE;
      for non-relocatable kernels we use a direct branch as before.  (A later
      change will allow CONFIG_RELOCATABLE to be set on 64-bit powerpc.)
      
      Since the secondary CPUs on pSeries start execution in the first 0x100
      bytes of real memory and then have to get to wherever the kernel is,
      we can't use a direct branch to get there.  Instead this changes
      __secondary_hold_spinloop from a flag to a function pointer.  When it
      is set to a non-NULL value, the secondary CPUs jump to the function
      pointed to by that value.
      
      Finally this eliminates one code difference between 32-bit and 64-bit
      by making __secondary_hold be the text address of the secondary CPU
      spinloop rather than a function descriptor for it.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      1f6a93e4
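      In schematic macro form (names and offset arithmetic below are
      illustrative, not the kernel's exact code), the dispatch change looks
      like this:

        /* Hypothetical sketch: PACAKBASE stands for the paca field that
         * holds the kernel base address; DISPATCH_TO is a made-up name. */
        #ifdef CONFIG_RELOCATABLE
        #define DISPATCH_TO(reg, label)                                    \
            ld      reg,PACAKBASE(r13);     /* kernel base from the paca */\
            addi    reg,reg,(label)-_stext; /* handler offset (schematic) */\
            mtctr   reg;                                                   \
            bctr                            /* indirect branch */
        #else
        #define DISPATCH_TO(reg, label)     b label  /* direct, as before */
        #endif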
    • powerpc: Add new CPU feature: CPU_FTR_CP_USE_DCBTZ · 2a929436
      Committed by Mark Nelson
      Add a new CPU feature bit, CPU_FTR_CP_USE_DCBTZ, to be set for the
      64-bit powerpc chips that benefit from having the dcbt and dcbz
      instructions used in their memory copy routines.
      
      This will be used in a subsequent patch that updates copy_4K_page().
      The new bit is added to Cell, PPC970 and Power4 because they show
      better performance with the new copy_4K_page() when dcbt and dcbz
      instructions are used.
      Signed-off-by: Mark Nelson <markn@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      2a929436
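      For a feel of how the bit gets consumed, a hypothetical C-level check
      (the real copy_4K_page() selection is done in assembly via feature
      sections; the two copy variants below are made-up names):

        #include <asm/cputable.h>      /* cpu_has_feature(), CPU_FTR_* */

        static void copy_page_4k(void *to, const void *from)
        {
            if (cpu_has_feature(CPU_FTR_CP_USE_DCBTZ))
                copy_4K_page_dcbtz(to, from);    /* uses dcbt/dcbz hints */
            else
                copy_4K_page_generic(to, from);  /* plain loads/stores */
        }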
  11. 10 Sep 2008, 1 commit
  12. 07 Sep 2008, 2 commits
  13. 03 Sep 2008, 1 commit
  14. 27 Aug 2008, 1 commit
  15. 20 Aug 2008, 6 commits
  16. 19 Aug 2008, 1 commit
  17. 18 Aug 2008, 4 commits
  18. 15 Aug 2008, 1 commit
  19. 11 Aug 2008, 1 commit
    • powerpc/mm: Fix attribute confusion with htab_bolt_mapping() · bc033b63
      Committed by Benjamin Herrenschmidt
      The function htab_bolt_mapping() is used to create permanent
      mappings in the MMU hash table, for example, in order to create
      the linear mapping of vmemmap.  It's also used by early boot
      ioremap (before mem_init_done).
      
      However, the way ioremap uses it is incorrect as it passes it the
      protection flags in the "linux PTE" form while htab_bolt_mapping()
      expects them in the hash table format.  This is made more confusing by
      the fact that some of those flags are actually in the same position in
      both cases.
      
      This fixes it all by making htab_bolt_mapping() take normal linux
      protection flags instead, and use a little helper to convert them to
      htab flags. Callers can now use the usual PAGE_* definitions safely.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/include/asm/mmu-hash64.h |    2 -
       arch/powerpc/mm/hash_utils_64.c       |   65 ++++++++++++++++++++--------------
       arch/powerpc/mm/init_64.c             |    9 +---
       3 files changed, 44 insertions(+), 32 deletions(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      bc033b63
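      The "little helper" works along these lines (a simplified sketch; the
      exact flag names and PP-bit encoding should be checked against
      mmu-hash64.h rather than taken from here):

        /* Convert Linux PTE protection bits into hash-PTE (HPTE) flags.
         * Only the no-execute and read-only cases are shown. */
        static unsigned long htab_convert_pte_flags(unsigned long pteflags)
        {
            unsigned long rflags = 0;

            /* linux _PAGE_EXEC clear -> hash no-execute bit set */
            if ((pteflags & _PAGE_EXEC) == 0)
                rflags |= HPTE_R_N;

            /* PP bits: a user page that is not writable-and-dirty gets
             * the read-only protection encoding. */
            if ((pteflags & _PAGE_USER) &&
                !((pteflags & _PAGE_RW) && (pteflags & _PAGE_DIRTY)))
                rflags |= 1;

            /* Bolted mappings are always dirty: set the C (changed) bit. */
            return rflags | HPTE_R_C;
        }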