1. 09 Jul 2008, 14 commits
  2. 03 Jul 2008, 11 commits
  3. 01 Jul 2008, 15 commits
    • powerpc: Update for VSX core file and ptrace · f3e909c2
      Committed by Michael Neuling
      This correctly hooks the VSX dump into Roland McGrath's core file
      infrastructure.  It adds the VSX dump information as an additional ELF
      note in the core file (after talking more to the toolchain/gdb folks).
      This also ensures the formats are consistent between signals, ptrace
      and core files.
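
      As a hedged illustration of how such an extra ELF note is typically wired
      up (a sketch only, using the regset interface from <linux/regset.h>; the
      vsr_active/vsr_get/vsr_set callback names are made up for illustration,
      not necessarily the names used in the patch):

      	#include <linux/regset.h>
      	#include <linux/elf.h>

      	/* Illustrative callbacks, assumed to be defined elsewhere. */
      	static int vsr_active(struct task_struct *t, const struct user_regset *r);
      	static int vsr_get(struct task_struct *t, const struct user_regset *r,
      			   unsigned int pos, unsigned int count,
      			   void *kbuf, void __user *ubuf);
      	static int vsr_set(struct task_struct *t, const struct user_regset *r,
      			   unsigned int pos, unsigned int count,
      			   const void *kbuf, const void __user *ubuf);

      	static const struct user_regset vsx_regset = {
      		.core_note_type	= NT_PPC_VSX,	/* the extra note in the core file */
      		.n		= 32,		/* doubleword 1 of VSR 0-31 */
      		.size		= sizeof(u64),
      		.align		= sizeof(u64),
      		.active		= vsr_active,	/* only dumped if the thread used VSX */
      		.get		= vsr_get,	/* ptrace GETREGSET / core-dump read */
      		.set		= vsr_set,	/* ptrace SETREGSET write */
      	};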
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Fix compile error for CONFIG_VSX · 436db693
      Committed by Michael Neuling
      Fix a compile error seen when CONFIG_VSX is enabled:
      
      arch/powerpc/kernel/signal_64.c: In function 'restore_sigcontext':
      arch/powerpc/kernel/signal_64.c:241: error: 'i' undeclared (first use in this function)
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Keep 3 high personality bytes across exec · a91a03ee
      Committed by Eric B Munson
      Currently, when a 32-bit process is exec'd on a 64-bit powerpc host, the
      value in the top three bytes of the personality is clobbered.  This patch
      adds a check in the SET_PERSONALITY macro that will carry all the
      values in the top three bytes across the exec.

      These three bytes currently carry flags to disable address randomisation,
      limit the address space, force zeroing of an mmapped page, etc.  Should an
      application set any of these bits, they will be maintained and honoured in a
      homogeneous environment but discarded and ignored in a heterogeneous
      environment.  So if an application requires all mmapped pages to be
      initialised to zero and a wrapper is used to set up the personality and exec
      the target, these flags will remain set in an all-32-bit or all-64-bit
      environment, but they will be lost in the exec on a mixed 32/64-bit
      environment.  Losing these bits means that the same application would
      behave differently in different environments.  Tested on a POWER5+ machine
      with a 64-bit kernel and a mixed 64/32-bit user space.
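
      A minimal sketch of the idea (not the exact SET_PERSONALITY change from
      this patch), using PER_MASK and PER_LINUX32 from <linux/personality.h>:
      keep the flag bits that live above the low persona byte and change only
      the persona itself.

      	#include <linux/personality.h>

      	/* Preserve flags such as ADDR_NO_RANDOMIZE (the top three bytes)
      	 * and switch only the low persona byte to PER_LINUX32 on exec. */
      	static inline unsigned long exec_personality_32(unsigned long old)
      	{
      		return (old & ~(unsigned long)PER_MASK) | PER_LINUX32;
      	}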
      Signed-off-by: Eric B Munson <ebmunson@us.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Make sure that include/asm-powerpc/spinlock.h does not trigger compilation warnings · 89b5810f
      Committed by Bart Van Assche
      When compiling kernel modules for ppc that include <linux/spinlock.h>,
      gcc prints a warning message every time it encounters a function
      declaration where the inline keyword appears after the return type.
      This makes sure that the order of the inline keyword and the return
      type is as gcc expects it.  Additionally, the __inline__ keyword is
      replaced by inline, as checkpatch expects.
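
      As an illustration of the warning class (the function names below are made
      up, not taken from spinlock.h): gcc accepts both orderings, but warns when
      the inline specifier follows the return type, so the header is changed to
      the second form.

      	/* Ordering gcc warns about: inline after the return type
      	 * (e.g. "'inline' is not at beginning of declaration"). */
      	static void __inline__ demo_spin_lock_old(void);

      	/* Warning-free ordering, also what checkpatch wants: plain
      	 * 'inline' before the return type. */
      	static inline void demo_spin_lock_new(void);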
      Signed-off-by: Bart Van Assche <bart.vanassche@gmail.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Explicitly copy elements of pt_regs · fcbc5a97
      Committed by Stephen Rothwell
      Gcc 4.3 produced this warning:
      
      arch/powerpc/kernel/signal_64.c: In function 'restore_sigcontext':
      arch/powerpc/kernel/signal_64.c:161: warning: array subscript is above array bounds
      
      This is caused by copying to aliases of elements of the pt_regs
      structure.  Make those copies explicit.
      
      This adds one extra __get_user and unrolls a loop.
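
      A sketch of the resulting pattern (surrounding context and error handling
      simplified; the PT_* indices are from asm/ptrace.h): each pt_regs field
      that aliases the gpr[] array gets its own __get_user instead of being
      reached by indexing past the array.

      	err |= __get_user(regs->nip,       &sc->gp_regs[PT_NIP]);
      	err |= __get_user(regs->msr,       &sc->gp_regs[PT_MSR]);
      	err |= __get_user(regs->orig_gpr3, &sc->gp_regs[PT_ORIG_R3]);
      	err |= __get_user(regs->ctr,       &sc->gp_regs[PT_CTR]);
      	err |= __get_user(regs->link,      &sc->gp_regs[PT_LNK]);
      	err |= __get_user(regs->xer,       &sc->gp_regs[PT_XER]);
      	err |= __get_user(regs->ccr,       &sc->gp_regs[PT_CCR]);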
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Remove experimental status of kdump on 64-bit powerpc · 3420b5da
      Committed by Bernhard Walle
      This removes the experimental status of kdump on PPC64.  kdump has been
      available on PPC64 for more than a year now and has proven to be stable.
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Add 64 bit version of huge_ptep_set_wrprotect · 016b33c4
      Committed by Andy Whitcroft
      The implementation of huge_ptep_set_wrprotect() directly calls
      ptep_set_wrprotect() to mark a hugepte write protected.  However this
      call is not appropriate on ppc64 kernels as this is a small page only
      implementation.  This can lead to the hash not being flushed correctly
      when a mapping is being converted to COW, allowing processes to continue
      using the original copy.
      
      Currently huge_ptep_set_wrprotect() unconditionally calls
      ptep_set_wrprotect().  This is fine on ppc32 kernels as this call is
      generic.  On 64 bit this is implemented as:
      
      	pte_update(mm, addr, ptep, _PAGE_RW, 0);
      
      On ppc64 this last parameter is the page size and is passed directly on
      to hpte_need_flush():
      
      	hpte_need_flush(mm, addr, ptep, old, huge);
      
      And this directly affects the page size we pass to flush_hash_page():
      
      	flush_hash_page(vaddr, rpte, psize, ssize, 0);
      
      As this changes the way the hash is calculated we will flush the wrong
      pages, potentially leaving live hashes to the original page.
      
      Move the definition of huge_ptep_set_wrprotect() to the 32/64 bit specific
      headers.
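
      A sketch of the resulting 64-bit-specific variant (simplified; treat it as
      illustrative rather than the exact patch): the final argument to
      pte_update() flags the mapping as huge, so hpte_need_flush() and
      flush_hash_page() see the correct page size.

      	static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
      						   unsigned long addr, pte_t *ptep)
      	{
      		if ((pte_val(*ptep) & _PAGE_RW) == 0)
      			return;			/* already read-only */

      		/* last argument = huge, so the hash flush uses the huge page size */
      		pte_update(mm, addr, ptep, _PAGE_RW, 1);
      	}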
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Prevent memory corruption due to cache invalidation of unaligned DMA buffer · 03d70617
      Committed by Andrew Lewis
      On PowerPC processors with non-coherent cache architectures the DMA
      subsystem calls invalidate_dcache_range() before performing a DMA read
      operation.  If the address and length of the DMA buffer are not aligned
      to a cache-line boundary this can result in memory outside of the DMA
      buffer being invalidated in the cache.  If this memory has an
      uncommitted store then the data will be lost and a subsequent read of
      that address will result in an old value being returned from main memory.
      
      invalidate_dcache_range() may only be called when the DMA buffer starts on
      a cache-line boundary and is an exact multiple of the cache-line size;
      otherwise flush_dcache_range() must be called.  flush_dcache_range()
      will first flush uncommitted writes, and then invalidate the cache.
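
      A minimal sketch of that rule (the dma_cache_maintain() helper name is
      made up for illustration; the two range helpers are the existing powerpc
      ones named above):

      	#include <asm/cache.h>		/* L1_CACHE_BYTES */
      	#include <asm/cacheflush.h>	/* flush_dcache_range(), invalidate_dcache_range() */

      	static void dma_cache_maintain(unsigned long start, unsigned long size)
      	{
      		unsigned long end = start + size;

      		if (((start | end) & (L1_CACHE_BYTES - 1)) == 0)
      			invalidate_dcache_range(start, end);	/* fully aligned: safe */
      		else
      			flush_dcache_range(start, end);		/* write back, then invalidate */
      	}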
      
      Signed-off-by: Andrew Lewis <andrew-lewis at netspace.net.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc/bootwrapper: Pad .dtb by default · 9d4ae9fc
      Committed by Kumar Gala
      Since most bootloaders or wrappers tend to update or add some information
      to the .dtb they are handed, they need some working space to do that in.

      By default, add 1K of padding via a default setting of DTS_FLAGS.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Add CONFIG_VSX config option · 96d5b52c
      Committed by Michael Neuling
      Add the CONFIG_VSX build option.  It can only be enabled together with
      POWER4, FPU and ALTIVEC.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Add VSX context save/restore, ptrace and signal support · ce48b210
      Committed by Michael Neuling
      This patch extends the floating point save and restore code to use the
      VSX load/stores when VSX is available.  This will make FP context
      save/restore marginally slower on FP-only code when VSX is available,
      as it has to load/store 128 bits rather than just 64 bits.
      
      Mixing FP, VMX and VSX code will get constant architected state.
      
      The signals interface is extended to enable access to VSR 0-31
      doubleword 1 after discussions with tool chain maintainers.  Backward
      compatibility is maintained.
      
      The ptrace interface is also extended to allow access to VSR 0-31 full
      registers.
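
      A rough sketch of the signal-frame layout idea (the struct and field names
      below are assumptions, not the patch's own): doubleword 1 of VSR 0-31 is
      exposed as 32 extra 64-bit slots appended after the existing VMX area,
      which is why older binaries that never look past the VMX state keep
      working.

      	#include <linux/types.h>

      	/* Appended to the signal frame only when the task has used VSX. */
      	struct vsx_sigframe_ext {
      		u64 vsr_dw1[32];	/* doubleword 1 of VSR 0-31 */
      	};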
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Add VSX assembler code macros · 72ffff5b
      Committed by Michael Neuling
      This adds the macros for the VSX load/store instructions, as most
      binutils versions are not going to support these for a while.
      
      Also add VSX register save/restore macros and vsr[0-63] register definitions.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Add VSX CPU feature · b962ce9d
      Committed by Michael Neuling
      Add a VSX CPU feature.  Also add code to detect if VSX is available
      from the device tree.
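
      A heavily hedged sketch of the detection side (the property name below is
      a placeholder, not the one the patch actually keys off): if the flattened
      device tree advertises VSX for the boot CPU, set the new feature bit.

      	/* "ibm,vsx-capable" is illustrative only; the real check may differ. */
      	if (of_get_flat_dt_prop(node, "ibm,vsx-capable", NULL))
      		cur_cpu_spec->cpu_features |= CPU_FTR_VSX;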
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Introduce VSX thread_struct and CONFIG_VSX · c6e6771b
      Committed by Michael Neuling
      The layout of the new VSR registers and how they overlap on top of the
      legacy FPR and VR registers is:
      
                         VSR doubleword 0               VSR doubleword 1
                ----------------------------------------------------------------
        VSR[0]  |             FPR[0]            |                              |
                ----------------------------------------------------------------
        VSR[1]  |             FPR[1]            |                              |
                ----------------------------------------------------------------
                |              ...              |                              |
                |              ...              |                              |
                ----------------------------------------------------------------
        VSR[30] |             FPR[30]           |                              |
                ----------------------------------------------------------------
        VSR[31] |             FPR[31]           |                              |
                ----------------------------------------------------------------
        VSR[32] |                             VR[0]                            |
                ----------------------------------------------------------------
        VSR[33] |                             VR[1]                            |
                ----------------------------------------------------------------
                |                              ...                             |
                |                              ...                             |
                ----------------------------------------------------------------
        VSR[62] |                             VR[30]                           |
                ----------------------------------------------------------------
        VSR[63] |                             VR[31]                           |
                ----------------------------------------------------------------
      
      VSX has 64 128-bit registers.  The first 32 registers overlap with the FP
      registers and hence extend them with an additional 64 bits.  The
      second 32 registers overlap with the VMX registers.
      
      This commit introduces the thread_struct changes required to reflect
      this register layout.  Ptrace and signals code is updated so that the
      floating point registers are correctly accessed from the thread_struct
      when CONFIG_VSX is enabled.
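
      A rough sketch of the thread_struct side of this (simplified): TS_FPRWIDTH
      is the width of one FP register slot in 64-bit doublewords, so with
      CONFIG_VSX each slot holds a full 128-bit VSR, with the classic FP value
      living in doubleword 0.

      	#ifdef CONFIG_VSX
      	#define TS_FPRWIDTH 2		/* each slot is a full 128-bit VSR */
      	#else
      	#define TS_FPRWIDTH 1		/* plain 64-bit FPR */
      	#endif

      	struct thread_struct {
      		/* ... */
      		double fpr[32][TS_FPRWIDTH];	/* FPR 0-31 / VSR 0-31 doubleword 0 */
      		/* ... */
      	};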
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Make load_up_fpu and load_up_altivec callable · 6f3d8e69
      Committed by Michael Neuling
      Make load_up_fpu and load_up_altivec callable so they can be reused by
      the VSX code.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>