1. 23 Feb 2009, 3 commits
  2. 13 Jan 2009, 2 commits
  3. 12 Jan 2009, 1 commit
  4. 09 Jan 2009, 1 commit
  5. 08 Jan 2009, 3 commits
  6. 06 Jan 2009, 1 commit
  7. 26 Dec 2008, 1 commit
  8. 23 Dec 2008, 3 commits
  9. 21 Dec 2008, 1 commit
    • powerpc/mm: Rework usage of _PAGE_COHERENT/NO_CACHE/GUARDED · 64b3d0e8
      Committed by Benjamin Herrenschmidt
      Currently, we never set _PAGE_COHERENT in the PTEs; we just OR it
      in, in the hash code, based on a CPU feature bit.  We also manipulate
      _PAGE_NO_CACHE and _PAGE_GUARDED by hand in all sorts of places.
      
      This changes the logic so that instead, the PTE now contains
      _PAGE_COHERENT for all normal RAM pages that have I = 0 on platforms
      that need it.  The hash code clears it if the feature bit is not set.
      
      It also adds some clean accessors to set up various valid combinations
      of access flags, and changes various bits of code to use them instead.
      
      This should help ensure that the PTE actually contains the bit
      combinations that we really want.
      
      I also removed _PAGE_GUARDED from _PAGE_BASE on 44x and instead
      set it explicitly in the TLB miss handler.  I will ultimately remove
      it completely, as it appears that it might not be needed after all,
      but in the meantime, having it in the TLB miss handler makes things
      a lot easier.  (A standalone sketch of the accessor idea follows
      this entry.)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      64b3d0e8
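      A minimal, standalone C sketch of the accessor idea above (user-space
      code, not the kernel's actual definitions): named helpers return valid
      flag combinations instead of OR-ing bits by hand at each call site.
      The flag values and helper names are invented for illustration.

        /* Sketch: build page-attribute flag combinations through small,
         * named accessors.  Values and names are illustrative only. */
        #include <stdio.h>
        #include <stdint.h>

        #define PAGE_COHERENT  0x1u   /* M bit: memory coherence required */
        #define PAGE_NO_CACHE  0x2u   /* I bit: cache inhibited */
        #define PAGE_GUARDED   0x4u   /* G bit: no speculative access */

        /* Normal RAM mapping: coherent (M) on platforms that need it, I = 0. */
        static uint32_t pte_flags_cached(int needs_coherency)
        {
            return needs_coherency ? PAGE_COHERENT : 0;
        }

        /* MMIO-style mapping: cache-inhibited and guarded, never coherent. */
        static uint32_t pte_flags_noncached(void)
        {
            return PAGE_NO_CACHE | PAGE_GUARDED;
        }

        int main(void)
        {
            printf("cached (coherent platform): 0x%x\n", pte_flags_cached(1));
            printf("non-cached MMIO:            0x%x\n", pte_flags_noncached());
            return 0;
        }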
  10. 16 Dec 2008, 1 commit
    • powerpc/cell/axon-msi: Fix MSI after kexec · 23e0e8af
      Committed by Arnd Bergmann
      Commit d015fe99 'powerpc/cell/axon-msi: Retry on missing interrupt'
      has turned a rare failure to kexec on QS22 into a reproducible
      error, which we have now analysed.
      
      The problem is that after a kexec, the MSIC hardware still points
      into the middle of the old ring buffer.  We set up the ring buffer
      during reboot, but not the offset into it.  On older kernels, this
      would cause a storm of thousands of spurious interrupts after a
      kexec, which would usually be dropped silently.
      
      With the new code, we time out on each interrupt, waiting for
      it to become valid.  If more interrupts keep coming in that we
      then time out on, this goes on indefinitely and eventually leads
      to a hard crash.
      
      The solution in this commit is to read the current offset from
      the MSIC when reinitializing it.  This now works correctly, as
      expected.  (A standalone sketch of that reinitialization follows
      this entry.)
      Reported-by: Dirk Herrendoerfer <d.herrendoerfer@de.ibm.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      23e0e8af
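      A minimal, standalone C sketch of the fix described above, with a
      simulated device register; the structure and names are invented for
      illustration and are not the actual axon-msi driver code.

        /* Sketch: at (re)initialization, read the hardware's current ring
         * offset instead of assuming it is zero, so a kexec'd kernel does
         * not consume stale entries from the old ring buffer. */
        #include <stdio.h>
        #include <stdint.h>

        #define RING_ENTRIES 64

        struct msic_sim {
            uint32_t write_offset_reg;  /* stands in for the MSIC MMIO register */
            uint32_t read_offset;       /* software position in the ring */
        };

        static void msic_setup(struct msic_sim *msic)
        {
            /* The fix: start where the hardware currently points. */
            msic->read_offset = msic->write_offset_reg % RING_ENTRIES;
        }

        int main(void)
        {
            /* Pretend the previous kernel left the hardware mid-buffer. */
            struct msic_sim msic = { .write_offset_reg = 37, .read_offset = 0 };
            msic_setup(&msic);
            printf("resuming ring consumption at entry %u\n", msic.read_offset);
            return 0;
        }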
  11. 01 Dec 2008, 2 commits
    • powerpc/cell: Fix GDB watchpoints, again · 960cedb4
      Committed by Arnd Bergmann
      An earlier patch from Jens Osterkamp attempted to fix GDB
      watchpoints by enabling the DABRX register at boot time.
      Unfortunately, this did not work on SMP setups, where
      secondary CPUs were still using the power-on DABRX value.
      
      This introduces the same change for the secondary CPUs on cell.
      (A standalone sketch of the idea follows this entry.)
      Reported-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
      Tested-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      960cedb4
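      A minimal, standalone C sketch of the shape of this fix; the register
      value and function names are invented for illustration.  It only models
      the point that the same setup must run on every CPU, not just the boot
      CPU.

        /* Sketch: per-CPU initialization applied to all CPUs, so secondary
         * CPUs do not keep the power-on value of a register such as DABRX. */
        #include <stdio.h>

        #define NR_CPUS 4

        static unsigned long dabrx[NR_CPUS];   /* one register value per CPU */

        static void cpu_enable_watchpoints(int cpu)
        {
            dabrx[cpu] = 0x7;   /* illustrative "enabled" value, not the real bits */
        }

        int main(void)
        {
            /* Before the fix only CPU 0 got this call; now every CPU does. */
            for (int cpu = 0; cpu < NR_CPUS; cpu++)
                cpu_enable_watchpoints(cpu);
            printf("DABRX configured on %d CPUs\n", NR_CPUS);
            return 0;
        }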
    • powerpc/cell/axon-msi: Retry on missing interrupt · d015fe99
      Committed by Arnd Bergmann
      The MSI capture logic for the axon bridge can sometimes
      lose interrupts in case of high DMA and interrupt load,
      when it signals an MSI interrupt to the MPIC interrupt
      controller while we are already handling another MSI.
      
      Each MSI vector gets written into a FIFO buffer in main
      memory using DMA, and that DMA access is normally flushed
      by the actual interrupt packet on the IOIF.  An MMIO
      register in the MSIC holds the position of the last
      entry in the FIFO buffer that was written.  However,
      reading that position does not flush the DMA, so that
      we can observe stale data in the buffer.
      
      In a stress test, we have observed the DMA data arriving up to
      14 microseconds after we read the register.
      
      This patch works around this problem by retrying the
      access to the FIFO buffer.
      
      We can reliably detect this condition by writing an invalid MSI
      vector into the FIFO buffer after reading from it, assuming that
      all MSIs we get are valid.  After detecting an invalid MSI vector,
      we retry with udelay(1) in the interrupt cascade up to 100 times
      before giving up.  (A standalone sketch of this retry scheme
      follows this entry.)
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      d015fe99
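      A minimal, standalone C sketch of the retry scheme described above; the
      sentinel value, retry count, and function names are illustrative, and a
      spin loop stands in for udelay(1).

        /* Sketch: each consumed FIFO slot is overwritten with an invalid
         * sentinel; a read that still sees the sentinel is retried after a
         * short delay, up to a bounded number of attempts. */
        #include <stdio.h>
        #include <stdint.h>

        #define MSI_INVALID  0xffffffffu   /* sentinel written back after each read */
        #define MAX_RETRIES  100

        static volatile uint32_t fifo_slot = 42;   /* pretend DMA delivered vector 42 */

        static int read_msi_vector(uint32_t *vec)
        {
            for (int tries = 0; tries < MAX_RETRIES; tries++) {
                uint32_t v = fifo_slot;
                if (v != MSI_INVALID) {
                    fifo_slot = MSI_INVALID;   /* mark the slot consumed */
                    *vec = v;
                    return 0;
                }
                for (volatile int spin = 0; spin < 1000; spin++)
                    ;                          /* stand-in for udelay(1) */
            }
            return -1;                         /* no valid vector arrived: give up */
        }

        int main(void)
        {
            uint32_t vec;
            if (read_msi_vector(&vec) == 0)
                printf("got MSI vector %u\n", vec);
            else
                printf("timed out waiting for a valid MSI vector\n");
            return 0;
        }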
  12. 21 Nov 2008, 1 commit
    • powerpc/spufs: Fix spinning in spufs_ps_fault on signal · 60657263
      Committed by Jeremy Kerr
      Currently, we can end up in an infinite loop if we get a signal
      while the kernel has faulted in spufs_ps_fault.  For example:
      
       alarm(1);
      
       write(fd, some_spu_psmap_register_address, 4);
      
      Here, the write's copy_from_user will fault on the ps mapping, and
      signal_pending will be non-zero.  Returning from the fault handler
      never clears TIF_SIGPENDING, so we just keep faulting, resulting
      in an unkillable process using 100% of CPU.
      
      This change returns VM_FAULT_SIGBUS if there's a fatal signal pending,
      letting us escape the loop.  (A standalone sketch of this escape path
      follows this entry.)
      Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
      60657263
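      A minimal, standalone C sketch of the escape path described above; the
      enum and helper names are invented for illustration and only model the
      "report a bus error when a fatal signal is pending" idea.

        /* Sketch: a fault path that would otherwise ask the caller to retry
         * forever checks for a pending fatal signal and bails out instead. */
        #include <stdio.h>
        #include <stdbool.h>

        enum fault_result { FAULT_RETRY, FAULT_SIGBUS };

        static bool fatal_signal_is_pending = false;  /* stands in for the kernel check */

        static enum fault_result ps_fault(void)
        {
            /* The mapping never becomes available in this model, so without
             * the check below the caller would fault again indefinitely. */
            if (fatal_signal_is_pending)
                return FAULT_SIGBUS;   /* the escape hatch this commit adds */
            return FAULT_RETRY;
        }

        int main(void)
        {
            int loops = 0;
            while (ps_fault() == FAULT_RETRY) {
                if (++loops == 3)
                    fatal_signal_is_pending = true;   /* e.g. the alarm(1) fires */
            }
            printf("fault loop exited after %d retries\n", loops);
            return 0;
        }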
  13. 19 Nov 2008, 1 commit
  14. 14 Nov 2008, 2 commits
  15. 05 Nov 2008, 1 commit
  16. 31 Oct 2008, 2 commits
    • powerpc: Update remaining dma_mapping_ops to use map/unmap_page · f9226d57
      Committed by Mark Nelson
      After the merge of the 32-bit and 64-bit DMA code, dma_direct_ops lost
      their map/unmap_single() functions but gained map/unmap_page().  This
      caused a problem for Cell, because Cell's dma_iommu_fixed_ops called
      the dma_direct_ops if the fixed linear mapping was to be used, or the
      iommu ops if the dynamic window was to be used.  So in order to fix
      this problem we need to update the 64-bit DMA code to use
      map/unmap_page.
      
      First, we update the generic IOMMU code so that iommu_map_single()
      becomes iommu_map_page() and iommu_unmap_single() becomes
      iommu_unmap_page().  Then we propagate these changes up through all
      the callers of these two functions and in the process update all the
      dma_mapping_ops so that they have map/unmap_page rather than
      map/unmap_single.  We can do this because on 64-bit there is no
      HIGHMEM memory, so map/unmap_page ends up performing exactly the same
      function as map/unmap_single, just taking different arguments.
      
      This has no effect on drivers, because dma_map_single_attrs() just
      ends up calling the map_page() function of the appropriate
      dma_mapping_ops, and similarly dma_unmap_single_attrs() calls
      unmap_page().
      
      This fixes an oops on Cell blades, which would otherwise oops on boot
      because they call dma_direct_ops.map_single, which is NULL.  (A
      standalone sketch of the map_page unification follows this entry.)
      Signed-off-by: Mark Nelson <markn@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      f9226d57
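      A minimal, standalone C sketch of why unifying on map/unmap_page works;
      the types and function names are simplified stand-ins, not the kernel's
      dma_mapping_ops.

        /* Sketch: a single-buffer mapping is just "this page + offset", so a
         * map_single-style helper can be a thin wrapper around map_page and
         * every ops structure only needs the _page hook. */
        #include <stdio.h>
        #include <stdint.h>
        #include <stddef.h>

        typedef uint64_t dma_addr_t;

        struct dma_ops {
            dma_addr_t (*map_page)(void *page, size_t offset, size_t len);
        };

        static dma_addr_t direct_map_page(void *page, size_t offset, size_t len)
        {
            (void)len;
            return (uintptr_t)page + offset;   /* 1:1 "fixed linear" style mapping */
        }

        static const struct dma_ops direct_ops = { .map_page = direct_map_page };

        /* No HIGHMEM on 64-bit: mapping a buffer is the same as mapping its page. */
        static dma_addr_t map_single(const struct dma_ops *ops, void *buf, size_t len)
        {
            return ops->map_page(buf, 0, len);
        }

        int main(void)
        {
            char buffer[64];
            printf("dma addr: 0x%llx\n",
                   (unsigned long long)map_single(&direct_ops, buffer, sizeof buffer));
            return 0;
        }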
    • powerpc: Use is_kdump_kernel() · 62a8bd6c
      Committed by Milton Miller
      linux/crash_dump.h defines is_kdump_kernel() to be used by code that
      needs to know if the previous kernel crashed instead of a (clean) boot
      or reboot.
      
      This updates the just-added powerpc code to use it.  This is needed
      for the next commit, which will remove __kdump_flag.  (A standalone
      sketch of the idea follows this entry.)
      Signed-off-by: Milton Miller <miltonm@bga.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      62a8bd6c
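      A minimal, standalone C sketch of the idea; the helper body here is
      invented for illustration and is not the actual crash_dump.h
      implementation.  It only models "one shared predicate instead of a
      per-architecture flag".

        /* Sketch: callers ask a single helper whether the previous kernel
         * crashed, instead of each architecture carrying its own flag. */
        #include <stdio.h>
        #include <stdbool.h>

        static unsigned long elfcorehdr_addr;   /* nonzero when capturing a dump (illustrative) */

        static bool is_kdump_kernel(void)
        {
            return elfcorehdr_addr != 0;
        }

        int main(void)
        {
            printf("kdump kernel: %s\n", is_kdump_kernel() ? "yes" : "no");
            return 0;
        }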
  17. 23 Oct 2008, 1 commit
  18. 22 Oct 2008, 1 commit
    • powerpc: Support for relocatable kdump kernel · 54622f10
      Committed by Mohan Kumar M
      This adds relocatable kernel support for kdump.  With this, one can
      use the same regular kernel to capture a kdump.  A signature
      (0xfeed1234) is passed in r6 from the panic code to the next kernel
      through the kexec_sequence and purgatory code.  The signature is used
      to differentiate between kdump and non-kdump kernels.
      
      The purgatory code compares the signature and sets __kdump_flag in
      head_64.S.  During boot-up, kernel code checks __kdump_flag and, if it
      is set, the kernel behaves as a relocatable kdump kernel.  This kernel
      will boot at the address where it was loaded by kexec-tools, i.e. at
      the address reserved through the crashkernel boot parameter.
      
      CONFIG_CRASH_DUMP depends on the CONFIG_RELOCATABLE option so that the
      kdump kernel is built as relocatable, and the same kernel can therefore
      be used as both the production and the kdump kernel.
      
      This patch incorporates the changes suggested by Paul Mackerras to
      avoid GOT use and to avoid two copies of the code.  (A standalone
      sketch of the signature check follows this entry.)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      54622f10
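      A minimal, standalone C sketch of the boot-time decision described
      above; only the 0xfeed1234 constant comes from the commit message, and
      the function and variable names are invented for illustration (the real
      check is done in assembly in head_64.S).

        /* Sketch: purgatory hands the next kernel a magic value in r6; early
         * boot code sets a "kdump kernel" flag only if the value matches. */
        #include <stdio.h>
        #include <stdint.h>

        #define KDUMP_SIGNATURE 0xfeed1234u

        static unsigned int kdump_flag;   /* stands in for __kdump_flag */

        static void early_setup(uint64_t r6_value)
        {
            /* Only the kexec-on-panic path passes the signature in r6. */
            if ((uint32_t)r6_value == KDUMP_SIGNATURE)
                kdump_flag = 1;
        }

        int main(void)
        {
            early_setup(KDUMP_SIGNATURE);   /* as if loaded by kexec-tools for kdump */
            printf("relocatable kdump boot: %s\n", kdump_flag ? "yes" : "no");
            return 0;
        }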
  19. 21 Oct 2008, 9 commits
  20. 14 Oct 2008, 2 commits
  21. 10 Oct 2008, 1 commit