1. 04 Dec, 2006 (4 commits)
  2. 05 Oct, 2006 (1 commit)
  3. 31 Jul, 2006 (1 commit)
  4. 15 Jun, 2006 (1 commit)
  5. 27 Mar, 2006 (2 commits)
  6. 09 Jan, 2006 (1 commit)
      [PATCH] powerpc/cell: add iommu support for larger memory · 49d65b3a
      Committed by Jens.Osterkamp@de.ibm.com
      So far, the iommu code was hardwired to a linear mapping
      between 0x20000000 and 0x40000000, so it could only support
      512MB of RAM.
      
      This patch still keeps the linear mapping, but looks for
      proper ibm,dma-window properties to set up larger windows,
      which raises the maximum supported RAM size to 2GB.
      
      If there is anything unusual about the dma-window properties,
      we fall back to the old behavior.
      
      We also support switching off the iommu completely now
      with the regular iommu=off command line option.
      Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  7. 01 Nov, 2005 (2 commits)
  8. 10 Oct, 2005 (1 commit)
  9. 09 Oct, 2005 (1 commit)
  10. 30 Sep, 2005 (1 commit)
  11. 28 Sep, 2005 (1 commit)
  12. 23 Jun, 2005 (1 commit)
      [PATCH] ppc64: Add driver for BPA iommu · ae209cf1
      Committed by Arnd Bergmann
      Implementation of software load support for the BE iommu. This is very
      different from other iommu code on ppc64, since we only do a static mapping.
      The mapping is currently hardcoded; it should really be read from the
      firmware, but the device nodes are not set up yet. There is a single
      512MB DMA window for PCI, USB and ethernet at 0x20000000 for our RAM.
      
      The Cell processor can put the I/O page table either in memory like
      the hashed page table (hardware load) or have the operating system
      write the entries into memory mapped CPU registers (software load).
      
      I use the software load mechanism because I know that all I/O page
      table entries for the amount of installed physical memory fit into
      the IO TLB cache. At the point when we get machines with more than
      4GB of installed memory, we can either use hardware I/O page table
      access like the other platforms do or dynamically update the I/O
      TLB entries when a page fault occurs in the I/O subsystem.
      
      The software load can then use the macros that I have implemented
      for the static mapping in order to do the TLB cache updates.
      Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>