1. 20 Jun 2013, 1 commit
      powerpc/vfio: Enable on PowerNV platform · 4e13c1ac
      Committed by Alexey Kardashevskiy
      This initializes IOMMU groups based on the IOMMU configuration
      discovered during the PCI scan on POWERNV (POWER non virtualized)
      platform.  The IOMMU groups are to be used later by the VFIO driver,
      which is used for PCI pass through.
      
      It also implements an API for mapping/unmapping pages for
      guest PCI drivers and providing DMA window properties.
      This API is going to be used later by QEMU-VFIO to handle
      h_put_tce hypercalls from the KVM guest.
      
      iommu_put_tce_user_mode() maps only a single page for now; an API
      for adding many mappings at once will be added later.
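      
      As a rough sketch of what "a single page at a time" means for a
      caller (the loop and the exact signature are assumed here for
      illustration, not taken from the patch):
      
          /* Hypothetical caller: map an npages-long range one IOMMU
           * page at a time, one TCE entry per page. */
          for (i = 0; i < npages; ++i, tce += IOMMU_PAGE_SIZE) {
                  ret = iommu_put_tce_user_mode(tbl, entry + i, tce);
                  if (ret)
                          break;
          }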
      
      Although this driver has been tested only on the POWERNV
      platform, it should work on any platform which supports
      TCE tables.  As the h_put_tce hypercall is received by the host
      kernel and processed by QEMU (which involves calling back into
      the host kernel), performance is not the best - circa 220MB/s on
      a 10Gb ethernet network.
      
      To enable VFIO on POWER, enable SPAPR_TCE_IOMMU config
      option and configure VFIO as required.
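      
      For example, a .config fragment along these lines (the VFIO option
      names besides SPAPR_TCE_IOMMU are the usual ones and are assumed
      here rather than quoted from the patch):
      
          CONFIG_SPAPR_TCE_IOMMU=y
          CONFIG_VFIO=m
          CONFIG_VFIO_IOMMU_SPAPR_TCE=m
          CONFIG_VFIO_PCI=m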
      
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  2. 03 Jul 2012, 1 commit
      powerpc/iommu: Implement IOMMU pools to improve multiqueue adapter performance · b4c3a872
      Committed by Anton Blanchard
      At the moment all queues in a multiqueue adapter will serialise
      against the IOMMU table lock. This is proving to be a big issue,
      especially with 10Gbit ethernet.
      
      This patch creates 4 pools and tries to spread the load across
      them.  If the table is under 1GB in size we fall back to the
      original behaviour of 1 pool and 1 largealloc pool.
      
      We create a hash to map CPUs to pools.  Since we prefer interrupts
      to be affinitised to primary CPUs, without some form of hashing we
      are very likely to end up using the same pool.  As an example,
      POWER7 has 4-way SMT, so with 4 pools every primary thread would
      map to the same pool.
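      
      A minimal standalone sketch of the idea (not the kernel code
      itself; the hash constant and pool count are assumptions for
      illustration):
      
          #include <stdio.h>
          
          /* Multiplicative hash in the spirit of the kernel's hash_32(),
           * using a golden-ratio-style constant (assumed here). */
          static unsigned int hash_cpu(unsigned int cpu, unsigned int bits)
          {
                  return (cpu * 0x9e370001u) >> (32 - bits);
          }
          
          int main(void)
          {
                  /* POWER7 example from the text: 4-way SMT, so primary
                   * threads are CPUs 0, 4, 8, 12.  A plain "cpu % 4" puts
                   * them all in pool 0; hashing the CPU number spreads
                   * them over different pools. */
                  for (unsigned int cpu = 0; cpu < 16; cpu += 4)
                          printf("cpu %2u: cpu %% 4 -> pool %u, hashed -> pool %u\n",
                                 cpu, cpu % 4, hash_cpu(cpu, 2));
                  return 0;
          }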
      
      The largealloc pool is reduced from 1/2 to 1/4 of the space to
      partially offset the overhead of breaking the table up into pools.
      
      Some performance numbers were obtained with a Chelsio T3 adapter on
      two POWER7 boxes, running a 100 session TCP round robin test.
      
      Performance improved 69% with this patch applied.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  3. 28 Mar 2012, 1 commit
  4. 24 Sep 2009, 1 commit
  5. 20 Aug 2009, 1 commit
  6. 15 Jun 2009, 1 commit
  7. 31 Oct 2008, 1 commit
      powerpc: Update remaining dma_mapping_ops to use map/unmap_page · f9226d57
      Committed by Mark Nelson
      After the merge of the 32 and 64bit DMA code, dma_direct_ops lost
      their map/unmap_single() functions but gained map/unmap_page().  This
      caused a problem for Cell because Cell's dma_iommu_fixed_ops called
      the dma_direct_ops if the fixed linear mapping was to be used or the
      iommu ops if the dynamic window was to be used.  So in order to fix
      this problem we need to update the 64bit DMA code to use
      map/unmap_page.
      
      First, we update the generic IOMMU code so that iommu_map_single()
      becomes iommu_map_page() and iommu_unmap_single() becomes
      iommu_unmap_page().  Then we propagate these changes up through all
      the callers of these two functions and in the process update all the
      dma_mapping_ops so that they have map/unmap_page rather than
      map/unmap_single.  We can do this because on 64bit there is no HIGHMEM
      memory so map/unmap_page ends up performing exactly the same function
      as map/unmap_single, just taking different arguments.
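      
      Roughly, the interface change has this shape (prototypes are
      reconstructed from the description above; the exact argument lists
      in the patch may differ):
      
          /* before: takes a kernel virtual address */
          dma_addr_t iommu_map_single(struct device *dev, struct iommu_table *tbl,
                                      void *vaddr, size_t size, unsigned long mask,
                                      enum dma_data_direction direction,
                                      struct dma_attrs *attrs);
          
          /* after: takes a struct page plus offset, mirroring dma_map_page() */
          dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl,
                                    struct page *page, unsigned long offset,
                                    size_t size, unsigned long mask,
                                    enum dma_data_direction direction,
                                    struct dma_attrs *attrs);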
      
      This has no effect on drivers because dma_map_single_attrs() just
      ends up calling the map_page() function of the appropriate
      dma_mapping_ops, and similarly dma_unmap_single_attrs() calls
      unmap_page().
      
      This fixes an oops on Cell blades, which otherwise oops on boot
      because they call dma_direct_ops.map_single, which is NULL.
      Signed-off-by: Mark Nelson <markn@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  8. 04 Aug 2008, 1 commit
  9. 09 Jul 2008, 2 commits
  10. 06 Feb 2008, 2 commits
  11. 11 Dec 2007, 1 commit
  12. 20 Oct 2007, 1 commit
  13. 07 May 2007, 1 commit
  14. 07 Feb 2007, 1 commit
  15. 04 Dec 2006, 2 commits
  16. 13 Nov 2006, 1 commit
  17. 01 Nov 2006, 1 commit
      [POWERPC] Use 4kB iommu pages even on 64kB-page systems · 5d2efba6
      Committed by Linas Vepstas
      The 10Gigabit ethernet device drivers appear to be able to chew
      up all 256MB of TCE mappings on pSeries systems, as evidenced by
      numerous error messages:
      
       iommu_alloc failed, tbl c0000000010d5c48 vaddr c0000000d875eff0 npages 1
      
      Some experimentation indicates that this is essentially because
      one 1500 byte ethernet MTU gets mapped as a 64K DMA region when
      the large 64K pages are enabled. Thus, it doesn't take much to
      exhaust all of the available DMA mappings for a high-speed card.
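      
      To put the quoted 256MB of TCE space in perspective (a
      back-of-the-envelope figure, not taken from the patch):
      
          256 MB / 64 KB per mapping =  4096 concurrent DMA mappings
          256 MB /  4 KB per mapping = 65536 concurrent DMA mappings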
      
      This patch changes the iommu allocator to work with its own
      unique, distinct page size.  Although the patch is long, it's
      actually quite simple: it just #defines a distinct IOMMU_PAGE_SIZE
      and then uses this in all the places that matter.
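      
      The core of it is a handful of definitions roughly along these
      lines (a sketch of the shape, not the literal hunk):
      
          #define IOMMU_PAGE_SHIFT        12
          #define IOMMU_PAGE_SIZE         (1UL << IOMMU_PAGE_SHIFT)
          #define IOMMU_PAGE_MASK         (~((1UL << IOMMU_PAGE_SHIFT) - 1))
          #define IOMMU_PAGE_ALIGN(addr)  ALIGN((addr), IOMMU_PAGE_SIZE)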
      
      As a side effect, it also dramatically improves network performance
      on platforms that use H-calls for iommu translation inserts/removes
      (since we no longer make the call 16 times for a 1500 byte packet
      when the iommu HW page size is still 4k).
      
      In the future, we might want to make the IOMMU_PAGE_SIZE a variable
      in the iommu_table instance, thus allowing support for different HW
      page sizes in the iommu itself.
      Signed-off-by: Linas Vepstas <linas@austin.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Olof Johansson <olof@lixom.net>
      Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  18. 15 Jun 2006, 1 commit
  19. 09 Jun 2006, 1 commit
  20. 26 Apr 2006, 1 commit
  21. 21 Apr 2006, 1 commit
  22. 12 Jan 2006, 1 commit
  23. 09 Jan 2006, 2 commits
  24. 23 Nov 2005, 1 commit
  25. 07 Nov 2005, 1 commit
      [PATCH] ppc64: support 64k pages · 3c726f8d
      Committed by Benjamin Herrenschmidt
      Adds a new CONFIG_PPC_64K_PAGES which, when enabled, changes the kernel
      base page size to 64K.  The resulting kernel still boots on any
      hardware.  On current machines that support only 4K pages, the kernel
      will maintain 16 "subpages" for each 64K page transparently.
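      
      In configuration terms this is a single switch (the option name is
      from the text above; the comment is an illustrative note):
      
          CONFIG_PPC_64K_PAGES=y
          # 64K / 4K = 16 hardware "subpages" tracked per Linux page on
          # 4K-only machines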
      
      Note that while real 64K capable HW has been tested, the current patch
      will not enable it yet as such hardware is not released yet, and I'm
      still verifying with the firmware architects the proper way to get the
      information from the newer hypervisors.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  26. 14 Oct 2005, 1 commit
  27. 12 Oct 2005, 1 commit
  28. 06 Oct 2005, 1 commit
      powerpc: Merge in the ppc64 version of the prom code. · 9b6b563c
      Committed by Paul Mackerras
      This brings in the ppc64 version of prom_init.c, prom.c and btext.c
      and makes them work for ppc32.  This also brings in the new calling
      convention, where the first entry to the kernel (with r5 != 0) goes
      to the prom_init code, which then restarts from the beginning (with
      r5 == 0) after it has done its stuff.
      
      For now this also brings in the ppc32 version of setup.c.  It also
      merges lmb.h.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  29. 28 Sep 2005, 1 commit
  30. 21 Sep 2005, 1 commit
  31. 29 Aug 2005, 1 commit
  32. 22 Jun 2005, 2 commits
  33. 17 Apr 2005, 1 commit
      Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!