1. 15 Jul 2013, 1 commit
  2. 20 Jun 2013, 1 commit
    • powerpc/vfio: Enable on PowerNV platform · 4e13c1ac
      Authored by Alexey Kardashevskiy
      This initializes IOMMU groups based on the IOMMU configuration
      discovered during the PCI scan on POWERNV (POWER non virtualized)
      platform.  The IOMMU groups are to be used later by the VFIO driver,
      which is used for PCI pass through.
      
      It also implements an API for mapping/unmapping pages for
      guest PCI drivers and providing DMA window properties.
      This API is going to be used later by QEMU-VFIO to handle
      h_put_tce hypercalls from the KVM guest.
      
      iommu_put_tce_user_mode() maps only a single page for now; an API
      for adding many mappings at once will be added later.
      
      Although this driver has been tested only on the POWERNV
      platform, it should work on any platform which supports
      TCE tables.  As the h_put_tce hypercall is received by the host
      kernel and processed by QEMU (which involves calling the host
      kernel again), performance is not the best - circa 220MB/s on a
      10Gb Ethernet network.
      
      To enable VFIO on POWER, enable the SPAPR_TCE_IOMMU config
      option and configure VFIO as required.
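
      As a rough illustration, the relevant kernel configuration might look
      like the fragment below.  Only SPAPR_TCE_IOMMU is named by this commit;
      the VFIO option names are the usual ones for a VFIO PCI pass-through
      setup and are listed here as an assumption, not quoted from the patch:

        # IOMMU group support for sPAPR TCE tables (this series)
        CONFIG_SPAPR_TCE_IOMMU=y
        # typical VFIO stack for PCI pass-through (assumed option names)
        CONFIG_VFIO=y
        CONFIG_VFIO_IOMMU_SPAPR_TCE=y
        CONFIG_VFIO_PCI=y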
      
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  3. 18 Apr 2013, 1 commit
  4. 10 Jan 2013, 1 commit
  5. 15 Nov 2012, 1 commit
  6. 04 Oct 2012, 1 commit
  7. 13 Jul 2012, 1 commit
  8. 10 Jul 2012, 1 commit
    • powerpc: IOMMU fault injection · d6b9a81b
      Authored by Anton Blanchard
      Add the ability to inject IOMMU faults. We enable this per device
      via a fail_iommu sysfs property, similar to fault injection on other
      subsystems.
      
      An example:
      
      ...
      0003:01:00.1 Ethernet controller: Emulex Corporation OneConnect 10Gb NIC (be3) (rev 02)
      
      To inject one error to this device:
      
      echo 1 > /sys/bus/pci/devices/0003:01:00.1/fail_iommu
      echo 1 > /sys/kernel/debug/fail_iommu/probability
      echo 1 > /sys/kernel/debug/fail_iommu/times
      
      As feared, the first failure injected on the be3 results in an
      unrecoverable error, taking down both functions of the card
      permanently:
      
      be2net 0003:01:00.1: Unrecoverable error in the card
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  9. 03 Jul 2012, 4 commits
  10. 23 Feb 2012, 1 commit
    • fadump: Register for firmware assisted dump. · 3ccc00a7
      Authored by Mahesh Salgaonkar
      On 2012-02-20 11:02:51 Mon, Paul Mackerras wrote:
      > On Thu, Feb 16, 2012 at 04:44:30PM +0530, Mahesh J Salgaonkar wrote:
      >
      > If I have read the code correctly, we are going to get this printk on
      > non-pSeries machines or on older pSeries machines, even if the user
      > has not put the fadump=on option on the kernel command line.  The
      > printk will be annoying since there is no actual error condition.  It
      > seems to me that the condition for the printk should include
      > fw_dump.fadump_enabled.  In other words you should probably add
      >
      > 	if (!fw_dump.fadump_enabled)
      > 		return 0;
      >
      > at the beginning of the function.
      
      Hi Paul,
      
      Thanks for pointing it out. Please find the updated patch below.
      
      The existing patches above this (4/10 through 10/10) apply cleanly
      on top of this update.
      
      Thanks,
      -Mahesh.
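
      For orientation, a minimal sketch of the guard Paul asks for is shown
      below; "register_fadump" is a stand-in name for the registration
      routine, and the only detail taken from the discussion above is the
      fw_dump.fadump_enabled check itself:

        /* sketch only: bail out early unless the user booted with fadump=on */
        static int register_fadump(void)
        {
                if (!fw_dump.fadump_enabled)
                        return 0;

                /* ... firmware-assisted dump registration continues here ... */
                return 0;
        }
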
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  11. 23 Sep 2011, 1 commit
  12. 09 Dec 2010, 1 commit
  13. 21 May 2010, 1 commit
  14. 19 Mar 2010, 1 commit
    • powerpc: Remove IOMMU_VMERGE config option · 191aee58
      Authored by FUJITA Tomonori
      The description says:
      
       Cause IO segments sent to a device for DMA to be merged virtually
       by the IOMMU when they happen to have been allocated contiguously.
       This doesn't add pressure to the IOMMU allocator. However, some
       drivers don't support getting large merged segments coming back
       from *_map_sg().
      
       Most drivers don't have this problem; it is safe to say Y here.
      
      It's out of date. Long ago, drivers didn't have a way to tell IOMMUs
      about their segment length limit (that is, the maximum segment length
      that they can handle). So IOMMUs merged as many segments as possible
      and gave drivers segments that were too large.
      
      dma_get_max_seg_size() was introduced to solve the above
      problem. Device drivers can use the API to tell the IOMMU about the
      maximum segment length that they can handle. In addition, the default
      limit (64K) should be safe for everyone.
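
      As a sketch of that API (not code from this patch), a driver that can
      only take small segments would cap the limit once at probe time; the
      function name and the 64K value here are illustrative:

        #include <linux/dma-mapping.h>

        /* sketch: advertise the largest segment this device can handle so
         * *_map_sg() never returns a merged segment bigger than that */
        static int mydev_limit_segments(struct device *dev)
        {
                return dma_set_max_seg_size(dev, 65536);
        }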
      
      So this config option seems to be unnecessary.
      
      Note that this config option just enables users to disable the virtual
      merging by default. Users can still disable the virtual merging via the
      boot parameter.
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  15. 16 Dec 2009, 1 commit
    • iommu-helper: use bitmap library · a66022c4
      Authored by Akinobu Mita
      Use bitmap library and kill some unused iommu helper functions.
      
      1. s/iommu_area_free/bitmap_clear/
      
      2. s/iommu_area_reserve/bitmap_set/
      
      3. Use bitmap_find_next_zero_area instead of find_next_zero_area
      
        This is not a simple substitution because find_next_zero_area
        doesn't check the last bit of the limit in the bitmap.
      
      4. Remove iommu_area_free, iommu_area_reserve, and find_next_zero_area
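
      A short sketch of the substitutions above, using a hypothetical
      1024-entry allocation bitmap (the table name and sizes are illustrative):

        #include <linux/bitmap.h>

        #define TBL_BITS 1024
        static unsigned long tbl_map[BITS_TO_LONGS(TBL_BITS)];

        static unsigned long tbl_alloc(unsigned int npages,
                                       unsigned long align_mask)
        {
                unsigned long start;

                /* replaces find_next_zero_area(); also checks the last bit */
                start = bitmap_find_next_zero_area(tbl_map, TBL_BITS, 0,
                                                   npages, align_mask);
                if (start >= TBL_BITS)
                        return TBL_BITS;                /* no free area */

                bitmap_set(tbl_map, start, npages);     /* was iommu_area_reserve() */
                return start;
        }

        static void tbl_free(unsigned long start, unsigned int npages)
        {
                bitmap_clear(tbl_map, start, npages);   /* was iommu_area_free() */
        }
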
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Joerg Roedel <joerg.roedel@amd.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  16. 13 Jan 2009, 1 commit
  17. 31 Oct 2008, 2 commits
    • powerpc: Update remaining dma_mapping_ops to use map/unmap_page · f9226d57
      Authored by Mark Nelson
      After the merge of the 32 and 64bit DMA code, dma_direct_ops lost
      their map/unmap_single() functions but gained map/unmap_page().  This
      caused a problem for Cell because Cell's dma_iommu_fixed_ops called
      the dma_direct_ops if the fixed linear mapping was to be used or the
      iommu ops if the dynamic window was to be used.  So in order to fix
      this problem we need to update the 64bit DMA code to use
      map/unmap_page.
      
      First, we update the generic IOMMU code so that iommu_map_single()
      becomes iommu_map_page() and iommu_unmap_single() becomes
      iommu_unmap_page().  Then we propagate these changes up through all
      the callers of these two functions and in the process update all the
      dma_mapping_ops so that they have map/unmap_page rather than
      map/unmap_single.  We can do this because on 64bit there is no HIGHMEM
      memory so map/unmap_page ends up performing exactly the same function
      as map/unmap_single, just taking different arguments.
      
      This has no effect on drivers because dma_map_single_attrs() just
      ends up calling the map_page() function of the appropriate
      dma_mapping_ops, and similarly dma_unmap_single_attrs() calls
      unmap_page().
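
      As a sketch of why that equivalence holds (not code from this patch;
      the hook name and signature below are assumptions standing in for the
      dma_mapping_ops map_page hook of the time):

        #include <linux/dma-mapping.h>
        #include <linux/mm.h>

        /* stand-in declaration for the per-platform map_page() hook */
        static dma_addr_t my_map_page(struct device *dev, struct page *page,
                                      unsigned long offset, size_t size,
                                      enum dma_data_direction dir);

        /* with no HIGHMEM on 64-bit, a "single" mapping is just the page
         * containing the buffer plus an offset into that page */
        static dma_addr_t sketch_map_single(struct device *dev, void *ptr,
                                            size_t size,
                                            enum dma_data_direction dir)
        {
                return my_map_page(dev, virt_to_page(ptr),
                                   offset_in_page(ptr), size, dir);
        }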
      
      This fixes an oops on Cell blades, which oops on boot without this
      because they call dma_direct_ops.map_single, which is NULL.
      Signed-off-by: Mark Nelson <markn@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Use is_kdump_kernel() · 62a8bd6c
      Authored by Milton Miller
      linux/crash_dump.h defines is_kdump_kernel() to be used by code that
      needs to know if the previous kernel crashed instead of a (clean) boot
      or reboot.
      
      This updates the just added powerpc code to use it.  This is needed
      for the next commit, which will remove __kdump_flag.
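
      The usage pattern is simply the following (the wrapper name is
      illustrative, not from the patch):

        #include <linux/crash_dump.h>

        /* ask the generic helper instead of a powerpc-private flag */
        static bool booted_after_crash(void)
        {
                return is_kdump_kernel();
        }
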
      Signed-off-by: Milton Miller <miltonm@bga.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  18. 22 Oct 2008, 1 commit
    • powerpc: Support for relocatable kdump kernel · 54622f10
      Authored by Mohan Kumar M
      This adds relocatable kernel support for kdump. With this one can
      use the same regular kernel to capture the kdump. A signature (0xfeed1234)
      is passed in r6 from panic code to the next kernel through kexec_sequence
      and purgatory code. The signature is used to differentiate between
      kdump kernel and non-kdump kernels.
      
      The purgatory code compares the signature and sets the __kdump_flag in
      head_64.S.  During the boot up, kernel code checks __kdump_flag and if it
      is set, the kernel will behave as a relocatable kdump kernel. This kernel
      will boot at the address where it was loaded by kexec-tools, i.e. at the
      address reserved through the crashkernel boot parameter.
      
      CONFIG_CRASH_DUMP depends on CONFIG_RELOCATABLE option to build kdump
      kernel as relocatable. So the same kernel can be used as production and
      kdump kernel.
      
      This patch incorporates the changes suggested by Paul Mackerras to avoid
      GOT use and to avoid two copies of the code.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  19. 17 Oct 2008, 2 commits
  20. 25 Jul 2008, 1 commit
    • powerpc/pseries: iommu enablement for CMO · 6490c490
      Authored by Robert Jennings
      To support Cooperative Memory Overcommitment (CMO), we need to check
      for failure from some of the tce hcalls.
      
      These changes for the pseries platform affect the powerpc architecture;
      patches for the other affected platforms are included in this patch.
      
      pSeries platform IOMMU code changes:
       * platform TCE functions must handle H_NOT_ENOUGH_RESOURCES errors and
         return an error.
      
      Architecture IOMMU code changes:
       * Calls to ppc_md.tce_build need to check return values and return
         DMA_MAPPING_ERROR for transient errors.
      
      Architecture changes:
       * struct machdep_calls for tce_build*_pSeriesLP functions need to change
         to indicate failure.
       * all other platforms will need updates to iommu functions to match the new
         calling semantics; they will return 0 on success.  The other platforms
         default configs have been built, but no further testing was performed.
      Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
      Acked-by: Olof Johansson <olof@lixom.net>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  21. 22 Jul 2008, 1 commit
  22. 09 Jul 2008, 2 commits
  23. 01 Apr 2008, 1 commit
  24. 06 Feb 2008, 3 commits
  25. 15 Jan 2008, 1 commit
    • [POWERPC] Workaround for iommu page alignment · d262c32a
      Authored by Benjamin Herrenschmidt
      Commit 5d2efba6 changed our iommu code
      so that it always uses an iommu page size of 4kB.  That means with our
      current code, drivers may do a dma_map_sg() of a 64kB page and obtain
      a dma_addr_t that is only 4k aligned.
      
      This works fine in most cases except, it seems, for some InfiniBand HW,
      where the driver tells the HW about the page size and the HW ignores
      the low bits of the DMA address.
      
      This works around it by making our IOMMU code enforce a PAGE_SIZE alignment
      for mappings of objects that are page aligned in the first place and whose
      size is larger or equal to a page.
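
      A minimal sketch of the rule being enforced (names simplified; the
      IOMMU page size is the 4kB one introduced by commit 5d2efba6):

        #include <asm/iommu.h>          /* IOMMU_PAGE_SHIFT */
        #include <asm/page.h>           /* PAGE_SHIFT, PAGE_SIZE, PAGE_MASK */

        /* page-aligned buffers of at least a page get a PAGE_SIZE-aligned
         * DMA address; everything else keeps plain IOMMU-page alignment */
        static unsigned int iommu_align_order(unsigned long vaddr, size_t size)
        {
                if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && size >= PAGE_SIZE &&
                    (vaddr & ~PAGE_MASK) == 0)
                        return PAGE_SHIFT - IOMMU_PAGE_SHIFT;

                return 0;
        }
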
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  26. 11 Dec 2007, 1 commit
  27. 23 Oct 2007, 1 commit
  28. 16 Oct 2007, 1 commit
  29. 17 Aug 2007, 1 commit
  30. 26 Apr 2007, 1 commit
  31. 13 Apr 2007, 1 commit
    • [POWERPC] DMA 4GB boundary protection · 56997559
      Authored by Jake Moilanen
      There are many adapters which cannot handle DMAing across any 4 GB
      boundary.  For instance, the latest Emulex adapters.
      
      This normally is not an issue as firmware gives dma-windows under
      4gigs.  However, some of the new System-P boxes have dma-windows above
      4gigs, and this presents a problem.
      
      During initialization of the IOMMU tables, the last entry at each 4GB
      boundary is marked as used.  Thus no mappings can cross the boundary.
      If a table ends at a 4GB boundary, the entry is not marked as used.
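
      A sketch of that initialization step (simplified; entry counts assume
      the 4kB IOMMU page size, and the map/size names are illustrative):

        #include <linux/bitops.h>
        #include <asm/iommu.h>          /* IOMMU_PAGE_SHIFT */

        /* reserve the last entry before each 4GB boundary so no mapping can
         * cross it; a table ending exactly on a boundary keeps its last
         * entry usable, as described above */
        static void protect_4gb_boundaries(unsigned long *map,
                                           unsigned long nentries)
        {
                unsigned long per_4g = (1UL << 32) >> IOMMU_PAGE_SHIFT;
                unsigned long i;

                for (i = per_4g - 1; i + 1 < nentries; i += per_4g)
                        set_bit(i, map);
        }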
      
      A boot option to remove this 4GB protection is provided with
      protect4gb=off.
      This exposes the potential issue for driver and hardware development
      purposes.
      Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
      Acked-by: Olof Johansson <olof@lixom.net>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  32. 09 Mar 2007, 1 commit
    • [POWERPC] DMA 4GB boundary protection · 618d3adc
      Authored by Jake Moilanen
      There are many adapters which cannot handle DMAing across any 4 GB
      boundary.  For instance, the latest Emulex adapters.
      
      This normally is not an issue as firmware gives us dma-windows under
      4gigs.  However, some of the new System-P boxes have dma-windows above
      4gigs, and this presents a problem.
      
      I propose fixing it in the IOMMU allocation instead of making each
      driver protect against it as it is more efficient, and won't require
      changing every driver which has not considered this issue.
      
      This patch checks to see if the mapping spans a 4 gig boundary, and if
      it does, retries the allocation.  It tries the next allocation at the
      start of the crossed 4 gig boundary.
      Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>