1. 08 Nov 2016, 1 commit
    • swiotlb-xen: Enforce return of DMA_ERROR_CODE in mapping function · 76418421
      Committed by Alexander Duyck
      The mapping function should always return DMA_ERROR_CODE when a mapping has
      failed as this is what the DMA API expects when a DMA error has occurred.
      The current function for mapping a page in Xen was returning either
      DMA_ERROR_CODE or 0 depending on where it failed.
      
      On x86 DMA_ERROR_CODE is 0, but on other architectures such as ARM it is
      ~0. We need to make sure we return the same error value if either the
      mapping failed or the device is not capable of accessing the mapping.
      
      If we are returning DMA_ERROR_CODE as our error value we can drop the
      function for checking the error code as the default is to compare the
      return value against DMA_ERROR_CODE if no function is defined.
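
      To make the convention concrete, here is a minimal hedged sketch (not the
      actual xen_swiotlb_map_page(); example_map_page() and the two
      example_bounce_* helpers are hypothetical stand-ins):

          #include <linux/dma-mapping.h>

          /* Hypothetical bounce-buffer helpers, declared only for this sketch. */
          dma_addr_t example_bounce_map(struct device *dev, struct page *page,
                                        unsigned long offset, size_t size,
                                        enum dma_data_direction dir);
          void example_bounce_unmap(struct device *dev, dma_addr_t dev_addr,
                                    size_t size, enum dma_data_direction dir);

          /*
           * Every failure path returns DMA_ERROR_CODE (0 on x86, ~0 on ARM),
           * never a bare 0, so the generic dma_mapping_error() comparison
           * works without a custom mapping_error hook.
           */
          static dma_addr_t example_map_page(struct device *dev, struct page *page,
                                             unsigned long offset, size_t size,
                                             enum dma_data_direction dir,
                                             unsigned long attrs)
          {
                  dma_addr_t dev_addr = example_bounce_map(dev, page, offset, size, dir);

                  if (dev_addr == DMA_ERROR_CODE)
                          return DMA_ERROR_CODE;  /* the mapping itself failed */

                  if (!dma_capable(dev, dev_addr, size)) {
                          example_bounce_unmap(dev, dev_addr, size, dir);
                          return DMA_ERROR_CODE;  /* device can't reach it: same value */
                  }

                  return dev_addr;
          }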
      
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>
  2. 04 Aug 2016, 1 commit
    • dma-mapping: use unsigned long for dma_attrs · 00085f1e
      Committed by Krzysztof Kozlowski
      The dma-mapping core and the implementations do not change the DMA
      attributes passed by pointer.  Thus the pointer can point to const data.
      However the attributes do not have to be a bitfield.  Instead unsigned
      long will do fine:
      
      1. This is just simpler, both in terms of reading the code and setting
         attributes.  Instead of initializing local attributes on the stack
         and passing a pointer to them to dma_set_attr(), just set the bits
         (see the sketch below).
      
      2. It brings safety and const-correctness checking, because the
         attributes are passed by value.
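
      As a rough illustration, here is a hypothetical caller before and after the
      change (the two variants target the kernel before and after this patch, so
      they would not build against the same tree):

          #include <linux/dma-mapping.h>
          #include <linux/gfp.h>

          /* Before: attributes lived in a struct on the stack, passed by pointer. */
          static void *alloc_wc_old(struct device *dev, size_t size, dma_addr_t *handle)
          {
                  struct dma_attrs attrs;

                  init_dma_attrs(&attrs);
                  dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
                  return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
          }

          /* After: the attributes are just bits in an unsigned long. */
          static void *alloc_wc_new(struct device *dev, size_t size, dma_addr_t *handle)
          {
                  return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
                                         DMA_ATTR_WRITE_COMBINE);
          }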
      
      Semantic patches for this change (at least most of them):
      
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
      
          @@
          f(...,
          - struct dma_attrs *attrs
          + unsigned long attrs
          , ...)
          {
          ...
          }
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      and
      
          // Options: --all-includes
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
          type t;
      
          @@
          t f(..., struct dma_attrs *attrs);
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
      Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
      Acked-by: Mark Salter <msalter@redhat.com> [c6x]
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
      Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
      Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
      Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
      Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
      Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
      Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
      Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
      Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 07 Nov 2015, 1 commit
    • mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd · d0164adc
      Committed by Mel Gorman
      
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
      have access to one of two watermarks lower than "min", which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
      were available.  Some have abused __GFP_WAIT leading to a situation where
      an optimistic allocation with a fallback option can access atomic
      reserves.
      
      This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
      are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
      callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
      redefined as a caller that is willing to enter direct reclaim and wake
      kswapd for background reclaim.
      
      This patch then converts a number of sites:
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
      o Callers that are checking if they are non-blocking should use the
        helper gfpflags_allow_blocking() where possible. This is because
        checking for __GFP_WAIT as was done historically now can trigger false
        positives. Some exceptions like dm-crypt.c exist where the code intent
        is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
        flag manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
      
      The first key hazard to watch out for is callers that removed __GFP_WAIT
      and were depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
      now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
      if it's missed in most cases as other activity will wake kswapd.
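
      To illustrate the resulting conventions with a hypothetical allocation site
      (the size and the call site below are made up):

          #include <linux/gfp.h>
          #include <linux/slab.h>

          static void *example_alloc(gfp_t caller_flags)
          {
                  /*
                   * Ask "may I block?" through the helper rather than testing
                   * __GFP_WAIT, which can now give false positives.
                   */
                  if (gfpflags_allow_blocking(caller_flags))
                          return kmalloc(64, GFP_KERNEL);  /* may enter direct reclaim */

                  /*
                   * Hand-built mask: cannot sleep and does not touch the atomic
                   * reserves, but still wakes kswapd for background reclaim; with
                   * hand-built flags __GFP_KSWAPD_RECLAIM must be stated explicitly.
                   * Truly atomic, high-priority callers would use GFP_ATOMIC instead.
                   */
                  return kmalloc(64, __GFP_KSWAPD_RECLAIM);
          }
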
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 23 Oct 2015, 2 commits
  5. 09 Sep 2015, 1 commit
    • xen: Make clear that swiotlb and biomerge are dealing with DMA address · 32e09870
      Committed by Julien Grall
      The swiotlb is required when programming a DMA address on ARM when a
      device is not protected by an IOMMU.
      
      In this case, the DMA address should always be equal to the machine address.
      For DOM0 memory, Xen ensures this by having an identity mapping between the
      guest address and the host address. However, when mapping a foreign grant
      reference, the 1:1 model doesn't work.
      
      For ARM guests, most of the callers of pfn_to_mfn expect to get a GFN
      (Guest Frame Number), i.e. a PFN (Page Frame Number) from the Linux point
      of view, given that all ARM guests are auto-translated.
      
      Even though the name pfn_to_mfn is misleading, we need to ensure that
      those callers get a GFN and not, by mistake, an MFN. In practice, I haven't
      seen errors related to this, but we should fix it for the sake of
      correctness.
      
      In order to fix the implementation of pfn_to_mfn on ARM in a follow-up
      patch, we have to introduce new helpers to return the DMA address from a
      PFN and the inverse.
      
      On x86, the new helpers will be aliases of pfn_to_mfn and mfn_to_pfn.
      
      The helpers will be used in swiotlb and xen_biovec_phys_mergeable.
      
      This is necessary in the latter because we have to ensure that the
      biovec code will not try to merge a biovec using a foreign page and
      another using Linux memory.
      
      Lastly, the helper mfn_to_local_pfn has been renamed to bfn_to_local_pfn
      given that the only usage was in swiotlb.
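
      As a sketch of the x86 side (the helper names pfn_to_bfn/bfn_to_pfn are an
      assumption here; the message above only says the new helpers alias
      pfn_to_mfn and mfn_to_pfn on x86):

          #include <asm/xen/page.h>

          /* On x86 the DMA (bus) frame number is simply the machine frame number. */
          static inline unsigned long pfn_to_bfn(unsigned long pfn)
          {
                  return pfn_to_mfn(pfn);
          }

          static inline unsigned long bfn_to_pfn(unsigned long bfn)
          {
                  return mfn_to_pfn(bfn);
          }
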
      Signed-off-by: Julien Grall <julien.grall@citrix.com>
      Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  6. 17 Jun 2015, 1 commit
  7. 06 May 2015, 1 commit
  8. 21 Jan 2015, 1 commit
  9. 04 Dec 2014, 3 commits
  10. 10 Oct 2013, 3 commits
    • swiotlb-xen: use xen_alloc/free_coherent_pages · 1b65c4e5
      Committed by Stefano Stabellini
      Use xen_alloc_coherent_pages and xen_free_coherent_pages to allocate or
      free coherent pages.
      
      We need to be careful handling the pointer returned by
      xen_alloc_coherent_pages, because on ARM the pointer is not equal to
      phys_to_virt(*dma_handle). In fact virt_to_phys only works for kernel
      direct-mapped RAM.
      In the ARM case the pointer could be an ioremap address, therefore passing
      it to virt_to_phys would give you another physical address that doesn't
      correspond to it.
      
      Make xen_create_contiguous_region take a phys_addr_t as its start parameter
      to avoid virt_to_phys calls, which would be incorrect.
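
      A hedged sketch of the resulting allocation pattern (simplified; not the
      exact xen_swiotlb_alloc_coherent(), and the error handling and size/order
      handling are reduced):

          #include <linux/dma-mapping.h>
          #include <asm/xen/page-coherent.h>
          #include <xen/xen-ops.h>

          static void *coherent_alloc_sketch(struct device *hwdev, size_t size,
                                             dma_addr_t *dma_handle, gfp_t flags,
                                             struct dma_attrs *attrs)
          {
                  unsigned int order = get_order(size);
                  phys_addr_t phys;
                  void *ret;

                  ret = xen_alloc_coherent_pages(hwdev, size, dma_handle, flags, attrs);
                  if (!ret)
                          return NULL;

                  /*
                   * Do NOT call virt_to_phys(ret): on ARM it may be an ioremap
                   * address. With dom0 mapped 1:1 the returned handle already is
                   * the physical address.
                   */
                  phys = *dma_handle;

                  /* xen_create_contiguous_region() now takes a phys_addr_t start. */
                  if (xen_create_contiguous_region(phys, order,
                                                   fls64(dma_get_mask(hwdev)),
                                                   dma_handle)) {
                          xen_free_coherent_pages(hwdev, size, ret, *dma_handle, attrs);
                          return NULL;
                  }

                  return ret;
          }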
      
      Changes in v6:
      - remove extra spaces.
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    • swiotlb-xen: introduce xen_swiotlb_set_dma_mask · eb1ddc00
      Committed by Stefano Stabellini
      Implement xen_swiotlb_set_dma_mask and use it for set_dma_mask on arm.
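
      A plausible sketch of such a hook (the validation via
      xen_swiotlb_dma_supported() is an assumption, not quoted from this commit):

          #include <linux/dma-mapping.h>
          #include <xen/swiotlb-xen.h>

          /* Validate the requested mask against what swiotlb-xen can address,
           * then store it, as a set_dma_mask hook typically does. */
          static int xen_swiotlb_set_dma_mask_sketch(struct device *dev, u64 dma_mask)
          {
                  if (!dev->dma_mask || !xen_swiotlb_dma_supported(dev, dma_mask))
                          return -EIO;

                  *dev->dma_mask = dma_mask;
                  return 0;
          }
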
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    • xen/arm,arm64: enable SWIOTLB_XEN · 83862ccf
      Committed by Stefano Stabellini
      Xen on arm and arm64 needs SWIOTLB_XEN: when running on Xen we need to
      program the hardware with mfns rather than pfns for dma addresses.
      Remove SWIOTLB_XEN dependency on X86 and PCI and make XEN select
      SWIOTLB_XEN on arm and arm64.
      
      At the moment we always rely on swiotlb-xen, but once Xen starts supporting
      hardware IOMMUs we will be able to avoid it, conditional on the presence
      of an IOMMU on the platform.
      
      Implement xen_create_contiguous_region on arm and arm64: for the moment
      we assume that dom0 has been mapped 1:1 (physical addresses == machine
      addresses), therefore we don't need to call XENMEM_exchange. Simply
      return the physical address as the DMA address.
      
      Initialize xen-swiotlb from xen_early_init (before the native dma_ops
      are initialized) and set xen_dma_ops to &xen_swiotlb_dma_ops.
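
      A hedged sketch of the 1:1 dom0 assumption described above (the exact
      prototypes and sanity checks in the tree may differ):

          #include <linux/types.h>
          #include <xen/xen-ops.h>

          int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
                                           unsigned int address_bits,
                                           dma_addr_t *dma_handle)
          {
                  /* dom0 is assumed to be mapped 1:1: physical == machine address. */
                  *dma_handle = pstart;
                  return 0;
          }

          void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
          {
                  /* Nothing to undo in the 1:1 case. */
          }
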
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      
      
      Changes in v8:
      - assume dom0 is mapped 1:1, no need to call XENMEM_exchange.
      
      Changes in v7:
      - call __set_phys_to_machine_multi from xen_create_contiguous_region and
      xen_destroy_contiguous_region to update the P2M;
      - don't call XENMEM_unpin, it has been removed;
      - call XENMEM_exchange instead of XENMEM_exchange_and_pin;
      - set nr_exchanged to 0 before calling the hypercall.
      
      Changes in v6:
      - introduce and export xen_dma_ops;
      - call xen_mm_init as an arch_initcall.
      
      Changes in v4:
      - remove redefinition of DMA_ERROR_CODE;
      - update the code to use XENMEM_exchange_and_pin and XENMEM_unpin;
      - add a note about hardware IOMMU in the commit message.
      
      Changes in v3:
      - code style changes;
      - warn on XENMEM_put_dma_buf failures.