1. 16 December 2015 (1 commit)
    • Revert "scatterlist: use sg_phys()" · 3e6110fd
      Committed by Dan Williams
      commit db0fa0cb "scatterlist: use sg_phys()" did replacements of
      the form:
      
          phys_addr_t phys = page_to_phys(sg_page(s));   /* before */
          phys_addr_t phys = sg_phys(s) & PAGE_MASK;     /* after  */
      
      However, this breaks platforms where sizeof(phys_addr_t) >
      sizeof(unsigned long).  Revert for 4.3 and 4.4 to make room for a
      combined helper in 4.5.
      
      Cc: <stable@vger.kernel.org>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Fixes: db0fa0cb ("scatterlist: use sg_phys()")
      Suggested-by: Joerg Roedel <joro@8bytes.org>
      Reported-by: Vitaly Lavrov <vel21ripn@gmail.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  2. 09 November 2015 (1 commit)
  3. 07 November 2015 (1 commit)
    • mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep... · d0164adc
      Committed by Mel Gorman
      mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd
      
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
      have access to one of two watermarks lower than "min", which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
      were available.  Some have abused __GFP_WAIT, leading to a situation where
      an optimistic allocation with a fallback option can access atomic
      reserves.
      
      This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
      are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
      callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
      redefined as a caller that is willing to enter direct reclaim and wake
      kswapd for background reclaim.
      
      This patch then converts a number of call sites:
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
      o Callers that are checking if they are non-blocking should use the
        helper gfpflags_allow_blocking() where possible. This is because
        checking for __GFP_WAIT, as was done historically, can now trigger false
        positives. Some exceptions like dm-crypt.c exist where the code intent
        is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
        flag manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
      
      The first key hazard to watch out for is callers that removed __GFP_WAIT
      and were depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
      now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
      if it's missed in most cases as other activity will wake kswapd.
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 02 November 2015 (1 commit)
  5. 28 October 2015 (1 commit)
  6. 25 October 2015 (2 commits)
    • iommu/vt-d: Clean up pasid_enabled() and ecs_enabled() dependencies · d42fde70
      Committed by David Woodhouse
      When booted with intel_iommu=ecs_off we were still allocating the PASID
      tables even though we couldn't actually use them. We really want to make
      the pasid_enabled() macro depend on ecs_enabled().
      
      Which is unfortunate, because currently they're the other way round to
      cope with the Broadwell/Skylake problems with ECS.
      
      Instead of having ecs_enabled() depend on pasid_enabled(), which was never
      something that made me happy anyway, make it depend in the normal case
      on the "broken PASID" bit 28 *not* being set.
      
      Then pasid_enabled() can depend on ecs_enabled() as it should. And we also
      don't need to mess with it if we ever see an implementation that has some
      features requiring ECS (like PRI) but which *doesn't* have PASID support.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
    • iommu/vt-d: Handle Caching Mode implementations of SVM · 5a10ba27
      Committed by David Woodhouse
      Not entirely clear why, but it seems we need to reserve PASID zero and
      flush it when we make a PASID entry present.
      
      Quite why we couldn't use the true PASID value isn't clear.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
  7. 23 October 2015 (2 commits)
  8. 22 October 2015 (9 commits)
  9. 21 October 2015 (13 commits)
  10. 20 October 2015 (1 commit)
    • iommu/vt-d: Fix SVM IOTLB flush handling · 5d52f482
      Committed by David Woodhouse
      Change the 'pages' parameter to 'unsigned long' to avoid overflow.
      
      Fix the device-IOTLB flush parameter calculation — the size of the IOTLB
      flush is indicated by the position of the least significant zero bit in
      the address field. For example, a value of 0x12345f000 will flush from
      0x123440000 to 0x12347ffff (256KiB).
      
      Finally, the cap_pgsel_inv() capability is not relevant to SVM; the spec says that
      *all* implementations must support page-selective invalidation for
      "first-level" translations. So don't check for it.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
  11. 19 October 2015 (1 commit)
  12. 18 October 2015 (1 commit)
  13. 17 October 2015 (2 commits)
  14. 16 October 2015 (2 commits)
  15. 15 October 2015 (2 commits)