1. 18 November 2015 (2 commits)
  2. 12 November 2015 (6 commits)
  3. 10 November 2015 (1 commit)
  4. 09 November 2015 (2 commits)
  5. 08 November 2015 (1 commit)
  6. 07 November 2015 (4 commits)
    • mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep... · d0164adc
      Mel Gorman authored
      mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd
      
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
      have access to one of two watermarks lower than "min" which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
      were available.  Some have abused __GFP_WAIT, leading to a situation where
      an optimistic allocation with a fallback option can access atomic
      reserves.
      
      This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
      are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
      callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
      redefined to mean that the caller is willing to enter direct reclaim and
      to wake kswapd for background reclaim.
      
      This patch then converts a number of call sites:
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
      o Callers that are checking if they are non-blocking should use the
        helper gfpflags_allow_blocking() where possible. This is because
        checking for __GFP_WAIT as was done historically now can trigger false
        positives. Some exceptions like dm-crypt.c exist where the code intent
        is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
        flag manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
      
      The first key hazard to watch out for is callers that removed __GFP_WAIT
      and were depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
      now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
      if it's missed in most cases as other activity will wake kswapd.
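
      The intent of the gfpflags_allow_blocking() helper can be modeled in plain
      C.  The flag bit values below are illustrative stand-ins, not the kernel's
      actual gfp.h definitions, and the composites follow the description in the
      commit message:

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Illustrative flag bits; the real values live in include/linux/gfp.h. */
      typedef unsigned int gfp_t;
      #define __GFP_HIGH           0x01u
      #define __GFP_ATOMIC         0x02u
      #define __GFP_DIRECT_RECLAIM 0x04u
      #define __GFP_KSWAPD_RECLAIM 0x08u

      /* After the patch, blocking ability is keyed off __GFP_DIRECT_RECLAIM
       * alone, rather than the historical __GFP_WAIT check. */
      static bool gfpflags_allow_blocking(gfp_t flags)
      {
              return !!(flags & __GFP_DIRECT_RECLAIM);
      }

      /* Composite flags as sketched in the commit message. */
      #define GFP_KERNEL (__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)
      #define GFP_ATOMIC (__GFP_HIGH | __GFP_ATOMIC | __GFP_KSWAPD_RECLAIM)

      int main(void)
      {
              /* GFP_KERNEL may sleep; GFP_ATOMIC may not, even though both
               * wake kswapd for background reclaim. */
              assert(gfpflags_allow_blocking(GFP_KERNEL));
              assert(!gfpflags_allow_blocking(GFP_ATOMIC));
              return 0;
      }
      ```

      Testing __GFP_DIRECT_RECLAIM alone is what avoids the false positives
      that a raw __GFP_WAIT check would now produce.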
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0164adc
    • arm64: bpf: fix mod-by-zero case · 14e589ff
      Zi Shen Lim authored
      Turns out in the case of modulo by zero in a BPF program:
      	A = A % X;  (X == 0)
      the expected behavior is to terminate with return value 0.
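      
      That expected behavior, shared with the companion div-by-zero fix below,
      can be modeled in C.  bpf_mod() is a hypothetical stand-in for the logic
      the JIT must emit, not the arm64 JIT code itself:

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Model of the expected BPF ALU semantics: on a zero divisor the
       * program terminates immediately with return value 0, rather than
       * continuing with whatever the hardware division produces. */
      static uint64_t bpf_mod(uint64_t a, uint64_t x, int *terminated)
      {
              if (x == 0) {
                      *terminated = 1;   /* program exits, returning 0 */
                      return 0;
              }
              *terminated = 0;
              return a % x;              /* normal case: A = A % X */
      }

      int main(void)
      {
              int term;

              assert(bpf_mod(42, 5, &term) == 2 && !term); /* normal modulo */
              assert(bpf_mod(42, 0, &term) == 0 && term);  /* X == 0: exit 0 */
              return 0;
      }
      ```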
      
      The bug in the JIT is exposed by a new test case [1].
      
      [1] https://lkml.org/lkml/2015/11/4/499
      Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
      Reported-by: Yang Shi <yang.shi@linaro.org>
      Reported-by: Xi Wang <xi.wang@gmail.com>
      CC: Alexei Starovoitov <ast@plumgrid.com>
      Fixes: e54bcde3 ("arm64: eBPF JIT compiler")
      Cc: <stable@vger.kernel.org> # 3.18+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      14e589ff
    • arm64: bpf: fix div-by-zero case · 251599e1
      Zi Shen Lim authored
      In the case of division by zero in a BPF program:
      	A = A / X;  (X == 0)
      the expected behavior is to terminate with return value 0.
      
      This is confirmed by the test case introduced in commit 86bf1721
      ("test_bpf: add tests checking that JIT/interpreter sets A and X to 0.").
      Reported-by: Yang Shi <yang.shi@linaro.org>
      Tested-by: Yang Shi <yang.shi@linaro.org>
      CC: Xi Wang <xi.wang@gmail.com>
      CC: Alexei Starovoitov <ast@plumgrid.com>
      CC: linux-arm-kernel@lists.infradead.org
      CC: linux-kernel@vger.kernel.org
      Fixes: e54bcde3 ("arm64: eBPF JIT compiler")
      Cc: <stable@vger.kernel.org> # 3.18+
      Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      251599e1
    • arm64: Enable CRYPTO_CRC32_ARM64 in defconfig · 4d17da4c
      Catalin Marinas authored
      CONFIG_CRYPTO_CRC32_ARM64 has been around since commit f6f203fa
      ("crypto: crc32 - Add ARM64 CRC32 hw accelerated module") but defconfig
      did not automatically enable it.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      4d17da4c
  7. 06 November 2015 (1 commit)
    • arm64: cmpxchg_dbl: fix return value type · 57a65667
      Lorenzo Pieralisi authored
      The current arm64 __cmpxchg_double{_mb} implementations carry out the
      compare exchange by first comparing the old values passed in to the
      values read from the pointer provided and by stashing the cumulative
      bitwise difference in a 64-bit register.
      
      By comparing the register content against 0, it is possible to detect if
      the values read differ from the old values passed in, so that the compare
      exchange detects whether it has to bail out or carry on completing the
      operation with the exchange.
      
      Given this implementation, to report the cmpxchg operation status the
      __cmpxchg_double{_mb} functions should return the 64-bit stashed
      bitwise difference, so that the caller can detect cmpxchg failure by
      comparing the return value against 0.  The current implementation,
      however, declares the return value as an int, so the 64-bit value
      stashing the bitwise difference is truncated before being returned to
      the __cmpxchg_double{_mb} callers.  Any bitwise difference present in
      the top 32 bits therefore goes undetected, triggering false positives
      and subsequent kernel failures.
      
      This patch fixes the issue by declaring the arm64 __cmpxchg_double{_mb}
      return value as a long, so that the bitwise difference is properly
      propagated on failure, restoring the expected behaviour.
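      
      The truncation hazard can be reproduced in isolation.  The snippet
      below is a minimal model of the return-type bug on an LP64 target
      (where long is 64 bits, as on arm64), not the actual cmpxchg assembly:

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Cumulative bitwise difference of an expected and an observed value:
       * non-zero means the comparison failed. */
      static uint64_t diff64(uint64_t old, uint64_t seen)
      {
              return old ^ seen;
      }

      int main(void)
      {
              /* Values differing only in the top 32 bits. */
              uint64_t old  = 0x0000000100000000ULL;
              uint64_t seen = 0x0000000000000000ULL;

              long correct   = (long)diff64(old, seen); /* fix: full width kept */
              int  truncated = (int)diff64(old, seen);  /* bug: top 32 bits lost */

              assert(correct != 0);    /* failure correctly reported */
              assert(truncated == 0);  /* false positive: failure reads as success */
              return 0;
      }
      ```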
      
      Fixes: e9a4b795 ("arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU")
      Cc: <stable@vger.kernel.org> # 4.3+
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      57a65667
  8. 31 October 2015 (3 commits)
  9. 30 October 2015 (4 commits)
  10. 29 October 2015 (9 commits)
  11. 28 October 2015 (3 commits)
  12. 26 October 2015 (2 commits)
  13. 24 October 2015 (2 commits)