1. 04 May 2014, 36 commits
  2. 28 April 2014, 4 commits
    • Merge 3.15-rc3 into staging-next · 37aa4836
      Committed by Greg Kroah-Hartman
    • Linux 3.15-rc3 · d1db0eea
      Committed by Linus Torvalds
    • word-at-a-time: avoid undefined behaviour in zero_bytemask macro · ec6931b2
      Committed by Will Deacon
      The asm-generic, big-endian version of zero_bytemask creates a mask of
      bytes preceding the first zero-byte by left shifting ~0ul based on the
      position of the first zero byte.
      
      Unfortunately, if the first (top) byte is zero, the output of
      prep_zero_mask has only the top bit set, resulting in undefined C
      behaviour as we shift left by an amount equal to the width of the type.
      As it happens, GCC doesn't manage to spot this through the call to fls(),
      but the issue remains if architectures choose to implement their shift
      instructions differently.
      
      An example would be arch/arm/ (AArch32), where LSL Rd, Rn, #32 results
      in Rd == 0x0, whilst on arch/arm64 (AArch64) LSL Xd, Xn, #64 results in
      Xd == Xn.
      
      Rather than check explicitly for the problematic shift, this patch adds
      an extra shift by 1, replacing fls with __fls. Since zero_bytemask is
      never called with a zero argument (has_zero() is used to check the data
      first), we don't need to worry about calling __fls(0), which is
      undefined.
      
      Cc: <stable@vger.kernel.org>
      Cc: Victor Kamensky <victor.kamensky@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
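      A minimal user-space sketch of the idea (my_fls() below is a stand-in
      for the kernel's 0-indexed __fls(), and the example values are
      illustrative, not taken from the word-at-a-time code itself):

          #include <stdio.h>

          #define BITS_PER_LONG (8 * sizeof(unsigned long))

          /* Stand-in for the kernel's __fls(): 0-indexed position of the
           * most significant set bit.  Undefined for word == 0, which is
           * acceptable here because has_zero() guarantees a non-zero mask. */
          static unsigned long my_fls(unsigned long word)
          {
                  return BITS_PER_LONG - 1 - __builtin_clzl(word);
          }

          /* Broken form: a 1-indexed fls() returns BITS_PER_LONG when the
           * top bit is set, making the shift amount equal to the type
           * width, which is undefined behaviour in C:
           *
           *     #define zero_bytemask(mask) (~0ul << fls(mask))
           *
           * Fixed form: start from ~1ul and shift by the 0-indexed bit
           * position, so the shift is at most BITS_PER_LONG - 1. */
          #define zero_bytemask(mask) (~1ul << my_fls(mask))

          int main(void)
          {
                  /* First (top) byte is the zero byte: only the top bit set. */
                  unsigned long top = 1ul << (BITS_PER_LONG - 1);
                  /* Zero byte is the second byte from the top. */
                  unsigned long second = 1ul << (BITS_PER_LONG - 9);

                  printf("%016lx\n", zero_bytemask(top));    /* 0 on 64-bit */
                  printf("%016lx\n", zero_bytemask(second)); /* ff00...00   */
                  return 0;
          }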
    • Merge branch 'safe-dirty-tlb-flush' · ac6c9e2b
      Committed by Linus Torvalds
      This merges the patch to fix possible loss of the dirty bit on munmap() or
      madvise(DONTNEED).  If there are concurrent writers on other CPUs that
      have the unmapped/unneeded page in their TLBs, their writes to the page
      could possibly get lost if a third CPU raced with the TLB flush and did
      a page_mkclean() before the page was fully written.
      
      Admittedly, if you munmap() or madvise(DONTNEED) an area _while_ another
      thread is still busy writing to it, you deserve all the lost writes you
      could get.  But we kernel people hold ourselves to higher quality
      standards than "crazy people deserve to lose", because, well, we've seen
      people do all kinds of crazy things.
      
      So let's get it right, just because we can, and we don't have to worry
      about it.
      
      * safe-dirty-tlb-flush:
        mm: split 'tlb_flush_mmu()' into tlb flushing and memory freeing parts
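      As a rough user-space model of that split (the struct fields and helper
      bodies below are simplified stand-ins, not the kernel's actual
      mmu_gather implementation), the invariant is that the TLB shootdown
      must complete before any queued page is handed back to the allocator:

          #include <stdio.h>
          #include <stddef.h>

          /* Simplified stand-in for the kernel's mmu_gather batching state. */
          struct mmu_gather {
                  int    need_flush;   /* stale TLB entries may exist        */
                  size_t nr;           /* number of pages queued for freeing */
                  void  *pages[64];    /* pages unmapped but not yet freed   */
          };

          /* Stand-in for the architecture's remote TLB shootdown. */
          static void shootdown_all_cpus(void)
          {
                  puts("TLB flushed on all CPUs");
          }

          /* Part one: invalidate stale TLB entries, but free nothing yet. */
          static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
          {
                  if (!tlb->need_flush)
                          return;
                  tlb->need_flush = 0;
                  shootdown_all_cpus();
          }

          /* Part two: only after the flush is it safe to free the pages,
           * since no CPU can still dirty them through a stale, writable
           * TLB entry after page_mkclean() has marked them clean. */
          static void tlb_flush_mmu_free(struct mmu_gather *tlb)
          {
                  while (tlb->nr)
                          tlb->pages[--tlb->nr] = NULL;  /* free_page() stand-in */
                  puts("queued pages freed");
          }

          /* The old entry point survives as the composition of the two;
           * callers that need to flush early and free later can now call
           * the halves separately. */
          static void tlb_flush_mmu(struct mmu_gather *tlb)
          {
                  tlb_flush_mmu_tlbonly(tlb);
                  tlb_flush_mmu_free(tlb);
          }

          int main(void)
          {
                  struct mmu_gather tlb = { .need_flush = 1, .nr = 1 };
                  tlb_flush_mmu(&tlb);
                  return 0;
          }

      The design point is ordering: splitting the function lets the flush
      happen early while the freeing is deferred until after it, instead of
      forcing both to happen at a single point.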