1. 06 Oct, 2010 (1 commit)
    • SLUB: Optimize slab_free() debug check · 15b7c514
      Committed by Pekka Enberg
      This patch optimizes the slab_free() debug check to use "c->node != NUMA_NO_NODE"
      instead of "c->node >= 0", because the former generates smaller code on x86-64:
      
        Before:
      
          4736:       48 39 70 08             cmp    %rsi,0x8(%rax)
          473a:       75 26                   jne    4762 <kfree+0xa2>
          473c:       44 8b 48 10             mov    0x10(%rax),%r9d
          4740:       45 85 c9                test   %r9d,%r9d
          4743:       78 1d                   js     4762 <kfree+0xa2>
      
        After:
      
          4736:       48 39 70 08             cmp    %rsi,0x8(%rax)
          473a:       75 23                   jne    475f <kfree+0x9f>
          473c:       83 78 10 ff             cmpl   $0xffffffffffffffff,0x10(%rax)
          4740:       74 1d                   je     475f <kfree+0x9f>
      
      This patch also cleans up __slab_alloc() to use NUMA_NO_NODE instead of "-1"
      for enabling debugging for a per-CPU cache.
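      
      For illustration, here is a standalone C sketch of the two forms of the
      check (cpu_slab_sketch is a hypothetical stand-in for the real
      kmem_cache_cpu layout, not the kernel's definition): comparing the field
      against the constant NUMA_NO_NODE (-1) compiles to a single cmpl against
      an immediate, while ">= 0" needs a separate sign test and conditional.
      
        #define NUMA_NO_NODE (-1)
        
        /* Hypothetical stand-in for struct kmem_cache_cpu; only the
         * fields relevant to the check are sketched here. */
        struct cpu_slab_sketch {
                void **freelist;
                void *page;
                int node;       /* NUMA_NO_NODE (-1) while debugging */
        };
        
        static int fast_free_ok_before(const struct cpu_slab_sketch *c)
        {
                return c->node >= 0;            /* test %r9d,%r9d ; js */
        }
        
        static int fast_free_ok_after(const struct cpu_slab_sketch *c)
        {
                return c->node != NUMA_NO_NODE; /* cmpl $-1,0x10(%rax) ; je */
        }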
      Acked-by: Christoph Lameter <cl@linux.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
  2. 02 Oct, 2010 (22 commits)
  3. 26 Sep, 2010 (1 commit)
  4. 25 Sep, 2010 (1 commit)
  5. 24 Sep, 2010 (4 commits)
  6. 23 Sep, 2010 (4 commits)
  7. 22 Sep, 2010 (1 commit)
  8. 21 Sep, 2010 (2 commits)
    • percpu: fix pcpu_last_unit_cpu · 46b30ea9
      Committed by Tejun Heo
      pcpu_first/last_unit_cpu are used to track which cpu has the first and
      last units assigned.  This in turn is used to determine the span of a
      chunk for map/unmap cache flushes and whether an address belongs to
      the first chunk or not in per_cpu_ptr_to_phys().
      
      When the number of possible CPUs isn't a power of two, a chunk may
      contain unassigned units towards the end of a chunk.  The logic to
      determine pcpu_last_unit_cpu was incorrect when there was an unused
      unit at the end of a chunk.  It failed to ignore the unused unit and
      assigned the unused marker NR_CPUS to pcpu_last_unit_cpu.
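      
      A minimal sketch of the corrected bookkeeping follows (simplified, and
      assuming a unit-to-CPU map; this is not the literal mm/percpu.c diff):
      the last-unit CPU must only be updated from units that are actually
      assigned, instead of being derived from the loop counter after the loop
      ends, which picks up trailing unused units.
      
        #include <limits.h>
        
        #define NR_CPUS   64            /* illustrative value */
        #define UNIT_FREE UINT_MAX      /* marker: unit has no CPU */
        
        /* cpu_of[u] is the CPU owning unit u, or UNIT_FREE if unused. */
        static void find_unit_cpus(const unsigned int *cpu_of, int nr_units,
                                   unsigned int *first, unsigned int *last)
        {
                int u;
        
                *first = NR_CPUS;       /* "not found" markers */
                *last = NR_CPUS;
        
                for (u = 0; u < nr_units; u++) {
                        if (cpu_of[u] == UNIT_FREE)
                                continue;       /* skip unassigned units */
                        if (*first == NR_CPUS)
                                *first = cpu_of[u];
                        *last = cpu_of[u];      /* assigned units only */
                }
        }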
      
      This was discovered by CAI Qian through a kdump failure, caused by a
      malfunctioning per_cpu_ptr_to_phys(), on a kvm setup with 50 possible
      CPUs.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: CAI Qian <caiqian@redhat.com>
      Cc: stable@kernel.org
    • mm: further fix swapin race condition · 31c4a3d3
      Committed by Hugh Dickins
      Commit 4969c119 ("mm: fix swapin race condition") is now agreed to
      be incomplete.  There's a race, not very much less likely than the
      original race envisaged, in which it is further necessary to check that
      the swapcache page's swap has not changed.
      
      Here's the reasoning: cast in terms of reuse_swap_page(), but probably
      could be reformulated to rely on try_to_free_swap() instead, or on
      swapoff+swapon.
      
      A, faults into do_swap_page(): does page1 = lookup_swap_cache(swap1) and
      comes through the lock_page(page1).
      
      B, a racing thread of the same process, faults on the same address: does
      page1 = lookup_swap_cache(swap1) and now waits in lock_page(page1), but
      for whatever reason is unlucky not to get the lock any time soon.
      
      A carries on through do_swap_page(): it is a write fault, but A cannot
      reuse the swap page1 (there is another reference to swap1).  A unlocks
      page1 (though B doesn't get the lock yet) and does COW in do_wp_page(),
      so page2 is now in that pte.
      
      C, perhaps the parent of A+B, comes in and write faults the same swap
      page1 into its mm, reuse_swap_page() succeeds this time, swap1 is freed.
      
      kswapd comes in after some time (B still unlucky) and swaps out some
      pages from A+B and C: it allocates the original swap1 to page2 in A+B,
      and some other swap2 to the original page1 now in C.  But does not
      immediately free page1 (actually it couldn't: B holds a reference),
      leaving it in swap cache for now.
      
      B at last gets the lock on page1, hooray! Is PageSwapCache(page1)? Yes.
      Is pte_same(*page_table, orig_pte)? Yes, because page2 has now been
      given the swap1 which page1 used to have.  So B proceeds to insert page1
      into A+B's page_table, though its content now belongs to C, quite
      different from what A wrote there.
      
      B ought to have checked that page1's swap was still swap1.
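      
      A minimal sketch of the missing check follows (types mocked; modeled on
      the fix that landed in do_swap_page() in mm/memory.c): once B finally
      holds the page lock, it must verify not only that page1 is still
      swapcache but also that page1 still carries swap1.
      
        #include <stdbool.h>
        
        /* Mocked page descriptor; the two fields stand in for the
         * PageSwapCache() flag and page_private() (the swap slot). */
        struct page_sketch {
                bool in_swapcache;
                unsigned long swap_slot;
        };
        
        /*
         * True only if the locked page still covers the swap entry we
         * originally faulted on.  Checking in_swapcache alone is not
         * enough: the slot may have been freed and recycled for a
         * different page, exactly the race described above.
         */
        static bool swapin_still_valid(const struct page_sketch *page,
                                       unsigned long orig_swap_entry)
        {
                return page->in_swapcache &&
                       page->swap_slot == orig_swap_entry;
        }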
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 10 Sep, 2010 (4 commits)