    mm: make compound_head() robust · 1d798ca3
    Committed by Kirill A. Shutemov
    Hugh has pointed out that the compound_head() call can be unsafe in some
    contexts. Here's one example:
    
    	CPU0					CPU1
    
    isolate_migratepages_block()
      page_count()
        compound_head()
          !!PageTail() == true
    					put_page()
    					  tail->first_page = NULL
          head = tail->first_page
    					alloc_pages(__GFP_COMP)
    					   prep_compound_page()
    					     tail->first_page = head
    					     __SetPageTail(p);
          !!PageTail() == true
        <head == NULL dereferencing>
    
    The race is purely theoretical. I don't think it's possible to trigger it
    in practice. But who knows.
    
    We can fix the race by changing how we encode PageTail() and
    compound_head() within struct page, so that both can be updated in one
    shot.
    
    The patch introduces page->compound_head in the third double word block,
    in front of compound_dtor and compound_order. Bit 0 encodes PageTail();
    if it is set, the rest of the bits are a pointer to the head page.
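    
    To illustrate the encoding, here is a minimal userspace model of the
    scheme described above (a sketch only, not the kernel code itself; the
    helper names set_compound_head()/clear_compound_head() are illustrative):
    
        #include <assert.h>
        #include <stdio.h>
    
        /* Toy struct page: only the field relevant to the encoding. */
        struct page {
                unsigned long compound_head;    /* bit 0: tail, rest: head */
        };
    
        static int PageTail(const struct page *page)
        {
                return page->compound_head & 1;
        }
    
        static struct page *compound_head(struct page *page)
        {
                unsigned long head = page->compound_head;
    
                if (head & 1)
                        return (struct page *)(head - 1);
                return page;
        }
    
        /* PageTail() and the head pointer change together, in one store. */
        static void set_compound_head(struct page *tail, struct page *head)
        {
                tail->compound_head = (unsigned long)head + 1;
        }
    
        static void clear_compound_head(struct page *tail)
        {
                tail->compound_head = 0;
        }
    
        int main(void)
        {
                struct page head = { 0 }, tail = { 0 };
    
                set_compound_head(&tail, &head);
                assert(PageTail(&tail));
                assert(compound_head(&tail) == &head);
    
                clear_compound_head(&tail);
                assert(!PageTail(&tail));
                assert(compound_head(&tail) == &tail);
    
                printf("encoding round-trips\n");
                return 0;
        }
    
    In the kernel itself the load and store sides would need the usual
    READ_ONCE()/WRITE_ONCE() treatment; the point here is only that both
    facts live in a single word.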
    
    The patch moves page->pmd_huge_pte out of the word, just in case an
    architecture defines pgtable_t as something that can have bit 0 set.
    
    hugetlb_cgroup uses page->lru.next in the second tail page to store a
    pointer to struct hugetlb_cgroup. The patch switches it to use
    page->private in the second tail page instead. That space is free since
    ->first_page is removed from the union.
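    
    The accessors then look roughly like this (a sketch of the shape of the
    change, not verbatim kernel code):
    
        /* page[2] is the second tail page of the compound page. */
        static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
        {
                if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
                        return NULL;
                return (struct hugetlb_cgroup *)page[2].private;
        }
    
        static inline void set_hugetlb_cgroup(struct page *page,
                                              struct hugetlb_cgroup *h_cg)
        {
                if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
                        return;
                page[2].private = (unsigned long)h_cg;
        }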
    
    The patch also opens the possibility of removing the
    HUGETLB_CGROUP_MIN_ORDER limitation, since there is now space in the
    first tail page to store the struct hugetlb_cgroup pointer. But that's
    out of scope for this patch.
    
    That means page->compound_head shares storage space with:
    
     - page->lru.next;
     - page->next;
     - page->rcu_head.next;
    
    That's too long a list to be absolutely sure, but it looks like nobody
    uses bit 0 of this word.
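    
    For context, the relevant part of struct page looks roughly like this
    after the patch (a simplified sketch, most members and #ifdefs omitted):
    
        union {
                struct list_head lru;           /* lru.next is the first word */
                struct {                        /* slub per-cpu partial pages */
                        struct page *next;
                        int pages;
                        int pobjects;
                };
                struct rcu_head rcu_head;       /* rcu_head.next is the first word */
                struct {                        /* tail pages of a compound page */
                        unsigned long compound_head;    /* bit 0 set */
                        /* first tail page only: */
                        unsigned int compound_dtor;
                        unsigned int compound_order;
                };
                struct {                        /* THP */
                        unsigned long __pad;    /* keep pmd_huge_pte clear of bit 0 */
                        pgtable_t pmd_huge_pte;
                };
        };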
    
    page->rcu_head.next is guaranteed[1] to have bit 0 clear as long as we
    use call_rcu(), call_rcu_bh(), call_rcu_sched(), or call_srcu(). But a
    future call_rcu_lazy() is not allowed, as it makes use of the bit and we
    could get a false-positive PageTail().
    
    [1] http://lkml.kernel.org/g/20150827163634.GD4029@linux.vnet.ibm.com
    
    Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Christoph Lameter <cl@linux.com>
    Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>