    [PATCH] Enable mprotect on huge pages (commit 8f860591)
    Committed by Zhang, Yanmin
    2.6.16-rc3 uses hugetlb on-demand paging, but it doesn't support
    mprotect() on hugetlb mappings.
    
    From: David Gibson <david@gibson.dropbear.id.au>
    
      Remove a test from the mprotect() path which checks that the mprotect()ed
      range on a hugepage VMA is hugepage aligned (yes, really, the sense of
      is_aligned_hugepage_range() is the opposite of what you'd guess :-/).
    
      In fact, we don't need this test.  If the given addresses match the
      beginning/end of a hugepage VMA they must already be suitably aligned.  If
      they don't, then mprotect_fixup() will attempt to split the VMA.  The very
      first test in split_vma() will check for a badly aligned address on a
      hugepage VMA and return -EINVAL if necessary.
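    
      For reference, a minimal sketch of the split_vma() check being relied on
      here (illustrative only; the exact 2.6.16-era source may differ in detail):
    
	/* mm/mmap.c: the very first test in split_vma() already rejects a
	 * split of a hugepage VMA at a non-hugepage-aligned address, which
	 * is what makes the extra test in the mprotect() path redundant. */
	if (is_vm_hugetlb_page(vma) && (addr & ~HPAGE_MASK))
		return -EINVAL;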
    
    From: "Chen, Kenneth W" <kenneth.w.chen@intel.com>
    
      On i386 and x86-64, the pte flag _PAGE_PSE collides with _PAGE_PROTNONE.  The
      identity of a hugetlb pte is lost when changing page protection via mprotect.
      A page fault that occurs later will then trigger a BUG check in
      huge_pte_alloc().

      The fix is to always make the new pte a hugetlb pte, and also to clean up
      legacy code where _PAGE_PRESENT was forced on in the pre-demand-faulting days.
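    
      Roughly, the protection-change path then looks like the sketch below
      (names as in the i386/x86-64 hugetlb pte helpers; details illustrative
      only, not the verbatim patch):
    
	/* mm/hugetlb.c: re-assert the hugepage bit after pte_modify(), so
	 * that _PAGE_PSE survives even when the new protection is encoded
	 * with _PAGE_PROTNONE (i.e. PROT_NONE). */
	pte = huge_ptep_get_and_clear(mm, address, ptep);
	pte = pte_mkhuge(pte_modify(pte, newprot));
	set_huge_pte_at(mm, address, ptep, pte);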
    Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
    Cc: David Gibson <david@gibson.dropbear.id.au>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: William Lee Irwin III <wli@holomorphy.com>
    Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
    Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
    Cc: Andi Kleen <ak@muc.de>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>