1. 19 Dec 2008, 1 commit
  2. 23 Oct 2008, 2 commits
  3. 10 Sep 2008, 1 commit
    • x86: unsigned long pte_pfn · 91030ca1
      Authored by Hugh Dickins
      pte_pfn() has always been of type unsigned long, even on 32-bit PAE;
      but in the current tip/next/mm tree it works out to be unsigned long
      long on 64-bit, which gives an irritating warning if you try to printk
      a pfn with the usual %lx.
      
      Now use the same pte_pfn() function, moved from pgtable-3level.h
      to pgtable.h, for all models: as suggested by Jeremy Fitzhardinge.
      And pte_page() can well move along with it (remaining a macro to
      avoid dependence on mm_types.h).
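      
      For reference, the unified helper presumably ends up along these
      lines (a minimal sketch assuming the PTE_PFN_MASK and PAGE_SHIFT
      definitions already in this tree, not the verbatim patch):
      
        /* One pte_pfn() for the 2-level, PAE and 64-bit models.
         * pte_val() is pteval_t (64-bit under PAE), so mask first,
         * then shift: the resulting pfn always fits unsigned long. */
        static inline unsigned long pte_pfn(pte_t pte)
        {
                return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT;
        }
      
        /* kept a macro to avoid a dependence on mm_types.h */
        #define pte_page(pte)   pfn_to_page(pte_pfn(pte))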
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 15 Aug 2008, 1 commit
  5. 23 Jul 2008, 1 commit
    • x86: consolidate header guards · 77ef50a5
      Authored by Vegard Nossum
      This patch is the result of an automatic script that consolidates the
      format of all the headers in include/asm-x86/.
      
      The format:
      
      1. No leading underscore. Names with leading underscores are reserved.
      2. Pathname components are separated by two underscores. So we can
         distinguish between mm_types.h and mm/types.h.
      3. Everything except letters and numbers is turned into a single
         underscore (see the example guard sketched below).
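      
      Applied to this file, for instance, the script presumably produces
      a guard of this shape (an illustrative sketch for
      include/asm-x86/pgtable.h, not a quote from the patch):
      
        #ifndef ASM_X86__PGTABLE_H
        #define ASM_X86__PGTABLE_H
      
        /* ... header contents ... */
      
        #endif /* ASM_X86__PGTABLE_H */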
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
  6. 22 Jul 2008, 1 commit
    • x86: rename PTE_MASK to PTE_PFN_MASK · 59438c9f
      Authored by Jeremy Fitzhardinge
      Rusty, in his peevish way, complained that macros defining constants
      should have a name which somewhat accurately reflects the actual
      purpose of the constant.
      
      Aside from the fact that PTE_MASK gives no clue as to what's actually
      being masked, and is misleadingly similar to the functionally entirely
      different PMD_MASK, PUD_MASK and PGD_MASK, I don't really see what the
      problem is.
      
      But if this patch silences the incessant noise, then it will have
      achieved its goal (TODO: write test-case).
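      
      The rename itself is presumably a one-line substitution of this
      shape, with every PTE_MASK user updated mechanically (a sketch
      assuming the PHYSICAL_PAGE_MASK-based definition used in this
      series, not the verbatim diff):
      
        /* the name now says what is masked: the pfn-bearing bits of a pte */
        #define PTE_PFN_MASK    ((pteval_t)PHYSICAL_PAGE_MASK)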
      Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  7. 16 Jul 2008, 1 commit
  8. 08 Jul 2008, 3 commits
  9. 20 May 2008, 1 commit
    • x86: strengthen 64-bit p?d_bad() · a8375bd8
      Authored by Hugh Dickins
      The x86_64 pgd_bad(), pud_bad(), pmd_bad() inlines have differed from
      their x86_32 counterparts in a couple of ways: they've been unnecessarily
      weak (e.g. letting 0 or 1 count as good), and were typed as unsigned long.
      Strengthen them and return int.
      
      The PAE pmd_bad was too weak before, allowing any junk in the upper half;
      but got strengthened by the patch correcting its ~PAGE_MASK to ~PTE_MASK.
      The PAE pud_bad already said ~PTE_MASK; and since it folds into pgd_bad,
      and we don't set the protection bits at that level, it'll do as is.
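      
      The strengthened checks presumably compare the whole entry against
      the expected kernel pagetable bits, along these lines (a sketch of
      the x86_64 pmd_bad(), assuming the PTE_MASK, _PAGE_USER and
      _KERNPG_TABLE definitions of this era; pgd_bad() and pud_bad()
      would match):
      
        static inline int pmd_bad(pmd_t pmd)
        {
                /* good means exactly _KERNPG_TABLE outside the pfn bits,
                 * with _PAGE_USER also tolerated; 0 or 1 is now bad */
                return (pmd_val(pmd) & ~(PTE_MASK | _PAGE_USER))
                        != _KERNPG_TABLE;
        }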
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 07 May 2008, 1 commit
    • x86: fix PAE pmd_bad bootup warning · aeed5fce
      Authored by Hugh Dickins
      Fix warning from pmd_bad() at bootup on a HIGHMEM64G HIGHPTE x86_32.
      
      That came from 9fc34113 ("x86: debug pmd_bad()"); but we understand
      now that the typecasting was wrong for PAE in the previous version:
      pagetable pages above 4GB looked bad and stopped Arjan from booting.
      
      And revert cded932b ("x86: fix pmd_bad and pud_bad to support huge
      pages").  It was the wrong way round: we shouldn't weaken every
      pmd_bad and pud_bad check to let huge pages slip through - in part
      they check that we _don't_ have a huge page where it's not expected.
      
      Put the x86 pmd_bad() and pud_bad() definitions back to what they have long
      been: they can be improved (x86_32 should use PTE_MASK, to stop PAE thinking
      junk in the upper word is good; and x86_64 should follow x86_32's stricter
      comparison, to stop thinking any subset of required bits is good); but that
      should be a later patch.
      
      Fix Hans' good observation that follow_page() will never find pmd_huge()
      because that would have already failed the pmd_bad test: test pmd_huge in
      between the pmd_none and pmd_bad tests.  Tighten x86's pmd_huge() check?
      No, once it's a hugepage entry, it can get quite far from a good pmd: for
      example, PROT_NONE leaves it with only ACCESSED of the KERN_PGTABLE bits.
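      
      The reordering in follow_page() presumably ends up shaped like this
      (a minimal sketch of the fixed check order, not the verbatim
      mm/memory.c hunk):
      
        if (pmd_none(*pmd))
                goto no_page_table;
      
        /* recognize huge entries before pmd_bad(), which would
         * otherwise reject them and skip this path entirely */
        if (pmd_huge(*pmd)) {
                BUG_ON(flags & FOLL_GET);
                page = follow_huge_pmd(mm, address, pmd, flags & FOLL_WRITE);
                goto out;
        }
      
        if (unlikely(pmd_bad(*pmd)))
                goto no_page_table;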
      
      However... though follow_page() contains this and another test for
      huge pages, so it's nice to keep it working on them, where does it
      actually get called on a huge page?  get_user_pages() checks
      is_vm_hugetlb_page(vma) to call alternative hugetlb processing, as
      do unmap_vmas() and others.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Earlier-version-tested-by: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jeff Chua <jeff.chua.linux@gmail.com>
      Cc: Hans Rosenfeld <hans.rosenfeld@amd.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 25 Apr 2008, 1 commit
  12. 17 Apr 2008, 4 commits
  13. 04 Mar 2008, 1 commit
  14. 03 Mar 2008, 1 commit
  15. 01 Mar 2008, 1 commit
    • x86: fix pmd_bad and pud_bad to support huge pages · cded932b
      Authored by Hans Rosenfeld
      I recently stumbled upon a problem in the support for huge pages. If a
      program using huge pages does not explicitly unmap them, they remain
      mapped (and therefore, are lost) after the program exits.
      
      I observed that the free huge page count in /proc/meminfo decreased when
      running my program, and it did not increase after the program exited.
      After running the program a few times, no more huge pages could be
      allocated.
      
      The reason for this seems to be that the x86 pmd_bad and pud_bad
      consider pmd/pud entries that have the PSE bit set to be invalid. I
      think there is nothing wrong with this bit being set; it just
      indicates that the lowest level of translation has been reached.
      This bit has to be (and is) checked after the basic validity of the
      entry has been checked, like
      in this fragment from follow_page() in mm/memory.c:
      
        if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
                goto no_page_table;
      
        if (pmd_huge(*pmd)) {
                BUG_ON(flags & FOLL_GET);
                page = follow_huge_pmd(mm, address, pmd, flags & FOLL_WRITE);
                goto out;
        }
      
      Note that this code currently doesn't work as intended if the pmd
      refers to a huge page: the pmd_huge() check can never be reached,
      because pmd_bad() rejects the entry first and jumps to
      no_page_table.
      
      Extending pmd_bad() (and, for future 1GB page support, pud_bad())
      to allow for the PSE bit being set fixes this. For similar reasons,
      allowing the NX bit to be set is necessary, too; I have seen huge
      pages with the NX bit set in their pmd entry, which would cause the
      same problem. The extended check is sketched below.
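      
      The widened check presumably just adds those two bits to the
      ignored mask, along these lines (a sketch of the x86_64 variant,
      assuming the PTE_MASK, _PAGE_PSE and _PAGE_NX names of the time,
      not the verbatim patch):
      
        static inline int pmd_bad(pmd_t pmd)
        {
                /* tolerate PSE (huge page) and NX in an otherwise
                 * well-formed kernel pagetable entry */
                return (pmd_val(pmd) &
                        ~(PTE_MASK | _PAGE_USER | _PAGE_PSE | _PAGE_NX))
                        != _KERNPG_TABLE;
        }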
      Signed-off-by: Hans Rosenfeld <hans.rosenfeld@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  16. 19 Feb 2008, 2 commits
  17. 04 Feb 2008, 2 commits
  18. 30 Jan 2008, 15 commits