1. 17 April 2008, 2 commits
    • x86: add gbpages switches · 00d1c5e0
      Committed by Ingo Molnar
      These new controls toggle experimental support for a new CPU feature,
      the straightforward extension of largepages from the pmd level to the
      pud level, which allows 1GB (kernel) TLBs instead of 2MB TLBs.
      
      Turn it off by default, as this code has not been tested well enough yet.
      
      Use the CONFIG_DIRECT_GBPAGES=y .config option or the gbpages boot
      parameter to enable it. If it is enabled in the .config, the
      nogbpages boot option disables it. (A sketch of this boot-option
      wiring follows the entry.)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
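
      A minimal sketch (not the literal patch) of how such a pair of boot
      options is usually wired up with early_param(); the flag name
      direct_gbpages and the handler names are illustrative assumptions
      based on the changelog above.

        #include <linux/init.h>

        /* Sketch only: names are assumptions, defaults follow the changelog. */
        #ifdef CONFIG_DIRECT_GBPAGES
        int direct_gbpages = 1;         /* on when enabled in the .config     */
        #else
        int direct_gbpages;             /* off by default, as described above */
        #endif

        static int __init parse_gbpages(char *arg)
        {
                direct_gbpages = 1;     /* "gbpages" on the boot line enables it */
                return 0;
        }
        early_param("gbpages", parse_gbpages);

        static int __init parse_nogbpages(char *arg)
        {
                direct_gbpages = 0;     /* "nogbpages" overrides the .config default */
                return 0;
        }
        early_param("nogbpages", parse_nogbpages);
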
    • x86: increase the kernel text limit to 512 MB · 85eb69a1
      Committed by Ingo Molnar
      People sometimes do crazy stuff like building really large static
      arrays into their kernels or building allyesconfig kernels. Give
      more space to the kernel and push modules up a bit: the kernel gets
      512 MB and modules get 1.5 GB. (The address-space arithmetic is
      sketched after this entry.)
      
      Should be enough for a few years ;-)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
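
      For illustration only: on x86_64 the kernel image and modules both
      live in the top 2 GB of the address space (the kernel's -2 GB code
      model), so growing the text area to 512 MB leaves roughly 1.5 GB for
      modules. The macro names below are assumptions made for the sketch,
      not the exact constants the patch touches.

        /* Top-of-address-space budget implied by the changelog (sketch). */
        #define TOP_AREA_SIZE      (2UL << 30)    /* 2 GB reachable by the code model */
        #define KERNEL_TEXT_SIZE   (512UL << 20)  /* kernel image: 512 MB             */
        #define MODULES_AREA_SIZE  (TOP_AREA_SIZE - KERNEL_TEXT_SIZE)  /* ~1.5 GB      */
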
  2. 04 March 2008, 1 commit
  3. 03 March 2008, 1 commit
  4. 01 March 2008, 1 commit
    • x86: fix pmd_bad and pud_bad to support huge pages · cded932b
      Committed by Hans Rosenfeld
      I recently stumbled upon a problem in the support for huge pages. If a
      program using huge pages does not explicitly unmap them, they remain
      mapped (and are therefore lost) after the program exits.
      
      I observed that the free huge page count in /proc/meminfo decreased when
      running my program, and it did not increase after the program exited.
      After running the program a few times, no more huge pages could be
      allocated.
      
      The reason for this seems to be that the x86 pmd_bad and pud_bad
      consider pmd/pud entries that have the PSE bit set to be invalid. I
      think there is nothing wrong with this bit being set; it just
      indicates that the lowest level of translation has been reached. This
      bit has to be (and is) checked after the basic validity of the entry
      has been checked, as in this fragment from follow_page() in
      mm/memory.c:
      
        if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
                goto no_page_table;
      
        if (pmd_huge(*pmd)) {
                BUG_ON(flags & FOLL_GET);
                page = follow_huge_pmd(mm, address, pmd, flags & FOLL_WRITE);
                goto out;
        }
      
      Note that this code currently does not work as intended if the pmd
      refers to a huge page: the pmd_huge() check can never be reached,
      because pmd_bad() rejects the entry first.
      
      Extending pmd_bad() (and, for future 1GB page support, pud_bad()) to
      allow the PSE bit to be set fixes this. For similar reasons, allowing
      the NX bit to be set is necessary, too: I have seen huge pages with
      the NX bit set in their pmd entry, which would cause the same problem.
      (A sketch of such a relaxed check follows this entry.)
      Signed-Off-By: Hans Rosenfeld <hans.rosenfeld@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
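
      A minimal sketch of the kind of relaxed check the changelog
      describes: an entry is treated as bad only if bits other than the
      normal page-table bits, PSE and NX are unexpected. The mask names
      follow x86 headers of that era and should be read as illustrative
      rather than as the literal patch.

        /* Sketch: accept PSE (huge page) and NX in an otherwise valid entry. */
        static inline int pmd_bad(pmd_t pmd)
        {
                return (pmd_val(pmd) &
                        ~(PTE_MASK | _PAGE_USER | _PAGE_PSE | _PAGE_NX)) != _KERNPG_TABLE;
        }

        static inline int pud_bad(pud_t pud)
        {
                return (pud_val(pud) &
                        ~(PTE_MASK | _PAGE_USER | _PAGE_PSE | _PAGE_NX)) != _KERNPG_TABLE;
        }

      With PSE and NX masked out before the comparison, follow_page() gets
      past the pmd_bad() test and reaches the pmd_huge() branch shown above.
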
  5. 19 February 2008, 2 commits
  6. 04 February 2008, 2 commits
  7. 30 January 2008, 20 commits
  8. 20 October 2007, 1 commit
  9. 17 October 2007, 1 commit
  10. 11 October 2007, 1 commit
  11. 22 September 2007, 1 commit
  12. 23 July 2007, 1 commit
    • x86: Fix alternatives and kprobes to remap write-protected kernel text · 19d36ccd
      Committed by Andi Kleen
      Re-enable kprobes and alternative patching when the kernel text is
      write-protected by DEBUG_RODATA.
      
      Add a general utility function to change write-protected text. The
      new function remaps the code using vmap to write it and takes care of
      CPU synchronization. It also does CLFLUSH to make icache recovery
      faster. (A condensed sketch of the approach follows this entry.)
      
      There are some limitations on when the function can be used, see the
      comment.
      
      This is a newer version that also changes the paravirt_ops code.
      text_poke also supports multi-byte patching now.
      
      Contains bug fixes from Zach Amsden and suggestions from Mathieu
      Desnoyers.
      
      Cc: Jan Beulich <jbeulich@novell.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Zach Amsden <zach@vmware.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
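
      A condensed sketch of the vmap-based approach described above: map
      the target page(s) at a temporary writable virtual address, copy the
      new opcode bytes, unmap, and resynchronize the CPU. Locking, the fast
      path for still-writable text, and the CLFLUSH of the patched range
      are omitted here; the function name mirrors the changelog's
      text_poke, but the body is illustrative.

        #include <linux/mm.h>
        #include <linux/vmalloc.h>
        #include <linux/string.h>
        #include <asm/processor.h>

        /* Sketch: patch write-protected kernel text via a temporary mapping. */
        void text_poke(void *addr, unsigned char *opcode, int len)
        {
                struct page *pages[2];
                char *vaddr;

                /* The patch site may straddle a page boundary: map two pages. */
                pages[0] = virt_to_page(addr);
                pages[1] = virt_to_page(addr + PAGE_SIZE);

                vaddr = vmap(pages, 2, VM_MAP, PAGE_KERNEL);
                if (!vaddr)
                        return;

                /* Write through the temporary writable alias of the text. */
                memcpy(vaddr + ((unsigned long)addr & ~PAGE_MASK), opcode, len);
                vunmap(vaddr);

                sync_core();    /* serialize before the modified code can run */
        }
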
  13. 22 July 2007, 1 commit
  14. 18 July 2007, 1 commit
  15. 17 July 2007, 1 commit
  16. 17 June 2007, 1 commit
  17. 09 May 2007, 2 commits