1. 30 January 2008 (8 commits)
  2. 18 October 2007 (1 commit)
  3. 11 October 2007 (2 commits)
  4. 12 August 2007 (1 commit)
  5. 22 July 2007 (1 commit)
  6. 18 July 2007 (1 commit)
  7. 21 June 2007 (1 commit)
    • x86: change_page_attr bandaids · 018d2ad0
      Committed by Andi Kleen
      - Disable CLFLUSH again; it is still broken. Always do WBINVD.
      - Always flush in the i386 case, not only when there are deferred pages.
      
      These are both brute-force, inefficient fixes, to be improved
      next release cycle; a sketch of the brute-force flush follows this entry.
      
      The changes to i386 are a little more extensive than strictly
      needed (some dead code is added), but the code is now more similar to the
      x86-64 version, and the dead code will be used soon.
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
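
      A minimal sketch of the brute-force flush described above, assuming a
      WBINVD on every CPU replaces the broken CLFLUSH path; the function name
      is a stand-in, not the commit's exact code:

      #include <asm/tlbflush.h>	/* __flush_tlb_all */

      /* Illustrative only: write back and invalidate the entire cache
       * hierarchy on this CPU instead of CLFLUSHing individual lines, then
       * flush the TLB.  Meant to run on every CPU, e.g. via on_each_cpu(). */
      static void flush_kernel_map_sketch(void *arg)
      {
      	asm volatile("wbinvd" ::: "memory");
      	__flush_tlb_all();
      }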
  8. 03 May 2007 (2 commits)
    • [PATCH] i386: PARAVIRT: Allow paravirt backend to choose kernel PMD sharing · 5311ab62
      Committed by Jeremy Fitzhardinge
      Normally when running in PAE mode, the 4th PMD maps the kernel address space,
      which can be shared among all processes (since they all need the same kernel
      mappings).
      
      Xen, however, does not allow guests to have the kernel pmd shared between page
      tables, so parameterize pgtable.c to allow both modes of operation.
      
      There are several side-effects of this.  One is that vmalloc will update the
      kernel address space mappings, and those updates need to be propagated into
      all processes if the kernel mappings are not intrinsically shared.  In the
      non-PAE case, this is done by maintaining a pgd_list of all processes; this
      list is used when all process pagetables must be updated.  pgd_list is
      threaded via otherwise unused entries in the page structure for the pgd, which
      means that the pgd must be page-sized for this to work.
      
      Normally the PAE pgd is only four 64-bit entries (32 bytes) large, but Xen
      requires the PAE pgd to be page aligned anyway, so this patch forces the pgd
      to be page aligned and page sized when the kernel pmd is unshared, to
      accommodate both requirements.
      
      Also, since there may be several distinct kernel pmds (if the user/kernel
      split is below 3G), there's no point in allocating them from a slab cache;
      they're just allocated with get_free_page and initialized appropriately.  (Of
      course they could be cached if there is just a single kernel pmd - which is
      the default with a 3G user/kernel split - but it doesn't seem worthwhile to
      add yet another case into this code.)  A sketch of this allocation choice
      follows this entry.
      
      [ Many thanks to wli for review comments. ]
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Zachary Amsden <zach@vmware.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
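
      A hedged sketch of the allocation choice described above, assuming the
      SHARED_KERNEL_PMD flag this patch introduces; the function and cache
      names are illustrative, not the kernel's exact code:

      #include <linux/slab.h>	/* kmem_cache_alloc */
      #include <linux/gfp.h>	/* __get_free_page */

      /* Sketch: with a shared kernel pmd the pgd can stay small and come from
       * a slab cache; when unshared it must be page aligned and page sized so
       * it can be threaded onto pgd_list through its struct page. */
      static pgd_t *pgd_alloc_sketch(struct kmem_cache *pgd_cache_sketch)
      {
      	if (SHARED_KERNEL_PMD)
      		return kmem_cache_alloc(pgd_cache_sketch, GFP_KERNEL);

      	/* unshared kernel pmd: page aligned + page sized pgd */
      	return (pgd_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
      }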
    • [PATCH] x86: Improve handling of kernel mappings in change_page_attr · d01ad8dd
      Committed by Jan Beulich
      Fix various broken corner cases in the i386 and x86-64 change_page_attr()
      implementations; a usage sketch follows this entry.
      
      AK: split off from the "tighten kernel image access rights" patch
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
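
      A minimal usage sketch, assuming the i386 API of this era
      (change_page_attr() followed by a mandatory global_flush_tlb());
      make_page_ro_sketch is an illustrative helper, not code from this commit:

      #include <linux/mm.h>		/* virt_to_page */
      #include <asm/cacheflush.h>	/* change_page_attr, global_flush_tlb */
      #include <asm/pgtable.h>	/* PAGE_KERNEL_RO */

      /* Illustrative helper: remap one kernel page read-only, then flush the
       * stale TLB and large-page state as the API requires. */
      static void make_page_ro_sketch(void *addr)
      {
      	change_page_attr(virt_to_page(addr), 1, PAGE_KERNEL_RO);
      	global_flush_tlb();
      }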
  9. 13 February 2007 (1 commit)
    • [PATCH] MM: page allocation hooks for VMI backend · c119ecce
      Committed by Zachary Amsden
      The VMI backend uses explicit page type notification to track shadow page
      tables.  The allocation of page table roots is especially tricky.  We need to
      clone the root for non-PAE mode while it is protected under the pgd lock to
      correctly copy the shadow.
      
      We don't need to allocate pgds in PAE mode (PDPs, in Intel terminology), as
      they only have 4 entries and are cached entirely by the processor, which
      makes shadowing them rather simple.
      
      For base page table level allocation, pmd_populate provides the exact hook
      point we need.  Also, we need to allocate pages when splitting a large page,
      and we must release pages before returning the page to any free pool.
      
      Although these hooks carry these slightly odd semantics to satisfy VMI, Xen
      also uses them to determine the exact moment when page tables are created
      or released; a sketch of the no-op hooks follows this entry.
      
      AK: All nops for other architectures
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
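
      A sketch of the hook points described above in their no-op form (the
      "all nops for other architectures" case); the names follow the era's
      paravirt style but are assumptions here, not quotes from the patch:

      /* Sketch: no-op page-table notification hooks.  A hypervisor backend
       * (VMI, Xen) replaces these to learn the exact moment a page becomes,
       * or stops being, a page table. */
      #define paravirt_alloc_pt(pfn)		do { } while (0)
      #define paravirt_alloc_pd(pfn)		do { } while (0)
      #define paravirt_release_pt(pfn)	do { } while (0)
      #define paravirt_release_pd(pfn)	do { } while (0)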
  10. 10 February 2007 (1 commit)
  11. 07 December 2006 (1 commit)
  12. 01 July 2006 (1 commit)
  13. 28 June 2006 (1 commit)
  14. 23 June 2006 (1 commit)
  15. 22 March 2006 (1 commit)
  16. 12 January 2006 (1 commit)
  17. 10 January 2006 (1 commit)
  18. 07 January 2006 (1 commit)
    • [PATCH] x86: change_page_attr() fix · f8af095d
      Committed by Dave Jones
      The 'make rodata read-only' patch in -mm exposes a latent bug in the 32-bit
      change_page_attr() function, which causes certain CPUs (basically those with
      NX) to reboot instantly after pages are marked read-only.
      
      The same bug got fixed a while back on x86-64, but never got propagated to
      i386.
      
      Stuart Hayes from Dell also picked up on this last June, but it never got
      fixed, as the only thing apparently affected by it was the nvidia driver.
      
      Blatantly stealing the description from his post (a sketch of the fix
      follows this entry):
      
      "It doesn't appear to be fixed (in the i386 arch).  The
       change_page_attr()/split_large_page() code will still still set all the
       4K PTEs to PAGE_KERNEL (setting the _PAGE_NX bit) when a large page
       needs to be split.
      
       This wouldn't be a problem for the bulk of the kernel memory, but there
       are pages in the lower 4MB of memory that are free and are part of large
       executable pages that also contain kernel code.  If change_page_attr()
       is called on these, it will set the _PAGE_NX bit on the whole 2MB region
       that was covered by the large page, causing a large chunk of kernel code
       to be non-executable."
      Signed-off-by: Arjan van de Ven <arjan@infradead.org>
      Signed-off-by: Dave Jones <davej@redhat.com>
      Cc: <Stuart_Hayes@Dell.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
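
      A hedged sketch of the fix the quoted text implies: the new 4K PTEs
      inherit the large page's original protections instead of being forced to
      PAGE_KERNEL; the function and parameter names are illustrative:

      #include <asm/pgtable.h>	/* set_pte, pfn_pte, PTRS_PER_PTE */

      /* Sketch: fill the new 4K page table with the large page's original
       * protections (ref_prot), so _PAGE_NX is not forced onto regions that
       * still contain executable kernel code. */
      static void split_fill_sketch(pte_t *pbase, unsigned long pfn,
      			      pgprot_t ref_prot)
      {
      	int i;

      	for (i = 0; i < PTRS_PER_PTE; i++, pfn++)
      		set_pte(&pbase[i], pfn_pte(pfn, ref_prot));
      }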
  19. 05 September 2005 (2 commits)
    • [PATCH] i386: use set_pte macros in a couple places where they were missing · c9b02a24
      Committed by Zachary Amsden
      Also, setting PDPEs in PAE mode does not require atomic operations, since the
      PDPEs are cached by the processor, and only reloaded on an explicit or
      implicit reload of CR3.
      
      Since the four PDPEs must always be present in an active root, and the kernel
      PDPE is never updated, we are safe even from SMIs and interrupts / NMIs using
      task gates (which reload CR3).  Actually, much of this is moot, since the user
      PDPEs are never updated either, and the only usage of task gates is by the
      doublefault handler.  It appears the only place PGDs get updated in PAE mode
      is in init_low_mappings() / zap_low_mapping() for initial page table creation
      and recovery from ACPI sleep state, and these sites are safe by inspection.
      
      Getting rid of the cmpxchg8b saves code space and 720 cycles in pgd_alloc on
      P4; a sketch of the plain store follows this entry.
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
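
      A hedged illustration of the argument above, with an illustrative setter
      name: because the processor caches the PDPEs and rereads them only on a
      CR3 reload, a plain store suffices and no lock-prefixed cmpxchg8b is
      needed:

      /* Sketch: set a PAE PDPE with a plain store.  The CPU caches PDPEs at
       * CR3 load time, so this takes effect only at the next explicit or
       * implicit CR3 reload - no atomic cmpxchg8b required. */
      static inline void set_pdpe_sketch(pgd_t *pgdp, pgd_t pgdval)
      {
      	*pgdp = pgdval;
      }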
    • [PATCH] i386: inline asm cleanup · 4bb0d3ec
      Committed by Zachary Amsden
      i386 Inline asm cleanup.  Use cr/dr accessor functions.
      
      Also, a potential bugfix: some CR accessors really should be volatile, so
      that reads from CR0 (numeric state may change in an exception handler),
      writes to CR4 (flipping CR4.TSD), and reads from CR2 (page fault) prevent
      instruction re-ordering.  I did not add a memory clobber to the CR3 / CR4 /
      CR0 updates, as it was not there to begin with, and in no case should kernel
      memory be clobbered, except when doing a TLB flush, which already has a
      memory clobber.
      
      I noticed that page invalidation does not have a memory clobber.  I can't
      find a bug as a result, but there is definitely the potential for one here
      (a clobbered version follows this entry):
      
      #define __flush_tlb_single(addr) \
      	__asm__ __volatile__("invlpg %0": :"m" (*(char *) addr))
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
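
      A sketch only, not part of the commit: the same macro with the memory
      clobber the message argues for, so the compiler cannot reorder memory
      accesses around the invlpg:

      #define __flush_tlb_single(addr) \
      	__asm__ __volatile__("invlpg %0" : : "m" (*(char *)(addr)) : "memory")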
  20. 17 April 2005 (1 commit)
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!