1. 02 Feb, 2008 1 commit
  2. 01 Feb, 2008 1 commit
  3. 30 Jan, 2008 27 commits
  4. 11 Oct, 2007 2 commits
  5. 22 Jul, 2007 1 commit
      i386: fix iounmap's use of vm_struct's size field · 9585116b
      Authored by Jeremy Fitzhardinge
      get_vm_area always returns an area with an adjacent guard page.  That guard
      page is included in vm_struct.size.  iounmap uses vm_struct.size to
      determine how much address space needs to have change_page_attr applied to
      it, which will BUG if applied to the guard page.
      
      This patch adds a helper function - get_vm_area_size() in linux/vmalloc.h -
      to return the actual size of a vm area, and uses it to make iounmap do the
      right thing.  There are probably other places which should be using
      get_vm_area_size().
      
      Thanks to Dave Young <hidave.darkstar@gmail.com> for debugging the
      problem.
      
      [ Andi, it wasn't clear to me whether x86_64 needs the same fix. ]
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Dave Young <hidave.darkstar@gmail.com>
      Cc: Chuck Ebbert <cebbert@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9585116b
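
      The helper this commit describes can be sketched as below — a minimal, self-contained stand-in (not the real kernel headers) showing why subtracting the one-page guard area from vm_struct.size keeps change_page_attr off the guard page. The struct layout and PAGE_SIZE value are simplified assumptions for illustration.

      ```c
      #include <assert.h>
      #include <stddef.h>

      #define PAGE_SIZE 4096UL

      /* Simplified stand-in for the kernel's struct vm_struct; the real
       * structure has more fields. size includes the trailing guard page. */
      struct vm_struct {
          void *addr;
          unsigned long size;
      };

      /* Sketch of the get_vm_area_size() helper the patch adds to
       * linux/vmalloc.h: the usable size excludes the guard page. */
      static unsigned long get_vm_area_size(const struct vm_struct *area)
      {
          return area->size - PAGE_SIZE;
      }

      int main(void)
      {
          /* A 3-page ioremap: get_vm_area returns 3 pages + 1 guard page. */
          struct vm_struct area = { .addr = NULL, .size = 4 * PAGE_SIZE };

          /* iounmap should touch only the 3 real pages, not the guard page. */
          assert(get_vm_area_size(&area) == 3 * PAGE_SIZE);
          return 0;
      }
      ```

      Using vm_struct.size directly in iounmap would pass the extra guard page into change_page_attr, which is exactly the BUG the commit fixes.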
  6. 01 Oct, 2006 1 commit
  7. 16 Dec, 2005 1 commit
  8. 13 Dec, 2005 1 commit
  9. 30 Oct, 2005 1 commit
      [PATCH] mm: init_mm without ptlock · 872fec16
      Authored by Hugh Dickins
      First step in pushing down the page_table_lock.  init_mm.page_table_lock has
      been used throughout the architectures (usually for ioremap): not to serialize
      kernel address space allocation (that's usually vmlist_lock), but because
      pud_alloc,pmd_alloc,pte_alloc_kernel expect caller holds it.
      
      Reverse that: don't lock or unlock init_mm.page_table_lock in any of the
      architectures; instead rely on pud_alloc,pmd_alloc,pte_alloc_kernel to take
      and drop it when allocating a new one, to check lest a racing task already
      did.  Similarly no page_table_lock in vmalloc's map_vm_area.
      
      Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle
      user mms, which are converted only by a later patch, for now they have to lock
      differently according to whether or not it's init_mm.
      
      If sources get muddled, there's a danger that an arch source taking
      init_mm.page_table_lock will be mixed with common source also taking it (or
      neither take it).  So break the rules and make another change, which should
      break the build for such a mismatch: remove the redundant mm arg from
      pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13).
      
      Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64
      used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to
      pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64
      map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free
      took page_table_lock for no good reason.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      872fec16
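
      The "allocate, then take the lock and check lest a racing task already did" pattern this commit pushes into pud_alloc/pmd_alloc/pte_alloc_kernel can be sketched as a user-space analogy. This is not kernel code: the mutex, the malloc'd "pte page", and pte_alloc_kernel_sketch are all illustrative stand-ins.

      ```c
      #include <assert.h>
      #include <pthread.h>
      #include <stdlib.h>

      /* Stand-ins: the lock the allocators now take internally, and a
       * single table slot playing the role of a pmd entry. */
      static pthread_mutex_t page_table_lock = PTHREAD_MUTEX_INITIALIZER;
      static void *pte_slot;

      /* Sketch of the pattern: allocate outside the lock, then lock and
       * re-check in case a racing task installed a table first. The
       * caller neither takes nor holds the lock. */
      static void *pte_alloc_kernel_sketch(void)
      {
          void *new_pte = malloc(64);      /* pretend pte page */
          pthread_mutex_lock(&page_table_lock);
          if (pte_slot)
              free(new_pte);               /* a racing task beat us; discard */
          else
              pte_slot = new_pte;          /* install our new table */
          pthread_mutex_unlock(&page_table_lock);
          return pte_slot;
      }

      int main(void)
      {
          void *first = pte_alloc_kernel_sketch();
          /* Second call exercises the re-check path: same table, no leak. */
          void *second = pte_alloc_kernel_sketch();
          assert(first == second);
          free(pte_slot);
          return 0;
      }
      ```

      Moving the lock inside the allocators is what lets every architecture's ioremap path stop touching init_mm.page_table_lock, and dropping the redundant mm argument from pte_alloc_kernel makes any missed conversion a build failure rather than a silent locking mismatch.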
  10. 08 Jul, 2005 1 commit
  11. 26 Jun, 2005 1 commit
  12. 24 Jun, 2005 1 commit
  13. 21 May, 2005 1 commit