1. 22 Mar 2006 · 1 commit
  2. 17 Mar 2006 · 1 commit
  3. 09 Mar 2006 · 1 commit
    • [PATCH] percpu_counter_sum() · e2bab3d9
      Andrew Morton authored
      Implement percpu_counter_sum().  This is a more accurate but slower version of
      percpu_counter_read_positive().
      
      We need this for Alex's speedup-ext3_statfs patch and for the nr_file
      accounting fix.  Otherwise these things would be too inaccurate on large CPU
      counts.
      
      Cc: Ravikiran G Thirumalai <kiran@scalex86.org>
      Cc: Alex Tomas <alex@clusterfs.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
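      The change amounts to walking every CPU's local delta under the
      counter's lock instead of trusting the possibly stale central count.
      A minimal sketch in kernel-style C, reconstructed from the description
      above rather than quoted from the patch:

          /* Accurate but slow: percpu_counter_read_positive() just returns
           * fbc->count, which can lag by up to (batch size * number of
           * CPUs); summing the per-cpu deltas under the lock removes that
           * error at the cost of touching every CPU's cacheline. */
          s64 percpu_counter_sum(struct percpu_counter *fbc)
          {
                  s64 ret;
                  int cpu;

                  spin_lock(&fbc->lock);
                  ret = fbc->count;
                  for_each_possible_cpu(cpu) {
                          s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
                          ret += *pcount;
                  }
                  spin_unlock(&fbc->lock);
                  return ret < 0 ? 0 : ret;
          }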
  4. 15 Feb 2006 · 1 commit
    • [PATCH] compound page: use page[1].lru · 41d78ba5
      Hugh Dickins authored
      If a compound page has its own put_page_testzero destructor (the only current
      example is free_huge_page), that is noted in page[1].mapping of the compound
      page.  But that's rather a poor place to keep it: functions which call
      set_page_dirty_lock after get_user_pages (e.g.  Infiniband's
      __ib_umem_release) ought to be checking first, otherwise set_page_dirty is
      liable to crash on what's not the address of a struct address_space.
      
      And now I'm about to make that worse: it turns out that every compound page
      needs a destructor, so we can no longer rely on hugetlb pages going their own
      special way, to avoid further problems of page->mapping reuse.  For example,
      not many people know that: on 50% of i386 -Os builds, the first tail page of a
      compound page purports to be PageAnon (when its destructor has an odd
      address), which surprises page_add_file_rmap.
      
      Keep the compound page destructor in page[1].lru.next instead.  And to free up
      the common pairing of mapping and index, also move compound page order from
      index to lru.prev.  Slab reuses page->lru too: but if we ever need slab to use
      compound pages, it can easily stack its use above this.
      
      (akpm: decoded version of the above: the tail pages of a compound page now
      have ->mapping==NULL, so there's no need for the set_page_dirty[_lock]()
      caller to check that they're not compound pages before doing the dirty).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
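      A sketch of the layout this describes (helper names are illustrative,
      not necessarily those of the final patch): the destructor pointer is
      stashed in page[1].lru.next and the order in page[1].lru.prev, leaving
      page->mapping of tail pages NULL.

          typedef void compound_page_dtor(struct page *);

          static inline void set_compound_page_dtor(struct page *page,
                                                    compound_page_dtor *dtor)
          {
                  page[1].lru.next = (void *)dtor;        /* first tail page */
          }

          static inline compound_page_dtor *get_compound_page_dtor(struct page *page)
          {
                  return (compound_page_dtor *)page[1].lru.next;
          }

          static inline void set_compound_order(struct page *page, unsigned int order)
          {
                  page[1].lru.prev = (void *)(unsigned long)order;
          }

          static inline unsigned int compound_order(struct page *page)
          {
                  return (unsigned int)(unsigned long)page[1].lru.prev;
          }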
  5. 08 Feb 2006 · 1 commit
  6. 19 Jan 2006 · 1 commit
  7. 11 Jan 2006 · 1 commit
  8. 07 Jan 2006 · 1 commit
  9. 23 Nov 2005 · 1 commit
    • [PATCH] unpaged: unifdefed PageCompound · 664beed0
      Hugh Dickins authored
      It looks like snd_xxx is not the only nopage to be using PageReserved as a way
      of holding a high-order page together: which no longer works, but is masked by
      our failure to free from VM_RESERVED areas.  We cannot fix that bug without
      first substituting another way to hold the high-order page together, while
      farming out the 0-order pages from within it.
      
      That's just what PageCompound is designed for, but it's been kept under
      CONFIG_HUGETLB_PAGE.  Remove the #ifdefs: which saves some space (out-of-line
      put_page), doesn't slow down what most needs to be fast (already using
      hugetlb), and unifies the way we handle high-order pages.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
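      To make the mechanism concrete: with PageCompound available everywhere,
      a put_page() on any 0-order piece of a high-order allocation can be
      redirected to the compound head, which is what holds the whole page
      together.  A simplified sketch of that era's logic (reusing the
      hypothetical get_compound_page_dtor() helper sketched earlier, and
      assuming tail pages keep the head pointer in page->private):

          static void put_compound_page(struct page *page)
          {
                  page = (struct page *)page_private(page);  /* find the head */
                  if (put_page_testzero(page)) {
                          compound_page_dtor *dtor = get_compound_page_dtor(page);
                          (*dtor)(page);
                  }
          }

          void put_page(struct page *page)
          {
                  if (unlikely(PageCompound(page)))
                          put_compound_page(page);
                  else if (put_page_testzero(page))
                          __page_cache_release(page);
          }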
  10. 07 Nov 2005 · 1 commit
  11. 02 Nov 2005 · 1 commit
  12. 31 Oct 2005 · 1 commit
  13. 30 Oct 2005 · 2 commits
    • [PATCH] mm: split page table lock · 4c21e2f2
      Hugh Dickins authored
      Christoph Lameter demonstrated very poor scalability on the SGI 512-way, with
      a many-threaded application which concurrently initializes different parts of
      a large anonymous area.
      
      This patch corrects that, by using a separate spinlock per page table page, to
      guard the page table entries in that page, instead of using the mm's single
      page_table_lock.  (But even then, page_table_lock is still used to guard page
      table allocation, and anon_vma allocation.)
      
      In this implementation, the spinlock is tucked inside the struct page of the
      page table page: with a BUILD_BUG_ON in case it overflows - which it would in
      the case of 32-bit PA-RISC with spinlock debugging enabled.
      
      Splitting the lock is not quite for free: another cacheline access.  Ideally,
      I suppose we would use split ptlock only for multi-threaded processes on
      multi-cpu machines; but deciding that dynamically would have its own costs.
      So for now enable it by config, at some number of cpus - since the Kconfig
      language doesn't support inequalities, let preprocessor compare that with
      NR_CPUS.  But I don't think it's worth being user-configurable: for good
      testing of both split and unsplit configs, split now at 4 cpus, and perhaps
      change that to 8 later.
      
      There is a benefit even for singly threaded processes: kswapd can be attacking
      one part of the mm while another part is busy faulting.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
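      A sketch of the resulting arrangement (names follow the description
      above; treat it as illustrative rather than the verbatim patch).
      Since Kconfig cannot express "NR_CPUS >= 4", the preprocessor makes
      the choice between per-page locks and the mm-wide lock:

          #if NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS
          /* The lock lives inside the struct page of the page-table page. */
          #define __pte_lockptr(page)     (&(page)->ptl)
          #define pte_lockptr(mm, pmd)    ({(void)(mm); __pte_lockptr(pmd_page(*(pmd)));})
          #else
          /* Too few CPUs to bother: fall back to mm->page_table_lock. */
          #define pte_lockptr(mm, pmd)    ({(void)(pmd); &(mm)->page_table_lock;})
          #endif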
    • [PATCH] core remove PageReserved · b5810039
      Nick Piggin authored
      Remove PageReserved() calls from core code by tightening VM_RESERVED
      handling in mm/ to cover PageReserved functionality.
      
      PageReserved special casing is removed from get_page and put_page.
      
      All setting and clearing of PageReserved is retained, and it is now flagged
      in the page_alloc checks to help ensure we don't introduce any refcount
      based freeing of Reserved pages.
      
      MAP_PRIVATE, PROT_WRITE of VM_RESERVED regions is tentatively being
      deprecated.  We never handled it completely correctly anyway, and it may
      be reintroduced in future if required (Hugh has a proof of concept).
      
      Once PageReserved() calls are removed from kernel/power/swsusp.c, and all
      arch/ and driver code, the Set and Clear calls, and the PG_reserved bit can
      be trivially removed.
      
      Last real user of PageReserved is swsusp, which uses PageReserved to
      determine whether a struct page points to valid memory or not.  This still
      needs to be addressed (a generic page_is_ram() should work).
      
      A last caveat: the ZERO_PAGE is now refcounted and managed with rmap (and
      thus mapcounted and counted towards shared rss).  These writes to the struct
      page could cause excessive cacheline bouncing on big systems.  There are a
      number of ways this could be addressed if it is an issue.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      
      Refcount bug fix for filemap_xip.c
      Signed-off-by: Carsten Otte <cotte@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
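      A simplified sketch of the fast path this describes (the
      check_not_reserved() helper is hypothetical, standing in for the page
      allocator's free-path sanity checks):

          /* After the patch: no PageReserved test on the refcounting path. */
          static inline void get_page(struct page *page)
          {
                  if (unlikely(PageCompound(page)))
                          page = (struct page *)page_private(page);
                  atomic_inc(&page->_count);
          }

          /* The allocator still flags refcount-based freeing of a reserved
           * page, so regressions are caught rather than silently hidden. */
          static inline void check_not_reserved(struct page *page)
          {
                  if (unlikely(PageReserved(page)))
                          bad_page(__FUNCTION__, page);
          }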
  14. 17 Apr 2005 · 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!