1. 25 January 2008, 5 commits
    • kset: move /sys/slab to /sys/kernel/slab · 081248de
      Committed by Greg Kroah-Hartman
      /sys/kernel is where these things should go.
      Also updated the documentation and the tool that used this directory
      (a user-space sketch follows this entry).
      
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
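      A hypothetical user-space sketch of the adjustment a tool consuming this
      directory needs after the move: try the new /sys/kernel/slab path first
      and fall back to the old /sys/slab on older kernels. The paths come from
      the commit; the fallback logic and the helper name are assumptions, not
      taken from the updated tool.

      	#include <dirent.h>
      	#include <stdio.h>

      	/* Open the slab sysfs directory, preferring the post-commit location. */
      	static DIR *open_slab_dir(void)
      	{
      		DIR *dir = opendir("/sys/kernel/slab");	/* new location */

      		if (!dir)
      			dir = opendir("/sys/slab");	/* pre-commit fallback (assumed) */
      		return dir;
      	}

      	int main(void)
      	{
      		DIR *dir = open_slab_dir();
      		struct dirent *de;

      		if (!dir) {
      			perror("slab sysfs directory");
      			return 1;
      		}
      		while ((de = readdir(dir)) != NULL)
      			printf("%s\n", de->d_name);
      		closedir(dir);
      		return 0;
      	}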
    • kset: convert slub to use kset_create · 27c3a314
      Committed by Greg Kroah-Hartman
      Dynamically create the kset instead of declaring it statically
      (see the sketch after this entry).
      
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
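      A minimal kernel-side sketch of the dynamic creation described above,
      assuming the kset_create_and_add() helper from the same kset rework;
      the init function name, the use of kernel_kobj as parent (matching the
      /sys/kernel/slab move above) and the omitted uevent ops are illustrative
      choices, not quotes from the patch.

      	#include <linux/errno.h>
      	#include <linux/init.h>
      	#include <linux/kobject.h>

      	/* The kset now lives behind a pointer and is created at init time
      	 * rather than being declared statically. */
      	static struct kset *slab_kset;

      	static int __init slab_sysfs_init_sketch(void)
      	{
      		/* kernel_kobj as parent places the kset under /sys/kernel/. */
      		slab_kset = kset_create_and_add("slab", NULL, kernel_kobj);
      		if (!slab_kset)
      			return -ENOMEM;
      		return 0;
      	}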
    • kobject: remove struct kobj_type from struct kset · 3514faca
      Committed by Greg Kroah-Hartman
      We don't need a "default" ktype for a kset.  We should set this
      explicitly every time for each kset.  This change is needed so that we
      can make ksets dynamic, and it cleans up one of the odd, undocumented
      assumptions in the kset/kobject/ktype model (setting the ktype
      explicitly is sketched after this entry).
      
      This patch is based on a lot of help from Kay Sievers.
      
      A nasty bug in the block code was found by Dave Young
      <hidave.darkstar@gmail.com>.
      
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Cc: Dave Young <hidave.darkstar@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
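      With the default ktype gone from struct kset, every kobject added to a
      kset names its ktype explicitly. A minimal sketch assuming the
      kobject_init_and_add() API; the foo_* names and the attribute-less
      ktype are illustrative.

      	#include <linux/kobject.h>
      	#include <linux/slab.h>

      	struct foo_obj {
      		struct kobject kobj;
      		int value;
      	};

      	static void foo_release(struct kobject *kobj)
      	{
      		kfree(container_of(kobj, struct foo_obj, kobj));
      	}

      	/* The ktype is now given per kobject instead of being inherited
      	 * from the kset it belongs to. */
      	static struct kobj_type foo_ktype = {
      		.release   = foo_release,
      		.sysfs_ops = &kobj_sysfs_ops,
      	};

      	static struct kset *foo_kset;	/* assume created with kset_create_and_add() */

      	static struct foo_obj *foo_create(const char *name)
      	{
      		struct foo_obj *foo = kzalloc(sizeof(*foo), GFP_KERNEL);

      		if (!foo)
      			return NULL;
      		foo->kobj.kset = foo_kset;		/* kset membership */
      		if (kobject_init_and_add(&foo->kobj, &foo_ktype, NULL, "%s", name)) {
      			kobject_put(&foo->kobj);	/* foo_release() frees foo */
      			return NULL;
      		}
      		return foo;
      	}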
    • slab: partially revert list3 changes · 9c09a95c
      Committed by Mel Gorman
      Partially revert the changes made by 04231b30 to the kmem_list3
      management. On a machine with a memoryless node, the following BUG_ON
      was triggering (the memoryless-node condition is sketched after this
      entry):
      
      	static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
      	{
      		struct list_head *entry;
      		struct slab *slabp;
      		struct kmem_list3 *l3;
      		void *obj;
      		int x;
      
      		l3 = cachep->nodelists[nodeid];
      		BUG_ON(!l3);
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
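      The committed fix is the partial revert itself; the snippet below only
      illustrates the condition the quoted BUG_ON() trips on: a memoryless
      node is online but has no normal memory, so cachep->nodelists[nodeid]
      was never populated. node_state() and N_NORMAL_MEMORY are existing
      kernel symbols; the helper itself is illustrative.

      	#include <linux/nodemask.h>

      	/* Illustrative check: a memoryless node carries no normal memory,
      	 * so no kmem_list3 is ever set up for it and the nodelists entry
      	 * stays NULL, which is what the BUG_ON() above catches. */
      	static inline int nodeid_has_normal_memory(int nodeid)
      	{
      		return node_state(nodeid, N_NORMAL_MEMORY);
      	}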
    • fix hugepages leak due to pagetable page sharing · c5c99429
      Committed by Larry Woodman
      The shared page table code for hugetlb memory on x86 and x86_64
      is causing a leak: when a process that uses hugepages through this code
      exits, the system leaks some of the hugepages.
      
      -------------------------------------------------------
      Part of /proc/meminfo just before database startup:
      HugePages_Total:  5500
      HugePages_Free:   5500
      HugePages_Rsvd:      0
      Hugepagesize:     2048 kB
      
      Just before shutdown:
      HugePages_Total:  5500
      HugePages_Free:   4475
      HugePages_Rsvd:      0
      Hugepagesize:     2048 kB
      
      After shutdown:
      HugePages_Total:  5500
      HugePages_Free:   4988
      HugePages_Rsvd:      0
      Hugepagesize:     2048 kB
      ----------------------------------------------------------
      
      The problem occurs during a fork, in copy_hugetlb_page_range().  It
      locates the dst_pte using huge_pte_alloc().  Since huge_pte_alloc() calls
      huge_pmd_share(), it will share the pmd page if it can, yet the main loop
      in copy_hugetlb_page_range() does a get_page() on every hugepage.  This
      violates the shared hugepmd pagetable protocol and creates additional
      references to the hugepages, causing a leak when the unmap of the VMA
      occurs.  We can skip the entire replication of the ptes when the hugepage
      pagetables are shared.  The attached patch skips copying the ptes and the
      get_page() calls if the hugetlbpage pagetable is shared (see the sketch
      after this entry).
      
      [akpm@linux-foundation.org: coding-style cleanups]
      Signed-off-by: Larry Woodman <lwoodman@redhat.com>
      Signed-off-by: Adam Litke <agl@us.ibm.com>
      Cc: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Ken Chen <kenchen@google.com>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
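      A sketch of the check described above inside the copy loop of
      copy_hugetlb_page_range(): when huge_pmd_share() already shared the pmd
      page, the source and destination ptes resolve to the same entry, so the
      copy and the get_page() are skipped. Surrounding declarations and the
      actual pte copy are elided and noted in comments; treat this as an
      illustration of the described fix rather than the patch itself.

      	/* addr, src_pte, dst_pte, vma, src and dst come from the
      	 * surrounding copy_hugetlb_page_range() (elided here). */
      	for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) {
      		src_pte = huge_pte_offset(src, addr);
      		if (!src_pte)
      			continue;
      		dst_pte = huge_pte_alloc(dst, addr);
      		if (!dst_pte)
      			goto nomem;

      		/* If huge_pmd_share() already shared the pmd page, src and
      		 * dst resolve to the same pte: there is nothing to copy, and
      		 * taking extra references here is what leaks the huge pages
      		 * on unmap. */
      		if (dst_pte == src_pte)
      			continue;

      		/* ... otherwise copy the pte and get_page() the page as before ... */
      	}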
  2. 24 January 2008, 1 commit
  3. 18 January 2008, 2 commits
  4. 15 January 2008, 3 commits
  5. 09 January 2008, 2 commits
  6. 03 January 2008, 1 commit
  7. 02 January 2008, 1 commit
  8. 22 December 2007, 1 commit
  9. 20 December 2007, 1 commit
  10. 18 December 2007, 7 commits
  11. 11 December 2007, 1 commit
  12. 10 December 2007, 1 commit
  13. 06 December 2007, 5 commits
  14. 05 December 2007, 3 commits
  15. 01 December 2007, 1 commit
  16. 30 November 2007, 2 commits
  17. 29 November 2007, 2 commits
  18. 20 November 2007, 1 commit
    • [S390] Optimize storage key handling for anonymous pages · ce7e9fae
      Committed by Christian Borntraeger
      page_mkclean used to call page_clear_dirty for every given page. This
      is different from all other architectures, where the dirty bit in the
      PTEs is only reset if page_mapping() returns a non-NULL pointer.
      We can move the page_test_dirty/page_clear_dirty sequence into the
      second if to avoid unnecessary iske/sske sequences, which are expensive.

      This change also helps kvm for s390, as the host must transfer the
      dirty bit into the guest status bits. By moving the page_clear_dirty
      operation into the second if, the vm will only call page_clear_dirty
      for pages where it walks the mapping anyway. There it calls
      ptep_clear_flush for writable ptes, so we can transfer the dirty bit
      to the guest (the resulting flow is sketched after this entry).
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
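      A sketch of the resulting flow in page_mkclean() with the
      page_test_dirty()/page_clear_dirty() pair moved inside the mapping
      check, so the expensive iske/sske storage-key operations are only
      issued for pages whose mapping is actually walked. This follows the
      description above and the mm/rmap.c helpers of that era rather than
      quoting the patch.

      	int page_mkclean(struct page *page)
      	{
      		int ret = 0;

      		BUG_ON(!PageLocked(page));

      		if (page_mapped(page)) {
      			struct address_space *mapping = page_mapping(page);

      			/* Only touch the storage key when there is a mapping
      			 * to walk; page_mkclean_file() calls ptep_clear_flush()
      			 * for writable ptes, so the dirty bit can be handed on
      			 * to the guest. */
      			if (mapping) {
      				ret = page_mkclean_file(mapping, page);
      				if (page_test_dirty(page)) {
      					page_clear_dirty(page);
      					ret = 1;
      				}
      			}
      		}

      		return ret;
      	}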