1. 08 May 2007: 40 commits
    • uml: more page fault path trimming · 16dd07bc
      Committed by Jeff Dike
      More trimming of the page fault path.
      
      Permissions are passed around in a single int rather than one bit per
      int.  The permission values are copied from libc so that they can be
      passed to mmap and mprotect without any further conversion.
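      
      As a rough illustration of the idea (a sketch, not the UML code itself),
      protections built directly from libc's PROT_* bits can be handed to the
      host calls with no conversion step:
      
      #include <sys/mman.h>
      
      /* Hypothetical helper: one int of libc protection bits, usable as-is. */
      static int build_prot(int r, int w, int x)
      {
      	return (r ? PROT_READ : 0) | (w ? PROT_WRITE : 0) |
      	       (x ? PROT_EXEC : 0);
      }
      
      /* Later, the same int goes straight to the host:
       *	mmap(addr, len, prot, MAP_SHARED | MAP_FIXED, fd, offset);
       *	mprotect(addr, len, prot);
       */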
      
      The register sets used by do_syscall_stub and copy_context_skas0 are
      initialized once, at boot time, rather than once per call.
      
      wait_stub_done checks whether it is getting the signals it expects by
      comparing the wait status to a mask containing bits for the signals of
      interest rather than comparing individually to the signal numbers.  It
      also has one check for a wait failure instead of two.  The caller is
      expected to do the initial continue of the stub.  This gets rid of an
      argument and some logic.  The fname argument is gone, as that can be
      had from a stack trace.
      
      user_signal() is collapsed into userspace() as it is basically one or
      two lines of code afterwards.
      
      The physical memory remapping stuff is gone, as it is unused.
      
      flush_tlb_page is inlined.
      Signed-off-by: Jeff Dike <jdike@linux.intel.com>
      Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • uml: improve checking and diagnostics of ethernet MACs · e024715f
      Committed by Paolo 'Blaisorblade' Giarrusso
      Improve checking and diagnostics for broadcast and multicast Ethernet MAC
      addresses, and distinguish between those cases in the output; also make sure
      the device is assigned a MAC address valid only locally, to avoid collisions.
      Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
      Signed-off-by: Jeff Dike <jdike@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ARRAY_SIZE: check for type · c5e631cf
      Committed by Rusty Russell
      We can use a gcc extension to ensure that ARRAY_SIZE() is handed an array,
      not a pointer.  This is especially important when code is changed from a
      fixed array to a pointer.  I assume the Intel compiler doesn't support
      __builtin_types_compatible_p.
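      
      A sketch of the trick (close to, though not necessarily identical to, the
      final kernel macro):
      
      /* Non-zero iff the two expressions have compatible types. */
      #define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b))
      
      /*
       * Evaluates to 0, but forces a compile-time error if arr is a pointer:
       * an array is never type-compatible with a pointer to its first element.
       */
      #define __must_be_array(arr) \
      	BUILD_BUG_ON_ZERO(__same_type((arr), &(arr)[0]))
      
      #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))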
      
      [jdike@addtoit.com: uml: update UML definition of ARRAY_SIZE]
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Jeff Dike <jdike@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • swsusp: free more memory · 56f99bcb
      Committed by Rafael J. Wysocki
      Move the definition of PAGES_FOR_IO to kernel/power/power.h and introduce
      SPARE_PAGES representing the number of pages that should be freed by the
      swsusp's memory shrinker in addition to PAGES_FOR_IO so that device drivers
      can allocate some memory (up to 1 MB total) in their .suspend() routines
      without causing the suspend to fail.
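      
      Under the 1 MB budget stated above, the new constant is presumably defined
      along these lines (a sketch, not verified against the patch):
      
      /* kernel/power/power.h: slack for drivers' .suspend() allocations */
      #define SPARE_PAGES	((1024 * 1024) >> PAGE_SHIFT)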
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Pavel Machek <pavel@ucw.cz>
      Cc: Nigel Cunningham <nigel@nigel.suspend2.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • remove software_suspend() · ab3bfca7
      Committed by Johannes Berg
      Remove software_suspend() and all its users since
      pm_suspend(PM_SUSPEND_DISK) should be equivalent and there's no point in
      having two interfaces for the same thing.
      
      The patch also changes the valid_state function to return 0 (false) for
      PM_SUSPEND_DISK when SOFTWARE_SUSPEND is not configured instead of
      accepting it and having the whole thing fail later.
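      
      Callers are converted along these lines (a sketch; the wrapper function is
      hypothetical, only the pm_suspend() call is what the patch mandates):
      
      #include <linux/suspend.h>
      
      static int hibernate_now(void)
      {
      	/* Was: return software_suspend(); */
      	return pm_suspend(PM_SUSPEND_DISK);
      }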
      Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove unused page flags · 04293355
      Committed by Rafael J. Wysocki
      Remove the two page flags that were previously used by swsusp and are no
      longer needed.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • swsusp: do not use page flags · 74dfd666
      Committed by Rafael J. Wysocki
      Make swsusp use memory bitmaps instead of page flags for marking 'nosave' and
      free pages.  This allows us to 'recycle' two page flags that can then be used
      for other purposes.  Also, the memory needed to store the bitmaps is allocated
      only when necessary (i.e. before the suspend) and freed after the resume,
      which is more sensible.
      
      The patch is designed to minimize the amount of changes and there are some
      nice simplifications and optimizations possible on top of it.  I am going to
      implement them separately in the future.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • swsusp: use inline functions for changing page flags · 7be98234
      Committed by Rafael J. Wysocki
      Replace direct invocations of SetPageNosave(), SetPageNosaveFree() etc.  with
      calls to inline functions that can be changed in subsequent patches without
      modifying the code calling them.
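      
      The wrappers are presumably thin, along these lines (the function names here
      are my guesses from the description, not confirmed against the patch):
      
      /* Indirection so later patches can change the backing mechanism. */
      static inline void swsusp_set_page_free(struct page *page)
      {
      	SetPageNosaveFree(page);
      }
      
      static inline int swsusp_page_is_free(struct page *page)
      {
      	return PageNosaveFree(page);
      }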
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ALPHA: "prctl" macros · ed58a593
      Committed by Ivan Kokshaysky
      Files:
      
      include/asm-alpha/thread_info.h
      
      	Provide "prctl" macros for ALPHA.
      Signed-off-by: Jay Estabrook <jay.estabrook@hp.com>
      Signed-off-by: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • h8300 generic irq · c728d604
      Committed by Yoshinori Sato
      Convert h8300 to the generic irq handling code.
      Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • blackfin: serial driver · 194de561
      Committed by Bryan Wu
      This patch implements the driver necessary to use the Analog Devices
      Blackfin processor's serial port.
      Signed-off-by: Bryan Wu <bryan.wu@analog.com>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Russell King <rmk+lkml@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • blackfin architecture · 1394f032
      Committed by Bryan Wu
      This adds support for the Analog Devices Blackfin processor architecture.  It
      currently supports the BF533, BF532, BF531, BF537, BF536, BF534, and BF561
      (dual core) devices, with a variety of development platforms including those
      available from Analog Devices (BF533-EZKit, BF533-STAMP, BF537-STAMP,
      BF561-EZKIT) and Bluetechnix Tinyboards.
      
      The Blackfin architecture was jointly developed by Intel and Analog Devices
      Inc. (ADI) as the Micro Signal Architecture (MSA) core and was introduced in
      December of 2000.  Since then ADI has put this core into its Blackfin
      processor family of devices.  The Blackfin core has the advantages of a clean,
      orthogonal, RISC-like microprocessor instruction set.  It combines a dual-MAC
      (multiply/accumulate), state-of-the-art signal processing engine and
      single-instruction, multiple-data (SIMD) multimedia capabilities into a single
      instruction-set architecture.
      
      The Blackfin architecture, including the instruction set, is described by the
      ADSP-BF53x/BF56x Blackfin Processor Programming Reference
      http://blackfin.uclinux.org/gf/download/frsrelease/29/2549/Blackfin_PRM.pdf
      
      The Blackfin processor is already supported by major releases of gcc, and
      there are binary and source rpms/tarballs for many architectures at:
      http://blackfin.uclinux.org/gf/project/toolchain/frs
      There is complete documentation, including "getting started" guides,
      available at: http://docs.blackfin.uclinux.org/ which provides links to the
      sources and patches you will need in order to set up a cross-compiling
      environment for bfin-linux-uclibc.
      
      This patch, as well as the other patches (toolchain, distribution,
      uClibc) are actively supported by Analog Devices Inc, at:
      http://blackfin.uclinux.org/
      
      We have tested this on LTP, and our test plan (including pass/fails) can
      be found at:
      http://docs.blackfin.uclinux.org/doku.php?id=testing_the_linux_kernel
      
      [m.kozlowski@tuxland.pl: balance parenthesis in blackfin header files]
      Signed-off-by: Bryan Wu <bryan.wu@analog.com>
      Signed-off-by: Mariusz Kozlowski <m.kozlowski@tuxland.pl>
      Signed-off-by: Aubrey Li <aubrey.li@analog.com>
      Signed-off-by: Jie Zhang <jie.zhang@analog.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • page migration: Only migrate pages if allocation in the highest zone is possible · 906e0be1
      Committed by Christoph Lameter
      Address spaces contain an allocation flag that specifies restriction on the
      zone for pages placed in the mapping.  I.e.  some device may require pages
      to be allocated from a DMA zone.  Block devices may not be able to use
      pages from HIGHMEM.
      
      Memory policies and the common use of page migration work only on the
      highest zone.  If the address space does not allow allocation from the
      highest zone then the pages in the address space are not migratable, simply
      because we can only allocate memory for a specified node if we allow
      allocation from the highest zone on each node.
      Acked-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Slab allocators: remove useless __GFP_NO_GROW flag · cfce6604
      Committed by Christoph Lameter
      There is no user remaining and I have never seen any use of that flag.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slab allocators: Remove SLAB_CTOR_ATOMIC · 4f104934
      Committed by Christoph Lameter
      SLAB_CTOR_ATOMIC is never used, which is no surprise since I cannot imagine
      that one would want to do something serious in a constructor or destructor,
      particularly given that the slab allocators run with interrupts disabled.
      Actions in constructors and destructors are by their nature very limited
      and usually do not go beyond initializing variables and list operations.
      
      (The i386 pgd ctor and dtors do take a spinlock in constructor and
      destructor.....  I think that is the furthest we go at this point.)
      
      There is no flag passed to the destructor so removing SLAB_CTOR_ATOMIC also
      establishes a certain symmetry.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slab allocators: Remove SLAB_DEBUG_INITIAL flag · 50953fe9
      Committed by Christoph Lameter
      I have never seen a use of SLAB_DEBUG_INITIAL.  It is only supported by
      SLAB.
      
      I think its purpose was to have a callback, performed before each object is
      freed, that verifies the object is back in its constructor state.
      
      I would think that it is much easier to check the object state manually
      before the free.  That also places the check near the code that actually
      manipulates the object.
      
      Also, the SLAB_DEBUG_INITIAL callback is only performed if the kernel was
      compiled with SLAB debugging on.  If there were code in a constructor
      handling SLAB_DEBUG_INITIAL then it would have to be conditional on
      SLAB_DEBUG; otherwise it would just be dead code.  But there is no such code
      in the kernel.  I think SLAB_DEBUG_INITIAL is too problematic to make real
      use of, difficult to understand, and there are easier ways to accomplish the
      same effect (i.e. add debug code before kfree).
      
      There is a related flag, SLAB_CTOR_VERIFY, that is frequently checked to be
      clear in fs inode caches.  Remove the pointless checks (they would be
      pointless even without the removal of SLAB_DEBUG_INITIAL) from the fs
      constructors.
      
      This is the last slab flag that SLUB did not support.  Remove the check for
      unimplemented flags from SLUB.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • KMEM_CACHE(): simplify slab cache creation · 0a31bd5f
      Committed by Christoph Lameter
      This patch provides a new macro
      
      KMEM_CACHE(<struct>, <flags>)
      
      to simplify slab creation. KMEM_CACHE creates a slab with the name of the
      struct, with the size of the struct and with the alignment of the struct.
      Additional slab flags may be specified if necessary.
      
      Example
      
      struct test_slab {
      	int a, b, c;
      	struct list_head list;
      } __cacheline_aligned_in_smp;
      
      test_slab_cache = KMEM_CACHE(test_slab, SLAB_PANIC);
      
      will create a new slab named "test_slab", sized sizeof(struct test_slab)
      and aligned to the alignment of struct test_slab.  If creation fails then we
      panic.
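      
      The macro itself is presumably a one-liner over kmem_cache_create(); a
      sketch against the kmem_cache_create() signature of this era (name, size,
      align, flags, ctor, dtor):
      
      #define KMEM_CACHE(__struct, __flags)					\
      	kmem_cache_create(#__struct, sizeof(struct __struct),		\
      			  __alignof__(struct __struct), (__flags),	\
      			  NULL, NULL)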
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slab allocators: Remove obsolete SLAB_MUST_HWCACHE_ALIGN · 5af60839
      Committed by Christoph Lameter
      This patch was recently posted to lkml and acked by Pekka.
      
      The flag SLAB_MUST_HWCACHE_ALIGN is
      
      1. Never checked by SLAB at all.
      
      2. A duplicate of SLAB_HWCACHE_ALIGN for SLUB
      
      3. Fulfills the role of SLAB_HWCACHE_ALIGN for SLOB.
      
      The only remaining use is in sparc64 and ppc64, and its use there reflects
      some earlier role that the slab flag may once have had.  Wherever it is
      specified, SLAB_HWCACHE_ALIGN is also specified.
      
      The flag is confusing, inconsistent and has no purpose.
      
      Remove it.
      Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: optimize kill_bdev() · f9a14399
      Committed by Peter Zijlstra
      Remove duplicate work in kill_bdev().
      
      It currently invalidates and then truncates the bdev's mapping.
      invalidate_mapping_pages() will opportunistically remove pages from the
      mapping.  And truncate_inode_pages() will forcefully remove all pages.
      
      The only thing truncate doesn't do is flush the bh lrus, so do that
      explicitly.  This avoids (very unlikely but possible) invalid lookup
      results if the same bdev is quickly re-issued.
      
      It also will prevent extreme kernel latencies which are observed when
      blockdevs which have a large amount of pagecache are unmounted, by avoiding
      invalidate_mapping_pages() on that path.  invalidate_mapping_pages() has no
      cond_resched (it can be called under spinlock), whereas truncate_inode_pages()
      has one.
      
      [akpm@linux-foundation.org: restore nrpages==0 optimisation]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove destroy_dirty_buffers from invalidate_bdev() · f98393a6
      Committed by Peter Zijlstra
      Remove the destroy_dirty_buffers argument from invalidate_bdev(), it hasn't
      been used in 6 years (so akpm says).
      
      find * -name \*.[ch] | xargs grep -l invalidate_bdev |
      while read file; do
      	quilt add $file;
      	sed -ie 's/invalidate_bdev(\([^,]*\),[^)]*)/invalidate_bdev(\1)/g' $file;
      done
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Quicklist support for sparc64 · 3a2cba99
      Committed by David Miller
      I ported this to sparc64 as per the patch below, tested on UP SunBlade1500 and
      24 cpu Niagara T1000.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Andi Kleen <ak@suse.de>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Quicklists for page table pages · 6225e937
      Committed by Christoph Lameter
      On x86_64 this cuts allocation overhead for page table pages down to a
      fraction (kernel compile / editing load; TSC-based measurement of time spent
      in each function):
      
      no quicklist
      
      pte_alloc               1569048 4.3s(401ns/2.7us/179.7us)
      pmd_alloc                780988 2.1s(337ns/2.7us/86.1us)
      pud_alloc                780072 2.2s(424ns/2.8us/300.6us)
      pgd_alloc                260022 1s(920ns/4us/263.1us)
      
      quicklist:
      
      pte_alloc                452436 573.4ms(8ns/1.3us/121.1us)
      pmd_alloc                196204 174.5ms(7ns/889ns/46.1us)
      pud_alloc                195688 172.4ms(7ns/881ns/151.3us)
      pgd_alloc                 65228 9.8ms(8ns/150ns/6.1us)
      
      pgd allocations are the most complex and there we see the most dramatic
      improvement (maybe we can cut down the number of pgds cached somewhat?).  But
      even the pte allocations still see a doubling of performance.
      
      1. Proven code from the IA64 arch.
      
      	The method used here has been fine tuned for years and
      	is NUMA aware. It is based on the knowledge that accesses
      	to page table pages are sparse in nature. Taking a page
      	off the freelists instead of allocating a zeroed page
      	allows a reduction in the number of cachelines touched,
      	in addition to getting rid of the slab overhead. So
      	performance improves. This is particularly useful if pgds
      	contain standard mappings. We can save on the teardown
      	and setup of such a page if we have some on the quicklists.
      	This includes avoiding list operations that are otherwise
      	necessary on alloc and free to track pgds.
      
      2. Light weight alternative to use slab to manage page size pages
      
      	Slab overhead is significant and even page allocator use
      	is pretty heavy weight. The use of a per cpu quicklist
      	means that we touch only two cachelines for an allocation.
      	There is no need to access the page_struct (unless arch code
      	needs to fiddle around with it). So the fast path just
      	means bringing in one cacheline at the beginning of the
      	page. That same cacheline may then be used to store the
      	page table entry. Or a second cacheline may be used
      	if the page table entry is not in the first cacheline of
      	the page. The current code will zero the page which means
      	touching 32 cachelines (assuming 128 byte). We get down
      	from 32 to 2 cachelines in the fast path.
      
      3. x86_64 gets lightweight page table page management.
      
      	This will allow x86_64 arch code to repopulate pgds
      	and other page table entries. The list operations for pgds
      	are reduced in the same way as for i386 to the point where
      	a pgd is allocated from the page allocator and when it is
      	freed back to the page allocator. A pgd can pass through
      	the quicklists without having to be reinitialized.
      
      4. Consolidation of code from multiple arches
      
      	So far arches have their own implementation of quicklist
      	management. This patch moves that feature into the core allowing
      	an easier maintenance and consistent management of quicklists.
      
      Page table pages have the characteristic that they are typically zero or in a
      known state when they are freed.  This is usually exactly the same state as
      needed after allocation.  So it makes sense to build a list of freed page
      table pages and then consume the pages already in use first.  Those pages have
      already been initialized correctly (thus no need to zero them) and are likely
      already cached in such a way that the MMU can use them most effectively.  Page
      table pages are used in a sparse way so zeroing them on allocation is not too
      useful.
      
      Such an implementation already exists for ia64.  However, that implementation
      did not support constructors and destructors as needed by i386 / x86_64.  It
      also only supported a single quicklist.  The implementation here has
      constructor and destructor support as well as the ability for an arch to
      specify how many quicklists are needed.
      
      Quicklists are defined by an arch defining CONFIG_QUICKLIST.  If more than one
      quicklist is necessary then we can define NR_QUICK for additional lists.
      F.e. i386 needs two and thus has
      
      config NR_QUICK
      	int
      	default 2
      
      If an arch has requested quicklist support then pages can be allocated
      from the quicklist (or from the page allocator if the quicklist is
      empty) via:
      
      quicklist_alloc(<quicklist-nr>, <gfpflags>, <constructor>)
      
      Page table pages can be freed using:
      
      quicklist_free(<quicklist-nr>, <destructor>, <page>)
      
      Pages must have a definite state after allocation and before
      they are freed. If no constructor is specified then pages
      will be zeroed on allocation and must be zeroed before they are
      freed.
      
      If a constructor is used then the constructor will establish
      a definite page state. F.e. the i386 and x86_64 pgd constructors
      establish certain mappings.
      
      Constructors and destructors can also be used to track the pages.
      i386 and x86_64 use a list of pgds in order to be able to dynamically
      update standard mappings.
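      
      As a hedged sketch of how an arch might sit on top of this API (the function
      and flag choices here are illustrative, not lifted from the patch):
      
      #include <linux/quicklist.h>
      
      /* Assume quicklist 0 holds zeroed page-table pages (no constructor). */
      static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
      					  unsigned long address)
      {
      	return (pte_t *)quicklist_alloc(0, GFP_KERNEL, NULL);
      }
      
      static inline void pte_free_kernel(pte_t *pte)
      {
      	/* No destructor: the page must be returned in a zeroed state. */
      	quicklist_free(0, NULL, pte);
      }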
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Andi Kleen <ak@suse.de>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slub: enable tracking of full slabs · 643b1138
      Committed by Christoph Lameter
      If slab tracking is on then build a list of full slabs so that we can verify
      the integrity of all slabs and are also able to build lists of alloc/free
      callers.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: optimize compound_head() by avoiding a shared page flag · 6d777953
      Committed by Christoph Lameter
      The patch adds PageTail(page) and PageHead(page) to check whether a page is
      the head or the tail of a compound page.  This is done by masking the two
      bits describing the state of a compound page and then comparing them, so one
      comparison and a branch instead of two bit checks and two branches.
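      
      A sketch of the two-bit test (flag names per the related patches in this
      series, where the tail state aliases PG_reclaim; not verified line-for-line):
      
      #define PG_head_tail_mask ((1L << PG_compound) | (1L << PG_reclaim))
      
      /* Tail pages have both bits set... */
      static inline int PageTail(struct page *page)
      {
      	return (page->flags & PG_head_tail_mask) == PG_head_tail_mask;
      }
      
      /* ...head pages have only PG_compound set. */
      static inline int PageHead(struct page *page)
      {
      	return (page->flags & PG_head_tail_mask) == (1L << PG_compound);
      }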
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Make page->private usable in compound pages · d85f3385
      Committed by Christoph Lameter
      If we add a new flag so that we can distinguish between the first page and
      the tail pages then we can avoid using page->private in the first page.
      page->private == page for the first page, so there is no real information in
      there.
      
      Freeing up page->private makes the use of compound pages more transparent.
      They become more usable, like real pages.  Right now we have to be careful,
      f.e. if we are going beyond PAGE_SIZE allocations in the slab on i386,
      because we can then no longer use the private field.  This is one of the
      issues that causes us not to support debugging for page-size slabs in SLAB.
      
      Having page->private available for SLUB would allow more meta information in
      the page struct.  I can probably avoid the 16 bit ints that I have in there
      right now.
      
      Also if page->private is available then a compound page may be equipped with
      buffer heads.  This may free up the way for filesystems to support larger
      blocks than page size.
      
      We add PageTail as an alias of PageReclaim.  Compound pages cannot currently
      be reclaimed.  Because of the alias one needs to check PageCompound first.
      
      The RFC for the this approach was discussed at
      http://marc.info/?t=117574302800001&r=1&w=2
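      
      With the head/tail distinction in place, a lookup helper along these lines
      becomes possible (a hedged sketch; the first_page back-pointer field is what
      the description implies, though the exact union layout may differ):
      
      static inline struct page *compound_head(struct page *page)
      {
      	if (unlikely(PageTail(page)))
      		return page->first_page;	/* stored in tail pages */
      	return page;
      }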
      
      [nacc@us.ibm.com: fix hugetlbfs]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • SLUB: allocate smallest object size if the user asks for 0 bytes · 614410d5
      Committed by Christoph Lameter
      Makes SLUB behave like SLAB in this area to avoid issues....
      
      Throw a stack dump to alert people.
      
      At some point the behavior should be switched back.  NULL is no memory as
      far as I can tell, and if the user asked for 0 bytes then they need to get no
      memory.
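      
      A hedged sketch of the interim behavior described (the helper and the exact
      warning mechanism are my guesses, not the patch's code):
      
      /* Hypothetical helper mirroring the described behavior. */
      static size_t slub_size_for(size_t size)
      {
      	if (unlikely(size == 0)) {
      		WARN_ON(1);	/* stack dump to alert people */
      		return 1;	/* smallest object size instead of NULL */
      	}
      	return size;
      }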
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • SLUB core · 81819f0f
      Committed by Christoph Lameter
      This is a new slab allocator which was motivated by the complexity of the
      existing code in mm/slab.c. It attempts to address a variety of concerns
      with the existing implementation.
      
      A. Management of object queues
      
         A particular concern was the complex management of the numerous object
         queues in SLAB. SLUB has no such queues. Instead we dedicate a slab for
         each allocating CPU and use objects from a slab directly instead of
         queueing them up.
      
      B. Storage overhead of object queues
      
         SLAB object queues exist per node, per CPU. The alien cache queue even
         has a queue array that contains a queue for each processor on each
         node. For very large systems the number of queues and the number of
         objects that may be caught in those queues grows exponentially. On our
         systems with 1k nodes / processors we have several gigabytes just tied up
         for storing references to objects for those queues.  This does not include
         the objects that could be on those queues. One fears that the whole
         memory of the machine could one day be consumed by those queues.
      
      C. SLAB meta data overhead
      
         SLAB has overhead at the beginning of each slab. This means that data
         cannot be naturally aligned at the beginning of a slab block. SLUB keeps
         all meta data in the corresponding page_struct. Objects can be naturally
         aligned in the slab. F.e. a 128 byte object will be aligned at 128 byte
         boundaries and can fit tightly into a 4k page with no bytes left over.
         SLAB cannot do this.
      
      D. SLAB has a complex cache reaper
      
         SLUB does not need a cache reaper for UP systems. On SMP systems
         the per CPU slab may be pushed back into partial list but that
         operation is simple and does not require an iteration over a list
         of objects. SLAB expires per CPU, shared and alien object queues
         during cache reaping which may cause strange hold offs.
      
      E. SLAB has complex NUMA policy layer support
      
         SLUB pushes NUMA policy handling into the page allocator. This means that
         allocation is coarser (SLUB does interleave on a page level) but that
         situation was also present before 2.6.13. SLABs application of
         policies to individual slab objects allocated in SLAB is
         certainly a performance concern due to the frequent references to
         memory policies which may lead a sequence of objects to come from
         one node after another. SLUB will get a slab full of objects
         from one node and then will switch to the next.
      
      F. Reduction of the size of partial slab lists
      
         SLAB has per node partial lists. This means that over time a large
         number of partial slabs may accumulate on those lists. These can
         only be reused if allocations occur on specific nodes. SLUB has a global
         pool of partial slabs and will consume slabs from that pool to
         decrease fragmentation.
      
      G. Tunables
      
         SLAB has sophisticated tuning abilities for each slab cache. One can
         manipulate the queue sizes in detail. However, filling the queues still
         requires the uses of the spin lock to check out slabs. SLUB has a global
         parameter (min_slab_order) for tuning. Increasing the minimum slab
         order can decrease the locking overhead. The bigger the slab order the
         less motions of pages between per CPU and partial lists occur and the
         better SLUB will be scaling.
      
      H. Slab merging
      
         We often have slab caches with similar parameters. SLUB detects those
         on boot up and merges them into the corresponding general caches. This
         leads to more effective memory use. About 50% of all caches can
         be eliminated through slab merging. This will also decrease
         slab fragmentation because partial allocated slabs can be filled
         up again. Slab merging can be switched off by specifying
         slub_nomerge on boot up.
      
         Note that merging can expose heretofore unknown bugs in the kernel
         because corrupted objects may now be placed differently and corrupt
         differing neighboring objects. Enable sanity checks to find those.
      
      I. Diagnostics
      
         The current slab diagnostics are difficult to use and require a
         recompilation of the kernel. SLUB contains debugging code that
         is always available (but is kept out of the hot code paths).
         SLUB diagnostics can be enabled via the "slub_debug" option.
         Parameters can be specified to select a single or a group of
         slab caches for diagnostics. This means that the system is running
         with the usual performance and it is much more likely that
         race conditions can be reproduced.
      
      J. Resiliency
      
         If basic sanity checks are on then SLUB is capable of detecting
         common error conditions and recover as best as possible to allow the
         system to continue.
      
      K. Tracing
      
         Tracing can be enabled via the slub_debug=T,<slabcache> option
         during boot. SLUB will then log all actions on that slabcache
         and dump the object contents on free.
      
      L. On demand DMA cache creation.
      
         Generally DMA caches are not needed. If a kmalloc is used with
         __GFP_DMA then just create this single slabcache that is needed.
         For systems that have no ZONE_DMA requirement the support is
         completely eliminated.
      
      M. Performance increase
      
         Some benchmarks have shown speed improvements on kernbench in the
         range of 5-10%. The locking overhead of slub is based on the
         underlying base allocation size. If we can reliably allocate
         larger order pages then it is possible to increase slub
         performance much further. The anti-fragmentation patches may
         enable further performance increases.
      
      Tested on:
      i386 UP + SMP, x86_64 UP + SMP + NUMA emulation, IA64 NUMA + Simulator
      
      SLUB Boot options
      
      slub_nomerge		Disable merging of slabs
      slub_min_order=x	Require a minimum order for slab caches. This
      			increases the managed chunk size and therefore
      			reduces meta data and locking overhead.
      slub_min_objects=x	Minimum objects per slab. Default is 8.
      slub_max_order=x	Avoid generating slabs larger than order specified.
      slub_debug		Enable all diagnostics for all caches
      slub_debug=<options>	Enable selective options for all caches
      slub_debug=<o>,<cache>	Enable selective options for a certain set of
      			caches
      
      Available Debug options
      F		Double Free checking, sanity and resiliency
      R		Red zoning
      P		Object / padding poisoning
      U		Track last free / alloc
      T		Trace all allocs / frees (only use for individual slabs).
      
      To use SLUB: Apply this patch and then select SLUB as the default slab
      allocator.
      
      [hugh@veritas.com: fix an oops-causing locking error]
      [akpm@linux-foundation.org: various stupid cleanups and small fixes]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • i386: use page allocator to allocate thread_info structure · b5637e65
      Committed by Christoph Lameter
      i386 uses kmalloc to allocate the thread_info structure, assuming that the
      allocation results in a page-sized, page-aligned block.  That has worked so
      far because SLAB exempts page-sized slabs from debugging and aligns them in
      special ways that go beyond the restrictions imposed by
      KMALLOC_ARCH_MINALIGN valid for other slabs in the kmalloc array.
      
      SLUB also works fine without debugging, since page-sized allocations neatly
      align at page boundaries.  However, if debugging is switched on then SLUB
      will extend the slab with debug information.  The resulting slab is no
      longer of page size.  It will only be aligned following the requirements
      imposed by KMALLOC_ARCH_MINALIGN.  As a result the thread_info structure may
      not be page aligned, which makes i386 fail to boot with SLUB debugging on.
      
      Replace the calls to kmalloc with calls into the page allocator.
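      
      A hedged sketch of the replacement (macro names as i386 conventionally used
      them; THREAD_SIZE is one page here):
      
      #define alloc_thread_info(tsk)						\
      	((struct thread_info *)__get_free_pages(GFP_KERNEL,		\
      						get_order(THREAD_SIZE)))
      
      #define free_thread_info(info)						\
      	free_pages((unsigned long)(info), get_order(THREAD_SIZE))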
      
      An alternate solution may be to create a custom slab cache where the
      alignment is set to PAGE_SIZE.  That would allow slub debugging to be
      applied to the threadinfo structure.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • readahead: code cleanup · 6ce745ed
      Committed by Jan Kara
      Rename file_ra_state.prev_page to prev_index and file_ra_state.offset to
      prev_offset.  Also, the update of prev_index in do_generic_mapping_read() is
      moved close to the update of prev_offset.
      
      [wfg@mail.ustc.edu.cn: fix it]
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: WU Fengguang <wfg@mail.ustc.edu.cn>
      Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • readahead: improve heuristic detecting sequential reads · ec0f1637
      Committed by Jan Kara
      Introduce ra.offset and store in it an offset where the previous read
      ended.  This way we can detect whether reads are really sequential (and
      thus we should not mark the page as accessed repeatedly) or whether they
      are random and just happen to be in the same page (and the page should
      really be marked accessed again).
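      
      As a hedged illustration of the check this enables (the helper name is
      hypothetical; the field names follow the rename in the cleanup commit above):
      
      #include <linux/fs.h>
      
      /* Sequential iff this read starts exactly where the previous one ended. */
      static inline int ra_read_is_sequential(struct file_ra_state *ra,
      					pgoff_t index, unsigned long offset)
      {
      	return index == ra->prev_index && offset == ra->prev_offset;
      }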
      Signed-off-by: Jan Kara <jack@suse.cz>
      Acked-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: WU Fengguang <wfg@mail.ustc.edu.cn>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • smaps: add clear_refs file to clear reference · b813e931
      Committed by David Rientjes
      Adds /proc/pid/clear_refs.  When any non-zero number is written to this file,
      pte_mkold() and ClearPageReferenced() are called for each pte and its
      corresponding page, respectively, in that task's VMAs.  This file is only
      writable by the user who owns the task.
      
      It is now possible to measure _approximately_ how much memory a task is using
      by clearing the reference bits with
      
      	echo 1 > /proc/pid/clear_refs
      
      and checking the reference count for each VMA from the /proc/pid/smaps output
      at a measured time interval.  For example, to observe the approximate change
      in memory footprint for a task, write a script that clears the references
      (echo 1 > /proc/pid/clear_refs), sleeps, and then greps for Pgs_Referenced and
      extracts the size in kB.  Add the sizes for each VMA together for the total
      referenced footprint.  Moments later, repeat the process and observe the
      difference.
      
      For example, using an efficient Mozilla:
      
      	accumulated time		referenced memory
      	----------------		-----------------
      		 0 s				 408 kB
      		 1 s				 408 kB
      		 2 s				 556 kB
      		 3 s				1028 kB
      		 4 s				 872 kB
      		 5 s				1956 kB
      		 6 s				 416 kB
      		 7 s				1560 kB
      		 8 s				2336 kB
      		 9 s				1044 kB
      		10 s				 416 kB
      
      This is a valuable tool to get an approximate measurement of the memory
      footprint for a task.
      
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      [akpm@linux-foundation.org: build fixes]
      [mpm@selenic.com: rename for_each_pmd]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • i386: use pte_update_defer in ptep_test_and_clear_{dirty,young} · 0013572b
      Committed by Zachary Amsden
      If you actually clear the bit, you need to:
      
      +         pte_update_defer(vma->vm_mm, addr, ptep);
      
      The reason is, when updating PTEs, the hypervisor must be notified.  Using
      atomic operations to do this is fine for all hypervisors I am aware of.
      However, for hypervisors which shadow page tables, if these PTE
      modifications are not trapped, you need a post-modification call to fulfill
      the update of the shadow page table.
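      
      A hedged sketch of where the call lands (the surrounding helper is from the
      companion patch below; the exact bit handling may differ from the patch):
      
      static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
      					    unsigned long addr, pte_t *ptep)
      {
      	int ret = 0;
      
      	if (pte_young(*ptep))
      		ret = test_and_clear_bit(_PAGE_BIT_ACCESSED,
      					 (unsigned long *)&ptep->pte_low);
      	if (ret)
      		pte_update_defer(vma->vm_mm, addr, ptep); /* notify hypervisor */
      	return ret;
      }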
      Acked-by: Zachary Amsden <zach@vmware.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • i386: add ptep_test_and_clear_{dirty,young} · 10a8d6ae
      Committed by David Rientjes
      Add ptep_test_and_clear_{dirty,young} to i386.  i386 advertises that it has
      them, and there is at least one place where they need to be called without
      the page table lock: clearing the accessed bit on a write to
      /proc/pid/clear_refs.
      
      ptep_clear_flush_{dirty,young} are updated to use the new functions.  The
      overall net effect to current users of ptep_clear_flush_{dirty,young} is
      that we introduce an additional branch.
      
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Add uninitialized_var() macro for suppressing gcc warnings · 94909914
      Committed by Borislav Petkov
      Introduce a macro that suppresses gcc warnings about the probable
      uninitialized state of a variable.
      
      Example:
      
      -	spinlock_t *ptl;
      +	spinlock_t *uninitialized_var(ptl);
      
      Not a happy solution, but those warnings are obnoxious.
      
      - Using the usual pointlessly-set-it-to-zero approach wastes several
        bytes of text.
      
      - Using a macro means we can (hopefully) do something else if gcc changes
        cause the `x = x' hack to stop working
      
      - Using a macro means that people who are worried about hiding true bugs
        can easily turn it off.
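      
      The definition is presumably just the self-assignment hack mentioned above;
      a sketch:
      
      /* Suppress the "may be used uninitialized" warning; relies on gcc
       * accepting self-initialization as proof of initialization. */
      #define uninitialized_var(x) x = x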
      Signed-off-by: Borislav Petkov <bbpetkov@yahoo.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • add pfn_valid_within helper for sub-MAX_ORDER hole detection · 14e07298
      Committed by Andy Whitcroft
      Generally we work under the assumption that the mem_map array is contiguous
      and valid out to a MAX_ORDER_NR_PAGES block of pages, i.e. that if we have
      validated any page within this MAX_ORDER_NR_PAGES block we need not check
      any other.  This is not true when CONFIG_HOLES_IN_ZONE is set and we must
      check each and every reference we make from a pfn.
      
      Add a pfn_valid_within() helper which should be used when scanning pages
      within a MAX_ORDER_NR_PAGES block when we have already checked the validity
      of the block normally with pfn_valid().  This can then be optimised away when
      we do not have holes within a MAX_ORDER_NR_PAGES block of pages.
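      
      A sketch of the helper implied by the description:
      
      /*
       * Within a MAX_ORDER_NR_PAGES block that pfn_valid() has already vetted,
       * only configurations with holes inside zones need the per-pfn re-check.
       */
      #ifdef CONFIG_HOLES_IN_ZONE
      #define pfn_valid_within(pfn)	pfn_valid(pfn)
      #else
      #define pfn_valid_within(pfn)	(1)
      #endif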
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Bob Picco <bob.picco@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/slab.c: proper prototypes · ac267728
      Committed by Adrian Bunk
      Add proper prototypes in include/linux/slab.h.
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Introduce CONFIG_HAS_DMA · 411f0f3e
      Committed by Heiko Carstens
      Architectures that don't support DMA can say so by adding a NO_DMA config
      symbol to their Kconfig file.  This will prevent compilation of some
      DMA-specific driver code.  Also, dma-mapping-broken.h isn't needed anymore on
      at least s390.  This avoids compilation and linking of otherwise dead/broken
      code.
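      
      A hedged sketch of the Kconfig side (following the config-snippet style used
      elsewhere in this log; the derived HAS_DMA symbol is what the title names,
      though the exact wording of the entry is my assumption):
      
      config NO_DMA
      	bool
      
      config HAS_DMA
      	bool
      	depends on !NO_DMA
      	default y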
      
      Other architectures that include dma-mapping-broken.h are arm26, h8300,
      m68k, m68knommu and v850.  If these could be converted as well we could get
      rid of the header file.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "John W. Linville" <linville@tuxdriver.com>
      Cc: Kyle McMartin <kyle@parisc-linux.org>
      Cc: <James.Bottomley@SteelEye.com>
      Cc: Tejun Heo <htejun@gmail.com>
      Cc: Jeff Garzik <jeff@garzik.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: <geert@linux-m68k.org>
      Cc: <zippel@linux-m68k.org>
      Cc: <spyro@f2s.com>
      Cc: <uclinux-v850@lsi.nec.co.jp>
      Cc: <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: make read_cache_page synchronous · 6fe6900e
      Committed by Nick Piggin
      Ensure pages are uptodate after returning from read_cache_page, which allows
      us to cut out most of the filesystem-internal PageUptodate calls.
      
      I didn't have a great look down the call chains, but this appears to fix 7
      possible use-before-uptodate bugs in hfs, 2 in hfsplus, 1 in jfs, a few in
      ecryptfs, 1 in jffs2, and a possible cleared-data-overwritten-with-readpage
      bug in block2mtd.  All depend on whether the filler is async and/or can
      return with a !uptodate page.
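      
      A hedged sketch of the synchronous wrapper this implies (the name of the
      async variant is my assumption):
      
      /* Wait for the filler and verify the result before returning. */
      struct page *read_cache_page(struct address_space *mapping, pgoff_t index,
      			     filler_t *filler, void *data)
      {
      	struct page *page = read_cache_page_async(mapping, index, filler, data);
      
      	if (IS_ERR(page))
      		return page;
      	wait_on_page_locked(page);
      	if (!PageUptodate(page)) {
      		page_cache_release(page);
      		page = ERR_PTR(-EIO);
      	}
      	return page;
      }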
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove gcc workaround · 5f22df00
      Committed by Nick Piggin
      Minimum gcc version is 3.2 now.  However, with likely profiling, even
      modern gcc versions cannot always eliminate the call.
      
      Replace the placeholder functions with the more conventional empty static
      inlines, which should be optimal for everyone.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>