1. 08 May 2007 (36 commits)
    • B
      blackfin architecture · 1394f032
      Committed by Bryan Wu
      This adds support for the Analog Devices Blackfin processor architecture, and
      currently supports the BF533, BF532, BF531, BF537, BF536, BF534, and BF561
      (Dual Core) devices, with a variety of development platforms including those
      available from Analog Devices (BF533-EZKit, BF533-STAMP, BF537-STAMP,
      BF561-EZKIT), and Bluetechnix Tinyboards.
      
      The Blackfin architecture was jointly developed by Intel and Analog Devices
      Inc. (ADI) as the Micro Signal Architecture (MSA) core and was introduced in
      December 2000.  Since then ADI has put this core into its Blackfin processor
      family of devices.  The Blackfin core has the advantage of a clean,
      orthogonal, RISC-like microprocessor instruction set.  It combines a dual-MAC
      (Multiply/Accumulate), state-of-the-art signal processing engine and
      single-instruction, multiple-data (SIMD) multimedia capabilities into a single
      instruction-set architecture.
      
      The Blackfin architecture, including the instruction set, is described by the
      ADSP-BF53x/BF56x Blackfin Processor Programming Reference
      http://blackfin.uclinux.org/gf/download/frsrelease/29/2549/Blackfin_PRM.pdf
      
      The Blackfin processor is already supported by major releases of gcc, and
      there are binary and source rpms/tarballs for many architectures at:
      http://blackfin.uclinux.org/gf/project/toolchain/frs
      
      There is complete documentation, including "getting started" guides, available
      at http://docs.blackfin.uclinux.org/ which provides links to the sources and
      patches you will need in order to set up a cross-compiling environment for
      bfin-linux-uclibc
      
      This patch, as well as the other patches (toolchain, distribution,
      uClibc), is actively supported by Analog Devices Inc. at:
      http://blackfin.uclinux.org/
      
      We have tested this on LTP, and our test plan (including pass/fails) can
      be found at:
      http://docs.blackfin.uclinux.org/doku.php?id=testing_the_linux_kernel
      
      [m.kozlowski@tuxland.pl: balance parenthesis in blackfin header files]
      Signed-off-by: Bryan Wu <bryan.wu@analog.com>
      Signed-off-by: Mariusz Kozlowski <m.kozlowski@tuxland.pl>
      Signed-off-by: Aubrey Li <aubrey.li@analog.com>
      Signed-off-by: Jie Zhang <jie.zhang@analog.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1394f032
    • C
      page migration: Only migrate pages if allocation in the highest zone is possible · 906e0be1
      Committed by Christoph Lameter
      Address spaces contain an allocation flag that specifies restriction on the
      zone for pages placed in the mapping.  I.e.  some device may require pages
      to be allocated from a DMA zone.  Block devices may not be able to use
      pages from HIGHMEM.
      
      Memory policies and the common use of page migration work only on the
      highest zone.  If the address space does not allow allocation from the
      highest zone then the pages in the address space are not migratable simply
      because we can only allocate memory for a specified node if we allow
      allocation for the highest zone on each node.
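      A minimal sketch of the idea, assuming the policy_zone variable and the
      mapping_gfp_mask()/gfp_zone() helpers; the exact predicate used by the patch
      may differ:
      
      	/* Treat a mapping as migratable only if its allocation mask
      	   permits pages from the highest (policy) zone. */
      	static inline int mapping_migratable(struct address_space *mapping)
      	{
      		return gfp_zone(mapping_gfp_mask(mapping)) >= policy_zone;
      	}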
      Acked-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      906e0be1
    • C
      Slab allocators: remove useless __GFP_NO_GROW flag · cfce6604
      Committed by Christoph Lameter
      There is no user remaining and I have never seen any use of that flag.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cfce6604
    • C
      slab allocators: Remove SLAB_CTOR_ATOMIC · 4f104934
      Committed by Christoph Lameter
      SLAB_CTOR_ATOMIC is never used, which is no surprise since I cannot imagine
      that one would want to do something serious in a constructor or destructor,
      in particular given that the slab allocators run with interrupts disabled.
      Actions in constructors and destructors are by their nature very limited
      and usually do not go beyond initializing variables and list operations.
      
      (The i386 pgd ctor and dtor do take a spinlock in the constructor and
      destructor...  I think that is the furthest we go at this point.)
      
      There is no flag passed to the destructor so removing SLAB_CTOR_ATOMIC also
      establishes a certain symmetry.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4f104934
    • C
      slab allocators: Remove SLAB_DEBUG_INITIAL flag · 50953fe9
      Committed by Christoph Lameter
      I have never seen a use of SLAB_DEBUG_INITIAL.  It is only supported by
      SLAB.
      
      I think its purpose was to have a callback that verifies, before each freeing
      of an object, that the object's state is the constructor state again.
      
      I would think that it is much easier to check the object state manually
      before the free.  That also places the check near the code that manipulates
      the object.
      
      Also, the SLAB_DEBUG_INITIAL callback is only performed if the kernel was
      compiled with SLAB debugging on.  If there were code in a constructor
      handling SLAB_DEBUG_INITIAL then it would have to be conditional on
      SLAB_DEBUG, otherwise it would just be dead code.  But there is no such code
      in the kernel.  I think SLAB_DEBUG_INITIAL is too problematic to make real
      use of: it is difficult to understand and there are easier ways to accomplish
      the same effect (i.e. add debug code before kfree).
      
      There is a related flag SLAB_CTOR_VERIFY that is frequently checked to be
      clear in fs inode caches.  Remove the pointless checks (they would even be
      pointless without removal of SLAB_DEBUG_INITIAL) from the fs constructors.
      
      This is the last slab flag that SLUB did not support.  Remove the check for
      unimplemented flags from SLUB.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      50953fe9
    • C
      KMEM_CACHE(): simplify slab cache creation · 0a31bd5f
      Committed by Christoph Lameter
      This patch provides a new macro
      
      KMEM_CACHE(<struct>, <flags>)
      
      to simplify slab creation. KMEM_CACHE creates a slab with the name of the
      struct, with the size of the struct and with the alignment of the struct.
      Additional slab flags may be specified if necessary.
      
      Example
      
      struct test_slab {
      	int a, b, c;
      	struct list_head list;
      } __cacheline_aligned_in_smp;
      
      test_slab_cache = KMEM_CACHE(test_slab, SLAB_PANIC);
      
      will create a new slab named "test_slab" of size sizeof(struct test_slab),
      aligned to the alignment of struct test_slab.  If creation fails then we
      panic.
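      A minimal sketch of what such a macro can expand to; the exact
      kmem_cache_create() signature of this era (including the trailing
      constructor/destructor arguments) is an assumption here:
      
      	#define KMEM_CACHE(__struct, __flags)					\
      		kmem_cache_create(#__struct, sizeof(struct __struct),		\
      				  __alignof__(struct __struct), (__flags),	\
      				  NULL, NULL)	/* no ctor/dtor */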
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0a31bd5f
    • C
      slab allocators: Remove obsolete SLAB_MUST_HWCACHE_ALIGN · 5af60839
      Committed by Christoph Lameter
      This patch was recently posted to lkml and acked by Pekka.
      
      The flag SLAB_MUST_HWCACHE_ALIGN is
      
      1. Never checked by SLAB at all.
      
      2. A duplicate of SLAB_HWCACHE_ALIGN for SLUB
      
      3. Fulfills the role of SLAB_HWCACHE_ALIGN for SLOB.
      
      The only remaining use is in sparc64 and ppc64, and their use there
      reflects some earlier role that the slab flag once may have had.  If
      it is specified then SLAB_HWCACHE_ALIGN is also specified.
      
      The flag is confusing, inconsistent and has no purpose.
      
      Remove it.
      Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5af60839
    • P
      mm: optimize kill_bdev() · f9a14399
      Committed by Peter Zijlstra
      Remove duplicate work in kill_bdev().
      
      It currently invalidates and then truncates the bdev's mapping.
      invalidate_mapping_pages() will opportunistically remove pages from the
      mapping.  And truncate_inode_pages() will forcefully remove all pages.
      
      The only thing truncate doesn't do is flush the bh lrus.  So do that
      explicitly.  This avoids very unlikely, but possible, invalid lookup
      results if the same bdev is quickly re-issued.
      
      It also will prevent extreme kernel latencies which are observed when
      blockdevs which have a large amount of pagecache are unmounted, by avoiding
      invalidate_mapping_pages() on that path.  invalidate_mapping_pages() has no
      cond_resched (it can be called under spinlock), whereas truncate_inode_pages()
      has one.
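      A minimal sketch of the resulting function, assuming 2007-era block device
      fields and helpers; an illustration rather than the exact patch:
      
      	void kill_bdev(struct block_device *bdev)
      	{
      		if (bdev->bd_inode->i_mapping->nrpages == 0)
      			return;				/* nrpages==0 optimisation */
      		invalidate_bh_lrus();			/* the one thing truncate skips */
      		truncate_inode_pages(bdev->bd_inode->i_mapping, 0);
      	}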
      
      [akpm@linux-foundation.org: restore nrpages==0 optimisation]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f9a14399
    • P
      mm: remove destroy_dirty_buffers from invalidate_bdev() · f98393a6
      Committed by Peter Zijlstra
      Remove the destroy_dirty_buffers argument from invalidate_bdev(), it hasn't
      been used in 6 years (so akpm says).
      
      find * -name \*.[ch] | xargs grep -l invalidate_bdev |
      while read file; do
      	quilt add $file;
      	sed -ie 's/invalidate_bdev(\([^,]*\),[^)]*)/invalidate_bdev(\1)/g' $file;
      done
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f98393a6
    • D
      Quicklist support for sparc64 · 3a2cba99
      Committed by David Miller
      I ported this to sparc64 as per the patch below, tested on UP SunBlade1500 and
      24 cpu Niagara T1000.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Andi Kleen <ak@suse.de>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3a2cba99
    • C
      Quicklists for page table pages · 6225e937
      Committed by Christoph Lameter
      On x86_64 this cuts allocation overhead for page table pages down to a
      fraction (kernel compile / editing load; TSC-based measurement of time spent
      in each function):
      
      no quicklist
      
      pte_alloc               1569048 4.3s(401ns/2.7us/179.7us)
      pmd_alloc                780988 2.1s(337ns/2.7us/86.1us)
      pud_alloc                780072 2.2s(424ns/2.8us/300.6us)
      pgd_alloc                260022 1s(920ns/4us/263.1us)
      
      quicklist:
      
      pte_alloc                452436 573.4ms(8ns/1.3us/121.1us)
      pmd_alloc                196204 174.5ms(7ns/889ns/46.1us)
      pud_alloc                195688 172.4ms(7ns/881ns/151.3us)
      pgd_alloc                 65228 9.8ms(8ns/150ns/6.1us)
      
      pgd allocations are the most complex and there we see the most dramatic
      improvement (maybe we can cut down the number of pgds cached somewhat?).  But
      even the pte allocations still see a doubling of performance.
      
      1. Proven code from the IA64 arch.
      
      	The method used here has been fine tuned for years and
      	is NUMA aware. It is based on the knowledge that accesses
      	to page table pages are sparse in nature. Taking a page
      	off the freelists instead of allocating a zeroed pages
      	allows a reduction of number of cachelines touched
      	in addition to getting rid of the slab overhead. So
      	performance improves. This is particularly useful if pgds
      	contain standard mappings. We can save on the teardown
      	and setup of such a page if we have some on the quicklists.
      	This includes avoiding lists operations that are otherwise
      	necessary on alloc and free to track pgds.
      
      2. Light weight alternative to use slab to manage page size pages
      
      	Slab overhead is significant and even page allocator use
      	is pretty heavy weight. The use of a per cpu quicklist
      	means that we touch only two cachelines for an allocation.
      	There is no need to access the page_struct (unless arch code
      	needs to fiddle around with it). So the fast path just
      	means bringing in one cacheline at the beginning of the
      	page. That same cacheline may then be used to store the
      	page table entry. Or a second cacheline may be used
      	if the page table entry is not in the first cacheline of
      	the page. The current code will zero the page which means
      	touching 32 cachelines (assuming 128-byte cachelines). We get down
      	from 32 to 2 cachelines in the fast path.
      
      3. x86_64 gets lightweight page table page management.
      
      	This will allow x86_64 arch code to faster repopulate pgds
      	and other page table entries. The list operations for pgds
      	are reduced in the same way as for i386 to the point where
      	a pgd is allocated from the page allocator and when it is
      	freed back to the page allocator. A pgd can pass through
      	the quicklists without having to be reinitialized.
      
      4. Consolidation of code from multiple arches
      
      	So far arches have their own implementation of quicklist
      	management. This patch moves that feature into the core allowing
      	an easier maintenance and consistent management of quicklists.
      
      Page table pages have the characteristic that they are typically zeroed or in
      a known state when they are freed.  This is usually exactly the same state as
      needed after allocation.  So it makes sense to build a list of freed page
      table pages and then consume those previously used pages first.  Those pages
      have already been initialized correctly (thus no need to zero them) and are
      likely already cached in such a way that the MMU can use them most
      effectively.  Page table pages are used in a sparse way so zeroing them on
      allocation is not too useful.
      
      Such an implementation already exists for ia64.  However, that implementation
      did not support constructors and destructors as needed by i386 / x86_64.  It
      also only supported a single quicklist.  The implementation here has
      constructor and destructor support as well as the ability for an arch to
      specify how many quicklists are needed.
      
      Quicklists are defined by an arch defining CONFIG_QUICKLIST.  If more than one
      quicklist is necessary then we can define NR_QUICK for additional lists.  F.e.
       i386 needs two and thus has
      
      config NR_QUICK
      	int
      	default 2
      
      If an arch has requested quicklist support then pages can be allocated
      from the quicklist (or from the page allocator if the quicklist is
      empty) via:
      
      quicklist_alloc(<quicklist-nr>, <gfpflags>, <constructor>)
      
      Page table pages can be freed using:
      
      quicklist_free(<quicklist-nr>, <destructor>, <page>)
      
      Pages must have a definite state after allocation and before
      they are freed. If no constructor is specified then pages
      will be zeroed on allocation and must be zeroed before they are
      freed.
      
      If a constructor is used then the constructor will establish
      a definite page state. F.e. the i386 and x86_64 pgd constructors
      establish certain mappings.
      
      Constructors and destructors can also be used to track the pages.
      i386 and x86_64 use a list of pgds in order to be able to dynamically
      update standard mappings.
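      A minimal usage sketch for a page table quicklist, assuming a hypothetical
      QUICK_PT list index and the quicklist_alloc()/quicklist_free() calls described
      above:
      
      	/* take a zeroed page table page from the quicklist (or page allocator) */
      	pte_t *pte = quicklist_alloc(QUICK_PT, GFP_KERNEL | __GFP_REPEAT, NULL);
      
      	/* ... use it as a page table page; clear entries before freeing ... */
      
      	/* return it to the per-cpu quicklist in a known (zeroed) state */
      	quicklist_free(QUICK_PT, NULL, pte);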
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Andi Kleen <ak@suse.de>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6225e937
    • C
      slub: enable tracking of full slabs · 643b1138
      Committed by Christoph Lameter
      If slab tracking is on then build a list of full slabs so that we can verify
      the integrity of all slabs and are also able to build a list of alloc/free
      callers.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      643b1138
    • C
      mm: optimize compound_head() by avoiding a shared page flag · 6d777953
      Committed by Christoph Lameter
      The patch adds PageTail(page) and PageHead(page) to check if a page is the
      head or the tail of a compound page.  This is done by masking the two bits
      describing the state of a compound page and then comparing them.  So one
      comparison and a branch instead of two bit checks and two branches.
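      A minimal sketch of the check, assuming PG_compound plus the aliased
      PG_reclaim bit described in the following entry; the exact flag names are an
      assumption:
      
      	#define PG_head_tail_mask ((1L << PG_compound) | (1L << PG_reclaim))
      
      	static inline int PageTail(struct page *page)
      	{
      		/* both bits set: a tail page of a compound page */
      		return (page->flags & PG_head_tail_mask) == PG_head_tail_mask;
      	}
      
      	static inline int PageHead(struct page *page)
      	{
      		/* only PG_compound set: the head page */
      		return (page->flags & PG_head_tail_mask) == (1L << PG_compound);
      	}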
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6d777953
    • C
      Make page->private usable in compound pages · d85f3385
      Committed by Christoph Lameter
      If we add a new flag so that we can distinguish between the first page and
      the tail pages then we can avoid using page->private in the first page.
      page->private == page for the first page, so there is no real information
      there.
      
      Freeing up page->private makes the use of compound pages more transparent.
      They become more usable like real pages.  Right now we have to be careful f.e.
       if we are going beyond PAGE_SIZE allocations in the slab on i386 because we
      can then no longer use the private field.  This is one of the issues that
      cause us not to support debugging for page size slabs in SLAB.
      
      Having page->private available for SLUB would allow more meta information in
      the page struct.  I can probably avoid the 16 bit ints that I have in there
      right now.
      
      Also if page->private is available then a compound page may be equipped with
      buffer heads.  This may free up the way for filesystems to support larger
      blocks than page size.
      
      We add PageTail as an alias of PageReclaim.  Compound pages cannot currently
      be reclaimed.  Because of the alias one needs to check PageCompound first.
      
      The RFC for this approach was discussed at
      http://marc.info/?t=117574302800001&r=1&w=2
      
      [nacc@us.ibm.com: fix hugetlbfs]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d85f3385
    • C
      SLUB: allocate smallest object size if the user asks for 0 bytes · 614410d5
      Committed by Christoph Lameter
      Makes SLUB behave like SLAB in this area to avoid issues....
      
      Throw a stack dump to alert people.
      
      At some point the behavior should be switched back.  NULL is no memory as
      far as I can tell, and if the user asked for 0 bytes then they need to get no
      memory.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      614410d5
    • C
      SLUB core · 81819f0f
      Committed by Christoph Lameter
      This is a new slab allocator which was motivated by the complexity of the
      existing code in mm/slab.c. It attempts to address a variety of concerns
      with the existing implementation.
      
      A. Management of object queues
      
         A particular concern was the complex management of the numerous object
         queues in SLAB. SLUB has no such queues. Instead we dedicate a slab for
         each allocating CPU and use objects from a slab directly instead of
         queueing them up.
      
      B. Storage overhead of object queues
      
         SLAB object queues exist per node, per CPU.  The alien cache queue even
         has a queue array that contains a queue for each processor on each
         node.  For very large systems the number of queues and the number of
         objects that may be caught in those queues grow exponentially.  On our
         systems with 1k nodes / processors we have several gigabytes just tied up
         for storing references to objects for those queues.  This does not include
         the objects that could be on those queues.  One fears that the whole
         memory of the machine could one day be consumed by those queues.
      
      C. SLAB meta data overhead
      
         SLAB has overhead at the beginning of each slab. This means that data
         cannot be naturally aligned at the beginning of a slab block. SLUB keeps
         all meta data in the corresponding page_struct. Objects can be naturally
         aligned in the slab. F.e. a 128 byte object will be aligned at 128 byte
         boundaries and can fit tightly into a 4k page with no bytes left over.
         SLAB cannot do this.
      
      D. SLAB has a complex cache reaper
      
         SLUB does not need a cache reaper for UP systems. On SMP systems
         the per CPU slab may be pushed back into partial list but that
         operation is simple and does not require an iteration over a list
         of objects. SLAB expires per CPU, shared and alien object queues
         during cache reaping which may cause strange hold offs.
      
      E. SLAB has complex NUMA policy layer support
      
         SLUB pushes NUMA policy handling into the page allocator. This means that
         allocation is coarser (SLUB does interleave on a page level) but that
         situation was also present before 2.6.13. SLAB's application of
         policies to individual slab objects is certainly a performance
         concern due to the frequent references to
         memory policies which may lead a sequence of objects to come from
         one node after another. SLUB will get a slab full of objects
         from one node and then will switch to the next.
      
      F. Reduction of the size of partial slab lists
      
         SLAB has per node partial lists. This means that over time a large
         number of partial slabs may accumulate on those lists. These can
         only be reused if allocations occur on specific nodes. SLUB has a global
         pool of partial slabs and will consume slabs from that pool to
         decrease fragmentation.
      
      G. Tunables
      
         SLAB has sophisticated tuning abilities for each slab cache. One can
         manipulate the queue sizes in detail. However, filling the queues still
         requires the use of the spin lock to check out slabs. SLUB has a global
         parameter (slub_min_order) for tuning. Increasing the minimum slab
         order can decrease the locking overhead. The bigger the slab order the
         less motion of pages between per CPU and partial lists occurs and the
         better SLUB will scale.
      
      H. Slab merging
      
         We often have slab caches with similar parameters. SLUB detects those
         on boot up and merges them into the corresponding general caches. This
         leads to more effective memory use. About 50% of all caches can
         be eliminated through slab merging. This will also decrease
         slab fragmentation because partially allocated slabs can be filled
         up again. Slab merging can be switched off by specifying
         slub_nomerge on boot up.
      
         Note that merging can expose heretofore unknown bugs in the kernel
         because corrupted objects may now be placed differently and corrupt
         differing neighboring objects. Enable sanity checks to find those.
      
      I. Diagnostics
      
         The current slab diagnostics are difficult to use and require a
         recompilation of the kernel. SLUB contains debugging code that
         is always available (but is kept out of the hot code paths).
         SLUB diagnostics can be enabled via the "slub_debug" option.
         Parameters can be specified to select a single or a group of
         slab caches for diagnostics. This means that the system is running
         with the usual performance and it is much more likely that
         race conditions can be reproduced.
      
      J. Resiliency
      
         If basic sanity checks are on then SLUB is capable of detecting
         common error conditions and recover as best as possible to allow the
         system to continue.
      
      K. Tracing
      
         Tracing can be enabled via the slub_debug=T,<slabcache> option
         during boot. SLUB will then log all actions on that slabcache
         and dump the object contents on free.
      
      L. On demand DMA cache creation.
      
         Generally DMA caches are not needed. If a kmalloc is used with
         __GFP_DMA then just create this single slabcache that is needed.
         For systems that have no ZONE_DMA requirement the support is
         completely eliminated.
      
      M. Performance increase
      
         Some benchmarks have shown speed improvements on kernbench in the
         range of 5-10%. The locking overhead of slub is based on the
         underlying base allocation size. If we can reliably allocate
         larger order pages then it is possible to increase slub
         performance much further. The anti-fragmentation patches may
         enable further performance increases.
      
      Tested on:
      i386 UP + SMP, x86_64 UP + SMP + NUMA emulation, IA64 NUMA + Simulator
      
      SLUB Boot options
      
      slub_nomerge		Disable merging of slabs
      slub_min_order=x	Require a minimum order for slab caches. This
      			increases the managed chunk size and therefore
      			reduces meta data and locking overhead.
      slub_min_objects=x	Minimum objects per slab. Default is 8.
      slub_max_order=x	Avoid generating slabs larger than order specified.
      slub_debug		Enable all diagnostics for all caches
      slub_debug=<options>	Enable selective options for all caches
      slub_debug=<o>,<cache>	Enable selective options for a certain set of
      			caches
      
      Available Debug options
      F		Double Free checking, sanity and resiliency
      R		Red zoning
      P		Object / padding poisoning
      U		Track last free / alloc
      T		Trace all allocs / frees (only use for individual slabs).
      
      To use SLUB: Apply this patch and then select SLUB as the default slab
      allocator.
      
      [hugh@veritas.com: fix an oops-causing locking error]
      [akpm@linux-foundation.org: various stupid cleanups and small fixes]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      81819f0f
    • C
      i386: use page allocator to allocate thread_info structure · b5637e65
      Committed by Christoph Lameter
      i386 uses kmalloc to allocate the thread_info structure, assuming that the
      allocation results in a page-size-aligned allocation.  That has worked so
      far because SLAB exempts page sized slabs from debugging and aligns them in
      special ways that go beyond the restrictions imposed by
      KMALLOC_ARCH_MINALIGN valid for other slabs in the kmalloc array.
      
      SLUB also works fine without debugging since page sized allocations neatly
      align at page boundaries.  However, if debugging is switched on then SLUB
      will extend the slab with debug information.  The resulting slab is no
      longer of page size.  It will only be aligned following the requirements
      imposed by KMALLOC_ARCH_MINALIGN.  As a result the threadinfo structure may
      not be page aligned which makes i386 fail to boot with SLUB debug on.
      
      Replace the calls to kmalloc with calls into the page allocator.
      
      An alternate solution may be to create a custom slab cache where the
      alignment is set to PAGE_SIZE.  That would allow slub debugging to be
      applied to the threadinfo structure.
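      A minimal sketch of the replacement, assuming the usual i386 THREAD_SIZE
      definition; the exact macro names used by the patch may differ:
      
      	#define alloc_thread_info(tsk)					\
      		((struct thread_info *)__get_free_pages(GFP_KERNEL,	\
      					get_order(THREAD_SIZE)))
      	#define free_thread_info(info)					\
      		free_pages((unsigned long)(info), get_order(THREAD_SIZE))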
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b5637e65
    • J
      readahead: code cleanup · 6ce745ed
      Committed by Jan Kara
      Rename file_ra_state.prev_page to prev_index and file_ra_state.offset to
      prev_offset.  Also, the update of prev_index in do_generic_mapping_read() is
      now moved close to the update of prev_offset.
      
      [wfg@mail.ustc.edu.cn: fix it]
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: WU Fengguang <wfg@mail.ustc.edu.cn>
      Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6ce745ed
    • J
      readahead: improve heuristic detecting sequential reads · ec0f1637
      Committed by Jan Kara
      Introduce ra.offset and store in it an offset where the previous read
      ended.  This way we can detect whether reads are really sequential (and
      thus we should not mark the page as accessed repeatedly) or whether they
      are random and just happen to be in the same page (and the page should
      really be marked accessed again).
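      A minimal sketch of the check in do_generic_mapping_read(), assuming the
      prev_index/prev_offset fields described above:
      
      	/* Mark the page accessed only if this read is not a pure
      	   continuation of the previous one. */
      	if (prev_index != index || offset != prev_offset)
      		mark_page_accessed(page);
      	prev_index = index;
      	prev_offset = offset;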
      Signed-off-by: Jan Kara <jack@suse.cz>
      Acked-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: WU Fengguang <wfg@mail.ustc.edu.cn>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ec0f1637
    • D
      smaps: add clear_refs file to clear reference · b813e931
      Committed by David Rientjes
      Adds /proc/pid/clear_refs.  When any non-zero number is written to this file,
      pte_mkold() and ClearPageReferenced() are called for each pte and its
      corresponding page, respectively, in that task's VMAs.  This file is only
      writable by the user who owns the task.
      
      It is now possible to measure _approximately_ how much memory a task is using
      by clearing the reference bits with
      
      	echo 1 > /proc/pid/clear_refs
      
      and checking the reference count for each VMA from the /proc/pid/smaps output
      at a measured time interval.  For example, to observe the approximate change
      in memory footprint for a task, write a script that clears the references
      (echo 1 > /proc/pid/clear_refs), sleeps, and then greps for Pgs_Referenced and
      extracts the size in kB.  Add the sizes for each VMA together for the total
      referenced footprint.  Moments later, repeat the process and observe the
      difference.
      
      For example, using an efficient Mozilla:
      
      	accumulated time		referenced memory
      	----------------		-----------------
      		 0 s				 408 kB
      		 1 s				 408 kB
      		 2 s				 556 kB
      		 3 s				1028 kB
      		 4 s				 872 kB
      		 5 s				1956 kB
      		 6 s				 416 kB
      		 7 s				1560 kB
      		 8 s				2336 kB
      		 9 s				1044 kB
      		10 s				 416 kB
      
      This is a valuable tool to get an approximate measurement of the memory
      footprint for a task.
      
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      [akpm@linux-foundation.org: build fixes]
      [mpm@selenic.com: rename for_each_pmd]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b813e931
    • Z
      i386: use pte_update_defer in ptep_test_and_clear_{dirty,young} · 0013572b
      Committed by Zachary Amsden
      If you actually clear the bit, you need to:
      
      +         pte_update_defer(vma->vm_mm, addr, ptep);
      
      The reason is, when updating PTEs, the hypervisor must be notified.  Using
      atomic operations to do this is fine for all hypervisors I am aware of.
      However, for hypervisors which shadow page tables, if these PTE
      modifications are not trapped, you need a post-modification call to fulfill
      the update of the shadow page table.
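      A minimal sketch of where the deferred update belongs, assuming the i386 pte
      layout of this era (pte_low, _PAGE_BIT_ACCESSED); the exact macro in the patch
      may differ:
      
      	#define ptep_test_and_clear_young(vma, addr, ptep) ({			\
      		int __ret = 0;							\
      		if (pte_young(*(ptep))) {					\
      			__ret = test_and_clear_bit(_PAGE_BIT_ACCESSED,		\
      						   &(ptep)->pte_low);		\
      			pte_update_defer((vma)->vm_mm, (addr), (ptep));		\
      		}								\
      		__ret;								\
      	})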
      Acked-by: Zachary Amsden <zach@vmware.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0013572b
    • D
      i386: add ptep_test_and_clear_{dirty,young} · 10a8d6ae
      Committed by David Rientjes
      Add ptep_test_and_clear_{dirty,young} to i386.  i386 advertises that it has
      them, and there is at least one place where they need to be called without
      the page table lock: to clear the accessed bit on write to
      /proc/pid/clear_refs.
      
      ptep_clear_flush_{dirty,young} are updated to use the new functions.  The
      overall net effect to current users of ptep_clear_flush_{dirty,young} is
      that we introduce an additional branch.
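      A minimal sketch of the updated wrapper (the additional branch mentioned
      above), assuming the asm-generic style of definition:
      
      	#define ptep_clear_flush_young(vma, address, ptep)			\
      	({									\
      		int __young = ptep_test_and_clear_young(vma, address, ptep);	\
      		if (__young)							\
      			flush_tlb_page(vma, address);	/* the extra branch */	\
      		__young;							\
      	})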
      
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      10a8d6ae
    • B
      Add uninitialized_var() macro for suppressing gcc warnings · 94909914
      Committed by Borislav Petkov
      Introduce a macro for suppressing gcc from generating a warning about a
      probable uninitialized state of a variable.
      
      Example:
      
      -	spinlock_t *ptl;
      +	spinlock_t *uninitialized_var(ptl);
      
      Not a happy solution, but those warnings are obnoxious.
      
      - Using the usual pointlessly-set-it-to-zero approach wastes several
        bytes of text.
      
      - Using a macro means we can (hopefully) do something else if gcc changes
        cause the `x = x' hack to stop working
      
      - Using a macro means that people who are worried about hiding true bugs
        can easily turn it off.
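      A minimal sketch of the macro, assuming the usual "x = x" self-initialization
      trick mentioned above:
      
      	#define uninitialized_var(x) x = x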
      Signed-off-by: Borislav Petkov <bbpetkov@yahoo.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      94909914
    • A
      add pfn_valid_within helper for sub-MAX_ORDER hole detection · 14e07298
      Committed by Andy Whitcroft
      Generally we work under the assumption that the mem_map array is contiguous
      and valid out to a MAX_ORDER_NR_PAGES block of pages, i.e. that if we have
      validated any page within this MAX_ORDER_NR_PAGES block we need not check
      any other.  This is not true when CONFIG_HOLES_IN_ZONE is set and we must
      check each and every reference we make from a pfn.
      
      Add a pfn_valid_within() helper which should be used when scanning pages
      within a MAX_ORDER_NR_PAGES block when we have already checked the validity
      of the block normally with pfn_valid().  This can then be optimised away when
      we do not have holes within a MAX_ORDER_NR_PAGES block of pages.
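      A minimal sketch of the helper, assuming it simply falls back to pfn_valid()
      when CONFIG_HOLES_IN_ZONE is set:
      
      	#ifdef CONFIG_HOLES_IN_ZONE
      	#define pfn_valid_within(pfn)	pfn_valid(pfn)
      	#else
      	#define pfn_valid_within(pfn)	(1)	/* no holes: optimised away */
      	#endif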
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Bob Picco <bob.picco@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      14e07298
    • A
      mm/slab.c: proper prototypes · ac267728
      Committed by Adrian Bunk
      Add proper prototypes in include/linux/slab.h.
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ac267728
    • H
      Introduce CONFIG_HAS_DMA · 411f0f3e
      Committed by Heiko Carstens
      Architectures that don't support DMA can say so by adding a config NO_DMA
      to their Kconfig file.  This will prevent compilation of some DMA-specific
      driver code.  Also, dma-mapping-broken.h isn't needed anymore on at least
      s390.  This avoids compilation and linking of otherwise dead/broken code.
      
      Other architectures that include dma-mapping-broken.h are arm26, h8300,
      m68k, m68knommu and v850.  If these could be converted as well we could get
      rid of the header file.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "John W. Linville" <linville@tuxdriver.com>
      Cc: Kyle McMartin <kyle@parisc-linux.org>
      Cc: <James.Bottomley@SteelEye.com>
      Cc: Tejun Heo <htejun@gmail.com>
      Cc: Jeff Garzik <jeff@garzik.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: <geert@linux-m68k.org>
      Cc: <zippel@linux-m68k.org>
      Cc: <spyro@f2s.com>
      Cc: <uclinux-v850@lsi.nec.co.jp>
      Cc: <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      411f0f3e
    • N
      mm: make read_cache_page synchronous · 6fe6900e
      Committed by Nick Piggin
      Ensure pages are uptodate after returning from read_cache_page, which allows
      us to cut out most of the filesystem-internal PageUptodate calls.
      
      I didn't have a great look down the call chains, but this appears to fix 7
      possible use-before-uptodate bugs in hfs, 2 in hfsplus, 1 in jfs, a few in
      ecryptfs, 1 in jffs2, and a possible case of cleared data being overwritten
      with readpage in block2mtd.  All depending on whether the filler is async
      and/or can return with a !uptodate page.
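      A minimal sketch of the synchronous wrapper, assuming an async variant and
      2007-era page cache helpers; the exact helper names are an assumption:
      
      	struct page *page = read_cache_page_async(mapping, index, filler, data);
      
      	if (!IS_ERR(page)) {
      		wait_on_page_locked(page);	/* wait for the filler to finish */
      		if (!PageUptodate(page)) {	/* filler failed */
      			page_cache_release(page);
      			page = ERR_PTR(-EIO);
      		}
      	}
      	return page;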
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6fe6900e
    • N
      mm: remove gcc workaround · 5f22df00
      Committed by Nick Piggin
      Minimum gcc version is 3.2 now.  However, with likely profiling, even
      modern gcc versions cannot always eliminate the call.
      
      Replace the placeholder functions with the more conventional empty static
      inlines, which should be optimal for everyone.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5f22df00
    • A
      proper prototype for hugetlb_get_unmapped_area() · d2ba27e8
      Committed by Adrian Bunk
      Add a proper prototype for hugetlb_get_unmapped_area() in
      include/linux/hugetlb.h.
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Acked-by: William Irwin <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d2ba27e8
    • J
      Add apply_to_page_range() which applies a function to a pte range · aee16b3c
      Committed by Jeremy Fitzhardinge
      Add a new mm function apply_to_page_range() which applies a given function to
      every pte in a given virtual address range in a given mm structure.  This is a
      generic alternative to cut-and-pasting the Linux idiomatic pagetable walking
      code in every place that a sequence of PTEs must be accessed.
      
      Although this interface is intended to be useful in a wide range of
      situations, it is currently used specifically by several Xen subsystems, for
      example: to ensure that pagetables have been allocated for a virtual address
      range, and to construct batched special pagetable update requests to map I/O
      memory (in ioremap()).
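      A minimal usage sketch, assuming a per-pte callback of the form
      (pte, pmd_page, addr, data); the callback shown is hypothetical:
      
      	static int count_present_pte(pte_t *pte, struct page *pmd_page,
      				     unsigned long addr, void *data)
      	{
      		unsigned long *count = data;
      
      		if (pte_present(*pte))		/* example per-pte action */
      			(*count)++;
      		return 0;			/* non-zero aborts the walk */
      	}
      
      	/* apply count_present_pte() to every pte in [start, start + size) */
      	unsigned long present = 0;
      	int err = apply_to_page_range(mm, start, size, count_present_pte, &present);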
      
      [akpm@linux-foundation.org: fix warning, unpleasantly]
      Signed-off-by: Ian Pratt <ian.pratt@xensource.com>
      Signed-off-by: Christian Limpach <Christian.Limpach@cl.cam.ac.uk>
      Signed-off-by: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Matt Mackall <mpm@waste.org>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      aee16b3c
    • D
      serial: define FIXED_PORT flag for serial_core · abb4a239
      Committed by David Gibson
      At present, the serial core always allows setserial in userspace to change the
      port address, irq and base clock of any serial port.  That makes sense for
      legacy ISA ports, but not for (say) embedded ns16550 compatible serial ports
      at peculiar addresses.  In these cases, the kernel code configuring the ports
      must know exactly where they are, and their clocking arrangements (which can
      be unusual on embedded boards).  It doesn't make sense for userspace to change
      these settings.
      
      Therefore, this patch defines a UPF_FIXED_PORT flag for the uart_port
      structure.  If this flag is set when the serial port is configured, any
      attempts to alter the port's type, io address, irq or base clock with
      setserial are ignored.
      
      In addition this patch uses the new flag for on-chip serial ports probed in
      arch/powerpc/kernel/legacy_serial.c, and for other hard-wired serial ports
      probed by drivers/serial/of_serial.c.
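      A minimal sketch of how a board file might mark such a hard-wired port,
      assuming 8250 registration; the address, irq and clock values are
      hypothetical:
      
      	struct uart_port port = {
      		.mapbase = 0xf0001000,		/* hypothetical on-chip address */
      		.irq     = 42,
      		.uartclk = 33333333,		/* unusual board clock */
      		.iotype  = UPIO_MEM,
      		.flags   = UPF_BOOT_AUTOCONF | UPF_FIXED_PORT,
      	};
      
      	serial8250_register_port(&port);	/* setserial can no longer change it */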
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      abb4a239
    • T
      RM9000 serial driver · bd71c182
      Committed by Thomas Koeller
      Add support for the integrated serial ports of the MIPS RM9122 processor
      and its relatives.
      
      The patch also does some whitespace cleanup.
      
      [akpm@linux-foundation.org: cleanups]
      Signed-off-by: Thomas Koeller <thomas.koeller@baslerweb.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bd71c182
    • M
      serial driver PMC MSP71xx · beab697a
      Committed by Marc St-Jean
      Serial driver patch for the PMC-Sierra MSP71xx devices.
      
      There are three different fixes:
      
      1 Fix for DesignWare APB THRE errata: In brief, this is a non-standard
        16550 in that the THRE interrupt will not re-assert itself simply by
        disabling and re-enabling the THRI bit in the IER, it is only re-enabled
        if a character is actually sent out.
      
        It appears that the "8250-uart-backup-timer.patch" in the "mm" tree
        also fixes it so we have dropped our initial workaround.  This patch now
        needs to be applied on top of that "mm" patch.
      
      2 Fix for Busy Detect on LCR write: The DesignWare APB UART has a feature
        which causes a new Busy Detect interrupt to be generated if it's busy
        when the LCR is written.  This fix saves the value of the LCR and
        rewrites it after clearing the interrupt.
      
      3 Workaround for interrupt/data concurrency issue: The SoC needs to
        ensure that writes that can cause interrupts to be cleared reach the UART
        before returning from the ISR.  This fix reads a non-destructive register
        on the UART so the read transaction completion ensures the previously
        queued write transaction has also completed.
      Signed-off-by: Marc St-Jean <Marc_St-Jean@pmc-sierra.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      beab697a
    • B
      add new_id to PCMCIA drivers · 6179b556
      Committed by Bernhard Walle
      PCI drivers have the new_id file in sysfs which allows new IDs to be added
      at runtime.  The advantage is to avoid re-compilation of a driver that
      works for a new device but whose ID table doesn't contain the new device.
      This mechanism is only meant for testing, after the driver has been tested
      successfully, the ID should be added in source code so that new revisions
      of the kernel automatically detect the device.
      
      The implementation follows the PCI implementation. The interface is documented
      in Documentation/pcmcia/driver.txt. Computations should be done in userspace,
      so the sysfs string contains the raw structure members for matching.
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Cc: Dominik Brodowski <linux@dominikbrodowski.net>
      Cc: Greg KH <greg@kroah.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6179b556
    • P
      slab: introduce krealloc · fd76bab2
      Committed by Pekka Enberg
      This introduces krealloc(), which reallocates memory while keeping the
      contents unchanged.  The allocator avoids reallocation if the new size fits
      the currently used cache.  I also added a simple non-optimized version for
      mm/slob.c for compatibility.
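      A minimal usage sketch; assigning the result to a temporary avoids leaking
      the old buffer on failure (buf and new_len are hypothetical):
      
      	void *tmp = krealloc(buf, new_len, GFP_KERNEL);
      
      	if (!tmp) {
      		kfree(buf);		/* old buffer is still valid on failure */
      		return -ENOMEM;
      	}
      	buf = tmp;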
      
      [akpm@linux-foundation.org: fix warnings]
      Acked-by: Josef Sipek <jsipek@fsl.cs.sunysb.edu>
      Acked-by: Matt Mackall <mpm@selenic.com>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fd76bab2
  2. 07 May 2007 (1 commit)
    • L
      Revert "[PATCH] x86: __pa and __pa_symbol address space separation" · e3ebadd9
      Committed by Linus Torvalds
      This was broken.  It adds complexity, for no good reason.  Rather than
      separate __pa() and __pa_symbol(), we should deprecate __pa_symbol(),
      and preferably __pa() too - and just use "virt_to_phys()" instead, which
      is more readable and has nicer semantics.
      
      However, right now, just undo the separation, and make __pa_symbol() be
      the exact same as __pa().  That fixes the bugs this patch introduced,
      and we can do the fairly obvious cleanups later.
      
      Do the new __phys_addr() function (which is now the actual workhorse for
      the unified __pa()/__pa_symbol()) as a real external function; that way
      all the potential issues with compile/link-time optimizations of
      constant symbol addresses go away, and we can also, if we choose to, add
      more sanity-checking of the argument.
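      A minimal sketch of such an out-of-line helper on x86_64, assuming the
      __START_KERNEL_map/PAGE_OFFSET layout and the phys_base variable of this era:
      
      	unsigned long __phys_addr(unsigned long x)
      	{
      		if (x >= __START_KERNEL_map)	/* kernel text/data mapping */
      			return x - __START_KERNEL_map + phys_base;
      		return x - PAGE_OFFSET;		/* direct mapping */
      	}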
      
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e3ebadd9
  3. 06 May 2007 (3 commits)