1. 17 Oct, 2007 (3 commits)
    • C
      Memoryless nodes: introduce mask of nodes with memory · 7ea1530a
      Committed by Christoph Lameter
      It is necessary to know if nodes have memory since we have recently begun to
      add support for memoryless nodes.  For that purpose we introduce two new
      node states: N_HIGH_MEMORY and N_NORMAL_MEMORY.
      
      A node has its bit in N_HIGH_MEMORY set if it has any memory regardless of the
      type of memory.  If a node has memory then it has at least one zone defined
      in its pgdat structure that is located in the pgdat itself.
      
      A node has its bit in N_NORMAL_MEMORY set if it has a lower zone than
      ZONE_HIGHMEM.  This means it is possible to allocate memory that is not
      subject to kmap.
      
      N_HIGH_MEMORY and N_NORMAL_MEMORY can then be used in various places to ensure
      that we do the right thing when we encounter a memoryless node.
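      
      As an illustration (a sketch, not code from the patch itself), the new
      states can be consumed through the standard nodemask accessors:
      
      #include <linux/nodemask.h>
      
      /* Walk only nodes that actually have memory, skipping memoryless ones. */
      static void visit_memory_nodes(void)
      {
      	int nid;
      
      	for_each_node_state(nid, N_HIGH_MEMORY) {
      		/* nid has memory of some type, possibly only highmem */
      		if (node_state(nid, N_NORMAL_MEMORY)) {
      			/* nid also has memory below ZONE_HIGHMEM, so
      			 * allocations not subject to kmap can come from it */
      		}
      	}
      }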
      
      [akpm@linux-foundation.org: build fix]
      [Lee.Schermerhorn@hp.com: update N_HIGH_MEMORY node state for memory hotadd]
      [y-goto@jp.fujitsu.com: Fix memory hotplug + sparsemem build]
      Signed-off-by: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Acked-by: Bob Picco <bob.picco@hp.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@skynet.ie>
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7ea1530a
    • C
      Memoryless nodes: Generic management of nodemasks for various purposes · 13808910
      Committed by Christoph Lameter
      Why do we need to support memoryless nodes?
      
      KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
      
      > For fujitsu, problem is called "empty" node.
      >
      > When ACPI's SRAT table includes "possible nodes", ia64 bootstrap(acpi_numa_init)
      > creates nodes, which includes no memory, no cpu.
      >
      > I tried to remove empty-node in past, but that was denied.
      > It was because we can hot-add cpu to the empty node.
      > (node-hotplug triggered by cpu is not implemented now. and it will be ugly.)
      >
      >
      > For HP, (Lee can comment on this later), they have memory-less-node.
      > As far as I hear, HP's machine can have the following configuration.
      >
      > (example)
      > Node0: CPU0   memory AAA MB
      > Node1: CPU1   memory AAA MB
      > Node2: CPU2   memory AAA MB
      > Node3: CPU3   memory AAA MB
      > Node4: Memory XXX GB
      >
      > AAA is a very small value (below 16MB) and will be omitted by ia64 bootstrap.
      > After boot, only Node 4 has valid memory (but have no cpu.)
      >
      > Maybe this is memory-interleave by firmware config.
      
      Christoph Lameter <clameter@sgi.com> wrote:
      
      > Future SGI platforms (actually also current one can have but nothing like
      > that is deployed to my knowledge) have nodes with only cpus. Current SGI
      > platforms have nodes with just I/O that we so far cannot manage in the
      > core. So the arch code maps them to the nearest memory node.
      
      Lee Schermerhorn <Lee.Schermerhorn@hp.com> wrote:
      
      > For the HP platforms, we can configure each cell with from 0% to 100%
      > "cell local memory".  When we configure with <100% CLM, the "missing
      > percentages" are interleaved by hardware on a cache-line granularity to
      > improve bandwidth at the expense of latency for numa-challenged
      > applications [and OSes, but not our problem ;-)].  When we boot Linux on
      > such a config, all of the real nodes have no memory--it all resides in a
      > single interleaved pseudo-node.
      >
      > When we boot Linux on a 100% CLM configuration [== NUMA], we still have
      > the interleaved pseudo-node.  It contains a few hundred MB stolen from
      > the real nodes to contain the DMA zone.  [Interleaved memory resides at
      > phys addr 0].  The memoryless-nodes patches, along with the zoneorder
      > patches, support this config as well.
      >
      > Also, when we boot a NUMA config with the "mem=" command line,
      > specifying less memory than actually exists, Linux takes the excluded
      > memory "off the top" rather than distributing it across the nodes.  This
      > can result in memoryless nodes, as well.
      >
      
      This patch:
      
      Preparation for memoryless node patches.
      
      Provide a generic way to keep nodemasks describing various characteristics of
      NUMA nodes.
      
      Remove the node_online_map and the node_possible_map and realize the same
      functionality using two node states: N_POSSIBLE and N_ONLINE.
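      
      In condensed form, the mechanism amounts to the sketch below (simplified;
      the real enum carries further states, such as the N_CPU mask noted below):
      
      enum node_states {
      	N_POSSIBLE,	/* The node could become online at some point */
      	N_ONLINE,	/* The node is online */
      	NR_NODE_STATES
      };
      
      extern nodemask_t node_states[NR_NODE_STATES];
      
      static inline int node_state(int node, enum node_states state)
      {
      	return node_isset(node, node_states[state]);
      }
      
      static inline void node_set_state(int node, enum node_states state)
      {
      	node_set(node, node_states[state]);
      }
      
      /* node_online(nid) then becomes node_state(nid, N_ONLINE). */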
      
      [Lee.Schermerhorn@hp.com: Initialize N_*_MEMORY and N_CPU masks for non-NUMA config]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Tested-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Acked-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Acked-by: Bob Picco <bob.picco@hp.com>
      Cc: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@skynet.ie>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: "Serge E. Hallyn" <serge@hallyn.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      13808910
    • J
      mm: no need to cast vmalloc() return value in zone_wait_table_init() · 8691f3a7
      Committed by Jesper Juhl
      vmalloc() returns a void pointer, so there's no need to cast its
      return value in mm/page_alloc.c::zone_wait_table_init().
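      
      The change boils down to dropping a redundant cast; in sketch form:
      
      	/* before: the cast is redundant, since void * converts implicitly */
      	zone->wait_table = (wait_queue_head_t *)vmalloc(alloc_size);
      
      	/* after */
      	zone->wait_table = vmalloc(alloc_size);
      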
      Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8691f3a7
  2. 31 Aug, 2007 (1 commit)
  3. 23 Aug, 2007 (1 commit)
    • M
      Apply memory policies to top two highest zones when highest zone is ZONE_MOVABLE · b377fd39
      Committed by Mel Gorman
      The NUMA layer only supports NUMA policies for the highest zone.  When
      ZONE_MOVABLE is configured with kernelcore=, the highest zone becomes
      ZONE_MOVABLE.  The result is that policies are only applied to allocations
      like anonymous pages and page cache allocated from ZONE_MOVABLE when the
      zone is used.
      
      This patch applies policies to the two highest zones when the highest zone
      is ZONE_MOVABLE.  As ZONE_MOVABLE consists of pages from the highest "real"
      zone, it's always functionally equivalent.
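      
      A sketch of the fix, based on the description above (policy_zone is the
      lowest zone to which mempolicies apply, and ZONE_MOVABLE must not be
      allowed to raise it):
      
      static inline void check_highest_zone(enum zone_type k)
      {
      	if (k > policy_zone && k != ZONE_MOVABLE)
      		policy_zone = k;
      }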
      
      The patch has been tested on a variety of machines both NUMA and non-NUMA
      covering x86, x86_64 and ppc64.  No abnormal results were seen in
      kernbench, tbench, dbench or hackbench.  It passes regression tests from
      the numactl package with and without kernelcore= once numactl tests are
      patched to wait for vmstat counters to update.
      
      akpm: this is the nasty hack to fix NUMA mempolicies in the presence of
      ZONE_MOVABLE and kernelcore= in 2.6.23.  Christoph says "For .24 either merge
      the mobility or get the other solution that Mel is working on.  That solution
      would only use a single zonelist per node and filter on the fly.  That may
      help performance and also help to make memory policies work better."
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Tested-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b377fd39
  4. 01 Aug, 2007 (1 commit)
    • M
      Do not trigger OOM-killer for high-order allocation failures · a8bbf72a
      Committed by Mel Gorman
      out_of_memory() may be called when an allocation is failing and the direct
      reclaim is not making any progress.  This does not take into account the
      requested order of the allocation.  If the request is for an order larger
      than PAGE_ALLOC_COSTLY_ORDER, it is reasonable to fail the allocation
      because the kernel makes no guarantees about those allocations succeeding.
      
      This false OOM situation can occur if a user is trying to grow the hugepage
      pool in a script like:
      
      #!/bin/bash
      REQUIRED=$1
      echo 1 > /proc/sys/vm/hugepages_treat_as_movable
      echo $REQUIRED > /proc/sys/vm/nr_hugepages
      ACTUAL=`cat /proc/sys/vm/nr_hugepages`
      while [ $REQUIRED -ne $ACTUAL ]; do
      	echo Huge page pool at $ACTUAL growing to $REQUIRED
      	echo $REQUIRED > /proc/sys/vm/nr_hugepages
      	ACTUAL=`cat /proc/sys/vm/nr_hugepages`
      	sleep 1
      done
      
      This is a reasonable scenario when ZONE_MOVABLE is in use but triggers OOM
      easily on 2.6.23-rc1. This patch will fail an allocation for an order above
      PAGE_ALLOC_COSTLY_ORDER instead of killing processes and retrying.
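      
      In sketch form, the bail-out sits in the allocator's retry loop just
      before the OOM killer is invoked (illustrative placement, not the
      verbatim diff):
      
      	/* The OOM killer will not help higher order allocs, so fail */
      	if (order > PAGE_ALLOC_COSTLY_ORDER)
      		goto nopage;
      
      	out_of_memory(zonelist, gfp_mask, order);
      	goto restart;
      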
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a8bbf72a
  5. 30 Jul, 2007 (1 commit)
    • R
      Introduce CONFIG_SUSPEND for suspend-to-Ram and standby · 296699de
      Committed by Rafael J. Wysocki
      Introduce CONFIG_SUSPEND representing the ability to enter system sleep
      states, such as the ACPI S3 state, and allow the user to choose SUSPEND
      and HIBERNATION independently of each other.
      
      Make HOTPLUG_CPU be selected automatically if SUSPEND or HIBERNATION has
      been chosen and the kernel is intended for SMP systems.
      
      Also, introduce CONFIG_PM_SLEEP which is automatically selected if
      CONFIG_SUSPEND or CONFIG_HIBERNATION is set and use it to select the
      code needed for both suspend and hibernation.
      
      The top-level power management headers and the ACPI code related to
      suspend and hibernation are modified to use the new definitions (the
      changes in drivers/acpi/sleep/main.c are, mostly, moving code to reduce
      the number of ifdefs).
      
      There are many other files in which CONFIG_PM can be replaced with
      CONFIG_PM_SLEEP or even with CONFIG_SUSPEND, but they can be updated in
      the future.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      296699de
  6. 27 Jul, 2007 (1 commit)
    • M
      Allow nodes to exist that only contain ZONE_MOVABLE · b5445f95
      Committed by Mel Gorman
      With the introduction of kernelcore=, a configurable zone is created on
      request.  In some cases, this value will be small enough that some nodes
      contain only ZONE_MOVABLE.  On some NUMA configurations when this occurs,
      arch-independent zone-sizing will get the size of the memory holes within
      the node incorrect.  The value of present_pages goes negative and the boot
      fails.
      
      This patch fixes the bug in the calculation of the size of the hole.  The
      test case is to boot test a NUMA machine with a low value of kernelcore=
      before and after the patch is applied.  While this bug exists in earlier
      kernels, it cannot be triggered in practice.
      
      This patch has been boot-tested on a variety of machines with and without
      kernelcore= set.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b5445f95
  7. 20 Jul, 2007 (3 commits)
    • P
      mm: fix memory hotplug oops from ZONE_MOVABLE changes. · e228929b
      Committed by Paul Mundt
      zone_movable_pfn is presently marked as __initdata and referenced from
      adjust_zone_range_for_zone_movable(), which in turn is referenced by
      zone_spanned_pages_in_node().  Both of these are __meminit annotated.  When
      memory hotplug is enabled, this will oops on a hot-add, due to
      zone_movable_pfn having been freed.
      
      __meminitdata annotation gives the desired behaviour.
      
      This will only impact platforms that enable both memory hotplug
      and ARCH_POPULATES_NODE_MAP.
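      
      The change itself is essentially a one-line annotation swap; in sketch
      form:
      
      	/* before: freed after boot, even if hot-add may still need it */
      	static unsigned long __initdata zone_movable_pfn[MAX_NUMNODES];
      
      	/* after: kept around when memory hotplug is configured */
      	static unsigned long __meminitdata zone_movable_pfn[MAX_NUMNODES];
      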
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e228929b
    • F
      mm: share PG_readahead and PG_reclaim · fe3cba17
      Committed by Fengguang Wu
      Share the same page flag bit for PG_readahead and PG_reclaim.
      
      One is used only on file reads, another is only for emergency writes.  One
      is used mostly for fresh/young pages, another is for old pages.
      
      Combinations of possible interactions are:
      
      a) clear PG_reclaim => implicit clear of PG_readahead
      	it will delay an asynchronous readahead into a synchronous one
      	it actually does _good_ for readahead:
      		the pages will be reclaimed soon, it's readahead thrashing!
      		in this case, synchronous readahead makes more sense.
      
      b) clear PG_readahead => implicit clear of PG_reclaim
      	one(and only one) page will not be reclaimed in time
      	it can be avoided by checking PageWriteback(page) in readahead first
      
      c) set PG_reclaim => implicit set of PG_readahead
      	will confuse readahead and make it restart the size rampup process
      	it's a trivial problem, and can mostly be avoided by checking
      	PageWriteback(page) first in readahead
      
      d) set PG_readahead => implicit set of PG_reclaim
      	PG_readahead will never be set on already cached pages.
      	PG_reclaim will always be cleared on dirtying a page.
      	so not a problem.
      
      In summary,
      	a)   we get better behavior
      	b,d) possible interactions can be avoided
      	c)   racy condition exists that might affect readahead, but the chance
      	     is _really_ low, and the hurt on readahead is trivial.
      
      Compound pages also use PG_reclaim, but for now they do not interact with
      reclaim/readahead code.
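      
      The PageWriteback() check mentioned in (b) and (c) can be sketched as
      follows (illustrative, not the verbatim readahead code):
      
      	/* Only trust PG_readahead if the bit cannot be a PG_reclaim
      	 * left over on a page that is still under writeback. */
      	if (PageReadahead(page) && !PageWriteback(page)) {
      		ClearPageReadahead(page);
      		/* ...kick off the next asynchronous readahead window... */
      	}
      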
      Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fe3cba17
    • F
      readahead: introduce PG_readahead · d77c2d7c
      Committed by Fengguang Wu
      Introduce a new page flag: PG_readahead.
      
      It acts as a look-ahead mark, which tells the page reader: Hey, it's time to
      invoke the read-ahead logic.  For the sake of I/O pipelining, don't wait until
      it runs out of cached pages!
      Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
      Cc: Steven Pratt <slpratt@austin.ibm.com>
      Cc: Ram Pai <linuxram@us.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d77c2d7c
  8. 18 Jul, 2007 (5 commits)
    • M
      knfsd: nfsd4: vary maximum delegation limit based on RAM size · c2f1a551
      Committed by Meelap Shah
      Our original NFSv4 delegation policy was to give out a read delegation on any
      open when it was possible to.
      
      Since the lifetime of a delegation isn't limited to that of an open, a client
      may quite reasonably hang on to a delegation as long as it has the inode
      cached.  This becomes an obvious problem the first time a client's inode cache
      approaches the size of the server's total memory.
      
      Our first quick solution was to add a hard-coded limit.  This patch makes a
      mild incremental improvement by varying that limit according to the server's
      total memory size, allowing at most 4 delegations per megabyte of RAM.
      
      My quick back-of-the-envelope calculation finds that in the worst case (where
      every delegation is for a different inode), a delegation could take about
      1.5K, which would make the worst case usage about 6% of memory.  The new limit
      works out to be about the same as the old on a 1-gig server.
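      
      The "4 per megabyte" rule reduces to a shift of the page count; as a
      sketch (nr_free_buffer_pages() approximates usable low memory):
      
      	/* pages * 4 / (2^20 bytes per MB / 2^PAGE_SHIFT bytes per page) */
      	max_delegations = nr_free_buffer_pages() >> (20 - 2 - PAGE_SHIFT);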
      
      [akpm@linux-foundation.org: Don't needlessly bloat vmlinux]
      [akpm@linux-foundation.org: Make it right for highmem machines]
      Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c2f1a551
    • A
      Lumpy Reclaim V4 · 5ad333eb
      Committed by Andy Whitcroft
      When we are out of memory of a suitable size we enter reclaim.  The current
      reclaim algorithm targets pages in LRU order, which is great for fairness at
      order-0 but highly unsuitable if you desire pages at higher orders.  To get
      pages of higher order we must shoot down a very high proportion of memory;
      >95% in a lot of cases.
      
      This patch set adds a lumpy reclaim algorithm to the allocator.  It targets
      groups of pages at the specified order anchored at the end of the active and
      inactive lists.  This encourages groups of pages at the requested orders to
      move from active to inactive, and active to free lists.  This behaviour is
      only triggered out of direct reclaim when higher order pages have been
      requested.
      
      This patch set is particularly effective when utilised with an
      anti-fragmentation scheme which groups pages of similar reclaimability
      together.
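      
      The targeting can be sketched as below (heavily simplified; page, order
      and the dst list come from the surrounding isolation loop, and
      __isolate_lru_page()/ISOLATE_BOTH are introduced by this series):
      
      	/* From a page taken off the LRU tail, also try to isolate its
      	 * order-aligned neighbours to assemble a contiguous block. */
      	unsigned long base_pfn = page_to_pfn(page) & ~((1UL << order) - 1);
      	unsigned long pfn;
      
      	for (pfn = base_pfn; pfn < base_pfn + (1UL << order); pfn++) {
      		struct page *cursor;
      
      		if (!pfn_valid(pfn))
      			continue;
      		cursor = pfn_to_page(pfn);
      		if (cursor == page)
      			continue;
      		if (__isolate_lru_page(cursor, ISOLATE_BOTH) == 0)
      			list_move(&cursor->lru, dst);
      	}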
      
      This patch set is based on Peter Zijlstra's lumpy reclaim V2 patch which forms
      the foundation.  Credit to Mel Gorman for sanity checking.
      
      Mel said:
      
        The patches have an application with hugepage pool resizing.
      
        When lumpy-reclaim is used with ZONE_MOVABLE, the hugepages pool can
        be resized with greater reliability.  Testing on a desktop machine with 2GB
        of RAM showed that growing the hugepage pool with ZONE_MOVABLE on its own
        was very slow as the success rate was quite low.  Without lumpy-reclaim,
        each attempt to grow the pool by 100 pages would yield 1 or 2 hugepages.
        With lumpy-reclaim, getting 40 to 70 hugepages on each attempt was typical.
      
      [akpm@osdl.org: ia64 pfn_to_nid fixes and loop cleanup]
      [bunk@stusta.de: static declarations for internal functions]
      [a.p.zijlstra@chello.nl: initial lumpy V2 implementation]
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Bob Picco <bob.picco@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5ad333eb
    • M
      Add a movablecore= parameter for sizing ZONE_MOVABLE · 7e63efef
      Committed by Mel Gorman
      This patch adds a new parameter for sizing ZONE_MOVABLE called
      movablecore=.  While kernelcore= is used to specify the minimum amount of
      memory that must be available for all allocation types, movablecore= is
      used to specify the minimum amount of memory that is used for migratable
      allocations.  The amount of memory used for migratable allocations
      determines, for example, how large the huge page pool can be dynamically
      resized to at runtime.
      
      How movablecore is actually handled is that the total number of pages in
      the system is calculated and a value is set for kernelcore that is
      
      kernelcore == totalpages - movablecore
      
      Both kernelcore= and movablecore= can be safely specified at the same time.
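      
      In sketch form (names follow the description; the subtraction is clamped
      so movablecore cannot exceed total memory):
      
      	if (required_movablecore) {
      		unsigned long corepages;
      
      		corepages = totalpages - min(required_movablecore, totalpages);
      		required_kernelcore = max(required_kernelcore, corepages);
      	}
      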
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7e63efef
    • M
      handle kernelcore=: generic · ed7ed365
      Committed by Mel Gorman
      This patch adds the kernelcore= parameter for x86.
      
      Once all patches are applied, a new command-line parameter and a new
      sysctl exist.  This patch adds the necessary documentation.
      
      From: Yasunori Goto <y-goto@jp.fujitsu.com>
      
        When the "kernelcore" boot option is specified, the kernel can't boot up on ia64
        because of an infinite loop.  In addition, the parsing code can be handled
        in an architecture-independent manner.
      
        This patch uses common code to handle the kernelcore= parameter.  It is
        only available to architectures that support arch-independent zone-sizing
        (i.e.  define CONFIG_ARCH_POPULATES_NODE_MAP).  Other architectures will
        ignore the boot parameter.
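      
      The arch-independent parsing can be sketched as follows (the static
      qualifier is the bunk@stusta.de fixlet noted below):
      
      static int __init cmdline_parse_kernelcore(char *p)
      {
      	unsigned long long coremem;
      
      	if (!p)
      		return -EINVAL;
      
      	coremem = memparse(p, &p);
      	required_kernelcore = coremem >> PAGE_SHIFT;
      
      	return 0;
      }
      early_param("kernelcore", cmdline_parse_kernelcore);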
      
      [bunk@stusta.de: make cmdline_parse_kernelcore() static]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ed7ed365
    • M
      Create the ZONE_MOVABLE zone · 2a1e274a
      Committed by Mel Gorman
      The following 8 patches against 2.6.20-mm2 create a zone called ZONE_MOVABLE
      that is only usable by allocations that specify both __GFP_HIGHMEM and
      __GFP_MOVABLE.  This has the effect of keeping all non-movable pages within a
      single memory partition while allowing movable allocations to be satisfied
      from either partition.  The patches may be applied with the list-based
      anti-fragmentation patches that groups pages together based on mobility.
      
      The size of the zone is determined by a kernelcore= parameter specified at
      boot-time.  This specifies how much memory is usable by non-movable
      allocations and the remainder is used for ZONE_MOVABLE.  Any range of pages
      within ZONE_MOVABLE can be released by migrating the pages or by reclaiming.
      
      When selecting a zone to take pages from for ZONE_MOVABLE, there are two
      things to consider.  First, only memory from the highest populated zone is
      used for ZONE_MOVABLE.  On the x86, this is probably going to be ZONE_HIGHMEM
      but it would be ZONE_DMA on ppc64 or possibly ZONE_DMA32 on x86_64.  Second,
      the amount of memory usable by the kernel will be spread evenly throughout
      NUMA nodes where possible.  If the nodes are not of equal size, the amount of
      memory usable by the kernel on some nodes may be greater than others.
      
      By default, the zone is not as useful for hugetlb allocations because they are
      pinned and non-migratable (currently at least).  A sysctl is provided that
      allows huge pages to be allocated from that zone.  This means that the huge
      page pool can be resized to the size of ZONE_MOVABLE during the lifetime of
      the system assuming that pages are not mlocked.  Despite huge pages being
      non-movable, we do not introduce additional external fragmentation of note as
      huge pages are always the largest contiguous block we care about.
      
      Credit goes to Andy Whitcroft for catching a large variety of problems during
      review of the patches.
      
      This patch creates an additional zone, ZONE_MOVABLE.  This zone is only usable
      by allocations which specify both __GFP_HIGHMEM and __GFP_MOVABLE.  Hot-added
      memory continues to be placed in its existing destination as there is no
      mechanism to redirect it to a specific zone.
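      
      The allocation-side rule can be sketched as below (simplified; the real
      gfp_zone() is assembled from config-dependent ifdefs):
      
      static inline enum zone_type gfp_zone(gfp_t flags)
      {
      	if ((flags & (__GFP_HIGHMEM | __GFP_MOVABLE)) ==
      			(__GFP_HIGHMEM | __GFP_MOVABLE))
      		return ZONE_MOVABLE;
      	if (flags & __GFP_HIGHMEM)
      		return ZONE_HIGHMEM;
      	if (flags & __GFP_DMA)
      		return ZONE_DMA;
      	return ZONE_NORMAL;
      }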
      
      [y-goto@jp.fujitsu.com: Fix section mismatch of memory hotplug related code]
      [akpm@linux-foundation.org: various fixes]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2a1e274a
  9. 17 Jul, 2007 (6 commits)
    • A
      fault-injection: add min-order parameter to fail_page_alloc · 54114994
      Committed by Akinobu Mita
      Limiting fault injection so that smaller allocations are not failed helps
      to find real bugs: higher-order allocations are likely to fail in real
      life, while zero-order allocations are not.
      
      This patch adds a min-order parameter to fail_page_alloc.  It specifies the
      minimum page allocation order at which failures are injected.
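      
      The filter itself is a small early-out in should_fail_alloc_page(); as a
      sketch:
      
      	if (order < fail_page_alloc.min_order)
      		return 0;	/* never inject failures below min-order */
      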
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      54114994
    • D
      b49ad484
    • P
      mm: more __meminit annotations · 6ea6e688
      Committed by Paul Mundt
      Currently zone_spanned_pages_in_node() and zone_absent_pages_in_node() are
      non-static for ARCH_POPULATES_NODE_MAP and static otherwise.  However, only
      the non-static versions are __meminit annotated, despite only being called
      from __meminit functions in either case.
      
      zone_init_free_lists() is currently non-static and not __meminit annotated
      either, despite only being called once in the entire tree by
      init_currently_empty_zone(), which too is __meminit.  So make it static and
      properly annotated.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6ea6e688
    • J
      mm: fix improper .init-type section references · 98011f56
      Committed by Jan Beulich
      .. which modpost started warning about.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      98011f56
    • E
      MM: alloc_large_system_hash() can free some memory for non power-of-two bucketsize · 1037b83b
      Committed by Eric Dumazet
      alloc_large_system_hash() is called at boot time to allocate space for
      several large hash tables.
      
      Lately, the TCP hash table was changed and its bucketsize is not a power-of-two
      anymore.
      
      On most setups, alloc_large_system_hash() allocates one big page (order >
      0) with __get_free_pages(GFP_ATOMIC, order).  This single high-order page
      has a power-of-two size, bigger than the needed size.
      
      We can free all pages that won't be used by the hash table.
      
      On a 1GB i386 machine, this patch saves 128 KB of LOWMEM memory.
      
      TCP established hash table entries: 32768 (order: 6, 393216 bytes)
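      
      As a sketch of the trick (condensed; split_page() turns one high-order
      page into independent order-0 pages that can be freed individually):
      
      	table = (void *)__get_free_pages(GFP_ATOMIC, order);
      	if (table && size < (PAGE_SIZE << order)) {
      		unsigned long used = (unsigned long)table + PAGE_ALIGN(size);
      		unsigned long end = (unsigned long)table + (PAGE_SIZE << order);
      
      		split_page(virt_to_page(table), order);
      		while (used < end) {
      			free_page(used);
      			used += PAGE_SIZE;
      		}
      	}
      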
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1037b83b
    • K
      change zonelist order: zonelist order selection logic · f0c0b2b8
      Committed by KAMEZAWA Hiroyuki
      Make zonelist creation policy selectable from sysctl/boot option v6.
      
      This patch makes NUMA's zonelist (of pgdat) order selectable.
      Available orders are Default (automatic) / Node-based / Zone-based.
      
      [Default Order]
      The kernel selects Node-based or Zone-based order automatically.
      
      [Node-based Order]
      This policy treats the locality of memory as the most important parameter.
      The zonelist order is created by each zone's locality.  This means lower zones
      (e.g. ZONE_DMA) can be used before a higher zone (e.g. ZONE_NORMAL) is exhausted.
      IOW, ZONE_DMA will be in the middle of the zonelist.
      The current 2.6.21 kernel uses this.
      
      Pros.
       * A user can expect local memory as much as possible.
      Cons.
       * A lower zone will be exhausted before a higher zone.  This may cause OOM_KILL.
      
      Maybe suitable if ZONE_DMA is relatively big and you never see OOM_KILL
      because of ZONE_DMA exhaustion and you need the best locality.
      
      (example)
      assume 2 node NUMA. node(0) has ZONE_DMA/ZONE_NORMAL, node(1) has ZONE_NORMAL.
      
      *node(0)'s memory allocation order:
      
       node(0)'s NORMAL -> node(0)'s DMA -> node(1)'s NORMAL.
      
      *node(1)'s memory allocation order:
      
       node(1)'s NORMAL -> node(0)'s NORMAL -> node(0)'s DMA.
      
      [Zone-based order]
      This policy treats the zone type as the most important parameter.
      The zonelist order is created by zone-type order.  This means a lower zone is
      never used before the higher zones are exhausted.
      IOW, ZONE_DMA will always be at the tail of the zonelist.
      
      Pros.
       * OOM_KILL (because of a lower zone) occurs only when all zones are exhausted.
      Cons.
       * memory locality may not be best.
      
      (example)
      assume 2 node NUMA. node(0) has ZONE_DMA/ZONE_NORMAL, node(1) has ZONE_NORMAL.
      
      *node(0)'s memory allocation order:
      
       node(0)'s NORMAL -> node(1)'s NORMAL -> node(0)'s DMA.
      
      *node(1)'s memory allocation order:
      
       node(1)'s NORMAL -> node(0)'s NORMAL -> node(0)'s DMA.
      
      The boot option "numa_zonelist_order=" and a proc/sysctl interface are supported.
      
      command:
      %echo N > /proc/sys/vm/numa_zonelist_order
      
      Will rebuild zonelist in Node-based order.
      
      command:
      %echo Z > /proc/sys/vm/numa_zonelist_order
      
      Will rebuild zonelist in Zone-based order.
      
      Thanks to Lee Schermerhorn, who gave me much help and code.
      
      [Lee.Schermerhorn@hp.com: add check_highest_zone to build_zonelists_in_zone_order]
      [akpm@linux-foundation.org: build fix]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: "jesse.barnes@intel.com" <jesse.barnes@intel.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f0c0b2b8
  10. 16 Jun, 2007 (1 commit)
    • P
      mm: Fix memory/cpu hotplug section mismatch and oops. · d09c6b80
      Committed by Paul Mundt
      When building with memory hotplug enabled and cpu hotplug disabled, we
      end up with the following section mismatch:
      
      WARNING: mm/built-in.o(.text+0x4e58): Section mismatch: reference to
      .init.text: (between 'free_area_init_node' and '__build_all_zonelists')
      
      This happens as a result of:
      
              -> free_area_init_node()
                -> free_area_init_core()
                  -> zone_pcp_init() <-- all __meminit up to this point
              -> zone_batchsize() <-- marked as __cpuinit
      
      This happens because CONFIG_HOTPLUG_CPU=n sets __cpuinit to __init, but
      CONFIG_MEMORY_HOTPLUG=y unsets __meminit.
      
      Changing zone_batchsize() to __devinit fixes this.
      
      __devinit is the only thing that is common between CONFIG_HOTPLUG_CPU=y and
      CONFIG_MEMORY_HOTPLUG=y. In the long run, perhaps this should be moved to
      another section identifier completely. Without this, memory hot-add
      of offline nodes (via hotadd_new_pgdat()) will oops if CPU hotplug is
      not also enabled.
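      
      The one-line change amounts to the following (sketch):
      
      	/* before: discarded at boot when CONFIG_HOTPLUG_CPU=n */
      	static __cpuinit int zone_batchsize(struct zone *zone)
      
      	/* after: kept whenever either CPU or memory hotplug may need it */
      	static __devinit int zone_batchsize(struct zone *zone)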
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      
      --
      
       mm/page_alloc.c |    2 +-
       1 file changed, 1 insertion(+), 1 deletion(-)
      d09c6b80
  11. 31 May, 2007 (1 commit)
  12. 24 May, 2007 (1 commit)
  13. 19 May, 2007 (1 commit)
    • S
      mm: fix section mismatch warnings · 577a32f6
      Committed by Sam Ravnborg
      modpost had two cases hardcoded for mm/
      Shift over to __init_refok and kill the
      hardcoded function names in modpost.
      
      This has the drawback that the functions
      will always be kept no matter the configuration.
      With the previous code the functions were placed in
      the init section if the configuration allowed it.
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      577a32f6
  14. 11 May, 2007 (1 commit)
  15. 10 May, 2007 (2 commits)
    • C
      Move remote node draining out of slab allocators · 4037d452
      Committed by Christoph Lameter
      Currently the slab allocators contain callbacks into the page allocator to
      perform the draining of pagesets on remote nodes.  This requires SLUB to have
      a whole subsystem in order to be compatible with SLAB.  Moving node draining
      out of the slab allocators avoids a section of code in SLUB.
      
      Move the node draining so that it is done when the vm statistics are updated.
      At that point we are already touching all the cachelines with the pagesets of
      a processor.
      
      Add an expire counter there.  If we have to update per zone or global vm
      statistics then assume that the pageset will require subsequent draining.
      
      The expire counter will be decremented on each vm stats update pass until it
      reaches zero.  Then we will drain one batch from the pageset.  The draining
      will cause vm counter updates which will then cause another expiration until
      the pcp is empty.  So we will drain a batch every 3 seconds.
      
      Note that remote node draining is a somewhat esoteric feature that is required
      on large NUMA systems because otherwise significant portions of system memory
      can become trapped in pcp queues.  The number of pcps is determined by the
      number of processors and nodes in a system.  A system with 4 processors and 2
      nodes has 8 pcps which is okay.  But a system with 1024 processors and 512
      nodes has 512k pcps with a high potential for large amounts of memory being
      caught in them.
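      
      The expire logic can be sketched as follows (modelled on the vmstat
      update path; names are illustrative, and the real pageset carries
      separate hot and cold pcp lists):
      
      	if (p->expire) {
      		/* a previous pass saw remote allocations: count down... */
      		if (!--p->expire && p->pcp.count)
      			/* ...and drain one batch once it reaches zero */
      			drain_zone_pages(zone, &p->pcp);
      	}
      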
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4037d452
    • R
      Add suspend-related notifications for CPU hotplug · 8bb78442
      Committed by Rafael J. Wysocki
      Since nonboot CPUs are now disabled after tasks and devices have been
      frozen and the CPU hotplug infrastructure is used for this purpose, we need
      special CPU hotplug notifications that will help the CPU-hotplug-aware
      subsystems distinguish normal CPU hotplug events from CPU hotplug events
      related to a system-wide suspend or resume operation in progress.  This
      patch introduces such notifications and causes them to be used during
      suspend and resume transitions.  It also changes all of the
      CPU-hotplug-aware subsystems to take these notifications into consideration
      (for now they are handled in the same way as the corresponding "normal"
      ones).
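      
      The distinction is encoded as a flag bit on the usual event codes, so a
      notifier grows extra case labels; as a sketch:
      
      #define CPU_TASKS_FROZEN	0x0010
      #define CPU_UP_PREPARE_FROZEN	(CPU_UP_PREPARE | CPU_TASKS_FROZEN)
      
      static int cpu_callback(struct notifier_block *nb,
      			unsigned long action, void *hcpu)
      {
      	switch (action) {
      	case CPU_UP_PREPARE:
      	case CPU_UP_PREPARE_FROZEN:	/* suspend/resume variant */
      		/* handled identically for now, per the changelog */
      		break;
      	}
      	return NOTIFY_OK;
      }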
      
      [oleg@tv-sign.ru: cleanups]
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8bb78442
  16. 09 May, 2007 (2 commits)
  17. 08 May, 2007 (6 commits)
  18. 02 Mar, 2007 (1 commit)
  19. 21 Feb, 2007 (2 commits)