1. 01 July 2006, 5 commits
    • [PATCH] zoned vm counters: remove NR_FILE_MAPPED from scan control structure · bf02cf4b
      Committed by Christoph Lameter
      We can now access the number of pages in a mapped state in an inexpensive way
      in shrink_active_list.  So drop the nr_mapped field from scan_control.
      
      [akpm@osdl.org: bugfix]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      bf02cf4b
    • [PATCH] zoned vm counters: conversion of nr_pagecache to per zone counter · 347ce434
      Committed by Christoph Lameter
      Currently a single atomic variable is used to establish the size of the page
      cache in the whole machine.  The zoned VM counters have the same method of
      implementation as the nr_pagecache code but also allow the determination of
      the pagecache size per zone.
      
      Remove the special implementation for nr_pagecache and make it a zoned counter
      named NR_FILE_PAGES.
      
      Updates of the page cache counters are always performed with interrupts off,
      so we can use the __ variant here (a minimal sketch follows this entry).
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      347ce434
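
      A minimal sketch of the two update variants this entry refers to, using the
      inc_zone_page_state()/__inc_zone_page_state() helpers added by this patch
      series; the surrounding function is illustrative, not actual page-cache code.

      #include <linux/mm.h>
      #include <linux/vmstat.h>

      /* Illustrative only: bumping the per-zone file-page count.  If the caller
       * already runs with interrupts off (e.g. under the mapping lock), the
       * cheaper non-IRQ-safe __ variant is sufficient. */
      static void account_added_page(struct page *page, int irqs_already_off)
      {
              if (irqs_already_off)
                      __inc_zone_page_state(page, NR_FILE_PAGES);
              else
                      inc_zone_page_state(page, NR_FILE_PAGES);
      }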
    • [PATCH] zoned vm counters: convert nr_mapped to per zone counter · 65ba55f5
      Committed by Christoph Lameter
      nr_mapped is important because it allows a determination of how many pages of
      a zone are not mapped, which would allow a more efficient means of determining
      when we need to reclaim memory in a zone.
      
      We take the nr_mapped field out of the page state structure and define a new
      per zone counter named NR_FILE_MAPPED (the anonymous pages will be split off
      from NR_MAPPED in the next patch).
      
      We replace the use of nr_mapped in various kernel locations.  This avoids
      looping over all processors in try_to_free_pages(), writeback, and reclaim
      (swap + zone reclaim); a short sketch of the new read path follows this entry.
      
      [akpm@osdl.org: bugfix]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      65ba55f5
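
      A sketch of what avoiding the per-cpu loop looks like for a reader:
      reclaim-style code can read the consolidated per-zone value directly.
      zone_page_state() and NR_FILE_MAPPED follow this series; the heuristic
      itself is made up purely for illustration.

      #include <linux/mmzone.h>
      #include <linux/vmstat.h>

      /* Illustrative heuristic: is more than half of this zone mapped?  One
       * indexed read replaces summing a per-cpu page_state structure. */
      static int zone_mostly_mapped(struct zone *zone)
      {
              return zone_page_state(zone, NR_FILE_MAPPED) >
                     zone->present_pages / 2;
      }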
    • [PATCH] zoned vm counters: basic ZVC (zoned vm counter) implementation · 2244b95a
      Committed by Christoph Lameter
      Per zone counter infrastructure
      
      The counters that we currently have for the VM are split per processor.  The
      processor, however, has little to do with the zone these pages belong to.  We
      cannot tell, for example, how many ZONE_DMA pages are dirty.
      
      So we are blind to potential imbalances in the usage of memory in various
      zones.  For example, in a NUMA system we cannot tell how many pages are dirty
      on a particular node.  If we knew, we could put measures into the VM to
      balance the use of memory between different zones and different nodes in a
      NUMA system.  For example, it would be possible to limit the dirty pages per
      node so that fast local memory is kept available even if a process is
      dirtying huge amounts of pages.
      
      Another example is zone reclaim.  We do not know how many unmapped pages exist
      per zone.  So we just have to try to reclaim.  If it is not working then we
      pause and try again later.  It would be better if we knew when it makes sense
      to reclaim unmapped pages from a zone.  This patchset allows the determination
      of the number of unmapped pages per zone.  We can remove the zone reclaim
      interval with the counters introduced here.
      
      Furthermore, the ability to have various usage statistics available will allow
      the development of new NUMA balancing algorithms that may be able to improve
      the decision making in the scheduler of when to move a process to another node
      and hopefully will also enable automatic page migration through a user space
      program that can analyse the memory load distribution and then rebalance
      memory use in order to increase performance.
      
      The counter framework here implements differential counters for each processor
      in struct zone.  The differential counters are consolidated when a threshold
      is exceeded (as is done in the current implementation for nr_pagecache), when
      slab reaping occurs or when a consolidation function is called.
      
      Consolidation uses atomic operations and accumulates counters per zone in the
      zone structure and also globally in the vm_stat array.  VM functions can
      access the counts by simply indexing a global or zone specific array.
      
      The arrangement of counters in an array also simplifies processing when output
      has to be generated for /proc/*.
      
      Counters can be updated by calling inc_zone_page_state/dec_zone_page_state or
      __inc_zone_page_state/__dec_zone_page_state, analogous to *_page_state.  The
      second group of functions can be called if it is known that interrupts are
      disabled (see the sketch after this entry).
      
      Special optimized increment and decrement functions are provided.  These can
      avoid certain checks and use increment or decrement instructions that an
      architecture may provide.
      
      We also add a new CONFIG_DMA_IS_NORMAL that signifies that an architecture can
      do DMA to all memory and therefore ZONE_NORMAL will not be populated.  This is
      currently set only for IA64 SGI SN2 and only affects node_page_state().  In
      the best case node_page_state() can be reduced to retrieving a single counter
      for the one zone on the node.
      
      [akpm@osdl.org: cleanups]
      [akpm@osdl.org: export vm_stat[] for filesystems]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      2244b95a
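
      A self-contained illustration of the differential-counter scheme described
      above: each cpu keeps a small signed delta and folds it into shared atomics
      only once a threshold is crossed.  This is plain C11 written to show the
      idea, not the kernel's actual implementation (which also handles overstep,
      cpu hotplug and interrupt safety).

      #include <stdatomic.h>
      #include <stdio.h>

      #define NR_CPUS        4
      #define ZVC_THRESHOLD  32      /* fold a per-cpu delta once it passes +/-32 */

      static signed char cpu_delta[NR_CPUS];   /* cheap per-cpu differentials */
      static atomic_long zone_total;           /* consolidated zone-wide total */
      static atomic_long global_total;         /* consolidated machine-wide total */

      /* Called on the local cpu; the caller is assumed to prevent migration. */
      static void mod_zone_counter(int cpu, int delta)
      {
              int d = cpu_delta[cpu] + delta;

              if (d > ZVC_THRESHOLD || d < -ZVC_THRESHOLD) {
                      /* threshold exceeded: push the whole delta to the totals */
                      atomic_fetch_add(&zone_total, d);
                      atomic_fetch_add(&global_total, d);
                      d = 0;
              }
              cpu_delta[cpu] = (signed char)d;
      }

      int main(void)
      {
              int i;

              for (i = 0; i < 1000; i++)
                      mod_zone_counter(i % NR_CPUS, +1);  /* e.g. pages mapped */

              /* the shared total lags the true count by at most
               * NR_CPUS * ZVC_THRESHOLD, which is the price of cheap updates */
              printf("consolidated total: %ld\n", atomic_load(&zone_total));
              return 0;
      }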
    • [PATCH] zoned vm counters: create vmstat.c/.h from page_alloc.c/.h · f6ac2354
      Committed by Christoph Lameter
      NOTE: ZVC are *not* the lightweight event counters.  ZVCs are reliable whereas
      event counters do not need to be.
      
      Zone based VM statistics are necessary to be able to determine what the state
      of memory in one zone is.  In a NUMA system this can be helpful for local
      reclaim and other memory optimizations that may be able to shift VM load in
      order to get more balanced memory use.
      
      It is also useful to know how the computing load affects the memory
      allocations on various zones.  This patchset allows the retrieval of that data
      from userspace.
      
      The patchset introduces a framework for counters that is a cross between the
      existing page_stats --which are simply global counters split per cpu-- and the
      approach of deferred incremental updates implemented for nr_pagecache.
      
      Small per cpu 8 bit counters are added to struct zone.  If the counter exceeds
      certain thresholds then the counters are accumulated in an array of
      atomic_long in the zone and in a global array that sums up all zone values.
      The small 8 bit counters sit next to the per cpu page pointers, so they will
      be hot in the cpu cache when pages are allocated and freed.
      
      Access to VM counter information for a zone and for the whole machine is then
      possible by simply indexing an array (thanks to Nick Piggin for pointing out
      that approach).  Determining the total number of pages of various types no
      longer requires summing up all per cpu counters (see the sketch after this
      entry).
      
      Benefits of this patchset right now:
      
      - Ability for UP and SMP configuration to determine how memory
        is balanced between the DMA, NORMAL and HIGHMEM zones.
      
      - loops over all processors are avoided in writeback and
        reclaim paths. We can avoid caching the writeback information
        because the needed information is directly accessible.
      
      - Special handling for nr_pagecache removed.
      
      - zone_reclaim_interval vanishes since VM stats can now determine
        when it is worthwhile to do local reclaim.
      
      - Fast inline per node page state determination.
      
      - Accurate counters in /sys/devices/system/node/node*/meminfo. The current
        counters simply track which processor allocated a page somewhere and
        guesstimate based on that, so they were not useful for showing the actual
        distribution of page use on a specific zone.
      
      - The swap_prefetch patch requires per node statistics in order to
        figure out when processors of a node can prefetch. This patch provides
        some of the needed numbers.
      
      - Detailed VM counters available in more /proc and /sys status files.
      
      References to earlier discussions:
      V1 http://marc.theaimsgroup.com/?l=linux-kernel&m=113511649910826&w=2
      V2 http://marc.theaimsgroup.com/?l=linux-kernel&m=114980851924230&w=2
      V3 http://marc.theaimsgroup.com/?l=linux-kernel&m=115014697910351&w=2
      V4 http://marc.theaimsgroup.com/?l=linux-kernel&m=115024767318740&w=2
      
      Performance tests with AIM7 did not show any regressions.  Seems to be a tad
      faster even.  Tested on ia64/NUMA.  Builds fine on i386, SMP / UP.  Includes
      fixes for s390/arm/uml arch code.
      
      This patch:
      
      Move counter code from page_alloc.c/page-flags.h to vmstat.c/h.
      
      Create vmstat.c/vmstat.h by separating the counter code and the proc
      functions.
      
      Move the vm_stat_text array before zoneinfo_show.
      
      [akpm@osdl.org: s390 build fix]
      [akpm@osdl.org: HOTPLUG_CPU build fix]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f6ac2354
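
      A sketch of the "just index an array" property mentioned above, shaped like
      a /proc-style dump.  The counter names and the helper function are
      assumptions made for illustration; only the zone_page_state()-style indexed
      access reflects this series.

      #include <linux/kernel.h>
      #include <linux/mmzone.h>
      #include <linux/seq_file.h>
      #include <linux/vmstat.h>

      /* Hypothetical output helper: because counters live in arrays, a per-zone
       * dump is a simple indexed loop with no per-cpu folding required. */
      static const char * const zvc_names[] = {
              "nr_file_pages",
              "nr_file_mapped",
      };

      static void show_zone_counters(struct seq_file *m, struct zone *zone)
      {
              int i;

              for (i = 0; i < ARRAY_SIZE(zvc_names); i++)
                      seq_printf(m, "    %s %lu\n", zvc_names[i],
                                 zone_page_state(zone, i));
      }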
  2. 30 June 2006, 2 commits
    • [PATCH] solve config broken: undefined reference to `online_page' · cc57637b
      Committed by Yasunori Goto
      The i386 memory hotplug code adds memory only to highmem.  So, if
      CONFIG_HIGHMEM is not set, CONFIG_MEMORY_HOTPLUG shouldn't be set either;
      otherwise it causes a compile error.

      In addition, many architectures can't use the memory hotplug feature yet, so
      I introduce CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG.
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      cc57637b
    • [PATCH] generic_file_buffered_write(): handle zero-length iovec segments · 81b0c871
      Committed by Andrew Morton
      The recent generic_file_write() deadlock fix caused
      generic_file_buffered_write() to loop infinitely when presented with a
      zero-length iovec segment.  Fix that (a sketch of the skip logic follows this
      entry).
      
      Note that this fix deliberately avoids calling ->prepare_write(),
      ->commit_write() etc with a zero-length write.  This is because I don't trust
      all filesystems to get that right.
      
      This is a cautious approach, for 2.6.17.x.  For 2.6.18 we should just go ahead
      and call ->prepare_write() and ->commit_write() with the zero length and fix
      any broken filesystems.  So I'll make that change once this code is stabilised
      and backported into 2.6.17.x.
      
      The reason for preferring to call ->prepare_write() and ->commit_write() with
      the zero-length segment: a zero-length segment _should_ be sufficiently
      uncommon that this is the correct way of handling it.  We don't want to
      optimise for poorly-written userspace at the expense of well-written
      userspace.
      
      Cc: "Vladimir V. Saveliev" <vs@namesys.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Greg KH <greg@kroah.com>
      Cc: <stable@kernel.org>
      Cc: walt <wa1ter@myrealbox.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      81b0c871
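
      A hedged, standalone sketch of the segment handling the fix implies:
      zero-length iovec segments are stepped over before copying, so the write
      loop never hands a 0-byte chunk to ->prepare_write()/->commit_write().  This
      is not the actual generic_file_buffered_write() code.

      #include <stddef.h>
      #include <sys/uio.h>

      /* Skip zero-length segments so the caller's copy loop always sees a
       * segment with iov_len > 0, or runs out of segments entirely. */
      static const struct iovec *skip_empty_segments(const struct iovec *iov,
                                                     unsigned long *nr_segs)
      {
              while (*nr_segs && iov->iov_len == 0) {
                      iov++;
                      (*nr_segs)--;
              }
              return iov;
      }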
  3. 29 June 2006, 1 commit
  4. 28 June 2006, 16 commits
  5. 27 June 2006, 5 commits
  6. 26 June 2006, 8 commits
    • [PATCH] readahead: backoff on I/O error · 76d42bd9
      Committed by Wu Fengguang
      Backoff readahead size exponentially on I/O error.
      
      Michael Tokarev <mjt@tls.msk.ru> described the problem as:
      
      [QUOTE]
      Suppose there's a CD-ROM with a scratch/etc, so one sector is unreadable.
      In order to "fix" it, one has to read it and write it to another CD-ROM,
      or something.. or just ignore the error (if it's just a skip in a video
      stream).  Let's assume the unreadable block is number U.

      But the current behavior is just insane.  An application requests block
      number N, which is before U.  The kernel tries to read-ahead blocks N..U.
      The CD-ROM drive tries to read it, re-read it.. for some time.  Finally,
      when all the N..U-1 blocks are read, the kernel returns block number N
      (as requested) to the application, successfully.

      Now the app requests block number N+1, and the kernel tries to read
      blocks N+1..U+1, retrying again as in the previous step.

      And so on, up to when the app requests block number U-1.  And when,
      finally, it requests block U, it receives a read error.

      So, the kernel currently tries to re-read the same failing block as
      many times as the current readahead value (256 (times?) by default).

      This whole process already killed my CD-ROM drive (I posted about it
      to LKML several months ago) - literally, the drive has fried and
      does not work anymore.  Of course that problem was a bug in the firmware
      (or whatever) of the drive *too*, but the main problem with it is the
      current readahead logic as described above.
      [/QUOTE]
      
      Which was confirmed by Jens Axboe <axboe@suse.de>:
      
      [QUOTE]
      For ide-cd, it tends to only end the first part of the request on a
      medium error.  So you may see a lot of repeats :/
      [/QUOTE]
      
      With this patch, retries are expected to be reduced from, say, 256, to 5
      (an exponential-backoff sketch follows this entry).
      
      [akpm@osdl.org: cleanups]
      Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      76d42bd9
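
      A minimal sketch of exponential backoff of the readahead window on I/O
      error, in the spirit of the change above; the structure and function names
      are illustrative, not the real mm/readahead.c interfaces.

      /* Illustrative: halve the readahead window on each I/O error so a bad
       * sector is retried a handful of times instead of once per page of a
       * large (e.g. 256-page) window. */
      struct ra_window {
              unsigned long size;     /* current readahead size, in pages */
      };

      static void readahead_io_error(struct ra_window *ra)
      {
              ra->size /= 2;                  /* exponential backoff */
              if (ra->size < 1)
                      ra->size = 1;           /* never drop below one page */
      }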
    • [PATCH] kernel-doc: mm/readhead fixup · bd40cdda
      Committed by Randy Dunlap
      Put the short function description for read_cache_pages() on one line, as
      required by kernel-doc (see the example after this entry).
      Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      bd40cdda
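
      For reference, the kernel-doc shape the fix enforces: the short description
      after the function name must stay on one line.  The comment below is an
      illustrative rendering of read_cache_pages()'s header, not a verbatim copy.

      /**
       * read_cache_pages - populate an address space with some pages & start reads against them
       * @mapping: the address_space
       * @pages: the pages to populate
       * @filler: callback routine for filling a single page
       * @data: private data for the callback routine
       *
       * kernel-doc treats only the first line as the short description, so it
       * must not wrap onto a second line.
       */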
    • [PATCH] Prepare for __copy_from_user_inatomic to not zero missed bytes · 01408c49
      Committed by NeilBrown
      The problem is that when we write to a file, the copy from userspace to
      pagecache is first done with preemption disabled, so if the source address is
      not immediately available the copy fails *and* *zeros* *the* *destination*.
      
      This is a problem because a concurrent read (which admittedly is an odd thing
      to do) might see zeros rather than what was there before the write, or what
      was there after, or some mixture of the two (any of these being a reasonable
      thing to see).
      
      If the copy did fail, it will immediately be retried with preemption
      re-enabled so any transient problem with accessing the source won't cause an
      error.
      
      The first copying does not need to zero any uncopied bytes, and doing so
      causes the problem.  It uses copy_from_user_atomic rather than copy_from_user
      so the simple expedient is to change copy_from_user_atomic to *not* zero out
      bytes on failure.
      
      The first of these two patches prepares for the change by fixing two places
      which assume copy_from_user_atomic does zero the tail.  The two usages are
      very similar pieces of code which copy from a userspace iovec into one or more
      page-cache pages.  These are changed to remove the assumption.
      
      The second patch changes __copy_from_user_inatomic* to not zero the tail.
      Once these are accepted, I will look at similar patches of other architectures
      where this is important (ppc, mips and sparc being the ones I can find).
      
      This patch:
      
      There is a problem with __copy_from_user_inatomic zeroing the tail of the
      buffer in the case of an error.  As it is called in atomic context, the error
      may be transient, so it results in zeros being written where maybe they
      shouldn't be.
      
      In the usage in filemap, this opens a window for a well timed read to see data
      (zeros) which is not consistent with any ordering of reads and writes.
      
      In most cases where __copy_from_user_inatomic is called, a failure results in
      __copy_from_user being called immediately.  As long as the latter zeros the
      tail, the former doesn't need to.  However, the *copy_from_user_iovec
      implementations (in both filemap and ntfs/file) assume that
      copy_from_user_inatomic will zero the tail.
      
      This patch removes that assumption, so that after this patch it will be safe
      for copy_from_user_inatomic to not zero the tail (the copy-then-retry pattern
      is sketched after this entry).
      
      This patch also adds some commentary to filemap.h and asm-i386/uaccess.h.
      
      After this patch, all architectures that might disable preempt when
      kmap_atomic is called need to have their __copy_from_user_inatomic* "fixed".
      This includes
       - powerpc
       - i386
       - mips
       - sparc
      Signed-off-by: Neil Brown <neilb@suse.de>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Anton Altaparmakov <aia21@cantab.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      01408c49
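
      A hedged sketch of the copy-then-retry pattern this description relies on:
      the atomic fast path may return uncopied bytes (and, after this series,
      leaves them un-zeroed), and the caller retries with the sleeping variant.
      The helper and its locking context are simplified assumptions; kmap_atomic()
      is shown with the two-argument form used in kernels of this era.

      #include <asm/uaccess.h>
      #include <linux/highmem.h>

      /* Illustrative fast path: atomic copy first; on a short copy, redo it in a
       * context that may sleep and take faults.  Callers must not assume the
       * atomic variant zeroed the uncopied tail. */
      static size_t copy_into_page(struct page *page, unsigned long offset,
                                   const char __user *buf, size_t bytes)
      {
              char *kaddr;
              size_t left;

              kaddr = kmap_atomic(page, KM_USER0);
              left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
              kunmap_atomic(kaddr, KM_USER0);

              if (left) {
                      /* slow path: may fault in the source pages */
                      kaddr = kmap(page);
                      left = __copy_from_user(kaddr + offset, buf, bytes);
                      kunmap(page);
              }
              return bytes - left;            /* bytes actually copied */
      }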
    • [PATCH] cpuset: remove extra cpuset_zone_allowed check in __alloc_pages · 43b0bc00
      Committed by Chris Wright
      This check is redundant with the one in wakeup_kswapd().
      Signed-off-by: Chris Wright <chrisw@sous-sol.org>
      Acked-by: Paul Jackson <pj@sgi.com>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      43b0bc00
    • [PATCH] pdflush: handle resume wakeups · d616e09a
      Committed by Andrew Morton
      pdflush is carefully designed to ensure that all wakeups have some
      corresponding work to do - if a woken-up pdflush thread discovers that it
      hasn't been given any work to do then this is considered an error.
      
      That all broke when swsusp came along - because a timer-delivered wakeup to a
      frozen pdflush thread will just get lost.  This causes the pdflush thread to
      get lost as well: the writeback timer is supposed to be re-armed by pdflush in
      process context, but pdflush doesn't execute the callout which does this.
      
      Fix that up by ignoring the return value from try_to_freeze(): just proceed,
      see if we have any work pending, and only go back to sleep if there is none
      (sketched after this entry).
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      d616e09a
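
      A sketch of the loop shape the fix describes: the return value of
      try_to_freeze() is ignored and the thread always re-checks for queued work
      before sleeping again.  The helpers marked hypothetical stand in for
      pdflush's real bookkeeping; only the control flow reflects the fix.

      #include <linux/sched.h>        /* try_to_freeze() in kernels of this era */

      /* hypothetical helpers standing in for pdflush's work-list handling */
      extern int  work_is_pending(void);
      extern void run_pending_work(void);
      extern void sleep_until_next_wakeup(void);

      static int writeback_thread(void *unused)
      {
              for (;;) {
                      try_to_freeze();        /* return value deliberately ignored */

                      if (work_is_pending())
                              run_pending_work();
                      else
                              sleep_until_next_wakeup();
              }
              return 0;
      }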
    • [PATCH] Allow migration of mlocked pages · e6a1530d
      Committed by Christoph Lameter
      Hugh clarified the role of VM_LOCKED.  So we can now implement page
      migration for mlocked pages.
      
      Allow the migration of mlocked pages.  This means that try_to_unmap must
      unmap mlocked pages in the migration case.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Acked-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      e6a1530d
    • [PATCH] page migration: Support a vma migration function · 7b2259b3
      Committed by Christoph Lameter
      Hooks for calling vma specific migration functions
      
      With this patch a vma may define a vma->vm_ops->migrate function.  That
      function may perform page migration on its own (some vmas may not contain
      page structs and therefore cannot be handled by regular page migration;
      pages in a vma may require special preparatory treatment before migration is
      possible; etc.).  Only mmap_sem is held when the migration function is
      called.  The migrate() function gets passed two sets of nodemasks describing
      the source and the target of the migration.  The flags parameter either
      contains
      
      MPOL_MF_MOVE	which means that only pages used exclusively by
      		the specified mm should be moved
      
      or
      
      MPOL_MF_MOVE_ALL which means that pages shared with other processes
      		should also be moved.
      
      The migration function returns 0 on success or an error condition.  An error
      condition will prevent regular page migration from occurring (a sketch of the
      hook follows this entry).
      
      On its own this patch cannot be included since there are no users for this
      functionality.  But it seems that the uncached allocator will need this
      functionality at some point.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      7b2259b3
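
      A hedged sketch of a driver-supplied migrate hook matching the signature
      implied above (a vma, two nodemasks, and MPOL_MF_* flags).  The prototype is
      inferred from this description, and my_vma_migrate/my_vm_ops are
      hypothetical names.

      #include <linux/mm.h>
      #include <linux/nodemask.h>
      #include <linux/mempolicy.h>

      /* Hypothetical vma-specific migration callback.  Called with mmap_sem
       * held; a non-zero return suppresses regular page migration. */
      static int my_vma_migrate(struct vm_area_struct *vma,
                                const nodemask_t *from, const nodemask_t *to,
                                unsigned long flags)
      {
              if (!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)))
                      return 0;

              /* ... move this vma's backing memory from 'from' nodes to 'to' ... */
              return 0;
      }

      static struct vm_operations_struct my_vm_ops = {
              .migrate = my_vma_migrate,      /* the hook added by this patch */
      };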
    • [PATCH] AOP_TRUNCATED_PAGE victims in read_pages() belong in the LRU · 9f1a3cfc
      Committed by Zach Brown
      Nick Piggin rightly pointed out that the introduction of AOP_TRUNCATED_PAGE
      to read_pages() was wrong to leave A_T_P victim pages in the page cache but
      not put them in the LRU.  Failing to do so hid them from the VM.
      
      A_T_P just means that the aop method unlocked the page rather than
      performing IO.  It would be very rare that the page was truncated between
      the unlock and testing A_T_P.  So we leave the pages in the LRU for likely
      reuse soon rather than backing them back out of the page cache.  We do this
      by matching the behaviour before the A_T_P introduction which added pages
      to the LRU regardless of what ->readpage() did.
      
      This doesn't include the unrelated cleanup in Nick's initial fix which
      changed read_pages() to return void to match its only caller's behaviour of
      ignoring errors.
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Zach Brown <zach.brown@oracle.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      9f1a3cfc
  7. 23 June 2006, 3 commits
    • [PATCH] Kill PF_SYNCWRITE flag · b31dc66a
      Committed by Jens Axboe
      A process flag to indicate whether we are doing sync io is incredibly
      ugly. It also causes performance problems when one does a lot of async
      io and then proceeds to sync it. Part of the io will go out as async,
      and the other part as sync. This causes a disconnect between the
      previously submitted io and the synced io.  For io schedulers such as CFQ,
      this causes lost merges and suboptimal scheduling behaviour.
      
      Remove PF_SYNCWRITE completely from the fsync/msync paths, and let
      the O_DIRECT path just directly indicate that the writes are sync
      by using WRITE_SYNC instead.
      Signed-off-by: Jens Axboe <axboe@suse.de>
      b31dc66a
    • [PATCH] More BUG_ON conversion · 125e1874
      Committed by Eric Sesterhenn
      Signed-off-by: Eric Sesterhenn <snakebyte@gmx.de>
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Bartlomiej Zolnierkiewicz <B.Zolnierkiewicz@elka.pw.edu.pl>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: James Bottomley <James.Bottomley@steeleye.com>
      Acked-by: "Salyzyn, Mark" <mark_salyzyn@adaptec.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      125e1874
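
      For context, the transformation this cleanup series applies tree-wide; a
      generic before/after pair rather than a hunk from this particular commit.

      #include <linux/kernel.h>

      /* Before the conversion: an open-coded check. */
      static void check_state_old(int broken)
      {
              if (broken)
                      BUG();
      }

      /* After the conversion: the equivalent, more compact BUG_ON() form. */
      static void check_state_new(int broken)
      {
              BUG_ON(broken);
      }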
    • [PATCH] Remove semi-softlockup from invalidate_mapping_pages · e0f23603
      Committed by NeilBrown
      If invalidate_mapping_pages is called to invalidate a very large mapping
      (e.g.  a very large block device) and if the only active page in that
      device is near the end (or at least, at a very large index), such as, say,
      the superblock of an md array, and if that page happens to be locked when
      invalidate_mapping_pages is called, then
      
        pagevec_lookup will return this page and, as it is locked, 'next' will be
        incremented and pagevec_lookup will be called again, and again, and again,
        while we count from 0 up to a very large number.
      
      We should really always set 'next' to 'page->index + 1' before going around
      the loop again, not just when the page isn't locked (sketched after this
      entry).
      
      Cc: "Steinar H. Gunderson" <sgunderson@bigfoot.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      e0f23603
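
      A hedged sketch of the loop shape the fix describes: 'next' is advanced to
      page->index + 1 unconditionally, whether or not the page could be locked, so
      a single locked page at a huge index can no longer stall the walk.  The
      function name is hypothetical, the locking and invalidation details are
      simplified, and TestSetPageLocked() is the trylock primitive of this era.

      #include <linux/pagemap.h>
      #include <linux/pagevec.h>

      /* Illustrative inner loop of an invalidate pass over 'mapping'. */
      static void invalidate_range_sketch(struct address_space *mapping,
                                          pgoff_t start, pgoff_t end)
      {
              struct pagevec pvec;
              pgoff_t next = start;
              int i;

              pagevec_init(&pvec, 0);
              while (next <= end &&
                     pagevec_lookup(&pvec, mapping, next, PAGEVEC_SIZE)) {
                      for (i = 0; i < pagevec_count(&pvec); i++) {
                              struct page *page = pvec.pages[i];

                              /* the fix: advance past this index unconditionally,
                               * even if we cannot lock the page right now */
                              next = page->index + 1;

                              if (!TestSetPageLocked(page)) {
                                      /* ... try to invalidate the page ... */
                                      unlock_page(page);
                              }
                      }
                      pagevec_release(&pvec);
              }
      }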