1. 04 December 2009 (7 commits)
  2. 03 December 2009 (1 commit)
  3. 20 November 2009 (1 commit)
    • FS-Cache: Handle pages pending storage that get evicted under OOM conditions · 201a1542
      Committed by David Howells
      Handle netfs pages that the vmscan algorithm wants to evict from the
      pagecache under OOM conditions, but that are still waiting to be written
      to the cache.  Under these conditions, vmscan calls the netfs's
      releasepage() function, asking whether a page can be discarded.
      
      The problem is typified by the following trace of a stuck process:
      
      	kslowd005     D 0000000000000000     0  4253      2 0x00000080
      	 ffff88001b14f370 0000000000000046 ffff880020d0d000 0000000000000007
      	 0000000000000006 0000000000000001 ffff88001b14ffd8 ffff880020d0d2a8
      	 000000000000ddf0 00000000000118c0 00000000000118c0 ffff880020d0d2a8
      	Call Trace:
      	 [<ffffffffa00782d8>] __fscache_wait_on_page_write+0x8b/0xa7 [fscache]
      	 [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34
      	 [<ffffffffa0078240>] ? __fscache_check_page_write+0x63/0x70 [fscache]
      	 [<ffffffffa00b671d>] nfs_fscache_release_page+0x4e/0xc4 [nfs]
      	 [<ffffffffa00927f0>] nfs_release_page+0x3c/0x41 [nfs]
      	 [<ffffffff810885d3>] try_to_release_page+0x32/0x3b
      	 [<ffffffff81093203>] shrink_page_list+0x316/0x4ac
      	 [<ffffffff8109372b>] shrink_inactive_list+0x392/0x67c
      	 [<ffffffff813532fa>] ? __mutex_unlock_slowpath+0x100/0x10b
      	 [<ffffffff81058df0>] ? trace_hardirqs_on_caller+0x10c/0x130
      	 [<ffffffff8135330e>] ? mutex_unlock+0x9/0xb
      	 [<ffffffff81093aa2>] shrink_list+0x8d/0x8f
      	 [<ffffffff81093d1c>] shrink_zone+0x278/0x33c
      	 [<ffffffff81052d6c>] ? ktime_get_ts+0xad/0xba
      	 [<ffffffff81094b13>] try_to_free_pages+0x22e/0x392
      	 [<ffffffff81091e24>] ? isolate_pages_global+0x0/0x212
      	 [<ffffffff8108e743>] __alloc_pages_nodemask+0x3dc/0x5cf
      	 [<ffffffff81089529>] grab_cache_page_write_begin+0x65/0xaa
      	 [<ffffffff8110f8c0>] ext3_write_begin+0x78/0x1eb
      	 [<ffffffff81089ec5>] generic_file_buffered_write+0x109/0x28c
      	 [<ffffffff8103cb69>] ? current_fs_time+0x22/0x29
      	 [<ffffffff8108a509>] __generic_file_aio_write+0x350/0x385
      	 [<ffffffff8108a588>] ? generic_file_aio_write+0x4a/0xae
      	 [<ffffffff8108a59e>] generic_file_aio_write+0x60/0xae
      	 [<ffffffff810b2e82>] do_sync_write+0xe3/0x120
      	 [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34
      	 [<ffffffff810b18e1>] ? __dentry_open+0x1a5/0x2b8
      	 [<ffffffff810b1a76>] ? dentry_open+0x82/0x89
      	 [<ffffffffa00e693c>] cachefiles_write_page+0x298/0x335 [cachefiles]
      	 [<ffffffffa0077147>] fscache_write_op+0x178/0x2c2 [fscache]
      	 [<ffffffffa0075656>] fscache_op_execute+0x7a/0xd1 [fscache]
      	 [<ffffffff81082093>] slow_work_execute+0x18f/0x2d1
      	 [<ffffffff8108239a>] slow_work_thread+0x1c5/0x308
      	 [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34
      	 [<ffffffff810821d5>] ? slow_work_thread+0x0/0x308
      	 [<ffffffff8104be91>] kthread+0x7a/0x82
      	 [<ffffffff8100beda>] child_rip+0xa/0x20
      	 [<ffffffff8100b87c>] ? restore_args+0x0/0x30
      	 [<ffffffff8102ef83>] ? tg_shares_up+0x171/0x227
      	 [<ffffffff8104be17>] ? kthread+0x0/0x82
      	 [<ffffffff8100bed0>] ? child_rip+0x0/0x20
      
      In the above backtrace, the following is happening:
      
       (1) A page storage operation is being executed by a slow-work thread
           (fscache_write_op()).
      
       (2) FS-Cache farms the operation out to the cache to perform
           (cachefiles_write_page()).
      
       (3) CacheFiles is then calling Ext3 to perform the actual write, using Ext3's
           standard write (do_sync_write()) under KERNEL_DS directly from the netfs
           page.
      
       (4) However, for Ext3 to perform the write, it must allocate some memory, in
           particular, it must allocate at least one page cache page into which it
           can copy the data from the netfs page.
      
       (5) Under OOM conditions, the memory allocator can't immediately come up with
           a page, so it uses vmscan to find something to discard
           (try_to_free_pages()).
      
       (6) vmscan finds a clean netfs page it might be able to discard (possibly the
           one it's trying to write out).
      
       (7) The netfs is called to throw the page away (nfs_release_page()) - but it's
           called with __GFP_WAIT, so the netfs decides to wait for the store to
           complete (__fscache_wait_on_page_write()).
      
       (8) This blocks a slow-work processing thread - possibly against itself.
      
      The system ends up stuck because it can't write out any netfs pages to the
      cache without allocating more memory.
      
      To avoid this, FS-Cache is made to cancel writes that aren't actually in
      the middle of being performed, which means that some data won't make it
      into the cache this time.  To support this, a new FS-Cache function,
      fscache_maybe_release_page(), is added to replace what the netfs
      releasepage() functions used to do with respect to the cache.
      
      The decisions fscache_maybe_release_page() makes are counted and displayed
      through /proc/fs/fscache/stats on a line labelled "VmScan".  There are four
      counters provided: "nos=N" - pages that weren't pending storage; "gon=N" -
      pages that were pending storage when we first looked, but weren't by the time
      we got the object lock; "bsy=N" - pages that we ignored as they were actively
      being written when we looked; and "can=N" - pages that we cancelled the storage
      of.
      
      What I'd really like to do is alter the behaviour of the cancellation
      heuristics, depending on how necessary it is to expel pages.  If there are
      plenty of other pages that aren't waiting to be written to the cache that
      could be ejected first, then it would be nice to hold up on immediate
      cancellation of cache writes - but I don't see a way of doing that.
      Signed-off-by: David Howells <dhowells@redhat.com>
  4. 11 November 2009 (1 commit)
  5. 26 October 2009 (2 commits)
  6. 24 October 2009 (2 commits)
    • NFSv4: Fix a bug when the server returns NFS4ERR_RESOURCE · 52567b03
      Committed by Trond Myklebust
      RFC 3530 states that when we receive the error NFS4ERR_RESOURCE, we are not
      supposed to bump the sequence number on OPEN, LOCK, LOCKU, CLOSE, etc.
      operations. The problem is that we map that error into EREMOTEIO in the XDR
      layer, so the NFSv4 middle-layer routines such as seqid_mutating_err()
      and nfs_increment_seqid() don't recognise it.
      
      The fix is to defer the mapping until after the middle layers have
      processed the error.
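
      A simplified illustration (not the kernel's exact list of errors) of why
      the premature mapping hides the error: the seqid bookkeeping tests the
      raw NFSv4 status code, so once the XDR layer has already turned
      NFS4ERR_RESOURCE into -EREMOTEIO the "don't bump the seqid" case can
      never match.

      	#include <linux/nfs4.h>

      	static int example_seqid_mutating_err(u32 nfs4_status)
      	{
      		switch (nfs4_status) {
      		case NFS4ERR_RESOURCE:		/* RFC 3530: seqid must not change */
      		case NFS4ERR_BAD_SEQID:
      		case NFS4ERR_STALE_CLIENTID:
      			return 0;		/* do not bump the sequence number */
      		default:
      			return 1;		/* the operation mutates the seqid */
      		}
      	}
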
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • nfs: Panic when commit fails · a8b40bc7
      Committed by Terry Loftin
      Actually pass the NFS_FILE_SYNC option to the server to avoid a
      panic in nfs_direct_write_complete() when a commit fails.

      At the end of an NFS write, if the NFS commit fails, all the writes
      will be rescheduled.  They are supposed to be rescheduled as NFS_FILE_SYNC
      writes, but the rpc_task structure is not completely initialized and so
      the option is not passed.  When the rescheduled writes complete, the
      return indicates that they are NFS_UNSTABLE and we try to do another
      commit.  This leads to a panic because the commit data structure pointer
      was set to NULL in the initial (failed) commit attempt.
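
      A minimal sketch of the fix's intent; the structure and function below
      are illustrative rather than the actual nfs_direct code, while
      NFS_UNSTABLE and NFS_FILE_SYNC are the real stable_how values sent to
      the server.

      	#include <linux/nfs.h>	/* enum nfs3_stable_how */

      	struct example_write_args {
      		enum nfs3_stable_how stable;	/* how the server must commit the data */
      	};

      	static void example_reschedule_after_failed_commit(struct example_write_args *args)
      	{
      		/* Left at NFS_UNSTABLE, the resent writes complete as unstable,
      		 * another commit is attempted, and the already-freed commit data
      		 * structure is dereferenced, causing the panic.  Marking them
      		 * NFS_FILE_SYNC makes the server sync the data itself, so no
      		 * further commit is needed. */
      		args->stable = NFS_FILE_SYNC;
      	}
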
      Signed-off-by: Terry Loftin <terry.loftin@hp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  7. 22 October 2009 (1 commit)
  8. 13 October 2009 (1 commit)
  9. 08 October 2009 (1 commit)
    • NFSv4: Kill nfs4_renewd_prepare_shutdown() · 3050141b
      Committed by Trond Myklebust
      The NFSv4 renew daemon is shared between all active super blocks that refer
      to a particular NFS server, so it is wrong to be shutting it down in
      nfs4_kill_super every time a super block is destroyed.
      
      This patch therefore kills nfs4_renewd_prepare_shutdown altogether, and
      leaves it up to nfs4_shutdown_client() to also shut down the renew daemon
      by means of the existing call to nfs4_kill_renewd().
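
      A simplified sketch of the resulting teardown ordering; the body below
      is illustrative, but nfs4_kill_renewd() is the existing call mentioned
      above (declared in NFS's internal headers).  The renew daemon now dies
      with the shared nfs_client rather than with each super block.

      	#include <linux/nfs_fs_sb.h>	/* struct nfs_client */

      	void nfs4_kill_renewd(struct nfs_client *clp);	/* existing NFS-internal API */

      	static void example_nfs4_shutdown_client(struct nfs_client *clp)
      	{
      		/* Stop the shared RENEW timer exactly once, when the shared
      		 * nfs_client goes away; per-superblock teardown no longer
      		 * touches it. */
      		nfs4_kill_renewd(clp);
      		/* ... remaining per-client teardown ... */
      	}
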
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  10. 07 October 2009 (5 commits)
  11. 28 September 2009 (1 commit)
  12. 25 September 2009 (1 commit)
  13. 24 September 2009 (5 commits)
  14. 23 September 2009 (1 commit)
  15. 22 September 2009 (1 commit)
  16. 21 September 2009 (3 commits)
  17. 16 September 2009 (3 commits)
  18. 11 September 2009 (1 commit)
  19. 09 September 2009 (2 commits)