1. 10 December 2019: 12 commits
  2. 08 December 2019: 2 commits
  3. 01 December 2019: 2 commits
  4. 20 November 2019: 1 commit
  5. 16 November 2019: 1 commit
    • new helper: lookup_positive_unlocked() · 6c2d4798
      Al Viro authored
      Most of the callers of lookup_one_len_unlocked() treat negatives as
      ERR_PTR(-ENOENT).  Provide a helper that does just that.  Note that
      a pinned positive dentry remains positive - its ->d_inode is stable,
      etc.; a pinned _negative_ dentry can become positive at any point
      unless you hold its parent's lock at least shared.  So users of
      lookup_one_len_unlocked() need to be careful;
      lookup_positive_unlocked() is safer, and it is what the callers end
      up open-coding anyway.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
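      A minimal sketch of such a helper (paraphrased from the description
      above, not necessarily the exact upstream code; the upstream version
      inspects ->d_flags with an acquire load rather than calling
      d_is_negative()):

          /* Unlocked lookup that folds a negative dentry into -ENOENT. */
          struct dentry *lookup_positive_unlocked(const char *name,
                                                  struct dentry *base,
                                                  int len)
          {
                  struct dentry *ret = lookup_one_len_unlocked(name, base, len);

                  /* A pinned negative dentry may turn positive at any
                   * moment, so decide once, here, and drop negatives. */
                  if (!IS_ERR(ret) && d_is_negative(ret)) {
                          dput(ret);
                          ret = ERR_PTR(-ENOENT);
                  }
                  return ret;
          }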
  6. 13 November 2019: 2 commits
  7. 09 November 2019: 5 commits
  8. 12 October 2019: 1 commit
  9. 10 October 2019: 1 commit
    • nfsd4: fix up replay_matches_cache() · 6e73e92b
      Scott Mayhew authored
      When running an NFS stress test, I see quite a few cached replies
      that don't match up with the actual request.  The first comment in
      replay_matches_cache() makes sense, but the code doesn't seem to
      match it... so fix it.

      This isn't exactly a bugfix, as the server isn't required to catch
      every case of a false retry.  So we may as well do this; but if it
      is fixing a problem, then that suggests there's a client bug.

      Fixes: 53da6a53 ("nfsd4: catch some false session retries")
      Signed-off-by: Scott Mayhew <smayhew@redhat.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
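      To illustrate what a "false retry" check involves, here is a
      hypothetical, heavily simplified model - the struct and field names
      below are invented for illustration and are not the kernel's:

          #include <stdbool.h>

          /* Cached state of an NFSv4.1 session slot. */
          struct slot_cache {
                  unsigned int opcnt;     /* ops in the cached compound */
                  unsigned int first_op;  /* first opcode after SEQUENCE */
                  bool cachethis;         /* client asked to cache reply */
          };

          /* The retried request as decoded from the wire. */
          struct retry_req {
                  unsigned int opcnt;
                  unsigned int first_op;
                  bool cachethis;
          };

          /* A retry may be answered from the cache only if it really is
           * the same compound that produced the cached reply; anything
           * else is a false retry, i.e. a client bug. */
          static bool replay_matches_cache(const struct retry_req *req,
                                           const struct slot_cache *slot)
          {
                  if (req->opcnt != slot->opcnt)
                          return false;
                  if (req->first_op != slot->first_op)
                          return false;
                  return req->cachethis == slot->cachethis;
          }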
  10. 09 October 2019: 2 commits
  11. 24 September 2019: 2 commits
  12. 21 September 2019: 2 commits
    • nfsd: degrade slot-count more gracefully as allocation nears exhaustion. · 2030ca56
      NeilBrown authored
      The original code in nfsd4_get_drc_mem() would hand out 30 slots
      (approximately NFSD_MAX_MEM_PER_SESSION bytes at slightly over 2K
      per slot) to each requesting client until it ran out of space, then
      it would possibly give one last client a reduced allocation, and
      then fail the allocation.

      Since commit de766e57 ("nfsd: give out fewer session slots as
      limit approaches") the last 90 slots are given out to about 12
      clients with quickly reducing slot counts (better than just 3
      clients).  This still seems unnecessarily hasty.
      A subsequent patch allows over-allocation so every client gets at
      least one slot, but a single slot might be a bit restrictive.

      The requested number of nfsd threads is the best guide we have to
      the expected number of clients, so use that - provided it is at
      least 8.

      256 threads on a 256MB machine - which is a lot of threads for a
      tiny machine - would result in nfsd_drc_max_mem being 2MB, so 8K
      (3 slots) would be available for the first client, and over 200
      clients would get more than 1 slot.  So I don't think this change
      will be too debilitating on poorly configured machines, though it
      does mean that a sensible configuration is a little more important.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
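      A condensed sketch of the resulting allocation logic (simplified
      from nfsd4_get_drc_mem(); locking and the running nfsd_drc_mem_used
      bookkeeping are omitted, and the constant below is illustrative):

          /* Illustrative value: roughly 30 slots at ~2K each. */
          #define NFSD_MAX_MEM_PER_SESSION (64 * 1024)

          static unsigned int slots_for_client(unsigned long total_avail,
                                               unsigned long slotsize,
                                               unsigned int requested,
                                               unsigned int nrthreads)
          {
                  unsigned long avail = NFSD_MAX_MEM_PER_SESSION;
                  unsigned int scale_factor;
                  unsigned int num;

                  /* Thread count is our best guess at the number of
                   * clients, but never scale by less than 8. */
                  scale_factor = nrthreads > 8 ? nrthreads : 8;

                  /* Cap each client at a 1/scale_factor share of what
                   * is still available, so the pool degrades gradually
                   * instead of running dry all at once. */
                  if (avail > total_avail / scale_factor)
                          avail = total_avail / scale_factor;

                  num = avail / slotsize;
                  if (num > requested)
                          num = requested;
                  return num > 1 ? num : 1;  /* see the next commit */
          }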
    • nfsd: handle drc over-allocation gracefully. · 7f49fd5d
      NeilBrown authored
      Currently, if there are more clients than allowed for by the space
      allocation in set_max_drc(), we fail a CREATE_SESSION request with
      NFS4ERR_DELAY.
      This means that the client retries indefinitely, which isn't a
      user-friendly response.

      The RFC requires NFS4ERR_NOSPC, but that would at best result in a
      clean failure on the client, which is not much more friendly.

      The current space allocation is a best guess and doesn't provide
      any guarantees; we could still run out of space when trying to
      allocate drc space.

      So fail more gracefully - always give out at least one slot.  If
      all clients used all the space in all their slots, we might start
      getting memory pressure, but that is possible anyway.

      So ensure 'num' is always at least 1, and remove the test for it
      being zero.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
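      The gist of the change, as a small self-contained sketch (assumed
      names, not the exact kernel code; the surrounding DRC accounting is
      elided):

          #include <stdio.h>

          /* Slot handout that never returns zero.  Previously a zero
           * result made CREATE_SESSION fail with NFS4ERR_DELAY; now
           * every client is granted at least one slot. */
          static unsigned int get_drc_slots(unsigned long avail,
                                            unsigned long slotsize,
                                            unsigned int requested)
          {
                  unsigned int num = avail / slotsize;

                  if (num > requested)
                          num = requested;
                  return num ? num : 1;  /* over-allocate, don't fail */
          }

          int main(void)
          {
                  /* Budget exhausted: we still grant one slot. */
                  printf("%u\n", get_drc_slots(0, 2048, 30));  /* 1 */
                  return 0;
          }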
  13. 10 September 2019: 6 commits
  14. 06 September 2019: 1 commit