1. 24 Apr 2008, 5 commits
    • lockd: clean up __nsm_find() · a95e56e7
      J. Bruce Fields authored
      Use list_for_each_entry().  Also, in keeping with kernel style, make the
      normal case (kzalloc succeeds) unindented and handle the abnormal case
      with a goto.
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
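      A minimal sketch of that shape, using simplified stand-in fields rather
      than the real nsm_handle (nsm_sketch_find and the struct layout here are
      illustrative, not the actual lockd code):

          struct nsm_handle {                     /* simplified stand-in */
                  struct list_head sm_link;
                  const char      *sm_name;
                  atomic_t         sm_count;
          };

          static LIST_HEAD(nsm_handles);

          static struct nsm_handle *nsm_sketch_find(const char *hostname)
          {
                  struct nsm_handle *nsm;

                  list_for_each_entry(nsm, &nsm_handles, sm_link)
                          if (!strcmp(nsm->sm_name, hostname)) {
                                  atomic_inc(&nsm->sm_count);
                                  return nsm;     /* already known */
                          }

                  nsm = kzalloc(sizeof(*nsm), GFP_KERNEL);
                  if (!nsm)
                          goto out;               /* abnormal case: one goto */

                  /* normal case continues unindented */
                  nsm->sm_name = hostname;  /* sketch: assumes hostname outlives handle */
                  atomic_set(&nsm->sm_count, 1);
                  list_add(&nsm->sm_link, &nsm_handles);
          out:
                  return nsm;
          }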
    • lockd: fix race in nlm_release() · 164f98ad
      J. Bruce Fields authored
      The sm_count is decremented to zero but left on the nsm_handles list.
      So in the space between decrementing sm_count and acquiring nsm_mutex,
      it is possible for another task to find this nsm_handle, increment the
      use count and then enter nsm_release itself.
      
      Thus there's nothing to prevent the nsm being freed before we acquire
      nsm_mutex here.
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
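      A sketch of the fix this implies (simplified, not the exact patch): take
      nsm_mutex before dropping the last reference, so the handle can no longer
      be found and re-referenced between the decrement and the free:

          static void nsm_release(struct nsm_handle *nsm)
          {
                  if (!nsm)
                          return;

                  /* the racy code did atomic_dec_and_test() out here,
                   * and only then took nsm_mutex */
                  mutex_lock(&nsm_mutex);
                  if (atomic_dec_and_test(&nsm->sm_count)) {
                          list_del(&nsm->sm_link);  /* now unreachable */
                          kfree(nsm);
                  }
                  mutex_unlock(&nsm_mutex);
          }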
    • lockd: fix sparse warning in svcshare.c · 93245d11
      Harvey Harrison authored
      fs/lockd/svcshare.c:74:50: warning: Using plain integer as NULL pointer
      Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
      Cc: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
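      The class of one-line fix involved, shown on a hypothetical
      pointer-returning helper (the actual hunk lives in svcshare.c):

          static struct nlm_share *lookup_share(void)   /* hypothetical */
          {
                  /* before: return 0;  -- sparse: plain integer as NULL */
                  return NULL;
          }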
    • NLM: Convert lockd to use kthreads · d751a7cd
      Jeff Layton authored
      Have lockd_up start lockd using kthread_run. With this change,
      lockd_down now blocks until lockd actually exits, so there is no longer
      any need for the waitqueue code at the end of lockd_down. This also means
      that only one lockd can be running at a time, which simplifies the code
      within lockd's main loop.
      
      This also adds a check for kthread_should_stop in the main loop of
      nlmsvc_retry_blocked and after that function returns. There's no sense
      continuing to retry blocks if lockd is coming down anyway.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
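      A minimal sketch of the resulting pattern, with the request-handling
      details elided (lockd_up/lockd_down here are stripped to the kthread
      mechanics only):

          static struct task_struct *nlmsvc_task;

          static int lockd(void *dummy)
          {
                  while (!kthread_should_stop()) {
                          /* ... wait for and dispatch NLM requests ... */
                          nlmsvc_retry_blocked();  /* bails out early on stop */
                  }
                  return 0;
          }

          int lockd_up(void)
          {
                  nlmsvc_task = kthread_run(lockd, NULL, "lockd");
                  if (IS_ERR(nlmsvc_task))
                          return PTR_ERR(nlmsvc_task);
                  return 0;
          }

          void lockd_down(void)
          {
                  kthread_stop(nlmsvc_task);  /* blocks until lockd() returns */
          }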
    • knfsd: Remove NLM_HOST_MAX and associated logic. · 1447d25e
      NeilBrown authored
      Lockd caches information about hosts that have recently held locks to
      expedite the taking of further locks.
      
      It periodically discards this information for hosts that have not been
      used for a few minutes.
      
      lockd currently has a value NLM_HOST_MAX, and changes the 'garbage
      collection' behaviour when the number of hosts exceeds this threshold.
      
      However, its behaviour is strange and likely not what was intended.
      When the number of hosts exceeds the max, it scans *less* often (every
      2 minutes vs every minute) and allows unused host information to
      remain around longer (5 minutes instead of 2).
      
      Having this limit is of dubious value anyway, and getting it wrong has
      caused no evident harm, so remove the limit altogether.  We go with the
      larger values (discard hosts unused for 5 minutes, scanning every 2
      minutes) as they are probably safer.
      
      Maybe the periodic garbage collection should be replaced with a
      'shrinker' handler so that we just respond to memory pressure...
      Acked-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
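      A sketch of the resulting single policy (simplified to one list; the
      real code hashes hosts, and nlm_destroy_host/h_expires are lightly
      renamed here): on each scan, every two minutes, discard hosts that are
      unreferenced and past their expiry stamp:

          #define NLM_HOST_EXPIRE   (300 * HZ)   /* unused for 5 minutes */
          #define NLM_HOST_COLLECT  (120 * HZ)   /* scan every 2 minutes */

          static unsigned long next_gc;

          static void nlm_gc_hosts(void)
          {
                  struct nlm_host *host, *next;

                  list_for_each_entry_safe(host, next, &nlm_hosts, h_link)
                          if (atomic_read(&host->h_count) == 0 &&
                              time_after_eq(jiffies, host->h_expires))
                                  nlm_destroy_host(host);

                  next_gc = jiffies + NLM_HOST_COLLECT;
          }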
2. 22 Feb 2008, 1 commit
3. 11 Feb 2008, 4 commits
    • NLM: don't requeue block if it was invalidated while GRANT_MSG was in flight · c64e80d5
      Jeff Layton authored
      It's possible for lockd to catch a SIGKILL while a GRANT_MSG callback
      is in flight. If this happens we don't want lockd to insert the block
      back into the nlm_blocked list.
      
      This helps that situation, but there's still a possible race. Fixing
      that will mean adding real locking for nlm_blocked.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
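      A sketch of the shape of the fix (condensed; the timeout value and the
      surrounding callback logic are elided): the callback only reinserts the
      block if it is still linked on nlm_blocked:

          static void nlmsvc_grant_callback(struct rpc_task *task, void *data)
          {
                  struct nlm_rqst  *call  = data;
                  struct nlm_block *block = call->a_block;

                  /* if lockd caught SIGKILL, the block was already
                   * unlinked; don't put it back on nlm_blocked */
                  if (!list_empty(&block->b_list))
                          nlmsvc_insert_block(block, 10 * HZ);
          }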
    • NLM: don't reattempt GRANT_MSG when there is already an RPC in flight · 9706501e
      Jeff Layton authored
      With the current scheme in nlmsvc_grant_blocked, we can end up with more
      than one GRANT_MSG callback for a block in flight. Right now, we requeue
      the block unconditionally so that a GRANT_MSG callback is done again in
      30s. If the client is unresponsive, it can take more than 30s for the
      call already in flight to time out.
      
      There's no benefit to having more than one GRANT_MSG RPC queued up at a
      time, so put it on the list with a timeout of NLM_NEVER before doing the
      RPC call. If the RPC call submission fails, we requeue it with a short
      timeout. If it works, then nlmsvc_grant_callback will end up requeueing
      it with a shorter timeout after it completes.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
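      A condensed sketch of that ordering, from the submission side (error
      handling around the call is trimmed):

          /* park the block: no second GRANT_MSG while this one is out */
          nlmsvc_insert_block(block, NLM_NEVER);

          /* submit the async callback; on failure, retry on a short fuse */
          if (nlm_async_call(block->b_call, NLMPROC_GRANTED_MSG,
                             &nlmsvc_grant_ops) < 0)
                  nlmsvc_insert_block(block, 10 * HZ);
          /* on success, nlmsvc_grant_callback() requeues with a shorter
           * timeout once the RPC completes */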
    • NLM: have server-side RPC clients default to soft RPC tasks · 90bd17c8
      Jeff Layton authored
      Now that it no longer does an RPC ping, lockd always ends up queueing
      an RPC task for the GRANT_MSG callback. But, it also requeues the block
      for later attempts. Since these are hard RPC tasks, if the client we're
      calling back goes unresponsive the GRANT_MSG callbacks can stack up in
      the RPC queue.
      
      Fix this by making server-side RPC clients default to soft RPC tasks.
      lockd requeues the block anyway, so this should be OK.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
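      One way the inverted default can look inside RPC client creation (a
      sketch, not the verbatim patch): soft behaviour is assumed unless the
      caller passes RPC_CLNT_CREATE_HARDRTRY:

          /* in rpc_new_client(), sketched: */
          clnt->cl_softrtry = 1;                  /* soft tasks by default */
          if (args->flags & RPC_CLNT_CREATE_HARDRTRY)
                  clnt->cl_softrtry = 0;          /* caller wants hard tasks */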
    • NLM: set RPC_CLNT_CREATE_NOPING for NLM RPC clients · 031fd3aa
      Jeff Layton authored
      It's currently possible for an unresponsive NLM client to completely
      lock up a server's lockd. The scenario is something like this:
      
      1) client1 (or a process on the server) takes a lock on a file
      2) client2 tries to take a blocking lock on the same file and
         awaits the callback
      3) client2 goes unresponsive (plug pulled, network partition, etc)
      4) client1 releases the lock
      
      ...at that point the server's lockd will try to queue up a GRANT_MSG
      callback for client2, but first it requeues the block with a timeout of
      30s. nlm_async_call will attempt to bind the RPC client to client2 and
      will call rpc_ping. rpc_ping entails a sync RPC call and if client2 is
      unresponsive it will take around 60s for that to time out. Once it times
      out, it's already time to retry the block and the whole process repeats.
      
      Once in this situation, nlmsvc_retry_blocked will never return until
      the host starts responding again. lockd won't service new calls.
      
      Fix this by skipping the RPC ping on NLM RPC clients. This makes
      nlm_async_call return quickly when called.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
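      A condensed sketch of the client setup with the new flag (in the spirit
      of nlm_bind_host; the unrelated create arguments are elided):

          struct rpc_create_args args = {
                  .protocol   = host->h_proto,
                  .address    = (struct sockaddr *)&host->h_addr,
                  .addrsize   = sizeof(host->h_addr),
                  .servername = host->h_name,
                  .program    = &nlm_program,
                  .version    = host->h_version,
                  .authflavor = RPC_AUTH_UNIX,
                  .flags      = RPC_CLNT_CREATE_NOPING,  /* no sync NULL ping */
          };
          struct rpc_clnt *clnt = rpc_create(&args);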
4. 02 Feb 2008, 10 commits
5. 30 Jan 2008, 5 commits
6. 10 Oct 2007, 3 commits
7. 27 Sep 2007, 1 commit
8. 27 Jul 2007, 1 commit
9. 18 Jul 2007, 2 commits
    • knfsd: lockd: nfsd4: use same grace period for lockd and nfsd4 · 9a8db97e
      Marc Eshel authored
      Both lockd and (in the nfsv4 case) nfsd enforce a "grace period" after reboot,
      during which clients may reclaim locks from the previous server instance, but
      may not acquire new locks.
      
      Currently the lockd and nfsd enforce grace periods of different lengths.  This
      may cause problems when we reboot a server with both v2/v3 and v4 clients.
      For example, if the lockd grace period is shorter (as is likely the case),
      then a v3 client might acquire a new lock that conflicts with a lock already
      held (but not yet reclaimed) by a v4 client.
      
      This patch calculates a lease time that lockd and nfsd can both use.
      Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
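      A sketch of the shape of the change, with hypothetical names (the real
      patch plumbs a shared helper between fs/lockd and fs/nfsd): both sides
      derive their grace period from one computed value:

          /* hypothetical shared helper, called by both lockd and nfsd */
          unsigned long nfs_common_grace_period(void)
          {
                  /* use the longer of the two former defaults so neither
                   * side grants new locks while the other still expects
                   * reclaims */
                  return max(nlm_grace_period, nfsd4_lease_time) * HZ;
          }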
    • Freezer: make kernel threads nonfreezable by default · 83144186
      Rafael J. Wysocki authored
      Currently, the freezer treats all tasks as freezable, except for the kernel
      threads that explicitly set the PF_NOFREEZE flag for themselves.  This
      approach is problematic, since it requires every kernel thread to either
      set PF_NOFREEZE explicitly, or call try_to_freeze(), even if it doesn't
      care for the freezing of tasks at all.
      
      It seems better to only require the kernel threads that want to or need to
      be frozen to use some freezer-related code and to remove any
      freezer-related code from the other (nonfreezable) kernel threads, which is
      done in this patch.
      
      The patch causes all kernel threads to be nonfreezable by default (i.e. to
      have PF_NOFREEZE set by default) and introduces the set_freezable()
      function that should be called by the freezable kernel threads in order to
      unset PF_NOFREEZE.  It also makes all of the currently freezable kernel
      threads call set_freezable(), so it shouldn't cause any (intentional)
      change of behaviour to appear.  Additionally, it updates documentation to
      describe the freezing of tasks more accurately.
      
      [akpm@linux-foundation.org: build fixes]
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Nigel Cunningham <nigel@nigel.suspend2.net>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
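      A sketch of what an opted-in thread looks like after this change
      (set_freezable() and try_to_freeze() are the real freezer calls; the
      thread body is illustrative):

          static int my_worker(void *data)
          {
                  set_freezable();         /* clear PF_NOFREEZE: opt in */

                  while (!kthread_should_stop()) {
                          try_to_freeze(); /* park here during suspend */
                          /* ... do one unit of work, then sleep ... */
                  }
                  return 0;
          }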
10. 11 Jul 2007, 4 commits
11. 15 May 2007, 3 commits
12. 09 May 2007, 1 commit