1. 09 Feb 2015 (9 commits)
2. 06 Feb 2015 (1 commit)
3. 04 Feb 2015 (2 commits)
4. 31 Jan 2015 (1 commit)
5. 30 Jan 2015 (20 commits)
6. 25 Jan 2015 (2 commits)
7. 08 Jan 2015 (1 commit)
8. 10 Dec 2014 (4 commits)
    • sunrpc/cache: convert to use string_escape_str() · 1b2e122d
      Authored by Andy Shevchenko
      There is a nice kernel helper to escape a given string according to the
      provided rules. Let's use it instead of a custom approach.
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      [bfields@redhat.com: fix length calculation]
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
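      For reference, a minimal sketch of how the helper (declared in
      include/linux/string_helpers.h) can be used for this kind of escaping. The
      wrapper name and the exact flag/character choices are illustrative, not the
      precise cache.c change, and the return-value semantics vary slightly across
      kernel versions:

      #include <linux/string_helpers.h>

      /*
       * Illustrative only: escape 'str' into 'buf', octal-escaping just
       * backslash, space, newline and tab. string_escape_str() returns the
       * length the escaped output needs, which may exceed 'size' when the
       * destination buffer is too small.
       */
      static int escape_qword(const char *str, char *buf, size_t size)
      {
              return string_escape_str(str, buf, size, ESCAPE_OCTAL, "\\ \n\t");
      }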
    • sunrpc: only call test_bit once in svc_xprt_received · acf06a7f
      Authored by Jeff Layton
      ...move the WARN_ON_ONCE inside the following if block, since both check
      the same condition.
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
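      The resulting pattern is roughly the following condensed, illustrative
      sketch (not the exact upstream diff): test_bit() is called once and its
      result guards both the warning and the early return.

      static void svc_xprt_received(struct svc_xprt *xprt)
      {
              if (!test_bit(XPT_BUSY, &xprt->xpt_flags)) {
                      /* the caller should have marked the transport busy already */
                      WARN_ON_ONCE(1);
                      return;
              }
              /* ... clear XPT_BUSY and requeue the transport ... */
      }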
    • sunrpc: add some tracepoints around enqueue and dequeue of svc_xprt · 83a712e0
      Authored by Jeff Layton
      These were useful when I was tracking down a race condition between
      svc_xprt_do_enqueue and svc_get_next_xprt.
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
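      As an illustration of the mechanism (the real definitions live in
      include/trace/events/sunrpc.h with a richer field layout, so treat this as
      a simplified, hypothetical declaration), such a tracepoint is declared with
      the TRACE_EVENT() machinery and then fired from the enqueue/dequeue paths:

      /* sunrpc_example.h - simplified, hypothetical trace event header */
      #undef TRACE_SYSTEM
      #define TRACE_SYSTEM sunrpc

      #if !defined(_TRACE_SUNRPC_EXAMPLE_H) || defined(TRACE_HEADER_MULTI_READ)
      #define _TRACE_SUNRPC_EXAMPLE_H

      #include <linux/sunrpc/svc_xprt.h>
      #include <linux/tracepoint.h>

      TRACE_EVENT(svc_xprt_dequeue,
              TP_PROTO(struct svc_xprt *xprt),
              TP_ARGS(xprt),
              TP_STRUCT__entry(
                      __field(struct svc_xprt *, xprt)
                      __field(unsigned long, flags)
              ),
              TP_fast_assign(
                      __entry->xprt = xprt;
                      __entry->flags = xprt->xpt_flags;
              ),
              TP_printk("xprt=%p flags=%lx", __entry->xprt, __entry->flags)
      );

      #endif /* _TRACE_SUNRPC_EXAMPLE_H */

      #include <trace/define_trace.h>

      The corresponding trace_svc_xprt_dequeue(xprt) call then goes at the point
      where a transport is handed to a server thread.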
    • sunrpc: convert to lockless lookup of queued server threads · b1691bc0
      Authored by Jeff Layton
      Testing has shown that the pool->sp_lock can be a bottleneck on a busy
      server. Every time data is received on a socket, the server must take
      that lock in order to dequeue a thread from the sp_threads list.
      
      Address this problem by eliminating the sp_threads list (which contains
      threads that are currently idle) and replacing it with a RQ_BUSY flag in
      svc_rqst. This allows us to walk the sp_all_threads list under the
      rcu_read_lock and find a suitable thread for the xprt by doing a
      test_and_set_bit.
      
      Note, however, that we still have a potential atomicity problem with this
      approach. We don't want svc_xprt_do_enqueue to set the rqst->rq_xprt
      pointer unless a test_and_set_bit of RQ_BUSY returned zero (which
      indicates that the thread was idle). But by the time we check that, the
      bit could already have been flipped by a waking thread.
      
      To address this, we acquire a new per-rqst spinlock (rq_lock) and take
      that before doing the test_and_set_bit. If that returns false, then we
      can set rq_xprt and drop the spinlock. Then, when the thread wakes up,
      it must set the bit under the same spinlock and can trust that if it was
      already set then the rq_xprt is also properly set.
      
      With this scheme, the case where we have an idle thread no longer needs
      to take the highly contended pool->sp_lock at all, and that removes the
      bottleneck.
      
      That still leaves one issue: what about the case where we walk the whole
      sp_all_threads list and don't find an idle thread? Because the search is
      lockless, it's possible for the queueing to race with a thread that is
      going to sleep. To address that, we queue the xprt and then search again.
      
      If we find an idle thread at that point, we can't attach the xprt to it
      directly since that might race with a different thread waking up and
      finding it.  All we can do is wake the idle thread back up and let it
      attempt to find the now-queued xprt.
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
      Tested-by: Chris Worley <chris.worley@primarydata.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
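      A condensed sketch of the enqueue path described above. The real
      svc_xprt_do_enqueue() in net/sunrpc/svc_xprt.c also handles the
      queue-then-search-again fallback, statistics, and wakeup details, so this
      is illustrative rather than the exact upstream code:

      #include <linux/sunrpc/svc.h>
      #include <linux/sunrpc/svc_xprt.h>

      static void svc_xprt_do_enqueue_sketch(struct svc_xprt *xprt,
                                             struct svc_pool *pool)
      {
              struct svc_rqst *rqstp;

              rcu_read_lock();
              list_for_each_entry_rcu(rqstp, &pool->sp_all_threads, rq_all) {
                      /*
                       * rq_lock closes the race with a thread waking up on its
                       * own: rq_xprt is only set if RQ_BUSY was still clear
                       * when we claimed the thread.
                       */
                      spin_lock_bh(&rqstp->rq_lock);
                      if (test_and_set_bit(RQ_BUSY, &rqstp->rq_flags)) {
                              /* already busy; keep searching */
                              spin_unlock_bh(&rqstp->rq_lock);
                              continue;
                      }
                      /* claimed an idle thread: hand it the transport */
                      rqstp->rq_xprt = xprt;
                      svc_xprt_get(xprt);
                      spin_unlock_bh(&rqstp->rq_lock);

                      wake_up_process(rqstp->rq_task);
                      rcu_read_unlock();
                      return;
              }
              rcu_read_unlock();

              /*
               * No idle thread was found: queue the xprt on the pool and search
               * once more, only waking (not assigning to) any thread found then.
               */
      }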