1. 29 Aug 2014, 1 commit
    • SUNRPC: Fix compile on non-x86 · ae89254d
      Authored by J. Bruce Fields
      current_task appears to be x86-only, oops.
      
      Let's just delete this check entirely:
      
      Any developer that adds a new user without setting rq_task will get a
      crash the first time they test it.  I also don't think there are
      normally any important locks held here, and I can't see any other reason
      why killing a server thread would bring the whole box down.
      
      So the effort to fail gracefully here looks like overkill.
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Fixes: 983c6844 "SUNRPC: get rid of the request wait queue"
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      ae89254d
  2. 18 Aug 2014, 5 commits
  3. 30 Jul 2014, 2 commits
  4. 31 May 2014, 1 commit
  5. 23 May 2014, 2 commits
  6. 28 Mar 2014, 1 commit
  7. 10 Feb 2014, 1 commit
  8. 17 Feb 2013, 2 commits
  9. 24 Jan 2013, 1 commit
  10. 05 Nov 2012, 3 commits
  11. 22 Aug 2012, 7 commits
  12. 21 Aug 2012, 2 commits
    • svcrpc: fix svc_xprt_enqueue/svc_recv busy-looping · d10f27a7
      Authored by J. Bruce Fields
      The rpc server tries to ensure that there will be room to send a reply
      before it receives a request.
      
      It does this by tracking, in xpt_reserved, an upper bound on the total
      size of the replies that it has already committed to for the socket.
      
      Currently it is adding in the estimate for a new reply *before* it
      checks whether there is space available.  If it finds that there is not
      space, it then subtracts the estimate back out.
      
      This may lead the subsequent svc_xprt_enqueue to decide that there is
      space after all.
      
      The result is that svc_recv() will repeatedly return -EAGAIN, causing
      server threads to loop without doing any actual work.
      
      Cc: stable@vger.kernel.org
      Reported-by: Michael Tokarev <mjt@tls.msk.ru>
      Tested-by: Michael Tokarev <mjt@tls.msk.ru>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      d10f27a7
    • svcrpc: sends on closed socket should stop immediately · f06f00a2
      Authored by J. Bruce Fields
      svc_tcp_sendto sets XPT_CLOSE if we fail to transmit the entire reply.
      However, the XPT_CLOSE won't be acted on immediately.  Meanwhile other
      threads could send further replies before the socket is really shut
      down.  This can manifest as data corruption: for example, if a truncated
      read reply is followed by another rpc reply, that second reply will look
      to the client like further read data.
      
      Symptoms were data corruption preceded by svc_tcp_sendto logging
      something like
      
      	kernel: rpc-srv/tcp: nfsd: sent only 963696 when sending 1048708 bytes - shutting down socket
      
      Cc: stable@vger.kernel.org
      Reported-by: Malahal Naineni <malahal@us.ibm.com>
      Tested-by: Malahal Naineni <malahal@us.ibm.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      f06f00a2
  13. 01 Jun 2012, 2 commits
  14. 16 May 2012, 1 commit
  15. 15 Feb 2012, 3 commits
  16. 01 Feb 2012, 1 commit
  17. 12 Dec 2011, 1 commit
  18. 07 Dec 2011, 4 commits
    • SUNRPC: create svc_xprt in proper network namespace · bd4620dd
      Authored by Stanislav Kinsbursky
      This patch makes svc_xprt inherit network namespace link from its socket.
      Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      bd4620dd
    • svcrpc: avoid memory-corruption on pool shutdown · b4f36f88
      Authored by J. Bruce Fields
      Socket callbacks use svc_xprt_enqueue() to add an xprt to a
      pool->sp_sockets list.  In normal operation a server thread will later
      come along and take the xprt off that list.  On shutdown, after all the
      threads have exited, we instead manually walk the sv_tempsocks and
      sv_permsocks lists to find all the xprt's and delete them.
      
      So the sp_sockets lists don't really matter any more.  As a result,
      we've mostly just ignored them and hoped they would go away.
      
      Which has gotten us into trouble; witness for example ebc63e53
      "svcrpc: fix list-corrupting race on nfsd shutdown", the result of Ben
      Greear noticing that a still-running svc_xprt_enqueue() could re-add an
      xprt to an sp_sockets list just before it was deleted.  The fix was to
      remove it from the list at the end of svc_delete_xprt().  But that only
      made corruption less likely--I can see nothing that prevents a
      svc_xprt_enqueue() from adding another xprt to the list at the same
      moment that we're removing this xprt from the list.  In fact, despite
      the earlier xpo_detach(), I don't even see what guarantees that
      svc_xprt_enqueue() couldn't still be running on this xprt.
      
      So, instead, note that svc_xprt_enqueue() essentially does:
      	lock sp_lock
      		if XPT_BUSY unset
      			add to sp_sockets
      	unlock sp_lock
      
      So, if we do:
      
      	set XPT_BUSY on every xprt.
      	Empty every sp_sockets list, under the sp_socks locks.
      
      Then we're left knowing that the sp_sockets lists are all empty and will
      stay that way, since any svc_xprt_enqueue() will check XPT_BUSY under
      the sp_lock and see it set.
      
      And *then* we can continue deleting the xprt's.
      
      (Thanks to Jeff Layton for being correctly suspicious of this code....)
      
      Cc: Ben Greear <greearb@candelatech.com>
      Cc: Jeff Layton <jlayton@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      b4f36f88
    • svcrpc: destroy server sockets all at once · 2fefb8a0
      Authored by J. Bruce Fields
      There's no reason I can see that we need to call sv_shutdown between
      closing the two lists of sockets.
      
      Cc: stable@kernel.org
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      2fefb8a0
    • svcrpc: make svc_delete_xprt static · 7710ec36
      Authored by J. Bruce Fields
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      7710ec36