1. 02 Oct 2010, 3 commits
  2. 11 Aug 2010, 1 commit
    • param: use ops in struct kernel_param, rather than get and set fns directly · 9bbb9e5a
      Committed by Rusty Russell
      This is more kernel-ish, saves some space, and also allows us to
      expand the ops without breaking all the callers who are happy for the
      new members to be NULL.
      
      The few places which defined their own param types are changed to the
      new scheme (more which crept in recently fixed in following patches).
      
      Since we're touching them anyway, we change get() and set() to take a
      const struct kernel_param (which they really are).  This causes some
      harmless warnings until we fix them (in following patches).
      
      To reduce churn, module_param_call creates the ops struct so the callers
      don't have to change (and casts the functions to reduce warnings).
      The modern version which takes an ops struct is called module_param_cb.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Reviewed-by: Takashi Iwai <tiwai@suse.de>
      Tested-by: Phil Carmody <ext-phil.2.carmody@nokia.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Ville Syrjala <syrjala@sci.fi>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Alessandro Rubini <rubini@ipvvis.unipv.it>
      Cc: Michal Januszewski <spock@gentoo.org>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Neil Brown <neilb@suse.de>
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-input@vger.kernel.org
      Cc: linux-fbdev-devel@lists.sourceforge.net
      Cc: linux-nfs@vger.kernel.org
      Cc: netdev@vger.kernel.org
  3. 10 Aug 2010, 1 commit
  4. 23 Jun 2010, 1 commit
  5. 26 May 2010, 1 commit
  6. 18 May 2010, 1 commit
  7. 15 May 2010, 5 commits
  8. 22 Mar 2010, 1 commit
  9. 09 Mar 2010, 3 commits
  10. 03 Mar 2010, 1 commit
  11. 10 Feb 2010, 1 commit
  12. 04 Dec 2009, 1 commit
    • SUNRPC: Allow RPCs to fail quickly if the server is unreachable · 09a21c41
      Committed by Chuck Lever
      The kernel sometimes makes RPC calls to services that aren't running.
      Because the kernel's RPC client always assumes the hard retry semantic
      when reconnecting a connection-oriented RPC transport, the underlying
      reconnect logic takes a long while to time out, even though the remote
      may have responded immediately with ECONNREFUSED.
      
      In certain cases, like upcalls to our local rpcbind daemon, or for NFS
      mount requests, we'd like the kernel to fail immediately if the remote
      service isn't reachable.  This allows another transport to be tried
      immediately, or the pending request can be abandoned quickly.
      
      Introduce a per-request flag which controls how call_transmit_status()
      behaves when request transmission fails because the server cannot be
      reached.
      
      We don't want soft connection semantics to apply to other errors.  The
      default case of the switch statement in call_transmit_status() no
      longer falls through; the fall through code is copied to the default
      case, and a "break;" is added.
      
      The transport's connection re-establishment timeout is also ignored for
      such requests.  We want the request to fail immediately, so the
      reconnect delay is skipped.  Additionally, we don't want a connect
      failure here to further increase the reconnect timeout value, since
      this request will not be retried.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  13. 19 Nov 2009, 1 commit
  14. 12 Nov 2009, 1 commit
    • sysctl net: Remove unused binary sysctl code · f8572d8f
      Committed by Eric W. Biederman
      Now that sys_sysctl is a compatibility wrapper around /proc/sys,
      all sysctl strategy routines, and all ctl_name and strategy
      entries in the sysctl tables, are unused and can be
      removed.
      
      In addition, neigh_sysctl_register has been modified to no longer
      take a strategy argument, and its callers have been modified not
      to pass one.
      
      Cc: "David Miller" <davem@davemloft.net>
      Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
      Cc: netdev@vger.kernel.org
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
  15. 24 Sep 2009, 1 commit
    • NFS/RPC: fix problems with reestablish_timeout and related code. · 61d0a8e6
      Committed by Neil Brown
      
      xprt->reestablish_timeout is used to cause TCP connection attempts to
      back off if the connection fails so as not to hammer the network,
      but to still allow immediate connections when there is no reason to
      believe there is a problem.
      
      It is not used for the first connection (when transport->sock is NULL)
      but only on reconnects.
      
      It is currently set:
      
       a/ to 0 when xs_tcp_state_change finds a state of TCP_FIN_WAIT1
          on the assumption that the client has closed the connection
          so the reconnect should be immediate when needed.
       b/ to at least XS_TCP_INIT_REEST_TO when xs_tcp_state_change
          detects TCP_CLOSING or TCP_CLOSE_WAIT on the assumption that the
          server closed the connection so a small delay at least is
          required.
       c/ as above when xs_tcp_state_change detects TCP_SYN_SENT, so that
          it is never 0 while a connection has been attempted, else
          the doubling will produce 0 and there will be no backoff.
       d/ to double its value (up to a limit) when delaying a connection,
          thus providing exponential backoff, and
       e/ to XS_TCP_INIT_REEST_TO in xs_setup_tcp as simple initialisation.
      
       So you can see it is highly dependent on xs_tcp_state_change being
      called as expected.  However experimental evidence shows that
      xs_tcp_state_change does not see all state changes.
      ("rpcdebug -m rpc trans" can help show what actually happens).
      
      Results show:
       TCP_ESTABLISHED is reported when a connection is made.  TCP_SYN_SENT
       is never reported, so rule 'c' above is never effective.
      
       When the server closes the connection, TCP_CLOSE_WAIT and
       TCP_LAST_ACK *might* be reported, and TCP_CLOSE is always
        reported.  So rule 'b' above will sometimes be effective, but
       not reliably.
      
       When the client closes the connection, it used to result in
       TCP_FIN_WAIT1, TCP_FIN_WAIT2, TCP_CLOSE.  However since commit
       f75e6745 (SUNRPC: Fix the problem of EADDRNOTAVAIL syslog floods on
       reconnect) we don't see *any* events on client-close.  I think this
       is because xs_restore_old_callbacks is called to disconnect
       xs_tcp_state_change before the socket is closed.
       In any case, rule 'a' no longer applies.
      
       So all that is left are rule 'd', which successfully doubles the
       timeout (but never resets it), and rule 'e', which initialises the timeout.
      
      Even if the rules worked as expected, there would be a problem because
      a successful connection does not reset the timeout, so a sequence
      of events where the server closes the connection (e.g. during failover
      testing) will cause longer and longer timeouts with no good reason.
      
      This patch:
      
       - sets reestablish_timeout to 0 in xs_close thus effecting rule 'a'
       - sets it to 0 in xs_tcp_data_ready to ensure that a successful
         connection resets the timeout
       - sets it to at least XS_TCP_INIT_REEST_TO after it is doubled,
         thus effecting rule c
      
      I have not reimplemented rule b and the new version of rule c
      seems sufficient.
      
      I suspect other code in xs_tcp_data_ready needs to be revised as well.
      For example I don't think connect_cookie is being incremented as often
      as it should be.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  16. 14 Sep 2009, 1 commit
  17. 12 Sep 2009, 1 commit
  18. 10 Aug 2009, 6 commits
  19. 18 Jun 2009, 5 commits
    • nfs41: Backchannel callback service helper routines · 0d90ba1c
      Committed by Ricardo Labiaga
      Executes the backchannel task on the RPC state machine using
      the existing open connection previously established by the client.
      Signed-off-by: Ricardo Labiaga <ricardo.labiaga@netapp.com>
      
      nfs41: Add bc_svc.o to sunrpc Makefile.
      
      [nfs41: bc_send() does not need to be exported outside RPC module]
      [nfs41: xprt_free_bc_request() need not be exported outside RPC module]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [Update copyright]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
    • SUNRPC: Fix a missing "break" option in xs_tcp_setup_socket() · 88b5ed73
      Committed by Trond Myklebust
      In the case of -EADDRNOTAVAIL and/or unhandled connection errors, we want
      to get rid of the existing socket and retry immediately, just as the
      comment says. Currently we end up sleeping for a minute, due to the missing
      "break" statement.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • nfs41: New xs_tcp_read_data() · 44b98efd
      Committed by Ricardo Labiaga
      Handles RPC replies and backchannel callbacks.  Traditionally the NFS
      client has expected only RPC replies on its open connections.  With
      NFSv4.1, callbacks can arrive over an existing open connection.
      
      This patch refactors the old xs_tcp_read_request() into an RPC reply handler:
      xs_tcp_read_reply(), a new backchannel callback handler: xs_tcp_read_callback(),
      and a common routine to read the data off the transport: xs_tcp_read_common().
      The new xs_tcp_read_callback() queues callback requests onto a queue
      from which the callback service (a separate thread) picks them up for
      processing.
      
      This patch incorporates work and suggestions from Rahul Iyer (iyer@netapp.com)
      and Benny Halevy (bhalevy@panasas.com).
      
      xs_tcp_read_callback() drops the connection when the number of expected
      callbacks is exceeded.  Use xprt_force_disconnect(), ensuring tasks on
      the pending queue are awakened on disconnect.
      
      [nfs41: Keep track of RPC call/reply direction with a flag]
      [nfs41: Preallocate rpc_rqst receive buffer for handling callbacks]
      Signed-off-by: Ricardo Labiaga <ricardo.labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [nfs41: sunrpc: xs_tcp_read_callback() should use xprt_force_disconnect()]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [Moves embedded #ifdefs into #ifdef function blocks]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
    • nfs41: Process the RPC call direction · f4a2e418
      Committed by Ricardo Labiaga
      Reading and storing the RPC direction is a three step process.
      
      1. xs_tcp_read_calldir() reads the RPC direction, but it will not store it
      in the XDR buffer since the 'struct rpc_rqst' is not yet available.
      
      2. The 'struct rpc_rqst' is obtained during the TCP_RCV_COPY_DATA state.
      This state need not necessarily be preceded by the TCP_RCV_READ_CALLDIR state.
      For example, we may be reading a continuation packet to a large reply.
      Therefore, we can't simply obtain the 'struct rpc_rqst' during the
      TCP_RCV_READ_CALLDIR state and assume it's available during TCP_RCV_COPY_DATA.
      
      This patch adds a new TCP_RCV_READ_CALLDIR flag to indicate the need to
      read the RPC direction.  It then uses TCP_RCV_COPY_CALLDIR to indicate the
      RPC direction needs to be saved after the 'struct rpc_rqst' has been allocated.
      
      3. The 'struct rpc_rqst' is obtained by the xs_tcp_read_data() helper
      functions.  xs_tcp_read_common() then saves the RPC direction in the XDR
      buffer if TCP_RCV_COPY_CALLDIR is set.  This will happen when we're reading
      the data immediately after the direction was read.  xs_tcp_read_common()
      then clears this flag.
      
      [was nfs41: Skip past the RPC call direction]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [nfs41: sunrpc: Add RPC direction back into the XDR buffer]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [nfs41: sunrpc: Don't skip past the RPC call direction]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
    • nfs41: Add ability to read RPC call direction on TCP stream. · 18dca02a
      Committed by Ricardo Labiaga
      NFSv4.1 callbacks can arrive over an existing connection. This patch adds
      the logic to read the RPC call direction (call or reply). It does this by
      updating the state machine to look for the call direction invoking
      xs_tcp_read_calldir(...) after reading the XID.
      
      [nfs41: Keep track of RPC call/reply direction with a flag]
      
      As per 11/14/08 review of RFC 53/85.
      
      Add a new flag to track whether the incoming message is an RPC call or an
      RPC reply.  TCP_RPC_REPLY is set in the 'struct sock_xprt' tcp_flags in
      xs_tcp_read_calldir() if the message is an RPC reply sent on the forechannel.
      It is cleared if the message is an RPC request sent on the back channel.
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
  20. 03 Jun 2009, 1 commit
  21. 03 May 2009, 1 commit
  22. 20 Mar 2009, 2 commits