1. 07 September 2016 (1 commit)
      rxrpc: Calls shouldn't hold socket refs · 8d94aa38
      Committed by David Howells
      rxrpc calls shouldn't hold refs on the sock struct.  This was done so that
      the socket wouldn't go away whilst the call was in progress, such that the
      call could reach the socket's queues.
      
      However, we can mark the socket as requiring an RCU release and rely on the
      RCU read lock.
      
      To make this work, we do:
      
       (1) rxrpc_release_call() removes the call's call user ID.  This is now
           only called from socket operations and not from the call processor:
      
      	rxrpc_accept_call() / rxrpc_kernel_accept_call()
      	rxrpc_reject_call() / rxrpc_kernel_reject_call()
      	rxrpc_kernel_end_call()
      	rxrpc_release_calls_on_socket()
      	rxrpc_recvmsg()
      
           Though it is also called in the cleanup path of
           rxrpc_accept_incoming_call() before we assign a user ID.
      
       (2) Pass the socket pointer into rxrpc_release_call() rather than getting
           it from the call so that we can get rid of uninitialised calls.
      
       (3) Fix call processor queueing to pass a ref to the work queue and to
           release that ref at the end of the processor function (or to pass it
           back to the work queue if we have to requeue).
      
       (4) Skip out of the call processor function as soon as possible if the
           call is complete, and don't requeue a completed call.
      
       (5) Clean up the call as soon as the refcount reaches 0 rather than
           trying to defer it.  Actual deallocation is deferred to RCU,
           however.
      
       (6) Don't hold socket refs for allocated calls.
      
       (7) Use the RCU read lock when queueing a message on a socket, treat
           the call's socket pointer according to RCU rules, and check it for
           NULL (see the sketch after this list).
      
           We also need to use the RCU read lock when viewing a call through
           procfs.
      
       (8) Transmit the final ACK/ABORT to a client call in rxrpc_release_call()
           if this hasn't been done yet so that we can then disconnect the call.
           Once the call is disconnected, it no longer has access to the
           connection struct or the UDP socket, so the call work processor
           cannot send the ACK itself.  Terminal retransmission will be
           handled by the connection processor.
      
       (9) Release all calls immediately on the closing of a socket rather than
           trying to defer this.  Incomplete calls will be aborted.
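
      As a hedged illustration of point (7), the queueing path now looks
      roughly like this; the struct and field names approximate those in
      net/rxrpc/ar-internal.h rather than quoting the exact code:

	#include <linux/rcupdate.h>
	#include <linux/skbuff.h>
	#include <net/sock.h>

	/* Sketch only: the call no longer pins the socket, so look it up under
	 * the RCU read lock and cope with it having gone away. */
	static void rxrpc_queue_msg_to_socket(struct rxrpc_call *call,
					      struct sk_buff *skb)
	{
		struct rxrpc_sock *rx;

		rcu_read_lock();
		rx = rcu_dereference(call->socket);
		if (rx)
			skb_queue_tail(&rx->sk.sk_receive_queue, skb);
		else
			kfree_skb(skb);		/* socket already released */
		rcu_read_unlock();
	}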
      
      The call refcount model is much simplified.  Refs are held on the call by:
      
       (1) A socket's user ID tree.
      
       (2) A socket's incoming call secureq and acceptq.
      
       (3) A kernel service that has a call in progress.
      
       (4) A queued call work processor.  We have to take care to put any call
           that we failed to queue (see the sketch after this list).
      
       (5) sk_buffs on a socket's receive queue.  A future patch will get rid of
           this.
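
      A hedged sketch of the queueing rule from point (3) of the first list
      and item (4) above; the helper and member names are approximate:

	#include <linux/workqueue.h>

	/* Sketch only: queueing passes a ref to the work item; if the call was
	 * already queued, queue_work() returns false and that ref must be put
	 * straight back. */
	static void rxrpc_queue_call(struct rxrpc_call *call)
	{
		rxrpc_get_call(call);
		if (!queue_work(rxrpc_workqueue, &call->processor))
			rxrpc_put_call(call);
	}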
      
      Whilst we're at it, we can do:
      
       (1) Get rid of the RXRPC_CALL_EV_RELEASE event.  Release is now done
           entirely from the socket routines and never from the call's processor.
      
       (2) Get rid of the RXRPC_CALL_DEAD state.  Calls now end in the
           RXRPC_CALL_COMPLETE state.
      
       (3) Get rid of the rxrpc_call::destroyer work item.  Calls are now torn
           down when their refcount reaches 0 and then handed over to RCU for
           final cleanup.
      
       (4) Get rid of the rxrpc_call::deadspan timer.  Calls are cleaned up
           immediately they're finished with and don't hang around.
           Post-completion retransmission is handled by the connection processor
           once the call is disconnected.
      
       (5) Get rid of the dead call expiry setting as there's no longer a timer
           to set.
      
       (6) rxrpc_destroy_all_calls() can just check that the call list is empty.
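
      A hedged sketch of the resulting teardown path (point (5) of the first
      list and item (3) of this one); the real code differs in detail:

	#include <linux/atomic.h>
	#include <linux/rcupdate.h>

	/* Sketch only: the last ref triggers immediate cleanup, with the final
	 * free deferred to an RCU callback instead of a destroyer work item or
	 * dead-call timer. */
	void rxrpc_put_call(struct rxrpc_call *call)
	{
		if (atomic_dec_and_test(&call->usage)) {
			rxrpc_cleanup_call(call);
			call_rcu(&call->rcu, rxrpc_rcu_destroy_call);
		}
	}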
      Signed-off-by: David Howells <dhowells@redhat.com>
  2. 24 August 2016 (1 commit)
      rxrpc: Improve management and caching of client connection objects · 45025bce
      Committed by David Howells
      Improve the management and caching of client rxrpc connection objects.
      From this point, client connections will be managed separately from service
      connections because AF_RXRPC controls the creation and re-use of client
      connections but doesn't have that luxury with service connections.
      
      Further, there will be limits on the numbers of client connections that may
      be live on a machine.  No direct restriction will be placed on the number
      of client calls, excepting that each client connection can support a
      maximum of four concurrent calls.
      
      Note that, for a number of reasons, we don't want to simply discard a
      client connection as soon as the last call is apparently finished:
      
       (1) Security is negotiated per-connection and the context is then shared
           between all calls on that connection.  The context can be negotiated
           again if the connection lapses, but that involves holding up calls
           whilst at least two packets are exchanged and various crypto bits are
           performed - so we'd ideally like to cache it for a little while at
           least.
      
       (2) If a packet goes astray, we will need to retransmit a final ACK or
           ABORT packet.  To make this work, we need to keep around the
           connection details for a little while.
      
       (3) The locally held structures represent some amount of setup time, to be
           weighed against their occupation of memory when idle.
      
      
      To this end, the client connection cache is managed by a state machine
      on each connection.  There are five states (see the enum sketch after
      this list):
      
       (1) INACTIVE - The connection is not held in any list and may not have
           been exposed to the world.  If it has been previously exposed, it was
           discarded from the idle list after expiring.
      
       (2) WAITING - The connection is waiting for the number of client conns to
           drop below the maximum capacity.  Calls may be in progress upon it
           from when it was active and got culled.
      
           The connection is on the rxrpc_waiting_client_conns list which is kept
           in to-be-granted order.  Culled conns with waiters go to the back of
           the queue just like new conns.
      
       (3) ACTIVE - The connection has at least one call in progress upon it, it
           may freely grant available channels to new calls and calls may be
           waiting on it for channels to become available.
      
           The connection is on the rxrpc_active_client_conns list which is kept
           in activation order for culling purposes.
      
       (4) CULLED - The connection got summarily culled to try and free up
           capacity.  Calls currently in progress on the connection are allowed
           to continue, but new calls will have to wait.  There can be no waiters
           in this state - the conn would have to go to the WAITING state
           instead.
      
       (5) IDLE - The connection has no calls in progress upon it and must have
           been exposed to the world (ie. the EXPOSED flag must be set).  When it
           expires, the EXPOSED flag is cleared and the connection transitions to
           the INACTIVE state.
      
           The connection is on the rxrpc_idle_client_conns list which is kept in
           order of how soon they'll expire.
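
      A hedged sketch of those five states as an enum; the actual names in
      net/rxrpc/ar-internal.h may differ slightly:

	enum rxrpc_conn_cache_state {
		RXRPC_CONN_CLIENT_INACTIVE,	/* Not on any list; may never have been exposed */
		RXRPC_CONN_CLIENT_WAITING,	/* Waiting for capacity under max_client_conns */
		RXRPC_CONN_CLIENT_ACTIVE,	/* Has calls in progress; may grant channels */
		RXRPC_CONN_CLIENT_CULLED,	/* Culled for capacity; no new calls, no waiters */
		RXRPC_CONN_CLIENT_IDLE,		/* Exposed, no calls; waiting to expire */
	};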
      
      A connection in the ACTIVE or CULLED state must have at least one active
      call upon it; if in the WAITING state it may have active calls upon it;
      other states may not have active calls.
      
      As long as a connection remains active and doesn't get culled, it may
      continue to process calls - even if there are connections on the wait
      queue.  This simplifies things a bit and reduces the amount of checking we
      need to do.
      
      
      There are a couple of flags relevant to the cache:
      
       (1) EXPOSED - The connection ID got exposed to the world.  If this flag is
           set, an extra ref is added to the connection preventing it from being
           reaped when it has no calls outstanding.  This flag is cleared and the
           ref dropped when a conn is discarded from the idle list.
      
       (2) DONT_REUSE - The connection should be discarded as soon as possible and
           should not be reused.
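
      A hedged sketch of the EXPOSED rule in (1); the flag and helper names
      are approximate, not the exact rxrpc symbols:

	#include <linux/bitops.h>

	/* Sketch only: exposing the connection ID takes one extra ref;
	 * discarding the conn from the idle list clears the flag and puts that
	 * ref, letting the conn be reaped. */
	static void rxrpc_expose_client_conn(struct rxrpc_connection *conn)
	{
		if (!test_and_set_bit(RXRPC_CONN_EXPOSED, &conn->flags))
			rxrpc_get_connection(conn);
	}

	static void rxrpc_discard_idle_conn(struct rxrpc_connection *conn)
	{
		if (test_and_clear_bit(RXRPC_CONN_EXPOSED, &conn->flags))
			rxrpc_put_connection(conn);
	}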
      
      
      This commit also provides a number of new settings:
      
       (*) /proc/net/rxrpc/max_client_conns
      
           The maximum number of live client connections.  Above this number, new
           connections get added to the wait list and must wait for an active
           conn to be culled.  Culled connections can be reused, but they will go
           to the back of the wait list and have to wait.
      
       (*) /proc/net/rxrpc/reap_client_conns
      
           If the number of desired connections exceeds the maximum above, the
           active connection list will be culled until there are only this many
           left in it.
      
       (*) /proc/net/rxrpc/idle_conn_expiry
      
           The normal expiry time for a client connection, provided there are
           fewer than reap_client_conns of them around.
      
       (*) /proc/net/rxrpc/idle_conn_fast_expiry
      
           The expedited expiry time, used when there are more than
           reap_client_conns of them around.
      
      
      Note that I combined the Tx wait queue with the channel grant wait queue to
      save space as only one of these should be in use at once.
      
      Note also that, for the moment, the service connection cache still uses the
      old connection management code.
      Signed-off-by: David Howells <dhowells@redhat.com>
  3. 22 June 2016 (1 commit)
      rxrpc: Kill off the rxrpc_transport struct · aa390bbe
      Committed by David Howells
      The rxrpc_transport struct is now redundant, given that the rxrpc_peer
      struct is now per peer port rather than per peer host, so get rid of it.
      
      Service connection lists are transferred to the rxrpc_peer struct, as is
      the conn_lock.  Previous patches moved the client connection handling out
      of the rxrpc_transport struct and discarded the connection bundling code.
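
      A hedged sketch of where the transferred members end up; the member
      names here are illustrative, the real layout lives in
      net/rxrpc/ar-internal.h:

	#include <linux/list.h>
	#include <linux/spinlock.h>

	struct rxrpc_peer {
		/* ... existing per-peer-port fields (address, MTU, RTT) ... */
		rwlock_t		conn_lock;	/* taken over from rxrpc_transport */
		struct list_head	service_conns;	/* service conns to this peer port */
	};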
      Signed-off-by: David Howells <dhowells@redhat.com>
  4. 11 June 2016 (1 commit)
      rxrpc: Limit the listening backlog · 0e119b41
      Committed by David Howells
      Limit the socket incoming call backlog queue size so that a remote client
      can't pump in enough new calls to run the server out of memory.  Note
      that this is partially theoretical at the moment since whilst the number of
      calls is limited, the number of packets trying to set up new calls is not.
      This will be addressed in a later patch.
      
      If the caller of listen() specifies a backlog of INT_MAX, then they get
      the current maximum; any other value greater than max_backlog or any
      negative value incurs EINVAL.
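
      A hedged sketch of that rule in plain C (the real check lives in rxrpc's
      listen handling and may differ in detail):

	#include <errno.h>
	#include <limits.h>

	/* Sketch only: INT_MAX maps to the current maximum, anything negative
	 * or above max_backlog is rejected with EINVAL, the rest is accepted
	 * as given. */
	static int check_listen_backlog(int backlog, int max_backlog)
	{
		if (backlog == INT_MAX)
			return max_backlog;
		if (backlog < 0 || backlog > max_backlog)
			return -EINVAL;
		return backlog;
	}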
      
      The limit on the maximum queue size can be set by:
      
      	echo N >/proc/sys/net/rxrpc/max_backlog
      
      where 4<=N<=32.
      
      Further, set the default backlog to 0, requiring listen() to be called
      before we start actually queueing new calls.  Whilst this is, strictly
      speaking, a change to the UAPI, the caller can't actually *accept* new
      calls anyway
      unless they've first called listen() to put the socket into the LISTENING
      state - thus the aforementioned new calls would otherwise just sit there,
      eating up kernel memory.  (Note that sockets that don't have a non-zero
      service ID bound don't get incoming calls anyway.)
      
      Given that the default backlog is now 0, make the AFS filesystem call
      kernel_listen() to set the maximum backlog for itself.
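
      A hedged sketch of the AFS side of that; fs/afs/rxrpc.c is the real
      caller, but the wrapper name here is made up:

	#include <linux/kernel.h>
	#include <linux/net.h>

	/* Sketch only: with the default backlog now 0, AFS has to ask for one
	 * explicitly; a backlog of INT_MAX requests whatever the current
	 * max_backlog happens to be. */
	static int afs_listen_on_socket(struct socket *socket)
	{
		return kernel_listen(socket, INT_MAX);
	}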
      
      Possible improvements include:
      
       (1) Trimming a too-large backlog to max_backlog when listen is called.
      
       (2) Trimming the backlog value whenever the value is used so that changes
           to max_backlog are applied to an open socket automatically.  Note that
           the AFS filesystem opens one socket and keeps it open for extended
           periods, so would miss out on changes to max_backlog.
      
       (3) Having a separate setting for the AFS filesystem.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 14 March 2016 (1 commit)
  6. 04 March 2016 (1 commit)
  7. 27 February 2014 (2 commits)
  8. 27 April 2007 (1 commit)
  9. 15 February 2007 (1 commit)
      [PATCH] sysctl: remove insert_at_head from register_sysctl · 0b4d4147
      Committed by Eric W. Biederman
      The semantic effect of insert_at_head is that it would allow newly
      registered sysctl entries to override existing sysctl entries of the
      same name, which is a pain for caching and which the proc interface
      never implemented.
      
      I have done an audit and discovered that none of the current users of
      register_sysctl care, as (except for directories) they do not register
      duplicate sysctl entries.
      
      So this patch simply removes the support for overriding existing entries
      in the sys_sysctl interface, since no one uses it or cares and it makes
      future enhancements harder.
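
      A hedged before/after sketch of the interface change for a typical
      caller of register_sysctl_table() (table contents elided; the 2.6.21-era
      prototypes are quoted from memory and may differ in detail):

	#include <linux/errno.h>
	#include <linux/init.h>
	#include <linux/sysctl.h>

	static struct ctl_table my_table[] = {
		/* real entries elided */
		{ }
	};

	static struct ctl_table_header *my_header;

	static int __init my_sysctl_init(void)
	{
		/* Before: register_sysctl_table(my_table, 0), where the second
		 * argument was insert_at_head. */
		my_header = register_sysctl_table(my_table);
		return my_header ? 0 : -ENOMEM;
	}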
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: David Howells <dhowells@redhat.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Andi Kleen <ak@muc.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Corey Minyard <minyard@acm.org>
      Cc: Neil Brown <neilb@suse.de>
      Cc: "John W. Linville" <linville@tuxdriver.com>
      Cc: James Bottomley <James.Bottomley@steeleye.com>
      Cc: Jan Kara <jack@ucw.cz>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: Mark Fasheh <mark.fasheh@oracle.com>
      Cc: David Chinner <dgc@sgi.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Patrick McHardy <kaber@trash.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 11 February 2007 (1 commit)
  11. 01 July 2006 (1 commit)
  12. 17 April 2005 (1 commit)
      Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!