1. 08 September 2016, 3 commits
    • rxrpc: Rewrite the data and ack handling code · 248f219c
      Committed by David Howells
      Rewrite the data and ack handling code such that:
      
       (1) Parsing of received ACK and ABORT packets and the distribution and the
           filing of DATA packets happens entirely within the data_ready context
           called from the UDP socket.  This allows us to process and discard ACK
           and ABORT packets much more quickly (they're no longer stashed on a
           queue for a background thread to process).
      
       (2) We avoid calling skb_clone(), pskb_pull() and pskb_trim().  We instead
           keep track of the offset and length of the content of each packet in
           the sk_buff metadata.  This means we don't do any allocation in the
           receive path.
      
       (3) Jumbo DATA packet parsing is now done in data_ready context.  Rather
           than cloning the packet once for each subpacket and pulling/trimming
           it, we file the packet multiple times with an annotation for each
           indicating which subpacket is there.  From that we can directly
           calculate the offset and length.
      
       (4) A call's receive queue can be accessed without taking locks (memory
           barriers do have to be used, though).
      
       (5) Incoming calls are set up from preallocated resources and immediately
            made live.  They can then have packets queued upon them and ACKs
            generated.  If insufficient resources exist, DATA packet #1 is given a
            BUSY reply and other DATA packets are discarded.
      
       (6) sk_buffs no longer take a ref on their parent call.
      
      To make this work, the following changes are made:
      
       (1) Each call's receive buffer is now a circular buffer of sk_buff
           pointers (rxtx_buffer) rather than a number of sk_buff_heads spread
           between the call and the socket.  This permits each sk_buff to be in
           the buffer multiple times.  The receive buffer is reused for the
           transmit buffer.
      
       (2) A circular buffer of annotations (rxtx_annotations) is kept parallel
           to the data buffer.  Transmission phase annotations indicate whether a
           buffered packet has been ACK'd or not and whether it needs
           retransmission.
      
           Receive phase annotations indicate whether a slot holds a whole packet
           or a jumbo subpacket and, if the latter, which subpacket.  They also
           note whether the packet has been decrypted in place.
      
       (3) DATA packet window tracking is much simplified.  Each phase has just
           two numbers representing the window (rx_hard_ack/rx_top and
           tx_hard_ack/tx_top).
      
            The hard_ack number is the sequence number just before the base of the window,
           representing the last packet the other side says it has consumed.
           hard_ack starts from 0 and the first packet is sequence number 1.
      
           The top number is the sequence number of the highest-numbered packet
           residing in the buffer.  Packets between hard_ack+1 and top are
           soft-ACK'd to indicate they've been received, but not yet consumed.
      
            Four macros, before(), before_eq(), after() and after_eq(), are added
            to compare sequence numbers within the window.  This allows the top of
            the window to wrap when the hard-ack sequence number gets close to the
            limit (see the sketch after this list).
      
           Two flags, RXRPC_CALL_RX_LAST and RXRPC_CALL_TX_LAST, are added also
           to indicate when rx_top and tx_top point at the packets with the
           LAST_PACKET bit set, indicating the end of the phase.
      
       (4) Calls are queued on the socket 'receive queue' rather than packets.
            This means that we don't have to invent dummy packets to queue to
            indicate abnormal/terminal states and we don't have to keep metadata
            packets (such as ABORTs) around.
      
       (5) The offset and length of a (sub)packet's content are now passed to
           the verify_packet security op.  This is currently expected to decrypt
           the packet in place and validate it.
      
           However, there's now nowhere to store the revised offset and length of
           the actual data within the decrypted blob (there may be a header and
           padding to skip) because an sk_buff may represent multiple packets, so
           a locate_data security op is added to retrieve these details from the
           sk_buff content when needed.
      
       (6) recvmsg() now has to handle jumbo subpackets, where each subpacket is
           individually secured and needs to be individually decrypted.  The code
           to do this is broken out into rxrpc_recvmsg_data() and shared with the
           kernel API.  It now iterates over the call's receive buffer rather
           than walking the socket receive queue.
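
       To illustrate the window tracking in (3), here is a minimal sketch of
       wrap-safe sequence comparison.  The macro names follow the description
       above; the exact definitions and the helper are illustrative rather than
       lifted from the patch:

	#include <linux/types.h>

	typedef u32 rxrpc_seq_t;	/* 32-bit wrapping sequence number */

	#define before(a, b)	((s32)((a) - (b)) < 0)
	#define before_eq(a, b)	((s32)((a) - (b)) <= 0)
	#define after(a, b)	((s32)((a) - (b)) > 0)
	#define after_eq(a, b)	((s32)((a) - (b)) >= 0)

	/* A packet with sequence number 'seq' lies inside the receive window
	 * if it is after hard_ack and not after top, even once top has
	 * wrapped around zero.
	 */
	static inline bool rx_seq_in_window(rxrpc_seq_t seq,
					    rxrpc_seq_t hard_ack,
					    rxrpc_seq_t top)
	{
		return after(seq, hard_ack) && before_eq(seq, top);
	}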
      
      Additional changes:
      
       (1) The timers are condensed to a single timer that is set for the soonest
           of three timeouts (delayed ACK generation, DATA retransmission and
           call lifespan).
      
       (2) Transmission of ACK and ABORT packets is effected immediately from
           process-context socket ops/kernel API calls that cause them instead of
           them being punted off to a background work item.  The data_ready
           handler still has to defer to the background, though.
      
       (3) A shutdown op is added to the AF_RXRPC socket so that the AFS
           filesystem can shut down the socket and flush its own work items
           before closing the socket to deal with any in-progress service calls.
      
      Future additional changes that will need to be considered:
      
       (1) Make sure that a call doesn't hog the front of the queue by receiving
           data from the network as fast as userspace is consuming it to the
           exclusion of other calls.
      
       (2) Transmit delayed ACKs from within recvmsg() when we've consumed
           sufficiently more packets to avoid the background work item needing to
           run.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Preallocate peers, conns and calls for incoming service requests · 00e90712
      Committed by David Howells
      Make it possible for the data_ready handler called from the UDP transport
      socket to completely instantiate an rxrpc_call structure and make it
      immediately live by preallocating all the memory it might need.  The idea
      is to cut out the background thread usage as much as possible.
      
      [Note that the preallocated structs are not actually used in this patch -
       that will be done in a future patch.]
      
      If insufficient resources are available in the preallocation buffers, it
      will be possible to discard the DATA packet in the data_ready handler or
      schedule a BUSY packet without the need to schedule an attempt at
      allocation in a background thread.
      
      To this end:
      
        (1) Preallocate rxrpc_peer, rxrpc_connection and rxrpc_call structs to a
            maximum number each of the listen backlog size.  The backlog size is
            limited to a maximum of 32.  Only this many of each can be in the
            preallocation buffer (see the ring sketch after this list).
      
       (2) For userspace sockets, the preallocation is charged initially by
           listen() and will be recharged by accepting or rejecting pending
           new incoming calls.
      
       (3) For kernel services {,re,dis}charging of the preallocation buffers is
           handled manually.  Two notifier callbacks have to be provided before
           kernel_listen() is invoked:
      
           (a) An indication that a new call has been instantiated.  This can be
           	 used to trigger background recharging.
      
           (b) An indication that a call is being discarded.  This is used when
           	 the socket is being released.
      
           A function, rxrpc_kernel_charge_accept() is called by the kernel
           service to preallocate a single call.  It should be passed the user ID
           to be used for that call and a callback to associate the rxrpc call
           with the kernel service's side of the ID.
      
       (4) Discard the preallocation when the socket is closed.
      
       (5) Temporarily bump the refcount on the call allocated in
           rxrpc_incoming_call() so that rxrpc_release_call() can ditch the
           preallocation ref on service calls unconditionally.  This will no
           longer be necessary once the preallocation is used.
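
       As an illustration of the charging scheme in (1)-(3), here is a hedged
       sketch of a fixed-size preallocation ring.  The struct, field and
       function names are invented for the example and are not the ones used
       in the patch:

	#include <linux/atomic.h>

	#define RXRPC_BACKLOG_MAX	32	/* hard limit on the backlog */

	/* Hypothetical ring: 'head' is charged from process context by
	 * listen()/the kernel service, 'tail' is consumed in data_ready.
	 */
	struct rxrpc_prealloc_ring {
		struct rxrpc_call	*calls[RXRPC_BACKLOG_MAX];
		unsigned int		head;
		unsigned int		tail;
	};

	/* Charge one preallocated call; returns false if the ring is full. */
	static bool rxrpc_prealloc_charge(struct rxrpc_prealloc_ring *ring,
					  struct rxrpc_call *call)
	{
		unsigned int head = ring->head;

		if (head - READ_ONCE(ring->tail) >= RXRPC_BACKLOG_MAX)
			return false;		/* caller keeps/frees the call */
		ring->calls[head % RXRPC_BACKLOG_MAX] = call;
		smp_store_release(&ring->head, head + 1);
		return true;
	}

	/* Called from data_ready: take a preallocated call, or NULL if the
	 * ring is empty (DATA packet #1 then gets a BUSY reply).
	 */
	static struct rxrpc_call *rxrpc_prealloc_take(struct rxrpc_prealloc_ring *ring)
	{
		unsigned int head = smp_load_acquire(&ring->head);
		unsigned int tail = ring->tail;
		struct rxrpc_call *call;

		if (tail == head)
			return NULL;
		call = ring->calls[tail % RXRPC_BACKLOG_MAX];
		smp_store_release(&ring->tail, tail + 1);
		return call;
	}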
      
      Note that this does not yet control the number of active service calls on a
      client - that will come in a later patch.
      
      A future development would be to provide a setsockopt() call that allows a
      userspace server to manually charge the preallocation buffer.  This would
      allow user call IDs to be provided in advance and the awkward manual accept
      stage to be bypassed.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Convert rxrpc_local::services to an hlist · de8d6c74
      Committed by David Howells
      Convert the rxrpc_local::services list to an hlist so that it can be
      accessed under RCU conditions more readily.
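
       A brief sketch of the reader side this enables: an hlist head is a
       single pointer and has dedicated RCU iteration helpers.  The field
       names (services, listen_link, srx_service) are assumptions for the
       example:

	#include <linux/rculist.h>

	/* Look up a listening socket by service ID.  The caller must hold
	 * rcu_read_lock(); writers add entries with hlist_add_head_rcu()
	 * under their own lock.
	 */
	static struct rxrpc_sock *rxrpc_find_service(struct rxrpc_local *local,
						     u16 service_id)
	{
		struct rxrpc_sock *rx;

		hlist_for_each_entry_rcu(rx, &local->services, listen_link) {
			if (rx->srx.srx_service == service_id)
				return rx;
		}
		return NULL;
	}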
       Signed-off-by: David Howells <dhowells@redhat.com>
  2. 07 September 2016, 2 commits
    • rxrpc: Calls shouldn't hold socket refs · 8d94aa38
      Committed by David Howells
      rxrpc calls shouldn't hold refs on the sock struct.  This was done so that
      the socket wouldn't go away whilst the call was in progress, such that the
      call could reach the socket's queues.
      
      However, we can mark the socket as requiring an RCU release and rely on the
      RCU read lock.
      
      To make this work, we do:
      
       (1) rxrpc_release_call() removes the call's call user ID.  This is now
           only called from socket operations and not from the call processor:
      
      	rxrpc_accept_call() / rxrpc_kernel_accept_call()
      	rxrpc_reject_call() / rxrpc_kernel_reject_call()
      	rxrpc_kernel_end_call()
      	rxrpc_release_calls_on_socket()
      	rxrpc_recvmsg()
      
           Though it is also called in the cleanup path of
           rxrpc_accept_incoming_call() before we assign a user ID.
      
       (2) Pass the socket pointer into rxrpc_release_call() rather than getting
           it from the call so that we can get rid of uninitialised calls.
      
       (3) Fix call processor queueing to pass a ref to the work queue and to
           release that ref at the end of the processor function (or to pass it
           back to the work queue if we have to requeue).
      
       (4) Skip out of the call processor function asap if the call is complete
           and don't requeue it if the call is complete.
      
       (5) Clean up the call immediately that the refcount reaches 0 rather than
           trying to defer it.  Actual deallocation is deferred to RCU, however.
      
       (6) Don't hold socket refs for allocated calls.
      
        (7) Use the RCU read lock when queueing a message on a socket, treat the
            call's socket pointer according to RCU rules and check it for NULL (a
            sketch of this pattern follows the list).
      
           We also need to use the RCU read lock when viewing a call through
           procfs.
      
       (8) Transmit the final ACK/ABORT to a client call in rxrpc_release_call()
           if this hasn't been done yet so that we can then disconnect the call.
           Once the call is disconnected, it won't have any access to the
           connection struct and the UDP socket for the call work processor to be
           able to send the ACK.  Terminal retransmission will be handled by the
           connection processor.
      
       (9) Release all calls immediately on the closing of a socket rather than
           trying to defer this.  Incomplete calls will be aborted.
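
       For item (7), a hedged sketch of the access pattern when notifying the
       call's socket; the helper called at the end is hypothetical:

	#include <linux/rcupdate.h>

	/* The call no longer pins the socket, so sample the socket pointer
	 * under the RCU read lock and check it for NULL before use.
	 */
	static void rxrpc_notify_owner(struct rxrpc_call *call)
	{
		struct rxrpc_sock *rx;

		rcu_read_lock();
		rx = rcu_dereference(call->socket);
		if (rx)
			queue_call_for_recvmsg(rx, call);	/* hypothetical helper */
		rcu_read_unlock();
	}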
      
      The call refcount model is much simplified.  Refs are held on the call by:
      
       (1) A socket's user ID tree.
      
       (2) A socket's incoming call secureq and acceptq.
      
       (3) A kernel service that has a call in progress.
      
       (4) A queued call work processor.  We have to take care to put any call
           that we failed to queue.
      
       (5) sk_buffs on a socket's receive queue.  A future patch will get rid of
           this.
      
      Whilst we're at it, we can do:
      
       (1) Get rid of the RXRPC_CALL_EV_RELEASE event.  Release is now done
           entirely from the socket routines and never from the call's processor.
      
       (2) Get rid of the RXRPC_CALL_DEAD state.  Calls now end in the
           RXRPC_CALL_COMPLETE state.
      
       (3) Get rid of the rxrpc_call::destroyer work item.  Calls are now torn
           down when their refcount reaches 0 and then handed over to RCU for
           final cleanup.
      
       (4) Get rid of the rxrpc_call::deadspan timer.  Calls are cleaned up
           immediately they're finished with and don't hang around.
           Post-completion retransmission is handled by the connection processor
           once the call is disconnected.
      
       (5) Get rid of the dead call expiry setting as there's no longer a timer
           to set.
      
       (6) rxrpc_destroy_all_calls() can just check that the call list is empty.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Improve the call tracking tracepoint · fff72429
      Committed by David Howells
      Improve the call tracking tracepoint by showing more differentiation
      between some of the put and get events, including:
      
        (1) Getting and putting refs for the socket call user ID tree.
      
        (2) Getting and putting refs for queueing and failing to queue the call
            processor work item.
      
      Note that these aren't necessarily used in this patch, but will be taken
      advantage of in future patches.
      
      An enum is added for the event subtype numbers rather than coding them
      directly as decimal numbers and a table of 3-letter strings is provided
      rather than a sequence of ?: operators.
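
       For example, the enum-plus-table approach might look like the
       following; the particular event names and three-letter strings here
       are illustrative, not the full set added by the patch:

	enum rxrpc_call_trace {
		rxrpc_call_new_client,
		rxrpc_call_queued,
		rxrpc_call_queue_failed,
		rxrpc_call_got_userid,
		rxrpc_call_put_userid,
		rxrpc_call__nr_trace
	};

	static const char rxrpc_call_traces[rxrpc_call__nr_trace][4] = {
		[rxrpc_call_new_client]		= "NWc",
		[rxrpc_call_queued]		= "QUE",
		[rxrpc_call_queue_failed]	= "QUF",
		[rxrpc_call_got_userid]		= "GUI",
		[rxrpc_call_put_userid]		= "PUI",
	};

       The tracepoint can then print rxrpc_call_traces[op] instead of a long
       chain of ?: operators.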
       Signed-off-by: David Howells <dhowells@redhat.com>
  3. 05 September 2016, 1 commit
  4. 02 September 2016, 1 commit
    • rxrpc: Don't expose skbs to in-kernel users [ver #2] · d001648e
      Committed by David Howells
      Don't expose skbs to in-kernel users, such as the AFS filesystem, but
       instead provide a notification hook that indicates that a call needs
      attention and another that indicates that there's a new call to be
      collected.
      
      This makes the following possibilities more achievable:
      
       (1) Call refcounting can be made simpler if skbs don't hold refs to calls.
      
       (2) skbs referring to non-data events will be able to be freed much sooner
           rather than being queued for AFS to pick up as rxrpc_kernel_recv_data
           will be able to consult the call state.
      
       (3) We can shortcut the receive phase when a call is remotely aborted
           because we don't have to go through all the packets to get to the one
           cancelling the operation.
      
       (4) It makes it easier to do encryption/decryption directly between AFS's
           buffers and sk_buffs.
      
        (5) Encryption/decryption can more easily be done in AFS's thread
           contexts - usually that of the userspace process that issued a syscall
           - rather than in one of rxrpc's background threads on a workqueue.
      
       (6) AFS will be able to wait synchronously on a call inside AF_RXRPC.
      
      To make this work, the following interface function has been added:
      
           int rxrpc_kernel_recv_data(
      		struct socket *sock, struct rxrpc_call *call,
      		void *buffer, size_t bufsize, size_t *_offset,
      		bool want_more, u32 *_abort_code);
      
      This is the recvmsg equivalent.  It allows the caller to find out about the
      state of a specific call and to transfer received data into a buffer
      piecemeal.
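
       A hedged sketch of how a kernel user such as AFS might drive this; the
       call and socket setup are assumed to have been done elsewhere and the
       return-code handling is simplified:

	#include <linux/printk.h>
	#include <net/af_rxrpc.h>

	/* Pull up to 'size' bytes of the reply into 'buf'.  'offset' acts as
	 * a cursor within buf, so this can be called repeatedly as data
	 * arrives.
	 */
	static int afs_read_reply(struct socket *sock, struct rxrpc_call *call,
				  void *buf, size_t size)
	{
		size_t offset = 0;
		u32 abort_code = 0;
		int ret;

		ret = rxrpc_kernel_recv_data(sock, call, buf, size, &offset,
					     false /* want_more */, &abort_code);
		if (ret < 0 && abort_code)
			pr_warn("call aborted, code %u\n", abort_code);
		return ret;
	}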
      
      afs_extract_data() and rxrpc_kernel_recv_data() now do all the extraction
      logic between them.  They don't wait synchronously yet because the socket
      lock needs to be dealt with.
      
      Five interface functions have been removed:
      
	rxrpc_kernel_is_data_last()
	rxrpc_kernel_get_abort_code()
	rxrpc_kernel_get_error_number()
	rxrpc_kernel_free_skb()
	rxrpc_kernel_data_consumed()
      
      As a temporary hack, sk_buffs going to an in-kernel call are queued on the
      rxrpc_call struct (->knlrecv_queue) rather than being handed over to the
      in-kernel user.  To process the queue internally, a temporary function,
      temp_deliver_data() has been added.  This will be replaced with common code
       between the rxrpc_recvmsg() path and the rxrpc_kernel_recv_data() path in a
      future patch.
       Signed-off-by: David Howells <dhowells@redhat.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 30 August 2016, 1 commit
    • rxrpc: Pass struct socket * to more rxrpc kernel interface functions · 4de48af6
      Committed by David Howells
      Pass struct socket * to more rxrpc kernel interface functions.  They should
      be starting from this rather than the socket pointer in the rxrpc_call
      struct if they need to access the socket.
      
      I have left:
      
      	rxrpc_kernel_is_data_last()
      	rxrpc_kernel_get_abort_code()
      	rxrpc_kernel_get_error_number()
      	rxrpc_kernel_free_skb()
      	rxrpc_kernel_data_consumed()
      
      unmodified as they're all about to be removed (and, in any case, don't
      touch the socket).
       Signed-off-by: David Howells <dhowells@redhat.com>
  6. 23 August 2016, 1 commit
  7. 13 July 2016, 1 commit
  8. 06 July 2016, 2 commits
  9. 22 June 2016, 8 commits
    • rxrpc: Kill off the rxrpc_transport struct · aa390bbe
      Committed by David Howells
      The rxrpc_transport struct is now redundant, given that the rxrpc_peer
      struct is now per peer port rather than per peer host, so get rid of it.
      
      Service connection lists are transferred to the rxrpc_peer struct, as is
      the conn_lock.  Previous patches moved the client connection handling out
      of the rxrpc_transport struct and discarded the connection bundling code.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Kill the client connection bundle concept · 999b69f8
      Committed by David Howells
      Kill off the concept of maintaining a bundle of connections to a particular
       target service to increase the number of call slots available for that
       service beyond four (there are four call slots per connection).
      
      This will make cleaning up the connection handling code easier and
      facilitate removal of the rxrpc_transport struct.  Bundling can be
      reintroduced later if necessary.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Provide more refcount helper functions · 5627cc8b
      Committed by David Howells
      Provide refcount helper functions for connections so that the code doesn't
      touch local or connection usage counts directly.
      
      Also make it such that local and peer put functions can take a NULL
      pointer.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Validate the net address given to rxrpc_kernel_begin_call() · f4552c2d
      Committed by David Howells
      Validate the net address given to rxrpc_kernel_begin_call() before using
      it.
      
      Whilst this should be mostly unnecessary for in-kernel users, it does clear
      the tail of the address struct in case we want to hash or compare the whole
      thing.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Use IDR to allocate client conn IDs on a machine-wide basis · 4a3388c8
      Committed by David Howells
      Use the IDR facility to allocate client connection IDs on a machine-wide
      basis so that each client connection has a unique identifier.  When the
      connection ID space wraps, we advance the epoch by 1, thereby effectively
      having a 62-bit ID space.  The IDR facility is then used to look up client
      connections during incoming packet routing instead of using an rbtree
      rooted on the transport.
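
       A hedged sketch of the allocation idea; the statics, field names and
       the epoch handling here are simplified stand-ins for the example:

	#include <linux/idr.h>
	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(conn_id_lock);
	static DEFINE_IDR(conn_id_idr);
	static u32 conn_id_epoch;	/* advanced by 1 when the ID space wraps */
	static int conn_id_last;

	static int rxrpc_alloc_client_conn_id(struct rxrpc_connection *conn)
	{
		int id;

		idr_preload(GFP_KERNEL);
		spin_lock_bh(&conn_id_lock);	/* _bh: IDs are looked up in data_ready */
		id = idr_alloc_cyclic(&conn_id_idr, conn, 1, 0x40000000, GFP_NOWAIT);
		if (id >= 0) {
			if (id < conn_id_last)	/* wrapped: move to the next epoch */
				conn_id_epoch++;
			conn_id_last = id;
			conn->proto_cid = id;		/* field names assumed */
			conn->proto_epoch = conn_id_epoch;
		}
		spin_unlock_bh(&conn_id_lock);
		idr_preload_end();
		return id < 0 ? id : 0;
	}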
      
      This change allows for the removal of the transport in the future and also
      means that client connections can be looked up directly in the data-ready
      handler by connection ID.
      
      The ID management code is placed in a new file, conn-client.c, to which all
      the client connection-specific code will eventually move.
      
      Note that the IDR tree gets very expensive on memory if the connection IDs
      are widely scattered throughout the number space, so we shall need to
      retire connections that have, say, an ID more than four times the maximum
      number of client conns away from the current allocation point to try and
      keep the IDs concentrated.  We will also need to retire connections from an
      old epoch.
      
      Also note that, for the moment, a pointer to the transport has to be passed
      through into the ID allocation function so that we can take a BH lock to
      prevent a locking issue against in-BH lookup of client connections.  This
      will go away later when RCU is used for server connections also.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Fix exclusive connection handling · cc8feb8e
      Committed by David Howells
      "Exclusive connections" are meant to be used for a single client call and
      then scrapped.  The idea is to limit the use of the negotiated security
      context.  The current code, however, isn't doing this: it is instead
      restricting the socket to a single virtual connection and doing all the
      calls over that.
      
      This is changed such that the socket no longer maintains a special virtual
      connection over which it will do all the calls, but rather gets a new one
      each time a new exclusive call is made.
      
      Further, using a socket option for this is a poor choice.  It should be
      done on sendmsg with a control message marker instead so that calls can be
      marked exclusive individually.  To that end, add RXRPC_EXCLUSIVE_CALL
      which, if passed to sendmsg() as a control message element, will cause the
       call to be done on a single-use connection.
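
       From userspace, marking an individual call exclusive might look
       roughly like this.  The header providing RXRPC_EXCLUSIVE_CALL is
       assumed to be available, and a real call also needs an
       RXRPC_USER_CALL_ID element plus the request data, which are omitted
       here:

	#include <string.h>
	#include <sys/socket.h>
	#include <linux/rxrpc.h>	/* assumed to define RXRPC_EXCLUSIVE_CALL */

	#ifndef SOL_RXRPC
	#define SOL_RXRPC 272		/* per the AF_RXRPC documentation */
	#endif

	/* Attach an RXRPC_EXCLUSIVE_CALL marker (no payload) to the msghdr
	 * used for the first sendmsg() of the call.
	 */
	static void mark_call_exclusive(struct msghdr *msg, void *cbuf, size_t cbuf_len)
	{
		struct cmsghdr *cmsg;

		memset(cbuf, 0, cbuf_len);
		msg->msg_control = cbuf;
		msg->msg_controllen = cbuf_len;	/* at least CMSG_SPACE(0) bytes */

		cmsg = CMSG_FIRSTHDR(msg);
		cmsg->cmsg_level = SOL_RXRPC;
		cmsg->cmsg_type = RXRPC_EXCLUSIVE_CALL;
		cmsg->cmsg_len = CMSG_LEN(0);

		msg->msg_controllen = CMSG_SPACE(0);	/* only this one element */
	}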
      
      The socket option (RXRPC_EXCLUSIVE_CONNECTION) still exists and, if set,
      will override any lack of RXRPC_EXCLUSIVE_CALL being specified so that
      programs using the setsockopt() will appear to work the same.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Use structs to hold connection params and protocol info · 19ffa01c
      Committed by David Howells
      Define and use a structure to hold connection parameters.  This makes it
      easier to pass multiple connection parameters around.
      
      Define and use a structure to hold protocol information used to hash a
       connection for lookup on an incoming packet.  Most of these fields will be
      disposed of eventually, including the duplicate local pointer.
      
       Whilst we're at it, rename "proto" to "family" when referring to a protocol
      family.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: checking for IS_ERR() instead of NULL · 0e4699e4
      Committed by Dan Carpenter
      rxrpc_lookup_peer_rcu() and rxrpc_lookup_peer() return NULL on error, never
      error pointers, so IS_ERR() can't be used.
      
      Fix three callers of those functions.
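
       The shape of the fix, shown generically rather than as the exact hunks:

	/* Before: wrong - these lookup functions never return error pointers,
	 * so IS_ERR() can never be true here and failures slip through.
	 */
	peer = rxrpc_lookup_peer_rcu(local, &srx);
	if (IS_ERR(peer))
		return NULL;

	/* After: they return NULL on failure, so test for that instead. */
	peer = rxrpc_lookup_peer_rcu(local, &srx);
	if (!peer)
		return NULL;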
      
      Fixes: be6e6707 ('rxrpc: Rework peer object handling to use hash table and RCU')
       Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
       Signed-off-by: David Howells <dhowells@redhat.com>
  10. 15 June 2016, 2 commits
    • rxrpc: Rework local endpoint management · 4f95dd78
      Committed by David Howells
      Rework the local RxRPC endpoint management.
      
      Local endpoint objects are maintained in a flat list as before.  This
      should be okay as there shouldn't be more than one per open AF_RXRPC socket
      (there can be fewer as local endpoints can be shared if their local service
      ID is 0 and they share the same local transport parameters).
      
      Changes:
      
       (1) Local endpoints may now only be shared if they have local service ID 0
           (ie. they're not being used for listening).
      
            This prevents a scenario where process A is listening on the Cache
           Manager port and process B contacts a fileserver - which may then
           attempt to send CM requests back to B.  But if A and B are sharing a
           local endpoint, A will get the CM requests meant for B.
      
       (2) We use a mutex to handle lookups and don't provide RCU-only lookups
           since we only expect to access the list when opening a socket or
           destroying an endpoint.
      
           The local endpoint object is pointed to by the transport socket's
           sk_user_data for the life of the transport socket - allowing us to
           refer to it directly from the sk_data_ready and sk_error_report
           callbacks.
      
        (3) atomic_inc_not_zero() now exists and can be used to only share a local
            endpoint if the last reference hasn't yet gone (see the sketch after
            this list).
      
       (4) We can remove rxrpc_local_lock - a spinlock that had to be taken with
           BH processing disabled given that we assume sk_user_data won't change
           under us.
      
       (5) The transport socket is shut down before we clear the sk_user_data
           pointer so that we can be sure that the transport socket's callbacks
           won't be invoked once the RCU destruction is scheduled.
      
       (6) Local endpoints have a work item that handles both destruction and
            event processing.  This means that destruction doesn't then need to
           wait for event processing.  The event queues can then be cleared after
           the transport socket is shut down.
      
       (7) Local endpoints are no longer available for resurrection beyond the
           life of the sockets that had them open.  As soon as their last ref
           goes, they are scheduled for destruction and may not have their usage
           count moved from 0.
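
       The sharing check in (3) follows the usual pattern; the function and
       field names below are illustrative:

	#include <linux/atomic.h>

	/* Only reuse an existing local endpoint if we can take a reference
	 * before its usage count hits zero; otherwise the caller allocates a
	 * fresh endpoint.
	 */
	static struct rxrpc_local *rxrpc_maybe_reuse_local(struct rxrpc_local *local)
	{
		if (atomic_inc_not_zero(&local->usage))
			return local;		/* got a ref; share it */
		return NULL;			/* already dying */
	}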
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Rework peer object handling to use hash table and RCU · be6e6707
      Committed by David Howells
      Rework peer object handling to use a hash table instead of a flat list and
      to use RCU.  Peer objects are no longer destroyed by passing them to a
      workqueue to process, but rather are just passed to the RCU garbage
      collector as kfree'able objects.
      
      The hash function uses the local endpoint plus all the components of the
      remote address, except for the RxRPC service ID.  Peers thus represent a
      UDP port on the remote machine as contacted by a UDP port on this machine.
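
       A hedged sketch of the non-creating lookup side; the hash table, key
       derivation and field names are stand-ins for the example:

	#include <linux/hashtable.h>
	#include <linux/string.h>

	static DEFINE_HASHTABLE(rxrpc_peer_hash, 10);	/* illustrative table */

	/* Non-creating lookup, callable from BH context (e.g. from the
	 * sk_error_report handler).  The caller holds rcu_read_lock(); no
	 * table lock is taken.
	 */
	static struct rxrpc_peer *rxrpc_find_peer_rcu(struct rxrpc_local *local,
						      const struct sockaddr_rxrpc *srx,
						      unsigned long hash_key)
	{
		struct rxrpc_peer *peer;

		hash_for_each_possible_rcu(rxrpc_peer_hash, peer, hash_link, hash_key) {
			if (peer->local == local &&
			    peer->hash_key == hash_key &&
			    memcmp(&peer->srx, srx, sizeof(*srx)) == 0)
				return peer;
		}
		return NULL;
	}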
      
      The RCU read lock is used to handle non-creating lookups so that they can
      be called from bottom half context in the sk_error_report handler without
      having to lock the hash table against modification.
      rxrpc_lookup_peer_rcu() *does* take a reference on the peer object as in
      the future, this will be passed to a work item for error distribution in
      the error_report path and this function will cease being used in the
      data_ready path.
      
      Creating lookups are done under spinlock rather than mutex as they might be
      set up due to an external stimulus if the local endpoint is a server.
      
      Captured network error messages (ICMP) are handled with respect to this
      struct and MTU size and RTT are cached here.
       Signed-off-by: David Howells <dhowells@redhat.com>
  11. 11 June 2016, 1 commit
    • rxrpc: Limit the listening backlog · 0e119b41
      Committed by David Howells
      Limit the socket incoming call backlog queue size so that a remote client
      can't pump in sufficient new calls that the server runs out of memory.  Note
      that this is partially theoretical at the moment since whilst the number of
      calls is limited, the number of packets trying to set up new calls is not.
      This will be addressed in a later patch.
      
       If the caller of listen() specifies a backlog of INT_MAX, then they get the
      current maximum; anything else greater than max_backlog or anything
      negative incurs EINVAL.
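
       The validation described above amounts to something like the following
       (a sketch of the logic, not the exact code):

	#include <linux/errno.h>
	#include <linux/kernel.h>

	/* Returns the effective backlog to use, or -EINVAL. */
	static int rxrpc_effective_backlog(int backlog, unsigned int max_backlog)
	{
		if (backlog == INT_MAX)
			return max_backlog;	/* "give me the current maximum" */
		if (backlog < 0 || backlog > (int)max_backlog)
			return -EINVAL;
		return backlog;
	}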
      
      The limit on the maximum queue size can be set by:
      
      	echo N >/proc/sys/net/rxrpc/max_backlog
      
      where 4<=N<=32.
      
      Further, set the default backlog to 0, requiring listen() to be called
       before we start actually queueing new calls.  Whilst this is arguably a
      change in the UAPI, the caller can't actually *accept* new calls anyway
      unless they've first called listen() to put the socket into the LISTENING
      state - thus the aforementioned new calls would otherwise just sit there,
      eating up kernel memory.  (Note that sockets that don't have a non-zero
      service ID bound don't get incoming calls anyway.)
      
      Given that the default backlog is now 0, make the AFS filesystem call
      kernel_listen() to set the maximum backlog for itself.
      
      Possible improvements include:
      
       (1) Trimming a too-large backlog to max_backlog when listen is called.
      
       (2) Trimming the backlog value whenever the value is used so that changes
           to max_backlog are applied to an open socket automatically.  Note that
           the AFS filesystem opens one socket and keeps it open for extended
           periods, so would miss out on changes to max_backlog.
      
       (3) Having a separate setting for the AFS filesystem.
       Signed-off-by: David Howells <dhowells@redhat.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 10 June 2016, 1 commit
    • rxrpc: Simplify connect() implementation and simplify sendmsg() op · 2341e077
      Committed by David Howells
      Simplify the RxRPC connect() implementation.  It will just note the
      destination address it is given, and if a sendmsg() comes along with no
      address, this will be assigned as the address.  No transport struct will be
      held internally, which will allow us to remove this later.
      
      Simplify sendmsg() also.  Whilst a call is active, userspace refers to it
      by a private unique user ID specified in a control message.  When sendmsg()
      sees a user ID that doesn't map to an extant call, it creates a new call
      for that user ID and attempts to add it.  If, when we try to add it, the
       user ID is already registered, we reject the message with -EEXIST.  We
      should never see this situation unless two threads are racing, trying to
      create a call with the same ID - which would be an error.
      
      It also isn't required to provide sendmsg() with an address - provided the
      control message data holds a user ID that maps to a currently active call.
       Signed-off-by: David Howells <dhowells@redhat.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 04 June 2016, 1 commit
    • rxrpc: Use pr_<level> and pr_fmt, reduce object size a few KB · 9b6d5398
      Committed by Joe Perches
      Use the more common kernel logging style and reduce object size.
      
      The logging message prefix changes from a mixture of
      "RxRPC:" and "RXRPC:" to "af_rxrpc: ".
      
      $ size net/rxrpc/built-in.o*
         text	   data	    bss	    dec	    hex	filename
        64172	   1972	   8304	  74448	  122d0	net/rxrpc/built-in.o.new
        67512	   1972	   8304	  77788	  12fdc	net/rxrpc/built-in.o.old
      
      Miscellanea:
      
      o Consolidate the ASSERT macros to use a single pr_err call with
        decimal and hexadecimal output and a stringified #OP argument
       Signed-off-by: Joe Perches <joe@perches.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
  14. 12 April 2016, 1 commit
  15. 14 March 2016, 1 commit
  16. 04 March 2016, 4 commits
  17. 01 December 2015, 1 commit
  18. 21 October 2015, 1 commit
    • KEYS: Merge the type-specific data with the payload data · 146aa8b1
      Committed by David Howells
      Merge the type-specific data with the payload data into one four-word chunk
      as it seems pointless to keep them separate.
      
      Use user_key_payload() for accessing the payloads of overloaded
      user-defined keys.
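
       For instance, a reader of a user-defined key now goes through the
       accessor rather than poking at the type-specific data directly.  A
       hedged sketch (the usual rule of holding key->sem or the RCU read lock
       applies):

	#include <linux/key.h>
	#include <linux/printk.h>
	#include <keys/user-type.h>

	static void dump_user_key(struct key *key)
	{
		const struct user_key_payload *ukp;

		down_read(&key->sem);
		ukp = user_key_payload(key);
		if (ukp)
			pr_info("key %d carries %u bytes of payload\n",
				key->serial, (unsigned int)ukp->datalen);
		up_read(&key->sem);
	}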
       Signed-off-by: David Howells <dhowells@redhat.com>
      cc: linux-cifs@vger.kernel.org
      cc: ecryptfs@vger.kernel.org
      cc: linux-ext4@vger.kernel.org
      cc: linux-f2fs-devel@lists.sourceforge.net
      cc: linux-nfs@vger.kernel.org
      cc: ceph-devel@vger.kernel.org
      cc: linux-ima-devel@lists.sourceforge.net
  19. 11 May 2015, 1 commit
  20. 03 March 2015, 1 commit
  21. 27 February 2014, 1 commit
  22. 19 February 2013, 2 commits
  23. 10 January 2013, 1 commit
  24. 16 April 2012, 1 commit