1. 07 Sep 2016, 3 commits
    • rxrpc: Add tracepoint for working out where aborts happen · 5a42976d
      David Howells committed
      Add a tracepoint for working out where local aborts happen.  Each
      tracepoint call is labelled with a 3-letter code so that they can be
      distinguished - and the DATA sequence number is added too where available.
      
      rxrpc_kernel_abort_call() also takes a 3-letter code so that AFS can
      indicate the circumstances when it aborts a call.
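      A minimal sketch of the idea (the exact signature here is assumed,
      not taken from the patch):

        /* Hypothetical AFS abort site: the "KWC" tag appears in the
         * trace output so this site can be identified. */
        rxrpc_kernel_abort_call(call, RX_USER_ABORT, "KWC");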
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Calls shouldn't hold socket refs · 8d94aa38
      David Howells committed
      rxrpc calls shouldn't hold refs on the sock struct.  This was done so that
      the socket wouldn't go away whilst the call was in progress, such that the
      call could reach the socket's queues.
      
      However, we can mark the socket as requiring an RCU release and rely on the
      RCU read lock.
      
      To make this work, we do:
      
       (1) rxrpc_release_call() removes the call's call user ID.  This is now
           only called from socket operations and not from the call processor:
      
      	rxrpc_accept_call() / rxrpc_kernel_accept_call()
      	rxrpc_reject_call() / rxrpc_kernel_reject_call()
      	rxrpc_kernel_end_call()
      	rxrpc_release_calls_on_socket()
      	rxrpc_recvmsg()
      
           Though it is also called in the cleanup path of
           rxrpc_accept_incoming_call() before we assign a user ID.
      
       (2) Pass the socket pointer into rxrpc_release_call() rather than getting
           it from the call so that we can get rid of uninitialised calls.
      
       (3) Fix call processor queueing to pass a ref to the work queue and to
           release that ref at the end of the processor function (or to pass it
           back to the work queue if we have to requeue).
      
 (4) Skip out of the call processor function asap if the call is complete,
     and don't requeue it in that case.
      
 (5) Clean up the call as soon as the refcount reaches 0 rather than
     trying to defer it.  Actual deallocation is deferred to RCU, however.
      
       (6) Don't hold socket refs for allocated calls.
      
 (7) Use the RCU read lock when queueing a message on a socket and treat
     the call's socket pointer according to RCU rules, checking it for
     NULL (see the sketch after this list).
      
           We also need to use the RCU read lock when viewing a call through
           procfs.
      
 (8) Transmit the final ACK/ABORT to a client call in rxrpc_release_call()
     if this hasn't been done yet so that we can then disconnect the call.
     Once the call is disconnected, it no longer has access to the
     connection struct and the UDP socket that the call work processor
     would need to send the ACK.  Terminal retransmission will be handled
     by the connection processor.
      
       (9) Release all calls immediately on the closing of a socket rather than
           trying to defer this.  Incomplete calls will be aborted.
      
      The call refcount model is much simplified.  Refs are held on the call by:
      
       (1) A socket's user ID tree.
      
       (2) A socket's incoming call secureq and acceptq.
      
       (3) A kernel service that has a call in progress.
      
       (4) A queued call work processor.  We have to take care to put any call
           that we failed to queue.
      
       (5) sk_buffs on a socket's receive queue.  A future patch will get rid of
           this.
      
      Whilst we're at it, we can do:
      
       (1) Get rid of the RXRPC_CALL_EV_RELEASE event.  Release is now done
           entirely from the socket routines and never from the call's processor.
      
       (2) Get rid of the RXRPC_CALL_DEAD state.  Calls now end in the
           RXRPC_CALL_COMPLETE state.
      
       (3) Get rid of the rxrpc_call::destroyer work item.  Calls are now torn
           down when their refcount reaches 0 and then handed over to RCU for
           final cleanup.
      
       (4) Get rid of the rxrpc_call::deadspan timer.  Calls are cleaned up
           immediately they're finished with and don't hang around.
           Post-completion retransmission is handled by the connection processor
           once the call is disconnected.
      
       (5) Get rid of the dead call expiry setting as there's no longer a timer
           to set.
      
       (6) rxrpc_destroy_all_calls() can just check that the call list is empty.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Improve the call tracking tracepoint · fff72429
      David Howells committed
      Improve the call tracking tracepoint by showing more differentiation
      between some of the put and get events, including:
      
        (1) Getting and putting refs for the socket call user ID tree.
      
        (2) Getting and putting refs for queueing and failing to queue the call
            processor work item.
      
      Note that these aren't necessarily used in this patch, but will be taken
      advantage of in future patches.
      
      An enum is added for the event subtype numbers rather than coding them
      directly as decimal numbers and a table of 3-letter strings is provided
      rather than a sequence of ?: operators.
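      A plausible shape for the enum and table (names modelled on the
      kernel's style; treat the specifics as illustrative):

        enum rxrpc_call_trace {
                rxrpc_call_new_client,
                rxrpc_call_queued,
                rxrpc_call_put,
                rxrpc_call__nr_trace
        };

        static const char rxrpc_call_traces[rxrpc_call__nr_trace][4] = {
                [rxrpc_call_new_client] = "NWc",
                [rxrpc_call_queued]     = "QUE",
                [rxrpc_call_put]        = "PUT",
        };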
      Signed-off-by: David Howells <dhowells@redhat.com>
  2. 03 Sep 2016, 1 commit
  3. 30 Aug 2016, 3 commits
  4. 24 Aug 2016, 1 commit
    • rxrpc: Improve management and caching of client connection objects · 45025bce
      David Howells committed
      Improve the management and caching of client rxrpc connection objects.
      From this point, client connections will be managed separately from service
      connections because AF_RXRPC controls the creation and re-use of client
      connections but doesn't have that luxury with service connections.
      
      Further, there will be limits on the numbers of client connections that may
      be live on a machine.  No direct restriction will be placed on the number
      of client calls, excepting that each client connection can support a
      maximum of four concurrent calls.
      
      Note that, for a number of reasons, we don't want to simply discard a
      client connection as soon as the last call is apparently finished:
      
       (1) Security is negotiated per-connection and the context is then shared
           between all calls on that connection.  The context can be negotiated
           again if the connection lapses, but that involves holding up calls
           whilst at least two packets are exchanged and various crypto bits are
           performed - so we'd ideally like to cache it for a little while at
           least.
      
       (2) If a packet goes astray, we will need to retransmit a final ACK or
           ABORT packet.  To make this work, we need to keep around the
           connection details for a little while.
      
       (3) The locally held structures represent some amount of setup time, to be
           weighed against their occupation of memory when idle.
      
      
      To this end, the client connection cache is managed by a state machine on
      each connection.  There are five states:
      
       (1) INACTIVE - The connection is not held in any list and may not have
           been exposed to the world.  If it has been previously exposed, it was
           discarded from the idle list after expiring.
      
       (2) WAITING - The connection is waiting for the number of client conns to
           drop below the maximum capacity.  Calls may be in progress upon it
           from when it was active and got culled.
      
           The connection is on the rxrpc_waiting_client_conns list which is kept
           in to-be-granted order.  Culled conns with waiters go to the back of
           the queue just like new conns.
      
 (3) ACTIVE - The connection has at least one call in progress upon it; it
     may freely grant available channels to new calls, and calls may be
     waiting on it for channels to become available.
      
           The connection is on the rxrpc_active_client_conns list which is kept
           in activation order for culling purposes.
      
       (4) CULLED - The connection got summarily culled to try and free up
           capacity.  Calls currently in progress on the connection are allowed
           to continue, but new calls will have to wait.  There can be no waiters
           in this state - the conn would have to go to the WAITING state
           instead.
      
       (5) IDLE - The connection has no calls in progress upon it and must have
           been exposed to the world (ie. the EXPOSED flag must be set).  When it
           expires, the EXPOSED flag is cleared and the connection transitions to
           the INACTIVE state.
      
           The connection is on the rxrpc_idle_client_conns list which is kept in
           order of how soon they'll expire.
      
      A connection in the ACTIVE or CULLED state must have at least one active
      call upon it; if in the WAITING state it may have active calls upon it;
      other states may not have active calls.
      
      As long as a connection remains active and doesn't get culled, it may
      continue to process calls - even if there are connections on the wait
queue.  This simplifies things a bit and reduces the amount of checking we
need to do.
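      The five states above map naturally onto an enum; a plausible shape
      (illustrative, abridged):

        enum rxrpc_conn_cache_state {
                RXRPC_CONN_CLIENT_INACTIVE,     /* not on any list */
                RXRPC_CONN_CLIENT_WAITING,      /* waiting for capacity */
                RXRPC_CONN_CLIENT_ACTIVE,       /* has calls in progress */
                RXRPC_CONN_CLIENT_CULLED,       /* culled; no new calls */
                RXRPC_CONN_CLIENT_IDLE,         /* no calls; awaiting expiry */
        };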
      
      
There are a couple of flags relevant to the cache:
      
       (1) EXPOSED - The connection ID got exposed to the world.  If this flag is
           set, an extra ref is added to the connection preventing it from being
           reaped when it has no calls outstanding.  This flag is cleared and the
           ref dropped when a conn is discarded from the idle list.
      
       (2) DONT_REUSE - The connection should be discarded as soon as possible and
           should not be reused.
      
      
      This commit also provides a number of new settings:
      
       (*) /proc/net/rxrpc/max_client_conns
      
           The maximum number of live client connections.  Above this number, new
           connections get added to the wait list and must wait for an active
           conn to be culled.  Culled connections can be reused, but they will go
           to the back of the wait list and have to wait.
      
       (*) /proc/net/rxrpc/reap_client_conns
      
           If the number of desired connections exceeds the maximum above, the
           active connection list will be culled until there are only this many
           left in it.
      
       (*) /proc/net/rxrpc/idle_conn_expiry
      
           The normal expiry time for a client connection, provided there are
           fewer than reap_client_conns of them around.
      
       (*) /proc/net/rxrpc/idle_conn_fast_expiry
      
           The expedited expiry time, used when there are more than
           reap_client_conns of them around.
      
      
      Note that I combined the Tx wait queue with the channel grant wait queue to
      save space as only one of these should be in use at once.
      
      Note also that, for the moment, the service connection cache still uses the
      old connection management code.
      Signed-off-by: David Howells <dhowells@redhat.com>
  5. 23 Aug 2016, 3 commits
  6. 10 Aug 2016, 1 commit
    • rxrpc: Don't access connection from call if pointer is NULL · f9dc5757
      David Howells committed
      The call state machine processor sets up the message parameters for a UDP
      message that it might need to transmit in advance on the basis that there's
      a very good chance it's going to have to transmit either an ACK or an
      ABORT.  This requires it to look in the connection struct to retrieve some
      of the parameters.
      
However, if the call is complete, the call's connection pointer may be NULL
to dissuade the processor from transmitting a message.  Even so, there are
some situations where the processor is still going to be called, and it
will still set up the message parameters whether it needs them or not.
      
      This results in a NULL pointer dereference at:
      
      	net/rxrpc/call_event.c:837
      
      To fix this, skip the message pre-initialisation if there's no connection
      attached.
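      The shape of the fix is simply an early guard (label name
      hypothetical):

        /* Don't pre-initialise the ACK/ABORT message if the call has
         * been disconnected from its connection. */
        if (!call->conn)
                goto skip_msg_init;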
      Signed-off-by: David Howells <dhowells@redhat.com>
  7. 06 Aug 2016, 1 commit
    • rxrpc: Fix races between skb free, ACK generation and replying · 372ee163
      David Howells committed
Inside the kafs filesystem it is possible for a call to occasionally be
processed and terminated before we've had a chance to check whether the
rx queue for that call needs cleaning up: afs_send_simple_reply() ends
the call when it is done, but it runs in a workqueue item that may happen
to run to completion before afs_deliver_to_call() completes.
      
      Further, it is possible for rxrpc_kernel_send_data() to be called to send a
      reply before the last request-phase data skb is released.  The rxrpc skb
      destructor is where the ACK processing is done and the call state is
      advanced upon release of the last skb.  ACK generation is also deferred to
      a work item because it's possible that the skb destructor is not called in
      a context where kernel_sendmsg() can be invoked.
      
      To this end, the following changes are made:
      
       (1) kernel_rxrpc_data_consumed() is added.  This should be called whenever
           an skb is emptied so as to crank the ACK and call states.  This does
           not release the skb, however.  kernel_rxrpc_free_skb() must now be
           called to achieve that.  These together replace
           rxrpc_kernel_data_delivered().
      
       (2) kernel_rxrpc_data_consumed() is wrapped by afs_data_consumed().
      
     This makes afs_deliver_to_call() easier to work with, as the skb can
     simply be discarded unconditionally here without trying to work out
     what the return value of the ->deliver() function means.
      
     The ->deliver() functions can, via afs_data_complete(),
     afs_transfer_reply() and afs_extract_data(), mark that an skb has
     been consumed (thereby cranking the state) without the need to
     conditionally free the skb to make sure the state is correct on an
     incoming call for when the call processor tries to send the reply.
      
       (3) rxrpc_recvmsg() now has to call kernel_rxrpc_data_consumed() when it
           has finished with a packet and MSG_PEEK isn't set.
      
       (4) rxrpc_packet_destructor() no longer calls rxrpc_hard_ACK_data().
      
           Because of this, we no longer need to clear the destructor and put the
           call before we free the skb in cases where we don't want the ACK/call
           state to be cranked.
      
       (5) The ->deliver() call-type callbacks are made to return -EAGAIN rather
           than 0 if they expect more data (afs_extract_data() returns -EAGAIN to
           the delivery function already), and the caller is now responsible for
           producing an abort if that was the last packet.
      
       (6) There are many bits of unmarshalling code where:
      
       		ret = afs_extract_data(call, skb, last, ...);
      		switch (ret) {
      		case 0:		break;
      		case -EAGAIN:	return 0;
      		default:	return ret;
      		}
      
           is to be found.  As -EAGAIN can now be passed back to the caller, we
           now just return if ret < 0:
      
       		ret = afs_extract_data(call, skb, last, ...);
      		if (ret < 0)
      			return ret;
      
 (7) Checks for trailing data and empty final data packets have been
     consolidated as afs_data_complete() (see the sketch after this
     list).  So:
      
      		if (skb->len > 0)
      			return -EBADMSG;
      		if (!last)
      			return 0;
      
           becomes:
      
      		ret = afs_data_complete(call, skb, last);
      		if (ret < 0)
      			return ret;
      
       (8) afs_transfer_reply() now checks the amount of data it has against the
           amount of data desired and the amount of data in the skb and returns
           an error to induce an abort if we don't get exactly what we want.
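      A sketch of what the consolidated helper in (7) plausibly looks like,
      given the description above (illustrative, not the actual patch):

        static int afs_data_complete(struct afs_call *call,
                                     struct sk_buff *skb, bool last)
        {
                if (skb->len > 0)
                        return -EBADMSG;        /* trailing data */
                afs_data_consumed(call, skb);   /* crank ACK/call state */
                return last ? 0 : -EAGAIN;      /* -EAGAIN: expect more */
        }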
      
      Without these changes, the following oops can occasionally be observed,
      particularly if some printks are inserted into the delivery path:
      
      general protection fault: 0000 [#1] SMP
      Modules linked in: kafs(E) af_rxrpc(E) [last unloaded: af_rxrpc]
      CPU: 0 PID: 1305 Comm: kworker/u8:3 Tainted: G            E   4.7.0-fsdevel+ #1303
      Hardware name: ASUS All Series/H97-PLUS, BIOS 2306 10/09/2014
      Workqueue: kafsd afs_async_workfn [kafs]
      task: ffff88040be041c0 ti: ffff88040c070000 task.ti: ffff88040c070000
      RIP: 0010:[<ffffffff8108fd3c>]  [<ffffffff8108fd3c>] __lock_acquire+0xcf/0x15a1
      RSP: 0018:ffff88040c073bc0  EFLAGS: 00010002
      RAX: 6b6b6b6b6b6b6b6b RBX: 0000000000000000 RCX: ffff88040d29a710
      RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff88040d29a710
      RBP: ffff88040c073c70 R08: 0000000000000001 R09: 0000000000000001
      R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000000
      R13: 0000000000000000 R14: ffff88040be041c0 R15: ffffffff814c928f
      FS:  0000000000000000(0000) GS:ffff88041fa00000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 00007fa4595f4750 CR3: 0000000001c14000 CR4: 00000000001406f0
      Stack:
       0000000000000006 000000000be04930 0000000000000000 ffff880400000000
       ffff880400000000 ffffffff8108f847 ffff88040be041c0 ffffffff81050446
       ffff8803fc08a920 ffff8803fc08a958 ffff88040be041c0 ffff88040c073c38
      Call Trace:
       [<ffffffff8108f847>] ? mark_held_locks+0x5e/0x74
       [<ffffffff81050446>] ? __local_bh_enable_ip+0x9b/0xa1
       [<ffffffff8108f9ca>] ? trace_hardirqs_on_caller+0x16d/0x189
       [<ffffffff810915f4>] lock_acquire+0x122/0x1b6
       [<ffffffff810915f4>] ? lock_acquire+0x122/0x1b6
       [<ffffffff814c928f>] ? skb_dequeue+0x18/0x61
       [<ffffffff81609dbf>] _raw_spin_lock_irqsave+0x35/0x49
       [<ffffffff814c928f>] ? skb_dequeue+0x18/0x61
       [<ffffffff814c928f>] skb_dequeue+0x18/0x61
       [<ffffffffa009aa92>] afs_deliver_to_call+0x344/0x39d [kafs]
       [<ffffffffa009ab37>] afs_process_async_call+0x4c/0xd5 [kafs]
       [<ffffffffa0099e9c>] afs_async_workfn+0xe/0x10 [kafs]
       [<ffffffff81063a3a>] process_one_work+0x29d/0x57c
       [<ffffffff81064ac2>] worker_thread+0x24a/0x385
       [<ffffffff81064878>] ? rescuer_thread+0x2d0/0x2d0
       [<ffffffff810696f5>] kthread+0xf3/0xfb
       [<ffffffff8160a6ff>] ret_from_fork+0x1f/0x40
       [<ffffffff81069602>] ? kthread_create_on_node+0x1cf/0x1cf
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 06 Jul 2016, 2 commits
    • rxrpc: Access socket accept queue under right lock · 30b515f4
      David Howells committed
      The socket's accept queue (socket->acceptq) should be accessed under
      socket->call_lock, not under the connection lock.
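      A minimal sketch of the rule (the iteration member and the per-call
      step are assumptions for illustration):

        read_lock(&socket->call_lock);
        list_for_each_entry(call, &socket->acceptq, accept_link)
                process_call(call);     /* hypothetical per-call step */
        read_unlock(&socket->call_lock);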
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Release a call's connection ref on call disconnection · e653cfe4
      David Howells committed
      When a call is disconnected, clear the call's pointer to the connection and
      release the associated ref on that connection.  This means that the call no
      longer pins the connection and the connection can be discarded even before
      the call is.
      
      As the code currently stands, the call struct is effectively pinned by
      userspace until userspace has enacted a recvmsg() to retrieve the final
      call state as sk_buffs on the receive queue pin the call to which they're
      related because:
      
       (1) The rxrpc_call struct contains the userspace ID that recvmsg() has to
           include in the control message buffer to indicate which call is being
           referred to.  This ID must remain valid until the terminal packet is
           completely read and must be invalidated immediately at that point as
           userspace is entitled to immediately reuse it.
      
 (2) The final ACK to the reply to a client call isn't sent until the last
     data packet is entirely read (it's probably worth altering this in
     future to send the ACK as soon as all the data has been received).
      
      
      This change requires a bit of rearrangement to make sure that the call
      isn't going to try and access the connection again after protocol
      completion:
      
       (1) Delete the error link earlier when we're releasing the call.  Possibly
           network errors should be distributed via connections at the cost of
           adding in an access to the rxrpc_connection struct.
      
       (2) Remove the call from the connection's call tree before disconnecting
           the call.  The call tree needs to be removed anyway and incoming
           packets delivered by channel pointer instead.
      
       (3) The release call event should be considered last after all other
           events have been processed so that we don't need access to the
           connection again.
      
       (4) Move the channel_lock taking from rxrpc_release_call() to
           rxrpc_disconnect_call() where it will be required in future.
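      Pulling the points above together, the disconnection step now looks
      roughly like this (structure and field names assumed from the
      description, not copied from the patch):

        void rxrpc_disconnect_call(struct rxrpc_call *call)
        {
                struct rxrpc_connection *conn = call->conn;

                spin_lock(&conn->channel_lock);
                /* Unhook the call from the connection's call tree so
                 * that incoming packets can no longer reach it. */
                rb_erase(&call->conn_node, &conn->calls);
                spin_unlock(&conn->channel_lock);

                call->conn = NULL;
                rxrpc_put_connection(conn);     /* drop the call's ref */
        }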
      Signed-off-by: David Howells <dhowells@redhat.com>
  9. 22 Jun 2016, 3 commits
  10. 15 Jun 2016, 1 commit
    • rxrpc: Use the peer record to distribute network errors · f66d7490
      David Howells committed
      Use the peer record to distribute network errors rather than the transport
      object (which I want to get rid of).  An error from a particular peer
      terminates all calls on that peer.
      
      For future consideration:
      
       (1) For ICMP-induced errors it might be worth trying to extract the RxRPC
           header from the offending packet, if one is returned attached to the
           ICMP packet, to better direct the error.
      
           This may be overkill, though, since an ICMP packet would be expected
           to be relating to the destination port, machine or network.  RxRPC
           ABORT and BUSY packets give notice at RxRPC level.
      
 (2) To also abort connection-level communications (such as CHALLENGE
     packets) where indicated by an error - but that requires some
     revamping of the connection event handling first.
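      A rough sketch of the distribution step described above (list and
      helper names are assumptions, not the patch's code):

        /* An error reported against a peer terminates every call
         * currently using that peer. */
        hlist_for_each_entry(call, &peer->error_targets, error_link) {
                call->error = error;
                rxrpc_terminate_call(call);     /* hypothetical helper */
        }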
      Signed-off-by: David Howells <dhowells@redhat.com>
  11. 13 Jun 2016, 1 commit
    • rxrpc: Rename files matching ar-*.c to get rid of the "ar-" prefix · 8c3e34a4
      David Howells committed
Rename files matching net/rxrpc/ar-*.c to get rid of the "ar-" prefix.
This will aid splitting those files by making it easier to come up with
new names.
      
Note that not all files are simply renamed from ar-X.c to X.c.  The
following exceptions are made:
      
       (*) ar-call.c -> call_object.c
           ar-ack.c -> call_event.c
      
           call_object.c is going to contain the core of the call object
           handling.  Call event handling is all going to be in call_event.c.
      
       (*) ar-accept.c -> call_accept.c
      
           Incoming call handling is going to be here.
      
       (*) ar-connection.c -> conn_object.c
           ar-connevent.c -> conn_event.c
      
           The former file is going to have the basic connection object handling,
           but there will likely be some differentiation between client
           connections and service connections in additional files later.  The
           latter file will have all the connection-level event handling.
      
       (*) ar-local.c -> local_object.c
      
           This will have the local endpoint object handling code.  The local
           endpoint event handling code will later be split out into
           local_event.c.
      
       (*) ar-peer.c -> peer_object.c
      
           This will have the peer endpoint object handling code.  Peer event
           handling code will be placed in peer_event.c (for the moment, there is
           none).
      
       (*) ar-error.c -> peer_event.c
      
           This will become the peer event handling code, though for the moment
           it's actually driven from the local endpoint's perspective.
      
      Note that I haven't renamed ar-transport.c to transport_object.c as the
      intention is to delete it when the rxrpc_transport struct is excised.
      
      The only file that actually has its contents changed is net/rxrpc/Makefile.
      
      net/rxrpc/ar-internal.h will need its section marker comments updating, but
      I'll do that in a separate patch to make it easier for git to follow the
      history across the rename.  I may also want to rename ar-internal.h at some
      point - but that would mean updating all the #includes and I'd rather do
      that in a separate step.
      
      Signed-off-by: David Howells <dhowells@redhat.com>
  12. 04 Jun 2016, 1 commit
    • rxrpc: Use pr_<level> and pr_fmt, reduce object size a few KB · 9b6d5398
      Joe Perches committed
      Use the more common kernel logging style and reduce object size.
      
      The logging message prefix changes from a mixture of
      "RxRPC:" and "RXRPC:" to "af_rxrpc: ".
      
      $ size net/rxrpc/built-in.o*
         text	   data	    bss	    dec	    hex	filename
        64172	   1972	   8304	  74448	  122d0	net/rxrpc/built-in.o.new
        67512	   1972	   8304	  77788	  12fdc	net/rxrpc/built-in.o.old
      
      Miscellanea:
      
      o Consolidate the ASSERT macros to use a single pr_err call with
        decimal and hexadecimal output and a stringified #OP argument
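      The conversion follows the standard kernel pattern (message text
      illustrative):

        /* Define pr_fmt before the #includes so that every pr_<level>()
         * call in the file picks up the prefix. */
        #define pr_fmt(fmt) "af_rxrpc: " fmt

        #include <linux/kernel.h>

        /* pr_warn("unknown packet type\n") now prints
         * "af_rxrpc: unknown packet type". */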
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 12 Apr 2016, 5 commits
  14. 14 Mar 2016, 1 commit
  15. 04 Mar 2016, 3 commits
    • rxrpc: Keep the skb private record of the Rx header in host byte order · 0d12f8a4
      David Howells committed
Currently, the Rx packet header is copied into the sk_buff private data so
that we can advance the pointer into the buffer, potentially discarding
the original.  At the moment, this copy is held in network byte order, but
this means we're doing a lot of unnecessary translations.
      
      The reasons it was done this way are that we need the values in network
      byte order occasionally and we can use the copy, slightly modified, as part
      of an iov array when sending an ack or an abort packet.
      
      However, it seems more reasonable on review that it would be better kept in
      host byte order and that we make up a new header when we want to send
      another packet.
      
      To this end, rename the original header struct to rxrpc_wire_header (with
      BE fields) and institute a variant called rxrpc_host_header that has host
      order fields.  Change the struct in the sk_buff private data into an
      rxrpc_host_header and translate the values when filling it in.
      
      This further allows us to keep values kept in various structures in host
      byte order rather than network byte order and allows removal of some fields
      that are byteswapped duplicates.
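      Illustrative, abridged shapes of the two structs (the real ones carry
      more fields): big-endian types on the wire, plain integers in the
      sk_buff private copy.

        struct rxrpc_wire_header {
                __be32  epoch;          /* client boot timestamp */
                __be32  cid;            /* connection and channel ID */
                __be32  callNumber;     /* call ID */
                __be32  seq;            /* packet sequence in the call */
        } __packed;

        struct rxrpc_host_header {
                u32     epoch;
                u32     cid;
                u32     callNumber;
                u32     seq;
        };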
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Rename call events to begin RXRPC_CALL_EV_ · 4c198ad1
      David Howells committed
Rename the call event constants to begin with RXRPC_CALL_EV_ so as to
distinguish them from the flags.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Convert call flag and event numbers into enums · 5b8848d1
      David Howells committed
      Convert call flag and event numbers into enums and move their definitions
      outside of the struct.
      
      Also move the call state enum outside of the struct and add an extra
      element to count the number of states.
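      The shape of the conversion (illustrative, abridged; the counting
      element mirrors the description above):

        enum rxrpc_call_state {
                RXRPC_CALL_CLIENT_SEND_REQUEST,
                RXRPC_CALL_CLIENT_AWAIT_REPLY,
                RXRPC_CALL_COMPLETE,
                NR__RXRPC_CALL_STATES   /* counts the states */
        };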
      Signed-off-by: David Howells <dhowells@redhat.com>
  16. 25 Nov 2015, 1 commit
    • rxrpc: Correctly handle ack at end of client call transmit phase · 33c40e24
      David Howells committed
      Normally, the transmit phase of a client call is implicitly ack'd by the
      reception of the first data packet of the response being received.
      However, if a security negotiation happens, the transmit phase, if it is
      entirely contained in a single packet, may get an ack packet in response
      and then may get aborted due to security negotiation failure.
      
Because the client has shifted state to RXRPC_CALL_CLIENT_AWAIT_REPLY due
to having transmitted all the data, the code that handles processing of
the received ack packet doesn't note the hard ack for the data packet.
      
      The following abort packet in the case of security negotiation failure then
      incurs an assertion failure when it tries to drain the Tx queue because the
      hard ack state is out of sync (hard ack means the packets have been
      processed and can be discarded by the sender; a soft ack means that the
      packets are received but could still be discarded and rerequested by the
      receiver).
      
      To fix this, we should record the hard ack we received for the ack packet.
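      The shape of the fix (variable names assumed; rxrpc_rotate_tx_window()
      is the drain helper seen in the trace below):

        /* On receiving the ACK in CLIENT_AWAIT_REPLY, note the hard-ack
         * point it carries so the Tx queue drains consistently. */
        hard_ack = ntohl(ack.firstPacket) - 1;
        rxrpc_rotate_tx_window(call, hard_ack);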
      
      The assertion failure looks like:
      
      	RxRPC: Assertion failed
      	1 <= 0 is false
      	0x1 <= 0x0 is false
      	------------[ cut here ]------------
      	kernel BUG at ../net/rxrpc/ar-ack.c:431!
      	...
      	RIP: 0010:[<ffffffffa006857b>]  [<ffffffffa006857b>] rxrpc_rotate_tx_window+0xbc/0x131 [af_rxrpc]
      	...
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 02 Mar 2015, 2 commits
  18. 27 Feb 2014, 3 commits
    • af_rxrpc: Expose more RxRPC parameters via sysctls · 817913d8
      David Howells committed
      Expose RxRPC parameters via sysctls to control the Rx window size, the Rx MTU
      maximum size and the number of packets that can be glued into a jumbo packet.
      
      More info added to Documentation/networking/rxrpc.txt.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • af_rxrpc: Improve ACK production · 9823f39a
      David Howells committed
      Improve ACK production by the following means:
      
 (1) Don't send an ACK_REQUESTED ack immediately even if the
     RXRPC_MORE_PACKETS flag isn't set on a data packet that also has
     RXRPC_REQUEST_ACK set.
      
     MORE_PACKETS just means that the sender has emptied its Tx data
     buffer.  More data will be forthcoming unless RXRPC_LAST_PACKET is
     also flagged.
      
           It is possible to see runs of DATA packets with MORE_PACKETS unset that
           aren't waiting for an ACK.
      
           It is therefore better to wait a small instant to see if we can combine an
           ACK for several packets.
      
       (2) Don't send an ACK_IDLE ack immediately unless we're responding to the
           terminal data packet of a call.
      
           Whilst sending an ACK_IDLE mid-call serves to let the other side know
           that we won't be asking it to resend certain Tx buffers and that it can
           discard them, spamming it with loads of acks just because we've
           temporarily run out of data just distracts it.
      
       (3) Put the ACK_IDLE ack generation timeout up to half a second rather than a
           single jiffy.  Just because we haven't been given more data immediately
           doesn't mean that more isn't forthcoming.  The other side may be busily
           finding the data to send to us.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • af_rxrpc: Add sysctls for configuring RxRPC parameters · 5873c083
      David Howells committed
      Add sysctls for configuring RxRPC protocol handling, specifically controls on
      delays before ack generation, the delay before resending a packet, the maximum
      lifetime of a call and the expiration times of calls, connections and
      transports that haven't been recently used.
      
      More info added in Documentation/networking/rxrpc.txt.
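      A representative entry of the kind this patch adds (values
      illustrative; see net/rxrpc/sysctl.c for the real table):

        static struct ctl_table rxrpc_sysctl_table[] = {
                {
                        .procname       = "req_ack_delay",
                        .data           = &rxrpc_requested_ack_delay,
                        .maxlen         = sizeof(unsigned long),
                        .mode           = 0644,
                        .proc_handler   = proc_doulongvec_ms_jiffies_minmax,
                },
                { }
        };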
      Signed-off-by: David Howells <dhowells@redhat.com>
  19. 22 Jan 2014, 1 commit
  20. 16 Apr 2012, 1 commit
  21. 20 Dec 2011, 1 commit
  22. 20 May 2011, 1 commit