1. 25 Sep 2016, 1 commit
    • rxrpc: Implement slow-start · 57494343
      By David Howells
      Implement RxRPC slow-start, which is similar to RFC 5681 for TCP.  A
      tracepoint is added to log the state of the congestion management algorithm
      and the decisions it makes.
      
      Notes:
      
       (1) Since we send fixed-size DATA packets (apart from the final packet in
           each phase), counters and calculations are in terms of packets rather
           than bytes.
      
       (2) The ACK packet carries the equivalent of TCP SACK.
      
       (3) The FLIGHT_SIZE calculation in RFC 5681 doesn't seem particularly
           suited to SACK of a small number of packets.  It seems that, almost
           inevitably, by the time three 'duplicate' ACKs have been seen, we have
           narrowed the loss down to one or two missing packets, and the
           FLIGHT_SIZE calculation ends up as 2.
      
       (4) In rxrpc_resend(), if there was no data that apparently needed
           retransmission, we transmit a PING ACK to ask the peer to tell us what
           its Rx window state is.
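       A minimal sketch of the packet-counting window growth this implies,
       under assumed, illustrative names (my_call_cong, cwnd, ssthresh and
       cumul_acks are not necessarily what the patch itself uses):

	struct my_call_cong {
		unsigned int cwnd;	/* congestion window, in DATA packets */
		unsigned int ssthresh;	/* slow-start threshold, in packets */
		unsigned int cumul_acks;
	};

	/* Grow the window on receipt of an ACK covering nr_acked packets:
	 * exponentially below ssthresh (slow start), one packet per full
	 * window above it (congestion avoidance) - cf. RFC 5681, but in
	 * packets rather than bytes.
	 */
	static void my_cong_on_ack(struct my_call_cong *c, unsigned int nr_acked)
	{
		if (c->cwnd < c->ssthresh) {
			c->cwnd += nr_acked;
		} else {
			c->cumul_acks += nr_acked;
			if (c->cumul_acks >= c->cwnd) {
				c->cwnd++;
				c->cumul_acks = 0;
			}
		}
	}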
       Signed-off-by: David Howells <dhowells@redhat.com>
  2. 23 Sep 2016, 1 commit
  3. 17 Sep 2016, 3 commits
  4. 08 Sep 2016, 1 commit
    • rxrpc: Rewrite the data and ack handling code · 248f219c
      By David Howells
      Rewrite the data and ack handling code such that:
      
       (1) Parsing of received ACK and ABORT packets and the distribution and the
           filing of DATA packets happens entirely within the data_ready context
           called from the UDP socket.  This allows us to process and discard ACK
           and ABORT packets much more quickly (they're no longer stashed on a
           queue for a background thread to process).
      
       (2) We avoid calling skb_clone(), pskb_pull() and pskb_trim().  We instead
           keep track of the offset and length of the content of each packet in
           the sk_buff metadata.  This means we don't do any allocation in the
           receive path.
      
       (3) Jumbo DATA packet parsing is now done in data_ready context.  Rather
           than cloning the packet once for each subpacket and pulling/trimming
           it, we file the packet multiple times with an annotation for each
           indicating which subpacket is there.  From that we can directly
           calculate the offset and length.
      
        (4) A call's receive queue can be accessed without taking locks (memory
            barriers do have to be used, though; see the sketch after this list).
      
       (5) Incoming calls are set up from preallocated resources and immediately
            made live.  They can then have packets queued upon them and ACKs
            generated.  If insufficient resources exist, DATA packet #1 is given a
            BUSY reply and other DATA packets are discarded.
      
       (6) sk_buffs no longer take a ref on their parent call.
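       A plausible shape for the lockless receive-queue access in (4) above,
       pairing producer and consumer with acquire/release barriers (my_call,
       RXTX_SIZE, my_consume() and the field names are illustrative only;
       rxrpc_seq_t is a 32-bit sequence number):

	#define RXTX_SIZE 64
	#define RXTX_MASK (RXTX_SIZE - 1)

	struct my_call {
		struct sk_buff	*rxtx_buffer[RXTX_SIZE];
		rxrpc_seq_t	rx_hard_ack;	/* last packet consumed */
		rxrpc_seq_t	rx_top;		/* highest packet filed */
	};

	/* Producer (data_ready context): file the skb in its ring slot, then
	 * publish the new top with a release barrier so a consumer never
	 * sees rx_top advance before the slot is filled.
	 */
	static void my_post_packet(struct my_call *call, struct sk_buff *skb,
				   rxrpc_seq_t seq)
	{
		call->rxtx_buffer[seq & RXTX_MASK] = skb;
		smp_store_release(&call->rx_top, seq);
	}

	/* Consumer (recvmsg context): read rx_top with an acquire barrier
	 * before dereferencing any slot it covers.
	 */
	static void my_drain(struct my_call *call)
	{
		rxrpc_seq_t seq, top = smp_load_acquire(&call->rx_top);

		for (seq = call->rx_hard_ack + 1; before_eq(seq, top); seq++)
			my_consume(call->rxtx_buffer[seq & RXTX_MASK]);
	}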
      
      To make this work, the following changes are made:
      
       (1) Each call's receive buffer is now a circular buffer of sk_buff
           pointers (rxtx_buffer) rather than a number of sk_buff_heads spread
           between the call and the socket.  This permits each sk_buff to be in
           the buffer multiple times.  The receive buffer is reused for the
           transmit buffer.
      
       (2) A circular buffer of annotations (rxtx_annotations) is kept parallel
           to the data buffer.  Transmission phase annotations indicate whether a
           buffered packet has been ACK'd or not and whether it needs
           retransmission.
      
           Receive phase annotations indicate whether a slot holds a whole packet
           or a jumbo subpacket and, if the latter, which subpacket.  They also
           note whether the packet has been decrypted in place.
      
       (3) DATA packet window tracking is much simplified.  Each phase has just
           two numbers representing the window (rx_hard_ack/rx_top and
           tx_hard_ack/tx_top).
      
            The hard_ack number is the sequence number before the base of the window,
           representing the last packet the other side says it has consumed.
           hard_ack starts from 0 and the first packet is sequence number 1.
      
           The top number is the sequence number of the highest-numbered packet
           residing in the buffer.  Packets between hard_ack+1 and top are
           soft-ACK'd to indicate they've been received, but not yet consumed.
      
            Four macros, before(), before_eq(), after() and after_eq(), are added
            to compare sequence numbers within the window (see the sketch after
            this list).  This allows for the top of the window to wrap when the
            hard-ack sequence number gets close to the limit.
      
           Two flags, RXRPC_CALL_RX_LAST and RXRPC_CALL_TX_LAST, are added also
           to indicate when rx_top and tx_top point at the packets with the
           LAST_PACKET bit set, indicating the end of the phase.
      
       (4) Calls are queued on the socket 'receive queue' rather than packets.
            This means that we don't have to invent dummy packets to queue to
            indicate abnormal/terminal states and we don't have to keep metadata
            packets (such as ABORTs) around.
      
       (5) The offset and length of a (sub)packet's content are now passed to
           the verify_packet security op.  This is currently expected to decrypt
           the packet in place and validate it.
      
           However, there's now nowhere to store the revised offset and length of
           the actual data within the decrypted blob (there may be a header and
           padding to skip) because an sk_buff may represent multiple packets, so
           a locate_data security op is added to retrieve these details from the
           sk_buff content when needed.
      
       (6) recvmsg() now has to handle jumbo subpackets, where each subpacket is
           individually secured and needs to be individually decrypted.  The code
           to do this is broken out into rxrpc_recvmsg_data() and shared with the
           kernel API.  It now iterates over the call's receive buffer rather
           than walking the socket receive queue.
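       The window comparisons in (3) above can be done with serial-number
       arithmetic, much as TCP compares its sequence numbers; a sketch
       (before_eq() and after_eq() follow the same pattern with <= and >=):

	typedef u32 rxrpc_seq_t;

	/* True if seq1 precedes seq2, even across a wrap of the 32-bit
	 * space: the signed difference stays negative while the two values
	 * are within 2^31 of each other.
	 */
	static inline bool before(rxrpc_seq_t seq1, rxrpc_seq_t seq2)
	{
		return (s32)(seq1 - seq2) < 0;
	}

	static inline bool after(rxrpc_seq_t seq1, rxrpc_seq_t seq2)
	{
		return (s32)(seq1 - seq2) > 0;
	}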
      
      Additional changes:
      
        (1) The timers are condensed to a single timer that is set for the soonest
            of three timeouts (delayed ACK generation, DATA retransmission and
            call lifespan); see the sketch after this list.
      
       (2) Transmission of ACK and ABORT packets is effected immediately from
           process-context socket ops/kernel API calls that cause them instead of
           them being punted off to a background work item.  The data_ready
           handler still has to defer to the background, though.
      
       (3) A shutdown op is added to the AF_RXRPC socket so that the AFS
           filesystem can shut down the socket and flush its own work items
           before closing the socket to deal with any in-progress service calls.
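       A sketch of the condensed timer in (1) above: keep one deadline per
       purpose and arm the single timer for whichever falls due first (the
       struct and field names here are illustrative, not the patch's own):

	struct my_call {
		struct timer_list timer;
		unsigned long	ack_at;		/* delayed-ACK deadline */
		unsigned long	resend_at;	/* retransmission deadline */
		unsigned long	expire_at;	/* call lifespan deadline */
	};

	/* Re-arm the one timer for the soonest of the three deadlines. */
	static void my_set_timer(struct my_call *call)
	{
		unsigned long t = call->expire_at;

		if (time_before(call->ack_at, t))
			t = call->ack_at;
		if (time_before(call->resend_at, t))
			t = call->resend_at;

		mod_timer(&call->timer, t);
	}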
      
      Future additional changes that will need to be considered:
      
        (1) Make sure that a call doesn't hog the front of the queue by receiving
            data from the network as fast as userspace is consuming it, to the
            exclusion of other calls.
      
       (2) Transmit delayed ACKs from within recvmsg() when we've consumed
           sufficiently more packets to avoid the background work item needing to
           run.
       Signed-off-by: David Howells <dhowells@redhat.com>
  5. 07 Sep 2016, 1 commit
  6. 02 Sep 2016, 1 commit
    • rxrpc: Don't expose skbs to in-kernel users [ver #2] · d001648e
      By David Howells
      Don't expose skbs to in-kernel users, such as the AFS filesystem, but
       instead provide a notification hook that indicates that a call needs
      attention and another that indicates that there's a new call to be
      collected.
      
      This makes the following possibilities more achievable:
      
       (1) Call refcounting can be made simpler if skbs don't hold refs to calls.
      
        (2) skbs referring to non-data events will be able to be freed much sooner,
            rather than being queued for AFS to pick up, as rxrpc_kernel_recv_data
            will be able to consult the call state.
      
       (3) We can shortcut the receive phase when a call is remotely aborted
           because we don't have to go through all the packets to get to the one
           cancelling the operation.
      
       (4) It makes it easier to do encryption/decryption directly between AFS's
           buffers and sk_buffs.
      
       (5) Encryption/decryption can more easily be done in the AFS's thread
           contexts - usually that of the userspace process that issued a syscall
           - rather than in one of rxrpc's background threads on a workqueue.
      
       (6) AFS will be able to wait synchronously on a call inside AF_RXRPC.
      
      To make this work, the following interface function has been added:
      
           int rxrpc_kernel_recv_data(
      		struct socket *sock, struct rxrpc_call *call,
      		void *buffer, size_t bufsize, size_t *_offset,
      		bool want_more, u32 *_abort_code);
      
      This is the recvmsg equivalent.  It allows the caller to find out about the
      state of a specific call and to transfer received data into a buffer
      piecemeal.
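       A hypothetical in-kernel caller might drive it like this (the socket
       name and buffer are placeholders; return-value details are as defined
       by this patch):

	u8 buf[256];
	u32 abort_code;
	size_t offset = 0;
	int ret;

	/* Pull up to sizeof(buf) bytes of the reply into buf; want_more
	 * says whether we expect further data to follow this read.
	 */
	ret = rxrpc_kernel_recv_data(afs_socket, call, buf, sizeof(buf),
				     &offset, true, &abort_code);
	if (ret == -ECONNABORTED)
		pr_warn("call aborted, abort code %u\n", abort_code);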
      
      afs_extract_data() and rxrpc_kernel_recv_data() now do all the extraction
      logic between them.  They don't wait synchronously yet because the socket
      lock needs to be dealt with.
      
      Five interface functions have been removed:
      
       	rxrpc_kernel_is_data_last()
       	rxrpc_kernel_get_abort_code()
       	rxrpc_kernel_get_error_number()
       	rxrpc_kernel_free_skb()
       	rxrpc_kernel_data_consumed()
      
      As a temporary hack, sk_buffs going to an in-kernel call are queued on the
      rxrpc_call struct (->knlrecv_queue) rather than being handed over to the
      in-kernel user.  To process the queue internally, a temporary function,
      temp_deliver_data() has been added.  This will be replaced with common code
       between the rxrpc_recvmsg() path and the rxrpc_kernel_recv_data() path in a
      future patch.
       Signed-off-by: David Howells <dhowells@redhat.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 30 Aug 2016, 3 commits
  8. 24 Aug 2016, 1 commit
    • rxrpc: Fix conn-based retransmit · 2266ffde
      By David Howells
      If a duplicate packet comes in for a call that has just completed on a
      connection's channel then there will be an oops in the data_ready handler
      because it tries to examine the connection struct via a call struct (which
      we don't have - the pointer is unset).
      
      Since the connection struct pointer is available to us, go direct instead.
      
      Also, the ACK packet to be retransmitted needs three octets of padding
      between the soft ack list and the ackinfo.
      
      Fixes: 18bfeba5 ("rxrpc: Perform terminal call ACK/ABORT retransmission from conn processor")
       Signed-off-by: David Howells <dhowells@redhat.com>
  9. 23 Aug 2016, 2 commits
  10. 06 Jul 2016, 6 commits
    • rxrpc: Call channels should have separate call number spaces · a1399f8b
      By David Howells
      Each channel on a connection has a separate, independent number space from
      which to allocate callNumber values.  It is entirely possible, for example,
      to have a connection with four active calls, each with call number 1.
      
      Note that the callNumber values for any particular channel don't have to
      start at 1, but they are supposed to increment monotonically for that
      channel from a client's perspective and may not be reused once the call
      number is transmitted (until the epoch cycles all the way back round).
      
      Currently, however, call numbers are allocated on a per-connection basis
      and, further, are held in an rb-tree.  The rb-tree is redundant as the four
      channel pointers in the rxrpc_connection struct are entirely capable of
      pointing to all the calls currently in progress on a connection.
      
      To this end, make the following changes:
      
        (1) Handle call number allocation independently per channel (see the
            sketch after this list).
      
       (2) Get rid of the conn->calls rb-tree.  This is overkill as a connection
           may have a maximum of four calls in progress at any one time.  Use the
           pointers in the channels[] array instead, indexed by the channel
           number from the packet.
      
       (3) For each channel, save the result of the last call that was in
           progress on that channel in conn->channels[] so that the final ACK or
           ABORT packet can be replayed if necessary.  Any call earlier than that
           is just ignored.  If we've seen the next call number in a packet, the
           last one is most definitely defunct.
      
       (4) When generating a RESPONSE packet for a connection, the call number
           counter for each channel must be included in it.
      
       (5) When parsing a RESPONSE packet for a connection, the call number
           counters contained therein should be used to set the minimum expected
           call numbers on each channel.
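       A sketch of (1) and (2): each of the four channel slots carries its
       own counter and current-call pointer, so allocating the next
       callNumber is a per-channel increment and no rb-tree is needed
       (struct and field names here are illustrative):

	struct my_channel {
		struct rxrpc_call __rcu	*call;		/* active call, if any */
		u32			call_id;	/* current call's number */
		u32			call_counter;	/* last number allocated */
	};

	struct my_connection {
		struct my_channel	channels[4];	/* RXRPC_MAXCALLS */
	};

	/* Allocate the next call number on one channel. */
	static u32 my_chan_next_call_id(struct my_channel *chan)
	{
		return ++chan->call_counter;
	}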
      
      To do in future commits:
      
       (1) Replay terminal packets based on the last call stored in
           conn->channels[].
      
       (2) Connections should be retired before the callNumber space on any
           channel runs out.
      
       (3) A server is expected to disregard or reject any new incoming call that
           has a call number less than the current call number counter.  The call
           number counter for that channel must be advanced to the new call
           number.
      
           Note that the server cannot just require that the next call that it
           sees on a channel be exactly the call number counter + 1 because then
           there's a scenario that could cause a problem: The client transmits a
           packet to initiate a connection, the network goes out, the server
           sends an ACK (which gets lost), the client sends an ABORT (which also
           gets lost); the network then reconnects, the client then reuses the
           call number for the next call (it doesn't know the server already saw
           the call number), but the server thinks it already has the first
           packet of this call (it doesn't know that the client doesn't know that
           it saw the call number the first time).
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Add RCU destruction for connections and calls · dee46364
      By David Howells
      Add RCU destruction for connections and calls as the RCU lookup from the
      transport socket data_ready handler is going to come along shortly.
      
      Whilst we're at it, move the cleanup workqueue flushing and RCU barrierage
      into the destruction code for the objects that need it (locals and
      connections) and add the extra RCU barrier required for connection cleanup.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Move usage count getting into rxrpc_queue_conn() · 2c4579e4
      By David Howells
       Rather than calling rxrpc_get_connection() manually before calling
       rxrpc_queue_conn(), do it inside the queue wrapper (sketched below).
      
       This allows us to make some important fixes:
      
       (1) If the usage count is 0, do nothing.  This prevents connections from
           being reanimated once they're dead.
      
       (2) If rxrpc_queue_work() fails because the work item is already queued,
           retract the usage count increment which would otherwise be lost.
      
       (3) Don't take a ref on the connection in the work function.  By passing
           the ref through the work item, this is unnecessary.  Doing it in the
           work function is too late anyway.  Previously, connection-directed
           packets held a ref on the connection, but that's not really the best
           idea.
      
       And another useful change:
      
        (*) We don't need to take a refcount on the connection in the data_ready
            handler unless we invoke the connection's work item.  We're using RCU
            there, so the refcount is otherwise redundant.
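       The wrapper might look something like this (a sketch of the scheme
       above; rxrpc_queue_work() stands for the existing work-queueing
       helper, and the usage/processor field names are illustrative):

	static void rxrpc_queue_conn(struct rxrpc_connection *conn)
	{
		/* (1) A dead connection stays dead: only take a ref if the
		 * usage count hasn't already hit zero.
		 */
		if (atomic_add_unless(&conn->usage, 1, 0) == 0)
			return;

		/* (2) The work item was already queued, so retract the ref
		 * we just took; the queued instance carries its own.
		 */
		if (!rxrpc_queue_work(&conn->processor))
			rxrpc_put_connection(conn);
	}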
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Turn connection #defines into enums and put outside struct def · bba304db
      By David Howells
       Turn the connection event and state #define lists into enums and move
       them outside of the struct definition.
      
       Whilst we're at it, change _SERVER to _SERVICE in those identifiers and add
       EV_ into the event names to distinguish them from flags and states.
      
      Also add a symbol indicating the number of states and use that in the state
      text array.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Provide queuing helper functions · 5acbee46
      By David Howells
      Provide queueing helper functions so that the queueing of local and
      connection objects can be fixed later.
      
      The issue is that a ref on the object needs to be passed to the work queue,
      but the act of queueing the object may fail because the object is already
       queued.  Testing the queuedness of an object beforehand doesn't work
      because there can be a race with someone else trying to queue it.  What
      will have to be done is to adjust the refcount depending on the result of
      the queue operation.
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Avoid using stack memory in SG lists in rxkad · a263629d
      By Herbert Xu
      rxkad uses stack memory in SG lists which would not work if stacks were
      allocated from vmalloc memory.  In fact, in most cases this isn't even
      necessary as the stack memory ends up getting copied over to kmalloc
      memory.
      
      This patch eliminates all the unnecessary stack memory uses by supplying
      the final destination directly to the crypto API.  In two instances where a
       temporary buffer is actually needed we also switch to using a scratch area in
      the rxrpc_call struct (only one DATA packet will be being secured or
      verified at a time).
      
       Finally, there is no need to split a split-page buffer into two SG entries,
       so code dealing with that has been removed.
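       The change is essentially this shape (a sketch; crypto_buf as a
       per-call scratch field is illustrative of the approach, not
       necessarily the exact name the patch uses):

	struct scatterlist sg;

	/* Point the SG entry at a scratch buffer embedded in the call
	 * struct rather than at a stack variable, which would break if
	 * stacks were allocated from vmalloc'd memory.
	 */
	sg_init_one(&sg, call->crypto_buf, sizeof(call->crypto_buf));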
       Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
       Signed-off-by: Andy Lutomirski <luto@kernel.org>
       Signed-off-by: David Howells <dhowells@redhat.com>
  11. 22 Jun 2016, 3 commits
  12. 15 Jun 2016, 1 commit
    • rxrpc: Rework local endpoint management · 4f95dd78
      By David Howells
      Rework the local RxRPC endpoint management.
      
      Local endpoint objects are maintained in a flat list as before.  This
      should be okay as there shouldn't be more than one per open AF_RXRPC socket
      (there can be fewer as local endpoints can be shared if their local service
      ID is 0 and they share the same local transport parameters).
      
      Changes:
      
       (1) Local endpoints may now only be shared if they have local service ID 0
           (ie. they're not being used for listening).
      
            This prevents a scenario where process A is listening on the Cache
           Manager port and process B contacts a fileserver - which may then
           attempt to send CM requests back to B.  But if A and B are sharing a
           local endpoint, A will get the CM requests meant for B.
      
       (2) We use a mutex to handle lookups and don't provide RCU-only lookups
           since we only expect to access the list when opening a socket or
           destroying an endpoint.
      
           The local endpoint object is pointed to by the transport socket's
           sk_user_data for the life of the transport socket - allowing us to
           refer to it directly from the sk_data_ready and sk_error_report
           callbacks.
      
        (3) atomic_inc_not_zero() now exists and can be used to only share a local
            endpoint if the last reference hasn't yet gone (see the sketch after
            this list).
      
       (4) We can remove rxrpc_local_lock - a spinlock that had to be taken with
           BH processing disabled given that we assume sk_user_data won't change
           under us.
      
       (5) The transport socket is shut down before we clear the sk_user_data
           pointer so that we can be sure that the transport socket's callbacks
           won't be invoked once the RCU destruction is scheduled.
      
       (6) Local endpoints have a work item that handles both destruction and
            event processing.  This means that destruction doesn't then need to
           wait for event processing.  The event queues can then be cleared after
           the transport socket is shut down.
      
       (7) Local endpoints are no longer available for resurrection beyond the
           life of the sockets that had them open.  As soon as their last ref
           goes, they are scheduled for destruction and may not have their usage
           count moved from 0.
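       A sketch of the lookup that (2) and (3) describe: walk the list under
       the mutex and only reuse an endpoint whose refcount can still be
       raised (rxrpc_local_mutex, rxrpc_local_endpoints,
       local_params_match() and rxrpc_alloc_local() are illustrative names):

	struct rxrpc_local *rxrpc_lookup_local(const struct sockaddr_rxrpc *srx)
	{
		struct rxrpc_local *local;

		mutex_lock(&rxrpc_local_mutex);
		list_for_each_entry(local, &rxrpc_local_endpoints, link) {
			if (!local_params_match(local, srx))
				continue;
			/* Share only if the last ref hasn't yet gone. */
			if (atomic_inc_not_zero(&local->usage))
				goto out;
		}
		local = rxrpc_alloc_local(srx);
	out:
		mutex_unlock(&rxrpc_local_mutex);
		return local;
	}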
       Signed-off-by: David Howells <dhowells@redhat.com>
  13. 13 Jun 2016, 1 commit
    • rxrpc: Rename files matching ar-*.c to get rid of the "ar-" prefix · 8c3e34a4
      By David Howells
      Rename files matching net/rxrpc/ar-*.c to get rid of the "ar-" prefix.
       This will aid splitting those files by making it easier to come up with new
      names.
      
       Note that not all files are simply renamed from ar-X.c to X.c.  The
      following exceptions are made:
      
       (*) ar-call.c -> call_object.c
           ar-ack.c -> call_event.c
      
           call_object.c is going to contain the core of the call object
           handling.  Call event handling is all going to be in call_event.c.
      
       (*) ar-accept.c -> call_accept.c
      
           Incoming call handling is going to be here.
      
       (*) ar-connection.c -> conn_object.c
           ar-connevent.c -> conn_event.c
      
           The former file is going to have the basic connection object handling,
           but there will likely be some differentiation between client
           connections and service connections in additional files later.  The
           latter file will have all the connection-level event handling.
      
       (*) ar-local.c -> local_object.c
      
           This will have the local endpoint object handling code.  The local
           endpoint event handling code will later be split out into
           local_event.c.
      
       (*) ar-peer.c -> peer_object.c
      
           This will have the peer endpoint object handling code.  Peer event
           handling code will be placed in peer_event.c (for the moment, there is
           none).
      
       (*) ar-error.c -> peer_event.c
      
           This will become the peer event handling code, though for the moment
           it's actually driven from the local endpoint's perspective.
      
      Note that I haven't renamed ar-transport.c to transport_object.c as the
      intention is to delete it when the rxrpc_transport struct is excised.
      
      The only file that actually has its contents changed is net/rxrpc/Makefile.
      
      net/rxrpc/ar-internal.h will need its section marker comments updating, but
      I'll do that in a separate patch to make it easier for git to follow the
      history across the rename.  I may also want to rename ar-internal.h at some
      point - but that would mean updating all the #includes and I'd rather do
      that in a separate step.
      
       Signed-off-by: David Howells <dhowells@redhat.com>
  14. 04 Jun 2016, 1 commit
    • rxrpc: Use pr_<level> and pr_fmt, reduce object size a few KB · 9b6d5398
      By Joe Perches
      Use the more common kernel logging style and reduce object size.
      
      The logging message prefix changes from a mixture of
      "RxRPC:" and "RXRPC:" to "af_rxrpc: ".
      
      $ size net/rxrpc/built-in.o*
         text	   data	    bss	    dec	    hex	filename
        64172	   1972	   8304	  74448	  122d0	net/rxrpc/built-in.o.new
        67512	   1972	   8304	  77788	  12fdc	net/rxrpc/built-in.o.old
      
      Miscellanea:
      
      o Consolidate the ASSERT macros to use a single pr_err call with
        decimal and hexadecimal output and a stringified #OP argument
       Signed-off-by: Joe Perches <joe@perches.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 12 Apr 2016, 3 commits
  16. 04 Mar 2016, 2 commits
    • rxrpc: Keep the skb private record of the Rx header in host byte order · 0d12f8a4
      By David Howells
       Currently, the Rx packet header is copied into the sk_buff
       private data so that we can advance the pointer into the buffer,
      potentially discarding the original.  At the moment, this copy is held in
      network byte order, but this means we're doing a lot of unnecessary
      translations.
      
      The reasons it was done this way are that we need the values in network
      byte order occasionally and we can use the copy, slightly modified, as part
      of an iov array when sending an ack or an abort packet.
      
      However, it seems more reasonable on review that it would be better kept in
      host byte order and that we make up a new header when we want to send
      another packet.
      
      To this end, rename the original header struct to rxrpc_wire_header (with
      BE fields) and institute a variant called rxrpc_host_header that has host
      order fields.  Change the struct in the sk_buff private data into an
      rxrpc_host_header and translate the values when filling it in.
      
      This further allows us to keep values kept in various structures in host
      byte order rather than network byte order and allows removal of some fields
      that are byteswapped duplicates.
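       Sketched, the two variants look like this (the field layout follows
       the on-wire RX packet header; exact naming per the patch):

	struct rxrpc_wire_header {	/* as transmitted: big-endian fields */
		__be32	epoch;
		__be32	cid;		/* connection and channel ID */
		__be32	callNumber;
		__be32	seq;
		__be32	serial;
		u8	type;
		u8	flags;
		u8	userStatus;
		u8	securityIndex;
		__be16	_rsvd;
		__be16	serviceId;
	} __packed;

	struct rxrpc_host_header {	/* sk_buff private copy: host order */
		u32	epoch;
		u32	cid;
		u32	callNumber;
		u32	seq;
		u32	serial;
		u8	type;
		u8	flags;
		u8	userStatus;
		u8	securityIndex;
		u16	_rsvd;
		u16	serviceId;
	} __packed;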
       Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Rename call events to begin RXRPC_CALL_EV_ · 4c198ad1
      By David Howells
       Rename the call event names to begin with RXRPC_CALL_EV_ to distinguish them from the
      flags.
       Signed-off-by: David Howells <dhowells@redhat.com>
  17. 20 May 2011, 1 commit
  18. 17 Jun 2009, 1 commit
  19. 11 Dec 2008, 1 commit
  20. 27 Apr 2007, 2 commits
    • [AF_RXRPC]: Add an interface to the AF_RXRPC module for the AFS filesystem to use · 651350d1
      By David Howells
      Add an interface to the AF_RXRPC module so that the AFS filesystem module can
      more easily make use of the services available.  AFS still opens a socket but
      then uses the action functions in lieu of sendmsg() and registers an intercept
      functions to grab messages before they're queued on the socket Rx queue.
      
      This permits AFS (or whatever) to:
      
       (1) Avoid the overhead of using the recvmsg() call.
      
       (2) Use different keys directly on individual client calls on one socket
           rather than having to open a whole slew of sockets, one for each key it
           might want to use.
      
        (3) Avoid calling request_key() at the point of issue of a call or opening of
            a socket.  This is done instead by AFS at the point of open(), unlink() or
            other VFS operation, and the key is handed through.
      
       (4) Request the use of something other than GFP_KERNEL to allocate memory.
      
      Furthermore:
      
       (*) The socket buffer markings used by RxRPC are made available for AFS so
           that it can interpret the cooked RxRPC messages itself.
      
       (*) rxgen (un)marshalling abort codes are made available.
      
      
      The following documentation for the kernel interface is added to
      Documentation/networking/rxrpc.txt:
      
      =========================
      AF_RXRPC KERNEL INTERFACE
      =========================
      
      The AF_RXRPC module also provides an interface for use by in-kernel utilities
      such as the AFS filesystem.  This permits such a utility to:
      
       (1) Use different keys directly on individual client calls on one socket
           rather than having to open a whole slew of sockets, one for each key it
           might want to use.
      
       (2) Avoid having RxRPC call request_key() at the point of issue of a call or
           opening of a socket.  Instead the utility is responsible for requesting a
           key at the appropriate point.  AFS, for instance, would do this during VFS
           operations such as open() or unlink().  The key is then handed through
           when the call is initiated.
      
       (3) Request the use of something other than GFP_KERNEL to allocate memory.
      
       (4) Avoid the overhead of using the recvmsg() call.  RxRPC messages can be
           intercepted before they get put into the socket Rx queue and the socket
           buffers manipulated directly.
      
      To use the RxRPC facility, a kernel utility must still open an AF_RXRPC socket,
       bind an address as appropriate and listen if it's to be a server socket, but
      then it passes this to the kernel interface functions.
      
      The kernel interface functions are as follows:
      
       (*) Begin a new client call.
      
      	struct rxrpc_call *
      	rxrpc_kernel_begin_call(struct socket *sock,
      				struct sockaddr_rxrpc *srx,
      				struct key *key,
      				unsigned long user_call_ID,
      				gfp_t gfp);
      
           This allocates the infrastructure to make a new RxRPC call and assigns
           call and connection numbers.  The call will be made on the UDP port that
           the socket is bound to.  The call will go to the destination address of a
           connected client socket unless an alternative is supplied (srx is
           non-NULL).
      
           If a key is supplied then this will be used to secure the call instead of
           the key bound to the socket with the RXRPC_SECURITY_KEY sockopt.  Calls
           secured in this way will still share connections if at all possible.
      
           The user_call_ID is equivalent to that supplied to sendmsg() in the
           control data buffer.  It is entirely feasible to use this to point to a
           kernel data structure.
      
           If this function is successful, an opaque reference to the RxRPC call is
           returned.  The caller now holds a reference on this and it must be
           properly ended.
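            A hypothetical caller might use it like so (the socket, address, key,
            user_call_ID and GFP values are placeholders; ERR_PTR-style error
            returns are assumed):

	struct rxrpc_call *call;

	call = rxrpc_kernel_begin_call(afs_socket, &srx, key,
				       (unsigned long)my_op, GFP_NOFS);
	if (IS_ERR(call))
		return PTR_ERR(call);

	/* ... rxrpc_kernel_send_data() the request and consume the reply,
	 * then release our reference ...
	 */
	rxrpc_kernel_end_call(call);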
      
       (*) End a client call.
      
      	void rxrpc_kernel_end_call(struct rxrpc_call *call);
      
           This is used to end a previously begun call.  The user_call_ID is expunged
           from AF_RXRPC's knowledge and will not be seen again in association with
           the specified call.
      
       (*) Send data through a call.
      
      	int rxrpc_kernel_send_data(struct rxrpc_call *call, struct msghdr *msg,
      				   size_t len);
      
           This is used to supply either the request part of a client call or the
           reply part of a server call.  msg.msg_iovlen and msg.msg_iov specify the
           data buffers to be used.  msg_iov may not be NULL and must point
           exclusively to in-kernel virtual addresses.  msg.msg_flags may be given
           MSG_MORE if there will be subsequent data sends for this call.
      
           The msg must not specify a destination address, control data or any flags
           other than MSG_MORE.  len is the total amount of data to transmit.
      
       (*) Abort a call.
      
      	void rxrpc_kernel_abort_call(struct rxrpc_call *call, u32 abort_code);
      
           This is used to abort a call if it's still in an abortable state.  The
           abort code specified will be placed in the ABORT message sent.
      
       (*) Intercept received RxRPC messages.
      
      	typedef void (*rxrpc_interceptor_t)(struct sock *sk,
      					    unsigned long user_call_ID,
      					    struct sk_buff *skb);
      
      	void
      	rxrpc_kernel_intercept_rx_messages(struct socket *sock,
      					   rxrpc_interceptor_t interceptor);
      
           This installs an interceptor function on the specified AF_RXRPC socket.
           All messages that would otherwise wind up in the socket's Rx queue are
           then diverted to this function.  Note that care must be taken to process
           the messages in the right order to maintain DATA message sequentiality.
      
            The interceptor function itself is provided with the address of the socket
            handling the incoming message, the ID assigned by the kernel utility to
            the call, and the socket buffer containing the message.
      
           The skb->mark field indicates the type of message:
      
      	MARK				MEANING
      	===============================	=======================================
      	RXRPC_SKB_MARK_DATA		Data message
      	RXRPC_SKB_MARK_FINAL_ACK	Final ACK received for an incoming call
      	RXRPC_SKB_MARK_BUSY		Client call rejected as server busy
      	RXRPC_SKB_MARK_REMOTE_ABORT	Call aborted by peer
      	RXRPC_SKB_MARK_NET_ERROR	Network error detected
      	RXRPC_SKB_MARK_LOCAL_ERROR	Local error encountered
      	RXRPC_SKB_MARK_NEW_CALL		New incoming call awaiting acceptance
      
           The remote abort message can be probed with rxrpc_kernel_get_abort_code().
           The two error messages can be probed with rxrpc_kernel_get_error_number().
           A new call can be accepted with rxrpc_kernel_accept_call().
      
           Data messages can have their contents extracted with the usual bunch of
           socket buffer manipulation functions.  A data message can be determined to
           be the last one in a sequence with rxrpc_kernel_is_data_last().  When a
           data message has been used up, rxrpc_kernel_data_delivered() should be
            called on it.
      
            Non-data messages should be handed to rxrpc_kernel_free_skb() for disposal.
            It is possible to get extra refs on all types of message for later
           freeing, but this may pin the state of a call until the message is finally
           freed.
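            An interceptor might dispatch on the mark like so (a sketch;
            my_queue_data() and my_handle_abort() are hypothetical helpers):

	static void my_interceptor(struct sock *sk, unsigned long user_call_ID,
				   struct sk_buff *skb)
	{
		switch (skb->mark) {
		case RXRPC_SKB_MARK_DATA:
			/* Queue for later consumption, after which it gets
			 * rxrpc_kernel_data_delivered().
			 */
			my_queue_data(user_call_ID, skb);
			break;
		case RXRPC_SKB_MARK_REMOTE_ABORT:
			my_handle_abort(rxrpc_kernel_get_abort_code(skb));
			rxrpc_kernel_free_skb(skb);
			break;
		default:
			rxrpc_kernel_free_skb(skb);
			break;
		}
	}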
      
       (*) Accept an incoming call.
      
      	struct rxrpc_call *
      	rxrpc_kernel_accept_call(struct socket *sock,
      				 unsigned long user_call_ID);
      
           This is used to accept an incoming call and to assign it a call ID.  This
           function is similar to rxrpc_kernel_begin_call() and calls accepted must
           be ended in the same way.
      
           If this function is successful, an opaque reference to the RxRPC call is
           returned.  The caller now holds a reference on this and it must be
           properly ended.
      
       (*) Reject an incoming call.
      
      	int rxrpc_kernel_reject_call(struct socket *sock);
      
           This is used to reject the first incoming call on the socket's queue with
           a BUSY message.  -ENODATA is returned if there were no incoming calls.
           Other errors may be returned if the call had been aborted (-ECONNABORTED)
           or had timed out (-ETIME).
      
       (*) Record the delivery of a data message and free it.
      
      	void rxrpc_kernel_data_delivered(struct sk_buff *skb);
      
           This is used to record a data message as having been delivered and to
           update the ACK state for the call.  The socket buffer will be freed.
      
       (*) Free a message.
      
      	void rxrpc_kernel_free_skb(struct sk_buff *skb);
      
           This is used to free a non-DATA socket buffer intercepted from an AF_RXRPC
           socket.
      
       (*) Determine if a data message is the last one on a call.
      
      	bool rxrpc_kernel_is_data_last(struct sk_buff *skb);
      
           This is used to determine if a socket buffer holds the last data message
           to be received for a call (true will be returned if it does, false
           if not).
      
           The data message will be part of the reply on a client call and the
           request on an incoming call.  In the latter case there will be more
           messages, but in the former case there will not.
      
       (*) Get the abort code from an abort message.
      
      	u32 rxrpc_kernel_get_abort_code(struct sk_buff *skb);
      
           This is used to extract the abort code from a remote abort message.
      
       (*) Get the error number from a local or network error message.
      
      	int rxrpc_kernel_get_error_number(struct sk_buff *skb);
      
            This is used to extract the error number from a message indicating that
            either a local error or a network error occurred.
       Signed-off-by: David Howells <dhowells@redhat.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • [AF_RXRPC]: Provide secure RxRPC sockets for use by userspace and kernel both · 17926a79
      By David Howells
      Provide AF_RXRPC sockets that can be used to talk to AFS servers, or serve
      answers to AFS clients.  KerberosIV security is fully supported.  The patches
      and some example test programs can be found in:
      
      	http://people.redhat.com/~dhowells/rxrpc/
      
      This will eventually replace the old implementation of kernel-only RxRPC
      currently resident in net/rxrpc/.
       Signed-off-by: David Howells <dhowells@redhat.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>