1. 06 Jul 2016, 18 commits
    • rxrpc: Move data_ready peer lookup into rxrpc_find_connection() · 1291e9d1
      Committed by David Howells
      Move the peer lookup done in input.c by data_ready into
      rxrpc_find_connection().
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Prune the contents of the rxrpc_conn_proto struct · e8d70ce1
      Committed by David Howells
      Prune the contents of the rxrpc_conn_proto struct.  Most of the fields aren't
      used anymore.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Maintain an extra ref on a conn for the cache list · 001c1122
      Committed by David Howells
      Overhaul the usage count accounting for the rxrpc_connection struct to make
      it easier to implement RCU access from the data_ready handler.
      
      The problem is that currently we're using a lock to prevent the garbage
      collector from trying to clean up a connection that we're contemplating
      unidling.  We could just stick incoming packets on the connection we find,
      but we've then got a problem that we may race when dispatching a work item
      to process it as we need to give that a ref to prevent the rxrpc_connection
      struct from disappearing in the meantime.
      
      Further, incoming packets may get discarded if attached to an
      rxrpc_connection struct that is going away.  Whilst this is not a total
      disaster - the client will presumably resend - it would delay processing of
      the call.  This would affect the AFS client filesystem's service manager
      operation.
      
      To this end:
      
       (1) We now maintain an extra count on the connection usage count whilst it
           is on the connection list.  This means it is not in use when its
           refcount is 1.
      
       (2) When trying to reuse an old connection, we only increment the refcount
           if it is greater than 0.  If it is 0, we replace it in the tree with a
           new candidate connection.
      
       (3) Two connection flags are added to indicate whether or not a connection
           is in the local's client connection tree (used by sendmsg) or the
           peer's service connection tree (used by data_ready).  This makes sure
           that we don't try and remove a connection if it got replaced.
      
           The flags are tested under lock with the removal operation to prevent
           the reaper from killing the rxrpc_connection struct whilst someone
           else is trying to effect a replacement.
      
           This could probably be alleviated by using memory barriers between the
           flag set/test and the rb_tree ops.  The rb_tree op would still need to
           be under the lock, however.
      
       (4) When trying to reap an old connection, we try to flip the usage count
           from 1 to 0.  If it's not 1 at that point, then it must've come back
           to life temporarily and we ignore it.  (A sketch of the whole scheme
           follows this list.)
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Move peer lookup from call-accept to new-incoming-conn · d991b4a3
      Committed by David Howells
      Move the lookup of a peer from a call that's being accepted into the
      function that creates a new incoming connection.  This will allow us to
      avoid incrementing the peer's usage count in some cases in future.
      
      Note that I haven't bothered to integrate rxrpc_get_addr_from_skb() with
      rxrpc_extract_addr_from_skb() as I'm going to delete the former in the very
      near future.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Split service connection code out into its own file · 7877a4a4
      Committed by David Howells
      Split the service-specific connection code out into its own file.  The
      client-specific code has already been split out.  This will leave just the
      common code in the original file.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Split client connection code out into its own file · c6d2b8d7
      Committed by David Howells
      Split the client-specific connection code out into its own file.  It will
      behave somewhat differently from the service-specific connection code, so
      it makes sense to separate them.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Call channels should have separate call number spaces · a1399f8b
      Committed by David Howells
      Each channel on a connection has a separate, independent number space from
      which to allocate callNumber values.  It is entirely possible, for example,
      to have a connection with four active calls, each with call number 1.
      
      Note that the callNumber values for any particular channel don't have to
      start at 1, but they are supposed to increment monotonically for that
      channel from a client's perspective and may not be reused once the call
      number is transmitted (until the epoch cycles all the way back round).
      
      Currently, however, call numbers are allocated on a per-connection basis
      and, further, are held in an rb-tree.  The rb-tree is redundant as the four
      channel pointers in the rxrpc_connection struct are entirely capable of
      pointing to all the calls currently in progress on a connection.
      
      To this end, make the following changes:
      
       (1) Handle call number allocation independently per channel.
      
       (2) Get rid of the conn->calls rb-tree.  This is overkill as a connection
           may have a maximum of four calls in progress at any one time.  Use the
           pointers in the channels[] array instead, indexed by the channel
           number from the packet.
      
       (3) For each channel, save the result of the last call that was in
           progress on that channel in conn->channels[] so that the final ACK or
           ABORT packet can be replayed if necessary.  Any call earlier than that
           is just ignored.  If we've seen the next call number in a packet, the
           last one is most definitely defunct.
      
       (4) When generating a RESPONSE packet for a connection, the call number
           counter for each channel must be included in it.
      
       (5) When parsing a RESPONSE packet for a connection, the call number
           counters contained therein should be used to set the minimum expected
           call numbers on each channel.  (A sketch of the per-channel
           arrangement follows this list.)
      
      To do in future commits:
      
       (1) Replay terminal packets based on the last call stored in
           conn->channels[].
      
       (2) Connections should be retired before the callNumber space on any
           channel runs out.
      
       (3) A server is expected to disregard or reject any new incoming call that
           has a call number less than the current call number counter.  The call
           number counter for that channel must be advanced to the new call
           number.
      
           Note that the server cannot just require that the next call it sees
           on a channel be exactly the call number counter + 1, because the
           following scenario would then cause a problem: the client transmits
           a packet to initiate a connection; the network goes out; the server
           sends an ACK (which gets lost); the client sends an ABORT (which
           also gets lost); the network then reconnects; the client reuses the
           call number for the next call (it doesn't know the server already
           saw the call number), but the server thinks it already has the first
           packet of this call (it doesn't know that the client doesn't know
           that it saw the call number the first time).
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Access socket accept queue under right lock · 30b515f4
      Committed by David Howells
      The socket's accept queue (socket->acceptq) should be accessed under
      socket->call_lock, not under the connection lock.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Add RCU destruction for connections and calls · dee46364
      Committed by David Howells
      Add RCU destruction for connections and calls as the RCU lookup from the
      transport socket data_ready handler is going to come along shortly.
      
      Whilst we're at it, move the cleanup workqueue flushing and RCU barrierage
      into the destruction code for the objects that need it (locals and
      connections) and add the extra RCU barrier required for connection cleanup.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Release a call's connection ref on call disconnection · e653cfe4
      Committed by David Howells
      When a call is disconnected, clear the call's pointer to the connection and
      release the associated ref on that connection.  This means that the call no
      longer pins the connection and the connection can be discarded even before
      the call is.
      
      As the code currently stands, the call struct is effectively pinned by
      userspace until userspace has enacted a recvmsg() to retrieve the final
      call state as sk_buffs on the receive queue pin the call to which they're
      related because:
      
       (1) The rxrpc_call struct contains the userspace ID that recvmsg() has to
           include in the control message buffer to indicate which call is being
           referred to.  This ID must remain valid until the terminal packet is
           completely read and must be invalidated immediately at that point as
           userspace is entitled to immediately reuse it.
      
       (2) The final ACK to the reply to a client call isn't sent until the last
           data packet is entirely read (it's probably worth altering this in
           future to send the ACK as soon as all the data has been received).
      
      
      This change requires a bit of rearrangement to make sure that the call
      isn't going to try and access the connection again after protocol
      completion:
      
       (1) Delete the error link earlier when we're releasing the call.  Possibly
           network errors should be distributed via connections at the cost of
           adding in an access to the rxrpc_connection struct.
      
       (2) Remove the call from the connection's call tree before disconnecting
           the call.  The call tree needs to be removed anyway and incoming
           packets delivered by channel pointer instead.
      
       (3) The release call event should be considered last after all other
           events have been processed so that we don't need access to the
           connection again.
      
       (4) Move the channel_lock taking from rxrpc_release_call() to
           rxrpc_disconnect_call() where it will be required in future.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Fix handling of connection failure in client call creation · d1e858c5
      Committed by David Howells
      If rxrpc_connect_call() fails during the creation of a client connection,
      there are two bugs that we can hit that need fixing:
      
       (1) The call state should be moved to RXRPC_CALL_DEAD before the call
           cleanup phase is invoked.  If not, this can cause an assertion failure
           later.
      
       (2) call->link should be reinitialised after being deleted in
           rxrpc_new_client_call() - which otherwise leads to a failure later
           when the call cleanup attempts to delete the link again.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Move usage count getting into rxrpc_queue_conn() · 2c4579e4
      Committed by David Howells
      Rather than calling rxrpc_get_connection() manually before calling
      rxrpc_queue_conn(), do it inside the queue wrapper.
      
      This allows us to do some important fixes:
      
       (1) If the usage count is 0, do nothing.  This prevents connections from
           being reanimated once they're dead.
      
       (2) If rxrpc_queue_work() fails because the work item is already queued,
           retract the usage count increment which would otherwise be lost.
      
       (3) Don't take a ref on the connection in the work function.  By passing
           the ref through the work item, this is unnecessary.  Doing it in the
           work function is too late anyway.  Previously, connection-directed
           packets held a ref on the connection, but that's not really the best
           idea.
      
      And another useful change:
      
       (*) We don't need to take a refcount on the connection in the data_ready
           handler unless we invoke the connection's work item.  We're using RCU
           there, so that's otherwise redundant.  (A sketch of the queue wrapper
           follows.)
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Check that the client conns cache is empty before module removal · eb9b9d22
      Committed by David Howells
      Check that the client conns cache is empty before module removal and bug if
      not, listing any offending connections that are still present.  Unfortunately,
      if there are connections still around, then the transport socket is still
      unexpectedly open and active, so we can't just unallocate the connections.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Turn connection #defines into enums and put outside struct def · bba304db
      Committed by David Howells
      Turn the connection event and state #define lists into enums and move
      outside of the struct definition.
      
      Whilst we're at it, change _SERVER to _SERVICE in those identifiers and add
      EV_ into the event name to distinguish them from flags and states.
      
      Also add a symbol indicating the number of states and use that in the state
      text array.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Provide queuing helper functions · 5acbee46
      Committed by David Howells
      Provide queueing helper functions so that the queueing of local and
      connection objects can be fixed later.
      
      The issue is that a ref on the object needs to be passed to the work queue,
      but the act of queueing the object may fail because the object is already
      queued.  Testing the queuedness of an object beforehand doesn't work
      because there can be a race with someone else trying to queue it.  What
      will have to be done is to adjust the refcount depending on the result of
      the queue operation.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Avoid using stack memory in SG lists in rxkad · a263629d
      Committed by Herbert Xu
      rxkad uses stack memory in SG lists which would not work if stacks were
      allocated from vmalloc memory.  In fact, in most cases this isn't even
      necessary as the stack memory ends up getting copied over to kmalloc
      memory.
      
      This patch eliminates all the unnecessary stack memory uses by supplying
      the final destination directly to the crypto API.  In two instances where
      a temporary buffer is actually needed, we also switch to using a scratch
      area in the rxrpc_call struct (only one DATA packet is secured or
      verified at a time).
      
      Finally there is no need to split a split-page buffer into two SG entries
      so code dealing with that has been removed.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Check the source of a packet to a client conn · 689f4c64
      Committed by David Howells
      When looking up a client connection to which to route a packet, we need to
      check that the packet came from the correct source so that a peer can't try
      to muck around with another peer's connection.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Fix some sparse errors · 88b99d0b
      Committed by David Howells
      Fix the following sparse errors:
      
      ../net/rxrpc/conn_object.c:77:17: warning: incorrect type in assignment (different base types)
      ../net/rxrpc/conn_object.c:77:17:    expected restricted __be32 [usertype] call_id
      ../net/rxrpc/conn_object.c:77:17:    got unsigned int [unsigned] [usertype] call_id
      ../net/rxrpc/conn_object.c:84:21: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:86:26: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:357:15: warning: incorrect type in assignment (different base types)
      ../net/rxrpc/conn_object.c:357:15:    expected restricted __be32 [usertype] epoch
      ../net/rxrpc/conn_object.c:357:15:    got unsigned int [unsigned] [usertype] epoch
      ../net/rxrpc/conn_object.c:369:21: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:371:26: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:411:21: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:413:26: warning: restricted __be32 degrades to integer
      Signed-off-by: David Howells <dhowells@redhat.com>
  2. 01 Jul 2016, 1 commit
    • rxrpc: Fix processing of authenticated/encrypted jumbo packets · ac5d2683
      Committed by David Howells
      When a jumbo packet is being split up and processed, the crypto checksum
      for each split-out packet is in the jumbo header and needs placing in the
      reconstructed packet header.
      
      When the code was changed to keep the stored copy of the packet header in
      host byte order, this reconstruction was missed.
      
      Found with sparse with CF=-D__CHECK_ENDIAN__:
      
          ../net/rxrpc/input.c:479:33: warning: incorrect type in assignment (different base types)
          ../net/rxrpc/input.c:479:33:    expected unsigned short [unsigned] [usertype] _rsvd
          ../net/rxrpc/input.c:479:33:    got restricted __be16 [addressable] [usertype] _rsvd
      
      Fixes: 0d12f8a4 ("rxrpc: Keep the skb private record of the Rx header in host byte order")
      Signed-off-by: David Howells <dhowells@redhat.com>
  3. 27 Jun 2016, 1 commit
  4. 26 Jun 2016, 4 commits
    • net_sched: generalize bulk dequeue · 4d202a0d
      Committed by Eric Dumazet
      When qdisc bulk dequeue was added in linux-3.18 (commit
      5772e9a3 "qdisc: bulk dequeue support for qdiscs
      with TCQ_F_ONETXQUEUE"), it was constrained to some
      specific qdiscs.
      
      With some extra care, we can extend this to all qdiscs,
      so that typical traffic shaping solutions can benefit from
      small batches (8 packets in this patch).
      
      For example, HTB is often used on a multi-queue device, and bonding/team
      devices are themselves multi-queue...
      
      Idea is to bulk-dequeue packets mapping to the same transmit queue.
      
      This brings a 35 to 80% performance increase in an HTB setup under
      pressure on a bonding setup:
      
      1) NUMA node contention :   610,000 pps -> 1,110,000 pps
      2) No node contention   : 1,380,000 pps -> 1,930,000 pps
      
      Now we should work to add batches on the enqueue() side ;)
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: John Fastabend <john.r.fastabend@intel.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: Florian Westphal <fw@strlen.de>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net_sched: sch_htb: export class backlog in dumps · 338ed9b4
      Committed by Eric Dumazet
      We already get the child qdisc qlen; we can also get its backlog so that
      class dumps can report it.
      
      Also replace qstats with a single drop counter, but move it to a separate
      cache line so that drops do not dirty useful cache lines.
      
      Tested:
      
      $ tc -s cl sh dev eth0
      class htb 1:1 root leaf 3: prio 0 rate 1Gbit ceil 1Gbit burst 500000b cburst 500000b
       Sent 2183346912 bytes 9021815 pkt (dropped 2340774, overlimits 0 requeues 0)
       rate 1001Mbit 517543pps backlog 120758b 499p requeues 0
       lended: 9021770 borrowed: 0 giants: 0
       tokens: 9 ctokens: 9
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net_sched: fq_codel: cache skb->truesize into skb->cb · 008830bc
      Committed by Eric Dumazet
      Now that we defer skb drops, it makes sense to keep a copy of
      skb->truesize in struct codel_skb_cb to avoid one cache-line miss per
      dropped skb in fq_codel_drop(), reducing latencies a bit further.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net_sched: drop packets after root qdisc lock is released · 520ac30f
      Committed by Eric Dumazet
      Qdisc performance suffers when packets are dropped at enqueue() time
      because drops (kfree_skb()) are done while the qdisc lock is held,
      delaying a dequeue() draining the queue.
      
      Nominal throughput can be reduced by 50% when this happens, just when we
      would like the dequeue() to proceed as fast as possible.
      
      Even FQ is vulnerable to this problem, while one of FQ goals was
      to provide some flow isolation.
      
      This patch adds a 'struct sk_buff **to_free' parameter to all
      qdisc->enqueue() methods and to the qdisc_drop() helper.
      
      I measured a performance increase of up to 12%, but this patch is a
      prereq so that future batches in enqueue() can fly.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 25 Jun 2016, 2 commits
  6. 23 Jun 2016, 2 commits
    • can: only call can_stat_update with procfs · 2781ff5c
      Committed by Arnd Bergmann
      The change to leave out procfs support in CAN when CONFIG_PROC_FS
      is not set was incomplete and leads to a build error:
      
      net/built-in.o: In function `can_init':
      :(.init.text+0x9858): undefined reference to `can_stat_update'
      ERROR: "can_stat_update" [net/can/can.ko] undefined!
      
      This tries a better approach, encapsulating all of the calls
      within IS_ENABLED(), so we also leave out the timer function
      from the object file.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Fixes: a20fadf8 ("can: build proc support only if CONFIG_PROC_FS is activated")
      Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
    • openvswitch: Add packet len info to upcall. · b95e5928
      Committed by William Tu
      The commit f2a4d086 ("openvswitch: Add packet truncation support.")
      introduces packet truncation before sending packets to the userspace
      upcall receiver.  This patch passes up the skb->len before truncation so
      that the upcall receiver knows the original packet size.  Potentially
      this will be used by sFlow, where OVS translates an sFlow config
      header=N into a sample action, truncating the packet to N bytes in the
      kernel datapath.  Thus, only N bytes instead of the full packet size are
      copied from kernel to userspace, saving kernel-to-userspace bandwidth.
      Signed-off-by: William Tu <u9012063@gmail.com>
      Cc: Pravin Shelar <pshelar@nicira.com>
      Acked-by: Pravin B Shelar <pshelar@ovn.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 22 Jun 2016, 12 commits
    • rxrpc: Kill off the rxrpc_transport struct · aa390bbe
      Committed by David Howells
      The rxrpc_transport struct is now redundant, given that the rxrpc_peer
      struct is now per peer port rather than per peer host, so get rid of it.
      
      Service connection lists are transferred to the rxrpc_peer struct, as is
      the conn_lock.  Previous patches moved the client connection handling out
      of the rxrpc_transport struct and discarded the connection bundling code.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Kill the client connection bundle concept · 999b69f8
      Committed by David Howells
      Kill off the concept of maintaining a bundle of connections to a
      particular target service in order to raise the number of available call
      slots beyond the four that a single connection provides (there are four
      call slots per connection).
      
      This will make cleaning up the connection handling code easier and
      facilitate removal of the rxrpc_transport struct.  Bundling can be
      reintroduced later if necessary.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Provide more refcount helper functions · 5627cc8b
      Committed by David Howells
      Provide refcount helper functions for connections so that the code doesn't
      touch local or connection usage counts directly.
      
      Also make it such that local and peer put functions can take a NULL
      pointer.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Make rxrpc_send_packet() take a connection not a transport · 985a5c82
      Committed by David Howells
      Make rxrpc_send_packet() take a connection not a transport as part of the
      phasing out of the rxrpc_transport struct.
      
      Whilst we're at it, rename the function to rxrpc_send_data_packet() to
      differentiate it from the other packet sending functions.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Calls displayed in /proc may in future lack a connection · f4e7da8c
      Committed by David Howells
      Allocated rxrpc calls displayed in /proc/net/rxrpc_calls may in future be
      on the proc list before they're connected or after they've been
      disconnected - in which case they may not have a pointer to a connection
      struct that can be used to get data from there.
      
      Deal with this by using stuff from the call struct in preference where
      possible and printing "no_connection" rather than a peer address if no
      connection is assigned.
      
      This change also has the added bonus that the service ID is now taken
      from the call rather than the connection, which will allow per-call
      service upgrades to be shown - something required for AuriStor server
      compatibility.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Validate the net address given to rxrpc_kernel_begin_call() · f4552c2d
      Committed by David Howells
      Validate the net address given to rxrpc_kernel_begin_call() before using
      it.
      
      Whilst this should be mostly unnecessary for in-kernel users, it does clear
      the tail of the address struct in case we want to hash or compare the whole
      thing.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Use IDR to allocate client conn IDs on a machine-wide basis · 4a3388c8
      Committed by David Howells
      Use the IDR facility to allocate client connection IDs on a machine-wide
      basis so that each client connection has a unique identifier.  When the
      connection ID space wraps, we advance the epoch by 1, thereby effectively
      having a 62-bit ID space.  The IDR facility is then used to look up client
      connections during incoming packet routing instead of using an rbtree
      rooted on the transport.
      
      This change allows for the removal of the transport in the future and also
      means that client connections can be looked up directly in the data-ready
      handler by connection ID.
      
      The ID management code is placed in a new file, conn-client.c, to which all
      the client connection-specific code will eventually move.
      
      Note that the IDR tree gets very expensive on memory if the connection IDs
      are widely scattered throughout the number space, so we shall need to
      retire connections that have, say, an ID more than four times the maximum
      number of client conns away from the current allocation point to try and
      keep the IDs concentrated.  We will also need to retire connections from an
      old epoch.
      
      Also note that, for the moment, a pointer to the transport has to be passed
      through into the ID allocation function so that we can take a BH lock to
      prevent a locking issue against in-BH lookup of client connections.  This
      will go away later when RCU is used for server connections also.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: rxrpc_connection_lock shouldn't be a BH lock, but conn_lock is · b3f57504
      Committed by David Howells
      rxrpc_connection_lock shouldn't be accessed as a BH-excluding lock.  It's
      only accessed in a few places and none of those are in BH context.
      
      rxrpc_transport::conn_lock, however, *is* a BH-excluding lock and should
      be consistently accessed as one.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Pass sk_buff * rather than rxrpc_host_header * to functions · 42886ffe
      Committed by David Howells
      Pass a pointer to struct sk_buff rather than struct rxrpc_host_header to
      functions so that they can in the future get at transport protocol parameters
      rather than just RxRPC parameters.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Fix exclusive connection handling · cc8feb8e
      Committed by David Howells
      "Exclusive connections" are meant to be used for a single client call and
      then scrapped.  The idea is to limit the use of the negotiated security
      context.  The current code, however, isn't doing this: it is instead
      restricting the socket to a single virtual connection and doing all the
      calls over that.
      
      This is changed such that the socket no longer maintains a special virtual
      connection over which it will do all the calls, but rather gets a new one
      each time a new exclusive call is made.
      
      Further, using a socket option for this is a poor choice.  It should be
      done on sendmsg with a control message marker instead so that calls can be
      marked exclusive individually.  To that end, add RXRPC_EXCLUSIVE_CALL
      which, if passed to sendmsg() as a control message element, will cause
      the call to be done on a single-use connection.
      
      The socket option (RXRPC_EXCLUSIVE_CONNECTION) still exists and, if set,
      will override any lack of RXRPC_EXCLUSIVE_CALL being specified so that
      programs using the setsockopt() will appear to work the same.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Replace conn->trans->{local,peer} with conn->params.{local,peer} · 85f32278
      Committed by David Howells
      Replace accesses of conn->trans->{local,peer} with
      conn->params.{local,peer} thus making it easier for a future commit to
      remove the rxrpc_transport struct.
      
      This also reduces the number of memory accesses involved.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Use structs to hold connection params and protocol info · 19ffa01c
      Committed by David Howells
      Define and use a structure to hold connection parameters.  This makes it
      easier to pass multiple connection parameters around.
      
      Define and use a structure to hold protocol information used to hash a
      connection for lookup on an incoming packet.  Most of these fields will
      be disposed of eventually, including the duplicate local pointer.
      
      Whilst we're at it, rename "proto" to "family" when referring to a
      protocol family.
      Signed-off-by: David Howells <dhowells@redhat.com>