1. 26 May 2017, 1 commit
    • rxrpc: Support network namespacing · 2baec2c3
      David Howells authored
      Support network namespacing in AF_RXRPC with the following changes:
      
       (1) All the local endpoint, peer and call lists, locks, counters, etc. are
           moved into the per-namespace record.
      
       (2) All the connection tracking is moved into the per-namespace record
           with the exception of the client connection ID tree, which is kept
           global so that connection IDs are kept unique per-machine.
      
       (3) Each namespace gets its own epoch.  This allows each network namespace
           to pretend to be a separate client machine.
      
       (4) The /proc/net/rxrpc_xxx files are now called /proc/net/rxrpc/xxx and
           the contents reflect the namespace.
      
      fs/afs/ should be okay with this patch as, for the moment, it explicitly
      requires the current net namespace to be init_net before a mount is allowed
      to proceed.  It will, however, need updating so that cells, IP addresses and
      DNS records are per-namespace also.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2baec2c3
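
      As a rough sketch of the mechanism described above (not the exact layout of
      this patch), per-namespace state in the kernel is normally carried by a
      pernet_operations registration; the rxrpc_net fields shown here are
      assumptions for illustration:

        #include <linux/random.h>
        #include <net/net_namespace.h>
        #include <net/netns/generic.h>

        /* Hypothetical per-netns record; the patch moves the endpoint, peer,
         * call and connection bookkeeping into a structure of this kind.
         */
        struct rxrpc_net {
                u32                     epoch;          /* per-namespace epoch */
                struct list_head        calls;          /* per-namespace call list */
                spinlock_t              call_lock;
        };

        static unsigned int rxrpc_net_id __read_mostly;

        static __net_init int rxrpc_init_net(struct net *net)
        {
                struct rxrpc_net *rxnet = net_generic(net, rxrpc_net_id);

                get_random_bytes(&rxnet->epoch, sizeof(rxnet->epoch));
                INIT_LIST_HEAD(&rxnet->calls);
                spin_lock_init(&rxnet->call_lock);
                return 0;
        }

        static struct pernet_operations rxrpc_net_ops = {
                .init   = rxrpc_init_net,
                .id     = &rxrpc_net_id,
                .size   = sizeof(struct rxrpc_net),
        };

        /* Registered once from the module init path:
         *      register_pernet_subsys(&rxrpc_net_ops);
         */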
  2. 06 Apr 2017, 1 commit
  3. 02 Mar 2017, 1 commit
  4. 05 Jan 2017, 1 commit
    • rxrpc: Fix handling of enums-to-string translation in tracing · b54a134a
      David Howells authored
      Fix the way enum values are translated into strings in AF_RXRPC
      tracepoints.  The problem with just doing a lookup in a normal flat array
      of strings or chars is that external tracing infrastructure can't find it.
      Rather, TRACE_DEFINE_ENUM must be used.
      
      Also sort the enums and string tables to make it easier to keep them in
      order so that a future patch to __print_symbolic() can be optimised to try
      a direct lookup into the table first before iterating over it.
      
      A couple of _proto() macro calls are removed because they referred to tables
      that got moved to the tracing infrastructure.  The relevant data can be
      found by way of tracing.
      Signed-off-by: David Howells <dhowells@redhat.com>
      b54a134a
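
      For reference, the pattern being adopted is the standard tracing one: export
      each enum value with TRACE_DEFINE_ENUM() so that the strings given to
      __print_symbolic() can be resolved by external tools.  A minimal sketch with
      made-up names (not the actual rxrpc enums):

        /* Hypothetical example; the enum and strings stand in for the rxrpc ones. */
        enum foo_call_state { FOO_CALL_WAITING, FOO_CALL_COMPLETE };

        TRACE_DEFINE_ENUM(FOO_CALL_WAITING);
        TRACE_DEFINE_ENUM(FOO_CALL_COMPLETE);

        /* ...and inside the tracepoint definition: */
        TP_printk("call=%08x state=%s",
                  __entry->call,
                  __print_symbolic(__entry->state,
                                   { FOO_CALL_WAITING,  "Waiting" },
                                   { FOO_CALL_COMPLETE, "Complete" }))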
  5. 15 Dec 2016, 1 commit
  6. 30 Sep 2016, 2 commits
    • rxrpc: When activating client conn channels, do state check inside lock · 2629c7fa
      David Howells authored
      In rxrpc_activate_channels(), the connection cache state is checked outside
      of the lock, which means it can change whilst we're waking calls up,
      thereby changing whether or not we're allowed to wake calls up.
      
      Fix this by moving the check inside the locked region.  The check to see if
      all the channels are currently busy can stay outside of the locked region.
      
      Whilst we're at it:
      
       (1) Split the locked section out into its own function so that we can call
           it from other places in a later patch.
      
       (2) Determine the mask of channels dependent on the state as we're going
           to add another state in a later patch that will restrict the number of
           simultaneous calls to 1 on a connection.
      Signed-off-by: David Howells <dhowells@redhat.com>
      2629c7fa
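
      In outline, the change looks something like the sketch below: the cheap
      "all channels busy" test stays lockless, while the cache-state test moves
      under the channel lock next to the wake-ups.  Field and helper names are
      approximations rather than the exact upstream code.

        /* Before (simplified): the cache state could change between this test
         * and the wake-ups performed under the lock.
         */
        if (conn->cache_state != RXRPC_CONN_CLIENT_ACTIVE)
                return;
        spin_lock(&conn->channel_lock);
        /* ... wake waiting calls onto free channels ... */
        spin_unlock(&conn->channel_lock);

        /* After (simplified): */
        if (all_channels_busy(conn))            /* hypothetical lockless check */
                return;
        spin_lock(&conn->channel_lock);
        if (conn->cache_state == RXRPC_CONN_CLIENT_ACTIVE)
                rxrpc_activate_channels_locked(conn);   /* the split-out section */
        spin_unlock(&conn->channel_lock);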
    • rxrpc: Fix exclusive client connections · 8732db67
      David Howells authored
      Exclusive connections are currently reusable (which they shouldn't be)
      because rxrpc_alloc_client_connection() checks the exclusive flag in the
      rxrpc_connection struct before it's initialised from the function
      parameters.  This means that the DONT_REUSE flag doesn't get set.
      
      Fix this by checking the function parameters for the exclusive flag.
      Signed-off-by: David Howells <dhowells@redhat.com>
      8732db67
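
      Roughly, the corrected allocation path tests the caller-supplied
      rxrpc_conn_parameters rather than the not-yet-initialised copy inside the
      new connection; a simplified, hedged sketch:

        static struct rxrpc_connection *
        rxrpc_alloc_client_connection(struct rxrpc_conn_parameters *cp, gfp_t gfp)
        {
                struct rxrpc_connection *conn;

                conn = rxrpc_alloc_connection(gfp);
                if (!conn)
                        return ERR_PTR(-ENOMEM);

                conn->params = *cp;

                /* Test the caller's parameters; the buggy version looked at the
                 * exclusive flag in conn before it had been copied in, so
                 * DONT_REUSE never got set.
                 */
                if (cp->exclusive)
                        __set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags);

                return conn;
        }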
  7. 17 Sep 2016, 3 commits
    • rxrpc: Add connection tracepoint and client conn state tracepoint · 363deeab
      David Howells authored
      Add a pair of tracepoints, one to track rxrpc_connection struct ref
      counting and the other to track the client connection cache state.
      Signed-off-by: David Howells <dhowells@redhat.com>
      363deeab
    • rxrpc: Fix unexposed client conn release · 78883793
      David Howells authored
      If the last call on a client connection is released after the connection has
      had a bunch of calls allocated but before any DATA packets are sent (so
      that it's not yet marked RXRPC_CONN_EXPOSED), an assertion failure occurs in
      rxrpc_disconnect_client_call():
      
      	af_rxrpc: Assertion failed - 1(0x1) >= 2(0x2) is false
      	------------[ cut here ]------------
      	kernel BUG at ../net/rxrpc/conn_client.c:753!
      
      This is because it's expecting the conn to have been exposed and to have 2
      or more refs - but this isn't necessarily the case.
      
      Simply remove the assertion.  This allows the conn to be moved into the
      inactive state and deleted if it isn't resurrected before the final put is
      called.
      Signed-off-by: David Howells <dhowells@redhat.com>
      78883793
    • rxrpc: Fix the putting of client connections · 66d58af7
      David Howells authored
      In rxrpc_put_one_client_conn(), if a connection has RXRPC_CONN_COUNTED set
      on it, then it's accounted for in rxrpc_nr_client_conns and may be on
      various lists - and this is cleaned up correctly.
      
      However, if the connection doesn't have RXRPC_CONN_COUNTED set on it, then
      the put routine returns rather than just skipping the extra bit of cleanup.
      
      Fix this by making the extra bit of clean up conditional instead and always
      killing off the connection.
      
      This manifests itself as connections with a zero usage count hanging around
      in /proc/net/rxrpc_conns: the connection was allocated but then discarded
      due to a race with another process that set up a parallel connection, which
      was then shared instead.
      Signed-off-by: David Howells <dhowells@redhat.com>
      66d58af7
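
      The resulting control flow is roughly as follows (the helper names are
      hypothetical; RXRPC_CONN_COUNTED is the flag mentioned above):

        static void rxrpc_put_one_client_conn(struct rxrpc_connection *conn)
        {
                /* Only accounted conns are on the cache lists and included in
                 * rxrpc_nr_client_conns, so only they need this bookkeeping.
                 */
                if (test_bit(RXRPC_CONN_COUNTED, &conn->flags))
                        rxrpc_drop_from_cache_lists(conn);      /* hypothetical */

                /* The old code returned early in the !COUNTED case, leaving a
                 * zero-usage conn visible in /proc/net/rxrpc_conns; now every
                 * path reaches the destruction step.
                 */
                rxrpc_kill_connection(conn);                    /* hypothetical */
        }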
  8. 07 Sep 2016, 1 commit
  9. 05 Sep 2016, 1 commit
  10. 04 Sep 2016, 1 commit
  11. 30 Aug 2016, 2 commits
  12. 24 Aug 2016, 3 commits
    • rxrpc: Improve management and caching of client connection objects · 45025bce
      David Howells authored
      Improve the management and caching of client rxrpc connection objects.
      From this point, client connections will be managed separately from service
      connections because AF_RXRPC controls the creation and re-use of client
      connections but doesn't have that luxury with service connections.
      
      Further, there will be limits on the numbers of client connections that may
      be live on a machine.  No direct restriction will be placed on the number
      of client calls, excepting that each client connection can support a
      maximum of four concurrent calls.
      
      Note that, for a number of reasons, we don't want to simply discard a
      client connection as soon as the last call is apparently finished:
      
       (1) Security is negotiated per-connection and the context is then shared
           between all calls on that connection.  The context can be negotiated
           again if the connection lapses, but that involves holding up calls
           whilst at least two packets are exchanged and various crypto bits are
           performed - so we'd ideally like to cache it for a little while at
           least.
      
       (2) If a packet goes astray, we will need to retransmit a final ACK or
           ABORT packet.  To make this work, we need to keep around the
           connection details for a little while.
      
       (3) The locally held structures represent some amount of setup time, to be
           weighed against their occupation of memory when idle.
      
      
      To this end, the client connection cache is managed by a state machine on
      each connection.  There are five states:
      
       (1) INACTIVE - The connection is not held in any list and may not have
           been exposed to the world.  If it has been previously exposed, it was
           discarded from the idle list after expiring.
      
       (2) WAITING - The connection is waiting for the number of client conns to
           drop below the maximum capacity.  Calls may be in progress upon it
           from when it was active and got culled.
      
           The connection is on the rxrpc_waiting_client_conns list which is kept
           in to-be-granted order.  Culled conns with waiters go to the back of
           the queue just like new conns.
      
       (3) ACTIVE - The connection has at least one call in progress upon it, it
           may freely grant available channels to new calls and calls may be
           waiting on it for channels to become available.
      
           The connection is on the rxrpc_active_client_conns list which is kept
           in activation order for culling purposes.
      
       (4) CULLED - The connection got summarily culled to try and free up
           capacity.  Calls currently in progress on the connection are allowed
           to continue, but new calls will have to wait.  There can be no waiters
           in this state - the conn would have to go to the WAITING state
           instead.
      
       (5) IDLE - The connection has no calls in progress upon it and must have
           been exposed to the world (ie. the EXPOSED flag must be set).  When it
           expires, the EXPOSED flag is cleared and the connection transitions to
           the INACTIVE state.
      
           The connection is on the rxrpc_idle_client_conns list which is kept in
           order of how soon they'll expire.
      
      A connection in the ACTIVE or CULLED state must have at least one active
      call upon it; if in the WAITING state it may have active calls upon it;
      other states may not have active calls.
      
      As long as a connection remains active and doesn't get culled, it may
      continue to process calls - even if there are connections on the wait
      queue.  This simplifies things a bit and reduces the amount of checking we
      need to do.
      
      
      There are a couple of flags of relevance to the cache:
      
       (1) EXPOSED - The connection ID got exposed to the world.  If this flag is
           set, an extra ref is added to the connection preventing it from being
           reaped when it has no calls outstanding.  This flag is cleared and the
           ref dropped when a conn is discarded from the idle list.
      
       (2) DONT_REUSE - The connection should be discarded as soon as possible and
           should not be reused.
      
      
      This commit also provides a number of new settings:
      
       (*) /proc/net/rxrpc/max_client_conns
      
           The maximum number of live client connections.  Above this number, new
           connections get added to the wait list and must wait for an active
           conn to be culled.  Culled connections can be reused, but they will go
           to the back of the wait list and have to wait.
      
       (*) /proc/net/rxrpc/reap_client_conns
      
           If the number of desired connections exceeds the maximum above, the
           active connection list will be culled until there are only this many
           left in it.
      
       (*) /proc/net/rxrpc/idle_conn_expiry
      
           The normal expiry time for a client connection, provided there are
           fewer than reap_client_conns of them around.
      
       (*) /proc/net/rxrpc/idle_conn_fast_expiry
      
           The expedited expiry time, used when there are more than
           reap_client_conns of them around.
      
      
      Note that I combined the Tx wait queue with the channel grant wait queue to
      save space as only one of these should be in use at once.
      
      Note also that, for the moment, the service connection cache still uses the
      old connection management code.
      Signed-off-by: David Howells <dhowells@redhat.com>
      45025bce
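
      Read as code, the cache state machine above is an enum recorded per
      connection; a sketch with names modelled on the description (they may not
      match the upstream identifiers exactly):

        enum rxrpc_conn_cache_state {
                RXRPC_CONN_CLIENT_INACTIVE,     /* not on any list, possibly unexposed */
                RXRPC_CONN_CLIENT_WAITING,      /* on waiting list, awaiting capacity */
                RXRPC_CONN_CLIENT_ACTIVE,       /* on active list, granting channels */
                RXRPC_CONN_CLIENT_CULLED,       /* culled; existing calls may finish */
                RXRPC_CONN_CLIENT_IDLE,         /* on idle list, awaiting expiry */
        };

      The tunables listed above then drive the transitions: max_client_conns
      gates WAITING -> ACTIVE, reap_client_conns triggers ACTIVE -> CULLED, and
      the two expiry times govern IDLE -> INACTIVE.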
    • rxrpc: Dup the main conn list for the proc interface · 4d028b2c
      David Howells authored
      The main connection list is used for two independent purposes: primarily it
      is used to find connections to reap and secondarily it is used to list
      connections in procfs.
      
      Split the procfs list out from the reap list.  This allows us to stop using
      the reap list for client connections when they acquire a separate
      management strategy from service connections.
      
      The client connections will not be on a single management list, and sometimes
      won't be on a management list at all.  This doesn't leave them floating,
      however, as they will also be on an rb-tree rooted on the socket so that the
      socket can find them to dispatch calls.
      Signed-off-by: David Howells <dhowells@redhat.com>
      4d028b2c
    • rxrpc: Make /proc/net/rxrpc_calls safer · df5d8bf7
      David Howells authored
      Make /proc/net/rxrpc_calls safer by stashing a copy of the peer pointer in
      the rxrpc_call struct and checking in the show routine that the peer
      pointer, the socket pointer and the local pointer obtained from the socket
      pointer aren't NULL before we use them.
      Signed-off-by: David Howells <dhowells@redhat.com>
      df5d8bf7
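
      In the seq_file show routine this amounts to a set of NULL checks before
      anything is dereferenced; a hedged sketch (the field names follow the rxrpc
      structs but are not guaranteed to be exact):

        static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
        {
                struct rxrpc_call *call = list_entry(v, struct rxrpc_call, link);
                struct rxrpc_sock *rx = call->socket;
                struct rxrpc_peer *peer = call->peer;   /* copy stashed by this patch */
                struct rxrpc_local *local = rx ? rx->local : NULL;

                if (!rx || !peer || !local)
                        return 0;       /* print nothing rather than oops */

                /* ... seq_printf() the call details as before ... */
                return 0;
        }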
  13. 23 Aug 2016, 2 commits
    • rxrpc: Drop channel number field from rxrpc_call struct · 01a90a45
      David Howells authored
      Drop the channel number (channel) field from the rxrpc_call struct to
      reduce the size of the call struct.  The field is redundant: if the call is
      attached to a connection, the channel can be recovered by AND'ing the call's
      cid with RXRPC_CHANNELMASK.
      Signed-off-by: David Howells <dhowells@redhat.com>
      01a90a45
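
      Recovering the channel where it is needed then reduces to masking the
      call's connection ID; a small sketch (the helper name is hypothetical,
      RXRPC_CHANNELMASK is the existing protocol constant):

        /* The channel number lives in the low bits of the cid, so no separate
         * field is needed once the call is attached to a connection.
         */
        static inline unsigned int rxrpc_call_channel(const struct rxrpc_call *call)
        {
                return call->cid & RXRPC_CHANNELMASK;
        }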
    • rxrpc: Tidy up the rxrpc_call struct a bit · dabe5a79
      David Howells authored
      Do a little tidying of the rxrpc_call struct:
      
       (1) in_clientflag is no longer compared against the value that's in the
           packet, so keeping it in this form isn't necessary.  Use a flag in
           flags instead and provide a pair of wrapper functions.
      
       (2) We don't read the epoch value, so that can go.
      
       (3) Move what remains of the data that was used for hashing higher up in the
           struct, next to the channel number.
      
       (4) Get rid of the local pointer.  We can get at this via the socket
           struct and we only use this in the procfs viewer.
      Signed-off-by: David Howells <dhowells@redhat.com>
      dabe5a79
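
      Point (1) effectively turns the stored in_clientflag into a call flag plus
      a pair of small helpers, along these lines (flag and helper names are close
      to, but may not exactly match, the upstream ones):

        static inline bool rxrpc_is_service_call(const struct rxrpc_call *call)
        {
                return test_bit(RXRPC_CALL_IS_SERVICE, &call->flags);
        }

        static inline bool rxrpc_is_client_call(const struct rxrpc_call *call)
        {
                return !rxrpc_is_service_call(call);
        }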
  14. 06 Jul 2016, 5 commits
    • rxrpc: Use RCU to access a peer's service connection tree · 8496af50
      David Howells authored
      Move to using RCU access to a peer's service connection tree when routing
      an incoming packet.  This is done using a seqlock to trigger retrying of
      the tree walk if a change happened.
      
      Further, we no longer get a ref on the connection looked up in the
      data_ready handler unless we queue the connection's work item - and then
      only if the refcount > 0.
      
      
      Note that I'm avoiding the use of a hash table for service connections
      because each service connection is addressed by a 62-bit number
      (constructed from epoch and connection ID >> 2) that would allow the client
      to engage in bucket stuffing, given knowledge of the hash algorithm.
      Peers, however, are hashed as the network address is less controllable by
      the client.  The total number of peers will also be limited in a future
      commit.
      Signed-off-by: David Howells <dhowells@redhat.com>
      8496af50
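
      The lookup pattern described (an RCU walk of the peer's rb-tree, with a
      seqlock to retry the walk if a writer changed the tree underneath it) looks
      roughly like this; the tree-walk helper is hypothetical:

        struct rxrpc_connection *
        rxrpc_find_service_conn_rcu(struct rxrpc_peer *peer, u64 conn_id)
        {
                struct rxrpc_connection *conn;
                unsigned int seq;

                /* Caller must already be inside an RCU read-side section. */
                do {
                        seq = read_seqbegin(&peer->service_conn_lock);
                        conn = rxrpc_walk_service_conn_tree(peer, conn_id); /* hypothetical */
                } while (read_seqretry(&peer->service_conn_lock, seq));

                return conn;
        }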
    • rxrpc: Prune the contents of the rxrpc_conn_proto struct · e8d70ce1
      David Howells authored
      Prune the contents of the rxrpc_conn_proto struct.  Most of the fields aren't
      used anymore.
      Signed-off-by: David Howells <dhowells@redhat.com>
      e8d70ce1
    • rxrpc: Maintain an extra ref on a conn for the cache list · 001c1122
      David Howells authored
      Overhaul the usage count accounting for the rxrpc_connection struct to make
      it easier to implement RCU access from the data_ready handler.
      
      The problem is that currently we're using a lock to prevent the garbage
      collector from trying to clean up a connection that we're contemplating
      unidling.  We could just stick incoming packets on the connection we find,
      but we've then got a problem that we may race when dispatching a work item
      to process it as we need to give that a ref to prevent the rxrpc_connection
      struct from disappearing in the meantime.
      
      Further, incoming packets may get discarded if attached to an
      rxrpc_connection struct that is going away.  Whilst this is not a total
      disaster - the client will presumably resend - it would delay processing of
      the call.  This would affect the AFS client filesystem's service manager
      operation.
      
      To this end:
      
       (1) We now hold an extra reference on the connection's usage count whilst it
           is on the connection list.  This means the connection is not in use when
           its refcount is 1.
      
       (2) When trying to reuse an old connection, we only increment the refcount
           if it is greater than 0.  If it is 0, we replace it in the tree with a
           new candidate connection.
      
       (3) Two connection flags are added to indicate whether or not a connection
           is in the local's client connection tree (used by sendmsg) or the
           peer's service connection tree (used by data_ready).  This makes sure
           that we don't try and remove a connection if it got replaced.
      
           The flags are tested under lock with the removal operation to prevent
           the reaper from killing the rxrpc_connection struct whilst someone
           else is trying to effect a replacement.
      
           This could probably be alleviated by using memory barriers between the
           flag set/test and the rb_tree ops.  The rb_tree op would still need to
           be under the lock, however.
      
       (4) When trying to reap an old connection, we try to flip the usage count
           from 1 to 0.  If it's not 1 at that point, then it must've come back
           to life temporarily and we ignore it.
      Signed-off-by: David Howells <dhowells@redhat.com>
      001c1122
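
      Points (2) and (4) correspond to two standard refcount idioms: only take a
      reference on a connection that is still live, and reap by racing the count
      from one (the list-only reference) down to zero.  Sketched:

        /* (2) Reuse path: only bump the usage count if the conn hasn't already
         *     dropped to zero, i.e. isn't on its way to destruction.
         */
        if (!atomic_inc_not_zero(&conn->usage)) {
                /* replace it in the tree with a fresh candidate connection */
        }

        /* (4) Reaper path (inside its loop over old conns): flip the count
         *     1 -> 0; if it wasn't exactly 1, someone started using the conn
         *     again, so leave it alone.
         */
        if (atomic_cmpxchg(&conn->usage, 1, 0) != 1)
                continue;       /* came back to life; skip this time round */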
    • rxrpc: Split client connection code out into its own file · c6d2b8d7
      David Howells authored
      Split the client-specific connection code out into its own file.  It will
      behave somewhat differently from the service-specific connection code, so
      it makes sense to separate them.
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      c6d2b8d7
    • rxrpc: Check that the client conns cache is empty before module removal · eb9b9d22
      David Howells authored
      Check that the client conns cache is empty before module removal and BUG if
      it is not, listing any offending connections that are still present.
      Unfortunately, if there are connections still around, then the transport
      socket is still unexpectedly open and active, so we can't just free the
      connections.
      Signed-off-by: David Howells <dhowells@redhat.com>
      eb9b9d22
  15. 22 Jun 2016, 2 commits
    • rxrpc: Kill the client connection bundle concept · 999b69f8
      David Howells authored
      Kill off the concept of maintaining a bundle of connections to a particular
      target service in order to provide more than the four call slots that a
      single connection offers (there are four call slots per connection).
      
      This will make cleaning up the connection handling code easier and
      facilitate removal of the rxrpc_transport struct.  Bundling can be
      reintroduced later if necessary.
      Signed-off-by: David Howells <dhowells@redhat.com>
      999b69f8
    • rxrpc: Use IDR to allocate client conn IDs on a machine-wide basis · 4a3388c8
      David Howells authored
      Use the IDR facility to allocate client connection IDs on a machine-wide
      basis so that each client connection has a unique identifier.  When the
      connection ID space wraps, we advance the epoch by 1, thereby effectively
      having a 62-bit ID space.  The IDR facility is then used to look up client
      connections during incoming packet routing instead of using an rbtree
      rooted on the transport.
      
      This change allows for the removal of the transport in the future and also
      means that client connections can be looked up directly in the data-ready
      handler by connection ID.
      
      The ID management code is placed in a new file, conn-client.c, to which all
      the client connection-specific code will eventually move.
      
      Note that the IDR tree gets very expensive on memory if the connection IDs
      are widely scattered throughout the number space, so we shall need to
      retire connections that have, say, an ID more than four times the maximum
      number of client conns away from the current allocation point to try and
      keep the IDs concentrated.  We will also need to retire connections from an
      old epoch.
      
      Also note that, for the moment, a pointer to the transport has to be passed
      through into the ID allocation function so that we can take a BH lock to
      prevent a locking issue against in-BH lookup of client connections.  This
      will go away later when RCU is used for server connections also.
      Signed-off-by: David Howells <dhowells@redhat.com>
      4a3388c8
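
      The allocation side of this is essentially idr_alloc_cyclic() under a
      BH-disabling lock; a hedged sketch (the static lock here stands in for the
      transport lock mentioned above, and the epoch-advance-on-wrap handling is
      elided):

        static DEFINE_IDR(rxrpc_client_conn_ids);
        static DEFINE_SPINLOCK(rxrpc_conn_id_lock);     /* simplification, see note above */

        static int rxrpc_get_client_connection_id(struct rxrpc_connection *conn,
                                                  gfp_t gfp)
        {
                int id;

                idr_preload(gfp);
                spin_lock_bh(&rxrpc_conn_id_lock);      /* BH: data_ready also looks up IDs */

                /* Cyclic allocation keeps new IDs near the current cursor; start
                 * at 1 to avoid handing out ID 0.
                 */
                id = idr_alloc_cyclic(&rxrpc_client_conn_ids, conn,
                                      1, 0x40000000, GFP_NOWAIT);

                spin_unlock_bh(&rxrpc_conn_id_lock);
                idr_preload_end();

                if (id < 0)
                        return id;

                conn->proto.cid = id << RXRPC_CIDSHIFT; /* low bits carry the channel */
                return 0;
        }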