1. 26 Aug 2016: 1 commit
  2. 25 Aug 2016: 3 commits
  3. 24 Aug 2016: 15 commits
    • rxrpc: Improve management and caching of client connection objects · 45025bce
      Committed by David Howells
      Improve the management and caching of client rxrpc connection objects.
      From this point, client connections will be managed separately from service
      connections because AF_RXRPC controls the creation and re-use of client
      connections but doesn't have that luxury with service connections.
      
      Further, there will be limits on the numbers of client connections that may
      be live on a machine.  No direct restriction will be placed on the number
      of client calls, excepting that each client connection can support a
      maximum of four concurrent calls.
      
      Note that, for a number of reasons, we don't want to simply discard a
      client connection as soon as the last call is apparently finished:
      
       (1) Security is negotiated per-connection and the context is then shared
           between all calls on that connection.  The context can be negotiated
           again if the connection lapses, but that involves holding up calls
           whilst at least two packets are exchanged and various crypto bits are
           performed - so we'd ideally like to cache it for a little while at
           least.
      
       (2) If a packet goes astray, we will need to retransmit a final ACK or
           ABORT packet.  To make this work, we need to keep around the
           connection details for a little while.
      
       (3) The locally held structures represent some amount of setup time, to be
           weighed against their occupation of memory when idle.
      
      
      To this end, the client connection cache is managed by a state machine on
      each connection.  There are five states:
      
       (1) INACTIVE - The connection is not held in any list and may not have
           been exposed to the world.  If it has been previously exposed, it was
           discarded from the idle list after expiring.
      
       (2) WAITING - The connection is waiting for the number of client conns to
           drop below the maximum capacity.  Calls may be in progress upon it
           from when it was active and got culled.
      
           The connection is on the rxrpc_waiting_client_conns list which is kept
           in to-be-granted order.  Culled conns with waiters go to the back of
           the queue just like new conns.
      
       (3) ACTIVE - The connection has at least one call in progress upon it, it
           may freely grant available channels to new calls and calls may be
           waiting on it for channels to become available.
      
           The connection is on the rxrpc_active_client_conns list which is kept
           in activation order for culling purposes.
      
       (4) CULLED - The connection got summarily culled to try and free up
           capacity.  Calls currently in progress on the connection are allowed
           to continue, but new calls will have to wait.  There can be no waiters
           in this state - the conn would have to go to the WAITING state
           instead.
      
       (5) IDLE - The connection has no calls in progress upon it and must have
           been exposed to the world (ie. the EXPOSED flag must be set).  When it
           expires, the EXPOSED flag is cleared and the connection transitions to
           the INACTIVE state.
      
           The connection is on the rxrpc_idle_client_conns list which is kept in
           order of how soon they'll expire.
      
      A connection in the ACTIVE or CULLED state must have at least one active
      call upon it; if in the WAITING state it may have active calls upon it;
      other states may not have active calls.
      
      As long as a connection remains active and doesn't get culled, it may
      continue to process calls - even if there are connections on the wait
      queue.  This simplifies things a bit and reduces the amount of checking we
      need to do.
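
      As a rough sketch, the five states map naturally onto a per-connection
      enum (a hedged illustration in kernel-style C; the identifiers are
      assumptions, not necessarily the names the patch uses):

        enum rxrpc_conn_cache_state {
                RXRPC_CONN_CLIENT_INACTIVE,     /* Not on any list; may be unexposed */
                RXRPC_CONN_CLIENT_WAITING,      /* On wait list, awaiting capacity */
                RXRPC_CONN_CLIENT_ACTIVE,       /* On active list, granting channels */
                RXRPC_CONN_CLIENT_CULLED,       /* Delisted; existing calls may finish */
                RXRPC_CONN_CLIENT_IDLE,         /* On idle list, awaiting expiry */
        };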
      
      
      There are a couple of flags of relevance to the cache:
      
       (1) EXPOSED - The connection ID got exposed to the world.  If this flag is
           set, an extra ref is added to the connection preventing it from being
           reaped when it has no calls outstanding.  This flag is cleared and the
           ref dropped when a conn is discarded from the idle list.
      
       (2) DONT_REUSE - The connection should be discarded as soon as possible and
           should not be reused.
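
      These could plausibly live as atomic bits in the connection's flag word;
      a hedged sketch (the names and the ref helper are assumptions):

        enum {
                RXRPC_CONN_EXPOSED,             /* Conn ID exposed to the world */
                RXRPC_CONN_DONT_REUSE,          /* Discard ASAP; don't reuse */
        };

        /* On first exposure, pin the connection with an extra ref so that it
         * survives until it is discarded from the idle list.
         */
        if (!test_and_set_bit(RXRPC_CONN_EXPOSED, &conn->flags))
                rxrpc_get_connection(conn);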
      
      
      This commit also provides a number of new settings:
      
       (*) /proc/net/rxrpc/max_client_conns
      
           The maximum number of live client connections.  Above this number, new
           connections get added to the wait list and must wait for an active
           conn to be culled.  Culled connections can be reused, but they will go
           to the back of the wait list and have to wait.
      
       (*) /proc/net/rxrpc/reap_client_conns
      
           If the number of desired connections exceeds the maximum above, the
           active connection list will be culled until there are only this many
           left in it.
      
       (*) /proc/net/rxrpc/idle_conn_expiry
      
           The normal expiry time for a client connection, provided there are
           fewer than reap_client_conns of them around.
      
       (*) /proc/net/rxrpc/idle_conn_fast_expiry
      
           The expedited expiry time, used when there are more than
           reap_client_conns of them around.
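
      One plausible wiring for such knobs is the kernel's standard ctl_table
      mechanism; the sketch below assumes that route, and the default value
      and handler choice are invented for illustration:

        static int rxrpc_max_client_conns = 1000;

        static struct ctl_table rxrpc_sysctl_table[] = {
                {
                        .procname       = "max_client_conns",
                        .data           = &rxrpc_max_client_conns,
                        .maxlen         = sizeof(int),
                        .mode           = 0644,
                        .proc_handler   = proc_dointvec,
                },
                { }     /* the other three knobs would follow the same pattern */
        };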
      
      
      Note that I combined the Tx wait queue with the channel grant wait queue to
      save space as only one of these should be in use at once.
      
      Note also that, for the moment, the service connection cache still uses the
      old connection management code.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Dup the main conn list for the proc interface · 4d028b2c
      Committed by David Howells
      The main connection list is used for two independent purposes: primarily it
      is used to find connections to reap and secondarily it is used to list
      connections in procfs.
      
      Split the procfs list out from the reap list.  This allows us to stop using
      the reap list for client connections when they acquire a separate
      management strategy from service connections.
      
      The client connections will not be on a single management list, and sometimes
      won't be on a management list at all.  This doesn't leave them floating,
      however, as they will also be on an rb-tree rooted on the socket so that the
      socket can find them to dispatch calls.
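
      In other words, a connection carries two independent list linkages,
      roughly as below (the field and list names are assumptions):

        struct rxrpc_connection {
                /* ... */
                struct list_head link;          /* reap/management list */
                struct list_head proc_link;     /* procfs listing only */
                /* ... */
        };

        /* The /proc seq_file walks the dedicated proc list, so client conns
         * can leave the management lists without vanishing from procfs.
         */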
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Make /proc/net/rxrpc_calls safer · df5d8bf7
      Committed by David Howells
      Make /proc/net/rxrpc_calls safer by stashing a copy of the peer pointer in
      the rxrpc_call struct and checking in the show routine that the peer
      pointer, the socket pointer and the local pointer obtained from the socket
      pointer aren't NULL before we use them.
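
      A minimal sketch of the defensive pattern described above (the function
      and field names are assumptions; %pISpc is the kernel's sockaddr format
      specifier):

        static void rxrpc_call_show_peer(struct seq_file *seq,
                                         struct rxrpc_call *call)
        {
                struct rxrpc_sock *rx = READ_ONCE(call->socket);
                struct rxrpc_local *local = rx ? READ_ONCE(rx->local) : NULL;

                if (local)
                        seq_printf(seq, "%pISpc", &local->srx.transport);
                else
                        seq_puts(seq, "no_local");

                if (call->peer)
                        seq_printf(seq, " %pISpc", &call->peer->srx.transport);
                else
                        seq_puts(seq, " no_peer");
        }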
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Fix conn-based retransmit · 2266ffde
      Committed by David Howells
      If a duplicate packet comes in for a call that has just completed on a
      connection's channel then there will be an oops in the data_ready handler
      because it tries to examine the connection struct via a call struct (which
      we don't have - the pointer is unset).
      
      Since the connection struct pointer is available to us, go direct instead.
      
      Also, the ACK packet to be retransmitted needs three octets of padding
      between the soft ack list and the ackinfo.
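
      The padding can be made explicit in the on-stack packet used for the
      retransmission; a sketch under assumed struct names:

        struct {
                struct rxrpc_wire_header whdr;
                struct rxrpc_ackpacket ack;
                u8 pad[3];                      /* 3 octets after the soft-ACK list */
                struct rxrpc_ackinfo info;
        } __packed pkt;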
      
      Fixes: 18bfeba5 ("rxrpc: Perform terminal call ACK/ABORT retransmission from conn processor")
      Signed-off-by: David Howells <dhowells@redhat.com>
    • net: remove clear_sk() method · ba2489b0
      Committed by Eric Dumazet
      We no longer use this handler, so we can delete it.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: tcp: get rid of tcp_v6_clear_sk() · 391bb6be
      Committed by Eric Dumazet
      Now that RCU lookups of IPv6 TCP sockets no longer dereference pinet6,
      we do not need tcp_v6_clear_sk() anymore.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • udp: get rid of sk_prot_clear_portaddr_nulls() · 4cac8204
      Committed by Eric Dumazet
      Since we no longer use SLAB_DESTROY_BY_RCU for UDP,
      we do not need the sk_prot_clear_portaddr_nulls() helper.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: udp: remove udp_v6_clear_sk() · 6a6ad2a4
      Committed by Eric Dumazet
      Now that RCU lookups of IPv6 UDP sockets no longer dereference the
      pinet6 field, we can get rid of the udp_v6_clear_sk() helper.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: diag: support SOCK_DESTROY for UDP sockets · 5d77dca8
      Committed by David Ahern
      This implements SOCK_DESTROY for UDP sockets similar to what was done
      for TCP with commit c1e64e29 ("net: diag: Support destroying TCP
      sockets.") A process with a UDP socket targeted for destroy is awakened
      and recvmsg fails with ECONNABORTED.
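
      A hedged sketch of what such a diag-driven abort hook could look like
      (not necessarily the patch's exact code):

        static int udp_abort(struct sock *sk, int err)
        {
                lock_sock(sk);

                sk->sk_err = err;               /* ECONNABORTED for SOCK_DESTROY */
                sk->sk_error_report(sk);        /* wakes any blocked recvmsg() */
                udp_disconnect(sk, 0);

                release_sock(sk);
                return 0;
        }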
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: use kfree_skb() instead of kfree() · 5128b185
      Committed by Wei Yongjun
      Use kfree_skb() instead of kfree() to free an sk_buff.
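
      The distinction matters because an sk_buff owns separate data and
      fragment storage that a plain kfree() would leak. A minimal illustration
      (bearer_accepts() is hypothetical, not the patched tipc code):

        /* Hypothetical error path: drop a just-allocated buffer. */
        struct sk_buff *skb = alloc_skb(128, GFP_ATOMIC);

        if (skb && !bearer_accepts(skb)) {
                kfree_skb(skb); /* right: frees head, data and frag list */
                /* kfree(skb);     wrong: frees only the struct, leaking the rest */
        }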
      
      Fixes: 0d051bf9 ("tipc: make bearer packet filtering generic")
      Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
      Acked-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: rtnetlink: Don't export empty RTAX_FEATURES · f8edcd12
      Committed by Phil Sutter
      Since the features bit field also contains bits for internal-only use,
      the kernel may export the RTAX_FEATURES attribute with a value of zero,
      which is pointless.
      
      Fix this by making sure the attribute is added only if the exported
      value is non-zero.
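
      A sketch of the check described above, inside the metrics-dumping code
      (the surrounding variables are assumed; RTAX_FEATURE_MASK is taken to
      cover the user-visible feature bits):

        u32 user_features = metrics[RTAX_FEATURES - 1] & RTAX_FEATURE_MASK;

        /* Skip the attribute entirely when nothing user-visible remains. */
        if (user_features &&
            nla_put_u32(skb, RTAX_FEATURES, user_features))
                goto nla_put_failure;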
      Signed-off-by: Phil Sutter <phil@nwl.cc>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net-tcp: retire TFO_SERVER_WO_SOCKOPT2 config · cebc5cba
      Committed by Yuchung Cheng
      TFO_SERVER_WO_SOCKOPT2 was intended for debugging purposes during
      Fast Open development. Remove this config option and also
      update and clean up the documentation of the Fast Open sysctl.
      Reported-by: Piotr Jurkiewicz <piotr.jerzy.jurkiewicz@gmail.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • l2tp: Refactor the codes with existing macros instead of literal number · 54c151d9
      Committed by Gao Feng
      Use PPP_ALLSTATIONS, PPP_UI, and SEND_SHUTDOWN instead of the literals
      0xff, 0x03, and 2, respectively.
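
      The effect, roughly (PPP_ALLSTATIONS and PPP_UI come from
      <linux/ppp_defs.h>, SEND_SHUTDOWN from <net/sock.h>; the surrounding
      code is a sketch, not the patched l2tp source):

        /* Before: bare magic numbers */
        if (skb->data[0] == 0xff && skb->data[1] == 0x03)
                skb_pull(skb, 2);

        /* After: self-documenting constants */
        if (skb->data[0] == PPP_ALLSTATIONS && skb->data[1] == PPP_UI)
                skb_pull(skb, 2);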
      Signed-off-by: Gao Feng <fgao@ikuai8.com>
      Acked-by: Guillaume Nault <g.nault@alphalink.fr>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • kcm: Fix locking issue · 1616b38f
      Committed by Tom Herbert
      Lock the lower socket in kcm_unattach. Release it during the call to
      strp_done(), since that function synchronously cancels the RX timers
      and work queue.
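
      The ordering, sketched (not the exact kcm code; csk is the lower socket
      and psock its wrapper):

        lock_sock(csk);
        /* ... unlink psock from the mux under the socket lock ... */
        release_sock(csk);

        /* strp_done() synchronously cancels the RX timer and queued work,
         * which may themselves need the socket lock, so call it unlocked.
         */
        strp_done(&psock->strp);

        lock_sock(csk);
        /* ... restore the original socket callbacks, finish detaching ... */
        release_sock(csk);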
      
      Also add some status information to psock reporting.
      Signed-off-by: Tom Herbert <tom@herbertland.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • strparser: Queue work when being unpaused · cff6a334
      Committed by Tom Herbert
      When the upper layer unpauses a stream parser connection, we need to
      queue rx_work to make sure no events are missed.
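
      A sketch of such an unpause path (close to what the description implies;
      the names are assumed):

        void strp_unpause(struct strparser *strp)
        {
                strp->rx_paused = 0;

                /* Order clearing rx_paused against the queued RX work. */
                smp_mb();

                queue_work(strp_wq, &strp->rx_work);
        }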
      Signed-off-by: Tom Herbert <tom@herbertland.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 23 Aug 2016: 12 commits
  5. 22 Aug 2016: 2 commits
  6. 20 Aug 2016: 7 commits