1. 05 Feb 2015, 7 commits
    • tipc: separate link starting event from link timeout event · af9946fd
      Committed by Jon Paul Maloy
      When a new link instance is created, it is triggered to start by
      sending it a TIPC_STARTING_EVT, whereafter a regular link
      reset is applied to it.
      
      In the code, the starting event is treated as a timeout event, and it
      prompts a link RESET message to be sent to the peer node, carrying a
      link session identifier. The later link_reset() call bumps this session
      identifier, after which all subsequent RESET messages are sent out with
      the new identifier. The new session number overrides the old one,
      causing the peer to accept it unconditionally, irrespective of its
      current working state.
      
      We don't think that this causes any problem, but it is not in accordance
      with the protocol spec, and may cause confusion when debugging TIPC
      sessions.
      
      To avoid this, we make the starting event distinct from the
      subsequent timeout events, by not allowing the former to send
      out any RESET message. This eliminates the described problem.
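      The change is conceptually small. Below is a minimal C sketch of the
      intended behaviour; TIPC_STARTING_EVT comes from the commit text, but
      the handler, field and helper names are illustrative assumptions, not
      the actual TIPC code.

      /* Sketch only: 'started' and tipc_link_send_reset() are assumed names. */
      static void link_state_event_sketch(struct tipc_link *l, int evt)
      {
              if (evt == TIPC_STARTING_EVT) {
                      /* First activation: mark the link as started, but do
                       * not send a RESET; the session id the peer sees will
                       * be the one set by the following link_reset() call. */
                      l->started = true;
                      return;
              }
              /* Ordinary timeout: probe the peer with a RESET carrying the
               * current session identifier. */
              tipc_link_send_reset(l);
      }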
      Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
      Reviewed-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: eliminate race during node creation · b45db71b
      Committed by Jon Paul Maloy
      Instances of struct tipc_node are created in the function tipc_disc_rcv()
      under the assumption that there is no race between received discovery
      messages arriving from the same node. This assumption is wrong.
      When we use more than one bearer, it is possible that discovery
      messages from the same node arrive at the same moment, resulting in
      creation of two instances of struct tipc_node. This may later cause
      confusion during link establishment, and may result in one of the links
      never becoming activated.
      
      We fix this by making lookup and potential creation of nodes atomic.
      Instead of first looking up the node and, if that fails, creating it,
      we now start by looking up the node inside node_link_create(), and
      return a reference to that one if found. Otherwise, we go ahead and
      create the node as before.
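      As a rough C sketch of this lookup-or-create pattern (the lock, list
      and helper names here are assumptions for illustration, not the actual
      TIPC code):

      /* Sketch only: lookup and creation happen under one lock, so two
       * discovery messages for the same address cannot both create a node. */
      static struct tipc_node *node_find_or_create_sketch(u32 addr)
      {
              struct tipc_node *n;

              spin_lock_bh(&node_list_lock);
              n = node_find_locked(addr);             /* assumed helper */
              if (n)
                      goto out;                       /* lost the race: reuse it */
              n = kzalloc(sizeof(*n), GFP_ATOMIC);
              if (!n)
                      goto out;
              n->addr = addr;
              list_add_tail(&n->list, &node_list);
      out:
              spin_unlock_bh(&node_list_lock);
              return n;
      }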
      Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
      Reviewed-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: avoid stale link after aborted failover · 7d24dcdb
      Committed by Jon Paul Maloy
      During link failover it may happen that the remaining link goes
      down while it is still in the process of taking over traffic
      from a previously failed link. When this happens, we currently
      abort the failover procedure and reset the first failed link to
      non-failover mode, so that it will be ready to re-establish
      contact with its peer when it becomes available again.
      
      However, if the first link goes down because its bearer was manually
      disabled, it is not enough to reset it; it must also be deleted, which
      is supposed to happen when the failover procedure is finished.
      Otherwise it remains a zombie link: attached to the owner node
      structure, in mode LINK_STOPPED, and permanently blocking any
      re-establishment of the link to the peer via the interface in question.
      
      We fix this by amending the failover abort procedure. Apart from
      resetting the link to non-failover state, we test if the link is
      also in LINK_STOPPED mode. If so, we delete it, using the conditional
      tipc_link_delete() function introduced in the previous commit.
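      A small C sketch of the amended abort path (LINK_STOPPED and
      tipc_link_delete() are named in the commit text; the flag handling and
      the other names are illustrative assumptions):

      /* Sketch only: reset failover state, then delete the link if its
       * bearer was already disabled (LINK_STOPPED), instead of leaving a
       * zombie attached to the node. */
      static void link_failover_abort_sketch(struct tipc_link *l)
      {
              l->flags &= ~LINK_FAILINGOVER;          /* assumed flag name */
              if (l->flags & LINK_STOPPED)
                      tipc_link_delete(l);            /* conditional delete */
      }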
      Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
      Reviewed-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: add reference count to struct tipc_link · 2d72d495
      Committed by Jon Paul Maloy
      When a bearer is disabled, all pertaining links will be reset and
      deleted. However, if there is a second active link towards a killed
      link's destination, the delete has to be postponed until the failover
      is finished. During this interval, we currently put the link in zombie
      mode, i.e., we take it out of traffic, delete its timer, but leave it
      attached to the owner node structure until all missing packets have
      been received.  When this is done, we detach the link from its node
      and delete it, assuming that the synchronous timer deletion that was
      initiated earlier in a different thread has finished.
      
      This is unsafe, as the failover may finish before del_timer_sync()
      has returned in the other thread.
      
      We fix this by adding an atomic reference counter of type kref in
      struct tipc_link. The counter keeps track of the references kept
      to the link by the owner node and the timer. We then do a conditional
      delete, based on the reference counter, both after the failover has
      been finished and when the timer expires, if applicable. Whoever
      comes last will actually delete the link. This approach also implies
      that we can make the deletion of the timer asynchronous.
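      A minimal C sketch of the kref-based lifetime handling (the 'ref'
      field and the put/release helpers below are assumed names):

      /* Sketch only: node detach and timer expiry each drop one reference;
       * whoever drops the last one frees the link. */
      static void tipc_link_release_sketch(struct kref *kref)
      {
              struct tipc_link *l = container_of(kref, struct tipc_link, ref);

              kfree(l);
      }

      static void tipc_link_put_sketch(struct tipc_link *l)
      {
              kref_put(&l->ref, tipc_link_release_sketch);
      }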
      Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
      Reviewed-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: add skb functions to process remote checksum offload · dcdc8994
      Committed by Tom Herbert
      This patch adds skb_remcsum_process and skb_gro_remcsum_process to
      perform the appropriate adjustments to the skb when receiving
      remote checksum offload.
      
      Updated vxlan and gue to use these functions.
      
      Tested: Ran TCP_RR and TCP_STREAM netperf for VXLAN and GUE, did
      not see any change in performance.
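      A conceptual C sketch of the adjustment such a helper performs on a
      CHECKSUM_COMPLETE packet (this mirrors the idea only; treat the exact
      signature and names as assumptions, not the kernel API):

      /* Sketch only: derive the inner checksum from the complete checksum
       * instead of trusting the value the sender left in the packet.
       * 'ptr' points into the packet, 'start' is where the covered region
       * begins and 'offset' is where the checksum field sits. */
      static __wsum remcsum_adjust_sketch(void *ptr, __wsum csum,
                                          int start, int offset)
      {
              __sum16 *psum = (__sum16 *)((char *)ptr + offset);
              __wsum delta;

              /* Remove the part of the checksum that precedes 'start'. */
              csum = csum_sub(csum, csum_partial(ptr, start, 0));

              /* Write the folded checksum into the packet and return the
               * difference so skb->csum can be updated to match. */
              delta = csum_sub((__force __wsum)csum_fold(csum),
                               (__force __wsum)*psum);
              *psum = csum_fold(csum);

              return delta;
      }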
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bridge: Let bridge not age 'externally' learnt FDB entries, they are removed when 'external' entity notifies the aging · 9a05dde5
      Committed by Siva Mannem
      
      When the 'learned_sync' flag is turned on, the offloaded switch port
      syncs learned MAC addresses to the bridge's FDB via the switchdev
      notifier (NETDEV_SWITCH_FDB_ADD). Currently, FDB entries learnt via
      this mechanism are wrongly being deleted by the bridge aging logic.
      This patch ensures that FDB entries synced from offloaded switch ports
      are not deleted by the bridging logic. Such entries can only be deleted
      via the switchdev notifier (NETDEV_SWITCH_FDB_DEL).
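      As a rough C sketch of the aging check (the field names here, including
      the 'added_by_external_learn' style flag, are assumptions used for
      illustration, not necessarily the bridge code):

      /* Sketch only: entries synced from an offloaded switch port are never
       * aged by the bridge; the external entity removes them via
       * NETDEV_SWITCH_FDB_DEL. */
      static bool fdb_has_expired_sketch(const struct net_bridge_fdb_entry *f,
                                         unsigned long hold_time)
      {
              if (f->is_static || f->added_by_external_learn)
                      return false;
              return time_before_eq(f->updated + hold_time, jiffies);
      }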
      Signed-off-by: Siva Mannem <siva.mannem.lnx@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • xps: fix xps for stacked devices · 2bd82484
      Committed by Eric Dumazet
      A typical qdisc setup is the following:
      
      bond0 : bonding device, using HTB hierarchy
      eth1/eth2 : slaves, multiqueue NIC, using MQ + FQ qdisc
      
      XPS makes it possible to spread packets over specific tx queues, based
      on the cpu doing the send.
      
      The problem is that dequeues from the bond0 qdisc can happen on random
      cpus, because qdisc_run() can dequeue a batch of packets.
      
      CPUA -> queue packet P1 on bond0 qdisc, P1->ooo_okay=1
      CPUA -> queue packet P2 on bond0 qdisc, P2->ooo_okay=0
      
      CPUB -> dequeue packet P1 from bond0
              enqueue packet on eth1/eth2
      CPUC -> dequeue packet P2 from bond0
              enqueue packet on eth1/eth2 using sk cache (ooo_okay is 0)
      
      get_xps_queue() might then select the wrong queue for P1, since the
      current cpu might be different from CPUA.
      
      P2 might be sent on the old queue (stored in sk->sk_tx_queue_mapping)
      if CPUC runs a bit faster (or CPUB spins a bit on the qdisc lock).
      
      The effect of this bug is TCP reordering and, more generally,
      suboptimal TX queue placement. (A victim bulk flow can be migrated to
      the wrong TX queue for a while.)
      
      To fix this, we have to record the sender cpu number the first time
      dev_queue_xmit() is called for a given tx skb.
      
      We can union napi_id (used on the receive path) and sender_cpu,
      provided we clear sender_cpu in skb_scrub_packet() (credit to Willem
      for this union idea).
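      A simplified C sketch of the approach (the union layout mirrors the
      description above; the helper below and the '0 means unset' convention
      are illustrative assumptions):

      /* Sketch only: napi_id (RX busy polling) and sender_cpu (TX XPS) can
       * share storage because they are never needed at the same time. */
      struct skb_cpu_fields_sketch {
              union {
                      unsigned int napi_id;           /* receive path */
                      unsigned int sender_cpu;        /* transmit path */
              };
      };

      /* Record the originating cpu the first time the skb is transmitted,
       * so get_xps_queue() can use it even if the qdisc dequeue later runs
       * on a different cpu. Cleared again in skb_scrub_packet(). */
      static void skb_record_sender_cpu_sketch(struct sk_buff *skb)
      {
              if (!skb->sender_cpu)
                      skb->sender_cpu = raw_smp_processor_id() + 1;   /* 0 = unset */
      }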
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Cc: Nandita Dukkipati <nanditad@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 04 Feb 2015, 3 commits
  3. 03 Feb 2015, 13 commits
  4. 02 Feb 2015, 5 commits
  5. 01 Feb 2015, 11 commits
  6. 31 Jan 2015, 1 commit