1. 09 Apr 2019: 7 commits
  2. 08 Apr 2019: 2 commits
    • net: sched: flower: insert filter to ht before offloading it to hw · 1f17f774
      Committed by Vlad Buslov
      John reports:
      
      Recent refactoring of fl_change aims to use the classifier spinlock to
      avoid the need for the rtnl lock. In doing so, the fl_hw_replace_filter()
      function was moved to before the lock is taken. This can create problems
      for drivers if duplicate filters are created (common in OvS TC offload
      due to filters being triggered by user-space matches).
      
      Drivers registered for such filters will now receive multiple copies of
      the same rule, each with a different cookie value. This means that the
      drivers would need to do a full match field lookup to determine
      duplicates, repeating work that will happen in flower __fl_lookup().
      Currently, drivers do not expect to receive duplicate filters.
      
      To fix this, verify that a filter with the same key is not present in
      the flower classifier hash table and insert the new filter into the
      flower hash table before offloading it to hardware. Implement the helper
      function fl_ht_insert_unique() to atomically verify and insert a filter;
      a hedged sketch follows below.
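
      A minimal sketch of such a helper, assuming the flower mask keeps its
      filters in an rhashtable (field names here are illustrative, not the
      verbatim patch):

            /* rhashtable_lookup_insert_fast() atomically fails with -EEXIST
             * when an entry with the same key already exists, so one call
             * both verifies uniqueness and inserts the new filter. */
            static int fl_ht_insert_unique(struct cls_fl_filter *fnew,
                                           struct cls_fl_filter *fold,
                                           bool *in_ht)
            {
                    struct fl_flow_mask *mask = fnew->mask;
                    int err;

                    err = rhashtable_lookup_insert_fast(&mask->ht,
                                                        &fnew->ht_node,
                                                        mask->filter_ht_params);
                    if (err) {
                            *in_ht = false;
                            /* colliding with the filter being replaced is ok */
                            if (fold && err == -EEXIST)
                                    return 0;
                            return err;
                    }
                    *in_ht = true;
                    return 0;
            }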
      
      This change makes the filter visible to the fast path at the beginning
      of the fl_change() function, which means it can no longer be freed
      directly in case of error. Refactor the fl_change() error handling code
      to deallocate the filter via an RCU grace period.
      
      Fixes: 620da486 ("net: sched: flower: refactor fl_change")
      Reported-by: John Hurley <john.hurley@netronome.com>
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1f17f774
    • rhashtable: use bit_spin_locks to protect hash bucket. · 8f0db018
      Committed by NeilBrown
      This patch changes rhashtables to use a bit_spin_lock on BIT(1) of the
      bucket pointer to lock the hash chain for that bucket.
      
      The benefits of a bit spin_lock are:
       - no need to allocate a separate array of locks.
       - no need to have a configuration option to guide the
         choice of the size of this array
       - locking cost is often a single test-and-set in a cache line
         that will have to be loaded anyway.  When inserting at, or removing
         from, the head of the chain, the unlock is free - writing the new
         address in the bucket head implicitly clears the lock bit.
         For __rhashtable_insert_fast() we ensure this always happens
         when adding a new key.
       - even when locking costs 2 updates (lock and unlock), they are
         in a cacheline that needs to be read anyway.
      
      The cost of using a bit spin_lock is a little bit of code complexity,
      which I think is quite manageable.
      
      Bit spin_locks are sometimes inappropriate because they are not fair -
      if multiple CPUs repeatedly contend for the same lock, one CPU can
      easily be starved.  This is not a credible situation with rhashtable.
      Multiple CPUs may want to repeatedly add or remove objects, but they
      will typically do so at different buckets, so they will attempt to
      acquire different locks.
      
      As we have more bit-locks than we previously had spinlocks (by at
      least a factor of two) we can expect slightly less contention to
      go with the slightly better cache behavior and reduced memory
      consumption.
      
      To enhance type checking, a new struct is introduced to represent the
      pointer plus lock-bit that is stored in the bucket table. This is
      "struct rhash_lock_head" and is empty.  A pointer to this needs to be
      cast to either an unsigned long, or a "struct rhash_head *", to be
      useful. Variables of this type are most often called "bkt".
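
      A hedged sketch of what the bucket lock/unlock helpers could look like
      under this scheme (the real helpers may differ in detail):

            #include <linux/bit_spinlock.h>

            /* The bucket entry is a pointer whose BIT(1) doubles as the
             * per-chain lock; the struct itself stays empty and is only
             * a device for type checking. */
            struct rhash_lock_head {};

            static inline void rht_lock(struct rhash_lock_head **bkt)
            {
                    local_bh_disable();
                    bit_spin_lock(1, (unsigned long *)bkt);
            }

            static inline void rht_unlock(struct rhash_lock_head **bkt)
            {
                    bit_spin_unlock(1, (unsigned long *)bkt);
                    local_bh_enable();
            }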
      
      Previously "pprev" would sometimes point to a bucket, and sometimes a
      ->next pointer in an rhash_head.  As these are now different types,
      pprev is NULL when it would have pointed to the bucket. In that case,
      'blk' is used, together with correct locking protocol.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8f0db018
  3. 07 Apr 2019: 15 commits
  4. 05 Apr 2019: 16 commits
    • ipv6: sit: reset ip header pointer in ipip6_rcv · bb9bd814
      Committed by Lorenzo Bianconi
      ipip6 tunnels run iptunnel_pull_header on received skbs. This can
      cause the following use-after-free when accessing the iph pointer,
      since the packet will be 'uncloned' by running pskb_expand_head if it
      is a cloned gso skb (e.g. if the packet has been sent through a veth
      device):
      
      [  706.369655] BUG: KASAN: use-after-free in ipip6_rcv+0x1678/0x16e0 [sit]
      [  706.449056] Read of size 1 at addr ffffe01b6bd855f5 by task ksoftirqd/1/=
      [  706.669494] Hardware name: HPE ProLiant m400 Server/ProLiant m400 Server, BIOS U02 08/19/2016
      [  706.771839] Call trace:
      [  706.801159]  dump_backtrace+0x0/0x2f8
      [  706.845079]  show_stack+0x24/0x30
      [  706.884833]  dump_stack+0xe0/0x11c
      [  706.925629]  print_address_description+0x68/0x260
      [  706.982070]  kasan_report+0x178/0x340
      [  707.025995]  __asan_report_load1_noabort+0x30/0x40
      [  707.083481]  ipip6_rcv+0x1678/0x16e0 [sit]
      [  707.132623]  tunnel64_rcv+0xd4/0x200 [tunnel4]
      [  707.185940]  ip_local_deliver_finish+0x3b8/0x988
      [  707.241338]  ip_local_deliver+0x144/0x470
      [  707.289436]  ip_rcv_finish+0x43c/0x14b0
      [  707.335447]  ip_rcv+0x628/0x1138
      [  707.374151]  __netif_receive_skb_core+0x1670/0x2600
      [  707.432680]  __netif_receive_skb+0x28/0x190
      [  707.482859]  process_backlog+0x1d0/0x610
      [  707.529913]  net_rx_action+0x37c/0xf68
      [  707.574882]  __do_softirq+0x288/0x1018
      [  707.619852]  run_ksoftirqd+0x70/0xa8
      [  707.662734]  smpboot_thread_fn+0x3a4/0x9e8
      [  707.711875]  kthread+0x2c8/0x350
      [  707.750583]  ret_from_fork+0x10/0x18
      
      [  707.811302] Allocated by task 16982:
      [  707.854182]  kasan_kmalloc.part.1+0x40/0x108
      [  707.905405]  kasan_kmalloc+0xb4/0xc8
      [  707.948291]  kasan_slab_alloc+0x14/0x20
      [  707.994309]  __kmalloc_node_track_caller+0x158/0x5e0
      [  708.053902]  __kmalloc_reserve.isra.8+0x54/0xe0
      [  708.108280]  __alloc_skb+0xd8/0x400
      [  708.150139]  sk_stream_alloc_skb+0xa4/0x638
      [  708.200346]  tcp_sendmsg_locked+0x818/0x2b90
      [  708.251581]  tcp_sendmsg+0x40/0x60
      [  708.292376]  inet_sendmsg+0xf0/0x520
      [  708.335259]  sock_sendmsg+0xac/0xf8
      [  708.377096]  sock_write_iter+0x1c0/0x2c0
      [  708.424154]  new_sync_write+0x358/0x4a8
      [  708.470162]  __vfs_write+0xc4/0xf8
      [  708.510950]  vfs_write+0x12c/0x3d0
      [  708.551739]  ksys_write+0xcc/0x178
      [  708.592533]  __arm64_sys_write+0x70/0xa0
      [  708.639593]  el0_svc_handler+0x13c/0x298
      [  708.686646]  el0_svc+0x8/0xc
      
      [  708.739019] Freed by task 17:
      [  708.774597]  __kasan_slab_free+0x114/0x228
      [  708.823736]  kasan_slab_free+0x10/0x18
      [  708.868703]  kfree+0x100/0x3d8
      [  708.905320]  skb_free_head+0x7c/0x98
      [  708.948204]  skb_release_data+0x320/0x490
      [  708.996301]  pskb_expand_head+0x60c/0x970
      [  709.044399]  __iptunnel_pull_header+0x3b8/0x5d0
      [  709.098770]  ipip6_rcv+0x41c/0x16e0 [sit]
      [  709.146873]  tunnel64_rcv+0xd4/0x200 [tunnel4]
      [  709.200195]  ip_local_deliver_finish+0x3b8/0x988
      [  709.255596]  ip_local_deliver+0x144/0x470
      [  709.303692]  ip_rcv_finish+0x43c/0x14b0
      [  709.349705]  ip_rcv+0x628/0x1138
      [  709.388413]  __netif_receive_skb_core+0x1670/0x2600
      [  709.446943]  __netif_receive_skb+0x28/0x190
      [  709.497120]  process_backlog+0x1d0/0x610
      [  709.544169]  net_rx_action+0x37c/0xf68
      [  709.589131]  __do_softirq+0x288/0x1018
      
      [  709.651938] The buggy address belongs to the object at ffffe01b6bd85580
                      which belongs to the cache kmalloc-1024 of size 1024
      [  709.804356] The buggy address is located 117 bytes inside of
                      1024-byte region [ffffe01b6bd85580, ffffe01b6bd85980)
      [  709.946340] The buggy address belongs to the page:
      [  710.003824] page:ffff7ff806daf600 count:1 mapcount:0 mapping:ffffe01c4001f600 index:0x0
      [  710.099914] flags: 0xfffff8000000100(slab)
      [  710.149059] raw: 0fffff8000000100 dead000000000100 dead000000000200 ffffe01c4001f600
      [  710.242011] raw: 0000000000000000 0000000000380038 00000001ffffffff 0000000000000000
      [  710.334966] page dumped because: kasan: bad access detected
      
      Fix it by resetting the iph pointer after iptunnel_pull_header.
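
      The fix in ipip6_rcv() is plausibly of this shape (a sketch based on
      the description, not the verbatim diff):

            if (iptunnel_pull_header(skb, 0, tpi->proto, false))
                    goto out;

            /* The pull may run pskb_expand_head() on a cloned gso skb and
             * free the old header, so reload iph before dereferencing it. */
            iph = ip_hdr(skb);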
      
      Fixes: a09a4c8d ("tunnels: Remove encapsulation offloads on decap")
      Tested-by: Jianlin Shi <jishi@redhat.com>
      Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bb9bd814
    • tipc: adapt link failover for new Gap-ACK algorithm · 58ee86b8
      Committed by Tuong Lien
      In commit 0ae955e2656d ("tipc: improve TIPC throughput by Gap ACK
      blocks"), we enhanced the link transmq by releasing as many packets as
      possible with the multi-ACKs from the peer node. This also means the
      queue is now non-linear and the peer link deferdq becomes vital.
      
      However, in the case of link failover, all messages in the link transmq
      need to be transmitted as tunnel messages in such a way that message
      sequentiality and cardinality per sender are preserved. This requires
      us to maintain the link deferdq somehow, so that when the tunnel
      messages arrive, the inner user messages, along with the ones in the
      deferdq, will be delivered to the upper layer correctly.
      
      The commit accomplishes this by defining a new queue in the TIPC link
      structure to hold the old link deferdq when link failover happens, and
      by processing it upon receipt of tunnel messages.
      
      Also, in the case of link syncing, the link deferdq will not be purged
      to avoid unnecessary retransmissions that in the worst case will fail
      because the packets might have been freed on the sending side.
      Acked-by: Ying Xue <ying.xue@windriver.com>
      Acked-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Tuong Lien <tuong.t.lien@dektech.com.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      58ee86b8
    • tipc: reduce duplicate packets for unicast traffic · 382f598f
      Committed by Tuong Lien
      For unicast transmission, the current NACK sending algorithm is
      overactive and forces the sending side to retransmit a packet that is
      not really lost but has just arrived at the receiving side with some
      delay, or even to retransmit packets that have already been
      retransmitted before. As a result, many duplicates are observed even
      under normal conditions, i.e. without packet loss.
      
      One example case: node1 transmits 1 2 3 4 10 5 6 7 8 9. When node2
      receives packet #10, it puts it into the deferdq. When packet #5
      arrives, it sends a NACK with gap [6 - 9]. However, shortly after
      that, when packet #6 arrives, it pulls packet #10 out of the deferdq,
      but it is still out of order, so it makes another NACK with gap
      [7 - 9], and so on. Finally, node1 has to retransmit packets 5 6 7 8 9
      a number of times, even though none of them was actually lost - hence
      the duplicates!
      
      This commit reduces duplicates by changing the condition for sending a
      NACK, and by restricting retransmissions of individual packets via a
      timer of about 1ms. Note, however, that a NACK condition that is too
      strict, or a retransmission timeout that is too long, will reduce
      performance. The criteria in this commit are found to be effective at
      reducing duplicates without affecting performance.
      
      tipc_link_rcv() is also improved to only dequeue an skb from the link
      deferdq if it is expected (i.e. its seqno <= rcv_nxt), as sketched
      below.
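
      A hedged sketch of the dequeue condition (l->deferdq, buf_seqno() and
      rcv_nxt are assumed from TIPC's internals; real code would use a
      wrap-safe seqno comparison):

            /* Only pull the head of the deferdq once it is in sequence;
             * otherwise leave it queued instead of NACKing again. */
            skb = skb_peek(&l->deferdq);
            if (skb && buf_seqno(skb) <= l->rcv_nxt)
                    skb = __skb_dequeue(&l->deferdq);
            else
                    skb = NULL;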
      Acked-by: Ying Xue <ying.xue@windriver.com>
      Acked-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Tuong Lien <tuong.t.lien@dektech.com.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      382f598f
    • tipc: improve TIPC throughput by Gap ACK blocks · 9195948f
      Committed by Tuong Lien
      During unicast link transmission, it is observed very often that,
      because of one or a few lost/reordered packets, the sending side
      quickly reaches the send window limit and must wait for the packets
      to arrive at the receiving side or, in the worst case, must first do a
      retransmission. The sending side cannot release a lot of subsequent
      packets from its transmq even though all of them might have already
      been received by the receiving side.
      That is, with one or two packets reordered or lost, dozens of packets
      have to wait, which obviously reduces the overall throughput!
      
      This commit introduces an algorithm to overcome this by using "Gap ACK
      blocks". Basically, a Gap ACK block consists of <ack, gap> numbers
      that describe the link deferdq, where packets have been received by
      the receiving side but with gaps, for example:
      
            link deferdq: [1 2 3 4      10 11      13 14 15       20]
      --> Gap ACK blocks:       <4, 5>,   <11, 1>,      <15, 4>, <20, 0>
      
      The Gap ACK blocks are sent to the sending side along with the
      traditional ACK or NACK message. Immediately upon receiving the
      message, the sending side will now release from its transmq not only
      the packets acked by the ACK but also those covered by the Gap ACK
      blocks! So, more packets can be enqueued and transmitted.
      In addition, the sending side can now do "multi-retransmissions"
      according to the gaps reported in the Gap ACK blocks; a worked
      illustration follows below.
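
      As a self-contained illustration (plain userspace C rather than the
      TIPC code, so every name below is illustrative), the <ack, gap> pairs
      in the example above can be derived from the sorted deferdq like this:

            #include <stdio.h>

            struct gap_ack { unsigned ack; unsigned gap; };

            int main(void)
            {
                    unsigned deferdq[] = { 1, 2, 3, 4, 10, 11, 13, 14, 15, 20 };
                    size_t n = sizeof(deferdq) / sizeof(deferdq[0]);
                    struct gap_ack blk[32]; /* mirrors the 32-block maximum */
                    size_t nblk = 0;

                    for (size_t i = 0; i < n; i++) {
                            /* a block closes at the end of a contiguous run */
                            if (i + 1 == n || deferdq[i + 1] != deferdq[i] + 1) {
                                    blk[nblk].ack = deferdq[i];
                                    blk[nblk].gap = (i + 1 < n) ?
                                            deferdq[i + 1] - deferdq[i] - 1 : 0;
                                    nblk++;
                            }
                    }
                    for (size_t i = 0; i < nblk; i++)
                            printf("<%u, %u> ", blk[i].ack, blk[i].gap);
                    printf("\n"); /* prints: <4, 5> <11, 1> <15, 4> <20, 0> */
                    return 0;
            }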
      
      The new algorithm, as verified, helps greatly improve the TIPC
      throughput, especially under packet loss conditions.
      
      So far, a maximum of 32 blocks has been quite enough, with no "Too few
      Gap ACK blocks" reports at a 5.0% packet loss rate; however, this
      number can be increased in the future if needed.
      
      Also, the patch is backward compatible.
      Acked-by: Ying Xue <ying.xue@windriver.com>
      Acked-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Tuong Lien <tuong.t.lien@dektech.com.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9195948f
    • net: bridge: mcast: remove unused br_ip_equal function · e177163d
      Committed by Nikolay Aleksandrov
      Since the mcast conversion to rhashtable, this function has been
      unused, so remove it.
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e177163d
    • net: bridge: always clear mcast matching struct on reports and leaves · 1515a63f
      Committed by Nikolay Aleksandrov
      Since the rhashtable change, we need to be careful to always zero the
      whole br_ip struct when it is used for matching. This patch fixes all
      the places which didn't properly clear it, which in turn might have
      caused mismatches.
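
      The resulting pattern is plausibly along these lines (field names
      follow the 2019-era struct br_ip and are an assumption):

            struct br_ip ip;

            /* Zero the whole struct, padding included, so rhashtable key
             * comparisons never see stale bytes. */
            memset(&ip, 0, sizeof(ip));
            ip.proto = htons(ETH_P_IP);
            ip.u.ip4 = group;
            ip.vid = vid;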
      
      Thanks for the great bug report with reproducing steps and bisection.
      
      Steps to reproduce (from the bug report):
      ip link add br0 type bridge mcast_querier 1
      ip link set br0 up
      
      ip link add v2 type veth peer name v3
      ip link set v2 master br0
      ip link set v2 up
      ip link set v3 up
      ip addr add 3.0.0.2/24 dev v3
      
      ip netns add test
      ip link add v1 type veth peer name v1 netns test
      ip link set v1 master br0
      ip link set v1 up
      ip -n test link set v1 up
      ip -n test addr add 3.0.0.1/24 dev v1
      
      # Multicast receiver
      ip netns exec test socat UDP4-RECVFROM:5588,ip-add-membership=224.224.224.224:3.0.0.1,fork -
      
      # Multicast sender
      echo hello | nc -u -s 3.0.0.2 224.224.224.224 5588
      
      Reported-by: liam.mcbirnie@boeing.com
      Fixes: 19e3a9c9 ("net: bridge: convert multicast to generic rhashtable")
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1515a63f
    • tcp: Accept ECT on SYN in the presence of RFC8311 · f6fee16d
      Committed by Olivier Tilmans (Nokia - BE/Antwerp)
      Linux currently disables ECN for incoming connections when the SYN
      requests ECN and the IP header has ECT(0)/ECT(1) set, as some
      networks were reportedly mangling the ToS byte, which could later
      trigger false congestion notifications.
      
      RFC 8311 §4.3 relaxes RFC 3168's requirements such that ECT can be
      set on TCP control packets (including SYNs). The main benefit of this
      is the decreased probability of losing a SYN in a congested
      ECN-capable network (i.e., it avoids the initial 1s timeout).
      Additionally, this allows the development of newer TCP extensions,
      such as AccECN.
      
      This patch relaxes the previous check by enabling ECN on incoming
      connections using SYN+ECT if at least one bit of the reserved flags
      of the TCP header is set. Such a bit would indicate that the sender
      of the SYN is using a newer TCP feature than what the host
      implements, such as AccECN, and is thus implementing RFC 8311. This
      enables end-hosts not supporting such extensions to still negotiate
      ECN, and to have some of the benefits of using ECN on control
      packets.
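
      In tcp_ecn_create_request() the relaxed condition is plausibly along
      these lines (a sketch, not the verbatim diff; ecn_ok and listen_sk are
      assumed from the surrounding function, and th->res1 carries the
      reserved TCP header flags):

            bool ect = !INET_ECN_is_not_ect(TCP_SKB_CB(skb)->ip_dsfield);

            /* Previously ECT on a SYN disabled ECN. Accepting it when any
             * reserved flag is set admits RFC 8311 senders such as AccECN. */
            if (((!ect || th->res1) && ecn_ok) || tcp_ca_needs_ecn(listen_sk))
                    inet_rsk(req)->ecn_ok = 1;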
      Signed-off-by: Olivier Tilmans <olivier.tilmans@nokia-bell-labs.com>
      Suggested-by: Bob Briscoe <research@bobbriscoe.net>
      Cc: Koen De Schepper <koen.de_schepper@nokia-bell-labs.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Acked-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f6fee16d
    • net: devlink: add warning for ndo_get_port_parent_id set when not needed · 119c0b57
      Committed by Jiri Pirko
      Currently, if the driver registers a devlink port instance, it should
      set the devlink port attributes as well. Then the devlink core is able
      to obtain the switch ID itself, and there is no need for the driver to
      implement the ndo. Once all drivers implement devlink port
      registration, this ndo should be removed. This warning guides new
      drivers to do things as they should be done.
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      119c0b57
    • dsa: pass switch ID through devlink_port_attrs_set() · 15b04ace
      Committed by Jiri Pirko
      Pass the switch ID down to devlink through devlink_port_attrs_set()
      so it can be used by devlink_compat_switch_id_get(). Leave the
      ndo_get_port_parent_id implementation only for legacy drivers.
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      15b04ace
    • net: devlink: introduce devlink_compat_switch_id_get() helper · 7e1146e8
      Committed by Jiri Pirko
      Introduce the devlink_compat_switch_id_get() helper, which fills in
      the switch_id according to the passed netdev pointer. Call it directly
      from dev_get_port_parent_id() as a fallback when
      ndo_get_port_parent_id is not defined for the given netdev.
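
      The fallback path in dev_get_port_parent_id() then plausibly reads
      (simplified sketch; the real function also deals with lower devices):

            if (ops->ndo_get_port_parent_id)
                    return ops->ndo_get_port_parent_id(dev, ppid);

            /* No ndo: let devlink derive the switch ID from port attrs. */
            return devlink_compat_switch_id_get(dev, ppid);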
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7e1146e8
    • net: devlink: extend port attrs for switch ID · bec5267c
      Committed by Jiri Pirko
      Extend devlink_port_attrs_set() to pass the switch ID for ports which
      are part of a switch, and store it in the port attrs. For other ports,
      this is NULL.
      
      Note that this allows the driver to group devlink ports into one or more
      switches according to the actual topology.
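
      A hedged usage example; that the switch ID buffer and its length are
      appended as trailing arguments is an assumption based on the
      description:

            static const unsigned char switch_id[] = { 0x00, 0x11, 0x22 };

            /* Ports sharing the same switch_id form one switch; ports that
             * are not part of a switch would pass NULL instead. */
            devlink_port_attrs_set(devlink_port, DEVLINK_PORT_FLAVOUR_PHYSICAL,
                                   port_number, false, 0,
                                   switch_id, sizeof(switch_id));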
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bec5267c
    • net: bridge: optimize backup_port fdb convergence · 8dc35020
      Committed by Nikolay Aleksandrov
      We can optimize fdb convergence when a backup_port is present by not
      immediately flushing the entries of the stopped port, since traffic
      for those entries will flow towards the backup_port.
      
      There are two cases specifically that benefit most (a sketch of the
      guard follows the list):
      - when the stopped port comes up before the entries expire by themselves
      - when there's an external entry refresh and they're kept while the
        backup_port is operating (e.g. mlag)
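
      The core of the optimization is plausibly a guard of this shape in the
      port-stop path (the exact placement is an assumption;
      br_fdb_delete_by_port() is the existing flush helper):

            /* Keep the stopped port's fdb entries alive while a backup_port
             * can still forward the traffic for them. */
            if (!p->backup_port)
                    br_fdb_delete_by_port(br, p, 0, 0);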
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8dc35020
    • net: Remove inclusion of pci.h · a0640e61
      Committed by Yuval Shaia
      This header is not in use - remove it.
      Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a0640e61
    • tipc: add NULL pointer check · e1279ff7
      Committed by Hoang Le
      An skb is somehow dequeued from the inputq before processing, which
      leads to a NULL pointer dereference and a kernel crash.

      Add a check that the skb is valid before using it.
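
      The defensive check is presumably of this minimal shape (a sketch):

            /* The skb may already have been consumed by a racing context. */
            skb = skb_peek(inputq);
            if (!skb)
                    return;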
      
      Fixes: c55c8eda ("tipc: smooth change between replicast and broadcast")
      Reported-by: Tuong Lien Tong <tuong.t.lien@dektech.com.au>
      Acked-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Hoang Le <hoang.h.le@dektech.com.au>
      Acked-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e1279ff7
    • net: sched: ensure tc flower reoffload takes filter ref · 95e27a4d
      Committed by John Hurley
      Recent changes to TC flower removed the requirement for the rtnl lock
      when accessing and modifying filters. Refcounts now ensure access and
      deletion do not happen concurrently. However, the reoffload function,
      which cycles through all filters and replays them to registered hw
      drivers, is not protected.
      
      Use the fl_get_next_filter() function to cycle through the filters
      for reoffload, and ensure the ref taken by this function is put when
      done with each filter.
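
      The reoffload walk then plausibly has this shape (fl_hw_replay() is a
      hypothetical stand-in for the per-filter replay logic):

            struct cls_fl_filter *f;
            unsigned long handle = 0;
            int err;

            while ((f = fl_get_next_filter(tp, &handle))) {
                    /* f is returned with a reference held */
                    err = fl_hw_replay(tp, f, cb, cb_priv, add);
                    __fl_put(f);    /* drop the ref taken above */
                    if (err)
                            return err;
                    handle++;
            }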
      Signed-off-by: John Hurley <john.hurley@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      95e27a4d
    • vlan: conditional inclusion of FCoE hooks to match netdevice.h and bnx2x · 0a89eb92
      Committed by Chris Leech
      Way back in commit 3c9c36bc, the
      ndo_fcoe_get_wwn pointer was switched from depending on CONFIG_FCOE
      to CONFIG_LIBFCOE in order to allow FCoE support to be built into the
      bnx2x driver and used by bnx2fc without including the generic
      software fcoe module.
      
      But FCoE is generally used over an 802.1q VLAN, and the
      implementation of ndo_fcoe_get_wwn in the 8021q module was not
      similarly changed.  The result is that if CONFIG_FCOE is disabled,
      then bnx2fc cannot make a call to ndo_fcoe_get_wwn through the 8021q
      interface to the underlying bnx2x interface.  The bnx2fc driver then
      falls back to a potentially different mapping of Ethernet MAC to
      Fibre Channel WWN, creating an incompatibility with the fabric and
      target configurations when compared to the WWNs used by pre-boot
      firmware and differently-configured kernels.
      
      So make the conditional inclusion of FCoE code in 8021q match the
      conditional inclusion in netdevice.h.
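
      In net/8021q/vlan_dev.c the guard then plausibly reads (sketch; only
      one hook shown):

            #if IS_ENABLED(CONFIG_LIBFCOE) /* was CONFIG_FCOE: now matches
                                            * the guard in netdevice.h */
            static int vlan_dev_fcoe_get_wwn(struct net_device *dev,
                                             u64 *wwn, int type)
            {
                    struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
                    const struct net_device_ops *ops = real_dev->netdev_ops;

                    if (ops->ndo_fcoe_get_wwn)
                            return ops->ndo_fcoe_get_wwn(real_dev, wwn, type);
                    return -EINVAL;
            }
            #endif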
      Signed-off-by: Chris Leech <cleech@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0a89eb92