1. 27 Feb, 2021 · 3 commits
  2. 14 Feb, 2021 · 11 commits
  3. 13 Feb, 2021 · 6 commits
  4. 12 Feb, 2021 · 4 commits
  5. 10 Feb, 2021 · 3 commits
  6. 09 Feb, 2021 · 1 commit
  7. 07 Feb, 2021 · 2 commits
  8. 06 Feb, 2021 · 1 commit
  9. 05 Feb, 2021 · 3 commits
  10. 04 Feb, 2021 · 2 commits
  11. 03 Feb, 2021 · 1 commit
  12. 31 Jan, 2021 · 1 commit
    • neighbour: Prevent a dead entry from updating gc_list · eb4e8fac
      Committed by Chinmay Agarwal
      The following race condition was detected:
      <CPU A, t0> - neigh_flush_dev() is executing and calls
      neigh_mark_dead(n), marking the neighbour entry 'n' as dead.
      
      <CPU B, t1> - Executing: __netif_receive_skb() ->
      __netif_receive_skb_core() -> arp_rcv() -> arp_process().
      arp_process() calls __neigh_lookup(), which takes a reference on
      the neighbour entry 'n'.
      
      <CPU A, t2> - Moves further along neigh_flush_dev() and calls
      neigh_cleanup_and_release(n); but since the reference count was
      increased at t1, 'n' cannot be destroyed.
      
      <CPU B, t3> - Moves further along arp_process() and calls
      neigh_update() -> __neigh_update() -> neigh_update_gc_list(),
      which adds the neighbour entry back to gc_list (neigh_mark_dead()
      had removed it from gc_list earlier, at t0).
      
      <CPU B, t4> - arp_process() finally calls neigh_release(n), destroying
      the neighbour entry.
      
      This leads to 'n' still being part of gc_list, but the actual
      neighbour structure has been freed.
      
      This can be prevented by denying a dead entry any possibility of
      updating gc_list, which is what this patch does.
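
      A minimal sketch of the guard this implies in
      neigh_update_gc_list() (a hedged reconstruction based on the
      upstream struct neighbour fields n->dead and n->gc_list, not
      necessarily the verbatim patch):

          static void neigh_update_gc_list(struct neighbour *n)
          {
                  bool on_gc_list, exempt_from_gc;

                  write_lock_bh(&n->tbl->lock);
                  write_lock(&n->lock);

                  /* A dead entry was already unlinked from gc_list by
                   * neigh_mark_dead(); never re-add it, or the final
                   * neigh_release() would free memory that is still
                   * linked on gc_list.
                   */
                  if (n->dead)
                          goto out;

                  /* Permanent and externally learned entries are kept
                   * off the gc list; everything else belongs on it.
                   */
                  exempt_from_gc = n->nud_state & NUD_PERMANENT ||
                                   n->flags & NTF_EXT_LEARNED;
                  on_gc_list = !list_empty(&n->gc_list);

                  if (exempt_from_gc && on_gc_list) {
                          list_del_init(&n->gc_list);
                          atomic_dec(&n->tbl->gc_entries);
                  } else if (!exempt_from_gc && !on_gc_list) {
                          /* Add to the tail; cleanup removes from the
                           * front.
                           */
                          list_add_tail(&n->gc_list, &n->tbl->gc_list);
                          atomic_inc(&n->tbl->gc_entries);
                  }
          out:
                  write_unlock(&n->lock);
                  write_unlock_bh(&n->tbl->lock);
          }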
      
      Fixes: 9c29a2f5 ("neighbor: Fix locking order for gc_list changes")
      Signed-off-by: Chinmay Agarwal <chinagar@codeaurora.org>
      Reviewed-by: Cong Wang <xiyou.wangcong@gmail.com>
      Reviewed-by: David Ahern <dsahern@kernel.org>
      Link: https://lore.kernel.org/r/20210127165453.GA20514@chinagar-linux.qualcomm.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  13. 30 Jan, 2021 · 2 commits
    • net: Remove redundant calls of sk_tx_queue_clear(). · df610cd9
      Committed by Kuniyuki Iwashima
      Commit 41b14fb8 ("net: Do not clear the sock TX queue in
      sk_set_socket()") removed sk_tx_queue_clear() from sk_set_socket()
      and instead added it to sk_alloc() and sk_clone_lock(), to fix an
      issue introduced in commit e022f0b4 ("net: Introduce
      sk_tx_queue_mapping"). However, the original commit had already put
      sk_tx_queue_clear() in sk_prot_alloc(), the callee of both
      sk_alloc() and sk_clone_lock(). Thus sk_tx_queue_clear() is called
      twice on each path.
      
      If we removed sk_tx_queue_clear() from sk_alloc() and
      sk_clone_lock() instead, things would currently still work because
      (i) sk_tx_queue_mapping is defined between sk_dontcopy_begin and
      sk_dontcopy_end, and (ii) sock_copy(), called after sk_prot_alloc()
      in sk_clone_lock(), does not overwrite sk_tx_queue_mapping.
      However, moving sk_tx_queue_mapping out of the no-copy area would
      then introduce a bug unintentionally.
      
      Therefore, this patch adds a compile-time check to enforce the
      ordering of sock_copy() and sk_tx_queue_clear(), and removes
      sk_tx_queue_clear() from sk_prot_alloc() so that sk_prot_alloc()
      only performs the allocation and its callers initialize the
      fields.
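
      A minimal sketch of such a compile-time check, assuming the
      upstream struct sock no-copy markers sk_dontcopy_begin and
      sk_dontcopy_end (the exact assertion and its placement in the
      patch may differ):

          /* sk_clone_lock() may skip sk_tx_queue_clear() after
           * sock_copy() only while sk_tx_queue_mapping lies inside the
           * region that sock_copy() does not copy; fail the build if
           * the field is ever moved out of that window.
           */
          BUILD_BUG_ON(offsetof(struct sock, sk_tx_queue_mapping) <
                       offsetof(struct sock, sk_dontcopy_begin) ||
                       offsetof(struct sock, sk_tx_queue_mapping) >
                       offsetof(struct sock, sk_dontcopy_end));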
      
      CC: Boris Pismenny <borisp@mellanox.com>
      Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
      Acked-by: Tariq Toukan <tariqt@nvidia.com>
      Link: https://lore.kernel.org/r/20210128150217.6060-1-kuniyu@amazon.co.jp
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: support ip generic csum processing in skb_csum_hwoffload_help · 62fafcd6
      Committed by Xin Long
      The NETIF_F_IP_CSUM|NETIF_F_IPV6_CSUM feature flags indicate TCP
      and UDP csum offload, while the NETIF_F_HW_CSUM feature flag
      indicates IP generic csum offload in HW, which covers not only
      TCP/UDP csums but also other protocols' csums, such as GRE's.
      
      However, skb_csum_hwoffload_help() only checks features against
      NETIF_F_CSUM_MASK (NETIF_F_HW_CSUM | NETIF_F_IP_CSUM |
      NETIF_F_IPV6_CSUM). So for a non-TCP/UDP packet, if the features
      don't include NETIF_F_HW_CSUM but do include
      NETIF_F_IP_CSUM|NETIF_F_IPV6_CSUM, it still returns 0 and leaves
      the csum to the HW.
      
      This patch supports IP generic csum processing by checking
      NETIF_F_HW_CSUM for all protocols, and checking (NETIF_F_IP_CSUM |
      NETIF_F_IPV6_CSUM) only for TCP and UDP.
      
      Note that we're using skb->csum_offset to check whether the
      packet is TCP/UDP; this might be fragile. However, as Alex said,
      for now only a few L4 protocols request Tx csum offload, so this
      is fine until a new protocol arrives with the same csum offset.
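
      Sketched from the description above, the resulting logic in
      skb_csum_hwoffload_help() looks roughly like this (a hedged
      reconstruction of net/core/dev.c; the SCTP branch predates this
      patch):

          int skb_csum_hwoffload_help(struct sk_buff *skb,
                                      const netdev_features_t features)
          {
                  /* SCTP uses CRC32c, handled separately. */
                  if (unlikely(skb->csum_not_inet))
                          return !!(features & NETIF_F_SCTP_CRC) ? 0 :
                                  skb_crc32c_csum_help(skb);

                  /* IP generic offload can csum any protocol. */
                  if (features & NETIF_F_HW_CSUM)
                          return 0;

                  /* IP/IPv6 csum offload is trusted only for TCP and
                   * UDP, recognized here by their csum_offset.
                   */
                  if (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
                          switch (skb->csum_offset) {
                          case offsetof(struct tcphdr, check):
                          case offsetof(struct udphdr, check):
                                  return 0;
                          }
                  }

                  /* Otherwise fall back to a software csum. */
                  return skb_checksum_help(skb);
          }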
      
      v1->v2:
        - do not extend skb->csum_not_inet; use skb->csum_offset to tell
          whether it's a UDP/TCP csum packet.
      v2->v3:
        - add a note in the changelog, as Willem suggested.
      Suggested-by: Alexander Duyck <alexander.duyck@gmail.com>
      Signed-off-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>