1. 09 May 2022, 1 commit
    • net: fix wrong network header length · cf3ab8d4
      Committed by Lina Wang
      When clatd starts with eBPF offloading and NETIF_F_GRO_FRAGLIST is
      enabled, several skbs are gathered in skb_shinfo(skb)->frag_list. The
      first skb's IPv6 header is changed to IPv4 after bpf_skb_proto_6_to_4,
      and its network_header/transport_header/mac_header are updated to match
      IPv4, but the other skbs in frag_list are not updated at all; they are
      still IPv6 packets.
      
      udp_queue_rcv_skb will call skb_segment_list to traverse the other skbs
      in frag_list and make sure the right UDP payload is delivered to user
      space. Unfortunately, the other skbs in frag_list, which are still IPv6
      packets, are updated like the first skb and end up with the wrong
      transport header length.
      
      E.g. before bpf_skb_proto_6_to_4, the first skb and the other skbs in
      frag_list have the same network_header (24) and transport_header (64).
      After bpf_skb_proto_6_to_4 the protocol has been changed from IPv6 to
      IPv4: the first skb's network_header is 44 and its transport_header is
      64, while the other skbs in frag_list are unchanged. After
      skb_segment_list, the other skbs in frag_list have a different
      network_header (24) and transport_header (44), so the payload starts 20
      bytes off from the original, which is the difference between the IPv6
      and IPv4 header lengths. Just change transport_header to match the
      original.
      
      Actually, there are two ways to fix it: one is traversing all skbs and
      changing every skb header in bpf_skb_proto_6_to_4; the other is
      modifying the frag_list skbs' headers in skb_segment_list. For
      efficiency, adopt the second one: when the first skb and the other skbs
      in frag_list have different network_header lengths, restore them to
      make sure the right UDP payload is delivered to user space, as the
      sketch below shows.
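
      A minimal sketch of that second approach inside skb_segment_list(),
      assuming the upstream helpers skb_network_header_len(),
      __copy_skb_header() and skb_headers_offset_update(); an illustration,
      not a verbatim copy of the patch:

      int len_diff;

      skb_release_head_state(nskb);
      /* nskb may still carry an IPv6 header while skb was converted to IPv4 */
      len_diff = skb_network_header_len(nskb) - skb_network_header_len(skb);
      __copy_skb_header(nskb, skb);   /* copies skb's (IPv4) header offsets */
      skb_headers_offset_update(nskb, skb_headroom(nskb) - skb_headroom(skb));
      nskb->transport_header += len_diff;   /* restore the original offset */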
      Signed-off-by: Lina Wang <lina.wang@mediatek.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 01 Apr 2022, 1 commit
    • skbuff: fix coalescing for page_pool fragment recycling · 1effe8ca
      Committed by Jean-Philippe Brucker
      Fix a use-after-free when using page_pool with page fragments. We
      encountered this problem during normal RX in the hns3 driver:
      
      (1) Initially we have three descriptors in the RX queue. The first one
          allocates PAGE1 through page_pool, and the other two allocate one
          half of PAGE2 each. Page references look like this:
      
                      RX_BD1 _______ PAGE1
                      RX_BD2 _______ PAGE2
                      RX_BD3 _________/
      
      (2) Handle RX on the first descriptor. Allocate SKB1, eventually added
          to the receive queue by tcp_queue_rcv().
      
      (3) Handle RX on the second descriptor. Allocate SKB2 and pass it to
          netif_receive_skb():
      
          netif_receive_skb(SKB2)
            ip_rcv(SKB2)
              SKB3 = skb_clone(SKB2)
      
          SKB2 and SKB3 share a reference to PAGE2 through
          skb_shinfo()->dataref. The other ref to PAGE2 is still held by
          RX_BD3:
      
                            SKB2 ---+- PAGE2
                            SKB3 __/   /
                      RX_BD3 _________/
      
       (3b) Now while handling TCP, coalesce SKB3 with SKB1:
      
            tcp_v4_rcv(SKB3)
              tcp_try_coalesce(to=SKB1, from=SKB3)    // succeeds
              kfree_skb_partial(SKB3)
                skb_release_data(SKB3)                // drops one dataref
      
                            SKB1 _____ PAGE1
                                 \____
                            SKB2 _____ PAGE2
                                       /
                      RX_BD3 _________/
      
          In skb_try_coalesce(), __skb_frag_ref() takes a page reference to
          PAGE2, where it should instead have increased the page_pool frag
          reference, pp_frag_count. Without coalescing, when releasing both
          SKB2 and SKB3, a single reference to PAGE2 would be dropped. Now
          when releasing SKB1 and SKB2, two references to PAGE2 will be
          dropped, resulting in underflow.
      
       (3c) Drop SKB2:
      
            af_packet_rcv(SKB2)
              consume_skb(SKB2)
                skb_release_data(SKB2)                // drops second dataref
                  page_pool_return_skb_page(PAGE2)    // drops one pp_frag_count
      
                            SKB1 _____ PAGE1
                                 \____
                                       PAGE2
                                       /
                      RX_BD3 _________/
      
      (4) Userspace calls recvmsg(), copies SKB1 and releases it. Since SKB3
          was coalesced with SKB1, we release the SKB3 page as well:
      
          tcp_eat_recv_skb(SKB1)
            skb_release_data(SKB1)
              page_pool_return_skb_page(PAGE1)
              page_pool_return_skb_page(PAGE2)        // drops second pp_frag_count
      
      (5) PAGE2 is freed, but the third RX descriptor was still using it!
          In our case this causes IOMMU faults, but it would silently corrupt
          memory if the IOMMU was disabled.
      
      Change the logic that checks whether pp_recycle SKBs can be coalesced.
      We still reject differing pp_recycle between 'from' and 'to' SKBs, but
      in order to avoid the situation described above, we also reject
      coalescing when both 'from' and 'to' are pp_recycled and 'from' is
      cloned.
      
      The new logic allows coalescing a cloned pp_recycle SKB into a page
      refcounted one, because in this case the release (4) will drop the right
      reference, the one taken by skb_try_coalesce().
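
      A minimal sketch of the resulting check at the top of
      skb_try_coalesce(), assuming the skb->pp_recycle bit and the
      skb_cloned() helper; an illustration, not a verbatim quote:

      /* Reject coalescing if the pp_recycle flags differ, or if both skbs
       * are page_pool-recycled and 'from' is cloned: the clone's frag
       * references may be pp_frag_count references, which the plain page
       * refcount taken by __skb_frag_ref() would not match.
       */
      if (to->pp_recycle != from->pp_recycle ||
          (from->pp_recycle && skb_cloned(from)))
              return false;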
      
      Fixes: 53e0961d ("page_pool: add frag page recycling support in page pool")
      Suggested-by: Alexander Duyck <alexanderduyck@fb.com>
      Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
      Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
      Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
      Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 04 Mar 2022, 1 commit
  4. 03 Mar 2022, 3 commits
    • net: Clear mono_delivery_time bit in __skb_tstamp_tx() · d93376f5
      Committed by Martin KaFai Lau
      __skb_tstamp_tx() may clone the egress skb and queue the clone to the
      sk_error_queue. The outgoing skb may carry the mono delivery_time while
      the (rcv) timestamp is expected for the clone, so the
      skb->mono_delivery_time bit needs to be cleared in the clone.
      
      This patch adds the skb->mono_delivery_time clearing to the existing
      __net_timestamp() and uses it in __skb_tstamp_tx(). The
      __net_timestamp() fast path usage in dev.c is changed to call
      ktime_get_real() directly, since the mono_delivery_time bit is not set
      at that point.
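
      A minimal sketch of the updated helper, assuming the
      skb->mono_delivery_time bit introduced earlier in this series; an
      illustration, not a verbatim quote:

      static inline void __net_timestamp(struct sk_buff *skb)
      {
              skb->tstamp = ktime_get_real();
              /* A clone queued to sk_error_queue must carry a (rcv)
               * timestamp, so drop the mono delivery_time marking here.
               */
              skb->mono_delivery_time = 0;
      }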
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Add skb_clear_tstamp() to keep the mono delivery_time · de799101
      Committed by Martin KaFai Lau
      Right now, skb->tstamp is reset to 0 whenever the skb is forwarded.
      
      If skb->tstamp has the mono delivery_time, clearing it can hurt
      the performance when it finally transmits out to fq@phy-dev.
      
      The earlier patch added a skb->mono_delivery_time bit to
      flag the skb->tstamp carrying the mono delivery_time.
      
      This patch adds a skb_clear_tstamp() helper which keeps the mono
      delivery_time and clears everything else.

      The delivery_time clearing will be postponed until the stack knows the
      skb will be delivered locally. That will be done in a later patch.
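
      A minimal sketch of the helper, assuming the skb->mono_delivery_time
      bit; an illustration, not a verbatim quote:

      static inline void skb_clear_tstamp(struct sk_buff *skb)
      {
              /* Keep skb->tstamp when it carries the mono delivery_time;
               * only plain (rcv) timestamps are reset on forwarding.
               */
              if (skb->mono_delivery_time)
                      return;

              skb->tstamp = 0;
      }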
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: fix up skbs delta_truesize in UDP GRO frag_list · 224102de
      Committed by lena wang
      The truesize for a UDP GRO packet accumulates the main skb and the skbs
      in the main skb's frag_list:
      skb_gro_receive_list
              p->truesize += skb->truesize;

      Commit 53475c5d ("net: fix use-after-free when UDP GRO with
      shared fraglist") introduced a truesize increase for frag_list skbs.
      When uncloning an skb, it calls pskb_expand_head and the truesize of
      frag_list skbs may increase. This can occur when the allocator uses
      __netdev_alloc_skb and does not go through __alloc_skb: that flow does
      not use ksize(len) to calculate truesize, while pskb_expand_head does.
      skb_segment_list
      err = skb_unclone(nskb, GFP_ATOMIC);
      pskb_expand_head
              if (!skb->sk || skb->destructor == sock_edemux)
                      skb->truesize += size - osize;
      
      If we use the increased truesize when accumulating delta_truesize, it
      will be larger than before, and with abundant skbs in the frag_list it
      can even exceed the previous total truesize. The main skb's truesize
      would then become too small, even going negative and wrapping to a
      huge value in the unsigned int field. The subsequent memory check
      would then drop this seemingly abnormal skb.

      To avoid this error we should use the original truesize to segment the
      main skb, as the sketch below shows.
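
      A minimal sketch of the idea in skb_segment_list(): record each
      frag_list skb's truesize before skb_unclone() can inflate it, so
      delta_truesize reflects the original value. An illustration, not a
      verbatim quote:

      /* for each nskb on the frag_list: */
      delta_truesize += nskb->truesize;     /* sample before uncloning... */
      err = skb_unclone(nskb, GFP_ATOMIC);  /* ...since this may grow
                                             * nskb->truesize via
                                             * pskb_expand_head() */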
      
      Fixes: 53475c5d ("net: fix use-after-free when UDP GRO with shared fraglist")
      Signed-off-by: lena wang <lena.wang@mediatek.com>
      Acked-by: Paolo Abeni <pabeni@redhat.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Link: https://lore.kernel.org/r/1646133431-8948-1-git-send-email-lena.wang@mediatek.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  5. 23 Feb 2022, 3 commits
    • net: preserve skb_end_offset() in skb_unclone_keeptruesize() · 2b88cba5
      Committed by Eric Dumazet
      syzbot found another way to trigger the infamous WARN_ON_ONCE(delta < len)
      in skb_try_coalesce() [1]
      
      I was able to root cause the issue to kfence.
      
      When kfence is in action, the following assertion is no longer true:
      
      int size = xxxx;
      void *ptr1 = kmalloc(size, gfp);
      void *ptr2 = kmalloc(size, gfp);
      
      if (ptr1 && ptr2)
      	ASSERT(ksize(ptr1) == ksize(ptr2));
      
      We attempted to fix these issues in the blamed commits, but forgot
      that TCP was possibly shifting data after skb_unclone_keeptruesize()
      had been used, notably from tcp_retrans_try_collapse().

      So we not only need to keep the same skb->truesize value; we also need
      to make sure TCP won't fill new tailroom that pskb_expand_head() was
      able to get from an addr = kmalloc(...) followed by ksize(addr).
      
      Split skb_unclone_keeptruesize() into two parts:
      
      1) Inline skb_unclone_keeptruesize() for the common case,
         when skb is not cloned.
      
      2) Out of line __skb_unclone_keeptruesize() for the 'slow path'.
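
      A minimal sketch of the resulting split, assuming the out-of-line
      __skb_unclone_keeptruesize() restores the end offset and zeroes any
      extra tailroom; an illustration, not a verbatim quote:

      int __skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri);

      static inline int skb_unclone_keeptruesize(struct sk_buff *skb,
                                                 gfp_t pri)
      {
              might_sleep_if(gfpflags_allow_blocking(pri));

              if (skb_cloned(skb))
                      return __skb_unclone_keeptruesize(skb, pri);
              return 0;
      }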
      
      WARNING: CPU: 1 PID: 6490 at net/core/skbuff.c:5295 skb_try_coalesce+0x1235/0x1560 net/core/skbuff.c:5295
      Modules linked in:
      CPU: 1 PID: 6490 Comm: syz-executor161 Not tainted 5.17.0-rc4-syzkaller-00229-g4f12b742 #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      RIP: 0010:skb_try_coalesce+0x1235/0x1560 net/core/skbuff.c:5295
      Code: bf 01 00 00 00 0f b7 c0 89 c6 89 44 24 20 e8 62 24 4e fa 8b 44 24 20 83 e8 01 0f 85 e5 f0 ff ff e9 87 f4 ff ff e8 cb 20 4e fa <0f> 0b e9 06 f9 ff ff e8 af b2 95 fa e9 69 f0 ff ff e8 95 b2 95 fa
      RSP: 0018:ffffc900063af268 EFLAGS: 00010293
      RAX: 0000000000000000 RBX: 00000000ffffffd5 RCX: 0000000000000000
      RDX: ffff88806fc05700 RSI: ffffffff872abd55 RDI: 0000000000000003
      RBP: ffff88806e675500 R08: 00000000ffffffd5 R09: 0000000000000000
      R10: ffffffff872ab659 R11: 0000000000000000 R12: ffff88806dd554e8
      R13: ffff88806dd9bac0 R14: ffff88806dd9a2c0 R15: 0000000000000155
      FS:  00007f18014f9700(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 0000000020002000 CR3: 000000006be7a000 CR4: 00000000003506f0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      Call Trace:
       <TASK>
       tcp_try_coalesce net/ipv4/tcp_input.c:4651 [inline]
       tcp_try_coalesce+0x393/0x920 net/ipv4/tcp_input.c:4630
       tcp_queue_rcv+0x8a/0x6e0 net/ipv4/tcp_input.c:4914
       tcp_data_queue+0x11fd/0x4bb0 net/ipv4/tcp_input.c:5025
       tcp_rcv_established+0x81e/0x1ff0 net/ipv4/tcp_input.c:5947
       tcp_v4_do_rcv+0x65e/0x980 net/ipv4/tcp_ipv4.c:1719
       sk_backlog_rcv include/net/sock.h:1037 [inline]
       __release_sock+0x134/0x3b0 net/core/sock.c:2779
       release_sock+0x54/0x1b0 net/core/sock.c:3311
       sk_wait_data+0x177/0x450 net/core/sock.c:2821
       tcp_recvmsg_locked+0xe28/0x1fd0 net/ipv4/tcp.c:2457
       tcp_recvmsg+0x137/0x610 net/ipv4/tcp.c:2572
       inet_recvmsg+0x11b/0x5e0 net/ipv4/af_inet.c:850
       sock_recvmsg_nosec net/socket.c:948 [inline]
       sock_recvmsg net/socket.c:966 [inline]
       sock_recvmsg net/socket.c:962 [inline]
       ____sys_recvmsg+0x2c4/0x600 net/socket.c:2632
       ___sys_recvmsg+0x127/0x200 net/socket.c:2674
       __sys_recvmsg+0xe2/0x1a0 net/socket.c:2704
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      Fixes: c4777efa ("net: add and use skb_unclone_keeptruesize() helper")
      Fixes: 097b9146 ("net: fix up truesize of cloned skb in skb_prepare_for_shift()")
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Marco Elver <elver@google.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: add skb_set_end_offset() helper · 763087da
      Committed by Eric Dumazet
      We have multiple places where this helper is convenient, and we plan
      to use it in the following patch. A sketch follows.
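
      A minimal sketch of the helper, mirroring the existing
      skb_end_offset(); the NET_SKBUFF_DATA_USES_OFFSET split is shown as an
      assumption about how the two configurations store skb->end:

      #ifdef NET_SKBUFF_DATA_USES_OFFSET
      static inline void skb_set_end_offset(struct sk_buff *skb,
                                            unsigned int offset)
      {
              skb->end = offset;              /* skb->end is an offset */
      }
      #else
      static inline void skb_set_end_offset(struct sk_buff *skb,
                                            unsigned int offset)
      {
              skb->end = skb->head + offset;  /* skb->end is a pointer */
      }
      #endif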
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: __pskb_pull_tail() & pskb_carve_frag_list() drop_monitor friends · ef527f96
      Committed by Eric Dumazet
      Whenever one of these functions pulls all the data from an skb in a
      frag_list, use consume_skb() instead of kfree_skb() to avoid polluting
      drop monitoring, as in the sketch below.
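
      A minimal sketch of the pattern, an illustration of the
      consume_skb()/kfree_skb() distinction rather than the exact hunks:

      /* The frag_list skb was fully consumed, not dropped, so do not
       * report it through the kfree_skb tracepoint / drop monitor:
       */
      consume_skb(list);      /* was: kfree_skb(list); */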
      
      Fixes: 6fa01ccd ("skbuff: Add pskb_extract() helper function")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Link: https://lore.kernel.org/r/20220220154052.1308469-1-eric.dumazet@gmail.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  6. 18 Feb 2022, 1 commit
    • net-timestamp: convert sk->sk_tskey to atomic_t · a1cdec57
      Committed by Eric Dumazet
      UDP sendmsg() can be lockless, and this causes all kinds of data
      races.

      This patch converts sk->sk_tskey to an atomic_t to remove one of these
      races, as sketched below.
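
      A minimal sketch of the conversion; the __ip_append_data() usage shown
      is an assumption about where the key is consumed, an illustration
      rather than a quote from the patch:

      /* struct sock: was 'u32 sk_tskey;' */
      atomic_t                sk_tskey;

      /* users such as __ip_append_data() switch to atomic ops: */
      tskey = atomic_inc_return(&sk->sk_tskey) - 1;   /* fetch-and-add */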
      
      BUG: KCSAN: data-race in __ip_append_data / __ip_append_data
      
      read to 0xffff8881035d4b6c of 4 bytes by task 8877 on cpu 1:
       __ip_append_data+0x1c1/0x1de0 net/ipv4/ip_output.c:994
       ip_make_skb+0x13f/0x2d0 net/ipv4/ip_output.c:1636
       udp_sendmsg+0x12bd/0x14c0 net/ipv4/udp.c:1249
       inet_sendmsg+0x5f/0x80 net/ipv4/af_inet.c:819
       sock_sendmsg_nosec net/socket.c:705 [inline]
       sock_sendmsg net/socket.c:725 [inline]
       ____sys_sendmsg+0x39a/0x510 net/socket.c:2413
       ___sys_sendmsg net/socket.c:2467 [inline]
       __sys_sendmmsg+0x267/0x4c0 net/socket.c:2553
       __do_sys_sendmmsg net/socket.c:2582 [inline]
       __se_sys_sendmmsg net/socket.c:2579 [inline]
       __x64_sys_sendmmsg+0x53/0x60 net/socket.c:2579
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x44/0xd0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      write to 0xffff8881035d4b6c of 4 bytes by task 8880 on cpu 0:
       __ip_append_data+0x1d8/0x1de0 net/ipv4/ip_output.c:994
       ip_make_skb+0x13f/0x2d0 net/ipv4/ip_output.c:1636
       udp_sendmsg+0x12bd/0x14c0 net/ipv4/udp.c:1249
       inet_sendmsg+0x5f/0x80 net/ipv4/af_inet.c:819
       sock_sendmsg_nosec net/socket.c:705 [inline]
       sock_sendmsg net/socket.c:725 [inline]
       ____sys_sendmsg+0x39a/0x510 net/socket.c:2413
       ___sys_sendmsg net/socket.c:2467 [inline]
       __sys_sendmmsg+0x267/0x4c0 net/socket.c:2553
       __do_sys_sendmmsg net/socket.c:2582 [inline]
       __se_sys_sendmmsg net/socket.c:2579 [inline]
       __x64_sys_sendmmsg+0x53/0x60 net/socket.c:2579
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x44/0xd0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      value changed: 0x0000054d -> 0x0000054e
      
      Reported by Kernel Concurrency Sanitizer on:
      CPU: 0 PID: 8880 Comm: syz-executor.5 Not tainted 5.17.0-rc2-syzkaller-00167-gdcb85f85-dirty #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      
      Fixes: 09c2d251 ("net-timestamp: add key to disambiguate concurrent datagrams")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 10 Feb 2022, 1 commit
  8. 10 Jan 2022, 1 commit
    • net: skb: introduce kfree_skb_reason() · c504e5c2
      Committed by Menglong Dong
      Introduce the interface kfree_skb_reason(), which can pass the reason
      why the skb is dropped to the 'kfree_skb' tracepoint.

      Add a 'reason' field to 'trace_kfree_skb', so that users can get more
      detailed information about abnormal skbs with 'drop_monitor' or eBPF.

      All drop reasons are defined in the enum 'skb_drop_reason', and they
      will be printed as strings in the 'kfree_skb' tracepoint in the format
      'reason: XXX'.
      
      ( Maybe the reasons should be defined in a uapi header file, so that
      user space can use them? )
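
      A minimal sketch of the interface; the reason values listed are an
      assumption about the initial set, an illustration rather than a quote:

      enum skb_drop_reason {
              SKB_DROP_REASON_NOT_SPECIFIED,
              SKB_DROP_REASON_NO_SOCKET,
              SKB_DROP_REASON_MAX,
      };

      void kfree_skb_reason(struct sk_buff *skb, enum skb_drop_reason reason);

      /* kfree_skb() keeps its behavior by passing the default reason: */
      static inline void kfree_skb(struct sk_buff *skb)
      {
              kfree_skb_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED);
      }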
      Signed-off-by: Menglong Dong <imagedong@tencent.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  9. 16 Dec 2021, 1 commit
  10. 08 Dec 2021, 1 commit
  11. 22 Nov 2021, 1 commit
  12. 16 Nov 2021, 4 commits
  13. 03 Nov 2021, 2 commits
    • net: avoid double accounting for pure zerocopy skbs · 9b65b17d
      Committed by Talal Ahmad
      Track skbs containing only zerocopy data and avoid charging them to
      kernel memory to correctly account the memory utilization for
      msg_zerocopy. All of the data in such skbs is held in user pages which
      are already accounted to the user. Before this change, they are charged
      again in the kernel in __zerocopy_sg_from_iter. The charging in the
      kernel is excessive because data is not being copied into skb frags.
      This excessive charging can lead to the kernel going into a memory
      pressure state, which adversely impacts all sockets in the system. Mark
      pure zerocopy skbs with a SKBFL_PURE_ZEROCOPY flag and remove the
      charge/uncharge for data in such skbs.
      
      Initially, an skb is marked pure zerocopy when it is empty and in the
      zerocopy path. The skb can then change from a pure zerocopy skb to a
      mixed data skb (zerocopy and copy data) if it is at the tail of the
      write queue, there is room available in it, and non-zerocopy data is
      being sent in the next sendmsg call. At this time sk_mem_charge is done
      for the pure zerocopied data and the pure zerocopy flag is unmarked. We
      found that this happens very rarely on workloads that pass
      MSG_ZEROCOPY.

      A pure zerocopy skb can later be coalesced into a normal skb if they
      are next to each other in the queue, but this patch prevents that
      coalescing from happening. This avoids the complexity of charging when
      the skb downgrades from pure zerocopy to mixed. This is also rare.
      
      In sk_wmem_free_skb, if it is a pure zerocopy skb, an sk_mem_uncharge
      for SKB_TRUESIZE(skb_end_offset(skb)) is done to balance the
      sk_mem_charge in tcp_skb_entail for an skb without data.
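
      A minimal sketch of the marker and its test, assuming the
      SKBFL_PURE_ZEROCOPY flag lives in skb_shinfo()->flags; an illustration,
      not a verbatim quote:

      static inline bool skb_zcopy_pure(const struct sk_buff *skb)
      {
              return skb_shinfo(skb)->flags & SKBFL_PURE_ZEROCOPY;
      }

      /* e.g. skip the send-memory charge for skbs holding only user pages: */
      if (!skb_zcopy_pure(skb))
              sk_mem_charge(sk, skb->truesize);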
      
      Testing with the msg_zerocopy.c benchmark between two hosts (100G NICs)
      with zerocopy showed that before this patch the 'sock' variable in
      memory.stat for cgroup2, which tracks the sum of sk_forward_alloc,
      sk_rmem_alloc and sk_wmem_queued, is around 1822720, and with this
      change it is 0. This is because zerocopy data is no longer charged to
      sk_forward_alloc, and shows that kernel memory utilization is lowered.
      
      With this commit we don't see the warning we saw with the previous
      attempt, which resulted in the revert commit 84882cf7.
      Signed-off-by: Talal Ahmad <talalahmad@google.com>
      Acked-by: Arjun Roy <arjunroy@google.com>
      Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: add and use skb_unclone_keeptruesize() helper · c4777efa
      Committed by Eric Dumazet
      While commit 097b9146 ("net: fix up truesize of cloned
      skb in skb_prepare_for_shift()") fixed the immediate issues found
      when KFENCE was enabled/tested, there are still similar issues
      when tcp_trim_head() hits KFENCE while the master skb
      is cloned.

      This happens under heavy networking TX workloads,
      when the TX completion might be delayed after an incoming ACK.

      This patch fixes the WARNING in sk_stream_kill_queues
      when sk->sk_mem_queued/sk->sk_forward_alloc are not zero.
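
      A minimal sketch of the helper as introduced here (it was later split
      into fast and slow paths by 2b88cba5 above); an illustration, not a
      verbatim quote:

      static inline int skb_unclone_keeptruesize(struct sk_buff *skb,
                                                 gfp_t pri)
      {
              might_sleep_if(gfpflags_allow_blocking(pri));

              if (skb_cloned(skb)) {
                      unsigned int save = skb->truesize;
                      int res;

                      res = pskb_expand_head(skb, 0, 0, pri);
                      skb->truesize = save;   /* pskb_expand_head() may have
                                               * inflated truesize (e.g. under
                                               * KFENCE); keep the value the
                                               * socket was charged for */
                      return res;
              }
              return 0;
      }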
      
      Fixes: d3fb45f3 ("mm, kfence: insert KFENCE hooks for SLAB")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Marco Elver <elver@google.com>
      Link: https://lore.kernel.org/r/20211102004555.1359210-1-eric.dumazet@gmail.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  14. 02 Nov 2021, 2 commits
    • Revert "net: avoid double accounting for pure zerocopy skbs" · 84882cf7
      Committed by Jakub Kicinski
      This reverts commit f1a456f8.
      
        WARNING: CPU: 1 PID: 6819 at net/core/skbuff.c:5429 skb_try_coalesce+0x78b/0x7e0
        CPU: 1 PID: 6819 Comm: xxxxxxx Kdump: loaded Tainted: G S                5.15.0-04194-gd852503f7711 #16
        RIP: 0010:skb_try_coalesce+0x78b/0x7e0
        Code: e8 2a bf 41 ff 44 8b b3 bc 00 00 00 48 8b 7c 24 30 e8 19 c0 41 ff 44 89 f0 48 03 83 c0 00 00 00 48 89 44 24 40 e9 47 fb ff ff <0f> 0b e9 ca fc ff ff 4c 8d 70 ff 48 83 c0 07 48 89 44 24 38 e9 61
        RSP: 0018:ffff88881f449688 EFLAGS: 00010282
        RAX: 00000000fffffe96 RBX: ffff8881566e4460 RCX: ffffffff82079f7e
        RDX: 0000000000000003 RSI: dffffc0000000000 RDI: ffff8881566e47b0
        RBP: ffff8881566e46e0 R08: ffffed102619235d R09: ffffed102619235d
        R10: ffff888130c91ae3 R11: ffffed102619235c R12: ffff88881f4498a0
        R13: 0000000000000056 R14: 0000000000000009 R15: ffff888130c91ac0
        FS:  00007fec2cbb9700(0000) GS:ffff88881f440000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 00007fec1b060d80 CR3: 00000003acf94005 CR4: 00000000003706e0
        DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
        Call Trace:
         <IRQ>
         tcp_try_coalesce+0xeb/0x290
         ? tcp_parse_options+0x610/0x610
         ? mark_held_locks+0x79/0xa0
         tcp_queue_rcv+0x69/0x2f0
         tcp_rcv_established+0xa49/0xd40
         ? tcp_data_queue+0x18a0/0x18a0
         tcp_v6_do_rcv+0x1c9/0x880
         ? rt6_mtu_change_route+0x100/0x100
         tcp_v6_rcv+0x1624/0x1830
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: avoid double accounting for pure zerocopy skbs · f1a456f8
      Committed by Talal Ahmad
      Track skbs with only zerocopy data and avoid charging them to kernel
      memory to correctly account the memory utilization for msg_zerocopy.
      All of the data in such skbs is held in user pages which are already
      accounted to the user. Before this change, they are charged again in
      the kernel in __zerocopy_sg_from_iter. The charging in the kernel is
      excessive because data is not being copied into skb frags. This
      excessive charging can lead to the kernel going into a memory pressure
      state, which adversely impacts all sockets in the system. Mark pure
      zerocopy skbs with a SKBFL_PURE_ZEROCOPY flag and remove the
      charge/uncharge for data in such skbs.
      
      Initially, an skb is marked pure zerocopy when it is empty and in the
      zerocopy path. The skb can then change from a pure zerocopy skb to a
      mixed data skb (zerocopy and copy data) if it is at the tail of the
      write queue, there is room available in it, and non-zerocopy data is
      being sent in the next sendmsg call. At this time sk_mem_charge is done
      for the pure zerocopied data and the pure zerocopy flag is unmarked. We
      found that this happens very rarely on workloads that pass
      MSG_ZEROCOPY.

      A pure zerocopy skb can later be coalesced into a normal skb if they
      are next to each other in the queue, but this patch prevents that
      coalescing from happening. This avoids the complexity of charging when
      the skb downgrades from pure zerocopy to mixed. This is also rare.
      
      In sk_wmem_free_skb, if it is a pure zerocopy skb, an sk_mem_uncharge
      for SKB_TRUESIZE(MAX_TCP_HEADER) is done to balance the sk_mem_charge
      in tcp_skb_entail for an skb without data.

      Testing with the msg_zerocopy.c benchmark between two hosts (100G NICs)
      with zerocopy showed that before this patch the 'sock' variable in
      memory.stat for cgroup2, which tracks the sum of sk_forward_alloc,
      sk_rmem_alloc and sk_wmem_queued, is around 1822720, and with this
      change it is 0. This is because zerocopy data is no longer charged to
      sk_forward_alloc, and shows that kernel memory utilization is lowered.
      Signed-off-by: Talal Ahmad <talalahmad@google.com>
      Acked-by: Arjun Roy <arjunroy@google.com>
      Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  15. 29 Oct 2021, 1 commit
  16. 23 Oct 2021, 1 commit
  17. 22 Sep 2021, 1 commit
  18. 14 Sep 2021, 1 commit
  19. 03 Sep 2021, 1 commit
  20. 14 Aug 2021, 1 commit
  21. 05 Aug 2021, 1 commit
  22. 03 Aug 2021, 1 commit
  23. 29 Jul 2021, 3 commits
  24. 19 Jul 2021, 1 commit
    • net: Fix zero-copy head len calculation. · a17ad096
      Committed by Pravin B Shelar
      In some cases the skb head can be locked while the entire header
      data has been pulled from the skb. When skb_zerocopy() is called in
      such cases, the following BUG is triggered. This patch fixes it by
      copying the entire skb in such cases; a sketch of the fix follows the
      BUG report below. This could be optimized in case it becomes a
      performance bottleneck.
      
      ---8<---
      kernel BUG at net/core/skbuff.c:2961!
      invalid opcode: 0000 [#1] SMP PTI
      CPU: 2 PID: 0 Comm: swapper/2 Tainted: G           OE     5.4.0-77-generic #86-Ubuntu
      Hardware name: OpenStack Foundation OpenStack Nova, BIOS 1.13.0-1ubuntu1.1 04/01/2014
      RIP: 0010:skb_zerocopy+0x37a/0x3a0
      RSP: 0018:ffffbcc70013ca38 EFLAGS: 00010246
      Call Trace:
       <IRQ>
       queue_userspace_packet+0x2af/0x5e0 [openvswitch]
       ovs_dp_upcall+0x3d/0x60 [openvswitch]
       ovs_dp_process_packet+0x125/0x150 [openvswitch]
       ovs_vport_receive+0x77/0xd0 [openvswitch]
       netdev_port_receive+0x87/0x130 [openvswitch]
       netdev_frame_hook+0x4b/0x60 [openvswitch]
       __netif_receive_skb_core+0x2b4/0xc90
       __netif_receive_skb_one_core+0x3f/0xa0
       __netif_receive_skb+0x18/0x60
       process_backlog+0xa9/0x160
       net_rx_action+0x142/0x390
       __do_softirq+0xe1/0x2d6
       irq_exit+0xae/0xb0
       do_IRQ+0x5a/0xf0
       common_interrupt+0xf/0xf
      
      Code that triggered BUG:
      int
      skb_zerocopy(struct sk_buff *to, struct sk_buff *from, int len, int hlen)
      {
              int i, j = 0;
              int plen = 0; /* length of skb->head fragment */
              int ret;
              struct page *page;
              unsigned int offset;
      
              BUG_ON(!from->head_frag && !hlen);
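
      A minimal sketch of the fix in skb_zerocopy_headlen(), a hedged
      reconstruction rather than the exact hunk:

      unsigned int
      skb_zerocopy_headlen(const struct sk_buff *from)
      {
              unsigned int hlen = 0;

              if (!from->head_frag ||
                  skb_headlen(from) < L1_CACHE_BYTES ||
                  skb_shinfo(from)->nr_frags >= MAX_SKB_FRAGS) {
                      hlen = skb_headlen(from);
                      if (!hlen)
                              hlen = from->len;   /* head locked with no
                                                   * linear data to reference:
                                                   * copy the entire skb */
              }

              if (skb_has_frag_list(from))
                      hlen = from->len;

              return hlen;
      }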
      Signed-off-by: Pravin B Shelar <pshelar@ovn.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  25. 17 Jul 2021, 1 commit
  26. 07 Jul 2021, 1 commit
  27. 30 Jun 2021, 1 commit
  28. 11 Jun 2021, 1 commit
  29. 08 Jun 2021, 1 commit
    • page_pool: Allow drivers to hint on SKB recycling · 6a5bcd84
      Committed by Ilias Apalodimas
      Up to now several high speed NICs have had custom mechanisms for
      recycling the allocated memory they use for their payloads.
      Our page_pool API already has recycling capabilities that are always
      used when we are running in 'XDP mode'. So let's tweak the API and the
      kernel network stack slightly and allow the recycling to happen even
      during standard operation.
      The API doesn't take into account the 'split page' policies currently
      used by those drivers, but it can be extended once we have users for
      that.
      
      The idea is to be able to intercept the packet at skb_release_data().
      If it's a buffer coming from our page_pool API, recycle it back to the
      pool for further usage, or just release the packet entirely.
      
      To achieve that we introduce a bit in struct sk_buff (pp_recycle:1) and
      a field in struct page (page->pp) to store the page_pool pointer.
      Storing the information in page->pp allows us to recycle both SKBs and
      their fragments.
      We could have skipped the skb bit entirely, since identical information
      can be derived from struct page. However, in an effort to affect the
      free path as little as possible, reading a single bit in the skb, which
      is already in cache, is better than trying to derive the identical
      information from the data stored in the page.

      The driver or page_pool has to take care of the sync operations on its
      own during buffer recycling, since the buffer is never unmapped after
      opting in to recycling.
      
      Since the gain depends on the architecture, we are not enabling
      recycling by default when the page_pool API is used by a driver.
      In order to enable recycling, the driver must call skb_mark_for_recycle()
      to store the information we need for recycling in page->pp and set the
      recycling bit, or page_pool_store_mem_info() for a fragment. A sketch
      follows.
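
      A minimal sketch of the opt-in helper and the release-path behavior, a
      hedged reconstruction of the mechanism rather than the exact patch:

      static inline void skb_mark_for_recycle(struct sk_buff *skb,
                                              struct page *page,
                                              struct page_pool *pp)
      {
              skb->pp_recycle = 1;                 /* checked on release */
              page_pool_store_mem_info(page, pp);  /* remember the pool */
      }

      /* On skb_release_data(), a pp_recycle skb then tries to return each
       * of its pages to the owning pool instead of plainly freeing them.
       */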
      Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Co-developed-by: Matteo Croce <mcroce@microsoft.com>
      Signed-off-by: Matteo Croce <mcroce@microsoft.com>
      Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>