1. 10 May 2023 (6 commits)
    • bpf, sockmap: Fix an infinite loop error when len is 0 in tcp_bpf_recvmsg_parser() · 4be06acd
      Liu Jian committed
      mainline inclusion
      from mainline-v6.3-rc2
      commit d900f3d2
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I65HYE
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d900f3d20cc3169ce42ec72acc850e662a4d4db2
      
      ---------------------------
      
      When the buffer length of the recvmsg system call is 0, we got the
      following soft lockup problem:
      
      watchdog: BUG: soft lockup - CPU#3 stuck for 27s! [a.out:6149]
      CPU: 3 PID: 6149 Comm: a.out Kdump: loaded Not tainted 6.2.0+ #30
      Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.15.0-1 04/01/2014
      RIP: 0010:remove_wait_queue+0xb/0xc0
      Code: 5e 41 5f c3 cc cc cc cc 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 0f 1f 44 00 00 41 57 <41> 56 41 55 41 54 55 48 89 fd 53 48 89 f3 4c 8d 6b 18 4c 8d 73 20
      RSP: 0018:ffff88811b5978b8 EFLAGS: 00000246
      RAX: 0000000000000000 RBX: ffff88811a7d3780 RCX: ffffffffb7a4d768
      RDX: dffffc0000000000 RSI: ffff88811b597908 RDI: ffff888115408040
      RBP: 1ffff110236b2f1b R08: 0000000000000000 R09: ffff88811a7d37e7
      R10: ffffed10234fa6fc R11: 0000000000000001 R12: ffff88811179b800
      R13: 0000000000000001 R14: ffff88811a7d38a8 R15: ffff88811a7d37e0
      FS:  00007f6fb5398740(0000) GS:ffff888237180000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 0000000020000000 CR3: 000000010b6ba002 CR4: 0000000000370ee0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      Call Trace:
       <TASK>
       tcp_msg_wait_data+0x279/0x2f0
       tcp_bpf_recvmsg_parser+0x3c6/0x490
       inet_recvmsg+0x280/0x290
       sock_recvmsg+0xfc/0x120
       ____sys_recvmsg+0x160/0x3d0
       ___sys_recvmsg+0xf0/0x180
       __sys_recvmsg+0xea/0x1a0
       do_syscall_64+0x3f/0x90
       entry_SYSCALL_64_after_hwframe+0x72/0xdc
      
      The logic in tcp_bpf_recvmsg_parser is as follows:
      
      msg_bytes_ready:
      	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
      	if (!copied) {
      		wait data;
      		goto msg_bytes_ready;
      	}
      
      In this case, "copied" is always 0, so an infinite loop occurs.
      
      According to the Linux system call man page, 0 should be returned in this
      case. Therefore, in tcp_bpf_recvmsg_parser(), if the length is 0, return
      immediately. Also modify several other functions with the same problem.
      
      Fixes: 1f5be6b3 ("udp: Implement udp_bpf_recvmsg() for sockmap")
      Fixes: 9825d866 ("af_unix: Implement unix_dgram_bpf_recvmsg()")
      Fixes: c5d2177a ("bpf, sockmap: Fix race in ingress receive verdict with redirect to self")
      Fixes: 604326b4 ("bpf, sockmap: convert to generic sk_msg interface")
      Signed-off-by: Liu Jian <liujian56@huawei.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Cc: Jakub Sitnicki <jakub@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20230303080946.1146638-1-liujian56@huawei.com
      Signed-off-by: Liu Jian <liujian56@huawei.com>
      
      Conflicts:
      	net/ipv4/udp_bpf.c
      	net/unix/unix_bpf.c
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
    • bpf, sockmap: Fix double bpf_prog_put on error case in map_link · 220b8487
      John Fastabend committed
      mainline inclusion
      from mainline-v5.17-rc1
      commit 218d747a
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I65HYE
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=218d747a4142f281a256687bb513a135c905867b
      
      ---------------------------
      
      sock_map_link() is called to update a sockmap entry with a sk. But, if the
      sock_map_init_proto() call fails then we return an error to the map_update
      op against the sockmap. In the error path though we need to cleanup psock
      and dec the refcnt on any programs associated with the map, because we
      refcnt them early in the update process to ensure they are pinned for the
      psock. (This avoids a race where user deletes programs while also updating
      the map with new socks.)
      
      In the current code we do the prog refcnt dec explicitly by calling
      bpf_prog_put() when the program was found in the map. But, after commit
      '38207a5e', in this error path we've already done the prog to psock
      assignment so the programs have a reference from the psock as well. This
      then causes the psock tear down logic, invoked by sk_psock_put() in the
      error path, to similarly call bpf_prog_put on the programs there.
      
      To be explicit this logic does the prog->psock assignment:
      
        if (msg_*)
          psock_set_prog(...)
      
      Then the error path under the out_progs label does a similar check and
      dec with:
      
        if (msg_*)
           bpf_prog_put(...)
      
      And the teardown logic sk_psock_put() does ...
      
        psock_set_prog(msg_*, NULL)
      
      ... triggering another bpf_prog_put(...). Then KASAN gives us this splat,
      found by syzbot, because we've created an imbalance between bpf_prog_inc
      and bpf_prog_put, calling put twice on the program.
      
        BUG: KASAN: vmalloc-out-of-bounds in __bpf_prog_put kernel/bpf/syscall.c:1812 [inline]
        BUG: KASAN: vmalloc-out-of-bounds in bpf_prog_put+0x8c/0x4f0 kernel/bpf/syscall.c:1829
        Read of size 8 at addr ffffc90000e76038 by task syz-executor020/3641
      
      To fix, clean up the error path so that it does not do the bpf_prog_put
      once progs are assigned; from that point on it relies on the normal psock
      tear-down logic to do the complete cleanup.
      
      For completeness we also cover the case where sk_psock_init_strp() fails,
      but this is not expected because it indicates an incorrect socket type
      and should be caught earlier.
      
      Fixes: 38207a5e ("bpf, sockmap: Attach map progs to psock early for feature probes")
      Reported-by: syzbot+bb73e71cf4b8fd376a4f@syzkaller.appspotmail.com
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20220104214645.290900-1-john.fastabend@gmail.com
      (cherry picked from commit 218d747a)
      Signed-off-by: Liu Jian <liujian56@huawei.com>
      
      Conflicts:
      	net/core/sock_map.c
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
    • bpf, sockmap: Re-evaluate proto ops when psock is removed from sockmap · cbc0f0ca
      John Fastabend committed
      mainline inclusion
      from mainline-v5.16-rc5
      commit c0d95d33
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I65HYE
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c0d95d3380ee099d735e08618c0d599e72f6c8b0
      
      ---------------------------
      
      When a sock is added to a sock map we evaluate what proto op hooks need to
      be used. However, when the program is removed from the sock map we have not
      been evaluating if that changes the required program layout.
      
      Before the patch listed in the 'fixes' tag this was not causing failures
      because the base program set handles all cases. Specifically, both the case
      with a stream parser and the case without a stream parser are handled. With
      the fix below we identified a race when running with a proto op that attempts
      to read skbs off both the stream parser and the skb->receive_queue. Namely,
      a race existed where checking the skb->receive_queue from recvmsg at the
      precise moment when the parser is paused and the receive_queue is not empty
      could result in skipping the stream parser. This may break an RX policy that
      depends on the parser to run.
      
      The fix tag then loads a specific proto ops that resolved this race. But, we
      missed removing that proto ops recv hook when the sock is removed from the
      sockmap. The result is the stream parser is stopped so no more skbs will be
      aggregated there, but the hook and BPF program continues to be attached on
      the psock. User space will then get an EBUSY when trying to read the socket
      because the recvmsg() handler is now waiting on a stopped stream parser.
      
      To fix we rerun the proto ops init() function, which will look at the new
      set of progs attached to the psock and reset the proto ops hooks to the
      correct handlers. In the above case, where we remove the sock from the sock
      map, the RX prog will no longer be listed, so the proto ops is removed.
      
      Fixes: c5d2177a ("bpf, sockmap: Fix race in ingress receive verdict with redirect to self")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20211119181418.353932-3-john.fastabend@gmail.com
      (cherry picked from commit c0d95d33)
      Signed-off-by: Liu Jian <liujian56@huawei.com>
      
      Conflicts:
      	net/core/skmsg.c
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
    • bpf, sockmap: Attach map progs to psock early for feature probes · 75922849
      John Fastabend committed
      mainline inclusion
      from mainline-v5.16-rc5
      commit 38207a5e
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I65HYE
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=38207a5e81230d6ffbdd51e5fa5681be5116dcae
      
      ---------------------------
      
      When a TCP socket is added to a sock map we look at the programs attached
      to the map to determine what proto op hooks need to be changed. Before
      the patch in the 'fixes' tag there were only two categories -- the empty
      set of programs or a TX policy. In any case the base set handled the
      receive case.
      
      After the fix we have an optimized program for receive that closes a small,
      but possible, race on receive. This program is loaded only when the map the
      psock is being added to includes a RX policy. Otherwise, the race is not
      possible so we don't need to handle the race condition.
      
      In order for the call to sk_psock_init() to correctly evaluate the above
      conditions, all progs need to be set in the psock before the call. However,
      in the current code this is not the case: we end up evaluating the
      requirements on the old prog state. If your psock is attached to multiple
      maps -- for example a tx map and rx map -- then the second update would pull
      in the correct progs. But in the other pattern, with a single rx-enabled
      map, the correct receive hooks are not used. The result is that the race
      fixed by the patch in the fixes tag below may still be seen in this case.
      
      To fix we simply set all psock->progs before doing the call into
      sock_map_init(). With this the init() call gets the full list of programs
      and chooses the correct proto ops on the first iteration instead of
      requiring the second update to pull them in. This fixes the race case when
      only a single map is used.
      
      Fixes: c5d2177a ("bpf, sockmap: Fix race in ingress receive verdict with redirect to self")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20211119181418.353932-2-john.fastabend@gmail.com
      (cherry picked from commit 38207a5e)
      Signed-off-by: Liu Jian <liujian56@huawei.com>
      
      Conflicts:
      	net/core/sock_map.c
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
    • bpf, sockmap: Fix return codes from tcp_bpf_recvmsg_parser() · d3bb682c
      John Fastabend committed
      mainline inclusion
      from mainline-v5.17-rc1
      commit 5b2c5540
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I65HYE
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5b2c5540b8110eea0d67a78fb0ddb9654c58daeb
      
      ---------------------------
      
      Applications can be confused slightly because we do not always return the
      same error code as expected, e.g. what the TCP stack normally returns. For
      example, on a sock err (sk->sk_err set) we return EAGAIN instead of the
      sock_error. This usually means the application will 'try again'
      instead of aborting immediately. Another example: when a shutdown event
      is received we should immediately abort instead of waiting for data when
      the user provides a timeout.
      
      These tend not to be fatal, and applications usually recover, but they
      introduce bogus errors to the user or unexpected latency. Before
      'c5d2177a' we fell back to the TCP stack when no data was available,
      so we managed to catch many of the cases here, although with the extra
      latency cost of calling tcp_msg_wait_data() first.
      
      To fix, let's duplicate the error handling of the TCP stack in tcp_bpf so
      that we get the same error codes.
      
      These were found in our CI tests that run applications against sockmap
      and do longer lived testing, at least compared to test_sockmap that
      does short-lived ping/pong tests, and in some of our test clusters
      we deploy.
      
      It's non-trivial to do these in shorter-form CI tests that would be
      appropriate for BPF selftests, but we are looking into it so we can
      ensure this keeps working going forward. As a preview, one idea is to
      pull in the packetdrill testing, which catches some of this.
      
      Fixes: c5d2177a ("bpf, sockmap: Fix race in ingress receive verdict with redirect to self")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20220104205918.286416-1-john.fastabend@gmail.com
      (cherry picked from commit 5b2c5540)
      Signed-off-by: Liu Jian <liujian56@huawei.com>
      
      Conflicts:
      	net/ipv4/tcp_bpf.c
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
    • bpf, sockmap: Fix race in ingress receive verdict with redirect to self · 9a9749fb
      John Fastabend committed
      mainline inclusion
      from mainline-v5.16-rc1
      commit c5d2177a
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I65HYE
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c5d2177a72a1659554922728fc407f59950aa929
      
      ---------------------------
      
      A socket in a sockmap may have different combinations of programs attached
      depending on configuration. There can be no programs, in which case the
      socket acts as a sink only. There can be a TX program, in which case a BPF
      program is attached to the sending side but no RX program is attached.
      There can be an RX program only, where sends have no BPF program attached
      but receives are hooked with BPF. And finally, both TX and RX programs may
      be attached. Giving us the permutations:
      
       None, Tx, Rx, and TxRx
      
      To date most of our use cases have been the TX case, used as a fast
      datapath to directly copy between a local application and a userspace
      proxy, or Rx and TxRx applications that are operating an in-kernel proxy.
      The traffic in the first case, where we hook applications into a userspace
      proxy, looks like this:
      
        AppA  redirect   AppB
         Tx <-----------> Rx
         |                |
         +                +
         TCP <--> lo <--> TCP
      
      In this case all traffic from AppA (after the 3WHS) is copied into the AppB
      ingress queue and no traffic is ever on the TCP receive_queue.
      
      In the second case the application never receives, except in some rare error
      cases, traffic on the actual user space socket. Instead the send happens in
      the kernel.
      
                 AppProxy       socket pool
             sk0 ------------->{sk1,sk2, skn}
              ^                      |
              |                      |
              |                      v
             ingress              lb egress
             TCP                  TCP
      
      Here because traffic is never read off the socket with userspace recv() APIs
      there is only ever one reader on the sk receive_queue. Namely the BPF programs.
      
      However, we've started to introduce a third configuration where the BPF program
      on receive should process the data, but then the normal case is to push the
      data into the receive queue of AppB.
      
             AppB
             recv()                (userspace)
           -----------------------
             tcp_bpf_recvmsg()     (kernel)
               |             |
               |             |
               |             |
             ingress_msgQ    |
               |             |
             RX_BPF          |
               |             |
               v             v
             sk->receive_queue
      
      This is different from the App{A,B} redirect because traffic is first received
      on the sk->receive_queue.
      
      Now for the issue. The tcp_bpf_recvmsg() handler first checks the
      ingress_msg queue for any data handled by the BPF rx program and returned
      with a PASS code, so that it was enqueued on the ingress msg queue. Then,
      if no data exists on that queue, it checks the socket receive queue.
      Unfortunately, this is the same receive_queue the BPF program is reading
      data off of. So we get a race. It's possible for the recvmsg() hook to pull
      data off the receive_queue before the BPF hook has a chance to read it. It
      typically happens when an application is banging on recv() and getting
      EAGAINs, until they manage to race with the RX BPF program.
      
      To fix this we note that, before this patch, at attach time when the socket
      is loaded into the map we check if it needs a TX program or just the base
      set of proto bpf hooks. It then uses the above general RX hook regardless
      of whether a BPF program is attached at rx or not. This patch extends that
      check to
      handle all cases enumerated above, TX, RX, TXRX, and none. And to fix above
      race when an RX program is attached we use a new hook that is nearly identical
      to the old one except now we do not let the recv() call skip the RX BPF program.
      Now only the BPF program pulls data from sk->receive_queue and recv() only
      pulls data from the ingress msgQ post BPF program handling.
      
      With this resolved, our AppB from above has been up and running for many
      hours without detecting any errors. We verify this by correlating counters
      in RX BPF events and in AppB to ensure data never skips the BPF program.
      Selftests were not able to detect this because we only run them for a short
      period of time on well-ordered send/recvs, so we don't get any of the noise
      we see in real application environments.
      
      Fixes: 51199405 ("bpf: skb_verdict, support SK_PASS on RX BPF path")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: Jussi Maki <joamaki@gmail.com>
      Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20211103204736.248403-4-john.fastabend@gmail.com
      (cherry picked from commit c5d2177a)
      Signed-off-by: Liu Jian <liujian56@huawei.com>
      
      Conflicts:
      	net/ipv4/tcp_bpf.c
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
  2. 26 April 2023 (22 commits)
  3. 19 April 2023 (12 commits)