1. 10 May 2023 (5 commits)
    • bpf, sockmap: Fix double bpf_prog_put on error case in map_link · ec9966f2
      Committed by John Fastabend
      mainline inclusion
      from mainline-v5.17-rc1
      commit 218d747a
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I65HYE
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=218d747a4142f281a256687bb513a135c905867b
      
      ---------------------------
      
      sock_map_link() is called to update a sockmap entry with a sk. But if the
      sock_map_init_proto() call fails, we return an error to the map_update
      op against the sockmap. In that error path we need to clean up the psock
      and decrement the refcnt on any programs associated with the map, because
      we take a refcnt on them early in the update process to ensure they are
      pinned for the psock. (This avoids a race where the user deletes programs
      while also updating the map with new socks.)
      
      In the current code we do the prog refcnt decrement explicitly by calling
      bpf_prog_put() when the program was found in the map. But after commit
      '38207a5e', in this error path we have already done the prog-to-psock
      assignment, so the programs also hold a reference from the psock. This
      then causes the psock teardown logic, invoked by sk_psock_put() in the
      error path, to similarly call bpf_prog_put on the programs there.
      
      To be explicit this logic does the prog->psock assignment:
      
        if (msg_*)
          psock_set_prog(...)
      
      Then the error path under the out_progs label does a similar check and
      dec with:
      
        if (msg_*)
           bpf_prog_put(...)
      
      And the teardown logic sk_psock_put() does ...
      
        psock_set_prog(msg_*, NULL)
      
      ... triggering another bpf_prog_put(...). KASAN then gives us the splat
      below, found by syzbot, because we have created an imbalance between
      bpf_prog_inc and bpf_prog_put, calling put twice on the program.
      
        BUG: KASAN: vmalloc-out-of-bounds in __bpf_prog_put kernel/bpf/syscall.c:1812 [inline]
        BUG: KASAN: vmalloc-out-of-bounds in bpf_prog_put+0x8c/0x4f0 kernel/bpf/syscall.c:1829
        Read of size 8 at addr ffffc90000e76038 by task syz-executor020/3641
      
      To fix, clean up the error path so it does not call bpf_prog_put once
      the progs are assigned; instead it relies on the normal psock teardown
      logic to do the complete cleanup.
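      The fixed ownership rule can be modeled in a small userspace sketch
      (hypothetical toy types; prog_inc/prog_put stand in for
      bpf_prog_inc/bpf_prog_put, and the names are illustrative only):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the refcount flow, not kernel code. */
struct toy_prog { int refcnt; };

static void prog_inc(struct toy_prog *p) { p->refcnt++; }
static void prog_put(struct toy_prog *p) { p->refcnt--; }

struct toy_psock { struct toy_prog *msg_parser; };

/* Assignment transfers ownership of the reference to the psock. */
static void toy_psock_set_prog(struct toy_psock *ps, struct toy_prog *p)
{
	ps->msg_parser = p;
}

/* Teardown (the sk_psock_put() path) drops the psock's reference. */
static void toy_psock_teardown(struct toy_psock *ps)
{
	if (ps->msg_parser) {
		prog_put(ps->msg_parser);
		ps->msg_parser = NULL;
	}
}

/* Fixed error path: once the prog is assigned to the psock, do NOT
 * call prog_put() again under the error label; teardown owns it. */
static int toy_map_link_error_path(struct toy_prog *p, struct toy_psock *ps)
{
	prog_inc(p);                /* pinned early in the update */
	toy_psock_set_prog(ps, p);  /* reference now owned by the psock */
	/* ... sock_map_init_proto() fails here ... */
	toy_psock_teardown(ps);     /* the single put */
	return -1;
}
```

      The buggy version additionally called prog_put() under the error label,
      which is exactly the double put the KASAN splat reports.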
      
      For completeness we also cover the case where sk_psock_init_strp() fails,
      but this is not expected because it indicates an incorrect socket type
      and should be caught earlier.
      
      Fixes: 38207a5e ("bpf, sockmap: Attach map progs to psock early for feature probes")
      Reported-by: syzbot+bb73e71cf4b8fd376a4f@syzkaller.appspotmail.com
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20220104214645.290900-1-john.fastabend@gmail.com
      (cherry picked from commit 218d747a)
      Signed-off-by: Liu Jian <liujian56@huawei.com>
      
      Conflicts:
      	net/core/sock_map.c
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
    • bpf, sockmap: Re-evaluate proto ops when psock is removed from sockmap · 00b4e7a1
      Committed by John Fastabend
      mainline inclusion
      from mainline-v5.16-rc5
      commit c0d95d33
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I65HYE
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c0d95d3380ee099d735e08618c0d599e72f6c8b0
      
      ---------------------------
      
      When a sock is added to a sock map we evaluate which proto op hooks need
      to be used. However, when the program is removed from the sock map we have
      not been re-evaluating whether that changes the required program layout.
      
      Before the patch listed in the 'fixes' tag this was not causing failures
      because the base program set handles all cases. Specifically, the case with
      a stream parser and the case without a stream parser are both handled. With
      the fix below we identified a race when running with a proto op that attempts
      to read skbs off both the stream parser and the sk->receive_queue. Namely,
      when the stream parser is empty, checking the sk->receive_queue from recvmsg
      at the precise moment the parser is paused while the receive_queue is not
      empty could result in skipping the stream parser. This may break an RX
      policy that depends on the parser running.
      
      The fix tag then loads a specific proto ops that resolves this race. But we
      missed removing that proto ops recv hook when the sock is removed from the
      sockmap. The result is that the stream parser is stopped, so no more skbs
      will be aggregated there, but the hook and BPF program remain attached to
      the psock. User space will then get an EBUSY when trying to read the socket
      because the recvmsg() handler is now waiting on a stopped stream parser.
      
      To fix, we rerun the proto ops init() function, which will look at the new
      set of progs attached to the psock and reset the proto ops hooks to the
      correct handlers. In the above case, where we remove the sock from the sock
      map, the RX prog will no longer be listed, so the proto ops is removed.
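      The removal-side fix reduces to one rule: removal must re-evaluate the
      hook, not just detach the prog. A minimal sketch, assuming hypothetical
      toy names (the real code selects between full proto_ops structs):

```c
#include <assert.h>

/* Toy model of recvmsg-hook selection; enum and field names invented. */
enum toy_recv_hook { TOY_HOOK_BASE, TOY_HOOK_PARSER };

struct toy_psock { int has_rx_prog; enum toy_recv_hook hook; };

/* init() inspects the current prog set and picks the recvmsg handler. */
static void toy_proto_ops_init(struct toy_psock *ps)
{
	ps->hook = ps->has_rx_prog ? TOY_HOOK_PARSER : TOY_HOOK_BASE;
}

/* The fix: on removal, rerun init() so the hook is re-evaluated instead
 * of leaving the stopped-parser recvmsg handler in place (EBUSY). */
static void toy_sockmap_remove(struct toy_psock *ps)
{
	ps->has_rx_prog = 0;
	toy_proto_ops_init(ps);
}
```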
      
      Fixes: c5d2177a ("bpf, sockmap: Fix race in ingress receive verdict with redirect to self")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20211119181418.353932-3-john.fastabend@gmail.com
      (cherry picked from commit c0d95d33)
      Signed-off-by: Liu Jian <liujian56@huawei.com>
      
      Conflicts:
      	net/core/skmsg.c
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
    • bpf, sockmap: Attach map progs to psock early for feature probes · 8464e846
      Committed by John Fastabend
      mainline inclusion
      from mainline-v5.16-rc5
      commit 38207a5e
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I65HYE
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=38207a5e81230d6ffbdd51e5fa5681be5116dcae
      
      ---------------------------
      
      When a TCP socket is added to a sock map we look at the programs attached
      to the map to determine which proto op hooks need to be changed. Before
      the patch in the 'fixes' tag there were only two categories -- the empty
      set of programs or a TX policy. In either case the base set handled the
      receive path.
      
      After the fix we have an optimized program for receive that closes a small,
      but possible, race on receive. This program is loaded only when the map the
      psock is being added to includes a RX policy. Otherwise, the race is not
      possible so we don't need to handle the race condition.
      
      In order for the call to sk_psock_init() to correctly evaluate the above
      conditions, all progs need to be set in the psock before the call. However,
      in the current code this is not the case, so we end up evaluating the
      requirements on the old prog state. If the psock is attached to multiple
      maps -- for example a TX map and an RX map -- then the second update would
      pull in the correct progs. But in the other pattern, with a single
      RX-enabled map, the correct receive hooks are not used. The result is that
      the race fixed by the patch in the fixes tag below may still be seen in
      this case.
      
      To fix, we simply set all psock->progs before doing the call into
      sock_map_init(). With this, the init() call gets the full list of programs
      and chooses the correct proto ops on the first iteration, instead of
      requiring the second update to pull them in. This fixes the race case when
      only a single map is used.
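      The ordering dependency can be sketched in a toy userspace model
      (struct and function names are hypothetical, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the ordering fix: init() can only evaluate the progs
 * already visible on the psock when it runs. */
struct toy_progs { void *stream_verdict; void *msg_parser; };
struct toy_psock { struct toy_progs progs; int uses_rx_hook; };

static void toy_psock_init(struct toy_psock *ps)
{
	/* Feature probe: pick the RX hook only if an RX prog is present. */
	ps->uses_rx_hook = ps->progs.stream_verdict != NULL;
}

/* Fixed ordering: populate all psock->progs first, then call init(),
 * so the correct hooks are chosen on the first pass. */
static void toy_sock_map_link(struct toy_psock *ps, void *rx, void *tx)
{
	ps->progs.stream_verdict = rx;
	ps->progs.msg_parser = tx;
	toy_psock_init(ps);
}
```

      The buggy ordering called init() before the assignments, so a single
      RX-only map left uses_rx_hook unset until a second update.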
      
      Fixes: c5d2177a ("bpf, sockmap: Fix race in ingress receive verdict with redirect to self")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20211119181418.353932-2-john.fastabend@gmail.com
      (cherry picked from commit 38207a5e)
      Signed-off-by: Liu Jian <liujian56@huawei.com>
      
      Conflicts:
      	net/core/sock_map.c
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
    • bpf, sockmap: Fix return codes from tcp_bpf_recvmsg_parser() · d6b44b0f
      Committed by John Fastabend
      mainline inclusion
      from mainline-v5.17-rc1
      commit 5b2c5540
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I65HYE
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5b2c5540b8110eea0d67a78fb0ddb9654c58daeb
      
      ---------------------------
      
      Applications can be slightly confused because we do not always return the
      error code they expect, i.e. what the TCP stack normally returns. For
      example, on a socket error (sk->sk_err set), instead of returning the
      sock_error we return EAGAIN. This usually means the application will
      'try again' instead of aborting immediately. Another example: when a
      shutdown event is received we should immediately abort instead of waiting
      for data when the user provides a timeout.
      
      These tend not to be fatal -- applications usually recover -- but they
      introduce bogus errors to the user or add unexpected latency. Before
      'c5d2177a' we fell back to the TCP stack when no data was available,
      so we managed to catch many of these cases, although with the extra
      latency cost of calling tcp_msg_wait_data() first.
      
      To fix, let's duplicate the error handling from the TCP stack into tcp_bpf
      so that we get the same error codes.
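      The error-code precedence described above can be sketched as follows
      (a toy userspace model with invented names; the real tcp_bpf code also
      handles signals, timeouts, and other conditions):

```c
#include <assert.h>
#include <errno.h>

#define TOY_RCV_SHUTDOWN 1	/* stand-in for the kernel's RCV_SHUTDOWN bit */

/* Toy model of the "no data available" path in recvmsg. */
struct toy_sock { int sk_err; int shutdown; };

static int toy_recvmsg_no_data(struct toy_sock *sk)
{
	if (sk->sk_err)
		return -sk->sk_err;	/* report the pending socket error, not EAGAIN */
	if (sk->shutdown & TOY_RCV_SHUTDOWN)
		return 0;		/* EOF: abort immediately instead of waiting out a timeout */
	return -EAGAIN;			/* genuinely no data yet: caller may retry */
}
```

      The pre-fix behavior collapsed all three cases into EAGAIN, which is
      what confused applications into retrying on dead sockets.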
      
      These were found in our CI tests, which run applications against sockmap
      and do longer-lived testing (at least compared to test_sockmap, which
      does short-lived ping/pong tests), and in some of the test clusters
      we deploy.
      
      It's non-trivial to do these in a shorter-form CI test that would be
      appropriate for BPF selftests, but we are looking into it so we can
      ensure this keeps working going forward. As a preview, one idea is to
      pull in the packetdrill testing, which catches some of this.
      
      Fixes: c5d2177a ("bpf, sockmap: Fix race in ingress receive verdict with redirect to self")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20220104205918.286416-1-john.fastabend@gmail.com
      (cherry picked from commit 5b2c5540)
      Signed-off-by: Liu Jian <liujian56@huawei.com>
      
      Conflicts:
      	net/ipv4/tcp_bpf.c
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
    • bpf, sockmap: Fix race in ingress receive verdict with redirect to self · 95b3e959
      Committed by John Fastabend
      mainline inclusion
      from mainline-v5.16-rc1
      commit c5d2177a
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I65HYE
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c5d2177a72a1659554922728fc407f59950aa929
      
      ---------------------------
      
      A socket in a sockmap may have different combinations of programs attached
      depending on configuration. There can be no programs, in which case the
      socket acts as a sink only. There can be a TX program, in which case a BPF
      program is attached to the sending side but no RX program is attached.
      There can be an RX program only, where sends have no BPF program attached
      but receives are hooked with BPF. And finally, both TX and RX programs may
      be attached. This gives us the permutations:
      
       None, Tx, Rx, and TxRx
      
      To date most of our use cases have been the TX case, used as a fast
      datapath to directly copy between a local application and a userspace
      proxy, or RX and TxRx applications that are operating an in-kernel proxy.
      The traffic in the first case, where we hook applications into a userspace
      application, looks like this:
      
        AppA  redirect   AppB
         Tx <-----------> Rx
         |                |
         +                +
         TCP <--> lo <--> TCP
      
      In this case all traffic from AppA (after the 3WHS) is copied into the
      AppB ingress queue and no traffic is ever on the TCP receive_queue.
      
      In the second case the application never receives traffic on the actual
      user space socket, except in some rare error cases. Instead the send
      happens in the kernel.
      
                 AppProxy       socket pool
             sk0 ------------->{sk1,sk2, skn}
              ^                      |
              |                      |
              |                      v
             ingress              lb egress
             TCP                  TCP
      
      Here because traffic is never read off the socket with userspace recv() APIs
      there is only ever one reader on the sk receive_queue. Namely the BPF programs.
      
      However, we've started to introduce a third configuration where the BPF program
      on receive should process the data, but then the normal case is to push the
      data into the receive queue of AppB.
      
             AppB
             recv()                (userspace)
           -----------------------
             tcp_bpf_recvmsg()     (kernel)
               |             |
               |             |
               |             |
             ingress_msgQ    |
               |             |
             RX_BPF          |
               |             |
               v             v
             sk->receive_queue
      
      This is different from the App{A,B} redirect because traffic is first received
      on the sk->receive_queue.
      
      Now for the issue. The tcp_bpf_recvmsg() handler first checks the
      ingress_msg queue for any data that was handled by the BPF RX program and
      returned with a PASS code, so that it was enqueued on the ingress msg
      queue. Then, if no data exists on that queue, it checks the socket receive
      queue. Unfortunately, this is the same receive_queue the BPF program is
      reading data off of, so we get a race. It's possible for the recvmsg()
      hook to pull data off the receive_queue before the BPF hook has a chance
      to read it. It typically happens when an application is banging on recv()
      and getting EAGAINs, until it manages to race with the RX BPF program.
      
      To fix this, we note that before this patch, at attach time when the
      socket is loaded into the map, we check whether it needs a TX program or
      just the base set of proto bpf hooks. It then uses the above general RX
      hook regardless of whether a BPF program is attached at RX or not. This
      patch extends the check to handle all cases enumerated above: TX, RX,
      TxRx, and none. And to fix the above race when an RX program is attached,
      we use a new hook that is nearly identical to the old one, except now we
      do not let the recv() call skip the RX BPF program. Now only the BPF
      program pulls data from sk->receive_queue, and recv() only pulls data
      from the ingress msgQ after BPF program handling.
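      The single-reader rule the new hook enforces can be modeled in a toy
      userspace sketch (hypothetical names; the real code operates on skbs
      and msg buffers):

```c
#include <assert.h>
#include <errno.h>

/* Toy queue model: only the reader-ordering rule is taken from the text. */
struct toy_sk { int ingress_msgs; int recvq_skbs; };

/* New RX-program recvmsg hook: recv() reads only the ingress msg queue.
 * Data on sk->receive_queue is left for the RX BPF program to consume. */
static int toy_recvmsg_parser(struct toy_sk *sk)
{
	if (sk->ingress_msgs) {
		sk->ingress_msgs--;
		return 1;	/* data the BPF program already passed through */
	}
	/* The old (racy) hook would fall back to sk->receive_queue here,
	 * stealing skbs before the BPF program could see them. */
	return -EAGAIN;
}
```

      With this rule, a recv() racing against a momentarily empty ingress queue
      returns EAGAIN rather than bypassing the RX policy.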
      
      With this resolved, our AppB from above has been up and running for many
      hours without detecting any errors. We verify this by correlating counters
      in RX BPF events and in AppB to ensure data never skips the BPF program.
      Selftests were not able to detect this because we only run them for a
      short period of time on well-ordered send/recvs, so we don't get any of
      the noise we see in real application environments.
      
      Fixes: 51199405 ("bpf: skb_verdict, support SK_PASS on RX BPF path")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: Jussi Maki <joamaki@gmail.com>
      Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20211103204736.248403-4-john.fastabend@gmail.com
      (cherry picked from commit c5d2177a)
      Signed-off-by: Liu Jian <liujian56@huawei.com>
      
      Conflicts:
      	net/ipv4/tcp_bpf.c
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
  2. 26 Apr 2023 (21 commits)
  3. 19 Apr 2023 (14 commits)