1. 27 Jan 2022: 10 commits
  2. 26 Jan 2022: 12 commits
  3. 25 Jan 2022: 11 commits
  4. 24 Jan 2022: 7 commits
    • net: stmmac: remove unused members in struct stmmac_priv · de8a820d
      Committed by Jisheng Zhang
      The tx_coalesce and mii_irq are not used at all now, so remove them.
      Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      de8a820d
    • net: atlantic: Use the bitmap API instead of hand-writing it · ebe0582b
      Committed by Christophe JAILLET
      Simplify code by using bitmap_weight() and bitmap_zero() instead of
      hand-writing these functions.
      Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Reviewed-by: Igor Russkikh <irusskikh@marvell.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ebe0582b
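      The shape of this cleanup can be sketched in userspace. The helpers below are hypothetical stand-ins mirroring what `bitmap_weight()` (count set bits) and `bitmap_zero()` (clear the map with one memset) do, not the kernel implementations:

```c
#include <stdint.h>
#include <string.h>

#define BITS_PER_WORD 64
#define BITMAP_WORDS(nbits) (((nbits) + BITS_PER_WORD - 1) / BITS_PER_WORD)

/* Count set bits -- the loop the driver now delegates to bitmap_weight(). */
static unsigned int bitmap_weight_sketch(const uint64_t *map, unsigned int nbits)
{
    unsigned int w = 0;
    for (unsigned int i = 0; i < nbits; i++)
        if (map[i / BITS_PER_WORD] & (1ULL << (i % BITS_PER_WORD)))
            w++;
    return w;
}

/* Clear all bits -- what bitmap_zero() does in a single memset. */
static void bitmap_zero_sketch(uint64_t *map, unsigned int nbits)
{
    memset(map, 0, BITMAP_WORDS(nbits) * sizeof(uint64_t));
}
```

      Replacing hand-written loops with these helpers keeps the driver code shorter and lets the bitmap library pick the optimal implementation per architecture.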
    • ping: fix the sk_bound_dev_if match in ping_lookup · 2afc3b5a
      Committed by Xin Long
      When 'ping' is switched to use a PING socket instead of a RAW socket by:
      
         # sysctl -w net.ipv4.ping_group_range="0 100"
      
      the selftest 'router_broadcast.sh' fails, as a command such as
      
        # ip vrf exec vrf-h1 ping -I veth0 198.51.100.255 -b
      
      can't receive the response skb on the PING socket. This is caused by a
      mismatch between sk_bound_dev_if and dif in ping_rcv() when looking up
      the PING socket, because dif becomes vrf-h1 when the ingress device's
      master is set to vrf-h1.
      
      This patch fixes the regression by also checking sk_bound_dev_if
      against sdif, so that packets can still be received even if the socket
      is bound not to the vrf device but to the real iif.
      
      Fixes: c319b4d7 ("net: ipv4: add IPPROTO_ICMP socket kind")
      Reported-by: Hangbin Liu <liuhangbin@gmail.com>
      Signed-off-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2afc3b5a
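      The match predicate after the fix can be sketched as follows. This is a hypothetical userspace helper illustrating the dif/sdif check, not the kernel's `ping_lookup()` itself:

```c
#include <stdbool.h>

/* After the fix a bound socket matches either dif (which may be the VRF
 * master device) or sdif (the real ingress ifindex).  Before the fix only
 * dif was compared, so VRF-enslaved ingress broke the lookup. */
static bool bound_dev_match(int sk_bound_dev_if, int dif, int sdif)
{
    if (!sk_bound_dev_if)
        return true;    /* unbound socket matches any device */
    return sk_bound_dev_if == dif || sk_bound_dev_if == sdif;
}
```

      In the failing selftest, the socket is bound to veth0 while dif is vrf-h1 and sdif is veth0; the extra sdif comparison is what lets the response through.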
    • net/smc: Transitional solution for clcsock race issue · c0bf3d8a
      Committed by Wen Gu
      We encountered a crash in smc_setsockopt(), caused by accessing
      smc->clcsock after clcsock had been released.
      
       BUG: kernel NULL pointer dereference, address: 0000000000000020
       #PF: supervisor read access in kernel mode
       #PF: error_code(0x0000) - not-present page
       PGD 0 P4D 0
       Oops: 0000 [#1] PREEMPT SMP PTI
       CPU: 1 PID: 50309 Comm: nginx Kdump: loaded Tainted: G E     5.16.0-rc4+ #53
       RIP: 0010:smc_setsockopt+0x59/0x280 [smc]
       Call Trace:
        <TASK>
        __sys_setsockopt+0xfc/0x190
        __x64_sys_setsockopt+0x20/0x30
        do_syscall_64+0x34/0x90
        entry_SYSCALL_64_after_hwframe+0x44/0xae
       RIP: 0033:0x7f16ba83918e
        </TASK>
      
      This patch fixes it by holding clcsock_release_lock and checking
      whether clcsock has already been released before accessing it.
      
      In case a crash of the same kind happens in smc_getsockopt() or
      smc_switch_to_fallback(), this patch also checks smc->clcsock in
      them. The caller of smc_switch_to_fallback() now identifies whether
      the fallback succeeded from the return value.
      
      Fixes: fd57770d ("net/smc: wait for pending work before clcsock release_sock")
      Link: https://lore.kernel.org/lkml/5dd7ffd1-28e2-24cc-9442-1defec27375e@linux.ibm.com/T/
      Signed-off-by: Wen Gu <guwen@linux.alibaba.com>
      Acked-by: Karsten Graul <kgraul@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c0bf3d8a
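      The guard pattern described above can be sketched in userspace with a pthread mutex standing in for clcsock_release_lock. The struct and function names are illustrative, not the actual net/smc definitions:

```c
#include <errno.h>
#include <pthread.h>
#include <stddef.h>

/* Sketch: take the release lock, then re-check the pointer before
 * touching it.  A releaser sets clcsock to NULL under the same lock,
 * so the NULL check and the use cannot race. */
struct smc_sock_sketch {
    pthread_mutex_t clcsock_release_lock;
    void *clcsock;                  /* NULL once released */
};

static int smc_setsockopt_sketch(struct smc_sock_sketch *smc)
{
    int rc = 0;

    pthread_mutex_lock(&smc->clcsock_release_lock);
    if (!smc->clcsock)
        rc = -EBADF;                /* already released: bail out, don't crash */
    /* else: safe to dereference smc->clcsock while the lock is held */
    pthread_mutex_unlock(&smc->clcsock_release_lock);
    return rc;
}
```

      The commit calls this a transitional solution: the check turns a NULL dereference into a clean error return, rather than restructuring the clcsock lifetime.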
    • ibmvnic: remove unused ->wait_capability · 3a5d9db7
      Committed by Sukadev Bhattiprolu
      With the previous bug fix, the ->wait_capability flag is no longer
      needed and can be removed.
      
      Fixes: 249168ad ("ibmvnic: Make CRQ interrupt tasklet wait for all capabilities crqs")
      Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
      Reviewed-by: Dany Madden <drt@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3a5d9db7
    • ibmvnic: don't spin in tasklet · 48079e7f
      Committed by Sukadev Bhattiprolu
      ibmvnic_tasklet() continuously spins waiting for responses to all
      capability requests. It does this to avoid encountering an error
      during initialization of the vnic. However, if there is a bug in the
      VIOS and we do not receive a response to one or more queries, the
      tasklet ends up spinning continuously, leading to hard lockups.
      
      If we fail to receive a message from the VIOS it is reasonable to
      timeout the login attempt rather than spin indefinitely in the tasklet.
      
      Fixes: 249168ad ("ibmvnic: Make CRQ interrupt tasklet wait for all capabilities crqs")
      Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
      Reviewed-by: Dany Madden <drt@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      48079e7f
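      The shape of the fix, spinning a bounded number of times and failing the attempt on timeout instead of looping forever, can be sketched like this. The poll limit and the completion callback are hypothetical stand-ins, not ibmvnic code:

```c
#include <stdbool.h>

/* Poll for completion at most max_polls times instead of spinning
 * indefinitely.  On timeout the caller fails the login attempt rather
 * than hard-locking the CPU. */
static bool wait_for_responses(bool (*done)(void *), void *ctx, int max_polls)
{
    for (int i = 0; i < max_polls; i++)
        if (done(ctx))
            return true;    /* all responses received */
    return false;           /* timed out: give up, report failure */
}

/* Illustrative completion callbacks for the two outcomes. */
static bool never_done(void *ctx) { (void)ctx; return false; }
static bool done_after_three(void *ctx)
{
    int *calls = ctx;
    return ++*calls >= 3;   /* "responses" arrive on the third poll */
}
```

      The key change is that a missing VIOS response now produces a bounded, recoverable failure instead of an unbounded spin.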
    • ibmvnic: init ->running_cap_crqs early · 151b6a5c
      Committed by Sukadev Bhattiprolu
      We use ->running_cap_crqs to determine when ibmvnic_tasklet() should
      send out the next protocol message type, i.e. when we get back
      responses to all our QUERY_CAPABILITY CRQs we send out
      REQUEST_CAPABILITY CRQs. Similarly, when we get responses to all the
      REQUEST_CAPABILITY CRQs, we send out the QUERY_IP_OFFLOAD CRQ.
      
      We currently increment ->running_cap_crqs as we send out each CRQ and
      have the ibmvnic_tasklet() send out the next message type, when this
      running_cap_crqs count drops to 0.
      
      This assumes that all the CRQs of the current type were sent out before
      the count drops to 0. However, it is possible that we send out, say,
      6 CRQs, get preempted, and receive all 6 responses before we send out
      the remaining CRQs. This can result in the ->running_cap_crqs count
      dropping to zero before all messages of the current type were sent,
      and we end up sending the next protocol message too early.
      
      Instead, initialize ->running_cap_crqs upfront so that the tasklet
      sends the next protocol message only after all responses are received.
      
      Use the cap_reqs local variable to detect any discrepancy (now or in
      the future) in the number of capability requests we actually send.
      
      Currently only send_query_cap() is affected by this behavior (sending
      the next message too early), since it is called from the worker thread
      (during reset) and from the application thread (during ->ndo_open()),
      both of which can be preempted. send_request_cap() is only called from
      the tasklet, which processes CRQ responses sequentially, so it is not
      affected. But to maintain symmetry with send_query_capability(), we
      update send_request_capability() as well.
      
      Fixes: 249168ad ("ibmvnic: Make CRQ interrupt tasklet wait for all capabilities crqs")
      Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
      Reviewed-by: Dany Madden <drt@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      151b6a5c
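      The counting scheme described above can be sketched in userspace. The struct and function names are illustrative, modeled on but not identical to the ibmvnic code:

```c
/* With the old scheme the counter was bumped once per CRQ as it was
 * sent; if responses landed between sends, the count could hit zero
 * before the batch was complete.  Initializing the counter to the full
 * batch size upfront makes "count reached 0" mean "every request of
 * this type got its response". */
struct cap_state_sketch {
    int running_cap_crqs;
};

/* New scheme: account for the whole batch before sending anything. */
static void send_query_cap_sketch(struct cap_state_sketch *st, int cap_reqs)
{
    st->running_cap_crqs = cap_reqs;  /* init upfront, then send the CRQs */
}

/* Tasklet side: one response arrived; returns 1 when it is time to
 * send the next protocol message type. */
static int handle_response_sketch(struct cap_state_sketch *st)
{
    return --st->running_cap_crqs == 0;
}
```

      Because the full count is in place before the first CRQ is sent, a burst of early responses can at worst bring the counter down to the number of still-unsent requests, never to zero.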