  1. 09 October 2021, 1 commit
    • net: dsa: mv88e6xxx: keep the pvid at 0 when VLAN-unaware · 8b6836d8
      By Vladimir Oltean
      The VLAN support in mv88e6xxx has a loaded history. Commit 2ea7a679
      ("net: dsa: Don't add vlans when vlan filtering is disabled") noticed
      some issues with VLAN and decided the best way to deal with them was to
      make the DSA core ignore VLANs added by the bridge while VLAN awareness
      is turned off. Those issues were never explained, just presented as
      "at least one corner case".
      
      That approach had problems of its own, presented by
      commit 54a0ed0d ("net: dsa: provide an option for drivers to always
      receive bridge VLANs") for the DSA core, followed by
      commit 1fb74191 ("net: dsa: mv88e6xxx: fix vlan setup") which
      applied ds->configure_vlan_while_not_filtering = true for mv88e6xxx in
      particular.
      
      We still don't know what corner case Andrew saw when he wrote
      commit 2ea7a679 ("net: dsa: Don't add vlans when vlan filtering is
      disabled"), but Tobias now reports that when we use TX forwarding
      offload, pinging an external station from the bridge device is broken if
      the front-facing DSA user port has flooding turned off. The full
      description is in the link below, but for short, when a mv88e6xxx port
      is under a VLAN-unaware bridge, it inherits that bridge's pvid.
      So packets ingressing a user port will be classified to e.g. VID 1
      (assuming that value for the bridge_default_pvid), whereas when
      tag_dsa.c xmits towards a user port, it always sends packets using a VID
      of 0 if that port is standalone or under a VLAN-unaware bridge - or at
      least it did so prior to commit d82f8ab0 ("net: dsa: tag_dsa:
      offload the bridge forwarding process").
      
      In any case, when there is a conversation between the CPU and a station
      connected to a user port, the station's MAC address is learned in VID 1
      but the CPU tries to transmit through VID 0. The packets reach the
      intended station, but via flooding and not by virtue of matching the
      existing ATU entry.
      
      DSA has established (and enforced in other drivers: sja1105, felix,
      mt7530) that a VLAN-unaware port should use a private pvid, and not
      inherit the one from the bridge. The bridge's pvid should only be
      inherited when that bridge is VLAN-aware, so all state transitions need
      to be handled. On the other hand, all bridge VLANs should sit in the VTU
      from the moment the bridge offloads them via switchdev; they are just
      not used.
      
      This solves the problem that Tobias sees because packets ingressing on
      VLAN-unaware user ports now get classified to VID 0, which is also the
      VID used by tag_dsa.c on xmit.
      
      Fixes: d82f8ab0 ("net: dsa: tag_dsa: offload the bridge forwarding process")
      Link: https://patchwork.kernel.org/project/netdevbpf/patch/20211003222312.284175-2-vladimir.oltean@nxp.com/#24491503
      Reported-by: Tobias Waldekranz <tobias@waldekranz.com>
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
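
      A rough user-space model of the pvid policy described in this commit
      (illustrative only, not the mv88e6xxx implementation; all names below
      are hypothetical): the port keeps a private pvid of 0 while standalone
      or under a VLAN-unaware bridge, and only mirrors the bridge's pvid once
      that bridge turns VLAN filtering on.

      /* Illustrative model of the pvid policy; not the actual driver code. */
      #include <stdbool.h>
      #include <stdio.h>

      #define PRIVATE_PVID 0U /* VID used on xmit for standalone/VLAN-unaware ports */

      struct port_state {
          bool bridged;             /* port is a bridge member */
          bool vlan_filtering;      /* that bridge is VLAN-aware */
          unsigned int bridge_pvid; /* e.g. bridge_default_pvid, usually 1 */
      };

      /* Only a member of a VLAN-aware bridge inherits the bridge's pvid;
       * everything else keeps the private pvid of 0. */
      static unsigned int effective_pvid(const struct port_state *p)
      {
          if (p->bridged && p->vlan_filtering)
              return p->bridge_pvid;
          return PRIVATE_PVID;
      }

      int main(void)
      {
          struct port_state p = { .bridged = true, .vlan_filtering = false,
                                  .bridge_pvid = 1 };

          printf("VLAN-unaware bridge member: pvid %u\n", effective_pvid(&p)); /* 0 */
          p.vlan_filtering = true;
          printf("VLAN-aware bridge member:   pvid %u\n", effective_pvid(&p)); /* 1 */
          return 0;
      }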
  2. 08 October 2021, 2 commits
  3. 07 October 2021, 3 commits
    • iavf: fix double unlock of crit_lock · 54ee3943
      By Stefan Assmann
      The crit_lock mutex could be unlocked twice, as reported here:
      https://lists.osuosl.org/pipermail/intel-wired-lan/Week-of-Mon-20210823/025525.html
      
      Remove the superfluous unlock. Technically the problem was already
      present before commit 5ac49f3c, as that commit only replaced the
      locking primitive without any functional change.
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Fixes: 5ac49f3c ("iavf: use mutexes for locking of critical sections")
      Fixes: bac84861 ("iavf: Refactor the watchdog state machine")
      Signed-off-by: Stefan Assmann <sassmann@kpanic.de>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
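
      As a minimal illustration of the bug class (a user-space sketch using
      pthreads, not the iavf code): each lock acquisition must be paired with
      exactly one release, and the fix amounts to deleting the extra release
      on the error path.

      /* User-space model of a double-unlock bug and its fix; illustrative only. */
      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t crit_lock = PTHREAD_MUTEX_INITIALIZER;

      static int do_work(int fail)
      {
          pthread_mutex_lock(&crit_lock);

          if (fail) {
              pthread_mutex_unlock(&crit_lock); /* error path releases the lock */
              return -1;
          }

          pthread_mutex_unlock(&crit_lock);
          return 0;
      }

      int main(void)
      {
          int ret = do_work(1);

          /* The buggy pattern was an additional unlock here, after the error
           * path had already released crit_lock; the fix removes it so the
           * mutex is unlocked exactly once per lock. */
          printf("do_work returned %d\n", ret);
          return 0;
      }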
    • i40e: Fix freeing of uninitialized misc IRQ vector · 2e5a2057
      By Sylwester Dziedziuch
      When VSI setup failed in i40e_probe() as part of PF switch setup, the
      driver tried to free misc IRQ vectors in i40e_clear_interrupt_scheme()
      and produced a kernel Oops:
      
         Trying to free already-free IRQ 266
         WARNING: CPU: 0 PID: 5 at kernel/irq/manage.c:1731 __free_irq+0x9a/0x300
         Workqueue: events work_for_cpu_fn
         RIP: 0010:__free_irq+0x9a/0x300
         Call Trace:
         ? synchronize_irq+0x3a/0xa0
         free_irq+0x2e/0x60
         i40e_clear_interrupt_scheme+0x53/0x190 [i40e]
         i40e_probe.part.108+0x134b/0x1a40 [i40e]
         ? kmem_cache_alloc+0x158/0x1c0
         ? acpi_ut_update_ref_count.part.1+0x8e/0x345
         ? acpi_ut_update_object_reference+0x15e/0x1e2
         ? strstr+0x21/0x70
         ? irq_get_irq_data+0xa/0x20
         ? mp_check_pin_attr+0x13/0xc0
         ? irq_get_irq_data+0xa/0x20
         ? mp_map_pin_to_irq+0xd3/0x2f0
         ? acpi_register_gsi_ioapic+0x93/0x170
         ? pci_conf1_read+0xa4/0x100
         ? pci_bus_read_config_word+0x49/0x70
         ? do_pci_enable_device+0xcc/0x100
         local_pci_probe+0x41/0x90
         work_for_cpu_fn+0x16/0x20
         process_one_work+0x1a7/0x360
         worker_thread+0x1cf/0x390
         ? create_worker+0x1a0/0x1a0
         kthread+0x112/0x130
         ? kthread_flush_work_fn+0x10/0x10
         ret_from_fork+0x1f/0x40
      
      The problem is that at that point the misc IRQ vectors had not been
      allocated yet, so we get a call trace saying that the driver is trying
      to free already-free IRQ vectors.
      
      Add a check in i40e_clear_interrupt_scheme for __I40E_MISC_IRQ_REQUESTED
      PF state before calling i40e_free_misc_vector. This state is set only if
      misc IRQ vectors were properly initialized.
      
      Fixes: c17401a1 ("i40e: use separate state bit for miscellaneous IRQ setup")
      Reported-by: PJ Waskiewicz <pwaskiewicz@jumptrading.com>
      Signed-off-by: Sylwester Dziedziuch <sylwesterx.dziedziuch@intel.com>
      Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
      Tested-by: Dave Switzer <david.switzer@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
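
      Sketched below under assumed names (a user-space model, not the i40e
      code): teardown frees the misc vector only when the state bit recording
      a successful request is set, so a probe failure that happens before the
      misc IRQ was requested no longer tries to free it.

      /* User-space model of the guard added by the fix; names are illustrative. */
      #include <stdio.h>

      enum pf_state_bit { MISC_IRQ_REQUESTED }; /* stands in for __I40E_MISC_IRQ_REQUESTED */

      struct pf {
          unsigned long state; /* bitmap of PF state bits */
      };

      static void free_misc_vector(struct pf *pf)
      {
          (void)pf;
          printf("misc IRQ freed\n");
      }

      static void clear_interrupt_scheme(struct pf *pf)
      {
          /* The fix: only free the misc vector if it was actually requested. */
          if (pf->state & (1UL << MISC_IRQ_REQUESTED))
              free_misc_vector(pf);
          else
              printf("misc IRQ was never requested, nothing to free\n");
      }

      int main(void)
      {
          struct pf pf = { .state = 0 }; /* probe failed before requesting the misc IRQ */

          clear_interrupt_scheme(&pf);   /* previously this freed a never-allocated IRQ */
          return 0;
      }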
    • i40e: fix endless loop under rtnl · 857b6c6f
      By Jiri Benc
      The loop in i40e_get_capabilities can never end. The problem is that
      although i40e_aq_discover_capabilities returns with an error if there's
      a firmware problem, the returned error is not checked. There is a check for
      pf->hw.aq.asq_last_status but that value is set to I40E_AQ_RC_OK on most
      firmware problems.
      
      When i40e_aq_discover_capabilities encounters a firmware problem, it will
      encounter the same problem on its next invocation. As a result, the loop
      becomes endless. We hit this with I40E_ERR_ADMIN_QUEUE_TIMEOUT but,
      looking at the code, it can happen with a range of other firmware errors.
      
      I don't know what the correct behavior should be: whether the firmware
      should be retried a few times, or whether pf->hw.aq.asq_last_status should
      always be set to the encountered firmware error (but then it would be
      pointless and could simply be replaced by the i40e_aq_discover_capabilities
      return value). However, the current behavior with an endless loop under the
      rtnl mutex(!) is unacceptable, and Intel has not submitted a fix, although
      we explained the bug to them 7 months ago.
      
      This may not be the best possible fix but it's better than hanging the whole
      system on a firmware bug.
      
      Fixes: 56a62fc8 ("i40e: init code and hardware support")
      Tested-by: Stefan Assmann <sassmann@redhat.com>
      Signed-off-by: Jiri Benc <jbenc@redhat.com>
      Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Tested-by: Dave Switzer <david.switzer@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
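
      A minimal user-space sketch of the failure mode and of the kind of check
      that breaks it (hypothetical names, not the i40e implementation): if the
      call's return value is ignored and the loop only looks at a status field
      that stays at "OK" on timeouts, the loop never exits; acting on the
      return value (optionally with a retry cap) terminates it.

      /* User-space model of the endless capability-discovery loop; illustrative only. */
      #include <stdio.h>

      #define ERR_TIMEOUT (-1)
      #define STATUS_OK   0
      #define MAX_RETRIES 10

      /* Stands in for the discovery call: here it always times out while
       * leaving last_status untouched at STATUS_OK, which is exactly the
       * combination the old loop could not detect. */
      static int discover_capabilities(int *last_status)
      {
          *last_status = STATUS_OK;
          return ERR_TIMEOUT;
      }

      int main(void)
      {
          int last_status = STATUS_OK;
          int err, retries;

          for (retries = 0; retries < MAX_RETRIES; retries++) {
              err = discover_capabilities(&last_status);
              /* Essence of the fix: act on the returned error instead of only
               * checking last_status, which stays OK on firmware timeouts. */
              if (err) {
                  printf("giving up after %d tries: err %d\n", retries + 1, err);
                  break;
              }
              if (last_status == STATUS_OK)
                  break; /* success */
          }
          return 0;
      }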
  4. 06 October 2021, 11 commits
  5. 05 October 2021, 4 commits
  6. 04 October 2021, 1 commit
  7. 03 October 2021, 1 commit
  8. 02 October 2021, 9 commits
  9. 01 October 2021, 8 commits