1. 03 Mar 2022, 11 commits
  2. 02 Mar 2022, 10 commits
    •
      nfp: avoid newline at end of message in NL_SET_ERR_MSG_MOD · 323d51ca
      Committed by Wan Jiabing
      Fix the following coccicheck warning:
      ./drivers/net/ethernet/netronome/nfp/flower/qos_conf.c:750:7-55: WARNING
      avoid newline at end of message in NL_SET_ERR_MSG_MOD
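The rule behind this coccicheck warning is that extack messages passed to NL_SET_ERR_MSG_MOD must not end in a newline, since netlink extack strings are not terminal output. The property can be expressed as a minimal userspace check (the helper name is hypothetical, purely to illustrate the rule):

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical helper expressing the coccicheck rule: an extack
 * message handed to NL_SET_ERR_MSG_MOD must not end in '\n'. */
static bool extack_msg_is_clean(const char *msg)
{
    size_t len = strlen(msg);

    return len == 0 || msg[len - 1] != '\n';
}
```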
      Signed-off-by: Wan Jiabing <wanjiabing@vivo.com>
      Reviewed-by: Simon Horman <simon.horman@corigine.com>
      Link: https://lore.kernel.org/r/20220301112356.1820985-1-wanjiabing@vivo.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    •
      sfc: set affinity hints in local NUMA node only · 09a99ab1
      Committed by Íñigo Huguet
      Affinity hints were being set to CPUs in the local NUMA node first,
      and then to other CPUs. This created 2 unintended issues:
      1. Channels intended to be assigned each to a different physical
         core were assigned to hyperthreading siblings because they were
         in the same NUMA node.
         Since the patch previous to this one, this no longer happens
         with the default rss_cpus modparam because fewer channels are
         created.
      2. XDP channels could be assigned to CPUs in different NUMA nodes,
         decreasing performance too much (to less than half in some of my
         tests).
      
      This patch sets the affinity hints so that the channels are spread
      only across the local NUMA node's CPUs. A fallback for the case
      where no CPU in the local NUMA node is online has been added too.
      
      Example of CPUs being assigned in a non-optimal way before this
      and the previous patch (note: in this system, xdp-8 to xdp-15 are
      created because num_possible_cpus == 64 but num_present_cpus == 32,
      so they are never used):
      
      $ lscpu | grep -i numa
      NUMA node(s):                    2
      NUMA node0 CPU(s):               0-7,16-23
      NUMA node1 CPU(s):               8-15,24-31
      
      $ grep -H . /proc/irq/*/0000:07:00.0*/../smp_affinity_list
      /proc/irq/141/0000:07:00.0-0/../smp_affinity_list:0
      /proc/irq/142/0000:07:00.0-1/../smp_affinity_list:1
      /proc/irq/143/0000:07:00.0-2/../smp_affinity_list:2
      /proc/irq/144/0000:07:00.0-3/../smp_affinity_list:3
      /proc/irq/145/0000:07:00.0-4/../smp_affinity_list:4
      /proc/irq/146/0000:07:00.0-5/../smp_affinity_list:5
      /proc/irq/147/0000:07:00.0-6/../smp_affinity_list:6
      /proc/irq/148/0000:07:00.0-7/../smp_affinity_list:7
      /proc/irq/149/0000:07:00.0-8/../smp_affinity_list:16
      /proc/irq/150/0000:07:00.0-9/../smp_affinity_list:17
      /proc/irq/151/0000:07:00.0-10/../smp_affinity_list:18
      /proc/irq/152/0000:07:00.0-11/../smp_affinity_list:19
      /proc/irq/153/0000:07:00.0-12/../smp_affinity_list:20
      /proc/irq/154/0000:07:00.0-13/../smp_affinity_list:21
      /proc/irq/155/0000:07:00.0-14/../smp_affinity_list:22
      /proc/irq/156/0000:07:00.0-15/../smp_affinity_list:23
      /proc/irq/157/0000:07:00.0-xdp-0/../smp_affinity_list:8
      /proc/irq/158/0000:07:00.0-xdp-1/../smp_affinity_list:9
      /proc/irq/159/0000:07:00.0-xdp-2/../smp_affinity_list:10
      /proc/irq/160/0000:07:00.0-xdp-3/../smp_affinity_list:11
      /proc/irq/161/0000:07:00.0-xdp-4/../smp_affinity_list:12
      /proc/irq/162/0000:07:00.0-xdp-5/../smp_affinity_list:13
      /proc/irq/163/0000:07:00.0-xdp-6/../smp_affinity_list:14
      /proc/irq/164/0000:07:00.0-xdp-7/../smp_affinity_list:15
      /proc/irq/165/0000:07:00.0-xdp-8/../smp_affinity_list:24
      /proc/irq/166/0000:07:00.0-xdp-9/../smp_affinity_list:25
      /proc/irq/167/0000:07:00.0-xdp-10/../smp_affinity_list:26
      /proc/irq/168/0000:07:00.0-xdp-11/../smp_affinity_list:27
      /proc/irq/169/0000:07:00.0-xdp-12/../smp_affinity_list:28
      /proc/irq/170/0000:07:00.0-xdp-13/../smp_affinity_list:29
      /proc/irq/171/0000:07:00.0-xdp-14/../smp_affinity_list:30
      /proc/irq/172/0000:07:00.0-xdp-15/../smp_affinity_list:31
      
      CPU assignments after this and the previous patch: normal channels
      are created one per core in the local NUMA node only, and
      affinities are set only to the local NUMA node:
      
      $ grep -H . /proc/irq/*/0000:07:00.0*/../smp_affinity_list
      /proc/irq/116/0000:07:00.0-0/../smp_affinity_list:0
      /proc/irq/117/0000:07:00.0-1/../smp_affinity_list:1
      /proc/irq/118/0000:07:00.0-2/../smp_affinity_list:2
      /proc/irq/119/0000:07:00.0-3/../smp_affinity_list:3
      /proc/irq/120/0000:07:00.0-4/../smp_affinity_list:4
      /proc/irq/121/0000:07:00.0-5/../smp_affinity_list:5
      /proc/irq/122/0000:07:00.0-6/../smp_affinity_list:6
      /proc/irq/123/0000:07:00.0-7/../smp_affinity_list:7
      /proc/irq/124/0000:07:00.0-xdp-0/../smp_affinity_list:16
      /proc/irq/125/0000:07:00.0-xdp-1/../smp_affinity_list:17
      /proc/irq/126/0000:07:00.0-xdp-2/../smp_affinity_list:18
      /proc/irq/127/0000:07:00.0-xdp-3/../smp_affinity_list:19
      /proc/irq/128/0000:07:00.0-xdp-4/../smp_affinity_list:20
      /proc/irq/129/0000:07:00.0-xdp-5/../smp_affinity_list:21
      /proc/irq/130/0000:07:00.0-xdp-6/../smp_affinity_list:22
      /proc/irq/131/0000:07:00.0-xdp-7/../smp_affinity_list:23
      /proc/irq/132/0000:07:00.0-xdp-8/../smp_affinity_list:0
      /proc/irq/133/0000:07:00.0-xdp-9/../smp_affinity_list:1
      /proc/irq/134/0000:07:00.0-xdp-10/../smp_affinity_list:2
      /proc/irq/135/0000:07:00.0-xdp-11/../smp_affinity_list:3
      /proc/irq/136/0000:07:00.0-xdp-12/../smp_affinity_list:4
      /proc/irq/137/0000:07:00.0-xdp-13/../smp_affinity_list:5
      /proc/irq/138/0000:07:00.0-xdp-14/../smp_affinity_list:6
      /proc/irq/139/0000:07:00.0-xdp-15/../smp_affinity_list:7
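The spreading policy described above (local-node CPUs first, wrapping around, with a fallback to all online CPUs when the local node has none online) can be sketched in plain C. The function and parameter names here are hypothetical, not the sfc driver's API:

```c
#include <stddef.h>

/* Hypothetical sketch of the affinity-hint policy: pick the CPU for
 * the n-th channel from the local NUMA node's online CPUs, wrapping
 * around; fall back to the full online set when the local node has
 * no online CPU. */
static int pick_affinity_cpu(const int *local_cpus, size_t n_local,
                             const int *online_cpus, size_t n_online,
                             size_t channel)
{
    if (n_local > 0)
        return local_cpus[channel % n_local];
    return online_cpus[channel % n_online]; /* fallback path */
}
```

With node0's CPU list 0-7,16-23 this reproduces the "after" assignment above: channels 0-7 land on CPUs 0-7, xdp-0 to xdp-7 on the siblings 16-23, and xdp-8 to xdp-15 wrap back to 0-7.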
      Signed-off-by: Íñigo Huguet <ihuguet@redhat.com>
      Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    •
      sfc: default config to 1 channel/core in local NUMA node only · c265b569
      Committed by Íñigo Huguet
      Handling channels from CPUs in a different NUMA node can penalize
      performance, so it is better to configure only one channel per
      core in the NIC's local NUMA node than one per core in the whole
      system.
      
      Fall back to all other online cores if there are no online CPUs
      in the local NUMA node.
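The channel-count policy can be sketched as follows; the types and names are hypothetical stand-ins, not the sfc driver's code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch of the default channel-count policy: one
 * channel per physical core in the given NUMA node, skipping
 * hyperthread siblings (same core_id). Passing node == -1 counts
 * cores across all nodes, which is the fallback used when the local
 * node has no online CPU. */
struct cpu_info { int core_id; int node; bool online; };

static unsigned int count_channel_cores(const struct cpu_info *cpus,
                                        size_t n, int node)
{
    unsigned int count = 0;

    for (size_t i = 0; i < n; i++) {
        if (!cpus[i].online || (node != -1 && cpus[i].node != node))
            continue;
        /* Skip CPUs whose physical core was already counted. */
        bool seen = false;
        for (size_t j = 0; j < i; j++)
            if (cpus[j].online && cpus[j].node == cpus[i].node &&
                cpus[j].core_id == cpus[i].core_id)
                seen = true;
        if (!seen)
            count++;
    }
    return count;
}
```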
      Signed-off-by: Íñigo Huguet <ihuguet@redhat.com>
      Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    •
      iavf: Remove non-inclusive language · 0a62b209
      Committed by Mateusz Palczewski
      Remove non-inclusive language from the iavf driver.
      Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
      Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    •
      iavf: Fix incorrect use of assigning iavf_status to int · 8fc16be6
      Committed by Mateusz Palczewski
      Currently there are functions in iavf_virtchnl.c for polling
      specific virtchnl receive events. These all assign iavf_status
      values to int variables. Fix this and explicitly assign int
      values when the iavf_status is not IAVF_SUCCESS.
      
      Also, refactor a small amount of duplicated code that can be reused by
      all of the previously mentioned functions.
      
      Finally, fix some spacing errors for variable assignment and get rid of
      all the goto statements in the refactored functions for clarity.
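The conversion pattern the patch applies can be sketched like this; the enum values and helper name are illustrative stand-ins, not the driver's definitions:

```c
#include <errno.h>

/* Illustrative stand-in for the fix: convert a polling result to an
 * int errno explicitly instead of assigning the enum straight into
 * an int. Enum values here are hypothetical. */
enum iavf_status { IAVF_SUCCESS = 0, IAVF_ERR_NOT_READY = 1 };

static int iavf_poll_result_to_int(enum iavf_status status)
{
    if (status != IAVF_SUCCESS)
        return -EAGAIN; /* explicit int, not the raw enum value */
    return 0;
}
```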
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
      Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    •
      iavf: stop leaking iavf_status as "errno" values · bae569d0
      Committed by Mateusz Palczewski
      Several functions in the iAVF core files take status values of the enum
      iavf_status and convert them into integer values. This leads to
      confusion as functions return both Linux errno values and status codes
      intermixed. Reporting status codes as if they were "errno" values can
      lead to confusion when reviewing error logs. Additionally, it can lead
      to unexpected behavior if a return value is not interpreted properly.
      
      Fix this by introducing iavf_status_to_errno, a switch that explicitly
      converts from the status codes into an appropriate error value. Also
      introduce a virtchnl_status_to_errno function for the one case where we
      were returning both virtchnl status codes and iavf_status codes in the
      same function.
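The shape of such a converter is a plain exhaustive switch; the enum values below are a minimal stand-in, not the driver's full iavf_status definition or its complete mapping table:

```c
#include <errno.h>

/* Illustrative sketch of a status-to-errno converter; enum values
 * and the mapping are hypothetical, chosen only to show the shape. */
enum iavf_status {
    IAVF_SUCCESS = 0,
    IAVF_ERR_PARAM,
    IAVF_ERR_NO_MEMORY,
    IAVF_ERR_TIMEOUT,
};

static int iavf_status_to_errno(enum iavf_status status)
{
    switch (status) {
    case IAVF_SUCCESS:
        return 0;
    case IAVF_ERR_PARAM:
        return -EINVAL;
    case IAVF_ERR_NO_MEMORY:
        return -ENOMEM;
    case IAVF_ERR_TIMEOUT:
        return -ETIMEDOUT;
    }
    return -EIO; /* unknown status codes collapse to a generic error */
}
```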
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
      Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    •
      iavf: remove redundant ret variable · c3fec56e
      Committed by Minghao Chi
      Return the value directly instead of storing it first in a
      redundant variable.
      Reported-by: Zeal Robot <zealci@zte.com.cn>
      Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn>
      Signed-off-by: CGEL ZTE <cgel.zte@gmail.com>
      Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    •
      iavf: Add usage of new virtchnl format to set default MAC · a3e839d5
      Committed by Mateusz Palczewski
      Use the new type field of the VIRTCHNL_OP_ADD_ETH_ADDR and
      VIRTCHNL_OP_DEL_ETH_ADDR requests to indicate that the VF wants
      to change its default MAC address.
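The mechanism can be sketched as below; the struct and constant names follow the upstream virtchnl header's naming, but treat the exact values as illustrative, and the fill helper as hypothetical:

```c
#include <stdint.h>

/* Sketch of the new "type" field on virtchnl ether-addr entries.
 * Names mirror the upstream virtchnl header; values illustrative. */
#define ETH_ALEN 6
#define VIRTCHNL_ETHER_ADDR_PRIMARY 1 /* the VF's default MAC */
#define VIRTCHNL_ETHER_ADDR_EXTRA   2 /* additional filters */

struct virtchnl_ether_addr {
    uint8_t addr[ETH_ALEN];
    uint8_t type;
    uint8_t pad;
};

/* Hypothetical helper: the type tells the PF whether this add/del
 * request concerns the VF's default MAC address. */
static void fill_ether_addr(struct virtchnl_ether_addr *e,
                            const uint8_t mac[ETH_ALEN], int is_primary)
{
    for (int i = 0; i < ETH_ALEN; i++)
        e->addr[i] = mac[i];
    e->type = is_primary ? VIRTCHNL_ETHER_ADDR_PRIMARY
                         : VIRTCHNL_ETHER_ADDR_EXTRA;
}
```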
      Signed-off-by: Sylwester Dziedziuch <sylwesterx.dziedziuch@intel.com>
      Signed-off-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
      Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
      Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    •
      iavf: refactor processing of VLAN V2 capability message · 87dba256
      Committed by Mateusz Palczewski
      In order to handle the capability exchange necessary for
      VIRTCHNL_VF_OFFLOAD_VLAN_V2, the driver must send
      a VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS message. This must occur prior to
      __IAVF_CONFIG_ADAPTER, and the driver must wait for the response from
      the PF.
      
      To handle this, the __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS state was
      introduced. This state is intended to process the response from the VLAN
      V2 caps message. This works, but is difficult to extend with
      additional capability exchanges.
      
      Existing (and future) AVF features rely more and more on this
      sort of extended op for processing additional capabilities. Just
      like VLAN V2, this exchange must happen prior to
      __IAVF_CONFIG_ADAPTER.
      
      Since we only send one outstanding AQ message at a time during init, it
      is not clear where to place this state. Adding more capability specific
      states becomes a mess. Instead of having the "previous" state send
      a message and then transition into a capability-specific state,
      introduce __IAVF_EXTENDED_CAPS state. This state will use a list of
      extended_caps that determines what messages to send and receive. As long
      as there are extended_caps bits still set, the driver will remain in
      this state performing one send or one receive per state machine loop.
      
      Refactor the VLAN V2 negotiation to use this new state, and remove the
      capability-specific state. This makes it significantly easier to add
      a new similar capability exchange going forward.
      
      Extended capabilities are processed by having an associated SEND and
      RECV extended capability bit. During __IAVF_EXTENDED_CAPS, the
      driver checks these bits in order by feature, first the send bit for
      a feature, then the recv bit for a feature. Each send flag will call
      a function that sends the necessary response, while each receive flag
      will wait for the response from the PF. If a given feature can't be
      negotiated with the PF, the associated flags will be cleared in
      order to skip processing of that feature.
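The flag-driven loop described above can be sketched as follows; the flag names mirror the commit's description, but the adapter struct and helpers are hypothetical stand-ins for the driver's code:

```c
#include <stdint.h>

/* Minimal sketch of the __IAVF_EXTENDED_CAPS loop: paired SEND/RECV
 * flags processed one per state-machine iteration and cleared as
 * each step completes. Names hypothetical. */
#define IAVF_EXTENDED_CAP_SEND_VLAN_V2 (1u << 0)
#define IAVF_EXTENDED_CAP_RECV_VLAN_V2 (1u << 1)

struct adapter { uint32_t extended_caps; int sent; int received; };

static void send_vlan_v2_caps(struct adapter *a) { a->sent = 1; }
static void recv_vlan_v2_caps(struct adapter *a) { a->received = 1; }

/* One iteration: perform exactly one send or one receive; return 1
 * while extended-caps work remains, 0 when the state can advance to
 * __IAVF_CONFIG_ADAPTER. */
static int process_extended_caps(struct adapter *a)
{
    if (a->extended_caps & IAVF_EXTENDED_CAP_SEND_VLAN_V2) {
        send_vlan_v2_caps(a);
        a->extended_caps &= ~IAVF_EXTENDED_CAP_SEND_VLAN_V2;
    } else if (a->extended_caps & IAVF_EXTENDED_CAP_RECV_VLAN_V2) {
        recv_vlan_v2_caps(a);
        a->extended_caps &= ~IAVF_EXTENDED_CAP_RECV_VLAN_V2;
    }
    return a->extended_caps != 0;
}
```

A feature that cannot be negotiated would simply clear both of its flags, skipping the rest of its processing, which is why adding a new capability only means adding one more SEND/RECV pair.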
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
      Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    •
      iavf: Add support for 50G/100G in AIM algorithm · d73dd127
      Committed by Mateusz Palczewski
      Advanced link speed support was added some time ago, but AIM
      support was missed. This patch adds AIM support for advanced link
      speeds, which allows the algorithm to take 50G/100G link speeds
      into account. Other, lower speeds are also taken into
      consideration when advanced link speeds are supported.
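The idea of speed-aware adaptive interrupt moderation (AIM) is that faster links tolerate a more aggressive moderation divisor. The tiers below, including the 50G/100G ones the patch adds, are illustrative thresholds, not the driver's exact table:

```c
#include <stdint.h>

/* Illustrative sketch of a speed-aware AIM divisor; the tier
 * boundaries and return values are hypothetical. */
static unsigned int itr_divisor(uint32_t speed_mbps)
{
    if (speed_mbps >= 100000)      /* 100G */
        return 32;
    else if (speed_mbps >= 50000)  /* 50G, added by this patch */
        return 16;
    else if (speed_mbps >= 40000)  /* 40G */
        return 8;
    else if (speed_mbps >= 10000)  /* 10G/25G */
        return 2;
    return 1;                      /* 1G and below */
}
```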
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
      Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
      Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
  3. 01 Mar 2022, 1 commit
  4. 28 Feb 2022, 6 commits
  5. 27 Feb 2022, 8 commits
  6. 26 Feb 2022, 1 commit
  7. 25 Feb 2022, 3 commits