1. 04 Feb 2022, 13 commits
  2. 03 Feb 2022, 17 commits
  3. 02 Feb 2022, 10 commits
    •
      net: dsa: qca8k: introduce qca8k_bulk_read/write function · 4f3701fc
      Committed by Ansuel Smith
      Introduce qca8k_bulk_read/write() functions that use the mgmt Ethernet
      path to read/write packet data in bulk. Make use of these new functions
      in the fdb helpers and, while at it, reduce the register count for
      fdb_read from 4 to 3, since the ARL (fdb) table is at most 83 bits wide.
      Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4f3701fc
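The register-count arithmetic behind the 4-to-3 reduction can be sketched in a few lines. This is a hypothetical userspace reduction, not driver code; `fdb_regs_needed` is an illustrative name, and `DIV_ROUND_UP` is redefined here from its kernel idiom.

```c
#include <assert.h>

/* Kernel-style rounding-up division, redefined for this sketch. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* The ARL (fdb) table is at most 83 bits wide, so reading one entry
 * needs only 3 32-bit registers, not 4. */
static int fdb_regs_needed(int table_bits)
{
    return DIV_ROUND_UP(table_bits, 32);
}
```

With 83 bits, `fdb_regs_needed` returns 3, matching the commit's reasoning; a hypothetical 128-bit table would still need 4.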
    •
      net: dsa: qca8k: add support for larger read/write size with mgmt Ethernet · 90386223
      Committed by Ansuel Smith
      A mgmt Ethernet packet can read/write up to 16 bytes at a time. The len
      register is limited to 15 (0xf), and the switch actually sends and
      accepts data in 4 discrete len steps:
      - 0: nothing
      - 1-4: first 4 bytes
      - 5-6: first 12 bytes
      - 7-15: all 16 bytes

      In the skb allocation function, a len of 16 is clamped to 15. It is up
      to the read/write functions to extract the data that was actually
      requested. The tagger handler always copies the full 16 bytes on a
      READ command. This is useful for large registers, such as the fdb
      ones, that hold more than 4 bytes of data, and it permits a bulk
      function that sends and requests an entire entry in one go.
      The write function is changed to take a pointer to val so it can also
      handle array values.
      Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      90386223
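The len-step quantization described above can be sketched as two small helpers. This is a hypothetical userspace model of the rule, not the driver's actual code; the function names are illustrative.

```c
#include <assert.h>

/* The len field is 4 bits wide, so a requested length of 16 bytes is
 * encoded as 15 (0xf). */
static int mgmt_eth_clamp_len(int len)
{
    return len == 16 ? 15 : len;
}

/* Bytes the switch actually moves on the wire for a given len value:
 * it only understands 4 discrete steps. */
static int mgmt_eth_real_len(int len)
{
    if (len == 0)
        return 0;   /* nothing */
    if (len <= 4)
        return 4;   /* first 4 bytes */
    if (len <= 6)
        return 12;  /* first 12 bytes */
    return 16;      /* 7-15: all 16 bytes */
}
```

This is why the read/write side has to trim the reply down to the length the caller actually asked for: a request for 5 bytes still moves 12 on the wire.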
    •
      net: dsa: qca8k: cache lo and hi for mdio write · 2481d206
      Committed by Ansuel Smith
      From the documentation, we can cache lo and hi the same way we do with
      the page. This massively reduces mdio writes, by roughly 3/4, since
      most of the time a mdio write only needs to update the lo or hi part.
      Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2481d206
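The caching idea can be sketched as follows: a 32-bit register access over mdio is split into a lo and a hi 16-bit half, and a half that matches the last value written can be skipped. This is a hypothetical userspace model, with `mdio_cached_write` returning how many bus writes it would have issued; names and structure are illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct mdio_cache {
    uint16_t lo, hi;
    bool valid;     /* false until the first write populates the cache */
};

/* Returns the number of bus writes performed (0, 1 or 2). */
static int mdio_cached_write(struct mdio_cache *c, uint16_t lo, uint16_t hi)
{
    int writes = 0;

    if (!c->valid || c->lo != lo) {
        c->lo = lo;
        writes++;   /* the real bus write of the lo half would go here */
    }
    if (!c->valid || c->hi != hi) {
        c->hi = hi;
        writes++;   /* the real bus write of the hi half would go here */
    }
    c->valid = true;
    return writes;
}
```

Repeated writes that only change one half touch the bus once instead of twice, which is where the roughly 3/4 reduction comes from once the page is cached too.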
    •
      net: dsa: qca8k: move page cache to driver priv · 4264350a
      Committed by Ansuel Smith
      There can be multiple qca8k switches on the same system. Move the
      static qca8k_current_page into qca8k_priv so the cache is specific to
      each switch.
      Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4264350a
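The shape of the fix can be sketched as below: the cached page lives in the per-switch private struct rather than in a single file-static variable, so two switches no longer trample each other's cache. `qca8k_priv_sketch` and `qca8k_set_page` are illustrative stand-ins, not the driver's real definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct qca8k_priv_sketch {
    uint16_t current_page;  /* was a single static qca8k_current_page */
};

/* Returns true when a bus write to switch the page would be needed. */
static bool qca8k_set_page(struct qca8k_priv_sketch *priv, uint16_t page)
{
    if (priv->current_page == page)
        return false;       /* already on this page, skip the bus write */
    priv->current_page = page;
    return true;
}
```

With the old static variable, switch A changing page would silently invalidate switch B's idea of its current page; per-priv state makes each cache independent.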
    •
      net: dsa: qca8k: add support for phy read/write with mgmt Ethernet · 2cd54856
      Committed by Ansuel Smith
      Use mgmt Ethernet also for phy read/write when available. Use a
      different seq number to make sure we receive the correct packet. On
      any error, fall back to the legacy mdio read/write.
      Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2cd54856
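The fallback logic can be sketched with two callbacks standing in for the real driver paths. Everything here is hypothetical scaffolding: the function pointer type, the stubs, and `phy_read_with_fallback` are illustrative, not driver API.

```c
#include <assert.h>
#include <errno.h>

typedef int (*phy_read_fn)(int phy, int reg, int *val);

/* Try the mgmt Ethernet path first; on any error (or when it is not
 * available at all), fall back to legacy mdio. */
static int phy_read_with_fallback(phy_read_fn eth_read, phy_read_fn mdio_read,
                                  int phy, int reg, int *val)
{
    if (eth_read && eth_read(phy, reg, val) == 0)
        return 0;
    return mdio_read(phy, reg, val);
}

/* Stubs used only for demonstration. */
static int failing_eth_read(int phy, int reg, int *val)
{
    (void)phy; (void)reg; (void)val;
    return -ETIMEDOUT;  /* e.g. completion timed out */
}

static int ok_mdio_read(int phy, int reg, int *val)
{
    (void)phy; (void)reg;
    *val = 42;
    return 0;
}
```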
    •
      net: dsa: qca8k: add support for mib autocast in Ethernet packet · 5c957c7c
      Committed by Ansuel Smith
      The switch can autocast MIB counters using Ethernet packets.
      Add support for this and provide a handler for the tagger.
      The switch sends a packet with MIB counters for each port; the driver
      uses the completion API to wait for the correct packets and completes
      the task only once every port's packet has been received.
      Although the handler drops all other packets, each MIB packet must
      still be consumed to complete the request. This is done to prevent
      mixing data with a concurrent ethtool request.

      connect_tag_protocol() is used to add the handler to the tag_qca
      tagger, and master_state_change() takes the MIB lock to make sure no
      MIB Ethernet operation is in progress.
      Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5c957c7c
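The "complete only when every port's packet has arrived" rule can be sketched with a port bitmask. This is a hypothetical model of the bookkeeping only; the real driver uses the kernel completion API, and `mib_request`/`mib_packet_received` are illustrative names.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct mib_request {
    uint32_t expected_ports;  /* bitmask of ports that will send MIB data */
    uint32_t received_ports;  /* bitmask of ports seen so far */
};

/* Called per incoming MIB packet; returns true once every expected port
 * has been consumed (the real driver would call complete() here). */
static bool mib_packet_received(struct mib_request *req, int port)
{
    req->received_ports |= 1u << port;
    return (req->received_ports & req->expected_ports) == req->expected_ports;
}
```

Consuming every packet, even ones the caller is not interested in, is what keeps a concurrent ethtool request from seeing a half-drained stream of MIB packets.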
    •
      net: dsa: qca8k: add support for mgmt read/write in Ethernet packet · 5950c7c0
      Committed by Ansuel Smith
      Add qca8k-side support for mgmt read/write in Ethernet packets.
      qca8k supports specially crafted Ethernet packets that can be used
      for mgmt read/write instead of the legacy uart/internal mdio method.
      This adds the qca8k-side support to craft such a packet and enqueue
      it. Each port and the qca8k_priv have a dedicated struct to hold the
      data. The completion API is used to wait for the packet to be
      received back with the requested data.

      The steps are:
      1. Craft the special packet with the qca hdr set to mgmt read/write
         mode.
      2. Take the lock in the dedicated mgmt struct.
      3. Increment the seq number and set it in the mgmt packet.
      4. Reinit the completion.
      5. Enqueue the packet.
      6. Wait for the packet to be received.
      7. Use the data set by the tagger to complete the mdio operation.

      If the completion times out or the ack value is not true, the legacy
      mdio path is used instead.

      Note that mdio is still used during initial setup, and remains in
      use until DSA is ready to accept and tag packets.

      tag_proto_connect() is used to fill in the handlers the tagger needs
      to correctly parse and process the special Ethernet mdio packets.

      Locking is added to qca8k_master_change() to make sure no mgmt
      Ethernet operation is in progress.
      Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5950c7c0
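Steps 2-3 above hinge on per-request state carrying a sequence number that is bumped before each packet, so the tagger can match a reply to its request. A minimal sketch of that state, with illustrative names (`mgmt_eth_data`, `mgmt_next_seq`) rather than the driver's real ones:

```c
#include <assert.h>
#include <stdint.h>

struct mgmt_eth_data {
    uint32_t seq;   /* stamped into each mgmt packet */
    int ack;        /* set by the tagger when the matching reply arrives */
};

/* Step 3: increment the seq number and return the value to stamp into
 * the outgoing mgmt packet. A reply carrying any other seq is ignored. */
static uint32_t mgmt_next_seq(struct mgmt_eth_data *mgmt)
{
    mgmt->seq++;
    return mgmt->seq;
}
```

A stale reply from a previous, timed-out request then carries an old seq and is simply dropped, which is why the fallback to legacy mdio on timeout is safe.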
    •
      net: dsa: qca8k: add tracking state of master port · cddbec19
      Committed by Ansuel Smith
      MDIO/MIB Ethernet require the master port and the tagger to be
      available to work correctly. Use the new master_state_change API to
      track when the master is operational and set a bool in qca8k_priv.
      We cache the first available master and, when it goes down, check
      whether another CPU port is operational. This cached master is later
      used by mdio read/write and MIB requests to pick the working path.

      The qca8k hardware support for MDIO/MIB Ethernet is limited: CPU
      port0 is the only port that answers with the ack packet or sends MIB
      Ethernet packets. For this reason, master_state_change ignores CPU
      port6 and only enables this mode when CPU port0 is operational.
      Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cddbec19
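The port0-only rule can be reduced to one predicate. A minimal sketch, with an illustrative function name, assuming exactly the constraint stated above:

```c
#include <assert.h>
#include <stdbool.h>

/* Only CPU port0 answers mgmt Ethernet packets with an ack or sends MIB
 * Ethernet packets, so state changes on CPU port6 never enable the
 * Ethernet mode. */
static bool qca8k_mgmt_eth_usable(int cpu_port, bool operational)
{
    if (cpu_port != 0)
        return false;   /* port6 (or any other port) is ignored */
    return operational;
}
```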
    •
      net/mlx5e: Avoid field-overflowing memcpy() · ad518573
      Committed by Kees Cook
      In preparation for FORTIFY_SOURCE performing compile-time and run-time
      field bounds checking for memcpy(), memmove(), and memset(), avoid
      intentionally writing across neighboring fields.
      
      Use flexible arrays instead of zero-element arrays (which look like they
      are always overflowing) and split the cross-field memcpy() into two halves
      that can be appropriately bounds-checked by the compiler.
      
      We were doing:
      
      	#define ETH_HLEN  14
      	#define VLAN_HLEN  4
      	...
      	#define MLX5E_XDP_MIN_INLINE (ETH_HLEN + VLAN_HLEN)
      	...
              struct mlx5e_tx_wqe      *wqe  = mlx5_wq_cyc_get_wqe(wq, pi);
      	...
              struct mlx5_wqe_eth_seg  *eseg = &wqe->eth;
              struct mlx5_wqe_data_seg *dseg = wqe->data;
      	...
      	memcpy(eseg->inline_hdr.start, xdptxd->data, MLX5E_XDP_MIN_INLINE);
      
      target is wqe->eth.inline_hdr.start (which the compiler sees as being
      2 bytes in size), but copying 18, intending to write across start
      (really vlan_tci, 2 bytes). The remaining 16 bytes get written into
      wqe->data[0], covering byte_count (4 bytes), lkey (4 bytes), and addr
      (8 bytes).
      
      struct mlx5e_tx_wqe {
              struct mlx5_wqe_ctrl_seg   ctrl;                 /*     0    16 */
              struct mlx5_wqe_eth_seg    eth;                  /*    16    16 */
              struct mlx5_wqe_data_seg   data[];               /*    32     0 */
      
              /* size: 32, cachelines: 1, members: 3 */
              /* last cacheline: 32 bytes */
      };
      
      struct mlx5_wqe_eth_seg {
              u8                         swp_outer_l4_offset;  /*     0     1 */
              u8                         swp_outer_l3_offset;  /*     1     1 */
              u8                         swp_inner_l4_offset;  /*     2     1 */
              u8                         swp_inner_l3_offset;  /*     3     1 */
              u8                         cs_flags;             /*     4     1 */
              u8                         swp_flags;            /*     5     1 */
              __be16                     mss;                  /*     6     2 */
              __be32                     flow_table_metadata;  /*     8     4 */
              union {
                      struct {
                              __be16     sz;                   /*    12     2 */
                              u8         start[2];             /*    14     2 */
                      } inline_hdr;                            /*    12     4 */
                      struct {
                              __be16     type;                 /*    12     2 */
                              __be16     vlan_tci;             /*    14     2 */
                      } insert;                                /*    12     4 */
                      __be32             trailer;              /*    12     4 */
              };                                               /*    12     4 */
      
              /* size: 16, cachelines: 1, members: 9 */
              /* last cacheline: 16 bytes */
      };
      
      struct mlx5_wqe_data_seg {
              __be32                     byte_count;           /*     0     4 */
              __be32                     lkey;                 /*     4     4 */
              __be64                     addr;                 /*     8     8 */
      
              /* size: 16, cachelines: 1, members: 3 */
              /* last cacheline: 16 bytes */
      };
      
      So, split the memcpy() so the compiler can reason about the buffer
      sizes.
      
      "pahole" shows no size nor member offset changes to struct mlx5e_tx_wqe
      nor struct mlx5e_umr_wqe. "objdump -d" shows no meaningful object
      code changes (i.e. only source line number induced differences and
      optimizations).
      
      Fixes: b5503b99 ("net/mlx5e: XDP TX forwarding support")
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
      ad518573
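The two-halves split can be reproduced in a userspace reduction. The struct layout below only loosely mirrors the mlx5 one (names and the `data_seg` contents are illustrative), but the technique is the same: instead of one 18-byte memcpy that the compiler sees as overflowing the 2-byte `start[]` member, copy `sizeof(start)` bytes into `start` and the remainder into the adjacent data segment, so each memcpy stays within a single bounds-checkable destination.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MIN_INLINE 18   /* ETH_HLEN (14) + VLAN_HLEN (4) */

struct eth_seg {
    uint16_t sz;
    uint8_t  start[2];  /* the 2-byte member the old copy overflowed */
};

struct data_seg {
    uint8_t raw[16];    /* stand-in for byte_count/lkey/addr (16 bytes) */
};

struct tx_wqe {
    struct eth_seg  eth;
    struct data_seg data[1];  /* flexible array in the real struct */
};

static void copy_inline_hdr(struct tx_wqe *wqe, const uint8_t *src)
{
    /* First half: exactly fills eth.start, trivially bounds-checked. */
    memcpy(wqe->eth.start, src, sizeof(wqe->eth.start));
    /* Second half: the remaining 16 bytes land in data[0]. */
    memcpy(&wqe->data[0], src + sizeof(wqe->eth.start),
           MIN_INLINE - sizeof(wqe->eth.start));
}
```

The resulting bytes in memory are identical to the single cross-field memcpy; only the compiler's view of the destination buffers changes, which is why objdump shows no meaningful object code difference.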
    •
      net/mlx5e: Use struct_group() for memcpy() region · 6d5c900e
      Committed by Kees Cook
      In preparation for FORTIFY_SOURCE performing compile-time and run-time
      field bounds checking for memcpy(), memmove(), and memset(), avoid
      intentionally writing across neighboring fields.
      
      Use struct_group() in struct vlan_ethhdr around members h_dest and
      h_source, so they can be referenced together. This will allow memcpy()
      and sizeof() to more easily reason about sizes, improve readability,
      and avoid future warnings about writing beyond the end of h_dest.
      
      "pahole" shows no size nor member offset changes to struct vlan_ethhdr.
      "objdump -d" shows no object code changes.
      
      Fixes: 34802a42 ("net/mlx5e: Do not modify the TX SKB")
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
      6d5c900e
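The struct_group() idea can be demonstrated in userspace. Below is a simplified rendition of the kernel macro (the real one lives in include/linux/stddef.h and also handles tags and attributes) applied to a sketch of vlan_ethhdr: the union lets h_dest/h_source be addressed either individually or together as one named region, without changing size or member offsets. Requires C11 anonymous structs/unions.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified userspace version of the kernel's struct_group(). */
#define struct_group(NAME, ...)          \
    union {                              \
        struct { __VA_ARGS__ };          \
        struct { __VA_ARGS__ } NAME;     \
    }

struct vlan_ethhdr_sketch {
    struct_group(addrs,
        unsigned char h_dest[6];
        unsigned char h_source[6];
    );
    unsigned short h_vlan_proto;
};
```

A `memcpy(&hdr.addrs, src, sizeof(hdr.addrs))` now has a destination whose size (12 bytes) covers exactly both MAC addresses, so FORTIFY_SOURCE no longer sees a write beyond the end of h_dest; pahole-style checks confirm the layout is unchanged.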