1. 10 Sep, 2020 (3 commits)
  2. 04 Sep, 2020 (1 commit)
  3. 04 Jul, 2020 (4 commits)
  4. 03 Sep, 2019 (1 commit)
    • mvpp2: percpu buffers · 7d04b0b1
      Committed by Matteo Croce
      Every mvpp2 unit can use up to 8 buffer pools mapped by the BM (the HW
      buffer manager). The HW places each frame in a buffer pool depending on
      the frame size: short (< 128 bytes), long (< 1664 bytes) or jumbo (up
      to 9856 bytes).
      
      As any unit can have up to 4 ports, the driver allocates only 2 pools,
      one for short and one for long frames, and shares them between the
      ports. When the first port's MTU is set higher than 1664 bytes, a third
      pool is allocated for jumbo frames.
      
      This shared allocation makes it impossible to use percpu allocators,
      and creates contention between HW queues.
      
      If possible, i.e. if the number of possible CPUs is less than 8 and
      jumbo frames are not used, switch to a new scheme: allocate 8 per-CPU
      pools for short and long frames and bind every pool to an RXQ.
      
      When the first port's MTU is set higher than 1664 bytes, the allocation
      scheme reverts to the old behaviour (3 shared pools); when all ports'
      MTUs are lowered again, the per-CPU buffers are re-allocated.
      Signed-off-by: Matteo Croce <mcroce@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
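      As an illustration, here is a minimal sketch of the pool-scheme choice
      described above. It is not the actual driver code: the helper name and
      the MTU-based check are hypothetical; only the 8-pool limit and the
      1664-byte long-frame threshold come from the commit message.

          /* Hypothetical sketch, not the actual mvpp2 code: per-CPU pools
           * only fit when every possible CPU can own one of the 8 BM pools
           * and no third (jumbo) pool is needed. */
          #include <linux/cpumask.h>
          #include <linux/types.h>

          #define EXAMPLE_BM_POOLS_MAX    8
          #define EXAMPLE_BM_LONG_FRAME   1664

          static bool example_can_use_percpu_pools(int max_port_mtu)
          {
                  bool jumbo_needed = max_port_mtu > EXAMPLE_BM_LONG_FRAME;

                  return num_possible_cpus() <= EXAMPLE_BM_POOLS_MAX &&
                         !jumbo_needed;
          }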
  5. 15 Aug, 2019 (1 commit)
  6. 11 Jun, 2019 (1 commit)
  7. 30 May, 2019 (1 commit)
    • net: phylink: Add struct phylink_config to PHYLINK API · 44cc27e4
      Committed by Ioana Ciornei
      The phylink_config structure will encapsulate a pointer to a struct
      device and the operation type requested for this instance of PHYLINK.
      This patch does not make any functional changes; it just transitions
      the PHYLINK internals and all its users to the new API.
      
      A pointer to a phylink_config structure will be passed to
      phylink_create() instead of the net_device directly. Also, the same
      phylink_config pointer will be passed back to all phylink_mac_ops
      callbacks instead of the net_device. Using this mechanism, a PHYLINK
      user can recover the original net_device with a call such as
      'to_net_dev(config->dev)', or obtain the structure containing the
      phylink_config directly via a container_of() call.
      
      At the moment, only PHYLINK_NETDEV is defined as a valid operation
      type for PHYLINK. In this mode, a valid reference to a struct device
      linked to the original net_device should be passed to PHYLINK through
      the phylink_config structure.
      
      This API change is mainly driven by the need to add a new operation
      type in PHYLINK that disconnects the phy_device from the net_device,
      and that also works when no net_device is present.
      Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
      Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Reviewed-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Tested-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
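      For illustration, a sketch of the new pattern. struct phylink_config,
      PHYLINK_NETDEV, to_net_dev() and container_of() are real kernel API;
      the example_priv structure and the callback body are hypothetical.

          #include <linux/netdevice.h>    /* to_net_dev() */
          #include <linux/phylink.h>      /* struct phylink_config */

          /* Hypothetical driver-private structure embedding the config. */
          struct example_priv {
                  struct phylink_config phylink_config;
                  /* ... other driver state ... */
          };

          /* Hypothetical phylink_mac_ops callback: the net_device is no
           * longer passed in; it is recovered from the config. */
          static void example_mac_config(struct phylink_config *config,
                                         unsigned int mode,
                                         const struct phylink_link_state *state)
          {
                  struct net_device *ndev = to_net_dev(config->dev);
                  struct example_priv *priv =
                          container_of(config, struct example_priv,
                                       phylink_config);

                  /* program the MAC for 'state', using ndev and priv */
          }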
  8. 26 May, 2019 (3 commits)
  9. 02 May, 2019 (2 commits)
    • net: mvpp2: cls: Allow dropping packets with classification offload · bec2d46d
      Committed by Maxime Chevallier
      This commit introduces support for the "Drop" action in classification
      offload. This corresponds to the "-1" action with ethtool -N.
      
      This is achieved using the color marking actions available in the C2
      engine, which associate a color with each packet. The color can be
      Green, Yellow or Red, with Red meaning that the packet should be
      dropped.
      
      Green and Yellow colors are interpreted by the Policer, which isn't
      supported yet.
      
      This method of dropping via the Classifier differs from the existing
      early-drop features, such as VLAN filtering and MAC UC/MC filtering,
      which are performed during the Parsing step and therefore take
      precedence over classification actions.
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
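      As a usage note, such a drop rule looks like "ethtool -N eth0
      flow-type udp4 dst-port 1234 action -1". Below is a hypothetical
      sketch of mapping that action to a color: RX_CLS_FLOW_DISC is
      ethtool's real encoding of "action -1", while the color enum and the
      function are illustrative, not the driver's actual C2 encoding.

          #include <linux/ethtool.h>      /* RX_CLS_FLOW_DISC */
          #include <linux/types.h>

          /* Illustrative colors; the real C2 register encoding differs. */
          enum example_c2_color { C2_COLOR_GREEN, C2_COLOR_YELLOW,
                                  C2_COLOR_RED };

          static enum example_c2_color example_action_to_color(u64 ring_cookie)
          {
                  /* ethtool's "action -1" arrives as RX_CLS_FLOW_DISC; Red
                   * marks the packet for dropping, anything else stays
                   * Green for normal processing. */
                  if (ring_cookie == RX_CLS_FLOW_DISC)
                          return C2_COLOR_RED;
                  return C2_COLOR_GREEN;
          }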
    • net: mvpp2: cls: Add Classification offload support · 90b509b3
      Committed by Maxime Chevallier
      This commit introduces basic classification offloading support for the
      PPv2 controller.
      
      The PPv2 classifier has many classification engines; for now, we only
      use the C2 TCAM match engine.
      
      This engine performs ternary lookups on 64-bit keys (called Header
      Extracted Keys, or HEKs), which are built by extracting fields from
      the packet header and concatenating them. At most 4 fields can be
      extracted for a single lookup.
      
      This basic implementation allows the HEK to be built from the
      following fields:
       - L4 source and destination ports (for UDP and TCP)
      
      More fields are to be added in the future.
      
      Classification flows are added through the ethtool interface, using
      the newly introduced flow_rule infrastructure as an internal rule
      representation, which will make it easier to implement tc flower rules
      if need be.
      
      For now, the internal design allocates one range of 4 rules per port,
      due to the layout of the flow table, which uses 22 sub-flows.
      
      When inserting a classification rule, the rule is created in every
      relevant sub-flow.
      
      This low rule count is a very simple design that quickly reaches the
      limitations of flow table ordering, but it guarantees that the rule
      ordering will always be respected.
      
      This commit only introduces support for the "steer to rxq" action.
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
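      A sketch of consuming an ethtool rule through the flow_rule
      infrastructure mentioned above. flow_rule_match_key() and
      flow_rule_match_ports() are the real helpers from
      <net/flow_offload.h>; the surrounding function is hypothetical.

          #include <linux/errno.h>
          #include <net/flow_offload.h>

          /* Hypothetical parser: only L4 ports can feed the HEK for now. */
          static int example_parse_ports(const struct flow_rule *rule,
                                         __be16 *port, __be16 *mask)
          {
                  struct flow_match_ports match;

                  if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS))
                          return -EOPNOTSUPP;

                  flow_rule_match_ports(rule, &match);
                  /* 'key' holds the value and 'mask' the bits that take
                   * part in the ternary (TCAM) lookup on the HEK. */
                  *port = match.key->dst;
                  *mask = match.mask->dst;
                  return 0;
          }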
  10. 28 Mar, 2019 (4 commits)
  11. 02 Mar, 2019 (6 commits)
  12. 09 Feb, 2019 (1 commit)
  13. 31 Oct, 2018 (1 commit)
    • net: mvpp2: Fix affinity hint allocation · a6b3a3fa
      Committed by Marc Zyngier
      The mvpp2 driver has the curious behaviour of passing a stack variable
      to irq_set_affinity_hint(), which results in the kernel exploding
      the first time anyone accesses this information. News flash: userspace
      does, and irqbalance will happily take the machine down. Great stuff.
      
      An easy fix is to track the mask within the queue_vector structure,
      and to make sure it has the same lifetime as the interrupt itself.
      
      Fixes: e531f767 ("net: mvpp2: handle cases where more CPUs are available than s/w threads")
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
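      The bug pattern, reduced to a hypothetical sketch:
      irq_set_affinity_hint() stores the pointer it is given, so the mask
      must live at least as long as the interrupt, not on the stack.

          #include <linux/cpumask.h>
          #include <linux/interrupt.h>    /* irq_set_affinity_hint() */

          /* Hypothetical per-queue structure; keeping the mask here gives
           * it the same lifetime as the interrupt itself. */
          struct example_queue_vector {
                  int irq;
                  cpumask_t mask;
          };

          static int example_setup_affinity(struct example_queue_vector *qv,
                                            int cpu)
          {
                  /* A stack-allocated cpumask here would be the bug: the
                   * kernel keeps the pointer, and the first procfs access
                   * (e.g. by irqbalance) would read freed stack memory. */
                  cpumask_clear(&qv->mask);
                  cpumask_set_cpu(cpu, &qv->mask);
                  return irq_set_affinity_hint(qv->irq, &qv->mask);
          }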
  14. 25 Sep, 2018 (1 commit)
  15. 20 Sep, 2018 (8 commits)
  16. 16 Jul, 2018 (2 commits)
    • net: mvpp2: debugfs: add classifier hit counters · f9d30d5b
      Committed by Maxime Chevallier
      The classification operations used for RSS make use of several lookup
      tables. Having hit counters for these tables is really helpful for
      determining which flows were matched by ingress traffic, and for
      following the path of packets through the classifier tables.
      
      This commit adds hit counters for the 3 tables used at the moment:
      
       - The decoding table (also called the lookup_id table), which links
         flows identified by the Header Parser to the flow table.
      
         There's one entry per flow, located at:
         .../mvpp2/<controller>/flows/XX/dec_hits
      
         Note that there are 21 flows in the decoding table, whereas there
         are 52 flows in the Header Parser. That's because several kinds of
         traffic will match a given flow. Reading the hit counter from one
         sub-flow will clear all hit counters that share the same flow_id.
      
         This also applies to the flow_hits.
      
       - The flow table, which contains all the different lookups to be
         performed by the classifier for each packet of a given flow. The
         match is done on the first entry of the flow sequence.
      
         There's one entry per flow, located at:
         .../mvpp2/<controller>/flows/XX/flow_hits
      
       - The C2 engine entries, which are used to assign the default rx
         queue, and to enable or disable RSS for a given port.
      
         There is one C2 entry per port, so the c2 hit counter is located at:
         .../mvpp2/<controller>/ethX/c2_hits
      
      All hit counters are 16-bit clear-on-read values.
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
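      A sketch of exposing such a clear-on-read counter through debugfs.
      DEFINE_SHOW_ATTRIBUTE() and debugfs_create_file() are the standard
      kernel helpers; the register-pointer plumbing is hypothetical.

          #include <linux/debugfs.h>
          #include <linux/io.h>
          #include <linux/seq_file.h>

          static int example_hits_show(struct seq_file *s, void *unused)
          {
                  void __iomem *reg = s->private;

                  /* The HW counter is 16 bits and clears on read, so each
                   * read of the file reports hits since the previous read. */
                  seq_printf(s, "%u\n", readl(reg) & 0xffff);
                  return 0;
          }
          DEFINE_SHOW_ATTRIBUTE(example_hits);

          /* Registration, e.g. for a decoding-table counter:
           * debugfs_create_file("dec_hits", 0444, parent_dir, counter_reg,
           *                     &example_hits_fops);
           */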
    • net: mvpp2: debugfs: add entries for classifier flows · dba1d918
      Committed by Maxime Chevallier
      The classifier configuration for RSS is quite complex, with several
      lookup tables being used. This commit adds useful info in debugfs to
      show how the different tables are configured:
      
      Added 2 new entries in the per-port directory:
      
        - .../eth0/default_rxq : The default rx queue on that port
        - .../eth0/rss_enable  : Indicates whether RSS is enabled in the C2
                                 entry
      
      Added the 'flows' directory:
      
        It contains one entry per sub-flow. A 'sub-flow' is a unique path
        from the Header Parser to the flow table. Multiple sub-flows can
        point to the same 'flow' (each flow has an id from 8 to 29, which is
        its index in the Lookup Id table):
      
        - .../flows/00/...
                   /01/...
                   ...
                   /51/id : The flow id. There are 21 unique flows. There is
                            one flow per combination of the following
                            parameters:
                            - L4 protocol (TCP, UDP, none)
                            - L3 protocol (IPv4, IPv6)
                            - L3 parameters (fragmented or not)
                            - L2 parameters (VLAN tag present or not)
              .../type : The flow type. This is an even higher-level flow,
                         the one manipulated with ethtool. It can be "udp4",
                         "tcp4", "udp6", "tcp6", "ipv4", "ipv6" or "other".
              .../eth0/...
              .../eth1/engine : The hash generation engine used for this
                                flow on the given port
                  .../hash_opts : The hash generation options indicating
                                  what data the hash is based on (VLAN tag,
                                  src IP, src port, etc.)
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
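      A sketch of building the per-sub-flow tree described above.
      debugfs_create_dir() is the real API; the function and the way the
      children would be attached are illustrative.

          #include <linux/debugfs.h>
          #include <linux/kernel.h>       /* snprintf() */

          static void example_create_flow_dirs(struct dentry *root)
          {
                  struct dentry *flows;
                  char name[8];
                  int i;

                  flows = debugfs_create_dir("flows", root);

                  /* One directory per sub-flow, 00..51. The 'id' and 'type'
                   * entries, and the per-port engine/hash_opts entries,
                   * would then be created under each of them. */
                  for (i = 0; i < 52; i++) {
                          snprintf(name, sizeof(name), "%02d", i);
                          debugfs_create_dir(name, flows);
                  }
          }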