1. 20 Sep 2018 — 7 commits
  2. 17 Sep 2018 — 1 commit
    • net: mvpp2: let phylink manage the carrier state · 41948ccb
      Committed by Antoine Tenart
      Net drivers using phylink shouldn't mess with the link carrier
      themselves and should let phylink manage it. The mvpp2 driver wasn't
      following this best practice, as the mac_config() function made calls to
      change the link carrier state. This led to a wrongly reported carrier
      state, which then triggered other issues. This patch fixes this
      behaviour.
      
      But the PPv2 driver relied on this misbehaviour in two cases: for fixed
      links and when not using phylink (ACPI mode). The latter was fixed by
      adding an explicit call to link_up(), which should be removed once the
      ACPI mode uses phylink.
      
      The fixed link case was relying on the mac_config() function to set the
      link up, as we found an issue in phylink_start(), which assumes the
      carrier is off; if it is not, the link_up() function is never called. To
      fix this, a call to netif_carrier_off() is added just before
      phylink_start() so that we do not introduce a regression in the driver.
      
      Fixes: 4bb04326 ("net: mvpp2: phylink support")
      Reported-by: Russell King <linux@armlinux.org.uk>
      Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 03 Sep 2018 — 3 commits
  4. 30 Aug 2018 — 2 commits
  5. 11 Aug 2018 — 1 commit
  6. 29 Jul 2018 — 8 commits
  7. 19 Jul 2018 — 1 commit
  8. 16 Jul 2018 — 5 commits
    • net: mvpp2: debugfs: add classifier hit counters · f9d30d5b
      Committed by Maxime Chevallier
      The classification operations that are used for RSS make use of several
      lookup tables. Having hit counters for these tables is really helpful
      to determine what flows were matched by ingress traffic, and see the
      path of packets among all the classifier tables.
      
      This commit adds hit counters for the 3 tables used at the moment:
      
       - The decoding table (also called lookup_id table), that links flows
         identified by the Header Parser to the flow table.

         There's one entry per flow, located at:
         .../mvpp2/<controller>/flows/XX/dec_hits

         Note that there are 21 flows in the decoding table, whereas there are
         52 flows in the Header Parser. That's because several kinds of
         traffic will match a given flow. Reading the hit counter from one
         sub-flow will clear all hit counters that share the same flow_id.

         This also applies to the flow_hits.
      
       - The flow table, that contains all the different lookups to be
         performed by the classifier for each packet of a given flow. The match
         is done on the first entry of the flow sequence.

         There's one entry per flow, located at:
         .../mvpp2/<controller>/flows/XX/flow_hits

       - The C2 engine entries, that are used to assign the default rx queue,
         and enable or disable RSS for a given port.

         There is one C2 entry per port, so the C2 hit counter is located at:
         .../mvpp2/<controller>/ethX/c2_hits
      
      All hit counter values are 16-bit clear-on-read values.
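      As an illustration, the clear-on-read semantics described above can be
      modelled in a few lines of C. Names are made up for this sketch; the
      real counters live in PPv2 registers, not in a struct:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of a 16-bit clear-on-read hit counter, as exposed by the
 * PPv2 debugfs entries described above. The struct and helpers are
 * illustrative, not the driver's actual code. */
struct hit_counter {
    uint16_t hits;
};

/* Each lookup match bumps the counter (saturating in this model). */
static void hit_counter_bump(struct hit_counter *c)
{
    if (c->hits != UINT16_MAX)
        c->hits++;
}

/* Reading returns the current value and clears it, so a second read
 * without traffic in between returns 0. */
static uint16_t hit_counter_read(struct hit_counter *c)
{
    uint16_t val = c->hits;

    c->hits = 0;
    return val;
}
```

      The clear-on-read behaviour is what makes reading one sub-flow's
      counter also reset the counters sharing the same flow_id.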
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: debugfs: add entries for classifier flows · dba1d918
      Committed by Maxime Chevallier
      The classifier configuration for RSS is quite complex, with several
      lookup tables being used. This commit adds useful info in debugfs to
      see how the different tables are configured:

      Added 2 new entries in the per-port directory:

        - .../eth0/default_rxq : The default rx queue on that port
        - .../eth0/rss_enable : Indicates if RSS is enabled in the C2 entry

      Added the 'flows' directory:

        It contains one entry per sub-flow. A 'sub-flow' is a unique path from
        the Header Parser to the flow table. Multiple sub-flows can point to
        the same 'flow' (each flow has an id from 8 to 29, which is its index
        in the Lookup Id table):
      
        - .../flows/00/...
                   /01/...
                   ...
                   /51/id : The flow id. There are 21 unique flows. There's one
                            flow per combination of the following parameters:
                            - L4 protocol (TCP, UDP, none)
                            - L3 protocol (IPv4, IPv6)
                            - L3 parameters (fragmented or not)
                            - L2 parameters (VLAN tag presence or not)
                    .../type : The flow type. This is an even higher-level flow
                               that we manipulate with ethtool. It can be:
                               "udp4" "tcp4" "udp6" "tcp6" "ipv4" "ipv6" "other".
                    .../eth0/...
                    .../eth1/engine : The hash generation engine used for this
      	                        flow on the given port
                        .../hash_opts : The hash generation options indicating on
                                        what data we base the hash (vlan tag, src
                                        IP, src port, etc.)
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: debugfs: add hit counter stats for Header Parser entries · 1203341c
      Committed by Maxime Chevallier
      One helpful feature when debugging the Header Parser TCAM filter in PPv2
      is to be able to see if the entries matched something when a packet
      comes in. This can be done by using the built-in hit counter for TCAM
      entries.

      This commit implements reading the counter, and exposing its value on
      debugfs for each filter entry.

      The counter is a 16-bit clear-on-read value, located at:
       .../mvpp2/<controller>/parser/XXX/hits
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: add a debugfs interface for the Header Parser · 21da57a2
      Committed by Maxime Chevallier
      The Marvell PPv2 Packet Header Parser has a TCAM based filter that is
      not trivial to configure and debug. Being able to dump TCAM entries from
      userspace can be really helpful for developing new features and
      debugging existing ones.
      
      This commit adds a basic debugfs interface for the PPv2 driver, focusing
      on TCAM related features.
      
      <mnt>/mvpp2/ --- f2000000.ethernet
                    \- f4000000.ethernet --- parser --- 000 ...
                                          |          \- 001
                                          |          \- ...
                                          |          \- 255 --- ai
                                          |                  \- header_data
                                          |                  \- lookup_id
                                          |                  \- sram
                                          |                  \- valid
                                          \- eth1 ...
                                          \- eth2 --- mac_filter
                                                   \- parser_entries
                                                   \- vid_filter
      
      There's one directory per PPv2 instance, named after pdev->name to make
      sure names are unique. Each of these directories contains:
      
       - one directory per interface on the controller, each containing:

         - "mac_filter", which lists all filtered addresses for this port
           (based on TCAM, not on the kernel's uc / mc lists)

         - "parser_entries", which lists the indices of all valid TCAM
           entries that have this port in their port map

         - "vid_filter", which lists the VIDs allowed on this port, based on
           TCAM

       - one "parser" directory (the parser is common to all ports), containing:

         - one directory per TCAM entry (256 of them, from 0 to 255), each
           containing:

           - "ai" : Contains the 1-byte Additional Info field from TCAM

           - "header_data" : Contains the 8-byte Header Data extracted from
             the packet

           - "lookup_id" : Contains the 4-bit LU_ID

           - "sram" : Contains the raw SRAM data, which is the result of the
             TCAM lookup. This is read-only at the moment.

           - "valid" : Indicates if the entry is valid or not.
      
      All entries are read-only, and everything is output in hex form.
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: switch to SPDX identifiers · f1e37e31
      Committed by Antoine Tenart
      Use the appropriate SPDX license identifiers and drop the license text.
      This patch is only cosmetic.
      Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 14 Jul 2018 — 1 commit
  10. 13 Jul 2018 — 11 commits
    • net: mvpp2: allow setting RSS flow hash parameters with ethtool · 436d4fdb
      Committed by Maxime Chevallier
      This commit allows setting the RSS hash generation parameters from
      ethtool. When setting parameters for a given flow type from ethtool
      (e.g. tcp4), all the corresponding flows in the flow table are updated,
      according to the supported hash parameters.
      
      For example, when configuring the TCP over IPv4 hash parameters to be
      src/dst IP + src/dst port ("ethtool -N eth0 rx-flow-hash tcp4 sdfn"),
      we only set the "src/dst port" hash parameters on the non-fragmented TCP
      over IPv4 flows.
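      For reference, the ethtool field letters used in the example above can
      be decoded with a small illustrative parser. The flag names and values
      below are made up for this sketch; the real driver works with ethtool's
      RXH_* flags instead:

```c
#include <assert.h>

/* Illustrative decoding of the ethtool rx-flow-hash field letters
 * ("sdfn" in the example above). Flag values are arbitrary here. */
#define HASH_IP_SRC (1 << 0) /* 's': source IP address */
#define HASH_IP_DST (1 << 1) /* 'd': destination IP address */
#define HASH_L4_SRC (1 << 2) /* 'f': L4 source port */
#define HASH_L4_DST (1 << 3) /* 'n': L4 destination port */

/* Turn a field-letter string into a bitmask of hash inputs,
 * returning -1 on an unsupported letter. */
static int parse_hash_fields(const char *s)
{
    int flags = 0;

    for (; *s; s++) {
        switch (*s) {
        case 's': flags |= HASH_IP_SRC; break;
        case 'd': flags |= HASH_IP_DST; break;
        case 'f': flags |= HASH_L4_SRC; break;
        case 'n': flags |= HASH_L4_DST; break;
        default:  return -1;
        }
    }
    return flags;
}
```

      With this decoding, "sdfn" selects src/dst IP plus src/dst port, which
      is exactly the combination the commit message describes.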
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: add an RSS classification step for each flow · d33ec452
      Committed by Maxime Chevallier
      One of the classification actions that can be performed is to compute a
      hash of the packet header based on some header fields, and look up an
      RSS table based on this hash to determine the final RxQ.
      
      This is done by adding one lookup entry per flow per port, so that we
      can configure the hash generation parameters for each flow and each
      port.
      
      There are 2 possible engines that can be used for RSS hash generation:

       - C3HA, that generates a hash based on up to 4 header-extracted fields
       - C3HB, that does the same as C3HA, but also includes L4 info in the hash
      
      There are a lot of fields that can be extracted from the header. For now,
      we only use the ones that we can configure using ethtool:
       - DST MAC address
       - L3 info
       - Source IP
       - Destination IP
       - Source port
       - Destination port
      
      The C3HB engine is selected when we use L4 fields (src/dst port).
      
                     Header parser          Dec table
       Ingress pkt  +-------------+ flow id +----------------------------+
      ------------->| TCAM + SRAM |-------->|TCP IPv4 w/ VLAN, not frag  |
                    +-------------+         |TCP IPv4 w/o VLAN, not frag |
                                            |TCP IPv4 w/ VLAN, frag      |--+
                                            |etc.                        |  |
                                            +----------------------------+  |
                                                                            |
                                                  Flow table                |
        +---------+   +------------+         +--------------------------+   |
        | RSS tbl |<--| Classifier |<--------| flow 0: C2 lookup        |   |
        +---------+   +------------+         |         C3 lookup port 0 |   |
                       |         |           |         C3 lookup port 1 |   |
               +-----------+ +-------------+ |         ...              |   |
               | C2 engine | | C3H engines | | flow 1: C2 lookup        |<--+
               +-----------+ +-------------+ |         C3 lookup port 0 |
                                             |         ...              |
                                             | ...                      |
                                             | flow 51 : C2 lookup      |
                                             |           ...            |
                                             +--------------------------+
      
      The C2 engine also gains the role of enabling and disabling the RSS
      table lookup for this packet.
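      The engine choice described above (C3HB as soon as an L4 field is part
      of the hash, C3HA otherwise) can be sketched as follows. The flag names
      and values are illustrative, not the driver's:

```c
#include <assert.h>
#include <string.h>

/* Illustrative hash-input flags; values are arbitrary for this sketch. */
#define FIELD_IP_SRC (1 << 0)
#define FIELD_IP_DST (1 << 1)
#define FIELD_L4_SRC (1 << 2)
#define FIELD_L4_DST (1 << 3)

/* Pick the RSS hash generation engine for a set of hash inputs:
 * C3HB is required when L4 info (src/dst port) is hashed, C3HA
 * covers the purely header-extracted cases. */
static const char *select_hash_engine(int fields)
{
    if (fields & (FIELD_L4_SRC | FIELD_L4_DST))
        return "C3HB"; /* hash includes L4 info */
    return "C3HA";     /* up to 4 header-extracted fields, no L4 */
}
```

      So a src/dst IP hash stays on C3HA, while adding a port field flips
      the flow's lookup entry to C3HB.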
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: split ingress traffic into multiple flows · f9358e12
      Committed by Maxime Chevallier
      The PPv2 classifier allows classification operations to be performed on
      each ingress packet, based on the flow the packet is assigned to.

      The current code uses only 1 flow per port, and the only classification
      action consists of assigning the rx queue to the packet, depending on
      the port.
      
      In preparation for adding RSS support, we have to split all incoming
      traffic into different flows. Since RSS assigns a rx queue depending on
      the hash of some header fields, we have to make sure that the hash is
      generated in a consistent way for all packets in the same flow.
      
      What we call a "flow" is actually a set of attributes attached to a
      packet that depends on various L2/L3/L4 info.
      
      This patch introduces 52 flows, which are a combination of various L2,
      L3 and L4 attributes:
       - Whether or not the packet has a VLAN tag
       - Whether the packet is IPv4, IPv6 or something else
       - Whether the packet is TCP, UDP or something else
       - Whether or not the packet is fragmented at L3 level
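      The attribute combinations above can be sketched as a toy flow key.
      The names and bit layout are illustrative; the driver derives its
      flows from Header Parser results, not from a bitfield like this:

```c
#include <assert.h>

/* Illustrative encodings of the per-packet attributes listed above. */
enum l3_proto { L3_IP4, L3_IP6, L3_OTHER };
enum l4_proto { L4_TCP, L4_UDP, L4_OTHER };

/* Pack each attribute into its own bit range so that every distinct
 * combination of L2/L3/L4 attributes yields a distinct flow key. */
static int flow_key(enum l3_proto l3, enum l4_proto l4,
                    int has_vlan, int is_frag)
{
    return (l3 << 4) | (l4 << 2) | (!!has_vlan << 1) | !!is_frag;
}
```

      Two packets differing in any one attribute get different keys, which
      is what lets the classifier hash them consistently per flow.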
      
      The flow is associated with a packet by the Header Parser. Each flow
      corresponds to an entry in the decoding table. This entry then points to
      the sequence of classification lookups to be performed by the
      classifier, represented in the flow table.
      
      For now, the only lookup we perform is a C2 lookup to set the default
      rx queue.
      
                     Header parser          Dec table
       Ingress pkt  +-------------+ flow id +----------------------------+
      ------------->| TCAM + SRAM |-------->|TCP IPv4 w/ VLAN, not frag  |
                    +-------------+         |TCP IPv4 w/o VLAN, not frag |
                                            |TCP IPv4 w/ VLAN, frag      |--+
                                            |etc.                        |  |
                                            +----------------------------+  |
                                                                            |
                                 Flow table                                 |
                      +------------+        +---------------------+         |
           To RxQ <---| Classifier |<-------| flow 0: C2 lookup   |<--------+
                      +------------+        | flow 1: C2 lookup   |
                             |              | ...                 |
                      +------------+        | flow 51: C2 lookup  |
                      | C2 engine  |        +---------------------+
                      +------------+
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: use classifier to assign default rx queue · b1a962c6
      Committed by Maxime Chevallier
      The PPv2 Controller has a classifier, that can perform multiple lookup
      operations for each packet, using different engines.
      
      One of these engines is the C2 engine, which performs TCAM based lookups
      on data extracted from the packet header. When a packet matches an
      entry, the engine sets various attributes, used to perform
      classification operations.
      
      One of these attributes is the rx queue to which the packet should be
      sent. The current code uses the lookup_id table (also called decoding
      table) to assign the rx queue. However, this only works if we use one
      entry per port in the decoding table, which won't be the case once we
      add RSS lookups.
      
      This patch uses the C2 engine to assign the rx queue to each packet.
      
      The C2 engine is used through the flow table, which dictates what
      classification operations are done for a given flow.
      
      Right now, we have one flow per port, which contains every ingress
      packet for this port.
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: rename per-port RSS init function · e6e21c02
      Committed by Maxime Chevallier
      The mvpp22_init_rss function configures the RSS parameters for each
      port, so rename it accordingly. Since this function relies on the
      classifier configuration, move its call right after the classifier
      config.
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: make sure we don't spread load on disabled CPUs · 2a2f467d
      Committed by Maxime Chevallier
      When filling the RSS table, we have to make sure that the rx queue is
      attached to an online CPU.
      
      This patch is not full cpu hotplug support, but rather a way to make
      sure that we don't break networking on systems booted with the maxcpus
      parameter.
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: improve the distribution of packets on CPUs when using RSS · 662ae3fe
      Committed by Antoine Tenart
      This patch adds an extra indirection when setting the indirection table
      into the RSS hardware table, to improve the packet distribution across
      CPUs. For example, if 2 queues are used on a multi-core system, this new
      indirection will choose two queues on two different CPUs instead of the
      first two queues, which are on the same CPU.
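      The idea can be modelled with a toy mapping. Assuming rx queues are
      split evenly across CPUs (queues [0..q) on CPU 0, [q..2q) on CPU 1,
      and so on), the formula below round-robins consecutive table entries
      across CPUs; it is an illustration of the spread, not the driver's
      exact computation:

```c
#include <assert.h>

/* Map the i-th indirection-table entry to an rx queue so that
 * consecutive entries land on different CPUs. nrxqs queues are
 * assumed evenly split across ncpus CPUs. */
static int spread_rxq(int i, int nrxqs, int ncpus)
{
    int per_cpu = nrxqs / ncpus;

    /* Walk CPUs round-robin, then queues within each CPU. */
    return (i % ncpus) * per_cpu + (i / ncpus) % per_cpu;
}
```

      With 8 queues on 2 CPUs, entry 0 maps to queue 0 (CPU 0) and entry 1
      to queue 4 (CPU 1), instead of queues 0 and 1 on the same CPU.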
      Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: RSS indirection table support · 8179642b
      Committed by Antoine Tenart
      This patch adds RSS indirection table support, allowing the ethtool -x
      and -X options to be used to dump and set this table.
      Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
      [Maxime: Small warning fixes, use one table per port]
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: use one RSS table per port · a27a254c
      Committed by Maxime Chevallier
      The PPv2 controller has 8 RSS tables, of 32 entries each. A lookup in
      the RXQ2RSS_TABLE is performed for each incoming packet, and the RSS
      table to be used is chosen according to the default rx queue that would
      be used for the packet.
      
      This default rx queue is set in the Lookup_id Table (also called
      Decoding Table), and is equal to the port->first_rxq.
      
      Since the classifier itself isn't active for the moment, this doesn't
      have a direct effect; the default rx queue is currently the one all
      packets end up in.
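      A toy model of the per-port table selection described above. The
      constants and helper names are assumptions for this sketch (PPv2 does
      have 8 tables of 32 entries; the per-port queue count is made up):

```c
#include <assert.h>

#define RSS_TABLES    8 /* PPv2 has 8 RSS tables of 32 entries */
#define RXQS_PER_PORT 4 /* assumed per-port rx queue count */

/* The default rx queue for a port is its first rx queue. */
static int port_first_rxq(int port_id)
{
    return port_id * RXQS_PER_PORT;
}

/* RXQ2RSS lookup: derive the RSS table index from the packet's
 * default rx queue, giving each port its own table. */
static int rxq_to_rss_table(int default_rxq)
{
    return (default_rxq / RXQS_PER_PORT) % RSS_TABLES;
}
```

      Because the default rx queue is port->first_rxq, each port's packets
      consistently hit the same table, keeping per-port RSS configs disjoint.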
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: fix RSS register definitions · 4b86097b
      Committed by Maxime Chevallier
      There is no RSS_TABLE register in PPv2 Controller. The register 0x1510
      which was specified is actually named "RSS_HASH_SEL", but isn't used by
      this driver at all.
      
      Based on how this register was used, it should have been the
      RXQ2RSS_TABLE register, which allows selecting the RSS table that will
      be used for the incoming packet.
      
      The RSS_TABLE_POINTER is actually a field of this RXQ2RSS_TABLE
      register.
      
      Since RSS tables are actually not used by the driver for now, this
      commit does not fix a runtime bug.
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mvpp2: fix a typo in the RSS code · 132baa03
      Committed by Antoine Tenart
      Cosmetic patch fixing a typo in one of the RSS comments.
      Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
      Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>