1. 12 Feb, 2021: 2 commits
    • octeontx2-pf: cn10k: Map LMTST region · 6e8ad438
      Geetha sowjanya authored
      On the CN10K platform, transmit/receive buffer alloc and free from/to
      hardware has changed to support burst operation, whereas previous
      silicons only support freeing a single buffer at a time.
      To support this, firmware allocates a DRAM region for each PF/VF for
      storing LMTLINES. These LMTLINES are used for NPA batch free and for
      flushing SQEs to the hardware.
      The PF/VF LMTST region is accessed via BAR4. The PF's LMTST region is
      followed by its VFs' mbox memory. The size of the region varies from 2KB
      to 256KB based on the number of LMTLINES configured.
      
      This patch adds support for:
      - Mapping the PF/VF LMTST region.
      - Reserving LMTST lines 0-71 (RX + TX + XDP) for the NPA batch
        free operation.
      - Reserving LMTST lines 72-512 for NIX SQE flush.
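
      A minimal sketch of the mapping described above, assuming BAR4 is simply
      ioremapped and split at line 72; the function name, LMT_LINE_SIZE value
      and error handling are illustrative only, not the driver's exact code:

      #include <linux/pci.h>
      #include <linux/io.h>
      #include <linux/errno.h>

      #define LMT_LINE_SIZE   128   /* assumed size of one LMTLINE in bytes */
      #define NIX_LMT_LINE    72    /* lines 0-71: NPA batch free, 72+: NIX SQE flush */

      static int example_map_lmtst(struct pci_dev *pdev,
                                   void __iomem **npa_lmt_base,
                                   void __iomem **nix_lmt_base)
      {
              void __iomem *lmt_base;

              /* The PF/VF LMTST region is exposed through BAR4 */
              lmt_base = pci_ioremap_bar(pdev, 4);
              if (!lmt_base)
                      return -ENOMEM;

              /* Lines 0-71 are reserved for NPA batch free (RX + TX + XDP pools) */
              *npa_lmt_base = lmt_base;
              /* Lines from 72 onwards are used for flushing NIX SQEs */
              *nix_lmt_base = lmt_base + NIX_LMT_LINE * LMT_LINE_SIZE;

              return 0;
      }
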
      Signed-off-by: Geetha sowjanya <gakula@marvell.com>
      Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • octeontx2-pf: cn10k: Add mbox support for CN10K · facede82
      Subbaraya Sundeep authored
      Firmware allocates memory regions for PFs and VFs in DRAM.
      The PF's memory region is used for the AF-PF and PF-VF mailboxes.
      These mailboxes facilitate communication between AF and PF, and between
      PF and VF.
      
      On the CN10K platform:
      The DRAM region allocated to a PF is enumerated as PF BAR4 memory.
      PF BAR4 contains the AF-PF mbox region followed by its VFs' mbox region.
      The AF-PF mbox region base address is configured at RVU_AF_PFX_BAR4_ADDR.
      The PF-VF mailbox base address is configured as
      RVU_PF(x)_VF_MBOX_ADDR = RVU_AF_PF()_BAR4_ADDR + 64KB. A PF accesses its
      mbox region via BAR4, whereas a VF accesses the PF-VF DRAM mailboxes via
      BAR2 indirect access.
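
      A minimal sketch of the CN10K layout described above, assuming a 64KB
      AF-PF mbox region at the start of BAR4 and an illustrative helper name
      (not the driver's exact code):

      #include <linux/pci.h>
      #include <linux/io.h>
      #include <linux/errno.h>
      #include <linux/sizes.h>

      #define AF_PF_MBOX_SIZE SZ_64K  /* assumed AF-PF mbox region size */

      static int example_map_cn10k_mbox(struct pci_dev *pdev,
                                        void __iomem **af_pf_mbox,
                                        void __iomem **pf_vf_mbox)
      {
              /* On CN10K the DRAM mbox region is enumerated as PF BAR4 */
              void __iomem *bar4 = pci_ioremap_bar(pdev, 4);

              if (!bar4)
                      return -ENOMEM;

              *af_pf_mbox = bar4;                   /* AF-PF mbox at the start of BAR4 */
              *pf_vf_mbox = bar4 + AF_PF_MBOX_SIZE; /* VF mbox region follows after 64KB */
              return 0;
      }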
      
      On the CN9XX platform:
      The mailbox region in DRAM is divided into two parts, the AF-PF mbox
      region and the PF-VF mbox region, i.e. all PFs' mbox regions are
      contiguous and likewise all VFs'.
      The base address of the AF-PF mbox region is configured at
      RVU_AF_PF_BAR4_ADDR, so the AF-PF1 mbox address, for example, works out
      to RVU_AF_PF_BAR4_ADDR + 1 * mbox size.
      The base address of the PF-VF mbox region for each PF is configured at
      RVU_AF_PF(0..15)_VF_BAR4_ADDR. A PF accesses its own mbox region via
      BAR4 and its VFs' mbox regions via the RVU_PF_VF_BAR4_ADDR register,
      whereas a VF accesses its mbox region via BAR4.
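
      The per-PF arithmetic above can be sketched as follows; the helper name
      and mbox size value are assumptions for illustration only:

      #include <linux/types.h>
      #include <linux/sizes.h>

      #define CN9XX_MBOX_SIZE SZ_64K  /* assumed per-PF mbox size */

      /* The AF-PFn mbox sits n mbox-sized slots past the AF-PF region base */
      static u64 example_cn9xx_af_pf_mbox_addr(u64 af_pf_bar4_addr, unsigned int pf)
      {
              return af_pf_bar4_addr + (u64)pf * CN9XX_MBOX_SIZE;
      }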
      
      This patch changes mbox initialization to support both the CN9XX and
      CN10K platforms.
      It also adds a new hw_cap flag for setting hardware features such as
      TSO, and removes the platform-specific name from the PF/VF driver name
      so that it is appropriate for all supported platforms.
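
      A minimal sketch of the hw_cap idea, assuming a small capability struct
      filled in at probe time; the struct, field and function names are
      illustrative only:

      #include <linux/types.h>

      struct example_hw_caps {
              bool tso;        /* hardware TSO offload usable */
              bool lmt_flush;  /* CN10K-style LMTST-based SQE flush */
      };

      static void example_fill_hw_caps(struct example_hw_caps *cap, bool is_cn10k)
      {
              cap->tso = true;            /* assumption: both silicons offload TSO */
              cap->lmt_flush = is_cn10k;  /* only CN10K uses LMTST flush */
      }
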
      Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
      Signed-off-by: Geetha sowjanya <gakula@marvell.com>
      Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 11 Feb, 2021: 2 commits
  3. 06 Jan, 2021: 1 commit
    • octeontx2-pf: Add RSS multi group support · 81a43620
      Geetha sowjanya authored
      Hardware supports 8 RSS groups per interface. Currently only group '0'
      is used. This patch allows the user to create new RSS groups/contexts
      and use them as the destination for flow steering rules.
      
      usage:
      To steer the traffic to RQ 2,3
      
      ethtool -X eth0 weight 0 0 1 1 context new
      (It will print the allocated context id number)
      New RSS context is 1
      
      ethtool -N eth0 flow-type tcp4 dst-port 80 context 1 loc 1
      
      To delete the context
      ethtool -X eth0 context 1 delete
      
      When an RSS context is removed, the active classification
      rules using this context are also removed.
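
      On the driver side, the context id chosen above arrives through the
      standard ethtool rxnfc interface; a minimal sketch of reading it from a
      flow rule (the helper name is illustrative):

      #include <linux/ethtool.h>

      static u32 example_rule_rss_context(const struct ethtool_rxnfc *nfc)
      {
              /* ethtool sets FLOW_RSS in flow_type and carries the context id
               * in rss_context when a rule is added with "context N". */
              if (nfc->fs.flow_type & FLOW_RSS)
                      return nfc->rss_context;

              return 0;  /* default RSS context/group */
      }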
      
      Change-log:
      
      v4
      - Fixed a compile-time warning.
      - Addressed Saeed's comments on v3.
      
      v3
      - Converted otx2_set_rxfh() to use the new function.
      
      v2
      - Removed unrelated whitespace.
      - Converted otx2_get_rxfh() to use the new function.
      Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
      Signed-off-by: Geetha sowjanya <gakula@marvell.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 24 Nov, 2020: 1 commit
  5. 21 Nov, 2020: 1 commit
  6. 18 Nov, 2020: 4 commits
  7. 01 Nov, 2020: 1 commit
  8. 24 Sep, 2020: 1 commit
  9. 02 Sep, 2020: 1 commit
  10. 25 Aug, 2020: 1 commit
  11. 10 May, 2020: 1 commit
    • octeontx2-pf: Use the napi_alloc_frag() to alloc the pool buffers · 7a36e491
      Kevin Hao authored
      In the current code, octeontx2 uses its own method to allocate the pool
      buffers, but there are some issues in this implementation.
      1. We have to run otx2_get_page() for each allocation cycle, and this is
         pretty error prone. As far as I can see, there is no invocation of
         otx2_get_page() in otx2_pool_refill_task(); this leaves the allocated
         pages with the wrong refcount and they may be freed incorrectly.
      2. It wastes memory. For example, if we only receive one packet in a
         NAPI RX cycle and then allocate a 2K buffer with otx2_alloc_rbuf() to
         refill the pool buffers, the remaining area of the allocated page is
         wasted. On a kernel with a 64K page size, a 62K area is wasted.
      
      IMHO it is really unnecessary to implement our own method for buffer
      allocation; we can reuse napi_alloc_frag() to simplify the code.
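
      A minimal sketch of the napi_alloc_frag() approach suggested above,
      assuming a fixed 2K receive-buffer size and an illustrative helper name:

      #include <linux/skbuff.h>
      #include <linux/dma-mapping.h>

      #define RBUF_SIZE 2048  /* assumed receive buffer size */

      /* Allocate one receive buffer from the per-CPU page-frag cache and
       * DMA-map it; returns the buffer or NULL on failure. */
      static void *example_alloc_rbuf(struct device *dev, dma_addr_t *iova)
      {
              void *buf;

              /* The fragment allocator slices buffers out of a (possibly 64K)
               * page, so one page backs many 2K buffers instead of just one. */
              buf = napi_alloc_frag(RBUF_SIZE);
              if (unlikely(!buf))
                      return NULL;

              *iova = dma_map_single(dev, buf, RBUF_SIZE, DMA_FROM_DEVICE);
              if (unlikely(dma_mapping_error(dev, *iova))) {
                      skb_free_frag(buf);
                      return NULL;
              }

              return buf;
      }
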
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  12. 26 Apr, 2020: 1 commit
  13. 26 Mar, 2020: 1 commit
  14. 24 Mar, 2020: 6 commits
  15. 03 Mar, 2020: 1 commit
  16. 27 Jan, 2020: 15 commits