1. 23 Aug 2017, 3 commits
  2. 04 Aug 2017, 6 commits
  3. 23 Jun 2017, 3 commits
  4. 14 Jun 2017, 1 commit
  5. 11 Jun 2017, 2 commits
  6. 08 Jun 2017, 1 commit
  7. 19 Apr 2017, 16 commits
  8. 10 Mar 2017, 8 commits
    • net: mvpp2: finally add the PPv2.2 compatible string · fc5e1550
      Authored by Thomas Petazzoni
      Now that the mvpp2 driver has been modified to accommodate the support
      for PPv2.2, we can finally advertise this support by adding the
      appropriate compatible string.
      
      At the same time, we update the Kconfig description of the MVPP2 driver.
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fc5e1550
    • net: mvpp2: set dma mask and coherent dma mask on PPv2.2 · 2067e0a1
      Authored by Thomas Petazzoni
      On PPv2.2, the streaming mappings can be anywhere in the first 40 bits
      of the physical address space. However, for the coherent mappings, we
      still need them to be in the first 32 bits of the address space,
      because all BM pools share a single register to store the high 32 bits
      of the BM pool address, which means all BM pools must be allocated in
      the same 4GB memory area.
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2067e0a1
    • net: mvpp2: add support for an additional clock needed for PPv2.2 · fceb55d4
      Authored by Thomas Petazzoni
      The PPv2.2 variant of the network controller needs an additional
      clock, the "MG clock", in order for the IP block to operate
      properly. This commit adds support for this additional clock to the
      driver, reworking as needed the error handling path.
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fceb55d4
    • net: mvpp2: adapt rxq distribution to PPv2.2 · 59b9a31e
      Authored by Thomas Petazzoni
      In PPv2.1, we have a maximum of 8 RXQs per port, with a default of 4
      RXQs per port, and we were assigning RXQs 0->3 to the first port, 4->7
      to the second port, 8->11 to the third port, etc.
      
      In PPv2.2, we have a maximum of 32 RXQs per port, and we must allocate
      RXQs from the range of 32 RXQs available for each port. So port 0 must
      use RXQs in the range 0->31, port 1 in the range 32->63, etc.
      
      This commit adapts the mvpp2 to this difference between PPv2.1 and
      PPv2.2:
      
       - The constant definition MVPP2_MAX_RXQ is replaced by a new field
         'max_port_rxqs' in 'struct mvpp2', which stores the maximum number of
         RXQs per port. This field is initialized during ->probe() depending
         on the IP version.
      
       - MVPP2_RXQ_TOTAL_NUM is removed, and instead we calculate the total
         number of RXQs by multiplying the number of ports by the maximum of
         RXQs per port. This was anyway used in only one place.
      
       - In mvpp2_port_probe(), the calculation of port->first_rxq is adjusted
         to cope with the different allocation strategy between PPv2.1 and
         PPv2.2. Due to this change, the 'next_first_rxq' argument of this
         function is no longer needed and is removed.
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      59b9a31e
    • net: mvpp2: rework RXQ interrupt group initialization for PPv2.2 · a73fef10
      Authored by Thomas Petazzoni
      This commit adjusts how the MVPP2_ISR_RXQ_GROUP_REG register is
      configured, since it changed between PPv2.1 and PPv2.2.
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a73fef10
    • net: mvpp2: add AXI bridge initialization for PPv2.2 · 6763ce31
      Authored by Thomas Petazzoni
      The PPv2.2 unit is connected to an AXI bus on Armada 7K/8K, so this
      commit adds the necessary initialization of the AXI bridge.
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6763ce31
    • net: mvpp2: handle misc PPv2.1/PPv2.2 differences · 26975821
      Authored by Thomas Petazzoni
      This commit handles a few miscellaneous differences between PPv2.1 and
      PPv2.2 in different areas, where code written for PPv2.1 doesn't apply to
      PPv2.2 or needs to be adjusted (getting the MAC address, disabling PHY
      polling, etc.).
      
      Thanks to Russell King for providing the initial implementation of
      mvpp22_port_mii_set().
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      26975821
    • net: mvpp2: handle register mapping and access for PPv2.2 · a786841d
      Authored by Thomas Petazzoni
      This commit adjusts the mvpp2 driver register mapping and access logic
      to support PPv2.2, to handle a number of differences.
      
      Due to how the registers are laid out in memory, the Device Tree binding
      for the "reg" property is different:
      
       - On PPv2.1, we had a first area for the packet processor
         registers (common to all ports), and then one area per port.
      
       - On PPv2.2, we have a first area for the packet processor
         registers (common to all ports), and a second area for numerous other
         registers, including a large number of per-port registers.
      
      In addition, on PPv2.2, the area for the common registers is split into
      so-called "address spaces" of 64 KB each. They allow access to per-CPU
      registers, where each CPU has its own copy of some registers. A few
      other registers, which have a single copy, also need to be accessed from
      those per-CPU windows if they are related to a per-CPU register. For
      example:
      
        - Writing to MVPP2_TXQ_NUM_REG selects a TX queue. This register is a
          per-CPU register, so it must be accessed from the current CPU's register
          window.
      
        - Then a write to MVPP2_TXQ_PENDING_REG, MVPP2_TXQ_DESC_ADDR_REG (and
          a few others) will affect the TX queue that was selected by the
          write to MVPP2_TXQ_NUM_REG. It must be accessed from the same CPU
          window as the write to the TXQ_NUM_REG.
      
      Therefore, the ->base member of 'struct mvpp2' is replaced with a
      ->cpu_base[] array, each entry pointing to a mapping of the per-CPU
      area. Since PPv2.1 doesn't have this concept of per-CPU windows, all
      entries in ->cpu_base[] point to the same io-remapped area.
      
      The existing mvpp2_read() and mvpp2_write() accessors use cpu_base[0];
      they are used for registers for which the CPU window doesn't matter.
      
      mvpp2_percpu_read() and mvpp2_percpu_write() are new accessors added to
      access the registers for which the CPU window does matter, which is why
      they take a "cpu" as argument.
      
      The driver is then changed to use mvpp2_percpu_read() and
      mvpp2_percpu_write() where it matters.
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a786841d