- 02 April 2016, 3 commits
-
-
Submitted by Jisheng Zhang
L1_CACHE_BYTES may not be the real cacheline size; use cache_line_size() to determine the cacheline size at runtime. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Suggested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jisheng Zhang
L1_CACHE_BYTES may not be the real cacheline size; use cache_line_size() to determine the cacheline size at runtime. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Suggested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
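For illustration, a minimal sketch of the pattern both changes follow; the helper name and its use are placeholders, not the drivers' actual code:

  #include <linux/cache.h>    /* cacheline helpers */
  #include <linux/kernel.h>   /* ALIGN() */

  /* Pad a buffer length to the cacheline size detected at runtime
   * instead of the compile-time L1_CACHE_BYTES constant.
   */
  static inline size_t rx_buf_aligned_size(size_t pkt_size)
  {
      return ALIGN(pkt_size, cache_line_size());
  }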
-
Submitted by Jisheng Zhang
This is to fix the following maybe-uninitialized warning: drivers/net/ethernet/marvell/mvpp2.c:6007:18: warning: 'err' may be used uninitialized in this function [-Wmaybe-uninitialized] Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
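The usual shape of such a fix is to give the variable a defined value on every path; a hypothetical simplification, not the driver's actual code:

  /* Sketch: 'err' is initialized so no exit path can return an
   * indeterminate value, even if the loop body never runs.
   */
  static int setup_queues(int nqueues, int (*setup_one)(int))
  {
      int i, err = 0;

      for (i = 0; i < nqueues; i++) {
          err = setup_one(i);
          if (err)
              break;
      }
      return err;
  }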
-
- 01 April 2016, 2 commits
-
-
Submitted by Jisheng Zhang
The mvneta IP is also used in some Marvell Berlin family SoCs, which may have a 64-byte cacheline size. Replace the MVNETA_CPU_D_CACHE_LINE_SIZE usage with L1_CACHE_BYTES. And since dma_alloc_coherent() is always cacheline-size aligned, remove the alignment checks. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jisheng Zhang
The mvpp2 IP may be used in SoCs which have a 64-byte cacheline size. Replace MVPP2_CPU_D_CACHE_LINE_SIZE with L1_CACHE_BYTES. And since dma_alloc_coherent() is always cacheline-size aligned, remove the alignment checks. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
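A hedged sketch of why the explicit check can be dropped, assuming (as both commit messages state) that dma_alloc_coherent() returns cacheline-aligned memory; the helper name and size parameter are illustrative:

  #include <linux/dma-mapping.h>
  #include <linux/gfp.h>

  static void *alloc_desc_ring(struct device *dev, size_t ring_bytes,
                               dma_addr_t *phys)
  {
      /* No manual alignment check against L1_CACHE_BYTES is needed here:
       * the coherent allocation is assumed to be cacheline aligned.
       */
      return dma_alloc_coherent(dev, ring_bytes, phys, GFP_KERNEL);
  }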
-
- 19 March 2016, 1 commit
-
-
Submitted by Arnd Bergmann
MVNETA_BM has a dependency on MVNETA, so we can only select the former if the latter is enabled. However, the code dependency is the reverse: the mvneta module can call into the mvneta_bm module, so mvneta cannot be built-in if mvneta_bm is a module, or we get a link error: drivers/net/built-in.o: In function `mvneta_remove': drivers/net/ethernet/marvell/mvneta.c:4211: undefined reference to `mvneta_bm_pool_destroy' drivers/net/built-in.o: In function `mvneta_bm_update_mtu': drivers/net/ethernet/marvell/mvneta.c:1034: undefined reference to `mvneta_bm_bufs_free' This avoids the problem by further clarifying the dependency so that MVNETA_BM is a silent Kconfig option that gets turned on by the new MVNETA_BM_ENABLE option. This way both the core HWBM module and the MVNETA_BM code are always built-in when needed. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: dc35a10f ("net: mvneta: bm: add support for hardware buffer management") Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 March 2016, 7 commits
-
-
Submitted by Dmitri Epshtein
Some literal values are actually already defined by macros, so let's use them. [gregory.clement@free-electrons.com: split initial commit in two individual changes] Signed-off-by: Dmitri Epshtein <dima@marvell.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Dmitri Epshtein
This commit corrects error printing when shutting down the port. [gregory.clement@free-electrons.com: split initial commit in two individual changes] Signed-off-by: Dmitri Epshtein <dima@marvell.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Dmitri Epshtein
Function eth_prepare_mac_addr_change() is called as part of a MAC address change. This function checks whether the interface is running. To allow changing the MAC address while the interface is running, the IFF_LIVE_ADDR_CHANGE flag must be set in the dev->priv_flags field. Fixes: c5aff182 ("net: mvneta: driver for Marvell Armada 370/XP network unit") Cc: stable@vger.kernel.org Signed-off-by: Dmitri Epshtein <dima@marvell.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
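A minimal sketch of the kind of change described, illustrative rather than the exact patch:

  #include <linux/netdevice.h>

  /* Advertise, at probe time, that the MAC address may be changed while
   * the interface is up, so eth_prepare_mac_addr_change() does not
   * reject the request.
   */
  static void mark_live_addr_change(struct net_device *dev)
  {
      dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
  }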
-
Submitted by Gregory CLEMENT
In the previous patch, the spinlock was not initialized. While it didn't cause any trouble yet, it could be a problem to use it uninitialized. The most annoying part was the critical section protected by the spinlock in mvneta_stop(): some of the functions called there can sleep, as pointed out with CONFIG_DEBUG_ATOMIC_SLEEP enabled. Actually, in mvneta_stop() we only need to protect the is_stopped flag; indeed, the code of the notifier for CPU online is protected by the same spinlock, so when we get the lock, the notifier work is done. Reported-by: Patrick Uiterwijk <patrick@puiterwijk.org> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Anna-Maria Gleixner
The mvneta_percpu_notifier() hotplug callback lacks handling of the CPU_DOWN_FAILED case. That means that if CPU_DOWN_PREPARE fails, the driver is not correctly configured on the CPU. Add handling for the CPU_DOWN_FAILED[_FROZEN] hotplug notifier transition to set up the driver. Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Cc: netdev@vger.kernel.org Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
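A hedged sketch of the notifier shape with the added transition; the callback name and body are placeholders, not the driver's code:

  #include <linux/cpu.h>
  #include <linux/notifier.h>

  /* Treat a failed CPU_DOWN like a CPU coming (back) online so the
   * per-CPU driver state gets reconfigured. Masking out CPU_TASKS_FROZEN
   * also covers the *_FROZEN variants.
   */
  static int percpu_notifier(struct notifier_block *nb,
                             unsigned long action, void *hcpu)
  {
      switch (action & ~CPU_TASKS_FROZEN) {
      case CPU_ONLINE:
      case CPU_DOWN_FAILED:
          /* reconfigure queues/interrupts for (long)hcpu here */
          break;
      case CPU_DOWN_PREPARE:
          /* quiesce this CPU's queues here */
          break;
      }
      return NOTIFY_OK;
  }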
-
Submitted by Gregory CLEMENT
Now that the hardware buffer management framework has been introduced, let's use it. Tested-by: Sebastian Careba <nitroshift@yahoo.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Marcin Wojtas
The buffer manager (BM) is a dedicated hardware unit that can be used by all Ethernet ports of the Armada XP and 38x SoCs. It offloads the CPU on the RX path by sparing DRAM accesses when refilling buffer pools, filling descriptor ring data in hardware, and making better use of memory thanks to HW arbitration for using 'short' pools for small packets. Tests performed with an A388 SoC working as a network bridge between two packet generators showed an increase of maximum processed 64B packets by ~20k (~555k packets with BM enabled vs ~535k packets without BM). Also, when pushing 1500B packets at line rate, CPU load decreased from around 25% without BM to 20% with BM. BM comprises up to 4 buffer pointer (BP) rings kept in DRAM, called external BP pools - BPPE. Allocating and releasing buffer pointers (BP) to/from a BPPE is performed indirectly by write/read access to a dedicated internal SRAM, where the internal BP pools (BPPI) are placed. The BM hardware controls the status of the BPPE automatically, as well as assigning proper buffers to RX descriptors. For more details please refer to the Functional Specification of the Armada XP or 38x SoC. In order to enable support for this separate hardware block, common to all ports, a new driver has to be implemented ('mvneta_bm'). It provides the initialization sequence of address space, clocks, registers, SRAM and empty pool structures, and also obtains optional configuration from DT (please refer to the device tree binding documentation). mvneta_bm also exposes the necessary API to the mvneta driver, as well as a dedicated structure with BM information (bm_priv), whose presence is used as a flag notifying of BM usage by a port. It has to be ensured that the mvneta_bm probe is executed prior to the ones in the ports' driver. In case BM is not used or its probe fails, mvneta falls back to software buffer management. The sequence executed in the mvneta_probe function is modified in order to have access to the needed resources before a possible port BM initialization is done. According to the port-pool mapping provided by DT, the appropriate registers are configured and the buffer pools are filled. The RX path is modified accordingly. Because the hardware allows a wide variety of configuration options, the following assumptions are made: * using BM mechanisms can be selectively disabled/enabled per port, based on the DT configuration * a 'long' pool's single buffer size is tied to the port's MTU * using a 'long' pool is obligatory for each port and it cannot be shared * using a 'short' pool for smaller packets is optional * one 'short' pool can be shared among all ports This commit enables hardware buffer management operation cooperating with the existing mvneta driver. New device tree binding documentation is added and the mvneta one is updated accordingly. [gregory.clement@free-electrons.com: removed the suspend/resume part] Signed-off-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 February 2016, 8 commits
-
-
Submitted by Gregory CLEMENT
When stopping the port, the CPU notifiers are still registered whereas the mvneta_stop_dev function calls mvneta_percpu_disable() on each CPU. A new CPU could come online at this point, which would be racy. This patch adds a flag preventing the notifier code from executing for a new CPU while the port is stopping. It also uses the spinlock introduced previously. To avoid a deadlock, the lock has been moved outside the mvneta_percpu_elect function. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
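A minimal sketch of the flag-plus-lock idea, simplified from the description; the struct, field and function names are illustrative:

  #include <linux/spinlock.h>
  #include <linux/types.h>

  struct port_state {
      spinlock_t lock;
      bool is_stopped;        /* set while the port is going down */
  };

  /* Stop path: mark the port stopped under the lock so a racing
   * CPU-online notifier sees a consistent value.
   */
  static void port_mark_stopped(struct port_state *ps)
  {
      spin_lock(&ps->lock);
      ps->is_stopped = true;
      spin_unlock(&ps->lock);
  }

  /* CPU notifier: skip per-CPU setup if the port is stopping. */
  static bool port_can_configure_cpu(struct port_state *ps)
  {
      bool ok;

      spin_lock(&ps->lock);
      ok = !ps->is_stopped;
      spin_unlock(&ps->lock);
      return ok;
  }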
-
Submitted by Gregory CLEMENT
Electing a CPU must be done in an atomic way: it should be done before or after the removal/insertion of a CPU, and this function is not reentrant. During the loop in mvneta_percpu_elect we associate the queues with the CPUs; if there is a topology change during this loop, the mapping between the CPUs and the queues could be wrong. During this loop the interrupt mask is also updated for each CPU; it should not be changed at the same time by other parts of the driver. This patch adds a spinlock to create the needed critical sections. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Gregory CLEMENT
In the MVNETA_INTR_* registers, the queue-related fields are per CPU, according to the datasheet (comments in [] added by me): "In a multi-CPU system, bits of RX[or TX] queues for which the access by the reading[or writing] CPU is disabled are read as 0, and cannot be cleared[or written]." That means that each time we want to manipulate these bits we have to do it on each CPU and not only on the current CPU. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Gregory CLEMENT
Since commit 2dcf75e2 ("net: mvneta: Associate RX queues with each CPU"), all the per-CPU irqs are used and disabled at initialization, so there is no point in disabling them first. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Gregory CLEMENT
Instead of using a for_each_* loop in which we just call the smp_call_function_single macro, it is simpler to directly use the on_each_cpu macro. Moreover, this macro ensures that the calls will be done all at once. Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
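For illustration, the two shapes being compared, as a sketch with a placeholder callback:

  #include <linux/smp.h>
  #include <linux/cpumask.h>

  static void mask_one_cpu(void *arg)
  {
      /* per-CPU register writes for the running CPU go here */
  }

  /* Before: one cross-call per online CPU. */
  static void mask_all_cpus_loop(void *arg)
  {
      int cpu;

      for_each_online_cpu(cpu)
          smp_call_function_single(cpu, mask_one_cpu, arg, true);
  }

  /* After: a single call that runs the callback on every online CPU. */
  static void mask_all_cpus(void *arg)
  {
      on_each_cpu(mask_one_cpu, arg, true);
  }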
-
Submitted by Gregory CLEMENT
When moving to the management of multiple RX queues, the mvneta_percpu_elect function was broken: the use of the modulo can lead to electing the wrong CPU. For example, with rxq_def=2, if CPU 2 goes offline and then online, we ended up with the third RX queue activated at the same time on CPU 0 and CPU 2, which led to a kernel crash. With this fix, we don't try to get "the closest" CPU if the default CPU is gone; now we just use CPU 0, which is always there. Thanks to this, the code becomes more readable, easier to maintain and more predictable. Cc: stable@vger.kernel.org Fixes: 2dcf75e2 ("net: mvneta: Associate RX queues with each CPU") Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Gregory CLEMENT
This patch converts the for_each_present loop into on_each_cpu: instead of applying to the present CPUs, it will be applied only to the online CPUs. This fixes a bug reported at http://thread.gmane.org/gmane.linux.ports.arm.kernel/468173. Using the on_each_cpu macro (instead of a for_each_* loop) also ensures that all the calls are done all at once. Fixes: f8642885 ("net: mvneta: Statically assign queues to CPUs") Reported-by: Stefan Roese <stefan.roese@gmail.com> Suggested-by: Jisheng Zhang <jszhang@marvell.com> Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Amitoj Kaur Chawla
The return value of kzalloc on allocation failure should be -ENOMEM and not -1. Found using Coccinelle. A simplified version of the semantic patch used is: //<smpl> @@ expression *e; position p,q; @@ e@q = kzalloc(...); if@p (e == NULL) { ... return - -1 + -ENOMEM ; } //</smpl> This function may also return -1 after calling mvpp2_prs_tcam_port_map_get. So that the function consistently returns meaningful error values on failure, the -1 is changed to -EINVAL. Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
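The general pattern of the fix, as a sketch with placeholder names:

  #include <linux/slab.h>
  #include <linux/errno.h>

  /* Return a conventional error code instead of -1 on allocation failure. */
  static int alloc_table(void **table, size_t size)
  {
      *table = kzalloc(size, GFP_KERNEL);
      if (!*table)
          return -ENOMEM;    /* was: return -1; */
      return 0;
  }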
-
- 29 January 2016, 1 commit
-
-
Submitted by Nicolas Schichan
The code in txq_put_data() would use txq->tx_curr_desc to index the tso_hdrs/tso_hdrs_dma buffers for unaligned fragments of less than 8 bytes, but tx_curr_desc has already been moved to the next descriptor at the beginning of the function. If that fragment was the last one of the skb, the next skb would use that same space to place the IP headers, overwriting that small fragment's data. Fixes: 91986fd3 (net: mv643xx_eth: Ensure proper data alignment in TSO TX path) Signed-off-by: Nicolas Schichan <nschichan@freebox.fr> Reviewed-by: Philipp Kirchhofer <philipp@familie-kirchhofer.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 January 2016, 4 commits
-
-
Submitted by Jisheng Zhang
Some platforms may provide more than one clk for the mvneta IP; for example, Marvell BG4CT provides one clk for the MAC core and one clk for the AXI bus logic. Obviously this bus clk also needs to be enabled. This patch adds this optional "bus" clk support. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jisheng Zhang
Some platforms may provide more than one clk for the mvneta IP; for example, Marvell BG4CT provides one clk for the MAC core and one clk for the AXI bus logic. To support more than one clock, we need to distinguish between the clocks by name. Change clock probing to first try to get the "core" clock before falling back to the unnamed clock. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
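A hedged sketch of how the clock probing described in these two entries typically looks; the ordering and error handling below are illustrative, not the driver's exact code:

  #include <linux/clk.h>
  #include <linux/err.h>

  /* Get the mandatory "core" clock, falling back to the unnamed clock for
   * old device trees, then an optional "bus" clock; enable whatever exists.
   */
  static int get_and_enable_clks(struct device *dev,
                                 struct clk **core, struct clk **bus)
  {
      int ret;

      *core = devm_clk_get(dev, "core");
      if (IS_ERR(*core))
          *core = devm_clk_get(dev, NULL);    /* legacy, unnamed clock */
      if (IS_ERR(*core))
          return PTR_ERR(*core);

      ret = clk_prepare_enable(*core);
      if (ret)
          return ret;

      *bus = devm_clk_get(dev, "bus");        /* optional */
      if (!IS_ERR(*bus)) {
          ret = clk_prepare_enable(*bus);
          if (ret) {
              clk_disable_unprepare(*core);
              return ret;
          }
      }
      return 0;
  }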
-
Submitted by Jisheng Zhang
Sorting the headers in alphabetic order will help to reduce conflicts when adding new headers in the future. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jisheng Zhang
When s->type is T_REG_64, the high 32 bits are lost in val. This patch fixes this trivial issue. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Fixes: 9b0cdefa ("net: mvneta: add ethtool statistics") Signed-off-by: David S. Miller <davem@davemloft.net>
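The typical fix pattern for a 64-bit counter split across two 32-bit registers, as a sketch; the low/high register layout is an assumption for illustration:

  #include <linux/io.h>
  #include <linux/types.h>

  /* Read the low and high 32-bit halves and combine them, instead of
   * truncating the counter to its low 32 bits.
   */
  static u64 read_stat64(void __iomem *base, unsigned int offset)
  {
      u64 val;

      val = readl(base + offset);
      val |= (u64)readl(base + offset + 4) << 32;
      return val;
  }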
-
- 08 January 2016, 2 commits
-
-
Submitted by Andrew Lunn
Not all devices attached to an MDIO bus are PHYs. So add an mdio_device structure to represent the generic parts of an MDIO device, and place this structure into the phy_device. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Andrew Lunn
Have mdio_alloc() create the array of interrupt numbers and initialize it to POLLING. This is what most MDIO drivers want, so it allows code to be removed from the drivers. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 December 2015, 1 commit
-
-
Submitted by Tom Herbert
The name NETIF_F_ALL_CSUM is a misnomer. It does not correspond to the set of features for offloading all checksums; it is a mask of the checksum-offload related feature bits. It is incorrect to set both NETIF_F_HW_CSUM and NETIF_F_IP_CSUM or NETIF_F_IPV6_CSUM at the same time in a device's features. This patch: - Changes instances of NETIF_F_ALL_CSUM to NETIF_F_CSUM_MASK (where NETIF_F_ALL_CSUM is being used as a mask). - Changes the bonding, sfc/efx, ipvlan, macvlan, vlan, and team drivers to use NETIF_F_HW_CSUM in their features list instead of NETIF_F_ALL_CSUM. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
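For illustration, how the mask and the capability are used differently, as a hedged sketch:

  #include <linux/netdevice.h>

  /* As a mask: test whether any checksum-offload feature bit is set. */
  static bool has_csum_offload(netdev_features_t features)
  {
      return !!(features & NETIF_F_CSUM_MASK);
  }

  /* As a capability: a driver that can checksum any protocol generically
   * should advertise NETIF_F_HW_CSUM rather than the whole mask.
   */
  static void advertise_csum(struct net_device *dev)
  {
      dev->features |= NETIF_F_HW_CSUM;
  }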
-
- 12 December 2015, 4 commits
-
-
Submitted by Gregory CLEMENT
With this patch each CPU is associated with its own set of TX queues. It also sets up XPS with an initial configuration which sets the affinity matching the hardware configuration. Suggested-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
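A hedged sketch of the XPS side of such a setup, assuming a simple one-TX-queue-per-CPU mapping; the mapping and helper name are illustrative:

  #include <linux/netdevice.h>
  #include <linux/cpumask.h>

  /* Map TX queue 'q' to CPU 'q' so the stack prefers the local queue. */
  static void setup_default_xps(struct net_device *dev, int num_txqs)
  {
      int q;

      for (q = 0; q < num_txqs; q++) {
          if (!cpu_online(q))
              continue;
          netif_set_xps_queue(dev, cpumask_of(q), q);
      }
  }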
-
Submitted by Gregory CLEMENT
This patch adds support for the RSS-related ethtool functions. Currently it only uses one entry in the indirection table, which allows associating an mvneta interface with a given CPU. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Gregory CLEMENT
We enable the per-CPU interrupt for all the CPUs and we just associate a CPU with a few queues at the neta level. The mapping between the CPUs and the queues is static: the queues are associated with the CPUs modulo the number of CPUs. However, currently we only use one RX queue for a given Ethernet port. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Gregory CLEMENT
Instead of using the same default queue for all the ports, move it into the port struct. This will allow having a different default queue for each port. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 December 2015, 3 commits
-
-
Submitted by Marcin Wojtas
In the existing code, in case of an RX buffer allocation error during refill, the original buffer is pushed to the network stack but the number of available buffer pointers in the BM pool is decreased. This commit fixes the situation by moving the refill call before skb_put(), and returning the original buffer pointer to the pool in case of an error. Signed-off-by: Marcin Wojtas <mw@semihalf.com> Fixes: 3f518509 ("ethernet: Add new driver for Marvell Armada 375 network unit") Cc: <stable@vger.kernel.org> # v3.18+ Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Marcin Wojtas
Each allocated buffer whose pointer is put into a BM pool is DMA-mapped. Hence it should be properly unmapped after use or when removing buffers from the pool. This commit fixes DMA handling on the RX path by adding dma_unmap_single() in mvpp2_rx() and in mvpp2_bufs_free(). The latter function's argument count had to be increased for this purpose. Signed-off-by: Marcin Wojtas <mw@semihalf.com> Fixes: 3f518509 ("ethernet: Add new driver for Marvell Armada 375 network unit") Cc: <stable@vger.kernel.org> # v3.18+ Signed-off-by: David S. Miller <davem@davemloft.net>
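The unmap-before-handoff pattern being added, as a sketch; the buffer size and DMA direction are assumptions for illustration:

  #include <linux/dma-mapping.h>

  /* Undo the streaming mapping created at refill time before the buffer
   * is handed to the stack or released back to the pool.
   */
  static void unmap_rx_buf(struct device *dev, dma_addr_t dma_addr,
                           size_t buf_size)
  {
      dma_unmap_single(dev, dma_addr, buf_size, DMA_FROM_DEVICE);
  }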
-
Submitted by Marcin Wojtas
The Tx descriptor release code currently calls dma_unmap_single() and dev_kfree_skb_any() if the descriptor is associated with a non-NULL skb. This condition is true only for the last fragment of the packet. Since every descriptor's buffer is DMA-mapped, it has to be properly unmapped. Signed-off-by: Marcin Wojtas <mw@semihalf.com> Fixes: 3f518509 ("ethernet: Add new driver for Marvell Armada 375 network unit") Cc: <stable@vger.kernel.org> # v3.18+ Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 December 2015, 3 commits
-
-
Submitted by Stas Sergeev
This patch allows doing
  ethtool -s eth0 autoneg off
  ethtool -s eth0 autoneg on
to disable or enable autonegotiation at run-time. Without that functionality, the only way to control autonegotiation is to modify the device tree. This is needed if you plan to use the same kernel with different Ethernet switches, ones that support the in-band status and ones that do not. CC: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> CC: netdev@vger.kernel.org CC: linux-kernel@vger.kernel.org Signed-off-by: Stas Sergeev <stsp@users.sourceforge.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Stas Sergeev
This moves the autoneg-related bit manipulations to a single place. CC: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> CC: netdev@vger.kernel.org CC: linux-kernel@vger.kernel.org Signed-off-by: Stas Sergeev <stsp@users.sourceforge.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Thierry Reding
These new helpers simplify implementing multi-driver modules and properly handle failure to register one driver by unregistering all previously registered drivers. Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 December 2015, 1 commit
-
-
Submitted by Marcin Wojtas
Since the Armada 38x SoC can support IP checksum for jumbo frames only on a single port, this feature should be enabled per port rather than for the whole SoC. This patch enables setting a custom TX IP checksum limit by adding a new optional property to the mvneta device tree node. If not used, 1600B is set by default for "marvell,armada-370-neta" and 9800B for other strings, which ensures backward compatibility. Binding documentation is updated accordingly. Signed-off-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-