- 02 April 2016, 1 commit
-
-
Committed by Jisheng Zhang

L1_CACHE_BYTES may not be the real cacheline size; use cache_line_size() to determine the cacheline size at runtime.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Suggested-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
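A minimal sketch of the idea, assuming a small helper of this shape in the driver (the helper name and the use of ALIGN() are illustrative, not part of the original patch):

```c
#include <linux/cache.h>
#include <linux/kernel.h>

/* Round a length up to the cacheline size reported at runtime by
 * cache_line_size(), instead of the compile-time L1_CACHE_BYTES
 * upper bound. */
static inline unsigned int mvneta_cache_align(unsigned int len)
{
	return ALIGN(len, cache_line_size());
}
```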
-
- 01 April 2016, 1 commit
-
-
Committed by Jisheng Zhang

The mvneta IP is also used in some Marvell Berlin family SoCs, which may have a 64-byte cacheline size. Replace the MVNETA_CPU_D_CACHE_LINE_SIZE usage with L1_CACHE_BYTES. And since dma_alloc_coherent() is always cacheline-size aligned, remove the alignment checks.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 March 2016, 7 commits
-
-
Committed by Dmitri Epshtein

Some literal values are actually already defined by macros, so let's use them.

[gregory.clement@free-electrons.com: split initial commit into two individual changes]
Signed-off-by: Dmitri Epshtein <dima@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dmitri Epshtein

This commit corrects the error printing when shutting down the port.

[gregory.clement@free-electrons.com: split initial commit into two individual changes]
Signed-off-by: Dmitri Epshtein <dima@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dmitri Epshtein

eth_prepare_mac_addr_change() is called as part of a MAC address change and checks whether the interface is running. To allow changing the MAC address while the interface is running, the IFF_LIVE_ADDR_CHANGE flag must be set in dev->priv_flags.

Fixes: c5aff182 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Cc: stable@vger.kernel.org
Signed-off-by: Dmitri Epshtein <dima@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
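A hedged sketch of the shape of such a fix; the first line is what a probe path would set, and the check shown afterwards is the (simplified) reason it matters:

```c
/* In the probe path: advertise that the MAC address may be changed
 * while the interface is up. */
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;

/* For reference, eth_prepare_mac_addr_change() rejects a live change
 * only when that flag is absent (simplified): */
if (!(dev->priv_flags & IFF_LIVE_ADDR_CHANGE) && netif_running(dev))
	return -EBUSY;
```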
-
Committed by Gregory CLEMENT

In the previous patch, the spinlock was not initialized. While it hasn't caused any trouble yet, using it uninitialized could become a problem. The most annoying part was the critical section protected by the spinlock in mvneta_stop(): some of the functions called there can sleep, as pointed out with CONFIG_DEBUG_ATOMIC_SLEEP enabled. Actually, in mvneta_stop() we only need to protect the is_stopped flag; indeed, the code of the CPU-online notifier is protected by the same spinlock, so once we hold the lock the notifier work is done.

Reported-by: Patrick Uiterwijk <patrick@puiterwijk.org>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
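A minimal sketch of the pattern described above, with the lock and flag names taken from the commit message (the rest of the stop path is omitted):

```c
/* At probe time: the lock must be initialized before first use. */
spin_lock_init(&pp->lock);

/* In mvneta_stop(): only the flag update needs the lock; the teardown
 * work that may sleep runs outside the critical section. */
spin_lock(&pp->lock);
pp->is_stopped = true;
spin_unlock(&pp->lock);

mvneta_stop_dev(pp);	/* may sleep, so it is called unlocked */
```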
-
Committed by Anna-Maria Gleixner

The mvneta_percpu_notifier() hotplug callback lacks handling of the CPU_DOWN_FAILED case. That means that if CPU_DOWN_PREPARE fails, the driver is left not properly configured on that CPU. Add handling for the CPU_DOWN_FAILED[_FROZEN] hotplug notifier transitions to set up the driver.

Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: netdev@vger.kernel.org
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
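An illustrative sketch of the handling described above, not the exact patch: a CPU whose planned offline was aborted (CPU_DOWN_FAILED) is configured again, just like a CPU coming online.

```c
static int mvneta_percpu_notifier(struct notifier_block *nfb,
				  unsigned long action, void *hcpu)
{
	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_ONLINE:
	case CPU_DOWN_FAILED:	/* the newly handled transition */
		/* re-enable this CPU's queues and interrupts */
		break;
	case CPU_DOWN_PREPARE:
		/* quiesce this CPU before it goes offline */
		break;
	}
	return NOTIFY_OK;
}
```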
-
Committed by Gregory CLEMENT

Now that the hardware buffer management framework has been introduced, let's use it.

Tested-by: Sebastian Careba <nitroshift@yahoo.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Marcin Wojtas

The buffer manager (BM) is a dedicated hardware unit that can be used by all Ethernet ports of the Armada XP and 38x SoCs. It allows offloading the CPU on the RX path by sparing DRAM accesses when refilling the buffer pools, by hardware-based filling of descriptor ring data, and by better memory utilization thanks to HW arbitration for using 'short' pools for small packets.

Tests performed with an A388 SoC working as a network bridge between two packet generators showed an increase of the maximum processed 64B packet rate by ~20k (~555k packets with BM enabled vs ~535k packets without BM). Also, when pushing 1500B packets at line rate, the CPU load decreased from around 25% without BM to 20% with BM.

The BM comprises up to 4 buffer pointer (BP) rings kept in DRAM, which are called external BP pools (BPPE). Allocating and releasing buffer pointers to/from a BPPE is performed indirectly by write/read access to a dedicated internal SRAM, where the internal BP pools (BPPI) are placed. The BM hardware controls the status of the BPPE automatically, as well as assigning proper buffers to RX descriptors. For more details please refer to the Functional Specification of the Armada XP or 38x SoC.

In order to enable support for this separate hardware block, common to all ports, a new driver has to be implemented ('mvneta_bm'). It provides the initialization sequence for the address space, clocks, registers, SRAM and empty pool structures, and also obtains optional configuration from DT (please refer to the device tree binding documentation). mvneta_bm also exposes the necessary API to the mvneta driver, as well as a dedicated structure with BM information (bm_priv), whose presence is used as a flag notifying of BM usage by a port. It has to be ensured that the mvneta_bm probe is executed prior to the ones in the ports' driver. In case BM is not used or its probe fails, mvneta falls back to software buffer management.

The sequence executed in the mvneta_probe function is modified in order to have access to the needed resources before a possible port BM initialization is done. According to the port-pool mapping provided by DT, the appropriate registers are configured and the buffer pools are filled. The RX path is modified accordingly.

Because the hardware allows a wide variety of configuration options, the following assumptions are made:
* use of the BM mechanisms can be selectively disabled/enabled per port, based on the DT configuration
* the 'long' pool's single buffer size is tied to the port's MTU
* use of a 'long' pool by a port is obligatory and it cannot be shared
* use of a 'short' pool for smaller packets is optional
* one 'short' pool can be shared among all ports

This commit enables hardware buffer management operation cooperating with the existing mvneta driver. New device tree binding documentation is added and that of mvneta is updated accordingly.

[gregory.clement@free-electrons.com: removed the suspend/resume part]
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 February 2016, 7 commits
-
-
Committed by Gregory CLEMENT

When stopping the port, the CPU notifier is still registered while the mvneta_stop_dev() function calls mvneta_percpu_disable() on each CPU. A new CPU could come online at this point, which would be racy. This patch adds a flag that prevents the notifier code from running for a new CPU while the port is stopping. It also uses the spinlock introduced previously. To avoid a deadlock, the lock has been moved outside the mvneta_percpu_elect() function.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Gregory CLEMENT

Electing a CPU must be done atomically: it should happen either before or after the removal/insertion of a CPU, and the function is not reentrant. During the loop in mvneta_percpu_elect() we associate the queues with the CPUs; if there is a topology change during this loop, the mapping between the CPUs and the queues could end up wrong. The interrupt mask is also updated for each CPU during this loop, and it must not be changed at the same time by another part of the driver. This patch adds a spinlock to create the needed critical sections.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Gregory CLEMENT

In the MVNETA_INTR_* registers, the queue-related fields are per-CPU. According to the datasheet (comments in [] added by me): "In a multi-CPU system, bits of RX[or TX] queues for which the access by the reading[or writing] CPU is disabled are read as 0, and cannot be cleared[or written]." That means that each time we want to manipulate these bits we have to do it on each CPU and not only on the current CPU.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Gregory CLEMENT

Since commit 2dcf75e2 ("net: mvneta: Associate RX queues with each CPU"), all the per-CPU IRQs are used and disabled at initialization time, so there is no point in disabling them first.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Gregory CLEMENT

Instead of using a for_each_* loop in which we just call smp_call_function_single(), it is simpler to use the on_each_cpu() macro directly. Moreover, this macro ensures that the calls are all done at once.

Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
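A hedged sketch of the before/after pattern, using driver-style names (the callback body is simplified and illustrative):

```c
/* Callback executed on every CPU: because the MVNETA_INTR_* view is
 * per-CPU, the writes only affect the CPU the callback runs on. */
static void mvneta_percpu_mask_interrupt(void *arg)
{
	struct mvneta_port *pp = arg;

	mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
	mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0);
	mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0);
}

/* Before:
 *	for_each_online_cpu(cpu)
 *		smp_call_function_single(cpu, mvneta_percpu_mask_interrupt,
 *					 pp, true);
 * After: one call that runs the callback on all CPUs at once. */
on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
```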
-
Committed by Gregory CLEMENT

When switching to the management of multiple RX queues, the mvneta_percpu_elect() function was broken: the use of the modulo can lead to electing the wrong CPU. For example, with rxq_def=2, if CPU 2 goes offline and then online, we ended up with the third RX queue being activated at the same time on CPU 0 and CPU 2, which led to a kernel crash. With this fix, we don't try to pick "the closest" CPU if the default CPU is gone; now we just use CPU 0, which is always there. Thanks to this, the code becomes more readable, easier to maintain and more predictable.

Cc: stable@vger.kernel.org
Fixes: 2dcf75e2 ("net: mvneta: Associate RX queues with each CPU")
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Gregory CLEMENT

This patch converts the for_each_present loop into on_each_cpu(): instead of applying to the present CPUs, the call is now applied only to the online CPUs. This fixes a bug reported at http://thread.gmane.org/gmane.linux.ports.arm.kernel/468173. Using the on_each_cpu() macro (instead of a for_each_* loop) also ensures that all the calls are done at once.

Fixes: f8642885 ("net: mvneta: Statically assign queues to CPUs")
Reported-by: Stefan Roese <stefan.roese@gmail.com>
Suggested-by: Jisheng Zhang <jszhang@marvell.com>
Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 January 2016, 4 commits
-
-
Committed by Jisheng Zhang

Some platforms may provide more than one clock for the mvneta IP; for example, Marvell BG4CT provides one clock for the MAC core and one clock for the AXI bus logic. Obviously this bus clock also needs to be enabled. This patch adds support for this optional "bus" clock.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
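A minimal sketch of handling an optional extra clock, assuming a hypothetical clk_bus field on the port structure (error handling trimmed):

```c
/* The "bus" clock is optional: enable it only when the DT provides it. */
pp->clk_bus = devm_clk_get(&pdev->dev, "bus");
if (!IS_ERR(pp->clk_bus))
	clk_prepare_enable(pp->clk_bus);
```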
-
Committed by Jisheng Zhang

Some platforms may provide more than one clock for the mvneta IP; for example, Marvell BG4CT provides one clock for the MAC core and one clock for the AXI bus logic. To support more than one clock, we'll need to distinguish between the clocks by name. Change the clock probing to first try to get the "core" clock before falling back to the unnamed clock.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
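A sketch of the described fallback using the standard clock API (the field name follows the driver; the snippet is illustrative):

```c
/* Try the named "core" clock first, then fall back to the legacy
 * unnamed clock so existing device trees keep working. */
pp->clk = devm_clk_get(&pdev->dev, "core");
if (IS_ERR(pp->clk))
	pp->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(pp->clk))
	return PTR_ERR(pp->clk);

clk_prepare_enable(pp->clk);
```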
-
Committed by Jisheng Zhang

Sorting the headers in alphabetical order will help to reduce conflicts when adding new headers in the future.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jisheng Zhang

When s->type is T_REG_64, the high 32 bits are lost in val. This patch fixes this trivial issue.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Fixes: 9b0cdefa ("net: mvneta: add ethtool statistics")
Signed-off-by: David S. Miller <davem@davemloft.net>
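A hedged sketch of reading a 64-bit counter from a pair of 32-bit registers so the high word is preserved (the helper name and the high-word-at-offset-plus-4 layout are assumptions for illustration):

```c
static u64 mvneta_read_stat64(struct mvneta_port *pp, u32 offset)
{
	u64 val;

	/* read the low word, then merge in the high word */
	val  = mvreg_read(pp, offset);
	val |= (u64)mvreg_read(pp, offset + 4) << 32;

	return val;
}
```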
-
- 08 January 2016, 1 commit
-
-
Committed by Andrew Lunn

Not all devices attached to an MDIO bus are PHYs. So add an mdio_device structure to represent the generic parts of an MDIO device, and place this structure into the phy_device.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
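A rough, abridged sketch of the split described above; the field list is illustrative and not meant to compile against the kernel headers:

```c
/* Generic parts common to any device sitting on an MDIO bus. */
struct mdio_device {
	struct device	dev;
	struct mii_bus	*bus;
	int		addr;	/* address on the MDIO bus */
};

/* A PHY is then one kind of MDIO device. */
struct phy_device {
	struct mdio_device mdio;	/* embedded generic parts */
	/* PHY-specific fields follow */
};
```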
-
- 12 December 2015, 4 commits
-
-
Committed by Gregory CLEMENT

With this patch each CPU is associated with its own set of TX queues. It also sets up XPS with an initial configuration whose affinity matches the hardware configuration.

Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
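A minimal sketch of the XPS side of such a setup, using the networking core helper; the loop that chooses a CPU per queue is omitted:

```c
/* Tell XPS that TX queue txq->id is served by "cpu", so the stack
 * prefers that queue for traffic generated on that CPU. */
netif_set_xps_queue(pp->dev, cpumask_of(cpu), txq->id);
```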
-
Committed by Gregory CLEMENT

This patch adds support for the RSS-related ethtool functions. Currently it only uses one entry in the indirection table, which allows associating an mvneta interface with a given CPU.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Tested-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Gregory CLEMENT

We enable the per-CPU interrupt for all the CPUs and just associate a CPU with a few queues at the neta level. The mapping between the CPUs and the queues is static: the queues are associated with the CPUs modulo the number of CPUs. However, currently we only use one RX queue for a given Ethernet port.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Gregory CLEMENT

Instead of using the same default queue for all the ports, move it into the port struct. This will allow each port to have a different default queue.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 December 2015, 2 commits
-
-
Committed by Stas Sergeev

This patch allows running

ethtool -s eth0 autoneg off
ethtool -s eth0 autoneg on

to disable or enable autonegotiation at run-time. Without this functionality, the only way to control autonegotiation is to modify the device tree. This is needed if you plan to use the same kernel with different Ethernet switches: the ones that support the in-band status and the ones that do not.

CC: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
CC: netdev@vger.kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Stas Sergeev <stsp@users.sourceforge.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Stas Sergeev

This moves the autoneg-related bit manipulations to a single place.

CC: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
CC: netdev@vger.kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Stas Sergeev <stsp@users.sourceforge.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 December 2015, 5 commits
-
-
Committed by Marcin Wojtas

Since the Armada 38x SoC can support IP checksum for jumbo frames only on a single port, this feature should be enabled per port rather than for the whole SoC. This patch enables setting a custom TX IP checksum limit by adding a new optional property to the mvneta device tree node. If not used, 1600B is set by default for "marvell,armada-370-neta" and 9800B for other compatible strings, which ensures backward compatibility. The binding documentation is updated accordingly.

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
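A hedged sketch of the probe-side handling, assuming the optional property is named "tx-csum-limit" and using the default values quoted above (the is_armada_370 flag is hypothetical):

```c
u32 tx_csum_limit;

/* Optional per-port override from DT; fall back to the per-compatible
 * defaults when the property is absent. */
if (of_property_read_u32(dn, "tx-csum-limit", &tx_csum_limit))
	tx_csum_limit = is_armada_370 ? 1600 : 9800;

pp->tx_csum_limit = tx_csum_limit;
```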
-
Committed by Marcin Wojtas

In the current RX processing, the same error path is used for both descriptor ring refill failures and skb build failures. This is not correct, because after a successful refill the ring has already been updated with the newly allocated buffer. Then, in case of a build_skb() failure, the existing code left the original buffer without DMA-unmapping it. This patch fixes the situation by swapping the error check of the skb build with the DMA unmap of the original buffer.

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Acked-by: Simon Guinot <simon.guinot@sequanux.org>
Cc: <stable@vger.kernel.org> # v4.2+
Fixes: a84e3289 ("net: mvneta: fix refilling for Rx DMA buffers")
Signed-off-by: David S. Miller <davem@davemloft.net>
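An illustrative sketch of the corrected ordering with driver-style names; this is a simplification, not the exact patch:

```c
/* Refill first: on failure the ring still owns the old buffer, so it
 * stays mapped and the frame is simply dropped. */
if (mvneta_rx_refill(pp, rx_desc) < 0)
	goto err_drop_frame;

skb = build_skb(data, frag_size);

/* After a successful refill, the old buffer must be unmapped whether
 * or not the skb was built, so the unmap now precedes the check. */
dma_unmap_single(dev->dev.parent, phys_addr,
		 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);

if (!skb)
	goto err_drop_frame;
```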
-
Committed by Marcin Wojtas

A value originally defined in the driver was inappropriate. Even though ingress somehow worked, writing MVNETA_RXQ_INTR_ENABLE_ALL_MASK to MVNETA_INTR_ENABLE didn't have any effect, because bits [31:16] are reserved and read-only. This commit updates MVNETA_RXQ_INTR_ENABLE_ALL_MASK to be compliant with the controller's documentation.

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Fixes: c5aff182 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Marcin Wojtas

The MVNETA_RXQ_HW_BUF_ALLOC bit, which controls enabling hardware buffer allocation, was mistakenly defined as BIT(1). This commit fixes the assignment.

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Fixes: c5aff182 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Marcin Wojtas

This commit adds the missing configuration of MBUS window access protection in the mvneta_conf_mbus_windows() function; a dedicated variable for that purpose had remained unused since the initial mvneta support in v3.8. Because of that, the register contents were inherited from the bootloader.

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Fixes: c5aff182 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 November 2015, 1 commit
-
-
Committed by Justin Maggard

After changing an interface's MTU and then bringing the interface down and back up again, I immediately saw tons of kernel messages like the ones below. The reason for this bad behavior is mvneta_rxq_drop_pkts(), which calls dma_unmap_single() on already-freed memory. So we need to switch the order of those two operations.

[ 152.388518] BUG: Bad page state in process ifconfig pfn:1b518
[ 152.388526] page:dff3dbc0 count:0 mapcount:0 mapping: (null) index:0x0
[ 152.395178] flags: 0x200(arch_1)
[ 152.398441] page dumped because: PAGE_FLAGS_CHECK_AT_PREP flag set
[ 152.398446] bad because of flags:
[ 152.398450] flags: 0x200(arch_1)
[ 152.401716] Modules linked in:
[ 152.401728] CPU: 0 PID: 1453 Comm: ifconfig Tainted: P B O 4.1.12.armada.1 #1
[ 152.401733] Hardware name: Marvell Armada 370/XP (Device Tree)
[ 152.401749] [<c0015b1c>] (unwind_backtrace) from [<c0011d8c>] (show_stack+0x10/0x14)
[ 152.401762] [<c0011d8c>] (show_stack) from [<c06aa68c>] (dump_stack+0x74/0x90)
[ 152.401772] [<c06aa68c>] (dump_stack) from [<c0096c08>] (bad_page+0xc4/0x124)
[ 152.401783] [<c0096c08>] (bad_page) from [<c0099378>] (get_page_from_freelist+0x4e4/0x644)
[ 152.401794] [<c0099378>] (get_page_from_freelist) from [<c0099620>] (__alloc_pages_nodemask+0x148/0x784)
[ 152.401805] [<c0099620>] (__alloc_pages_nodemask) from [<c00ac658>] (kmalloc_order+0x10/0x20)
[ 152.401818] [<c00ac658>] (kmalloc_order) from [<c04c6f44>] (mvneta_rx_refill+0xc4/0xe8)
[ 152.401830] [<c04c6f44>] (mvneta_rx_refill) from [<c04c96c0>] (mvneta_setup_rxqs+0x298/0x39c)
[ 152.401842] [<c04c96c0>] (mvneta_setup_rxqs) from [<c04c9904>] (mvneta_open+0x3c/0x150)
[ 152.401853] [<c04c9904>] (mvneta_open) from [<c0597764>] (__dev_open+0xac/0x124)
[ 152.401864] [<c0597764>] (__dev_open) from [<c05979e4>] (__dev_change_flags+0x8c/0x148)
[ 152.401875] [<c05979e4>] (__dev_change_flags) from [<c0597ac0>] (dev_change_flags+0x18/0x48)
[ 152.401886] [<c0597ac0>] (dev_change_flags) from [<c060d308>] (devinet_ioctl+0x620/0x6d0)
[ 152.401897] [<c060d308>] (devinet_ioctl) from [<c057d810>] (sock_ioctl+0x64/0x288)
[ 152.401908] [<c057d810>] (sock_ioctl) from [<c00dcb7c>] (do_vfs_ioctl+0x78/0x608)
[ 152.401918] [<c00dcb7c>] (do_vfs_ioctl) from [<c00dd170>] (SyS_ioctl+0x64/0x74)
[ 152.401930] [<c00dd170>] (SyS_ioctl) from [<c000f3a0>] (ret_fast_syscall+0x0/0x3c)

Signed-off-by: Justin Maggard <jmaggard@netgear.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
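A hedged sketch of the corrected teardown order in the drop path; the descriptor fields and the free helper are driver-style names used for illustration:

```c
for (i = 0; i < rxq->size; i++) {
	struct mvneta_rx_desc *rx_desc = rxq->descs + i;
	void *data = (void *)rx_desc->buf_cookie;

	/* unmap first, while the buffer is still valid... */
	dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
			 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
	/* ...then free the memory backing it */
	mvneta_frag_free(pp, data);
}
```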
-
- 26 October 2015, 2 commits
-
-
Committed by Andrew Lunn

The existing function to clear the MIB statistics was using the wrong address for the registers. Also, the counters would have been cleared when the interface was brought up, not during the probe. Fix both of these.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Russell King

Add support for the ethtool statistics interface, returning the full set of statistics which the Armada 370, 38x and Armada XP can all support.

Tested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 September 2015, 4 commits
-
-
Committed by Maxime Ripard

Since the switch to per-CPU interrupts, we lost the ability to set which CPU was going to receive our RX interrupt; it was now only the CPU on which the mvneta_open() function was run. We can now assign our queues to their respective CPUs and make sure only that CPU is going to handle our traffic. This also paves the road to being able to change that at runtime, and later on to support RSS.

[gregory.clement@free-electrons.com]: hardened the CPU hotplug support.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Maxime Ripard

The mvneta driver allows changing the default RX queue through the rxq_def kernel parameter. However, the current code doesn't allow any value but 0. This is actively checked for in the driver's probe, because the driver makes a number of assumptions and takes a number of shortcuts in order to just use that RX queue. Remove these limitations in order to be able to specify any available queue.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Maxime Ripard

Now that our interrupt controller allows us to use per-CPU interrupts, actually use them in the mvneta driver. This involves reworking the driver to have a CPU-local NAPI structure and to report incoming packets using that structure.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
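A rough sketch of a CPU-local NAPI context; the structure and function names are driver-style but the snippet is simplified and illustrative:

```c
struct mvneta_pcpu_port {
	struct mvneta_port	*pp;	/* back-pointer to the port */
	struct napi_struct	napi;	/* this CPU's NAPI context */
};

/* At probe time: one NAPI instance per possible CPU. */
pp->ports = alloc_percpu(struct mvneta_pcpu_port);
for_each_possible_cpu(cpu) {
	struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);

	netif_napi_add(dev, &port->napi, mvneta_poll, NAPI_POLL_WEIGHT);
	port->pp = pp;
}

/* In the per-CPU interrupt handler: schedule the local instance. */
napi_schedule(&this_cpu_ptr(pp->ports)->napi);
```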
-
Committed by Maxime Ripard

The CPU_MAP register is duplicated for each CPU, each instance being at a different address. However, the code so far was using CONFIG_NR_CPUS to initialise the CPU_MAP registers, while the SoCs embed at most 4 CPUs. This is especially an issue with multi_v7_defconfig, where CONFIG_NR_CPUS is currently set to 16, resulting in writes to registers that are not CPU_MAP.

Fixes: c5aff182 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: <stable@vger.kernel.org> # v3.8+
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
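An illustrative sketch of bounding the loop by the CPUs that actually exist rather than by CONFIG_NR_CPUS; the register macro names follow the driver, and the exact mask values are omitted:

```c
/* Initialise CPU_MAP only for CPUs present on this system. */
for_each_present_cpu(cpu)
	mvreg_write(pp, MVNETA_CPU_MAP(cpu),
		    MVNETA_CPU_RXQ_ACCESS_ALL_MASK |
		    MVNETA_CPU_TXQ_ACCESS_ALL_MASK);
```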
-
- 25 September 2015, 1 commit
-
-
Committed by Russell King

of_phy_find_device() increments the PHY struct device refcount, which we need to balance properly. Add code to the network drivers using this function to ensure that the struct device refcount is correctly balanced. For xgene, looking back through the history, we should be able to use of_phy_connect() with a zero flags argument for the DT case, as this is how the driver used to operate prior to de7b5b3d ("net: eth: xgene: change APM X-Gene SoC platform ethernet to support ACPI"). This leaves the Cavium Thunder BGX unfixed; fixing that driver is a complicated task, one which the maintainers need to be involved with.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
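A minimal sketch of the balancing pattern, for kernels of this era where the reference lives on the PHY's embedded struct device (later kernels use &phy_dev->mdio.dev); error handling trimmed:

```c
struct phy_device *phy_dev = of_phy_find_device(phy_node);

if (phy_dev) {
	/* ... use phy_dev ... */

	/* drop the reference taken by of_phy_find_device() */
	put_device(&phy_dev->dev);
}
```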
-