- 18 May 2018, 8 commits
-
-
By Jose Abreu
We either have .enable_dma_transmission or .set_tx_tail_ptr in the HW table callbacks; we can never have both, so there is no need to check the GMAC version. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
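A minimal sketch of the idea, with hypothetical type and callback names (not the driver's actual code): since at most one of the two hooks is populated in the ops table, the caller simply tests which pointer is set instead of branching on the GMAC version.

        /* Hypothetical ops table entry: only one of the two hooks is filled in. */
        struct dma_kick_ops {
                void (*enable_dma_transmission)(void __iomem *ioaddr);
                void (*set_tx_tail_ptr)(void __iomem *ioaddr, u32 tail, u32 chan);
        };

        static void kick_tx(const struct dma_kick_ops *ops, void __iomem *ioaddr,
                            u32 tail, u32 chan)
        {
                if (ops->enable_dma_transmission)       /* older GMAC style */
                        ops->enable_dma_transmission(ioaddr);
                else if (ops->set_tx_tail_ptr)          /* GMAC4/DWMAC4 style */
                        ops->set_tx_tail_ptr(ioaddr, tail, chan);
        }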
-
By Jose Abreu
Instead of relying on the GMAC version to choose between the dma_init and dma_init_{rx/tx}_chan callbacks, standardize on always using the dma_init_{rx/tx}_chan callbacks. While at it, fix the use of the dma_init_chan callback, which shall be called for as many channels as the maximum of the rx/tx channel counts. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
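A sketch of the resulting init-loop shape, assuming hypothetical helper names and descriptor-base arrays: the common per-channel init runs for max(rx, tx) channels, while the rx/tx sub-inits only run for their own counts.

        u32 max_chan = max(rx_channels_count, tx_channels_count);
        u32 chan;

        for (chan = 0; chan < max_chan; chan++)
                dma_init_chan(ioaddr, dma_cfg, chan);           /* common per-channel setup */

        for (chan = 0; chan < rx_channels_count; chan++)
                dma_init_rx_chan(ioaddr, dma_cfg, rx_desc_phy[chan], chan);

        for (chan = 0; chan < tx_channels_count; chan++)
                dma_init_tx_chan(ioaddr, dma_cfg, tx_desc_phy[chan], chan);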
-
By Jose Abreu
The PTP and MMC module base addresses can depend on the GMAC version. As this is HW specific, move the base address calculation to hwif.c. Also, add an entry in the HW table so that we can specify the module offset. This can later be extended to more modules if deemed necessary. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jose Abreu
With the introduction of the callback checks in hwif.h we only call a callback if the HW supports it, so there is no longer any need to check the GMAC version. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jose Abreu
Instead of relying on the GMAC version to choose between the dma_{rx/tx}_mode and dma_mode callbacks, standardize on always using the dma_{rx/tx}_mode callbacks. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jose Abreu
Stop using if conditions that depend on the GMAC version for clearing the descriptor and instead use a helper implemented in the descriptor files. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jose Abreu
Stop using if conditions that depend on the GMAC version for setting the descriptor skbuff address and instead use a helper implemented in the descriptor files. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jose Abreu
Re-arming the coalesce timer for every transmitted packet cuts down performance. Once the timer is armed it should fire when the interval expires for the first packet sent, not the last one. After this change, running iperf, the performance gain is around 24%. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
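As an illustration (field names hypothetical), arming the coalesce timer only when it is not already pending gives the first queued packet, not the last one, ownership of the deadline:

        if (!timer_pending(&priv->txtimer))
                mod_timer(&priv->txtimer,
                          jiffies + usecs_to_jiffies(priv->tx_coal_timer));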
-
- 11 May 2018, 1 commit
-
-
By Jose Abreu
This adds support for the U32 filter by using an HW-only feature called the Flexible RX Parser. This allows us to match any given packet field against a pattern and accept/reject the packet, or even route it to a specific DMA channel. Right now we only support accepting or rejecting frames, and only simple rules, although the parser can jump to specific rules like an if condition, so complex rules can be established. This is only supported on GMAC 5.10+. The following commands can be used to test this code:
1) Set up an ingress qdisc:
# tc qdisc add dev eth0 handle ffff: ingress
2) Set up a filter (e.g. filter by IP):
# tc filter add dev eth0 parent ffff: protocol ip u32 match ip src 192.168.0.3 skip_sw action drop
In every test performed we always used the "skip_sw" flag to make sure only the RX Parser was involved. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Cc: Jakub Kicinski <kubakici@wp.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 May 2018, 1 commit
-
-
By Kees Cook
In the quest to remove all stack VLAs from the kernel[1], this switches the "status" stack buffer to use the existing small (8) upper bound on how many queues can be checked for DMA, and adds a sanity check just to make sure it doesn't operate under pathological conditions. [1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Jose Abreu <joabreu@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
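A rough sketch of the pattern (macro and register names hypothetical): size the on-stack buffer with the compile-time maximum and clamp the runtime count so the array can never overflow.

        #define STMMAC_MAX_DMA_QUEUES 8         /* compile-time upper bound (illustrative) */

        u32 status[STMMAC_MAX_DMA_QUEUES];
        u32 chan;

        if (WARN_ON(channels_to_check > ARRAY_SIZE(status)))
                channels_to_check = ARRAY_SIZE(status);

        for (chan = 0; chan < channels_to_check; chan++)
                status[chan] = readl(ioaddr + DMA_CHAN_STATUS(chan));   /* register macro illustrative */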
-
- 24 April 2018, 1 commit
-
-
By Jose Abreu
Move all the core version detection to a common place ("hwif.c") and implement a table which can be used to look up the correct callbacks for each IP version. This simplifies the initialization flow of each IP version and eases future implementation of new IP versions. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Vitor Soares <soares@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
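A simplified sketch of the lookup idea, with hypothetical entry and ops names (not the driver's exact table): a static array maps a detected core ID to its callback set, and probe code walks the table instead of open-coding version checks.

        struct stmmac_hwif_entry {
                u32 min_id;                             /* lowest core version handled */
                const struct stmmac_dma_ops *dma;
                const struct stmmac_desc_ops *desc;
        };

        static const struct stmmac_hwif_entry stmmac_hw[] = {
                { .min_id = 0x00, .dma = &dwmac100_dma_ops, .desc = &ndesc_ops },
                { .min_id = 0x40, .dma = &dwmac4_dma_ops,   .desc = &dwmac4_desc_ops },
        };

        /* Pick the last entry whose min_id does not exceed the detected version. */
        static const struct stmmac_hwif_entry *stmmac_hwif_lookup(u32 id)
        {
                int i;

                for (i = ARRAY_SIZE(stmmac_hw) - 1; i >= 0; i--)
                        if (id >= stmmac_hw[i].min_id)
                                return &stmmac_hw[i];
                return NULL;
        }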
-
- 20 April 2018, 1 commit
-
-
By Jose Abreu
The ACS feature is currently enabled for GMAC >= 4, but the llc_snap status is never checked in the descriptor rx_status callback. This causes stmmac to always strip packets even though the ACS feature is already stripping them. Let's be safe and disable the ACS feature for GMAC >= 4 and always strip the packets for this GMAC version. Fixes: 477286b5 ("stmmac: add GMAC4 core support") Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 April 2018, 5 commits
-
-
By Jose Abreu
Switch stmmac_mode_ops to the generic Hardware Interface Helpers instead of using hard-coded callbacks. This makes the code more readable and more flexible. No functional change. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
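A rough sketch, loosely modeled on the kind of wrapper that hwif.h introduces (macro and field names are illustrative, not the exact driver code): the helper invokes a callback only if it is actually populated, so callers no longer check the GMAC version or the pointer themselves.

        #define stmmac_do_void_callback(__priv, __module, __cname, __args...)     \
        ({                                                                        \
                int __ret = -EINVAL;                                              \
                if ((__priv)->hw->__module && (__priv)->hw->__module->__cname) {  \
                        (__priv)->hw->__module->__cname(__args);                  \
                        __ret = 0;                                                \
                }                                                                 \
                __ret;                                                            \
        })

        /* e.g. a caller becomes: stmmac_do_void_callback(priv, mode, clean_desc3, desc); */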
-
By Jose Abreu
Switch stmmac_hwtimestamp to the generic Hardware Interface Helpers instead of using hard-coded callbacks. This makes the code more readable and more flexible. No functional change. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jose Abreu
Switch stmmac_ops to the generic Hardware Interface Helpers instead of using hard-coded callbacks. This makes the code more readable and more flexible. No functional change. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jose Abreu
Switch stmmac_dma_ops to the generic Hardware Interface Helpers instead of using hard-coded callbacks. This makes the code more readable and more flexible. No functional change. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jose Abreu
Switch stmmac_desc_ops to the generic Hardware Interface Helpers instead of using hard-coded callbacks. This makes the code more readable and more flexible. No functional change. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 31 March 2018, 2 commits
-
-
By Jose Abreu
This adds initial support for DWMAC5 and implements the Automotive Safety Package, which is available from core version 5.10. The Automotive Safety Package (also called Safety Features) provides error protection in the core by implementing ECC protection in memories, on-chip data path parity protection, FSM parity and timeout protection, and Application/CSR interface timeout protection. In case of an uncorrectable error we call stmmac_global_err() and reconfigure the whole core. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Cc: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jose Abreu
Currently the TX timeout handler does not behave as expected and leads to an unrecoverable state. Rework the current implementation of TX timeout handling to actually perform a complete reset of the driver state and IP. We use deferred work to init a task which will be responsible for resetting the system. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Cc: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
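A minimal sketch of the deferred-reset pattern, assuming hypothetical field and helper names: the timeout handler only schedules work, and the worker does the heavy teardown/re-init outside the watchdog context.

        /* INIT_WORK(&priv->service_task, stmmac_service_task) is done once at probe time. */
        static void stmmac_service_task(struct work_struct *work)
        {
                struct stmmac_priv *priv = container_of(work, struct stmmac_priv,
                                                        service_task);

                rtnl_lock();
                stmmac_hw_teardown_and_reinit(priv);    /* hypothetical reset helper */
                rtnl_unlock();
        }

        static void stmmac_tx_timeout(struct net_device *dev)
        {
                struct stmmac_priv *priv = netdev_priv(dev);

                schedule_work(&priv->service_task);     /* defer the actual reset */
        }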
-
- 27 March 2018, 1 commit
-
-
By Joe Perches
Prefer the direct use of octal for permissions. Done with checkpatch -f --types=SYMBOLIC_PERMS --fix-inplace and some typing. Miscellanea: whitespace neatening around these conversions. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
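An illustrative before/after of this kind of conversion (parameter name arbitrary):

        /* before */
        module_param(debug, int, S_IRUGO | S_IWUSR);
        /* after */
        module_param(debug, int, 0644);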
-
- 28 February 2018, 3 commits
-
-
By Niklas Cassel
According to Documentation/memory-barriers.txt, we need to use a dma_rmb() after reading the status/own bit, to ensure that all descriptor fields are read only after reading the own bit. This way, we ensure that the DMA engine is done with the DMA descriptor before we read the other descriptor fields, e.g. the tx hardware timestamp (if PTP is enabled). Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
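A sketch of the required ordering inside the tx-clean path (descriptor field names illustrative):

        unsigned int status = le32_to_cpu(p->des3);

        if (status & TDES3_OWN)         /* descriptor still owned by the DMA engine */
                return;

        dma_rmb();                      /* own bit is read before any other descriptor field */

        ns = le32_to_cpu(p->des0);      /* now safe to read, e.g. the tx hardware timestamp */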
-
By Niklas Cassel
The last memory barrier in stmmac_xmit()/stmmac_tso_xmit() is placed between a coherent memory write and an MMIO write:
The own bit is written in the First Desc (TSO: MSS desc or First Desc).
<barrier>
The DMA engine is started by a write to the tx desc tail pointer / enable dma transmission register, i.e. an MMIO write.
This barrier cannot be a simple dma_wmb(), since a dma_wmb() is only used to guarantee ordering, with respect to other writes, to cache coherent DMA memory. To guarantee that the cache coherent memory writes have completed before we attempt to write to the cache incoherent MMIO region, we need to use the more heavyweight barrier wmb(). Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
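Illustrated roughly (register macro and field names hypothetical), the barrier sits between the coherent descriptor write and the MMIO doorbell:

        p->des3 = cpu_to_le32(tdes3 | TDES3_OWN);       /* cache coherent DMA memory */

        wmb();          /* complete the descriptor write before the MMIO kick */

        writel(tx_q->tx_tail_addr, priv->ioaddr + DMA_CHAN_TX_TAIL_PTR(queue));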
-
By Niklas Cassel
A dma_wmb() is used to guarantee ordering, with respect to other writes, to cache coherent DMA memory. There is a dma_wmb() in prepare_tx_desc()/prepare_tso_tx_desc() which ensures that TDES0/1/2 is written before TDES3 (which contains the own bit), for the First Desc. However, in the rare case that the MSS changes, there will be an MSS context descriptor in front of the regular DMA descriptors:
<MSS desc> <- DMA Next Descriptor
<First Desc>
<desc n>
<Last Desc>
Thus, for this special case, we need a dma_wmb() after prepare_tso_tx_desc()/before writing the own bit to the MSS desc, so that we flush the write to TDES3 for the First Desc, in order to ensure that the MSS descriptor is the last descriptor to set the own bit. Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 21 February 2018, 4 commits
-
-
By Niklas Cassel
stmmac_mac_config_rx_queues_routing() incorrectly calls rx_queue_prio() instead of rx_queue_routing(). This looks like a copy-paste issue, since stmmac_mac_config_rx_queues_prio() already calls rx_queue_prio(), and both stmmac_mac_config_rx_queues_routing() and stmmac_mac_config_rx_queues_prio() are very similar in structure. Fixes: abe80fdc ("net: stmmac: RX queue routing configuration") Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Niklas Cassel
The current code assumes that a tx_skbuff entry has been cleared by stmmac_tx_clean() before stmmac_xmit()/stmmac_tso_xmit() assigns a new skb to that entry. However, since we never check the current value before overwriting it, it is theoretically possible that a non-NULL value is overwritten. Add WARN_ONs to verify that each entry in tx_skbuff is NULL before it is assigned a new value. Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
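The check is essentially the following (field names illustrative):

        /* entry must already have been cleared by stmmac_tx_clean() */
        WARN_ON(tx_q->tx_skbuff[entry]);
        tx_q->tx_skbuff[entry] = skb;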
-
By Niklas Cassel
tx_skbuff is initialized to NULL in init_dma_tx_desc_rings(), which is called from ndo_open(). stmmac_tx_clean() frees any non-NULL skb and sets the tx_skbuff entry to NULL. Hence, there is no need to set tx_skbuff entries to NULL in stmmac_xmit()/stmmac_tso_xmit(), and doing so falsely gives the reader the impression that it is needed. Do not clear tx_skbuff entries in stmmac_xmit()/stmmac_tso_xmit(). Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Niklas Cassel
The DMA engine in dwmac4 can segment a large TSO packet into several smaller packets of (at most) Maximum Segment Size (MSS). The DMA engine fetches and saves the MSS via a context descriptor. This context descriptor has to be provided to each tx DMA channel. To ensure that this is done, move the struct member mss from stmmac_priv to stmmac_tx_queue. stmmac_reset_queues_param() now also resets mss, together with other queue parameters, so the reset of the mss value can be removed from stmmac_resume(). init_dma_tx_desc_rings() now also resets mss, together with other queue parameters, so the reset of the mss value can be removed from stmmac_open(). This fixes tx queue timeouts for dwmac4, with the DT property snps,tx-queues-to-use > 1, when running iperf3 with multiple threads. Fixes: ce736788 ("net: stmmac: adding multiple buffers for TX") Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 January 2018, 1 commit
-
-
By Florian Fainelli
Broadcom tags inserted by Broadcom switches put a 4-byte header after the MAC SA and before the EtherType, which may look like some sort of 0-length LLC/SNAP packet (tcpdump and wireshark do think that way). With ACS enabled in stmmac the packets were truncated to 8 bytes on reception, whereas clearing this bit allowed normal reception to occur. In order to make that possible, we need to pass a net_device argument to the different core_init() functions, and we are dependent on the Broadcom tagger padding packets correctly (which it now does). To be as little invasive as possible, this is only done for gmac1000 when the network device is DSA-enabled (netdev_uses_dsa() returns true). Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
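A sketch of the gist, with hypothetical register/bit names: core_init() now receives the net_device and skips automatic CRC/pad stripping when the port carries DSA tags.

        #include <net/dsa.h>

        static void core_init(struct mac_device_info *hw, struct net_device *dev)
        {
                u32 value = readl(hw->pcsr + GMAC_CONTROL);

                value |= GMAC_CORE_INIT;
                if (netdev_uses_dsa(dev))
                        value &= ~GMAC_CONTROL_ACS;     /* keep the Broadcom tag bytes intact */

                writel(value, hw->pcsr + GMAC_CONTROL);
        }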
-
- 05 January 2018, 1 commit
-
-
By Jerome Brunet
Note in the databook - Section 4.4 - EEE: "The EEE feature is not supported when the MAC is configured to use the TBI, RTBI, SMII, RMII or SGMII single PHY interface. Even if the MAC supports multiple PHY interfaces, you should activate the EEE mode only when the MAC is operating with GMII, MII, or RGMII interface." Applying this restriction solves a stability issue observed on Amlogic gxl platforms operating with the RMII interface and the internal PHY. Fixes: 83bf79b6 ("stmmac: disable at run-time the EEE if not supported") Signed-off-by: Jerome Brunet <jbrunet@baylibre.com> Tested-by: Arnaud Patard <arnaud.patard@rtp-net.org> Signed-off-by: David S. Miller <davem@davemloft.net>
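A sketch of the guard (the priv->plat->interface field name is assumed), using the standard phy interface helpers:

        if (priv->plat->interface != PHY_INTERFACE_MODE_MII &&
            priv->plat->interface != PHY_INTERFACE_MODE_GMII &&
            !phy_interface_mode_is_rgmii(priv->plat->interface))
                return false;   /* EEE not usable with RMII/SMII/(R)TBI/SGMII single PHY */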
-
- 03 January 2018, 1 commit
-
-
By Florian Fainelli
There is no reason to make printing the frame_len/COE value conditional on a check against ETH_FRAME_LEN; drop the check so we can see what the descriptor reports. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 December 2017, 1 commit
-
-
By Fredrik Hallenberg
As noted in dwmac4_wrback_get_rx_timestamp_status, the timestamp is found in the context descriptor following the current descriptor. However, the current code looks for the context descriptor in the current descriptor, which will always fail. Signed-off-by: Fredrik Hallenberg <megahallon@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 December 2017, 1 commit
-
-
By Niklas Cassel
There is nothing that says that the number of TX queues == the number of RX queues; e.g. the ARTPEC-6 SoC has 2 TX queues and 1 RX queue. This code is obviously wrong:
for (chan = 0; chan < tx_channel_count; chan++) {
        struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan];
priv->rx_queue has size MTL_MAX_RX_QUEUES, so this will send an uninitialized napi_struct to __napi_schedule(), causing us to crash in net_rx_action(), because napi_struct->poll is zero.
[12846.759880] Unable to handle kernel NULL pointer dereference at virtual address 00000000
[12846.768014] pgd = (ptrval)
[12846.770742] [00000000] *pgd=39ec7831, *pte=00000000, *ppte=00000000
[12846.777023] Internal error: Oops: 80000007 [#1] PREEMPT SMP ARM
[12846.782942] Modules linked in:
[12846.785998] CPU: 0 PID: 161 Comm: dropbear Not tainted 4.15.0-rc2-00285-gf5fb5f2f39a7 #36
[12846.794177] Hardware name: Axis ARTPEC-6 Platform
[12846.798879] task: (ptrval) task.stack: (ptrval)
[12846.803407] PC is at 0x0
[12846.805942] LR is at net_rx_action+0x274/0x43c
[12846.810383] pc : [<00000000>] lr : [<80bff064>] psr: 200e0113
[12846.816648] sp : b90d9ae8 ip : b90d9ae8 fp : b90d9b44
[12846.821871] r10: 00000008 r9 : 0013250e r8 : 00000100
[12846.827094] r7 : 0000012c r6 : 00000000 r5 : 00000001 r4 : bac84900
[12846.833619] r3 : 00000000 r2 : b90d9b08 r1 : 00000000 r0 : bac84900
Since each DMA channel can be used for rx and tx simultaneously, the current code should probably be rewritten so that napi_struct is embedded in a new struct stmmac_channel. That way, stmmac_poll() can call stmmac_tx_clean() on just the tx queue where we got the IRQ, instead of looping through all tx queues. This is also how the xgbe driver does it (another driver for this IP). Fixes: c22a3f48 ("net: stmmac: adding multiple napi mechanism") Signed-off-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 December 2017, 1 commit
-
-
By Lars Persson
The mss variable tracks the last max segment size sent to the TSO engine. We do not update the hardware as long as we receive skbs with the same value in gso_size. During a network device down/up cycle (mapped to the stmmac_release() and stmmac_open() callbacks) we issue a reset to the hardware and it forgets the setting for mss. However, we did not zero out our mss variable, so the next transmission of a gso packet happens with an undefined hardware setting. This triggers a hang in the TSO engine and eventually the netdev watchdog will bark. Fixes: f748be53 ("stmmac: support new GMAC4") Signed-off-by: Lars Persson <larper@axis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 November 2017, 1 commit
-
-
By Kees Cook
This converts all remaining cases of the old setup_timer() API into using timer_setup(), where the callback argument is the structure already holding the struct timer_list. These should have no behavioral changes, since they just change which pointer is passed into the callback with the same available pointers after conversion. It handles the following examples, in addition to some other variations. Casting from unsigned long: void my_callback(unsigned long data) { struct something *ptr = (struct something *)data; ... } ... setup_timer(&ptr->my_timer, my_callback, ptr); and forced object casts: void my_callback(struct something *ptr) { ... } ... setup_timer(&ptr->my_timer, my_callback, (unsigned long)ptr); become: void my_callback(struct timer_list *t) { struct something *ptr = from_timer(ptr, t, my_timer); ... } ... timer_setup(&ptr->my_timer, my_callback, 0); Direct function assignments: void my_callback(unsigned long data) { struct something *ptr = (struct something *)data; ... } ... ptr->my_timer.function = my_callback; have a temporary cast added, along with converting the args: void my_callback(struct timer_list *t) { struct something *ptr = from_timer(ptr, t, my_timer); ... } ... ptr->my_timer.function = (TIMER_FUNC_TYPE)my_callback; And finally, callbacks without a data assignment: void my_callback(unsigned long data) { ... } ... setup_timer(&ptr->my_timer, my_callback, 0); have their argument renamed to verify they're unused during conversion: void my_callback(struct timer_list *unused) { ... } ... timer_setup(&ptr->my_timer, my_callback, 0); The conversion is done with the following Coccinelle script: spatch --very-quiet --all-includes --include-headers \ -I ./arch/x86/include -I ./arch/x86/include/generated \ -I ./include -I ./arch/x86/include/uapi \ -I ./arch/x86/include/generated/uapi -I ./include/uapi \ -I ./include/generated/uapi --include ./include/linux/kconfig.h \ --dir . \ --cocci-file ~/src/data/timer_setup.cocci @fix_address_of@ expression e; @@ setup_timer( -&(e) +&e , ...) // Update any raw setup_timer() usages that have a NULL callback, but // would otherwise match change_timer_function_usage, since the latter // will update all function assignments done in the face of a NULL // function initialization in setup_timer(). 
@change_timer_function_usage_NULL@ expression _E; identifier _timer; type _cast_data; @@ ( -setup_timer(&_E->_timer, NULL, _E); +timer_setup(&_E->_timer, NULL, 0); | -setup_timer(&_E->_timer, NULL, (_cast_data)_E); +timer_setup(&_E->_timer, NULL, 0); | -setup_timer(&_E._timer, NULL, &_E); +timer_setup(&_E._timer, NULL, 0); | -setup_timer(&_E._timer, NULL, (_cast_data)&_E); +timer_setup(&_E._timer, NULL, 0); ) @change_timer_function_usage@ expression _E; identifier _timer; struct timer_list _stl; identifier _callback; type _cast_func, _cast_data; @@ ( -setup_timer(&_E->_timer, _callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, &_callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, _callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, &_callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)_callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)&_callback, _E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)_callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, (_cast_func)&_callback, (_cast_data)_E); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E._timer, _callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, _callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, &_callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, &_callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)_callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)_callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)&_callback, (_cast_data)_E); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, (_cast_func)&_callback, (_cast_data)&_E); +timer_setup(&_E._timer, _callback, 0); | _E->_timer@_stl.function = _callback; | _E->_timer@_stl.function = &_callback; | _E->_timer@_stl.function = (_cast_func)_callback; | _E->_timer@_stl.function = (_cast_func)&_callback; | _E._timer@_stl.function = _callback; | _E._timer@_stl.function = &_callback; | _E._timer@_stl.function = (_cast_func)_callback; | _E._timer@_stl.function = (_cast_func)&_callback; ) // callback(unsigned long arg) @change_callback_handle_cast depends on change_timer_function_usage@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _origtype; identifier _origarg; type _handletype; identifier _handle; @@ void _callback( -_origtype _origarg +struct timer_list *t ) { ( ... when != _origarg _handletype *_handle = -(_handletype *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg | ... when != _origarg _handletype *_handle = -(void *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg | ... when != _origarg _handletype *_handle; ... when != _handle _handle = -(_handletype *)_origarg; +from_timer(_handle, t, _timer); ... when != _origarg | ... when != _origarg _handletype *_handle; ... when != _handle _handle = -(void *)_origarg; +from_timer(_handle, t, _timer); ... 
when != _origarg ) } // callback(unsigned long arg) without existing variable @change_callback_handle_cast_no_arg depends on change_timer_function_usage && !change_callback_handle_cast@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _origtype; identifier _origarg; type _handletype; @@ void _callback( -_origtype _origarg +struct timer_list *t ) { + _handletype *_origarg = from_timer(_origarg, t, _timer); + ... when != _origarg - (_handletype *)_origarg + _origarg ... when != _origarg } // Avoid already converted callbacks. @match_callback_converted depends on change_timer_function_usage && !change_callback_handle_cast && !change_callback_handle_cast_no_arg@ identifier change_timer_function_usage._callback; identifier t; @@ void _callback(struct timer_list *t) { ... } // callback(struct something *handle) @change_callback_handle_arg depends on change_timer_function_usage && !match_callback_converted && !change_callback_handle_cast && !change_callback_handle_cast_no_arg@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _handletype; identifier _handle; @@ void _callback( -_handletype *_handle +struct timer_list *t ) { + _handletype *_handle = from_timer(_handle, t, _timer); ... } // If change_callback_handle_arg ran on an empty function, remove // the added handler. @unchange_callback_handle_arg depends on change_timer_function_usage && change_callback_handle_arg@ identifier change_timer_function_usage._callback; identifier change_timer_function_usage._timer; type _handletype; identifier _handle; identifier t; @@ void _callback(struct timer_list *t) { - _handletype *_handle = from_timer(_handle, t, _timer); } // We only want to refactor the setup_timer() data argument if we've found // the matching callback. This undoes changes in change_timer_function_usage. @unchange_timer_function_usage depends on change_timer_function_usage && !change_callback_handle_cast && !change_callback_handle_cast_no_arg && !change_callback_handle_arg@ expression change_timer_function_usage._E; identifier change_timer_function_usage._timer; identifier change_timer_function_usage._callback; type change_timer_function_usage._cast_data; @@ ( -timer_setup(&_E->_timer, _callback, 0); +setup_timer(&_E->_timer, _callback, (_cast_data)_E); | -timer_setup(&_E._timer, _callback, 0); +setup_timer(&_E._timer, _callback, (_cast_data)&_E); ) // If we fixed a callback from a .function assignment, fix the // assignment cast now. @change_timer_function_assignment depends on change_timer_function_usage && (change_callback_handle_cast || change_callback_handle_cast_no_arg || change_callback_handle_arg)@ expression change_timer_function_usage._E; identifier change_timer_function_usage._timer; identifier change_timer_function_usage._callback; type _cast_func; typedef TIMER_FUNC_TYPE; @@ ( _E->_timer.function = -_callback +(TIMER_FUNC_TYPE)_callback ; | _E->_timer.function = -&_callback +(TIMER_FUNC_TYPE)_callback ; | _E->_timer.function = -(_cast_func)_callback; +(TIMER_FUNC_TYPE)_callback ; | _E->_timer.function = -(_cast_func)&_callback +(TIMER_FUNC_TYPE)_callback ; | _E._timer.function = -_callback +(TIMER_FUNC_TYPE)_callback ; | _E._timer.function = -&_callback; +(TIMER_FUNC_TYPE)_callback ; | _E._timer.function = -(_cast_func)_callback +(TIMER_FUNC_TYPE)_callback ; | _E._timer.function = -(_cast_func)&_callback +(TIMER_FUNC_TYPE)_callback ; ) // Sometimes timer functions are called directly. Replace matched args. 
@change_timer_function_calls depends on change_timer_function_usage && (change_callback_handle_cast || change_callback_handle_cast_no_arg || change_callback_handle_arg)@ expression _E; identifier change_timer_function_usage._timer; identifier change_timer_function_usage._callback; type _cast_data; @@ _callback( ( -(_cast_data)_E +&_E->_timer | -(_cast_data)&_E +&_E._timer | -_E +&_E->_timer ) ) // If a timer has been configured without a data argument, it can be // converted without regard to the callback argument, since it is unused. @match_timer_function_unused_data@ expression _E; identifier _timer; identifier _callback; @@ ( -setup_timer(&_E->_timer, _callback, 0); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, _callback, 0L); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E->_timer, _callback, 0UL); +timer_setup(&_E->_timer, _callback, 0); | -setup_timer(&_E._timer, _callback, 0); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, _callback, 0L); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_E._timer, _callback, 0UL); +timer_setup(&_E._timer, _callback, 0); | -setup_timer(&_timer, _callback, 0); +timer_setup(&_timer, _callback, 0); | -setup_timer(&_timer, _callback, 0L); +timer_setup(&_timer, _callback, 0); | -setup_timer(&_timer, _callback, 0UL); +timer_setup(&_timer, _callback, 0); | -setup_timer(_timer, _callback, 0); +timer_setup(_timer, _callback, 0); | -setup_timer(_timer, _callback, 0L); +timer_setup(_timer, _callback, 0); | -setup_timer(_timer, _callback, 0UL); +timer_setup(_timer, _callback, 0); ) @change_callback_unused_data depends on match_timer_function_unused_data@ identifier match_timer_function_unused_data._callback; type _origtype; identifier _origarg; @@ void _callback( -_origtype _origarg +struct timer_list *unused ) { ... when != _origarg } Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 28 October 2017, 1 commit
-
-
By Bhadram Varka
Currently the stmmac driver does not copy the valid ethernet MAC address to the MAC registers. This patch takes care of updating the MAC register with the MAC address. Signed-off-by: Bhadram Varka <vbhadram@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 October 2017, 3 commits
-
-
By Bernd Edlinger
This is the possible reason for different hard-to-reproduce problems on my ARMv7-SMP test system. The symptoms are, in recent kernels, imprecise external aborts, and in older kernels various kinds of network stalls and unexpected page allocation failures. My testing indicates that the trouble started between v4.5 and v4.6 and prevails up to v4.14. Using dirty_tx before acquiring the spin lock is clearly wrong and was first introduced with v4.6. Fixes: e3ad57c9 ("stmmac: review RX/TX ring management") Signed-off-by: Bernd Edlinger <bernd.edlinger@hotmail.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jose Abreu
When using GMAC4, the valid timestamp comes from the CTX next desc, but we are passing the previous desc to the get_rx_timestamp_status() callback. Fix this and, while at it, rework the function logic a little bit. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jose Abreu
When RX HW timestamping is enabled and a frame is discarded, we are not freeing the skb but instead only setting the entry to NULL. Add a call to dev_kfree_skb_any() so that the skb entry is correctly freed. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
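In essence, inside the rx ring walk (field names illustrative):

        if (unlikely(discard_frame)) {
                dev_kfree_skb_any(rx_q->rx_skbuff[entry]);
                rx_q->rx_skbuff[entry] = NULL;  /* was only NULLed before, leaking the skb */
        }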
-
- 15 October 2017, 2 commits
-
-
By Jose Abreu
Flow control must be disabled for AVB-enabled queues, and TX AVB queues must be enabled by setting BIT(2) of TXQEN. Correct this by passing the queue mode to the DMA callbacks and by checking in these functions whether we are in AVB mode, performing the necessary adjustments. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jose Abreu
Currently we are using all of the available FIFO size in the RQS and TQS fields. This will not work correctly in multi-queue IPs because the total FIFO size must be split among the enabled queues. Correct this by computing the available FIFO size per queue and setting the right value in the TQS and RQS fields. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
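The idea reduces to splitting the reported FIFO size evenly across the active queues before programming RQS/TQS (field names assumed, not the exact driver code):

        int rxfifosz = priv->plat->rx_fifo_size ?: priv->dma_cap.rx_fifo_size;
        int txfifosz = priv->plat->tx_fifo_size ?: priv->dma_cap.tx_fifo_size;

        /* each enabled queue gets its share of the total FIFO */
        rxfifosz /= rx_queues_count;
        txfifosz /= tx_queues_count;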
-