- 25 August 2012 (9 commits)
-
-
Submitted by Ben Hutchings
Currently we ignore and clear the disabled state. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings
I don't think these PM functions can race with userland net device operations, but it's much easier to reason about locking if state is consistently guarded by the same lock. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings
STATE_INIT and STATE_FINI are equivalent and represent incompletely initialised states; combine them as STATE_UNINIT. Rename STATE_RUNNING to STATE_READY, to avoid confusion with netif_running() and IFF_RUNNING. The comments do not quite match current usage, but this will be corrected in subsequent fixes. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
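A minimal sketch of the resulting state set: STATE_UNINIT and STATE_READY come from the message, while STATE_DISABLED and the comments are assumptions, not the actual sfc definitions.

```c
/* Sketch only: illustrative names/comments, not the real driver enum. */
enum sketch_nic_state {
	STATE_UNINIT = 0,	/* being probed/removed, or frozen (was STATE_INIT/STATE_FINI) */
	STATE_READY,		/* hardware initialised and netdev registered (was STATE_RUNNING) */
	STATE_DISABLED,		/* assumed: device disabled after a hardware error */
};
```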
-
Submitted by Ben Hutchings
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings
We only use tso_state::full_packet_space to calculate the IPv4 tot_len or IPv6 payload_len, not to set tso_state::packet_space. Replace it with an ip_base_len field holding the value of tot_len or payload_len before including the TCP payload, which is much more useful when constructing the new headers. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
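A hedged sketch of how such an ip_base_len value makes per-segment header construction a simple addition; the function and parameter names here are illustrative, not the sfc code.

```c
/* Sketch: ip_base_len is tot_len (IPv4) or payload_len (IPv6) *excluding*
 * the TCP payload, so each segment's length field is just base + payload. */
static void sketch_set_ip_length(struct sk_buff *skb, bool is_ipv4,
				 unsigned int ip_base_len,
				 unsigned int payload_in_segment)
{
	if (is_ipv4)
		ip_hdr(skb)->tot_len = htons(ip_base_len + payload_in_segment);
	else
		ipv6_hdr(skb)->payload_len = htons(ip_base_len + payload_in_segment);
}
```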
-
Submitted by Ben Hutchings
TSO header buffers contain a control structure immediately followed by the packet headers, and are kept on a free list when not in use. This complicates buffer management and tends to result in cache read misses when we recycle such buffers (particularly if DMA-coherent memory requires caches to be disabled). Replace the free list with a simple mapping by descriptor index. We know that there is always a payload descriptor between any two descriptors with TSO header buffers, so we can allocate only one such buffer for each two descriptors. While we're at it, use a standard error code for allocation failure, not -1. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings
We now have a definite upper bound on the number of descriptors per skb; use that to stop the queue when the next packet might not fit. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
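The usual shape of such a check, sketched with placeholder names (the real bound and free-space accounting live in the driver; the completion path is assumed to wake the queue later).

```c
/* Sketch only: free_descs and max_descs_per_skb stand in for whatever
 * counters and worst-case bound the driver actually maintains. */
static void sketch_tx_maybe_stop(struct net_device *dev,
				 unsigned int free_descs,
				 unsigned int max_descs_per_skb)
{
	/* Stop while a worst-case next packet might not fit; the TX
	 * completion path wakes the queue once descriptors are freed. */
	if (free_descs < max_descs_per_skb)
		netif_stop_queue(dev);
}
```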
-
Submitted by Ben Hutchings
Add a flags field to struct efx_tx_buffer, replacing the continuation and map_single booleans. Since a single descriptor cannot be both a TSO header and the last descriptor for an skb, unionise efx_tx_buffer::{skb,tsoh} and add flags for validity of these fields. Clear all flags in free buffers (whereas previously the continuation flag would be set). Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
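A hedged sketch of what a flags-plus-union buffer descriptor of this kind can look like; the flag meanings follow the message, but the names, widths and extra fields are assumptions rather than the sfc layout.

```c
/* Illustrative layout only, not the actual struct efx_tx_buffer. */
#define SKETCH_TX_BUF_CONT	 0x1	/* not the last descriptor of an skb */
#define SKETCH_TX_BUF_SKB	 0x2	/* ->skb is valid (last descriptor) */
#define SKETCH_TX_BUF_TSOH	 0x4	/* ->tsoh (TSO header buffer) is valid */
#define SKETCH_TX_BUF_MAP_SINGLE 0x8	/* unmap with dma_unmap_single() */

struct sketch_tx_buffer {
	union {
		const struct sk_buff *skb;	/* valid if SKETCH_TX_BUF_SKB */
		void *tsoh;			/* valid if SKETCH_TX_BUF_TSOH */
	};
	dma_addr_t dma_addr;
	unsigned short flags;	/* 0 in free buffers, per the message */
	unsigned short len;
};
```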
-
Submitted by Timur Tabi
Similar to fsl_pq_mdio.c, this driver is for the 10G MDIO controller on Freescale Frame Manager Ethernet controllers. Signed-off-by: Timur Tabi <timur@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 21 August 2012 (5 commits)
-
-
Submitted by Jesse Brandeburg
This is the implementation for igb to allow forcing MDI state via ethtool, allowing users to work around some improperly behaving switches. Forcing in this driver is for now only allowed when auto-neg is enabled. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> CC: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jesse Brandeburg
Some users report issues with link failing when connected to certain switches. This gives the user the ability to control the MDI state from the driver, allowing users to work around some improperly behaving switches. Forcing in this driver is for now only allowed when auto-neg is enabled. This is in reference to the related ethtool app patch and bugzilla.kernel.org bug 11998. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> CC: bruce.w.allan@intel.com CC: n.poppelier@xs4all.nl CC: bastien@durel.org CC: jsveiga@it.eng.br Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
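The ethtool side of this control is the eth_tp_mdix_ctrl field of struct ethtool_cmd; a hedged sketch of how a driver's set_settings handler can validate the request, with placeholder names rather than the actual Intel driver code.

```c
/* Sketch: the message says forcing MDI/MDI-X is only allowed while
 * autoneg is enabled, which is what this check enforces. */
static int sketch_validate_mdix(const struct ethtool_cmd *ecmd,
				bool autoneg_enabled, u8 *mdix_ctrl)
{
	if (ecmd->eth_tp_mdix_ctrl == ETH_TP_MDI_INVALID)
		return 0;			/* no change requested */

	if (!autoneg_enabled)
		return -EINVAL;			/* forcing requires auto-neg */

	*mdix_ctrl = ecmd->eth_tp_mdix_ctrl;	/* ETH_TP_MDI, _MDI_X or _MDI_AUTO */
	return 0;
}
```

A matching version of the ethtool application then exposes this as an MDI-X setting on the command line, as noted in the e1000 commit below.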
-
Submitted by Jesse Brandeburg
This is the implementation in e1000 to allow ethtool to force MDI state, allowing users to work around some improperly behaving switches. Forcing in this driver is for now only allowed when auto-neg is enabled. Using it requires a matching version of the ethtool application that supports this functionality. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> CC: Tushar Dave <tushar.n.dave@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jesse Brandeburg
In order for igb to support MDI setting via ethtool, this code is needed to allow setting the MDI state via software. This is in reference to the related ethtool patch. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Bruce W Allan
In order for e1000e to support MDI setting via ethtool, this code is needed to allow setting the MDI state via software. This is in reference to the related ethtool patch and fixes bugzilla.kernel.org bug 11998. Signed-off-by: Bruce W Allan <bruce.w.allan@intel.com> Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 20 August 2012 (2 commits)
-
-
Submitted by Kelvin Cheung
When getting the clock, also give a chance to CPUs without DT support that use the Common Clock Framework, such as Loongson1B. Signed-off-by: Kelvin Cheung <keguang.zhang@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
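One common way to give non-DT platforms a chance is to fall back to a connection-id clock lookup when the DT lookup yields nothing; this is a sketch of that idea only, and the fallback clock name is a placeholder, not necessarily what this driver uses.

```c
/* Sketch, not the actual patch: try the DT-provided clock first, then a
 * named lookup for non-DT platforms on the common clock framework. */
static struct clk *sketch_get_mac_clk(struct device *dev)
{
	struct clk *clk = devm_clk_get(dev, NULL);

	if (IS_ERR(clk))
		clk = devm_clk_get(dev, "stmmaceth");	/* placeholder con_id */

	return clk;
}
```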
-
Submitted by Phil Edworthy
Signed-off-by: Phil Edworthy <phil.edworthy@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 August 2012 (9 commits)
-
-
Submitted by Alexander Duyck
This change updates the code related to configuring the transmit frame checksum. Specifically, I have updated the code so that we only skip inserting the checksum when we are not performing some other offload that will modify the frame data. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
-
Submitted by Alexander Duyck
This change moves the RSC code into the non-EOP descriptor handling function. The main motivation behind this change is to help reduce the overhead in the non-RSC case. Previously the non-RSC path code would always be checking for append count even if RSC had been disabled. Now this code is completely skipped in a single conditional check instead of having to make two separate checks. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
-
Submitted by Alexander Duyck
This patch creates a function named ixgbe_fetch_rx_buffer. The sole purpose of this function is to retrieve a single buffer off of the ring and to place it in an skb. The advantage to doing this is that it helps improve the readability, since I can decrease the indentation for the code in this section. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
-
Submitted by Alexander Duyck
This change makes it so that if only the first 256 bytes of a buffer are used we just copy the data out and leave the offset and page count unchanged. There are multiple advantages to this. First it allows us to reuse the page much more in the case of pages larger than 4K. It also allows us to avoid some expensive atomic operations in the form of get_page/put_page. In perf I have seen CPU utilization for put_page drop from 3.5% to 1.8% as a result of this patch when doing small packet routing, and packet rates increased by about 3%. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
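A hedged sketch of the copy-break idea described above; the threshold value matches the message, but the helper name and surrounding assumptions (notably the available tailroom) are illustrative, not the ixgbe code.

```c
/* Sketch: small receives are memcpy'd into the skb linear area and the
 * page is left untouched (no page reference taken, offset unchanged),
 * so it stays reusable without any get_page/put_page traffic.
 * Assumes the skb was allocated with >= SKETCH_COPYBREAK tailroom. */
#define SKETCH_COPYBREAK 256

static bool sketch_rx_copybreak(struct sk_buff *skb, struct page *page,
				unsigned int offset, unsigned int len)
{
	if (len > SKETCH_COPYBREAK)
		return false;	/* too big: attach the page as a fragment instead */

	memcpy(skb_put(skb, len), page_address(page) + offset, len);
	return true;		/* page left alone for reuse */
}
```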
-
Submitted by Alexander Duyck
This change creates a separate function for functionality similar to pskb_pull_tail. The main motivation for moving it to a separate function is so that later I can just skip this function in the case where we have already copied the buffer into skb->head. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
-
Submitted by Alexander Duyck
This patch makes it so that we will always have ownership of the buffers by the time we get to ixgbe_add_rx_frag. This is necessary as I am planning to add a copy-break to ixgbe_add_rx_frag, and in order for that to function correctly we need the CPU to have ownership of the buffer. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
-
Submitted by Alexander Duyck
This change makes it so that we do not use double buffering if the page size is larger than 4K. Instead we will simply walk through the page using up to 3K per receive, and if we receive less than that we only move the offset by the amount received. We will free the page when there is no longer any space left that we can use, instead of checking the page count to see if we can cycle back to the start. The main motivation behind this is to avoid the unnecessary truesize cost for using a half page when most packets are 2K or smaller. With this new approach the largest possible truesize for a page fragment will be 3K when PAGE_SIZE is larger than 4K. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
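A hedged sketch of walking through a large page in up-to-3K steps and releasing it only when no usable space remains; constants, names and the exact accounting are placeholders, not the ixgbe implementation.

```c
/* Sketch only: the offset advances by what was actually consumed, and the
 * page is dropped once another receive buffer can no longer fit in it. */
#define SKETCH_RX_BUFSZ	(3 * 1024)

static bool sketch_advance_rx_offset(struct page *page, unsigned int *offset,
				     unsigned int used)
{
	*offset += used;

	if (*offset + SKETCH_RX_BUFSZ <= PAGE_SIZE)
		return true;	/* page still has room; keep using it */

	put_page(page);		/* no space left; caller drops its pointer */
	return false;
}
```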
-
Submitted by Alexander Duyck
This patch combines ixgbe_add_rx_frag and ixgbe_can_reuse_page into a single function. The main motivation behind this is to make better use of the values so that we don't have to load them from memory and into registers twice. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
-
Submitted by Alexander Duyck
This change reverts an earlier patch that introduced ixgbe_init_rx_page_offset. The idea behind the function was to provide some variation in the starting offset for the page in order to reduce hot-spots in the cache. However it doesn't appear to provide any significant benefit in the testing I have done. It has however been a source of several bugs, and it blocks us from being able to use 2K fragments on larger page sizes. So the decision I made was to remove it. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
-
- 16 August 2012 (3 commits)
-
-
Submitted by Roland Dreier
- Use kcalloc() / vzalloc() instead of an extra bitmap_zero(). - Add __GFP_NOWARN to kcalloc() since we'll try vzalloc() if it fails. Signed-off-by: Roland Dreier <roland@purestorage.com>
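The kcalloc-then-vzalloc pattern described here looks roughly like this; a sketch with generic names, not the exact mlx4 code.

```c
/* Sketch: both allocators return zeroed memory, so no bitmap_zero() is
 * needed; __GFP_NOWARN suppresses the kmalloc failure splat because
 * vzalloc() is tried as a fallback anyway. */
static unsigned long *sketch_alloc_bitmap(unsigned int nbits)
{
	unsigned long *bits;

	bits = kcalloc(BITS_TO_LONGS(nbits), sizeof(long),
		       GFP_KERNEL | __GFP_NOWARN);
	if (!bits)
		bits = vzalloc(BITS_TO_LONGS(nbits) * sizeof(long));

	return bits;	/* NULL only if both allocators failed */
}
```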
-
Submitted by Yishai Hadas
Fix some issues around int variables used in data structures related to memory registration. Handle int overflow in mlx4_init_icm_table by using a u64 intermediate variable and changing the struct mlx4_icm_table num_obj field to be u32. Change some more fields/variables to use u32 instead of int to prevent a case where the variable becomes negative when bit 31 is set. Also subtract log_mtts_per_seg from the exponent when computing num_mtt, since it is added later on in that very same code area. This and the previous commit fix some issues which actually prevent commit db5a7a65 ("mlx4_core: Scale size of MTT table with system RAM") from working. Now, when the number of MTTs is scaled with the size of the RAM, we can map up to 8TB. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Jack Morgenstein <jackm@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
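The overflow-avoidance pattern mentioned (u32 counters plus a u64 intermediate for the product) is essentially the following, sketched with generic names.

```c
/* Sketch: u32 * int arithmetic can wrap once bit 31 is set; widening to
 * u64 before the multiply keeps the table-size computation correct. */
static u64 sketch_table_size(u32 num_obj, unsigned int obj_size)
{
	return (u64)num_obj * obj_size;
}
```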
-
Submitted by Yishai Hadas
mlx4_buddy_init uses kmalloc() to allocate bitmaps, which fails when the required size is beyond the max supported value (or when memory is too fragmented to handle a huge allocation). Extend this to use vmalloc() if kmalloc() fails, and take that into account when freeing the bitmaps as well. This fixes a driver load failure when log num mtt is 26 or higher, and is a step in the direction of allowing to register huge amounts of memory on large memory systems. Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
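"Take that into account when freeing" means the free path has to know which allocator provided the memory; a sketch of the usual pattern of the era (modern kernels would simply call kvfree()).

```c
/* Sketch of the matching free path for a kmalloc-or-vmalloc bitmap. */
static void sketch_free_bitmap(unsigned long *bits)
{
	if (is_vmalloc_addr(bits))
		vfree(bits);
	else
		kfree(bits);
}
```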
-
- 15 August 2012 (3 commits)
-
-
Submitted by Julia Lawall
Convert a 0 error return code to a negative one, as returned elsewhere in the function. A simplified version of the semantic match that finds this problem is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
identifier ret; expression e,e1,e2,e3,e4,x;
@@
(
if (\(ret != 0\|ret < 0\) || ...) { ... return ...; }
|
ret = 0
)
... when != ret = e1
*x = \(kmalloc\|kzalloc\|kcalloc\|devm_kzalloc\|ioremap\|ioremap_nocache\|devm_ioremap\|devm_ioremap_nocache\)(...);
... when != x = e2
    when != ret = e3
*if (x == NULL || ...)
{
  ... when != ret = e4
*  return ret;
}
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
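The class of bug this semantic patch finds is a success code being returned on an allocation-failure path; a minimal C illustration (hypothetical function, not the driver being fixed).

```c
/* Illustration only: 'ret' was left at 0 on the failure path, so callers
 * saw success; the fix is to return a proper negative errno instead. */
static int sketch_alloc_priv(struct device *dev, void **out)
{
	int ret = 0;
	void *buf = devm_kzalloc(dev, 64, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;		/* previously: "return ret;" i.e. 0 */

	*out = buf;
	return ret;
}
```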
-
Submitted by Julia Lawall
Convert a 0 error return code to a negative one, as returned elsewhere in the function. A simplified version of the semantic match that finds this problem is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
identifier ret; expression e,e1,e2,e3,e4,x;
@@
(
if (\(ret != 0\|ret < 0\) || ...) { ... return ...; }
|
ret = 0
)
... when != ret = e1
*x = \(kmalloc\|kzalloc\|kcalloc\|devm_kzalloc\|ioremap\|ioremap_nocache\|devm_ioremap\|devm_ioremap_nocache\)(...);
... when != x = e2
    when != ret = e3
*if (x == NULL || ...)
{
  ... when != ret = e4
*  return ret;
}
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julia Lawall
If the NULL test is necessary, the initialization involving a dereference of the tested value should be moved after the NULL test. The semantic patch that fixes this problem is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
type T;
expression E;
identifier i,fld;
statement S;
@@
- T i = E->fld;
+ T i;
  ... when != E
      when != i
  if (E == NULL) S
+ i = E->fld;
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
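The same transformation shown in plain C with hypothetical structure names, for readers not used to Coccinelle notation.

```c
/* Illustration only: dereferencing 'dev' in the initializer defeats the
 * NULL check below; the fix moves the dereference after the test. */
struct sketch_priv { int id; };
struct sketch_dev  { struct sketch_priv *priv; };

static int sketch_get_id(struct sketch_dev *dev)
{
	struct sketch_priv *priv;	/* was: struct sketch_priv *priv = dev->priv; */

	if (dev == NULL)
		return -ENODEV;

	priv = dev->priv;		/* dereference only after the NULL test */
	return priv->id;
}
```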
-
- 13 August 2012 (1 commit)
-
-
Submitted by Joren Van Onder
Fix the following compiler warnings: - drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c:2908:3: warning: comparison of distinct pointer types lacks a cast [enabled by default] - drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c:1709:7: warning: comparison of distinct pointer types lacks a cast [enabled by default] Signed-off-by: Joren Van Onder <joren.vanonder@gmail.com> Acked-by: Yuval Mintz <yuvalmin@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
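This particular warning typically comes from the kernel's min()/max() macros when the two operands have different types; as an assumption about the nature of the fix, the conventional remedy is the typed variant, sketched below.

```c
/* Assumed fix pattern: force a common type so min()/max() stops
 * comparing distinct pointer types internally. */
static u32 sketch_clamp_len(u16 len16, u32 limit32)
{
	/* min(len16, limit32) would warn; min_t() picks the type explicitly. */
	return min_t(u32, len16, limit32);
}
```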
-
- 11 August 2012 (1 commit)
-
-
Submitted by Emil Tantilov
This patch adds missing braces around the 10gig link check to include the check for KR support. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Reported-by: Sascha Wildner <saw@online.de> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 August 2012 (2 commits)
-
-
Submitted by Yuval Mintz
During probe, every function probed clears the recovery registers from all functions on its path, thus signaling that given a future recovery event, there will be no need to wait for those functions. This is flawed behaviour: each function should only be responsible for its own bit. Since these registers are handled during the load/unload routines, this cleanup is removed altogether. Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com> Signed-off-by: Ariel Elior <ariele@broadcom.com> Signed-off-by: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yuval Mintz
The existing previous-driver unload flow is flawed, causing the probe of functions reaching the 'uncommon fork' in FLR-capable devices to fail. This patch resolves this, as well as fixing the flow for hypervisors which disable FLR capabilities of functions as they pass them as PDA to VMs, since we cannot base the flow on the PCI configuration space. Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com> Signed-off-by: Ariel Elior <ariele@broadcom.com> Signed-off-by: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 August 2012 (5 commits)
-
-
Submitted by Alexander Duyck
It looks like the register defines for DCA were never updated after going from 82575 to 82576. This change addresses that by updating the defines. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
-
Submitted by Emil Tantilov
This patch resolves a "BUG: unable to handle kernel paging request at ..." oops while dumping packet data. The issue occurs with IOMMU enabled due to the address provided by phys_to_virt(). This patch avoids phys_to_virt() by using skb->data and the address of the pages allocated for Rx. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
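A hedged sketch of the safer dumping approach: with an IOMMU, the DMA address is a bus address, so phys_to_virt() on it yields garbage; the driver should dump from kernel-virtual addresses it already holds (skb->data, or the mapping of the RX page). Names and limits below are illustrative, not the ixgbe/igb debug code.

```c
/* Sketch only: dump from addresses the driver owns, never from
 * phys_to_virt(dma_handle). */
static void sketch_dump_rx_buffer(const struct sk_buff *skb,
				  struct page *page, unsigned int offset,
				  unsigned int len)
{
	len = min_t(unsigned int, len, 64);	/* cap the dump, illustrative */

	if (skb)
		print_hex_dump(KERN_INFO, "rx: ", DUMP_PREFIX_ADDRESS,
			       16, 1, skb->data, len, true);
	else if (page)
		print_hex_dump(KERN_INFO, "rx: ", DUMP_PREFIX_ADDRESS,
			       16, 1, page_address(page) + offset, len, true);
}
```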
-
Submitted by Emil Tantilov
This patch resolves a "BUG: unable to handle kernel paging request at ..." oops while dumping packet data. The issue occurs with IOMMU enabled due to the address provided by phys_to_virt(). This patch avoids phys_to_virt() by using skb->data and the address of the pages allocated for Rx. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
-
Submitted by Arnd Bergmann
Driver probe functions are generally __devinit so they will be discarded after initialization for non-hotplug kernels. This was found by a new warning after patch 6a228452 ("stmmac: Add device-tree support") added a new __devinit function that is called from stmmac_pltfr_probe. Without this patch, building socfpga_defconfig results in: WARNING: drivers/net/ethernet/stmicro/stmmac/stmmac.o(.text+0x5d4c): Section mismatch in reference from the function stmmac_pltfr_probe() to the function .devinit.text:stmmac_probe_config_dt() The function stmmac_pltfr_probe() references the function __devinit stmmac_probe_config_dt(). This is often because stmmac_pltfr_probe lacks a __devinit annotation or the annotation of stmmac_probe_config_dt is wrong. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Cc: Stefan Roese <sr@denx.de> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: David S. Miller <davem@davemloft.net> Cc: netdev@vger.kernel.org Acked-by: Stefan Roese <sr@denx.de> Signed-off-by: David S. Miller <davem@davemloft.net>
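For context, the annotation convention at issue looked like the sketch below; whether this particular patch adds the annotation to the caller or drops it from the callee is not stated in the message, so this is only an illustration of the mechanism. (__devinit has since been removed from the kernel entirely.)

```c
/* Sketch of the era's convention: code in .devinit.text may be discarded
 * on non-hotplug kernels, so a non-annotated caller referencing it makes
 * modpost emit the section-mismatch warning quoted above. */
static int __devinit sketch_pltfr_probe(struct platform_device *pdev)
{
	/* ...may call other __devinit helpers, e.g. a DT config parser... */
	return 0;
}
```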
-
Submitted by stigge@antcom.de
The #ifdefs regarding CONFIG_ARCH_LPC32XX_MII_SUPPORT and CONFIG_ARCH_LPC32XX_IRAM_FOR_NET are obsolete since the symbols have been removed from Kconfig and replaced by devicetree based configuration. Signed-off-by: Roland Stigge <stigge@antcom.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-