- 09 July 2014, 36 commits
-
-
Submitted by Eugenia Emantayev
Promiscuous mode applies only to MACs. The VLAN filter should not be disabled/enabled when entering/leaving promiscuous mode. Signed-off-by: Aviad Yehezkel <aviadye@mellanox.co.il> Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com> Signed-off-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eugenia Emantayev
Verify the port number to avoid crashes when it falls outside the valid range. Signed-off-by: Eli Cohen <eli@mellanox.com> Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com> Signed-off-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eugenia Emantayev
Loopback cannot work when the port is down. Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com> Signed-off-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eugenia Emantayev
In 40GE we can't use the default bandwidth units for setting the rate limit (100 Mbps), since the maximum is 255 * 100 Mbps = 25 Gbps (not suited for 40GE); we therefore need 1 Gbps units. But for 10GE, 1 Gbps units may be too coarse, so the following scheme is used. For a user-set rate limit <= 25 Gbps: use 100 Mbps units * user rate limit (* 10). For a user-set rate limit > 25 Gbps: use 1 Gbps units * user rate limit. For an unlimited user rate limit (0 Gbps): use 1 Gbps units * MAX_RATELIMIT_DEFAULT (57). Note: any value > 58 would break the FW rate-limit computation, so the value is capped and anything higher is pulled down to 57. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com> Signed-off-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
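A minimal sketch of the unit-selection logic described above, in plain C. The encoding constants and helper name are hypothetical illustrations, not the actual mlx4 driver code:

	#include <stdint.h>

	#define UNITS_100_MBPS        3   /* hypothetical firmware encoding */
	#define UNITS_1_GBPS          4   /* hypothetical firmware encoding */
	#define MAX_RATELIMIT_DEFAULT 57  /* cap named in the commit message */

	/* Pick rate-limit units and value from a user rate given in Mbps;
	 * rate_mbps == 0 means "unlimited".
	 */
	static void select_ratelimit(unsigned int rate_mbps,
				     uint8_t *units, uint16_t *value)
	{
		if (rate_mbps == 0) {
			/* unlimited: 1 Gbps units * 57 */
			*units = UNITS_1_GBPS;
			*value = MAX_RATELIMIT_DEFAULT;
		} else if (rate_mbps <= 25000) {
			/* <= 25 Gbps: 100 Mbps units, i.e. user Gbps * 10 */
			*units = UNITS_100_MBPS;
			*value = rate_mbps / 100;
		} else {
			/* > 25 Gbps: 1 Gbps units, capped at 57 */
			*units = UNITS_1_GBPS;
			*value = rate_mbps / 1000;
			if (*value > MAX_RATELIMIT_DEFAULT)
				*value = MAX_RATELIMIT_DEFAULT;
		}
	}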
-
Submitted by David S. Miller
Linus Lüssing says: ==================== bridge: multicast snooping exports #2 Some people pointed out to me that it might be helpful to add stubs for the newly added multicast exports. That way e.g. batman-adv should continue to compile and remain usable without having to have a kernel compiled with bridge code in the future. This is what the first patch is supposed to do. The second patch adds a third multicast export for the bridge which e.g. batman-adv is supposed to use, too, soon: Just like the bridge disables its multicast snooping activities if no querier is present, batman-adv needs to do the same if bridges are involved. These three exports should be the final ones needed to marry the bridge multicast snooping with the batman-adv multicast optimizations recently added for the 3.15 kernel, allowing these optimizations to be used in common setups that have a bridge on top of e.g. bat0, too. So far these bridged setups would fall back to simple flooding through the batman-adv mesh network for any multicast packet entering bat0. More information about the batman-adv multicast optimizations currently implemented can be found here: http://www.open-mesh.org/projects/batman-adv/wiki/Basic-multicast-optimizations The integration on the batman-adv side could afterwards look like this, for instance (now including the third export): http://git.open-mesh.org/batman-adv.git/commitdiff/61e4f6af4b7a21ed4040f2e711d50c778e5b6d93?hp=6ae4281474675fbca5bedcf768972a32db586eb6 ====================
-
Submitted by Linus Lüssing
With this patch other modules are able to ask the bridge whether an IGMP or MLD querier exists on the corresponding bridged link layer. Multicast snooping can only be performed if a valid, selected querier exists on a link. Just like the bridge only enables its multicast snooping if a querier exists, e.g. batman-adv too can only activate its multicast snooping in bridged scenarios if a querier is present. For instance, this export avoids having to reimplement IGMP/MLD querier message snooping and parsing in e.g. batman-adv when multicast optimizations for bridged scenarios are added in the future. Signed-off-by: Linus Lüssing <linus.luessing@web.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Linus Lüssing
This makes users (e.g. batman-adv soon) loadable and runnable even if the bridge was compiled without snooping capabilities - or even if the kernel was compiled without any bridge code at all. Signed-off-by: Linus Lüssing <linus.luessing@web.de> Signed-off-by: David S. Miller <davem@davemloft.net>
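A minimal sketch of the stub pattern this describes, assuming a hypothetical export name (br_mcast_querier_exists) rather than the actual bridge symbols: callers always see a definition, and without bridge snooping support the inline stub simply reports that no querier exists.

	#include <linux/kconfig.h>
	#include <linux/netdevice.h>
	#include <linux/types.h>

	#if IS_ENABLED(CONFIG_BRIDGE) && IS_ENABLED(CONFIG_BRIDGE_IGMP_SNOOPING)
	/* real export provided by the bridge code */
	bool br_mcast_querier_exists(struct net_device *br_dev);
	#else
	/* stub: no bridge (or no snooping), behave as if no querier is present */
	static inline bool br_mcast_querier_exists(struct net_device *br_dev)
	{
		return false;
	}
	#endif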
-
Submitted by Erik Hugne
This fixes a regression introduced by commit 067608e9 ("tipc: introduce direct iovec to buffer chain fragmentation function"). If data is sent on a nonblocking socket and the destination link is congested, the buffer chain is leaked. We fix this by freeing the chain in this case. Signed-off-by: Erik Hugne <erik.hugne@ericsson.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Maciej W. Rozycki
This fixes issues with debug printk calls across the driver (normally disabled); first compilation errors: drivers/net/fddi/defxx.c:676:1: error: pasting "(" and ""In dfx_bus_init...\n"" does not give a valid preprocessing token drivers/net/fddi/defxx.c:820:1: error: pasting "(" and ""In dfx_bus_uninit...\n"" does not give a valid preprocessing token and so on, and then warnings: drivers/net/fddi/defxx.c: In function 'dfx_driver_init': drivers/net/fddi/defxx.c:1132: warning: format '%0X' expects type 'unsigned int', but argument 4 has type 'dma_addr_t' drivers/net/fddi/defxx.c:1132: warning: format '%0X' expects type 'unsigned int', but argument 4 has type 'dma_addr_t' etc. Additionally, casts are removed from virtual addresses and %p is used. Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller
Maciej W. Rozycki says: ==================== defxx: Fixes for 64-bit host support This mini patch series addresses issues with 64-bit host support for FDDI interface boards supported by the defxx driver where DMA mapping synchronisation is required on swiotlb systems. While PDQ, the DMA engine chip used with these boards, supports 48-bit addressing that would normally suffice for typical 64-bit systems in existence, the host bus interface chips used by individual implementations have their limitations as follows: * DEFTA or DEC FDDIcontroller/TURBOchannel -- there's no host bus interface chip, the PDQ connects to TURBOchannel directly; TURBOchannel supports DMA addressing of up to 16GB (34-bit addressing), however no TURBOchannel system has ever been made that supports more than 1GB of RAM, so in reality no remapping is ever required, * DEFEA or DEC FDDIcontroller/EISA -- the ESIC EISA interface chip only supports 32-bit addressing, all accesses beyond 4GB have to be remapped, * DEFPA or DEC FDDIcontroller/PCI -- the PFI PCI interface chip rev. 1 & 2 only support 32-bit addressing, they have 32 AD lines only both on the PDQ and the PCI side, and consequently no Dual Address Cycle support, so all accesses beyond 4GB have to be remapped; the range of addressing supported by PFI rev. 3 is currently not certain, however the chip is backwards compatible with earlier revisions and will work with code that supports them. Some other issues discovered in the course of correcting 64-bit support have been fixed as well. Each of the patches is functionally self-contained and can be applied independently, although there may be mechanical dependencies making it necessary to apply patches in order. The driver suffers from non-standard formatting and while I did my best with these bug fixes to follow our coding style, I found some pieces hopeless, checkpatch.pl will complain. I plan to reformat the whole driver, that will inevitably require factoring out some pieces into separate functions, but that's going to be a major effort and therefore I want to do this separately, with no functional changes made at the same time. If anyone has specific suggestions as to how to reformat any of the pieces submitted here for a better layout, then I'll be happy to take them into account. And last but not least many thanks to Robert Coerver, who was the most recent person to report this problem with the driver and was kind enough to patiently try a few revisions of the driver update on his system as I was finding and addressing issues. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Maciej W. Rozycki
This adds DMA synchronisation calls needed in the receive path: 1. To retrieve the Receive Status word that is prepended by the PDQ DMA engine in the receive buffer and provides information about the frame received, including its size and any errors. 2. To make received data available for copying in the small-frame case (size <= SKBUFF_RX_COPYBREAK), where the original DMA buffer will be returned to the receive descriptor ring and therefore its mapping retained. With DMA mapping error handling in place, added by the other patch, this may now also trigger where an attempt to map a newly allocated buffer for DMA has failed. In that case data from the original buffer will be copied out and the buffer returned to the DMA descriptor ring. These calls may do nothing when data is in the host DMA addressing range of the FDDI interface, as is always the case on 32-bit systems, however their absence makes frame reception stop functioning reliably on systems that have memory beyond the low 4GB of the address space. Reported-by: Robert Coerver <Robert.Coerver@ll.mit.edu> Tested-by: Robert Coerver <Robert.Coerver@ll.mit.edu> Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org> Signed-off-by: David S. Miller <davem@davemloft.net>
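A minimal sketch of the first synchronisation point, using hypothetical names rather than the actual defxx structures: the CPU must sync the mapped buffer before parsing the PDQ-written Receive Status word.

	#include <linux/dma-mapping.h>
	#include <linux/types.h>

	/* Sync the start of a mapped receive buffer so the CPU sees the
	 * Receive Status word the PDQ DMA engine prepended to the frame.
	 */
	static u32 read_rcv_status(struct device *dev, dma_addr_t map,
				   const void *rx_buff)
	{
		dma_sync_single_for_cpu(dev, map, sizeof(u32), DMA_FROM_DEVICE);
		return *(const u32 *)rx_buff;
	}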
-
Submitted by Maciej W. Rozycki
This adds error handling for DMA mapping requests; I think there isn't much else to say about it. A good side effect is that the mapping in the transmit path is now made with the board lock released. Also, if DMA mapping fails for a newly allocated receive buffer, then data from the old buffer will be copied out (as is presently done only for small frames whose size does not exceed SKBUFF_RX_COPYBREAK) and the original buffer returned, with its mapping unchanged, to the DMA descriptor ring. Reported-by: Robert Coerver <Robert.Coerver@ll.mit.edu> Tested-by: Robert Coerver <Robert.Coerver@ll.mit.edu> Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org> Signed-off-by: David S. Miller <davem@davemloft.net>
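A small sketch of the error-handling pattern being added, under assumed names; on mapping failure the caller keeps the old, still-mapped buffer and copies the received data out instead.

	#include <linux/dma-mapping.h>
	#include <linux/skbuff.h>
	#include <linux/types.h>

	/* Map a newly allocated receive skb; on failure free it and return
	 * false so the caller can fall back to reusing the old buffer and
	 * its existing mapping.
	 */
	static bool map_new_rx_skb(struct device *dev, struct sk_buff *skb,
				   unsigned int len, dma_addr_t *map)
	{
		*map = dma_map_single(dev, skb->data, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(dev, *map)) {
			dev_kfree_skb_any(skb);
			return false;
		}
		return true;
	}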
-
Submitted by Maciej W. Rozycki
Switch the two remaining places across the driver that use dev_alloc_skb to netdev_alloc_skb. Another place has already been converted to use __netdev_alloc_skb; no idea why these two have been left behind. Reported-by: Robert Coerver <Robert.Coerver@ll.mit.edu> Tested-by: Robert Coerver <Robert.Coerver@ll.mit.edu> Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org> Signed-off-by: David S. Miller <davem@davemloft.net>
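The shape of the conversion, as an illustrative before/after rather than the actual defxx call sites:

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	static struct sk_buff *alloc_rx_skb(struct net_device *dev,
					    unsigned int len)
	{
		/* before: struct sk_buff *skb = dev_alloc_skb(len); */
		struct sk_buff *skb = netdev_alloc_skb(dev, len);

		return skb;	/* netdev_alloc_skb associates the skb with dev */
	}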
-
Submitted by Maciej W. Rozycki
Prearranged receive DMA bounce buffer mappings are not released in the card reboot/shutdown path. That does not affect frame reception, but probably explains the random segmentation fault I observed the other day on interface shutdown. The card is rebooted, as required by the spec, in the process of ring fault recovery when a PC Trace signal has been received. This change fixes the problem in an obvious manner. Reported-by: Robert Coerver <Robert.Coerver@ll.mit.edu> Tested-by: Robert Coerver <Robert.Coerver@ll.mit.edu> Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Maciej W. Rozycki
Receive DMA maps are oversized: they include the EISA legacy 128-byte alignment padding in the size calculation, whereas this padding is never used for data. Worse yet, if the skb's data area has indeed been realigned, then data beyond the end of the buffer will be synchronised from the receive DMA bounce buffer, possibly corrupting data structures residing in memory beyond the actual end of this data buffer. Therefore switch to using PI_RCV_DATA_K_SIZE_MAX rather than NEW_SKB_SIZE in DMA mapping; the value the former macro expands to is written to the receive ring DMA descriptor of the PDQ DMA chip and determines the maximum amount of data the PDQ will ever transfer to the corresponding data buffer, including all headers and padding. Reported-by: Robert Coerver <Robert.Coerver@ll.mit.edu> Tested-by: Robert Coerver <Robert.Coerver@ll.mit.edu> Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller
David Laight says: ==================== net: sctp: Optimisations to sctp command queue code These 3 patches optimise the code that processes sctp's command queue (a list of 'tasks' to be performed after the rest of the chunk processing). 1) Inline all the functions from command.c. 2) Remove the memset() calls used to zero a word-sized union. 3) Use pointers instead of array indexes. The combined changes reduce the code size (amd64) by a few kB. I'm not 100% convinced that the zeroing done in patch 2 is needed at all. On BE systems it is likely to generate more code than on LE ones. In fact it might be best to change the union to only contain 'long' sized items. Changes for v2: - Add some missing initialisers in patch 2/3 and delete them in 3/3. - Modify the commit message for 2/3 to point out that the union shouldn't need to be zeroed, but the patches aren't intended to change the behaviour even if the code is buggy. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David Laight
Using pointers into sctp_cmd_seq_t.cmds[] lets the compiler generate much better code. Use the last entry first to optimise the overflow check. Signed-off-by: David Laight <david.laight@aculab.com> Signed-off-by: David S. Miller <davem@davemloft.net>
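An illustrative sketch of the general idea (filling a queue through a moving pointer instead of array indexing), using made-up types rather than the actual SCTP structures:

	#include <stdbool.h>

	#define CMD_SEQ_MAX 16

	struct cmd { int verb; long obj; };

	struct cmd_seq {
		struct cmd cmds[CMD_SEQ_MAX];
		struct cmd *next_free;	/* advances as commands are queued */
	};

	/* Append a command by writing through the pointer; the compiler no
	 * longer has to recompute &cmds[index] on every call.
	 */
	static bool cmd_seq_add(struct cmd_seq *seq, int verb, long obj)
	{
		if (seq->next_free == &seq->cmds[CMD_SEQ_MAX])
			return false;	/* queue full */
		seq->next_free->verb = verb;
		seq->next_free->obj = obj;
		seq->next_free++;
		return true;
	}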
-
Submitted by David Laight
Even if memset() is inlined (as on x86), using it to zero the union generates a memory word write of zero, followed by a write of the smaller field, and then a read of the word. As well as being a lot of instructions, the sequence is unlikely to be optimised by the store-to-load forwarding hardware, so it will be slow. Instead, allocate a field of the union that is the same size as the entire union and write a zero value to it. The compiler will then generate the required value in a register. Zeroing the union shouldn't be necessary, but this patch series isn't intended to have a behavioural change. Signed-off-by: David Laight <david.laight@aculab.com> Signed-off-by: David S. Miller <davem@davemloft.net>
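A small sketch of the two approaches, with hypothetical member names (not the actual SCTP union):

	#include <stdint.h>
	#include <string.h>

	union arg {
		uint64_t zero_all;	/* same size as the whole union */
		uint32_t u32;
		uint16_t u16;
		void *ptr;
	};

	/* old style: memset() touches memory, then the field write follows */
	static union arg make_u16_memset(uint16_t v)
	{
		union arg a;

		memset(&a, 0, sizeof(a));
		a.u16 = v;
		return a;
	}

	/* new style: zero the full-size member, letting the compiler build
	 * the value in a register instead of bouncing it through memory
	 */
	static union arg make_u16_register(uint16_t v)
	{
		union arg a;

		a.zero_all = 0;
		a.u16 = v;
		return a;
	}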
-
Submitted by David Laight
sctp_init_cmd_seq() and sctp_next_cmd() are only called from one place. The call sequence for sctp_add_cmd_sf() is likely to be longer than the inlined code. With sctp_add_cmd_sf() inlined, the compiler can optimise repeated calls. Signed-off-by: David Laight <david.laight@aculab.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by wangweidong
This warning was introduced by commit 7b30600c ("appletalk: fix checkpatch error with indent"), so fix it. Signed-off-by: Wang Weidong <wangweidong1@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller (pulled from git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next)
John W. Linville says: ==================== pull request: wireless-next 2014-07-03 Please pull this first batch of wireless updates intended for the 3.17 stream... For the mac80211 bits, Johannes says: "The biggest thing here is probably Arik's TDLS rework; beyond that we have smaller improvements and features like David's scanning IE thing, Luca's queue work, some CSA work, etc. Also your PID rate control removal, of course." For the iwlwifi bits, Emmanuel says: "I have here a whole bunch of various things. Andy contributes better debug prints for dvm-specific flows and a module parameter to completely disable power save for dvm. Andrei is sharing the premises of his work on CSA - more to come. Eran and Liad keep on working on the new devices. I have the regular amount of BT Coex stuff, and I continue to work on the firmware error report system, adding more debug capabilities. More to come on that subject too." On top of that, there are some cleanups to the new rsi driver, some continuing improvements to the rtl818x drivers, and the usual bundles of updates to ath9k, b43, mwifiex, wil6210, and a few other bits here and there. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Zi Shen Lim
load_pointer() is already a static inline function. Let's move it into filter.h so BPF JIT implementations can reuse this function. Since we're exporting this function, let's also rename it to bpf_load_pointer() for clarity. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Reviewed-by: Daniel Borkmann <dborkman@redhat.com> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Maciej W. Rozycki
This fixes compiler warnings: drivers/net/ethernet/amd/declance.c: In function 'lance_init_ring': drivers/net/ethernet/amd/declance.c:478: warning: format '%8.8x' expects type 'unsigned int', but argument 3 has type 'long unsigned int' drivers/net/ethernet/amd/declance.c:487: warning: format '%8.8x' expects type 'unsigned int', but argument 3 has type 'long unsigned int' drivers/net/ethernet/amd/declance.c:503: warning: cast from pointer to integer of different size drivers/net/ethernet/amd/declance.c:520: warning: cast from pointer to integer of different size in 64-bit compilation. Where the value printed is an offset (whose range will always fit), the cast uses a 32-bit type; otherwise, where it is a host memory address, the pointer is output directly with %p. Also, the remaining `0x' prefix is dropped for consistency across these messages. Tested with both 32-bit and 64-bit compilation, as well as at run time (with the affected debug messages enabled). Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org> Signed-off-by: David S. Miller <davem@davemloft.net>
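The approach can be illustrated roughly as follows (a hypothetical debug statement, not the actual declance code): offsets go through an explicit 32-bit cast, while host addresses are printed with %p.

	#include <linux/kernel.h>
	#include <linux/types.h>

	static void debug_ring_entry(const char *name, const void *vaddr,
				     unsigned long offset)
	{
		/* the offset range always fits in 32 bits, so the cast is
		 * safe; the host address is printed with %p instead of an
		 * integer cast
		 */
		printk(KERN_DEBUG "%s: offset %8.8x, vaddr %p\n",
		       name, (unsigned int)offset, vaddr);
	}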
-
Submitted by David S. Miller
Arvid Brodin says: ==================== net/hsr: Use list_head+rcu, better frame dispatch, etc. This patch series is meant to improve the HSR code in several ways: * Better code readability. * In general, make the code structure more like the net/bridge code (HSR operates similarly to a bridge, but uses the HSR-specific frame headers to break up rings, instead of the STP protocol). * Better handling of HSR ports' net_device features. * Use list_head and the _rcu list traversing routines instead of an array of slave devices. * Make it easy to support HSR Interlink devices (for future RedBox/QuadBox support). * Somewhat better throughput on non-HAVE_EFFICIENT_UNALIGNED_ACCESS archs, due to less copying of skb data. The code has been tested in a ring together with other HSR nodes running unchanged code, on both avr32 and x86_64. There should only be one minor change in behaviour from a user perspective: * Anyone using the Netlink HSR_C_GET_NODE_LIST message to dump the internal node database will notice that the database now also contains the self node. All patches pass 'checkpatch.pl --ignore CAMELCASE --max-line-length=83 --strict' with only CHECKs, each of which has been deliberately left in place. The final code passes sparse checks with no output. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Arvid Brodin
If none of the slave interfaces are specified, struct nlattr *data[] may be NULL. Make sure to check for that. While I'm at it, fix the horrible error messages displayed when only one of the slave interfaces isn't specified. Signed-off-by: Arvid Brodin <arvid.brodin@alten.se> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Arvid Brodin
This patch removes the separate paths for frames coming from the outside and frames sent from the HSR device, and instead makes all frames go through hsr_forward_skb() in hsr_forward.c. This greatly improves code readability and also opens up the possibility of future support for the HSR Interlink device that is the basis for HSR RedBoxes and HSR QuadBoxes, as well as VLAN compatibility. Other improvements: * A reduction in the number of times an skb is copied on machines without HAVE_EFFICIENT_UNALIGNED_ACCESS, which improves throughput somewhat. * Headers are now created using the standard eth_header() and the standard hard_header_len. * Each HSR slave now gets its own private skb, so slave-specific fields can be set correctly. Signed-off-by: Arvid Brodin <arvid.brodin@alten.se> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Arvid Brodin
Signed-off-by: Arvid Brodin <arvid.brodin@alten.se> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Arvid Brodin
Signed-off-by: Arvid Brodin <arvid.brodin@alten.se> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Arvid Brodin
Signed-off-by: Arvid Brodin <arvid.brodin@alten.se> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Arvid Brodin
Also try to prevent some possible slave dereference race conditions. This is finalized in the next patch, which abandons the slave array in favour of a list_head list and list RCU. Signed-off-by: Arvid Brodin <arvid.brodin@alten.se> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Arvid Brodin
Signed-off-by: Arvid Brodin <arvid.brodin@alten.se> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Arvid Brodin
Signed-off-by: Arvid Brodin <arvid.brodin@alten.se> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Arvid Brodin
Also move the frame receive handler to hsr_slave.c. Signed-off-by: Arvid Brodin <arvid.brodin@alten.se> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Arvid Brodin
Signed-off-by: Arvid Brodin <arvid.brodin@alten.se> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by hayeswang
When the system is too busy to complete the urb, the tx timeout function would be called. This causes the other tx urbs to be killed, too. Increase the tx timeout to avoid this. Signed-off-by: Hayes Wang <hayeswang@realtek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Fabian Frederick
ic_dev_xid is only used in ipconfig.c. Cc: "David S. Miller" <davem@davemloft.net> Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru> Cc: netdev@vger.kernel.org Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 08 July 2014, 4 commits
-
-
Submitted by David S. Miller
Tom Lendacky says: ==================== amd-xgbe: AMD 10Gb Ethernet driver updates The following series fixes some bugs and provides new/changed support in the driver. - Fix a debugfs backward compatibility issue introduced by a previous patch - Write to the interrupt enablement register, not the status register, when setting MTL interrupts - Call netif_napi_del whenever the ndo_stop operation is called (to match the call to netif_napi_add on ndo_open) - Performance enhancements: - Adjusted default coalescing settings - AXI DMA changes (burst length size and cache settings) - ioread/iowrite reduction during interrupt - Napi poll updates - AXI DMA settings based on a device tree property, to account for a change in the ARM64 default cache operations assignment This patch series is based on net-next. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Lendacky, Thomas
The default cache operations for ARM64 were changed during 3.15. To use coherent operations a "dma-coherent" device tree property is required. If that property is not present in the device tree node, then the non-coherent operations are assigned for the device. Add support to the amd-xgbe driver to assign the AXI DMA cache settings based on whether the "dma-coherent" property is present in the device node. If present, use settings that work with the caches. If not present, use settings that do not look at the caches. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
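A minimal sketch of the device-tree check described here; the two cache-setting values are placeholders, not the actual amd-xgbe AXI attribute encodings.

	#include <linux/of.h>
	#include <linux/types.h>

	static u32 pick_axi_cache_setting(struct device_node *np)
	{
		/* "dma-coherent" present: settings that cooperate with the
		 * caches; absent: settings that bypass the caches
		 * (both values here are placeholders)
		 */
		return of_property_read_bool(np, "dma-coherent") ? 0xb : 0x1;
	}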
-
Submitted by Lendacky, Thomas
This patch provides some general performance enhancements for the driver: - Modify the default coalescing settings (reduce usecs, increase frames) - Change the AXI burst length to 256 bytes (the default of 16 bytes was smaller than a cache line) - Change the AXI cache settings to write-back/write-allocate, which allocates cache entries for received packets during the DMA, since the packet will be processed soon afterwards - Combine ioread/iowrite when disabling both the Tx and Rx interrupts - Change to processing the Tx/Rx channels in pairs - Only recycle the Rx descriptors when a threshold of dirty descriptors is reached Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Lendacky, Thomas
Currently the napi context is added using netif_napi_add each time the ndo_open operation is called. However, there is no corresponding netif_napi_del call during the ndo_stop operation. If the device's ndo_open operation is called more than once, an infinite loop occurs during module unload. Add a call to netif_napi_del during the ndo_stop operation. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
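A sketch of the balanced add/del pairing this describes, using hypothetical structure and callback names (the xgbe driver's real open/stop paths do more work):

	#include <linux/netdevice.h>

	struct example_pdata {
		struct net_device *netdev;
		struct napi_struct napi;
	};

	static int example_poll(struct napi_struct *napi, int budget)
	{
		napi_complete(napi);
		return 0;
	}

	static int example_open(struct example_pdata *pdata)
	{
		netif_napi_add(pdata->netdev, &pdata->napi, example_poll, 64);
		napi_enable(&pdata->napi);
		return 0;
	}

	static int example_stop(struct example_pdata *pdata)
	{
		napi_disable(&pdata->napi);
		netif_napi_del(&pdata->napi);	/* balance the add in example_open */
		return 0;
	}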
-