- 16 Feb 2013, 20 commits
-
-
Submitted by Alexander Duyck
This patch adds support for the ethtool get_channels operation. Since the ixgbe driver has to support DCB as well as the other modes, the assumption I made here is that the number of channels in DCB modes refers to the number of queues per traffic class, not the total number of queues. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Reviewed-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
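For reference, a minimal sketch of what an ethtool get_channels callback reports; the adapter fields and helper names below are placeholders, not the actual ixgbe structures:

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    /* Hypothetical private state; the real ixgbe adapter struct differs. */
    struct example_adapter {
        unsigned int num_queues;    /* queues currently in use */
        unsigned int max_queues;    /* hardware/driver limit */
    };

    /* Report Rx/Tx pairs as "combined" channels plus one vector for link events. */
    static void example_get_channels(struct net_device *dev,
                                     struct ethtool_channels *ch)
    {
        struct example_adapter *adapter = netdev_priv(dev);

        ch->max_combined = adapter->max_queues;
        ch->combined_count = adapter->num_queues;
        ch->max_other = 1;          /* MSI-X vector reserved for "other" causes */
        ch->other_count = 1;
    }

    static const struct ethtool_ops example_ethtool_ops = {
        .get_channels = example_get_channels,
    };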
-
Submitted by Alexander Duyck
The ixgbe_setup_tc code is essentially the same code we need any time we have to update the number of queues. As such I am making it always available and just stripping out the DCB-specific bits when DCB is disabled, instead of stripping out the entire function. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Reviewed-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This change updates the ixgbe driver to use __netdev_pick_tx instead of the logic it currently uses to select a queue. The main result of this change is that ixgbe can now fully support XPS, and in the case of non-FCoE enabled configs it means we no longer need our own ndo_select_queue. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Reviewed-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
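A sketch of the resulting queue-selection split, assuming the two-argument ndo_select_queue/__netdev_pick_tx signatures of that era (current kernels differ); the FCoE helpers here are made-up placeholders:

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Placeholder FCoE classification/steering helpers (not real ixgbe code). */
    static bool example_is_fcoe(const struct sk_buff *skb) { return false; }
    static u16 example_fcoe_queue(struct net_device *dev) { return 0; }

    static u16 example_select_queue(struct net_device *dev, struct sk_buff *skb)
    {
        /* FCoE frames still get steered onto their dedicated queues ... */
        if (example_is_fcoe(skb))
            return example_fcoe_queue(dev);

        /* ... everything else defers to the stack, so XPS mappings apply */
        return __netdev_pick_tx(dev, skb);
    }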
-
Submitted by Alexander Duyck
This change adds support for ixgbe to configure the XPS queue mapping on load. The result of this change is that on open we now reset the number of Tx queues and then set the default configuration for XPS based on whether ATR is enabled or disabled. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Reviewed-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
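For orientation, a minimal sketch of programming a default XPS mapping from a driver at open time with netif_set_xps_queue(); the one-CPU-per-queue policy here is illustrative, not the ATR-dependent logic the patch implements:

    #include <linux/cpumask.h>
    #include <linux/netdevice.h>

    static void example_set_default_xps(struct net_device *dev)
    {
        int i;

        /* map each Tx queue to one online CPU, wrapping if queues > CPUs */
        for (i = 0; i < dev->real_num_tx_queues; i++)
            netif_set_xps_queue(dev, cpumask_of(i % num_online_cpus()), i);
    }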
-
Submitted by Alexander Duyck
Instead of adjusting the FCoE and Flow Director limits based on the number of CPUs, we can define them much sooner. This allows the user to come through later and adjust them once we have updated the code to support the set_channels ethtool operation. I am still allowing FCoE and RSS queues to be separated if the number of queues is less than the number of CPUs. This essentially treats the two groupings like two separate traffic classes. In addition, I am changing the initialization to use the MAX_TX/RX_QUEUES defines instead of trying to compute the value, as it will be possible in upcoming patches for the user to request the maximum number of queues. I have also updated things so that the upper limit on queues is exactly 63 instead of allowing it to go up to 64. The reason for this change is that the driver only supports up to 63 queue vectors: the hardware supports 64 MSI-X vectors, but one must be reserved for "other" causes. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
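The 63-versus-64 arithmetic, expressed as illustrative defines (the names are not the actual ixgbe macros):

    /* 64 MSI-X vectors in hardware, one reserved for non-queue ("other")
     * interrupts such as link changes, leaving at most 63 queue vectors. */
    #define EXAMPLE_MAX_MSIX_VECTORS    64
    #define EXAMPLE_NON_QUEUE_VECTORS   1
    #define EXAMPLE_MAX_QUEUE_VECTORS \
            (EXAMPLE_MAX_MSIX_VECTORS - EXAMPLE_NON_QUEUE_VECTORS)  /* = 63 */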
-
Submitted by Stefan Assmann
On several machines with i350 adapters the ethtool offline self-test sometimes fails. This happens because link auto negotiation may take longer than the timeout of 4 seconds. Increasing the timeout by 1 second resolves the issue. Output from a failing i350 offline self-test:

    while [ 1 ]; do ethtool -t eth2 offline; done

    The test result is PASS
    The test extra info:
    Register test  (offline)         0
    Eeprom test    (offline)         0
    Interrupt test (offline)         0
    Loopback test  (offline)         0
    Link test   (on/offline)         0

    The test result is FAIL
    The test extra info:
    Register test  (offline)         0
    Eeprom test    (offline)         0
    Interrupt test (offline)         0
    Loopback test  (offline)         0
    Link test   (on/offline)         1

    The test result is PASS
    The test extra info:
    Register test  (offline)         0
    Eeprom test    (offline)         0
    Interrupt test (offline)         0
    Loopback test  (offline)         0
    Link test   (on/offline)         0

Signed-off-by: Stefan Assmann <sassmann@kpanic.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
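A sketch of the kind of bounded link poll such a self-test performs, with the budget bumped from 4 to 5 seconds; the constant and callback are placeholders, not the actual igb symbols:

    #include <linux/delay.h>
    #include <linux/types.h>

    #define EXAMPLE_LINK_TEST_TIMEOUT_SEC   5   /* was 4: too short for autoneg */

    /* Returns 0 (pass) if link comes up before the timeout, 1 (fail) otherwise. */
    static int example_link_test(bool (*link_is_up)(void))
    {
        int i;

        for (i = 0; i < EXAMPLE_LINK_TEST_TIMEOUT_SEC * 10; i++) {
            if (link_is_up())
                return 0;
            msleep(100);    /* poll every 100 ms */
        }
        return 1;           /* reported as "Link test (on/offline) 1" */
    }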
-
Submitted by Alexander Duyck
This change is meant to address several races that become possible because next_to_watch could be set to a value indicating that the descriptor is done when it is not. To correct that, we instead make next_to_watch a pointer that is set to NULL during cleanup and set to the eop_desc only after the descriptor ring entries have been written. To enforce proper ordering, the next_to_watch pointer is not set until after a wmb() that follows writing the values to the last descriptor of a transmit. To guarantee that the descriptor is not read before the next_to_watch pointer it was loaded through, we use read_barrier_depends(), which is only really necessary on the Alpha architecture. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Greg Rose <gregory.v.rose@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
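A sketch of the publish/consume ordering described above, using stand-in types rather than the real igbvf descriptor structures (read_barrier_depends() is the era-appropriate primitive; newer kernels spell this differently):

    #include <asm/barrier.h>
    #include <asm/byteorder.h>
    #include <linux/types.h>

    union example_tx_desc {
        struct { __le32 status; } wb;       /* writeback layout, simplified */
    };
    #define EXAMPLE_TXD_STAT_DD 0x00000001  /* descriptor-done bit */

    struct example_tx_buffer {
        union example_tx_desc *next_to_watch;   /* NULLed during cleanup */
    };

    /* Transmit side: publish next_to_watch only after the descriptors are written. */
    static void example_tx_publish(struct example_tx_buffer *buf,
                                   union example_tx_desc *eop_desc)
    {
        wmb();                          /* descriptor writes become visible first */
        buf->next_to_watch = eop_desc;
    }

    /* Cleanup side: NULL means "nothing to watch"; otherwise check the DD bit. */
    static bool example_tx_done(struct example_tx_buffer *buf)
    {
        union example_tx_desc *eop_desc = buf->next_to_watch;

        if (!eop_desc)
            return false;
        read_barrier_depends();         /* order the dereference after the load (Alpha) */
        return le32_to_cpu(eop_desc->wb.status) & EXAMPLE_TXD_STAT_DD;
    }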
-
Submitted by Koki Sanagi
The current e1000e driver does not report anything when the link speed is downgraded due to SmartSpeed. As a result, users suspect that there is something wrong with the NIC, even though, if the cause is SmartSpeed, there is no need to replace the NIC. This patch makes e1000e notify users that SmartSpeed kicked in. Signed-off-by: Koki Sanagi <sanagi.koki@jp.fujitsu.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jeff Kirsher
Fixes whitespace issues, such as lines exceeding 80 chars, needless blank lines and the use of spaces where tabs are needed. In addition, fix multi-line comments to align with the networking standard. Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com>
-
Submitted by Eric Dumazet
We no longer need to use mac_len, so let's clean things up. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Lars-Peter Clausen
There is no need to implement empty suspend/resume callbacks if there is nothing to do during suspend/resume. The drivers will behave the same with no callbacks or empty callbacks during suspend/resume. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Merge of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next, submitted by David S. Miller
Jeff Kirsher says: ==================== This series contains updates to igb and ixgbe. Most of the changes are against igb, except for one patch against ixgbe. There are 3 igb fixes from Carolyn which resolve issues found in get_i2c_client() that were reported by Dan Carpenter. Alex does some cleanup of the igb driver to match similar functionality in ixgbe on transmit. Alex also makes it so that we can enable the use of build_skb for cases where jumbo frames are disabled; the advantage is that we do not have to perform a memcpy to populate the header, and as a result we see a significant performance improvement. Akeem provides 4 patches to initialize function pointers and re-factor the function pointers in igb_get_invariants() to assist with driver debugging. The ixgbe patch comes from Emil and reshuffles the switch/case structure of the flag assignment to allow the flags to be set for each MAC type separately; this is needed for new hardware that does not have feature parity with older hardware. v2: updated patches 4 & 5 based on feedback from Ben Hutchings and Eric Dumazet ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Pravin B Shelar
The following patch adds a GRE protocol offload handler so that skb_gso_segment() can segment GRE packets. An SKB GSO CB is added to keep track of the total header length so that skb_segment() can push the entire header; e.g. in the case of GRE, skb_segment() needs to push the inner and outer headers onto every segment. A new NETIF_F_GRE_GSO feature is added for devices which support hardware GRE TSO offload. Currently no devices support it, therefore GRE GSO always falls back to software GSO. [ Compute pkt_len before ip_local_out() invocation. -DaveM ] Signed-off-by: Pravin B Shelar <pshelar@nicira.com> Signed-off-by: David S. Miller <davem@davemloft.net>
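For orientation, a rough sketch of how a protocol GSO handler gets hooked into the IPv4 offload table of that era; the handler body is a stub (the real patch implements the segmentation and tracks the outer header length via the SKB GSO CB), and the exact struct members have shifted across kernel versions:

    #include <linux/err.h>
    #include <linux/errno.h>
    #include <linux/in.h>
    #include <linux/init.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <net/protocol.h>

    /* Stub standing in for the real GRE segmentation routine. */
    static struct sk_buff *example_gre_gso_segment(struct sk_buff *skb,
                                                   netdev_features_t features)
    {
        /* real code: pull the GRE header, segment the inner packet, then
         * replay the outer headers onto every resulting segment */
        return ERR_PTR(-EOPNOTSUPP);
    }

    static const struct net_offload example_gre_offload = {
        .callbacks = {
            .gso_segment = example_gre_gso_segment,
        },
    };

    static int __init example_gre_offload_init(void)
    {
        return inet_add_offload(&example_gre_offload, IPPROTO_GRE);
    }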
-
Submitted by Pravin B Shelar
This function will be used in the next GRE_GSO patch. This patch does not change any functionality; it only exports the skb_mac_gso_segment() function. [ Use skb_reset_mac_len() -DaveM ] Signed-off-by: Pravin B Shelar <pshelar@nicira.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Pravin B Shelar
This function will be used in the next GRE_GSO patch. This patch does not change any functionality. Signed-off-by: Pravin B Shelar <pshelar@nicira.com> Acked-by: Eric Dumazet <edumazet@google.com>
-
Submitted by Michael Chan
Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michael Chan
Before the device is opened, the carrier state should be off. It will not race with the link interrupt if we set it before calling register_netdev(). Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
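The ordering in a probe path, as a minimal sketch:

    #include <linux/netdevice.h>

    static int example_register(struct net_device *dev)
    {
        /* carrier starts out off; doing this before register_netdev()
         * cannot race with the link interrupt, which is not yet enabled */
        netif_carrier_off(dev);

        return register_netdev(dev);
    }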
-
Submitted by Michael Chan
Don't set the default size to 128K if it is 5762. Instead, rely on the size we obtain from NVRAM location 0xf0. Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michael Chan
This chip supports Energy Efficient Ethernet. The existing code only supports a smaller set of devices with the 5718 PCI ID. Expand support for all devices with the same 5717 B0 chip ID. Signed-off-by: Michael Chan <mchan@broadocm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Matt Carlson
The patch also adds a couple of fixes:
- For the 57766 and non-Ax versions of 57765, bootcode needs to set up the PCIE Fast Training Sequence (FTS) value to prevent transmit hangs. Unfortunately, it does not have enough room in the selfboot case (i.e. devices with no NVRAM), so the driver needs to implement this.
- For performance reasons, the 2k DMA engine mode on the 57766 should be enabled and the DMA size limited to 2k for standard sized packets.
Signed-off-by: Nithin Nayak Sujir <nsujir@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 Feb 2013, 20 commits
-
-
Submitted by Emil Tantilov
This patch reshuffles the switch/case structure of the flag assignment to allow for the flags to be set for each MAC type separately. This is needed for new HW that does not have feature parity with older HW. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Tested-by: Jack Morgan <jack.morgan@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
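The shape of a per-MAC-type flag assignment, with made-up enum and flag names rather than the real ixgbe ones:

    #include <linux/bitops.h>
    #include <linux/types.h>

    enum example_mac_type { example_mac_a, example_mac_b };

    #define EXAMPLE_FLAG_FDIR_CAPABLE   BIT(0)
    #define EXAMPLE_FLAG_RSC_CAPABLE    BIT(1)

    struct example_adapter {
        enum example_mac_type mac_type;
        u32 flags;
    };

    /* Each MAC type picks exactly the features it supports instead of
     * inheriting (or falling through to) an older device's flag set. */
    static void example_set_flags(struct example_adapter *adapter)
    {
        switch (adapter->mac_type) {
        case example_mac_a:
            adapter->flags |= EXAMPLE_FLAG_FDIR_CAPABLE |
                              EXAMPLE_FLAG_RSC_CAPABLE;
            break;
        case example_mac_b:
            adapter->flags |= EXAMPLE_FLAG_FDIR_CAPABLE;
            break;
        }
    }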
-
Submitted by Akeem G. Abodunrin
This patch simplifies the igb_get_invariants function by moving all of the function pointers implemented in it into individual, separate functions based on their functionality; this makes debugging much easier. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Akeem G. Abodunrin
This patch initializes MAC function pointers for device configuration. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Akeem G. Abodunrin
This patch initializes NVM function pointers for device configuration. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Akeem G. Abodunrin
This patch initializes PHY function pointers for device configuration. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
After reviewing the igb and ixgbe code I realized there are a few issues in how the code is structured. Specifically, we are not checking the size of the buffers being used in transmits, and we are not using the same value to determine when to stop or start a Tx queue. As such the code is prone to bugs. This patch makes it so that we have one value, DESC_NEEDED, that we will use for starting and stopping the queue. In addition, we will check the size of the buffers being used when setting up a transmit so as to avoid a possible buffer overrun if we were to receive a frame with a block of data larger than 32K in skb->data. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
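A sketch of using a single threshold for both stopping and waking the queue; the ring layout, helper and exact descriptor count here are placeholders rather than the igb code:

    #include <linux/errno.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    #define EXAMPLE_DESC_NEEDED (MAX_SKB_FRAGS + 4)   /* worst-case descriptors per skb */

    struct example_ring {
        struct net_device *netdev;
        u16 queue_index;
        u16 count, next_to_use, next_to_clean;
    };

    static u16 example_desc_unused(const struct example_ring *r)
    {
        return ((r->next_to_clean > r->next_to_use) ? 0 : r->count) +
               r->next_to_clean - r->next_to_use - 1;
    }

    /* Stop the queue when fewer than EXAMPLE_DESC_NEEDED descriptors remain;
     * the cleanup path wakes it using the very same threshold. */
    static int example_maybe_stop_tx(struct example_ring *ring)
    {
        if (example_desc_unused(ring) >= EXAMPLE_DESC_NEEDED)
            return 0;

        netif_stop_subqueue(ring->netdev, ring->queue_index);

        smp_mb();   /* re-check: cleanup may have just freed descriptors */
        if (example_desc_unused(ring) < EXAMPLE_DESC_NEEDED)
            return -EBUSY;

        netif_start_subqueue(ring->netdev, ring->queue_index);
        return 0;
    }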
-
Submitted by Carolyn Wyborny
This patch correctly resolves the sparse warnings found with this function. Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Carolyn Wyborny
This patch fixes the allocation function in igb_get_i2c_client to use GFP_ATOMIC instead of GFP_KERNEL because we have a spinlock. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
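The general rule the fix applies, as a minimal sketch (the lock and struct are placeholders, not the igb I2C code):

    #include <linux/slab.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);

    struct example_client { int addr; };

    static struct example_client *example_alloc_client(int addr)
    {
        struct example_client *client;
        unsigned long flags;

        spin_lock_irqsave(&example_lock, flags);
        /* must not sleep with the spinlock held, so GFP_ATOMIC, not GFP_KERNEL */
        client = kzalloc(sizeof(*client), GFP_ATOMIC);
        if (client)
            client->addr = addr;
        spin_unlock_irqrestore(&example_lock, flags);

        return client;
    }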
-
Submitted by Carolyn Wyborny
This patch fixes an issue where we checked whether IRQs were disabled, and exited if so, after having explicitly disabled them with spin_lock_irqsave. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Aaron Brown <arron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This change makes it so that we can enable the use of build_skb for cases where jumbo frames are disabled. The advantage to this is that we do not have to perform a memcpy to populate the header and as a result we see a significant performance improvement. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by David S. Miller
Even for non-pfmemalloc SKBs, __netif_receive_skb() will do a tsk_restore_flags() on current unconditionally. Make __netif_receive_skb() a shim around the existing code, renamed to __netif_receive_skb_core(). Let __netif_receive_skb() wrap the __netif_receive_skb_core() call with the task flag modifications, if necessary. Signed-off-by: David S. Miller <davem@davemloft.net>
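A sketch of the resulting shim structure, assuming that era's helpers (tsk_restore_flags() was later renamed); the core function below is a local stand-in for __netif_receive_skb_core(), not the real one:

    #include <linux/netdevice.h>
    #include <linux/sched.h>
    #include <linux/skbuff.h>
    #include <net/sock.h>

    /* Stand-in for the core receive path; here it just drops the skb. */
    static int example_receive_core(struct sk_buff *skb, bool pfmemalloc)
    {
        kfree_skb(skb);
        return NET_RX_DROP;
    }

    /* Only pfmemalloc skbs pay for the task-flag save/restore. */
    static int example_netif_receive_skb(struct sk_buff *skb)
    {
        int ret;

        if (sk_memalloc_socks() && skb_pfmemalloc(skb)) {
            unsigned long pflags = current->flags;

            current->flags |= PF_MEMALLOC;  /* allow dipping into reserves */
            ret = example_receive_core(skb, true);
            tsk_restore_flags(current, pflags, PF_MEMALLOC);
        } else {
            ret = example_receive_core(skb, false);
        }

        return ret;
    }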
-
Submitted by Claudiu Manoil
This fixes a less obvious error on one hand, and prevents further similar errors by disambiguating and optimizing the RxFCB indication on the other hand. The error consists in the NETIF_F_HW_VLAN_TX flag being used as an indication of Rx FCB insertion. This happened as soon as gfar_uses_fcb(), which despite its name indicates Rx FCB insertion, started incorporating is_vlan_on(). is_vlan_on(), on the other hand, is also a misleading construct, because we need to differentiate between hw VLAN extraction/VLEX (marked by the VLAN_RX flag) and hw VLAN insertion/VLINS (VLAN_TX flag), which are different mechanisms using different types of FCBs. The hw spec for the RxFCB feature is as follows: in the case of RxBD rings, FCBs (Frame Control Block) are inserted by the eTSEC whenever RCTRL[PRSDEP] is set to a non-zero value. Only one FCB is inserted per frame (in the buffer pointed to by the RxBD with bit F set). TOE acceleration for receive is enabled for all rx frames in this case. This patch introduces the priv->uses_rxfcb field to quickly signal RxFCB insertion in accordance with the specification above. The dependency on FSL_GIANFAR_DEV_HAS_TIMER was also eliminated as another source of confusion; the actual dependency is on priv->hwts_rx_en. Upon changing priv->hwts_rx_en via IOCTL, the gfar device is restarted, and on init_mac() the priv->hwts_rx_en flag determines RxFCB insertion and rctrl is programmed accordingly. The patch takes care of this case too. Though maybe not as self-documenting as the inline version uses_fcb(), priv->uses_rxfcb has the main purpose of quickly signalling, on the hot path, that the incoming frame has an *Rx* FCB block inserted which needs to be pulled out before passing the skb to the stack. This is a performance-critical operation; it needs to happen fast, which is why uses_rxfcb is placed in the first cacheline of gfar_private. This is also why a cached rctrl cannot be used instead: 1) we don't have 32 bits available in the first cacheline of gfar_priv (but only 16); 2) bit operations are expensive on the hot path. Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Claudiu Manoil
The controller's ref manual states clearly that when the hw Rx vlan offload feature is enabled, meaning that the VLEX bit from RCTRL is correctly enabled, then the hw performs automatic VLAN tag extraction and deletion from the ethernet frames. So there's no point in trying to increase the rx buff size when rxvlan is on, as the frame is actually smaller. And the Tx vlan hw accel feature (VLINS) has nothing to do with rx buff size computation. Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Claudiu Manoil
No return code is expected from gfar_process_frame(), hence change it to return void. Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Claudiu Manoil
The change is significant since it affects the rx hot path. Paul observed and documented the effects at asm level, see below: "It turns out that it does make a difference, since gfar_process_frame gets inlined, and so the increment code gets moved out of line (I have marked the if statement with * and the increment code within "-----"):

    ------------------------- as is currently ------------------
      4d14: 80 61 00 18   lwz r3,24(r1)
      4d18: 7f c4 f3 78   mr r4,r30
      4d1c: 48 00 00 01   bl 4d1c <gfar_clean_rx_ring+0x10c>
    * 4d20: 2f 83 00 04   cmpwi cr7,r3,4
      4d24: 40 9e 00 1c   bne- cr7,4d40 <gfar_clean_rx_ring+0x130>
    ----------------------------
      4d28: 81 3c 01 f8   lwz r9,504(r28)
      4d2c: 81 5c 01 fc   lwz r10,508(r28)
      4d30: 31 4a 00 01   addic r10,r10,1
      4d34: 7d 29 01 94   addze r9,r9
      4d38: 91 3c 01 f8   stw r9,504(r28)
      4d3c: 91 5c 01 fc   stw r10,508(r28)
    ----------------------------
      4d40: a0 1f 00 24   lhz r0,36(r31)
      4d44: 81 3f 00 00   lwz r9,0(r31)
      4d48: 7f a4 eb 78   mr r4,r29
      4d4c: 7f e3 fb 78   mr r3,r31

    -------------------------- unlikely ------------------------
      4d14: 80 61 00 18   lwz r3,24(r1)
      4d18: 7f c4 f3 78   mr r4,r30
      4d1c: 48 00 00 01   bl 4d1c <gfar_clean_rx_ring+0x10c>
    * 4d20: 2f 83 00 04   cmpwi cr7,r3,4
      4d24: 41 9e 03 94   beq- cr7,50b8 <gfar_clean_rx_ring+0x4a8>
      4d28: a0 1f 00 24   lhz r0,36(r31)
      4d2c: 81 3f 00 00   lwz r9,0(r31)
      4d30: 7f a4 eb 78   mr r4,r29
      4d34: 7f e3 fb 78   mr r3,r31
      [...]
      50b8: 81 3c 01 f8   lwz r9,504(r28)
      50bc: 81 5c 01 fc   lwz r10,508(r28)
      50c0: 31 4a 00 01   addic r10,r10,1
      50c4: 7d 29 01 94   addze r9,r9
      50c8: 91 3c 01 f8   stw r9,504(r28)
      50cc: 91 5c 01 fc   stw r10,508(r28)
      50d0: 4b ff fc 58   b 4d28 <gfar_clean_rx_ring+0x118>

So, the increment does actually get moved ~1k away." Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
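A generic illustration of the pattern (not the gianfar code itself): marking the rare branch with unlikely() is what lets gcc move the 64-bit counter update out of the straight-line path, as the disassembly above shows.

    #include <linux/compiler.h>
    #include <linux/types.h>

    struct example_rx_stats {
        u64 rx_packets;
        u64 rx_dropped;
    };

    static void example_count_frame(struct example_rx_stats *stats, bool dropped)
    {
        /* dropped frames are rare: tell gcc so the increment is moved out of line */
        if (unlikely(dropped))
            stats->rx_dropped++;
        else
            stats->rx_packets++;
    }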
-
Submitted by Claudiu Manoil
Group run-time critical fields within the first cacheline (32B), followed by the tx|rx_queue reference arrays and the interrupt group instances (gfargrp), all cacheline aligned. This has several benefits. Firstly comes the performance benefit of having the members required by the driver's hot path re-grouped in the structure's first cache lines, whereas the unimportant members are pushed towards the end of the struct. Another benefit comes from eliminating a 24 byte memory hole that was rendering gfar_priv's 2nd cacheline useless. The default gcc layout of gfar_private leaves an implicit 24 byte hole after the errata (enum) member; this patch fixes it. The uchar bitfields were pushed towards the end of the struct as these are not run-time performance critical (used for init time operations). Because there is no other 2 byte member around to couple the uchar bitfields member with, we will have an additional 2 byte hole after the bitfields. This is insignificant, however, and it doesn't influence gfar_priv's size, because the whole structure is padded to be a 32B multiple. Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
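An illustrative layout only (not the real gfar_private): hot-path members first, queue reference arrays cacheline aligned, init-time bitfields pushed to the end:

    #include <linux/cache.h>
    #include <linux/device.h>
    #include <linux/types.h>

    struct example_rx_q;
    struct example_tx_q;

    struct example_priv {
        /* run-time critical, kept within the first 32B cache line */
        struct device *dev;
        u16 uses_rxfcb;
        u16 padsize;
        u32 device_flags;

        /* queue reference arrays, each starting on a cache line boundary */
        struct example_tx_q *tx_queue[8] ____cacheline_aligned;
        struct example_rx_q *rx_queue[8] ____cacheline_aligned;

        /* init/configuration only (cold), bitfields grouped at the end */
        unsigned int errata;
        u8 wol_en:1;
        u8 prio_sched_en:1;
    };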
-
Submitted by Claudiu Manoil
Use device pointer (dev) to simplify the code and to avoid double indirections, especially on the hot path. Basically, instead of accessing priv to get the ofdev reference and then accessing the ofdev structure to dereference the needed dev pointer, we will get the dev pointer directly from priv. The dev pointer is required on the hot path, see gfar_new_rxbdp or gfar_clean_rx_ring (or xmit), and this patch makes it available directly from priv's 1st cacheline. This change is reflected at asm level too, taking (the hot) gfar_new_rxbdp():

    initial version -
      18c0: 7c 7e 1b 78   mr r30,r3
      18d0: 81 69 04 3c   lwz r11,1084(r9)
      18d8: 34 6b 00 10   addic. r3,r11,16
      18dc: 41 82 00 08   beq- 18e4

    patched version -
      18d0: 80 69 04 38   lwz r3,1080(r9)
      18d8: 2f 83 00 00   cmpwi cr7,r3,0
      18dc: 41 9e 00 08   beq- cr7,18e4

Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Claudiu Manoil
Remove unused device node pointer. Remove duplicated SET_NETDEV_DEV(). Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Merge of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next, submitted by David S. Miller
Steffen Klassert says: ====================
1) Remove a duplicated call to skb_orphan() in pf_key, from Cong Wang.
2) Prepare xfrm and pf_key for algorithms without pf_key support, from Jussi Kivilinna.
3) Fix an unbalanced lock in xfrm_output_one(), from Li RongQing.
4) Add an IPsec state resolution packet queue to handle packets that are sent before the states are resolved.
5) xfrm4_policy_fini() is unused since 2.6.11, time to remove it. From Michal Kubecek.
6) The xfrm gc threshold was configurable just in the initial namespace; make it configurable in all namespaces. From Michal Kubecek.
7) We currently cannot insert policies with mark and mask such that some flows would be matched from both policies. Allow this if the priorities of these policies are different; the one with the higher priority is used in this case.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Cong Wang
They are only used within this file. Cc: Vlad Yasevich <vyasevic@redhat.com> Cc: Stephen Hemminger <stephen@networkplumber.org> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Cong Wang <amwang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-