- 05 May 2012, 3 commits
-
-
Submitted by Richard Alpe
Clear the REQ and GNT bits in the EEPROM control register (EECD). This is required if the EEPROM is to be accessed with the auto-read EERD register. After a cold reset this doesn't matter, but if a PBIST MAC test was executed before booting, the register was left in a dirty state (the two bits were set), which caused the read operation to time out and return 0. Reference (page 312): http://download.intel.com/design/network/manuals/316080.pdf Reported-by: Aleksandar Igic <aleksandar.igic@dektech.com.au> Signed-off-by: Richard Alpe <richard.alpe@ericsson.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
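A minimal sketch of the fix described above, assuming the e1000e register layout (EECD at offset 0x10, REQ/GNT bits 0x40/0x80); the function name and the raw MMIO accessors are illustrative, the real driver uses its own register helpers.

#include <linux/io.h>

/* Register offset and bit values as given in the e1000e hardware headers. */
#define E1000_EECD	0x00010		/* EEPROM/Flash Control */
#define E1000_EECD_REQ	0x00000040	/* NVM Access Request */
#define E1000_EECD_GNT	0x00000080	/* NVM Access Grant */

/* Clear REQ/GNT so a stale grant left by a PBIST MAC test cannot make
 * EERD auto-reads time out and return 0.
 */
static void clear_eecd_req_gnt(void __iomem *hw_addr)
{
	u32 eecd = readl(hw_addr + E1000_EECD);

	eecd &= ~(E1000_EECD_REQ | E1000_EECD_GNT);
	writel(eecd, hw_addr + E1000_EECD);
}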
-
Submitted by Bruce Allan
Like other supported (igp) PHYs, the driver needs to be able to force the master/slave mode on 82577. Since the code is the same as what already exists in the code flow for igp PHYs, move it to a new function to be called for both flows. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Eric Dumazet
It appears some networks play bad games with the two bits reserved for ECN. This can trigger false congestion notifications and very slow transfers. Since RFC 3168 (6.1.1) forbids SYN packets to carry CT bits, we can disable TCP ECN negotiation if we happen to receive mangled CT bits in the SYN packet. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Perry Lorier <perryl@google.com> Cc: Matt Mathis <mattmathis@google.com> Cc: Yuchung Cheng <ycheng@google.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Wilmer van der Gaast <wilmer@google.com> Cc: Ankur Jain <jankur@google.com> Cc: Tom Herbert <therbert@google.com> Cc: Dave Täht <dave.taht@bufferbloat.net> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
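A hedged sketch of the check described above (function and macro names are illustrative, not the exact kernel helpers): ECN is only negotiated when the SYN advertises ECE+CWR and its IP header carries no ECT/CE marking.

#include <stdbool.h>
#include <stdint.h>

#define INET_ECN_MASK	0x03	/* the two ECN bits of the IP DS field */

/* Accept ECN negotiation only for a clean SYN: ECE and CWR set in the TCP
 * header, and no ECT/CE bits set in the IP header (RFC 3168, 6.1.1).
 */
static bool syn_may_negotiate_ecn(uint8_t ip_dsfield, bool ece, bool cwr)
{
	return ece && cwr && (ip_dsfield & INET_ECN_MASK) == 0;
}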
-
- 04 May 2012, 19 commits
-
-
Submitted by Karsten Keil
With multiple cards it is hard to figure out which port caused trouble in the layer2 routines (e.g. got a timeout). Now we have the information in the log output. Signed-off-by: Karsten Keil <kkeil@linux-pingi.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Karsten Keil
The timer3 and the activation delay timer need to be independent. If timer3 fires, do not request power up; we have to send only INFO 0. Now layer1 passes TBR3 again. Signed-off-by: Karsten Keil <kkeil@linux-pingi.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Karsten Keil
For certification testing it is very useful to be able to change the layer1 timer3 value at runtime. Signed-off-by: Karsten Keil <kkeil@linux-pingi.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Karsten Keil
To be fully preempt-safe, we cannot handle an L2 timeout in the timer context itself; we should do all actions via the D-channel thread. Signed-off-by: Karsten Keil <kkeil@linux-pingi.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Karsten Keil
Under some configs it was still not possible to unload the driver, because the module use count was screwed up. Signed-off-by: Karsten Keil <keil@b1-systems.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Andreas Eversberg
The TEI manager reports the current layer 1 state on creation. On state change it reports it to the socket interface. Signed-off-by: Andreas Eversberg <andreas@eversberg.eu> Signed-off-by: Karsten Keil <keil@b1-systems.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
Use the qdisc_drop() helper where possible. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
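For reference, the qdisc_drop() helper of that era was roughly the following (a sketch from memory, not a verbatim copy of include/net/sch_generic.h): it frees the skb, bumps the qdisc drop counter and returns the drop verdict, which is exactly the open-coded pattern the patch replaces.

/* Requires the include/net/sch_generic.h context for struct Qdisc. */
static inline int qdisc_drop(struct sk_buff *skb, struct Qdisc *sch)
{
	kfree_skb(skb);
	sch->qstats.drops++;
	return NET_XMIT_DROP;
}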
-
Submitted by Alexander Duyck
This change updates the link flow control configuration so that we correctly set the link flow control settings for DCB. Previously we would have to call fc_enable 8 times, once for each packet buffer. If we move that logic into the fc_enable call itself we can avoid multiple unnecessary register writes. This change also corrects an issue in which we were only shifting the watermarks for 82599 parts by 6 instead of 10. This was resulting in us only using 1/16 of the packet buffer when flow control was enabled. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
We can avoid many of the forward declarations found in ixgbe_common.c by just reordering things, so this patch does that to help clean up the code. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This change replaces the calls to put_page with calls to __free_pages. Since the FCoE code is able to access order-1 pages, I thought it would be a good idea to change things over to using __free_pages since that is the preferred approach for freeing pages. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
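A small hedged illustration of the API choice (function name hypothetical): when the allocation order is known, __free_pages() states it explicitly instead of relying on put_page() dropping the last reference.

#include <linux/gfp.h>

/* Free an order-1 (two-page) buffer such as an FCoE DDP pool page. */
static void fcoe_free_ddp_buffer(struct page *page)
{
	__free_pages(page, 1);
}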
-
Submitted by Alexander Duyck
This change makes it so that ixgbe_fc_autoneg returns void and always sets the current_mode. Previously, if the link was down we would return an error; however, there is no harm in simply treating a link-down case as a case in which autoneg simply failed. This allows us to rely on the return value of the ixgbe_fc_enable call now, since there should be no cases where it returns an error that would normally be ignored. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This change reorders the mapping of rings to q_vectors in the case that the number of rings exceeds the number of q_vectors. Previously we would allocate the first R/N queues to the first q_vector, where R is the number of rings and N is the number of q_vectors. Instead of doing this we can do a better job of interleaving the rings across the CPUs by assigning every Nth ring to the q_vector. The tables below illustrate this change for the R = 16, N = 4 case.
               Before patch          After patch
q_vector:       0   1   2   3         0   1   2   3
Rings:          0   4   8  12         0   1   2   3
                1   5   9  13         4   5   6   7
                2   6  10  14         8   9  10  11
                3   7  11  15        12  13  14  15
This should improve the performance for both DCB and ATR when the number of rings exceeds the number of q_vectors allocated by the adapter. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
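A hedged sketch of the new assignment rule (data structures simplified; the driver operates on its own ring/q_vector structs): every Nth ring goes to the same q_vector, so q_vector v services rings v, v+N, v+2N, and so on.

/* Map ring i to q_vector (i % num_q_vectors): with 16 rings and 4 q_vectors
 * this yields the "After patch" table above, e.g. q_vector 0 -> rings 0,4,8,12.
 */
static void map_rings_interleaved(unsigned int num_rings,
				  unsigned int num_q_vectors,
				  unsigned int ring_to_qv[])
{
	unsigned int i;

	for (i = 0; i < num_rings; i++)
		ring_to_qv[i] = i % num_q_vectors;
}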
-
Submitted by Alexander Duyck
This change makes it so that we can track instances where a packet was dropped because it was received when there were no DMA buffers available in the ring. For some reason this was only being enabled with RSC; however, it makes more sense to always have this feature on so that we can track any cases where we might drop a buffer due to an Rx ring being full. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Bruce Allan
i217 is the next-generation LOM that will be available on systems with the Lynx Point Platform Controller Hub (PCH) chipset from Intel. This patch provides the initial support for the device. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Matthew Vick
Version bump to 1.11.3-k. Signed-off-by: Matthew Vick <matthew.vick@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Sebastian Andrzej Siewior
The idea here seems to be to get a 44-bit DMA mask working and, if this fails, to fall back to a 32-bit DMA mask. The dma_mask variable is assigned once to 44 bit and never updated. pci_set_dma_mask() and pci_set_consistent_dma_mask() are both implemented as functions, so there is no evil macro which might update dma_mask. Looking at the assembly, I see a call to dma_set_mask() followed by dma_supported() and then a jump past the second dma_set_mask(). The only way to get to the second dma_set_mask() call is by an error code in the first one. So I hereby remove the check since it looks superfluous. Please ignore the patch if there is black magic involved. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
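For context, the usual fallback idiom the text refers to looks roughly like this (a sketch using the era-appropriate PCI DMA API, not the driver's exact code): try the wider mask first, then fall back to 32 bit.

#include <linux/pci.h>
#include <linux/dma-mapping.h>

static int setup_dma_mask(struct pci_dev *pdev)
{
	/* Try the device's native 44-bit mask first... */
	if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(44)) &&
	    !pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(44)))
		return 0;

	/* ...and fall back to a 32-bit mask if that is not supported. */
	if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) &&
	    !pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)))
		return 0;

	return -EIO;
}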
-
Submitted by Alexander Duyck
This patch adds support for a skb_head_is_locked helper function. It is meant to be used any time we are considering transferring the head from skb->head to a paged frag. If the head is locked it means we cannot remove the head from the skb, so it must be copied or we must take the skb as a whole. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
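The helper is most likely along these lines (a sketch of the intended semantics, assuming the skb->head_frag flag used elsewhere in this series): the head is "locked" when it is not a page fragment, or when the skb has clones sharing it.

/* Likely shape of the helper (include/linux/skbuff.h). */
static inline bool skb_head_is_locked(const struct sk_buff *skb)
{
	return !skb->head_frag || skb_cloned(skb);
}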
-
Submitted by Eric Dumazet
GRO is very optimistic in its skb truesize estimates, only taking into account the used part of fragments. Be conservative, and use a more precise computation, so that bloated GRO skbs can be collapsed eventually. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Alexander Duyck <alexander.h.duyck@intel.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Acked-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 May 2012, 18 commits
-
-
Submitted by Greg Rose
Signed-off-by: Greg Rose <gregory.v.rose@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Greg Rose
If the Physical Function (PF) resets after the VF has set a jumbo frame MTU, then the VF jumbo frame MTU is overwritten. Make sure the VF driver always requests the proper MTU size after reset synchronization. Signed-off-by: Greg Rose <gregory.v.rose@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Greg Rose
The X540 10Gig controller is capable of linking at 100 Mbps - add support for reporting that link speed. Signed-off-by: Greg Rose <gregory.v.rose@intel.com> Tested-by: Sibai Li <sibai.li@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Chris Boot
For the 82573, ASPM L1 gets disabled wholesale, so this special-case code is not required. For the 82574 the previous patch does the same as for the 82573, disabling L1 on the adapter. Thus, this code is no longer required and can be removed. Signed-off-by: Chris Boot <bootc@bootc.net> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Chris Boot
ASPM on the 82574 causes trouble. Currently the driver disables L0s for this NIC but only disables L1 if the MTU is >1500. This patch simply causes L1 to be disabled regardless of the MTU setting. Signed-off-by: Chris Boot <bootc@bootc.net> Cc: "Wyborny, Carolyn" <carolyn.wyborny@intel.com> Cc: Nix <nix@esperi.org.uk> Link: https://lkml.org/lkml/2012/3/19/362 Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
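A hedged sketch using the generic PCIe helper (the e1000e driver has its own ASPM wrapper and flag bits, so this only illustrates the effect): keep both L0s and L1 off for the 82574 regardless of MTU.

#include <linux/pci.h>

/* Generic PCIe helper; older kernels declare it in <linux/pci-aspm.h>. */
static void disable_aspm_l0s_l1(struct pci_dev *pdev)
{
	pci_disable_link_state(pdev, PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1);
}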
-
Submitted by Matthew Vick
Previously, IPv6 extension header parsing was disabled for all devices supported by e1000e when using packet split mode. However, as per a silicon erratum, only certain devices need this restriction, and those will need to disable IPv6 extension header parsing for all modes. Signed-off-by: Matthew Vick <matthew.vick@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Matthew Vick
For 82574 and 82583 devices, resolve an intermittent link issue where the link negotiates to 100Mbps rather than 1Gbps when powering off the PHY and powering it back on after several seconds. Signed-off-by: Matthew Vick <matthew.vick@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Bruce Allan
Calling the locked versions of the read/write PHY ops function pointers often produces excessively long lines. Shorten these as is done with the non-locked versions of the PHY register read/write functions. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Bruce Allan
There is a known issue in the 82577 and 82578 devices that can cause a hang in the device hardware during traffic stress; the current workaround in the driver is to disable transmit flow control by default. If the user enables transmit flow control and the device hang occurs, provide a message in the syslog suggesting to re-enable the workaround. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
While testing the TCP changes I had to fix an issue in order to be able to load and unload the module. The recent patch that added thermal sensor support introduced a use-after-free bug on module unload with an 82598 adapter in the system. To resolve the issue I have updated the code so that when we free the info_kobj we set it back to NULL. I suspect there are likely other bugs present, but I will leave those for another patch that can undergo more testing. I am submitting this directly to net-next since this fixes a fairly serious bug that will lock up the ixgbe module until the system is rebooted. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
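A hedged sketch of the fix pattern (struct and field names are illustrative stand-ins for the adapter member): clear the pointer after dropping the reference so a later unload path cannot free the kobject a second time.

#include <linux/kobject.h>

/* Illustrative stand-in for the adapter member holding the kobject. */
struct thermal_info_stub {
	struct kobject *info_kobj;
};

static void free_info_kobj(struct thermal_info_stub *info)
{
	if (info->info_kobj) {
		kobject_put(info->info_kobj);
		info->info_kobj = NULL;	/* guard against a second free on unload */
	}
}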
-
Submitted by Alexander Duyck
This change cleans up the last bits of tcp_try_coalesce so that we only need one goto, which jumps to the end of the function. The idea is to make the code more readable by putting things in a linear order so that we start execution at the top of the function and end it at the bottom. I also made a slight tweak to the code for handling frags when we are a clone. Instead of an "if (clone) loop, else nr_frags = 0" construct, I changed the logic so that if (!clone) we just set the number of frags to 0, which disables the for loop anyway. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Alexander Duyck
This change reorders the code related to the use of an skb->head_frag so it is placed before we check the rest of the frags. This allows the code to read more linearly instead of like some sort of loop. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Alexander Duyck
This patch addresses several issues in the way we were tracking the truesize in tcp_try_coalesce. First, it was using ksize, which prevents us from having a 0-sized head frag and getting a usable result. To resolve that, this patch uses the end pointer, which is set based off either ksize or the frag_size supplied in build_skb. This allows us to compute the original truesize of the entire buffer and remove that value, leaving us with just what was added as pages. The second issue was the use of skb->len if there is a mergeable head frag. We should only need to remove the size of a data-aligned sk_buff from our current skb->truesize to compute the delta for a buffer with a reused head. By using skb->len the value of truesize was being artificially reduced, which means that head frags could use more memory than buffers using standard allocations. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
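A hedged sketch of the computation described (helper name illustrative, and SKB_TRUESIZE used as a rough stand-in for the buffer-to-truesize accounting): charge only what the donor skb adds beyond its own head..end buffer, using the end pointer rather than ksize() or skb->len.

/* Truesize the donor skb adds beyond its own linear buffer (head..end). */
static int coalesce_truesize_delta(const struct sk_buff *from)
{
	unsigned int head_size = skb_end_pointer(from) - from->head;

	return from->truesize - SKB_TRUESIZE(head_size);
}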
-
Submitted by David S. Miller
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Alexander Duyck
This change is meant to prevent stealing the skb->head to use as a page in the event that the skb->head was cloned. This allows the other clones to track each other via shinfo->dataref. Without this we break down to two methods for tracking the reference count, one being dataref, the other being the page count. As a result it becomes difficult to track how many references there are to skb->head. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
Extend TCP coalescing by implementing it from tcp_queue_rcv(), the main receiver function when the application is not blocked in recvmsg(). Function tcp_queue_rcv() is moved a bit to allow its call from tcp_data_queue(). This gives good results especially if GRO could not kick in, and if the skb head is a fragment. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Alexander Duyck <alexander.h.duyck@intel.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
Before stealing fragments or the skb head, we must make sure skbs are not cloned. Alexander was worried about the destination skb being cloned: in bridge setups, a driver could be fooled if skb->data_len did not match the skb nr_frags. If the source skb is cloned, we must take references on pages instead. The bug happened using tcpdump (if not using mmap()). Introduce a kfree_skb_partial() helper to clean up the code. Reported-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
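The new helper is presumably close to the following (a sketch; skbuff_head_cache is the kmem cache sk_buff structs come from in net/core/skbuff.c): when the head was stolen, only the sk_buff metadata is returned to its cache, otherwise the whole skb is freed.

/* Presumed shape of kfree_skb_partial() (net/core/skbuff.c). */
void kfree_skb_partial(struct sk_buff *skb, bool head_stolen)
{
	if (head_stolen)
		kmem_cache_free(skbuff_head_cache, skb);	/* head now owned elsewhere */
	else
		__kfree_skb(skb);				/* free data and metadata */
}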
-
Submitted by Somnath Kotur
An EEH error can cause the FW to trigger a flash debug dump. Resetting the card while the flash dump is in progress can cause it not to recover. Wait for it to finish before letting the EEH flow reset the card. Signed-off-by: Sathya Perla <Sathya.Perla@emulex.com> Signed-off-by: Somnath Kotur <somnath.kotur@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-