- 19 October 2012, 7 commits
-
-
Submitted by Alexander Duyck
This change combines the allocation of q_vectors and rings into a single function. The advantage of this is that we are guaranteed to avoid overlap in the L1 cache sets. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
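A minimal sketch of the single-allocation idea, with illustrative structure members rather than igb's actual layout: the rings sit at the tail of the q_vector, so one kzalloc() covers both and they land together in memory.

```c
#include <linux/slab.h>
#include <linux/types.h>

struct ring {
	void *desc;		/* descriptor memory, set up later */
	u16 count;
};

struct q_vector {
	unsigned int num_rings;
	struct ring ring[];	/* rings live in the same allocation */
};

/* One allocation covers the vector and all of its rings. */
static struct q_vector *alloc_q_vector(unsigned int num_rings)
{
	struct q_vector *qv;

	qv = kzalloc(sizeof(*qv) + num_rings * sizeof(struct ring),
		     GFP_KERNEL);
	if (qv)
		qv->num_rings = num_rings;
	return qv;
}
```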
-
Submitted by Alexander Duyck
This change locks us in at 2K buffers even on a system that supports larger frames. The reason for this change is to make better use of pages and to reduce the overall truesize of frames generated by igb. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
In order to try to isolate things a bit further, I am moving the code related to retrieving data from the rx_buffer_info structure into a separate function. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This change makes it so that we map the entire page and just sync half of it for the device at a time. The advantage to this approach is that we can avoid the locking on map/unmap seen in many IOMMU implementations. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
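A hedged sketch of the map-once / sync-half approach, assuming a 4K page split into two 2K halves; the buffer structure and the RX_BUFSZ name are stand-ins, not the driver's exact code.

```c
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

#define RX_BUFSZ 2048			/* half of a 4K page (assumed) */

struct rx_buffer {
	struct page *page;
	dma_addr_t dma;			/* mapping of the whole page */
	unsigned int page_offset;	/* 0 or RX_BUFSZ */
};

static int rx_buffer_map(struct device *dev, struct rx_buffer *buf)
{
	buf->page = alloc_page(GFP_ATOMIC);
	if (!buf->page)
		return -ENOMEM;

	/* Map the entire page once; no per-frame map/unmap later. */
	buf->dma = dma_map_page(dev, buf->page, 0, PAGE_SIZE,
				DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, buf->dma)) {
		__free_page(buf->page);
		return -ENOMEM;
	}
	buf->page_offset = 0;
	return 0;
}

static void rx_buffer_reuse(struct device *dev, struct rx_buffer *buf)
{
	/* Flip to the other half and hand just that half to the device. */
	buf->page_offset ^= RX_BUFSZ;
	dma_sync_single_range_for_device(dev, buf->dma, buf->page_offset,
					 RX_BUFSZ, DMA_FROM_DEVICE);
}
```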
-
Submitted by Alexander Duyck
This change is meant to just clean up a number of function calls that were made at the end of the Rx clean-up path by combining them into a single function call. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This change makes it so that we no longer use header split. The idea is to reduce partial cache line writes by hardware when handling frames larger than the header size. We can compensate for the extra overhead of having to memcpy the header buffer by avoiding the cache misses seen when leaving a full skb allocated and sitting on the ring. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Tushar Dave
The current implementation messes up the tail pointer. This patch sets skb->tail correctly. Also, the small packet check and padding are optimized by using unlikely and calling skb_pad directly. Signed-off-by: Tushar Dave <tushar.n.dave@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
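A hedged sketch of the padding described above; the 17-byte minimum and the surrounding transmit context are assumptions, not the driver's exact code.

```c
#include <linux/skbuff.h>

#define MIN_TX_LEN 17	/* assumed hardware minimum for this sketch */

/* Returns 0 on success; on failure skb_pad() has already freed the skb. */
static int pad_small_packet(struct sk_buff *skb)
{
	if (unlikely(skb->len < MIN_TX_LEN)) {
		unsigned int pad = MIN_TX_LEN - skb->len;

		if (skb_pad(skb, pad))
			return -ENOMEM;
		/* skb_pad() only grows and zeroes the tailroom; account
		 * for the pad bytes so len and tail stay consistent. */
		__skb_put(skb, pad);
	}
	return 0;
}
```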
-
- 22 September 2012, 7 commits
-
-
Submitted by Alexander Duyck
This change is meant to improve performance on systems that do not require the DMA unmap calls. On those systems we do not need to make use of the unmap address for Tx or the unmap length, so we can drop both, thereby reducing the size of the Tx buffer info structure. In addition, I have changed the logic to check for the unmap length instead of the unmap address when checking to see if a buffer needs to be unmapped from DMA use. The reason for this change is that on some platforms it is possible to receive a valid DMA address of 0 from an IOMMU. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
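A minimal sketch of the two ideas, with an illustrative buffer layout: the unmap fields compile away on architectures that never unmap, and the "needs unmap?" test keys off the stored length because a DMA address of 0 can be legitimate behind an IOMMU.

```c
#include <linux/dma-mapping.h>
#include <linux/skbuff.h>

struct tx_buffer {
	struct sk_buff *skb;
	/* These expand to nothing on arches without DMA unmap state. */
	DEFINE_DMA_UNMAP_ADDR(dma);
	DEFINE_DMA_UNMAP_LEN(len);
};

static void tx_buffer_unmap(struct device *dev, struct tx_buffer *buf)
{
	/* Test the length, not the address: addr == 0 may be valid. */
	if (dma_unmap_len(buf, len)) {
		dma_unmap_single(dev,
				 dma_unmap_addr(buf, dma),
				 dma_unmap_len(buf, len),
				 DMA_TO_DEVICE);
		dma_unmap_len_set(buf, len, 0);
	}
}
```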
-
Submitted by Alexander Duyck
Instead of storing the RSS key as a character array, we can simplify the configuration by making it a u32 array. This allows us to just write one value per register without any unnecessary operations to construct the value. This change will produce the exact same key; the only difference is that I translated the u8 array to a u32 array, which will be correctly ordered on writes to hardware by the cpu_to_le32 operations built into the writel calls. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
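A sketch of the one-write-per-register idea; the accessor below writes directly through writel() at an assumed register base, whereas the real driver goes through its own register macros.

```c
#include <linux/io.h>
#include <linux/types.h>

#define RSS_KEY_DWORDS 10	/* 40-byte RSS key = ten 32-bit registers */

static void rss_write_key(void __iomem *rssrk_base, const u32 *rss_key)
{
	int i;

	/* One value per register; writel() takes care of byte ordering. */
	for (i = 0; i < RSS_KEY_DWORDS; i++)
		writel(rss_key[i], rssrk_base + 4 * i);
}
```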
-
Submitted by Alexander Duyck
This patch cleans up our RSS indirection table configuration so that we generate the same table regardless of CPU endianness. In addition, it changes the table setup from a modulo-based layout to a divisor-based one. The advantage to this is that we should be able to take the Rx hash and compute the Rx queue with very little CPU overhead if needed. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
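A sketch of the divisor-style table fill under an assumed table size: consecutive indirection-table entries map to the same queue, so the queue can later be recovered from the hash with a single divide.

```c
#include <linux/types.h>

#define RETA_ENTRIES 128	/* assumed table size for this sketch */

static void rss_fill_reta(u8 *reta, unsigned int num_rx_queues)
{
	unsigned int i;

	/* Entry i -> (i * queues) / entries: a block of consecutive
	 * entries per queue, instead of the round-robin i % queues. */
	for (i = 0; i < RETA_ENTRIES; i++)
		reta[i] = (i * num_rx_queues) / RETA_ENTRIES;
}
```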
-
Submitted by Alexander Duyck
This change makes it so that Tx cleanup is done in a do/while loop instead of a for loop. The main motivation behind this is the fact that we should never be invoked with a budget less than 1, so we can skip checking the budget before processing the first descriptor. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
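A toy, self-contained illustration of the loop shape (plain C, not the driver's code): because the budget is never below 1 on entry, the first descriptor is handled before the budget is re-checked.

```c
#include <stdbool.h>
#include <stddef.h>

/* "done" marks which descriptors the hardware has written back. */
static size_t clean_ring(const bool *done, size_t count, int budget)
{
	size_t cleaned = 0;

	do {
		if (cleaned == count || !done[cleaned])
			break;		/* nothing more completed */
		cleaned++;		/* process this descriptor */
	} while (--budget);

	return cleaned;
}
```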
-
Submitted by Alexander Duyck
This change removes the code that was doing the NUMA allocations for the q_vectors, rings, and ring resources. The problem is that the logic used assumed the NUMA nodes were always interleaved, and that is not always the case. I hope to add this functionality back in a more controlled manner in the future. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Carolyn Wyborny
Due to a hardware issue, on i210 and i211 parts, the TNCRS statistic provides an invalid value. This patch changes the update stats function to increment the stat only for non-i210/i211 parts. Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Stefan Assmann
Adapt the pre-existing and assigned VFs code to the ixgbe way introduced in commit 9297127b. Instead of searching for the enabled VFs, we use pci_num_vf to determine how many VFs are enabled. By checking which PF an assigned VF belongs to, it is possible to decide whether to leave it enabled or not. Signed-off-by: Stefan Assmann <sassmann@kpanic.de> Acked-by: Greg Rose <gregory.v.rose@intel.com> Tested-by: Robert Garrett <robertx.e.garrett@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
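For reference, a hedged sketch of the check the commit relies on; the surrounding decision about assigned VFs is elided.

```c
#include <linux/pci.h>

/* pci_num_vf() reports how many VFs this PF currently has enabled,
 * so no scan of the VF devices is needed just to answer that. */
static bool pf_has_enabled_vfs(struct pci_dev *pdev)
{
	return pci_num_vf(pdev) > 0;
}
```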
-
- 20 September 2012, 1 commit
-
-
Submitted by Alexander Duyck
For some reason the reading of the RQDPC register was being artificially limited to 4K. Instead of limiting the value we should read the value and add the full amount. Otherwise this can lead to a misleading number of dropped packets when the actual value is in fact much higher. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 17 September 2012, 3 commits
-
-
Submitted by Matthew Vick
In rare circumstances, it's possible a descriptor writeback will occur before a timestamped Tx packet will go out on the wire, leading to the driver believing the hardware failed to timestamp the packet. Schedule a work item for 82576 and use the available time sync interrupt registers on 82580 and above to account for this. Cc: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Matthew Vick <matthew.vick@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Matthew Vick
Where possible, move PTP-related functions into igb_ptp.c and update the names of functions and variables to match the established coding style in the files and specify that they are PTP-specific. Cc: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Matthew Vick <matthew.vick@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Matthew Vick
For users without CONFIG_IGB_PTP=y, we should not be compiling any PTP code into the driver. Tidy up the wrapping in igb to support this. Cc: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Matthew Vick <matthew.vick@intel.com> Acked-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
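A hedged sketch of the usual wrapping pattern this implies (the stub shown is illustrative, not necessarily the driver's exact set of entry points): callers stay unconditional while the PTP code compiles out.

```c
struct igb_adapter;	/* opaque here; defined in the driver */

#ifdef CONFIG_IGB_PTP
void igb_ptp_init(struct igb_adapter *adapter);
#else
/* No PTP support configured: the entry point becomes a no-op stub. */
static inline void igb_ptp_init(struct igb_adapter *adapter) { }
#endif
```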
-
- 08 September 2012, 1 commit
-
-
Submitted by Stephen Hemminger
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
- 24 August 2012, 1 commit
-
-
Submitted by Jiang Liu
Use PCI Express Capability access functions to simplify igb driver. [bhelgaas: split e1000e and igb into separate patches] Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Signed-off-by: Yijing Wang <wangyijing@huawei.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
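A small hedged example of the helper family in question; the DEVCTL read shown is just an illustration, not the driver's specific change.

```c
#include <linux/pci.h>

/* The helper locates the PCIe capability itself, so there is no need
 * to call pci_find_capability() and compute offsets by hand. */
static int read_pcie_devctl(struct pci_dev *pdev, u16 *devctl)
{
	return pcie_capability_read_word(pdev, PCI_EXP_DEVCTL, devctl);
}
```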
-
- 21 August 2012, 1 commit
-
-
Submitted by Jesse Brandeburg
This is the implementation for igb to allow forcing MDI state via ethtool, allowing users to work around some improperly behaving switches. Forcing in this driver is for now only allowed when auto-neg is enabled. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> CC: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 09 August 2012, 1 commit
-
-
Submitted by Emil Tantilov
This patch resolves a "BUG: unable to handle kernel paging request at ..." oops while dumping packet data. The issue occurs with IOMMU enabled due to the address provided by phys_to_virt(). This patch avoids phys_to_virt() by using skb->data and the address of the pages allocated for Rx. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
-
- 01 August 2012, 1 commit
-
-
Submitted by Mel Gorman
The skb->pfmemalloc flag gets set to true iff, during the slab allocation of data in __alloc_skb, the PFMEMALLOC reserves were used. If page splitting is used, it is possible that pages will be allocated from the PFMEMALLOC reserve without propagating this information to the skb. This patch propagates page->pfmemalloc from pages allocated for fragments to the skb. It works by reintroducing and expanding the skb_alloc_page() API to take an skb. If the page was allocated from pfmemalloc reserves, the flag is automatically copied. If the driver allocates the page before the skb, it should call skb_propagate_pfmemalloc() after the skb is allocated to ensure the flag is copied properly. Failure to do so is not critical. The resulting driver may perform slower if it is used for swap-over-NBD or swap-over-NFS, but it should not result in failure. [davem@davemloft.net: API rename and consistency] Signed-off-by: Mel Gorman <mgorman@suse.de> Acked-by: David S. Miller <davem@davemloft.net> Cc: Neil Brown <neilb@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Christie <michaelc@cs.wisc.edu> Cc: Eric B Munson <emunson@mgebm.net> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Cc: Mel Gorman <mgorman@suse.de> Cc: Christoph Lameter <cl@linux.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
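A hedged driver-side sketch of the rule described above, with illustrative function names around the one real call: when the page is allocated before the skb, propagate the flag once the skb exists.

```c
#include <linux/skbuff.h>
#include <linux/netdevice.h>

static struct sk_buff *rx_build_skb(struct net_device *netdev,
				    struct page *page, unsigned int hdr_len)
{
	struct sk_buff *skb = netdev_alloc_skb_ip_align(netdev, hdr_len);

	if (!skb)
		return NULL;

	/* The page may have come from PFMEMALLOC reserves; copy the
	 * flag so the stack treats this skb as an emergency buffer. */
	skb_propagate_pfmemalloc(page, skb);

	/* ... copy headers, attach the page as a fragment, etc. ... */
	return skb;
}
```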
-
- 22 July 2012, 1 commit
-
-
Submitted by Akeem G. Abodunrin
There was a previous patch to resolve an issue with the 82576 losing its PHY setting after PHY power down. However, that implementation triggered a speed mismatch and occasional link loss. This patch resolves both the initial PHY setting and the speed mismatch issues. Signed-off-by: Akeem G. Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 17 July 2012, 1 commit
-
-
Submitted by Joe Perches
Convert the existing uses of random_ether_addr to the new eth_random_addr. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
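The conversion is mechanical; a minimal sketch of the renamed call:

```c
#include <linux/etherdevice.h>

static void assign_random_mac(u8 *addr)
{
	/* Formerly random_ether_addr(addr); same behavior under the new
	 * name: a random, locally administered unicast address. */
	eth_random_addr(addr);
}
```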
-
- 11 July 2012, 1 commit
-
-
Submitted by Ben Hutchings
Fix incorrect start markers, wrapped summary lines, missing section breaks, incorrect separators, and some name mismatches. Delete a few that are content-free. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 June 2012, 4 commits
-
-
Submitted by Carolyn Wyborny
This patch updates the igb version to 4.0.1. Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Carolyn Wyborny
Our NVM image creation tools have evolved over the years and there are multiple versions contained in them, depending on the tool used to create them. This patch outputs the NVM versions available in ethtool -i output. rc2: (not sure why the other notes show in the log but not in the message) Added an additional call to igb_set_fw_version per community feedback. Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Matthew Vick
Rather than spread out the complexity of the RSS queue and queue pairing assignment logic, place it all in one location for simplicity and readability. Signed-off-by: Matthew Vick <matthew.vick@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Lior Levy
MMW_SIZE in the RTTBCNRM register needs to be configured with a correct value. For the 82576 device, the value should be 0x14. Signed-off-by: Lior Levy <lior.levy@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 17 May 2012, 1 commit
-
-
Submitted by Matthew Vick
Under certain scenarios, it's possible that bursty manageability traffic over the BMC-to-OS path may overrun the internal manageability receive buffer, causing dropped manageability packets. Clearing this bit prevents this situation by interrupting coalescing to allow manageability traffic through. Signed-off-by: Matthew Vick <matthew.vick@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 13 May 2012, 1 commit
-
-
Submitted by Carolyn Wyborny
This patch adds new initialization functions and device support for i210 and i211 devices. Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 11 May 2012, 1 commit
-
-
Submitted by Benjamin Poirier
Since the caller (PM resume code) is not the one holding rtnl, when taking the 'else' branch rtnl may be released at any moment, thereby defeating the whole purpose of this code block. Signed-off-by: Benjamin Poirier <bpoirier@suse.de> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 May 2012, 1 commit
-
-
Submitted by John Fastabend
igb and ixgbe incorrectly call netdev_tx_reset_queue() from i{gb|xgbe}_clean_tx_ring(). This sort of works in most cases, except when the number of real tx queues changes. When the number of real tx queues changes, netdev_tx_reset_queue() only gets called on the new number of queues, so when we reduce the number of queues we risk triggering the watchdog timer and repeated device resets. So this is not only a cosmetic issue but causes real bugs. For example, enabling/disabling DCB or FCoE in ixgbe will trigger this. CC: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: John Bishop <johnx.bishop@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 27 April 2012, 1 commit
-
-
Submitted by Matthew Vick
During igb_reset(), we initiate a hardware reset which will clear our flow control settings. For auto-negotiation, we re-negotiate them when linking up again, but we need to force them off properly for the forced speed case. Signed-off-by: Matthew Vick <matthew.vick@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 14 April 2012, 1 commit
-
-
Submitted by Carolyn Wyborny
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 04 April 2012, 1 commit
-
-
Submitted by Richard Cochran
This commit removes the legacy timecompare code from the igb driver and offers a tunable PHC instead. Signed-off-by: Richard Cochran <richardcochran@gmail.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 28 March 2012, 1 commit
-
-
Submitted by stephen hemminger
Dan Carpenter noticed that the ixgbevf initial default was different from the rest. But the problem is broader than that: only one Intel driver (ixgb) was doing it almost right. The default debug level should be consistent among Intel drivers and follow the established convention. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
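A sketch of the convention being referred to, under the usual names (the exact default bits per driver may differ): a module parameter filtered through netif_msg_init() with a shared default mask.

```c
#include <linux/netdevice.h>
#include <linux/module.h>

#define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK)

static int debug = -1;	/* -1 means "use DEFAULT_MSG_ENABLE" */
module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0=none, ..., 16=all)");

static u32 example_msg_enable(void)
{
	/* Out-of-range or negative values fall back to the default. */
	return netif_msg_init(debug, DEFAULT_MSG_ENABLE);
}
```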
-
- 17 March 2012, 2 commits
-
-
Submitted by Ben Greear
This allows the NIC to receive all frames available, including those with bad FCS, un-matched vlans, ethernet control frames, and more. Tested by sending frames with bad FCS. Signed-off-by: Ben Greear <greearb@candelatech.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Ben Greear
This includes sending a bad FCS, used to generate frames with bad FCS to test another system's handling of RX of bad packets. Signed-off-by: Ben Greear <greearb@candelatech.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-