- 21 March 2017, 4 commits
-
-
Submitted by Jacob Keller

In preparation for adding code to properly check the mask values, we will need to know the number of active filters for each type. Add counters for each filter type. Rename the already existing fd_tcp_rule to fd_tcp4_filter_cnt to match the style of other names. To avoid style warnings, avoid assigning multiple parameters at once, and fix up one other case where we did so previously.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
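A rough sketch of the resulting per-type counters in the PF structure; fd_tcp4_filter_cnt is named in the message, while the other field names are assumptions for illustration:

    struct i40e_pf {
        /* ... existing fields ... */
        u16 fd_tcp4_filter_cnt;   /* renamed from fd_tcp_rule */
        u16 fd_udp4_filter_cnt;   /* assumed name */
        u16 fd_sctp4_filter_cnt;  /* assumed name */
        u16 fd_ip4_filter_cnt;    /* assumed name */
        /* ... */
    };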
-
Submitted by Jacob Keller

Move the ATR exit check after we have sent the TCP/IPv4 filter to the ring successfully. This avoids an issue where we potentially update the filter count without actually succeeding in adding the filter. Now, we only increment fd_tcp_rule after we've succeeded. Additionally, we will re-enable ATR mode only after deletion of the filter is actually posted to the FDIR ring.

Change-ID: If5c1dea422081cc5e2de65618b01b4c3bf6bd586
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller

Instead of setting err = true and checking it to determine when to free the raw_packet near the end of the function, simply kfree() and return immediately. The resulting code is a bit cleaner and has one less variable. This also resolves a subtle bug in the IPv4 case, which could fail to add the first filter and then never free the memory, resulting in a small memory leak.

Change-ID: I7583aac033481dc794b4acaa14445059c8930ff1
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Avinash Dayanand <avinash.dayanand@intel.com>
Reviewed-by: Alan Brady <alan.brady@intel.com>
Reviewed-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
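The shape of the change, as a sketch (the function and variable names follow the surrounding driver code but are assumptions here):

    err = i40e_program_fdir_filter(fd_data, raw_packet, pf, add);
    if (err) {
        /* free and bail out immediately; no err flag, no shared exit path */
        kfree(raw_packet);
        return -EOPNOTSUPP;
    }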
-
Submitted by Jacob Keller

The code originally included src_ip and dst_ip with enough space to support IPv6 filters. However, no actual support for IPv6 filters has been implemented. Thus, remove the arrays and just use __be32 values. Should IPv6 support be added in the future, we can replace these with a union that has sizes for both values.

Change-Id: I1bc04032244a80eb6ebc8a4e6c723a4a665c1dd5
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
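Illustratively, presumably in the Flow Director filter structure (a sketch):

    /* before: arrays sized for IPv6 support that was never implemented */
    __be32 src_ip[4];
    __be32 dst_ip[4];

    /* after: plain IPv4 addresses */
    __be32 src_ip;
    __be32 dst_ip;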
-
- 15 March 2017, 2 commits
-
-
Submitted by Harshitha Ramamurthy

A previous commit introduced a field that tracks the features that are disabled due to HW resource limitations, as opposed to the features disabled by the user. This patch changes the name of the field to make it more readable, since it might get confusing when looking at code containing both the flags field and the auto_disable_features field together.

Change-ID: Idcc9888659698f6fe3ccff17c8c3f09b5026f708
Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck

This patch adds support for DMA_ATTR_SKIP_CPU_SYNC and DMA_ATTR_WEAK_ORDERING. By enabling both of these for the Rx path we are able to see performance improvements on architectures that implement either one, since page mapping and unmapping only has to sync what is actually being used instead of the entire buffer. In addition, enabling the weak ordering attribute provides a performance improvement on architectures that can associate a memory ordering with a DMA buffer, such as SPARC.

Change-ID: If176824e8231c5b24b8a5d55b339a6026738fc75
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
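A sketch of how the Rx page mapping can pass both attributes (the I40E_RX_DMA_ATTR helper name is an assumption here):

    /* assumed helper macro combining the two attributes */
    #define I40E_RX_DMA_ATTR \
        (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)

    /* map the Rx page once; unmap can now skip the CPU sync, and the
     * mapping may use weak ordering where the architecture supports it */
    dma = dma_map_page_attrs(rx_ring->dev, page, 0, PAGE_SIZE,
                             DMA_FROM_DEVICE, I40E_RX_DMA_ATTR);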
-
- 19 February 2017, 2 commits
-
-
Submitted by Jacob Keller

Fix, or rather, avoid a sparse warning caused by the fact that csum_replace_by_diff expects to receive a __wsum value. Since the calculation appears to work, simply typecast the passed paylen value to __wsum to avoid the warning. This seems pretty fishy, since __wsum was obviously annotated as a separate type on purpose, so this throws the entire calculation into question. Since it currently appears to behave as expected, the typecast is probably safe.

Change-ID: I4fdc5cddd589abc16098176e8a61127e761488f4
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
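The cast in question looks roughly like this (a sketch; the l4 union access follows the driver's TSO path but is an assumption here):

    /* paylen holds a host-order payload length; csum_replace_by_diff()
     * is annotated to take a __wsum, so force the cast for sparse */
    csum_replace_by_diff(&l4.udp->check,
                         (__force __wsum)htonl(paylen));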
-
Submitted by Carolyn Wyborny

This patch fixes a bug introduced with the addition of per-queue ITR feature support in ethtool. With that addition, functions were added which converted the ITR settings to binary values. The IS_ENABLED macros that run on those values check whether a bit is set or not, and with the value being binary, the bit check always returned ITR disabled, which prevented any updating of the ITR rate. This patch fixes the problem by changing the functions to return the current ITR value instead, and renaming them to better reflect their function. These functions now provide a value which will be accurately assessed and update the ITR as intended.

Change-ID: I14f1d088d052e27f652aaa3113e186415ddea1fc
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 12 February 2017, 4 commits
-
-
Submitted by Jacob Keller

The original comment implies that the only location where the raw_packet buffer will be freed is in i40e_clean_tx_ring(), which is incorrect. In fact, this isn't even the normal case. Update the comment to explain where the memory is actually freed.

Change-ID: Ie0defc35ed1c3af183f81fdc60b6d783707a5595
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Scott Peterson

Reorganize the i40e_pull_tail() logic, doing it in i40e_add_rx_frag() where it's cheaper. The igb driver does this the same way. Also rename i40e_page_is_reserved() to reflect what it actually tests.

Change-ID: Icd9cc507aae1fcdc02308b3a09034111b4c24071
Signed-off-by: Scott Peterson <scott.d.peterson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Scott Peterson

This patch reduces the size of struct i40e_rx_buffer by one pointer, and makes the i40e driver a little more consistent with the igb driver in terms of packets that span buffers. We do this by moving the skb field from struct i40e_rx_buffer to struct i40e_ring. We pass the skb we already have (or NULL if we don't) to i40e_fetch_rx_buffer(), which skips the skb allocation if we already have one for this packet.

Change-ID: I4ad48a531844494ba0c5d8e1a62209a057f661b0
Signed-off-by: Scott Peterson <scott.d.peterson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
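A sketch of the data-structure move (field placement and the exact parameter order are assumptions):

    struct i40e_ring {
        /* ... */
        struct sk_buff *skb;  /* in-progress frame that may span buffers */
    };

    /* pass the existing skb (or NULL) so allocation is skipped on reuse */
    skb = i40e_fetch_rx_buffer(rx_ring, rx_desc, skb);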
-
Submitted by Scott Peterson

On packet Rx, we perform a DMA sync for the CPU before passing the packet up. Here we limit that sync to the actual length of the incoming packet, rather than always syncing the entire buffer.

Change-ID: I626aaf6c37275a8ce9e81efcaa773f327b331487
Signed-off-by: Scott Peterson <scott.d.peterson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
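Conceptually (a sketch; variable names are assumptions), where size is the packet length taken from the descriptor:

    /* sync only the bytes the hardware actually wrote for this packet,
     * rather than the full Rx buffer */
    dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma,
                                  rx_buffer->page_offset, size,
                                  DMA_FROM_DEVICE);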
-
- 03 February 2017, 1 commit
-
-
Submitted by Alexander Duyck

This patch does some quick work to pull some of the data off of the stack and hopefully start storing it in the Tx buffer info section of the Tx ring. Ideally we should be moving away from having to store much of anything on the stack, and can just maintain it all in the descriptor rings.

Change-ID: I4b4715ea1920e122502482b3f9e56a9a6cb1e9fe
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 07 December 2016, 1 commit
-
-
Submitted by Alexander Duyck

Currently the function i40e_napi_poll() returns 0 when it completely cleans the Rx rings, but this fouls budget accounting in the core code. Fix this by returning the actual work done, capped to budget - 1, since the core doesn't allow returning the full budget when the driver modifies the NAPI status. This is based on a similar change that was made for the ixgbe driver by Paolo Abeni.

Change-ID: Ic3d93ad2fa2fc8ce3164bc461e69367da0f9173b
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
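The fix amounts to something like this (sketch; the ITR re-enable helper name is an assumption):

    /* report the real work done, but never the full budget, since we
     * also completed NAPI and re-enabled interrupts */
    napi_complete_done(napi, work_done);
    i40e_update_enable_itr(vsi, q_vector);
    return min(work_done, budget - 1);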
-
- 01 November 2016, 4 commits
-
-
Submitted by Alexander Duyck

This patch reorders the logic at the end of i40e_tx_map to address the fact that the logic was rather convoluted and much larger than it needed to be. In order to try and coalesce the code paths, I have updated some of the comments and repurposed some of the variables in order to reduce unnecessary overhead. This patch does the following:
1. Quit tracking skb->xmit_more with a flag; just max out packet_stride.
2. Drop tail_bump and do_rs and instead just use desc_count and td_cmd.
3. Pull comments from ixgbe that make the need for wmb() more explicit.

Change-ID: Ic7da85ec75043c634e87fef958109789bcc6317c
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
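The ixgbe-derived comment referenced in item 3 pairs a write barrier with the tail bump, roughly like this (sketch):

    /* Force memory writes to complete before letting h/w know there
     * are new descriptors to fetch. (Only applicable for weak-ordered
     * memory model archs, such as IA-64.)
     */
    wmb();
    writel(i, tx_ring->tail);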
-
Submitted by Alexander Duyck

This patch adds a common method for finding a VSI by type. The main motivation for doing this is that the Flow Director path actually had two ways of handling this: one stopped on the first match and one did not. This patch makes it so that all callers of this function get the same approach to finding a VSI.

Change-ID: Ibf25de8acd8466582520694424aa87da66965fbd
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Bimmy Pujari <bimmy.pujari@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
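The helper is essentially a first-match linear scan over the PF's VSI array; a sketch:

    static struct i40e_vsi *i40e_find_vsi_by_type(struct i40e_pf *pf, u16 type)
    {
        int i;

        for (i = 0; i < pf->num_alloc_vsi; i++) {
            struct i40e_vsi *vsi = pf->vsi[i];

            /* stop on the first VSI of the requested type */
            if (vsi && vsi->type == type)
                return vsi;
        }

        return NULL;
    }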
-
Submitted by Jacob Keller

The current Rx timestamp hang logic is not very robust because it does not notice a register is hung until all four timestamps have been latched and we wait a full 5 seconds. Replace this logic with newer Rx hang detection based on storing the jiffies when we first notice a receive timestamp event. We store each register's time separately, along with a flag indicating if it is currently latched. Upon first transitioning to latched, we update the latch_events[i] jiffies value; this indicates the time we first noticed this event. The watchdog routine will simply check that either the flag has been cleared or at least one second has passed. In that case, it is able to clear the Rx timestamp register under the assumption that it was for a dropped frame. The benefit of this strategy is that we should be able to detect and clear out stalled RXTIME_H registers before we exhaust the supply of 4, and avoid a complete stall of Rx timestamp events.

Change-ID: Id55458c0cd7a5dd0c951ff2b8ac0b2509364131f
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
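The watchdog side of this boils down to something like the following (a sketch; the register accessor and macro names are assumptions):

    /* clear any Rx timestamp register latched for more than one second */
    for (i = 0; i < 4; i++) {
        if ((pf->latch_event_flags & BIT(i)) &&
            time_is_before_jiffies(pf->latch_events[i] + HZ)) {
            rd32(hw, I40E_PRTTSYN_RXTIME_H(i)); /* reading clears the latch */
            pf->latch_event_flags &= ~BIT(i);
        }
    }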
-
Submitted by Jacob Keller

When hardware has taken a timestamp for a received packet, it indicates which RXTIME register the timestamp was placed in via some bits in the receive descriptor. It uses 3 bits: one to indicate if the descriptor index is valid (i.e., there was a timestamp) and 2 bits to indicate which of the 4 registers to read. However, the driver currently does not check the TSYNVALID bit and only checks the index. It assumes a zero index means no timestamp, and a non-zero index means a timestamp occurred. While this appears to be true, it prevents ever reading a timestamp in RXTIME[0], and causes the first timestamp the device captures to be ignored. Fix this by using the TSYNVALID bit correctly as the true indicator of whether the packet has an associated timestamp. Also rename the variable rsyn to tsyn, as this is more descriptive and matches the register names.

Change-ID: I4437e8f3a3df2c2ddb458b0fb61420f3dafc4c12
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
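The corrected descriptor handling, roughly (sketch; the mask and shift names follow the driver's register conventions but are assumptions here):

    /* use the valid bit, not a non-zero index, to detect a timestamp */
    u32 tsynvalid = rx_status & BIT(I40E_RX_DESC_STATUS_TSYNVALID_SHIFT);
    u32 tsyn = (rx_status & I40E_RXD_QW1_STATUS_TSYNINDX_MASK) >>
               I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT;

    if (unlikely(tsynvalid))
        i40e_ptp_rx_hwtstamp(vsi->back, skb, tsyn);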
-
- 29 October 2016, 4 commits
-
-
Submitted by Alexander Duyck

This patch cleans up several pieces of redundant code in the Rx clean-up paths. The first bit is that hdr_addr and the status_err_len portions of the Rx descriptor represent the same value; as such, there is no point in writing it to 0 twice. I'm dropping the second spot where we update the value to 0 so that we only have 1 write for this value instead of 2. The second piece is the check for the DD bit in the packet. We only need to check for a non-zero value in status_err_len, because if the device is done with the descriptor it will have written something back, and DD is just one piece of it. In addition, I have moved the reading of the Rx descriptor bits related to rx_ptype down so that they are actually below the dma_rmb() call, so we are guaranteed that we don't have any funky 64b-on-32b calls causing ordering issues.

Change-ID: I256e44a025d3c64a7224aaaec37c852bfcb1871b
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
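Roughly (a sketch; field names follow the driver's descriptor layout but are assumptions here):

    /* a non-zero write-back means the device is done with the descriptor;
     * the DD bit is just one piece of this value */
    u64 qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
    if (!qword)
        break;

    /* read rx_ptype and the other descriptor fields only after this
     * barrier, so they can't be observed before the status write */
    dma_rmb();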
-
Submitted by Alan Brady

There exists a bug in which a 'perfect storm' can occur and cause interrupts to fail to be correctly affinitized. This causes unexpected behavior and has a substantial impact on performance when it happens. The bug occurs if there is heavy traffic, any number of CPUs that have an i40e interrupt are pegged at 100%, and the interrupt affinity for those CPUs is changed. Instead of moving to the new CPU, the interrupt continues to be polled while there is heavy traffic. The bug is most readily realized as the driver is first brought up and all interrupts start on CPU0. If there is heavy traffic and the interrupt starts polling before the interrupt is affinitized, the interrupt will be stuck on CPU0 until traffic stops. The bug, however, can also be triggered more simply by affinitizing all the interrupts to a single CPU and then attempting to move any of those interrupts off while there is heavy traffic. This patch fixes the bug by registering for update notifications from the kernel when the interrupt affinity changes. When that fires, we cache the intended affinity mask. Then, while polling, if the CPU is pegged at 100% and we failed to clean the rings, we check to make sure we have the correct affinity and stop polling if we're firing on the wrong CPU. When the kernel successfully moves the interrupt, it will start polling on the correct CPU. The performance impact is minimal, since the only time this section gets executed is when performance is already compromised by the CPU.

Change-ID: I4410a880159b9dba1f8297aa72bef36dca34e830
Signed-off-by: Alan Brady <alan.brady@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
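The kernel hook used for the fix is the IRQ affinity notifier (sketch; the i40e callback names are assumptions):

    /* ask the kernel to tell us when this vector's affinity changes */
    q_vector->affinity_notify.notify = i40e_irq_affinity_notify;
    q_vector->affinity_notify.release = i40e_irq_affinity_release;
    irq_set_affinity_notifier(irq_num, &q_vector->affinity_notify);

The notify callback then caches the new cpumask, which the polling loop consults before deciding whether to keep running on the current CPU.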
-
Submitted by Alexander Duyck

We cannot currently support SCTP in the hardware, and IPV4_FLOW is not used anywhere by the software, so we can go through and drop the functionality related to these two flow types. In addition, we cannot support masking based on the protocol value, so if the user is expecting a value other than TCP or UDP we should simply return an error rather than trying to allocate a filter for a rule that will only partially match what the user requested.

Change-ID: I10d52bb97d8104d76255fe244551814ff9531a63
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck

We can reorder the busy-wait loop at the start of the Flow Director transmit function to reduce the overall code size while still retaining the same functionality. As such, I am taking advantage of the opportunity to do so.

Change-ID: I34c403ca001953c6ac9816e65d5305e73d869026
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 25 September 2016, 6 commits
-
-
Submitted by Jacob Keller

In commit a75e8005 ("i40e: queue-specific settings for interrupt moderation") the i40e driver gained support for setting interrupt moderation values per queue. This patch adds support for this feature to the i40evf driver as well. In addition, a few changes are made to the i40e implementation to add function header documentation comments. This behaves in a similar fashion to the implementation in i40e: requesting the moderation value when no queue is provided reports queue 0's value, while setting the value without a queue sets all queues at once.

Change-ID: I1f310a57c8e6c84a8524c178d44d1b7a6d3a848e
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck

This interface was only ever meant for debug use. Since it is not supposed to be here, we are removing it.

Change-ID: Id771a1e5e7d3e2b4b7f56591b61fb48c921e1d04
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck

In an effort to improve code readability, I am splitting the Flow Director filter configuration out into a separate function, like we have done for the standard xmit path. The general idea is to provide a single block of code that translates the flow specification into a proper Flow Director descriptor.

Change-ID: Id355ad8030c4e6c72c57504fa09de60c976a8ffe
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck

This patch adds a txring_txq function which allows us to convert an i40e_ring/i40evf_ring to a netdev_tx_queue structure. This way we can avoid having to make a multi-line function call in all the spots that need access to this.

Change-ID: Ic063b71d8b92ea406d2c32e798c8e2b02809d65b
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
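As described, the helper is a thin wrapper (sketch):

    static inline struct netdev_queue *txring_txq(const struct i40e_ring *ring)
    {
        /* resolve the netdev Tx queue backing this driver ring */
        return netdev_get_tx_queue(ring->netdev, ring->queue_index);
    }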
-
Submitted by Alexander Duyck

The Tx cleanup flow was incorrectly assuming it could check for the Flow Director bits after it had unmapped the buffer. However, in this case it results in us trying to free a raw_buf as though it were an sk_buff. To fix this, I am moving up the flag test for the FD_SB bit so that when we find a non-NULL skb or raw_buf value, we then check the flag and use the appropriate call to free the buffer.

Change-ID: I6284034ba1ea87c9922e56f6eb3181f7f09bddde
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
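The corrected free path checks the flag before choosing a free routine (sketch; the flag and field names follow the commit text but are assumptions here):

    /* a Flow Director sideband buffer holds a raw_buf, not an sk_buff */
    if (tx_buffer->tx_flags & I40E_TX_FLAGS_FD_SB)
        kfree(tx_buffer->raw_buf);
    else
        dev_kfree_skb_any(tx_buffer->skb);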
-
Submitted by Jacob Keller

Some locations that disable ATR accidentally used the "full" disable by disabling the flag in the standard flags field. This incorrectly forces ATR off permanently, instead of temporarily disabling it. In addition, some code locations accidentally set the ATR flag enabled when they only meant to clear the auto_disable_flags. This results in ignoring the user's ethtool private flag settings. Additionally, when disabling ATR via ethtool, we did not perform a flush of the FD table, so previously assigned ATR rules kept functioning, which was not expected. Clean up all these areas so that automatic disable uses only the auto_disable_flags. Fix the flush code so that we can trigger a flush even when we've disabled ATR and SB support, as otherwise the flush doesn't work. Fix the ethtool setting to actually request a flush. Fix the NETIF_F_NTUPLE flag to only clear the auto_disable setting and not enable the full feature.

Change-ID: Ib2486111f8031bd16943e9308757b276305c03b5
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 23 September 2016, 1 commit
-
-
Submitted by Alexander Duyck

The i40e driver was incorrectly assuming that we would always be pulling no more than 1 descriptor from each fragment. It is in fact possible for us to end up with the case where 2 descriptors' worth of data may be pulled when a frame is larger than one of the pieces generated when aligning the payload to either 4K or pieces smaller than 16K. To adjust for this, we just need to make certain to test all the way to the end of the fragments, as it is possible for us to span 2 descriptors in the block before us, so we need to guarantee that even the last 6 descriptors have enough data to fill a full frame.

Change-ID: Ic2ecb4d6b745f447d334e66c14002152f50e2f99
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 20 August 2016, 1 commit
-
-
Submitted by Carolyn Wyborny

This patch refactors the tail bump check.

Change-ID: Ide0e19171d67d90cb2b06b8dcd4fa791ae120160
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 22 July 2016, 1 commit
-
-
Submitted by Mitch Williams

This initializer isn't needed because the variable is assigned right away.

Change-ID: I6ce3edb3f4e0364db248a7a0bcc62ca95c01d941
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 15 July 2016, 1 commit
-
-
Submitted by Alexander Duyck

There are a couple of issues I found in i40e_rx_checksum while doing some recent testing. As a result, I have found the Rx checksum logic is pretty much broken and returning that the checksum is valid for tunnels in cases where it is not. First, the inner types are not the correct values to use to test for whether a tunnel is present or not. In addition, the inner protocol types are not a bitmask, so performing an OR of the values doesn't make sense. I have instead changed the code so that the inner protocol types are used to determine if we report CHECKSUM_UNNECESSARY or not. For anything that does not end in UDP, TCP, or SCTP, it doesn't make much sense to report a checksum offload, since it won't contain a checksum anyway. This leaves us with the need to set the csum_level based on some value. For that purpose I am using the tunnel_type field. If the tunnel type is GRENAT or greater, this means we have a GRE or UDP tunnel with an inner header. In the case of GRE or UDP we will have a possible checksum present, so for this reason it should be safe to set the csum_level to 1 to indicate that we are reporting the state of the inner header.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
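Putting the two rules together, the decision looks roughly like this (a sketch; the decoded-ptype field and macro names are assumptions):

    /* only inner protocols that actually carry a checksum qualify */
    switch (decoded.inner_prot) {
    case I40E_RX_PTYPE_INNER_PROT_TCP:
    case I40E_RX_PTYPE_INNER_PROT_UDP:
    case I40E_RX_PTYPE_INNER_PROT_SCTP:
        skb->ip_summed = CHECKSUM_UNNECESSARY;
        /* GRENAT or above implies an inner header behind a tunnel */
        skb->csum_level =
            (decoded.tunnel_type >= I40E_RX_PTYPE_TUNNEL_IP_GRENAT) ? 1 : 0;
        break;
    default:
        break;
    }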
-
- 21 May 2016, 2 commits
-
-
Submitted by Alexander Duyck

This patch adds support for offloading IPXIP6-type packets that represent either IPv4 or IPv6 encapsulated inside of an IPv6 outer IP header. In addition, with this change we should also be able to support FOU-encapsulated traffic with outer IPv6 headers.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Tom Herbert

This patch defines two new GSO definitions, SKB_GSO_IPXIP4 and SKB_GSO_IPXIP6, along with corresponding NETIF_F_GSO_IPXIP4 and NETIF_F_GSO_IPXIP6. These are used to describe IP-in-IP tunnels and what the outer protocol is. The inner protocol can be deduced from other GSO types (e.g. SKB_GSO_TCPV4 and SKB_GSO_TCPV6). The GSO types SKB_GSO_IPIP and SKB_GSO_SIT are removed (these are both instances of SKB_GSO_IPXIP4). SKB_GSO_IPXIP6 will be used when support for GSO with IP encapsulation over IPv6 is added.

Signed-off-by: Tom Herbert <tom@herbertland.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 May 2016, 1 commit
-
-
Submitted by Mitch Williams

This logic is inverted: if the RXHASH flag is set, then we should go ahead and call skb_set_hash.

Change-ID: Ib2e30356dced1d3e939c8061ab6ad5bd94197e7c
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
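The corrected test simply bails out when hashing is off (sketch; helper and variable names are assumptions):

    /* previously this condition was inverted, skipping the hash
     * exactly when NETIF_F_RXHASH was set */
    if (!(ring->netdev->features & NETIF_F_RXHASH))
        return;

    skb_set_hash(skb, hash, i40e_ptype_to_htype(rx_ptype));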
-
- 06 May 2016, 3 commits
-
-
Submitted by Jesse Brandeburg

This is part 1 of the Rx refactor series, including only changes to i40e. This refactor aligns the receive routine with the one in ixgbe, which was highly optimized. This reduces the code we have to maintain and allows for a (hopefully) more readable and maintainable Rx hot path. In order to do this:
- consolidate the receive path into a single function that doesn't use packet split but *does* use pages for Rx buffers
- remove the old _1buf routine
- consolidate several routines into helper functions
- remove ethtool control over packet split

Change-ID: I5ca100721de65992aa0114f8b4bac844b84758e0
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jesse Brandeburg

As part of preparation for the Rx refactor, remove the packet split receive routine and ancillary code.

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jesse Brandeburg

Refactor the interpretation of a tunnel. This removes some code and lets us start using the hardware's parsing.

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 02 May 2016, 1 commit
-
-
Submitted by Alexander Duyck

This patch makes it so that i40e and i40evf can use GSO_PARTIAL to support segmentation for frames with checksums enabled in outer headers. As a result we can now send data over these types of tunnels at over 20Gb/s, versus the 12Gb/s that was previously possible on my system. The advantage with the i40e parts is that this offload is mostly transparent, as the hardware still deals with the inner and/or outer IPv4 headers, so the IP ID is still incrementing for both when this offload is performed.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 28 April 2016, 1 commit
-
-
Submitted by Jesse Brandeburg

The driver was offloading the VLAN tag into the skb any time there was a VLAN tag, whether or not hardware stripping was enabled. Just check to make sure stripping is enabled before doing the put_tag.

Change-Id: Ife95290c06edd9a616393b38679923938b382241
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
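The check amounts to something like this (sketch; variable names are assumptions):

    /* only hand the stack a VLAN tag if stripping is actually enabled */
    if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
        (vlan_tag & VLAN_VID_MASK))
        __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);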
-