- 06 April 2016, 1 commit
-
-
By Mitch Williams

If the driver happens to read a register while the device is undergoing reset, it will receive a value of 0xdeadbeef instead of a valid value. Unfortunately, the driver may misinterpret this as a valid value, especially if it's just looking for individual bits. Add an explicit check for this value when looking for admin queue errors, and trigger reset recovery if we find it.

Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
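A minimal sketch of the check described above, assuming i40e-style register access (rd32() is the driver's register-read macro); the helper name and constant are illustrative, not the driver's exact code:

```c
/* Illustrative: detect the mid-reset register pattern when polling
 * admin queue error bits. Helper name is hypothetical.
 */
#define AQ_RESET_PATTERN 0xdeadbeef

static bool i40evf_aq_regs_in_reset(struct i40e_hw *hw)
{
	u32 val = rd32(hw, hw->aq.arq.len);

	/* 0xdeadbeef means the device is mid-reset; don't interpret
	 * the individual bits as real error flags.
	 */
	return val == AQ_RESET_PATTERN;
}
```

When this returns true, the caller would schedule reset recovery instead of handling each apparent "error" bit.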
-
- 05 April 2016, 5 commits
-
-
By Jesse Brandeburg

Simple cast to fix a sparse warning.

Fixes: commit 5453205c ("i40e/i40evf: Enable support for SKB_GSO_UDP_TUNNEL_CSUM")
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck

This patch enables bulk Tx cleanup for skbs. To enable it we need to pass the napi_budget value, as that is used to determine whether we are truly running in NAPI mode or simply calling the routine from netpoll with a budget of 0. To avoid adding too many more variables, I thought it best to pass the VSI directly, in a fashion similar to what we do on igb and ixgbe with the q_vector.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
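A sketch of the budget-aware free this enables; names follow the i40e style, but the body is illustrative rather than the driver's exact code:

```c
/* napi_consume_skb() batches skb frees when called in NAPI context
 * (budget > 0) and degrades to a plain free for the netpoll case
 * (budget == 0), which is why the budget must be threaded down here.
 */
static void i40e_unmap_and_free_tx_resource(struct i40e_ring *ring,
					    struct i40e_tx_buffer *tx_buf,
					    int napi_budget)
{
	if (tx_buf->skb)
		napi_consume_skb(tx_buf->skb, napi_budget);
	if (dma_unmap_len(tx_buf, len))
		dma_unmap_single(ring->dev,
				 dma_unmap_addr(tx_buf, dma),
				 dma_unmap_len(tx_buf, len),
				 DMA_TO_DEVICE);
	tx_buf->skb = NULL;
	dma_unmap_len_set(tx_buf, len, 0);
}
```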
-
By Alexander Duyck

In the polling routines for i40e and i40evf we were using bitwise operators to avoid the side effects of the logical operators, specifically the fact that with "||" we skip the second case if the first is true, and with "&&" we skip the second case if the first is false. This fixes an earlier patch that converted the bitwise operators over to the logical operators, and instead replaces the entire thing with just an if statement, since that makes what we are trying to do more readable.

Fixes: 1a36d7fa ("i40e/i40evf: use logical operators, not bitwise")
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
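An illustrative reconstruction of the shape of the change (not the exact hunk), inside the per-ring loop of the poll routine:

```c
/* before: "&" forced evaluation of both sides, but obscured intent */
clean_complete &= i40e_clean_tx_irq(vsi, ring, budget);

/* after: a plain if statement says exactly what we mean */
if (!i40e_clean_tx_irq(vsi, ring, budget))
	clean_complete = false;
```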
-
By Alan Cox

The only error case is when the malloc fails, in which case the clean-up loop does nothing at all, so remove it.

Signed-off-by: Alan Cox <alan@linux.intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck

From what I can tell, the practical limitation on the size of the Tx data buffer is the fact that the Tx descriptor size field is limited to 14 bits. As such we cannot use the 16K that is typically used on the other Intel drivers. However, artificially limiting ourselves to 8K can be expensive, as it means we will consume up to 10 descriptors (1 context, 1 for header, and 9 for payload, non-8K aligned) in a single send.

I propose that we can reduce this by increasing the maximum data for a 4K-aligned block to 12K. Increasing the size like this reduces the descriptors used for a 32K-aligned block by 1. In addition we still have the 4K - 1 of space that is otherwise unused, which we can use as a bit of extra padding when dealing with data that is not aligned to 4K. By aligning the descriptors after the first to 4K we can improve the efficiency of PCIe accesses, as we can avoid using byte enables and can fetch full TLP transactions after the first fetch of the buffer.

Below are the results of netperf testing before and after this patch:

    Recv   Send   Send                        Utilization    Service Demand
    Socket Socket Message  Elapsed            Send   Recv    Send    Recv
    Size   Size   Size     Time   Throughput  local  remote  local   remote
    bytes  bytes  bytes    secs.  10^6bits/s  % S    % U     us/KB   us/KB

    Before:
    87380  16384  16384    10.00  33682.24    20.27  -1.00   0.592   -1.00
    After:
    87380  16384  16384    10.00  34204.08    20.54  -1.00   0.590   -1.00

So the net result of this patch is a small gain in throughput due to a reduction in overhead for putting together the frame.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
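The size math above boils down to a pair of constants; a sketch of how they might be derived, mirroring i40e naming style (shown as an illustration, not quoted from the header):

```c
/* The 14-bit size field caps one descriptor at 16KB - 1; rounding
 * that down to a 4K boundary gives 12K of data per aligned descriptor.
 */
#define I40E_MAX_READ_REQ_SIZE		4096
#define I40E_MAX_DATA_PER_TXD		(16 * 1024 - 1)
#define I40E_MAX_DATA_PER_TXD_ALIGNED \
	(I40E_MAX_DATA_PER_TXD & ~(I40E_MAX_READ_REQ_SIZE - 1))	/* 12K */
```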
-
- 19 February 2016, 19 commits
-
-
By Jesse Brandeburg

Bump the driver version.

Change-ID: Ifa19aadaa892ad103f1b96fe2361fa690912c6a3
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Shannon Nelson

Use the new AdminQ functions for safely accessing the Rx control registers that may be affected by heavy small-packet traffic.

Change-ID: Ibb00983e8dcba71f4b760222a609a5fcaa726f18
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Shannon Nelson

Add the new opcodes and struct used for asking the firmware to update Rx control registers that need extra care when being accessed while under heavy traffic - e.g. sustained 64-byte packets at line rate on all ports. The firmware will take extra steps to be sure the register accesses are successful. The registers involved are:

PFQF_CTL_0, PFQF_HENA, PFQF_FDALLOC, PFQF_HREGION, PFLAN_QALLOC,
VPQF_CTL, VFQF_HENA, VFQF_HREGION, VSIQF_CTL, VSILAN_QBASE,
VSILAN_QTABLE, VSIQF_TCREGION, PFQF_HKEY, VFQF_HKEY, PRTQF_CTL_0,
GLFCOE_RCTL, GLFCOE_RSOF, GLQF_CTL, GLQF_SWAP, GLQF_HASH_MSK,
GLQF_HASH_INSET, GLQF_HSYM, GLQF_FC_MSK, GLQF_FC_INSET, GLQF_FD_MSK,
PRTQF_FD_INSET, PRTQF_FD_FLXINSET, PRTQF_FD_MSK

Change-ID: I56c8144000da66ad99f68948d8a184b2ec2aeb3e
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
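A sketch of what the command surface might look like; the opcode values and exact field layout here are illustrative of the mechanism described, not quoted from the driver header:

```c
/* AdminQ opcodes for firmware-mediated Rx control register access
 * (values illustrative) */
enum {
	i40e_aqc_opc_rx_ctl_reg_read  = 0x0206,
	i40e_aqc_opc_rx_ctl_reg_write = 0x0207,
};

/* Command payload: which register, and (for writes) what value */
struct i40e_aqc_rx_ctl_reg_read_write {
	__le32 reserved1;
	__le32 address;
	__le32 reserved2;
	__le32 value;
};
```

Drivers would then go through wrappers such as the i40e_read_rx_ctl()/i40e_write_rx_ctl() helpers used in the commit above, rather than raw rd32()/wr32() accesses.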
-
By Carolyn Wyborny

This patch adds functions to blink the LED on devices using a 10GBaseT PHY, since the MAC registers used in other designs do not work in this device configuration.

Change-ID: Id4b88c93c649fd2b88073a00b42867a77c761ca3
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck

On all of the other Intel drivers we place the checksum code close to TSO, as they have a significant amount in common and it helps to reduce the decision tree for how to handle the frame. The first check in TSO is to see if checksumming is offloaded; if it is not, we can skip _BOTH_ TSO and Tx checksum offload based on a single check.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck

This patch rewrites the logic for how we determine if we can transmit the frame or if it needs to be linearized. The previous code for this function used a mix of division and modulus division as part of computing whether we need to take the slow path. Instead, I have replaced this with a simple sliding window which tells us whether the frame could cause a single packet to span several descriptors.

The logic for the scan is fairly simple. If any given group of 6 fragments is less than gso_size - 1, then it is possible for us to have one byte coming out of the first fragment, 6 fragments, and one or more bytes coming out of the last fragment. This gives us a total of 8 fragments, which exceeds what we can allow, so we send such frames to be linearized.

Arguably, the use of modulus might be more exact, as the approach I propose may generate some false positives. However, the likelihood of taking much of a hit for those false positives is fairly low, and I would rather not add more overhead in the case where we are receiving a frame composed of 4K pages.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
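A simplified reconstruction of the sliding-window scan described above (the driver's actual implementation differs in detail; treat this as a sketch of the idea):

```c
/* Return true if any window of 6 consecutive frags sums to less than
 * gso_size: one leading byte + 6 frags + trailing bytes could then
 * land in a single segment, i.e. 8 descriptors, so linearize.
 */
static bool needs_linearize(const struct sk_buff *skb)
{
	const skb_frag_t *frags = skb_shinfo(skb)->frags;
	unsigned int gso_size = skb_shinfo(skb)->gso_size;
	unsigned int sum = 0;
	int nr = skb_shinfo(skb)->nr_frags;
	int i;

	/* with fewer than 7 frags, no segment can span 8 descriptors */
	if (nr < 7)
		return false;

	for (i = 0; i < nr; i++) {
		sum += skb_frag_size(&frags[i]);
		if (i >= 6)
			sum -= skb_frag_size(&frags[i - 6]);
		if (i >= 5 && sum < gso_size)
			return true;
	}
	return false;
}
```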
-
By Alexander Duyck

In an upcoming patch I would like to have access to the descriptor count used for the data portion of the frame. For this reason I am splitting up the descriptor count function from the function that stops the ring. Also, to reduce unnecessary duplication of code, I am moving the slow-path portions out of the inline calls, so that we can just jump to them and process them instead of having to build them into each function that calls them.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
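A sketch of the fast-path/slow-path split described here, in the usual Intel-driver shape (illustrative, not the exact code):

```c
/* Out-of-line slow path: only reached when the ring is nearly full */
static int __i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)
{
	netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
	/* barrier orders our stop against the cleanup path's restart */
	smp_mb();
	if (likely(I40E_DESC_UNUSED(tx_ring) < size))
		return -EBUSY;
	/* cleanup freed descriptors in the meantime: resume */
	netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
	return 0;
}

/* Cheap inline fast path, called per transmit */
static inline int i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)
{
	if (likely(I40E_DESC_UNUSED(tx_ring) >= size))
		return 0;
	return __i40e_maybe_stop_tx(tx_ring, size);
}
```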
-
By Alexander Duyck

Recent changes should have enabled support for IPv6-based tunnels and for TSO with outer UDP checksums. As such we can update the feature flags to reflect that. In addition we can clean up the flags that aren't needed, such as SCTP and RXCSUM, since having the bits there doesn't add any value. I also found one spot where we were setting the same flag twice; it looks like a git merge error resulted in the line being duplicated, so I have dropped it in this patch.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Acked-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck

The XL722 has support for providing the outer UDP tunnel checksum on transmits. Make use of this feature to support segmenting UDP tunnels with outer checksums enabled.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck

This is mostly a minor clean-up of the Rx checksum path to avoid some of the unnecessary conditional checks that were being applied.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck

Add exception handling to the Tx checksum path so that we can handle cases where TSO fails because the frame is bad, or where Tx checksum offload fails because we didn't recognize a protocol. Drop I40E_TX_FLAGS_CSUM as it is unused, and move the CHECKSUM_PARTIAL check into the function itself so that we can decrease the indent.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
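The resulting shape in the xmit path looks roughly like this (an illustrative fragment; the helper signatures are an assumption based on the description):

```c
/* negative returns from the TSO/checksum helpers now drop the frame */
tso = i40e_tso(tx_ring, skb, &hdr_len, &cd_type_cmd_tso_mss);
if (tso < 0)
	goto out_drop;
else if (tso)
	tx_flags |= I40E_TX_FLAGS_TSO;

/* the CHECKSUM_PARTIAL test now lives inside the helper itself */
if (i40e_tx_enable_csum(skb, &tx_flags, &td_cmd, &td_offset,
			tx_ring, &cd_tunneling) < 0)
	goto out_drop;
```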
-
By Alexander Duyck

This patch defers writing the Tx descriptor bits until we know we have successfully completed a given operation. So, for example, we defer updating the tunneling portion of the context descriptor until we have fully identified the type. The advantage of this approach is that we can assemble values as we go instead of having to kludge everything together all at once. As a result we can significantly clean up the tunneling configuration, for instance, as we can just do a pointer walk and compute the distance between each set of points.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck

This patch adds support for IPv6 extension headers in setting up the Tx checksum. Without this patch, extension headers would cause IPv6 traffic to fail, as the transport protocol could not be identified.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
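A sketch of the extension-header walk this requires, using the ip/l4 header unions described a few entries below (illustrative fragment):

```c
/* start just past the fixed IPv6 header and skip any extension
 * headers until the real transport protocol is found */
unsigned char *exthdr = ip.hdr + sizeof(struct ipv6hdr);
__be16 frag_off;
u8 l4_proto = ip.v6->nexthdr;

if (l4.hdr != exthdr)
	ipv6_skip_exthdr(skb, exthdr - skb->data, &l4_proto, &frag_off);
```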
-
By Alexander Duyck

This patch fixes two issues. First was the fact that ip_hdr(skb)->protocol was being used to test for the outer transport protocol, which completely breaks IPv6 support. Second was the fact that we cleared the flag for v4 going to v6, but we didn't take care of tx_flags going the other way; as such we would still have the v6 flag set even if the inner header was v4.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck

The Tx checksum path was maintaining a set of 3 pointers and two lengths in order to prepare the packet for being checksummed. The thing is we only really needed 2 pointers, and the lengths that were being maintained can easily be computed. As such we can replace the IPv4 and IPv6 header pointers with one single union that represents both, or a generic pointer to the start of the network header. For the L4 headers we can do the same with TCP and a generic pointer to the start of the transport header. The length of the TCP header is obtained by simply multiplying doff by 4, and the network header length can be obtained by subtracting the network header pointer from the transport header pointer. While I was at it, I renamed l4_hdr to l4_proto to make it a bit more clear and less likely to be confused with l4.hdr, which is the transport header pointer.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
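The two-pointer scheme in fragment form (a sketch matching the description above):

```c
union {
	struct iphdr *v4;
	struct ipv6hdr *v6;
	unsigned char *hdr;
} ip;
union {
	struct tcphdr *tcp;
	unsigned char *hdr;
} l4;
u32 network_hlen, tcp_hlen;

ip.hdr = skb_network_header(skb);
l4.hdr = skb_transport_header(skb);

/* both lengths fall out of the pointers and the TCP data offset,
 * so no separate length bookkeeping is needed */
network_hlen = l4.hdr - ip.hdr;
tcp_hlen = l4.tcp->doff * 4;
```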
-
By Alexander Duyck

This patch goes through and pulls all of the spots where we were updating either the TCP or IP checksums in the TSO and checksum paths into the TSO function. The general idea is that we should only be updating the header after we have completed an skb_cow_head() check to verify that the head is writable. One other advantage of doing this is that it makes things much more obvious. For example, in the case of IPv6 there was one spot where the offset of the IPv4 header checksum was being updated, which is obviously incorrect.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
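The ordering constraint looks roughly like this (illustrative fragment, reusing the ip union from the entry above):

```c
/* make sure the headers are writable before touching them */
int err = skb_cow_head(skb, 0);

if (err < 0)
	return err;

/* only after the cow check is it safe to update the header */
if (tx_flags & I40E_TX_FLAGS_IPV4)
	ip.v4->check = 0;	/* hardware rebuilds it per segment */
/* IPv6 has no header checksum, so there is nothing to clear --
 * which is why updating an "IPv4 checksum" offset for v6 was a bug */
```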
-
By Alexander Duyck

This patch makes it so that the L4 header offsets and such can be ignored when dealing with the L3 checksum and length update. This is done by making use of two things. First, we can just use the offset from the L4 header to the start of the packet to determine the L4 offset, and from that we can then make use of the data offset to determine the full length of the headers. As far as adjusting the checksum to remove the length, we can simply add the inverse of the length instead of having to recompute the entire pseudo-header without the length. In the case of an IPv6 header this should be significantly cheaper, since we can make use of a value we already needed instead of having to read the source and destination addresses out of the packet.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
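A sketch of the adjustment, using the kernel's csum_replace_by_diff() helper (fragment; l4_offset is the transport-header offset from the description above):

```c
/* remove the payload length from the pseudo-header checksum by adding
 * its one's-complement inverse, rather than recomputing the whole
 * pseudo-header sum from the addresses */
u32 paylen = skb->len - l4_offset;

csum_replace_by_diff(&l4.tcp->check, (__force __wsum)htonl(paylen));
```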
-
By Alexander Duyck

Instead of casting u32 values to u64, it makes more sense to just start out with u64 values in the first place. This way we don't need to create a mess with all of the casts needed to populate a 64-bit value.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck

The i40e and i40evf drivers contained code for inserting an outer checksum on UDP tunnels. The issue, however, is that the upper levels of the stack never requested such an offload, and it results in possible errors. In addition, the same logic was being applied on the Rx side, where it attempted to validate the outer checksum; the logic there was incorrect in that it tested for the resultant sum to be equal to the header checksum instead of being equal to 0.

Since this code is so massively flawed, and does things that we didn't ask for, I am just dropping it, and will bring it back later to use as an offload for SKB_GSO_UDP_TUNNEL_CSUM, which can make use of such a feature. As far as the Rx feature, I am dropping it completely, since it would need to be massively expanded and applied to IPv4 and IPv6 checksums for all parts, not just the one that supports Tx checksum offload for the outer header.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 18 February 2016, 15 commits
-
-
By Catherine Sullivan

Bump the driver version.

Change-ID: Ie280dc67e37a1cf667c3469499a4fb90f4177b75
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Anjali Singhai Jain

In MFP mode particularly, when we set the PF VSI in limited promiscuous, the HW switch was still mirroring the outgoing packets from other VSIs (VF/VMDq) onto the PF VSI. With this new bit set, the mirroring no longer happens, so limited promiscuous on the PF VSI in MFP behaves similarly to defport. An API version check is not required, since this bit is reserved for FW API versions < 1.5. Also update the copyright year in file headers.

Change-ID: I9840cb95f11dde733d943cb03ce84f68b9611bc8
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Shannon Nelson

In one obscure corner case, it was possible to clear the NVM update wait flag when no update_done message was actually received. This patch cleans the event descriptor before use, and moves the opcode check to where it won't be done if there was no event to clean. Also update the copyright year in file headers.

Change-ID: I68bbc41965e93f4adf07cbe98b9dfd63d41509a4
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Mitch Williams

If a reset fails to complete, the driver gets its affairs in order and awaits the cold solace of rmmod. Unfortunately, it was not properly setting the adapter state, which would cause a panic on rmmod instead of the desired surcease. Set the adapter state to DOWN in this case, and avoid a panic.

Change-ID: I6fdd9906da52e023f8dc744f7da44b5d95278ca9
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Mitch Williams

In the case where we have a page fully used by receive data, we need to release the page fully to the stack. Instead of calling get_page() (which increments the page count) followed by free_page() (which decrements the page count), just donate our reference to the stack. Although this donation is not tax deductible, it does allow us to avoid two very expensive atomic operations that reverse each other.

Change-ID: If70739792d5748995fc175ec92ac2171ed4ad8fc
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
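A sketch of the donation in the Rx path (illustrative fragment; rx_bi is the per-buffer bookkeeping struct):

```c
/* hand our existing page reference to the skb instead of doing a
 * get_page()/put_page() round trip of two atomics */
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
		rx_bi->page, rx_bi->page_offset,
		len, PAGE_SIZE / 2);

/* the skb now owns the reference; clear the buffer so the refill
 * path maps a fresh page rather than calling put_page() here */
rx_bi->page = NULL;
```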
-
By Anjali Singhai Jain

This patch adds a workaround for cases where an interrupt got lost but the writeback (WB) still happened. Without this patch, that situation results in a tx_timeout. To work around it, this patch reschedules NAPI in that situation, if NAPI is not already scheduled. We also add an ethtool counter to keep track of each time we detect a case of tx_lost_interrupt. Note: napi_reschedule() can be safely called from process/service_task context, and is done in other drivers as well without an issue.

Change-ID: I00f98f1ce3774524d9421227652bef20fcbd0d20
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
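A sketch of the detection in the service task (illustrative; the pending-work helper and counter names are assumptions based on the description):

```c
/* the ring has pending work but no interrupt fired (writeback
 * happened without one), so kick NAPI from the service task */
if (i40e_get_tx_pending(tx_ring) > 0) {
	/* napi_reschedule() only succeeds when NAPI is not already
	 * scheduled, so double-scheduling is naturally avoided */
	if (napi_reschedule(&tx_ring->q_vector->napi))
		tx_ring->tx_stats.tx_lost_interrupt++;
}
```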
-
By Mitch Williams

Support packet split receive on VFs. This is off by default but can be enabled using ethtool private flags. Because we need to trigger a reset from outside of i40evf_main.c, create a new function to do so and export it. Also update the copyright year in file headers.

Change-ID: I721aa5d70113d3d6d94102e5f31526f6fc57cbbb
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Jesse Brandeburg

Bump version to i40e-1.4.13 and i40evf-1.4.9.

Change-ID: I9db37f9d4899141c3e5455dfb456d45465b8c035
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Mitch Williams

Get rid of the unused hsplit field in the ring struct and use the existing macro to detect packet split enablement. This allows debugfs dumps of the VSI to properly show which Rx routine is in use.

Change-ID: Ic4e9589e6a788ab196ed0850703f704e30c03781
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Mitch Williams

Mr. Spock would certainly raise an eyebrow to see us using bitwise operators when we should clearly be relying on logic. Fascinating.

Change-ID: Ie338010c016f93e9faa2002c07c90b15134b7477
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Mitch Williams

Refactor the packet split Rx code to properly use half-pages for receives. The previous code was doing far more mapping and unmapping than it needed to, and wasn't properly using half-pages. Increment the page use count each time we give a half-page to an skb, knowing that the stack will probably process and release the page before we need it again. Only free and reallocate pages if the count shows that both half-pages are in use. Add counters to track reallocations and page reuse.

Change-ID: I534b299196036b64be82b4861a0a4036310a8f22
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
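A simplified sketch of the recycling decision at refill time, under the assumption that each half-page handed to an skb took a page reference (illustrative; the counter names follow the description above):

```c
/* if we hold the only reference, the stack has released its half:
 * flip to the other half-page and reuse the mapping */
if (page_count(rx_bi->page) == 1) {
	rx_bi->page_offset ^= PAGE_SIZE / 2;
	get_page(rx_bi->page);
	rx_ring->rx_stats.page_reuse_count++;
} else {
	/* both halves still in flight; drop ours and reallocate */
	rx_bi->page = NULL;
	rx_ring->rx_stats.realloc_count++;
}
```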
-
By Jesse Brandeburg

The i40e and i40evf drivers now cleanly handle allocation failures and can avoid kernel log spew from the memory allocator when allocations fail, so set __GFP_NOWARN on Rx buffer allocation.

Change-ID: Ic9e1b83c495e2a3ef6b069ba7fb6e52ce134cd23
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Jesse Brandeburg

This is the "Don't Give Up" patch. Previously the driver could fail an allocation and then possibly stall a queue forever by never coming back to continue receiving or allocating buffers. With this patch, the driver will keep polling and trying to allocate receive buffers until it succeeds. This should keep all receive queues running even in the face of memory pressure. Also update the copyright year in the file header.

Change-ID: I2b103d1ce95b9831288a7222c3343ffa1988b81b
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Jesse Brandeburg

While re-enabling interrupts the driver would clear all pending causes. This meant that if an interrupt was generated while the driver was cleaning or polling with interrupts disabled, that interrupt was lost. This could cause a queue to become dead, especially for receive. Refactor the enable_icr0 function so the caller decides whether the CLEARPBA (clear pending events) bit is set while re-enabling the interrupt. Also update the copyright year in file headers.

Change-ID: Ic1db100a05e13c98919057696db147a258ca365a
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
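A minimal sketch of the refactored enable path, assuming the i40e register macros (illustrative, not the exact function body):

```c
/* only set CLEARPBA when the caller knows no pending events can be
 * lost by clearing them during re-enable */
static void i40e_irq_dynamic_enable_icr0(struct i40e_pf *pf, bool clearpba)
{
	struct i40e_hw *hw = &pf->hw;
	u32 val = I40E_PFINT_DYN_CTL0_INTENA_MASK |
		  (clearpba ? I40E_PFINT_DYN_CTL0_CLEARPBA_MASK : 0) |
		  (I40E_ITR_NONE << I40E_PFINT_DYN_CTL0_ITR_INDX_SHIFT);

	wr32(hw, I40E_PFINT_DYN_CTL0, val);
	i40e_flush(hw);
}
```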
-
By Catherine Sullivan

Change the driver string to 40-10 Gigabit instead of XL710/X710 for the X722 and all future products. Also update the copyright year in the file header.

Change-ID: I57fae656b36dc4eb682b2b7a054f8f48f3589149
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-