- 18 May 2018, 1 commit
-
-
Committed by Mauro S M Rodrigues

Since commit f7f37e7f ("ixgbe: handle close/suspend race with netif_device_detach/present"), ixgbe_close_suspend is called from ixgbe_close only if the device is present, i.e. if it isn't detached. That exposed a situation where IRQs weren't freed if the PCI error recovery system opts to remove the device. In that case the PCI channel state is set to pci_channel_io_perm_failure, and ixgbe_io_error_detected returned PCI_ERS_RESULT_DISCONNECT before calling ixgbe_close_suspend, consequently never freeing the IRQs and crashing when the remove handler calls pci_disable_device, hitting a BUG_ON in free_msi_irqs which asserts that there is no non-free IRQ still associated with the device being removed:

    BUG_ON(irq_has_action(entry->irq + i));

Fix the issue by calling ixgbe_close_suspend before evaluating the PCI channel state.

Reported-by: Naresh Bannoth <nbannoth@in.ibm.com>
Reported-by: Abdul Haleem <abdhalee@in.ibm.com>
Signed-off-by: Mauro S M Rodrigues <maurosr@linux.vnet.ibm.com>
Reviewed-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
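A minimal sketch of the reordering inside the driver's io_error_detected callback (ixgbe_close_suspend and the adapter structure are the driver's own symbols named in the commit; the surrounding details are illustrative assumptions, not the exact upstream code):

    static pci_ers_result_t ixgbe_io_error_detected(struct pci_dev *pdev,
                                                    pci_channel_state_t state)
    {
            struct ixgbe_adapter *adapter = pci_get_drvdata(pdev);
            struct net_device *netdev = adapter->netdev;

            rtnl_lock();
            netif_device_detach(netdev);

            /* Release IRQs and stop the rings before looking at the channel
             * state, so a permanent failure still frees every vector.
             */
            if (netif_running(netdev))
                    ixgbe_close_suspend(adapter);

            if (state == pci_channel_io_perm_failure) {
                    rtnl_unlock();
                    return PCI_ERS_RESULT_DISCONNECT;
            }

            pci_disable_device(pdev);
            rtnl_unlock();

            return PCI_ERS_RESULT_NEED_RESET;
    }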
-
- 17 May 2018, 1 commit
-
-
Committed by Cathy Zhou

Sparse complains about valid conversions between restricted types; the __force attribute is used to silence those warnings.

Signed-off-by: Cathy Zhou <cathy.zhou@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
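To illustrate the kind of cast involved (a generic example, not a line from the driver): sparse treats endian-annotated types such as __le32 as restricted, and a deliberate reinterpretation needs __force to stay warning-free.

    #include <linux/types.h>

    /* Reinterpret an on-the-wire little-endian word as a plain register value.
     * Without __force, sparse warns about mixing __le32 and u32.
     */
    static inline u32 raw_from_le32(__le32 wire)
    {
            return (__force u32)wire;
    }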
-
- 28 April 2018, 1 commit
-
-
Committed by Jeff Kirsher

After many years of having a ~30 line copyright and license header on our source files, we are finally able to reduce that to one line with the advent of the SPDX identifier. Also caught a few files missing the SPDX license identifier, so fixed them up.

Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@oracle.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
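For reference, the one-line form at the top of a C source file looks like this (GPL-2.0 is shown as an example tag; the exact license expression varies per file):

    // SPDX-License-Identifier: GPL-2.0
    /* Copyright(c) 1999 - 2018 Intel Corporation. */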
-
- 25 April 2018, 7 commits
-
-
Committed by Alexander Duyck

The original implementation for macvlan offload had us performing a full port reset every time we added a new macvlan. This shouldn't be necessary and can be avoided with a few behavior changes.

This patch updates the logic for the queues so that we have essentially three possible configurations for macvlan offload: 15 macvlans with 4 queues per macvlan, 31 macvlans with 2 queues per macvlan, or 63 macvlans with 1 queue per macvlan. As macvlans are added you will encounter at most 3 resets if you add all the way up to 63, and after that the device stays in the mode supporting up to 63 macvlans until the L2FW flag is cleared.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
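A rough sketch of the three-step sizing described above (the thresholds follow the commit text; the function name and shape are illustrative, not the driver's):

    /* Pick a VMDq layout for the requested number of offloaded macvlans:
     * up to 15 get 4 queues each, up to 31 get 2, and up to 63 get 1.
     */
    static unsigned int example_queues_per_macvlan(unsigned int num_macvlans)
    {
            if (num_macvlans <= 15)
                    return 4;
            if (num_macvlans <= 31)
                    return 2;
            return 1;       /* hard cap of 63 offloaded macvlans */
    }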
-
Committed by Alexander Duyck

This patch drops the real_adapter member from the fwd_adapter structure. The general idea behind the change is that real_adapter carries unnecessary data, since we could always just grab the adapter structure from netdev_priv(macvlan->lowerdev) if we really needed to get at it.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
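The replacement lookup amounts to something like the helper below (struct macvlan_dev and netdev_priv() are real kernel symbols; treating the lower device's private area as the PF's ixgbe_adapter is the driver-internal assumption):

    #include <linux/if_macvlan.h>
    #include <linux/netdevice.h>

    static struct ixgbe_adapter *example_macvlan_to_pf(struct macvlan_dev *macvlan)
    {
            /* The lowerdev is the ixgbe netdev, so its priv area is the PF adapter. */
            return netdev_priv(macvlan->lowerdev);
    }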
-
Committed by Alexander Duyck

Both the ixgbe and fm10k drivers support only destination filtering. Rather than adding a ton of complexity to support either source or passthru mode, just avoid offloading those modes for now. Doing so avoids leaking packets into interfaces that aren't meant to receive them.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
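In outline, the offload path simply declines macvlans whose mode implies anything other than destination-MAC filtering (MACVLAN_MODE_SOURCE and MACVLAN_MODE_PASSTHRU are real uapi modes; the helper itself is an illustrative sketch, not the upstream check):

    #include <linux/if_link.h>
    #include <linux/if_macvlan.h>

    static bool example_macvlan_offloadable(const struct macvlan_dev *macvlan)
    {
            /* Source and passthru filtering stay in software for now. */
            return macvlan->mode != MACVLAN_MODE_SOURCE &&
                   macvlan->mode != MACVLAN_MODE_PASSTHRU;
    }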
-
Committed by Alexander Duyck

Drop dead code now that we shouldn't be receiving broadcast or multicast frames on the queues associated with the macvlan netdev.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

This change makes it so that we use a software path for packets that are going to be locally switched between two macvlan interfaces on the same device. In addition, we resort to software replication of broadcast and multicast packets instead of offloading that to hardware.

The general idea is that using the device for east/west traffic local to the system is extremely inefficient. We can only support up to whatever the PCIe limit is for any given device, so this caps us at somewhere around 20G for devices supported by ixgbe. This is compounded even further when you take broadcast and multicast into account, as a single 10G port can come to a crawl while a packet is replicated up to 60+ times in some cases. In order to get away from that, I am implementing changes so that we handle broadcast/multicast replication and east/west local traffic all in software.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

This change renames the fwd_priv member to accel_priv, as this more accurately reflects the actual purpose of this value. In addition I am adding an accessor which will allow us to further abstract this in the future if needed.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
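In spirit, the accessor is a one-line static inline over the renamed field (the names and layout below are purely illustrative, not the driver's struct):

    /* Wrap the renamed field so callers never touch it directly and the
     * storage location can change later without touching every call site.
     */
    struct example_ring {
            void *accel_priv;       /* was: fwd_priv */
    };

    static inline void *example_ring_accel_priv(const struct example_ring *ring)
    {
            return ring->accel_priv;
    }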
-
Committed by Alexander Duyck

Drop the code for handling macvlan-specific unicast lists. It isn't needed, since we don't take any effort to maintain the list when we bring the interface up and it takes the slow path anyway.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 17 April 2018, 4 commits
-
-
Committed by Jesper Dangaard Brouer

Change the ndo_xdp_xmit API to take a struct xdp_frame instead of a struct xdp_buff. This brings xdp_return_frame and ndo_xdp_xmit in sync. It also builds towards changing the API further to become a bulk API, because xdp_buff is not a queueable object while xdp_frame is.

V4: Adjust for commit 59655a5b ("tuntap: XDP_TX can use native XDP")
V7: Adjust for commit d9314c47 ("i40e: add support for XDP_REDIRECT")

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
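The shape of the changed net_device_ops hook, as a hedged driver-side sketch (at this stage the hook still transmits one frame per call, before the later bulk API; the stub body is obviously not a real ring implementation):

    #include <linux/netdevice.h>
    #include <net/xdp.h>

    /* After the change the hook receives a queueable struct xdp_frame,
     * matching what xdp_return_frame() later releases, instead of the
     * live struct xdp_buff from the RX path.
     */
    static int example_ndo_xdp_xmit(struct net_device *dev, struct xdp_frame *xdpf)
    {
            /* Enqueue xdpf on a TX ring here; on error the caller is
             * expected to free it with xdp_return_frame(xdpf).
             */
            return -ENOSPC;         /* placeholder: no ring in this sketch */
    }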
-
Committed by Jesper Dangaard Brouer

Changing the xdp_return_frame() API to take struct xdp_frame as its argument seems like a natural choice, but there are some subtle performance details here that need extra care, and handling them is a deliberate choice.

When xdp_frame is dereferenced on a remote CPU during DMA-TX completion, the cache line changes to the "Shared" state. Later, when the page is reused for RX, this xdp_frame cache line is written, which changes the state to "Modified". This situation already happens (naturally) for virtio_net, tun and cpumap, as the xdp_frame pointer is the queued object. In tun and cpumap, the ptr_ring is used for efficiently transferring cache lines (with pointers) between CPUs, so dereferencing xdp_frame is the only option there.

It is only the ixgbe driver that had an optimization by which it could avoid dereferencing xdp_frame. The driver already has a TX-ring queue, which (in case of remote DMA-TX completion) has to be transferred between CPUs anyhow. In this data area we stored a struct xdp_mem_info and a data pointer, which allowed us to avoid dereferencing xdp_frame. To compensate for losing this, a prefetchw is used to tell the cache coherency protocol about our access pattern. My benchmarks show that this prefetchw is enough to compensate in the ixgbe driver.

V7: Adjust for commit d9314c47 ("i40e: add support for XDP_REDIRECT")
V8: Adjust for commit bd658dda ("net/mlx5e: Separate dma base address and offset in dma_sync call")

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
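The compensation amounts to a prefetch-for-write of the frame's cache line on the transmit side, roughly like this (placement and naming are illustrative, not the driver's exact code):

    #include <linux/prefetch.h>
    #include <net/xdp.h>

    static inline void example_prime_xdp_frame(struct xdp_frame *xdpf)
    {
            /* Declare the intent to write this cache line so the later
             * dereference at DMA-TX completion time is not a cold miss.
             */
            prefetchw(xdpf);
    }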
-
Committed by Jesper Dangaard Brouer

Use the IDA infrastructure to get a cyclically increasing ID number that is used for keeping track of each registered allocator per RX-queue xdp_rxq_info. Instead of using the IDR infrastructure, which uses a radix tree, use a dynamic rhashtable to create the ID-to-pointer lookup table, because this is faster.

The problem being solved here is that the xdp_rxq_info pointer (stored in xdp_buff) cannot be used directly, as its guaranteed lifetime is too short. The info is needed on a (potentially) remote CPU during DMA-TX completion time. The xdp_mem_info stored in an xdp_frame, when it was converted from an xdp_buff, is sufficient for the simple page-refcnt-based recycle schemes. For more advanced allocators there is a need to store a pointer to the registered allocator, and thus a need to guard the lifetime or validity of that allocator pointer, which is done through this rhashtable ID-to-pointer map. The removal and validity of the allocator and the helper struct xdp_mem_allocator are guarded by RCU. The allocator is created by the driver and registered with xdp_rxq_info_reg_mem_model(). It is up for debate who is responsible for freeing the allocator pointer or invoking the allocator destructor function; in any case, this must happen via RCU freeing.

V4: Per request of Jason Wang - Use xdp_rxq_info_reg_mem_model() in all drivers implementing XDP_REDIRECT, even though it's not strictly necessary when allocator==NULL for type MEM_TYPE_PAGE_SHARED (given it's zero).
V6: Per request of Alex Duyck - Introduce rhashtable_lookup() call in later patch
V8: Address sparse should-be-static warnings (from kbuild test robot)

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
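An illustrative ID-to-allocator map in the spirit of the commit (struct, function and variable names are examples, not the kernel's; rhashtable_init() on the table is assumed to have been done elsewhere):

    #include <linux/idr.h>
    #include <linux/rhashtable.h>

    struct example_mem_allocator {
            u32 id;                         /* rhashtable key */
            struct rhash_head node;
            void *allocator;                /* registered allocator object */
    };

    static DEFINE_IDA(example_mem_ida);

    static const struct rhashtable_params example_params = {
            .key_len        = sizeof(u32),
            .key_offset     = offsetof(struct example_mem_allocator, id),
            .head_offset    = offsetof(struct example_mem_allocator, node),
            .automatic_shrinking = true,
    };

    static int example_register(struct rhashtable *ht,
                                struct example_mem_allocator *xa)
    {
            int id = ida_simple_get(&example_mem_ida, 1, 0, GFP_KERNEL);

            if (id < 0)
                    return id;
            xa->id = id;
            return rhashtable_insert_fast(ht, &xa->node, example_params);
    }

    static struct example_mem_allocator *example_lookup(struct rhashtable *ht, u32 id)
    {
            /* Callers hold rcu_read_lock(); RCU guards the validity window. */
            return rhashtable_lookup_fast(ht, &id, example_params);
    }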
-
Committed by Jesper Dangaard Brouer

Extend struct ixgbe_tx_buffer to store the xdp_mem_info. Notice that this could be optimized further by putting this into a union in struct ixgbe_tx_buffer, but this patchset works towards removing this again. Thus, this is not done.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
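Conceptually the extension is one more field in the per-descriptor bookkeeping entry (the layout below is illustrative; the real struct also carries DMA address, length and flag fields):

    #include <net/xdp.h>

    struct example_tx_buffer {
            union {
                    struct sk_buff *skb;        /* regular stack transmit */
                    struct xdp_frame *xdpf;     /* XDP_TX / redirect transmit */
            };
            struct xdp_mem_info xdp_mem;        /* new: how to return the page at completion */
    };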
-
- 24 March 2018, 5 commits
-
-
Committed by Björn Töpel

The current page counting scheme assumes that the reference count cannot decrease until the received frame is sent to the upper layers of the networking stack. This assumption does not hold for the XDP_REDIRECT action, since a page (pointed out by xdp_buff) can have its reference count decreased via the xdp_do_redirect call. To work around that, we now start off with a large page count and then don't allow a refcount of less than two.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
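A sketch of the idea (the bias value and names are illustrative; the driver keeps its own per-buffer bookkeeping):

    #include <linux/mm.h>

    /* Take a large up-front reference so XDP_REDIRECT can drop references
     * without the page ever reaching a count at which reuse is unsafe.
     */
    static void example_charge_rx_page(struct page *page, u16 *pagecnt_bias)
    {
            page_ref_add(page, USHRT_MAX - 1);      /* page starts with refcount 1 */
            *pagecnt_bias = USHRT_MAX;
    }

    /* Reuse only while at most one reference beyond our own bias is
     * outstanding; otherwise someone else may still hold the page.
     */
    static bool example_page_reusable(struct page *page, u16 pagecnt_bias)
    {
            return page_ref_count(page) - pagecnt_bias <= 1;
    }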
-
Committed by Shannon Nelson

Fix things up to support TSO offload in conjunction with IPsec hw offload. This raises throughput with IPsec offload on to nearly line rate.

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Shannon Nelson

Since the ipsec data fields will be zero anyway in the non-ipsec case, we can remove the conditional jump.

Suggested-by: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Shannon Nelson

With commit f8aa2696b4af ("esp: check the NETIF_F_HW_ESP_TX_CSUM bit before segmenting") we no longer need to protect ourselves from checksum offload requests on IPsec packets, so we can remove the check in our .ndo_features_check callback.

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Paul Greenwalt

Add status register reads and a delay between reads to ixgbe_check_remove. Registers can read 0xFFFFFFFF during a PCI reset, which causes the driver to remove the adapter. The additional status register reads reduce the chance of hitting this race condition. If the status register is not 0xFFFFFFFF, then ixgbe_check_remove returns the value of the register being read.

Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
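In outline, the check now retries the status register before concluding the adapter is gone (the register offset, retry count and helper shape below are assumptions for illustration, not the driver's exact implementation):

    #include <linux/delay.h>
    #include <linux/io.h>

    #define EXAMPLE_STATUS_REG      0x00008         /* device status register offset */

    /* During a PCI reset every read returns 0xFFFFFFFF; re-read with a small
     * delay so a transient all-ones value is not mistaken for a removed device.
     */
    static u32 example_check_remove(void __iomem *hw_addr, u32 reg)
    {
            u32 value;
            int i;

            for (i = 0; i < 5; i++) {
                    value = readl(hw_addr + EXAMPLE_STATUS_REG);
                    if (value != 0xFFFFFFFF)
                            return readl(hw_addr + reg);    /* adapter still present */
                    udelay(100);
            }
            return 0xFFFFFFFF;      /* treat the adapter as removed */
    }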
-
- 27 February 2018, 2 commits
-
-
Committed by Emil Tantilov

Add a check for a build_skb enabled ring in ixgbe_dma_sync_frag(). In that case &skb_shinfo(skb)->frags[0] may not always be set, which can lead to a crash. Instead we derive the page offset from skb->data.

Fixes: 42073d91 ("ixgbe: Have the CPU take ownership of the buffers sooner")
CC: stable <stable@vger.kernel.org>
Reported-by: Ambarish Soman <asoman@redhat.com>
Suggested-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jacob Keller

On hardware which supports timestamping all packets, the timestamps are recorded in the packet buffer, and the driver no longer uses or reads the registers. This makes the logic for checking and clearing Rx timestamp hangs meaningless. If we run the ixgbe_ptp_rx_hang() function in this case, the driver will continuously spam the log output with "Clearing Rx timestamp hang". These messages are spurious and confusing to end users.

The original code in commit a9763f3c ("ixgbe: Update PTP to support X550EM_x devices", 2015-12-03) did have a flag, PTP_RX_TIMESTAMP_IN_REGISTER, which was intended to be used to avoid the Rx timestamp hang check; however, it did not actually check the flag before calling the function. Do so now in order to stop the checks and prevent the spurious log messages.

Fixes: a9763f3c ("ixgbe: Update PTP to support X550EM_x devices", 2015-12-03)
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
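The fix reduces to gating the periodic call on that flag; a sketch of the intent (the exact flag identifier used here is an assumption following the commit's description, not a verified driver symbol):

    /* Only hardware that latches Rx timestamps in registers can wedge and
     * needs the periodic clear; timestamps delivered in the packet buffer
     * make the check (and its log spam) pointless.
     */
    if (adapter->flags & IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER)
            ixgbe_ptp_rx_hang(adapter);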
-
- 27 January 2018, 2 commits
-
-
Committed by Emil Tantilov

Commit 2de6aa3a ("ixgbe: Add support for padding packet") uses RXDCTL.RLPML to limit the maximum frame size on Rx when using build_skb. Unfortunately that register does not work on 82599. Add an explicit check to avoid setting this register on the 82599 MAC, and extend the comment related to the setting of RXDCTL.RLPML to better explain its purpose.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Paul Greenwalt

Since 5G link speed is supported by some devices, add reporting of 5G link speed.

Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 26 January 2018, 1 commit
-
-
Committed by Jakub Kicinski

Make use of tc_cls_can_offload_and_chain0() to set the extack message in case the ethtool tc offload flag is not set or the chain is unsupported.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
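The helper is typically called at the top of a classifier-offload callback, roughly like this (tc_cls_can_offload_and_chain0() is the real helper from include/net/pkt_cls.h; the callback shape around it is illustrative):

    #include <net/pkt_cls.h>

    static int example_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
                                         void *cb_priv)
    {
            struct tc_cls_u32_offload *cls = type_data;
            struct net_device *dev = cb_priv;

            /* Sets an extack message when hw-tc-offload is disabled or a
             * non-zero chain is requested, instead of failing silently.
             */
            if (!tc_cls_can_offload_and_chain0(dev, &cls->common))
                    return -EOPNOTSUPP;

            /* ...program the rule into hardware... */
            return 0;
    }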
-
- 24 January 2018, 7 commits
-
-
Committed by Shannon Nelson

With all the support code in place, we can now link in the ipsec offload operations and set the ESP feature flag for the XFRM subsystem to see.

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Shannon Nelson

Add a simple statistic to count the ipsec offloads.

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Shannon Nelson

If the skb has a security association referenced, set up the Tx descriptor with the ipsec offload bits. While we're here, fix an oddly named field in the context descriptor struct.

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Shannon Nelson

If the chip sees and decrypts an ipsec offload, set up the skb's sp pointer with the related SA info. Since the chip is rude enough to keep the table index it used for the decryption to itself, we have to do our own table lookup, using the hash for speed.

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Shannon Nelson

On a chip reset most of the table contents are lost, so they must be restored. This scans the driver's ipsec tables and restores both the filled and empty table slots to their pre-reset values.

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Shannon Nelson

Add the functions for setting up and removing offloaded SAs (Security Associations) with the x540 hardware. We set up the callback structure, but we don't yet set the hardware feature bit, to be sure the XFRM service won't actually try to use us for an offload yet.

The software tables are made up to mimic the hardware tables, to make it easier to track what's in the hardware, and the SA table index is used as the XFRM offload handle. However, there is a hashing field in the Rx SA tracking that will be used to facilitate faster table searches in the Rx fast path.

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Shannon Nelson

Add a few routines to make access to the ipsec registers just a little easier, and throw in the beginnings of an initialization.

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 13 January 2018, 8 commits
-
-
Committed by Tony Nguyen

Recent checks added for formatting kernel-doc comments are causing warnings if W= is run with a non-zero value. This patch fixes function comments to resolve warnings when W=1 is used.

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
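For context, a W=1-clean kernel-doc block looks like this (a generic example, not a function from the driver):

    /**
     * example_ring_space - report free descriptors on a ring
     * @size: total number of descriptors in the ring
     * @head: index of the next descriptor hardware will consume
     * @tail: index of the next descriptor software will write
     *
     * Return: how many descriptors can still be posted.
     */
    static u16 example_ring_space(u16 size, u16 head, u16 tail)
    {
            return (head > tail) ? head - tail - 1 : size + head - tail - 1;
    }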
-
Committed by Alexander Duyck

This update makes it so that we report the actual number of Tx queues via real_num_tx_queues, but are still restricted to RSS on only the first pool by setting num_tc equal to 1. Doing this locks us into only being able to set up XPS on the queues in that pool, and only those queues should be used for transmitting anything other than macvlan traffic.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

This change makes it so that instead of bringing rings up and down as macvlans are added and removed, we just update the netdev pointer for the Rx ring and set or clear the MAC filter for the interface. By doing it this way we can avoid a number of races and issues in the code, as things were getting messy with the macvlan clean-up racing with the interface clean-up to bring the rings down on shutdown.

With this change we opt to leave the rings owned by the PF interface for both Tx and Rx, and just direct the packets, once they are received, to the macvlan netdev.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

We should not be stopping/starting the upper device's Tx queues when handling a macvlan offload; instead we should be stopping and starting traffic on our own queues. In order to prevent us from doing this, I am updating the code so that we no longer change the queue configuration on the upper device, nor do we update the queue_index on our own device. Instead we can just use the queue index for our local device and not update the netdev in the case of the transmit rings.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

We shouldn't be recording the Rx queue on macvlan offloaded frames, since the macvlan is normally brought up as a single-queue device and recording queue IDs larger than the "real_num_rx_queues" value recorded for the device will trigger warnings from RPS. Instead we should be recording the macvlan statistics, since we are bypassing the normal macvlan statistics that would have been generated by the receive path.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

The code throughout ixgbe was assuming that dev->num_tc was populated and configured by the driver, when in fact it can be configured via mqprio without any hardware coordination other than restricting us to the real number of Tx queues we advertise. Instead of handling things this way, we need to keep a local copy of the number of TCs in use, so that we don't accidentally pull in the TC configuration from mqprio when it is configured in software mode.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

We might as well configure the limit to default to 1 pool always for the interface. This accounts for the fact that the PF counts as 1 pool if SR-IOV is enabled, and in general we are always running in 1 pool mode when RSS or DCB is enabled as well, though we don't need to actually evaluate any of the VMDq features in those cases.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

The macvlan driver itself will validate the MAC address that is configured for a given interface; there is no need for us to verify it again. Instead we should be checking that we actually allocated the filter and have not run out of resources to configure a MAC rule in our filter table.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 10 January 2018, 1 commit
-
-
Committed by Alexander Duyck

The L2 acceleration private pointer isn't needed in the ring struct. It isn't really used anywhere other than to test whether we are supporting an offloaded macvlan netdev, and it is much easier to verify that by testing the netdev for not being ixgbe-based.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-