- April 26, 2017: 3 commits
-
-
By Artemy Kovalyov
Currently, ODP supports only regular MMU pages. Add ODP support for regions consisting of physically contiguous chunks of arbitrary order (huge pages, for instance) to improve performance. Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Artemy Kovalyov
The size of pages is held by struct ib_umem in the page_size field. It is better stored as an exponent, because a page size is by nature always a power of two and is used as a factor, divisor or ilog2 argument. Converting page_size to page_shift makes the code portable and avoids the following error when compiling on ARM: ERROR: "__aeabi_uldivmod" [drivers/infiniband/core/ib_core.ko] undefined! CC: Selvin Xavier <selvin.xavier@broadcom.com> CC: Steve Wise <swise@chelsio.com> CC: Lijun Ou <oulijun@huawei.com> CC: Shiraz Saleem <shiraz.saleem@intel.com> CC: Adit Ranadive <aditr@vmware.com> CC: Dennis Dalessandro <dennis.dalessandro@intel.com> CC: Ram Amrani <Ram.Amrani@Cavium.com> Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Acked-by: Ram Amrani <Ram.Amrani@cavium.com> Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> Acked-by: Selvin Xavier <selvin.xavier@broadcom.com> Acked-by: Adit Ranadive <aditr@vmware.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
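A quick illustration of why storing the shift is preferable, sketched in plain kernel-style C (the struct and helper names below are illustrative stand-ins, not the actual ib_umem API): dividing by a stored 64-bit page size compiles to a libgcc call such as __aeabi_uldivmod on 32-bit ARM, while a shift does not, and a shift also feeds directly into ilog2-style math.

```c
#include <linux/bitops.h>
#include <linux/log2.h>
#include <linux/types.h>

/* Illustrative only: mirrors the idea of storing page_shift instead of
 * page_size, as struct ib_umem does after this change. */
struct example_umem {
	u64 length;
	unsigned int page_shift;	/* e.g. 12 for 4 KiB pages */
};

/* With page_shift the page count is a cheap shift; dividing by a stored
 * 64-bit page_size would pull in __aeabi_uldivmod on 32-bit ARM. */
static inline u64 example_umem_num_pages(const struct example_umem *umem)
{
	return (umem->length + BIT_ULL(umem->page_shift) - 1) >> umem->page_shift;
}

/* Going the other way is just ilog2() on a power-of-two page size. */
static inline unsigned int example_page_shift(unsigned long page_size)
{
	return ilog2(page_size);
}
```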
-
By Zhu Yanjun
The function ib_unregister_mad_agent always returns zero, and this return value is never checked. As such, change the return type to void. CC: Joe Jin <joe.jin@oracle.com> CC: Junxiao Bi <junxiao.bi@oracle.com> Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com> Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com> Reviewed-by: Hal Rosenstock <hal@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- April 22, 2017: 2 commits
-
-
By Noa Osherovich
Add the high data rate speed to the ib_port_speed enumeration. Signed-off-by: Noa Osherovich <noaos@mellanox.com> Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Slava Shwartsman
This flow steering specification identifies a flow to be dropped by the HW. If a user creates a flow with only the drop specification, all packets that hit this flow will be dropped; otherwise the HW drops only the packets that match the other L2/L3/L4 specifications. Signed-off-by: Slava Shwartsman <slavash@mellanox.com> Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
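A rough stand-in sketch of the semantics described above (simplified types for illustration, not the real uAPI structures): a drop specification carries no match fields of its own, so the scope of the drop is set entirely by whatever L2/L3/L4 specs accompany it.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for a flow rule made of match specs plus an
 * action-drop spec. With zero match specs the rule matches everything,
 * so every packet on the QP is dropped; with L2/L3/L4 specs present,
 * only matching packets are dropped. */
struct example_flow_rule {
	size_t num_match_specs;   /* L2/L3/L4 specifications      */
	bool   has_drop_spec;     /* action: drop matched traffic */
};

static bool example_packet_dropped(const struct example_flow_rule *rule,
				   bool packet_matches_specs)
{
	if (!rule->has_drop_spec)
		return false;
	/* no match specs => the rule hits all packets */
	return rule->num_match_specs == 0 || packet_matches_specs;
}
```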
-
- April 21, 2017: 4 commits
-
-
Add an RDMA netdev interface to the ib device structure, allowing RDMA netdev devices to be allocated by ib clients. The idea is to let providers optimize the IPoIB data path. A new struct containing callback functions and data members is exposed; it defines the set of callbacks for handling data-path flows in the IPoIB driver. Each provider can implement this set of functions to optimize its specific data path and let IPoIB leverage it. The assumption is that providers must supply the full set of functions, not only part of them, in order to work properly. Signed-off-by: Erez Shitrit <erezsh@mellanox.com> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
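A hedged sketch of what such a provider-filled callback structure can look like; the member names below are illustrative stand-ins patterned on the description, not necessarily the exact fields added by this patch.

```c
#include <linux/netdevice.h>
#include <rdma/ib_verbs.h>

/* Illustrative only: a structure of data-path callbacks that an RDMA
 * netdev (e.g. IPoIB) invokes instead of its generic send/multicast path.
 * A provider is expected to fill in the complete set. */
struct example_rdma_netdev_ops {
	void *client_priv;			/* owned by the ib client */
	struct ib_device *hca;			/* providing device       */
	u8 port_num;

	/* transmit one packet toward a destination AH/QPN */
	int (*send)(struct net_device *dev, struct sk_buff *skb,
		    struct ib_ah *address, u32 dqpn);
	/* join/leave a multicast group on behalf of the netdev */
	int (*attach_mcast)(struct net_device *dev, struct ib_device *hca,
			    union ib_gid *gid, u16 mlid, int set_qkey, u32 qkey);
	int (*detach_mcast)(struct net_device *dev, struct ib_device *hca,
			    union ib_gid *gid, u16 mlid);
};
```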
-
HFI1 HW-specific support for VNIC functionality. Dynamically allocate a set of contexts for VNIC when the first VNIC port is instantiated. Allocate VNIC contexts from the user contexts pool and return them to the same pool when freed. Set aside enough MSI-X interrupts for VNIC contexts and assign them when the contexts are allocated. On the receive side, use an RSM rule to spread TCP/UDP streams among VNIC contexts. Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Andrzej Kacprowski <andrzej.kacprowski@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Define the OPA VNIC interface between the hardware-independent and the hardware-dependent VNIC functionality. Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Add an rdma netdev interface to the ib device structure, allowing rdma netdev devices to be allocated by ib clients. Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- April 20, 2017: 1 commit
-
-
By Matan Barak
We rename the "write" flag to "exclusive", as it is used for both WRITE and DESTROY actions. Fixes: 38321256 ('IB/core: Add support for idr types') Signed-off-by: Matan Barak <matanb@mellanox.com> Reviewed-by: Sean Hefty <sean.hefty@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- April 6, 2017: 9 commits
-
-
By Don Hiatt
Add the ability to fault packets on transmit by opcode. Dropping all packets can be achieved by setting the mask to 0. To drop non-verbs traffic we set PbcInsertHrc to NONE (0x2): the packet will still be delivered to the receiving node, but a KHdrHCRCErr (KDETH packet with a bad HCRC) will be triggered and the packet will not be delivered to the correct context. To drop regular verbs traffic we set the PbcTestEbp flag: the packet will still be delivered to the receiving node, but a 'late ebp error' will be triggered and the packet will be dropped. A global toggle (/sys/kernel/debug/hfi1/hfi1_X/fault_suppress_err) has been added to suppress the error messages on the receive node when a packet was faulted on the sending node. Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Don Hiatt <don.hiatt@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
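A minimal sketch of the "fault by opcode" idea, using a hypothetical bitmask helper; none of these names are the actual hfi1 debugfs fields, they just show the shape of the check made on the transmit path.

```c
#include <linux/bitmap.h>
#include <linux/types.h>

/* Hypothetical fault-injection state: one bit per 8-bit opcode, plus a
 * global enable. In the real driver this would be populated from debugfs. */
struct example_tx_fault {
	bool enabled;
	DECLARE_BITMAP(opcodes, 256);
};

/* Returns true when the packet about to be sent should be corrupted on
 * purpose (bad HCRC for non-verbs traffic, 'late ebp' for verbs traffic)
 * rather than transmitted clean. */
static bool example_should_fault_tx(const struct example_tx_fault *f, u8 opcode)
{
	return f->enabled && test_bit(opcode, f->opcodes);
}
```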
-
By Mike Marciniszyn
The wqe should be read-only, and the superfluous reset of the RVT_SEND_RESERVE_USED flag causes an issue where reserved operations elicit a bad completion to the ULP. The maintenance of the flag is now entirely within rvt_post_one_wr(), where a reserved operation sets the flag and a non-reserved operation ensures that the operation about to be posted has the flag cleared. Fixes: Commit 856cc4c2 ("IB/hfi1: Add the capability for reserved operations") Reviewed-by: Don Hiatt <don.hiatt@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
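A tiny sketch of the ownership rule being established here (hypothetical names; the real flag is RVT_SEND_RESERVE_USED on the swqe): only the posting path touches the flag, while the completion path treats the wqe as read-only.

```c
#include <linux/types.h>

/* Hypothetical illustration: the post path alone owns the flag.
 * A reserved operation sets it; a normal post clears any stale value.
 * The completion path only reads it and must never reset it. */
#define EXAMPLE_SEND_RESERVE_USED 0x1u

static void example_post_one_wr(unsigned int *swqe_flags, bool reserved_op)
{
	if (reserved_op)
		*swqe_flags |= EXAMPLE_SEND_RESERVE_USED;
	else
		*swqe_flags &= ~EXAMPLE_SEND_RESERVE_USED;
}
```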
-
By Mike Marciniszyn
The work to create a completion helper moved the translation of send wqe operations to completion opcodes into rdmavt. This precludes having driver-dependent operations. Make the translation driver-dependent by doing it in the driver prior to the rvt_qp_swqe_complete() call, using restored translation tables. Fixes: Commit f2dc9cdc ("IB/rdmavt: Add a send completion helper") Fixes: Commit 0771da5a ("IB/hfi1,IB/qib: Use new send completion helper") Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Matan Barak
This patch adds the standard fd-based type - completion_channel. The completion_channel is now prefixed with ib_uobject, similarly to the rest of the uobjects. This requires a few changes: (1) We define a new completion channel fd-based object type. (2) completion_event and async_event are now two different types; this means they use different fops. (3) We release the completion_channel exactly as we release other idr-based objects. (4) Since ib_uobjects are already kref-ed, we only add the kref to the async event. An fd object requires filling out several parameters: its op pointer should point to uverbs_fd_ops and its size should be at least the size of ib_uobject. We use a macro to make the type declaration easier. Signed-off-by: Matan Barak <matanb@mellanox.com> Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Matan Barak
The completion channel we use in the verbs infrastructure is FD based. Previously, we had a separate way to manage this object. Since we strive for a single way to manage any kind of object in this infrastructure, we conceptually treat all objects as subclasses of ib_uobject. This commit adds the necessary mechanism to support FD-based objects like their IDR counterparts. Release of FD objects needs to be synchronized with context release; we use the cleanup_mutex on the uverbs_file for that. Signed-off-by: Matan Barak <matanb@mellanox.com> Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Matan Barak
This changes only the handlers which deal with idr-based objects to use the new idr allocation, fetching and destruction scheme. This patch consists of the following changes: (1) Allocation, fetching and destruction are done via idr ops. (2) Context initialization and release are done through uverbs_initialize_ucontext and uverbs_cleanup_ucontext. (3) Ditching the live flag. Mostly, this is pretty straightforward. The only place that is a bit trickier is ib_uverbs_open_qp. Commit [1] added code to check whether the uobject is already live and initialized; this mostly happens because of a race between open_qp and events. We delayed assigning the uobject's pointer in order to eliminate this race without using the live variable. [1] commit a040f95d ("IB/core: Fix XRC race condition in ib_uverbs_open_qp") Signed-off-by: Matan Barak <matanb@mellanox.com> Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Matan Barak
This patch adds the standard idr-based types. These types are used in downstream patches to initialize, destroy and look up IB standard objects, which are based on idr objects. An idr object requires filling out several parameters: its op pointer should point to uverbs_idr_ops and its size should be at least the size of ib_uobject. We add a macro to make the type declaration easier. Signed-off-by: Matan Barak <matanb@mellanox.com> Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Reviewed-by: Sean Hefty <sean.hefty@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Matan Barak
The new ioctl infrastructure supports driver-specific objects. Each such object type has a hot-unplug function, an allocation size and an order of destruction. When a ucontext is created, a new list is created in this ib_ucontext; this list contains all objects created under this ib_ucontext. When an ib_ucontext is destroyed, we traverse this list several times, destroying the various objects in the order given by the object type description. If a few object types have the same destruction order, they are destroyed in an order opposite to their creation order. Adding an object is done in two parts. First, an object is allocated and added to the idr tree. Then, the command's handlers (in downstream patches) can work on this object and fill in its required details. After a successful command, the commit part is called and the user object becomes ucontext visible. If the handler failed, alloc_abort should be called. Removing a uobject is done by calling lookup_get with the write flag and finalizing it with destroy_commit. A major change from the previous code is that we actually destroy the kernel object itself in destroy_commit (rather than just the uobject). We should make sure the idr (per-uverbs-file) and the list (per-ucontext) can be accessed concurrently without corrupting them. Signed-off-by: Matan Barak <matanb@mellanox.com> Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
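A hedged sketch of the two-part create flow described above; the function names are illustrative, patterned on the description rather than copied from the actual rdma_core API.

```c
#include <linux/err.h>

struct example_uverbs_file;
struct example_uobject;

/* Hypothetical helpers mirroring the alloc/commit/abort split: a slot is
 * reserved first, the handler fills in the kernel object, then the object
 * either becomes ucontext-visible (commit) or is torn down (abort). */
struct example_uobject *example_alloc_begin_uobject(struct example_uverbs_file *f);
void example_alloc_commit_uobject(struct example_uobject *uobj);
void example_alloc_abort_uobject(struct example_uobject *uobj);
int example_fill_in_kernel_object(struct example_uobject *uobj);

static int example_create_handler(struct example_uverbs_file *ufile)
{
	struct example_uobject *uobj = example_alloc_begin_uobject(ufile);
	int ret;

	if (IS_ERR(uobj))
		return PTR_ERR(uobj);

	ret = example_fill_in_kernel_object(uobj);  /* handler-specific work  */
	if (ret) {
		example_alloc_abort_uobject(uobj);  /* drop the reserved slot */
		return ret;
	}

	example_alloc_commit_uobject(uobj);         /* now user-visible       */
	return 0;
}
```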
-
By Matan Barak
The current code creates an idr per type. Since types are currently common to all drivers and known in advance, this was good enough. However, the proposed ioctl-based infrastructure allows each driver to declare only some of the common types and to declare its own specific types. Thus, we decided to make the idr per uverbs_file. Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Haggai Eran <haggaie@mellanox.com> Reviewed-by: Sean Hefty <sean.hefty@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
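A small sketch of the per-file idr idea using the stock kernel idr API; the surrounding structure and function names are assumptions for illustration only.

```c
#include <linux/gfp.h>
#include <linux/idr.h>
#include <linux/spinlock.h>

/* Illustration: one idr per uverbs file instead of one global idr per type. */
struct example_uverbs_file {
	struct idr idr;		/* maps handle -> uobject for this file only */
	spinlock_t idr_lock;
};

static void example_file_init(struct example_uverbs_file *file)
{
	idr_init(&file->idr);
	spin_lock_init(&file->idr_lock);
}

static int example_insert_uobj(struct example_uverbs_file *file, void *uobj)
{
	int id;

	idr_preload(GFP_KERNEL);
	spin_lock(&file->idr_lock);
	id = idr_alloc(&file->idr, uobj, 0, 0, GFP_NOWAIT); /* new handle */
	spin_unlock(&file->idr_lock);
	idr_preload_end();

	return id;	/* negative errno on failure */
}
```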
-
- March 25, 2017: 1 commit
-
-
By Bart Van Assche
Avoid the following error message being reported on the console while loading an RDMA driver with I/O MMU support enabled: DMAR: Allocating domain for mlx5_0 failed Ensure that DMA mapping operations that use to_pci_dev() to access struct pci_dev see the correct PCI device; e.g. the s390 and powerpc DMA mapping operations use to_pci_dev() even with I/O MMU support disabled. This patch preserves the following changes of the DMA mapping updates patch series: - Introduction of dma_virt_ops. - Removal of ib_device.dma_ops. - Removal of struct ib_dma_mapping_ops. - Removal of an if-statement from each ib_dma_*() operation. - IB HW drivers no longer set dma_device directly. Reported-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reported-by: Parav Pandit <parav@mellanox.com> Fixes: commit 99db9494 ("IB/core: Remove ib_device.dma_device") Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: parav@mellanox.com Tested-by: parav@mellanox.com Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- March 2, 2017: 1 commit
-
-
By Ingo Molnar
Add #include <linux/cred.h> dependencies to all .c files that rely on sched.h doing that for them. Note that even if the number of files where we need to add extra headers seems high, it's still a net win, because <linux/sched.h> is included in over 2,200 files ... Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- February 19, 2017: 9 commits
-
-
By Don Hiatt
Rename the RVT AETH defines and export them in rdma/ib_hdrs.h. Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Don Hiatt <don.hiatt@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Don Hiatt
Return usec from an index into ib_rvt_rnr_table. Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Don Hiatt <don.hiatt@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Mike Marciniszyn
The send completion for RC QPs mismanages the ack count when the responder side is only in RTR. A QP in that state cannot send requests, but it can be the target of operations that elicit responses. Adjust the RC completion logic to correct the count maintenance by reflecting RECV_OK in a new state test. Reviewed-by: Kaike Wan <kaike.wan@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Brian Welty
To improve code reuse, add small SGE state helper routines to rdmavt_mr.h. Leverage these in hfi1, including refactoring of hfi1_copy_sge. Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Brian Welty <brian.welty@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Brian Welty
Convert copy_sge and related SGE state functions to use booleans. To determine whether a QP is in user mode, add a helper function in rdmavt_qp.h; this is used to determine whether the QP needs last-byte ordering. While here, change rvt_pd.user to a boolean. Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Reviewed-by: Dean Luick <dean.luick@intel.com> Signed-off-by: Brian Welty <brian.welty@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Move common code across targets into rdmavt for code reuse. Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Reviewed-by: Brian Welty <brian.welty@intel.com> Signed-off-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Brian Welty
Add rvt_compute_aeth() and rvt_get_credit() as shared functions in rdmavt, moved from hfi1/qib logic. Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Brian Welty <brian.welty@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Brian Welty
Add rvt_rc_error() and rvt_comm_est() as shared functions in rdmavt, moved from hfi1/qib logic. Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Brian Welty <brian.welty@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Sebastian Sanchez
Having a per-CPU reference count for each MR prevents cache-line bouncing across the system and thus avoids bottlenecks. Use per-CPU reference counts per MR. The per-CPU reference count for FMRs is used in atomic mode to allow accurate testing of the busy state; other MR types run in per-CPU mode until they are freed. Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Sebastian Sanchez <sebastian.sanchez@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
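The per-CPU refcount pattern itself can be sketched with the kernel's generic percpu_ref API; this is a minimal sketch, not the rdmavt MR code, which wraps the mechanism in its own structures.

```c
#include <linux/gfp.h>
#include <linux/percpu-refcount.h>

/* Illustrative MR wrapper: get/put hit only a per-CPU counter on the fast
 * path, so unrelated CPUs no longer bounce a shared cache line. */
struct example_mr {
	struct percpu_ref refcount;
};

static void example_mr_release(struct percpu_ref *ref)
{
	/* last reference dropped; safe to free the MR here */
}

static int example_mr_init(struct example_mr *mr)
{
	return percpu_ref_init(&mr->refcount, example_mr_release, 0, GFP_KERNEL);
}

static void example_mr_hold(struct example_mr *mr)
{
	percpu_ref_get(&mr->refcount);	/* per-CPU increment on the fast path */
}

static void example_mr_drop(struct example_mr *mr)
{
	percpu_ref_put(&mr->refcount);
}

/* FMR-style accurate busy test: collapse to atomic mode so the count can
 * be read exactly, as described above. */
static void example_mr_make_countable(struct example_mr *mr)
{
	percpu_ref_switch_to_atomic(&mr->refcount, NULL);
}
```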
-
- February 15, 2017: 8 commits
-
-
By Or Gerlitz
Add a protocol definition for the proprietary USNIC driver. Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Reviewed-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Reviewed-by: Christian Benvenuti <benve@cisco.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Or Gerlitz
Define a raw packet protocol, which denotes that the port supports working with raw Ethernet frames, e.g. as done with RAW_PACKET QPs. Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Reviewed-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Reviewed-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Artemy Kovalyov
Currently, an ODP MR may explicitly register a virtual address space area of limited length. This change allows an MR to cover the entire process virtual address space, dynamically adding/removing translation entries to the device MTT. The following changes support implicit MRs: * Allow a umem to be of zero size to back an implicit MR. * Add a new function ib_alloc_odp_umem() to add virtual memory regions to an implicit MR dynamically on demand. * Add a new function rbt_ib_umem_lookup() to find dynamically added virtual memory regions. * Expose rbt_ib_umem_for_each_in_range() to other modules and make it safe. Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Artemy Kovalyov
Add the flag IB_ODP_SUPPORT_IMPLICIT to indicate that implicit MRs are supported. Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Noa Osherovich
Add a new creation flag to set the scatter FCS capability of a WQ. Signed-off-by: Noa Osherovich <noaos@mellanox.com> Reviewed-by: Majd Dibbiny <majd@mellanox.com> Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Noa Osherovich
Add a QP creation flag to support cvlan stripping; it is applicable to RAW Ethernet QPs. Signed-off-by: Noa Osherovich <noaos@mellanox.com> Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Noa Osherovich
Enable WQ creation and modification with cvlan stripping offload. This includes: - Adding WQ creation flags. - Extending modify WQ to take flags and a flags mask, to enable turning the offload on and off. Signed-off-by: Noa Osherovich <noaos@mellanox.com> Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Noa Osherovich
Expose raw packet capabilities in the core layer to enable a device to report them. Two existing capabilities, scatter FCS and IP CSUM, were added to this field for a better user experience, exposing the raw packet caps from one location. This field will also serve future capabilities of raw packet QPs. A new capability was introduced - cvlan stripping, which is the device's ability to remove the cvlan tag from an incoming packet and report it in the matching work completion. Signed-off-by: Noa Osherovich <noaos@mellanox.com> Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
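A short sketch of how a consumer might test one of these capability bits before asking for the offload; the field and flag names here (raw_packet_caps, IB_RAW_PACKET_CAP_CVLAN_STRIPPING) follow this series as I understand it and should be treated as assumptions rather than a verified API reference.

```c
#include <rdma/ib_verbs.h>

/* Assumed names from this series: a raw_packet_caps bitfield in the device
 * attributes, with per-feature bits such as IB_RAW_PACKET_CAP_CVLAN_STRIPPING. */
static bool example_can_strip_cvlan(const struct ib_device_attr *attr)
{
	return attr->raw_packet_caps & IB_RAW_PACKET_CAP_CVLAN_STRIPPING;
}
```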
-
- February 14, 2017: 1 commit
-
-
By Moses Reuben
This specification identifies a flow with a specific tag-id, which will be reported in the CQE. Signed-off-by: Moses Reuben <mosesr@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
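A stand-in sketch of the shape of such a tag specification (simplified types for illustration, not the exact uAPI): the spec carries only the tag value, and packets hitting the flow have that value echoed back in their completion.

```c
#include <stdint.h>

/* Illustrative stand-in: an action-tag spec has no match fields, only the
 * tag to report in the CQE of packets that hit the flow it is attached to. */
struct example_flow_spec_action_tag {
	uint32_t type;		/* placeholder "action tag" spec type code */
	uint16_t size;		/* sizeof(this spec)                       */
	uint32_t tag_id;	/* value echoed back in the completion     */
};

static struct example_flow_spec_action_tag example_make_tag_spec(uint32_t tag)
{
	struct example_flow_spec_action_tag spec = {
		.type = 0x1000,		/* placeholder */
		.size = sizeof(spec),
		.tag_id = tag,
	};
	return spec;
}
```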
-
- February 7, 2017: 1 commit
-
-
By Parav Pandit
Use the is_vlan_dev() helper instead of open-coding the flag comparison that is_vlan_dev() performs. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Daniel Jurgens <danielj@mellanox.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Jon Maxwell <jmaxwell37@gmail.com> Acked-by: Johannes Thumshirn <jth@kernel.org> Acked-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
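The change is the classic open-coded-test-to-helper cleanup; roughly as follows (a sketch, not the exact diff):

```c
#include <linux/if_vlan.h>
#include <linux/netdevice.h>

static bool example_port_is_vlan(struct net_device *ndev)
{
	/* Before: open-coded flag test
	 *     if (ndev->priv_flags & IFF_802_1Q_VLAN) ...
	 * After: use the helper, which performs exactly that comparison. */
	return is_vlan_dev(ndev);
}
```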
-