- 30 April 2017, 3 commits

Submitted by Paul Greenwalt

Add support for a new 1000Base-T device based on the X550EM_X MAC type. All PHY operations are disabled, as the PHY is controlled by firmware.

Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by John Fastabend

A couple of design choices were made here. First, I use a new ring pointer structure, xdp_ring[], in the adapter struct instead of pushing the newly allocated XDP TX rings into the tx_ring[] structure. This means we have to duplicate loops around rings in places where we want to initialize both TX rings and XDP rings, but by making it explicit it is obvious when we are using XDP rings and when we are using TX rings. Further, we don't have to do ring arithmetic, which is error prone. As a proof point, my first patches used only a single ring structure and introduced bugs in the FCoE and macvlan code paths.

Second, I am aware this is not the most optimized version of this code possible. I want to get baseline support in using the most readable format possible, and once this series is included I will optimize the TX path in another series of patches.

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by John Fastabend

Basic XDP drop support for ixgbe. Uses READ_ONCE/xchg semantics on XDP programs instead of RCU primitives, as suggested by Daniel Borkmann and Alex Duyck.

v2: fixed the build issues seen with XDP when page sizes are larger than 4K and made minor fixes based on feedback from Jakub Kicinski.

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
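
Below is a minimal sketch of the READ_ONCE()/xchg() pattern the commit describes, for illustration only; the structure and helper names (my_ring, my_setup_xdp, my_run_xdp) are assumptions, not ixgbe's actual code.

/* Sketch: attach and run an XDP program using READ_ONCE()/xchg()
 * instead of RCU. Struct and helper names are illustrative.
 */
#include <linux/bpf.h>
#include <linux/filter.h>

struct my_ring {
    struct bpf_prog *xdp_prog;   /* written with xchg(), read with READ_ONCE() */
};

static int my_setup_xdp(struct my_ring *ring, struct bpf_prog *prog)
{
    struct bpf_prog *old = xchg(&ring->xdp_prog, prog);

    if (old)
        bpf_prog_put(old);       /* drop the reference to the old program */
    return 0;
}

static bool my_run_xdp(struct my_ring *ring, struct xdp_buff *xdp)
{
    struct bpf_prog *prog = READ_ONCE(ring->xdp_prog);
    u32 act;

    if (!prog)
        return false;            /* no program attached, pass frame to the stack */

    act = bpf_prog_run_xdp(prog, xdp);
    return act == XDP_DROP;      /* caller recycles the Rx buffer on drop */
}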
-
- 19 April 2017, 1 commit

Submitted by Alexander Duyck

This patch increases the headroom allocated when using build_skb on a system with 4K pages. Specifically the breakdown of headroom versus cache size is as follows:

    L1 Cache Size              Headroom
    64                         192
    64, NET_IP_ALIGN == 2      194
    128                        128
    128, NET_IP_ALIGN == 2     130
    256                        512
    256, NET_IP_ALIGN == 2     258

I stopped at supporting only a cache line size of 256 as that was the largest cache size I could find supported in the kernel. With this we are guaranteeing at least 128 bytes of headroom to spare in the frame. This should be enough for us to insert a couple of IPv6 headers if needed, which is likely enough room for anything XDP should need.

I'm leaving the padding for systems with pages larger than 4K unmodified for now. XDP currently isn't really set up to work on those types of systems, so we can cross that bridge when we get there.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
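
As a hedged illustration of where those headroom numbers come from, the sketch below computes the space left in half of a 4K page after a 1536-byte buffer and the cache-line-aligned skb_shared_info; the DEMO_* names and the exact arithmetic are assumptions for illustration, not the driver's code.

/* Sketch: headroom left in half of a 4K page once the 1536-byte buffer
 * and the aligned skb_shared_info are accounted for. With a 64-byte L1
 * cache line the shared-info overhead is 320 bytes, giving
 * 2048 - 320 - 1536 = 192 bytes of headroom, matching the table above.
 */
#include <linux/skbuff.h>

#define DEMO_RX_BUFLEN   1536    /* standard Ethernet MTU frame */
#define DEMO_HALF_PAGE   2048    /* a 4K page split into two Rx buffers */
#define DEMO_OVERHEAD    SKB_DATA_ALIGN(sizeof(struct skb_shared_info))

static unsigned int demo_build_skb_headroom(void)
{
    /* NET_IP_ALIGN (0 or 2) accounts for the "== 2" rows in the table */
    return DEMO_HALF_PAGE - DEMO_OVERHEAD - DEMO_RX_BUFLEN + NET_IP_ALIGN;
}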
-
- 03 March 2017, 2 commits

Submitted by Alexander Duyck

On architectures that have a cache line size larger than 64 bytes we start running into issues where the amount of headroom for the frame starts shrinking. The size of skb_shared_info on a system with a 64B L1 cache line is 320 bytes. This increases to 384 with a 128B cache line and 512 with a 256B cache line. In addition, the NET_SKB_PAD value increases as well, consistent with the cache line size. As a result, when we get to a 256B cache line, as seen on s390, we end up with 768 bytes used by padding and shared info, leaving us with only 1280 bytes to use for data storage. On architectures such as this we should default to using 3K Rx buffers out of an 8K page instead of trying to do 1.5K buffers out of a 4K page.

To take all of this into account I have added one small check so that we compare the max_frame to the amount of actual data we can store. This was already occurring for igb, but I had overlooked it for ixgbe as it doesn't have strict limits for 82599 once we enable jumbo frames. By adding this check we will automatically enable 3K Rx buffers as soon as the maximum frame size we can handle drops below the standard Ethernet MTU.

I also went through and fixed one small typo that I found where I had left an IGB in a variable name due to a copy/paste error.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Paolo Abeni

Currently ixgbe_set_rxfh() updates the rss_key copy in driver memory, but does not push the new value into the hardware. This commit adds a new helper for the latter operation and calls it in ixgbe_set_rxfh(), so that the hardware RSS key value can really be updated via ethtool.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
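
A hedged sketch of what such a helper can look like is below: the 40-byte RSS key is written as ten 32-bit words into consecutive RSS key registers. The demo_* names and the register offset are illustrative stand-ins for the driver's IXGBE_WRITE_REG()/IXGBE_RSSRK() style accessors.

/* Sketch: push the software copy of the RSS key into hardware.
 * The register offset below is illustrative only; use the driver's
 * real register definitions.
 */
#include <linux/io.h>
#include <linux/types.h>

#define DEMO_RSSRK(i)    (0x0EB80 + ((i) * 4))   /* illustrative offset */

static void demo_store_rss_key(u8 __iomem *hw_addr, const u32 *rss_key)
{
    int i;

    /* the RSS key is 40 bytes, i.e. ten 32-bit register writes */
    for (i = 0; i < 10; i++)
        writel(rss_key[i], hw_addr + DEMO_RSSRK(i));
}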
-
- 16 February 2017, 5 commits

Submitted by Alexander Duyck

This patch adds support for providing a buffer with headroom and tailroom to allow for shared info, NET_SKB_PAD, and NET_IP_ALIGN. With this combined with the DMA changes we can start using build_skb to build frames around an incoming Rx buffer instead of having to memcpy the headers.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
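
A hedged sketch of the build_skb() pattern the commit refers to: wrap an skb around the already-mapped page fragment, reserve the headroom, and mark the received length. The demo_ names are illustrative.

/* Sketch: build an skb around a received buffer instead of copying
 * headers into a freshly allocated skb. "va" points at the start of
 * the buffer (headroom included); "size" is the DMA'd payload length;
 * "truesize" covers headroom + data + skb_shared_info tailroom.
 */
#include <linux/skbuff.h>

static struct sk_buff *demo_build_rx_skb(void *va, unsigned int headroom,
                                         unsigned int size,
                                         unsigned int truesize)
{
    struct sk_buff *skb = build_skb(va, truesize);

    if (unlikely(!skb))
        return NULL;

    skb_reserve(skb, headroom);  /* skip NET_SKB_PAD + NET_IP_ALIGN */
    __skb_put(skb, size);        /* expose the received payload */

    return skb;
}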
-
Submitted by Alexander Duyck

In order to support build_skb with jumbo frames it will be necessary to use 3K buffers for the Rx path with 8K pages backing them. This is needed on architectures that implement 4K pages because we can't support 2K buffers plus padding in a 4K page. In the case of systems that support page sizes larger than 4K the 3K attribute will only be applied to FCoE as we can fall back to using just 2K buffers and adding the padding.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck

Batch the page count updates instead of doing them one at a time. By doing this we can improve the overall performance as the atomic increment operations can be expensive due to the fact that on x86 they are locked operations which can cause stalls. By doing bulk updates we can consolidate the stall which should help to improve the overall receive performance.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
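
A hedged sketch of the batching idea: take a large page reference up front and spend it through a local bias counter, so the hot path avoids locked atomics. The struct and field names are illustrative, not ixgbe's exact layout.

/* Sketch: amortize atomic page refcount updates. One bulk page_ref_add()
 * up front, then a per-buffer "pagecnt_bias" decremented without atomics
 * on every reuse and replenished only when it runs out.
 */
#include <linux/kernel.h>
#include <linux/mm.h>

struct demo_rx_buffer {
    struct page *page;
    u16 pagecnt_bias;            /* references we still "own" locally */
};

static void demo_rx_buffer_init(struct demo_rx_buffer *buf, struct page *page)
{
    buf->page = page;
    /* one reference came from the allocator; add the rest in bulk */
    page_ref_add(page, USHRT_MAX - 1);
    buf->pagecnt_bias = USHRT_MAX;
}

static void demo_rx_buffer_reuse(struct demo_rx_buffer *buf)
{
    /* hot path: plain decrement instead of a locked atomic operation */
    if (unlikely(--buf->pagecnt_bias == 0)) {
        page_ref_add(buf->page, USHRT_MAX - 1);
        buf->pagecnt_bias = USHRT_MAX;
    }
}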
-
Submitted by Alexander Duyck

This patch adds support for DMA_ATTR_SKIP_CPU_SYNC and DMA_ATTR_WEAK_ORDERING. By enabling both of these for the Rx path we are able to see performance improvements on architectures that implement either one, because page mapping and unmapping only has to sync what is actually being used instead of the entire buffer. In addition, enabling the weak ordering attribute provides a performance improvement on architectures that can associate a memory ordering with a DMA buffer, such as SPARC.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
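
A hedged sketch of the mapping pattern: map the Rx page with both attributes and sync only the region the hardware actually wrote. The demo_ helpers and simplified sizes are assumptions, not the driver's exact code.

/* Sketch: map an Rx page with DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING
 * and make only the received bytes CPU-visible.
 */
#include <linux/dma-mapping.h>

#define DEMO_RX_DMA_ATTR (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)

static dma_addr_t demo_map_rx_page(struct device *dev, struct page *page)
{
    return dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
                              DMA_FROM_DEVICE, DEMO_RX_DMA_ATTR);
}

static void demo_sync_rx_buffer(struct device *dev, dma_addr_t dma,
                                unsigned int offset, unsigned int len)
{
    /* sync only what the hardware wrote, not the whole page */
    dma_sync_single_range_for_cpu(dev, dma, offset, len, DMA_FROM_DEVICE);
}

static void demo_unmap_rx_page(struct device *dev, dma_addr_t dma)
{
    /* skip the CPU sync here too; we synced exactly what we consumed */
    dma_unmap_page_attrs(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE,
                         DEMO_RX_DMA_ATTR);
}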
-
Submitted by Stephen Hemminger

Since dcbnl_ops is global, it should be prefixed by ixgbe_

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 04 February 2017, 1 commit

Submitted by Eric Dumazet

In linux-4.5, busy polling was implemented in the core NAPI stack, meaning that all custom implementations can be removed from drivers. Not only do we remove lots of code, we also remove one lock operation in the fast path and allow GRO to do its job.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 January 2017, 2 commits

Submitted by Don Skidmore

This patch extends the xcast mailbox message to include support for unicast promiscuous mode. To allow a VF to enter this mode, the PF must be in promiscuous mode. A later patch will add the support needed in the VF driver (ixgbevf).

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Mark Rustad

Implement support for devices that have firmware-controlled PHYs.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 05 November 2016, 1 commit

Submitted by Don Skidmore

This patch adds X553 flow control auto-negotiation for fiber and backplane. To enable this, new function pointers were added, along with a function to dynamically set function pointers that cannot be determined from the MAC type alone.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 13 September 2016, 1 commit

Submitted by Javier Martinez Canillas

The IS_ENABLED() macro checks if a Kconfig symbol has been enabled either built-in or as a module; use that macro instead of open-coding the same. Using the macro makes the code more readable by helping abstract away some of the Kconfig built-in and module enable details.

Signed-off-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
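
For illustration, the kind of substitution the commit describes, using CONFIG_FCOE purely as an example of a tristate option:

#include <linux/kconfig.h>   /* IS_ENABLED() */
#include <linux/types.h>

/* Before: open-coded check for "built-in or module" */
#if defined(CONFIG_FCOE) || defined(CONFIG_FCOE_MODULE)
static const bool demo_fcoe_capable = true;
#else
static const bool demo_fcoe_capable = false;
#endif

/* After: the same condition via IS_ENABLED(), also usable in plain C code */
static const bool demo_fcoe_capable_v2 = IS_ENABLED(CONFIG_FCOE);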
-
- 21 August 2016, 1 commit

Submitted by Emil Tantilov

Add geneve Rx offload support for x550em_a. The implementation follows the vxlan code with the lower 16 bits of the VXLANCTRL register holding the UDP port for VXLAN and the upper for Geneve. Disabled NFS filters in the RFCTL register which allows us to simplify the check for VXLAN and Geneve packets in ixgbe_rx_checksum(). Removed vxlan from the name of the callback functions and replaced it with udp_tunnel which is more in line with the new API.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
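
A hedged sketch of the register layout described above: one 32-bit register carrying both tunnel ports, VXLAN in the low half and Geneve in the high half. The DEMO_* masks and shift are illustrative stand-ins, not the driver's actual macro names.

/* Sketch: pack the VXLAN and Geneve UDP ports into one 32-bit value. */
#include <linux/types.h>

#define DEMO_VXLANCTRL_VXLAN_PORT_MASK   0x0000ffff   /* bits 15:0  */
#define DEMO_VXLANCTRL_GENEVE_PORT_MASK  0xffff0000   /* bits 31:16 */
#define DEMO_VXLANCTRL_GENEVE_SHIFT      16

static u32 demo_set_vxlan_port(u32 vxlanctrl, u16 udp_port)
{
    vxlanctrl &= ~DEMO_VXLANCTRL_VXLAN_PORT_MASK;
    return vxlanctrl | udp_port;
}

static u32 demo_set_geneve_port(u32 vxlanctrl, u16 udp_port)
{
    vxlanctrl &= ~DEMO_VXLANCTRL_GENEVE_PORT_MASK;
    return vxlanctrl | ((u32)udp_port << DEMO_VXLANCTRL_GENEVE_SHIFT);
}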
-
- 19 August 2016, 1 commit

Submitted by Emil Tantilov

Use atomic bitwise operations when setting and checking reset requests. This should help with possible races in the service task.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
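
A hedged sketch of that pattern: request a reset by atomically setting a state bit, and let the service task consume it with test_and_clear_bit() so concurrent requesters cannot race. The bit name and index are illustrative.

/* Sketch: atomic reset-request flag shared between interrupt context and
 * the service task. DEMO_RESET_REQUESTED is an illustrative bit index.
 */
#include <linux/bitops.h>

#define DEMO_RESET_REQUESTED 3   /* bit index within the adapter state word */

static void demo_request_reset(unsigned long *state)
{
    set_bit(DEMO_RESET_REQUESTED, state);   /* atomic, safe from any context */
}

static void demo_service_task(unsigned long *state)
{
    /* consume the request exactly once, even with concurrent setters */
    if (test_and_clear_bit(DEMO_RESET_REQUESTED, state)) {
        /* ... perform the reset ... */
    }
}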
-
- 22 July 2016, 1 commit

Submitted by Don Skidmore

This patch addresses a few issues with the initial crosstalk fix. The most important of these is that the SDP that indicates the presence of an SFP+ module changes between hardware types; with this change that is taken into consideration. It also moves the check closer to the base code that checks link, so we only need to do the check in one spot.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 04 May 2016, 3 commits

Submitted by Usha Ketineni

This patch adds the IXGBE_FLAG_DCB_CAPABLE flag, which is set for all MACs other than X550EM_x and x550em_a. DCB and FCoE are disabled for these MACs. DCB initialization code is moved to a separate function.

Signed-off-by: Usha Ketineni <usha.k.ketineni@intel.com>
Tested-by: Ronald Bynoe <ronald.j.bynoe@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Emil Tantilov

This change aims to simplify the logic we use to determine WOL support by reading the EEPROM bits for MACs X540 and newer. Also some cleanups in ixgbe_wol_supported() - changed return type to bool and removed redundant return variable by simply using return after the checks.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Amritha Nambiar

Adds support to set filters with multiple header fields (L3, L4) to match on. This is achieved in the following order:

1. Create a leaf hash table for the next header.
2. Create a link to the leaf hash table from the base hash table, with matches on the next header type and current header fields.
3. Add a filter in the leaf hash table with a match on the next header fields and an action.

Verified with the following filters:

Match TCP and DIP:
    handle 1: u32 divisor 1
    u32 ht 800: order 1 link 1: \
        offset at 0 mask 0f00 shift 6 plus 0 eat \
        match ip protocol 6 ff match ip dst 10.0.0.1/32 match tcp src 28 ffff action drop

Delete the filter:

Match on DIP, SIP, UDP (SPort, DPort):
    handle 2: u32 divisor 1
    u32 ht 800: order 2 link 2: \
        offset at 0 mask 0f00 shift 6 plus 0 eat \
        match ip dst 15.0.0.2/32 match ip protocol 17 ff \
        match ip src 15.0.0.1/32 match udp src 30 ffff match udp dst 32 ffff action drop

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Acked-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 25 April 2016, 4 commits

Submitted by Jacob Keller

Several areas of ixgbe were written before widespread usage of the BIT(n) macro. With the impending release of GCC 6 and its associated new warnings, some usages such as (1 << 31) have been noted within the ixgbe driver source. Fix these wholesale and prevent future issues by simply using the BIT macro instead of hand coded bit shifts.

Also fix a few shifts that are shifting values into place by using the 'u' prefix to indicate unsigned. It doesn't strictly matter in these cases because we're not shifting by too large a value, but these are all unsigned values and should be indicated as such.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
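
For illustration, the kind of substitution the commit performs; the register-bit name is a placeholder:

#include <linux/bitops.h>   /* BIT() */

/* Before: (1 << 31) shifts into the sign bit of a signed int, which
 * newer compilers such as GCC 6 warn about.
 */
#define DEMO_CTRL_RST_OLD (1 << 31)

/* After: BIT(31) expands to an unsigned (1UL << 31), avoiding the issue */
#define DEMO_CTRL_RST     BIT(31)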
-
Submitted by Don Skidmore

It is possible on some systems that crosstalk could lead to link flap on empty SFP+ cages. A new NVM bit was defined to let software know it needs to implement the workaround, which consists of verifying that there is a module in the cage before acting on the LSC.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Sridhar Samudrala

This field is used to record the RX queue index for a redirect action passed via the ring_cookie field in struct ethtool_rx_flow_spec, which is a u64 value.

For example, after adding a filter rule to redirect to a VF using ethtool:

    # echo 4 > /sys/class/net/p4p1/device/sriov_numvfs
    # ethtool -N p4p1 flow-type ip4 src-ip 192.168.0.1 action 0x100000000

querying for the rule shows the Action as 'Direct to queue 0':

    # ethtool -n p4p1
    4 RX rings available
    Total 1 rules

    Filter: 2045
        Rule Type: Raw IPv4
        Src IP addr: 192.168.0.1 mask: 0.0.0.0
        Dest IP addr: 0.0.0.0 mask: 255.255.255.255
        TOS: 0x0 mask: 0xff
        Protocol: 0 mask: 0xff
        L4 bytes: 0x0 mask: 0xffffffff
        VLAN EtherType: 0x0 mask: 0xffff
        VLAN: 0x0 mask: 0xffff
        User-defined: 0x0 mask: 0xffffffffffffffff
        Action: Direct to queue 0

With this fix, ethtool will report the right queue index even for VFs:

        Action: Direct to queue 4294967296

Here 4294967296 corresponds to 0x100000000. We need to update 'ethtool' to report the queue index as a hex value so that it is more user friendly and matches the 'action' value that is passed when adding the rule.

Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
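
A hedged sketch of how the 64-bit ring_cookie encodes both the VF and the queue; ethtool_get_flow_spec_ring() and ethtool_get_flow_spec_ring_vf() are the kernel's standard accessors for this encoding, while the demo_ wrapper is illustrative.

/* Sketch: decode a ring_cookie from struct ethtool_rx_flow_spec.
 * The lower 32 bits select the queue and bits 39:32 select the VF
 * (0 meaning the PF), so 0x100000000 means "VF 1, queue 0".
 */
#include <linux/ethtool.h>
#include <linux/printk.h>

static void demo_decode_ring_cookie(u64 ring_cookie)
{
    u64 queue = ethtool_get_flow_spec_ring(ring_cookie);
    u64 vf = ethtool_get_flow_spec_ring_vf(ring_cookie);

    pr_info("redirect to VF %llu, queue %llu\n", vf, queue);
}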
-
Submitted by Emil Tantilov

Previously the PF driver would only set VLAN spoof checking if the VF had created VLANs. This was done by setting and checking a counter (vlan_count) whenever a VLAN was created by the VF. However, it is possible for vlan_count to be non-zero while there are no VLANs assigned to the VF, because the count increments every time VLAN 0 is added on ifdown/up, which resulted in VLAN spoofing always being set for those VFs.

This patch cleans up the logic by unconditionally setting VLAN spoof checking based on how the VF is configured (via ip link set ethX vf Y spoofchk on/off). This change also resolves an issue where the VLAN spoofing could remain set even after being disabled by the user, because the driver enabled VLAN spoof checking every time a VLAN was added to the VF but would only allow changes in the setting if vlan_count != 0.

Also, default_vf_vlan_id and vlans_enabled were removed from the vf_data_storage structure since they are not being used in the driver.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 08 April 2016, 1 commit

Submitted by Mark Rustad

Add support for x550em_a 10G MAC type to the ixgbe driver. The new MAC includes new firmware commands that need to be used to control PHY and IOSF access, so that support is also added. The interface supported is a native SFP+ interface.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 05 April 2016, 2 commits

Submitted by Mark Rustad

The source for the ops structure contents is const, so make the copies const as well. Copy them in place with structure assignments instead of memcpys. Make the mbx_ops accessed by reference instead of making a copy of the source structure. Update the copyright date on the touched files.

Reported-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
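
A small sketch of the style change: a const source operations table copied with a structure assignment rather than memcpy(). The type and field names are placeholders.

/* Sketch: copy a const ops table with a structure assignment. */
#include <linux/stddef.h>

struct demo_mac_operations {
    int (*reset_hw)(void *hw);
    int (*start_hw)(void *hw);
};

static const struct demo_mac_operations demo_mac_ops_x550 = {
    .reset_hw = NULL,    /* the real ops functions would go here */
    .start_hw = NULL,
};

static void demo_init_ops(struct demo_mac_operations *dst)
{
    /* Before: memcpy(dst, &demo_mac_ops_x550, sizeof(*dst));
     * After: a plain structure assignment, which the compiler can
     * type-check while the source stays const.
     */
    *dst = demo_mac_ops_x550;
}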
-
Submitted by Pavel Tikhomirov

It seems to have been unintentionally changed to Tx in commit adc81090 ("ixgbe: Refactor busy poll socket code to address multiple issues"). The lock is taken in ixgbe_low_latency_recv(), and under this lock we use ixgbe_clean_rx_irq(), so it looks wrong to increment the Tx counter there. Yield stats can be shown through ethtool:

    ethtool -S enp129s0 | grep yield

Signed-off-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 30 March 2016, 2 commits

Submitted by Stefan Assmann

Calling dev_close() causes IFF_UP to be cleared, which will remove the interface's routes and some addresses. That's probably not what the user intended when running the offline selftest. Besides, this does not happen if the interface is brought down before the test, so the current behaviour is inconsistent. Instead, call the net_device_ops ndo_stop function directly and avoid touching IFF_UP at all.

Signed-off-by: Stefan Assmann <sassmann@kpanic.de>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck

The VXLAN port number should be stored in network order instead of in host order as it is accessed from the hot-path in ATR. This way we can avoid having to do any byte swaps in order to validate the port number.

I moved the vxlan_port value into a hole in the read-mostly region of the adapter struct. This way it should be in a warm cache-line instead of in some isolated region in memory when it needs to be accessed.

In addition I went through and stripped a bunch of unneeded ifdef flags since having an extra variable present doesn't really hurt anything and makes the code easier to read. I also went through and dropped the NETIF_F_RXCSUM flag which was being set in hw_encap_features but provides no value as the flag is not evaluated in the Rx path.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 17 February 2016, 2 commits

Submitted by John Fastabend

This patch ensures ixgbe will not try to offload hash tables from the u32 module. The device class does not currently support this so until it is enabled just abort on these tables. Interestingly the more flexible your hardware is the less code you need to implement to guard against these cases.

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by John Fastabend

This adds initial support for offloading the u32 tc classifier. This initial implementation only implements a few base matches and actions to illustrate the use of the infrastructure patches. However it is an interesting subset because it handles the u32 next hdr logic to correctly map tcp packets from ip headers using the ihl and protocol fields. After this is accepted we can extend the match and action fields easily by updating the model header file. Also only the drop action is supported initially.

Here is a short test script:

    #tc qdisc add dev eth4 ingress
    #tc filter add dev eth4 parent ffff: protocol ip \
        u32 ht 800: order 1 \
        match ip dst 15.0.0.1/32 match ip src 15.0.0.2/32 action drop

<-- hardware has dst/src ip match rule installed -->

    #tc filter del dev eth4 parent ffff: prio 49152
    #tc filter add dev eth4 parent ffff: protocol ip prio 99 \
        handle 1: u32 divisor 1
    #tc filter add dev eth4 protocol ip parent ffff: prio 99 \
        u32 ht 800: order 1 link 1: \
        offset at 0 mask 0f00 shift 6 plus 0 eat match ip protocol 6 ff
    #tc filter add dev eth4 parent ffff: protocol ip \
        u32 ht 1: order 3 match tcp src 23 ffff action drop

<-- hardware has tcp src port rule installed -->

    #tc qdisc del dev eth4 parent ffff:

<-- hardware cleaned up -->

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 December 2015, 1 commit

Submitted by Emil Tantilov

X550 allows for up to 64 RSS queues, but the driver can have at most 63 (one MSI-X vector is reserved for link). On systems with >= 64 CPUs the driver will set the redirection table for all 64 queues, which will result in packets being dropped.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
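
A hedged sketch of a redirection-table fill with the queue count clamped, so entries wrap around the usable queues instead of pointing at a queue with no MSI-X vector behind it; the table size and names are illustrative.

/* Sketch: fill an RSS redirection table while clamping the queue count
 * to what actually has vectors behind it.
 */
#include <linux/types.h>

#define DEMO_RETA_ENTRIES   512
#define DEMO_MAX_RSS_QUEUES 63   /* 64 vectors minus one reserved for link */

static void demo_fill_reta(u8 *reta, unsigned int rss_queues)
{
    unsigned int i, rss_i = rss_queues;

    if (rss_i > DEMO_MAX_RSS_QUEUES)
        rss_i = DEMO_MAX_RSS_QUEUES;

    /* spread flows round-robin over the usable queues only */
    for (i = 0; i < DEMO_RETA_ENTRIES; i++)
        reta[i] = i % rss_i;
}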
-
- 12 December 2015, 2 commits

Submitted by Alexander Duyck

This patch is a follow-on for enabling VLAN promiscuous and allowing the PF to add VLANs without adding a VLVF entry. What this patch does is go through and free the VLVF registers if they are not needed as the VLAN belongs only to the PF which is the default pool.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck

This patch adds support for VLAN promiscuous mode with SR-IOV enabled. The code prior to this patch was only adding the PF to VLANs that the VF had added, so enabling promiscuous mode would not actually add any additional VLAN filters and visibility was limited. This led to a number of issues, as the bridge and OVS would expect us to accept all VLAN-tagged packets when promiscuous mode was enabled, while instead we would filter out most if not all of them depending on the configuration of the PF.

With this patch what we do is set all the bits in the VFTA and all of the VLVF bits associated with the pool belonging to the PF. By doing this, the PF is guaranteed to receive all VLAN-tagged traffic associated with the RAR filters assigned to the PF. In addition, we will clean up those same bits in the event of promiscuous mode being disabled.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 03 December 2015, 3 commits

Submitted by Mark Rustad

Save VF device pointers and take references to speed accesses used to monitor the device behavior to avoid slot resets. The saved information avoids lock contention during the search used to access each of the VFs.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Darin Miller <darin.j.miller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Mark Rustad

The X550EM_x devices handle clocking differently, so update the PTP implementation to accommodate them. This involves significant changes to ixgbe's PTP code to accommodate the new range of behaviors including things like non-power-of-2 clock wrapping.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Darin Miller <darin.j.miller@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck

In the process of tracking down a memory leak when adding/removing FDB entries I had to go through the MAC address configuration code for ixgbe. In doing so I found a number of issues that impacted readability and performance. This change updates the code in general to clean it up so it becomes clear what each step is doing.

From what I can tell, a couple of bugs are cleaned up in this code. First is the fact that the MAC addresses were being double counted for the PF. As a result, once entries up to 63 had been used you could no longer add additional filters. A simple test case for this:

    for i in `seq 0 96`
    do
        ip link add link ens8 name mv$i type macvlan
        ip link set dev mv$i up
    done

Test script:

    ethregs -s 0:8.0 | grep -e "RAH" | grep 8000....$

When things are working correctly, RAL/H registers 1 - 97 will be consumed. In the failing case it will stop at 63 and prevent any further filters from being added.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Darin Miller <darin.j.miller@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 23 October 2015, 1 commit

Submitted by Hiroshi Shimamoto

The limit on the number of multicast addresses per VF is not enough for large-scale servers using the SR-IOV feature. IPv6 requires a multicast MAC address for each IP address in order to handle Neighbor Solicitation messages, so we couldn't assign more than 30 IPv6 addresses to a single VF.

This patch introduces a new mailbox API, IXGBE_VF_UPDATE_XCAST_MODE, to update the multicast mode of a VF. This adds 3 modes:

- NONE      only L2 exact match addresses or Flow Director enabled
- MULTI     BAM and ROMPE set
- ALLMULTI  BAM, ROMPE and MPE set

If a guest VF user wants more than 30 multicast MAC addresses, setting IFF_ALLMULTI requests that the PF update the xcast mode to enable VF multicast promiscuous mode.

On the other hand, enabling VF multicast promiscuous mode may affect security and performance in the network of the NIC, so only a trusted VF can enable multicast promiscuous mode. The behavior of an untrusted VF is the same as in the previous version.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
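
A hedged sketch of what the VF side of such a mailbox request can look like; the mode values, the message layout, and the opcode constant are illustrative assumptions, not the driver's authoritative definitions.

/* Sketch: xcast modes a VF can request via an "update xcast mode"
 * mailbox message. All constants below are illustrative stand-ins.
 */
#include <linux/types.h>

enum demo_xcast_modes {
    DEMO_XCAST_MODE_NONE = 0,    /* L2 exact match / Flow Director only */
    DEMO_XCAST_MODE_MULTI,       /* BAM and ROMPE set */
    DEMO_XCAST_MODE_ALLMULTI,    /* BAM, ROMPE and MPE set */
};

#define DEMO_VF_UPDATE_XCAST_MODE 0x11   /* stand-in mailbox opcode */

/* VF side: request ALLMULTI when the netdev has IFF_ALLMULTI set;
 * the PF grants it only for a trusted VF.
 */
static void demo_fill_xcast_request(u32 *msgbuf, bool allmulti)
{
    msgbuf[0] = DEMO_VF_UPDATE_XCAST_MODE;
    msgbuf[1] = allmulti ? DEMO_XCAST_MODE_ALLMULTI : DEMO_XCAST_MODE_MULTI;
}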
-