- 10 October 2017, 1 commit

Committed by Alexander Duyck

The following change is meant to update the adaptive ITR algorithm to better support the needs of the network. Specifically, with this change our ITR algorithm will try to prevent either starving a socket buffer for memory in the case of Tx, or overrunning an Rx socket buffer on receive. In addition, a side effect of the calculations used is that we should function better with new features such as XDP, which can handle small packets at high rates without needing to lock us into NAPI polling mode.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 18 July 2017, 1 commit

Committed by John Fastabend

tx_rings and rx_rings are cleaned up on close paths in the ixgbe driver; however, xdp_rings are not. Set the xdp_rings to NULL here so that we can use the pointer to indicate whether the XDP rings are initialized.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 April 2017, 1 commit

Committed by John Fastabend

A couple of design choices were made here. First, I use a new ring pointer structure xdp_ring[] in the adapter struct instead of pushing the newly allocated XDP TX rings into the tx_ring[] structure. This means we have to duplicate loops around rings in places where we want to initialize both TX rings and XDP rings, but by making it explicit it is obvious when we are using XDP rings and when we are using TX rings. Further, we don't have to do ring arithmetic, which is error prone. As a proof point for doing this, my first patches used only a single ring structure and introduced bugs in the FCoE and macvlan code paths.

Second, I am aware this is not the most optimized version of this code possible. I want to get baseline support in using the most readable format possible, and once this series is included I will optimize the TX path in another series of patches.

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
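A minimal sketch of that first design point — two explicit ring arrays instead of index arithmetic inside one array. The structure and function names below are invented for illustration and are not the driver's actual code:

/* Hypothetical structures, for illustration only. */
struct example_ring { int queue_index; };

static void example_clean_ring(struct example_ring *ring)
{
	(void)ring;	/* stand-in for the real per-ring cleanup */
}

struct example_adapter {
	struct example_ring *tx_ring[64];
	struct example_ring *xdp_ring[64];
	int num_tx_queues;
	int num_xdp_queues;
};

static void example_clean_all_tx(struct example_adapter *adapter)
{
	int i;

	/* One loop per array: slightly more code, but no ring arithmetic
	 * and no ambiguity about which rings are being touched. */
	for (i = 0; i < adapter->num_tx_queues; i++)
		example_clean_ring(adapter->tx_ring[i]);

	for (i = 0; i < adapter->num_xdp_queues; i++)
		example_clean_ring(adapter->xdp_ring[i]);
}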
-
- 04 February 2017, 1 commit

Committed by Eric Dumazet

In linux-4.5, busy polling was implemented in the core NAPI stack, meaning that all custom implementations can be removed from drivers. Not only do we remove lots of code, we also remove one lock operation in the fast path and allow GRO to do its job.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 January 2017, 1 commit

Committed by Emil Tantilov

The indirection table was reported incorrectly for X550 and newer, where we can support up to 64 RSS queues.

Reported-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 23 September 2016, 1 commit

Committed by Alexander Duyck

Instead of limiting the VFs if we don't use 4 queues for RSS in the PF, we can instead just limit the RSS queues used to a power of 2. By doing this we can support use cases where VFs are using more queues than the PF is currently using, and can support RSS if so desired. The only limitation is that we cannot support 3 queues of RSS in the PF or VF; in either of these cases we should fall back to 2 queues in order to be able to use the power-of-2 masking provided by the psrtype register.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
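As a rough illustration of the fallback described above (not the driver's actual code), clamping a requested RSS queue count down to a power of two looks like this:

/*
 * Illustrative helper only: round a requested RSS queue count down to a
 * power of two so it fits psrtype-style masking.
 */
static unsigned int example_rss_limit_pow2(unsigned int requested)
{
	unsigned int limit = 1;

	if (requested == 0)
		return 0;

	while ((limit << 1) <= requested)
		limit <<= 1;

	return limit;	/* 3 -> 2, 4 -> 4, 6 -> 4, 8 -> 8, ... */
}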
-
- 08 April 2016, 1 commit

Committed by Mark Rustad

Add support for the x550em_a 10G MAC type to the ixgbe driver. The new MAC includes new firmware commands that need to be used to control PHY and IOSF access, so that support is also added. The interface supported is a native SFP+ interface.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 19 November 2015, 1 commit

Committed by Eric Dumazet

NAPI drivers no longer need to observe a particular protocol to benefit from busy polling (CONFIG_NET_RX_BUSY_POLL=y): napi_hash_add() and napi_hash_del() are automatically called from the core networking stack, respectively from netif_napi_add() and netif_napi_del().

This patch depends on free_netdev() and netif_napi_del() being called from process context, which seems to be the norm.

Drivers might still prefer to call napi_hash_del() on their own, since they might combine all the RCU grace periods into a single one, knowing their NAPI structures' lifetime, while the core networking stack has no idea of a possible combining. Once this patch proves to not bring serious regressions, we will clean up drivers to either remove napi_hash_del() or provide appropriate RCU grace period combining.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 September 2015, 1 commit

Committed by Alexander Duyck

This patch updates the lowest limit for adaptive interrupt moderation to roughly 12K interrupts per second.

The way I came about reaching 12K as the desired interrupt rate is by testing with UDP flows. Specifically, I had a simple test that ran a netperf UDP_STREAM test at varying sizes. What I found was that as the packet sizes increased, the performance fell steadily behind until we were only able to receive at ~4Gb/s with a message size of 65507. A bit of digging found that we were dropping packets for the socket in the network stack, and looking at things further I found I could solve it by increasing the interrupt rate or by increasing rmem_default/rmem_max. What I found was that when the interrupt coalescing resulted in more data being processed per interrupt than could be stored in the socket buffer, we started losing packets and the performance dropped. So I reached 12K based on the following math:

  rmem_default = 212992
  skb->truesize = 2994
  212992 / 2994 = 71.14 packets to fill the buffer
  packet rate at 1514 packet size is 812744pps
  71.14 / 812744 = 87.9us to fill socket buffer

From there it was just a matter of choosing the interrupt rate and providing a bit of wiggle room, which is why I decided to go with 12K interrupts per second, as that uses a value of 84us.

The data below is based on VM to VM over a direct assigned ixgbe interface. The test run was:
  netperf -H <ip> -t UDP_STREAM

  Socket  Message  Elapsed      Messages                   CPU      Service
  Size    Size     Time         Okay Errors   Throughput   Util     Demand
  bytes   bytes    secs            #      #   10^6bits/sec % SS     us/KB
  Before:
  212992   65507   60.00     1100662      0     9613.4     10.89    0.557
  212992           60.00      473474            4135.4     11.27    0.576
  After:
  212992   65507   60.00     1100413      0     9611.2     10.73    0.549
  212992           60.00      974132            8508.3     11.69    0.598

Using bare metal the data is similar but not as dramatic, as the throughput increases from about 8.5Gb/s to 9.5Gb/s.

Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
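For reference, a small standalone program reproduces the arithmetic behind the 12K figure; the values are hard-coded from the commit message above, not read from a live system:

#include <stdio.h>

int main(void)
{
	/* Figures quoted in the commit message above. */
	double rmem_default = 212992.0;  /* default socket receive buffer, bytes */
	double truesize     = 2994.0;    /* skb->truesize for a 1514-byte frame  */
	double pkt_rate     = 812744.0;  /* packets per second at 1514 bytes     */

	double pkts_to_fill = rmem_default / truesize;        /* ~71 packets */
	double fill_time_us = pkts_to_fill / pkt_rate * 1e6;  /* ~88 us      */

	printf("packets to fill socket buffer: %.2f\n", pkts_to_fill);
	printf("time to fill socket buffer:    %.1f us\n", fill_time_us);
	printf("84 us interrupt interval:      %.0f interrupts/sec (~12K)\n",
	       1e6 / 84.0);
	return 0;
}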
-
- 11 November 2014, 1 commit

Committed by Don Skidmore

This patch adds the new MAC defines and fits them into the switch cases throughout the driver. New functionality and enablement support will be added in following patches.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 18 September 2014, 7 commits

Committed by Jacob Keller

When we can't get MSI-X vectors, we disable a few features which require MSI-X vectors. Print warnings, just like we do when disabling DCB.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jacob Keller

Again, we should not be directly using netif_printk, as we have our own error print routines. In addition, instead of using an early return we can just use the else block of this one-line if statement.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jacob Keller

In this case, disabling DCB is not an error. We can still function, but we just have to let the user know. In addition, since we call this during probe before allocating our netdevice structure, we should use e_dev_warn instead of e_warn.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jacob Keller

Our calculated v_budget doesn't matter unless we allocate MSI-X vectors, so we shouldn't need to calculate it outside of the function. Instead, only calculate it once we attempt to acquire MSI-X vectors. This helps collocate all of the MSI-X vector code together.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jacob Keller

We already have to kfree this value if we fail, and this is only part of MSI-X mode, so we should simply allocate the value where we need it. This is cleaner, and makes it a lot more obvious why we are freeing it inside of ixgbe_acquire_msix_vectors.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jacob Keller

Similar to how ixgbevf handles acquiring MSI-X vectors, we can return an error code instead of relying on the flag being set. This makes it more clear that we have failed to set up MSI-X mode, and will also make it easier to consolidate MSI-X related code into a single function.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jacob Keller

The netif_printk helpers rely on our netdevice structure already being registered. We may call ixgbe_acquire_msix_vectors prior to registering our netdevice, so we should not use the netdevice-specific printk there.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
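A minimal sketch of the distinction (the function name is hypothetical; the real driver wraps these in its own e_dev_warn/e_warn macros): at probe time, before register_netdev() has run, messages should go through the PCI device rather than the netdevice:

#include <linux/device.h>
#include <linux/pci.h>

/* Hypothetical helper for illustration only. */
static void example_warn_no_msix(struct pci_dev *pdev)
{
	/* dev_warn() against the PCI device is safe before the netdev is
	 * registered; netdev_warn()/netif_* printing at this point would
	 * identify an unnamed, uninitialized device instead. */
	dev_warn(&pdev->dev,
		 "Failed to allocate MSI-X interrupts, falling back to MSI\n");
}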
-
- 12 September 2014, 1 commit

Committed by Alexander Duyck

This change addresses several issues in the current ixgbe implementation of busy poll sockets.

First was the fact that it was possible for frames to be delivered out of order if they were held in GRO. This is addressed by flushing the GRO buffers before releasing the q_vector back to the idle state.

The other issue was the fact that we were having to take a spinlock on changing the state to and from idle. To resolve this I have replaced the state value with an atomic and use atomic_cmpxchg to change the value from idle, and a simple atomic set to restore it back to idle after we have acquired it. This allows us to only use a locked operation on acquiring the vector, without the need for a locked operation to release it.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
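A minimal sketch of that acquire/release scheme, assuming kernel-style atomics; the state names and structure are illustrative rather than the driver's actual fields:

#include <linux/atomic.h>
#include <linux/types.h>

#define EXAMPLE_QV_IDLE	0
#define EXAMPLE_QV_POLL	1

struct example_q_vector {
	atomic_t state;
};

/* Acquire for busy polling: the only locked operation in the scheme. */
static inline bool example_qv_lock_poll(struct example_q_vector *qv)
{
	return atomic_cmpxchg(&qv->state, EXAMPLE_QV_IDLE, EXAMPLE_QV_POLL) ==
	       EXAMPLE_QV_IDLE;
}

/* Release back to idle: a plain atomic store, no locked instruction. */
static inline void example_qv_unlock_poll(struct example_q_vector *qv)
{
	atomic_set(&qv->state, EXAMPLE_QV_IDLE);
}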
-
- 04 September 2014, 1 commit

Committed by Jacob Keller

Since we previously called ixgbe_set_num_queues just prior to attempting to set our interrupt scheme, it may not be obvious why we have to call it again inside the function. Add a comment which helps make it clearer that we are resetting features because we do not have MSI-X enabled and cannot use the previous settings.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 23 May 2014, 1 commit

Committed by Jacob Keller

This patch fixes various log strings that are split over multiple lines in the ixgbe driver. This cleans up checkpatch.pl warnings and makes it easier to search the code for warning strings displayed in the kernel log.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 13 March 2014, 1 commit

Committed by Jacob Keller

This patch updates the contact information in the ixgbe driver files so that every file includes the Linux NICS address, as it is still used but only a few of the files mentioned it.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 19 February 2014, 1 commit

Committed by Alexander Gordeev

As a result of the deprecation of the MSI-X/MSI enablement functions pci_enable_msix() and pci_enable_msi_block(), all drivers using these two interfaces need to be updated to use the new pci_enable_msi_range() and pci_enable_msix_range() interfaces.

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Bruce Allan <bruce.w.allan@intel.com>
Cc: e1000-devel@lists.sourceforge.net
Cc: netdev@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
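The general shape of such a conversion, as a hedged sketch rather than the exact ixgbe hunk (the msix_entry array is assumed to already have its .entry indices filled in):

#include <linux/pci.h>

static int example_acquire_msix(struct pci_dev *pdev,
				struct msix_entry *entries,
				int min_vecs, int wanted_vecs)
{
	int ret;

	/* pci_enable_msix_range() returns the number of vectors granted
	 * (at least min_vecs) or a negative errno; unlike the old
	 * pci_enable_msix() loop, no manual retry with a smaller count
	 * is needed. */
	ret = pci_enable_msix_range(pdev, entries, min_vecs, wanted_vecs);
	if (ret < 0)
		return ret;	/* caller falls back to MSI or legacy IRQs */

	return ret;		/* number of MSI-X vectors acquired */
}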
-
- 08 November 2013, 1 commit

Committed by John Fastabend

Now that l2 acceleration ops are in place from the prior patch, enable ixgbe to take advantage of these operations. Allow it to allocate queues for a macvlan so that when we transmit a frame, we can do the switching in hardware inside the ixgbe card rather than in software.

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 June 2013, 1 commit

Committed by Eliezer Tamir

Add the ixgbe driver code implementing ndo_ll_poll. This adds the ndo_ll_poll method and locking between it and the napi poll. When receiving a packet we use skb_mark_ll to record the napi it came from. Add each napi to the napi_hash right after netif_napi_add().

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 February 2013, 2 commits

Committed by Alexander Duyck

This change adds support for ixgbe to configure the XPS queue mapping on load. The result of this change is that on open we will now be resetting the number of Tx queues and then setting the default configuration for XPS based on whether ATR is enabled or disabled.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Reviewed-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

Instead of adjusting the FCoE and Flow Director limits based on the number of CPUs, we can define them much sooner. This allows the user to come through later and adjust them once we have updated the code to support the set_channels ethtool operation.

I am still allowing for FCoE and RSS queues to be separated if the number of queues is less than the number of CPUs. This essentially treats the two groupings like two separate traffic classes.

In addition, I am changing the initialization to use the MAX_TX/RX_QUEUES defines instead of trying to compute the value, as it will be possible in upcoming patches for the user to request the maximum number of queues. I have also updated things so that the upper limit on queues is exactly 63 instead of allowing it to go up to 64. The reason for this change is that the driver only supports up to 63 queue vectors, since the hardware supports 64 MSI-X vectors but one must be reserved for "other" causes.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
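The 63-queue ceiling is just the MSI-X table size minus the one vector held back for non-queue ("other") causes; spelled out with illustrative macro names rather than the driver's actual defines:

/* Illustrative defines, not the driver's actual macro names. */
#define EXAMPLE_MAX_MSIX_VECTORS	64	/* MSI-X vectors the hardware exposes */
#define EXAMPLE_NON_QUEUE_VECTORS	1	/* reserved for link/"other" causes   */
#define EXAMPLE_MAX_QUEUE_VECTORS	\
	(EXAMPLE_MAX_MSIX_VECTORS - EXAMPLE_NON_QUEUE_VECTORS)	/* = 63 */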
-
- 05 February 2013, 1 commit

Committed by Don Skidmore

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 01 November 2012, 1 commit

Committed by Emil Tantilov

The q_vector->itr check in ixgbe_configure_tx_ring() was done before the value was set, which resulted in TXDCTL.WTHRESH always being set to 1 on driver load, while subsequent resets would set it to 8. This patch moves the setting of q_vector->itr into ixgbe_alloc_q_vector() to make sure that TXDCTL.WTHRESH is set to 8 by default.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 19 October 2012, 1 commit

Committed by Alexander Duyck

When enabling DCB, the rings belonging to a q_vector on CPU 0 were not reinitializing their DCA registers. Upon closer inspection, the issue was that the q_vector CPU variable was left at 0, resulting in the driver not updating the DCA registers. In order to guarantee the DCA registers will be updated, I am adding a couple-line change so that we initialize the CPU variable to -1, which will force a DCA update the first time an interrupt fires on that q_vector.

In addition, we were setting the CPU affinity hint to all CPUs when we were not specifying a CPU. Instead we should leave it as all zeros to avoid any possible confusion about the fact that we shouldn't be giving a hint.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
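The -1 trick in miniature — only the idea from the commit above, with hypothetical names rather than the driver's actual code:

/* Hypothetical structure for illustration. */
struct example_q_vector {
	int cpu;	/* CPU the DCA registers were last programmed for */
};

static void example_q_vector_init(struct example_q_vector *qv)
{
	qv->cpu = -1;	/* impossible CPU id: forces an update on first IRQ */
}

static void example_update_dca(struct example_q_vector *qv, int this_cpu)
{
	if (qv->cpu == this_cpu)
		return;		/* registers already target this CPU */

	/* ... reprogram the DCA target registers for this_cpu here ... */
	qv->cpu = this_cpu;
}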
-
- 22 July 2012, 3 commits

Committed by Alexander Duyck

This change makes it so that we can use 1TC DCB in the case of MSI and legacy interrupts. The advantage is that it allows us to fully support FCoE with DCB instead of having to drop to link flow control only when using these interrupt modes.

Cc: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

This change makes it so that we can use atr_sample_rate to determine whether we are capable of supporting ATR. The advantage to this approach is that it allows us to determine the setting of IXGBE_FLAG_FDIR_HASH_CAPABLE based on the queueing scheme, instead of the queueing scheme being based on the flag.

Using this approach there are essentially five conditions that must be checked prior to trying to enable ATR:
 1. Is SR-IOV disabled?
 2. Is the number of TCs <= 1?
 3. Is the RSS queueing limit greater than 1?
 4. Is atr_sample_rate set?
 5. Is Flow Director perfect filtering disabled?

If any of these checks fail, ATR filtering should be disabled. Note that when conditions 1 through 4 are met we will set things up for ATR queueing; however, if test 5 fails we will still leave the queues allocated for use by perfect filters. The reason for this is to allow us to switch back and forth between ntuple and ATR without needing to reallocate the descriptor rings.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
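Condensed into a sketch (field and function names invented for illustration; the real driver reads its own adapter flags), the five checks above look like:

#include <stdbool.h>

/* Hypothetical inputs mirroring the five conditions listed above. */
struct example_atr_inputs {
	bool sriov_enabled;
	int  num_tcs;
	int  rss_queue_limit;
	int  atr_sample_rate;
	bool fdir_perfect_enabled;
};

static bool example_atr_allowed(const struct example_atr_inputs *in)
{
	if (in->sriov_enabled)			/* 1. SR-IOV must be disabled      */
		return false;
	if (in->num_tcs > 1)			/* 2. at most one traffic class    */
		return false;
	if (in->rss_queue_limit <= 1)		/* 3. need more than one RSS queue */
		return false;
	if (!in->atr_sample_rate)		/* 4. a sample rate must be set    */
		return false;
	if (in->fdir_perfect_enabled)		/* 5. perfect filtering disabled   */
		return false;
	return true;
}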
-
Committed by Alexander Duyck

This is meant to fix a bug in which we were not checking for pre-existing VFs if we were not setting the max_vfs value at driver load. What happens now is that we always call ixgbe_enable_sriov, which checks for pre-existing or requested VFs prior to deciding on no SR-IOV.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 19 July 2012, 2 commits

Committed by Alexander Duyck

All of our hardware supports RSS, even if it is only for a single queue. So instead of toting around the RSS enable flag, I am updating the code so that RSS is enabled on all devices, and if we want to disable RSS that is indicated via the RSS mask.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

This change essentially makes it so that we can enable almost all of the features at once. This patch allows for the combination of SR-IOV, DCB, and FCoE in the case of the x540. It also beefs up SR-IOV by adding RSS support to the PF. The testing matrix gets very complex for this patch, as there are a number of different features and subsets of queueing options. I tried to narrow these down a bit by restricting the PF to only supporting 4TC DCB when it is enabled in addition to SR-IOV.

Cc: Greg Rose <gregory.v.rose@intel.com>
Cc: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 18 July 2012, 2 commits

Committed by Alexander Duyck

This change cleans up some of the logic in an attempt to simplify how we configure DCB with RSS. In this patch I did three things: I updated the logic for getting the first register index; I applied the fact that all TCs get the same number of queues to simplify the looping logic when caching the DCB ring register; and I updated how we configure the RQTC register to match the fact that all TCs are assigned the same number of queues.

Cc: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

It makes much more sense for us to configure the real number of Tx and Rx queues in the ixgbe_open call than in ixgbe_set_num_queues. By setting the number in ixgbe_open we can avoid a number of unnecessary updates and only have to make the calls once.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
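The networking core's calls for setting the real queue counts are netif_set_real_num_tx_queues() and netif_set_real_num_rx_queues(); a hedged sketch of how an open path might use them (not the exact ixgbe code):

#include <linux/netdevice.h>

static int example_open_set_real_queues(struct net_device *netdev,
					unsigned int tx, unsigned int rx)
{
	int err;

	err = netif_set_real_num_tx_queues(netdev, tx);
	if (err)
		return err;

	return netif_set_real_num_rx_queues(netdev, rx);
}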
-
- 15 July 2012, 2 commits

Committed by Alexander Duyck

This change merges the ixgbe_cache_ring_fcoe and ixgbe_set_fcoe_queues logic into the DCB and RSS initialization calls.

Cc: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

In upcoming patches it will become increasingly common to need to determine the FCoE traffic class in order to determine the correct queues for FCoE. In order to make this easier, I am adding a function for obtaining the FCoE traffic class based on the user priority.

Cc: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 11 July 2012, 2 commits

Committed by Alexander Duyck

There are really only three modes that can control the number of queues: RSS, DCB, and VMDq/SR-IOV. Currently things are much more broken up than they need to be for how we are configuring the rings. In order to straighten some of this out, I am going to start merging similar functionality into single functions. To start with, I am merging the Flow Director ring configuration into the RSS ring configuration, since Flow Director cannot function with DCB or SR-IOV.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Alexander Duyck

The mask value for ring features was overloaded for FCoE, which could lead to some confusion. In order to avoid any confusion I am splitting out the mask value and adding an offset value. This can be used for the start of the FCoE rings, and in the future I hope to use it to store the start of the registers for SR-IOV.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
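In outline, the split means a ring-feature entry carries the mask and the starting ring index as separate fields; a sketch with illustrative names rather than the driver's exact structure:

#include <linux/types.h>

/* Illustrative only; the real ixgbe structure may differ. */
struct example_ring_feature {
	u16 limit;	/* upper limit on queues for this feature      */
	u16 indices;	/* number of queues currently in use           */
	u16 mask;	/* hashing/steering mask for the feature       */
	u16 offset;	/* index of the first ring used by the feature */
};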
-