- 16 April 2015, 17 commits
-
-
Committed by Sagi Grimberg
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
For singleton scatterlists, the same DMA memory registration code is executed in both the fastreg and FMR code paths. Move it into a shared helper function. This patch does not change any functionality.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
Instead of passing an ib_sge as an output variable, we pass the mem_reg pointer so the routines fill in the rkey as well. This reduces code duplication and extra assignments. This is a preparation step towards unifying some of the registration logic. Also, pass iser_fast_reg_mr the fastreg descriptor directly. This patch does not change any functionality.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
No need to keep separate lkey, va and len variables; we can keep them in a struct ib_sge. This will help when we change the memory registration logic. This patch does not change any functionality.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
Memory regions are resources that are saved in the device caches. Increase the probability of a cache hit by returning the most recently used (MRU) descriptor to the pool head.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
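A minimal sketch of the idea, assuming the pool is a kernel list_head protected by a spinlock (the lock and field names here are illustrative, not taken from the patch):

    /*
     * Returning a freed descriptor to the *head* of the pool means the
     * next allocation reuses the most recently used MR, which is more
     * likely to still be resident in the device cache; list_add_tail()
     * would give LRU behavior instead.
     */
    static void iser_reg_desc_put(struct ib_conn *ib_conn,
                                  struct fast_reg_descriptor *desc)
    {
            unsigned long flags;

            spin_lock_irqsave(&ib_conn->lock, flags);
            list_add(&desc->list, &ib_conn->fastreg.pool);
            spin_unlock_irqrestore(&ib_conn->lock, flags);
    }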
-
Committed by Sagi Grimberg
Make iser_[create|destroy]_fastreg_desc shorter, more readable and more easily extendable. This patch does not change any functionality.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
Instead of open-coding the connection fastreg pool get/put, we introduce iser_reg_desc_[get|put] helpers. We aren't making these static, as they will become per-device routines later on. Also, clean up iser_unreg_rdma_mem_fastreg a bit. This patch does not change any functionality.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
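A hedged sketch of what the get side of such helpers can look like, using the same assumed lock and field names as the pool-head sketch above:

    static struct fast_reg_descriptor *
    iser_reg_desc_get(struct ib_conn *ib_conn)
    {
            struct fast_reg_descriptor *desc;
            unsigned long flags;

            spin_lock_irqsave(&ib_conn->lock, flags);
            desc = list_first_entry(&ib_conn->fastreg.pool,
                                    struct fast_reg_descriptor, list);
            list_del(&desc->list);
            spin_unlock_irqrestore(&ib_conn->lock, flags);

            return desc;
    }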
-
Committed by Sagi Grimberg
No need for these two to be separate; keep them in a single routine, as in the fastreg case. This will also bring iser_reg_page_vec closer to the iser_fast_reg_mr arguments. This is a preparation step for the registration flow refactor. This patch does not change any functionality.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
This struct's members other than struct iser_mem_reg are unused, so remove it altogether. This patch does not change any functionality.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
The buffer length was assigned twice, and there is no reason to set va to io_addr and then add the offset; just set va to io_addr + offset. This patch does not change any functionality.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
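A before/after sketch of the simplification, with variable names assumed from the description:

    /* Before: redundant assignment followed by an in-place add. */
    reg->va  = io_addr;
    reg->va += offset;

    /* After: a single assignment. */
    reg->va = io_addr + offset;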
-
Committed by Sagi Grimberg
As these are memory registration/de-registration methods, let's move them to their natural location. While we're at it, make the iser_reg_page_vec routine static. This patch does not change any functionality.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
No need to pass that; we can take it from the task. At a later stage, this function will be invoked according to a device capability. This patch does not change any functionality.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
No need to keep two iser_data_buf structures around just in case we use a memory copy. We can avoid that simply by adding a pointer to the original sg. So keep only two iser_data_buf structures per command (data and protection) and pass the relevant data_buf to the bounce buffer routine. This patch does not change any functionality.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
This code was added before we had protection data length calculation (in iser_send_command), so we needed to calculate the sg data length from the sg itself. This is no longer needed. This patch does not change any functionality.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Adir Lev <adirl@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
This length miscalculation may cause silent data corruption in the DIX case and cause the device to reference an unmapped area.
Fixes: d77e6535 ('libiscsi, iser: Adjust data_length to include protection information')
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
Fast registration and local invalidate work requests can also fail. We should call the error completion handler for them.
Reported-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Committed by Sagi Grimberg
In case the user unloads ib_iser while ep_connect is in progress, we need to destroy the endpoint even though ep_disconnect wasn't invoked (we detect this by the iser conn state != DOWN). However, if we got a REJECTED/UNREACHABLE CM event, we move the connection state to DOWN, which will prevent us from destroying the endpoint at the module unload stage. Fix this by setting the connection state to TERMINATING in iser_conn_error, so we can still destroy the endpoint at unload stage.
Reported-by: Ariel Nahum <arieln@mellanox.com>
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 15 April 2015, 23 commits
-
-
Committed by David Rientjes
Allocating a large number of elements in atomic context could quickly deplete memory reserves, so just disallow atomic resizing entirely. Nothing currently uses mempool_resize() with anything other than GFP_KERNEL, so convert existing callers to drop the gfp_mask.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: David Rientjes <rientjes@google.com>
Acked-by: Steffen Maier <maier@linux.vnet.ibm.com> [zfcp]
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Steve French <sfrench@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
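For callers, the interface change looks roughly like this (a sketch; only the trailing gfp_mask argument is removed):

    /* Before: callers passed a gfp_mask, which could be an atomic one. */
    ret = mempool_resize(pool, new_min_nr, GFP_KERNEL);

    /* After: no gfp_mask; the function allocates with GFP_KERNEL
     * internally and may sleep, so it must only be called from
     * process context. */
    ret = mempool_resize(pool, new_min_nr);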
-
Committed by Vladimir Davydov
Currently, cleancache_register_ops returns the previous value of cleancache_ops to allow chaining. However, chaining as currently implemented is extremely dangerous due to possible pool id collisions. Suppose a new cleancache driver is registered after the previous one has assigned an id to a super block. If the new driver assigns the same id to another super block, which is perfectly possible, we will have two different filesystems using the same id. Whether or not the new driver implements chaining, we are likely to eventually get data corruption with such a configuration.

This patch therefore disables the ability to override cleancache_ops altogether, as potentially dangerous. If a cleancache driver is already registered, all further calls to cleancache_register_ops will return EBUSY. Since no user of cleancache implements chaining, we only need to make minor changes to the code outside the cleancache core.
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Mark Fasheh <mfasheh@suse.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Stefan Hengelein <ilendir@googlemail.com>
Cc: Florian Schmaus <fschmaus@gmail.com>
Cc: Andor Daam <andor.daam@googlemail.com>
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
Cc: Bob Liu <lliubbo@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
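A hedged sketch of what the non-chaining registration can reduce to; the atomic compare-and-swap is an assumption about the implementation, not taken from the patch:

    /* Refuse to override an already-registered backend. */
    int cleancache_register_ops(struct cleancache_ops *ops)
    {
            if (cmpxchg(&cleancache_ops, NULL, ops))
                    return -EBUSY; /* a driver is already registered */

            return 0;
    }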
-
Committed by Konstantin Khlebnikov
This patch replaces cancel_dirty_page() with a helper function, account_page_cleaned(), which only updates counters. It's called from truncate_complete_page() and from try_to_free_buffers() (a hack for ext3). The page is locked in both cases, and the page lock protects against concurrent dirtiers: see commit 2d6d7f98 ("mm: protect set_page_dirty() from ongoing truncation").

delete_from_page_cache() shouldn't be called for dirty pages; they must be handled by the caller (either written or truncated). This patch treats the final dirty accounting fixup at the end of __delete_from_page_cache() as a debug check and adds WARN_ON_ONCE() around it. If something removes dirty pages without proper handling, that might be a bug and unwritten data might be lost. Hugetlbfs has no dirty page accounting, so ClearPageDirty() is enough there. cancel_dirty_page() in nfs_wb_page_cancel() is redundant: it is a helper for nfs_invalidate_page() and is called only in the case of complete invalidation.

The mess started in v2.6.20 after commits 46d2277c ("Clean up and make try_to_free_buffers() not race with dirty pages") and 3e67c098 ("truncate: clear page dirtiness before running try_to_free_buffers()"); the first was reverted right in v2.6.20 in commit ecdfc978 ("Resurrect 'try_to_free_buffers()' VM hackery"), the second in v2.6.25 by commit a2b34564 ("Fix dirty page accounting leak with ext3 data=journal"). Custom fixes were introduced between these points: NFS in v2.6.23, commit 1b3b4a1a ("NFS: Fix a write request leak in nfs_invalidate_page()"); a kludge in __delete_from_page_cache() in v2.6.24, commit 3a692790 ("Do dirty page accounting when removing a page from the page cache"). Since v2.6.25 all of them are redundant.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Tejun Heo <tj@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
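A sketch of what the accounting-only helper plausibly looks like; the specific counters shown are an assumption based on what cancel_dirty_page() historically adjusted:

    /* Only update dirty accounting; the caller handles everything else. */
    void account_page_cleaned(struct page *page, struct address_space *mapping)
    {
            if (mapping_cap_account_dirty(mapping)) {
                    dec_zone_page_state(page, NR_FILE_DIRTY);
                    dec_bdi_stat(inode_to_bdi(mapping->host), BDI_RECLAIMABLE);
                    task_io_account_cancelled_write(PAGE_CACHE_SIZE);
            }
    }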
-
Committed by David Rientjes
There's a deadlock when concurrently hot-adding memory through the probe interface and switching a memory block from offline to online. When hot-adding memory via the probe interface, add_memory() first takes mem_hotplug_begin(), and device_lock() is taken later when registering the newly initialized memory block. This creates a lock dependency of (1) mem_hotplug.lock (2) dev->mutex. When switching a memory block from offline to online, dev->mutex is first grabbed in device_online() when the write(2) transitions an existing memory block from offline to online, and then online_pages() takes mem_hotplug_begin(). This creates a lock inversion between mem_hotplug.lock and dev->mutex. Vitaly reports that this deadlock can happen when a kworker handling a probe event races with systemd-udevd switching a memory block's state.

This patch requires the state transition to take mem_hotplug_begin() before dev->mutex. Hot-adding memory via the probe interface creates a memory block while holding mem_hotplug_begin(), so there is no way to take dev->mutex first in this case. online_pages() and offline_pages() are only called when transitioning memory block state. We now require that mem_hotplug_begin() is taken before calling them -- this requires exporting mem_hotplug_begin() and mem_hotplug_done() to generic code. In all hot-add and hot-remove cases, mem_hotplug_begin() is done prior to device_online(). This is all that is needed to avoid the deadlock.
Signed-off-by: David Rientjes <rientjes@google.com>
Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Zhang Zhen <zhenzhang.zhang@huawei.com>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Cc: Wang Nan <wangnan0@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
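An illustrative sketch of the resulting lock ordering when transitioning a block's state; the function name and exact call site are assumptions that follow the description above:

    static int memory_block_change_state_sketch(struct device *dev, bool online)
    {
            int ret;

            mem_hotplug_begin();                  /* (1) hotplug lock first */
            ret = online ? device_online(dev)     /* (2) takes dev->mutex */
                         : device_offline(dev);
            mem_hotplug_done();

            return ret;
    }

With this ordering, both the probe path and the state-transition path agree on mem_hotplug.lock before dev->mutex, so the inversion disappears.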
-
Committed by Sheng Yong
Use the macro section_nr_to_pfn() to convert between section number and pfn, instead of open-coding it. No semantic changes.
Signed-off-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
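For reference, the macro (from include/linux/mmzone.h) is essentially:

    #define section_nr_to_pfn(sec) ((sec) << PFN_SECTION_SHIFT)

so open-coded "section_nr << PFN_SECTION_SHIFT" sites collapse to a single named conversion.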
-
Committed by Jeff Kirsher
With the recent driver changes, bump the version.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
-
Committed by Jeff Kirsher
VFs were being improperly added to the switch's multicast group. The error stems from incorrect arguments being passed to the "update_mc_addr" function. It appears to be a copy-paste error, since the parameters are similar to those of the "update_uc_addr" function.
Signed-off-by: Ngai-Mint Kwan <ngai-mint.kwan@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
When we call update_max_size, it does not drop all oversized messages. This is due to the difficulty of performing this operation on a FIFO, which makes updating anything other than the head or tail very difficult. To fix this, modify validate_msg_size to ensure that we error out later when trying to transmit a message that could be oversized. This will generally be a rare condition, as it requires the FIFO to contain a message larger than the max_size negotiated during mailbox connect. Note that max_size is always smaller than rx.size, so it should be safe to use here. Also, update the update_max_size function header comment to clearly indicate that it does not drop all oversized messages, only those at the head of the FIFO.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
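A sketch of the transmit-time check described above; the struct and error names are assumptions in the style of the fm10k mailbox code, not copied from the patch:

    /* Reject at transmit time any queued message that no longer fits
     * within the max_size negotiated during mailbox connect. */
    static s32 fm10k_mbx_validate_msg_size_sketch(struct fm10k_mbx_info *mbx,
                                                  u16 len)
    {
            if (len > mbx->max_size)
                    return FM10K_MBX_ERR_SIZE;

            return 0;
    }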
-
Committed by Jeff Kirsher
When we forcefully shut down the mailbox, we then go about resetting the max size to 0 and clearing all messages in the FIFO. Instead, we should just reset the head pointer so that the FIFO becomes empty, rather than changing the max size to 0. This helps prevent incrementing the tx_dropped counter during mailbox negotiation, which is confusing to viewers of Linux ethtool statistics output.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
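Conceptually, emptying the FIFO is a one-liner; this sketch assumes the head/tail field names of the fm10k mailbox FIFO:

    /* Make the FIFO empty without touching max_size and without
     * counting anything as dropped. */
    static void fm10k_fifo_drop_all_sketch(struct fm10k_mbx_fifo *fifo)
    {
            fifo->head = fifo->tail;
    }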
-
Committed by Jeff Kirsher
The "dropped" stat doesn't really mean dropped mailbox messages, but rather specifically messages that were too large to fit in the remote Rx FIFO. Rename the stat to indicate more clearly what it means.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
When the PF receives a request to update a multicast address for the VF, it checks the enabled multicast mode first. Fix a bug where the VF tried to set a multicast address before requesting the required xcast mode. This ensures the multicast addresses are honored as long as the xcast mode is allowed.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
Since the service task handles varying work, not all of which requires the interface to be up, launch the service timer immediately. This ensures that we continually check the mailbox, as well as handle other tasks, while the device is down.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
The header comment included a miscopied C-code line, and also misused "Rx FIFO" where it clearly meant "Tx FIFO".
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
Add a header comment explaining why we have the somewhat crazy mailbox flow. This flow is necessary, as it prevents the PF<->SM mailbox from being flooded by the VF messages, which normally trigger a message to the PF. This helps prevent the case where we see a PF mailbox timeout.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
Since we already schedule the service task, we can just wait for this task to handle the mailbox events from the VF. This reduces some complex code flow and gives us a single path for handling the VF messages. There is a possibility of a slight delay in handling VF messages, but it should be minimal.

The result of tx_complete and !rx_ready is insufficient to determine whether we need to process the mailbox. There is a possible race condition whereby the VF fills up the mbmem for us, but we have already recently processed the mailboxes in the interrupt. During this time, the interrupt is disabled. Thus, our Rx FIFO is empty, but the mbmem now has data in it. Since we continually check whether the Rx FIFO is empty, we then never call process. This can prevent the PF from handling the VF mailbox messages.

Instead, just call process every time, despite the fact that we may or may not have anything to process for the VF. There should be minimal overhead in doing this, and it resolves an issue where the VF never comes up because it never gets a response to its SET_LPORT_STATE message.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
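A sketch of the control-flow change; the ops callback names are assumptions modeled on the fm10k mailbox ops table:

    /* Before: skip processing when our Rx FIFO looks empty, which can
     * miss data the VF already wrote into mbmem while the interrupt
     * was disabled. */
    if (!mbx->ops.tx_complete(mbx) || mbx->ops.rx_ready(mbx))
            mbx->ops.process(hw, mbx);

    /* After: always process; the call is cheap when there is no work. */
    mbx->ops.process(hw, mbx);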
-
Committed by Jeff Kirsher
Since we run the watchdog periodically, which might take a while and potentially monopolize the system default workqueue, create our own separate workqueue. This also helps reduce and stabilize latency between scheduling the work in our interrupt and actually performing the work. Still use a timer for the regularly scheduled interval, but queue the work onto our own workqueue.

It seemed overkill to create a single workqueue per interface, so we just spawn a single workqueue for all interfaces upon driver load. For this reason, use a multi-threaded workqueue with one thread per processor, rather than a single-threaded queue.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
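A minimal sketch of the module-load/unload half of this; the workqueue flags and the fm10k_driver reference are assumptions for illustration:

    static struct workqueue_struct *fm10k_workqueue;

    static int __init fm10k_init_module(void)
    {
            /* max_active = 0 selects the default per-CPU concurrency */
            fm10k_workqueue = alloc_workqueue("fm10k", WQ_MEM_RECLAIM, 0);
            if (!fm10k_workqueue)
                    return -ENOMEM;

            return pci_register_driver(&fm10k_driver);
    }

    static void __exit fm10k_exit_module(void)
    {
            pci_unregister_driver(&fm10k_driver);
            destroy_workqueue(fm10k_workqueue);
    }

The service timer then only re-arms itself and calls queue_work() against this driver-wide queue instead of the system default one.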
-
Committed by Jeff Kirsher
When returning virtualization queues from the VF back to the PF, do not retain the VF rate limiter.
Signed-off-by: Todd Russell <todd.a.russell@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
Name it tx_hang_count to differentiate it from tx_hwtstamp_timeout.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
We were incrementing tx_timeout_count both for a Tx hang and for all reset flows. Instead, we should only increment tx_timeout_count in the Tx hang path, so that our Tx hang counter does not increment when the reset was not caused by a Tx hang.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
Since we already print this message when a reset is requested via the RESET_REQUESTED flag, we do not need to print it before setting the flag.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
This patch resolves an issue with ethtool stats displaying useless values on the VF, because some stats simply have no meaning for the VF. Resolve this by splitting these out into PF_STATS and only showing them if we aren't the VF.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
Even though it shouldn't strictly matter, don't count queue stats higher than the max_queues value stored for this mac. This ensures that we don't attempt to check queues which don't belong to us in the VF case. This shouldn't be a visible change, as the VFs should see zero for queues which don't belong to them.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
Currently, we show statistics for all 128 queues, even though we don't necessarily have that many queues available, especially in the VF case. Instead, use the hw->mac.max_queues value, which tells us how many queues we actually have, rather than the space for the rings we allocated. In this way, we prevent dumping statistics that are useless on the VF.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
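The key change is the loop bound; a sketch with a hypothetical per-queue helper standing in for the actual stat-copy code:

    /* Walk only the queues that actually belong to this function,
     * rather than the full 128-queue register space. */
    for (i = 0; i < interface->hw.mac.max_queues; i++)
            fm10k_add_queue_stats_sketch(interface, i); /* illustrative */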
-