- 29 April 2015, 14 commits
-
-
Committed by Liad Kaufman
Previously there was a check that compared window->average_tpt to some value and, if it was different, set it to that value. However, this value was already calculated and set in _rs_collect_tx_data(), so the entire check is unneeded.
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Liad Kaufman
This adds support for configuring and retrieving the FW monitor in MARBH mode.
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Luciano Coelho
Net-detect scans were using the same type as sched scan, which was causing the driver to return -EBUSY and prevent the system from suspending if there was an ongoing scheduled scan. To avoid this, add a new type for net-detect and don't stop anything when it is requested, so that the existing scheduled scan will be resumed when the system wakes up.
Signed-off-by: Luciano Coelho <luciano.coelho@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Luciano Coelho
Move all the scan code that was in mac80211.c to scan.c, where it belongs, leaving only the parts that are specific to the mac80211 ops. Change some function definitions slightly to improve consistency.
Signed-off-by: Luciano Coelho <luciano.coelho@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Luciano Coelho
All scans are using the unified APIs now, so using "unified" in the symbols is useless and just makes them much longer; the main difference between scans now is LMAC vs. UMAC. Remove "unified" from all relevant symbols.
Signed-off-by: Luciano Coelho <luciano.coelho@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Luciano Coelho
Instead of hardcoding the differences between UMAC scans and LMAC scans (which in this case is the number of simultaneous scans that can run), introduce a max_scans variable and stop scans of the other type (i.e. stop sched scan if a regular scan is being attempted, and vice versa) if the number of running scans has reached the maximum. Add a function that checks whether the maximum number of scans has been reached and stops the appropriate scan to make room for the new one.
Signed-off-by: Luciano Coelho <luciano.coelho@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
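A minimal sketch of the kind of check this describes, using made-up identifiers rather than the actual iwlmvm symbols:

```c
/* Illustrative only: not the driver's actual identifiers. */
#include <errno.h>

#define SCAN_REGULAR (1u << 0)
#define SCAN_SCHED   (1u << 1)

/*
 * If the number of running scans has reached max_scans, pick a scan of
 * the other type to stop so the new request can proceed.  Returns the
 * bit of the scan that must be stopped, 0 if nothing needs stopping,
 * or -EBUSY if no room can be made.
 */
static int scan_to_stop(unsigned int scan_status, unsigned int num_running,
                        unsigned int max_scans, unsigned int new_type)
{
	if (num_running < max_scans)
		return 0;

	if (new_type == SCAN_REGULAR && (scan_status & SCAN_SCHED))
		return SCAN_SCHED;	/* stop the sched scan for a regular one */
	if (new_type == SCAN_SCHED && (scan_status & SCAN_REGULAR))
		return SCAN_REGULAR;	/* and vice versa */

	return -EBUSY;
}
```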
-
Committed by Luciano Coelho
If a new scan cannot be run for some reason, we shouldn't cancel other ongoing scans. Move the checks to before the code that cancels other scans.
Signed-off-by: Luciano Coelho <luciano.coelho@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Luciano Coelho
Now that we have separate flags for stopping scans, we don't need to wait for the scan-stopped work to complete before starting the new scan. Previously we needed it because we had no way of distinguishing the scan that was being stopped from the scan that was currently running. With the new flags there won't be any confusion, and we are able to handle the stop for the correct type of scan. Thus we can remove the iwl_mvm_cancel_scan_wait_notif() function.
Signed-off-by: Luciano Coelho <luciano.coelho@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Luciano Coelho
LMAC scans cannot handle more than one scan at a time, but UMAC scans can. To avoid confusion we should combine the states of these two types of scans, and to do so we need to support multiple scans at the same time for UMAC. This commit changes the scan_status element from a single value to a bitmask of running scan types for LMAC. Later, we will modify UMAC scans to use the same state bitmask. Additionally, add stopping-scan flags for scheduled and regular scans. This makes it easier to differentiate and handle stop requests triggered by the driver versus spontaneous stops generated by the firmware.
Signed-off-by: Luciano Coelho <luciano.coelho@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
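A rough sketch of the state change, with invented flag names (the real driver defines its own enum):

```c
/* Invented names for illustration; the driver's flags differ. */
enum scan_status_bits {
	SCAN_REGULAR_RUNNING  = 1 << 0,
	SCAN_SCHED_RUNNING    = 1 << 1,
	SCAN_STOPPING_REGULAR = 1 << 2,	/* stop requested by the driver */
	SCAN_STOPPING_SCHED   = 1 << 3,
};

/* Both scan types can now be tracked at once in a single bitmask. */
static void sched_scan_started(unsigned int *scan_status)
{
	*scan_status |= SCAN_SCHED_RUNNING;
}

/* A driver-requested stop can be told apart from a spontaneous FW stop. */
static void sched_scan_ended(unsigned int *scan_status)
{
	if (*scan_status & SCAN_STOPPING_SCHED)
		*scan_status &= ~SCAN_STOPPING_SCHED;	/* driver asked for it */
	/* else: the firmware stopped the scan on its own */
	*scan_status &= ~SCAN_SCHED_RUNNING;
}
```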
-
Committed by Luciano Coelho
In some cases, the max_out_time value is smaller than 200, and having the NL80211_SCAN_FLAG_LOW_PRIORITY flag set was actually causing max_out_time to be increased. To avoid that, set max_out_time to 200 only if it's greater than 200.
Signed-off-by: Luciano Coelho <luciano.coelho@intel.com>
Reviewed-by: Alexander Bondar <alexander.bondar@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
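The intended clamp, as a minimal sketch (the field and flag handling are simplified relative to the driver):

```c
#include <stdbool.h>

/* Low priority should only ever lower max_out_time, never raise it. */
static unsigned int apply_low_priority(unsigned int max_out_time,
				       bool low_priority)
{
	if (low_priority && max_out_time > 200)
		max_out_time = 200;
	return max_out_time;
}
```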
-
Committed by Luciano Coelho
Add scan parameters information to make it easier to debug scan dwell times and fragmentation.
Signed-off-by: Luciano Coelho <luciano.coelho@intel.com>
Reviewed-by: Alexander Bondar <alexander.bondar@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Eyal Shapira
last_txrate_idx isn't used anymore and can be dropped, as this information already exists somewhere else.
Signed-off-by: Eyal Shapira <eyalx.shapira@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Liad Kaufman
The same code appears a few lines later, and the position has no effect on the actual flow.
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Matti Gottlieb
When user space requests mac80211 to transmit a frame off-channel, mac80211 notifies the driver, the driver requests a time event from the ucode, and then transmits the frame. When the driver requests a time event, it can specify the allowed max delay for starting the time event. When the max delay is too big, this can cause a timeout in user space, which is waiting for the frame to be transmitted. Currently the max delay is extremely long. Reduce the max delay for the AUX ROC time event that is sent to the ucode.
Signed-off-by: Matti Gottlieb <matti.gottlieb@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 28 April 2015, 3 commits
-
-
Committed by Johannes Berg
During firmware restart, the quota command isn't recalculated, but after the restart it has to be sent anyway, so force it. Otherwise the firmware crashes.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Emmanuel Grumbach
I forgot to rename the CPTCFG_ prefix...
Fixes: 484b3d13 ("iwlwifi: mvm: add debugfs entry with the number of net-detect scans")
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Eran Harary
Our device needs two different firmwares: the INIT firmware and the operational (OPER) firmware. The first one is run when the driver loads, and it returns calibration results as well as the NVM. The second one implements the WiFi protocol. If the wlan interface is not brought up, the device is put into a low-power state: no firmware will be running. When the interface is brought up, we would run the OPER firmware only and reuse the results of the INIT firmware run from when the driver was loaded. This changes with this patch. We now run the INIT firmware every time mac80211 calls start(). The penalty for that is minimal, since the INIT firmware runs fast. I now also avoid powering down the device between the INIT and OPER firmware on certain buses. The motivation for this change is that there are components on the device (MFUART) that are triggered by the INIT firmware and need the device to be powered up in order to keep running. Powering the device down between the INIT and OPER firmware would stop these components and prevent them from running again, since they are triggered by the INIT firmware only. The new flow allows this, and also allows triggering these components again when the interface is brought up after it has been brought down.
Signed-off-by: Eran Harary <eran.harary@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 20 April 2015, 1 commit
-
-
Committed by Liad Kaufman
In the case of a DMA mapping error on the last iteration of the FW monitor memory-allocation loop, we do free the pages, but we don't NULL out the page variable, which allows the FW monitor variables to later be set up with invalid data.
Fixes: c2d20201 ("iwlwifi: pcie: add firmware monitor capabilities")
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
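A simplified sketch of the error-path pattern being fixed (variable and function names are illustrative, not the actual pcie code):

```c
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static struct page *alloc_fw_monitor_block(struct device *dev, int order,
					   dma_addr_t *phys)
{
	struct page *page = alloc_pages(GFP_KERNEL, order);

	if (!page)
		return NULL;

	*phys = dma_map_page(dev, page, 0, PAGE_SIZE << order, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, *phys)) {
		__free_pages(page, order);
		page = NULL;	/* the fix: don't leave a dangling pointer */
	}
	return page;
}
```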
-
- 19 April 2015, 4 commits
-
-
Committed by Alexander Bondar
If for some reason a statistics notification received from the firmware reports 0 as the average beacon RSSI value, skip it and avoid signal-based decisions.
Signed-off-by: Alexander Bondar <alexander.bondar@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Avraham Stern
Scan iteration complete notification handling uses the wrong FW API version (version 2 instead of version 3). Fix that by removing the version 2 API, which is no longer used, and using only the updated version.
Signed-off-by: Avraham Stern <avraham.stern@intel.com>
Reviewed-by: David Spinadel <david.spinadel@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Committed by Emmanuel Grumbach
When the delay parameter is provided, we need to stop the collection only after the delay has elapsed.
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
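One way to express that behaviour, as a rough sketch using a delayed work item (the structure and function names here are hypothetical, not the driver's):

```c
#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct fw_dbg_trigger {			/* hypothetical context */
	struct delayed_work stop_work;	/* runs the actual stop/collect */
};

static void fw_dbg_schedule_stop(struct fw_dbg_trigger *trig,
				 unsigned int delay_ms)
{
	if (delay_ms)
		/* stop collection only once the delay has elapsed */
		schedule_delayed_work(&trig->stop_work,
				      msecs_to_jiffies(delay_ms));
	else
		schedule_delayed_work(&trig->stop_work, 0);
}
```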
-
Committed by Avri Altman
The firmware doesn't relate the scan to a vif. The scan is run by a separate entity called the auxiliary MAC (aka AUX MAC). This AUX MAC needs to get Tx power limitations that are not applied to a specific vif, but to the device as a whole. This can be implemented by using the minimum of all the values set by the user for all the MACs. But then we need to ignore the limitations that come from the AP or regulatory for a specific vif: a specific vif might have regulatory limitations because of the channel it works on. This limit is irrelevant for the AUX MAC. Use the new API from mac80211, the user_power_level in bss_conf, to achieve this. Firmware -13.ucode has already moved to this API.
Change-Id: Ifba83660f378e91b93bd46d29fe8ba35a7c168a4
Signed-off-by: Avri Altman <avri.altman@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 15 April 2015, 18 commits
-
-
Committed by David Rientjes
Allocating a large number of elements in atomic context could quickly deplete memory reserves, so just disallow atomic resizing entirely. Nothing currently uses mempool_resize() with anything other than GFP_KERNEL, so convert existing callers to drop the gfp_mask.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: David Rientjes <rientjes@google.com>
Acked-by: Steffen Maier <maier@linux.vnet.ibm.com> [zfcp]
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Steve French <sfrench@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
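A caller-side sketch of the conversion, assuming (as the text says) the gfp mask was the last argument of the old form:

```c
#include <linux/mempool.h>

/*
 * Hypothetical caller updated for the two-argument mempool_resize():
 * with atomic resizing disallowed, the function always behaves as if
 * GFP_KERNEL were passed, so the mask argument simply goes away.
 */
static int grow_pool(mempool_t *pool, int new_min_nr)
{
	/* before: return mempool_resize(pool, new_min_nr, GFP_KERNEL); */
	return mempool_resize(pool, new_min_nr);
}
```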
-
Committed by Vladimir Davydov
Currently, cleancache_register_ops returns the previous value of cleancache_ops to allow chaining. However, chaining, as it is implemented now, is extremely dangerous due to possible pool id collisions. Suppose a new cleancache driver is registered after the previous one assigned an id to a super block. If the new driver assigns the same id to another super block, which is perfectly possible, we will have two different filesystems using the same id. No matter whether the new driver implements chaining or not, we are likely to get data corruption with such a configuration eventually. This patch therefore disables the ability to override cleancache_ops altogether as potentially dangerous. If a cleancache driver is already registered, all further calls to cleancache_register_ops will return EBUSY. Since no user of cleancache implements chaining, we only need to make minor changes to the code outside the cleancache core.
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Mark Fasheh <mfasheh@suse.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Stefan Hengelein <ilendir@googlemail.com>
Cc: Florian Schmaus <fschmaus@gmail.com>
Cc: Andor Daam <andor.daam@googlemail.com>
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
Cc: Bob Liu <lliubbo@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
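A simplified sketch of the new "first registration wins" behaviour (the real function also has to handle pools created before a backend registers, which is omitted here):

```c
#include <linux/cleancache.h>
#include <linux/atomic.h>
#include <linux/errno.h>

static struct cleancache_ops *registered_ops;	/* simplified global */

/* Later callers get -EBUSY instead of the old "return previous ops
 * for chaining" behaviour. */
static int register_ops_once(struct cleancache_ops *ops)
{
	if (cmpxchg(&registered_ops, NULL, ops))
		return -EBUSY;
	return 0;
}
```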
-
Committed by Konstantin Khlebnikov
This patch replaces cancel_dirty_page() with a helper function account_page_cleaned() which only updates counters. It's called from truncate_complete_page() and from try_to_free_buffers() (a hack for ext3). The page is locked in both cases, and the page lock protects against concurrent dirtiers: see commit 2d6d7f98 ("mm: protect set_page_dirty() from ongoing truncation"). delete_from_page_cache() shouldn't be called for dirty pages; they must be handled by the caller (either written or truncated). This patch treats the final dirty accounting fixup at the end of __delete_from_page_cache() as a debug check and adds WARN_ON_ONCE() around it. If something removes dirty pages without proper handling, that might be a bug and unwritten data might be lost. Hugetlbfs has no dirty-page accounting, so ClearPageDirty() is enough there. cancel_dirty_page() in nfs_wb_page_cancel() is redundant: it is a helper for nfs_invalidate_page() and is called only in the case of complete invalidation. The mess started in v2.6.20 after commits 46d2277c ("Clean up and make try_to_free_buffers() not race with dirty pages") and 3e67c098 ("truncate: clear page dirtiness before running try_to_free_buffers()"); the first was reverted right in v2.6.20 in commit ecdfc978 ("Resurrect 'try_to_free_buffers()' VM hackery"), the second in v2.6.25 in commit a2b34564 ("Fix dirty page accounting leak with ext3 data=journal"). Custom fixes were introduced between these points: NFS in v2.6.23, commit 1b3b4a1a ("NFS: Fix a write request leak in nfs_invalidate_page()"), and a kludge in __delete_from_page_cache() in v2.6.24, commit 3a692790 ("Do dirty page accounting when removing a page from the page cache"). Since v2.6.25 all of them are redundant.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Tejun Heo <tj@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by David Rientjes
There's a deadlock when concurrently hot-adding memory through the probe interface and switching a memory block from offline to online. When hot-adding memory via the probe interface, add_memory() first takes mem_hotplug_begin(), and device_lock() is later taken when registering the newly initialized memory block. This creates a lock dependency of (1) mem_hotplug.lock, (2) dev->mutex. When switching a memory block from offline to online, dev->mutex is first grabbed in device_online() when the write(2) transitions an existing memory block from offline to online, and then online_pages() will take mem_hotplug_begin(). This creates a lock inversion between mem_hotplug.lock and dev->mutex. Vitaly reports that this deadlock can happen when a kworker handling a probe event races with systemd-udevd switching a memory block's state. This patch requires the state transition to take mem_hotplug_begin() before dev->mutex. Hot-adding memory via the probe interface creates a memory block while holding mem_hotplug_begin(); there is no way to take dev->mutex first in this case. online_pages() and offline_pages() are only called when transitioning memory block state. We now require that mem_hotplug_begin() is taken before calling them -- this requires exporting mem_hotplug_begin() and mem_hotplug_done() to generic code. In all hot-add and hot-remove cases, mem_hotplug_begin() is done prior to device_online(). This is all that is needed to avoid the deadlock.
Signed-off-by: David Rientjes <rientjes@google.com>
Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Zhang Zhen <zhenzhang.zhang@huawei.com>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Cc: Wang Nan <wangnan0@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
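The ordering rule, reduced to a generic two-lock illustration (this is not the actual memory-hotplug call chain, only the invariant it enforces: both paths take the hotplug lock before the device mutex, so the inverted ordering that caused the deadlock cannot occur):

```c
#include <linux/mutex.h>

static DEFINE_MUTEX(hotplug_lock);	/* stands in for mem_hotplug.lock */

static void probe_path(struct mutex *dev_mutex)
{
	mutex_lock(&hotplug_lock);	/* (1) hotplug lock first */
	mutex_lock(dev_mutex);		/* (2) then the device mutex */
	/* ... register / online the new memory block ... */
	mutex_unlock(dev_mutex);
	mutex_unlock(&hotplug_lock);
}

static void state_change_path(struct mutex *dev_mutex)
{
	/* same order as probe_path(), never dev_mutex -> hotplug_lock */
	mutex_lock(&hotplug_lock);
	mutex_lock(dev_mutex);
	/* ... online_pages()/offline_pages() run under both locks ... */
	mutex_unlock(dev_mutex);
	mutex_unlock(&hotplug_lock);
}
```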
-
Committed by Sheng Yong
Use the macro section_nr_to_pfn() to convert between section and pfn, instead of open-coding it. No semantic changes.
Signed-off-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
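The kind of cleanup this refers to, shown as a small illustrative example (the surrounding function is made up):

```c
#include <linux/mmzone.h>	/* section_nr_to_pfn(), PFN_SECTION_SHIFT */

/* Both forms produce the same pfn; the helper just reads better. */
static unsigned long first_pfn_of_section(unsigned long section_nr)
{
	/* before: return section_nr << PFN_SECTION_SHIFT; */
	return section_nr_to_pfn(section_nr);
}
```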
-
Committed by Jeff Kirsher
With the recent driver changes, bump the version.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
-
Committed by Jeff Kirsher
VFs were being improperly added to the switch's multicast group. The error stems from the fact that incorrect arguments were passed to the "update_mc_addr" function. It would seem to be a copy-paste error, since the parameters are similar to those of the "update_uc_addr" function.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Ngai-Mint Kwan <ngai-mint.kwan@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
When we call update_max_size, it does not drop all oversized messages. This is due to the difficulty of performing this operation on a FIFO, which makes updating anything other than head or tail very difficult. To fix this, modify validate_msg_size to ensure that we error out later when trying to transmit a message that could be oversized. This will generally be a rare condition, as it requires the FIFO to include a message larger than the max_size negotiated during mailbox connect. Note that max_size is always smaller than rx.size, so it should be safe to use here. Also, update the update_max_size function header comment to clearly indicate that it does not drop all oversized messages, only those at the head of the FIFO.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
When we forcefully shut down the mailbox, we currently reset the max size to 0 and clear all messages in the FIFO. Instead, we should just reset the head pointer so that the FIFO becomes empty, rather than changing the max size to 0. This helps prevent incrementing the tx_dropped counter during mailbox negotiation, which is confusing to viewers of Linux ethtool statistics output.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
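A simplified sketch of the idea, using made-up structures rather than the driver's actual mailbox types:

```c
/* Illustrative only: not the fm10k mailbox structures. */
struct mbx_fifo {
	unsigned int head;
	unsigned int tail;
};

static void mbx_force_shutdown(struct mbx_fifo *fifo, unsigned int *max_size)
{
	/* drop queued messages by making the FIFO empty ... */
	fifo->head = fifo->tail;

	/* ... and leave the negotiated max_size untouched, so nothing is
	 * later counted as dropped-because-oversized.
	 * previously: *max_size = 0; */
	(void)max_size;
}
```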
-
Committed by Jeff Kirsher
The "dropped" statistic doesn't really mean dropped mailbox messages in general, but rather specifically messages which were too large to fit in the remote Rx FIFO. Rename the stat to more clearly indicate what it means.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
When the PF receives a request to update a multicast address for the VF, it checks the enabled multicast mode first. Fix a bug where the VF tried to set a multicast address before requesting the required xcast mode. This ensures the multicast addresses are honored as long as the xcast mode is allowed.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
Since the service task handles varying work that doesn't all require the interface to be up, launch the service timer immediately. This ensures that we continually check the mailbox, as well as handle other tasks, while the device is down.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
The header comment included a miscopy of a C-code line, and also misused "Rx FIFO" when it clearly meant "Tx FIFO".
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
Add a header comment explaining why we have the somewhat crazy mailbox flow. This flow is necessary because it prevents the PF<->SM mailbox from being flooded by the VF messages, which normally trigger a message to the PF. This helps prevent the case where we see a PF mailbox timeout.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
Since we already schedule the service task, we can just wait for this task to handle the mailbox events from the VF. This reduces some complex code flow and gives us a single path for handling the VF messages. There is a possibility of a slight delay in handling VF messages, but it should be minimal. The result of tx_complete and !rx_ready is insufficient to determine whether we need to process the mailbox: there is a possible race condition whereby the VF fills up the mbmem for us, but we have already recently processed the mailboxes in the interrupt. During this time, the interrupt is disabled. Thus, our Rx FIFO is empty, but the mbmem now has data in it. Since we only check whether the Rx FIFO is empty, we then never call process. This can prevent the PF from handling the VF mailbox messages at all. Instead, just call process every time, despite the fact that we may or may not have anything to process for the VF. There should be minimal overhead in doing this, and it resolves an issue where the VF never comes up because it never gets a response to its SET_LPORT_STATE message.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
Since we run the watchdog periodically, which might take a while and potentially monopolize the system default workqueue, create our own separate work queue. This also helps reduce and stabilize latency between scheduling the work in our interrupt and actually performing the work. Still use a timer for the regular scheduled interval, but queue the work onto our own work queue. It seemed overkill to create one workqueue per interface, so we just spawn a single work queue for all interfaces upon driver load. For this reason, use a multi-threaded workqueue with one thread per processor, rather than a single-threaded queue.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
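A rough outline of that pattern (the queue name, flags, and struct layout are assumptions for illustration, not necessarily what fm10k does):

```c
#include <linux/workqueue.h>
#include <linux/timer.h>
#include <linux/errno.h>

static struct workqueue_struct *svc_wq;	/* one queue for all interfaces */

struct svc_priv {
	struct timer_list service_timer;	/* keeps the regular interval */
	struct work_struct service_task;	/* the actual watchdog work */
};

static void service_timer_fn(struct timer_list *t)
{
	struct svc_priv *priv = from_timer(priv, t, service_timer);

	/* defer onto our own queue instead of the system workqueue */
	queue_work(svc_wq, &priv->service_task);
}

static int svc_module_init(void)
{
	/* concurrency is managed by the workqueue core; this is not a
	 * single-threaded queue */
	svc_wq = alloc_workqueue("fm10k_service", WQ_MEM_RECLAIM, 0);
	return svc_wq ? 0 : -ENOMEM;
}
```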
-
Committed by Jeff Kirsher
When returning virtualization queues from the VF back to the PF, do not retain the VF's rate limiter.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Todd Russell <todd.a.russell@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Committed by Jeff Kirsher
Name the counter tx_hang_count to differentiate it from tx_hwtstamp_timeout.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-