- 06 July 2016, 5 commits
-
-
Submitted by Sara Sharon

Currently the scratch buffer is set to 16 bytes and indicates the size of the bi-directional DMA. However, the next HW generation will perform additional offloading and will write the result into the key location of the TX command, so the size of the bi-directional consistent memory should grow accordingly - increase it to 40. Generalize the code to get rid of the now-irrelevant scratch references.

Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
-
Submitted by Sara Sharon

In an MQ environment, and with the new architecture in its early stages, we may encounter DMA issues. Track the RXB status and bail out in case we receive an index to an RXB that was not mapped and handed over to the HW.

Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
-
Submitted by Emmanuel Grumbach

Upon the firmware load interrupt (FH_TX), the ISR re-enables the firmware load interrupt only, to avoid races with other flows as described in the commit below. When the firmware is completely loaded, the thread that is loading the firmware will enable all the interrupts to make sure that the driver gets the ALIVE interrupt.

The problem with that is that the thread that is loading the firmware is actually racing against the ISR and we can get to the following situation:

    CPU0                                    CPU1
    iwl_pcie_load_given_ucode
        ...
        iwl_pcie_load_firmware_chunk
            wait_for_interrupt
                                            <interrupt>
                                            ISR handles CSR_INT_BIT_FH_TX
                                            ISR wakes up the thread on CPU0
        /* enable all the interrupts
         * to get the ALIVE interrupt */
        iwl_enable_interrupts
                                            ISR re-enables CSR_INT_BIT_FH_TX only
        /* start the firmware */
        iwl_write32(trans, CSR_RESET, 0);

    BUG! The ALIVE interrupt will never arrive since it has been masked by CPU1.

In order to fix that, change the ISR to first check if STATUS_INT_ENABLED is set. If so, re-enable all the interrupts. If STATUS_INT_ENABLED is clear, then we can check what specific interrupt happened and re-enable only that specific interrupt (RFKILL or FH_TX).

All the credit for the analysis goes to Kirtika who did the actual debugging work.

Cc: <stable@vger.kernel.org> [4.5+]
Fixes: a6bd005f ("iwlwifi: pcie: fix RF-Kill vs. firmware load race")
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
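A minimal sketch of the resulting ISR tail, assuming the bit and helper names mentioned in the commit message (STATUS_INT_ENABLED, CSR_INT_BIT_FH_TX/RF_KILL, iwl_enable_interrupts() and per-interrupt enables); the wrapper itself is hypothetical, not a verbatim copy of the driver:

    /* Sketch: called at the end of the interrupt handler. 'handled' is
     * the set of interrupt bits that were just serviced. */
    static void sketch_isr_reenable(struct iwl_trans *trans, u32 handled)
    {
        if (test_bit(STATUS_INT_ENABLED, &trans->status)) {
            /* the loading thread already enabled everything: keep it
             * that way so the ALIVE interrupt cannot be masked away */
            iwl_enable_interrupts(trans);
        } else if (handled & CSR_INT_BIT_FH_TX) {
            /* still loading firmware: re-enable the load interrupt only */
            iwl_enable_fw_load_int(trans);
        } else if (handled & CSR_INT_BIT_RF_KILL) {
            /* only the RF-kill interrupt stays armed */
            iwl_enable_rfkill_int(trans);
        }
    }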
-
Submitted by Johannes Berg

The PCIe transport needs to store two pointers in each TX SKB, and currently assumes mac80211's ieee80211_tx_info is present in the CB to do that. In order to remove that assumption, have the opmodes pass in the offset to where the pointers can be stored in the CB, and use that offset in the PCIe code.

To make the disentanglement complete, remove mac80211.h includes from everywhere in the generic iwlwifi code. This required adding an include of cfg80211.h in one place.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
-
Submitted by Liad Kaufman

Support DQA queue sharing when no free queue exists for allocation to a STA that already exists. This means that a single queue will serve more than a single TID (although the RA will be the same for all TIDs served).

We try to choose the lowest AC possible, to ensure the shared queues have the lowest possible combined AC requirements. The queue to share is chosen only from the same RA's DATA queues, as follows (in descending priority):

  1. An AC_BE queue
  2. A queue of the same AC
  3. The highest AC queue that is lower than the new AC
  4. Any existing AC (there is always at least 1 DATA queue)

If any aggregations existed for any of the TIDs of the shared queue, they are stopped (the FW is notified), but no delBA is sent.

Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
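A self-contained sketch of that selection order; the enum ordering, ranking values and function are illustrative, not the mvm implementation:

    /* ACs ordered by priority for this sketch: BK < BE < VI < VO. */
    enum sketch_ac { SKETCH_AC_BK, SKETCH_AC_BE, SKETCH_AC_VI, SKETCH_AC_VO };

    static int sketch_share_rank(enum sketch_ac ac, enum sketch_ac new_ac)
    {
        if (ac == SKETCH_AC_BE)
            return 400;             /* 1. an AC_BE queue */
        if (ac == new_ac)
            return 300;             /* 2. a queue of the same AC */
        if (ac < new_ac)
            return 200 + ac;        /* 3. highest AC below the new one */
        return 100 - ac;            /* 4. any existing DATA queue */
    }

    /* queue_ac[q] is the AC of the RA's DATA queue q, or -1 if unused. */
    static int sketch_pick_queue_to_share(const int *queue_ac, int nq,
                                          enum sketch_ac new_ac)
    {
        int q, best = -1, best_rank = -1;

        for (q = 0; q < nq; q++) {
            int rank;

            if (queue_ac[q] < 0)
                continue;
            rank = sketch_share_rank((enum sketch_ac)queue_ac[q], new_ac);
            if (rank > best_rank) {
                best_rank = rank;
                best = q;
            }
        }
        return best;    /* never -1: at least one DATA queue always exists */
    }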
-
- 01 July 2016, 1 commit
-
-
Submitted by Sara Sharon

Integrated 9000 devices have a bug with shadow register value retention. If the driver writes RBD registers while the MAC is asleep, the values are stored in shadow registers to be copied whenever the MAC wakes up. However, in 9000 devices a MAC wakeup is not triggered, and when the bus powers down due to inactivity the shadow values and dirty bits are lost.

Turn on the chicken bits that cause a MAC wakeup for RX-related values as well when the device is in D0. When the device is in low power mode, turn the RX wakeup chicken bits off, since the driver is idle and this workaround is not needed. Remove the previous workaround, which was ineffective.

Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
-
- 11 May 2016, 1 commit
-
-
Submitted by Luca Coelho

It's cleaner to always call the iwl_trans_ref/unref() functions instead of sometimes calling the trans-specific ops directly. This also prepares for moving some of the code from the trans-specific ops to the common trans code.

Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
-
- 30 March 2016, 2 commits
-
-
Submitted by Luca Coelho

We don't use the refcount value anymore; all the refcounting is done in the runtime PM usage_count value. Remove it.

Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Submitted by Sara Sharon

My patch resized the pool, but neglected to resize the global table, which is obviously wrong since the global table maps the pool's rxb to vid one to one. This results in a panic on 9000 devices. Add a build bug to avoid such a case in the future.

Fixes: 7b542436 ("iwlwifi: pcie: fine tune number of rxbs")
Reported-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
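The compile-time guard could look like the following sketch; struct, field names and size are illustrative, while BUILD_BUG_ON()/ARRAY_SIZE() are the standard kernel primitives for this:

    #include <linux/bug.h>
    #include <linux/kernel.h>
    #include <linux/types.h>

    #define SKETCH_RX_POOL_SIZE 512

    struct sketch_rxb {
        struct page *page;
        dma_addr_t page_dma;
        u16 vid;
    };

    struct sketch_rx_state {
        struct sketch_rxb pool[SKETCH_RX_POOL_SIZE];
        /* maps vid -> rxb, must always cover the whole pool */
        struct sketch_rxb *global_table[SKETCH_RX_POOL_SIZE];
    };

    static void sketch_rx_init_check(struct sketch_rx_state *rx)
    {
        /* refuse to build if the two declarations ever drift apart */
        BUILD_BUG_ON(ARRAY_SIZE(rx->global_table) != ARRAY_SIZE(rx->pool));
    }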
-
- 10 March 2016, 1 commit
-
-
Submitted by Sara Sharon

We kick the allocator when we have 2 RBDs that don't have attached RBs, and the allocator allocates 8 RBs, meaning that it needs another 6 RBDs to attach the RBs to. The design is that the allocator should always have enough RBDs to fulfill requests, so we give 6 RBDs to the allocator in advance, so that when it is kicked it gets the additional 2 RBDs and has enough RBDs. These RBDs were taken from the Rx queue itself, meaning that each Rx queue didn't have the maximal number of RBDs, but MAX - 6.

Change the initial number of RBDs in the system to include both the queue size and the allocator reserves. Note that the multi-queue count is always 511 instead of 512, to avoid a full queue, since we cannot detect this state easily enough in the 9000 arch.

Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
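The 2/8/6 arithmetic above, written out as constants; the per-queue size is only an example and the names carry a SKETCH_ prefix because they are not the driver's real defines:

    #define SKETCH_RX_QUEUE_SIZE      256  /* RBDs one RX queue can hold (example) */
    #define SKETCH_RX_POST_REQ_ALLOC  2    /* empty RBDs that kick the allocator */
    #define SKETCH_RX_CLAIM_REQ_ALLOC 8    /* RBs returned per allocator request */

    /* RBDs parked at the allocator in advance so a request can always be
     * completed: 8 needed per request, 2 arrive with the kick -> 6. */
    #define SKETCH_RX_RESERVE \
        (SKETCH_RX_CLAIM_REQ_ALLOC - SKETCH_RX_POST_REQ_ALLOC)

    /* Before: each queue effectively ran with MAX - 6 RBDs.
     * After: the initial pool covers the queue plus the allocator reserve. */
    #define SKETCH_RX_POOL_SIZE (SKETCH_RX_QUEUE_SIZE + SKETCH_RX_RESERVE)

    /* Multi-queue (9000 arch): use 511 descriptors in a 512-entry ring so a
     * completely full ring never has to be detected. */
    #define SKETCH_MQ_RX_TABLE_SIZE 512
    #define SKETCH_MQ_RX_NUM_RBDS   (SKETCH_MQ_RX_TABLE_SIZE - 1)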
-
- 28 February 2016, 1 commit
-
-
Submitted by Haim Dreyfuss

Working with MSI-X requires prior configuration. This includes requesting interrupt vectors from the OS, registering the vectors, and mapping the optional causes to the relevant interrupt. In addition, add a new interrupt handler to handle MSI-X interrupts.

Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
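A rough sketch of those bring-up steps using the generic PCI/IRQ APIs of current kernels (the original patch predates pci_alloc_irq_vectors()); the vector count, handler body and device-specific cause-to-vector mapping are placeholders:

    #include <linux/interrupt.h>
    #include <linux/pci.h>

    static irqreturn_t sketch_msix_thread(int irq, void *dev_id)
    {
        /* per-vector handling (RX queue, alive/FH causes, ...) goes here */
        return IRQ_HANDLED;
    }

    static int sketch_enable_msix(struct pci_dev *pdev)
    {
        int nvec, i, ret;

        /* 1. request interrupt vectors from the OS */
        nvec = pci_alloc_irq_vectors(pdev, 1, 16, PCI_IRQ_MSIX);
        if (nvec < 0)
            return nvec;            /* caller falls back to MSI/INTx */

        /* 2. register a handler for every vector we got */
        for (i = 0; i < nvec; i++) {
            ret = request_threaded_irq(pci_irq_vector(pdev, i), NULL,
                                       sketch_msix_thread, IRQF_ONESHOT,
                                       "sketch-msix", pdev);
            if (ret)
                return ret;
        }

        /* 3. map the optional causes to the relevant vector in the
         *    device's own cause-allocation registers (device specific,
         *    not shown here). */
        return 0;
    }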
-
- 15 February 2016, 1 commit
-
-
Submitted by Emmanuel Grumbach

When we load the firmware, we hold trans_pcie->mutex to avoid nested flows. We also rely on the ISR to wake up the thread when the DMA has finished copying a chunk. During this flow, we enable the RF-Kill interrupt.

The problem is that the RF-Kill interrupt handler can take the mutex and bring the device down. This means that if we load the firmware while the RF-Kill switch is enabled (which will happen when we load the INIT firmware to read the device's capabilities and register to mac80211), we may get an RF-Kill interrupt immediately, and the ISR will be waiting for the mutex held by the thread that is currently loading the firmware. At this stage, the ISR won't be able to service the DMA interrupt needed to wake up the thread that loads the firmware. We are in a deadlock situation which ends when the thread that loads the firmware fails on timeout and releases the mutex.

To fix this, take the mutex later in the flow, disable the interrupts and call synchronize_irq() to give the RF-Kill interrupt a chance to run and complete. After that, mask all the interrupts besides the DMA interrupt and proceed with the firmware load. Make sure to check that there was no RF-Kill interrupt while the interrupts were disabled.

This fixes https://bugzilla.kernel.org/show_bug.cgi?id=111361

Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
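A condensed sketch of that ordering; the iwl_* helpers are named after the behaviour described above, and the mutex/irq parameters stand in for the transport's own fields:

    static int sketch_start_fw(struct iwl_trans *trans, struct mutex *mtx,
                               unsigned int irq)
    {
        /* let a pending RF-Kill ISR run to completion before we take the
         * mutex it would need */
        iwl_disable_interrupts(trans);
        synchronize_irq(irq);

        mutex_lock(mtx);

        /* an RF-Kill change may have happened while interrupts were off */
        if (iwl_is_rfkill_set(trans)) {
            mutex_unlock(mtx);
            return -ERFKILL;
        }

        /* keep everything masked except the DMA (FH_TX) interrupt that
         * wakes the loading thread; the chunk-by-chunk load follows */
        iwl_enable_fw_load_int(trans);

        /* ... load the firmware sections here ... */

        mutex_unlock(mtx);
        return 0;
    }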
-
- 01 February 2016, 2 commits
-
-
Submitted by Sara Sharon

Previous patches enabled the new 9000 hardware DMA for one queue only. Enable the actual multi-queue path and configuration now. This also requires a per-queue NAPI struct.

Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
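Per-queue NAPI in sketch form; the struct and poll function are placeholders, and netif_napi_add() is shown with the 4-argument signature kernels of that era used:

    struct sketch_rxq {
        struct napi_struct napi;
        /* ring pointers, lock, queue id, ... */
    };

    static int sketch_rx_poll(struct napi_struct *napi, int budget)
    {
        int done = 0;

        /* process up to 'budget' frames of this queue here */
        if (done < budget)
            napi_complete_done(napi, done);
        return done;
    }

    static void sketch_napi_setup(struct net_device *napi_dev,
                                  struct sketch_rxq *rxq, int num_rx_queues)
    {
        int i;

        /* one NAPI context per RX queue instead of one for the device */
        for (i = 0; i < num_rx_queues; i++)
            netif_napi_add(napi_dev, &rxq[i].napi, sketch_rx_poll,
                           NAPI_POLL_WEIGHT);
    }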
-
Submitted by Luciano Coelho

Enable runtime power management (RTPM) for PCIe devices and implement the corresponding functions to enable D0i3 mode when the device is idle. Additionally, remove some unnecessary #ifdef's, because the RTPM code will not be called if runtime PM is not configured.

Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 31 January 2016, 3 commits
-
-
Submitted by Sara Sharon

The 9000 series introduces several changes in the device DMA operation. As the device now supports multi-queue RX, several DMA channels should be configured. The flow of providing the device with the allocated RBDs changes as well - the device now maintains separate free and used tables. The hardware may use the free table to feed RBDs to any queue. This requires maintaining a shared table that maps returned RBDs back to the original RXB - for that purpose the VID is introduced: an internal identifier of the RB, placed in the lower 12 bits and returned by the HW in the used data. Another change is the support of 64-bit DMA addresses.

Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
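A sketch of the VID bookkeeping: the RB's page is at least 4 KiB aligned, so the low 12 bits of its DMA address are free to carry the VID in the free-table entry, and the VID that comes back in the used data indexes a lookup table. Names and the exact descriptor layout are illustrative:

    #define SKETCH_VID_BITS 12
    #define SKETCH_VID_MASK ((1U << SKETCH_VID_BITS) - 1)   /* 0xfff */

    /* build the 64-bit free-table entry handed to the hardware */
    static u64 sketch_free_rbd(dma_addr_t page_dma, u16 vid)
    {
        /* page_dma is page aligned, so its low 12 bits are zero */
        return (u64)page_dma | (vid & SKETCH_VID_MASK);
    }

    /* recover the original RXB from the used-table entry the HW wrote */
    static void *sketch_used_to_rxb(void **global_table, u32 used_entry)
    {
        return global_table[used_entry & SKETCH_VID_MASK];
    }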
-
Submitted by Sara Sharon

The 9000 series devices will support multiple RX queues. Current code has one static RX queue - change it to allocate a number of queues per the device capability (pre-9000 devices have the number of RX queues set to one).

Subsequent generalizations: change the code to access an explicitly numbered RX queue only when the queue number is known - when handling an interrupt, when accessing the default queue, and when iterating the queues; the rest of the functions receive the RX queue as a pointer. Generalize the warning on allocation failure to consider the allocator status instead of a single RX queue's status. Move the initial RX pool of memory buffers to be shared among all the queues and allocated to the default queue on init.

Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Submitted by Emmanuel Grumbach

When the Tx queues are full above a threshold, we immediately stop mac80211's queue to stop getting new packets. This worked until TSO was enabled. With TSO, one single packet from mac80211 can use many descriptors, since a large send needs to be split into several segments. This means that stopping mac80211's queues is not enough, and we also need to ensure that we don't overflow the Tx queues with one single packet from mac80211.

Add code to the transport layer to do just that. Stop mac80211's queue as soon as the queue is full above the same threshold as before, and keep pushing the current packet along with its segments on the queue, but check that we don't overflow. If that would happen, buffer the segments and send them when there is room in the Tx queue again. Of course, we first need to send the buffered segments and only then wake up mac80211's queues.

Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
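The buffering described above, in sketch form; the overflow queue is a plain skb list, and the fields, thresholds and hand-off to the hardware ring are placeholders:

    struct sketch_txq {
        int free_slots;                  /* descriptors still available */
        int stop_threshold;              /* stop mac80211 below this */
        bool mac80211_stopped;
        struct sk_buff_head overflow_q;  /* init with __skb_queue_head_init() */
    };

    /* caller holds the queue lock; 'seg' is one TSO segment */
    static void sketch_tx_segment(struct sketch_txq *q, struct sk_buff *seg)
    {
        if (q->free_slots <= q->stop_threshold)
            q->mac80211_stopped = true;  /* stop mac80211, keep pushing */

        if (!q->free_slots) {
            /* the ring itself is full: park the segment for later */
            __skb_queue_tail(&q->overflow_q, seg);
            return;
        }
        q->free_slots--;
        /* build the TBs and hand the segment to the hardware ring here */
    }

    /* called on TX reclaim, when 'freed' descriptors became available */
    static void sketch_tx_reclaim(struct sketch_txq *q, int freed)
    {
        q->free_slots += freed;

        /* first drain the segments that were buffered ... */
        while (q->free_slots && !skb_queue_empty(&q->overflow_q))
            sketch_tx_segment(q, __skb_dequeue(&q->overflow_q));

        /* ... and only then wake mac80211 again */
        if (skb_queue_empty(&q->overflow_q) &&
            q->free_slots > q->stop_threshold)
            q->mac80211_stopped = false;
    }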
-
- 21 December 2015, 1 commit
-
-
Submitted by Emmanuel Grumbach

When the op_mode sends an skb whose payload is bigger than the MSS, the PCIe transport will create an A-MSDU out of it. PCIe assumes that the skb coming from the op_mode can fit in one A-MSDU; it is the op_mode's responsibility to make sure that this guarantee holds.

Additional headers need to be built for the subframes. The TSO core code takes care of the IP / TCP headers and the driver takes care of the 802.11 subframe headers. These headers are stored on a per-CPU page that is re-used for all the packets handled on that same CPU. Each skb holds a reference to that page and releases the page when it is reclaimed. When the page gets full, it is released and a new one is allocated.

Since any skb that doesn't go through the fast-xmit path of mac80211 will be segmented, we can assume here that the packet is not WEP / TKIP and has a proper SNAP header.

Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
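A sketch of the per-CPU header page and its refcounting pattern as described above; names are illustrative and the TX path is assumed to run with preemption disabled:

    struct sketch_hdr_page {
        struct page *page;
        u8 *pos;                 /* next free byte inside the page */
    };

    static DEFINE_PER_CPU(struct sketch_hdr_page, sketch_hdr_pages);

    /* return room for 'len' bytes of 802.11 subframe headers */
    static u8 *sketch_get_hdr_room(unsigned int len)
    {
        struct sketch_hdr_page *p = this_cpu_ptr(&sketch_hdr_pages);
        u8 *room;

        if (p->page &&
            p->pos + len > (u8 *)page_address(p->page) + PAGE_SIZE) {
            /* page is full: drop our reference, the skbs keep theirs */
            __free_page(p->page);
            p->page = NULL;
        }

        if (!p->page) {
            p->page = alloc_page(GFP_ATOMIC);
            if (!p->page)
                return NULL;
            p->pos = page_address(p->page);
        }

        room = p->pos;
        p->pos += len;
        get_page(p->page);       /* reference held by the skb using 'room' */
        return room;
    }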
-
- 20 December 2015, 2 commits
-
-
Submitted by Emmanuel Grumbach

Allow configuring the driver to pretend to have TX CSUM offload support. This will be useful for testing the TSO flows that come in further patches. This configuration is disabled by default.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Submitted by Emmanuel Grumbach

ilw@linux.intel.com is not available anymore; linuxwifi@intel.com should be used instead.

Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 13 December 2015, 2 commits
-
-
Submitted by Sharon Dvir

Host commands now have a group id; express this in printed messages.

Signed-off-by: Sharon Dvir <sharon.dvir@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Submitted by Emmanuel Grumbach

In certain flows (see the next patches), the op_mode may need to block the Tx queues for a short period. Provide an API for that. The transport is in charge of counting the number of times the queues are blocked, since the op_mode may block the queues several times in a row before unblocking them.

Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
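The counting the transport has to do, in sketch form; the atomic counter and the callbacks stand in for the real queue start/stop plumbing:

    struct sketch_blocked_txq {
        atomic_t block_count;    /* how many times the op_mode blocked us */
    };

    static void sketch_block_txq(struct sketch_blocked_txq *q, bool block,
                                 void (*stop_queue)(void),
                                 void (*wake_queue)(void))
    {
        if (block) {
            if (atomic_inc_return(&q->block_count) == 1)
                stop_queue();    /* only the first block really stops it */
        } else {
            if (atomic_dec_return(&q->block_count) == 0)
                wake_queue();    /* only the last unblock wakes it again */
        }
    }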
-
- 02 December 2015, 1 commit
-
-
Submitted by Johannes Berg

Transport code currently calls itself through the transport ops, which is quite pointless. Clean up all of this. While at it, remove the unnecessary dir argument and the redundant IDI code. In slave transports, call both the common slave debugfs and the transport's own. SDIO has no files, so remove it all there.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 26 November 2015, 2 commits
-
-
Submitted by Johannes Berg

Make the various conversion functions typesafe, so we don't accidentally try to call them with the wrong pointers and cast them to something that will crash.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
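The usual kernel pattern for this: static inline accessors built on container_of() instead of bare casts, so passing the wrong pointer type becomes a compile error. The structs below are a sketch of the transport / transport-specific split, not the real layout:

    struct sketch_trans {
        int common_state;
        /* transport-specific data is allocated right behind this */
        char trans_specific[] __aligned(sizeof(void *));
    };

    struct sketch_trans_pcie {
        int pcie_state;
    };

    static inline struct sketch_trans_pcie *
    sketch_trans_to_pcie(struct sketch_trans *trans)
    {
        return (void *)trans->trans_specific;
    }

    static inline struct sketch_trans *
    sketch_pcie_to_trans(struct sketch_trans_pcie *trans_pcie)
    {
        return container_of((void *)trans_pcie, struct sketch_trans,
                            trans_specific);
    }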
-
Submitted by Emmanuel Grumbach

802.11ac allows A-MSDUs that can be up to 12KB long. Since an entire A-MSDU needs to fit into one single Receive Buffer (RB), add support for big RBs. Since this adds a lot of pressure to the memory manager and significantly increases the true_size of the RX buffers, don't enable this by default.

Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 18 November 2015, 1 commit
-
-
Submitted by Kalle Valo

Part of reorganising the wireless drivers directory and Kconfig.

Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
-
- 05 August 2015, 2 commits
-
-
Submitted by Sara Sharon

As a preparation for multiple RX queues, change the RBD allocation model. The new model includes a background allocator. The allocator is called by the interrupt handler when the queue has released two buffers, and the allocator starts allocating eight pages per request. When the queue has released 8 pages it tries claiming the request; if the pages are not ready, it keeps claiming. This new model should make sure that RBDs are always available across the multiple queues.

The RBDs are transferred between the allocator and the queue. The queue moves the free RBDs to the allocator upon freeing them; the allocator moves them back to the queue's possession when the request is claimed. The allocator has an initial pool to make sure there are always RBDs available for request completion. Release of the buffers at exit is done per pool - the allocator frees its own initial pool and the queue frees its own pool.

Existing code refactoring:
- The queue's initial pool is the size of the queue only, as the allocation of new buffers no longer uses this pool.
- Removal of the replenish background work, and of the replenish calls in the interrupt handler and restock().
- The replenish() and the rxq used_list are used only during initialization.
- Moved page allocation to a new function for code reuse.

New code:
- Allocator code - new structure and functions.
- The interrupt handler uses the allocator functions for replenishing buffers.
- Reuse of the restock() method.

Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Submitted by Johannes Berg

Allow frag SKBs in PCIe and advertise the maximum number of frags to the opmode. As a fallback, linearize the SKB if it exceeds the maximum number of fragments. This allows using the hardware better (filling more TBs) and should improve performance when used by the opmode. Also adjust tracing to be able to deal with this.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
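The fallback in sketch form; max_skb_frags stands for whatever limit the transport advertised to the opmode:

    /* called before building TBs for 'skb' */
    static int sketch_check_frag_count(struct sk_buff *skb,
                                       unsigned int max_skb_frags)
    {
        if (skb_shinfo(skb)->nr_frags > max_skb_frags &&
            skb_linearize(skb))
            return -ENOMEM;      /* could not flatten it, drop the frame */
        return 0;                /* skb now fits in the available TBs */
    }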
-
- 04 August 2015, 2 commits
-
-
Submitted by Aviya Erenfeld

As the firmware is slowly running out of command IDs and grouping of commands is desirable anyway, the firmware is extending the command header from 4 bytes to 8 bytes to introduce a group (in place of the former flags field, since that's always 0 on commands and thus can easily be used to distinguish between the two).

In order to support this most easily in the driver, widen the command ID used in the command sending functions and encode the new values (group and version) in the ID. That way existing code doesn't have to be changed (since the higher bits are 0 automatically) and newer code can easily use the new ID generation function to create a value to use in place of just the command ID.

Signed-off-by: Aviya Erenfeld <aviya.erenfeld@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
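A sketch of the widened ID (helper names here are illustrative, not the driver's): group and version ride in the upper bytes, so a legacy caller that passes a bare 8-bit opcode still forms a valid wide ID with group and version 0:

    static inline u32 sketch_cmd_id(u8 opcode, u8 groupid, u8 version)
    {
        return opcode | (groupid << 8) | ((u32)version << 16);
    }

    static inline u8 sketch_cmd_opcode(u32 cmdid)
    {
        return cmdid & 0xff;
    }

    static inline u8 sketch_cmd_groupid(u32 cmdid)
    {
        return (cmdid >> 8) & 0xff;
    }

    static inline u8 sketch_cmd_version(u32 cmdid)
    {
        return (cmdid >> 16) & 0xff;
    }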
-
Submitted by Johannes Berg

With the previous patch series, no opmode continues using the command or handler_status (i.e. the return value from the RX), so it can be removed now.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 13 July 2015, 1 commit
-
-
Submitted by Emmanuel Grumbach

This reverts commit 5f175703. That patch introduced a high latency in buffer allocation under extreme load, and this latency caused a firmware crash. The same scenario works fine with the patch reverted.

Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 26 June 2015, 1 commit
-
-
Submitted by Emmanuel Grumbach

This allows us to ensure that we don't have races between them. A user reported that stop_device was called twice upon an rfkill interrupt after suspend: once when the interrupts are enabled, and again right after, when we directly check the rfkill state.

Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 03 June 2015, 1 commit
-
-
Submitted by Sara Sharon

As a preparation for multiple RX queues, change the RBD allocation model. The new model includes a background allocator. The allocator is called by the interrupt handler when the queue has released two buffers, and the allocator starts allocating eight pages per request. When the queue has released 8 pages it tries claiming the request; if the pages are not ready, it keeps claiming. This new model should make sure that RBDs are always available across the multiple queues.

The RBDs are transferred between the allocator and the queue. The queue moves the free RBDs to the allocator upon freeing them; the allocator moves them back to the queue's possession when the request is claimed. The allocator has an initial pool to make sure there are always RBDs available for request completion. Release of the buffers at exit is done per pool - the allocator frees its own initial pool and the queue frees its own pool.

Existing code refactoring:
- The queue's initial pool is the size of the queue only, as the allocation of new buffers no longer uses this pool.
- Removal of the replenish background work, and of the replenish calls in the interrupt handler and restock().
- The replenish() and the rxq used_list are used only during initialization.
- Moved page allocation to a new function for code reuse.

New code:
- Allocator code - new structure and functions.
- The interrupt handler uses the allocator functions for replenishing buffers.
- Reuse of the restock() method.

Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 28 May 2015, 1 commit
-
-
Submitted by Ilan Peer

The cmd_in_flight tracking was introduced to work around faulty power management hardware, by having the driver keep the NIC awake as long as there are commands in flight. However, some of the code handling this workaround was executed unconditionally, which resulted in an inconsistent state where the driver assumed that the NIC was awake although it wasn't.

Fix this by renaming 'cmd_in_flight' to 'cmd_hold_nic_awake' and handling the NIC's requested awake state only for hardware for which the workaround is needed.

Signed-off-by: Ilan Peer <ilan.peer@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 12 March 2015, 1 commit
-
-
Submitted by Emmanuel Grumbach

This allows the op_mode to let the transport know that a queue is currently frozen and that its timer should be stopped. When the queue is unfrozen, its timer should be set to expire after the remainder of the timeout has elapsed. This can be used when stations go to sleep: when a station goes to sleep, the op_mode can freeze the timer so that the queue will never be considered stuck; when the station wakes up, the queue is unfrozen.

This is meant to avoid false positives that would happen if a buggy station goes to sleep for a very long time. In case we have a dedicated queue for this station (BA agreement) and it goes to sleep for a very long time, the queue would rightfully be stopped during all that time. In this case, the stuck-queue timer could fire, and that would be a false positive.

Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
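The freeze/unfreeze arithmetic in sketch form (jiffies bookkeeping only; field names are illustrative and locking is omitted):

    struct sketch_txq_timer {
        struct timer_list stuck_timer;
        unsigned long frozen_expiry_remainder;  /* jiffies left when frozen */
        bool frozen;
    };

    static void sketch_freeze_txq_timer(struct sketch_txq_timer *t)
    {
        t->frozen = true;
        /* remember how much of the timeout had not yet elapsed ... */
        t->frozen_expiry_remainder = t->stuck_timer.expires - jiffies;
        del_timer(&t->stuck_timer);
    }

    static void sketch_unfreeze_txq_timer(struct sketch_txq_timer *t)
    {
        t->frozen = false;
        /* ... and arm the timer for exactly that remainder */
        mod_timer(&t->stuck_timer, jiffies + t->frozen_expiry_remainder);
    }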
-
- 01 February 2015, 1 commit
-
-
Submitted by Emmanuel Grumbach

Different queues can have different behavior. While it can be unacceptable for a certain queue to be stuck for 2 seconds (e.g. the command queue), another queue may stay stuck for even longer (e.g. a queue servicing a power-saving client in GO mode). The op_mode can even make the timeout a function of the listen interval.

Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 29 December 2014, 1 commit
-
-
Submitted by Eliad Peller

Implement the ref/unref trans ops and track both the tx and host command queues (holding references while they are not empty).

Signed-off-by: Eliad Peller <eliadx.peller@intel.com>
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 14 September 2014, 1 commit
-
-
Submitted by Emmanuel Grumbach

This configuration is not needed for dvm, and it actually broke it.

Reported-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 04 September 2014, 2 commits
-
-
Submitted by Johannes Berg

Our legal structure changed at some point (see Wikipedia), but we forgot to immediately switch over to the new copyright notice. For files that we have modified in the time since the change, add the proper copyright notice now.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Submitted by Johannes Berg

In a later patch, the hardware configuration will be moved to firmware. Prepare for this by allowing the hardware configuration in the transport to be skipped, by not passing a configuration on enable and passing configure_scd=false on disable.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-