- 30 March 2016, 7 commits
-
-
By Haim Dreyfuss
Update the device ID and FW serial number for 2X2 antenna devices in the 9000 generation product. These will not be available on the market in the coming year. Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Emmanuel Grumbach
iwlwifi / iwlmvm didn't destroy their mutexes. Fix that. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Sara Sharon
We insert padding if the MAC header's size is not a multiple of 4, to ensure that the SNAP header is DWORD aligned. When we do so, we let the firmware know by setting a bit in the Tx command (TX_CMD_FLG_MH_PAD), which instructs the firmware to drop those 2 bytes before sending the frame. However, this is not needed for A-MSDU, since the subframe header (14 bytes) complements the MAC header (26 bytes) so that the SNAP header is DWORD aligned without adding any pad. Until the 9000 series, the firmware didn't check the TX_CMD_FLG_MH_PAD bit but rather checked the length of the MAC header itself and assumed the entity that enqueued the frame (driver or internal firmware code) added the pad. Since the driver inserted the pad even for A-MSDU, this logic applied. Note that the padding is a DMA optimization and is not strictly needed, so we could pad even when it was not required. However, the checksum hardware introduced with the 9000 devices requires that A-MSDUs not be padded, since the pad is not needed there, and it will fail if such a pad exists. Because older firmware doesn't check the padding bit but checks the MAC header size itself, we cannot make this adjustment for older generations. Do not align the size if the frame is an A-MSDU and HW checksum is enabled, which will only happen on 9000 devices and onward. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
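As an illustration of the decision above, here is a minimal C sketch of the alignment check; the function name and the hw_csum flag are illustrative, not the driver's API:

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch only: decide whether the 2-byte MAC-header pad is needed. */
static bool needs_mh_pad(size_t mac_hdr_len, bool is_amsdu, bool hw_csum)
{
	if ((mac_hdr_len & 3) == 0)
		return false;		/* already DWORD aligned */

	/*
	 * For an A-MSDU the 14-byte subframe header complements the
	 * 26-byte MAC header, so the SNAP header is aligned without a
	 * pad; the 9000-series checksum HW also rejects a pad here.
	 */
	if (is_amsdu && hw_csum)
		return false;

	return true;
}
```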
-
By Emmanuel Grumbach
Bjorn pointed out that printing an error value in hexadecimal isn't very convenient. Change that. Reported-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Luca Coelho
We don't use the refcount value anymore; all the refcounting is done via the runtime PM usage_count value. Remove it. Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Sara Sharon
When entering suspend, the driver calls iwl_disable_interrupts() and then iwl_pcie_disable_ict(). On resume the driver calls only iwl_pcie_reset_ict(), without explicitly calling iwl_enable_interrupts(). This mostly works, since iwl_pcie_reset_ict() calls iwl_enable_interrupts(), but it doesn't work when there is no ict_table, as in MSI-X mode. The result is that the driver tries to resume but fails, since it never gets the RX interrupt from the FW indicating that d0i3 exit has completed. Fix it by adding an explicit call to enable interrupts. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Sara Sharon
My previous patch resized the pool, but neglected to resize the global table, which is obviously wrong since the global table maps the pool's rxbs to vids one to one. This results in a panic on 9000 devices. Add a build bug to avoid such a case in the future. Fixes: 7b542436 ("iwlwifi: pcie: fine tune number of rxbs") Reported-by: Haim Dreyfuss <haim.dreyfuss@intel.com> Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
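A minimal sketch of the kind of compile-time check the commit adds, using made-up constants rather than the driver's real ones (the kernel itself would use BUILD_BUG_ON):

```c
#include <assert.h>

#define RX_POOL_SIZE      512	/* illustrative pool size          */
#define GLOBAL_TABLE_SIZE 512	/* vid -> rxb lookup table entries */

/* Fail the build if the table can no longer cover every pool entry. */
static_assert(GLOBAL_TABLE_SIZE >= RX_POOL_SIZE,
	      "global table must map every RB in the pool");
```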
-
- 20 March 2016, 1 commit
-
-
By Sara Sharon
Currently, when the stop flow is performed, there may be transport TX RTPM references that are not released if we unmap a queue that still has unreclaimed packets. Fix that. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 10 March 2016, 2 commits
-
-
By Gregory Greenman
When trying to reach a high Rx throughput of more than 500 Mbps on a device with a relatively weak CPU (Atom x5-Z8500), CPU utilization may become a bottleneck. Analysis showed that we were looping in iwl_pcie_rx_handle for very long periods, which led to starvation of other threads (iwl_pcie_rx_handle runs with _bh disabled). We were handling Rx and allocating new buffers, and the new buffers were ready quickly enough to be available before we had finished handling all the buffers available in the hardware. As a consequence, we called iwl_pcie_rxq_restock to refill the hardware with the new buffers and started handling the new buffers again without exiting the function. Since we read the hardware pointer again when we goto restart, new buffers were handled immediately instead of exiting the function. This patch avoids refilling RBs inside the Rx handling loop unless an emergency situation is reached, and it doesn't read the hardware pointer again unless we are in an emergency (unlikely) case. This significantly reduces the maximum time we spend in iwl_pcie_rx_handle with _bh disabled. Signed-off-by: Gregory Greenman <gregory.greenman@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
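A toy C model of the reworked loop; all names, sizes and the emergency flag are made up for illustration:

```c
#include <stdbool.h>

static bool emergency;		/* set when the HW is starving for buffers */

static void handle_one_rb(void) { /* pass one received buffer up the stack */ }
static void restock_hw(void)    { /* hand free buffers back to the HW      */ }

static void rx_handle_sketch(int hw_wptr, int sw_rptr)
{
	const int stop_at = hw_wptr;	/* sample the HW pointer once       */

	while (sw_rptr != stop_at) {
		handle_one_rb();
		sw_rptr = (sw_rptr + 1) % 512;
		if (emergency)
			restock_hw();	/* refill mid-loop only when forced */
	}
	restock_hw();			/* the normal refill, once, on exit */
}
```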
-
By Sara Sharon
We kick the allocator when we have 2 RBDs that don't have attached RBs, and the allocator allocates 8 RBs, meaning that it needs another 6 RBDs to attach the RBs to. The design is that the allocator should always have enough RBDs to fulfill requests, so we give it 6 RBDs in advance; when it is kicked, it receives the additional 2 RBDs and has enough. These RBDs were taken from the Rx queue itself, meaning that each Rx queue didn't have the maximal number of RBDs but MAX - 6. Change the initial number of RBDs in the system to include both the queue size and the allocator reserve. Note that the multi-queue size is always 511 instead of 512 to avoid a full queue, since we cannot detect that state easily enough in the 9000 architecture. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
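The arithmetic behind the change, as a small standalone C sketch; the numbers come from the text above, while the macro names are illustrative:

```c
#include <stdio.h>

#define RX_QUEUE_ENTRIES  512	/* nominal queue size                      */
#define ALLOC_RESERVE     6	/* kept by the allocator: 2 (kick) + 6 = 8 */

int main(void)
{
	/* Before: the reserve was carved out of the queue itself. */
	printf("old usable RBDs per queue: %d\n",
	       RX_QUEUE_ENTRIES - ALLOC_RESERVE);

	/* After: allocate the queue size plus the reserve up front. */
	printf("new total RBDs: %d\n", RX_QUEUE_ENTRIES + ALLOC_RESERVE);

	/* Multi-queue HW keeps one slot empty to avoid a full ring. */
	printf("multi-queue entries: %d\n", RX_QUEUE_ENTRIES - 1);
	return 0;
}
```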
-
- 7 March 2016, 3 commits
-
-
By Sara Sharon
A 128-byte chunk size is supported only on PCIe and not on IOSF. For now, change it back to 64 bytes. Reported-by: Oren Givon <oren.givon@intel.com> Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Sara Sharon
Change the code to move rxbs directly from the allocator's list to the queue's free list. This makes the code more readable and saves the interim array and the double loop over the free RBs. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
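A sketch of the "move directly" idea with a toy singly-linked list; the type and helper are purely illustrative of splicing one list onto another instead of copying through an interim array:

```c
#include <stddef.h>

struct rb { struct rb *next; };	/* stands in for the real RX buffer */

/* Splice the allocator's ready list onto the queue's free list. */
static void splice_ready_to_free(struct rb **ready, struct rb **free_list)
{
	struct rb *tail = *ready;

	if (!tail)
		return;

	while (tail->next)		/* find the end of the ready list    */
		tail = tail->next;

	tail->next = *free_list;	/* chain the old free list behind it */
	*free_list = *ready;
	*ready = NULL;			/* the allocator's list is now empty */
}
```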
-
By Luca Coelho
The PCI driver core keeps any unbound device in the active state and forbids runtime PM. When our driver gets probed, we take control of the state. When the device is released (i.e. during unbind or module removal), we should return the state to what it was before. To do so, we need to forbid RTPM in the driver's remove op. Additionally, remove an unnecessary pm_runtime_disable() call, move the initial ref_count setting to a better place and add some comments explaining what is going on. Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
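A hedged sketch of what such a remove op can look like for a generic PCI driver; this illustrates the PCI-core convention described above and is not the iwlwifi code:

```c
#include <linux/pci.h>
#include <linux/pm_runtime.h>

static void example_remove(struct pci_dev *pdev)
{
	/* ... device-specific teardown goes here ... */

	/*
	 * The PCI core keeps unbound devices active with runtime PM
	 * forbidden; restore that state before handing the device back.
	 */
	pm_runtime_forbid(&pdev->dev);
}
```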
-
- 2 March 2016, 1 commit
-
-
By Sara Sharon
In the 9000-series A0 step, the closed_rb_num does not wrap around properly. The queue itself wraps around as it should, so we can work around the issue by wrapping the closed_rb_num in the driver. While at it, extend the RX logging and add error handling for other cases where HW values could cause us to access invalid memory locations. Also add proper masking of the vid value read from HW; this should have no actual effect, but it is better to be on the safe side. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
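An illustrative sketch of the two defensive steps; the ring size and mask are assumptions taken from the surrounding entries, not the exact register layout:

```c
#include <stdint.h>

#define RB_RING_ENTRIES	512	/* assumed power-of-two ring size */
#define VID_MASK	0x0fff	/* vid lives in the low 12 bits   */

/* A0-step HW may report a value past the ring size; wrap it in SW. */
static uint32_t wrap_closed_rb_num(uint32_t hw_val)
{
	return hw_val & (RB_RING_ENTRIES - 1);
}

/* Never index driver memory with unmasked bits coming from the HW. */
static uint32_t sanitize_vid(uint32_t used_entry)
{
	return used_entry & VID_MASK;
}
```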
-
- 28 February 2016, 6 commits
-
-
By Emmanuel Grumbach
The patch below introduced variable shadowing. Fix that. Fixes: 3955525d ("iwlwifi: pcie: buffer packets to avoid overflowing Tx queues") Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Luca Coelho
With these ops, we can know when we are about to enter system suspend. This allows us to exit the D0i3 state before entering suspend. Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Sara Sharon
Fine tune the RFH registers further:
* Set the default queue explicitly
* Set the RFH to drop frames exceeding the RB size
* Set the maximum RX transfer size to DRAM to 128 bytes instead of 64
Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Emmanuel Grumbach
A curly brace was misplaced; fix this. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Haim Dreyfuss
Working with MSI-X requires prior configuration. This includes requesting interrupt vectors from the OS, registering the vectors and mapping the optional causes to the relevant interrupt. In addition, add a new interrupt handler to handle MSI-X interrupts. Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
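For reference, a rough sketch of those three steps using the generic kernel PCI/IRQ APIs of that era; the vector count, handler and cause mapping are placeholders, error unwinding is omitted, and this is not the driver's actual code:

```c
#include <linux/pci.h>
#include <linux/interrupt.h>

static irqreturn_t example_msix_handler(int irq, void *dev_id)
{
	/* dispatch the per-vector interrupt causes here */
	return IRQ_HANDLED;
}

static int example_enable_msix(struct pci_dev *pdev)
{
	struct msix_entry entries[4];
	int i, num, ret;

	for (i = 0; i < 4; i++)
		entries[i].entry = i;

	/* 1) request interrupt vectors from the OS */
	num = pci_enable_msix_range(pdev, entries, 1, 4);
	if (num < 0)
		return num;

	/* 2) register a handler on each vector */
	for (i = 0; i < num; i++) {
		ret = request_irq(entries[i].vector, example_msix_handler,
				  0, "example-msix", pdev);
		if (ret)
			return ret;	/* unwinding omitted in this sketch */
	}

	/* 3) map the HW interrupt causes to the vectors (device specific) */
	return 0;
}
```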
-
By Emmanuel Grumbach
Instead of waking up the device each time we write a register, wake it up once and write all the registers in one go. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 15 February 2016, 2 commits
-
-
By Anton Protopopov
The iwl_trans_pcie_start_fw() function may return the positive value EIO instead of -EIO in case of error. Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Emmanuel Grumbach
When we load the firmware, we hold trans_pcie->mutex to avoid nested flows. We also rely on the ISR to wake up the thread when the DMA has finished copying a chunk. During this flow, we enable the RF-kill interrupt. The problem is that the RF-kill interrupt handler can take the mutex and bring the device down. This means that if we load the firmware while the RF-kill switch is enabled (which will happen when we load the INIT firmware to read the device's capabilities and register to mac80211), we may get an RF-kill interrupt immediately, and the ISR will be waiting for the mutex held by the thread that is currently loading the firmware. At this stage, the ISR won't be able to service the DMA interrupt needed to wake up the thread that loads the firmware. We are in a deadlock that ends only when the thread loading the firmware times out and releases the mutex. To fix this, take the mutex later in the flow, disable the interrupts and call synchronize_irq() to give the RF-kill interrupt a chance to run and complete. After that, mask all the interrupts besides the DMA interrupt and proceed with the firmware load. Make sure to check that there was no RF-kill interrupt while the interrupts were disabled. This fixes https://bugzilla.kernel.org/show_bug.cgi?id=111361 Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 1 February 2016, 5 commits
-
-
By Sara Sharon
Previous patches enabled the new 9000 hardware DMA for one queue only. Enable the actual multi-queue path and configuration now. This also requires a per-queue NAPI struct. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Emmanuel Grumbach
No need to include net/ip6_checksum.h twice. Remove TODOs. Remove trailing space. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Matti Gottlieb
Currently, when the driver is configured with wowlan parameters and enters D3 mode, it switches the FW image to D3, and when it exits suspend it reloads the D0 image. If the firmware supports the consolidation of the D0 and D3 images, there is no need to load the D3 image on suspend and no need to reload the D0 image on resume. Do not switch images on suspend/resume for firmware that supports consolidated images. Signed-off-by: Matti Gottlieb <matti.gottlieb@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Luciano Coelho
Enable runtime power management (RTPM) for PCIe devices and implement the corresponding functions to enable D0i3 mode when the device is idle. Additionally, remove some unnecessary #ifdefs, because the RTPM code will not be called if runtime PM is not configured. Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Luca Coelho
Add an initial implementation of runtime power management (RTPM) for PCI devices. With this patch, RTPM is only used when wifi is off (i.e. the wifi interface is down). This implementation is behind a new Kconfig flag, IWLWIFI_PCIE_RTPM. Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 31 January 2016, 3 commits
-
-
By Sara Sharon
The 9000 series introduces several changes in the device's DMA operation. As the device now supports multi-queue RX, several DMA channels must be configured. The flow of providing the device with the allocated RBDs changes as well: the device maintains separate tables of used and free RBDs, and the hardware may use the free table to feed RBDs to any queue. This requires maintaining a shared table to map returned RBDs back to the original RXB; for that purpose the VID is introduced, an internal identifier of the RB that is placed in the lower 12 bits and returned by the HW in the used data. Another change is support for 64-bit DMA addresses. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
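Conceptually, each free-table entry pairs a 64-bit buffer address with the internal identifier; the struct below illustrates that pairing and is not the hardware's exact descriptor layout:

```c
#include <stdint.h>

struct free_entry_sketch {
	uint64_t dma_addr;	/* 64-bit DMA address handed to the device */
	uint32_t vid;		/* low 12 bits map the RB back to its rxb  */
};

/* Build a vid from a table index; only the lower 12 bits are used. */
static uint32_t make_vid(uint32_t table_index)
{
	return table_index & 0x0fff;
}
```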
-
By Sara Sharon
The 9000-series devices will support multiple RX queues. The current code has one static RX queue; change it to allocate a number of queues according to the device's capability (pre-9000 devices have the number of RX queues set to one). Subsequent generalizations are: change the code to access an explicitly numbered RX queue only when the queue number is known (when handling an interrupt, when accessing the default queue and when iterating over the queues), with the rest of the functions receiving the RX queue as a pointer; generalize the warning on allocation failure to consider the allocator status instead of a single RX queue's status; and move the initial pool of RX memory buffers to be shared among all the queues and allocated to the default queue on init. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Emmanuel Grumbach
When the Tx queues are full above a threshold, we immediately stop mac80211's queue to stop getting new packets. This worked until TSO was enabled. With TSO, a single packet from mac80211 can use many descriptors, since a large send needs to be split into several segments. This means that stopping mac80211's queues is not enough; we also need to ensure that we don't overflow the Tx queues with a single packet from mac80211. Add code to the transport layer to do just that. Stop mac80211's queue as soon as the queue is full above the same threshold as before, and keep pushing the current packet along with its segments onto the queue while checking that we don't overflow. If that would happen, buffer the segments and send them when there is room in the Tx queue again. Of course, we first need to send the buffered segments and only then wake up mac80211's queues. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
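A toy C sketch of the buffering scheme; the list type, the push_to_txq() helper and the overflow list are illustrative stand-ins, not the transport's data structures:

```c
#include <stddef.h>

struct seg { struct seg *next; };		/* one TSO segment */

static struct seg *overflow_head;
static struct seg **overflow_tail = &overflow_head;

static void push_to_txq(struct seg *s) { (void)s; /* DMA mapping omitted */ }

/* Accept every segment of one large send without overflowing the queue. */
static void enqueue_segments(struct seg *segs, size_t room)
{
	while (segs) {
		struct seg *next = segs->next;

		segs->next = NULL;
		if (room) {
			push_to_txq(segs);	/* fits: send it now       */
			room--;
		} else {
			*overflow_tail = segs;	/* full: park it for later */
			overflow_tail = &segs->next;
		}
		segs = next;
	}
	/* parked segments are flushed before mac80211's queue is woken */
}
```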
-
- 25 January 2016, 2 commits
-
-
By Oren Givon
Signed-off-by: Oren Givon <oren.givon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Oren Givon
Add new sub-system PCI IDs to the 3168 series: the 0x2010, 0x2050 and 0x2150 sub-system IDs. Signed-off-by: Oren Givon <oren.givon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 8 January 2016, 3 commits
-
-
By Emmanuel Grumbach
The 8000 device family has a new debug engine that needs to be configured differently from the 7000's. The debug engine's DMA works in chunks of memory, and the configured buffer size really means the start of the last chunk. Since one chunk is 256 bytes long, we should configure the device to write to buffer_size - 256. This fixes a situation where the device would write to memory it is not allowed to access. CC: <stable@vger.kernel.org> [4.1+] Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
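The fix boils down to a small piece of arithmetic; a hedged sketch with illustrative names:

```c
#include <stdint.h>

#define DBG_CHUNK_SIZE 256u	/* the debug engine's DMA chunk size */

/*
 * The programmed value really means the start of the last chunk, so
 * back off one chunk from the end of the buffer.
 */
static uint32_t dbg_write_limit(uint32_t buffer_size)
{
	return buffer_size - DBG_CHUNK_SIZE;
}
```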
-
By Emmanuel Grumbach
The debug functions of fw-dbg.c don't really need to modify the trigger and the description they receive as parameters. Constify the pointers. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Oren Givon
Update and fix some 7265 PCI ID entries. CC: <stable@vger.kernel.org> [3.13+] Signed-off-by: Oren Givon <oren.givon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 21 December 2015, 3 commits
-
-
By Emmanuel Grumbach
All the callers used silent = false. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Oren Givon
A new PCI ID update for the 8000 and 9000 series. type=feature Signed-off-by: Oren Givon <oren.givon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Emmanuel Grumbach
When the op_mode sends an skb whose payload is bigger than the MSS, the PCIe transport will create an A-MSDU out of it. The PCIe code assumes that the skb coming from the op_mode can fit in one A-MSDU; it is the op_mode's responsibility to make sure that this guarantee holds. Additional headers need to be built for the subframes: the TSO core code takes care of the IP / TCP headers, and the driver takes care of the 802.11 subframe headers. These headers are stored on a per-CPU page that is re-used for all the packets handled on that same CPU. Each skb holds a reference to that page and releases the page when it is reclaimed. When the page gets full, it is released and a new one is allocated. Since any skb that doesn't go through mac80211's fast-xmit path will be segmented, we can assume here that the packet is not WEP / TKIP and has a proper SNAP header. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
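A userspace sketch of the header-page idea: headers are carved sequentially out of one page, and a fresh page is started once the current one fills up. Reference counting, per-CPU handling and freeing of the old page (done when its skbs are reclaimed) are omitted, and all names and sizes are illustrative:

```c
#include <stdint.h>
#include <stdlib.h>

#define HDR_PAGE_SIZE 4096u

struct hdr_page {
	uint8_t buf[HDR_PAGE_SIZE];
	unsigned int pos;		/* next free offset in the page */
};

/* Return room for `len` bytes of subframe headers, growing as needed. */
static uint8_t *get_hdr_room(struct hdr_page **pagep, unsigned int len)
{
	struct hdr_page *p = *pagep;
	uint8_t *room;

	if (len > HDR_PAGE_SIZE)
		return NULL;		/* a single request must fit a page */

	if (!p || p->pos + len > HDR_PAGE_SIZE) {
		/* current page exhausted (or none yet): start a fresh one;
		 * the old one is released by the skbs that reference it */
		p = calloc(1, sizeof(*p));
		if (!p)
			return NULL;
		*pagep = p;
	}

	room = p->buf + p->pos;
	p->pos += len;
	return room;
}
```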
-
- 20 December 2015, 2 commits
-
-
By Emmanuel Grumbach
The code that handles the TBs containing the WiFi payload will be changed for TSO. Move the current code into a separate function. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
By Emmanuel Grumbach
Allow configuring the driver to pretend to have TX checksum offload support. This will be useful for testing the TSO flows that come in later patches. This configuration is disabled by default. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-