1. 20 Nov 2019, 3 commits
  2. 30 Oct 2019, 1 commit
  3. 05 Sep 2019, 3 commits
  4. 25 Jun 2019, 1 commit
  5. 01 May 2019, 19 commits
  6. 07 Mar 2019, 2 commits
  7. 19 Feb 2019, 3 commits
  8. 11 Jan 2019, 4 commits
  9. 30 Nov 2018, 3 commits
    • mt76: do not wake tx queues during flush · 13c6d5f8
      Committed by Felix Fietkau
      While the queue is being cleaned up, the stack must not attempt to add
      any extra packets.
      Signed-off-by: Felix Fietkau <nbd@nbd.name>
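      A minimal sketch of the pattern this commit describes, not the actual mt76
      code: keep the mac80211 queue stopped for the entire flush and only wake it
      once cleanup has finished. All "example_" names are hypothetical; only the
      ieee80211_* and skb helpers are real kernel APIs.

      /* Sketch only: "example_" names are hypothetical, not mt76 internals. */
      #include <linux/skbuff.h>
      #include <linux/spinlock.h>
      #include <net/mac80211.h>

      struct example_txq {
          spinlock_t lock;
          struct sk_buff_head skbs;
      };

      struct example_dev {
          struct ieee80211_hw *hw;
          struct example_txq txq[4];
      };

      static void example_txq_flush(struct example_dev *dev, int qid)
      {
          struct example_txq *q = &dev->txq[qid];

          /* Keep the mac80211 queue stopped so the stack cannot add
           * new frames while the hardware queue is being cleaned up.
           */
          ieee80211_stop_queue(dev->hw, qid);

          spin_lock_bh(&q->lock);
          __skb_queue_purge(&q->skbs);    /* drop every pending frame */
          spin_unlock_bh(&q->lock);

          /* Wake the queue only after cleanup is complete. */
          ieee80211_wake_queue(dev->hw, qid);
      }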
    • mt76: avoid queue/status spinlocks while passing tx status to mac80211 · 79d1c94c
      Committed by Felix Fietkau
      Some code in the mac80211 tx status processing path can call back into
      the tx codepath.
      To avoid deadlocks, make sure that no tx-related spinlocks are held
      during the ieee80211_tx_status call.
      Signed-off-by: Felix Fietkau <nbd@nbd.name>
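      A minimal sketch of the locking pattern described above, using hypothetical
      "example_" names: completed skbs are moved to a private list while the
      status lock is held, and ieee80211_tx_status() is called only after the
      lock has been dropped, so a callback into the tx path cannot recurse on
      the same spinlock.

      /* Sketch only: "example_" names are hypothetical, not mt76 internals. */
      #include <linux/skbuff.h>
      #include <linux/spinlock.h>
      #include <net/mac80211.h>

      struct example_dev {
          struct ieee80211_hw *hw;
          spinlock_t status_lock;
          struct sk_buff_head status_list;    /* skbs with tx status filled in */
      };

      static void example_tx_status_flush(struct example_dev *dev)
      {
          struct sk_buff_head done;
          struct sk_buff *skb;

          __skb_queue_head_init(&done);

          /* Detach completed skbs while holding the status lock... */
          spin_lock_bh(&dev->status_lock);
          skb_queue_splice_init(&dev->status_list, &done);
          spin_unlock_bh(&dev->status_lock);

          /* ...then report them with no tx-related spinlocks held. */
          while ((skb = __skb_dequeue(&done)) != NULL)
              ieee80211_tx_status(dev->hw, skb);
      }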
    • mt76: add support for reporting tx status with skb · 88046b2c
      Committed by Felix Fietkau
      MT76x2/MT76x0 have somewhat unreliable tx status reporting, and for that
      reason the driver currently does not report per-skb tx ack status at all.
      This breaks things like client idle polling, which relies on the tx ack
      status of a transmitted nullfunc frame.
      
      This patch adds code to report skb-attached tx status if requested by
      mac80211 or the rate control module. Since tx status is polled from a
      simple FIFO register, the code needs to account for the possibility of
      tx status events getting lost.
      
      The code keeps a list of skbs for which tx status is required and passes
      them to mac80211 once tx status has been filled in and the DMA queue is
      done with them.
      If a tx status event is not received within one second, the status rates
      are cleared and a successful ACK is indicated to avoid spurious disassoc
      during assoc or client polling.
      Signed-off-by: Felix Fietkau <nbd@nbd.name>
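      A minimal sketch of the bookkeeping described above, using hypothetical
      "example_" names: skbs that need tx status are kept on a list, and if no
      status event arrives within roughly one second the rate table is cleared
      and a successful ACK is reported, since events read from the FIFO register
      can be lost.

      /* Sketch only: "example_" names are hypothetical, not mt76 internals. */
      #include <linux/ktime.h>
      #include <linux/skbuff.h>
      #include <linux/spinlock.h>
      #include <linux/string.h>
      #include <net/mac80211.h>

      #define EXAMPLE_STATUS_TIMEOUT_MS    1000    /* one second */

      struct example_dev {
          struct ieee80211_hw *hw;
          spinlock_t status_lock;
          struct sk_buff_head status_list;    /* skbs awaiting a status event */
      };

      /* Queue an skb for which mac80211 requested tx status. */
      static void example_tx_status_queue(struct example_dev *dev,
                                          struct sk_buff *skb)
      {
          skb->tstamp = ktime_get();    /* remember when it went to hardware */

          spin_lock_bh(&dev->status_lock);
          __skb_queue_tail(&dev->status_list, skb);
          spin_unlock_bh(&dev->status_lock);
      }

      /* Called periodically: complete entries whose status event was lost. */
      static void example_tx_status_check(struct example_dev *dev)
      {
          for (;;) {
              struct ieee80211_tx_info *info;
              struct sk_buff *skb;

              spin_lock_bh(&dev->status_lock);
              skb = skb_peek(&dev->status_list);
              if (!skb || ktime_ms_delta(ktime_get(), skb->tstamp) <
                          EXAMPLE_STATUS_TIMEOUT_MS) {
                  spin_unlock_bh(&dev->status_lock);
                  break;
              }
              __skb_unlink(skb, &dev->status_list);
              spin_unlock_bh(&dev->status_lock);

              /* No status event arrived: clear the rates and fake an ACK
               * so stations are not disconnected spuriously.
               */
              info = IEEE80211_SKB_CB(skb);
              memset(info->status.rates, 0, sizeof(info->status.rates));
              info->flags |= IEEE80211_TX_STAT_ACK;
              ieee80211_tx_status(dev->hw, skb);
          }
      }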
  10. 19 Sep 2018, 1 commit
    • mt76: use a per rx queue page fragment cache · c12128ce
      Committed by Felix Fietkau
      Using the NAPI or netdev frag cache along with other drivers can lead to
      32 KiB pages being held for a long time, even though only a few of their
      page fragments are still in use.
      
      This can happen if the driver grabs one or two fragments for rx ring
      refill, while other drivers use (and free up) the remaining fragments.
      The 32 KiB higher-order page can only be freed once all users have freed
      their fragments.
      
      Depending on the traffic patterns, this can waste a lot of memory and
      look a lot like a memory leak.
      Signed-off-by: Felix Fietkau <nbd@nbd.name>
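      A minimal sketch of the idea, using hypothetical "example_" names: each rx
      queue owns its own struct page_frag_cache, so the higher-order page backing
      the fragments is pinned only by this queue's rx buffers rather than shared
      with other drivers through the NAPI/netdev cache.

      /* Sketch only: "example_" names are hypothetical, not mt76 internals. */
      #include <linux/gfp.h>
      #include <linux/mm_types.h>
      #include <linux/skbuff.h>

      struct example_rx_queue {
          struct page_frag_cache frag_cache;    /* private to this rx queue */
          int buf_size;
      };

      static void *example_rx_buf_alloc(struct example_rx_queue *q)
      {
          /* Fragments come from the queue's own cache, so the backing
           * page can be freed as soon as this queue releases its buffers,
           * regardless of what other drivers are doing.
           */
          return page_frag_alloc(&q->frag_cache, q->buf_size, GFP_ATOMIC);
      }

      static void example_rx_buf_free(void *buf)
      {
          skb_free_frag(buf);    /* drops the fragment's page reference */
      }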