1. 31 Oct 2014, 4 commits
    • ath10k: split reset logic from power up · 0bc14d06
      Committed by Michal Kazior
      The power-up procedure was overly complex due to
      warm/cold reset workarounds and issues, so the
      reset logic is split out of power up.
      Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
      Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
    • ath10k: make warm reset a bit safer and faster · 61c1648b
      Committed by Michal Kazior
      One of the problems I've found with warm reset is
      that it must be guaranteed that copy engine
      registers are not accessed while the device is
      being reset. Otherwise, in the worst case, the
      host may lock up.
      
      Instead of using sleeps and hoping the device
      becomes operational within some arbitrary
      timeframe, use the firmware indication register
      (a minimal polling sketch follows this entry).

      As a side effect, this makes driver
      boot/stop/recovery faster.
      Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
      Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
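      Below is a minimal sketch of the polling pattern this commit
      describes, not the actual ath10k code: the register offset,
      bit name and timeout are illustrative stand-ins for the
      driver's real hardware definitions.

       #include <linux/bits.h>
       #include <linux/delay.h>
       #include <linux/errno.h>
       #include <linux/io.h>
       #include <linux/jiffies.h>
       #include <linux/types.h>

       /* Illustrative register offset and "ready" bit; the real values
        * live in the driver's hardware definitions. */
       #define FW_INDICATOR_ADDRESS 0x0004
       #define FW_IND_INITIALIZED   BIT(1)

       /*
        * Poll the firmware indication register instead of sleeping for
        * an arbitrary, fixed time: return as soon as the target reports
        * it is initialized, or fail explicitly on timeout.
        */
       static int wait_for_fw_ready(void __iomem *mem, unsigned int timeout_ms)
       {
               unsigned long timeout = jiffies + msecs_to_jiffies(timeout_ms);

               do {
                       u32 val = ioread32(mem + FW_INDICATOR_ADDRESS);

                       if (val & FW_IND_INITIALIZED)
                               return 0;

                       /* Keep register traffic light while the target resets. */
                       msleep(10);
               } while (time_before(jiffies, timeout));

               return -ETIMEDOUT;
       }

      Compared to a fixed sleep, the loop returns as soon as the
      target signals readiness and fails with an explicit error
      instead of silently proceeding.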
    • ath10k: change ce ring cleanup logic · 099ac7ce
      Committed by Michal Kazior
      Make ath10k_pci_init_pipes() effectively alter
      only the shared target-host data.

      The per_transfer_context array is host-only
      state; its contents must be preserved for a more
      robust ring cleanup (see the sketch after this
      entry).

      This is required for future warm reset fixes.
      Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
      Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
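      A rough, hypothetical illustration of the split between
      target-visible ring state and the host-only per_transfer_context
      array; the structure and function names below are simplified
      stand-ins, not the driver's actual copy-engine code.

       #include <linux/stddef.h>

       /* Hypothetical, simplified copy-engine ring state. */
       struct ce_ring {
               unsigned int nentries;
               unsigned int sw_index;       /* shared with the target */
               unsigned int write_index;    /* shared with the target */
               void **per_transfer_context; /* host-only bookkeeping */
       };

       /* (Re)initialise only the target-visible indices; the host-only
        * per_transfer_context array is deliberately left untouched. */
       static void ce_ring_init_shared(struct ce_ring *ring)
       {
               ring->sw_index = 0;
               ring->write_index = 0;
       }

       /* Separate host-side cleanup: walk the preserved contexts and
        * release whatever buffers are still outstanding. */
       static void ce_ring_cleanup(struct ce_ring *ring,
                                   void (*release)(void *ctx))
       {
               unsigned int i;

               for (i = 0; i < ring->nentries; i++) {
                       if (!ring->per_transfer_context[i])
                               continue;
                       release(ring->per_transfer_context[i]);
                       ring->per_transfer_context[i] = NULL;
               }
       }

      Keeping the host-only contexts out of the shared-state reset is
      what lets a later cleanup pass still find and free buffers that
      were in flight when the target was reset.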
    • ath10k: avoid possible deadlock with scan timeout · 4eb2e164
      Committed by Michal Kazior
      This should prevent the deadlock predicted by the
      lockdep splat below (a generic avoidance sketch
      follows this entry):
      
       ======================================================
       [ INFO: possible circular locking dependency detected ]
       3.17.0-wl-ath+ #67 Not tainted
       -------------------------------------------------------
       kworker/u32:1/7230 is trying to acquire lock:
        (&ar->conf_mutex){+.+.+.}, at: [<ffffffffa040a57d>] ath10k_scan_timeout_work+0x2d/0x50 [ath10k_core]
      
       but task is already holding lock:
        ((&(&ar->scan.timeout)->work)){+.+...}, at: [<ffffffff8106dae1>] process_one_work+0x151/0x470
      
       which lock already depends on the new lock.
      
       the existing dependency chain (in reverse order) is:
      
       -> #1 ((&(&ar->scan.timeout)->work)){+.+...}:
              [<ffffffff810a12e5>] lock_acquire+0x85/0x100
              [<ffffffff8106cb4d>] flush_work+0x3d/0x270
              [<ffffffff8106e49d>] __cancel_work_timer+0x7d/0x110
              [<ffffffff8106e543>] cancel_delayed_work_sync+0x13/0x20
              [<ffffffffa0409f16>] ath10k_cancel_remain_on_channel+0x36/0x60 [ath10k_core]
              [<ffffffffa028c75c>] ieee80211_cancel_roc+0x1cc/0x2f0 [mac80211]
              [<ffffffffa028c8a2>] ieee80211_mgmt_tx_cancel_wait+0x22/0x30 [mac80211]
              [<ffffffffa0132288>] nl80211_tx_mgmt_cancel_wait+0xa8/0x130 [cfg80211]
              [<ffffffff816654a5>] genl_family_rcv_msg+0x1a5/0x3c0
              [<ffffffff81665749>] genl_rcv_msg+0x89/0xc0
              [<ffffffff81664e91>] netlink_rcv_skb+0xb1/0xc0
              [<ffffffff816650bc>] genl_rcv+0x2c/0x40
              [<ffffffff8166474d>] netlink_unicast+0x18d/0x200
              [<ffffffff81664add>] netlink_sendmsg+0x31d/0x430
              [<ffffffff8161a9ac>] sock_sendmsg+0x9c/0xd0
              [<ffffffff8161b469>] ___sys_sendmsg+0x389/0x3a0
              [<ffffffff8161bed9>] __sys_sendmsg+0x49/0x90
              [<ffffffff8161bf32>] SyS_sendmsg+0x12/0x20
              [<ffffffff8174c456>] system_call_fastpath+0x1a/0x1f
      
       -> #0 (&ar->conf_mutex){+.+.+.}:
              [<ffffffff810a0bde>] __lock_acquire+0x1b6e/0x1ce0
              [<ffffffff810a12e5>] lock_acquire+0x85/0x100
              [<ffffffff817491eb>] mutex_lock_nested+0x4b/0x370
              [<ffffffffa040a57d>] ath10k_scan_timeout_work+0x2d/0x50 [ath10k_core]
              [<ffffffff8106db41>] process_one_work+0x1b1/0x470
              [<ffffffff8106df63>] worker_thread+0x123/0x460
              [<ffffffff81073f34>] kthread+0xe4/0x100
              [<ffffffff8174c3ac>] ret_from_fork+0x7c/0xb0
      
       other info that might help us debug this:
      
        Possible unsafe locking scenario:
      
              CPU0                    CPU1
              ----                    ----
         lock((&(&ar->scan.timeout)->work));
                                      lock(&ar->conf_mutex);
                                      lock((&(&ar->scan.timeout)->work));
         lock(&ar->conf_mutex);
      
        *** DEADLOCK ***
      Reported-by: Marek Puzyniak <marek.puzyniak@tieto.com>
      Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
      Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
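      A generic sketch of the locking pattern at issue (illustrative
      names, not the actual fix): the delayed-work handler takes the
      configuration mutex, so the synchronous cancel must run with
      that mutex released to break the circular dependency lockdep
      reports above.

       #include <linux/mutex.h>
       #include <linux/workqueue.h>

       struct scan_state {
               struct mutex conf_mutex;          /* stands in for ar->conf_mutex */
               struct delayed_work timeout_work; /* stands in for ar->scan.timeout */
       };

       /* Work handler: needs conf_mutex, like ath10k_scan_timeout_work(). */
       static void scan_timeout_work(struct work_struct *work)
       {
               struct scan_state *st = container_of(work, struct scan_state,
                                                    timeout_work.work);

               mutex_lock(&st->conf_mutex);
               /* ... handle the scan timeout ... */
               mutex_unlock(&st->conf_mutex);
       }

       /*
        * Cancellation path: cancel_delayed_work_sync() may wait for the
        * handler above, which needs conf_mutex, so the cancel must not
        * run under conf_mutex or the two paths can deadlock.
        */
       static void scan_cancel(struct scan_state *st)
       {
               cancel_delayed_work_sync(&st->timeout_work);

               mutex_lock(&st->conf_mutex);
               /* ... finish tearing down the scan under the lock ... */
               mutex_unlock(&st->conf_mutex);
       }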
  2. 24 Oct 2014, 13 commits
  3. 23 Oct 2014, 3 commits
  4. 21 Oct 2014, 8 commits
  5. 13 Oct 2014, 1 commit
  6. 08 Oct 2014, 5 commits
  7. 07 Oct 2014, 4 commits
  8. 01 Oct 2014, 2 commits