1. 31 Aug 2018, 2 commits
  2. 02 Aug 2018, 4 commits
  3. 26 Jul 2018, 4 commits
  4. 30 May 2018, 1 commit
  5. 26 Apr 2018, 1 commit
  6. 25 Oct 2017, 1 commit
    • locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns... · 6aa7de05
      Authored by Mark Rutland
      locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE()
      
      Please do not apply this to mainline directly, instead please re-run the
      coccinelle script shown below and apply its output.
      
      For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
      preference to ACCESS_ONCE(), and new code is expected to use one of the
      former. So far, there's been no reason to change most existing uses of
      ACCESS_ONCE(), as these aren't harmful, and changing them results in
      churn.
      
      However, for some features, the read/write distinction is critical to
      correct operation. To distinguish these cases, separate read/write
      accessors must be used. This patch migrates (most) remaining
      ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
      coccinelle script:
      
      ----
      // Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
      // WRITE_ONCE()
      
      // $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch
      
      virtual patch
      
      @ depends on patch @
      expression E1, E2;
      @@
      
      - ACCESS_ONCE(E1) = E2
      + WRITE_ONCE(E1, E2)
      
      @ depends on patch @
      expression E;
      @@
      
      - ACCESS_ONCE(E)
      + READ_ONCE(E)
      ----
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: davem@davemloft.net
      Cc: linux-arch@vger.kernel.org
      Cc: mpe@ellerman.id.au
      Cc: shuah@kernel.org
      Cc: snitzer@redhat.com
      Cc: thor.thayer@linux.intel.com
      Cc: tj@kernel.org
      Cc: viro@zeniv.linux.org.uk
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6aa7de05
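
      As a rough illustration of what the script's two rules do to C code,
      here is a userspace sketch; the READ_ONCE()/WRITE_ONCE() definitions
      below are deliberately simplified stand-ins, not the kernel's real
      macros.
      ----
      /* Simplified stand-ins for illustration only; the kernel's real
       * READ_ONCE()/WRITE_ONCE() are more elaborate. */
      #include <stdio.h>

      #define READ_ONCE(x)      (*(volatile typeof(x) *)&(x))
      #define WRITE_ONCE(x, v)  (*(volatile typeof(x) *)&(x) = (v))

      static int shared_flag;

      int main(void)
      {
          /* Before the script: ACCESS_ONCE(shared_flag) = 1;   (a write) */
          WRITE_ONCE(shared_flag, 1);

          /* Before the script: int v = ACCESS_ONCE(shared_flag);   (a read) */
          int v = READ_ONCE(shared_flag);

          printf("%d\n", v);
          return 0;
      }
      ----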
  7. 24 Aug 2017, 1 commit
    • iwlwifi: pcie: move rx workqueue initialization to iwl_trans_pcie_alloc() · 10a54d81
      Authored by Luca Coelho
      Work queues cannot be allocated when a mutex is held because the mutex
      may be in use and that would make it sleep.  Doing so generates the
      following splat with 4.13+:
      
      [   19.513298] ======================================================
      [   19.513429] WARNING: possible circular locking dependency detected
      [   19.513557] 4.13.0-rc5+ #6 Not tainted
      [   19.513638] ------------------------------------------------------
      [   19.513767] cpuhp/0/12 is trying to acquire lock:
      [   19.513867]  (&tz->lock){+.+.+.}, at: [<ffffffff924afebb>] thermal_zone_get_temp+0x5b/0xb0
      [   19.514047]
      [   19.514047] but task is already holding lock:
      [   19.514166]  (cpuhp_state){+.+.+.}, at: [<ffffffff91cc4baa>] cpuhp_thread_fun+0x3a/0x210
      [   19.514338]
      [   19.514338] which lock already depends on the new lock.
      
      This lock dependency already existed with previous kernel versions,
      but it was not detected until commit 49dfe2a6 ("cpuhotplug: Link
      lock stacks for hotplug callbacks") was introduced.
      Reported-by: David Weinehall <david.weinehall@intel.com>
      Reported-by: Jiri Kosina <jikos@kernel.org>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
      10a54d81
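
      A compile-only, kernel-style sketch of the pattern the commit
      describes, with made-up names (my_trans, my_rb_allocator): allocate
      the workqueue once in the transport-alloc path with no locks held,
      and only use it from the mutex-protected start path.
      ----
      #include <linux/mutex.h>
      #include <linux/slab.h>
      #include <linux/workqueue.h>

      struct my_trans {
          struct mutex mutex;
          struct workqueue_struct *rba_wq;
      };

      struct my_trans *my_trans_alloc(void)
      {
          struct my_trans *trans = kzalloc(sizeof(*trans), GFP_KERNEL);

          if (!trans)
              return NULL;

          mutex_init(&trans->mutex);

          /* Allocate the workqueue here, with no locks held. */
          trans->rba_wq = alloc_workqueue("my_rb_allocator",
                                          WQ_HIGHPRI | WQ_UNBOUND, 1);
          if (!trans->rba_wq) {
              kfree(trans);
              return NULL;
          }
          return trans;
      }

      int my_trans_start(struct my_trans *trans)
      {
          mutex_lock(&trans->mutex);
          /* Only *use* the pre-allocated workqueue under the mutex. */
          mutex_unlock(&trans->mutex);
          return 0;
      }
      ----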
  8. 18 Aug 2017, 1 commit
  9. 30 Jun 2017, 1 commit
  10. 29 Jun 2017, 3 commits
  11. 23 Jun 2017, 6 commits
  12. 26 Apr 2017, 1 commit
  13. 20 Apr 2017, 2 commits
    • iwlwifi: pcie: alloc queues dynamically · 13a3a390
      Authored by Sara Sharon
      Change queue allocation to be dynamic. On transport init only
      the command queue is allocated; other queues are allocated
      on demand.
      This is needed because of the huge number of queues we will soon
      enable (512), and as preparation for the TX Virtual Queue Manager
      (TVQM) feature, where the firmware will assign the actual queue
      number on demand.
      This also includes allocating the byte count table per queue
      rather than as one contiguous chunk of memory.
      Signed-off-by: Sara Sharon <sara.sharon@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      13a3a390
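
      The following userspace sketch illustrates the idea (it is not the
      driver's actual data layout): allocate only the command queue at
      init, and allocate every other queue, together with its own byte
      count table, on first use.
      ----
      #include <stdlib.h>

      #define MAX_QUEUES 512
      #define CMD_QUEUE  0

      struct txq {
          int id;
          /* byte count table lives with its queue, not in one big chunk */
          unsigned short *byte_cnt_tbl;
      };

      static struct txq *queues[MAX_QUEUES];

      static struct txq *queue_alloc(int id, int n_entries)
      {
          struct txq *q = calloc(1, sizeof(*q));

          if (!q)
              return NULL;
          q->id = id;
          q->byte_cnt_tbl = calloc(n_entries, sizeof(*q->byte_cnt_tbl));
          if (!q->byte_cnt_tbl) {
              free(q);
              return NULL;
          }
          return q;
      }

      int main(void)
      {
          /* On init, only the command queue exists ... */
          queues[CMD_QUEUE] = queue_alloc(CMD_QUEUE, 256);

          /* ... any other queue is allocated the first time it is needed. */
          if (!queues[7])
              queues[7] = queue_alloc(7, 256);

          return (queues[CMD_QUEUE] && queues[7]) ? 0 : 1;
      }
      ----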
    • iwlwifi: pcie: prepare for dynamic queue allocation · b2a3b1c1
      Authored by Sara Sharon
      In the a000 transport we will allocate queues dynamically.
      Right now queues are allocated as one big chunk of memory
      and accessed as such.
      Dynamic allocation of the queues will require accessing
      the queues through pointers.
      In order to keep the pre-a000 TX queue handling simple,
      keep allocating and freeing the memory in the same style,
      but change the various functions to access the queues as
      individual pointers.
      Dynamic allocation for the a000 devices will come in a separate
      patch.
      Signed-off-by: Sara Sharon <sara.sharon@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      b2a3b1c1
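
      A small userspace sketch of this intermediate step, with made-up
      names: the queues remain one contiguous allocation, but callers go
      through per-queue pointers, so a later patch can switch to per-queue
      allocations without touching them.
      ----
      #include <stdlib.h>

      #define N_QUEUES 32

      struct txq {
          int id;
      };

      struct my_trans {
          struct txq *txq_memory;      /* one contiguous chunk, as before */
          struct txq *txq[N_QUEUES];   /* per-queue pointers, the new access path */
      };

      static int tx_alloc(struct my_trans *t)
      {
          int i;

          t->txq_memory = calloc(N_QUEUES, sizeof(*t->txq_memory));
          if (!t->txq_memory)
              return -1;

          /* Callers use t->txq[i] from now on; only this function knows
           * the queues still come from a single allocation. */
          for (i = 0; i < N_QUEUES; i++) {
              t->txq[i] = &t->txq_memory[i];
              t->txq[i]->id = i;
          }
          return 0;
      }

      int main(void)
      {
          struct my_trans t = { 0 };

          if (tx_alloc(&t))
              return 1;
          free(t.txq_memory);
          return 0;
      }
      ----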
  14. 11 Apr 2017, 2 commits
    • iwlwifi: pcie: add context information support · eda50cde
      Authored by Sara Sharon
      The context information structure is going to be used in a000
      devices for firmware self-init.
      
      The self-init includes the firmware loading itself from DRAM
      via the ROM.
      This means the TFH-related firmware loading code can be cleaned up.
      
      The firmware loading includes the paging memory as well, so the op
      mode can stop initializing the paging and sending the DRAM_BLOCK_CMD.
      
      The firmware performs the RFH, TFH and SCD configuration, while the
      driver only fills the required configurations and addresses into the
      context information structure.
      
      The only remaining access to the RFH is the write pointer, which
      is updated upon the alive interrupt, after the FW has configured the RFH.
      Signed-off-by: Sara Sharon <sara.sharon@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      eda50cde
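
      Purely as an illustration of the direction described above (this is
      not the real context-information layout), here is a sketch of a
      structure the driver might fill with DRAM addresses for the ROM to
      consume; all names and field sizes are hypothetical.
      ----
      #include <stdint.h>

      #define MAX_FW_CHUNKS   16
      #define MAX_PAGING_BLKS 32

      /* Hypothetical layout: DMA addresses and sizes the driver fills in,
       * handed to the device so the ROM can load the firmware by itself. */
      struct ctx_info_dram {
          uint64_t fw_chunk_addr[MAX_FW_CHUNKS];
          uint32_t fw_chunk_size[MAX_FW_CHUNKS];
          uint64_t paging_addr[MAX_PAGING_BLKS];   /* paging now handled by FW too */
          uint32_t paging_size[MAX_PAGING_BLKS];
      };

      struct ctx_info {
          uint32_t version;
          uint32_t size;
          struct ctx_info_dram dram;
      };

      /* The driver only records where things are; it no longer drives the load. */
      void ctx_info_set_fw_chunk(struct ctx_info *ci, int i,
                                 uint64_t dma_addr, uint32_t size)
      {
          ci->dram.fw_chunk_addr[i] = dma_addr;
          ci->dram.fw_chunk_size[i] = size;
      }
      ----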
    • iwlwifi: pcie: print less data upon firmware crash · afb84431
      Authored by Emmanuel Grumbach
      We don't need to print so much data in the kernel log.
      Limit the data to be printed to the queue that actually
      got stuck in case of a TFD queue hang, and stop dumping
      all the CSR and FH registers. Over the course of time, the
      CSR and FH values haven't proven themselves to be really
      useful for debugging, and they are now in the firmware dump
      anyway.
      
      This is in preparation for adding more data that the firmware
      team requires to be printed.
      Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      afb84431
  15. 08 Feb 2017, 1 commit
    • iwlwifi: pcie: set STATUS_RFKILL immediately after interrupt · 2b18824a
      Authored by Golan Ben Ami
      Currently, when getting an RFKILL interrupt, the transport enters a flow
      in which it stops the device, disables other interrupts, etc. After
      stopping the device, the transport resets the hw, and sleeps. During
      the sleep, a context switch occurs and host commands are sent by upper
      layers (e.g. mvm) to the fw. This is possible since the op_mode layer
      and the transport layer hold different mutexes.
      
      Since the STATUS_RFKILL bit isn't set, the transport layer doesn't
      recognize that RFKILL was toggled on, and no commands can actually be
      sent, so it enqueues the command to the tx queue and sets a timer on
      the queue.
      
      After switching context back to stopping the device, STATUS_RFKILL is
      set, and then the transport can't send the command to the fw.
      This eventually results in a queue hang.
      
      Fix this by setting STATUS_RFKILL immediately when
      the interrupt is fired.
      Signed-off-by: Golan Ben-Ami <golan.ben.ami@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      2b18824a
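
      A hedged, kernel-style sketch of the idea: set the RF-kill status bit
      from the interrupt handler itself, before the heavier stop-device
      flow runs, so command-submission paths see RF-kill immediately. The
      structure, bit number and handler below are placeholders, not the
      driver's actual symbols.
      ----
      #include <linux/bitops.h>
      #include <linux/errno.h>
      #include <linux/interrupt.h>

      #define MY_STATUS_RFKILL 3   /* placeholder bit number */

      struct my_trans {
          unsigned long status;
      };

      irqreturn_t my_irq_handler(int irq, void *data)
      {
          struct my_trans *trans = data;

          /* Mark RF-kill right away, in the ISR ... */
          set_bit(MY_STATUS_RFKILL, &trans->status);

          /* ... the heavier stop-device flow runs later, in thread context. */
          return IRQ_WAKE_THREAD;
      }

      int my_send_cmd(struct my_trans *trans)
      {
          /* Command paths now observe RF-kill immediately. */
          if (test_bit(MY_STATUS_RFKILL, &trans->status))
              return -ERFKILL;
          return 0;
      }
      ----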
  16. 07 Feb 2017, 1 commit
    • iwlwifi: pcie: fix another RF-kill race · 23aeea94
      Authored by Johannes Berg
      When resuming, it's possible for the following scenario to occur:
      
       * iwl_pci_resume() enables the RF-kill interrupt
       * iwl_pci_resume() reads the RF-kill state (e.g. to 'radio enabled')
       * RF_KILL interrupt triggers, and iwl_pcie_irq_handler() reads the
         state, now 'radio disabled', and acquires the &trans_pcie->mutex.
       * iwl_pcie_irq_handler() further calls iwl_trans_pcie_rf_kill() to
         indicate to the higher layers that the radio is now disabled (and
         stops the device while at it)
       * iwl_pcie_irq_handler() drops the mutex
       * iwl_pci_resume() continues, acquires the mutex and calls the higher
         layers to indicate that the radio is enabled.
      
      At this point, the device is stopped but the higher layers think it's
      available, and can call deeply into the driver to try to enable it.
      However, this will fail since the device is actually disabled.
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      23aeea94
  17. 19 Sep 2016, 2 commits
  18. 16 Sep 2016, 2 commits
  19. 06 Jul 2016, 4 commits
    • iwlwifi: centralize 64 bit HW registers write · 12a17458
      Authored by Sara Sharon
      Move the pcie write_prph_64 so that it is transport agnostic.
      Add a direct write as well, as it is needed for a000 HW.
      Signed-off-by: Sara Sharon <sara.sharon@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      12a17458
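
      A minimal sketch of how a 64-bit register write can be composed from
      two 32-bit MMIO writes; the offset layout and write order here are
      assumptions for illustration, not the documented device behaviour.
      ----
      #include <linux/io.h>
      #include <linux/kernel.h>

      /* Assumed convention: low word first, high word at offset + 4. */
      void my_write64(void __iomem *base, u32 ofs, u64 val)
      {
          writel(lower_32_bits(val), base + ofs);
          writel(upper_32_bits(val), base + ofs + 4);
      }
      ----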
    • iwlwifi: pcie: track rxb status · b1753c62
      Authored by Sara Sharon
      In an MQ environment, and with the new architecture still in its
      early stages, we may encounter DMA issues. Track the RXB status
      and bail out in case we receive an index to an RXB that was not
      mapped and handed over to the HW.
      Signed-off-by: Sara Sharon <sara.sharon@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      b1753c62
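
      An illustrative userspace sketch of the tracking, with made-up names:
      each RXB is marked invalid while the driver owns it and valid once it
      has been mapped and handed to the HW, so an index pointing at an
      unmapped RXB is detected and rejected.
      ----
      #include <stdbool.h>
      #include <stdio.h>

      #define RX_POOL_SIZE 512

      struct rx_buf {
          bool invalid;   /* true while the buffer is NOT owned by the HW */
      };

      static struct rx_buf rx_pool[RX_POOL_SIZE];

      static void hand_to_hw(struct rx_buf *rxb)
      {
          rxb->invalid = false;   /* mapped and given to the device */
      }

      static struct rx_buf *handle_rx(unsigned int vid)
      {
          struct rx_buf *rxb = &rx_pool[vid % RX_POOL_SIZE];

          if (rxb->invalid) {
              fprintf(stderr, "HW returned an unmapped RXB, bailing out\n");
              return NULL;
          }
          rxb->invalid = true;    /* back in driver hands */
          return rxb;
      }

      int main(void)
      {
          int i, ok;

          for (i = 0; i < RX_POOL_SIZE; i++)
              rx_pool[i].invalid = true;      /* nothing handed to HW yet */

          hand_to_hw(&rx_pool[42]);
          ok = handle_rx(42) != NULL;         /* valid: HW really owned it */
          ok = ok && handle_rx(43) == NULL;   /* never mapped: bail out */
          return ok ? 0 : 1;
      }
      ----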
    • iwlwifi: pcie: fix a race in firmware loading flow · f16c3ebf
      Authored by Emmanuel Grumbach
      Upon firmware load interrupt (FH_TX), the ISR re-enables only the
      firmware load interrupt, to avoid races with other
      flows as described in the commit below. When the firmware
      is completely loaded, the thread that is loading the
      firmware will enable all the interrupts to make sure that
      the driver gets the ALIVE interrupt.
      The problem with that is that the thread that is loading
      the firmware is actually racing against the ISR and we can
      get to the following situation:
      
      CPU0					CPU1
      iwl_pcie_load_given_ucode
      	...
      	iwl_pcie_load_firmware_chunk
      		wait_for_interrupt
      					<interrupt>
      					ISR handles CSR_INT_BIT_FH_TX
      					ISR wakes up the thread on CPU0
      	/* enable all the interrupts
      	 * to get the ALIVE interrupt
      	 */
      	iwl_enable_interrupts
      					ISR re-enables CSR_INT_BIT_FH_TX only
      	/* start the firmware */
      	iwl_write32(trans, CSR_RESET, 0);
      
      BUG! ALIVE interrupt will never arrive since it has been
      masked by CPU1.
      
      In order to fix that, change the ISR to first check if
      STATUS_INT_ENABLED is set. If so, re-enable all the
      interrupts. If STATUS_INT_ENABLED is clear, then we can
      check what specific interrupt happened and re-enable only
      that specific interrupt (RFKILL or FH_TX).
      
      All the credit for the analysis goes to Kirtika who did the
      actual debugging work.
      
      Cc: <stable@vger.kernel.org> [4.5+]
      Fixes: a6bd005f ("iwlwifi: pcie: fix RF-Kill vs. firmware load race")
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      f16c3ebf
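
      A userspace sketch of the decision logic described above. The helper
      names and bit positions are illustrative placeholders (only the
      CSR_INT_BIT_FH_TX / RF_KILL names follow the commit text): if
      interrupts are globally enabled again, restore the full mask;
      otherwise re-enable only the source that fired.
      ----
      #include <stdbool.h>
      #include <stdint.h>

      #define CSR_INT_BIT_FH_TX    (1u << 27)   /* illustrative bit position */
      #define CSR_INT_BIT_RF_KILL  (1u << 7)    /* illustrative bit position */

      struct isr_state {
          bool int_enabled;      /* mirrors STATUS_INT_ENABLED */
          uint32_t int_mask;     /* currently enabled interrupt causes */
      };

      static void enable_all(struct isr_state *s)
      {
          s->int_mask = 0xffffffffu;
      }

      static void handle_cause(struct isr_state *s, uint32_t cause)
      {
          if (s->int_enabled) {
              /* The loading thread already asked for everything back. */
              enable_all(s);
          } else if (cause & CSR_INT_BIT_FH_TX) {
              s->int_mask |= CSR_INT_BIT_FH_TX;     /* re-enable FH_TX only */
          } else if (cause & CSR_INT_BIT_RF_KILL) {
              s->int_mask |= CSR_INT_BIT_RF_KILL;   /* re-enable RF_KILL only */
          }
      }

      int main(void)
      {
          struct isr_state s = { .int_enabled = false, .int_mask = 0 };

          handle_cause(&s, CSR_INT_BIT_FH_TX);   /* only FH_TX comes back */
          s.int_enabled = true;
          handle_cause(&s, CSR_INT_BIT_FH_TX);   /* now everything comes back */
          return s.int_mask == 0xffffffffu ? 0 : 1;
      }
      ----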
    • iwlwifi: pcie: don't use vid 0 · e25d65f2
      Authored by Sara Sharon
      In cases of hardware or DMA error, the vid read from
      a zeroed location will be 0, and we will access the rxb
      at index 0 in the global table, while it may be NULL or
      owned by hardware.
      Invalidate vid 0 in order to detect the situation and
      bail out.
      Signed-off-by: Sara Sharon <sara.sharon@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      e25d65f2
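
      A small userspace sketch of the convention, with made-up names: store
      table-index + 1 as the vid, so a zeroed descriptor produced by a HW
      or DMA error can never look like a valid entry.
      ----
      #include <stdio.h>

      #define TABLE_SIZE 512

      static void *global_table[TABLE_SIZE];

      /* When handing a buffer to HW, encode its slot as index + 1. */
      static unsigned int make_vid(unsigned int index)
      {
          return index + 1;
      }

      /* When HW reports a vid, 0 means the descriptor was never written. */
      static void *lookup_rxb(unsigned int vid)
      {
          if (!vid || vid > TABLE_SIZE) {
              fprintf(stderr, "invalid vid %u, bailing out\n", vid);
              return NULL;
          }
          return global_table[vid - 1];
      }

      int main(void)
      {
          static int dummy;

          global_table[0] = &dummy;
          return (lookup_rxb(make_vid(0)) == &dummy && !lookup_rxb(0)) ? 0 : 1;
      }
      ----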