1. 06 March 2020 (2 commits)
  2. 13 February 2020 (1 commit)
    • iwlwifi: mvm: Do not require PHY_SKU NVM section for 3168 devices · a9149d24
      Committed by Dan Moulding
      The logic for checking required NVM sections was recently fixed in
      commit b3f20e09 ("iwlwifi: mvm: fix NVM check for 3168
      devices"). However, with that fix in place, the else branch is now
      taken for 3168 devices, and that branch contains a mandatory check
      for the PHY_SKU section. This causes parsing to fail for 3168 devices.
      
      The PHY_SKU section is really only mandatory for the IWL_NVM_EXT
      layout (the phy_sku parameter of iwl_parse_nvm_data is only used when
      the NVM type is IWL_NVM_EXT). So this changes the PHY_SKU section
      check so that it's only mandatory for IWL_NVM_EXT.
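      The shape of the fix can be sketched as follows; the enum and helper
      names are illustrative assumptions, not the actual iwlwifi symbols:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simplification of the NVM section check. PHY_SKU is
 * only required for the extended NVM layout, since the parser only
 * consumes phy_sku when the NVM type is the extended one. */
enum nvm_type { NVM_TYPE_DEFAULT, NVM_TYPE_EXT };

static bool nvm_sections_valid(enum nvm_type type, bool has_phy_sku)
{
	if (type == NVM_TYPE_EXT && !has_phy_sku)
		return false;	/* PHY_SKU mandatory for ext layout only */
	return true;		/* otherwise PHY_SKU is optional */
}
```

      With this check, 3168 devices (which do not use the extended layout)
      no longer fail parsing when the PHY_SKU section is absent.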
      
      Fixes: b3f20e09 ("iwlwifi: mvm: fix NVM check for 3168 devices")
      Signed-off-by: Dan Moulding <dmoulding@me.com>
      Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
  3. 04 February 2020 (7 commits)
  4. 01 February 2020 (1 commit)
  5. 23 January 2020 (2 commits)
    • net: Fix packet reordering caused by GRO and listified RX cooperation · c8079432
      Committed by Maxim Mikityanskiy
      Commit 323ebb61 ("net: use listified RX for handling GRO_NORMAL
      skbs") introduces batching of GRO_NORMAL packets in napi_frags_finish,
      and commit 6570bc79 ("net: core: use listified Rx for GRO_NORMAL in
      napi_gro_receive()") adds the same to napi_skb_finish. However,
      dev_gro_receive (that is called just before napi_{frags,skb}_finish) can
      also pass skbs to the networking stack: e.g., when the GRO session is
      flushed, napi_gro_complete is called, which passes pp directly to
      netif_receive_skb_internal, skipping napi->rx_list. It means that the
      packet stored in pp will be handled by the stack earlier than the
      packets that arrived before, but are still waiting in napi->rx_list. It
      leads to TCP reorderings that can be observed in the TCPOFOQueue counter
      in netstat.
      
      This commit fixes the reordering issue by making napi_gro_complete also
      use napi->rx_list, so that all packets going through GRO will keep their
      order. In order to keep napi_gro_flush working properly, gro_normal_list
      calls are moved after the flush to clear napi->rx_list.
      
      iwlwifi calls napi_gro_flush directly and does the same thing that is
      done by gro_normal_list, so the same change is applied there:
      napi_gro_flush is moved to be before the flush of napi->rx_list.
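      The ordering property the fix restores can be modeled with a toy
      per-NAPI list; the structure and function names here are illustrative,
      not the kernel API:

```c
#include <assert.h>

/* Toy model of the fix: every packet, including one completed by
 * napi_gro_complete, is appended to the same per-NAPI rx_list, so
 * the stack always receives packets in arrival order. */
#define RX_LIST_MAX 8

struct rx_list {
	int pkts[RX_LIST_MAX];
	int count;
};

/* gro_normal_one equivalent: append to the per-NAPI list. Before the
 * fix, a completed GRO session bypassed this list and was delivered
 * immediately, overtaking packets still queued here. */
static void rx_list_add(struct rx_list *l, int pkt)
{
	l->pkts[l->count++] = pkt;
}

/* gro_normal_list equivalent: hand the whole list to the stack in
 * FIFO order and clear it. */
static int rx_list_flush(struct rx_list *l, int *out)
{
	int n = l->count;

	for (int i = 0; i < n; i++)
		out[i] = l->pkts[i];
	l->count = 0;
	return n;
}
```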
      
      A few other drivers also use napi_gro_flush (brocade/bna/bnad.c,
      cortina/gemini.c, hisilicon/hns3/hns3_enet.c). The first two also call
      napi_complete_done afterwards, which performs the gro_normal_list flush,
      so they are fine. The third calls napi_gro_receive right after
      napi_gro_flush, so it can end up with a non-empty napi->rx_list anyway.
      
      Fixes: 323ebb61 ("net: use listified RX for handling GRO_NORMAL skbs")
      Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
      Cc: Alexander Lobakin <alobakin@dlink.ru>
      Cc: Edward Cree <ecree@solarflare.com>
      Acked-by: Alexander Lobakin <alobakin@dlink.ru>
      Acked-by: Saeed Mahameed <saeedm@mellanox.com>
      Acked-by: Edward Cree <ecree@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • iwlwifi: mvm: don't send the IWL_MVM_RXQ_NSSN_SYNC notif to Rx queues · d829229e
      Committed by Emmanuel Grumbach
      The purpose of this was to keep all the queues updated with the
      Rx sequence numbers, because of unlikely yet possible situations
      in which a queue cannot tell whether a specific packet needs to
      be dropped or not.
      
      Unfortunately, it was reported that this caused issues in
      our DMA engine. We don't fully understand how this is related,
      but it is currently being debugged. For now, just don't send
      this notification to the Rx queues. This de facto reverts my
      commit 3c514bf8:
      
      iwlwifi: mvm: add a loose synchronization of the NSSN across Rx queues
      
      This issue was reported here:
      https://bugzilla.kernel.org/show_bug.cgi?id=204873
      https://bugzilla.kernel.org/show_bug.cgi?id=205001
      and possibly others.
      
      Fixes: 3c514bf8 ("iwlwifi: mvm: add a loose synchronization of the NSSN across Rx queues")
      CC: <stable@vger.kernel.org> # 5.3+
      Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
      Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
  6. 09 January 2020 (1 commit)
  7. 04 January 2020 (12 commits)
  8. 30 December 2019 (1 commit)
  9. 24 December 2019 (7 commits)
  10. 23 December 2019 (6 commits)
    • iwlwifi: remove CSR registers abstraction · 6dece0e9
      Committed by Luca Coelho
      We needed this abstraction for some CSR registers for
      IWL_DEVICE_22560, but that has been removed, so we don't need the
      abstraction anymore.  Remove it.
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
    • iwlwifi: remove some outdated iwl22000 configurations · b81b7bd0
      Committed by Luca Coelho
      A few configuration structures were either no longer referenced or
      assigned to device IDs that were no longer in use.  Remove them.
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
    • iwlwifi: pcie: validate queue ID before array deref/bit ops · 0e002708
      Committed by Johannes Berg
      Validate that the queue ID is in range before using it as an
      array index or passing it to test_bit(). The previous bug showed
      that this can in fact happen, and we were lucky to catch it
      there; had the bit been set, we would have used the value
      despite it being far out of range.
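      A minimal sketch of such a bounds check; MAX_TXQ and the helper
      name are assumptions for illustration, not the iwlwifi code:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative upper bound on valid TX queue IDs. */
#define MAX_TXQ 32

/* Reject out-of-range IDs before any array dereference or bit
 * operation on a queue bitmap. */
static bool txq_id_valid(unsigned int txq_id)
{
	return txq_id < MAX_TXQ;
}
```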
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
    • iwlwifi: pcie: use partial pages if applicable · cfdc20ef
      Committed by Johannes Berg
      If we have only 2k RBs like on the latest (AX210) hardware, then
      even on x86 where PAGE_SIZE is 4k we currently waste half of the
      memory.
      
      If this is the case, return partial pages from the allocator and
      track the offset in each RBD (to be able to find the data in them
      and remap them later).
      
      This might also address other platforms with larger PAGE_SIZE by
      putting more RBs into a single large page.
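      The allocator idea can be sketched like this; the structure and
      function names, and the fixed sizes, are illustrative assumptions:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: a 4 KiB page is handed out in 2 KiB slices, and each RBD
 * records its offset so the data can be located (and the shared
 * page dealt with) later. */
#define PAGE_SZ	4096u
#define RB_SZ	2048u

struct rbd {
	void *page;		/* backing page (possibly shared) */
	unsigned int offset;	/* where this RB's data starts */
};

struct page_alloc {
	void *page;
	unsigned int used;	/* bytes already handed out */
};

/* Hand out the next RB_SZ slice of the current page; returns the
 * offset used, or -1 if the page is exhausted. */
static int alloc_partial(struct page_alloc *pa, struct rbd *rbd)
{
	if (pa->used + RB_SZ > PAGE_SZ)
		return -1;
	rbd->page = pa->page;
	rbd->offset = pa->used;
	pa->used += RB_SZ;
	return (int)rbd->offset;
}
```

      On platforms with a larger PAGE_SIZE the same scheme simply yields
      more slices per page.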
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
    • iwlwifi: pcie: map only used part of RX buffers · 80084e35
      Committed by Johannes Berg
      We don't need to map *everything* in the RX buffers; we won't use
      that much, so map only the part we're going to use. This saves some
      IOMMU space (if applicable, and if it can deal with that) and also
      prepares a bit for mapping partial pages for 2k buffers later.
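      In essence, the mapped length becomes the RB size rather than the
      full page; the helper below is an illustrative assumption, not the
      iwlwifi API:

```c
#include <assert.h>

/* Sketch: the device never writes past the RB size, so DMA-mapping
 * more than that only wastes IOMMU address space. */
static unsigned int rx_map_len(unsigned int rb_size,
			       unsigned int page_size)
{
	return rb_size < page_size ? rb_size : page_size;
}
```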
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
    • iwlwifi: allocate more receive buffers for HE devices · c042f0c7
      Committed by Johannes Berg
      For HE-capable devices, we need to allocate more receive buffers, as
      there could be 256 frames aggregated into a single A-MPDU, and then
      they might contain A-MSDUs as well. Up to the 22000 family, the
      devices are able to put multiple frames into a single RB and the
      default RB size is 4k, but starting from the AX210 family this is
      no longer true. On the other hand, those newer devices use only 2k
      receive buffers (by default).
      
      Modify the code and configuration to allocate an appropriate number
      of RBs depending on the device capabilities:
      
       * 4096 for AX210 HE devices, which use 2k buffers by default,
       * 2048 for 22000 family devices which use 4k buffers by default,
       * 512 for existing 9000 family devices, which doesn't really
         change anything since that's the default before this patch,
       * 512 also for AX210/22000 family devices that don't do HE.
      
      Theoretically, for devices lower than AX210, we wouldn't have to
      allocate that many RBs if the RB size was manually increased, but
      to support that the code got more complex, and it didn't really
      seem necessary as that's a use case for monitor mode only, where
      hopefully the wasted memory isn't really much of a concern.
      
      Note that AX210 devices actually support bigger than 12-bit VID,
      which is required here as we want to allocate 4096 buffers plus
      some for quick recycling, so adjust the code for that as well.
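      The RB counts listed above map to device capabilities roughly as
      follows; the enum and function names are illustrative assumptions,
      not the actual iwlwifi config symbols:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical device families for illustration. */
enum family { FAM_9000, FAM_22000, FAM_AX210 };

/* Pick the number of RBs per the capability table in the commit:
 * HE on AX210 (2k buffers) needs the most, HE on 22000 (4k buffers)
 * half of that, and everything else keeps the old default of 512. */
static unsigned int num_rbds(enum family fam, bool he_capable)
{
	if (!he_capable)
		return 512;		/* non-HE: keep the old default */
	switch (fam) {
	case FAM_AX210:
		return 4096;		/* 2k buffers by default */
	case FAM_22000:
		return 2048;		/* 4k buffers by default */
	default:
		return 512;		/* 9000 family: unchanged */
	}
}
```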
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>