1. 21 Jun 2022, 1 commit
  2. 15 Jun 2022, 2 commits
  3. 14 Jun 2022, 1 commit
  4. 02 Jun 2022, 5 commits
  5. 24 May 2022, 1 commit
  6. 23 May 2022, 1 commit
  7. 17 May 2022, 1 commit
  8. 13 May 2022, 1 commit
  9. 07 May 2022, 2 commits
  10. 29 Apr 2022, 1 commit
  11. 20 Apr 2022, 2 commits
  12. 19 Apr 2022, 1 commit
  13. 16 Apr 2022, 2 commits
  14. 15 Apr 2022, 1 commit
  15. 14 Apr 2022, 1 commit
  16. 12 Apr 2022, 1 commit
    • Revert "module, async: async_synchronize_full() on module init iff async is used" · dc8da50c
      Igor Pylypiv committed
      stable inclusion
      from linux-4.19.231
      commit a0c66ac8b72f816d5631fde0ca0b39af602dce48
      
      --------------------------------
      
      [ Upstream commit 67d6212a ]
      
      This reverts commit 774a1221.
      
      We need to finish all async code before the module init sequence is
      done.  In the reverted commit the PF_USED_ASYNC flag was added to mark a
      thread that called async_schedule().  Then the PF_USED_ASYNC flag was
      used to determine whether or not async_synchronize_full() needs to be
      invoked.  This works when the modprobe thread is calling
      async_schedule(), but it does not work if the module dispatches init
      code to a worker thread which then calls async_schedule().
      
      For example, PCI driver probing is invoked from a worker thread based on
      the node where the device is attached:
      
      	if (cpu < nr_cpu_ids)
      		error = work_on_cpu(cpu, local_pci_probe, &ddi);
      	else
      		error = local_pci_probe(&ddi);
      
      We end up in a situation where a worker thread gets the PF_USED_ASYNC
      flag set instead of the modprobe thread.  As a result,
      async_synchronize_full() is not invoked and modprobe completes without
      waiting for the async code to finish.
      
      The issue was discovered while loading the pm80xx driver:
      (scsi_mod.scan=async)
      
      modprobe pm80xx                      worker
      ...
        do_init_module()
        ...
          pci_call_probe()
            work_on_cpu(local_pci_probe)
                                           local_pci_probe()
                                             pm8001_pci_probe()
                                               scsi_scan_host()
                                                 async_schedule()
                                                 worker->flags |= PF_USED_ASYNC;
                                           ...
            < return from worker >
        ...
        if (current->flags & PF_USED_ASYNC) <--- false
        	async_synchronize_full();
      
      Commit 21c3c5d2 ("block: don't request module during elevator init")
      fixed the deadlock issue which the reverted commit 774a1221
      ("module, async: async_synchronize_full() on module init iff async is
      used") tried to fix.
      
      Since commit 0fdff3ec ("async, kmod: warn on synchronous
      request_module() from async workers") synchronous module loading from
      async is not allowed.
      
      Given that the original deadlock issue is fixed and it is no longer
      allowed to call synchronous request_module() from async, we can remove
      the PF_USED_ASYNC flag to make module init consistently invoke
      async_synchronize_full() unless async module probe is requested.
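      As a minimal sketch of the resulting behaviour (illustrative only, not
      the literal diff of this backport; it assumes the upstream
      do_init_module() structure, where mod->async_probe_requested is set when
      async probing is requested for the module):
      
      	/* do_init_module(): always wait for outstanding async work before
      	 * module init completes, no matter which thread called
      	 * async_schedule(), unless async probing was explicitly requested. */
      	if (!mod->async_probe_requested)
      		async_synchronize_full();
      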
      Signed-off-by: Igor Pylypiv <ipylypiv@google.com>
      Reviewed-by: Changyuan Lyu <changyuanl@google.com>
      Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
      Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
  17. 02 Apr 2022, 2 commits
    • Reinstate some of "swiotlb: rework "fix info leak with DMA_FROM_DEVICE"" · 3e109690
      Linus Torvalds committed
      mainline inclusion
      from mainline-v5.18-rc1
      commit 901c7280
      category: bugfix
      bugzilla: 186478, https://gitee.com/openeuler/kernel/issues/I4Z86P
      CVE: CVE-2022-0854
      
      --------------------------------
      
      Halil Pasic points out [1] that rather than the full revert of that
      commit (the revert in bddac7c1), a partial revert that only reverts the
      problematic case but still keeps some of the cleanups is probably
      better.
      
      That partial revert [2] had already been verified by Oleksandr
      Natalenko to also fix the issue; I had just missed that in the long
      discussion.
      
      So let's reinstate the cleanups from commit aa6f8dcb ("swiotlb:
      rework "fix info leak with DMA_FROM_DEVICE""), and effectively only
      revert the part that caused problems.
      
      Link: https://lore.kernel.org/all/20220328013731.017ae3e3.pasic@linux.ibm.com/ [1]
      Link: https://lore.kernel.org/all/20220324055732.GB12078@lst.de/ [2]
      Link: https://lore.kernel.org/all/4386660.LvFx2qVVIh@natalenko.name/ [3]
      Suggested-by: Halil Pasic <pasic@linux.ibm.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Conflicts:
      	Documentation/core-api/dma-attributes.rst
      	include/linux/dma-mapping.h
      	kernel/dma/swiotlb.c
      Signed-off-by: Liu Shixin <liushixin2@huawei.com>
      Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
    • Revert "swiotlb: rework "fix info leak with DMA_FROM_DEVICE"" · a5e62d73
      Linus Torvalds committed
      mainline inclusion
      from mainline-v5.18-rc1
      commit bddac7c1
      category: bugfix
      bugzilla: 186478, https://gitee.com/openeuler/kernel/issues/I4Z86P
      CVE: CVE-2022-0854
      
      --------------------------------
      
      This reverts commit aa6f8dcb.
      
      It turns out this breaks at least the ath9k wireless driver, and
      possibly others.
      
      What the ath9k driver does on packet receive is to set up the DMA
      transfer with:
      
        int ath_rx_init(..)
        ..
                      bf->bf_buf_addr = dma_map_single(sc->dev, skb->data,
                                                       common->rx_bufsize,
                                                       DMA_FROM_DEVICE);
      
      and then the receive logic (through ath_rx_tasklet()) will fetch
      incoming packets
      
        static bool ath_edma_get_buffers(..)
        ..
              dma_sync_single_for_cpu(sc->dev, bf->bf_buf_addr,
                                      common->rx_bufsize, DMA_FROM_DEVICE);
      
              ret = ath9k_hw_process_rxdesc_edma(ah, rs, skb->data);
              if (ret == -EINPROGRESS) {
                      /*let device gain the buffer again*/
                      dma_sync_single_for_device(sc->dev, bf->bf_buf_addr,
                                      common->rx_bufsize, DMA_FROM_DEVICE);
                      return false;
              }
      
      and it's worth noting how that first DMA sync:
      
          dma_sync_single_for_cpu(..DMA_FROM_DEVICE);
      
      is there to make sure the CPU can read the DMA buffer (possibly by
      copying it from the bounce buffer area, or by doing some cache flush).
      The iommu correctly turns that into a "copy from bounce buffer" so that
      the driver can look at the state of the packets.
      
      In the meantime, the device may continue to write to the DMA buffer, but
      we at least have a snapshot of the state due to that first DMA sync.
      
      But that _second_ DMA sync:
      
          dma_sync_single_for_device(..DMA_FROM_DEVICE);
      
      is telling the DMA mapping that the CPU wasn't interested in the area
      because the packet wasn't there.  In the case of a DMA bounce buffer,
      that is a no-op.
      
      Note how it's not a sync for the CPU (the "for_device()" part), and it's
      not a sync for data written by the CPU (the "DMA_FROM_DEVICE" part).
      
      Or rather, it _should_ be a no-op.  That's what commit aa6f8dcb
      broke: it made the code bounce the buffer unconditionally, and changed
      the DMA_FROM_DEVICE to just unconditionally and illogically be
      DMA_TO_DEVICE.
      
      [ Side note: purely within the confines of the swiotlb driver it wasn't
        entirely illogical: The reason it did that odd DMA_FROM_DEVICE ->
        DMA_TO_DEVICE conversion thing is because inside the swiotlb driver,
        it uses just a swiotlb_bounce() helper that doesn't care about the
        whole distinction of who the sync is for - only which direction to
        bounce.
      
        So it took the "sync for device" to mean that the CPU must have been
        the one writing, and thought it meant DMA_TO_DEVICE. ]
      
      Also note how the commentary in that commit was wrong, probably due to
      that whole confusion, claiming that the commit makes the swiotlb code
      
                                        "bounce unconditionally (that is, also
          when dir == DMA_TO_DEVICE) in order to avoid synchronising back stale
          data from the swiotlb buffer"
      
      which is nonsensical for two reasons:
      
       - that "also when dir == DMA_TO_DEVICE" is nonsensical, as that was
         exactly when it always did - and should do - the bounce.
      
       - since this is a sync for the device (not for the CPU), we're clearly
         fundamentally not copying back stale data from the bounce buffers at
         all, because we'd be copying *to* the bounce buffers.
      
      So that commit was just very confused.  It confused the direction of the
      synchronization (to the device, not the cpu) with the direction of the
      DMA (from the device).
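      
      In short, for a DMA_FROM_DEVICE mapping the two syncs are meant to
      behave as sketched below (an illustrative fragment in the spirit of the
      ath9k excerpt above; dev, addr and size are placeholders, not driver
      code):
      
      	/* CPU wants to inspect what the device has written so far: on a
      	 * bounce-buffered mapping this copies swiotlb slot -> original
      	 * buffer (or does the needed cache maintenance). */
      	dma_sync_single_for_cpu(dev, addr, size, DMA_FROM_DEVICE);
      
      	/* Hand the buffer back to the device.  The CPU wrote nothing
      	 * (direction is DMA_FROM_DEVICE), so nothing should be copied into
      	 * the swiotlb slot; the reverted commit bounced here anyway, as
      	 * DMA_TO_DEVICE, overwriting data the device may still be writing. */
      	dma_sync_single_for_device(dev, addr, size, DMA_FROM_DEVICE);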
      Reported-and-bisected-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Reported-by: Olha Cherevyk <olha.cherevyk@gmail.com>
      Cc: Halil Pasic <pasic@linux.ibm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Kalle Valo <kvalo@kernel.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Toke Høiland-Jørgensen <toke@toke.dk>
      Cc: Maxime Bizon <mbizon@freebox.fr>
      Cc: Johannes Berg <johannes@sipsolutions.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Conflicts:
      	Documentation/core-api/dma-attributes.rst
      	include/linux/dma-mapping.h
      	kernel/dma/swiotlb.c
      Signed-off-by: Liu Shixin <liushixin2@huawei.com>
      Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
  18. 30 Mar 2022, 1 commit
  19. 23 Mar 2022, 2 commits
    • swiotlb: rework "fix info leak with DMA_FROM_DEVICE" · 3f80e186
      Halil Pasic committed
      mainline inclusion
      from mainline-v5.17-rc8
      commit aa6f8dcb
      category: bugfix
      bugzilla: 186478, https://gitee.com/openeuler/kernel/issues/I4Z86P
      CVE: CVE-2022-0854
      
      --------------------------------
      
      Unfortunately, we ended up merging an old version of the patch "fix info
      leak with DMA_FROM_DEVICE" instead of merging the latest one. Christoph
      (the swiotlb maintainer) asked me to create an incremental fix after I
      pointed out the mix-up and asked him for guidance. So here we go.
      
      The main differences between what we got and what was agreed are:
      * swiotlb_sync_single_for_device is also required to do an extra bounce
      * We decided not to introduce DMA_ATTR_OVERWRITE until we have exploiters
      * The implementation of DMA_ATTR_OVERWRITE is flawed: DMA_ATTR_OVERWRITE
        must take precedence over DMA_ATTR_SKIP_CPU_SYNC
      
      Thus this patch removes DMA_ATTR_OVERWRITE, and makes
      swiotlb_sync_single_for_device() bounce unconditionally (that is, also
      when dir == DMA_TO_DEVICE) in order to avoid synchronising back stale
      data from the swiotlb buffer.
      
      Let me note that if the size used with the dma_sync_* API is less than
      the size used with dma_[un]map_*, under certain circumstances we may
      still end up with swiotlb not being transparent. In that sense, this is
      not a perfect fix either.
      
      To make this bulletproof, we would have to bounce the entire
      mapping/bounce buffer. For that we would have to figure out the starting
      address, and the size of the mapping in
      swiotlb_sync_single_for_device(). While this does seem possible, there
      seems to be no firm consensus on how things are supposed to work.
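      
      The shape of the change, as a condensed sketch (signatures follow recent
      mainline swiotlb code and may differ in this 4.19 backport;
      swiotlb_bounce() stands in for the internal copy helper):
      
      	void swiotlb_sync_single_for_device(struct device *dev,
      					    phys_addr_t tlb_addr, size_t size,
      					    enum dma_data_direction dir)
      	{
      		/* Bounce unconditionally, i.e. also when dir == DMA_TO_DEVICE,
      		 * so stale swiotlb contents can never be synced back later. */
      		swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
      	}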
      Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
      Fixes: ddbd89de ("swiotlb: fix info leak with DMA_FROM_DEVICE")
      Cc: stable@vger.kernel.org
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Conflicts:
      	Documentation/core-api/dma-attributes.rst
      	include/linux/dma-mapping.h
      	kernel/dma/swiotlb.c
      Signed-off-by: Liu Shixin <liushixin2@huawei.com>
      Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
    • swiotlb: fix info leak with DMA_FROM_DEVICE · 04c20fc8
      Halil Pasic committed
      mainline inclusion
      from mainline-v5.17-rc6
      commit ddbd89de
      category: bugfix
      bugzilla: 186478, https://gitee.com/openeuler/kernel/issues/I4Z86P
      CVE: CVE-2022-0854
      
      --------------------------------
      
      The problem I'm addressing was discovered by the LTP test covering
      cve-2018-1000204.
      
      A short description of what happens follows:
      1) The test case issues a command code 00 (TEST UNIT READY) via the SG_IO
         interface with: dxfer_len == 524288, dxdfer_dir == SG_DXFER_FROM_DEV
         and a corresponding dxferp. The peculiar thing about this is that TUR
         is not reading from the device.
      2) In sg_start_req() the invocation of blk_rq_map_user() effectively
         bounces the user-space buffer, as if the device were to transfer into
         it. Since commit a45b599a ("scsi: sg: allocate with __GFP_ZERO in
         sg_build_indirect()") we make sure this first bounce buffer is
         allocated with GFP_ZERO.
      3) For the rest of the story we keep ignoring that we have a TUR, so the
         device won't touch the buffer we prepare as if we had a
         DMA_FROM_DEVICE type of situation. My setup uses a virtio-scsi device
         and the buffer allocated by SG is mapped by the function
         virtqueue_add_split() which uses DMA_FROM_DEVICE for the "in" sgs (here
         scatter-gather and not scsi generics). This mapping involves bouncing
         via the swiotlb (we need swiotlb to do virtio in protected guests like
         s390 Secure Execution, or AMD SEV).
      4) When the SCSI TUR is done, we first copy back the content of the second
         (that is swiotlb) bounce buffer (which most likely contains some
         previous IO data), to the first bounce buffer, which contains all
         zeros.  Then we copy back the content of the first bounce buffer to
         the user-space buffer.
      5) The test case detects that the buffer, which it zero-initialized,
         is not all zeros, and fails.
      
      One can argue that this is a swiotlb problem, because without swiotlb
      we leak all zeros, and the swiotlb should be transparent in a sense that
      it does not affect the outcome (if all other participants are well
      behaved).
      
      Copying the content of the original buffer into the swiotlb buffer is
      the only way I can think of to make swiotlb transparent in such
      scenarios. So let's do just that if in doubt, but allow the driver
      to tell us that the whole mapped buffer is going to be overwritten,
      in which case we can preserve the old behavior and avoid the performance
      impact of the extra bounce.
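      
      Sketched in code, the mapping-time rule described above looks roughly
      like this (a condensed illustration of the description, not the exact
      backported hunk; attrs carries the per-mapping DMA attributes and
      DMA_ATTR_OVERWRITE is the opt-out introduced here and removed again by
      the follow-up rework):
      
      	/* When bouncing a DMA_FROM_DEVICE mapping, pre-fill the swiotlb slot
      	 * with the original buffer so whatever is copied back later is never
      	 * stale swiotlb (i.e. kernel) memory.  A driver that promises to
      	 * overwrite the whole buffer can opt out of the extra bounce. */
      	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL ||
      	    (dir == DMA_FROM_DEVICE && !(attrs & DMA_ATTR_OVERWRITE)))
      		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);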
      Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Conflicts:
      	Documentation/core-api/dma-attributes.rst
      	include/linux/dma-mapping.h
      	kernel/dma/swiotlb.c
      Signed-off-by: Liu Shixin <liushixin2@huawei.com>
      Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
  20. 14 Mar 2022, 2 commits
  21. 09 Mar 2022, 2 commits
  22. 06 Mar 2022, 2 commits
  23. 04 Mar 2022, 2 commits
  24. 01 Mar 2022, 3 commits