1. 23 Nov 2020, 1 commit
  2. 19 Nov 2020, 4 commits
    • powerpc/64s: rename pnv|pseries_setup_rfi_flush to _setup_security_mitigations · da631f7f
      By Daniel Axtens
      pseries|pnv_setup_rfi_flush already does the count cache flush setup, and
      we just added entry and uaccess flushes, so the name is no longer
      accurate. On both platforms we then also immediately set up the STF flush.
      
      Rename them to _setup_security_mitigations and fold the STF flush in.
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Only include kup-radix.h for 64-bit Book3S · 178d52c6
      By Michael Ellerman
      In kup.h we currently include kup-radix.h for all 64-bit builds, which
      covers both Book3S and Book3E. The latter doesn't make sense: Book3E
      never uses the Radix MMU.
      
      This has worked up until now, but almost by accident, and the recent
      uaccess flush changes introduced a build breakage on Book3E because of
      the bad structure of the code.
      
      So disentangle things so that we only use kup-radix.h for Book3S. This
      requires some more stubs in kup.h and fixing an include in
      syscall_64.c.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64s: flush L1D after user accesses · 9a32a7e7
      By Nicholas Piggin
      IBM Power9 processors can speculatively operate on data in the L1 cache
      before it has been completely validated, via a way-prediction mechanism. It
      is not possible for an attacker to determine the contents of impermissible
      memory using this method, since these systems implement a combination of
      hardware and software security measures to prevent scenarios where
      protected data could be leaked.
      
      However these measures don't address the scenario where an attacker induces
      the operating system to speculatively execute instructions using data that
      the attacker controls. This can be used for example to speculatively bypass
      "kernel user access prevention" techniques, as discovered by Anthony
      Steinhauser of Google's Safeside Project. This is not an attack by itself,
      but there is a possibility it could be used in conjunction with
      side-channels or other weaknesses in the privileged code to construct an
      attack.
      
      This issue can be mitigated by flushing the L1 cache between privilege
      boundaries of concern. This patch flushes the L1 cache after user accesses.
      
      This is part of the fix for CVE-2020-4788.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64s: flush L1D on kernel entry · f7964378
      By Nicholas Piggin
      IBM Power9 processors can speculatively operate on data in the L1 cache
      before it has been completely validated, via a way-prediction mechanism. It
      is not possible for an attacker to determine the contents of impermissible
      memory using this method, since these systems implement a combination of
      hardware and software security measures to prevent scenarios where
      protected data could be leaked.
      
      However these measures don't address the scenario where an attacker induces
      the operating system to speculatively execute instructions using data that
      the attacker controls. This can be used for example to speculatively bypass
      "kernel user access prevention" techniques, as discovered by Anthony
      Steinhauser of Google's Safeside Project. This is not an attack by itself,
      but there is a possibility it could be used in conjunction with
      side-channels or other weaknesses in the privileged code to construct an
      attack.
      
      This issue can be mitigated by flushing the L1 cache between privilege
      boundaries of concern. This patch flushes the L1 cache on kernel entry.
      
      This is part of the fix for CVE-2020-4788.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  3. 18 Nov 2020, 2 commits
  4. 17 Nov 2020, 4 commits
  5. 16 Nov 2020, 2 commits
    • xtensa: disable preemption around cache alias management calls · 3a860d16
      By Max Filippov
      Although cache alias management calls set up and tear down TLB entries,
      and fast_second_level_miss can restore a TLB entry should it be evicted,
      these calls absolutely cannot preempt each other, because they use the
      same TLBTEMP area for different purposes.
      Disable preemption around all cache alias management calls to enforce
      that.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
    • xtensa: fix TLBTEMP area placement · 481535c5
      By Max Filippov
      The fast_second_level_miss handler for the TLBTEMP area assumes that the
      page table directory entry for the TLBTEMP address range is 0. For that
      to hold, the TLBTEMP area must be aligned to a 4MB boundary and must not
      share its 4MB region with anything that may use a page table. This is
      not true currently: TLBTEMP shares space with the vmalloc area, which
      results in the following kinds of runtime errors when
      fast_second_level_miss loads the page table directory entry for the
      vmalloc space instead of fixing up the TLBTEMP area:
      
       Unable to handle kernel paging request at virtual address c7ff0e00
        pc = d0009275, ra = 90009478
       Oops: sig: 9 [#1] PREEMPT
       CPU: 1 PID: 61 Comm: kworker/u9:2 Not tainted 5.10.0-rc3-next-20201110-00007-g1fe4962fa983-dirty #58
       Workqueue: xprtiod xs_stream_data_receive_workfn
       a00: 90009478 d11e1dc0 c7ff0e00 00000020 c7ff0000 00000001 7f8b8107 00000000
       a08: 900c5992 d11e1d90 d0cc88b8 5506e97c 00000000 5506e97c d06c8074 d11e1d90
       pc: d0009275, ps: 00060310, depc: 00000014, excvaddr: c7ff0e00
       lbeg: d0009275, lend: d0009287 lcount: 00000003, sar: 00000010
       Call Trace:
         xs_stream_data_receive_workfn+0x43c/0x770
         process_one_work+0x1a1/0x324
         worker_thread+0x1cc/0x3c0
         kthread+0x10d/0x124
         ret_from_kernel_thread+0xc/0x18
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
  6. 15 Nov 2020, 1 commit
    • kvm: mmu: fix is_tdp_mmu_check when the TDP MMU is not in use · c887c9b9
      By Paolo Bonzini
      In some cases where shadow paging is in use, the root page will be
      either mmu->pae_root or vcpu->arch.mmu->lm_root. It will then not have
      an associated struct kvm_mmu_page, because it is allocated with
      alloc_page instead of kvm_mmu_alloc_page.
      
      Just return false quickly from is_tdp_mmu_root if the TDP MMU is
      not in use, which also includes the case where shadow paging is
      enabled.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  7. 13 Nov 2020, 13 commits
  8. 12 Nov 2020, 3 commits
  9. 11 Nov 2020, 3 commits
  10. 10 Nov 2020, 7 commits