1. 07 Jan 2021, 2 commits
  2. 10 Dec 2020, 1 commit
  3. 04 Dec 2020, 1 commit
  4. 01 Dec 2020, 1 commit
  5. 25 Nov 2020, 1 commit
  6. 19 Nov 2020, 2 commits
    • powerpc/64s: flush L1D after user accesses · 9a32a7e7
      Authored by Nicholas Piggin
      IBM Power9 processors can speculatively operate on data in the L1 cache
      before it has been completely validated, via a way-prediction mechanism. It
      is not possible for an attacker to determine the contents of impermissible
      memory using this method, since these systems implement a combination of
      hardware and software security measures to prevent scenarios where
      protected data could be leaked.
      
      However these measures don't address the scenario where an attacker induces
      the operating system to speculatively execute instructions using data that
      the attacker controls. This can be used for example to speculatively bypass
      "kernel user access prevention" techniques, as discovered by Anthony
      Steinhauser of Google's Safeside Project. This is not an attack by itself,
      but there is a possibility it could be used in conjunction with
      side-channels or other weaknesses in the privileged code to construct an
      attack.
      
      This issue can be mitigated by flushing the L1 cache between privilege
      boundaries of concern. This patch flushes the L1 cache after user accesses.
      
      This is part of the fix for CVE-2020-4788.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64s: flush L1D on kernel entry · f7964378
      Authored by Nicholas Piggin
      IBM Power9 processors can speculatively operate on data in the L1 cache
      before it has been completely validated, via a way-prediction mechanism. It
      is not possible for an attacker to determine the contents of impermissible
      memory using this method, since these systems implement a combination of
      hardware and software security measures to prevent scenarios where
      protected data could be leaked.
      
      However these measures don't address the scenario where an attacker induces
      the operating system to speculatively execute instructions using data that
      the attacker controls. This can be used for example to speculatively bypass
      "kernel user access prevention" techniques, as discovered by Anthony
      Steinhauser of Google's Safeside Project. This is not an attack by itself,
      but there is a possibility it could be used in conjunction with
      side-channels or other weaknesses in the privileged code to construct an
      attack.
      
      This issue can be mitigated by flushing the L1 cache between privilege
      boundaries of concern. This patch flushes the L1 cache on kernel entry.
      
      This is part of the fix for CVE-2020-4788.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
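A note on opting out of these two mitigations: the switch names below are an assumption based on kernel-parameters.txt (they are not stated in the commit messages above) and should be verified against the documentation of the running kernel.

```shell
# Assumed opt-out switches for the CVE-2020-4788 L1D flushes on powerpc/64s
# (names assumed from kernel-parameters.txt -- verify before use):
#   no_entry_flush    : do not flush the L1D on kernel entry
#   no_uaccess_flush  : do not flush the L1D after user accesses
# Example kernel command-line fragment (the default keeps both flushes on):
GRUB_CMDLINE_LINUX="console=hvc0 no_entry_flush no_uaccess_flush"
```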
  7. 17 Nov 2020, 1 commit
  8. 31 Oct 2020, 1 commit
  9. 23 Oct 2020, 1 commit
  10. 20 Oct 2020, 1 commit
    • xen/events: defer eoi in case of excessive number of events · e99502f7
      Authored by Juergen Gross
      If rogue guests send events at a high frequency, it might happen that
      xen_evtchn_do_upcall() never stops processing events in dom0. As this is
      done in IRQ handling, a crash might be the result.
      
      To avoid that, delay further inter-domain events after some time in
      xen_evtchn_do_upcall() by forcing EOI processing into a worker on the
      same CPU, thus inhibiting new events from coming in.
      
      The time after which EOI processing is to be delayed is configurable via
      a new module parameter "event_loop_timeout", which specifies the maximum
      event loop time in jiffies (default: 2; the value was chosen after tests
      showed that 2 was the lowest value causing only a slight drop of dom0
      network throughput while multiple guests performed an event storm).
      
      How long EOI processing will be delayed can be specified via another
      parameter, "event_eoi_delay" (again in jiffies, default 10; again the
      value was chosen after testing with different delay values).
      
      This is part of XSA-332.
      
      Cc: stable@vger.kernel.org
      Reported-by: Julien Grall <julien@xen.org>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
      Reviewed-by: Wei Liu <wl@xen.org>
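A sketch of how the two knobs named in the commit would typically be set; the "xen." prefix and the sysfs path are assumptions based on standard module-parameter conventions, not stated in the commit message.

```shell
# Boot-time settings (prefix "xen." assumed from kernel-parameters.txt):
#   xen.event_loop_timeout=2   # max event-loop time in jiffies (default 2)
#   xen.event_eoi_delay=10     # EOI delay in jiffies (default 10)
# Runtime adjustment, assuming the usual module-parameter sysfs path:
echo 4 > /sys/module/xen/parameters/event_loop_timeout
```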
  11. 17 Oct 2020, 1 commit
  12. 06 Oct 2020, 1 commit
  13. 25 Sep 2020, 9 commits
  14. 17 Sep 2020, 1 commit
  15. 11 Sep 2020, 1 commit
  16. 01 Sep 2020, 1 commit
    • dma-contiguous: provide the ability to reserve per-numa CMA · b7176c26
      Authored by Barry Song
      Right now, drivers like the ARM SMMU use dma_alloc_coherent() to get
      coherent DMA buffers for their command queues and page tables. As there
      is only one default CMA in the whole system, SMMUs on nodes other than
      node 0 get remote memory. This leads to significant latency.
      
      This patch provides per-NUMA CMA so that drivers like the SMMU can get
      local memory. Tests show that localizing CMA decreases dma_unmap latency
      considerably. For instance, before this patch, the SMMU on node 2 has to
      wait more than 560ns for the completion of CMD_SYNC in an empty command
      queue; with this patch, it needs only 240ns.
      
      A positive side effect of this patch is improving performance even
      further for those users who care about performance more than DMA
      security and use iommu.passthrough=1 to skip the IOMMU. With local CMA,
      all drivers can get local coherent DMA buffers.
      
      Also, this patch changes the default CONFIG_CMA_AREAS to 19 for NUMA,
      as 1+CONFIG_CMA_AREAS should be quite enough for most servers on the
      market, even if they enable both hugetlb_cma and pernuma_cma:
      2 NUMA nodes: 2 (hugetlb) + 2 (pernuma) + 1 (default global CMA) = 5
      4 NUMA nodes: 4 (hugetlb) + 4 (pernuma) + 1 (default global CMA) = 9
      8 NUMA nodes: 8 (hugetlb) + 8 (pernuma) + 1 (default global CMA) = 17
      Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
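The area counts above follow a simple 2*nodes + 1 pattern (one hugetlb_cma area and one pernuma_cma area per node, plus the single default global area); a quick sketch reproducing the commit's arithmetic:

```shell
# One hugetlb_cma area plus one pernuma_cma area per node, plus the single
# default global CMA area -> 2*nodes + 1 areas in total.
for nodes in 2 4 8; do
  echo "$nodes nodes: $(( 2 * nodes + 1 )) CMA areas"
done
# Prints:
#   2 nodes: 5 CMA areas
#   4 nodes: 9 CMA areas
#   8 nodes: 17 CMA areas
```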
  17. 28 Aug 2020, 1 commit
    • net: add option to not create fall-back tunnels in root-ns as well · 316cdaa1
      Authored by Mahesh Bandewar
      Commit 79134e6c ("net: do not create fallback tunnels for non-default
      namespaces") earlier added a sysctl to create fallback tunnels only in
      the root netns. This patch extends that behavior with an option to skip
      fallback tunnels in the root netns as well. Since the modules that
      create fallback tunnels can be built-in, and setting the sysctl value
      after boot is pointless in that case, a kernel command-line option is
      added to change the default. The default setting is preserved for
      backward compatibility. The kernel command-line option fb_tunnels=initns
      sets the sysctl value to 1 and creates fallback tunnels only in the init
      netns, while fb_tunnels=none sets the sysctl value to 2 and skips
      fallback tunnels in every netns.
      Signed-off-by: Mahesh Bandewar <maheshb@google.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Maciej Zenczykowski <maze@google.com>
      Cc: Jian Yang <jianyang@google.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
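As a sketch, the two settings would be applied on the kernel command line like this; the sysctl name in the comment is an assumption based on the earlier commit 79134e6c.

```shell
# fb_tunnels=initns -> sysctl value 1: fallback tunnels only in the init netns
# fb_tunnels=none   -> sysctl value 2: no fallback tunnels in any netns
# (assumed sysctl: net.core.fb_tunnels_only_for_init_net)
GRUB_CMDLINE_LINUX="... fb_tunnels=none"
```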
  18. 25 Aug 2020, 5 commits
    • rcutorture: Allow pointer leaks to test diagnostic code · d6855142
      Authored by Paul E. McKenney
      This commit adds an rcutorture.leakpointer module parameter that
      intentionally leaks an RCU-protected pointer out of the RCU read-side
      critical section and checks to see if the corresponding grace period
      has elapsed, emitting a WARN_ON_ONCE() if so.  This module parameter can
      be used to test facilities like CONFIG_RCU_STRICT_GRACE_PERIOD that end
      grace periods quickly.
      
      While in the area, also document rcutorture.irqreader, which was
      previously left out.
      
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
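A sketch of how the new parameter would be exercised; the boot-parameter and modprobe forms below follow standard module-parameter conventions and are assumptions, not taken from the commit message.

```shell
# Built-in rcutorture: enable the intentional leak on the kernel command line:
#   rcutorture.leakpointer=1
# Modular rcutorture: pass the parameter at load time instead:
modprobe rcutorture leakpointer=1
```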
    • rcu: Provide optional RCU-reader exit delay for strict GPs · 3d29aaf1
      Authored by Paul E. McKenney
      The goal of this series is to increase the probability of tools like
      KASAN detecting that an RCU-protected pointer was used outside of its
      RCU read-side critical section.  Thus far, the approach has been to make
      grace periods and callback processing happen faster.  Another approach
      is to delay the pointer leaker.  This commit therefore allows a delay
      to be applied to exit from RCU read-side critical sections.
      
      This slowdown is specified by a new rcutree.rcu_unlock_delay kernel boot
      parameter that specifies this delay in microseconds, defaulting to zero.
      
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
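A sketch of the new boot parameter in use; the 100-microsecond value is an arbitrary illustration (the default is 0, i.e. no delay).

```shell
# Delay exit from RCU read-side critical sections by 100 microseconds to
# widen the window in which tools like KASAN can catch a leaked pointer:
GRUB_CMDLINE_LINUX="... rcutree.rcu_unlock_delay=100"
```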
    • rcuperf: Change rcuperf to rcuscale · 4e88ec4a
      Authored by Paul E. McKenney
      This commit further avoids conflation of rcuperf with the kernel's perf
      feature by renaming kernel/rcu/rcuperf.c to kernel/rcu/rcuscale.c, and
      also by similarly renaming the functions and variables inside this file.
      This has the side effect of changing the names of the kernel boot
      parameters, so kernel-parameters.txt and ver_functions.sh are also
      updated.  The rcutorture --torture type was also updated from rcuperf
      to rcuscale.
      
      [ paulmck: Fix bugs located by Stephen Rothwell. ]
      Reported-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • scftorture: Add smp_call_function() torture test · e9d338a0
      Authored by Paul E. McKenney
      This commit adds an smp_call_function() torture test that repeatedly
      invokes this function and complains if things go badly awry.
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • lib: Add backtrace_idle parameter to force backtrace of idle CPUs · 160c7ba3
      Authored by Paul E. McKenney
      Currently, nmi_cpu_backtrace() declines to produce backtraces for
      idle CPUs.  This is a good choice in the common case in which problems are
      caused only by non-idle CPUs.  However, there are occasionally situations
      in which idle CPUs are helping to cause problems.  This commit therefore
      adds an nmi_backtrace.backtrace_idle kernel boot parameter that causes
      nmi_cpu_backtrace() to dump stacks even of idle CPUs.
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: <linux-doc@vger.kernel.org>
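A sketch of the new parameter on the kernel command line, using the name given in the commit message:

```shell
# Force nmi_cpu_backtrace() to dump stacks of idle CPUs as well
# (by default idle CPUs are skipped):
GRUB_CMDLINE_LINUX="... nmi_backtrace.backtrace_idle=1"
```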
  19. 20 Aug 2020, 1 commit
    • Documentation: efi: remove description of efi=old_map · fb1201ae
      Authored by Ard Biesheuvel
      The old EFI runtime region mapping logic that was kept around for some
      time has finally been removed entirely, along with the SGI UV1 support
      code that was its last remaining user. So remove any mention of the
      efi=old_map command line parameter from the docs.
      
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: linux-doc@vger.kernel.org
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  20. 12 Aug 2020, 1 commit
  21. 08 Aug 2020, 1 commit
  22. 29 Jul 2020, 1 commit
  23. 23 Jul 2020, 1 commit
    • debugfs: Add access restriction option · a24c6f7b
      Authored by Peter Enderborg
      Since debugfs includes sensitive information, it needs to be treated
      carefully. But it also provides many very useful debug functions for
      user space. With this option we can use the same configuration for
      systems that need debugfs and still have a way to turn it off. This
      gives extra protection against exposure on systems where user-space
      services with system access are attacked.
      
      It is controlled by a configurable default value that can be overridden
      with a kernel command-line parameter (debugfs=).
      
      It can be on or off, but also internally on yet not visible from user
      space. This no-mount mode does not register debugfs as a filesystem,
      but clients can still register their parts in the internal structures.
      This data can be read with a debugger or saved with a crash kernel.
      When it is off, clients get an EPERM error when accessing the functions
      for registering their components.
      Signed-off-by: Peter Enderborg <peter.enderborg@sony.com>
      Link: https://lore.kernel.org/r/20200716071511.26864-3-peter.enderborg@sony.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
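A sketch of the three access modes described above; the value spellings (on / no-mount / off) are an assumption based on kernel-parameters.txt, not taken from the commit message.

```shell
#   debugfs=on        # classic behavior: registered and mountable
#   debugfs=no-mount  # internally on, but not registered as a filesystem
#   debugfs=off       # registration APIs return -EPERM to clients
GRUB_CMDLINE_LINUX="... debugfs=no-mount"
```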
  24. 09 Jul 2020, 2 commits
    • xen: Mark "xen_nopvspin" parameter obsolete · 9a3c05e6
      Authored by Zhenzhong Duan
      Map "xen_nopvspin" to "nopvspin", and fix the stale description of
      "xen_nopvspin", since we use qspinlock now.
      Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • x86/kvm: Add "nopvspin" parameter to disable PV spinlocks · 05eee619
      Authored by Zhenzhong Duan
      There are cases where a guest tries to switch spinlocks to bare metal
      behavior (e.g. by setting "xen_nopvspin" on XEN platform and
      "hv_nopvspin" on HYPER_V).
      
      That feature is missing on KVM, so add a new parameter "nopvspin" to
      disable PV spinlocks for KVM guests.
      
      The new 'nopvspin' parameter will also replace the Xen- and Hyper-V-
      specific parameters in future patches.
      
      Define the variable nopvspin as global because it will be used in
      future patches, as above.
      Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
      Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krcmar <rkrcmar@redhat.com>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Wanpeng Li <wanpengli@tencent.com>
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
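A sketch of the new parameter for a KVM guest; unlike the hypervisor-specific xen_nopvspin/hv_nopvspin, the name is generic:

```shell
# Disable paravirtualized spinlocks in the guest and fall back to the
# bare-metal qspinlock behavior:
GRUB_CMDLINE_LINUX="... nopvspin"
```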
  25. 06 Jul 2020, 1 commit