1. 13 Nov, 2019 (15 commits)
  2. 10 Nov, 2019 (11 commits)
  3. 06 Nov, 2019 (14 commits)
    • powerpc/powernv: Fix CPU idle to be called with IRQs disabled · 8f560302
      Committed by Nicholas Piggin
      [ Upstream commit 7d6475051fb3d9339c5c760ed9883bc0a9048b21 ]
      
      Commit e78a7614f3876 ("idle: Prevent late-arriving interrupts from
      disrupting offline") changes arch_cpu_idle_dead to be called with
      interrupts disabled, which triggers the WARN in pnv_smp_cpu_kill_self.
      
      Fix this by fixing up irq_happened after hard disabling, rather than
      requiring there are no pending interrupts, similarly to what was done
      until commit 2525db04 ("powerpc/powernv: Simplify lazy IRQ handling
      in CPU offline").
      
      Fixes: e78a7614f3876 ("idle: Prevent late-arriving interrupts from disrupting offline")
      Reported-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      [mpe: Add unexpected_mask rather than checking for known bad values,
            change the WARN_ON() to a WARN_ON_ONCE()]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20191022115814.22456-1-npiggin@gmail.com
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • arm64: Ensure VM_WRITE|VM_SHARED ptes are clean by default · a8166916
      Committed by Catalin Marinas
      commit aa57157be69fb599bd4c38a4b75c5aad74a60ec0 upstream.
      
      Shared and writable mappings (__S.1.) should be clean (!dirty) initially
      and made dirty on a subsequent write either through the hardware DBM
      (dirty bit management) mechanism or through a write page fault. A clean
      pte for the arm64 kernel is one that has PTE_RDONLY set and PTE_DIRTY
      clear.
      
      The PAGE_SHARED{,_EXEC} attributes have PTE_WRITE set (PTE_DBM) and
      PTE_DIRTY clear. Prior to commit 73e86cb0 ("arm64: Move PTE_RDONLY
      bit handling out of set_pte_at()"), it was the responsibility of
      set_pte_at() to set the PTE_RDONLY bit and mark the pte clean if the
      software PTE_DIRTY bit was not set. However, the above commit removed
      the pte_sw_dirty() check and the subsequent setting of PTE_RDONLY in
      set_pte_at() while leaving the PAGE_SHARED{,_EXEC} definitions
      unchanged. The result is that shared+writable mappings are now dirty by
      default.
      
      Fix the above by explicitly setting PTE_RDONLY in PAGE_SHARED{,_EXEC}.
      In addition, remove the superfluous PTE_DIRTY bit from the kernel PROT_*
      attributes.
      
      Fixes: 73e86cb0 ("arm64: Move PTE_RDONLY bit handling out of set_pte_at()")
      Cc: <stable@vger.kernel.org> # 4.14.x-
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • s390/idle: fix cpu idle time calculation · 8dd60660
      Committed by Heiko Carstens
      commit 3d7efa4edd07be5c5c3ffa95ba63e97e070e1f3f upstream.
      
      The idle time reported in /proc/stat sometimes incorrectly contains
      huge values on s390. This is caused by a bug in arch_cpu_idle_time().
      
      The kernel tries to figure out when a different cpu entered idle by
      accessing its per-cpu data structure. There is an ordering problem: if
      the remote cpu has an idle_enter value which is not zero, and an
      idle_exit value which is zero, it is assumed it is idle since
      "now". The "now" timestamp however is taken before the idle_enter
      value is read.
      
      This in turn means that "now" can be smaller than idle_enter of the
      remote cpu. Unconditionally subtracting idle_enter from "now" can thus
      lead to a negative value (i.e. a huge unsigned value).
      
      Fix this by moving the get_tod_clock() invocation out of the loop
      (see the simplified sketch after this entry). While at it, also make
      the code a bit more readable.
      
      A similar bug also exists for show_idle_time(). Fix that as well.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
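      
      A minimal sketch of the ordering fix, in plain C with a hypothetical
      tod_clock() helper standing in for the s390 TOD clock and a simplified
      per-cpu structure; the real code additionally wraps the reads in a
      seqcount retry loop, which is omitted here.
      
      #include <stdint.h>
      #include <time.h>
      
      static uint64_t tod_clock(void)          /* hypothetical clock source */
      {
              struct timespec ts;
      
              clock_gettime(CLOCK_MONOTONIC, &ts);
              return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
      }
      
      struct idle_data {                       /* simplified per-cpu idle bookkeeping */
              uint64_t idle_enter;             /* 0 if the cpu never entered idle */
              uint64_t idle_exit;              /* 0 if the cpu is still idle */
      };
      
      static uint64_t cpu_idle_time(const struct idle_data *idle)
      {
              uint64_t idle_enter = idle->idle_enter;
              uint64_t idle_exit = idle->idle_exit;
              uint64_t now;
      
              /*
               * Take "now" only after idle_enter has been read.  Reading the
               * clock first (as the old code did) allows idle_enter to be
               * newer than "now", and "now - idle_enter" then underflows to
               * a huge unsigned value.
               */
              now = tod_clock();
      
              if (!idle_enter)
                      return 0;                /* cpu never entered idle */
              if (idle_exit)
                      return idle_exit - idle_enter;   /* completed idle period */
              return now > idle_enter ? now - idle_enter : 0;
      }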
    • s390/cmm: fix information leak in cmm_timeout_handler() · ced8cb02
      Committed by Yihui ZENG
      commit b8e51a6a9db94bc1fb18ae831b3dab106b5a4b5f upstream.
      
      The problem is that we were putting the NUL terminator too far:
      
      	buf[sizeof(buf) - 1] = '\0';
      
      If the user input isn't NUL terminated and the whole buffer hasn't been
      initialized, this leads to an info leak (a fuller sketch follows this
      entry).  The NUL terminator should be:
      
      	buf[len - 1] = '\0';
      Signed-off-by: Yihui Zeng <yzeng56@asu.edu>
      Cc: stable@vger.kernel.org
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      [heiko.carstens@de.ibm.com: keep semantics of how *lenp and *ppos are handled]
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
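      
      A minimal sketch of the pattern, using a hypothetical sysctl-style
      handler and buffer size; it is not the actual cmm_timeout_handler()
      code:
      
      #include <stdio.h>
      #include <string.h>
      #include <stddef.h>
      
      /* "ubuf"/"len" is caller-supplied input, not necessarily NUL terminated. */
      static int parse_timeout(const char *ubuf, size_t len, long *pages, long *seconds)
      {
              char buf[64];
      
              if (len == 0 || len > sizeof(buf))
                      return -1;
              memcpy(buf, ubuf, len);
      
              /*
               * Wrong:  buf[sizeof(buf) - 1] = '\0';
               * That leaves the uninitialized stack bytes between "len" and
               * the end of "buf" visible to the parser below (the info leak).
               * Right: terminate at the end of the copied user data.
               */
              buf[len - 1] = '\0';
      
              if (sscanf(buf, "%ld %ld", pages, seconds) != 2)
                      return -1;
              return 0;
      }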
    • ARM: 8914/1: NOMMU: Fix exc_ret for XIP · ce005e5d
      Committed by Vladimir Murzin
      [ Upstream commit 4c0742f65b4ee466546fd24b71b56516cacd4613 ]
      
      It was reported that 72cd4064fcca "NOMMU: Toggle only bits in
      EXC_RETURN we are really care of" breaks the NOMMU+XIP combination.
      It happens because the saved EXC_RETURN gets overwritten when the
      data section is relocated.
      
      The fix is to propagate EXC_RETURN via a register and let the
      relocation code commit that value into memory.
      
      Fixes: 72cd4064fcca ("ARM: 8830/1: NOMMU: Toggle only bits in EXC_RETURN we are really care of")
      Reported-by: afzal mohammed <afzal.mohd.ma@gmail.com>
      Tested-by: afzal mohammed <afzal.mohd.ma@gmail.com>
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • s390/uaccess: avoid (false positive) compiler warnings · 12e13266
      Committed by Christian Borntraeger
      [ Upstream commit 062795fcdcb2d22822fb42644b1d76a8ad8439b3 ]
      
      Depending on inlining decisions by the compiler, __get/put_user_fn
      might end up out of line. The compiler is then no longer able to tell
      that size can only be 1, 2, 4 or 8 (due to the check in __get/put_user),
      resulting in false positives like
      
      ./arch/s390/include/asm/uaccess.h: In function ‘__put_user_fn’:
      ./arch/s390/include/asm/uaccess.h:113:9: warning: ‘rc’ may be used uninitialized in this function [-Wmaybe-uninitialized]
        113 |  return rc;
            |         ^~
      ./arch/s390/include/asm/uaccess.h: In function ‘__get_user_fn’:
      ./arch/s390/include/asm/uaccess.h:143:9: warning: ‘rc’ may be used uninitialized in this function [-Wmaybe-uninitialized]
        143 |  return rc;
            |         ^~
      
      These functions are supposed to be always inlined. Mark them as such
      (a generic sketch of the pattern follows this entry).
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
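      
      A generic sketch of the pattern, with hypothetical names; it is not the
      actual arch/s390 uaccess code. When get_val() is not inlined, the
      compiler can no longer prove that one of the switch arms always runs,
      hence the bogus warning; forcing inlining removes that possibility.
      
      static inline __attribute__((__always_inline__))
      int get_val(void *to, const void *from, unsigned long size)
      {
              int rc;
      
              /* Callers guarantee size is 1, 2, 4 or 8, so one arm always sets rc. */
              switch (size) {
              case 1:
                      *(unsigned char *)to = *(const unsigned char *)from;
                      rc = 0;
                      break;
              case 2:
                      *(unsigned short *)to = *(const unsigned short *)from;
                      rc = 0;
                      break;
              case 4:
                      *(unsigned int *)to = *(const unsigned int *)from;
                      rc = 0;
                      break;
              case 8:
                      *(unsigned long long *)to = *(const unsigned long long *)from;
                      rc = 0;
                      break;
              }
              /* Out of line, -Wmaybe-uninitialized cannot see the callers' size check. */
              return rc;
      }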
    • MIPS: fw: sni: Fix out of bounds init of o32 stack · 5865397d
      Committed by Thomas Bogendoerfer
      [ Upstream commit efcb529694c3b707dc0471b312944337ba16e4dd ]
      
      Use ARRAY_SIZE to calculate the top of the o32 stack (see the sketch
      after this entry).
      Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: linux-mips@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Sasha Levin <sashal@kernel.org>
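      
      A generic sketch of the difference, with a hypothetical stack array;
      it is not the actual arch/mips/fw/sni code:
      
      #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
      
      static unsigned long o32_stack[512];     /* hypothetical size */
      
      /*
       * Wrong: &o32_stack[sizeof(o32_stack)] indexes in bytes, not elements,
       * and points far past the end of the array.
       * Right: ARRAY_SIZE() indexes in elements and yields the one-past-the-end
       * address, i.e. the initial (highest) stack pointer value.
       */
      static unsigned long *o32_stack_top = &o32_stack[ARRAY_SIZE(o32_stack)];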
    • MIPS: include: Mark __xchg as __always_inline · 317b6f68
      Committed by Thomas Bogendoerfer
      [ Upstream commit 46f1619500d022501a4f0389f9f4c349ab46bb86 ]
      
      Commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING
      forcibly") allows the compiler to uninline functions marked as
      'inline'. In the case of __xchg this would end up referencing the
      function __xchg_called_with_bad_pointer, which is an error case for
      catching bugs and will not happen for correct code if __xchg is
      inlined (a generic sketch of the pattern follows this entry).
      Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: linux-mips@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Sasha Levin <sashal@kernel.org>
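      
      A generic sketch of the bug-catching pattern, with hypothetical names;
      it is not the actual MIPS cmpxchg.h. The extern function is deliberately
      never defined: if a call to it survives to link time, which can only
      happen when the helper is not inlined or is used with an unsupported
      size, the build fails.
      
      /* Deliberately never defined: any surviving reference is a link error. */
      extern unsigned long __xchg_called_with_bad_pointer(void);
      
      static inline __attribute__((__always_inline__))
      unsigned long xchg_sketch(volatile void *ptr, unsigned long val, int size)
      {
              switch (size) {
              case 4:
                      return __atomic_exchange_n((volatile unsigned int *)ptr,
                                                 (unsigned int)val, __ATOMIC_SEQ_CST);
              case 8:
                      return __atomic_exchange_n((volatile unsigned long *)ptr,
                                                 val, __ATOMIC_SEQ_CST);
              default:
                      /*
                       * Always inlined with a compile-time constant size, this
                       * branch is eliminated for valid sizes and the undefined
                       * symbol is never referenced.
                       */
                      return __xchg_called_with_bad_pointer();
              }
      }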
    • perf/x86/amd: Change/fix NMI latency mitigation to use a timestamp · a1112c46
      Committed by Tom Lendacky
      [ Upstream commit df4d29732fdad43a51284f826bec3e6ded177540 ]
      
      It turns out that the NMI latency workaround from commit:
      
        6d3edaae16c6 ("x86/perf/amd: Resolve NMI latency issues for active PMCs")
      
      ends up being too conservative and results in the perf NMI handler claiming
      NMIs too easily on AMD hardware when the NMI watchdog is active.
      
      This has an impact, for example, on the hpwdt (HPE watchdog timer) module.
      This module can produce an NMI that is used to reset the system. It
      registers an NMI handler for the NMI_UNKNOWN type and relies on the fact
      that nothing has claimed an NMI so that its handler will be invoked when
      the watchdog device produces an NMI. After the referenced commit, the
      hpwdt module is unable to process its generated NMI if the NMI watchdog is
      active, because the current NMI latency mitigation results in the NMI
      being claimed by the perf NMI handler.
      
      Update the AMD perf NMI latency mitigation workaround to instead use a
      window of time (a generic sketch follows this entry). Whenever a PMC is
      handled in the perf NMI handler, set a timestamp which will act as a
      perf NMI window. Any NMIs arriving within that window will be claimed
      by perf. Anything outside that window will not be claimed by perf. The
      value for the NMI window is set to 100 msecs.
      This is a conservative value that easily covers any NMI latency in the
      hardware. While this still results in a window in which the hpwdt module
      will not receive its NMI, the window is now much, much smaller.
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jerry Hoemann <jerry.hoemann@hpe.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 6d3edaae16c6 ("x86/perf/amd: Resolve NMI latency issues for active PMCs")
      Link: https://lkml.kernel.org/r/Message-ID:
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
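      
      A generic sketch of the timestamp-window idea, with hypothetical names
      and a plain millisecond clock standing in for jiffies; the real handler
      keeps this state per cpu.
      
      #include <stdbool.h>
      #include <stdint.h>
      #include <time.h>
      
      #define NMI_WINDOW_MS 100ULL             /* window size from the commit message */
      
      static uint64_t nmi_window_end_ms;       /* per-cpu in the real code */
      
      static uint64_t now_ms(void)             /* hypothetical clock source */
      {
              struct timespec ts;
      
              clock_gettime(CLOCK_MONOTONIC, &ts);
              return (uint64_t)ts.tv_sec * 1000u + (uint64_t)ts.tv_nsec / 1000000u;
      }
      
      /* Called whenever the perf NMI handler actually handled a PMC overflow. */
      static void note_pmc_handled(void)
      {
              nmi_window_end_ms = now_ms() + NMI_WINDOW_MS;
      }
      
      /* Claim an otherwise-unhandled NMI only if it falls inside the window. */
      static bool claim_late_nmi(void)
      {
              return now_ms() < nmi_window_end_ms;
      }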
    • x86/cpu: Add Comet Lake to the Intel CPU models header · 58d33d4a
      Committed by Kan Liang
      [ Upstream commit 8d7c6ac3b2371eb1cbc9925a88f4d10efff374de ]
      
      Comet Lake is the new 10th Gen Intel processor. Add two new CPU model
      numbers to the Intel family list.
      
      The CPU model numbers are not published in the SDM yet but they come
      from an authoritative internal source.
      
       [ bp: Touch up commit message. ]
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Cc: ak@linux.intel.com
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/1570549810-25049-2-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • arm64: armv8_deprecated: Checking return value for memory allocation · 6258745b
      Committed by Yunfeng Ye
      [ Upstream commit 3e7c93bd04edfb0cae7dad1215544c9350254b8f ]
      
      There is no return value checking when using kzalloc() and kcalloc()
      for memory allocation, so add it (a generic sketch of the pattern
      follows this entry).
      Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
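      
      A generic sketch of the missing-check pattern in kernel context, using
      a hypothetical helper and structure; it is not the actual
      armv8_deprecated.c code:
      
      #include <linux/slab.h>
      
      struct hook_table {
              int nr;
              void **hooks;
      };
      
      static struct hook_table *alloc_hook_table(int nr)
      {
              struct hook_table *t;
      
              t = kzalloc(sizeof(*t), GFP_KERNEL);
              if (!t)                          /* previously missing check */
                      return NULL;
      
              t->hooks = kcalloc(nr, sizeof(*t->hooks), GFP_KERNEL);
              if (!t->hooks) {                 /* previously missing check */
                      kfree(t);
                      return NULL;
              }
              t->nr = nr;
              return t;
      }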
    • x86/xen: Return from panic notifier · af140367
      Committed by Boris Ostrovsky
      [ Upstream commit c6875f3aacf2a5a913205accddabf0bfb75cac76 ]
      
      Currently, execution of panic() continues until Xen's panic notifier
      (xen_panic_event()) is called, at which point we make a hypercall that
      never returns.
      
      This means that any notifier that is supposed to be called later, as
      well as a significant part of the panic() code (such as pstore writes
      from kmsg_dump()), is never executed.
      
      There is no reason for xen_panic_event() to be this last point in
      execution since panic()'s emergency_restart() will call into
      xen_emergency_restart() from where we can perform our hypercall.
      
      Nevertheless, we will provide a xen_legacy_crash boot option that will
      preserve the original behavior during crash. This option could be used,
      for example, if running a kernel dumper (which happens after panic
      notifiers) is undesirable.
      Reported-by: James Dingwall <james@dingwall.me.uk>
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • MIPS: include: Mark __cmpxchg as __always_inline · 01691986
      Committed by Thomas Bogendoerfer
      [ Upstream commit 88356d09904bc606182c625575237269aeece22e ]
      
      Commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING
      forcibly") allows the compiler to uninline functions marked as
      'inline'. In the case of cmpxchg this would end up referencing the
      function __cmpxchg_called_with_bad_pointer, which is an error case
      for catching bugs and will not happen for correct code if __cmpxchg
      is inlined.
      Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
      [paul.burton@mips.com: s/__cmpxchd/__cmpxchg in subject]
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: linux-mips@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • efi/x86: Do not clean dummy variable in kexec path · 9b7591cf
      Committed by Dave Young
      [ Upstream commit 2ecb7402cfc7f22764e7bbc80790e66eadb20560 ]
      
      kexec reboot fails randomly in a UEFI based KVM guest.  The firmware
      just resets while calling efi_delete_dummy_variable().  Unfortunately
      I don't know how to debug the firmware; it is possibly a problem on
      real hardware as well, although nobody has reproduced it.
      
      The intention of efi_delete_dummy_variable() is to trigger garbage
      collection when entering virtual mode.  But SetVirtualAddressMap() can
      only run once for each physical reboot, thus kexec_enter_virtual_mode()
      is not necessarily a good place to clean up a dummy object.
      
      Drop the efi_delete_dummy_variable() call so that kexec reboot can work.
      Signed-off-by: Dave Young <dyoung@redhat.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Matthew Garrett <mjg59@google.com>
      Cc: Ben Dooks <ben.dooks@codethink.co.uk>
      Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
      Cc: Jerry Snitselaar <jsnitsel@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Lukas Wunner <lukas@wunner.de>
      Cc: Lyude Paul <lyude@redhat.com>
      Cc: Octavian Purdila <octavian.purdila@intel.com>
      Cc: Peter Jones <pjones@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott Talbert <swt@techie.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
      Cc: linux-integrity@vger.kernel.org
      Link: https://lkml.kernel.org/r/20191002165904.8819-8-ard.biesheuvel@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>