1. 13 Feb 2019, 15 commits
• xtensa: xtfpga.dtsi: fix dtc warnings about SPI · 64257468
  Max Filippov committed
      [ Upstream commit f37598be4e3896359e87c824be57ddddc280cc3f ]
      
      Rename SPI controller node in the XTFPGA DTS to spi@...
      This fixes the following build warnings:
      
      arch/xtensa/boot/dts/kc705_nommu.dtb: Warning (spi_bus_bridge):
       /soc/spi-master@0d0a0000: node name for SPI buses should be 'spi'
      arch/xtensa/boot/dts/kc705_nommu.dtb: Warning (spi_bus_reg):
       Failed prerequisite 'spi_bus_bridge'
      arch/xtensa/boot/dts/lx200mx.dtb: Warning (spi_bus_bridge):
       /soc/spi-master@0d0a0000: node name for SPI buses should be 'spi'
      arch/xtensa/boot/dts/lx200mx.dtb: Warning (spi_bus_reg):
       Failed prerequisite 'spi_bus_bridge'
      arch/xtensa/boot/dts/kc705.dtb: Warning (spi_bus_bridge):
       /soc/spi-master@0d0a0000: node name for SPI buses should be 'spi'
      arch/xtensa/boot/dts/kc705.dtb: Warning (spi_bus_reg):
       Failed prerequisite 'spi_bus_bridge'
      arch/xtensa/boot/dts/ml605.dtb: Warning (spi_bus_bridge):
       /soc/spi-master@0d0a0000: node name for SPI buses should be 'spi'
      arch/xtensa/boot/dts/ml605.dtb: Warning (spi_bus_reg):
       Failed prerequisite 'spi_bus_bridge'
      arch/xtensa/boot/dts/lx60.dtb: Warning (spi_bus_bridge):
       /soc/spi-master@0d0a0000: node name for SPI buses should be 'spi'
      arch/xtensa/boot/dts/lx60.dtb: Warning (spi_bus_reg):
       Failed prerequisite 'spi_bus_bridge'
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• x86/fpu: Add might_fault() to user_insn() · c91ff9ab
  Sebastian Andrzej Siewior committed
      [ Upstream commit 6637401c35b2f327a35d27f44bda05e327f2f017 ]
      
Every user of user_insn() passes a user memory pointer to this macro.
      
      Add might_fault() to user_insn() so we can spot users which are using
      this macro in sections where page faulting is not allowed.
      
       [ bp: Space it out to make it more visible. ]
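A sketch of the macro with the hook added (reconstructed from the 4.19-era x86 code for illustration; details of the asm body may differ from the actual header):

	#define user_insn(insn, output, input...)			\
	({								\
		int err;						\
									\
		might_fault();						\
									\
		asm volatile(ASM_STAC "\n"				\
			     "1:" #insn "\n\t"				\
			     "2: " ASM_CLAC "\n"			\
			     ".section .fixup,\"ax\"\n"			\
			     "3:  movl $-1,%[err]\n"			\
			     "    jmp  2b\n"				\
			     ".previous\n"				\
			     _ASM_EXTABLE(1b, 3b)			\
			     : [err] "=r" (err), output			\
			     : "0"(0), input);				\
		err;							\
	})

With CONFIG_DEBUG_ATOMIC_SLEEP enabled, might_fault() then warns whenever the macro is used in a context where page faults are not allowed.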
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Rik van Riel <riel@surriel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: kvm ML <kvm@vger.kernel.org>
      Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20181128222035.2996-6-bigeasy@linutronix.de
Signed-off-by: Sasha Levin <sashal@kernel.org>
• ARM: dts: aspeed: add missing memory unit-address · 9d79635b
  Rob Herring committed
      [ Upstream commit 8ef86955fe59f7912a40d57ae4c6d511f0187b4d ]
      
      The base aspeed-g5.dtsi already defines a '/memory@80000000' node, so
'/memory' in the board files creates a duplicate node. We're probably
      getting lucky that the bootloader fixes up the memory node that the
      kernel ends up using. Add the unit-address so it's merged with the base
      node.
      
      Found with DT json-schema checks.
      
      Cc: Joel Stanley <joel@jms.id.au>
      Cc: Andrew Jeffery <andrew@aj.id.au>
      Cc: devicetree@vger.kernel.org
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-aspeed@lists.ozlabs.org
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• ARM: dts: mmp2: fix TWSI2 · 4f770b4b
  Lubomir Rintel committed
      [ Upstream commit 1147e05ac9fc2ef86a3691e7ca5c2db7602d81dd ]
      
      Marvell keeps their MMP2 datasheet secret, but there are good clues
that TWSI2 is not at 0xd4025000 on that platform, nor does it use
      IRQ 58. In fact, the IRQ 58 on MMP2 seems to be a signal processor:
      
         arch/arm/mach-mmp/irqs.h:#define IRQ_MMP2_MSP  58
      
I'm taking a somewhat educated guess that this is probably a copy & paste
      error from PXA168 or PXA910 and that the real controller in fact hides
      at address 0xd4031000 and uses an interrupt line multiplexed via IRQ 17.
      
      I'm also copying some properties from TWSI1 that were missing or
      incorrect.
      
Tested on an OLPC XO 1.75 machine, where the RTC is on TWSI2.
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Tested-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• arm64: ftrace: don't adjust the LR value · 14743f85
  Mark Rutland committed
      [ Upstream commit 6e803e2e6e367db9a0d6ecae1bd24bb5752011bd ]
      
      The core ftrace code requires that when it is handed the PC of an
      instrumented function, this PC is the address of the instrumented
      instruction. This is necessary so that the core ftrace code can identify
      the specific instrumentation site. Since the instrumented function will
      be a BL, the address of the instrumented function is LR - 4 at entry to
      the ftrace code.
      
      This fixup is applied in the mcount_get_pc and mcount_get_pc0 helpers,
      which acquire the PC of the instrumented function.
      
      The mcount_get_lr helper is used to acquire the LR of the instrumented
      function, whose value does not require this adjustment, and cannot be
      adjusted to anything meaningful. No adjustment of this value is made on
      other architectures, including arm. However, arm64 adjusts this value by
      4.
      
      This patch brings arm64 in line with other architectures and removes the
      adjustment of the LR value.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Torsten Duwe <duwe@suse.de>
      Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• s390/zcrypt: improve special ap message cmd handling · 87ae7932
  Harald Freudenberger committed
      [ Upstream commit be534791011100d204602e2e0496e9e6ce8edf63 ]
      
      There exist very few ap messages which need to have the 'special' flag
      enabled. This flag tells the firmware layer to do some pre- and maybe
      postprocessing. However, it may happen that this special flag is
      enabled but the firmware is unable to deal with this kind of message
and thus returns with reply code 0x41. For example, older firmware may
not know the newest messages triggered by the zcrypt device driver and
thus reject them with the named reply code. Unfortunately this
      reply code is not known to the zcrypt error routines and thus default
      behavior is to switch the ap queue offline.
      
This patch now makes the ap error routine aware of this reply code, so
userspace is informed about the bad processing result but the queue is
no longer switched to the offline state.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• arm64: io: Ensure value passed to __iormb() is held in a 64-bit register · ed0526b2
  Will Deacon committed
      [ Upstream commit 1b57ec8c75279b873639eb44a215479236f93481 ]
      
      As of commit 6460d3201471 ("arm64: io: Ensure calls to delay routines
      are ordered against prior readX()"), MMIO reads smaller than 64 bits
      fail to compile under clang because we end up mixing 32-bit and 64-bit
      register operands for the same data processing instruction:
      
      ./include/asm-generic/io.h:695:9: warning: value size does not match register size specified by the constraint and modifier [-Wasm-operand-widths]
              return readb(addr);
                     ^
      ./arch/arm64/include/asm/io.h:147:58: note: expanded from macro 'readb'
                                                                             ^
      ./include/asm-generic/io.h:695:9: note: use constraint modifier "w"
      ./arch/arm64/include/asm/io.h:147:50: note: expanded from macro 'readb'
                                                                     ^
      ./arch/arm64/include/asm/io.h:118:24: note: expanded from macro '__iormb'
              asm volatile("eor       %0, %1, %1\n"                           \
                                          ^
      
      Fix the build by casting the macro argument to 'unsigned long' when used
      as an input to the inline asm.
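A hedged sketch of the resulting macro (reconstructed for illustration; the actual arm64 header may differ in detail) where the only change is the cast on the asm input:

	#define __iormb(v)						\
	({								\
		unsigned long tmp;					\
									\
		rmb();							\
									\
		/*							\
		 * Cast the value to unsigned long so readb()/readw()/	\
		 * readl() results can feed the 64-bit ("x") register	\
		 * operand below without mixing operand widths.	\
		 */							\
		asm volatile("eor	%0, %1, %1\n"			\
			     "cbnz	%0, ."				\
			     : "=r" (tmp) : "r" ((unsigned long)(v))	\
			     : "memory");				\
	})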
Reported-by: Nick Desaulniers <nick.desaulniers@gmail.com>
Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• arm64: io: Ensure calls to delay routines are ordered against prior readX() · dd46de15
  Will Deacon committed
      [ Upstream commit 6460d32014717686d3b7963595950ba2c6d1bb5e ]
      
      A relatively standard idiom for ensuring that a pair of MMIO writes to a
      device arrive at that device with a specified minimum delay between them
      is as follows:
      
      	writel_relaxed(42, dev_base + CTL1);
      	readl(dev_base + CTL1);
      	udelay(10);
      	writel_relaxed(42, dev_base + CTL2);
      
      the intention being that the read-back from the device will push the
prior write to CTL1, and the udelay will hold up the write to CTL2 until
      at least 10us have elapsed.
      
      Unfortunately, on arm64 where the underlying delay loop is implemented
      as a read of the architected counter, the CPU does not guarantee
      ordering from the readl() to the delay loop and therefore the delay loop
      could in theory be speculated and not provide the desired interval
      between the two writes.
      
      Fix this in a similar manner to PowerPC by introducing a dummy control
      dependency on the output of readX() which, combined with the ISB in the
      read of the architected counter, guarantees that a subsequent delay loop
      can not be executed until the readX() has returned its result.
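A simplified sketch of how the read accessor ends up looking (names follow the arm64 header, but treat the exact shape as illustrative):

	#define readl(c)						\
	({								\
		u32 __v = readl_relaxed(c);				\
		/*							\
		 * __iormb(__v) builds a dummy control dependency on	\
		 * the value just read (eor + cbnz-to-self), so a	\
		 * following delay loop that reads the architected	\
		 * counter (which begins with an ISB) cannot start	\
		 * before the MMIO read has returned its data.		\
		 */							\
		__iormb(__v);						\
		__v;							\
	})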
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• powerpc/32: Add .data..Lubsan_data*/.data..Lubsan_type* sections explicitly · 0f9dff37
  Mathieu Malaterre committed
      [ Upstream commit beba24ac59133cb36ecd03f9af9ccb11971ee20e ]
      
      When both `CONFIG_LD_DEAD_CODE_DATA_ELIMINATION=y` and `CONFIG_UBSAN=y`
are set, the link step typically produces numerous warnings about orphan
sections:
      
  + powerpc-linux-gnu-ld -EB -m elf32ppc -Bstatic --orphan-handling=warn --build-id --gc-sections -X -o .tmp_vmlinux1 -T ./arch/powerpc/kernel/vmlinux.lds --whole-archive built-in.a --no-whole-archive --start-group lib/lib.a --end-group
        powerpc-linux-gnu-ld: warning: orphan section `.data..Lubsan_data393' from `init/main.o' being placed in section `.data..Lubsan_data393'.
        powerpc-linux-gnu-ld: warning: orphan section `.data..Lubsan_data394' from `init/main.o' being placed in section `.data..Lubsan_data394'.
        ...
        powerpc-linux-gnu-ld: warning: orphan section `.data..Lubsan_type11' from `init/main.o' being placed in section `.data..Lubsan_type11'.
        powerpc-linux-gnu-ld: warning: orphan section `.data..Lubsan_type12' from `init/main.o' being placed in section `.data..Lubsan_type12'.
        ...
      
This commit removes those warnings produced at W=1.
      
Link: https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg135407.html
Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• ARM: OMAP2+: hwmod: Fix some section annotations · 8218fcf4
  Nathan Chancellor committed
      [ Upstream commit c10b26abeb53cabc1e6271a167d3f3d396ce0218 ]
      
      When building the kernel with Clang, the following section mismatch
warnings appear:
      
      WARNING: vmlinux.o(.text+0x2d398): Section mismatch in reference from
      the function _setup() to the function .init.text:_setup_iclk_autoidle()
      The function _setup() references
      the function __init _setup_iclk_autoidle().
      This is often because _setup lacks a __init
      annotation or the annotation of _setup_iclk_autoidle is wrong.
      
      WARNING: vmlinux.o(.text+0x2d3a0): Section mismatch in reference from
      the function _setup() to the function .init.text:_setup_reset()
      The function _setup() references
      the function __init _setup_reset().
      This is often because _setup lacks a __init
      annotation or the annotation of _setup_reset is wrong.
      
      WARNING: vmlinux.o(.text+0x2d408): Section mismatch in reference from
      the function _setup() to the function .init.text:_setup_postsetup()
      The function _setup() references
      the function __init _setup_postsetup().
      This is often because _setup lacks a __init
      annotation or the annotation of _setup_postsetup is wrong.
      
      _setup is used in omap_hwmod_allocate_module, which isn't marked __init
and looks like it shouldn't be, so to fix these warnings, those
      functions must be moved out of the init section, which this patch does.
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• MIPS: Boston: Disable EG20T prefetch · 4ee24ae8
  Paul Burton committed
      [ Upstream commit 5ec17af7ead09701e23d2065e16db6ce4e137289 ]
      
      The Intel EG20T Platform Controller Hub used on the MIPS Boston
      development board supports prefetching memory to optimize DMA transfers.
      Unfortunately for unknown reasons this doesn't work well with some MIPS
      CPUs such as the P6600, particularly when using an I/O Coherence Unit
      (IOCU) to provide cache-coherent DMA. In these systems it is common for
      DMA data to be lost, resulting in broken access to EG20T devices such as
      the MMC or SATA controllers.
      
      Support for a DT property to configure the prefetching was added a while
      back by commit 549ce8f1 ("misc: pch_phub: Read prefetch value from
      device tree if passed") but we never added the DT snippet to make use of
      it. Add that now in order to disable the prefetching & fix DMA on the
      affected systems.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/21068/
Cc: linux-mips@linux-mips.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
• powerpc/pseries: add of_node_put() in dlpar_detach_node() · 22ccd257
  Frank Rowand committed
      [ Upstream commit 5b3f5c408d8cc59b87e47f1ab9803dbd006e4a91 ]
      
      The previous commit, "of: overlay: add missing of_node_get() in
      __of_attach_node_sysfs" added a missing of_node_get() to
      __of_attach_node_sysfs().  This results in a refcount imbalance
      for nodes attached with dlpar_attach_node().  The calling sequence
      from dlpar_attach_node() to __of_attach_node_sysfs() is:
      
         dlpar_attach_node()
            of_attach_node()
               __of_attach_node_sysfs()
      
      For more detailed description of the node refcount, see
      commit 68baf692 ("powerpc/pseries: Fix of_node_put() underflow
      during DLPAR remove").
Tested-by: Alan Tull <atull@kernel.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Frank Rowand <frank.rowand@sony.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• x86/PCI: Fix Broadcom CNB20LE unintended sign extension (redux) · 534a0c21
  Colin Ian King committed
      [ Upstream commit 53bb565fc5439f2c8c57a786feea5946804aa3e9 ]
      
      In the expression "word1 << 16", word1 starts as u16, but is promoted to a
      signed int, then sign-extended to resource_size_t, which is probably not
      what was intended.  Cast to resource_size_t to avoid the sign extension.
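A small user-space illustration of the same promotion pitfall (made-up value, with uint64_t standing in for resource_size_t):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint16_t word1 = 0x8000;
		uint64_t bad, good;

		/*
		 * word1 is promoted to signed int; shifting into the sign bit
		 * is formally undefined and, in practice, yields a negative
		 * int that gets sign-extended when widened to 64 bits.
		 */
		bad  = word1 << 16;
		/* casting first keeps the whole computation unsigned and wide */
		good = (uint64_t)word1 << 16;

		printf("bad  = 0x%llx\n", (unsigned long long)bad);  /* 0xffffffff80000000 */
		printf("good = 0x%llx\n", (unsigned long long)good); /* 0x80000000 */
		return 0;
	}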
      
      This fixes an identical issue as fixed by commit 0b2d7076 ("x86/PCI:
      Fix Broadcom CNB20LE unintended sign extension") back in 2014.
      
      Detected by CoverityScan, CID#138749, 138750 ("Unintended sign extension")
      
      Fixes: 3f6ea84a ("PCI: read memory ranges out of Broadcom CNB20LE host bridge")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Bjorn Helgaas <helgaas@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• ARM: 8808/1: kexec: offline panic_smp_self_stop CPU · 46602077
  Yufen Wang committed
      [ Upstream commit 82c08c3e7f171aa7f579b231d0abbc1d62e91974 ]
      
panic() may be called at the same time on different CPUs.
      For example:
      CPU 0:
        panic()
           __crash_kexec
             machine_crash_shutdown
               crash_smp_send_stop
             machine_kexec
               BUG_ON(num_online_cpus() > 1);
      
      CPU 1:
        panic()
          local_irq_disable
          panic_smp_self_stop
      
If CPU 1 calls panic_smp_self_stop() before crash_smp_send_stop(), kdump
fails: CPU 1 can't receive the IPI and will stay online forever.
To fix this problem, this patch splits out panic_smp_self_stop()
and adds set_cpu_online(smp_processor_id(), false).
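A rough sketch of what such an arch override can look like (reconstructed for illustration; the real ARM code may differ):

	void panic_smp_self_stop(void)
	{
		pr_debug("CPU %u will stop doing anything useful since another CPU has paniced\n",
			 smp_processor_id());

		/* mark this CPU offline so a later crash kexec is not stopped
		 * by BUG_ON(num_online_cpus() > 1) in machine_kexec() */
		set_cpu_online(smp_processor_id(), false);

		while (1)
			cpu_relax();
	}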
Signed-off-by: Yufen Wang <wangyufen@huawei.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• nds32: Fix gcc 8.0 compiler option incompatible. · 752abfad
  Nickhu committed
      [ Upstream commit 4c3d6174e0e17599549f636ec48ddf78627a17fe ]
      
When the ftrace and frame pointer kernel config options are both
chosen, the resulting compiler options are incompatible.
      	Error message:
      		nds32le-linux-gcc: error: -pg and -fomit-frame-pointer are incompatible
Signed-off-by: Nickhu <nickhu@andestech.com>
Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2. 07 Feb 2019, 5 commits
3. 31 Jan 2019, 15 commits
• s390/smp: Fix calling smp_call_ipl_cpu() from ipl CPU · 48046a01
  David Hildenbrand committed
      commit 60f1bf29c0b2519989927cae640cd1f50f59dc7f upstream.
      
      When calling smp_call_ipl_cpu() from the IPL CPU, we will try to read
      from pcpu_devices->lowcore. However, due to prefixing, that will result
      in reading from absolute address 0 on that CPU. We have to go via the
      actual lowcore instead.
      
      This means that right now, we will read lc->nodat_stack == 0 and
therefore work on a very wrong stack.
      
This bug essentially broke rebooting under QEMU TCG (which will report
a low address protection exception), and checking under KVM shows it is
broken there as well. With 1 VCPU it can be triggered easily.
      
      :/# echo 1 > /proc/sys/kernel/sysrq
      :/# echo b > /proc/sysrq-trigger
      [   28.476745] sysrq: SysRq : Resetting
      [   28.476793] Kernel stack overflow.
      [   28.476817] CPU: 0 PID: 424 Comm: sh Not tainted 5.0.0-rc1+ #13
      [   28.476820] Hardware name: IBM 2964 NE1 716 (KVM/Linux)
      [   28.476826] Krnl PSW : 0400c00180000000 0000000000115c0c (pcpu_delegate+0x12c/0x140)
      [   28.476861]            R:0 T:1 IO:0 EX:0 Key:0 M:0 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
      [   28.476863] Krnl GPRS: ffffffffffffffff 0000000000000000 000000000010dff8 0000000000000000
      [   28.476864]            0000000000000000 0000000000000000 0000000000ab7090 000003e0006efbf0
      [   28.476864]            000000000010dff8 0000000000000000 0000000000000000 0000000000000000
      [   28.476865]            000000007fffc000 0000000000730408 000003e0006efc58 0000000000000000
      [   28.476887] Krnl Code: 0000000000115bfe: 4170f000            la      %r7,0(%r15)
      [   28.476887]            0000000000115c02: 41f0a000            la      %r15,0(%r10)
      [   28.476887]           #0000000000115c06: e370f0980024        stg     %r7,152(%r15)
      [   28.476887]           >0000000000115c0c: c0e5fffff86e        brasl   %r14,114ce8
      [   28.476887]            0000000000115c12: 41f07000            la      %r15,0(%r7)
      [   28.476887]            0000000000115c16: a7f4ffa8            brc     15,115b66
      [   28.476887]            0000000000115c1a: 0707                bcr     0,%r7
      [   28.476887]            0000000000115c1c: 0707                bcr     0,%r7
      [   28.476901] Call Trace:
      [   28.476902] Last Breaking-Event-Address:
      [   28.476920]  [<0000000000a01c4a>] arch_call_rest_init+0x22/0x80
      [   28.476927] Kernel panic - not syncing: Corrupt kernel stack, can't continue.
      [   28.476930] CPU: 0 PID: 424 Comm: sh Not tainted 5.0.0-rc1+ #13
      [   28.476932] Hardware name: IBM 2964 NE1 716 (KVM/Linux)
      [   28.476932] Call Trace:
      
      Fixes: 2f859d0d ("s390/smp: reduce size of struct pcpu")
      Cc: stable@vger.kernel.org # 4.0+
Reported-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• x86/entry/64/compat: Fix stack switching for XEN PV · dd085f9b
  Jan Beulich committed
      commit fc24d75a7f91837d7918e40719575951820b2b8f upstream.
      
      While in the native case entry into the kernel happens on the trampoline
      stack, PV Xen kernels get entered with the current thread stack right
      away. Hence source and destination stacks are identical in that case,
      and special care is needed.
      
      Other than in sync_regs() the copying done on the INT80 path isn't
      NMI / #MC safe, as either of these events occurring in the middle of the
      stack copying would clobber data on the (source) stack.
      
      There is similar code in interrupt_entry() and nmi(), but there is no fixup
      required because those code paths are unreachable in XEN PV guests.
      
      [ tglx: Sanitized subject, changelog, Fixes tag and stable mail address. Sigh ]
      
      Fixes: 7f2590a1 ("x86/entry/64: Use a per-CPU trampoline stack for IDT entries")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Peter Anvin <hpa@zytor.com>
Cc: xen-devel@lists.xenproject.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/5C3E1128020000780020DFAD@prv1-mh.provo.novell.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• x86/kaslr: Fix incorrect i8254 outb() parameters · ed334be9
  Daniel Drake committed
      commit 7e6fc2f50a3197d0e82d1c0e86282976c9e6c8a4 upstream.
      
      The outb() function takes parameters value and port, in that order.  Fix
the parameters used in the kaslr i8254 fallback code.
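A before/after sketch of the latch command (the I8254_* constant names mirror the kaslr helper but treat them as illustrative):

	/* outb(value, port): the data byte comes first, the port second */

	/* before (wrong): arguments swapped */
	outb(I8254_PORT_CONTROL, I8254_CMD_READBACK | I8254_SELECT_COUNTER0);

	/* after (right) */
	outb(I8254_CMD_READBACK | I8254_SELECT_COUNTER0, I8254_PORT_CONTROL);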
      
      Fixes: 5bfce5ef ("x86, kaslr: Provide randomness functions")
Signed-off-by: Daniel Drake <drake@endlessm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: bp@alien8.de
      Cc: hpa@zytor.com
      Cc: linux@endlessm.com
      Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20190107034024.15005-1-drake@endlessm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• x86/pkeys: Properly copy pkey state at fork() · db01b8d4
  Dave Hansen committed
      commit a31e184e4f69965c99c04cc5eb8a4920e0c63737 upstream.
      
      Memory protection key behavior should be the same in a child as it was
      in the parent before a fork.  But, there is a bug that resets the
      state in the child at fork instead of preserving it.
      
      The creation of new mm's is a bit convoluted.  At fork(), the code
      does:
      
        1. memcpy() the parent mm to initialize child
  2. mm_init() to initialize some select stuff
        3. dup_mmap() to create true copies that memcpy() did not do right
      
      For pkeys two bits of state need to be preserved across a fork:
      'execute_only_pkey' and 'pkey_allocation_map'.
      
      Those are preserved by the memcpy(), but mm_init() invokes
      init_new_context() which overwrites 'execute_only_pkey' and
      'pkey_allocation_map' with "new" values.
      
      The author of the code erroneously believed that init_new_context is *only*
      called at execve()-time.  But, alas, init_new_context() is used at execve()
      and fork().
      
      The result is that, after a fork(), the child's pkey state ends up looking
      like it does after an execve(), which is totally wrong.  pkeys that are
      already allocated can be allocated again, for instance.
      
      To fix this, add code called by dup_mmap() to copy the pkey state from
      parent to child explicitly.  Also add a comment above init_new_context() to
      make it more clear to the next poor sod what this code is used for.
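A hedged sketch of the explicit copy (close to, but not necessarily identical to, the upstream helper):

	#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
	static inline void arch_dup_pkeys(struct mm_struct *oldmm,
					  struct mm_struct *mm)
	{
		if (!cpu_feature_enabled(X86_FEATURE_OSPKE))
			return;

		/* re-copy the state that init_new_context() just reset */
		mm->context.pkey_allocation_map = oldmm->context.pkey_allocation_map;
		mm->context.execute_only_pkey   = oldmm->context.execute_only_pkey;
	}
	#endif

	/* ... called from arch_dup_mmap(), which dup_mmap() invokes at fork() */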
      
      Fixes: e8c24d3a ("x86/pkeys: Allocation/free syscalls")
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: bp@alien8.de
      Cc: hpa@zytor.com
      Cc: peterz@infradead.org
      Cc: mpe@ellerman.id.au
      Cc: will.deacon@arm.com
      Cc: luto@kernel.org
      Cc: jroedel@suse.de
      Cc: stable@vger.kernel.org
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Joerg Roedel <jroedel@suse.de>
Link: https://lkml.kernel.org/r/20190102215655.7A69518C@viggo.jf.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• KVM/nVMX: Do not validate that posted_intr_desc_addr is page aligned · f9203cd0
  KarimAllah Ahmed committed
      commit 22a7cdcae6a4a3c8974899e62851d270956f58ce upstream.
      
      The spec only requires the posted interrupt descriptor address to be
      64-bytes aligned (i.e. bits[0:5] == 0). Using page_address_valid also
      forces the address to be page aligned.
      
      Only validate that the address does not cross the maximum physical address
      without enforcing a page alignment.
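A hedged sketch of the relaxed check (helper names follow the usual KVM/VMX code, but treat the exact placement as illustrative):

	/* inside the nested VM-entry checks for vmcs12 */
	if (nested_cpu_has_posted_intr(vmcs12)) {
		/* bits 5:0 must be zero: 64-byte alignment is enough */
		if (vmcs12->posted_intr_desc_addr & 0x3f)
			return -EINVAL;
		/* still reject addresses beyond the guest's MAXPHYADDR */
		if (vmcs12->posted_intr_desc_addr >> cpuid_maxphyaddr(vcpu))
			return -EINVAL;
	}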
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: x86@kernel.org
      Cc: kvm@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Fixes: 6de84e58 ("nVMX x86: check posted-interrupt descriptor addresss on vmentry of L2")
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Krish Sadhuhan <krish.sadhukhan@oracle.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
From: Mark Mielke <mark.mielke@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• kvm: x86/vmx: Use kzalloc for cached_vmcs12 · d58f5e63
  Tom Roeder committed
      commit 3a33d030daaa7c507e1c12d5adcf828248429593 upstream.
      
      This changes the allocation of cached_vmcs12 to use kzalloc instead of
      kmalloc. This removes the information leak found by Syzkaller (see
      Reported-by) in this case and prevents similar leaks from happening
      based on cached_vmcs12.
      
      It also changes vmx_get_nested_state to copy out the full 4k VMCS12_SIZE
      in copy_to_user rather than only the size of the struct.
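A hedged sketch of both parts of the change (field and constant names follow the VMX code; details may differ from the actual diff):

	/* allocate zeroed so the bytes past sizeof(struct vmcs12) can never
	 * leak stale kernel memory to userspace */
	vmx->nested.cached_vmcs12 = kzalloc(VMCS12_SIZE, GFP_KERNEL);
	if (!vmx->nested.cached_vmcs12)
		return -ENOMEM;

	/* ... and in vmx_get_nested_state(), copy the full backing size */
	if (copy_to_user(user_vmx_nested_state->vmcs12,
			 vmx->nested.cached_vmcs12, VMCS12_SIZE))
		return -EFAULT;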
      
Tested: rebuilt against head, booted, and ran the syzkaller repro
        https://syzkaller.appspot.com/text?tag=ReproC&x=174efca3400000 without
        observing any problems.
      
      Reported-by: syzbot+ded1696f6b50b615b630@syzkaller.appspotmail.com
      Fixes: 8fcc4b59
      Cc: stable@vger.kernel.org
Signed-off-by: Tom Roeder <tmroeder@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• KVM: x86: WARN_ONCE if sending a PV IPI returns a fatal error · bbb8c5c7
  Sean Christopherson committed
      commit de81c2f912ef57917bdc6d63b410c534c3e07982 upstream.
      
      KVM hypercalls return a negative value error code in case of a fatal
      error, e.g. when the hypercall isn't supported or was made with invalid
      parameters.  WARN_ONCE on fatal errors when sending PV IPIs as any such
      error all but guarantees an SMP system will hang due to a missing IPI.
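A hedged sketch of the guest-side check (the hypercall wrapper is the standard kvm_para API; variable names are illustrative):

	long ret;

	ret = kvm_hypercall4(KVM_HC_SEND_IPI, ipi_bitmap_low,
			     ipi_bitmap_high, min, icr);

	/* a negative value is a fatal error (unsupported hypercall or bad
	 * parameters): the IPI was never sent, so complain loudly once */
	WARN_ONCE(ret < 0, "kvm-pv-ipi: failed to send PV IPI: %ld", ret);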
      
      Fixes: aaffcfd1 ("KVM: X86: Implement PV IPIs in linux guest")
      Cc: stable@vger.kernel.org
      Cc: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• KVM: x86: Fix PV IPIs for 32-bit KVM host · b2598858
  Sean Christopherson committed
      commit 1ed199a41c70ad7bfaee8b14f78e791fcf43b278 upstream.
      
      The recognition of the KVM_HC_SEND_IPI hypercall was unintentionally
      wrapped in "#ifdef CONFIG_X86_64", causing 32-bit KVM hosts to reject
      any and all PV IPI requests despite advertising the feature.  This
      results in all KVM paravirtualized guests hanging during SMP boot due
      to IPIs never being delivered.
      
      Fixes: 4180bf1b ("KVM: X86: Implement "send IPI" hypercall")
      Cc: stable@vger.kernel.org
      Cc: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• KVM: x86: Fix single-step debugging · 6d3dabbd
  Alexander Popov committed
      commit 5cc244a20b86090c087073c124284381cdf47234 upstream.
      
      The single-step debugging of KVM guests on x86 is broken: if we run
      gdb 'stepi' command at the breakpoint when the guest interrupts are
      enabled, RIP always jumps to native_apic_mem_write(). Then other
      nasty effects follow.
      
      Long investigation showed that on Jun 7, 2017 the
      commit c8401dda ("KVM: x86: fix singlestepping over syscall")
      introduced the kvm_run.debug corruption: kvm_vcpu_do_singlestep() can
      be called without X86_EFLAGS_TF set.
      
      Let's fix it. Please consider that for -stable.
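A sketch of the guarded call in the emulation exit path (close to the 4.19-era kvm_skip_emulated_instruction(), reconstructed for illustration):

	int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
	{
		unsigned long rflags = kvm_x86_ops->get_rflags(vcpu);
		int r = EMULATE_DONE;

		kvm_x86_ops->skip_emulated_instruction(vcpu);

		/* only report a single-step debug exit when the guest really
		 * had TF set; otherwise kvm_run.debug must stay untouched */
		if (unlikely(rflags & X86_EFLAGS_TF))
			kvm_vcpu_do_singlestep(vcpu, &r);

		return r == EMULATE_DONE;
	}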
Signed-off-by: Alexander Popov <alex.popov@linux.com>
      Cc: stable@vger.kernel.org
      Fixes: c8401dda ("KVM: x86: fix singlestepping over syscall")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• s390/smp: fix CPU hotplug deadlock with CPU rescan · 049c7b06
  Gerald Schaefer committed
      commit b7cb707c373094ce4008d4a6ac9b6b366ec52da5 upstream.
      
      smp_rescan_cpus() is called without the device_hotplug_lock, which can lead
to a deadlock when a new CPU is found and immediately set online by a udev
      rule.
      
      This was observed on an older kernel version, where the cpu_hotplug_begin()
      loop was still present, and it resulted in hanging chcpu and systemd-udev
      processes. This specific deadlock will not show on current kernels. However,
      there may be other possible deadlocks, and since smp_rescan_cpus() can still
      trigger a CPU hotplug operation, the device_hotplug_lock should be held.
      
      For reference, this was the deadlock with the old cpu_hotplug_begin() loop:
      
              chcpu (rescan)                       systemd-udevd
      
       echo 1 > /sys/../rescan
       -> smp_rescan_cpus()
       -> (*) get_online_cpus()
          (increases refcount)
       -> smp_add_present_cpu()
          (new CPU found)
       -> register_cpu()
       -> device_add()
       -> udev "add" event triggered -----------> udev rule sets CPU online
                                               -> echo 1 > /sys/.../online
                                               -> lock_device_hotplug_sysfs()
                                                  (this is missing in rescan path)
                                               -> device_online()
                                               -> (**) device_lock(new CPU dev)
                                               -> cpu_up()
                                               -> cpu_hotplug_begin()
                                                  (loops until refcount == 0)
                                                  -> deadlock with (*)
       -> bus_probe_device()
       -> device_attach()
       -> device_lock(new CPU dev)
          -> deadlock with (**)
      
      Fix this by taking the device_hotplug_lock in the CPU rescan path.
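A hedged sketch of the rescan sysfs handler with the lock taken (modeled on the generic lock_device_hotplug_sysfs() pattern; treat details as illustrative):

	static ssize_t rescan_store(struct device *dev,
				    struct device_attribute *attr,
				    const char *buf, size_t count)
	{
		int rc;

		/* same lock the online/offline sysfs path takes, so a
		 * udev-triggered device_online() cannot race with the rescan */
		rc = lock_device_hotplug_sysfs();
		if (rc)
			return rc;

		rc = smp_rescan_cpus();
		unlock_device_hotplug();

		return rc ? rc : count;
	}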
      
      Cc: <stable@vger.kernel.org>
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• s390/early: improve machine detection · e0d573a0
  Christian Borntraeger committed
      commit 03aa047ef2db4985e444af6ee1c1dd084ad9fb4c upstream.
      
Right now the early machine detection code checks stsi 3.2.2 for "KVM"
and sets MACHINE_IS_VM if this is different. As the console detection
uses diagnose 8 if MACHINE_IS_VM is true, this will crash Linux early
for any non-z/VM system that sets a different value than KVM.
      So instead of assuming z/VM, do not set any of MACHINE_IS_LPAR,
      MACHINE_IS_VM, or MACHINE_IS_KVM.
      
      CC: stable@vger.kernel.org
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• s390/mm: always force a load of the primary ASCE on context switch · b5637644
  Martin Schwidefsky committed
      commit a38662084c8bdb829ff486468c7ea801c13fcc34 upstream.
      
      The ASCE of an mm_struct can be modified after a task has been created,
      e.g. via crst_table_downgrade for a compat process. The active_mm logic
      to avoid the switch_mm call if the next task is a kernel thread can
      lead to a situation where switch_mm is called where 'prev == next' is
      true but 'prev->context.asce == next->context.asce' is not.
      
      This can lead to a situation where a CPU uses the outdated ASCE to run
      a task. The result can be a crash, endless loops and really subtle
problems due to TLBs being created with an invalid ASCE.
      
      Cc: stable@kernel.org # v3.15+
      Fixes: 53e857f3 ("s390/mm,tlb: race of lazy TLB flush vs. recreation")
Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• ARC: perf: map generic branches to correct hardware condition · 8cbca173
  Eugeniy Paltsev committed
      commit 3affbf0e154ee351add6fcc254c59c3f3947fa8f upstream.
      
      So far we've mapped branches to "ijmp" which also counts conditional
      branches NOT taken. This makes us different from other architectures
      such as ARM which seem to be counting only taken branches.
      
      So use "ijmptak" hardware condition which only counts (all jump
      instructions that are taken)
      
The 'ijmptak' event is available on both ARCompact and ARCv2 ISA based
      cores.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: reworked changelog]
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• ARC: adjust memblock_reserve of kernel memory · 2f0d2f3a
  Eugeniy Paltsev committed
      commit a3010a0465383300f909f62b8a83f83ffa7b2517 upstream.
      
In setup_arch_memory we reserve the memory area wherein the kernel
is located. The current implementation may reserve more memory than
is actually required when CONFIG_LINUX_LINK_BASE is not equal to
CONFIG_LINUX_RAM_BASE. This happens because we calculate the start of
the reserved region relative to CONFIG_LINUX_RAM_BASE but the end of
the region relative to CONFIG_LINUX_LINK_BASE.
      
      For example in case of HSDK board we wasted 256MiB of physical memory:
      ------------------->8------------------------------
      Memory: 770416K/1048576K available (5496K kernel code,
          240K rwdata, 1064K rodata, 2200K init, 275K bss,
          278160K reserved, 0K cma-reserved)
      ------------------->8------------------------------
      
      Fix that.
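A hedged sketch of the corrected reservation (reconstructed; the real code may use slightly different symbols):

	/* reserve exactly the kernel image, from its link/load base up to
	 * _end, instead of starting the reserved region at the RAM base */
	memblock_reserve(CONFIG_LINUX_LINK_BASE,
			 __pa(_end) - CONFIG_LINUX_LINK_BASE);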
      
      Fixes: 9ed68785 ("ARC: mm: Decouple RAM base address from kernel link addr")
      Cc: stable@vger.kernel.org	#4.14+
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• ARCv2: lib: memeset: fix doing prefetchw outside of buffer · 7bb78e62
  Eugeniy Paltsev committed
      commit e6a72b7daeeb521753803550f0ed711152bb2555 upstream.
      
      ARCv2 optimized memset uses PREFETCHW instruction for prefetching the
      next cache line but doesn't ensure that the line is not past the end of
the buffer. PREFETCHW changes the line ownership and marks it dirty,
which can cause issues in SMP configs when the next line was already owned
by another core. Fix the issue by avoiding the PREFETCHW.
      
      Some more details:
      
The current code has 3 logical loops (ignoring the unaligned part)
        (a) Big loop for doing aligned 64 bytes per iteration with PREALLOC
        (b) Loop for 32 x 2 bytes with PREFETCHW
        (c) any left over bytes
      
      loop (a) was already eliding the last 64 bytes, so PREALLOC was
safe. The fix was removing PREFETCHW from (b).
      
      Another potential issue (applicable to configs with 32 or 128 byte L1
      cache line) is that PREALLOC assumes 64 byte cache line and may not do
      the right thing specially for 32b. While it would be easy to adapt,
there are no known configs with those line sizes, so for now, just
      compile out PREALLOC in such cases.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Cc: stable@vger.kernel.org #4.4+
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: rewrote changelog, used asm .macro vs. "C" macro]
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
4. 26 Jan 2019, 5 commits
• x86/topology: Use total_cpus for max logical packages calculation · a4772e8b
  Hui Wang committed
      [ Upstream commit aa02ef099cff042c2a9109782ec2bf1bffc955d4 ]
      
      nr_cpu_ids can be limited on the command line via nr_cpus=. This can break the
      logical package management because it results in a smaller number of packages
      while in kdump kernel.
      
Consider the following case:
There is a two-socket system, each socket has 8 cores, which gives 16 logical
cpus per socket with HT turned on.
      
       0  1  2  3  4  5  6  7     |    16 17 18 19 20 21 22 23
       cores on socket 0               threads on socket 0
       8  9 10 11 12 13 14 15     |    24 25 26 27 28 29 30 31
       cores on socket 1               threads on socket 1
      
When the kdump kernel is started with command line option nr_cpus=16 and
panic was triggered on one of the cpus 24-31, e.g. 26, the online cpus will be
1-15 and 26 (cpu 0 was disabled in kdump), ncpus will be 16 and
__max_logical_packages will be 1, even though two packages were actually booted.
      
This issue can be reproduced by setting the kdump option nr_cpus=<number of
real physical cores> and then triggering a panic on a thread of the last
socket, for example:
      
      taskset -c 26 echo c > /proc/sysrq-trigger
      
Use total_cpus, which is not limited by the nr_cpus command line option, to
calculate the value of __max_logical_packages.
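A hedged sketch of the resulting calculation (surrounding code abridged):

	/* maximum number of threads a single package can hold */
	ncpus = boot_cpu_data.x86_max_cores * smp_num_siblings;

	/* total_cpus reflects the full possible CPU count; nr_cpu_ids can be
	 * shrunk by nr_cpus= on the kdump command line and undercount packages */
	__max_logical_packages = DIV_ROUND_UP(total_cpus, ncpus);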
Signed-off-by: Hui Wang <john.wanghui@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: <guijianfeng@huawei.com>
      Cc: <wencongyang2@huawei.com>
      Cc: <douliyang1@huawei.com>
      Cc: <qiaonuohan@huawei.com>
Link: https://lkml.kernel.org/r/20181107023643.22174-1-john.wanghui@huawei.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
• arm64: Fix minor issues with the dcache_by_line_op macro · dfbf8c98
  Will Deacon committed
      [ Upstream commit 33309ecda0070506c49182530abe7728850ebe78 ]
      
      The dcache_by_line_op macro suffers from a couple of small problems:
      
      First, the GAS directives that are currently being used rely on
      assembler behavior that is not documented, and probably not guaranteed
      to produce the correct behavior going forward. As a result, we end up
      with some undefined symbols in cache.o:
      
      $ nm arch/arm64/mm/cache.o
               ...
               U civac
               ...
               U cvac
               U cvap
               U cvau
      
      This is due to the fact that the comparisons used to select the
      operation type in the dcache_by_line_op macro are comparing symbols
      not strings, and even though it seems that GAS is doing the right
      thing here (undefined symbols by the same name are equal to each
      other), it seems unwise to rely on this.
      
      Second, when patching in a DC CVAP instruction on CPUs that support it,
      the fallback path consists of a DC CVAU instruction which may be
      affected by CPU errata that require ARM64_WORKAROUND_CLEAN_CACHE.
      
      Solve these issues by unrolling the various maintenance routines and
      using the conditional directives that are documented as operating on
      strings. To avoid the complexity of nested alternatives, we move the
      DC CVAP patching to __clean_dcache_area_pop, falling back to a branch
      to __clean_dcache_area_poc if DCPOP is not supported by the CPU.
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• arm64: kasan: Increase stack size for KASAN_EXTRA · 8f183b33
  Qian Cai committed
      [ Upstream commit 6e8830674ea77f57d57a33cca09083b117a71f41 ]
      
      If the kernel is configured with KASAN_EXTRA, the stack size is
      increased significantly due to setting the GCC -fstack-reuse option to
      "none" [1]. As a result, it can trigger a stack overrun quite often with
      32k stack size compiled using GCC 8. For example, this reproducer
      
        https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/madvise/madvise06.c
      
      can trigger a "corrupted stack end detected inside scheduler" very
      reliably with CONFIG_SCHED_STACK_END_CHECK enabled. There are other
      reports at:
      
        https://lore.kernel.org/lkml/1542144497.12945.29.camel@gmx.us/
        https://lore.kernel.org/lkml/721E7B42-2D55-4866-9C1A-3E8D64F33F9C@gmx.us/
      
      There are just too many functions that could have a large stack with
      KASAN_EXTRA due to large local variables that have been called over and
over again without being able to reuse the stacks. Some noticeable ones
are:
      
      size
      7536 shrink_inactive_list
      7440 shrink_page_list
      6560 fscache_stats_show
      3920 jbd2_journal_commit_transaction
      3216 try_to_unmap_one
      3072 migrate_page_move_mapping
      3584 migrate_misplaced_transhuge_page
      3920 ip_vs_lblcr_schedule
      4304 lpfc_nvme_info_show
      3888 lpfc_debugfs_nvmestat_data.constprop
      
There are another 49 functions over 2k in size when compiling the kernel with
      "-Wframe-larger-than=" on this machine. Hence, it is too much work to
      change Makefiles for each object to compile without
      -fsanitize-address-use-after-scope individually.
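A hedged sketch of how the arm64 stack-size selection can accommodate this (macro names reconstructed from memory, so details may differ from the actual header):

	#if defined(CONFIG_KASAN_EXTRA)
	#define KASAN_THREAD_SHIFT	2	/* 4x the base stack size */
	#elif defined(CONFIG_KASAN)
	#define KASAN_THREAD_SHIFT	1	/* 2x the base stack size */
	#else
	#define KASAN_THREAD_SHIFT	0
	#endif

	#define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)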
      
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715#c23
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• powerpc/xmon: Fix invocation inside lock region · 115a0d66
  Breno Leitao committed
      [ Upstream commit 8d4a862276a9c30a269d368d324fb56529e6d5fd ]
      
      Currently xmon needs to get devtree_lock (through rtas_token()) during its
      invocation (at crash time). If there is a crash while devtree_lock is being
held, then xmon tries to get the lock but spins forever and never gets into
      the interactive debugger, as in the following case:
      
      	int *ptr = NULL;
      	raw_spin_lock_irqsave(&devtree_lock, flags);
      	*ptr = 0xdeadbeef;
      
This patch avoids calling rtas_token(), and thus trying to take the same
lock, at crash time. The new mechanism gets the token at initialization
time (xmon_init()) and just consumes it at crash time.
      
This allows xmon to be invoked regardless of whether devtree_lock is
held or not.
Signed-off-by: Breno Leitao <leitao@debian.org>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• arm64: perf: set suppress_bind_attrs flag to true · 6f88ff11
  Anders Roxell committed
      [ Upstream commit 81e9fa8bab381f8b6eb04df7cdf0f71994099bd4 ]
      
      The armv8_pmuv3 driver doesn't have a remove function, and when the test
'CONFIG_DEBUG_TEST_DRIVER_REMOVE=y' is enabled, the following call trace
can be seen:
      
      [    1.424287] Failed to register pmu: armv8_pmuv3, reason -17
      [    1.424870] WARNING: CPU: 0 PID: 1 at ../kernel/events/core.c:11771 perf_event_sysfs_init+0x98/0xdc
      [    1.425220] Modules linked in:
      [    1.425531] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W         4.19.0-rc7-next-20181012-00003-ge7a97b1ad77b-dirty #35
      [    1.425951] Hardware name: linux,dummy-virt (DT)
      [    1.426212] pstate: 80000005 (Nzcv daif -PAN -UAO)
      [    1.426458] pc : perf_event_sysfs_init+0x98/0xdc
      [    1.426720] lr : perf_event_sysfs_init+0x98/0xdc
      [    1.426908] sp : ffff00000804bd50
      [    1.427077] x29: ffff00000804bd50 x28: ffff00000934e078
      [    1.427429] x27: ffff000009546000 x26: 0000000000000007
      [    1.427757] x25: ffff000009280710 x24: 00000000ffffffef
      [    1.428086] x23: ffff000009408000 x22: 0000000000000000
      [    1.428415] x21: ffff000009136008 x20: ffff000009408730
      [    1.428744] x19: ffff80007b20b400 x18: 000000000000000a
      [    1.429075] x17: 0000000000000000 x16: 0000000000000000
      [    1.429418] x15: 0000000000000400 x14: 2e79726f74636572
      [    1.429748] x13: 696420656d617320 x12: 656874206e692065
      [    1.430060] x11: 6d616e20656d6173 x10: 2065687420687469
      [    1.430335] x9 : ffff00000804bd50 x8 : 206e6f7361657220
      [    1.430610] x7 : 2c3376756d705f38 x6 : ffff00000954d7ce
      [    1.430880] x5 : 0000000000000000 x4 : 0000000000000000
      [    1.431226] x3 : 0000000000000000 x2 : ffffffffffffffff
      [    1.431554] x1 : 4d151327adc50b00 x0 : 0000000000000000
      [    1.431868] Call trace:
      [    1.432102]  perf_event_sysfs_init+0x98/0xdc
      [    1.432382]  do_one_initcall+0x6c/0x1a8
      [    1.432637]  kernel_init_freeable+0x1bc/0x280
      [    1.432905]  kernel_init+0x18/0x160
      [    1.433115]  ret_from_fork+0x10/0x18
      [    1.433297] ---[ end trace 27fd415390eb9883 ]---
      
Rework to set the suppress_bind_attrs flag to avoid removing the device when
      CONFIG_DEBUG_TEST_DRIVER_REMOVE=y, since there's no real reason to
      remove the armv8_pmuv3 driver.
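A hedged sketch of what the registration looks like with the flag set (field names mirror the arm_pmu platform drivers; treat details as illustrative):

	static struct platform_driver armv8_pmu_driver = {
		.driver = {
			.name			= ARMV8_PMU_PDEV_NAME,
			.of_match_table		= armv8_pmu_of_device_ids,
			/* never offer manual unbind and skip the remove test:
			 * this driver has no remove path */
			.suppress_bind_attrs	= true,
		},
		.probe = armv8_pmu_device_probe,
	};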
      
      Cc: Arnd Bergmann <arnd@arndb.de>
Co-developed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>