1. 19 May 2020, 1 commit
    • kgdb: Delay "kgdbwait" to dbg_late_init() by default · b1a57bbf
      Authored by Douglas Anderson
      Using kgdb requires at least some level of architecture-level
      initialization.  If nothing else, it relies on the architecture to
      pass breakpoints / crashes onto kgdb.
      
      On some architectures this all works super early, specifically it
      starts working at some point in time before Linux parses
      early_param's.  On other architectures it doesn't.  A survey of a few
      platforms:
      
      a) x86: Presumably it all works early since "ekgdboc" is documented to
         work here.
      b) arm64: Catching crashes works; with a simple patch breakpoints can
         also be made to work.
      c) arm: Nothing in kgdb works until
         paging_init() -> devicemaps_init() -> early_trap_init()
      
      Let's be conservative and, by default, process "kgdbwait" (which tells
      the kernel to drop into the debugger ASAP at boot) a bit later at
      dbg_late_init() time.  If an architecture has tested it and wants to
      re-enable super early debugging, they can select the
      ARCH_HAS_EARLY_DEBUG Kconfig option.  We'll do this for x86 to start.
      It should be noted that dbg_late_init() is still called quite early in
      the system.
      
      Note that this patch doesn't affect when kgdb runs its init.  If kgdb
      is set to initialize early it will still initialize when parsing
      early_param's.  This patch _only_ inhibits the initial breakpoint from
      "kgdbwait".  This means:
      
      * Without any extra patches arm64 platforms will at least catch
        crashes after kgdb inits.
      * arm platforms will catch crashes (and could handle a hardcoded
        kgdb_breakpoint()) any time after early_trap_init() runs, even
        before dbg_late_init().
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Link: https://lore.kernel.org/r/20200507130644.v4.4.I3113aea1b08d8ce36dc3720209392ae8b815201b@changeid
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
  2. 25 Apr 2020, 4 commits
  3. 23 Apr 2020, 8 commits
  4. 22 Apr 2020, 6 commits
  5. 21 Apr 2020, 5 commits
    • arm64: sync kernel APIAKey when installing · 3fabb438
      Authored by Mark Rutland
      A direct write to an APxxKey_EL1 register requires a context
      synchronization event to ensure that indirect reads made by subsequent
      instructions (e.g. AUTIASP, PACIASP) observe the new value.
      
      When we initialize the boot task's APIAKey in boot_init_stack_canary()
      via ptrauth_keys_switch_kernel() we miss the necessary ISB, and so there
      is a window where instructions are not guaranteed to use the new APIAKey
      value. This has been observed to result in boot-time crashes where
      PACIASP and AUTIASP within a function used a mixture of the old and new
      key values.
      
      Fix this by having ptrauth_keys_switch_kernel() synchronize the new key
      value with an ISB. At the same time, __ptrauth_key_install() is renamed
      to __ptrauth_key_install_nosync() so that it is obvious that this
      performs no synchronization itself.
      
      Fixes: 28321582 ("arm64: initialize ptrauth keys for kernel booting task")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Will Deacon <will@kernel.org>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Will Deacon <will@kernel.org>
    • powerpc/setup_64: Set cache-line-size based on cache-block-size · 94c0b013
      Authored by Chris Packham
      If {i,d}-cache-block-size is set and {i,d}-cache-line-size is not, use
      the block-size value for both. Per the devicetree spec cache-line-size
      is only needed if it differs from the block size.
      
      Originally the code would fall back from block size to line size. An
      error message was printed if both properties were missing.
      
      Later the code was refactored to use clearer names and logic but it
      inadvertently made line size a required property, meaning on systems
      without a line size property we fall back to the default from the
      cputable.
      
      On powernv (OPAL) platforms, since the introduction of device tree CPU
      features (5a61ef74 ("powerpc/64s: Support new device tree binding
      for discovering CPU features")), that has led to the wrong value being
      used, as the fallback value is incorrect for Power8/Power9 CPUs.
      
      The incorrect values flow through to the VDSO and also to the sysconf
      values, SC_LEVEL1_ICACHE_LINESIZE etc.
      
      Fixes: bd067f83 ("powerpc/64: Fix naming of cache block vs. cache line")
      Cc: stable@vger.kernel.org # v4.11+
      Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz>
      Reported-by: Qian Cai <cai@lca.pw>
      [mpe: Add even more detail to change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20200416221908.7886-1-chris.packham@alliedtelesis.co.nz
    • bpf, x86: Fix encoding for lower 8-bit registers in BPF_STX BPF_B · aee194b1
      Authored by Luke Nelson
      This patch fixes an encoding bug in emit_stx for BPF_B when the source
      register is BPF_REG_FP.
      
      The current implementation for BPF_STX BPF_B in emit_stx saves one REX
      byte when the operands can be encoded using Mod-R/M alone. The lower 8
      bits of registers %rax, %rbx, %rcx, and %rdx can be accessed without using
      a REX prefix via %al, %bl, %cl, and %dl, respectively.  Other registers
      (e.g., %rsi, %rdi, %rbp, %rsp) require a REX prefix to use their 8-bit
      equivalents (%sil, %dil, %bpl, %spl).
      
      The current code checks if the source for BPF_STX BPF_B is BPF_REG_1
      or BPF_REG_2 (which map to %rdi and %rsi), in which case it emits the
      required REX prefix. However, it misses the case when the source is
      BPF_REG_FP (mapped to %rbp).
      
      The result is that BPF_STX BPF_B with BPF_REG_FP as the source operand
      will read from register %ch instead of the correct %bpl. This patch fixes
      the problem by correcting and refactoring the check on which registers need
      the extra REX byte. Since no BPF registers map to %rsp, there is no need
      to handle %spl.
      
      Fixes: 62258278 ("net: filter: x86: internal BPF JIT")
      Signed-off-by: Xi Wang <xi.wang@gmail.com>
      Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200418232655.23870-1-luke.r.nels@gmail.com
    • KVM: PPC: Book3S HV: Handle non-present PTEs in page fault functions · ae49deda
      Authored by Paul Mackerras
      Since cd758a9b "KVM: PPC: Book3S HV: Use __gfn_to_pfn_memslot in HPT
      page fault handler", it's been possible in fairly rare circumstances to
      load a non-present PTE in kvmppc_book3s_hv_page_fault() when running a
      guest on a POWER8 host.
      
      Because that case wasn't checked for, we could misinterpret the non-present
      PTE as being a cache-inhibited PTE.  That could mismatch with the
      corresponding hash PTE, which would cause the function to fail with -EFAULT
      a little further down.  That would propagate up to the KVM_RUN ioctl()
      generally causing the KVM userspace (usually qemu) to fall over.
      
      This addresses the problem by catching that case and returning to the guest
      instead.
      
      For completeness, this fixes the radix page fault handler in the same
      way.  For radix this didn't cause any obvious misbehaviour, because we
      ended up putting the non-present PTE into the guest's partition-scoped
      page tables, leading immediately to another hypervisor data/instruction
      storage interrupt, which would go through the page fault path again
      and fix things up.
      
      Fixes: cd758a9b "KVM: PPC: Book3S HV: Use __gfn_to_pfn_memslot in HPT page fault handler"
      Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1820402
      Reported-by: David Gibson <david@gibson.dropbear.id.au>
      Tested-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • kvm: Disable objtool frame pointer checking for vmenter.S · 7f4b5cde
      Authored by Josh Poimboeuf
      Frame pointers are completely broken by vmenter.S because it clobbers
      RBP:
      
        arch/x86/kvm/svm/vmenter.o: warning: objtool: __svm_vcpu_run()+0xe4: BP used as a scratch register
      
      That's unavoidable, so just skip checking that file when frame pointers
      are configured in.
      
      On the other hand, ORC can handle that code just fine, so leave objtool
      enabled in the !FRAME_POINTER case.
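      The mechanism for this is kbuild's OBJECT_FILES_NON_STANDARD knob; a sketch of the shape of the change (placement and exact lines are inferred from the commit context, not quoted verbatim):

```make
# arch/x86/kvm/Makefile (sketch): skip objtool checks for vmenter.o
# only when frame pointers are configured in; ORC builds keep full
# objtool coverage of the file.
ifdef CONFIG_FRAME_POINTER
OBJECT_FILES_NON_STANDARD_vmenter.o := y
endif
```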
      Reported-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Message-Id: <01fae42917bacad18be8d2cbc771353da6603473.1587398610.git.jpoimboe@redhat.com>
      Tested-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
      Fixes: 199cd1d7 ("KVM: SVM: Split svm_vcpu_run inline assembly to separate file")
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  6. 20 Apr 2020, 1 commit
    • KVM: s390: Fix PV check in deliverable_irqs() · d47c4c45
      Authored by Eric Farman
      The diag 0x44 handler, which handles a directed yield, goes into a
      codepath that does a kvm_for_each_vcpu() and ultimately
      deliverable_irqs().  The new check for kvm_s390_pv_cpu_is_protected()
      contains an assertion that the vcpu->mutex is held, which isn't going
      to be the case in this scenario.
      
      The result is a plethora of these messages if lock debugging is
      enabled, and thus an implication that we have a problem.
      
        WARNING: CPU: 9 PID: 16167 at arch/s390/kvm/kvm-s390.h:239 deliverable_irqs+0x1c6/0x1d0 [kvm]
        ...snip...
        Call Trace:
         [<000003ff80429bf2>] deliverable_irqs+0x1ca/0x1d0 [kvm]
        ([<000003ff80429b34>] deliverable_irqs+0x10c/0x1d0 [kvm])
         [<000003ff8042ba82>] kvm_s390_vcpu_has_irq+0x2a/0xa8 [kvm]
         [<000003ff804101e2>] kvm_arch_dy_runnable+0x22/0x38 [kvm]
         [<000003ff80410284>] kvm_vcpu_on_spin+0x8c/0x1d0 [kvm]
         [<000003ff80436888>] kvm_s390_handle_diag+0x3b0/0x768 [kvm]
         [<000003ff80425af4>] kvm_handle_sie_intercept+0x1cc/0xcd0 [kvm]
         [<000003ff80422bb0>] __vcpu_run+0x7b8/0xfd0 [kvm]
         [<000003ff80423de6>] kvm_arch_vcpu_ioctl_run+0xee/0x3e0 [kvm]
         [<000003ff8040ccd8>] kvm_vcpu_ioctl+0x2c8/0x8d0 [kvm]
         [<00000001504ced06>] ksys_ioctl+0xae/0xe8
         [<00000001504cedaa>] __s390x_sys_ioctl+0x2a/0x38
         [<0000000150cb9034>] system_call+0xd8/0x2d8
        2 locks held by CPU 2/KVM/16167:
         #0: 00000001951980c0 (&vcpu->mutex){+.+.}, at: kvm_vcpu_ioctl+0x90/0x8d0 [kvm]
         #1: 000000019599c0f0 (&kvm->srcu){....}, at: __vcpu_run+0x4bc/0xfd0 [kvm]
        Last Breaking-Event-Address:
         [<000003ff80429b34>] deliverable_irqs+0x10c/0x1d0 [kvm]
        irq event stamp: 11967
        hardirqs last  enabled at (11975): [<00000001502992f2>] console_unlock+0x4ca/0x650
        hardirqs last disabled at (11982): [<0000000150298ee8>] console_unlock+0xc0/0x650
        softirqs last  enabled at (7940): [<0000000150cba6ca>] __do_softirq+0x422/0x4d8
        softirqs last disabled at (7929): [<00000001501cd688>] do_softirq_own_stack+0x70/0x80
      
      Considering what's being done here, let's fix this by removing the
      mutex assertion rather than acquiring the mutex for every other vcpu.
      
      Fixes: 201ae986 ("KVM: s390: protvirt: Implement interrupt injection")
      Signed-off-by: Eric Farman <farman@linux.ibm.com>
      Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Reviewed-by: Cornelia Huck <cohuck@redhat.com>
      Link: https://lore.kernel.org/r/20200415190353.63625-1-farman@linux.ibm.com
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
  7. 18 Apr 2020, 3 commits
  8. 17 Apr 2020, 7 commits
  9. 16 Apr 2020, 5 commits