1. 01 December 2022, 1 commit
  2. 09 November 2022, 3 commits
    • arch/x86/mm/hugetlbpage.c: pud_huge() returns 0 when using 2-level paging · 1fdbed65
      Authored by Naoya Horiguchi
      The following bug is reported to be triggered when starting X on x86-32
      system with i915:
      
        [  225.777375] kernel BUG at mm/memory.c:2664!
        [  225.777391] invalid opcode: 0000 [#1] PREEMPT SMP
        [  225.777405] CPU: 0 PID: 2402 Comm: Xorg Not tainted 6.1.0-rc3-bdg+ #86
        [  225.777415] Hardware name:  /8I865G775-G, BIOS F1 08/29/2006
        [  225.777421] EIP: __apply_to_page_range+0x24d/0x31c
        [  225.777437] Code: ff ff 8b 55 e8 8b 45 cc e8 0a 11 ec ff 89 d8 83 c4 28 5b 5e 5f 5d c3 81 7d e0 a0 ef 96 c1 74 ad 8b 45 d0 e8 2d 83 49 00 eb a3 <0f> 0b 25 00 f0 ff ff 81 eb 00 00 00 40 01 c3 8b 45 ec 8b 00 e8 76
        [  225.777446] EAX: 00000001 EBX: c53a3b58 ECX: b5c00000 EDX: c258aa00
        [  225.777454] ESI: b5c00000 EDI: b5900000 EBP: c4b0fdb4 ESP: c4b0fd80
        [  225.777462] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068 EFLAGS: 00010202
        [  225.777470] CR0: 80050033 CR2: b5900000 CR3: 053a3000 CR4: 000006d0
        [  225.777479] Call Trace:
        [  225.777486]  ? i915_memcpy_init_early+0x63/0x63 [i915]
        [  225.777684]  apply_to_page_range+0x21/0x27
        [  225.777694]  ? i915_memcpy_init_early+0x63/0x63 [i915]
        [  225.777870]  remap_io_mapping+0x49/0x75 [i915]
        [  225.778046]  ? i915_memcpy_init_early+0x63/0x63 [i915]
        [  225.778220]  ? mutex_unlock+0xb/0xd
        [  225.778231]  ? i915_vma_pin_fence+0x6d/0xf7 [i915]
        [  225.778420]  vm_fault_gtt+0x2a9/0x8f1 [i915]
        [  225.778644]  ? lock_is_held_type+0x56/0xe7
        [  225.778655]  ? lock_is_held_type+0x7a/0xe7
        [  225.778663]  ? 0xc1000000
        [  225.778670]  __do_fault+0x21/0x6a
        [  225.778679]  handle_mm_fault+0x708/0xb21
        [  225.778686]  ? mt_find+0x21e/0x5ae
        [  225.778696]  exc_page_fault+0x185/0x705
        [  225.778704]  ? doublefault_shim+0x127/0x127
        [  225.778715]  handle_exception+0x130/0x130
        [  225.778723] EIP: 0xb700468a
      
      Commit 3a194f3f ("mm/hugetlb: make pud_huge() and follow_huge_pud()
      aware of non-present pud entry") recently made pud_huge() aware of
      non-present entries in order to handle some special states of gigantic
      pages.  However, it overlooked that pud_none() always returns false
      when running with 2-level paging, and as a result pud_huge() can
      return true spuriously.
      
      Introduce "#if CONFIG_PGTABLE_LEVELS > 2" to pud_huge() to deal with this.
      
      Link: https://lkml.kernel.org/r/20221107021010.2449306-1-naoya.horiguchi@linux.dev
      Fixes: 3a194f3f ("mm/hugetlb: make pud_huge() and follow_huge_pud() aware of non-present pud entry")
      Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
      Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Liu Shixin <liushixin2@huawei.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Yang Shi <shy828301@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      1fdbed65
    • x86/traps: avoid KMSAN bugs originating from handle_bug() · ba54d194
      Authored by Alexander Potapenko
      There is a case in exc_invalid_op handler that is executed outside the
      irqentry_enter()/irqentry_exit() region when an UD2 instruction is used to
      encode a call to __warn().
      
      In that case the `struct pt_regs` passed to the interrupt handler is never
      unpoisoned by KMSAN (this is normally done in irqentry_enter()), which
      leads to false positives inside handle_bug().
      
      Use kmsan_unpoison_entry_regs() to explicitly unpoison those registers
      before using them.
      
      Link: https://lkml.kernel.org/r/20221102110611.1085175-5-glider@google.com
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Marco Elver <elver@google.com>
      Cc: Masahiro Yamada <masahiroy@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      ba54d194
    • x86/uaccess: instrument copy_from_user_nmi() · 11385b26
      Authored by Alexander Potapenko
      Make sure usercopy hooks from linux/instrumented.h are invoked for
      copy_from_user_nmi().  This fixes KMSAN false positives reported when
      dumping opcodes for a stack trace.
      
      Link: https://lkml.kernel.org/r/20221102110611.1085175-2-glider@google.com
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Marco Elver <elver@google.com>
      Cc: Masahiro Yamada <masahiroy@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      11385b26
  3. 05 November 2022, 1 commit
  4. 04 November 2022, 2 commits
    • arm64: cpufeature: Fix the visibility of compat hwcaps · 85f15063
      Authored by Amit Daniel Kachhap
      Commit 237405eb ("arm64: cpufeature: Force HWCAP to be based on the
      sysreg visible to user-space") forced the hwcaps to use the sanitised
      user-space view of the ID registers. However, the ID register
      structures used to select a few compat cpufeatures (vfp, crc32, ...)
      are masked, so such hwcaps no longer appear in /proc/cpuinfo for the
      PER_LINUX32 personality.
      
      Add the ID register structures explicitly and set the relevant entries
      as visible. As these ID registers are now visible, also make them
      available to 64-bit userspace by making the necessary changes in the
      register emulation logic and documentation.

      While at it, update the comment for the ftr_generic_32bits[] structure,
      which lists the ID registers that use it.
      
      Fixes: 237405eb ("arm64: cpufeature: Force HWCAP to be based on the sysreg visible to user-space")
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Link: https://lore.kernel.org/r/20221103082232.19189-1-amit.kachhap@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      85f15063
    • arm64: efi: Recover from synchronous exceptions occurring in firmware · 23715a26
      Authored by Ard Biesheuvel
      Unlike x86, which has machinery to deal with page faults that occur
      during the execution of EFI runtime services, arm64 has nothing like
      that, and a synchronous exception raised by firmware code brings down
      the whole system.
      
      With more EFI based systems appearing that were not built to run Linux
      (such as the Windows-on-ARM laptops based on Qualcomm SOCs), as well as
      the introduction of PRM (platform specific firmware routines that are
      callable just like EFI runtime services), we are more likely to run into
      issues of this sort, and it is much more likely that we can identify and
      work around such issues if they don't bring down the system entirely.
      
      Since we already use an EFI runtime services call wrapper in assembler,
      we can quite easily add some code that captures the execution state at
      the point where the call is made, allowing us to revert to that state
      and resume execution if the call triggered a synchronous exception.
      
      Given that the kernel and the firmware don't share any data structures
      that could end up in an indeterminate state, we can happily continue
      running, as long as we mark the EFI runtime services as unavailable from
      that point on.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      23715a26
  5. 03 November 2022, 7 commits
    • KVM: x86: Fix a typo about the usage of kvcalloc() · 8670866b
      Authored by Liao Chang
      Swap the 1st and 2nd arguments to be consistent with the usage of
      kvcalloc().
      
      Fixes: c9b8fecd ("KVM: use kvcalloc for array allocations")
      Signed-off-by: Liao Chang <liaochang1@huawei.com>
      Message-Id: <20221103011749.139262-1-liaochang1@huawei.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8670866b
    • KVM: x86: Use SRCU to protect zap in __kvm_set_or_clear_apicv_inhibit() · 074c0080
      Authored by Ben Gardon
      kvm_zap_gfn_range() must be called in an SRCU read-critical section, but
      there is no SRCU annotation in __kvm_set_or_clear_apicv_inhibit(). This
      can lead to the following warning via
      kvm_arch_vcpu_ioctl_set_guest_debug() if a Shadow MMU is in use (TDP
      MMU disabled or nesting):
      
      [ 1416.659809] =============================
      [ 1416.659810] WARNING: suspicious RCU usage
      [ 1416.659839] 6.1.0-dbg-DEV #1 Tainted: G S        I
      [ 1416.659853] -----------------------------
      [ 1416.659854] include/linux/kvm_host.h:954 suspicious rcu_dereference_check() usage!
      [ 1416.659856]
      ...
      [ 1416.659904]  dump_stack_lvl+0x84/0xaa
      [ 1416.659910]  dump_stack+0x10/0x15
      [ 1416.659913]  lockdep_rcu_suspicious+0x11e/0x130
      [ 1416.659919]  kvm_zap_gfn_range+0x226/0x5e0
      [ 1416.659926]  ? kvm_make_all_cpus_request_except+0x18b/0x1e0
      [ 1416.659935]  __kvm_set_or_clear_apicv_inhibit+0xcc/0x100
      [ 1416.659940]  kvm_arch_vcpu_ioctl_set_guest_debug+0x350/0x390
      [ 1416.659946]  kvm_vcpu_ioctl+0x2fc/0x620
      [ 1416.659955]  __se_sys_ioctl+0x77/0xc0
      [ 1416.659962]  __x64_sys_ioctl+0x1d/0x20
      [ 1416.659965]  do_syscall_64+0x3d/0x80
      [ 1416.659969]  entry_SYSCALL_64_after_hwframe+0x63/0xcd
      
      Always take the KVM SRCU read lock in __kvm_set_or_clear_apicv_inhibit()
      to protect the GFN to memslot translation. The SRCU read lock is not
      technically required when no Shadow MMUs are in use, since the TDP MMU
      walks the paging structures from the roots and does not need to look up
      GFN translations in the memslots, but make the SRCU locking
      unconditional for simplicity.
      
      In most cases, the SRCU locking is taken care of in the vCPU run loop,
      but when called through other ioctls (such as KVM_SET_GUEST_DEBUG)
      there is no srcu_read_lock.
      
      Tested: ran tools/testing/selftests/kvm/x86_64/debug_regs on a DBG
      	build. This patch causes the suspicious RCU warning to disappear.
      	Note that the warning is hit in __kvm_zap_rmaps(), so
      	kvm_memslots_have_rmaps() must return true in order for this to
      	repro (i.e. the TDP MMU must be off or nesting in use.)
      Reported-by: Greg Thelen <gthelen@google.com>
      Fixes: 36222b11 ("KVM: x86: don't disable APICv memslot when inhibited")
      Signed-off-by: Ben Gardon <bgardon@google.com>
      Message-Id: <20221102205359.1260980-1-bgardon@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      074c0080
    • x86/xen: simplify sysenter and syscall setup · 4bff677b
      Authored by Juergen Gross
      xen_enable_sysenter() and xen_enable_syscall() can be simplified a lot.
      
      While at it, switch to use cpu_feature_enabled() instead of
      boot_cpu_has().
      Signed-off-by: Juergen Gross <jgross@suse.com>
      4bff677b
    • x86/xen: silence smatch warning in pmu_msr_chk_emulated() · 354d8a4b
      Authored by Juergen Gross
      Commit 8714f7bc ("xen/pv: add fault recovery control to pmu msr
      accesses") introduced code resulting in a warning issued by the smatch
      static checker, claiming to use an uninitialized variable.
      
      This is a false positive, but work around the warning nevertheless.
      
      Fixes: 8714f7bc ("xen/pv: add fault recovery control to pmu msr accesses")
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      354d8a4b
    • KVM: VMX: Ignore guest CPUID for host userspace writes to DEBUGCTL · b333b8eb
      Authored by Sean Christopherson
      Ignore guest CPUID for host userspace writes to the DEBUGCTL MSR. KVM's
      ABI is that setting CPUID vs. state can be done in any order, i.e. KVM
      allows userspace to stuff MSRs prior to setting the guest's CPUID that
      makes the new MSR "legal".
      
      Keep the vmx_get_perf_capabilities() check for guest writes, even though
      it's technically unnecessary since the vCPU's PERF_CAPABILITIES is
      consulted when refreshing LBR support.  A future patch will clean up
      vmx_get_perf_capabilities() to avoid the RDMSR on every call, at which
      point the paranoia will incur no meaningful overhead.
      
      Note, prior to vmx_get_perf_capabilities() checking that the host fully
      supports LBRs via x86_perf_get_lbr(), KVM effectively relied on
      intel_pmu_lbr_is_enabled() to guard against host userspace enabling LBRs
      on platforms without full support.
      
      Fixes: c6462363 ("KVM: vmx/pmu: Add PMU_CAP_LBR_FMT check when guest LBR is enabled")
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20221006000314.73240-5-seanjc@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b333b8eb
    • KVM: VMX: Fold vmx_supported_debugctl() into vcpu_supported_debugctl() · 18e897d2
      Authored by Sean Christopherson
      Fold vmx_supported_debugctl() into vcpu_supported_debugctl(), its only
      caller.  Setting bits only to clear them a few instructions later is
      rather silly, and splitting the logic makes things seem more complicated
      than they actually are.
      
      Opportunistically drop DEBUGCTLMSR_LBR_MASK now that there's a single
      reference to the pair of bits.  The extra layer of indirection provides
      no meaningful value and makes it unnecessarily tedious to understand
      what KVM is doing.
      
      No functional change.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20221006000314.73240-4-seanjc@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      18e897d2
    • KVM: VMX: Advertise PMU LBRs if and only if perf supports LBRs · 145dfad9
      Authored by Sean Christopherson
      Advertise LBR support to userspace via MSR_IA32_PERF_CAPABILITIES if and
      only if perf fully supports LBRs.  Perf may disable LBRs (by zeroing the
        number of LBRs) even on platforms that allegedly support LBRs, e.g. if
      probing any LBR MSRs during setup fails.
      
      Fixes: be635e34 ("KVM: vmx/pmu: Expose LBR_FMT in the MSR_IA32_PERF_CAPABILITIES")
      Reported-by: Like Xu <like.xu.linux@gmail.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20221006000314.73240-3-seanjc@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      145dfad9
  6. 02 November 2022, 7 commits
    • perf/x86/intel: Add Cooper Lake stepping to isolation_ucodes[] · 6f8faf47
      Authored by Kan Liang
      The intel_pebs_isolation quirk checks both model number and stepping.
      Cooper Lake has a different stepping (11) than the other Skylake Xeon.
      It cannot benefit from the optimization in commit 9b545c04
      ("perf/x86/kvm: Avoid unnecessary work in guest filtering").
      
      Add the stepping of Cooper Lake into the isolation_ucodes[] table.
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20221031154550.571663-1-kan.liang@linux.intel.com
      6f8faf47
    • perf/x86/intel: Fix pebs event constraints for SPR · 0916886b
      Authored by Kan Liang
      According to the latest event list, update the MEM_INST_RETIRED events
      which support the DataLA facility for SPR.
      
      Fixes: 61b985e3 ("perf/x86/intel: Add perf core PMU support for Sapphire Rapids")
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20221031154119.571386-2-kan.liang@linux.intel.com
      0916886b
    • perf/x86/intel: Fix pebs event constraints for ICL · acc5568b
      Authored by Kan Liang
      According to the latest event list, update the MEM_INST_RETIRED events
      which support the DataLA facility.
      
      Fixes: 60176089 ("perf/x86/intel: Add Icelake support")
      Reported-by: Jannis Klinkenberg <jannis.klinkenberg@rwth-aachen.de>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20221031154119.571386-1-kan.liang@linux.intel.com
      acc5568b
    • perf/x86/rapl: Use standard Energy Unit for SPR Dram RAPL domain · 80275ca9
      Authored by Zhang Rui
      Intel Xeon servers used to use a fixed energy resolution (15.3uj) for
      Dram RAPL domain. But on SPR, Dram RAPL domain follows the standard
      energy resolution as described in MSR_RAPL_POWER_UNIT.
      
      Remove the SPR Dram energy unit quirk.
      
      Fixes: bcfd218b ("perf/x86/rapl: Add support for Intel SPR platform")
      Signed-off-by: Zhang Rui <rui.zhang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
      Tested-by: Wang Wendy <wendy.wang@intel.com>
      Link: https://lkml.kernel.org/r/20220924054738.12076-3-rui.zhang@intel.com
      80275ca9
    • x86/tdx: Panic on bad configs that #VE on "private" memory access · 373e715e
      Authored by Kirill A. Shutemov
      All normal kernel memory is "TDX private memory".  This includes
      everything from kernel stacks to kernel text.  Handling
      exceptions on arbitrary accesses to kernel memory is essentially
      impossible because they can happen in horribly nasty places like
      kernel entry/exit.  But, TDX hardware can theoretically _deliver_
      a virtualization exception (#VE) on any access to private memory.
      
      But, it's not as bad as it sounds.  TDX can be configured to never
      deliver these exceptions on private memory with a "TD attribute"
      called ATTR_SEPT_VE_DISABLE.  The guest has no way to *set* this
      attribute, but it can check it.
      
      Ensure ATTR_SEPT_VE_DISABLE is set in early boot.  panic() if it
      is unset.  There is no sane way for Linux to run with this
      attribute clear so a panic() is appropriate.
      
      There's a small window during boot, before the check, where the kernel
      has an early #VE handler. But that handler is only for port I/O and
      will also panic() as soon as it sees any other #VE, such as one
      generated by a private memory access.
      
      [ dhansen: Rewrite changelog and rebase on new tdx_parse_tdinfo().
      	   Add Kirill's tested-by because I made changes since
      	   he wrote this. ]
      
      Fixes: 9a22bf6d ("x86/traps: Add #VE support for TDX guest")
      Reported-by: ruogui.ygr@alibaba-inc.com
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: stable@vger.kernel.org
      Link: https://lore.kernel.org/all/20221028141220.29217-3-kirill.shutemov%40linux.intel.com
      373e715e
    • arm64: entry: avoid kprobe recursion · 024f4b2e
      Authored by Mark Rutland
      The cortex_a76_erratum_1463225_debug_handler() function is called when
      handling debug exceptions (and synchronous exceptions from BRK
      instructions), and so is called when a probed function executes. If the
      compiler does not inline cortex_a76_erratum_1463225_debug_handler(), it
      can be probed.
      
      If cortex_a76_erratum_1463225_debug_handler() is probed, any debug
      exception or software breakpoint exception will result in recursive
      exceptions leading to a stack overflow. This can be triggered with the
      ftrace multiple_probes selftest, and as per the example splat below.
      
      This is a regression caused by commit:
      
        6459b846 ("arm64: entry: consolidate Cortex-A76 erratum 1463225 workaround")
      
      ... which removed the NOKPROBE_SYMBOL() annotation associated with the
      function.
      
      My intent was that cortex_a76_erratum_1463225_debug_handler() would be
      inlined into its caller, el1_dbg(), which is marked noinstr and cannot
      be probed. Mark cortex_a76_erratum_1463225_debug_handler() as
      __always_inline to ensure this.
      
      Example splat prior to this patch (with recursive entries elided):
      
      | # echo p cortex_a76_erratum_1463225_debug_handler > /sys/kernel/debug/tracing/kprobe_events
      | # echo p do_el0_svc >> /sys/kernel/debug/tracing/kprobe_events
      | # echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
      | Insufficient stack space to handle exception!
      | ESR: 0x0000000096000047 -- DABT (current EL)
      | FAR: 0xffff800009cefff0
      | Task stack:     [0xffff800009cf0000..0xffff800009cf4000]
      | IRQ stack:      [0xffff800008000000..0xffff800008004000]
      | Overflow stack: [0xffff00007fbc00f0..0xffff00007fbc10f0]
      | CPU: 0 PID: 145 Comm: sh Not tainted 6.0.0 #2
      | Hardware name: linux,dummy-virt (DT)
      | pstate: 604003c5 (nZCv DAIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
      | pc : arm64_enter_el1_dbg+0x4/0x20
      | lr : el1_dbg+0x24/0x5c
      | sp : ffff800009cf0000
      | x29: ffff800009cf0000 x28: ffff000002c74740 x27: 0000000000000000
      | x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
      | x23: 00000000604003c5 x22: ffff80000801745c x21: 0000aaaac95ac068
      | x20: 00000000f2000004 x19: ffff800009cf0040 x18: 0000000000000000
      | x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
      | x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
      | x11: 0000000000000010 x10: ffff800008c87190 x9 : ffff800008ca00d0
      | x8 : 000000000000003c x7 : 0000000000000000 x6 : 0000000000000000
      | x5 : 0000000000000000 x4 : 0000000000000000 x3 : 00000000000043a4
      | x2 : 00000000f2000004 x1 : 00000000f2000004 x0 : ffff800009cf0040
      | Kernel panic - not syncing: kernel stack overflow
      | CPU: 0 PID: 145 Comm: sh Not tainted 6.0.0 #2
      | Hardware name: linux,dummy-virt (DT)
      | Call trace:
      |  dump_backtrace+0xe4/0x104
      |  show_stack+0x18/0x4c
      |  dump_stack_lvl+0x64/0x7c
      |  dump_stack+0x18/0x38
      |  panic+0x14c/0x338
      |  test_taint+0x0/0x2c
      |  panic_bad_stack+0x104/0x118
      |  handle_bad_stack+0x34/0x48
      |  __bad_stack+0x78/0x7c
      |  arm64_enter_el1_dbg+0x4/0x20
      |  el1h_64_sync_handler+0x40/0x98
      |  el1h_64_sync+0x64/0x68
      |  cortex_a76_erratum_1463225_debug_handler+0x0/0x34
      ...
      |  el1h_64_sync_handler+0x40/0x98
      |  el1h_64_sync+0x64/0x68
      |  cortex_a76_erratum_1463225_debug_handler+0x0/0x34
      ...
      |  el1h_64_sync_handler+0x40/0x98
      |  el1h_64_sync+0x64/0x68
      |  cortex_a76_erratum_1463225_debug_handler+0x0/0x34
      |  el1h_64_sync_handler+0x40/0x98
      |  el1h_64_sync+0x64/0x68
      |  do_el0_svc+0x0/0x28
      |  el0t_64_sync_handler+0x84/0xf0
      |  el0t_64_sync+0x18c/0x190
      | Kernel Offset: disabled
      | CPU features: 0x0080,00005021,19001080
      | Memory Limit: none
      | ---[ end Kernel panic - not syncing: kernel stack overflow ]---
      
      With this patch, cortex_a76_erratum_1463225_debug_handler() is inlined
      into el1_dbg(), and el1_dbg() cannot be probed:
      
      | # echo p cortex_a76_erratum_1463225_debug_handler > /sys/kernel/debug/tracing/kprobe_events
      | sh: write error: No such file or directory
      | # grep -w cortex_a76_erratum_1463225_debug_handler /proc/kallsyms | wc -l
      | 0
      | # echo p el1_dbg > /sys/kernel/debug/tracing/kprobe_events
      | sh: write error: Invalid argument
      | # grep -w el1_dbg /proc/kallsyms | wc -l
      | 1
      
      Fixes: 6459b846 ("arm64: entry: consolidate Cortex-A76 erratum 1463225 workaround")
      Cc: <stable@vger.kernel.org> # 5.12.x
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20221017090157.2881408-1-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      024f4b2e
    • x86/tdx: Prepare for using "INFO" call for a second purpose · a6dd6f39
      Authored by Dave Hansen
      The TDG.VP.INFO TDCALL provides the guest with various details about
      the TDX system that the guest needs to run.  Only one field is currently
      used: 'gpa_width' which tells the guest which PTE bits mark pages shared
      or private.
      
      A second field is now needed: the guest "TD attributes" to tell if
      virtualization exceptions are configured in a way that can harm the guest.
      
      Make the naming and calling convention more generic and discrete from the
      mask-centric one.
      
      Thanks to Sathya for the inspiration here, but there's no code, comments
      or changelogs left from where he started.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: stable@vger.kernel.org
      a6dd6f39
  7. 01 November 2022, 5 commits
  8. 31 October 2022, 2 commits
  9. 29 October 2022, 12 commits