- 24 Feb 2021, 4 commits
-
-
Committed by Sami Tolvanen

Pass the code model and stack alignment to the linker, as these are not stored in LLVM bitcode, and allow CONFIG_LTO_CLANG* to be enabled.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
-
Committed by Sami Tolvanen

Clang incorrectly inlines functions with differing stack protector attributes, which breaks __restore_processor_state(), a function that relies on the stack protector being disabled. This change disables LTO for cpu.c to work around the bug.

Link: https://bugs.llvm.org/show_bug.cgi?id=47479
Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
-
Committed by Sami Tolvanen

Disable LTO for the vDSO. Note that while we could use Clang's LTO for the 64-bit vDSO, it wouldn't add a noticeable benefit for the small amount of C code involved.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
-
Committed by Sami Tolvanen

Select HAVE_OBJTOOL_MCOUNT if STACK_VALIDATION is selected, so that objtool is used to generate __mcount_loc sections for dynamic ftrace with Clang and gcc < 5 (later gcc versions use -mrecord-mcount instead).

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
-
- 22 Feb 2021, 2 commits
-
-
Committed by Masahiro Yamada

The 'syscall' variables are not used directly in the commands. Remove the $(srctree)/ prefix because we can rely on VPATH.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
-
Committed by Masahiro Yamada

The rules in these Makefiles cannot detect command line changes because the prerequisite 'FORCE' is missing. Simply adding 'FORCE' would cause the headers to be rebuilt every time, because the 'targets' additions are also wrong: the file paths in 'targets' must be relative to the current Makefile. Fix all of them so the if_changed rules work correctly.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
-
- 17 Feb 2021, 7 commits
-
-
Committed by Peter Zijlstra

Allow building x86 with PREEMPT_DYNAMIC=n. This is needed for PREEMPT_RT, as it makes no sense not to have full preemption on PREEMPT_RT.

Fixes: 8c98e8cf723c ("preempt/dynamic: Provide preempt_schedule[_notrace]() static calls")
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Mike Galbraith <efault@gmx.de>
Link: https://lkml.kernel.org/r/YCK1+JyFNxQnWeXK@hirez.programming.kicks-ass.net
-
Committed by Frederic Weisbecker

Following the idle loop model, cleanly check for a pending rcuog wakeup before the last rescheduling point when resuming to guest mode. This way we can avoid doing it from rcu_user_enter() with the last-resort self-IPI hack that enforces rescheduling.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20210131230548.32970-6-frederic@kernel.org
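A minimal sketch of the idea; the helper and call-site names below are assumptions drawn from the description, not verified against this tree:

```c
#include <linux/rcupdate.h>
#include <linux/irqflags.h>

/*
 * Sketch: before the final need_resched() check on the way into guest
 * mode, flush any rcuog wakeup that was deferred while in an RCU
 * extended quiescent state, so rcu_user_enter() no longer has to fall
 * back to a self-IPI to force rescheduling.
 */
static void xfer_to_guest_mode_prepare(void)
{
	lockdep_assert_irqs_disabled();
	rcu_nocb_flush_deferred_wakeup();	/* assumed helper name */
}
```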
-
Committed by Peter Zijlstra

Use the new EXPORT_STATIC_CALL_TRAMP() / static_call_mod() to unexport the static_call_key for the PREEMPT_DYNAMIC calls, so that modules can no longer update these calls. Having modules change or hijack the preemption calls would be horrible.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
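As a rough illustration of the difference (the call name below is made up; only the macros come from the static call API):

```c
#include <linux/static_call.h>

static void my_policy_default(int cpu)
{
	/* ... default behaviour ... */
}

/* Define the call and initialize it to the default implementation. */
DEFINE_STATIC_CALL(my_policy, my_policy_default);

/*
 * Exporting only the trampoline lets modules *invoke*
 * static_call(my_policy)(cpu) but not retarget it. Exporting with
 * EXPORT_STATIC_CALL() would also expose the key and thereby let any
 * module rewrite the call with static_call_update().
 */
EXPORT_STATIC_CALL_TRAMP(my_policy);
```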
-
Committed by Josh Poimboeuf

When static_call_key is exported with EXPORT_STATIC_CALL*(), a module can use static_call_update() to change the function being called, which is not desirable in general. Not exporting static_call_key, however, also prevents use of static_call(), since objtool needs the key to construct the static_call_site. Solve this by allowing objtool to create the static_call_site using the trampoline address when it builds a module and cannot find the static_call_key symbol. The module loader will then try to map the trampoline back to a key before it constructs the normal sites list. Doing this requires a trampoline -> key association, so add another magic section that keeps those.

Originally-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210127231837.ifddpn7rhwdaepiu@treble
-
Committed by Peter Zijlstra (Intel)

Provide static calls to control preempt_schedule[_notrace]() (called under CONFIG_PREEMPT) so that their behaviour can be overridden when preempt= is overridden. Since the default behaviour is full preemption, both calls are initialized to the arch-provided wrapper, if any.

[fweisbec: only define static calls when PREEMPT_DYNAMIC, make it less dependent on x86 with __preempt_schedule_func]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-7-frederic@kernel.org
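A simplified sketch of the shape of this; the wrapper names stand in for whatever the architecture provides behind __preempt_schedule_func and are not the exact patch:

```c
#include <linux/linkage.h>
#include <linux/static_call.h>

/* Arch-provided asm wrappers (names assumed for illustration). */
extern asmlinkage void preempt_schedule_thunk(void);
extern asmlinkage void preempt_schedule_notrace_thunk(void);

/* Default to the arch wrappers, i.e. full preemption. */
DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_thunk);
DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_thunk);

/*
 * Call sites use static_call(preempt_schedule)() and can later be
 * retargeted with static_call_update() when "preempt=" selects a
 * different preemption flavour.
 */
```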
-
Committed by Michal Hocko

Preemption mode selection is currently hardcoded through Kconfig choices. Introduce a dedicated option to tune the preemption flavour at boot time. This will only be available on architectures that support static calls efficiently, so that the feature does not come with additional overhead that might be prohibitive or undesirable. CONFIG_PREEMPT_DYNAMIC is automatically selected by CONFIG_PREEMPT if the architecture provides the necessary support (CONFIG_STATIC_CALL_INLINE, CONFIG_GENERIC_ENTRY, and __preempt_schedule_function() / __preempt_schedule_notrace_function()).

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
[peterz: relax requirement to HAVE_STATIC_CALL]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-5-frederic@kernel.org
-
Committed by Peter Zijlstra

Provide a stub function that returns 0 and wire up the static call site patching to replace the CALL with a single 5-byte instruction that clears %RAX, the return-value register. The function can be cast to any function pointer type that has a single %RAX return (including pointers). Also provide a version that returns an int for convenience. We clear the entire %RAX register in any case, whether the return value is 32 or 64 bits, since %RAX is always a scratch register anyway.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-2-frederic@kernel.org
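A sketch of what such a stub looks like and how a call site might be pointed at it; the usage shown is an illustration under the assumption that the stub is named __static_call_return0, not the exact patch:

```c
#include <linux/static_call.h>

/* Generic C version of the stub: always returns zero. On x86 the CALL
 * to it can instead be patched into a single instruction clearing %RAX. */
long __static_call_return0(void)
{
	return 0;
}

static long my_hook_default(int arg)
{
	return arg * 2;
}
DEFINE_STATIC_CALL(my_hook, my_hook_default);

static void my_hook_disable(void)
{
	/* The cast is safe because the stub only writes %RAX. */
	static_call_update(my_hook,
			   (typeof(&my_hook_default))__static_call_return0);
}
```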
-
- 16 Feb 2021, 7 commits
-
-
Committed by Andy Shevchenko

Update the copyright year and drop file names from the files themselves.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Andy Shevchenko

Since commit f1be6cda ("x86/platform/intel-mid: Make intel_scu_device_register() static"), platform_device.h is no longer used by intel-mid.h. Remove the include.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Andy Shevchenko

Since there are no more users of this global variable and its associated custom API, we can safely drop this legacy reinvented wheel from the kernel sources.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Andy Shevchenko

The header has a single user. Move the header content into that user.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Andy Shevchenko

Describe the missing parameter in the documentation of type1_access_ok(). Otherwise "make W=1 arch/x86/pci/" produces the following warning:

  CHECK   arch/x86/pci/intel_mid_pci.c
  CC      arch/x86/pci/intel_mid_pci.o
  arch/x86/pci/intel_mid_pci.c:152: warning: Function parameter or member 'reg' not described in 'type1_access_ok'

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
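The fix amounts to describing every parameter in the kernel-doc comment. A sketch of the documented form; the parameter descriptions and the function body here are assumptions for illustration, only 'reg' and the function name come from the warning:

```c
/**
 * type1_access_ok - check whether type 1 PCI config access is allowed
 * @bus:   PCI bus number of the device being accessed (assumed)
 * @devfn: encoded device/function number (assumed)
 * @reg:   configuration space register offset
 *
 * Returns true if direct type 1 access may be used for this register.
 */
static bool type1_access_ok(unsigned int bus, unsigned int devfn, int reg)
{
	/* ... platform-specific checks elided in this sketch ... */
	return false;
}
```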
-
Committed by Andy Shevchenko

Switch the platform code to use the x86_id_table and its accompanying API instead of a custom comparison against the x86 CPU model. This is one of the last users of the custom API; follow-up changes will remove it for good.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
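For reference, the general pattern of an x86 CPU ID table and match call looks roughly like this; the table name and the listed models are placeholders, not the ones touched by this commit:

```c
#include <linux/mod_devicetable.h>
#include <asm/cpu_device_id.h>
#include <asm/intel-family.h>

/* Placeholder models; the commit converts existing checks to this form. */
static const struct x86_cpu_id scu_cpu_ids[] = {
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SALTWELL_MID, NULL),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT_MID, NULL),
	{}
};

static bool scu_platform_supported(void)
{
	/* Returns a match entry or NULL, replacing open-coded model checks. */
	return x86_match_cpu(scu_cpu_ids) != NULL;
}
```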
-
Committed by Andy Shevchenko

SFI-based platforms are gone, and so is this framework. Remove mentions of SFI throughout the drivers and other code as well.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 15 Feb 2021, 2 commits
-
-
Committed by Jan Beulich

We should not set up further state if either mapping failed; paying attention to just the user mapping's status isn't enough. Also use GNTST_okay instead of implying its value (zero).

This is part of XSA-361.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: stable@vger.kernel.org
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
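A heavily simplified sketch of the shape of the check; the helper below is illustrative of the grant-table status handling described, not the exact patch:

```c
#include <xen/grant_table.h>

/*
 * Sketch: only proceed to set up further state when *both* the user and
 * the kernel mapping succeeded, comparing explicitly against GNTST_okay
 * rather than implying its value (0).
 */
static bool gntdev_maps_ok(const struct gnttab_map_grant_ref *user_map,
			   const struct gnttab_map_grant_ref *kernel_map)
{
	if (user_map->status != GNTST_okay)
		return false;
	if (kernel_map && kernel_map->status != GNTST_okay)
		return false;
	return true;
}
```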
-
Committed by Jan Beulich

Its sibling (set_foreign_p2m_mapping()), as well as the sibling of its only caller (gnttab_map_refs()), don't clean up after themselves in case of error; higher-level callers are expected to do so. However, for that to really clean up any partially set up state, the operation should not terminate upon encountering an entry in an unexpected state. It is particularly relevant to note here that set_foreign_p2m_mapping() would skip setting up a p2m entry if its grant mapping failed, but it would continue to set up further p2m entries as long as their mappings succeeded.

Arguably, down the road set_foreign_p2m_mapping() may want its page-state-related WARN_ON() converted to an error return as well.

This is part of XSA-361.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: stable@vger.kernel.org
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
-
- 13 Feb 2021, 3 commits
-
-
Committed by Johannes Berg

This mostly reverts the old commit 3963333f ("uml: cover stubs with a VMA"), which had added a VMA covering the existing PTEs. However, there's no real reason to have the PTEs in the first place, and the VMA cannot be 'fixed' in place, which leads to bugs where userspace could try to unmap it and be forcefully killed, or the like. Also, it leaves a bit of an ugly hole in userspace's address space.

Simplify all this: just install the stub code/page at the top of the (inner) address space, i.e. just above TASK_SIZE. The pages are simply hard-coded to be mapped in the userspace process we use to implement an mm context, and they're out of reach of the inner mmap/munmap/mprotect etc. since they're above TASK_SIZE.

Getting rid of the VMA also means vma_merge() no longer hits one of the VM_WARN_ON()s there, which was triggered because we installed a VMA while the code assumes the stack VMA is the first one. It also removes a lockdep warning about mmap_sem usage, since we no longer have uml_setup_stubs() and thus no longer need to do any manipulation requiring mmap_sem in activate_mm().

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Johannes Berg

The userspace stubs mostly run with a stack that already points at the stub data page (and in the case of the syscall stub we can simply set the stack pointer that way). Rework the stubs to derive the start of the data page from the stack pointer rather than requiring it to be hard-coded. In the clone stub, also integrate the int3 into the stack remap, since we really must not use the stack while we remap it.

This prepares for putting the stub at a variable location that is not part of the normal address space of the userspace processes running inside the UML machine.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Johannes Berg

If the two are mixed up, then it looks as though the parent returned an error when in fact the child failed before the mmap(), and the resulting process then never gets killed. Fix this by splitting the child and parent errors, reporting and using them appropriately.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
-
- 11 Feb 2021, 15 commits
-
-
Committed by Alexei Starovoitov

Since both sleepable and non-sleepable programs execute under migrate_disable, add a recursion prevention mechanism to both types of programs when they're executed via the BPF trampoline.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-5-alexei.starovoitov@gmail.com
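A simplified sketch of the mechanism described; the helper names below are made up and the counter handling is an assumption based on the description:

```c
#include <linux/percpu.h>

/*
 * Sketch: a per-CPU "active" counter attached to the program. Because
 * both sleepable and non-sleepable programs run with migration disabled,
 * a nested increment on the same CPU means the program is recursing
 * through the trampoline, so that run is skipped.
 */
static bool prog_enter_allowed(int __percpu *active)
{
	if (__this_cpu_inc_return(*active) != 1) {
		__this_cpu_dec(*active);	/* undo; this run is skipped */
		return false;
	}
	return true;
}

static void prog_exit(int __percpu *active)
{
	__this_cpu_dec(*active);
}
```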
-
Committed by Alexei Starovoitov

Since sleepable programs don't migrate off the CPU, execution stats can be computed for them as well. Reuse the same infrastructure for both sleepable and non-sleepable programs:

  run_cnt     -> the number of times the program was executed.
  run_time_ns -> the program execution time in nanoseconds, including the
                 off-CPU time when the program was sleeping.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-4-alexei.starovoitov@gmail.com
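Roughly, the bookkeeping shared between both program types looks like this; the structure and helper below are a sketch, not the kernel's actual stats structure:

```c
#include <linux/types.h>
#include <linux/ktime.h>
#include <linux/u64_stats_sync.h>

struct prog_stats_sketch {
	u64 cnt;			/* run_cnt: times the program ran */
	u64 nsecs;			/* run_time_ns: includes sleep time */
	struct u64_stats_sync syncp;
};

static void stats_update_sketch(struct prog_stats_sketch *stats, u64 start_ns)
{
	u64_stats_update_begin(&stats->syncp);
	stats->cnt++;
	/* Wall-clock delta, so off-CPU (sleeping) time is counted too. */
	stats->nsecs += ktime_get_ns() - start_ns;
	u64_stats_update_end(&stats->syncp);
}
```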
-
Committed by Sean Christopherson

Add a 2-byte pad to struct compat_vcpu_info so that the sum of its fields' sizes is actually 64 bytes. The effective size without the padding is also 64 bytes, because the compiler aligns evtchn_pending_sel to a 4-byte boundary, but depending on compiler alignment is subtle and unnecessary. Opportunistically replace spaces with tabs in the other fields.

Cc: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210210182609.435200-6-seanjc@google.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
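To illustrate why the explicit pad matters, a simplified sketch (the trailing fields of the real struct are collapsed into one array here):

```c
#include <linux/types.h>
#include <linux/build_bug.h>

/*
 * Sketch only. Without the explicit pad0 the compiler would insert the
 * same 2 bytes anyway to align evtchn_pending_sel to 4 bytes, but naming
 * the pad makes the 64-byte layout explicit rather than implied.
 */
struct compat_vcpu_info_sketch {
	u8	evtchn_upcall_pending;
	u8	evtchn_upcall_mask;
	u16	pad0;			/* the newly explicit 2-byte pad */
	u32	evtchn_pending_sel;
	u8	rest[56];		/* arch + time info in the real struct */
};
static_assert(sizeof(struct compat_vcpu_info_sketch) == 64);
```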
-
Committed by Wei Yongjun

The sparse tool complains as follows:

  arch/x86/kvm/svm/svm.c:204:6: warning: symbol 'svm_gp_erratum_intercept' was not declared. Should it be static?

This symbol is not used outside of svm.c, so this commit marks it static.

Fixes: 82a11e9c ("KVM: SVM: Add emulation support for #GP triggered by SVM instructions")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Message-Id: <20210210075958.1096317-1-weiyongjun1@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Wei Liu

Just like MSI/MSI-X, IO-APIC interrupts are remapped by the Microsoft Hypervisor when Linux runs as the root partition. Implement an IRQ domain to handle mapping and unmapping of IO-APIC interrupts.

Signed-off-by: Wei Liu <wei.liu@kernel.org>
Acked-by: Joerg Roedel <joro@8bytes.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20210203150435.27941-17-wei.liu@kernel.org
-
Committed by Wei Liu

When Linux runs as the root partition on the Microsoft Hypervisor, its interrupts are remapped, and Linux needs to explicitly map and unmap interrupts for hardware. Implement an MSI IRQ domain that issues the correct hypercalls, and initialize it as the default MSI IRQ domain.

Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Co-developed-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20210203150435.27941-16-wei.liu@kernel.org
-
Committed by Wei Liu

Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Co-developed-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20210203150435.27941-15-wei.liu@kernel.org
-
Committed by Wei Liu

We will soon need to access fields inside the MSI address and MSI data fields. Introduce hv_msi_address_register and hv_msi_data_register, and fix up the one user of hv_msi_entry in mshyperv.h. No functional change expected.

Signed-off-by: Wei Liu <wei.liu@kernel.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20210203150435.27941-12-wei.liu@kernel.org
-
Committed by Wei Liu

The Microsoft Hypervisor requires the root partition to make a few hypercalls to set up application processors before they can be used.

Signed-off-by: Lillian Grassin-Drake <ligrassi@microsoft.com>
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Co-developed-by: Lillian Grassin-Drake <ligrassi@microsoft.com>
Co-developed-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20210203150435.27941-11-wei.liu@kernel.org
-
Committed by Wei Liu

These are used to deposit pages into the Microsoft Hypervisor and to bring up logical and virtual processors.

Signed-off-by: Lillian Grassin-Drake <ligrassi@microsoft.com>
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
Co-developed-by: Lillian Grassin-Drake <ligrassi@microsoft.com>
Co-developed-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Co-developed-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20210203150435.27941-10-wei.liu@kernel.org
-
Committed by Wei Liu

When Linux is running as the root partition, the hypercall page will have already been set up by Hyper-V. Copy its contents over to the allocated page. Add checks to hv_suspend & co. to bail out early, because they are not supported in this setup yet.

Signed-off-by: Lillian Grassin-Drake <ligrassi@microsoft.com>
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
Co-developed-by: Lillian Grassin-Drake <ligrassi@microsoft.com>
Co-developed-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Co-developed-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20210203150435.27941-8-wei.liu@kernel.org
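A rough sketch of the idea; the MSR name is the standard Hyper-V one, but the function and the overall flow below are assumptions based on the description:

```c
#include <linux/io.h>
#include <linux/string.h>
#include <asm/msr.h>
#include <asm/mshyperv.h>

/*
 * Sketch: when running as the root partition, the hypervisor has already
 * installed a hypercall page. Find it via the hypercall MSR, map it, and
 * copy its contents into the page Linux allocated.
 */
static void hv_copy_root_hypercall_page(void *hv_hypercall_pg)
{
	u64 msr;
	void *src;

	rdmsrl(HV_X64_MSR_HYPERCALL, msr);
	src = memremap(msr & PAGE_MASK, PAGE_SIZE, MEMREMAP_WB);
	if (src) {
		memcpy(hv_hypercall_pg, src, PAGE_SIZE);
		memunmap(src);
	}
}
```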
-
Committed by Wei Liu

We will need the partition ID in order to execute some hypercalls later.

Signed-off-by: Lillian Grassin-Drake <ligrassi@microsoft.com>
Co-developed-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20210203150435.27941-7-wei.liu@kernel.org
-
Committed by Wei Liu

When Linux runs as the root partition, it needs to make hypercalls that return data from the hypervisor. Allocate pages for storing those results when Linux runs as the root partition.

Signed-off-by: Lillian Grassin-Drake <ligrassi@microsoft.com>
Co-developed-by: Lillian Grassin-Drake <ligrassi@microsoft.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20210203150435.27941-6-wei.liu@kernel.org
-
Committed by Wei Liu

For now we can use the privilege flag to check whether Linux runs as the root partition. Stash the value for later use, and add a set of defines for future, more fine-grained detection.

Signed-off-by: Wei Liu <wei.liu@kernel.org>
Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20210203150435.27941-3-wei.liu@kernel.org
-
Committed by Andrea Parri (Microsoft)

If bit 22 of Group B Features is set, the guest has access to the Isolation Configuration CPUID leaf. On x86, the first four bits of EAX in this leaf give the isolation type of the partition; there are three isolation types: 'SNP' (hardware-based isolation), 'VBS' (software-based isolation), and 'NONE' (no isolation).

Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: x86@kernel.org
Cc: linux-arch@vger.kernel.org
Link: https://lore.kernel.org/r/20210201144814.2701-2-parri.andrea@gmail.com
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
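Roughly, reading the isolation type could look like this; the CPUID leaf number, enum values, and function name below are assumptions drawn from the description:

```c
#include <asm/processor.h>

#define HYPERV_CPUID_ISOLATION_CONFIG	0x4000000C	/* assumed leaf number */

enum hv_isolation_type {
	HV_ISOLATION_TYPE_NONE	= 0,	/* no isolation */
	HV_ISOLATION_TYPE_VBS	= 1,	/* software-based isolation */
	HV_ISOLATION_TYPE_SNP	= 2,	/* hardware-based isolation */
};

static enum hv_isolation_type hv_get_isolation_type_sketch(void)
{
	/* Bits 3:0 of EAX in the isolation configuration leaf. */
	return (enum hv_isolation_type)
		(cpuid_eax(HYPERV_CPUID_ISOLATION_CONFIG) & 0xF);
}
```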
-