- 21 April 2020, 3 commits
-
-
Committed by Vitaly Kuznetsov

The Hyper-V PV TLB flush mechanism does TLB flushes on behalf of the guest, so doing tlb_flush_all() is overkill; switch to using tlb_flush_guest() (just like the KVM PV TLB flush mechanism) instead. Introduce KVM_REQ_HV_TLB_FLUSH to support the change.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
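A minimal sketch of the request-based flush path, assuming the usual kvm_check_request() pattern in vcpu_enter_guest() (the exact call site is an assumption based on the description above):

    /*
     * Sketch: the Hyper-V hypercall handler queues KVM_REQ_HV_TLB_FLUSH
     * on the targeted vCPUs; the request is serviced on the next
     * VM-entry with a guest-scoped flush instead of a full flush.
     */
    if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
            kvm_vcpu_flush_tlb_guest(vcpu);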
-
Committed by Sean Christopherson

Add a dedicated hook to handle flushing TLB entries on behalf of the guest, i.e. for a paravirtualized TLB flush, and use it directly instead of bouncing through kvm_vcpu_flush_tlb().

For VMX, change the effective implementation to never do INVEPT and flush only the current context, i.e. to always flush via INVVPID(SINGLE_CONTEXT). The INVEPT performed by __vmx_flush_tlb() when @invalidate_gpa=false and enable_vpid=0 is unnecessary, as it will only flush guest-physical mappings; linear and combined mappings are flushed by VM-Enter when VPID is disabled, and changes in the guest page tables do not affect guest-physical mappings. When EPT and VPID are enabled, INVVPID is not required (by Intel's architecture) to invalidate guest-physical mappings, i.e. TLB entries that cache guest-physical mappings can live across INVVPID, as the mappings are associated with an EPTP, not a VPID. The intent of @invalidate_gpa is to inform vmx_flush_tlb() that it must "invalidate gpa mappings", i.e. do INVEPT and not simply INVVPID. Other than nested VPID handling, which now calls vpid_sync_context() directly, the only scenario where KVM can safely do INVVPID instead of INVEPT (when EPT is enabled) is if KVM is flushing TLB entries from the guest's perspective, i.e. is only required to invalidate linear mappings.

For SVM, flushing TLB entries from the guest's perspective can be done by flushing the current ASID, as changes to the guest's page tables are associated only with the current ASID.

Adding a dedicated ->tlb_flush_guest() paves the way toward removing @invalidate_gpa, which is a potentially dangerous control flag, as its meaning is not exactly crystal clear, even for those who are familiar with the subtleties of what mappings Intel CPUs are/aren't allowed to keep across various invalidation scenarios.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200320212833.3507-15-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
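On the VMX side, the hook reduces to a single-context INVVPID; a hedged sketch, assuming the vpid_sync_context()/to_vmx() helpers named above:

    static void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
    {
            /*
             * A guest-scoped flush only needs to zap linear and combined
             * mappings; guest-physical (EPT) translations are tagged by
             * EPTP and survive INVVPID.
             */
            vpid_sync_context(to_vmx(vcpu)->vpid);
    }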
-
Committed by Paolo Bonzini

Wrap the combination of mmu->invlpg and kvm_x86_ops->tlb_flush_gva into a new function. The new function also lets us specify the host PGD to invalidate as well as the MMU, both of which will be useful in fixing and simplifying kvm_inject_emulated_page_fault.

However, a nested guest's MMU has g_context->invlpg == NULL. Instead of setting it to nonpaging_invlpg, make kvm_mmu_invalidate_gva the only entry point to mmu->invlpg and make a NULL invlpg pointer equivalent to nonpaging_invlpg, saving a retpoline.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
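A simplified sketch of the new entry point; the real function also handles invalidating every active root when no specific root is given, so the details here are assumptions:

    static void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu,
                                       struct kvm_mmu *mmu,
                                       gva_t gva, hpa_t root_hpa)
    {
            /* The "gva" is actually a GPA when targeting the nested guest_mmu. */
            if (mmu != &vcpu->arch.guest_mmu)
                    kvm_x86_ops.tlb_flush_gva(vcpu, gva);

            /*
             * A NULL ->invlpg is equivalent to nonpaging_invlpg: nothing
             * to do in the software MMU, and no retpolined indirect call.
             */
            if (mmu->invlpg)
                    mmu->invlpg(vcpu, gva, root_hpa);
    }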
-
- 16 April 2020, 1 commit
-
-
Committed by Sean Christopherson

Export the page fault propagation helper so that VMX can use it to correctly emulate TLB invalidation on page faults in an upcoming patch. In the (hopefully) not-too-distant future, SGX virtualization will also want access to the helper for injecting page faults to the correct level (L1 vs. L2) when emulating ENCLS instructions.

Rename the function to kvm_inject_emulated_page_fault() to clarify that it is (a) injecting a fault and (b) only for page faults. WARN if it's invoked with an exception other than PF_VECTOR.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200320212833.3507-6-sean.j.christopherson@intel.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
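A hedged sketch of the renamed helper's shape, including the PF_VECTOR sanity check described above (the exact L1-vs-L2 routing logic is an assumption):

    bool kvm_inject_emulated_page_fault(struct kvm_vcpu *vcpu,
                                        struct x86_exception *fault)
    {
            WARN_ON_ONCE(fault->vector != PF_VECTOR);

            /* Deliver to L2's walker unless the fault targets L1's tables. */
            if (mmu_is_nested(vcpu) && !fault->nested_page_fault)
                    vcpu->arch.nested_mmu.inject_page_fault(vcpu, fault);
            else
                    vcpu->arch.mmu->inject_page_fault(vcpu, fault);

            return fault->nested_page_fault;
    }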
-
- 14 April 2020, 1 commit
-
-
Committed by Paolo Bonzini

Manipulate IF around vmload/vmsave to remove the confusing usage of local_irq_enable where interrupts are actually disabled via GIF. Also stuff the RSB immediately, without waiting for a RET, to avoid Spectre-v2 attacks.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 03 April 2020, 2 commits
-
-
Committed by Anshuman Khandual

The idea of a foreign VMA with respect to the present context is very generic, but currently there are two identical definitions for it on the powerpc and x86 platforms. Let's consolidate those redundant definitions while making vma_is_foreign() available for general use later. This should not cause any functional change.

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Link: http://lkml.kernel.org/r/1582782965-3274-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
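The consolidated helper is small; a sketch of the shared definition, based on the duplicated per-arch versions:

    static inline bool vma_is_foreign(struct vm_area_struct *vma)
    {
            /* No mm at all (e.g. a kernel thread): every VMA is foreign. */
            if (!current->mm)
                    return true;

            /* A VMA from another process's address space is foreign. */
            if (current->mm != vma->vm_mm)
                    return true;

            return false;
    }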
-
Committed by Masahiro Yamada

Change a header to mandatory-y if both of the following are met:

[1] At least one architecture (except um) specifies it as generic-y in arch/*/include/asm/Kbuild

[2] Every architecture (except um) either has its own implementation (arch/*/include/asm/*.h) or specifies it as generic-y in arch/*/include/asm/Kbuild

This commit was generated by the following shell script.

----------------------------------->8-----------------------------------

arches=$(cd arch; ls -1 | sed -e '/Kconfig/d' -e '/um/d')

tmpfile=$(mktemp)

grep "^mandatory-y +=" include/asm-generic/Kbuild > $tmpfile

find arch -path 'arch/*/include/asm/Kbuild' |
xargs sed -n 's/^generic-y += \(.*\)/\1/p' | sort -u |
while read header
do
	mandatory=yes

	for arch in $arches
	do
		if ! grep -q "generic-y += $header" arch/$arch/include/asm/Kbuild &&
			! [ -f arch/$arch/include/asm/$header ]; then
			mandatory=no
			break
		fi
	done

	if [ "$mandatory" = yes ]; then
		echo "mandatory-y += $header" >> $tmpfile

		for arch in $arches
		do
			sed -i "/generic-y += $header/d" arch/$arch/include/asm/Kbuild
		done
	fi
done

sed -i '/^mandatory-y +=/d' include/asm-generic/Kbuild

LANG=C sort $tmpfile >> include/asm-generic/Kbuild

----------------------------------->8-----------------------------------

One obvious benefit is the diff stat:

  25 files changed, 52 insertions(+), 557 deletions(-)

It is tedious to list generic-y for each arch that needs it. So mandatory-y works like a fallback default (by just wrapping the asm-generic one) when an arch does not have a specific header implementation. See the following commits: def3f7ce a1b39bae

It is tedious to convert headers one by one, so I processed them with a shell script.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Simek <michal.simek@xilinx.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Link: http://lkml.kernel.org/r/20200210175452.5030-1-masahiroy@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 02 April 2020, 3 commits
-
-
Committed by Linus Torvalds

This is partly for readability - using named arguments instead of numbered ones makes it much more obvious just what is going on. Using "%[efault]" instead of "%4" for the special -EFAULT constant just means that you don't have to count the arguments to see what's up.

But the motivation for all this cleanup is that when we start to conditionally use "asm goto" even for the __get_user_asm() case, the argument numbers will depend on whether we have an error output, or an error label we can just directly jump to. So this moves us towards named arguments for the same reason that we have to use named arguments for the asms that use SET_CC(): numbering will eventually become similarly unreliable, depending on whether we can use particular compiler features or not.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
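A hedged before/after illustration; the operand names and the surrounding template are simplified, not the exact ones in the patch:

    /* Before: only counting the operand list tells you that %4 is -EFAULT. */
    asm volatile("...\n"
                 "3:	mov %4, %0\n"
                 : "=r" (err), "=a" (val)
                 : "m" (*uaddr), "0" (err), "i" (-EFAULT));

    /* After: the template is self-describing, whatever the operand order. */
    asm volatile("...\n"
                 "3:	mov %[efault], %[errout]\n"
                 : [errout] "=r" (err), [output] "=a" (val)
                 : [umem] "m" (*uaddr), "0" (err), [efault] "i" (-EFAULT));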
-
Committed by Linus Torvalds

This is the exact same thing as 36807856 ("x86: get rid of 'rtype' argument to __put_user_goto() macro"), except it's about __get_user_asm() rather than __put_user_goto().

The reasons are the same: having the low-level asm access the argument with a different size than the compiler thinks it does is fundamentally wrong.

But unlike the __put_user_goto() case, we actually did tell the compiler that we used a bigger variable (either long or long long), and then only filled in the low bits, and ended up "fixing" this by casting the result to the proper pointer type. That's because we needed to use a non-qualified type (the user pointer might be a const pointer!), and that makes this a bit more painful. Our '__inttype()' macro used to be lazy and only differentiate between "fits in a register" or "needs two registers". So this fix also had to make that '__inttype()' macro more precise.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
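The more precise __inttype() plausibly ends up along these lines (a sketch; the helper naming is an assumption):

    /*
     * Pick the smallest unsigned integer type the value fits in, so the
     * asm template always sees a register of the access size.
     */
    #define __typefits(x, type, not) \
            __builtin_choose_expr(sizeof(x) <= sizeof(type), (unsigned type)0, not)

    #define __inttype(x) __typeof__(		\
            __typefits(x, char,			\
              __typefits(x, short,		\
                __typefits(x, int,		\
                  __typefits(x, long, 0ULL)))))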
-
Committed by Linus Torvalds

The 'rtype' argument goes back to pre-git (and pre-BK) times, and comes from the fact that we used to not necessarily have the same type sizes for the arguments of the inline asm as we did for the actual accesses we did. So 'rtype' is the 'register type' - the override of the register size in the inline asm when it doesn't match the actual size of the variable we use as the output argument (for when you used "put_user()" on an "int" value that was assigned to a byte-sized user space access, etc).

That mismatch doesn't actually exist any more, and should probably never have existed in the first place. It's a horrid bug just waiting to happen (using more - or less - of the variable than the compiler expected us to use).

I think we had some odd casting going on to hide the effects of that oddity after-the-fact, but those are long gone, and these days we should always have the right size value in the first place, using things like

	__typeof__(*(ptr)) __pu_val = (x);

and gcc should thus have the right register size without any manual 'rtype' games.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
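Roughly, the macro invocation loses its redundant size override; argument lists here are abbreviated and the exact signatures are assumptions:

    /* Before: "b" appears twice - once as size suffix, once as 'rtype'. */
    __put_user_goto(x, ptr, "b", "b", "iq", label);

    /* After: the value's own type tells the compiler the register size. */
    __typeof__(*(ptr)) __pu_val = (x);
    __put_user_goto(__pu_val, ptr, "b", "iq", label);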
-
- 01 April 2020, 2 commits
-
-
Committed by Linus Torvalds

Every remaining user just has the error case returning -EFAULT.

In fact, the exception was __get_user_asm_nozero(), which was removed in commit 4b842e4e ("x86: get rid of small constant size cases in raw_copy_{to,from}_user()"), and the other __get_user_xyz() macros just followed suit for consistency.

Fix up some macro whitespace while at it.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Linus Torvalds

The last user was removed by commit 4b842e4e ("x86: get rid of small constant size cases in raw_copy_{to,from}_user()"). Get rid of the left-overs before somebody tries to use it again.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 31 March 2020, 3 commits
-
-
Committed by Sean Christopherson

Remove the __exit annotation from VMX hardware_unsetup(); the hook can be reached during kvm_init() by way of kvm_arch_hardware_unsetup() if failure occurs at various points during initialization. Removing the annotation also lets us annotate vmx_x86_ops and svm_x86_ops with __initdata; otherwise, objtool complains because it doesn't understand that the vendor-specific __initdata is being copied by value to a non-__initdata instance.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200321202603.19355-8-sean.j.christopherson@intel.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Sean Christopherson

Replace the kvm_x86_ops pointer in common x86 with an instance of the struct to save one pointer dereference when invoking functions. Copy the struct by value to set the ops during kvm_init(). Arbitrarily use kvm_x86_ops.hardware_enable to track whether or not the ops have been initialized, i.e. whether a vendor KVM module has been loaded.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200321202603.19355-7-sean.j.christopherson@intel.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
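In outline, a sketch of the described change (fragments, not the literal diff):

    /* Common x86 now holds the ops by value rather than by pointer. */
    struct kvm_x86_ops kvm_x86_ops __read_mostly;
    EXPORT_SYMBOL_GPL(kvm_x86_ops);

    /* During init, the vendor (VMX/SVM) ops are copied in by value ... */
    kvm_x86_ops = *ops;

    /* ... so every call site drops one dereference: */
    kvm_x86_ops.tlb_flush_guest(vcpu);  /* was kvm_x86_ops->tlb_flush_guest(vcpu) */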
-
Committed by Sean Christopherson

Move the kvm_x86_ops functions that are used only within the scope of kvm_init() into a separate struct, kvm_x86_init_ops. In addition to identifying the init-only functions without resorting to code comments, this also sets the stage for waiting until after ->hardware_setup() to set kvm_x86_ops. Setting kvm_x86_ops after ->hardware_setup() is desirable as many of the hooks are not usable until ->hardware_setup() completes.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200321202603.19355-3-sean.j.christopherson@intel.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 28 March 2020, 4 commits
-
-
Committed by Al Viro

Only one user left; the thing had been made polymorphic back in 2013 for the sake of MPX. No point keeping it now that MPX is gone. Convert futex_atomic_cmpxchg_inatomic() to user_access_{begin,end}() while we are at it.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro

lock cmpxchg leaves the current value in eax; no need to reload it.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro

Lift the stac/clac pairs from __futex_atomic_op{1,2} into arch_futex_atomic_op_inuser() and fold them with the access_ok() in there. The switch in arch_futex_atomic_op_inuser() is what required the previous (objtool) commit...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro

Move access_ok() in, and pagefault_enable()/pagefault_disable() out. Mechanical conversion only - some instances don't really need a separate access_ok() at all (e.g. the ones only using get_user()/put_user(), or architectures where access_ok() is always true); we'll deal with that in followups.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
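The new division of labor, sketched (names follow the generic futex code; the body of the atomic op is elided):

    /* access_ok() now lives inside the arch helper ... */
    static int arch_futex_atomic_op_inuser(int op, u32 oparg, int *oval,
                                           u32 __user *uaddr)
    {
            int ret = 0;

            if (!access_ok(uaddr, sizeof(u32)))
                    return -EFAULT;

            /* ... the atomic user-memory op itself, elided ... */

            return ret;
    }

    /* ... while the caller in kernel/futex.c owns the pagefault bracket: */
    pagefault_disable();
    ret = arch_futex_atomic_op_inuser(op, oparg, &oldval, uaddr);
    pagefault_enable();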
-
- 27 March 2020, 3 commits
-
-
Committed by Benjamin Thiel

Add missing includes and move prototypes into the header set_memory.h in order to fix -Wmissing-prototypes warnings.

[ bp: Add ifdeffery around arch_invalidate_pmem() ]

Signed-off-by: Benjamin Thiel <b.thiel@posteo.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200320145028.6013-1-b.thiel@posteo.de
-
Committed by Benjamin Thiel

... in order to fix a -Wmissing-prototypes warning:

  arch/x86/platform/uv/tlb_uv.c:1275:6: warning: no previous prototype for ‘uv_bau_message_interrupt’ [-Wmissing-prototypes]
   void uv_bau_message_interrupt(struct pt_regs *regs)

Signed-off-by: Benjamin Thiel <b.thiel@posteo.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200327072621.2255-1-b.thiel@posteo.de
-
Committed by Al Viro

finally

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 25 March 2020, 4 commits
-
-
Committed by Brian Gerst

Add the missing semicolon.

Fixes: a74d187c ("x86/entry: Refactor SYS_NI macros")
Reported-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20200324143520.898733-1-brgerst@gmail.com
-
Committed by Thomas Gleixner

No more users.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://lkml.kernel.org/r/20200320131510.900226233@linutronix.de
-
Committed by Thomas Gleixner

Finding all places which build x86_cpu_id match tables is tedious, and the logic is hidden in lots of differently named macro wrappers. Most of these initializer macros use plain C89 initializers which rely on the ordering of the struct members, so new members can only be added at the end of the struct. That's ugly as hell, and C99 initializers are really the right thing to use.

Provide a set of macros which:

 - have a proper naming scheme, starting with X86_MATCH_

 - use C99 initializers

The provided macros are all subsets of the base macro X86_MATCH_VENDOR_FAM_MODEL_FEATURE(), which allows supplying all possible selection criteria: vendor, family, model, feature. The other macros shorten this to avoid typing all arguments when they are not needed, where the full macro would require one of the _ANY constants. They have been created due to the requirements of the existing usage sites.

Also add a few model constants for Centaur CPUs and QUARK.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://lkml.kernel.org/r/20200320131508.826011988@linutronix.de
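Hypothetical usage of the new macros; the table contents, data pointers, and model choices here are illustrative only:

    static const struct x86_cpu_id my_cpu_ids[] __initconst = {
            /* Full selection: vendor, family, model and feature. */
            X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, INTEL_FAM6_SKYLAKE,
                                               X86_FEATURE_HWP, &skl_data),
            /* Shorthand subset: vendor and family only. */
            X86_MATCH_VENDOR_FAM(CENTAUR, 6, NULL),
            {}
    };

    if (x86_match_cpu(my_cpu_ids))
            /* ... */;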
-
Committed by Thomas Gleixner

There is no reason for this gunk to be in a generic header file. The wildcard defines need to stay, as they are required by file2alias.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://lkml.kernel.org/r/20200320131508.736205164@linutronix.de
-
- 24 March 2020, 1 commit
-
-
Committed by Vincenzo Frascino

User Mode Linux is a flavor of x86 that, from the vDSO perspective, always falls back on system calls. This implies that it does not require any of the unified vDSO definitions, and their inclusion causes side effects like this:

  In file included from include/vdso/processor.h:10:0,
                   from include/vdso/datapage.h:17,
                   from arch/x86/include/asm/vgtod.h:7,
                   from arch/x86/um/../kernel/sys_ia32.c:49:
  >> arch/x86/include/asm/vdso/processor.h:11:29: error: redefinition of 'rep_nop'
      static __always_inline void rep_nop(void)
                                  ^~~~~~~
  In file included from include/linux/rcupdate.h:30:0,
                   from include/linux/rculist.h:11,
                   from include/linux/pid.h:5,
                   from include/linux/sched.h:14,
                   from arch/x86/um/../kernel/sys_ia32.c:25:
  arch/x86/um/asm/processor.h:24:20: note: previous definition of 'rep_nop' was here
     static inline void rep_nop(void)

Make sure that the unnecessary headers are not included when um is built, to address the problem.

Fixes: abc22418 ("x86/vdso: Enable x86 to use common headers")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200323124109.7104-1-vincenzo.frascino@arm.com
-
- 23 March 2020, 1 commit
-
-
Committed by Anshuman Khandual

There is an inconsistency between PMD- and PUD-based THP page table helpers, as pud_present() does not test for _PAGE_PSE:

	pmd_present(pmd_mknotpresent(pmd)) : True
	pud_present(pud_mknotpresent(pud)) : False

Drop pud_mknotpresent() as there are no current users. If/when it is needed back later, pud_present() will also have to be fixed to accommodate _PAGE_PSE.

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Link: https://lkml.kernel.org/r/1584925542-13034-1-git-send-email-anshuman.khandual@arm.com
-
- 22 March 2020, 1 commit
-
-
Committed by Wei Huang

Newer AMD CPUs support a feature called protected processor identification number (PPIN). This feature can be detected via CPUID_Fn80000008_EBX[23]. However, CPUID alone is not enough to read the processor identification number - MSR_AMD_PPIN_CTL also needs to be configured properly. If, for any reason, MSR_AMD_PPIN_CTL[PPIN_EN] cannot be turned on, e.g. because it is disabled in the BIOS, the CPU capability bit X86_FEATURE_AMD_PPIN needs to be cleared.

When the X86_FEATURE_AMD_PPIN capability is available, the identification number is issued together with the MCE error info in order to keep track of the source of MCE errors.

[ bp: Massage. ]

Co-developed-by: Smita Koralahalli Channabasappa <smita.koralahallichannabasappa@amd.com>
Signed-off-by: Smita Koralahalli Channabasappa <smita.koralahallichannabasappa@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200321193800.3666964-1-wei.huang2@amd.com
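A hedged sketch of the detection flow just described, modeled on the analogous Intel PPIN probing; the function name and exact bit handling are assumptions:

    static void amd_detect_ppin(struct cpuinfo_x86 *c)
    {
            unsigned long long val;

            if (!cpu_has(c, X86_FEATURE_AMD_PPIN))
                    return;

            /* PPIN_CTL: bit 0 = LockOut, bit 1 = PPIN_EN (assumed layout) */
            if (rdmsrl_safe(MSR_AMD_PPIN_CTL, &val))
                    goto clear;

            if ((val & 3ULL) == 1ULL)
                    goto clear;     /* locked with PPIN_EN clear: give up */

            if (!(val & 2ULL)) {    /* try to set PPIN_EN ourselves */
                    wrmsrl_safe(MSR_AMD_PPIN_CTL, val | 2ULL);
                    rdmsrl_safe(MSR_AMD_PPIN_CTL, &val);
            }

            if (val & 2ULL)
                    return;         /* PPIN is readable */
    clear:
            clear_cpu_cap(c, X86_FEATURE_AMD_PPIN);
    }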
-
- 21 March 2020, 11 commits
-
-
Committed by Peter Zijlstra

Because moar '_' isn't always moar readable.

	git grep -l "___preempt_schedule\(_notrace\)*" | while read file;
	do
		sed -ie 's/___preempt_schedule\(_notrace\)*/preempt_schedule\1_thunk/g' $file;
	done

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/20200320115858.995685950@infradead.org
-
Committed by Brian Gerst

Clean up includes of and in <asm/syscalls.h>.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20200313195144.164260-19-brgerst@gmail.com
-
Committed by Brian Gerst

asmlinkage is no longer required since the syscall ABI is now fully under x86 architecture control. This makes the 32-bit native syscalls a bit more efficient by passing in regs via EAX instead of on the stack.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Link: https://lkml.kernel.org/r/20200313195144.164260-18-brgerst@gmail.com
-
Committed by Brian Gerst

Enable pt_regs-based syscalls for 32-bit. This makes the 32-bit native kernel consistent with the 64-bit kernel, and improves the syscall interface by not needing to push all 6 potential arguments onto the stack.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Link: https://lkml.kernel.org/r/20200313195144.164260-17-brgerst@gmail.com
-
Committed by Brian Gerst

Instead of using an array in asm-offsets to calculate the max syscall number, calculate it when writing out the syscall headers.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20200313195144.164260-9-brgerst@gmail.com
-
Committed by Brian Gerst

so it can be available to multiple syscall tables. Also directly return -ENOSYS instead of bouncing to the generic sys_ni_syscall().

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20200313195144.164260-7-brgerst@gmail.com
-
Committed by Brian Gerst

Add the missing syscall wrapper for x32_rt_sigreturn().

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Link: https://lkml.kernel.org/r/20200313195144.164260-6-brgerst@gmail.com
-
Committed by Brian Gerst

Pull the common code out from the SYS_NI macros into a new __SYS_NI macro. Also conditionalize the X64 version in preparation for enabling syscall wrappers on 32-bit native kernels.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Link: https://lkml.kernel.org/r/20200313195144.164260-5-brgerst@gmail.com
-
Committed by Brian Gerst

Pull the common code out from the COND_SYSCALL macros into a new __COND_SYSCALL macro. Also conditionalize the X64 version in preparation for enabling syscall wrappers on 32-bit native kernels.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Link: https://lkml.kernel.org/r/20200313195144.164260-4-brgerst@gmail.com
-
Committed by Brian Gerst

Pull the common code out from the SYSCALL_DEFINE0 macros into a new __SYS_STUB0 macro. Also conditionalize the X64 version in preparation for enabling syscall wrappers on 32-bit native kernels.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Link: https://lkml.kernel.org/r/20200313195144.164260-3-brgerst@gmail.com
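A sketch of the factored-out zero-argument stub and its conditional X64 wrapper (assumed shape):

    #define __SYS_STUB0(abi, name)					\
            long __##abi##_##name(const struct pt_regs *regs);	\
            ALLOW_ERROR_INJECTION(__##abi##_##name, ERRNO);		\
            long __##abi##_##name(const struct pt_regs *regs)	\
                    __alias(__do_##name);

    #ifdef CONFIG_X86_64
    #define __X64_SYS_STUB0(name)	__SYS_STUB0(x64, sys_##name)
    #else
    #define __X64_SYS_STUB0(name)	/* nothing on 32-bit */
    #endif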
-
Committed by Brian Gerst

Pull the common code out from the SYSCALL_DEFINEx macros into a new __SYS_STUBx macro. Also conditionalize the X64 version in preparation for enabling syscall wrappers on 32-bit native kernels.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Link: https://lkml.kernel.org/r/20200313195144.164260-2-brgerst@gmail.com
-