- 23 June 2019, 1 commit
-
-
Committed by Vincenzo Frascino
The x86 vDSO library requires some adaptations to take advantage of the newly introduced generic vDSO library. Introduce the following changes: - Modification of vdso.c to be compliant with the common vdso datapage - Use of lib/vdso for gettimeofday [ tglx: Massaged changelog and cleaned up the function signature formatting ] Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mips@vger.kernel.org Cc: linux-kselftest@vger.kernel.org Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Dmitry Safonov <0x7f454c46@gmail.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Huw Davies <huw@codeweavers.com> Cc: Shijith Thotton <sthotton@marvell.com> Cc: Andre Przywara <andre.przywara@arm.com> Link: https://lkml.kernel.org/r/20190621095252.32307-23-vincenzo.frascino@arm.com
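For orientation, the generic library under lib/vdso/ supplies common __cvdso_*() helpers and each architecture wraps them with thin entry points. A rough sketch of the shape this gives the x86 side (the include path and exact signatures are illustrative of how the generic library is consumed, not a quote of the patch):

  /* The arch file pulls in the generic implementation as static helpers... */
  #include "../../../../lib/vdso/gettimeofday.c"

  /* ...and each exported vDSO symbol becomes a thin wrapper around them. */
  int __vdso_clock_gettime(clockid_t clock, struct __kernel_timespec *ts)
  {
          return __cvdso_clock_gettime(clock, ts);
  }

  int clock_gettime(clockid_t, struct __kernel_timespec *)
          __attribute__((weak, alias("__vdso_clock_gettime")));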
-
- 22 June 2019, 1 commit
-
-
Committed by Andy Lutomirski
GCC 5.5.0 sometimes cleverly hoists reads of the pvclock and/or hvclock pages before the vclock mode checks. This creates a path through vclock_gettime() in which no vclock is enabled at all (due to disabled TSC on old CPUs, for example) but the pvclock or hvclock page is nevertheless read. This will segfault on bare metal. This fixes commit 459e3a21 ("gcc-9: properly declare the {pv,hv}clock_page storage") in the sense that, before that commit, GCC didn't seem to generate the offending code. There was nothing wrong with that commit per se, and -stable maintainers should backport this to all supported kernels regardless of whether the offending commit was present, since the same crash could just as easily be triggered by the phase of the moon. On GCC 9.1.1, this doesn't seem to affect the generated code at all, so I'm not too concerned about performance regressions from this fix. Cc: stable@vger.kernel.org Cc: x86@kernel.org Cc: Borislav Petkov <bp@alien8.de> Reported-by: Duncan Roe <duncan_roe@optusnet.com.au> Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
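A sketch of the hazard and of the shape of the fix, reconstructed from the changelog (illustrative, not a verbatim copy of the patch; the relevant reader lives in arch/x86/entry/vdso/vclock_gettime.c):

  static inline u64 vgetcyc(int mode)
  {
          if (mode == VCLOCK_TSC)
                  return (u64)rdtsc_ordered();
          /*
           * Without the barrier, GCC may hoist the load from the pvclock/
           * hvclock page above the mode check. On bare metal that page is
           * not populated, so the hoisted load faults even though the
           * branch is never taken. barrier() pins the load below the check.
           */
  #ifdef CONFIG_PARAVIRT_CLOCK
          if (mode == VCLOCK_PVCLOCK) {
                  barrier();
                  return vread_pvclock();
          }
  #endif
  #ifdef CONFIG_HYPERV_TSCPAGE
          if (mode == VCLOCK_HVCLOCK) {
                  barrier();
                  return vread_hvclock();
          }
  #endif
          return U64_MAX;
  }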
-
- 31 May 2019, 2 commits
-
-
Committed by Thomas Gleixner
Based on 1 normalized pattern(s): subject to the gnu public license v 2 extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 9 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Steve Winslow <swinslow@gmail.com> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190528171440.130801526@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Thomas Gleixner
Based on 1 normalized pattern(s): subject to the gpl v 2 extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 2 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Steve Winslow <swinslow@gmail.com> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190528171439.372657724@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 08 May 2019, 1 commit
-
-
Committed by Jia Zhang
This trivial cleanup finalizes the removal of vDSO HPET support. Fixes: 1ed95e52 ("x86/vdso: Remove direct HPET access through the vDSO") Signed-off-by: Jia Zhang <zhang.jia@linux.alibaba.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: luto@kernel.org Cc: bp@alien8.de Link: https://lkml.kernel.org/r/20190401114045.7280-1-zhang.jia@linux.alibaba.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 02 May 2019, 1 commit
-
-
Committed by Linus Torvalds
The pvclock_page and hvclock_page variables are (as the name implies) addresses to pages, created by the linker script. But we declared them as just "extern u8" variables, which _works_, but now that gcc does some more bounds checking, it causes warnings like warning: array subscript 1 is outside array bounds of ‘u8[1]’ when we then access more than one byte from those variables. Fix this by simply making the declaration of the variables match reality, which makes the compiler happy too. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
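The fix boils down to making the declarations describe what the linker script actually provides, roughly (sketch; the hidden-visibility attribute mirrors how the vDSO declares these symbols):

  /* Before: compiles, but tells the compiler each object is a single byte. */
  extern u8 pvclock_page __attribute__((visibility("hidden")));
  extern u8 hvclock_page __attribute__((visibility("hidden")));

  /* After: the declarations match the page-sized objects created by the
   * linker script, so the array-bounds checks have nothing to complain about. */
  extern u8 pvclock_page[PAGE_SIZE] __attribute__((visibility("hidden")));
  extern u8 hvclock_page[PAGE_SIZE] __attribute__((visibility("hidden")));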
-
- 20 April 2019, 1 commit
-
-
Committed by Leonardo Brás
The go32() and go64() functions have an argument and a local variable called ‘name’. Rename both to clarify the code and to fix a warning with -Wshadow. Signed-off-by: Leonardo Brás <leobras.c@gmail.com> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: David.Laight@aculab.com Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Michal Marek <michal.lkml@markovi.net> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: helen@koikeco.de Cc: linux-kbuild@vger.kernel.org Cc: lkcamp@lists.libreplanetbr.org Link: http://lkml.kernel.org/r/20181023011022.GA6574@WindFlash Signed-off-by: Ingo Molnar <mingo@kernel.org>
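For readers unfamiliar with -Wshadow, a minimal standalone example of the pattern being eliminated (hypothetical names, unrelated to the actual vdso2c code):

  #include <stdio.h>

  static void greet(const char *name)             /* parameter 'name' */
  {
          for (int i = 0; i < 1; i++) {
                  char name[] = "inner";          /* -Wshadow: shadows the parameter */
                  printf("%s\n", name);
          }
          printf("%s\n", name);                   /* refers to the parameter again */
  }

  int main(void)
  {
          greet("outer");
          return 0;
  }

Renaming one of the two identifiers, as the patch does, silences the warning without changing behavior.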
-
- 18 April 2019, 1 commit
-
-
Committed by Masahiro Yamada
$(call if_changed,...) must have FORCE as a prerequisite. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1554280212-10578-1-git-send-email-yamada.masahiro@socionext.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 15 December 2018, 1 commit
-
-
Committed by Alistair Strachan
Commit 379d98dd ("x86: vdso: Use $LD instead of $CC to link") accidentally broke unwinding from userspace, because ld would strip the .eh_frame sections when linking. Originally, the compiler would implicitly add --eh-frame-hdr when invoking the linker, but when this Makefile was converted from invoking ld via the compiler, to invoking it directly (like vmlinux does), the flag was missed. (The EH_FRAME section is important for the VDSO shared libraries, but not for vmlinux.) Fix the problem by explicitly specifying --eh-frame-hdr, which restores parity with the old method. See relevant bug reports for additional info: https://bugzilla.kernel.org/show_bug.cgi?id=201741 https://bugzilla.redhat.com/show_bug.cgi?id=1659295 Fixes: 379d98dd ("x86: vdso: Use $LD instead of $CC to link") Reported-by: Florian Weimer <fweimer@redhat.com> Reported-by: Carlos O'Donell <carlos@redhat.com> Reported-by: "H. J. Lu" <hjl.tools@gmail.com> Signed-off-by: Alistair Strachan <astrachan@google.com> Signed-off-by: Borislav Petkov <bp@suse.de> Tested-by: Laura Abbott <labbott@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Carlos O'Donell <carlos@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Joel Fernandes <joel@joelfernandes.org> Cc: kernel-team@android.com Cc: Laura Abbott <labbott@redhat.com> Cc: stable <stable@vger.kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: X86 ML <x86@kernel.org> Link: https://lkml.kernel.org/r/20181214223637.35954-1-astrachan@google.com
-
- 08 December 2018, 1 commit
-
-
Committed by Nick Desaulniers
GNU linker's -z common-page-size's default value is based on the target architecture. arch/x86/entry/vdso/Makefile sets it to the architecture default, which is implicit and redundant. Drop it. Fixes: 2aae950b ("x86_64: Add vDSO for x86-64 with gettimeofday/clock_gettime/getcpu") Reported-by: Dmitry Golovin <dima@golovin.in> Reported-by: Bill Wendling <morbo@google.com> Suggested-by: Dmitry Golovin <dima@golovin.in> Suggested-by: Rui Ueyama <ruiu@google.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Andi Kleen <andi@firstfloor.org> Cc: Fangrui Song <maskray@google.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20181206191231.192355-1-ndesaulniers@google.com Link: https://bugs.llvm.org/show_bug.cgi?id=38774 Link: https://github.com/ClangBuiltLinux/linux/issues/31
-
- 05 December 2018, 2 commits
-
-
Committed by Sean Christopherson
Once upon a time, vdso2c aggressively stripped data from the vDSO image when generating the final userspace image. This included stripping the .altinstructions and .altinstr_replacement sections. Eventually, the stripping process reverted to "objdump -S" and no longer removed the aforementioned sections, but the comment remained. Keeping the .alt* sections at the end of the PT_LOAD segment is no longer necessary, but there's no harm in doing so and it's a helpful reminder that they don't need to be included in the final vDSO image, i.e. someone may want to take another stab at zapping/stripping the unneeded sections. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: da861e18 ("x86, vdso: Get rid of the fake section mechanism") Link: http://lkml.kernel.org/r/20181204212600.28090-3-sean.j.christopherson@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Sean Christopherson
At one point the vDSO image was manually stripped down by vdso2c in an attempt to minimize the size of the image mapped into userspace. Part of that stripping process involved building a fake section table so as not to break userspace processes that parse the section table. Memory for the fake section table was reserved in the .rodata section so that vdso2c could simply copy the entire PT_LOAD segment into the userspace image after building the fake table. Eventually, the entire fake section table approach was dropped in favor of stripping the vdso "the old fashioned way", i.e. via objdump -S. But, the reservation in .rodata for the fake table was left behind. Remove the reservation along with a few other related defines and section entries. Removing the fake section table placeholder zaps a whopping 0x340 bytes from the 64-bit vDSO image, which drops the current image's size to under 4k, i.e. reduces the effective size of the userspace vDSO mapping by a full page. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: da861e18 ("x86, vdso: Get rid of the fake section mechanism") Link: http://lkml.kernel.org/r/20181204212600.28090-2-sean.j.christopherson@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 03 December 2018, 1 commit
-
-
Committed by Ingo Molnar
Go over arch/x86/ and fix common typos in comments, and a typo in an actual function argument name. No change in functionality intended. Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 27 October 2018, 1 commit
-
-
Committed by Matthew Wilcox
Return vm_fault_t codes directly from the appropriate mm routines instead of converting from errnos ourselves. Fixes a minor bug where we'd return SIGBUS instead of the correct OOM code if we ran out of memory allocating page tables. Link: http://lkml.kernel.org/r/20180828145728.11873-5-willy@infradead.org Signed-off-by: Matthew Wilcox <willy@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Nicolas Pitre <nicolas.pitre@linaro.org> Cc: Souptick Joarder <jrdr.linux@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
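The pattern in question, sketched for a generic pfn-mapping fault handler (illustrative, not the actual vDSO patch; vmf_insert_pfn() is the mm helper that returns a vm_fault_t directly):

  /* Old style: get an errno back, then hand-convert it to a fault code. */
  static vm_fault_t example_fault_old(struct vm_fault *vmf, unsigned long pfn)
  {
          int ret = vm_insert_pfn(vmf->vma, vmf->address, pfn);

          if (ret == 0 || ret == -EBUSY)
                  return VM_FAULT_NOPAGE;
          return VM_FAULT_SIGBUS;         /* -ENOMEM should have become VM_FAULT_OOM */
  }

  /* New style: let the mm routine produce the vm_fault_t itself. */
  static vm_fault_t example_fault_new(struct vm_fault *vmf, unsigned long pfn)
  {
          return vmf_insert_pfn(vmf->vma, vmf->address, pfn);
  }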
-
- 08 October 2018, 5 commits
-
-
Committed by Ingo Molnar
So: - use 'extern' consistently for APIs - fix weird header guard - clarify code comments - reorder APIs by type Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Chang S. Bae <chang.seok.bae@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Markus T Metzger <markus.t.metzger@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Shankar <ravi.v.shankar@intel.com> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/1537312139-5580-2-git-send-email-chang.seok.bae@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ingo Molnar
We have a special segment descriptor entry in the GDT, whose sole purpose is to encode the CPU and node numbers in its limit (size) field. There are user-space instructions that allow the reading of the limit field, which gives us a really fast way to read the CPU and node IDs from the vDSO for example. But the naming of related functionality does not make this clear, at all: VDSO_CPU_SIZE, VDSO_CPU_MASK, __CPU_NUMBER_SEG, GDT_ENTRY_CPU_NUMBER, vdso_encode_cpu_node, vdso_read_cpu_node. There's a number of problems: - The 'VDSO_CPU_SIZE' doesn't really make it clear that these are number of bits, nor does it make it clear which 'CPU' this refers to, i.e. that this is about a GDT entry whose limit encodes the CPU and node number. - Furthermore, the 'CPU_NUMBER' naming is actively misleading as well, because the segment limit encodes not just the CPU number but the node ID as well ... So use a better nomenclature all around: name everything related to this trick as 'CPUNODE', to make it clear that this is something special, and add _BITS to make it clear that these are number of bits, and propagate this to every affected name: VDSO_CPU_SIZE => VDSO_CPUNODE_BITS, VDSO_CPU_MASK => VDSO_CPUNODE_MASK, __CPU_NUMBER_SEG => __CPUNODE_SEG, GDT_ENTRY_CPU_NUMBER => GDT_ENTRY_CPUNODE, vdso_encode_cpu_node => vdso_encode_cpunode, vdso_read_cpu_node => vdso_read_cpunode. This, beyond being less confusing, also makes it easier to grep for all related functionality: $ git grep -i cpunode arch/x86 Also, while at it, fix "return is not a function" style sloppiness in vdso_encode_cpunode(). Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Chang S. Bae <chang.seok.bae@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Markus T Metzger <markus.t.metzger@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Shankar <ravi.v.shankar@intel.com> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/1537312139-5580-2-git-send-email-chang.seok.bae@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
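To make the trick concrete: the limit field of that GDT entry packs the node ID in the upper bits and the CPU number in the low bits, and the unprivileged LSL instruction lets userspace read the limit without a syscall. A hedged sketch of the decode side, using the nomenclature introduced by this commit (the exact bit split is illustrative):

  #define VDSO_CPUNODE_BITS  12
  #define VDSO_CPUNODE_MASK  0xfff

  static inline void vdso_read_cpunode(unsigned *cpu, unsigned *node)
  {
          unsigned int p;

          /* LSL fetches the segment limit of __CPUNODE_SEG from userspace. */
          asm ("lsl %[seg], %[p]" : [p] "=a" (p) : [seg] "r" (__CPUNODE_SEG));

          if (cpu)
                  *cpu = p & VDSO_CPUNODE_MASK;
          if (node)
                  *node = p >> VDSO_CPUNODE_BITS;
  }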
-
Committed by Chang S. Bae
Currently the CPU/node NR segment descriptor (GDT_ENTRY_CPU_NUMBER) is initialized relatively late during CPU init, from the vCPU code, which has a number of disadvantages, such as hotplug CPU notifiers and SMP cross-calls. Instead just initialize it much earlier, directly in cpu_init(). This reduces complexity and increases robustness. [ mingo: Wrote new changelog. ] Suggested-by: H. Peter Anvin <hpa@zytor.com> Suggested-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Markus T Metzger <markus.t.metzger@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Shankar <ravi.v.shankar@intel.com> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1537312139-5580-9-git-send-email-chang.seok.bae@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Chang S. Bae
Clean up the CPU/node number related code a bit, to make it more apparent how we are encoding/extracting the CPU and node fields from the segment limit. No change in functionality intended. [ mingo: Wrote new changelog. ] Suggested-by: Andy Lutomirski <luto@kernel.org> Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Markus T Metzger <markus.t.metzger@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Shankar <ravi.v.shankar@intel.com> Cc: Rik van Riel <riel@surriel.com> Link: http://lkml.kernel.org/r/1537312139-5580-8-git-send-email-chang.seok.bae@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Chang S. Bae
The old 'per CPU' naming was misleading: 64-bit kernels don't use this GDT entry for per CPU data, but to store the CPU (and node) ID. [ mingo: Wrote new changelog. ] Suggested-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Markus T Metzger <markus.t.metzger@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Shankar <ravi.v.shankar@intel.com> Cc: Rik van Riel <riel@surriel.com> Link: http://lkml.kernel.org/r/1537312139-5580-7-git-send-email-chang.seok.bae@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 06 October 2018, 1 commit
-
-
Committed by Andy Lutomirski
vgetcyc() is full of barriers, so fetching values out of the vvar page before vgetcyc() for use after vgetcyc() results in poor code generation. Put vgetcyc() first to avoid this problem. Also, pull the tv_sec division into the loop and put all the ts writes together. The old code wrote ts->tv_sec on each iteration before the syscall fallback check and then added in the offset afterwards, which forced the compiler to pointlessly copy base->sec to ts->tv_sec on each iteration. The new version seems to generate sensible code. Saves several cycles. With this patch applied, the result is faster than before the clock_gettime() rewrite. Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/3c05644d010b72216aa286a6d20b5078d5fae5cd.1538762487.git.luto@kernel.org
-
- 05 October 2018, 9 commits
-
-
Committed by Andy Lutomirski
When a vDSO clock function falls back to the syscall, no special barriers or ordering is needed, and the syscall fallbacks don't clobber any memory that is not explicitly listed in the asm constraints. Remove the "memory" clobber. This causes minor changes to the generated code, but otherwise has no obvious performance impact. I think it's nice to have, though, since it may help the optimizer in the future. Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/3a7438f5fb2422ed881683d2ccffd7f987b2dc44.1538689401.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Thomas Gleixner
Dereferencing gtod->cycle_last all over the place and doing the cycles < last comparison in the vclock read functions generates horrible code. Doing it at the call site is much better and gains a few cycles both for TSC and pvclock. Caveat: This adds the comparison to the hyperv vclock as well, but I have no way to test that. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Matt Rickard <matt@softrans.com.au> Cc: Stephen Boyd <sboyd@kernel.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Florian Weimer <fweimer@redhat.com> Cc: "K. Y. Srinivasan" <kys@microsoft.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: devel@linuxdriverproject.org Cc: virtualization@lists.linux-foundation.org Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Juergen Gross <jgross@suse.com> Link: https://lkml.kernel.org/r/20180917130707.741440803@linutronix.de
-
Committed by Thomas Gleixner
The code flow for the vclocks is convoluted as it requires the vclocks which can be invalidated separately from the vsyscall_gtod_data sequence to store the fact in a separate variable. That's inefficient. Restructure the code so the vclock readout returns cycles and the conversion to nanoseconds is handled at the call site. If the clock gets invalidated or vclock is already VCLOCK_NONE, return U64_MAX as the cycle value, which is invalid for all clocks and leave the sequence loop immediately in that case by calling the fallback function directly. This allows removing the gettimeofday fallback as it now uses the clock_gettime() fallback and does the nanoseconds to microseconds conversion in the same way as it does when the vclock is functional. It does not make a difference whether the division by 1000 happens in the kernel fallback or in userspace. Generates way better code and gains a few cycles back. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Matt Rickard <matt@softrans.com.au> Cc: Stephen Boyd <sboyd@kernel.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Florian Weimer <fweimer@redhat.com> Cc: "K. Y. Srinivasan" <kys@microsoft.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: devel@linuxdriverproject.org Cc: virtualization@lists.linux-foundation.org Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Juergen Gross <jgross@suse.com> Link: https://lkml.kernel.org/r/20180917130707.657928937@linutronix.de
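Pieced together from this and the neighbouring patches in the series, the restructured reader ends up roughly like the sketch below (names follow the changelogs; details such as the exact invalid-value check and the placement of the ts writes are illustrative):

  static notrace u64 vgetcyc(int mode)
  {
          if (mode == VCLOCK_TSC)
                  return (u64)rdtsc_ordered();
          return U64_MAX;         /* VCLOCK_NONE or an invalidated vclock */
  }

  static notrace int do_hres(clockid_t clk, struct timespec *ts)
  {
          struct vgtod_ts *base = &gtod->basetime[clk];
          u64 cycles, last, ns;
          unsigned int seq;

          do {
                  seq = gtod_read_begin(gtod);
                  cycles = vgetcyc(gtod->vclock_mode);
                  ns = base->nsec;
                  last = gtod->cycle_last;
                  if (unlikely(cycles == U64_MAX))
                          return vdso_fallback_gettime(clk, ts);  /* syscall */
                  if (cycles > last)
                          ns += (cycles - last) * gtod->mult;
                  ns >>= gtod->shift;
                  ts->tv_sec = base->sec;
          } while (unlikely(gtod_read_retry(gtod, seq)));

          ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns);
          ts->tv_nsec = ns;
          return 0;
  }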
-
Committed by Thomas Gleixner
Now that the time getter functions use the clockid as index into the storage array for the base time access, the switch case can be replaced. - Check for clockid >= MAX_CLOCKS and for negative clockid (CPU/FD) first and call the fallback function right away. - After establishing that clockid is < MAX_CLOCKS, convert the clockid to a bitmask - Check for the supported high resolution and coarse functions by anding the bitmask of supported clocks and check whether a bit is set. This completely avoids jump tables, reduces the number of conditionals and makes the VDSO extensible for other clock ids. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Matt Rickard <matt@softrans.com.au> Cc: Stephen Boyd <sboyd@kernel.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Florian Weimer <fweimer@redhat.com> Cc: "K. Y. Srinivasan" <kys@microsoft.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: devel@linuxdriverproject.org Cc: virtualization@lists.linux-foundation.org Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Juergen Gross <jgross@suse.com> Link: https://lkml.kernel.org/r/20180917130707.574315796@linutronix.de
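A sketch of the resulting dispatch (one bit per supported clock id, as described above; treat the exact macro names as illustrative):

  #define VGTOD_HRES   (BIT(CLOCK_REALTIME) | BIT(CLOCK_MONOTONIC) | \
                        BIT(CLOCK_MONOTONIC_RAW))
  #define VGTOD_COARSE (BIT(CLOCK_REALTIME_COARSE) | BIT(CLOCK_MONOTONIC_COARSE))

  notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
  {
          unsigned int msk;

          /* Negative (CPU/FD) clock ids and anything >= MAX_CLOCKS both fail
           * the unsigned comparison and go straight to the syscall. */
          if (unlikely((unsigned int)clock >= MAX_CLOCKS))
                  return vdso_fallback_gettime(clock, ts);

          /* One bit per clock id: no jump table, just two mask tests. */
          msk = 1U << clock;
          if (likely(msk & VGTOD_HRES))
                  return do_hres(clock, ts);
          if (msk & VGTOD_COARSE) {
                  do_coarse(clock, ts);
                  return 0;
          }
          return vdso_fallback_gettime(clock, ts);
  }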
-
Committed by Thomas Gleixner
do_realtime_coarse() and do_monotonic_coarse() are now the same except for the storage array index. Hand the index in as an argument and collapse the functions. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Matt Rickard <matt@softrans.com.au> Cc: Stephen Boyd <sboyd@kernel.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Florian Weimer <fweimer@redhat.com> Cc: "K. Y. Srinivasan" <kys@microsoft.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: devel@linuxdriverproject.org Cc: virtualization@lists.linux-foundation.org Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Juergen Gross <jgross@suse.com> Link: https://lkml.kernel.org/r/20180917130707.490733779@linutronix.de
-
Committed by Thomas Gleixner
do_realtime() and do_monotonic() are now the same except for the storage array index. Hand the index in as an argument and collapse the functions. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Matt Rickard <matt@softrans.com.au> Cc: Stephen Boyd <sboyd@kernel.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Florian Weimer <fweimer@redhat.com> Cc: "K. Y. Srinivasan" <kys@microsoft.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: devel@linuxdriverproject.org Cc: virtualization@lists.linux-foundation.org Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Juergen Gross <jgross@suse.com> Link: https://lkml.kernel.org/r/20180917130707.407955860@linutronix.de
-
Committed by Thomas Gleixner
It's desired to support more clocks in the VDSO, e.g. CLOCK_TAI. This results either in indirect calls due to the larger switch case, which then requires retpolines, or when the compiler is forced to avoid jump tables it results in even more conditionals. To avoid both variants which are bad for performance, the high resolution functions and the coarse grained functions will be collapsed into one for each. That requires storing the clock specific base time in an array. Introduce struct vgtod_ts for storage and convert the data store, the update function and the individual clock functions over to use it. The new storage no longer uses gtod_long_t for seconds depending on 32 or 64 bit compile because this needs to be the full 64bit value even for 32bit when a Y2038 function is added. No point in keeping the distinction alive in the internal representation. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Matt Rickard <matt@softrans.com.au> Cc: Stephen Boyd <sboyd@kernel.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Florian Weimer <fweimer@redhat.com> Cc: "K. Y. Srinivasan" <kys@microsoft.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: devel@linuxdriverproject.org Cc: virtualization@lists.linux-foundation.org Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Juergen Gross <jgross@suse.com> Link: https://lkml.kernel.org/r/20180917130707.324679401@linutronix.de
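The storage change described here, sketched (field layout reconstructed from the changelog; seconds stay a full 64-bit value even on 32-bit builds, with an eye on Y2038):

  struct vgtod_ts {
          u64     sec;
          u64     nsec;
  };

  #define VGTOD_BASES     (CLOCK_TAI + 1)

  struct vsyscall_gtod_data {
          unsigned int    seq;
          int             vclock_mode;
          u64             cycle_last;
          u64             mask;
          u32             mult;
          u32             shift;
          /* One base time per clock id, indexed directly by clockid_t. */
          struct vgtod_ts basetime[VGTOD_BASES];
          int             tz_minuteswest;
          int             tz_dsttime;
  };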
-
Committed by Thomas Gleixner
The sequence count in vgtod_data is unsigned int, but the call sites use unsigned long, which is a pointless exercise. Fix the call sites and replace 'unsigned' with 'unsigned int' while at it. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Matt Rickard <matt@softrans.com.au> Cc: Stephen Boyd <sboyd@kernel.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Florian Weimer <fweimer@redhat.com> Cc: "K. Y. Srinivasan" <kys@microsoft.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: devel@linuxdriverproject.org Cc: virtualization@lists.linux-foundation.org Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Juergen Gross <jgross@suse.com> Link: https://lkml.kernel.org/r/20180917130707.236250416@linutronix.de
-
Committed by Thomas Gleixner
All VDSO clock sources are TSC based and use CLOCKSOURCE_MASK(64). There is no point in masking with all FF. Get rid of it and enforce the mask in the sanity checker. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Matt Rickard <matt@softrans.com.au> Cc: Stephen Boyd <sboyd@kernel.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Florian Weimer <fweimer@redhat.com> Cc: "K. Y. Srinivasan" <kys@microsoft.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: devel@linuxdriverproject.org Cc: virtualization@lists.linux-foundation.org Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Juergen Gross <jgross@suse.com> Link: https://lkml.kernel.org/r/20180917130707.151963007@linutronix.de
-
- 04 October 2018, 1 commit
-
-
Committed by Andy Lutomirski
When I added the missing memory outputs, I failed to update the index of the first argument (ebx) on 32-bit builds, which broke the fallbacks. Somehow I must have screwed up my testing or gotten lucky. Add another test to cover gettimeofday() as well. Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Fixes: 715bd9d1 ("x86/vdso: Fix asm constraints on vDSO syscall fallbacks") Link: http://lkml.kernel.org/r/21bd45ab04b6d838278fa5bebfa9163eceffa13c.1538608971.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 03 October 2018, 1 commit
-
-
Committed by Andy Lutomirski
When I fixed the vDSO build to use inline retpolines, I messed up the Makefile logic and made it unconditional. It should have depended on CONFIG_RETPOLINE and on the availability of compiler support. This broke the build on some older compilers. Reported-by: nikola.ciprich@linuxbox.cz Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matt Rickard <matt@softrans.com.au> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: jason.vas.dias@gmail.com Cc: stable@vger.kernel.org Fixes: 2e549b2e ("x86/vdso: Fix vDSO build if a retpoline is emitted") Link: http://lkml.kernel.org/r/08a1f29f2c238dd1f493945e702a521f8a5aa3ae.1538540801.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 02 October 2018, 1 commit
-
-
Committed by Andy Lutomirski
The syscall fallbacks in the vDSO have incorrect asm constraints. They are not marked as writing to their outputs -- instead, they are marked as clobbering "memory", which is useless. In particular, gcc is smart enough to know that the timespec parameter hasn't escaped, so a memory clobber doesn't clobber it. And passing a pointer as an asm *input* does not tell gcc that the pointed-to value is changed. Add in the fact that the asm instructions weren't volatile, and gcc was free to omit them entirely unless their sole output (the return value) is used. Which it is (phew!), but that stops happening with some upcoming patches. As a trivial example, the following code: void test_fallback(struct timespec *ts) { vdso_fallback_gettime(CLOCK_MONOTONIC, ts); } compiles to: 00000000000000c0 <test_fallback>: c0: c3 retq To add insult to injury, the RCX and R11 clobbers on 64-bit builds were missing. The "memory" clobber is also unnecessary -- no ordering with respect to other memory operations is needed, but that's going to be fixed in a separate not-for-stable patch. Fixes: 2aae950b ("x86_64: Add vDSO for x86-64 with gettimeofday/clock_gettime/getcpu") Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/2c0231690551989d2fafa60ed0e7b5cc8b403908.1538422295.git.luto@kernel.org
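Taking this fix together with the follow-up that drops the unnecessary "memory" clobber, the 64-bit fallback ends up roughly as sketched below (illustrative; the key points from the changelog are that *ts becomes a real asm output and that rcx/r11 are listed because SYSCALL clobbers them):

  notrace static long vdso_fallback_gettime(long clock, struct timespec *ts)
  {
          long ret;

          asm ("syscall"
               : "=a" (ret), "=m" (*ts)                          /* outputs, incl. *ts */
               : "0" (__NR_clock_gettime), "D" (clock), "S" (ts)
               : "rcx", "r11");                                  /* clobbered by SYSCALL */
          return ret;
  }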
-
- 21 August 2018, 1 commit
-
-
Committed by Andy Lutomirski
Currently, if the vDSO ends up containing an indirect branch or call, GCC will emit the "external thunk" style of retpoline, and it will fail to link. Fix it by building the vDSO with inline retpoline thunks. I haven't seen any reports of this triggering on an unpatched kernel. Fixes: commit 76b04384 ("x86/retpoline: Add initial retpoline support") Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Matt Rickard <matt@softrans.com.au> Cc: Borislav Petkov <bp@alien8.de> Cc: Jason Vas Dias <jason.vas.dias@gmail.com> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/c76538cd3afbe19c6246c2d1715bc6a60bd63985.1534448381.git.luto@kernel.org
-
- 06 August 2018, 1 commit
-
-
Committed by Alistair Strachan
The vdso{32,64}.so can fail to link with CC=clang when clang tries to find a suitable GCC toolchain to link these libraries with. /usr/bin/ld: arch/x86/entry/vdso/vclock_gettime.o: access beyond end of merged section (782) This happens because the host environment leaked into the cross compiler environment due to the way clang searches for suitable GCC toolchains. Clang is a retargetable compiler, and each invocation of it must provide --target=<something> --gcc-toolchain=<something> to allow it to find the correct binutils for cross compilation. These flags had been added to KBUILD_CFLAGS, but the vdso code uses CC and not KBUILD_CFLAGS (for various reasons) which breaks clang's ability to find the correct linker when cross compiling. Most of the time this goes unnoticed because the host linker is new enough to work anyway, or is incompatible and skipped, but this cannot be reliably assumed. This change alters the vdso makefile to just use LD directly, which bypasses clang and thus the searching problem. The makefile will just use ${CROSS_COMPILE}ld instead, which is always what we want. This matches the method used to link vmlinux. This drops references to DISABLE_LTO; this option doesn't seem to be set anywhere, and not knowing what its possible values are, it's not clear how to convert it from CC to LD flag. Signed-off-by: Alistair Strachan <astrachan@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: kernel-team@android.com Cc: joel@joelfernandes.org Cc: Andi Kleen <andi.kleen@intel.com> Link: https://lkml.kernel.org/r/20180803173931.117515-1-astrachan@google.com
-
- 18 July 2018, 1 commit
-
-
Committed by Laura Abbott
The vDSO needs to have a unique build id in a similar manner to the kernel and modules. Use the build salt macro. Acked-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 03 July 2018, 1 commit
-
-
Committed by Masahiro Yamada
No reason to use 'define' directive here. Just use the = operator. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1530582614-5173-3-git-send-email-yamada.masahiro@socionext.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 15 May 2018, 3 commits
-
-
Committed by Masahiro Yamada
Let's put generated files in $(obj)/ rather than $(src)/, although this is just a matter of taste because both are the same. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Jeff Dike <jdike@addtoit.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Richard Weinberger <richard@nod.at> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: user-mode-linux-devel@lists.sourceforge.net Cc: user-mode-linux-user@lists.sourceforge.net Link: http://lkml.kernel.org/r/1526352744-28229-4-git-send-email-yamada.masahiro@socionext.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Masahiro Yamada
CPPFLAGS_vdso.lds is assigned and referenced internally in each Makefile. No need to export it. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Jeff Dike <jdike@addtoit.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Richard Weinberger <richard@nod.at> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: user-mode-linux-devel@lists.sourceforge.net Cc: user-mode-linux-user@lists.sourceforge.net Link: http://lkml.kernel.org/r/1526352744-28229-3-git-send-email-yamada.masahiro@socionext.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Masahiro Yamada
Since commit bfad381c ("x86/vdso: Improve the fake section headers"), $(vobjs-nox32) is empty. Therefore, $(vobjs64-for-x32) is the same as $(vobjs-y). Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1526352744-28229-2-git-send-email-yamada.masahiro@socionext.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 06 May 2018, 1 commit
-
-
Committed by Jann Horn
commit da861e18 ("x86, vdso: Get rid of the fake section mechanism") left this file behind; nothing is using it anymore. Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: hpa@zytor.com Cc: luto@amacapital.net Link: https://lkml.kernel.org/r/20180504175935.104085-1-jannh@google.com
-