1. 23 June 2019, 1 commit
    •
      x86/vdso: Switch to generic vDSO implementation · 7ac87074
      Committed by Vincenzo Frascino
      The x86 vDSO library requires some adaptations to take advantage of the
      newly introduced generic vDSO library.
      
      Introduce the following changes:
       - Modification of vdso.c to be compliant with the common vdso datapage
       - Use of lib/vdso for gettimeofday
      
      [ tglx: Massaged changelog and cleaned up the function signature formatting ]
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-mips@vger.kernel.org
      Cc: linux-kselftest@vger.kernel.org
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Mark Salyzyn <salyzyn@android.com>
      Cc: Peter Collingbourne <pcc@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Dmitry Safonov <0x7f454c46@gmail.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Huw Davies <huw@codeweavers.com>
      Cc: Shijith Thotton <sthotton@marvell.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Link: https://lkml.kernel.org/r/20190621095252.32307-23-vincenzo.frascino@arm.com
      7ac87074
  2. 22 June 2019, 1 commit
    •
      x86/vdso: Prevent segfaults due to hoisted vclock reads · ff17bbe0
      Committed by Andy Lutomirski
      GCC 5.5.0 sometimes cleverly hoists reads of the pvclock and/or hvclock
      pages before the vclock mode checks.  This creates a path through
      vclock_gettime() in which no vclock is enabled at all (due to disabled
      TSC on old CPUs, for example) but the pvclock or hvclock page is
      nevertheless read.  This will segfault on bare metal.
      
      This fixes commit 459e3a21 ("gcc-9: properly declare the
      {pv,hv}clock_page storage") in the sense that, before that commit, GCC
      didn't seem to generate the offending code.  There was nothing wrong
      with that commit per se, and -stable maintainers should backport this to
      all supported kernels regardless of whether the offending commit was
      present, since the same crash could just as easily be triggered by the
      phase of the moon.
      
      On GCC 9.1.1, this doesn't seem to affect the generated code at all, so
      I'm not too concerned about performance regressions from this fix.
      
      Cc: stable@vger.kernel.org
      Cc: x86@kernel.org
      Cc: Borislav Petkov <bp@alien8.de>
      Reported-by: Duncan Roe <duncan_roe@optusnet.com.au>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ff17bbe0
  3. 31 May 2019, 2 commits
  4. 08 May 2019, 1 commit
  5. 02 May 2019, 1 commit
    •
      gcc-9: properly declare the {pv,hv}clock_page storage · 459e3a21
      Committed by Linus Torvalds
      The pvlock_page and hvclock_page variables are (as the name implies)
      addresses to pages, created by the linker script.
      
      But we declared them as just "extern u8" variables, which _works_, but
      now that gcc does some more bounds checking, it causes warnings like
      
          warning: array subscript 1 is outside array bounds of ‘u8[1]’
      
      when we then access more than one byte from those variables.
      
      Fix this by simply making the declaration of the variables match
      reality, which makes the compiler happy too.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      459e3a21
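The shape of the fix can be sketched as follows. The page size and alignment attribute are assumptions matching the changelog's description (a linker-script-provided page); the symbol is defined here, rather than extern, only so the sketch compiles and links standalone.

```c
#include <assert.h>

typedef unsigned char u8;
#define PAGE_SIZE 4096

/* Before the fix the declaration was effectively "extern u8 pvclock_page;",
 * so any access past byte 0 tripped gcc-9's bounds checking.  Declaring
 * the symbol as a full page makes the declaration match reality. */
u8 pvclock_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
```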
  6. 20 April 2019, 1 commit
  7. 18 April 2019, 1 commit
  8. 15 December 2018, 1 commit
  9. 08 December 2018, 1 commit
  10. 05 December 2018, 2 commits
    •
      x86/vdso: Remove a stale/misleading comment from the linker script · 29434801
      Committed by Sean Christopherson
      Once upon a time, vdso2c aggressively stripped data from the vDSO
      image when generating the final userspace image.  This included
      stripping the .altinstructions and .altinstr_replacement sections.
      Eventually, the stripping process reverted to "objdump -S" and no
      longer removed the aforementioned sections, but the comment remained.
      
      Keeping the .alt* sections at the end of the PT_LOAD segment is no
      longer necessary, but there's no harm in doing so and it's a helpful
      reminder that they don't need to be included in the final vDSO image,
      i.e. someone may want to take another stab at zapping/stripping the
      unneeded sections.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: da861e18 ("x86, vdso: Get rid of the fake section mechanism")
      Link: http://lkml.kernel.org/r/20181204212600.28090-3-sean.j.christopherson@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      29434801
    •
      x86/vdso: Remove obsolete "fake section table" reservation · 24b7c77b
      Committed by Sean Christopherson
      At one point the vDSO image was manually stripped down by vdso2c in an
      attempt to minimize the size of the image mapped into userspace.  Part
      of that stripping process involved building a fake section table so as
      not to break userspace processes that parse the section table.  Memory
      for the fake section table was reserved in the .rodata section so that
      vdso2c could simply copy the entire PT_LOAD segment into the userspace
      image after building the fake table.
      
      Eventually, the entire fake section table approach was dropped in favor
      of stripping the vdso "the old fashioned way", i.e. via objdump -S.
      But, the reservation in .rodata for the fake table was left behind.
      Remove the reservation along with a few other related defines and
      section entries.
      
      Removing the fake section table placeholder zaps a whopping 0x340 bytes
      from the 64-bit vDSO image, which drops the current image's size to
      under 4k, i.e. reduces the effective size of the userspace vDSO mapping
      by a full page.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: da861e18 ("x86, vdso: Get rid of the fake section mechanism")
      Link: http://lkml.kernel.org/r/20181204212600.28090-2-sean.j.christopherson@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      24b7c77b
  11. 03 December 2018, 1 commit
    •
      x86: Fix various typos in comments · a97673a1
      Committed by Ingo Molnar
      Go over arch/x86/ and fix common typos in comments,
      and a typo in an actual function argument name.
      
      No change in functionality intended.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a97673a1
  12. 27 October 2018, 1 commit
  13. 08 October 2018, 5 commits
    •
      x86/fsgsbase/64: Clean up various details · ec3a9418
      Committed by Ingo Molnar
      So:
      
       - use 'extern' consistently for APIs
      
       - fix weird header guard
      
       - clarify code comments
      
       - reorder APIs by type
      
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chang S. Bae <chang.seok.bae@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Markus T Metzger <markus.t.metzger@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/1537312139-5580-2-git-send-email-chang.seok.bae@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ec3a9418
    •
      x86/segments: Introduce the 'CPUNODE' naming to better document the segment limit CPU/node NR trick · 22245bdf
      Committed by Ingo Molnar
      We have a special segment descriptor entry in the GDT, whose sole purpose is to
      encode the CPU and node numbers in its limit (size) field. There are user-space
      instructions that allow the reading of the limit field, which gives us a really
      fast way to read the CPU and node IDs from the vDSO for example.
      
      But the naming of related functionality does not make this clear, at all:
      
      	VDSO_CPU_SIZE
      	VDSO_CPU_MASK
      	__CPU_NUMBER_SEG
      	GDT_ENTRY_CPU_NUMBER
      	vdso_encode_cpu_node
      	vdso_read_cpu_node
      
      There are a number of problems:
      
       - The 'VDSO_CPU_SIZE' name doesn't really make it clear that this is a
         number of bits, nor does it make it clear which 'CPU' this refers to, i.e.
         that this is about a GDT entry whose limit encodes the CPU and node number.
      
       - Furthermore, the 'CPU_NUMBER' naming is actively misleading as well,
         because the segment limit encodes not just the CPU number but the
         node ID as well ...
      
      So use a better nomenclature all around: name everything related to this trick
      as 'CPUNODE', to make it clear that this is something special, and add
      _BITS to make it clear that these are bit counts, and propagate this to
      every affected name:
      
      	VDSO_CPU_SIZE         =>  VDSO_CPUNODE_BITS
      	VDSO_CPU_MASK         =>  VDSO_CPUNODE_MASK
      	__CPU_NUMBER_SEG      =>  __CPUNODE_SEG
      	GDT_ENTRY_CPU_NUMBER  =>  GDT_ENTRY_CPUNODE
      	vdso_encode_cpu_node  =>  vdso_encode_cpunode
      	vdso_read_cpu_node    =>  vdso_read_cpunode
      
      This, beyond being less confusing, also makes it easier to grep for all related
      functionality:
      
        $ git grep -i cpunode arch/x86
      
      Also, while at it, fix "return is not a function" style sloppiness in vdso_encode_cpunode().
      
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chang S. Bae <chang.seok.bae@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Markus T Metzger <markus.t.metzger@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/1537312139-5580-2-git-send-email-chang.seok.bae@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      22245bdf
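The encoding trick behind the renamed helpers can be sketched in plain C. The bit layout (12 CPU bits, node number above them) is assumed from the names in the changelog; the real vdso_read_cpunode() reads the segment limit with the LSL instruction, whereas this sketch takes the limit as a parameter.

```c
#include <assert.h>

#define VDSO_CPUNODE_BITS 12     /* assumed layout: low 12 bits = CPU */
#define VDSO_CPUNODE_MASK 0xfff

/* Pack CPU and node numbers into one value, as the GDT entry's limit does. */
static unsigned long vdso_encode_cpunode(int cpu, int node)
{
    return ((unsigned long)node << VDSO_CPUNODE_BITS) | (unsigned int)cpu;
}

/* Unpack; in the vDSO the 'limit' argument comes from an LSL read of
 * __CPUNODE_SEG rather than being passed in. */
static void vdso_read_cpunode(unsigned long limit, unsigned *cpu, unsigned *node)
{
    if (cpu)
        *cpu = limit & VDSO_CPUNODE_MASK;
    if (node)
        *node = limit >> VDSO_CPUNODE_BITS;
}
```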
    •
      x86/vdso: Initialize the CPU/node NR segment descriptor earlier · b2e2ba57
      Committed by Chang S. Bae
      Currently the CPU/node NR segment descriptor (GDT_ENTRY_CPU_NUMBER) is
      initialized relatively late during CPU init, from the vCPU code, which
      has a number of disadvantages, such as requiring hotplug CPU notifiers
      and SMP cross-calls.
      
      Instead just initialize it much earlier, directly in cpu_init().
      
      This reduces complexity and increases robustness.
      
      [ mingo: Wrote new changelog. ]
      Suggested-by: H. Peter Anvin <hpa@zytor.com>
      Suggested-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Markus T Metzger <markus.t.metzger@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1537312139-5580-9-git-send-email-chang.seok.bae@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b2e2ba57
    •
      x86/vdso: Introduce helper functions for CPU and node number · ffebbaed
      Committed by Chang S. Bae
      Clean up the CPU/node number related code a bit, to make it more apparent
      how we are encoding/extracting the CPU and node fields from the
      segment limit.
      
      No change in functionality intended.
      
      [ mingo: Wrote new changelog. ]
      Suggested-by: Andy Lutomirski <luto@kernel.org>
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Markus T Metzger <markus.t.metzger@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Rik van Riel <riel@surriel.com>
      Link: http://lkml.kernel.org/r/1537312139-5580-8-git-send-email-chang.seok.bae@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ffebbaed
    •
      x86/segments/64: Rename the GDT PER_CPU entry to CPU_NUMBER · c4755613
      Committed by Chang S. Bae
      The old 'per CPU' naming was misleading: 64-bit kernels don't use this
      GDT entry for per CPU data, but to store the CPU (and node) ID.
      
      [ mingo: Wrote new changelog. ]
      Suggested-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Markus T Metzger <markus.t.metzger@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Rik van Riel <riel@surriel.com>
      Link: http://lkml.kernel.org/r/1537312139-5580-7-git-send-email-chang.seok.bae@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c4755613
  14. 06 October 2018, 1 commit
  15. 05 October 2018, 9 commits
    •
      x86/vdso: Remove "memory" clobbers in the vDSO syscall fallbacks · 89fe0a1f
      Committed by Andy Lutomirski
      When a vDSO clock function falls back to the syscall, no special
      barriers or ordering is needed, and the syscall fallbacks don't
      clobber any memory that is not explicitly listed in the asm
      constraints.  Remove the "memory" clobber.
      
      This causes minor changes to the generated code, but otherwise has
      no obvious performance impact.  I think it's nice to have, though,
      since it may help the optimizer in the future.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/3a7438f5fb2422ed881683d2ccffd7f987b2dc44.1538689401.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      89fe0a1f
    •
      x86/vdso: Move cycle_last handling into the caller · 3e89bf35
      Committed by Thomas Gleixner
      Dereferencing gtod->cycle_last all over the place and doing the cycles <
      last comparison in the vclock read functions generates horrible code. Doing
      it at the call site is much better and gains a few cycles both for TSC and
      pvclock.
      
      Caveat: This adds the comparison to the hyperv vclock as well, but I have
      no way to test that.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Rickard <matt@softrans.com.au>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: devel@linuxdriverproject.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Juergen Gross <jgross@suse.com>
      Link: https://lkml.kernel.org/r/20180917130707.741440803@linutronix.de
      3e89bf35
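The resulting shape, with the comparison hoisted to the call site, can be sketched like this (the function names and the stub cycle counter are illustrative, not the kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for a TSC read; the real code uses rdtsc_ordered(). */
static uint64_t stub_read_cycles(void) { return 1000; }

/* One comparison at the call site instead of one inside every vclock read
 * function: clamp to cycle_last so time never appears to go backwards. */
static uint64_t vgetcyc(uint64_t cycle_last)
{
    uint64_t cycles = stub_read_cycles();
    return cycles > cycle_last ? cycles : cycle_last;
}
```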
    •
      x86/vdso: Simplify the invalid vclock case · 4f72adc5
      Committed by Thomas Gleixner
      The code flow for the vclocks is convoluted as it requires the vclocks
      which can be invalidated separately from the vsyscall_gtod_data sequence to
      store the fact in a separate variable. That's inefficient.
      
      Restructure the code so the vclock readout returns cycles and the
      conversion to nanoseconds is handled at the call site.
      
      If the clock gets invalidated or vclock is already VCLOCK_NONE, return
      U64_MAX as the cycle value, which is invalid for all clocks and leave the
      sequence loop immediately in that case by calling the fallback function
      directly.
      
      This allows removing the gettimeofday fallback, as it now uses the
      clock_gettime() fallback and does the nanoseconds to microseconds
      conversion in the same way as it does when the vclock is functional. It
      does not make a difference whether the division by 1000 happens in the
      kernel fallback or in userspace.
      
      Generates way better code and gains a few cycles back.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Rickard <matt@softrans.com.au>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: devel@linuxdriverproject.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Juergen Gross <jgross@suse.com>
      Link: https://lkml.kernel.org/r/20180917130707.657928937@linutronix.de
      4f72adc5
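A sketch of the restructured flow, with U64_MAX as the universal invalid cycle value; the constants, stub values, and function names here are illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define VCLOCK_NONE 0

/* Readout returns cycles, or UINT64_MAX when no vclock is usable. */
static uint64_t vclock_read(int mode)
{
    if (mode == VCLOCK_NONE)
        return UINT64_MAX;
    return 1234;                    /* stands in for a real TSC read */
}

/* The call site converts to nanoseconds, or bails out immediately so the
 * caller can invoke the clock_gettime() syscall fallback. */
static int do_hres(int mode, uint64_t *ns)
{
    uint64_t cycles = vclock_read(mode);
    if (cycles == UINT64_MAX)
        return -1;                  /* leave the seq loop, use fallback */
    *ns = cycles * 10;              /* stands in for mult/shift scaling */
    return 0;
}
```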
    •
      x86/vdso: Replace the clockid switch case · f3e83938
      Committed by Thomas Gleixner
      Now that the time getter functions use the clockid as index into the
      storage array for the base time access, the switch case can be replaced.
      
      - Check for clockid >= MAX_CLOCKS and for negative clockid (CPU/FD) first
        and call the fallback function right away.
      
      - After establishing that clockid is < MAX_CLOCKS, convert the clockid to a
        bitmask
      
       - Check for the supported high resolution and coarse functions by ANDing
         the bitmask of supported clocks and checking whether a bit is set.
      
      This completely avoids jump tables, reduces the number of conditionals and
      makes the VDSO extensible for other clock ids.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Rickard <matt@softrans.com.au>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: devel@linuxdriverproject.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Juergen Gross <jgross@suse.com>
      Link: https://lkml.kernel.org/r/20180917130707.574315796@linutronix.de
      f3e83938
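The switch-free dispatch described by the steps above can be sketched as follows; the clock id values are the standard Linux ones, while the masks and the classify() helper are illustrative:

```c
#include <assert.h>

#define MAX_CLOCKS             16
#define CLOCK_REALTIME          0
#define CLOCK_MONOTONIC         1
#define CLOCK_REALTIME_COARSE   5
#define CLOCK_MONOTONIC_COARSE  6

#define VDSO_HRES   ((1U << CLOCK_REALTIME) | (1U << CLOCK_MONOTONIC))
#define VDSO_COARSE ((1U << CLOCK_REALTIME_COARSE) | (1U << CLOCK_MONOTONIC_COARSE))

/* 1 = high resolution path, 2 = coarse path, -1 = syscall fallback.
 * The unsigned comparison also rejects negative (CPU/FD) clock ids,
 * so a single check covers both out-of-range cases. */
static int classify(int clkid)
{
    unsigned int msk;

    if ((unsigned int)clkid >= MAX_CLOCKS)
        return -1;
    msk = 1U << clkid;          /* convert clockid to a bitmask */
    if (msk & VDSO_HRES)
        return 1;
    if (msk & VDSO_COARSE)
        return 2;
    return -1;
}
```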
    •
      x86/vdso: Collapse coarse functions · 6deec5bd
      Committed by Thomas Gleixner
      do_realtime_coarse() and do_monotonic_coarse() are now the same except for
      the storage array index. Hand the index in as an argument and collapse the
      functions.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Rickard <matt@softrans.com.au>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: devel@linuxdriverproject.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Juergen Gross <jgross@suse.com>
      Link: https://lkml.kernel.org/r/20180917130707.490733779@linutronix.de
      6deec5bd
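The collapse amounts to parameterizing the duplicated functions by the storage array index; a toy sketch (struct and data are illustrative):

```c
#include <assert.h>

struct ts_demo { long long sec; long nsec; };

/* Per-clock base times, indexed by clock id (toy values). */
static struct ts_demo basetime[8] = {
    [0] = { 100, 1 },   /* CLOCK_REALTIME slot  */
    [1] = { 200, 2 },   /* CLOCK_MONOTONIC slot */
};

/* One coarse getter replaces do_realtime_coarse()/do_monotonic_coarse();
 * the clock id selects the slot. */
static void do_coarse(int clk, struct ts_demo *ts)
{
    *ts = basetime[clk];
}
```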
    •
      x86/vdso: Collapse high resolution functions · e9a62f76
      Committed by Thomas Gleixner
      do_realtime() and do_monotonic() are now the same except for the storage
      array index. Hand the index in as an argument and collapse the functions.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Rickard <matt@softrans.com.au>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: devel@linuxdriverproject.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Juergen Gross <jgross@suse.com>
      Link: https://lkml.kernel.org/r/20180917130707.407955860@linutronix.de
      e9a62f76
    •
      x86/vdso: Introduce and use vgtod_ts · 49116f20
      Committed by Thomas Gleixner
      It's desired to support more clocks in the VDSO, e.g. CLOCK_TAI. This
      results either in indirect calls due to the larger switch case, which then
      requires retpolines or when the compiler is forced to avoid jump tables it
      results in even more conditionals.
      
      To avoid both variants which are bad for performance the high resolution
      functions and the coarse grained functions will be collapsed into one for
      each. That requires to store the clock specific base time in an array.
      
      Introduce struct vgtod_ts for storage and convert the data store, the
      update function and the individual clock functions over to use it.
      
      The new storage no longer uses gtod_long_t for seconds depending on a
      32- or 64-bit compile, because the seconds need to be the full 64-bit
      value even on 32-bit once a Y2038-safe function is added. There is no
      point in keeping the distinction alive in the internal representation.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Rickard <matt@softrans.com.au>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: devel@linuxdriverproject.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Juergen Gross <jgross@suse.com>
      Link: https://lkml.kernel.org/r/20180917130707.324679401@linutronix.de
      49116f20
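The storage change can be sketched as below; the field and array names follow the changelog, while the array size is an illustrative assumption:

```c
#include <assert.h>
#include <stdint.h>

/* Clock-specific base time.  'sec' is always 64-bit (no gtod_long_t), so
 * the same layout serves 32-bit builds once Y2038-safe functions arrive. */
struct vgtod_ts {
    uint64_t sec;
    uint64_t nsec;
};

#define VGTOD_BASES 12   /* illustrative: one slot per supported clock id */

struct vsyscall_gtod_data_demo {
    unsigned int seq;                       /* sequence count              */
    struct vgtod_ts basetime[VGTOD_BASES];  /* indexed directly by clockid */
};
```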
    •
      x86/vdso: Use unsigned int consistently for vsyscall_gtod_data::seq · 77e9c678
      Committed by Thomas Gleixner
      The sequence count in vgtod_data is unsigned int, but the call sites use
      unsigned long, which is a pointless exercise. Fix the call sites and
      replace bare 'unsigned' with 'unsigned int' while at it.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Rickard <matt@softrans.com.au>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: devel@linuxdriverproject.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Juergen Gross <jgross@suse.com>
      Link: https://lkml.kernel.org/r/20180917130707.236250416@linutronix.de
      77e9c678
    •
      x86/vdso: Enforce 64bit clocksource · a51e996d
      Committed by Thomas Gleixner
      All VDSO clock sources are TSC based and use CLOCKSOURCE_MASK(64). There is
      no point in masking with all FF. Get rid of it and enforce the mask in the
      sanity checker.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Rickard <matt@softrans.com.au>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: devel@linuxdriverproject.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Juergen Gross <jgross@suse.com>
      Link: https://lkml.kernel.org/r/20180917130707.151963007@linutronix.de
      a51e996d
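Why the mask is redundant for 64-bit clock sources can be shown in one line; the macro is reimplemented here for the sketch with the same semantics as the kernel's CLOCKSOURCE_MASK:

```c
#include <assert.h>
#include <stdint.h>

/* CLOCKSOURCE_MASK(bits) is the low-'bits' all-ones mask.  At 64 bits it
 * is ~0, so "cycles & mask" is a no-op and the masking can be dropped. */
#define CLOCKSOURCE_MASK(bits) (UINT64_MAX >> (64 - (bits)))
```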
  16. 04 October 2018, 1 commit
  17. 03 October 2018, 1 commit
  18. 02 October 2018, 1 commit
    •
      x86/vdso: Fix asm constraints on vDSO syscall fallbacks · 715bd9d1
      Committed by Andy Lutomirski
      The syscall fallbacks in the vDSO have incorrect asm constraints.
      They are not marked as writing to their outputs -- instead, they are
      marked as clobbering "memory", which is useless.  In particular, gcc
      is smart enough to know that the timespec parameter hasn't escaped,
      so a memory clobber doesn't clobber it.  And passing a pointer as an
      asm *input* does not tell gcc that the pointed-to value is changed.
      
      Add in the fact that the asm instructions weren't volatile, and gcc
      was free to omit them entirely unless their sole output (the return
      value) is used.  Which it is (phew!), but that stops happening with
      some upcoming patches.
      
      As a trivial example, the following code:
      
      void test_fallback(struct timespec *ts)
      {
      	vdso_fallback_gettime(CLOCK_MONOTONIC, ts);
      }
      
      compiles to:
      
      00000000000000c0 <test_fallback>:
        c0:   c3                      retq
      
      To add insult to injury, the RCX and R11 clobbers on 64-bit
      builds were missing.
      
      The "memory" clobber is also unnecessary -- no ordering with respect to
      other memory operations is needed, but that's going to be fixed in a
      separate not-for-stable patch.
      
      Fixes: 2aae950b ("x86_64: Add vDSO for x86-64 with gettimeofday/clock_gettime/getcpu")
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/2c0231690551989d2fafa60ed0e7b5cc8b403908.1538422295.git.luto@kernel.org
      715bd9d1
  19. 21 August 2018, 1 commit
  20. 06 August 2018, 1 commit
    •
      x86: vdso: Use $LD instead of $CC to link · 379d98dd
      Committed by Alistair Strachan
      The vdso{32,64}.so can fail to link with CC=clang when clang tries to find
      a suitable GCC toolchain to link these libraries with.
      
      /usr/bin/ld: arch/x86/entry/vdso/vclock_gettime.o:
        access beyond end of merged section (782)
      
      This happens because the host environment leaked into the cross compiler
      environment due to the way clang searches for suitable GCC toolchains.
      
      Clang is a retargetable compiler, and each invocation of it must provide
      --target=<something> --gcc-toolchain=<something> to allow it to find the
      correct binutils for cross compilation. These flags had been added to
      KBUILD_CFLAGS, but the vdso code uses CC and not KBUILD_CFLAGS (for various
      reasons) which breaks clang's ability to find the correct linker when cross
      compiling.
      
      Most of the time this goes unnoticed because the host linker is new enough
      to work anyway, or is incompatible and skipped, but this cannot be reliably
      assumed.
      
      This change alters the vdso makefile to just use LD directly, which
      bypasses clang and thus the searching problem. The makefile will just use
      ${CROSS_COMPILE}ld instead, which is always what we want. This matches the
      method used to link vmlinux.
      
      This drops references to DISABLE_LTO; this option doesn't seem to be set
      anywhere, and not knowing what its possible values are, it's not clear how
      to convert it from CC to LD flag.
      Signed-off-by: Alistair Strachan <astrachan@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: kernel-team@android.com
      Cc: joel@joelfernandes.org
      Cc: Andi Kleen <andi.kleen@intel.com>
      Link: https://lkml.kernel.org/r/20180803173931.117515-1-astrachan@google.com
      379d98dd
  21. 18 July 2018, 1 commit
  22. 03 July 2018, 1 commit
  23. 15 May 2018, 3 commits
  24. 06 May 2018, 1 commit