1. 22 May 2014, 1 commit
  2. 15 May 2014, 1 commit
    • x86-64, modify_ldt: Make support for 16-bit segments a runtime option · fa81511b
      Linus Torvalds authored
      Checkin:
      
      b3b42ac2 x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels
      
      disabled 16-bit segments on 64-bit kernels due to an information
      leak.  However, it does seem that people are genuinely using Wine to
      run old 16-bit Windows programs on Linux.
      
      A proper fix for this ("espfix64") is coming in the upcoming merge
      window, but as a temporary fix, create a sysctl to allow the
      administrator to re-enable support for 16-bit segments.
      
      It adds a "/proc/sys/abi/ldt16" sysctl that defaults to zero (off). If
      you hit this issue and care about your old Windows program more than
      you care about a kernel stack address information leak, you can do
      
         echo 1 > /proc/sys/abi/ldt16
      
      as root (add it to your startup scripts), and you should be ok.
      
      The sysctl table is only added if you have COMPAT support enabled on
      x86-64, but I assume anybody who runs old Windows binaries very much
      does that ;)
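
      As a rough sketch of the kernel side (the table and flag names here
      illustrate the pattern; they are not a quote of the patch), the knob
      is just an integer sysctl registered under the "abi" directory:

         static int sysctl_ldt16;   /* 0 = 16-bit segments stay disabled */

         /* Shows up as /proc/sys/abi/ldt16; root-writable (mode 0644). */
         static struct ctl_table abi_table2[] = {
                 {
                         .procname     = "ldt16",
                         .data         = &sysctl_ldt16,
                         .maxlen       = sizeof(int),
                         .mode         = 0644,
                         .proc_handler = proc_dointvec,
                 },
                 { }
         };

      modify_ldt() then consults the flag before rejecting a descriptor
      that lacks seg_32bit.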
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Link: http://lkml.kernel.org/r/CA%2B55aFw9BPoD10U1LfHbOMpHWZkvJTkMcfCs9s3urPr1YyWBxw@mail.gmail.com
      Cc: <stable@vger.kernel.org>
      fa81511b
  3. 04 Apr 2014, 1 commit
  4. 31 Mar 2014, 1 commit
  5. 26 Mar 2014, 2 commits
  6. 25 Mar 2014, 1 commit
  7. 21 Mar 2014, 2 commits
  8. 19 Mar 2014, 9 commits
  9. 14 Mar 2014, 1 commit
  10. 14 Feb 2014, 1 commit
  11. 12 Jan 2014, 1 commit
  12. 07 Jan 2014, 1 commit
  13. 06 Nov 2013, 1 commit
  14. 18 Jul 2013, 1 commit
  15. 19 Jun 2013, 1 commit
  16. 15 Feb 2013, 1 commit
  17. 12 Dec 2012, 1 commit
  18. 28 Nov 2012, 1 commit
  19. 25 Sep 2012, 1 commit
    • time: Convert x86_64 to using new update_vsyscall · 650ea024
      John Stultz authored
      Switch x86_64 to using sub-ns precise vsyscall
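
      "Sub-ns precise" means the vsyscall data keeps its base time in
      shifted nanoseconds rather than rounding to whole nanoseconds at
      every update. A sketch of the read-side arithmetic (the field and
      helper names are illustrative, not the patch's exact code):

         /* cycles elapsed since the last timekeeping update */
         u64 delta = (read_clock_cycles() - gtod->cycle_last) & gtod->mask;

         /* wall_time_snsec is pre-shifted (ns << shift), so the
          * fractional nanoseconds accumulated by the timekeeping
          * core survive until this one final right shift */
         u64 ns = (gtod->wall_time_snsec + delta * gtod->mult) >> gtod->shift;

      Only that final shift discards precision, instead of every update
      rounding the base down to a whole nanosecond.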
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      650ea024
  20. 08 Jun 2012, 1 commit
  21. 24 Mar 2012, 3 commits
    • coredump: remove VM_ALWAYSDUMP flag · 909af768
      Jason Baron authored
      The motivation for this patchset was that I was looking at a way for
      a qemu-kvm process to exclude the guest memory from its core dump,
      which can be quite large.  There are already a number of filter
      flags in /proc/<pid>/coredump_filter; however, these allow one to
      specify 'types' of kernel memory, not specific address ranges (which
      is needed in this case).
      
      Since there are no more vma flags available, the first patch eliminates
      the need for the 'VM_ALWAYSDUMP' flag.  The flag is used internally by
      the kernel to mark vdso and vsyscall pages.  However, it is simple
      enough to check if a vma covers a vdso or vsyscall page without the need
      for this flag.
      
      The second patch then replaces the 'VM_ALWAYSDUMP' flag with a new
      'VM_NODUMP' flag, which can be set by userspace using new madvise flags:
      'MADV_DONTDUMP', and unset via 'MADV_DODUMP'.  The core dump filters
      continue to work the same as before unless 'MADV_DONTDUMP' is set on the
      region.
      
      The qemu code which implements this feature is at:
      
        http://people.redhat.com/~jbaron/qemu-dump/qemu-dump.patch
      
      In my testing the qemu core dump shrank from 383MB -> 13MB with this
      patch.
      
      I also believe that the 'MADV_DONTDUMP' flag might be useful for
      security sensitive apps, which might want to select which areas are
      dumped.
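
      On the userspace side, opting a region out of core dumps is a single
      madvise() call. A minimal sketch (the anonymous mapping stands in
      for qemu's guest RAM):

         #include <sys/mman.h>

         int main(void)
         {
                 size_t len = 64UL << 20;   /* stand-in for guest memory */
                 void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                 if (mem == MAP_FAILED)
                         return 1;

                 /* Exclude the region from any future core dump. */
                 if (madvise(mem, len, MADV_DONTDUMP))
                         return 1;

                 /* madvise(mem, len, MADV_DODUMP) would re-include it. */
                 return 0;
         }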
      
      This patch:
      
      The VM_ALWAYSDUMP flag is currently used by the coredump code to
      indicate that a vma is part of a vsyscall or vdso section.  However, we
      can determine if a vma is in one of these sections by checking it against
      the gate_vma and checking for a non-NULL return value from
      arch_vma_name().  Thus, freeing a valuable vma bit.
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Acked-by: Roland McGrath <roland@hack.frob.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      909af768
    • x86-64: Inline vdso clock_gettime helpers · 5f293474
      Andy Lutomirski authored
      This is about a 3% speedup on Sandy Bridge.
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      5f293474
    • x86-64: Simplify and optimize vdso clock_gettime monotonic variants · 91ec87d5
      Andy Lutomirski authored
      We used to store the wall-to-monotonic offset and the realtime base.
      It's faster to precompute the monotonic base.
      
      This is about a 3% speedup on Sandy Bridge for CLOCK_MONOTONIC.
      It's much more impressive for CLOCK_MONOTONIC_COARSE.
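
      The trade is one extra add on the (rare) update side for one fewer
      add and fewer loads on every read. Roughly (field names are
      illustrative; the carry from snsec into sec is elided):

         /* update side, once per timekeeping tick: fold the offset in */
         gtod->monotonic_time_sec   = wall_sec + wtm.tv_sec;
         gtod->monotonic_time_snsec = wall_snsec
                                      + ((u64)wtm.tv_nsec << gtod->shift);

         /* read side: add the cycle delta straight to the monotonic
          * base; no per-call wall_to_monotonic correction remains
          * (delta, mult, shift as in the realtime path) */
         u64 ns = (gtod->monotonic_time_snsec + delta * gtod->mult)
                  >> gtod->shift;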
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      91ec87d5
  22. 16 Mar 2012, 2 commits
    • x86: vdso: Use seqcount instead of seqlock · 2ab51657
      Thomas Gleixner authored
      The update of the vdso data happens under xtime_lock, so adding a
      nested lock is pointless. Just use a seqcount to sync the readers.
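
      A seqcount reader is just a retry loop, which is all the vdso read
      side needs given that the writer is already serialized by xtime_lock
      (a sketch; the gtod structure and sampling helper are illustrative):

         unsigned seq;
         u64 ns;

         do {
                 seq = read_seqcount_begin(&gtod->seq);
                 ns  = sample_clock_data(gtod);   /* read vdso data */
         } while (read_seqcount_retry(&gtod->seq, seq));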
      Reviewed-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      2ab51657
    • time: x86: Fix race switching from vsyscall to non-vsyscall clock · a939e817
      John Stultz authored
      When switching from a vsyscall capable to a non-vsyscall capable
      clocksource, there was a small race, where the last vsyscall
      gettimeofday before the switch might return an invalid time value
      using the new non-vsyscall enabled clocksource values after the
      switch is complete.
      
      This is due to the vsyscall code checking the vclock_mode once
      outside of the seqcount protected section. After it reads the
      vclock mode, it doesn't re-check that the sampled clock data
      that is obtained in the seqcount critical section still matches.
      
      The fix is to sample vclock_mode inside the protected section,
      and as long as it isn't VCLOCK_NONE, return the calculated
      value. If it has changed and is now VCLOCK_NONE, fall back
      to the syscall gettime calculation.
      
      v2:
        * Cleanup checks as suggested by tglx
        * Also fix same issue present in gettimeofday path
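
      Sketched, the fixed read side looks roughly like this (names are
      illustrative): the mode is captured inside the retry loop, so a
      concurrent clocksource switch either forces a retry or trips the
      fallback check below.

         do {
                 seq  = read_seqcount_begin(&gtod->seq);
                 mode = gtod->clock.vclock_mode;  /* under the seqcount */
                 ts->tv_sec = gtod->wall_time_sec;
                 ns = gtod->wall_time_snsec + vgetsns(&mode);
                 ns >>= gtod->clock.shift;
         } while (read_seqcount_retry(&gtod->seq, seq));

         /* The clocksource lost vsyscall support mid-read: do the real
          * syscall instead of returning a value built from stale data. */
         if (mode == VCLOCK_NONE)
                 return vdso_fallback_gettime(clock, ts);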
      
      CC: Andy Lutomirski <luto@amacapital.net>
      CC: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      a939e817
  23. 23 Feb 2012, 1 commit
  24. 22 Feb 2012, 1 commit
  25. 21 Feb 2012, 1 commit
    • x32: Add x32 VDSO support · 1a21d4e0
      H. J. Lu authored
      Add support for the x32 VDSO.  The x32 VDSO takes advantage of the
      similarity between the x86-64 and the x32 ABIs to contain the same
      content; only the container differs, as the x32 VDSO is naturally an
      x32 shared object.
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      1a21d4e0
  26. 24 Aug 2011, 1 commit
  27. 06 Aug 2011, 1 commit
    • x86, amd: Avoid cache aliasing penalties on AMD family 15h · dfb09f9b
      Borislav Petkov authored
      This patch provides performance tuning for the "Bulldozer" CPU. With its
      shared instruction cache there is a chance of generating an excessive
      number of cache cross-invalidates when running specific workloads on the
      cores of a compute module.
      
      This excessive amount of cross-invalidations can be observed if cache
      lines backed by shared physical memory alias in bits [14:12] of their
      virtual addresses, as those bits are used for the index generation.
      
      This patch addresses the issue by clearing all the bits in the [14:12]
      slice of the file mapping's virtual address at generation time, thus
      forcing those bits to be the same for all mappings of a single shared
      library across processes and, in doing so, avoiding instruction cache
      aliases.
      
      It also adds the command line option "align_va_addr=(32|64|on|off)" with
      which virtual address alignment can be enabled for 32-bit or 64-bit x86
      individually, or both, or be completely disabled.
      
      This change leaves virtual region address allocation on other families
      and/or vendors unaffected.
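
      Mechanically, making bits [14:12] agree across processes only
      requires rounding each candidate mmap address to a 32 KiB boundary;
      mmap addresses are already page aligned, so bits [11:0] are zero.
      A sketch (the mask name is illustrative):

         #define VA_ALIGN_MASK  (0x7UL << 12)   /* bits [14:12] */

         /* Round up so bits [14:12] are zero for every mapping of the
          * same shared library, removing the I-cache index aliases. */
         static unsigned long align_va(unsigned long addr)
         {
                 return (addr + VA_ALIGN_MASK) & ~VA_ALIGN_MASK;
         }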
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1312550110-24160-2-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      dfb09f9b