1. 24 May 2016, 1 commit
  2. 13 April 2016, 1 commit
    • x86/vdso: Remove direct HPET access through the vDSO · 1ed95e52
      Authored by Andy Lutomirski
      Allowing user code to map the HPET is problematic.  HPET
      implementations are notoriously buggy, and there are probably many
      machines on which even MMIO reads from bogus HPET addresses are
      problematic.
      
      We have a report that the Dell Precision M2800 with:
      
        ACPI: HPET 0x00000000C8FE6238 000038 (v01 DELL   CBX3  01072009 AMI. 00000005)
      
      either accesses the HPET extremely slowly or actually hangs outright,
      causing soft lockups to be reported if users do unexpected things to
      the HPET.
      
      The vclock HPET code has also always been a questionable speedup.
      Accessing an HPET is exceedingly slow (on the order of several
      microseconds), so the added overhead of requiring a syscall to read
      the HPET is a small fraction of the total cost of accessing it.
      
      To avoid future problems, let's just delete the code entirely.
      
      In the long run, this could actually be a speedup.  Waiman Long has a
      patch to optimize the case where multiple CPUs contend for the HPET,
      but that won't help unless all the accesses are mediated by the
      kernel.
      Reported-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Borislav Petkov <bp@alien8.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Waiman Long <Waiman.Long@hpe.com>
      Cc: Waiman Long <waiman.long@hpe.com>
      Link: http://lkml.kernel.org/r/d2f90bba98db9905041cff294646d290d378f67a.1460074438.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
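      To put the cost argument in perspective, a minimal userspace
      micro-benchmark (a sketch, not part of the patch) can measure the
      average cost of a clock_gettime() call; with the HPET as the active
      clocksource, the multi-microsecond MMIO read dominates and the extra
      syscall entry is a small fraction of the total:

        /* Sketch: average cost of clock_gettime(CLOCK_MONOTONIC). */
        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            enum { ITERS = 1000000 };
            struct timespec start, now;

            clock_gettime(CLOCK_MONOTONIC, &start);
            for (int i = 0; i < ITERS; i++)
                clock_gettime(CLOCK_MONOTONIC, &now);

            /* 'now' holds the last reading from the loop. */
            long ns = (now.tv_sec - start.tv_sec) * 1000000000L +
                      (now.tv_nsec - start.tv_nsec);
            printf("avg clock_gettime(): %ld ns\n", ns / ITERS);
            return 0;
        }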
  3. 30 January 2016, 2 commits
  4. 12 January 2016, 4 commits
  5. 11 December 2015, 2 commits
  6. 07 October 2015, 1 commit
  7. 06 July 2015, 1 commit
  8. 04 June 2015, 1 commit
    • x86/asm/entry, x86/vdso: Move the vDSO code to arch/x86/entry/vdso/ · d603c8e1
      Authored by Ingo Molnar
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  9. 21 December 2014, 1 commit
    • x86_64, vdso: Fix the vdso address randomization algorithm · 394f56fe
      Authored by Andy Lutomirski
      The theory behind vdso randomization is that it's mapped at a random
      offset above the top of the stack.  To avoid wasting a page of
      memory for an extra page table, the vdso isn't supposed to extend
      past the lowest PMD into which it can fit.  Other than that, the
      address should be a uniformly distributed address that meets all of
      the alignment requirements.
      
      The current algorithm is buggy: the vdso has about a 50% probability
      of being at the very end of a PMD.  The current algorithm also has a
      decent chance of failing outright due to incorrect handling of the
      case where the top of the stack is near the top of its PMD.
      
      This fixes the implementation.  The paxtest estimate of vdso
      "randomisation" improves from 11 bits to 18 bits.  (Disclaimer: I
      don't know what the paxtest code is actually calculating.)
      
      It's worth noting that this algorithm is inherently biased: the vdso
      is more likely to end up near the end of its PMD than near the
      beginning.  Ideally we would either nix the PMD sharing requirement
      or jointly randomize the vdso and the stack to reduce the bias.
      
      In the meantime, this is a considerable improvement with basically
      no risk of compatibility issues, since the allowed outputs of the
      algorithm are unchanged.
      
      As an easy test, doing this:
      
      for i in `seq 10000`; do
        grep -P vdso /proc/self/maps | cut -d- -f1
      done | sort | uniq -d
      
      used to produce lots of output (1445 lines on my most recent run).
      A tiny subset looks like this:
      
      7fffdfffe000
      7fffe01fe000
      7fffe05fe000
      7fffe07fe000
      7fffe09fe000
      7fffe0bfe000
      7fffe0dfe000
      
      Note the suspicious fe000 endings.  With the fix, I get a much more
      palatable 76 repeated addresses.
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
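      For illustration, a userspace sketch of the fixed placement
      arithmetic (simplified: the kernel uses get_random_int(), caps the
      range at TASK_SIZE_MAX, and applies an extra alignment pass
      afterwards): cap the candidate range at the PMD boundary above the
      start address, then pick a uniformly random page-aligned slot
      within it.

        #include <stdint.h>
        #include <stdlib.h>

        #define PAGE_SHIFT 12
        #define PMD_SIZE   (1UL << 21)          /* 2 MB */
        #define PMD_MASK   (~(PMD_SIZE - 1))

        /* Sketch of the fixed vdso_addr() placement logic. */
        static uint64_t vdso_addr(uint64_t start, uint64_t len)
        {
            /* PMD boundary just above 'start'... */
            uint64_t end = (start + PMD_SIZE - 1) & PMD_MASK;

            /* ...minus room for the mapping, so it cannot spill over. */
            end -= len;
            if (end <= start)
                return start;   /* no slack left in this PMD */

            /* Uniform choice among page-aligned slots in [start, end]. */
            uint64_t slots = ((end - start) >> PAGE_SHIFT) + 1;
            return start + ((uint64_t)(rand() % slots) << PAGE_SHIFT);
        }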
  10. 02 November 2014, 1 commit
    • x86: vdso: Fix build with older gcc · a92f101b
      Authored by Andrew Morton
      gcc-4.4.4:
      
      arch/x86/vdso/vma.c: In function 'vgetcpu_cpu_init':
      arch/x86/vdso/vma.c:247: error: unknown field 'limit0' specified in initializer
      arch/x86/vdso/vma.c:247: warning: missing braces around initializer
      arch/x86/vdso/vma.c:247: warning: (near initialization for '(anonymous).<anonymous>')
      arch/x86/vdso/vma.c:248: error: unknown field 'limit' specified in initializer
      arch/x86/vdso/vma.c:248: warning: excess elements in struct initializer
      arch/x86/vdso/vma.c:248: warning: (near initialization for '(anonymous)')
      ....
      
      I couldn't find any way of tricking it into accepting an initializer
      format :(
      Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Fixes: 25880156 ("x86/vdso: Change the PER_CPU segment to use struct desc_struct")
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
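      For reference, a self-contained illustration of the gcc-4.4
      limitation ('struct desc' below is a simplified stand-in, not the
      kernel's real desc_struct layout): naming a member of an anonymous
      struct in a designated initializer trips the "unknown field" error,
      while zero-initializing and assigning afterwards is accepted by old
      and new compilers alike.

        struct desc {
            union {
                struct {
                    unsigned int limit0 : 16;
                    unsigned int base0  : 16;
                };
                unsigned int raw;
            };
        };

        /* gcc-4.4: "error: unknown field 'limit0' specified in initializer"
         *     struct desc d = { .limit0 = 0xf };
         * Portable alternative: zero-initialize, then assign the fields. */
        static struct desc make_desc(unsigned int limit, unsigned int base)
        {
            struct desc d = { };
            d.limit0 = limit & 0xffff;
            d.base0  = base  & 0xffff;
            return d;
        }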
  11. 28 October 2014, 5 commits
  12. 26 July 2014, 1 commit
  13. 12 July 2014, 1 commit
  14. 11 July 2014, 1 commit
  15. 21 May 2014, 2 commits
  16. 06 May 2014, 3 commits
  17. 21 March 2014, 2 commits
  18. 19 March 2014, 1 commit
  19. 12 December 2012, 1 commit
  20. 24 March 2012, 1 commit
    • coredump: remove VM_ALWAYSDUMP flag · 909af768
      Authored by Jason Baron
      The motivation for this patchset was that I was looking for a way for
      a qemu-kvm process to exclude the guest memory, which can be quite
      large, from its core dump.  There are already a number of filter flags
      in /proc/<pid>/coredump_filter; however, these allow one to specify
      'types' of kernel memory, not the specific address ranges needed in
      this case.
      
      Since there are no more vma flags available, the first patch eliminates
      the need for the 'VM_ALWAYSDUMP' flag.  The flag is used internally by
      the kernel to mark vdso and vsyscall pages.  However, it is simple
      enough to check if a vma covers a vdso or vsyscall page without the need
      for this flag.
      
      The second patch then replaces the 'VM_ALWAYSDUMP' flag with a new
      'VM_NODUMP' flag, which can be set from userspace via a new madvise
      flag, 'MADV_DONTDUMP', and cleared via 'MADV_DODUMP'.  The core dump filters
      continue to work the same as before unless 'MADV_DONTDUMP' is set on the
      region.
      
      The qemu code which implements this feature is at:
      
        http://people.redhat.com/~jbaron/qemu-dump/qemu-dump.patch
      
      In my testing the qemu core dump shrunk from 383MB -> 13MB with this
      patch.
      
      I also believe that the 'MADV_DONTDUMP' flag might be useful for
      security-sensitive apps, which might want to select which areas are
      dumped.
      
      This patch:
      
      The VM_ALWAYSDUMP flag is currently used by the coredump code to
      indicate that a vma is part of a vsyscall or vdso section.  However, we
      can determine if a vma is in one of these sections by checking it
      against the gate_vma and checking for a non-NULL return value from
      arch_vma_name(), thus freeing a valuable vma bit.
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Acked-by: Roland McGrath <roland@hack.frob.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
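      A minimal userspace sketch of the interface this series adds
      (assuming a kernel and libc that define MADV_DONTDUMP and
      MADV_DODUMP):

        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
            size_t len = 64UL << 20;  /* stand-in for a large guest-RAM region */
            void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) {
                perror("mmap");
                return 1;
            }

            /* Exclude the region from any core dump of this process. */
            if (madvise(buf, len, MADV_DONTDUMP))
                perror("madvise(MADV_DONTDUMP)");

            /* Opt it back in later if desired. */
            if (madvise(buf, len, MADV_DODUMP))
                perror("madvise(MADV_DODUMP)");

            return 0;
        }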
  21. 22 February 2012, 1 commit
  22. 21 February 2012, 1 commit
    • x32: Add x32 VDSO support · 1a21d4e0
      Authored by H. J. Lu
      Add support for the x32 VDSO.  The x32 VDSO takes advantage of the
      similarity between the x86-64 and the x32 ABIs to contain the same
      content; only the container differs, since the x32 VDSO is naturally
      an x32 shared object.
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  23. 06 August 2011, 1 commit
    • x86, amd: Avoid cache aliasing penalties on AMD family 15h · dfb09f9b
      Authored by Borislav Petkov
      This patch provides performance tuning for the "Bulldozer" CPU. With its
      shared instruction cache there is a chance of generating an excessive
      number of cache cross-invalidates when running specific workloads on the
      cores of a compute module.
      
      This excessive amount of cross-invalidations can be observed if cache
      lines backed by shared physical memory alias in bits [14:12] of their
      virtual addresses, as those bits are used for the index generation.
      
      This patch addresses the issue by clearing all the bits in the [14:12]
      slice of the file mapping's virtual address at generation time, thus
      forcing those bits to be the same for all mappings of a single shared
      library across processes and, in doing so, avoiding instruction cache
      aliases.
      
      It also adds the command line option "align_va_addr=(32|64|on|off)" with
      which virtual address alignment can be enabled for 32-bit or 64-bit x86
      individually, or both, or be completely disabled.
      
      This change leaves virtual region address allocation on other families
      and/or vendors unaffected.
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1312550110-24160-2-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
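      A sketch of the address massaging described above (the 0x7000 mask,
      bits [14:12], is spelled out here for illustration; the kernel keeps
      it in a runtime structure so align_va_addr= can tune or disable it):

        #define VA_ALIGN_MASK 0x7000UL  /* bits [14:12] of the address */

        /* Round the candidate mapping address up to the next 32 KB
         * boundary, so bits [14:12] are zero, and thus identical, for
         * every process mapping the same shared library. */
        static unsigned long align_va(unsigned long addr)
        {
            return (addr + VA_ALIGN_MASK) & ~VA_ALIGN_MASK;
        }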
  24. 22 July 2011, 1 commit
  25. 14 July 2011, 1 commit
  26. 24 May 2011, 1 commit
    • x86-64: Clean up vdso/kernel shared variables · 8c49d9a7
      Authored by Andy Lutomirski
      Variables that are shared between the vdso and the kernel are
      currently a bit of a mess.  They are each defined with their own
      magic, they are accessed differently in the kernel, the vsyscall page,
      and the vdso, and one of them (vsyscall_clock) doesn't even really
      exist.
      
      This changes them all to use a common mechanism.  All of them are
      declared in vvar.h with a fixed address (validated by the linker
      script).  In the kernel (as before), they look like ordinary
      read-write variables.  In the vsyscall page and the vdso, they are
      accessed through a new macro VVAR, which gives read-only access.
      
      The vdso is now loaded verbatim into memory without any fixups.  As a
      side bonus, access from the vdso is faster because a level of
      indirection is removed.
      
      While we're at it, pack jiffies and vgetcpu_mode into the same
      cacheline.
      Signed-off-by: Andy Lutomirski <luto@mit.edu>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Borislav Petkov <bp@amd64.org>
      Link: http://lkml.kernel.org/r/%3C7357882fbb51fa30491636a7b6528747301b7ee9.1306156808.git.luto%40mit.edu%3E
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
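      A condensed sketch of the mechanism (the address constant and the
      kernel-side condition are illustrative; the real vvar.h also fixes
      alignment and drives the linker-script validation mentioned above):
      the same declaration expands to an ordinary read-write variable on
      the kernel side and to a read-only pointer at a fixed address on the
      vDSO/vsyscall side.

        #ifdef BUILDING_KERNEL  /* illustrative stand-in condition */

        /* Kernel side: an ordinary variable, placed in its own section so
         * the linker script can pin (and validate) its address. */
        #define DECLARE_VVAR(offset, type, name) \
            type name __attribute__((section(".vvar_" #name)));
        #define VVAR(name) (name)

        #else

        /* vDSO/vsyscall side: a const pointer to the fixed address, so
         * access is read-only and needs no load-time fixups. */
        #define VVAR_ADDRESS (-10UL * 1024 * 1024 - 4096)
        #define DECLARE_VVAR(offset, type, name) \
            static type const * const vvar_ ## name = \
                (void *)(VVAR_ADDRESS + (offset));
        #define VVAR(name) (*vvar_ ## name)

        #endif

        /* Example: jiffies at offset 0, read as VVAR(jiffies). */
        DECLARE_VVAR(0x0, volatile unsigned long, jiffies)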
  27. 03 August 2010, 1 commit