1. 21 Jun 2017, 1 commit
    • ARM: 8683/1: ARM32: Support mremap() for sigpage/vDSO · 280e87e9
      Authored by Dmitry Safonov
      CRIU restores application mappings at the same place where they
      were before checkpointing. That means we need to move the vDSO
      and sigpage during restore to exactly the same place where
      they were before C/R.
      
      Make the mremap() code update the mm->context.{sigpage,vdso}
      pointers during a VMA move (a sketch of such a hook follows this
      entry). The sigpage is used as the landing pad after handling a
      signal - if the pointer is not updated during the move, the
      application may crash on any signal after mremap().
      
      The vDSO pointer on ARM32 is currently used only for setting
      auxv; update it during mremap() as well, in case of future usage.
      
      Without these updates, CRIU is currently unreliable on ARM32.
      Historically, we have failed checkpointing when we find a vDSO
      page on ARM32 and suggested that the user disable CONFIG_VDSO.
      But that is not correct: the practice comes from x86, where signal
      handling ends in the vDSO blob. On ARM32 it ends in the sigpage,
      which is not disabled by `CONFIG_VDSO=n'.
      
      It looks like C/R has been working by luck, because userspace on
      ARM32 currently always sets SA_RESTORER.
      Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
      Acked-by: Andy Lutomirski <luto@amacapital.net>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Pavel Emelyanov <xemul@virtuozzo.com>
      Cc: Christopher Covington <cov@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      280e87e9
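
      The mechanism is easy to picture as code. Below is a minimal
      sketch of such a sigpage hook, assuming the .mremap callback that
      struct vm_special_mapping gained in commit b059a453 (entry 11
      below); the registration shown is a simplification, not the
      literal patch.

      #include <linux/mm.h>
      #include <linux/mm_types.h>
      #include <linux/sched.h>

      static int sigpage_mremap(const struct vm_special_mapping *sm,
                                struct vm_area_struct *new_vma)
      {
              /* Re-point the signal-return landing pad at the moved VMA,
               * so the next signal after mremap() returns safely. */
              current->mm->context.sigpage = new_vma->vm_start;
              return 0;
      }

      static const struct vm_special_mapping sigpage_mapping = {
              .name   = "[sigpage]",
              .mremap = sigpage_mremap,
              /* .pages would reference the real sigpage here */
      };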
  2. 16 Mar 2017, 1 commit
    • x86: Remap GDT tables in the fixmap section · 69218e47
      Authored by Thomas Garnier
      Each processor holds a GDT in its per-cpu structure. The sgdt
      instruction gives the base address of the current GDT. This address
      can be used to bypass KASLR memory randomization. Combined with
      another bug, an attacker could target other per-cpu structures or
      deduce the base of the main memory section (PAGE_OFFSET).
      
      This patch relocates each processor's GDT into the fixmap section,
      where space is reserved based on the number of supported processors
      (the address calculation is sketched after this entry).
      
      For consistency, the remapping is done by default on both 32-bit
      and 64-bit kernels.
      
      Each processor switches to its remapped GDT at the end of
      initialization. For hibernation, the main processor returns with
      the original GDT and switches back to the remapped one on
      completion.
      
      This patch was tested on both architectures. Hibernation and KVM
      were both tested specifically because of their usage of the GDT.
      
      Thanks to Boris Ostrovsky <boris.ostrovsky@oracle.com> for testing and
      recommending changes for Xen support.
      Signed-off-by: Thomas Garnier <thgarnie@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Luis R . Rodriguez <mcgrof@kernel.org>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Rafael J . Wysocki <rjw@rjwysocki.net>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Stanislaw Gruszka <sgruszka@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: kasan-dev@googlegroups.com
      Cc: kernel-hardening@lists.openwall.com
      Cc: kvm@vger.kernel.org
      Cc: lguest@lists.ozlabs.org
      Cc: linux-doc@vger.kernel.org
      Cc: linux-efi@vger.kernel.org
      Cc: linux-mm@kvack.org
      Cc: linux-pm@vger.kernel.org
      Cc: xen-devel@lists.xenproject.org
      Cc: zijun_hu <zijun_hu@htc.com>
      Link: http://lkml.kernel.org/r/20170314170508.100882-2-thgarnie@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      69218e47
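
      The address calculation behind the remapping can be sketched as
      below. FIX_GDT_REMAP_BEGIN names the first reserved fixmap slot
      as in the patch, and __fix_to_virt() is the generic fixmap helper;
      the function names themselves are illustrative, not the exact
      mainline ones.

      #include <asm/desc.h>
      #include <asm/fixmap.h>

      /* One fixmap slot is reserved per possible CPU, starting at
       * FIX_GDT_REMAP_BEGIN, so every GDT gets a fixed virtual alias. */
      static inline unsigned int gdt_fixmap_index(int cpu)
      {
              return FIX_GDT_REMAP_BEGIN + cpu;
      }

      /* The address loaded into GDTR after the switch: sgdt now reveals
       * only a fixmap address instead of a per-cpu, KASLR-relevant one. */
      static inline struct desc_struct *get_cpu_gdt_remapped(int cpu)
      {
              return (struct desc_struct *)__fix_to_virt(gdt_fixmap_index(cpu));
      }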
  3. 11 Mar 2017, 1 commit
  4. 02 Mar 2017, 1 commit
  5. 25 Feb 2017, 1 commit
  6. 25 Dec 2016, 1 commit
  7. 15 Dec 2016, 1 commit
  8. 28 Oct 2016, 1 commit
    • x86/vdso: Set vDSO pointer only after success · 67dece7d
      Authored by Dmitry Safonov
      These pointers have been initialized before the call to
      _install_special_mapping() ever since this commit:
      
        f7b6eb3f ("x86: Set context.vdso before installing the mapping")
      
      This is no longer required, as special mappings have their own vma
      name and don't use arch_vma_name(), since this commit:
      
        a62c34bd ("x86, mm: Improve _install_special_mapping and fix x86 vdso naming")
      
      So this way of initializing looks less entangled (the resulting
      pattern is sketched after this entry).
      
      I even believe that we could remove the NULL initializers:
      
       - on failure, load_elf_binary() will not start a new thread;
       - arch_prctl() will keep the same pointers as before the syscall.
      Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: 0x7f454c46@gmail.com
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Cc: oleg@redhat.com
      Link: http://lkml.kernel.org/r/20161027141516.28447-3-dsafonov@virtuozzo.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      67dece7d
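
      The resulting pattern, condensed into a sketch (not the literal
      diff): _install_special_mapping() returns an ERR_PTR on failure,
      so context.vdso is written only once the mapping exists. The
      helper name here is hypothetical.

      #include <linux/err.h>
      #include <linux/mm.h>

      static int map_vdso_sketch(struct mm_struct *mm, unsigned long addr,
                                 unsigned long size,
                                 const struct vm_special_mapping *mapping)
      {
              struct vm_area_struct *vma;

              vma = _install_special_mapping(mm, addr, size,
                                             VM_READ | VM_EXEC |
                                             VM_MAYREAD | VM_MAYWRITE |
                                             VM_MAYEXEC,
                                             mapping);
              if (IS_ERR(vma))
                      return PTR_ERR(vma);

              /* Publish the pointer only after the mapping succeeded. */
              mm->context.vdso = (void __user *)addr;
              return 0;
      }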
  9. 15 Sep 2016, 4 commits
  10. 14 Jul 2016, 1 commit
  11. 08 Jul 2016, 1 commit
    • x86/vdso: Add mremap hook to vm_special_mapping · b059a453
      Authored by Dmitry Safonov
      Add the possibility for 32-bit user-space applications to move
      the vDSO mapping.
      
      Previously, when a user-space app called mremap() on the vDSO
      address, the syscall return path would land at the previous
      address of the vDSO page, resulting in a segmentation violation.
      
      Now it lands fine and returns to userspace with a remapped vDSO.
      
      This also fixes the context.vdso pointer for 64-bit, which
      currently does not affect users of the vDSO after mremap(), but
      may in the future.
      
      As suggested by Andy, return -EINVAL for an mremap() that would
      split the vDSO image: that operation cannot possibly result in
      a working system, so reject it (see the sketch after this entry).
      
      The text_mapping structure declaration was renamed and moved
      inside map_vdso(), as it is used only there and now complements
      the vvar_mapping variable.
      
      There is still a problem with remapping the vDSO in glibc
      applications: the linker relocates the addresses of syscalls
      on the vDSO page, so you need to relink against the new
      addresses.
      
      Without that, the next syscall through glibc may fail:
      
        Program received signal SIGSEGV, Segmentation fault.
        #0  0xf7fd9b80 in __kernel_vsyscall ()
        #1  0xf7ec8238 in _exit () from /usr/lib32/libc.so.6
      Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: 0x7f454c46@gmail.com
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160628113539.13606-2-dsafonov@virtuozzo.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b059a453
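
      Condensed, the hook's logic looks like the sketch below (the
      32-bit landing-pad fixup and other details are omitted); the
      callback signature is the one this commit adds to struct
      vm_special_mapping.

      #include <linux/mm_types.h>
      #include <linux/sched.h>
      #include <asm/vdso.h>

      static int vdso_mremap_sketch(const struct vm_special_mapping *sm,
                                    struct vm_area_struct *new_vma)
      {
              unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
              const struct vdso_image *image = current->mm->context.vdso_image;

              /* A split or resized vDSO cannot work; refuse the mremap(). */
              if (image->size != new_size)
                      return -EINVAL;

              /* Keep the pointer valid for the syscall return path. */
              current->mm->context.vdso = (void __user *)new_vma->vm_start;
              return 0;
      }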
  12. 24 May 2016, 1 commit
  13. 13 Apr 2016, 1 commit
    • x86/vdso: Remove direct HPET access through the vDSO · 1ed95e52
      Authored by Andy Lutomirski
      Allowing user code to map the HPET is problematic.  HPET
      implementations are notoriously buggy, and there are probably many
      machines on which even MMIO reads from bogus HPET addresses are
      problematic.
      
      We have a report that the Dell Precision M2800 with:
      
        ACPI: HPET 0x00000000C8FE6238 000038 (v01 DELL   CBX3  01072009 AMI. 00000005)
      
      either is very slow when accessing the HPET or actually hangs in
      some fashion, causing soft lockups to be reported if users do
      unexpected things to the HPET.
      
      The vclock HPET code has also always been a questionable speedup.
      Accessing an HPET is exceedingly slow (on the order of several
      microseconds), so the added overhead of requiring a syscall to read
      the HPET is a small fraction of the total cost of accessing it.
      
      To avoid future problems, let's just delete the code entirely.
      
      In the long run, this could actually be a speedup.  Waiman Long has
      a patch to optimize the case where multiple CPUs contend for the
      HPET, but that won't help unless all the accesses are mediated by
      the kernel.
      Reported-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Borislav Petkov <bp@alien8.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Waiman Long <Waiman.Long@hpe.com>
      Link: http://lkml.kernel.org/r/d2f90bba98db9905041cff294646d290d378f67a.1460074438.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1ed95e52
  14. 30 Jan 2016, 2 commits
  15. 12 Jan 2016, 4 commits
  16. 11 Dec 2015, 2 commits
  17. 07 Oct 2015, 1 commit
  18. 06 Jul 2015, 1 commit
  19. 04 Jun 2015, 1 commit
    • x86/asm/entry, x86/vdso: Move the vDSO code to arch/x86/entry/vdso/ · d603c8e1
      Authored by Ingo Molnar
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d603c8e1
  20. 21 Dec 2014, 1 commit
    • x86_64, vdso: Fix the vdso address randomization algorithm · 394f56fe
      Authored by Andy Lutomirski
      The theory behind vdso randomization is that it's mapped at a random
      offset above the top of the stack.  To avoid wasting a page of
      memory for an extra page table, the vdso isn't supposed to extend
      past the lowest PMD into which it can fit.  Other than that, the
      address should be a uniformly distributed address that meets all of
      the alignment requirements.
      
      The current algorithm is buggy: the vdso has about a 50% probability
      of being at the very end of a PMD.  The current algorithm also has a
      decent chance of failing outright due to incorrect handling of the
      case where the top of the stack is near the top of its PMD.
      
      This fixes the implementation.  The paxtest estimate of vdso
      "randomisation" improves from 11 bits to 18 bits.  (Disclaimer: I
      don't know what the paxtest code is actually calculating.)
      
      It's worth noting that this algorithm is inherently biased: the vdso
      is more likely to end up near the end of its PMD than near the
      beginning.  Ideally we would either nix the PMD sharing requirement
      or jointly randomize the vdso and the stack to reduce the bias.
      
      In the meantime, this is a considerable improvement with basically
      no risk of compatibility issues, since the allowed outputs of the
      algorithm are unchanged (the fixed calculation is sketched after
      this entry).
      
      As an easy test, doing this:
      
      for i in `seq 10000`; do
              grep -P vdso /proc/self/maps | cut -d- -f1
      done | sort | uniq -d
      
      used to produce lots of output (1445 lines on my most recent run).
      A tiny subset looks like this:
      
      7fffdfffe000
      7fffe01fe000
      7fffe05fe000
      7fffe07fe000
      7fffe09fe000
      7fffe0bfe000
      7fffe0dfe000
      
      Note the suspicious fe000 endings.  With the fix, I get a much more
      palatable 76 repeated addresses.
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      394f56fe
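
      The fixed calculation can be sketched as below: choose a
      page-aligned slot uniformly between the start hint and the last
      position where the vDSO still ends below the next PMD boundary.
      This is a simplified model of the algorithm, not the exact
      mainline function.

      #include <linux/random.h>
      #include <asm/pgtable.h>
      #include <asm/processor.h>

      static unsigned long vdso_addr_sketch(unsigned long start,
                                            unsigned long len)
      {
              unsigned long addr, end, offset;

              /* Highest end address that keeps the vDSO inside the PMD
               * above the stack: no extra page-table page is needed. */
              end = (start + PMD_SIZE - 1) & PMD_MASK;
              if (end >= TASK_SIZE_MAX)
                      end = TASK_SIZE_MAX;
              end -= len;

              if (end > start) {
                      /* Uniform over all page-aligned slots, including
                       * both endpoints: the old code got this wrong. */
                      offset = get_random_int() %
                               (((end - start) >> PAGE_SHIFT) + 1);
                      addr = start + (offset << PAGE_SHIFT);
              } else {
                      addr = start;
              }
              return addr;
      }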
  21. 02 Nov 2014, 1 commit
    • x86: vdso: Fix build with older gcc · a92f101b
      Authored by Andrew Morton
      gcc-4.4.4:
      
      arch/x86/vdso/vma.c: In function 'vgetcpu_cpu_init':
      arch/x86/vdso/vma.c:247: error: unknown field 'limit0' specified in initializer
      arch/x86/vdso/vma.c:247: warning: missing braces around initializer
      arch/x86/vdso/vma.c:247: warning: (near initialization for '(anonymous).<anonymous>')
      arch/x86/vdso/vma.c:248: error: unknown field 'limit' specified in initializer
      arch/x86/vdso/vma.c:248: warning: excess elements in struct initializer
      arch/x86/vdso/vma.c:248: warning: (near initialization for '(anonymous)')
      ....
      
      I couldn't find any way of tricking it into accepting the
      initializer format :( (a standalone reproduction follows this
      entry).
      Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Fixes: 25880156 ("x86/vdso: Change the PER_CPU segment to use struct desc_struct")
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      a92f101b
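
      The failure reproduces outside the kernel: gcc 4.4 rejects
      designated initializers that name members of an anonymous struct
      or union, which is exactly how desc_struct's limit0 is reached
      (newer gcc accepts them). A standalone illustration follows; the
      struct is a stand-in, not the kernel's desc_struct.

      struct seg_desc {
              union {
                      struct {
                              unsigned short limit0;
                              unsigned short base0;
                      };
                      unsigned int a;
              };
      };

      /* Accepted by newer gcc; gcc 4.4 errors with
       * "unknown field 'limit0' specified in initializer". */
      static struct seg_desc d1 = { .limit0 = 0xffff };

      /* Portable fallback: zero-initialize, then assign at runtime. */
      static struct seg_desc d2;

      int main(void)
      {
              d2.limit0 = 0xffff;
              return (d1.limit0 == d2.limit0) ? 0 : 1;
      }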
  22. 28 Oct 2014, 5 commits
  23. 26 Jul 2014, 1 commit
  24. 12 Jul 2014, 1 commit
  25. 11 Jul 2014, 1 commit
  26. 21 May 2014, 2 commits
  27. 06 May 2014, 1 commit