1. 04 Jun 2015: 1 commit
    • x86/asm/entry: Move the vsyscall code to arch/x86/entry/vsyscall/ · 00398a00
      Committed by Ingo Molnar
      The vsyscall code is entry code too, so move it to arch/x86/entry/vsyscall/.
      
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 10 Nov 2014: 1 commit
  3. 04 Nov 2014: 2 commits
  4. 03 Nov 2014: 1 commit
  5. 28 Oct 2014: 2 commits
  6. 04 Sep 2014: 1 commit
    • seccomp,x86,arm,mips,s390: Remove nr parameter from secure_computing · a4412fc9
      Committed by Andy Lutomirski
      The secure_computing function took a syscall number parameter, but
      it only paid any attention to that parameter if seccomp mode 1 was
      enabled.  Rather than coming up with a kludge to get the parameter
      to work in mode 2, just remove the parameter.
      
      To avoid churn in arches that don't have seccomp filters (and may
      not even support syscall_get_nr right now), this leaves the
      parameter in secure_computing_strict, which is now a real function.
      
      For ARM, this is a bit ugly due to the fact that ARM conditionally
      supports seccomp filters.  Fixing that would probably only be a
      couple of lines of code, but it should be coordinated with the audit
      maintainers.
      
      This will be a slight slowdown on some arches.  The right fix is to
      pass in all of seccomp_data instead of trying to make just the
      syscall nr part be fast.
      
      This is a prerequisite for making two-phase seccomp work cleanly.
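
      A minimal sketch of the resulting call shapes, assuming the era's
      prototypes (the authoritative declarations live in
      include/linux/seccomp.h):

      	/* filter-capable path (mode 2): no syscall-number argument */
      	int secure_computing(void);

      	/* strict mode (mode 1): keeps the nr and is now a real function */
      	void secure_computing_strict(int this_syscall);

      Arches with seccomp filter support call the former from their slow
      syscall-entry path; strict-only arches keep calling the latter with
      the number they already have.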
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: linux-s390@vger.kernel.org
      Cc: x86@kernel.org
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Kees Cook <keescook@chromium.org>
  7. 26 Jul 2014: 1 commit
  8. 06 May 2014: 1 commit
  9. 20 Mar 2014: 1 commit
    • x86, vsyscall: Fix CPU hotplug callback registration · 42112a0f
      Committed by Srivatsa S. Bhat
      Subsystems that want to register CPU hotplug callbacks, as well as perform
      initialization for the CPUs that are already online, often do it as shown
      below:
      
      	get_online_cpus();
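      	/* get_online_cpus() above takes cpu_hotplug.lock */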
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	register_cpu_notifier(&foobar_cpu_notifier);
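      	/* register_cpu_notifier() above takes cpu_add_remove_lock
      	 * while cpu_hotplug.lock is still held: the reverse of the
      	 * hotplug path's lock order, hence the ABBA deadlock */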
      
      	put_online_cpus();
      
      This is wrong, since it is prone to ABBA deadlocks involving the
      cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
      with CPU hotplug operations).
      
      Instead, the correct and race-free way of performing the callback
      registration is:
      
      	cpu_notifier_register_begin();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	/* Note the use of the double underscored version of the API */
      	__register_cpu_notifier(&foobar_cpu_notifier);
      
      	cpu_notifier_register_done();
      
      Fix the vsyscall code in x86 by using this latter form of callback
      registration.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  10. 19 Mar 2014: 1 commit
  11. 15 Jul 2013: 1 commit
    • x86: delete __cpuinit usage from all x86 files · 148f9bb8
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove the __cpuinit from
      arch-specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/x86 uses of the __cpuinit macros from
      all C files.  x86 had only one use of __CPUINIT in assembly files,
      and it wasn't paired off with a .previous or a __FINIT, so we can
      delete it directly without any corresponding additional change there.
      
      [1] https://lkml.org/lkml/2013/5/20/589
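
      For illustration, the mechanical shape of the change (the declaration
      below is a sketch; __cpuinit placed such code in a .cpuinit.text
      section that could be discarded after boot when CPU hotplug was
      disabled):

      	/* before: may be discarded after boot on !HOTPLUG_CPU configs */
      	static int __cpuinit cpu_vsyscall_notifier(struct notifier_block *n,
      						   unsigned long action, void *arg);

      	/* after: annotation dropped, the code simply stays resident */
      	static int cpu_vsyscall_notifier(struct notifier_block *n,
      					 unsigned long action, void *arg);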
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  12. 02 Oct 2012: 1 commit
  13. 25 Sep 2012: 3 commits
    • time: Convert x86_64 to using new update_vsyscall · 650ea024
      Committed by John Stultz
      Switch x86_64 to using the new, sub-ns precise update_vsyscall() interface.
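
      A rough sketch of the interface change (parameter lists are assumed
      from context and vary by kernel version):

      	/* old hook: a pre-computed snapshot, whole nanoseconds only */
      	void update_vsyscall(struct timespec *wall_time, struct timespec *wtm,
      			     struct clocksource *clock, u32 mult);

      	/* new hook: the timekeeper is passed whole, so the vsyscall data
      	 * can keep the shifted sub-ns remainder (tk->xtime_nsec) */
      	void update_vsyscall(struct timekeeper *tk);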
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
    • time: Convert CONFIG_GENERIC_TIME_VSYSCALL to CONFIG_GENERIC_TIME_VSYSCALL_OLD · 70639421
      Committed by John Stultz
      To help migrate architectures over to the new update_vsyscall method,
      redefine CONFIG_GENERIC_TIME_VSYSCALL as CONFIG_GENERIC_TIME_VSYSCALL_OLD.
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
    • time: Move update_vsyscall definitions to timekeeper_internal.h · 189374ae
      Committed by John Stultz
      Since users will need to include timekeeper_internal.h, move
      update_vsyscall definitions to timekeeper_internal.h.
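
      Illustratively, consumers of the hook now pull its declaration from
      the internal header:

      	#include <linux/timekeeper_internal.h>	/* declares update_vsyscall() */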
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
  14. 15 Jul 2012: 1 commit
  15. 14 Jul 2012: 1 commit
    • x86/vsyscall: allow seccomp filter in vsyscall=emulate · 5651721e
      Committed by Will Drewry
      If a seccomp filter program is installed, older static binaries and
      distributions with older libc implementations (glibc 2.13 and earlier)
      that rely on vsyscall use will be terminated regardless of the filter
      program policy when executing time, gettimeofday, or getcpu.  This is
      only the case when vsyscall emulation is in use (vsyscall=emulate is the
      default).
      
      This patch emulates system call entry inside vsyscall=emulate mode by
      populating regs->ax and regs->orig_ax with the system call number prior
      to calling into seccomp, so that all seccomp dependencies function
      normally.  Additionally, system call return behavior is emulated in
      line with the other vsyscall entry points for the trace/trap cases.
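
      A minimal sketch of the idea, with an assumed helper name (the real
      logic of this era lives in arch/x86/kernel/vsyscall_64.c):

      	static bool vsyscall_seccomp_allows(struct pt_regs *regs, int syscall_nr)
      	{
      		/* present the emulated call as a normal syscall entry so
      		 * that seccomp filters see the register state they expect */
      		regs->orig_ax = syscall_nr;
      		regs->ax = -ENOSYS;

      		if (secure_computing(syscall_nr) == -1)
      			return false;	/* filter killed or trapped the task */
      		return true;
      	}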
      
      [ v2: fixed ip and sp on SECCOMP_RET_TRAP/TRACE (thanks to luto@mit.edu) ]
      Reported-and-tested-by: Owen Kibel <qmewlo@gmail.com>
      Signed-off-by: Will Drewry <wad@chromium.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  16. 06 Jun 2012: 1 commit
  17. 06 Apr 2012: 1 commit
  18. 24 Mar 2012: 2 commits
  19. 16 Mar 2012: 2 commits
  20. 13 Mar 2012: 1 commit
  21. 05 Dec 2011: 2 commits
  22. 01 Nov 2011: 1 commit
  23. 11 Oct 2011: 1 commit
  24. 11 Aug 2011: 2 commits
  25. 05 Aug 2011: 2 commits
  26. 15 Jul 2011: 1 commit
  27. 14 Jul 2011: 2 commits
  28. 07 Jun 2011: 1 commit
    • x86-64: Emulate legacy vsyscalls · 5cec93c2
      Committed by Andy Lutomirski
      There's a fair amount of code in the vsyscall page.  It contains
      a syscall instruction (in the gettimeofday fallback) and who
      knows what will happen if an exploit jumps into the middle of
      some other code.
      
      Reduce the risk by replacing the vsyscalls with short magic
      incantations that cause the kernel to emulate the real
      vsyscalls. These incantations are useless if entered in the
      middle.
      
      This causes vsyscalls to be a little more expensive than real
      syscalls.  Fortunately sensible programs don't use them.
      The only exception is time(), which is still called by glibc
      through the vsyscall - but calling time() millions of times
      per second is not sensible. glibc has this fixed in its
      development tree.
      
      This patch is not perfect: the vread_tsc and vread_hpet
      functions are still at a fixed address.  Fixing that might
      involve making alternative patching work in the vDSO.
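
      To illustrate the mechanism, a hedged sketch of how the kernel side
      can map a trapping address back to the intended vsyscall (slot layout
      per the fixed vsyscall ABI; the helper shown is a simplification):

      	/* each legacy vsyscall sits at a fixed 1024-byte slot in the
      	 * vsyscall page, so the trapping address identifies the call */
      	static int addr_to_vsyscall_nr(unsigned long addr)
      	{
      		if ((addr & 0x3ff) != 0)
      			return -1;	/* entered mid-slot: refuse to emulate */
      		switch (addr & 0xc00) {
      		case 0x000: return 0;	/* gettimeofday */
      		case 0x400: return 1;	/* time */
      		case 0x800: return 2;	/* getcpu */
      		}
      		return -1;
      	}
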
      Signed-off-by: Andy Lutomirski <luto@mit.edu>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jesper Juhl <jj@chaosbits.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Jan Beulich <JBeulich@novell.com>
      Cc: richard -rw- weinberger <richard.weinberger@gmail.com>
      Cc: Mikael Pettersson <mikpe@it.uu.se>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Louis Rilling <Louis.Rilling@kerlabs.com>
      Cc: Valdis.Kletnieks@vt.edu
      Cc: pageexec@freemail.hu
      Link: http://lkml.kernel.org/r/e64e1b3c64858820d12c48fa739efbd1485e79d5.1307292171.git.luto@mit.edu
      [ Removed the CONFIG option - it's simpler to just do it unconditionally. Tidied up the code as well. ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  29. 06 Jun 2011: 2 commits