  1. 01 May 2014, 1 commit
    • x86-64, espfix: Don't leak bits 31:16 of %esp returning to 16-bit stack · 3891a04a
      Authored by H. Peter Anvin
      The IRET instruction, when returning to a 16-bit segment, only
      restores the bottom 16 bits of the user space stack pointer.  This
      causes some 16-bit software to break, but it also leaks kernel state
      to user space.  We have a software workaround for that ("espfix") for
      the 32-bit kernel, but it relies on a nonzero stack segment base which
      is not available in 64-bit mode.
      
      In checkin:
      
          b3b42ac2 x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels
      
      we "solved" this by forbidding 16-bit segments on 64-bit kernels, with
      the logic that 16-bit support is crippled on 64-bit kernels anyway (no
      V86 support), but it turns out that people are doing stuff like
      running old Win16 binaries under Wine and expect it to work.
      
      This patch works around the problem by creating percpu "ministacks", each of which
      is mapped 2^16 times 64K apart.  When we detect that the return SS is
      on the LDT, we copy the IRET frame to the ministack and use the
      relevant alias to return to userspace.  The ministacks are mapped
      readonly, so if IRET faults we promote #GP to #DF which is an IST
      vector and thus has its own stack; we then do the fixup in the #DF
      handler.
      
      (Making #GP an IST exception would make the msr_safe functions unsafe
      in NMI/MC context, and quite possibly have other effects.)
      
      Special thanks to:
      
      - Andy Lutomirski, for the suggestion of using very small stack slots
        and copying (as opposed to mapping) the IRET frame there, and for the
        suggestion to mark them readonly and let the fault promote to #DF.
      - Konrad Wilk for paravirt fixup and testing.
      - Borislav Petkov for testing help and useful comments.
      Reported-by: Brian Gerst <brgerst@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Andrew Lutomriski <amluto@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Dirk Hohndel <dirk@hohndel.org>
      Cc: Arjan van de Ven <arjan.van.de.ven@intel.com>
      Cc: comex <comexk@gmail.com>
      Cc: Alexander van Heukelum <heukelum@fastmail.fm>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: <stable@vger.kernel.org> # consider after upstream merge
      3891a04a
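      A hedged sketch of the condition the commit above describes, written in C rather
      than the actual entry_64.S assembly; needs_espfix() is a hypothetical helper name
      used only for illustration:

          #include <stdbool.h>

          #define SEGMENT_LDT      0x4   /* TI bit of a selector: 1 = selector lives in the LDT */
          #define SEGMENT_RPL_MASK 0x3   /* requested privilege level */

          /* Divert through the read-only espfix ministack only when IRET would
           * return to ring 3 with an LDT-based stack segment selector. */
          static bool needs_espfix(unsigned long ss, unsigned long cs)
          {
                  return (ss & SEGMENT_LDT) && (cs & SEGMENT_RPL_MASK) == 3;
          }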
  2. 15 April 2014, 1 commit
  3. 08 April 2014, 4 commits
    • x86: use generic early_ioremap · 5b7c73e0
      Authored by Mark Salter
      Move x86 over to the generic early ioremap implementation.
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Acked-by: H. Peter Anvin <hpa@zytor.com>
      Cc: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5b7c73e0
    • x86/mm: sparse warning fix for early_memremap · 6b550f6f
      Authored by Dave Young
      This patch series takes the common bits from the x86 early ioremap
      implementation and creates a generic implementation which may be used by
      other architectures.  The early ioremap interfaces are intended for
      situations where boot code needs to make temporary virtual mappings
      before the normal ioremap interfaces are available.  Typically, this
      means before paging_init() has run.
      
      This patch (of 6):
      
      There are a lot of sparse warnings for code like the following:

          void *a = early_memremap(phys_addr, size);
      
      early_memremap() is intended to map kernel memory with the ioremap facility,
      so the returned pointer should be a kernel RAM pointer rather than an iomem one.
      
      To make the function clearer and to suppress the sparse warnings, this patch
      does two things:
      1. casts the return value of early_memremap to (__force void *)
      2. adds an early_memunmap function and passes (__force void __iomem *) to iounmap
      
      From Boris:
        "Ingo told me yesterday, it makes sense too.  I'd guess we can try it.
         FWIW, all callers of early_memremap use the memory they get remapped
         as normal memory so we should be safe"
      Signed-off-by: Dave Young <dyoung@redhat.com>
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Acked-by: H. Peter Anvin <hpa@zytor.com>
      Cc: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6b550f6f
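      A minimal sketch of the two changes this message describes, assuming an
      underlying __early_ioremap() helper as in the x86 implementation of that era
      (treat the exact helper name and page-protection argument as assumptions):

          /* early_memremap() hands back ordinary kernel memory, so the iomem
           * annotation is cast away with __force; early_memunmap() casts it back
           * before passing the pointer to the unmap path. */
          void __init *early_memremap(resource_size_t phys_addr, unsigned long size)
          {
                  return (__force void *)__early_ioremap(phys_addr, size, PAGE_KERNEL);
          }

          void __init early_memunmap(void *addr, unsigned long size)
          {
                  early_iounmap((__force void __iomem *)addr, size);
          }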
    • percpu: add raw_cpu_ops · b3ca1c10
      Authored by Christoph Lameter
      The kernel has never been audited to ensure that this_cpu operations are
      consistently used throughout the kernel.  The code generated in many
      places can be improved through the use of this_cpu operations (which
      uses a segment register for relocation of per cpu offsets instead of
      performing address calculations).
      
      The patch set also addresses various consistency issues in general with
      the per cpu macros.
      
      A. The semantics of __this_cpu_ptr() differs from this_cpu_ptr only
         because checks are skipped. This is typically shown through a raw_
         prefix. So this patch set changes the places where __this_cpu_ptr()
         is used to raw_cpu_ptr().
      
      B. There has been the long term wish by some that __this_cpu operations
         would check for preemption. However, there are cases where preemption
         checks need to be skipped. This patch set adds raw_cpu operations that
         do not check for preemption and then adds preemption checks to the
         __this_cpu operations.
      
      C. The use of __get_cpu_var is always a reference to a percpu variable
         that can also be handled via a this_cpu operation. This patch set
         replaces all uses of __get_cpu_var with this_cpu operations.
      
      D. We can then use this_cpu RMW operations in various places replacing
         sequences of instructions by a single one.
      
      E. The use of this_cpu operations throughout will allow other arches than
         x86 to implement optimized references and RMW operations to work with
         per cpu local data.
      
      F. The use of this_cpu operations opens up the possibility to
         further optimize code that relies on synchronization through
         per cpu data.
      
      The patch set works in a couple of stages:
      
      I. Patch 1 adds the additional raw_cpu operations and raw_cpu_ptr().
          Also converts the existing __this_cpu_xx_# primitive in the x86
          code to raw_cpu_xx_#.
      
      II. Patch 2-4 use the raw_cpu operations in places that would give
           us false positives once they are enabled.
      
      III. Patch 5 adds preemption checks to __this_cpu operations to allow
          checking if preemption is properly disabled when these functions
          are used.
      
      IV. Patches 6-20 are patches that simply replace uses of __get_cpu_var
         with this_cpu_ptr. They do not depend on any changes to the percpu
         code. No preemption tests are skipped if they are applied.
      
      V. Patches 21-46 are conversion patches that use this_cpu operations
         in various kernel subsystems/drivers or arch code.
      
      VI.  Patches 47/48 (not included in this series) remove no longer used
          functions (__this_cpu_ptr and __get_cpu_var).  These should only be
          applied after all the conversion patches have made it and after we
          have done additional passes through the kernel to ensure that none of
          the uses of these functions remain.
      
      This patch (of 46):
      
      The patches following this one will add preemption checks to __this_cpu
      ops so we need to have an alternative way to use this_cpu operations
      without preemption checks.
      
      raw_cpu_ops will be the basis for all other ops since these will be the
      operations that do not implement any checks.
      
      Primitive operations are renamed by this patch from __this_cpu_xxx to
      raw_cpu_xxxx.
      
      Also change the uses of the x86 percpu primitives in preempt.h.
      These depend directly on asm/percpu.h (header #include nesting issue).
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Alex Shi <alex.shi@intel.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Bryan Wu <cooloney@gmail.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: David Daney <david.daney@cavium.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
      Cc: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Hedi Berriche <hedi@sgi.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Mike Frysinger <vapier@gentoo.org>
      Cc: Mike Travis <travis@sgi.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Robert Richter <rric@kernel.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Wim Van Sebroeck <wim@iguana.be>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b3ca1c10
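      A short illustration of the three flavours the cover letter above distinguishes;
      the per-cpu counter is made up for the example:

          DEFINE_PER_CPU(int, example_counter);

          static void update_counter(void)
          {
                  /* Full helper: safe to call with preemption enabled. */
                  this_cpu_inc(example_counter);

                  /* __this_cpu ops now carry a preemption check, so the caller
                   * must already have preemption (or interrupts) disabled. */
                  preempt_disable();
                  __this_cpu_inc(example_counter);
                  preempt_enable();

                  /* raw_cpu ops: no checks at all, for the places that would
                   * otherwise produce false positives. */
                  raw_cpu_inc(example_counter);
          }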
    • x86: always define BUG() and HAVE_ARCH_BUG, even with !CONFIG_BUG · b06dd879
      Authored by Josh Triplett
      This ensures that BUG() always has a definition that causes a trap (via
      an undefined instruction), and that the compiler still recognizes the
      code following BUG() as unreachable, avoiding warnings that would
      otherwise appear (such as on non-void functions that don't return a
      value after BUG()).
      
      In addition to saving a few bytes over the generic infinite-loop
      implementation, this implementation traps rather than looping, which
      potentially allows for better error-recovery behavior (such as by
      rebooting).
      Signed-off-by: Josh Triplett <josh@joshtriplett.org>
      Reported-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b06dd879
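      A hedged sketch of the shape the commit describes for x86: trap via an
      undefined instruction and tell the compiler that nothing after BUG() is
      reachable.

          #define HAVE_ARCH_BUG
          #define BUG()                                                     \
          do {                                                              \
                  asm volatile("ud2");    /* raises #UD, i.e. a trap */     \
                  unreachable();          /* keeps "no return value" warnings away */ \
          } while (0)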
  4. 29 March 2014, 1 commit
    • x86: fix boot on uniprocessor systems · 825600c0
      Authored by Artem Fetishev
      On x86 uniprocessor systems topology_physical_package_id() returns -1
      which causes rapl_cpu_prepare() to leave rapl_pmu variable uninitialized
      which leads to GPF in rapl_pmu_init().
      
      See arch/x86/kernel/cpu/perf_event_intel_rapl.c.
      
      It turns out that physical_package_id and core_id can actually be
      retrieved for uniprocessor systems too.  Enabling them also fixes the
      rapl_pmu code.
      Signed-off-by: Artem Fetishev <artem_fetishev@epam.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      825600c0
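      The shape of the fix, as a hedged sketch: define the topology accessors
      unconditionally instead of only under CONFIG_SMP, so a uniprocessor kernel
      reports real ids rather than the generic -1 fallback.

          /* Previously only defined for CONFIG_SMP; without them, the generic
           * fallback returned -1, which rapl_cpu_prepare() did not expect. */
          #define topology_physical_package_id(cpu)      (cpu_data(cpu).phys_proc_id)
          #define topology_core_id(cpu)                   (cpu_data(cpu).cpu_core_id)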
  5. 25 March 2014, 1 commit
    • Revert "xen: properly account for _PAGE_NUMA during xen pte translations" · 5926f87f
      Authored by David Vrabel
      This reverts commit a9c8e4be.
      
      PTEs in Xen PV guests must contain machine addresses if _PAGE_PRESENT
      is set and pseudo-physical addresses if _PAGE_PRESENT is clear.
      
      This is because during a domain save/restore (migration) the page
      table entries are "canonicalised" and "uncanonicalised", i.e., MFNs are
      converted to PFNs during domain save so that on a restore the page
      table entries may be rewritten with the new MFNs on the destination.
      This canonicalisation is only done for PTEs that are present.
      
      This change resulted in writing PTEs with MFNs if _PAGE_PROTNONE (or
      _PAGE_NUMA) was set but _PAGE_PRESENT was clear.  These PTEs would be
      migrated as-is which would result in unexpected behaviour in the
      destination domain.  Either a) the MFN would be translated to the
      wrong PFN/page; b) setting the _PAGE_PRESENT bit would clear the PTE
      because the MFN is no longer owned by the domain; or c) the present
      bit would not get set.
      
      Symptoms include "Bad page" reports when munmapping after migrating a
      domain.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: <stable@vger.kernel.org>        [3.12+]
      5926f87f
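      The invariant being restored can be stated compactly; this is an illustrative
      predicate, not code from the revert:

          /* Xen PV page tables: a PTE holds a machine address (MFN) only while
           * _PAGE_PRESENT is set; otherwise it must hold a pseudo-physical
           * address (PFN), so that save/restore canonicalisation stays correct. */
          static bool xen_pte_holds_mfn(pte_t pte)
          {
                  return pte_flags(pte) & _PAGE_PRESENT;
          }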
  6. 21 March 2014, 3 commits
  7. 20 March 2014, 4 commits
    • audit: use uapi/linux/audit.h for AUDIT_ARCH declarations · 579ec9e1
      Authored by Eric Paris
      The syscall.h headers were including linux/audit.h but really only
      needed the uapi/linux/audit.h to get the requisite defines.  Switch to
      the uapi headers.
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-mips@linux-mips.org
      Cc: linux-s390@vger.kernel.org
      Cc: x86@kernel.org
      579ec9e1
    • syscall_get_arch: remove useless function arguments · 5e937a9a
      Authored by Eric Paris
      Every caller of syscall_get_arch() uses current for the task and no
      implementors of the function need args.  So just get rid of both of
      those things.  Admittedly, since these are inline functions we aren't
      wasting stack space, but it just makes the prototypes better.
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-mips@linux-mips.org
      Cc: linux390@de.ibm.com
      Cc: x86@kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-s390@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      5e937a9a
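      A hedged sketch of what an implementation looks like after this change: the
      task and regs arguments are gone and current is implied (the x86_64-only body
      is an assumption kept deliberately simple):

          /* Before: int syscall_get_arch(struct task_struct *task, struct pt_regs *regs);
           * After:  no arguments; the implementation describes current. */
          static inline int syscall_get_arch(void)
          {
                  return AUDIT_ARCH_X86_64;
          }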
    • random: Add arch_has_random[_seed]() · 7b878d4b
      Authored by H. Peter Anvin
      Add predicate functions for having arch_get_random[_seed]*().  The
      only current use is to avoid the loop in arch_random_refill() when
      arch_get_random_seed_long() is unavailable.
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      7b878d4b
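      A hedged sketch of the x86 flavour of the new predicates; the feature-flag
      checks are assumptions consistent with the existing RDRAND/RDSEED plumbing:

          static inline bool arch_has_random(void)
          {
                  return static_cpu_has(X86_FEATURE_RDRAND);
          }

          static inline bool arch_has_random_seed(void)
          {
                  return static_cpu_has(X86_FEATURE_RDSEED);
          }

      With these in place, arch_random_refill() can bail out early when
      arch_has_random_seed() is false instead of looping over calls that are
      guaranteed to fail.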
    • x86, random: Enable the RDSEED instruction · d20f78d2
      Authored by H. Peter Anvin
      Upcoming Intel silicon adds a new RDSEED instruction, which is similar
      to RDRAND but provides a stronger guarantee: unlike RDRAND, RDSEED
      will always reseed the PRNG from the true random number source between
      each read.  Thus, the output of RDSEED is guaranteed to be 100%
      entropic, unlike RDRAND which is only architecturally guaranteed to be
      1/512 entropic (although in practice it is much more).
      
      The RDSEED instruction takes the same time to execute as RDRAND, but
      RDSEED unlike RDRAND can legitimately return failure (CF=0) due to
      entropy exhaustion if too many threads on too many cores are hammering
      the RDSEED instruction at the same time.  Therefore, we have to be
      more conservative and only use it in places where we can tolerate
      failures.
      
      This patch introduces the primitives arch_get_random_seed_{int,long}()
      but does not use it yet.
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      d20f78d2
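      A hedged sketch of the primitive described above, without the alternatives
      and CPUID plumbing a real implementation needs; the CF result is captured
      with setc:

          static inline bool arch_get_random_seed_long(unsigned long *v)
          {
                  unsigned char ok;

                  /* RDSEED sets CF=1 on success and CF=0 when the entropy source
                   * is exhausted, so callers must tolerate failure. */
                  asm volatile("rdseed %0; setc %1"
                               : "=r" (*v), "=qm" (ok) : : "cc");
                  return ok;
          }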
  8. 19 March 2014, 5 commits
  9. 18 March 2014, 1 commit
  10. 14 March 2014, 3 commits
  11. 13 March 2014, 1 commit
  12. 12 March 2014, 1 commit
  13. 11 March 2014, 5 commits
  14. 07 March 2014, 4 commits
  15. 06 March 2014, 2 commits
  16. 05 March 2014, 3 commits
    • x86: hyperv: Fixup the (brain) damage caused by the irq cleanup · 76d388cd
      Authored by Thomas Gleixner
      Compiling last minute changes without setting the proper config
      options is not really clever.
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      76d388cd
    • x86/efi: Quirk out SGI UV · a5d90c92
      Authored by Borislav Petkov
      Alex reported hitting the following BUG after the EFI 1:1 virtual
      mapping work was merged,
      
       kernel BUG at arch/x86/mm/init_64.c:351!
       invalid opcode: 0000 [#1] SMP
       Call Trace:
        [<ffffffff818aa71d>] init_extra_mapping_uc+0x13/0x15
        [<ffffffff818a5e20>] uv_system_init+0x22b/0x124b
        [<ffffffff8108b886>] ? clockevents_register_device+0x138/0x13d
        [<ffffffff81028dbb>] ? setup_APIC_timer+0xc5/0xc7
        [<ffffffff8108b620>] ? clockevent_delta2ns+0xb/0xd
        [<ffffffff818a3a92>] ? setup_boot_APIC_clock+0x4a8/0x4b7
        [<ffffffff8153d955>] ? printk+0x72/0x74
        [<ffffffff818a1757>] native_smp_prepare_cpus+0x389/0x3d6
        [<ffffffff818957bc>] kernel_init_freeable+0xb7/0x1fb
        [<ffffffff81535530>] ? rest_init+0x74/0x74
        [<ffffffff81535539>] kernel_init+0x9/0xff
        [<ffffffff81541dfc>] ret_from_fork+0x7c/0xb0
        [<ffffffff81535530>] ? rest_init+0x74/0x74
      
      Getting this thing to work with the new mapping scheme would need more
      work, so automatically switch to the old memmap layout for SGI UV.
      Acked-by: Russ Anderson <rja@sgi.com>
      Cc: Alex Thorlton <athorlton@sgi.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      a5d90c92
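      The quirk boils down to something like the following hedged sketch; the flag,
      helper, and function names are remembered from that era and should be treated
      as assumptions:

          void __init efi_uv_memmap_quirk(void)
          {
                  /* SGI UV cannot cope with the new EFI 1:1 virtual mapping yet,
                   * so fall back to the old memmap layout on those systems. */
                  if (is_uv_system())
                          set_bit(EFI_OLD_MEMMAP, &efi.flags);
          }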
    • x86/efi: Wire up CONFIG_EFI_MIXED · 7d453eee
      Authored by Matt Fleming
      Add the Kconfig option and bump the kernel header version so that boot
      loaders can check whether the handover code is available if they want.
      
      The xloadflags field in the bzImage header is also updated to reflect
      that the kernel supports both entry points by setting both of
      XLF_EFI_HANDOVER_32 and XLF_EFI_HANDOVER_64 when CONFIG_EFI_MIXED=y.
      XLF_CAN_BE_LOADED_ABOVE_4G is disabled so that the kernel text is
      guaranteed to be addressable with 32 bits.
      
      Note that no boot loaders should be using the bits set in xloadflags to
      decide which entry point to jump to. The entire scheme is based on the
      concept that 32-bit bootloaders always jump to ->handover_offset and
      64-bit loaders always jump to ->handover_offset + 512. We set both bits
      merely to inform the boot loader that it's safe to use the native
      handover offset even if the machine type in the PE/COFF header claims
      otherwise.
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      7d453eee
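      The handover convention the last paragraph describes, as a hedged
      boot-loader-side sketch (field names follow the boot protocol's
      struct setup_header; the function itself is illustrative):

          /* 32-bit loaders jump to handover_offset; 64-bit loaders add 512.
           * The XLF_EFI_HANDOVER_* bits only advertise that both entry points exist. */
          static unsigned long efi_handover_entry(unsigned long kernel_base,
                                                  const struct setup_header *hdr,
                                                  bool loader_is_64bit)
          {
                  unsigned long entry = kernel_base + hdr->handover_offset;

                  return loader_is_64bit ? entry + 512 : entry;
          }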