1. 21 Jun 2015 (1 commit)
  2. 20 Jun 2015 (1 commit)
  3. 19 Jun 2015 (5 commits)
    • x86/boot: Fix overflow warning with 32-bit binutils · 04c17341
      Committed by Borislav Petkov
      When building the kernel with 32-bit binutils built with support
      only for the i386 target, we get the following warning:
      
        arch/x86/kernel/head_32.S:66: Warning: shift count out of range (32 is not between 0 and 31)
      
      The problem is that in that case, binutils' internal type
      representation is 32-bit wide and the shift range overflows.
      
      In order to fix this, manipulate the shift expression which
      creates the 4GiB constant to not overflow the shift count.
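      Reconstructed from memory, the resulting expression shifts 2 left by
      31 instead of 1 by 32, so every shift count stays in the 0..31 range
      (treat the exact symbols as approximate):
      
        /* before: the shift count of 32 overflows binutils' 32-bit type */
        LOWMEM_PAGES = (((1<<32) - __PAGE_OFFSET) >> PAGE_SHIFT)
        
        /* after: the same 4GiB value, but no shift count reaches 32 */
        LOWMEM_PAGES = (((2<<31) - __PAGE_OFFSET) >> PAGE_SHIFT)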
      Suggested-by: Michael Matz <matz@suse.de>
      Reported-and-tested-by: Enrico Mioso <mrkiko.rs@gmail.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86: Honor the architectural performance monitoring version · 2c33645d
      Committed by Imre Palik
      Architectural performance monitoring, version 1, doesn't support fixed counters.
      
      Currently, even if a hypervisor advertises support for architectural
      performance monitoring version 1, perf may still try to use the fixed
      counters, as the constraints are set up based on the CPU model.
      
      This patch ensures that perf honors the architectural performance monitoring
      version returned by CPUID, and it only uses the fixed counters for version 2
      and above.
      
      (Some of the ideas in this patch came from Peter Zijlstra.)
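      For illustration, a minimal userspace sketch of the check being
      described; the architectural perfmon version lives in
      CPUID.0xA:EAX[7:0]. This is not the kernel's actual constraint
      code:
      
        #include <cpuid.h>
        
        /* Nonzero if fixed counters may be used (arch perfmon v2+). */
        static int fixed_counters_supported(void)
        {
                unsigned int eax, ebx, ecx, edx;
        
                if (!__get_cpuid(0x0a, &eax, &ebx, &ecx, &edx))
                        return 0;          /* no arch perfmon leaf */
                return (eax & 0xff) >= 2;  /* v1: no fixed counters */
        }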
      Signed-off-by: Imre Palik <imrep@amazon.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Anthony Liguori <aliguori@amazon.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1433767609-1039-1-git-send-email-imrep.amz@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Fix PMI handling for Intel PT · 1b7b938f
      Committed by Alexander Shishkin
      Intel PT is a separate PMU and it is not using any of the x86_pmu
      code paths, which means in particular that the active_events counter
      remains intact when new PT events are created.
      
      However, PT uses the generic x86_pmu PMI handler for its PMI handling needs.
      
      The problem here is that the latter checks active_events and, if it
      is zero, exits without calling the actual x86_pmu.handle_nmi(),
      which results in unknown NMI errors and massive data loss for PT.
      
      The effect is not visible if there are other perf events in the system
      at the same time that keep active_events counter non-zero, for instance
      if the NMI watchdog is running, so one needs to disable it to reproduce
      the problem.
      
      At the same time, the active_events counter, besides doing what the
      name suggests, also implicitly serves as a PMC hardware and DS area
      reference counter.
      
      This patch adds a separate reference counter for the PMC hardware,
      leaving active_events to actually count events, and makes sure that
      PT and BTS events are counted by it as well.
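      A compilable userspace sketch of that split; the names mirror the
      kernel's (x86_reserve_hardware() and friends) but the bodies are
      illustrative stubs only:
      
        #include <stdatomic.h>
        #include <stdbool.h>
        
        static atomic_int active_events;  /* all events, now incl. PT/BTS */
        static atomic_int pmc_refcount;   /* PMC hardware + DS area users */
        
        static void reserve_ds_buffers(void) { /* allocate DS (stub) */ }
        static void release_ds_buffers(void) { /* free DS (stub) */ }
        
        static void x86_reserve_hardware(void)
        {
                if (atomic_fetch_add(&pmc_refcount, 1) == 0)
                        reserve_ds_buffers();  /* first PMC/DS user */
        }
        
        static void x86_release_hardware(void)
        {
                if (atomic_fetch_sub(&pmc_refcount, 1) == 1)
                        release_ds_buffers();  /* last PMC/DS user */
        }
        
        static bool nmi_should_run(void)
        {
                /* PT events bump active_events too, so PT PMIs are
                 * no longer dropped as "unknown NMIs": */
                return atomic_load(&active_events) != 0;
        }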
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: adrian.hunter@intel.com
      Link: http://lkml.kernel.org/r/87k2v92t0s.fsf@ashishki-desk.ger.corp.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/bts: Fix DS area sharing with x86_pmu events · 6b099d9b
      Committed by Alexander Shishkin
      Currently, the intel_bts driver relies on the DS area allocated by the x86_pmu
      code in its event_init() path, which is a bug: creating a BTS event while
      no x86_pmu events are present results in a NULL pointer dereference.
      
      The same DS area is also used by PEBS sampling, which makes it quite a bit
      trickier to have a separate one for intel_bts' purposes.
      
      This patch makes the intel_bts driver use the same DS allocation and
      reference-counting code as x86_pmu, to make sure the DS area is
      always present when either intel_bts or x86_pmu needs it.
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: adrian.hunter@intel.com
      Link: http://lkml.kernel.org/r/1434024837-9916-2-git-send-email-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86: Add more Broadwell model numbers · 4b36f1a4
      Committed by Andi Kleen
      This patch adds additional Broadwell model numbers to perf: support
      for Broadwell with Iris Pro (Intel Core i7-57xxC) and for the
      Broadwell server Xeon.
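      A sketch of the model dispatch; the numbers come from Intel's public
      family-6 list (61/0x3d was already handled; 71/0x47 and 79/0x4f are
      the additions described here), so treat the exact set as an
      assumption:
      
        static int is_broadwell(unsigned int x86_model)
        {
                switch (x86_model) {
                case 61:  /* Broadwell-U / Core M, already supported */
                case 71:  /* Broadwell with Iris Pro (Core i7-57xxC) */
                case 79:  /* Broadwell server Xeon */
                        return 1;
                }
                return 0;
        }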
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1434055942-28253-1-git-send-email-andi@firstfloor.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 18 Jun 2015 (2 commits)
  5. 16 Jun 2015 (1 commit)
    • KVM: x86: fix lapic.timer_mode on restore · b6ac0695
      Committed by Radim Krčmář
      lapic.timer_mode was not properly initialized after migration, which
      broke a few useful things, like login, by making every sleep eternal.
      
      Fix this by calling apic_update_lvtt in kvm_apic_post_state_restore.
      
      There are other slowpaths that update lvtt, so this patch makes sure
      something similar doesn't happen again by calling apic_update_lvtt
      after every modification.
      
      Cc: stable@vger.kernel.org
      Fixes: f30ebc31 ("KVM: x86: optimize some accesses to LVTT and SPIV")
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  6. 12 Jun 2015 (4 commits)
  7. 11 Jun 2015 (3 commits)
    • x86/crash: Allocate enough low memory when crashkernel=high · 94fb9334
      Committed by Joerg Roedel
      When the crash kernel is loaded above 4GiB in memory, the
      first kernel allocates only 72MiB of low-memory for the DMA
      requirements of the second kernel. On systems with many
      devices this is not enough and causes device driver
      initialization errors and failed crash dumps. Testing by
      SUSE and Red Hat has shown that 256MiB is a good default
      value for now, and the discussion converged on this value as
      well. So set the default to 256MiB to make sure there
      is enough memory available for DMA.
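      For reference, the relevant kernel command-line usage; the explicit
      low reservation remains available to override the new default:
      
        crashkernel=1G,high    # crash-kernel reservation above 4GiB
        crashkernel=256M,low   # optional: override the low-memory default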
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      [ Reflow comment. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Baoquan He <bhe@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jörg Rödel <joro@8bytes.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: kexec@lists.infradead.org
      Link: http://lkml.kernel.org/r/1433500202-25531-4-git-send-email-joro@8bytes.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/swiotlb: Try coherent allocations with __GFP_NOWARN · 186dfc9d
      Committed by Joerg Roedel
      When we boot a kdump kernel in high memory, there is by
      default only 72MB of low memory available. The swiotlb code
      takes 64MB of it (by default) so that there are only 8MB
      left to allocate from. On systems with many devices this
      causes page allocator warnings from
      dma_generic_alloc_coherent():
      
        systemd-udevd: page allocation failure: order:0, mode:0x280d4
        CPU: 0 PID: 197 Comm: systemd-udevd Tainted: G        W    3.12.28-4-default #1
        Hardware name: HP ProLiant DL980 G7, BIOS P66 07/30/2012
         ffff8800781335e0 ffffffff8150b1db 00000000000280d4 ffffffff8113af90
         0000000000000000 0000000000000000 ffff88007efdbb00 0000000100000000
         0000000000000000 0000000000000000 0000000000000000 0000000000000001
        Call Trace:
          dump_trace+0x7d/0x2d0
          show_stack_log_lvl+0x94/0x170
          show_stack+0x21/0x50
          dump_stack+0x41/0x51
          warn_alloc_failed+0xf0/0x160
          __alloc_pages_slowpath+0x72f/0x796
          __alloc_pages_nodemask+0x1ea/0x210
          dma_generic_alloc_coherent+0x96/0x140
          x86_swiotlb_alloc_coherent+0x1c/0x50
          ttm_dma_pool_alloc_new_pages+0xab/0x320 [ttm]
          ttm_dma_populate+0x3ce/0x640 [ttm]
          ttm_tt_bind+0x36/0x60 [ttm]
          ttm_bo_handle_move_mem+0x55f/0x5c0 [ttm]
          ttm_bo_move_buffer+0x105/0x130 [ttm]
          ttm_bo_validate+0xc1/0x130 [ttm]
          ttm_bo_init+0x24b/0x400 [ttm]
          radeon_bo_create+0x16c/0x200 [radeon]
          radeon_ring_init+0x11e/0x2b0 [radeon]
          r100_cp_init+0x123/0x5b0 [radeon]
          r100_startup+0x194/0x230 [radeon]
          r100_init+0x223/0x410 [radeon]
          radeon_device_init+0x6af/0x830 [radeon]
          radeon_driver_load_kms+0x89/0x180 [radeon]
          drm_get_pci_dev+0x121/0x2f0 [drm]
          local_pci_probe+0x39/0x60
          pci_device_probe+0xa9/0x120
          driver_probe_device+0x9d/0x3d0
          __driver_attach+0x8b/0x90
          bus_for_each_dev+0x5b/0x90
          bus_add_driver+0x1f8/0x2c0
          driver_register+0x5b/0xe0
          do_one_initcall+0xf2/0x1a0
          load_module+0x1207/0x1c70
          SYSC_finit_module+0x75/0xa0
          system_call_fastpath+0x16/0x1b
          0x7fac533d2788
      
      After these warnings the code enters a fall-back path and, in the
      end, allocates directly from the swiotlb aperture. So remove these
      warnings, as this is not a fatal error.
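      The change has roughly this shape (a hedged reconstruction of
      x86_swiotlb_alloc_coherent(); the swiotlb fallback still warns if
      it, too, ultimately fails):
      
        flags |= __GFP_NOWARN;   /* first-attempt failure is non-fatal */
        vaddr = dma_generic_alloc_coherent(hwdev, size, dma_handle,
                                           flags, attrs);
        if (vaddr)
                return vaddr;
        return swiotlb_alloc_coherent(hwdev, size, dma_handle, flags);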
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      [ Simplify, reflow comment. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Baoquan He <bhe@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jörg Rödel <joro@8bytes.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: kexec@lists.infradead.org
      Link: http://lkml.kernel.org/r/1433500202-25531-3-git-send-email-joro@8bytes.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • arch/x86/kvm/mmu.c: work around gcc-4.4.4 bug · 5ec45a19
      Committed by Andrew Morton
      Fix this compile issue with gcc-4.4.4:
      
         arch/x86/kvm/mmu.c: In function 'kvm_mmu_pte_write':
         arch/x86/kvm/mmu.c:4256: error: unknown field 'cr0_wp' specified in initializer
         arch/x86/kvm/mmu.c:4257: error: unknown field 'cr4_pae' specified in initializer
         arch/x86/kvm/mmu.c:4257: warning: excess elements in union initializer
         ...
      
      gcc-4.4.4 (at least) has issues when using anonymous unions in
      initializers.
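      A minimal standalone illustration of the compiler limitation (not
      the kvm/mmu.c code itself):
      
        struct role {
                union {
                        struct { unsigned cr0_wp : 1, cr4_pae : 1; };
                        unsigned word;
                };
        };
        
        void example(void)
        {
                /*
                 * gcc-4.4.4 rejects the designated-initializer form:
                 *     struct role r = { .cr0_wp = 1, .cr4_pae = 1 };
                 * Workaround: zero-initialize, then assign the fields.
                 */
                struct role r = { 0 };
        
                r.cr0_wp = 1;
                r.cr4_pae = 1;
                (void)r;
        }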
      
      Fixes: edc90b7d ("KVM: MMU: fix SMAP virtualization")
      Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 10 Jun 2015 (4 commits)
    • x86/asm/entry/64: Disentangle error_entry/exit gsbase/ebx/usermode code · 539f5113
      Committed by Andy Lutomirski
      The error_entry/error_exit code to handle gsbase and whether we
      return to user mode was a mess:
      
       - error_sti was misnamed.  In particular, it did not enable interrupts.
      
       - Error handling for gs_change was hopelessly tangled with the normal
         usermode path.  Separate it out.  This saves a branch in normal
         entries from kernel mode.
      
       - The comments were bad.
      
      Fix it up.  As a nice side effect, there's now a code path that
      happens on error entries from user mode.  We'll use it soon.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/f1be898ab93360169fb845ab85185948832209ee.1433878454.git.luto@kernel.org
      [ Prettified it, clarified comments some more. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/asm/entry/32: Shorten __audit_syscall_entry() args preparation · a92fde25
      Committed by Denys Vlasenko
      We use three MOVs to swap edx and ecx. We can use one XCHG
      instead.
      
      Expand the comments. It's difficult to keep track of which arg#
      every register corresponds to, so spell it out.
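      Illustratively (the register pair matches the commit title, but the
      exact entry_32.S context is not reproduced here):
      
        # before: swap %ecx and %edx via a scratch register
        movl    %ecx, %eax
        movl    %edx, %ecx
        movl    %eax, %edx
        
        # after: one instruction, no scratch register
        xchgl   %ecx, %edx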
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Drewry <wad@chromium.org>
      Link: http://lkml.kernel.org/r/1433876051-26604-3-git-send-email-dvlasenk@redhat.com
      [ Expanded the comments some more. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/asm/entry/32: Explain reloading of registers after __audit_syscall_entry() · 1536bb46
      Committed by Denys Vlasenko
      Here it is not obvious why we load pt_regs->cx into %esi etc.
      Let's improve the comments.
      
      Explain that here we combine two things: first, we reload the
      registers, since some of them are clobbered by the C function we
      just called; second, we convert the 32-bit syscall params to the
      64-bit C ABI, because we are going to jump back to the syscall
      dispatch code.
      
      Move reloading of 6th argument into the macro instead of having
      it after each of two macro invocations.
      
      No actual code changes here.
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Drewry <wad@chromium.org>
      Link: http://lkml.kernel.org/r/1433876051-26604-2-git-send-email-dvlasenk@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/asm/entry/32: Fix fallout from the R9 trick removal in the SYSCALL code · aee4b013
      Committed by Denys Vlasenko
      I put the %ebp restoration code too late. Under strace, it is not
      reached, and %ebp is not restored upon return to userspace.
      
      This is the fix. Run-tested.
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Drewry <wad@chromium.org>
      Link: http://lkml.kernel.org/r/1433876051-26604-1-git-send-email-dvlasenk@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  9. 09 Jun 2015 (19 commits)
    • x86/mpx: Allow 32-bit binaries on 64-bit kernels again · 97ac46a5
      Committed by Dave Hansen
      Now that the bugs in mixed-mode MPX handling are fixed, re-allow
      32-bit binaries on 64-bit kernels.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183706.70277DAD@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Do not count MPX VMAs as neighbors when unmapping · bea03c50
      Committed by Dave Hansen
      The comment pretty much says it all.
      
      I wrote a test program that does lots of random allocations
      and forces bounds tables to be created.  It came up with a
      layout like this:
      
        ....   | BOUNDS DIRECTORY ENTRY COVERS |  ....
               |    BOUNDS TABLE COVERS        |
      |  BOUNDS TABLE |  REAL ALLOC | BOUNDS TABLE |
      
      Unmapping "REAL ALLOC" should have been able to free the
      bounds table "covering" the "REAL ALLOC" because it was the
      last real user.  But, the neighboring VMA bounds tables were
      found, considered as real neighbors, and we declined to free
      the bounds table covering the area.
      
      Doing this over and over left a small but significant number
      of these orphans.  Handling them is fairly straightforward.
      All we have to do is walk the VMAs and skip all of the MPX
      ones when looking for neighbors.
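      A self-contained sketch of the new walk; the kernel checks
      vma->vm_flags & VM_MPX, while the types here are simplified
      stand-ins:
      
        #include <stdbool.h>
        
        #define VM_MPX 0x1   /* stand-in for the kernel's VMA flag */
        
        struct vma {
                unsigned long vm_start, vm_flags;
                struct vma *vm_next;
        };
        
        /* Does any non-MPX mapping still border this range? */
        static bool has_real_neighbor(struct vma *vma, unsigned long end)
        {
                for (; vma && vma->vm_start < end; vma = vma->vm_next)
                        if (!(vma->vm_flags & VM_MPX))
                                return true;  /* real user: keep table */
                return false;  /* only bounds tables: safe to free */
        }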
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183706.A6BD90BF@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Rewrite the unmap code · 3ceaccdf
      Committed by Dave Hansen
      The MPX code needs to clear out bounds tables for memory which
      is no longer in use.  We do this when a userspace mapping is
      torn down (unmapped).
      
      There are two modes:
      
        1. An entire bounds table becomes unused, and can be freed
           and its pointer removed from the bounds directory.  This
           happens either when a large mapping is torn down, or when
           a small mapping is torn down and it is the last mapping
           "covered" by a bounds table.
      
        2. Only part of a bounds table becomes unused, in which case
           we free the backing memory as if MADV_DONTNEED was called.
      
      The old code was a spaghetti mess of "edge" bounds tables
      where the edges were handled specially, even if we were
      unmapping an entire one.  Non-edge bounds tables are always
      fully unmapped, but share a different code path from the edge
      ones.  The old code had a bug where it was unmapping too much
      memory.  I worked on fixing it for two days and gave up.
      
      I didn't write the original code.  I didn't particularly like
      it, but it worked, so I left it.  After my debug session, I
      realized it was undebuggable *and* buggy, so out it went.
      
      I also wrote a new unmapping test program which uncovers bugs
      pretty nicely.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183706.DCAEC67D@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Support 32-bit binaries on 64-bit kernels · 613fcb7d
      Committed by Dave Hansen
      Right now, the kernel can only switch between 64-bit and 32-bit
      binaries at compile time. This patch adds support for 32-bit
      binaries on 64-bit kernels when we support ia32 emulation.
      
      We essentially choose which set of table sizes to use when doing
      arithmetic for the bounds table calculations.
      
      This also uses a different approach for calculating the table
      indexes than before.  I think the new one makes it much more
      clear what is going on, and allows us to share more code between
      the 32-bit and 64-bit cases.
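      The sizes being chosen between are, per the Intel SDM (the helper
      shape below is illustrative, not the kernel's):
      
        /* Bounds directory size, in bytes. */
        static unsigned long bd_size_bytes(int is_64bit)
        {
                return is_64bit ? (1UL << 31)   /* 2GiB: 2^28 * 8 bytes */
                                : (1UL << 22);  /* 4MiB: 2^20 * 4 bytes */
        }
        
        /* Bounds table size, in bytes. */
        static unsigned long bt_size_bytes(int is_64bit)
        {
                return is_64bit ? (1UL << 22)   /* 4MiB: 2^17 * 32 bytes */
                                : (1UL << 14);  /* 16KiB: 2^10 * 16 bytes */
        }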
      Based-on-patch-by: Qiaowei Ren <qiaowei.ren@intel.com>
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183705.E01F21E2@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Use 32-bit-only cmpxchg() for 32-bit apps · 6ac52bb4
      Committed by Dave Hansen
      user_atomic_cmpxchg_inatomic() actually looks at sizeof(*ptr) to
      figure out how many bytes to copy.  If we run it on a 64-bit
      kernel with a 64-bit pointer, it will copy a 64-bit bounds
      directory entry.  That's fine, except when we have 32-bit
      programs with 32-bit bounds directory entries and we only *want*
      32-bits.
      
      This patch breaks the cmpxchg() operation out into its own
      function and performs the 32-bit type swizzling there.
      
      Note, the "64-bit" version of this code _would_ work on a
      32-bit-only kernel.  The issue this patch addresses is only for
      when the kernel's 'long' is mismatched from the size of the
      bounds directory entry of the process we are working on.
      
      The new helper modifies 'actual_old_val' or returns an error.
      But gcc doesn't know this, so it warns about 'actual_old_val'
      being unused.  Shut it up with an uninitialized_var().
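      The swizzle has roughly this shape (hedged; the exact signature of
      the real helper may differ):
      
        if (is_64bit_mm(mm)) {
                ret = user_atomic_cmpxchg_inatomic(actual_old_val,
                                (long __user *)ptr, old_val, new_val);
        } else {
                u32 actual32;
        
                /* sizeof(*ptr) drives the copy width: use a u32 ptr */
                ret = user_atomic_cmpxchg_inatomic(&actual32,
                                (u32 __user *)ptr, old_val, new_val);
                if (!ret)
                        *actual_old_val = actual32;  /* widen back */
        }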
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183705.672B115E@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Introduce new 'directory entry' to 'addr' helper function · 54587653
      Committed by Dave Hansen
      Currently, to get from a bounds directory entry to the virtual
      address of a bounds table, we simply mask off a few low bits.
      However, the set of bits we mask off is different for 32-bit and
      64-bit binaries.
      
      This breaks the operation out into a helper function and also
      adds a temporary variable to store the result until we are
      sure we are returning one.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183704.007686CE@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Add temporary variable to reduce masking · a1149fc8
      Committed by Dave Hansen
      When we allocate a bounds table, we call mmap(), then add a
      "valid" bit to the value before storing it in to the bounds
      directory.
      
      If we fail along the way, we go and mask that valid bit
      _back_ out.  That seems a little silly, and this change makes
      it much clearer when we have a plain address versus an actual
      table _entry_.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183704.3D69D5F4@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86: Make is_64bit_mm() widely available · b0e9b09b
      Committed by Dave Hansen
      The uprobes code has a nice helper, is_64bit_mm(), that consults
      both the runtime and compile-time flags for 32-bit support.
      Instead of reinventing the wheel, pull it into an x86 header so
      we can use it for MPX.
      
      I prefer passing the 'mm' around to test_thread_flag(TIF_IA32)
      because it makes it explicit where the context is coming from.
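      From memory, the helper looks roughly like this; treat the exact
      spelling (config_enabled() vs. IS_ENABLED()) as an assumption:
      
        static inline bool is_64bit_mm(struct mm_struct *mm)
        {
                return !config_enabled(CONFIG_IA32_EMULATION) ||
                       !(mm->context.ia32_compat == TIF_IA32);
        }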
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183704.F0209999@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Trace allocation of new bounds tables · cd4996dc
      Committed by Dave Hansen
      Bounds tables are a significant consumer of memory.  It is
      important to know when they are being allocated.  Add a trace
      point to trace whenever an allocation occurs and also its
      virtual address.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183704.EC23A93E@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Trace the attempts to find bounds tables · 2a1dcb1f
      Committed by Dave Hansen
      There are two different events being traced here.  They are
      doing similar things so share a trace "EVENT_CLASS" and are
      presented together.
      
      1. Trace when MPX is zapping pages "mpx_unmap_zap":
      
      	When MPX cannot free an entire bounds table, it will
      	instead try to zap unused parts of a bounds table to free
      	the backing memory.  This decreases RSS (resident set
      	size) without decreasing the virtual space allocated
      	for bounds tables.
      
      2. Trace attempts to find bounds tables "mpx_unmap_search":
      
      	This event traces any time we go looking to unmap a
      	bounds table for a given virtual address range.  This is
      	useful to ensure that the kernel actually "tried" to free
      	a bounds table versus times it succeeded in finding one.
      
      	It might try and fail if it realizes that a table is
      	shared with an adjacent VMA which is not being unmapped.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183703.B9D2468B@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Trace entry to bounds exception paths · 97efebf1
      Committed by Dave Hansen
      There are two basic things that can happen as the result of
      a bounds exception (#BR):
      
      	1. We allocate a new bounds table
      	2. We pass up a bounds exception to userspace.
      
      This patch adds a trace point for the case where we are
      passing the exception up to userspace with a signal.
      
      We are also explicit that we're printing out the inverse of
      the 'upper' that we encounter.  If you want to filter, for
      instance, you need to ~ the value first.  The reason we do
      this is because of how 'upper' is stored in the bounds table.
      
      If a pointer's range is:
      
      	0x1000 -> 0x2000
      
      it is stored in the bounds table as (32-bits here for brevity):
      
      	lower: 0x00001000
      	upper: 0xffffdfff
      
      That is so that an all 0's entry:
      
      	lower: 0x00000000
      	upper: 0x00000000
      
      corresponds to the "init" bounds which store a *range* of:
      
      	0x00000000 -> 0xffffffff
      
      That is, by far, the common case, and that lets us use the
      zero page, or deduplicate the memory, etc... The 'upper'
      stored in the table is gibberish to print by itself, so we
      print ~upper to get the *actual*, logical, human-readable
      value printed out.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183703.027BB9B0@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Trace #BR exceptions · e7126cf5
      Committed by Dave Hansen
      This is the first in a series of MPX tracing patches.
      I've found these extremely useful in the process of
      debugging applications and the kernel code itself.
      
      This tracepoint hooks into the bounds (#BR) exception very
      early and allows capturing the key registers which influence
      how the exception is handled.
      
      Note that bndcfgu/bndstatus are technically still
      64-bit registers even in 32-bit mode.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183703.5FE2619A@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Introduce a boot-time disable flag · 8c3641e9
      Committed by Dave Hansen
      MPX has the _potential_ to cause some issues.  Say part of your
      init system tried to protect one of its components from buffer
      overflows with MPX.  If there were a false positive, it's
      possible that MPX could keep a system from booting.
      
      MPX could also potentially cause performance issues since it is
      present in hot paths like the unmap path.
      
      Allow it to be disabled at boot time.
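      The flag this patch introduces is the nompx kernel parameter (name
      per the upstream commit; treat it as a best-effort recollection):
      
        nompx    # kernel command line: disable MPX management at boot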
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20150607183702.2E8B77AB@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Restrict the mmap() size check to bounds tables · eb099e5b
      Committed by Dave Hansen
      The comment and code here are confusing.  We do not currently
      allocate the bounds directory in the kernel.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183702.222CEC2A@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Remove redundant MPX_BNDCFG_ADDR_MASK · 3c1d3230
      Committed by Qiaowei Ren
      MPX_BNDCFG_ADDR_MASK is defined twice, so this patch removes
      the redundant definition.
      Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183702.5F129376@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Clean up the code by not passing a task pointer around when unnecessary · 46a6e0cf
      Committed by Dave Hansen
      The MPX code can only work on the current task.  You cannot,
      for instance, enable MPX management in another process or
      thread. You also cannot handle a fault for another process or
      thread.
      
      Despite this, we pass a task_struct around prolifically.  This
      patch removes all of the task struct passing for code paths
      where the code cannot deal with another task (which turns out
      to be all of them).
      
      This has no functional changes.  It's just a cleanup.
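      In diff form, the interface change looks like this (one
      representative prototype; the patch applies the same treatment
      throughout):
      
        -int mpx_enable_management(struct task_struct *tsk);
        +int mpx_enable_management(void);   /* always acts on current */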
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: bp@alien8.de
      Link: http://lkml.kernel.org/r/20150607183702.6A81DA2C@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mpx: Use the new get_xsave_field_ptr() API · a84eeaa9
      Committed by Dave Hansen
      The MPX registers (bndcsr/bndcfgu/bndstatus) are not directly
      accessible via normal instructions.  They essentially act as
      if they were floating point registers and are saved/restored
      along with those registers.
      
      There are two main paths in the MPX code where we care about
      the contents of these registers:
      
      	1. #BR (bounds) faults
      	2. the prctl() code where we are setting MPX up
      
      Both of those paths _might_ be called without the FPU having
      been used.  That means that 'tsk->thread.fpu.state' might
      never be allocated.
      
      Also, fpu_save_init() is not preempt-safe.  It was a bug to
      call it without disabling preemption.  The new
      get_xsave_addr() calls unlazy_fpu() instead and properly
      disables preemption.
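      Call sites then follow this pattern (hedged sketch; XSTATE_BNDCSR
      was the era's name for the MPX bounds-config/status state bit):
      
        const struct bndcsr *bndcsr;
        
        bndcsr = get_xsave_field_ptr(XSTATE_BNDCSR);
        if (!bndcsr)
                return -EINVAL;   /* task has no FPU/MPX state */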
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Cc: bp@alien8.de
      Link: http://lkml.kernel.org/r/20150607183701.BC0D37CF@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/fpu/xstate: Wrap get_xsave_addr() to make it safer · 04cd027b
      Committed by Dave Hansen
      The MPX code is calling a low-level FPU function
      (copy_fpregs_to_fpstate()).  This function cannot be
      called in all contexts, although it is safe to call
      directly in some cases.
      
      Although probably correct, the current code is ugly and
      potentially error-prone.  So, add a wrapper that calls
      the (slightly) higher-level fpu__save() (which is preempt-
      safe) and also ensures that we even *have* an FPU context
      (in the case that this was called when in lazy FPU mode).
      A sketch of the resulting wrapper follows the quote below.
      
      Ingo had this to say about the details about when we need
      preemption disabled:
      
      > it's indeed generally unsafe to access/copy FPU registers with preemption enabled,
      > for two reasons:
      >
      >   - on older systems that use FSAVE the instruction destroys FPU register
      >     contents, which has to be handled carefully
      >
      >   - even on newer systems if we copy to FPU registers (which this code doesn't)
      >     then we don't want a context switch to occur in the middle of it, because a
      >     context switch will write to the fpstate, potentially overwriting our new data
      >     with old FPU state.
      >
      > But it's safe to access FPU registers with preemption enabled in a couple of
      > special cases:
      >
      >   - potentially destructively saving FPU registers: the signal handling code does
      >     this in copy_fpstate_to_sigframe(), because it can rely on the signal restore
      >     side to restore the original FPU state.
      >
      >   - reading FPU registers on modern systems: we don't do this anywhere at the
      >     moment, mostly to keep symmetry with older systems where FSAVE is
      >     destructive.
      >
      >   - initializing FPU registers on modern systems: fpu__clear() does this. Here
      >     it's safe because we don't copy from the fpstate.
      >
      >   - directly writing FPU registers from user-space memory (!). We do this in
      >     fpu__restore_sig(), and it's safe because neither context switches nor
      >     irq-handler FPU use can corrupt the source context of the copy (which is
      >     user-space memory).
      >
      > Note that the MPX code's current use of copy_fpregs_to_fpstate() was safe I think,
      > because:
      >
      >  - MPX is predicated on eagerfpu, so the destructive F[N]SAVE instruction won't be
      >    used.
      >
      >  - the code was only reading FPU registers, and was doing it only in places that
      >    guaranteed that an FPU state was already active (i.e. didn't do it in
      >    kthreads)
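      Putting that together, the wrapper has roughly this shape
      (reconstructed; field names for this kernel version are
      approximate):
      
        void *get_xsave_field_ptr(int xsave_state)
        {
                struct fpu *fpu = &current->thread.fpu;
        
                if (!fpu->fpstate_active)
                        return NULL;   /* task never used the FPU */
        
                fpu__save(fpu);        /* preempt-safe register dump */
        
                return get_xsave_addr(&fpu->state.xsave, xsave_state);
        }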
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Cc: bp@alien8.de
      Link: http://lkml.kernel.org/r/20150607183700.AA881696@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/fpu/xstate: Fix up bad get_xsave_addr() assumptions · 0c4109be
      Committed by Dave Hansen
      get_xsave_addr() assumes that if an xsave bit is present in the
      hardware (pcntxt_mask) that it is present in a given xsave
      buffer.  Due to a bug in the xsave code on all of the systems
      that have MPX (and thus all the users of this code), that has
      been a true assumption.
      
      But, the bug is getting fixed, so our assumption is not going
      to hold any more.
      
      It's quite possible (and normal) for an enabled state to be
      present in 'pcntxt_mask' but *not* in 'xstate_bv'.  We need
      to consult 'xstate_bv'.
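      So get_xsave_addr() grows a check along these lines (a sketch; the
      field spelling for this kernel version is approximate):
      
        /* Enabled in hardware, but never saved into this buffer? */
        if (!(xsave->xsave_hdr.xstate_bv & xstate_feature))
                return NULL;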
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20150607183700.1E739B34@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>