1. 01 Mar 2017 (4 commits)
  2. 14 Feb 2017 (1 commit)
  3. 10 Feb 2017 (3 commits)
    • x86/mm/ptdump: Fix soft lockup in page table walker · 146fbb76
      Authored by Andrey Ryabinin
      CONFIG_KASAN=y needs a lot of virtual memory mapped for its shadow.
      In that case ptdump_walk_pgd_level_core() takes a lot of time to
      walk across all page tables, and doing this without rescheduling
      causes soft lockups:
      
       NMI watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [swapper/0:1]
       ...
       Call Trace:
        ptdump_walk_pgd_level_core+0x40c/0x550
        ptdump_walk_pgd_level_checkwx+0x17/0x20
        mark_rodata_ro+0x13b/0x150
        kernel_init+0x2f/0x120
        ret_from_fork+0x2c/0x40
      
      I guess that this issue might arise even without KASAN on huge machines
      with several terabytes of RAM.
      
      Stick a cond_resched() into the pgd loop to fix this.
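      
      A minimal sketch of the shape of the fix (the walker lives in
      arch/x86/mm/dump_pagetables.c; loop details here are illustrative,
      not the exact kernel code):
      
        for (i = 0; i < PTRS_PER_PGD; i++, start++) {
                st.current_address = normalize_addr(i << PGDIR_SHIFT);
                if (!pgd_none(*start))
                        walk_pud_level(m, &st, *start, st.current_address);
      
                /* The fix: allow rescheduling between PGD entries. */
                cond_resched();
        }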
      Reported-by: Tobias Regnery <tobias.regnery@gmail.com>
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: kasan-dev@googlegroups.com
      Cc: Alexander Potapenko <glider@google.com>
      Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20170210095405.31802-1-aryabinin@virtuozzo.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      146fbb76
    • x86/tsc: Make the TSC ADJUST sanitizing work for tsc_reliable · 5f2e71e7
      Authored by Thomas Gleixner
      When the TSC is marked reliable then the synchronization check is skipped,
      but that also skips the TSC ADJUST sanitizing code. So on a machine with a
      wrecked BIOS the TSC deviation between CPUs might go unnoticed.
      
      Let the TSC adjust sanitizing code run unconditionally and just skip the
      expensive synchronization checks when TSC is marked reliable.
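      
      A rough sketch of the resulting control flow (based on
      arch/x86/kernel/tsc_sync.c of that era; simplified):
      
        void check_tsc_sync_target(void)
        {
                /* Always sanitize TSC_ADJUST, even on trusted TSCs ... */
                tsc_store_and_check_tsc_adjust(false);
      
                /* ... but skip the expensive warp test for them. */
                if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE))
                        return;
      
                /* ... synchronization checks follow ... */
        }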
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Olof Johansson <olof@lixom.net>
      Link: http://lkml.kernel.org/r/20170209151231.491189912@linutronix.de
      5f2e71e7
    • x86/tsc: Avoid the large time jump when sanitizing TSC ADJUST · f2e04214
      Authored by Thomas Gleixner
      Olof reported that on a machine with a BIOS-wrecked TSC the
      timestamps in dmesg make a large jump because the TSC value
      jumps forward after resetting the TSC ADJUST register to a sane value.
      
      This can be avoided by calling the TSC ADJUST sanitizing function before
      initializing the per-CPU sched clock machinery. That takes the offset into
      account and avoids the time jump.
      
      What cannot be avoided is that the 'Firmware Bug' warnings on the secondary
      CPUs are printed with the large time offsets, because it would be too much
      effort and ugly hackery to print those warnings into a buffer and emit them
      after the adjustment on the starting CPUs. It's a firmware bug and should be
      fixed in firmware. The weird timestamps are collateral damage and just
      illustrate the silliness of the BIOS folks:
      
      [    0.397445] smp: Bringing up secondary CPUs ...
      [    0.402100] x86: Booting SMP configuration:
      [    0.406343] .... node  #0, CPUs:      #1
      [1265776479.930667] [Firmware Bug]: TSC ADJUST differs: Reference CPU0: -2978888639075328 CPU1: -2978888639183101
      [1265776479.944664] TSC ADJUST synchronize: Reference CPU0: 0 CPU1: -2978888639183101
      [    0.508119]  #2
      [1265776480.032346] [Firmware Bug]: TSC ADJUST differs: Reference CPU0: -2978888639075328 CPU2: -2978888639183677
      [1265776480.044192] TSC ADJUST synchronize: Reference CPU0: 0 CPU2: -2978888639183677
      [    0.607643]  #3
      [1265776480.131874] [Firmware Bug]: TSC ADJUST differs: Reference CPU0: -2978888639075328 CPU3: -2978888639184530
      [1265776480.143720] TSC ADJUST synchronize: Reference CPU0: 0 CPU3: -2978888639184530
      [    0.707108] smp: Brought up 1 node, 4 CPUs
      [    0.711271] smpboot: Total of 4 processors activated (21698.88 BogoMIPS)
      Reported-by: Olof Johansson <olof@lixom.net>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20170209151231.411460506@linutronix.de
      f2e04214
  4. 09 Feb 2017 (1 commit)
    • Revert "x86/ioapic: Restore IO-APIC irq_chip retrigger callback" · d966564f
      Authored by Linus Torvalds
      This reverts commit 020eb3da.
      
      Gabriel C reports that it causes his machine to not boot, and we haven't
      tracked down the reason for it yet.  Since the bug it fixes has been
      around for a longish time, we're better off reverting the fix for now.
      
      Gabriel says:
       "It hangs early and freezes with a lot RCU warnings.
      
        I bisected it down to :
      
        > Ruslan Ruslichenko (1):
        >       x86/ioapic: Restore IO-APIC irq_chip retrigger callback
      
        Reverting this one fixes the problem for me..
      
        The box is a PRIMERGY TX200 S5, 2 socket, 2 x E5520 CPU(s) installed"
      
      and Ruslan and Thomas are currently stumped.
      Reported-and-bisected-by: Gabriel C <nix.or.die@gmail.com>
      Cc: Ruslan Ruslichenko <rruslich@cisco.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@kernel.org   # for the backport of the original commit
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d966564f
  5. 05 Feb 2017 (2 commits)
    • x86/CPU/AMD: Fix Zen SMT topology · 08b25963
      Authored by Yazen Ghannam
      After:
      
        a33d3317 ("x86/CPU/AMD: Fix Bulldozer topology")
      
      our SMT scheduling topology for Fam17h systems is broken, because
      the ThreadId is included in the ApicId when SMT is enabled.
      
      So, without further decoding, cpu_core_id is unique for each thread
      rather than the same for threads on the same core. This didn't affect
      systems with SMT disabled. Make cpu_core_id be what it is defined to be.
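      
      A hedged sketch of the decoding (the real fix derives the core id via
      CPUID in arch/x86/kernel/cpu/amd.c; the bit arithmetic below is an
      assumption for illustration, not the exact kernel code):
      
        /* Strip the ThreadId bits that Fam17h encodes into the ApicId: */
        if (smp_num_siblings > 1)
                c->cpu_core_id = c->apicid >> get_count_order(smp_num_siblings);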
      Signed-off-by: Yazen Ghannam <Yazen.Ghannam@amd.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: <stable@vger.kernel.org> # 4.9
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170205105022.8705-2-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      08b25963
    • x86/CPU/AMD: Bring back Compute Unit ID · 79a8b9aa
      Authored by Borislav Petkov
      Commit:
      
        a33d3317 ("x86/CPU/AMD: Fix Bulldozer topology")
      
      restored the initial approach we had with the Fam15h topology of
      enumerating CU (Compute Unit) threads as cores. And this is still
      correct - they're beefier than HT threads but still have some
      shared functionality.
      
      Our current approach has a problem with the Mad Max Steam game, for
      example. Yves Dionne reported a certain "choppiness" while playing on
      v4.9.5.
      
      That problem stems most likely from the fact that the CU threads share
      resources within one CU and when we schedule to a thread of a different
      compute unit, this incurs latency due to migrating the working set to a
      different CU through the caches.
      
      When the thread siblings mask mirrors that aspect of the CUs and
      threads, the scheduler pays attention to it and tries to schedule
      within one CU first, which takes care of the latency, of course.
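      
      A sketch of the sibling matching, assuming the cu_id field this patch
      brings back (simplified from the topology matching in
      arch/x86/kernel/smpboot.c):
      
        /* Threads are SMT siblings if they share a core id, or if both
         * report the same valid Compute Unit ID (0xff means "none"): */
        if (c->cpu_core_id == o->cpu_core_id)
                return topology_sane(c, o, "smt");
      
        if (c->cu_id != 0xff && o->cu_id != 0xff && c->cu_id == o->cu_id)
                return topology_sane(c, o, "smt");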
      Reported-by: Yves Dionne <yves.dionne@gmail.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: <stable@vger.kernel.org> # 4.9
      Cc: Brice Goglin <Brice.Goglin@inria.fr>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Yazen Ghannam <yazen.ghannam@amd.com>
      Link: http://lkml.kernel.org/r/20170205105022.8705-1-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      79a8b9aa
  6. 04 Feb 2017 (1 commit)
  7. 03 Feb 2017 (1 commit)
  8. 01 Feb 2017 (5 commits)
    • perf/x86/intel/uncore: Make package handling more robust · fff4b87e
      Authored by Thomas Gleixner
      The package management code in uncore relies on package mapping being
      available before a CPU is started. This changed with:
      
        9d85eb91 ("x86/smpboot: Make logical package management more robust")
      
      because the ACPI/BIOS information turned out to be unreliable, but that
      left uncore in broken state. This was not noticed because on a regular boot
      all CPUs are online before uncore is initialized.
      
      Move the allocation to the CPU online callback and simplify the hotplug
      handling. At this point the package mapping is established and correct.
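      
      A minimal sketch of the pattern, with illustrative helper names (the
      real callbacks live in arch/x86/events/intel/uncore.c):
      
        static int uncore_event_cpu_online(unsigned int cpu)
        {
                /* Valid here, unlike in the prepare/starting callbacks: */
                int pkg = topology_logical_package_id(cpu);
                int ret = allocate_boxes(uncore_types, pkg, cpu);
      
                if (ret)
                        return ret;
                /* ... make this CPU the package's event collector ... */
                return 0;
        }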
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Yasuaki Ishimatsu <yasu.isimatu@gmail.com>
      Fixes: 9d85eb91 ("x86/smpboot: Make logical package management more robust")
      Link: http://lkml.kernel.org/r/20170131230141.377156255@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      fff4b87e
    • perf/x86/intel/uncore: Clean up hotplug conversion fallout · 1aa6cfd3
      Authored by Thomas Gleixner
      The recent conversion to the hotplug state machine kept two mechanisms from
      the original code:
      
       1) The first_init logic which adds the number of online CPUs in a package
          to the refcount. That's wrong because the callbacks are executed for
          all online CPUs.
      
          Remove it so the refcounting is correct.
      
       2) The on_each_cpu() call to undo box->init() in the error handling
          path. That's bogus because when the prepare callback fails no box has
          been initialized yet.
      
          Remove it.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Yasuaki Ishimatsu <yasu.isimatu@gmail.com>
      Fixes: 1a246b9f ("perf/x86/intel/uncore: Convert to hotplug state machine")
      Link: http://lkml.kernel.org/r/20170131230141.298032324@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1aa6cfd3
    • perf/x86/intel/rapl: Make package handling more robust · dd86e373
      Authored by Thomas Gleixner
      The package management code in RAPL relies on package mapping being
      available before a CPU is started. This changed with:
      
        9d85eb91 ("x86/smpboot: Make logical package management more robust")
      
      because the ACPI/BIOS information turned out to be unreliable, but that
      left RAPL in broken state. This was not noticed because on a regular boot
      all CPUs are online before RAPL is initialized.
      
      A possible fix would be to reintroduce the mess which allocates a package
      data structure in CPU prepare and when it turns out to already exist in
      starting throw it away later in the CPU online callback. But that's a
      horrible hack and not required at all because RAPL becomes functional for
      perf only in the CPU online callback. That's correct because user space is
      not yet informed about the CPU being onlined, so nothing can rely on RAPL
      being available on that particular CPU.
      
      Move the allocation to the CPU online callback and simplify the hotplug
      handling. At this point the package mapping is established and correct.
      
      This also adds a missing check for available package data in the
      event_init() function.
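      
      A sketch of the added event_init() check (simplified):
      
        struct rapl_pmu *pmu = cpu_to_rapl_pmu(event->cpu);
      
        /* The added check: no package data yet, no event. */
        if (!pmu)
                return -EINVAL;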
      Reported-by: Yasuaki Ishimatsu <yasu.isimatu@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 9d85eb91 ("x86/smpboot: Make logical package management more robust")
      Link: http://lkml.kernel.org/r/20170131230141.212593966@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dd86e373
    • x86/mce: Make timer handling more robust · 0becc0ae
      Authored by Thomas Gleixner
      Erik reported that on a preproduction hardware a CMCI storm triggers the
      BUG_ON in add_timer_on(). The reason is that the per CPU MCE timer is
      started by the CMCI logic before the MCE CPU hotplug callback starts the
      timer with add_timer_on(). So the timer is already queued which triggers
      the BUG.
      
      Using add_timer_on() is pretty pointless in this code because the timer is
      strictly per CPU, initialized as pinned, and all operations which arm the
      timer happen on the CPU to which the timer belongs.
      
      Simplify the whole machinery by using mod_timer() instead of add_timer_on()
      which avoids the problem because mod_timer() can handle already queued
      timers. Use __start_timer() everywhere so the earliest armed expiry time is
      preserved.
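      
      A sketch of the __start_timer() helper this introduces (simplified
      from arch/x86/kernel/cpu/mcheck/mce.c):
      
        static void __start_timer(struct timer_list *t, unsigned long interval)
        {
                unsigned long when = jiffies + interval;
                unsigned long flags;
      
                local_irq_save(flags);
      
                /* mod_timer() copes with an already-queued timer; only
                 * move the expiry if the new one is earlier: */
                if (!timer_pending(t) || time_before(when, t->expires))
                        mod_timer(t, round_jiffies(when));
      
                local_irq_restore(flags);
        }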
      Reported-by: Erik Veijola <erik.veijola@intel.com>
      Tested-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1701310936080.3457@nanos
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      0becc0ae
    • x86/irq: Make irq activate operations symmetric · aaaec6fc
      Authored by Thomas Gleixner
      The recent commit which prevents double activation of interrupts unearthed
      interesting code in x86. The code (ab)uses irq_domain_activate_irq() to
      reconfigure an already activated interrupt. That trips over the prevention
      code now.
      
      Fix it by deactivating the interrupt before activating the new configuration.
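      
      The shape of the fix, sketched (the real change touches the ioapic
      paths under arch/x86/kernel/apic/):
      
        /* Deactivate first, so the activate below is a genuine
         * reconfiguration rather than a filtered-out double activation: */
        irq_domain_deactivate_irq(irq_data);
        irq_domain_activate_irq(irq_data);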
      
      Fixes: 08d85f3e ("irqdomain: Avoid activating interrupts more than once")
      Reported-and-tested-by: Mike Galbraith <efault@gmx.de>
      Reported-and-tested-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1701311901580.3457@nanos
      aaaec6fc
  9. 30 Jan 2017 (1 commit)
    • x86/microcode: Do not access the initrd after it has been freed · 24c25032
      Authored by Borislav Petkov
      When we look for microcode blobs, we first try builtin and if that
      doesn't succeed, we fallback to the initrd supplied to the kernel.
      
      However, at some point during boot, that initrd gets jettisoned and we
      shouldn't access it anymore. But we do, as the KASAN report below shows.
      That's because find_microcode_in_initrd() doesn't check whether the
      initrd is still valid or not.
      
      So do that.
      
        ==================================================================
        BUG: KASAN: use-after-free in find_cpio_data
        Read of size 1 by task swapper/1/0
        page:ffffea0000db9d40 count:0 mapcount:0 mapping:          (null) index:0x1
        flags: 0x100000000000000()
        raw: 0100000000000000 0000000000000000 0000000000000001 00000000ffffffff
        raw: dead000000000100 dead000000000200 0000000000000000 0000000000000000
        page dumped because: kasan: bad access detected
        CPU: 1 PID: 0 Comm: swapper/1 Tainted: G        W       4.10.0-rc5-debug-00075-g2dbde22 #3
        Hardware name: Dell Inc. XPS 13 9360/0839Y6, BIOS 1.2.3 12/01/2016
        Call Trace:
         dump_stack
         ? _atomic_dec_and_lock
         ? __dump_page
         kasan_report_error
         ? pointer
         ? find_cpio_data
         __asan_report_load1_noabort
         ? find_cpio_data
         find_cpio_data
         ? vsprintf
         ? dump_stack
         ? get_ucode_user
         ? print_usage_bug
         find_microcode_in_initrd
         __load_ucode_intel
         ? collect_cpu_info_early
         ? debug_check_no_locks_freed
         load_ucode_intel_ap
         ? collect_cpu_info
         ? trace_hardirqs_on
         ? flat_send_IPI_mask_allbutself
         load_ucode_ap
         ? get_builtin_firmware
         ? flush_tlb_func
         ? do_raw_spin_trylock
         ? cpumask_weight
         cpu_init
         ? trace_hardirqs_off
         ? play_dead_common
         ? native_play_dead
         ? hlt_play_dead
         ? syscall_init
         ? arch_cpu_idle_dead
         ? do_idle
         start_secondary
         start_cpu
        Memory state around the buggy address:
         ffff880036e74f00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
         ffff880036e74f80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
        >ffff880036e75000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
                           ^
         ffff880036e75080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
         ffff880036e75100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
        ==================================================================
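      
      A minimal sketch of the guard, assuming an initrd_gone flag that is
      set when the initrd is jettisoned (simplified, not the exact code):
      
        static bool initrd_gone;        /* set once the initrd is freed */
      
        struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa)
        {
                /* The fix: never touch the initrd after it is gone. */
                if (initrd_gone)
                        return (struct cpio_data){ NULL, 0, "" };
      
                /* ... otherwise scan the still-valid initrd ... */
                return find_cpio_data(path, (void *)initrd_start,
                                      initrd_end - initrd_start, NULL);
        }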
      Reported-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Tested-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170126165833.evjemhbqzaepirxo@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      24c25032
  10. 28 Jan 2017 (1 commit)
    • x86/efi: Always map the first physical page into the EFI pagetables · bf29bddf
      Authored by Jiri Kosina
      Commit:
      
        12976670 ("x86/efi: Only map RAM into EFI page tables if in mixed-mode")
      
      stopped creating 1:1 mappings for all RAM, when running in native 64-bit mode.
      
      It turns out though that there are 64-bit EFI implementations in the wild
      (this particular problem has been reported on a Lenovo Yoga 710-11IKB),
      which still make use of the first physical page for their own private use,
      even though they explicitly mark it EFI_CONVENTIONAL_MEMORY in the memory
      map.
      
      In case there is no mapping for this particular frame in the EFI pagetables,
      as soon as firmware tries to make use of it, a triple fault occurs and the
      system reboots (in case of the Yoga 710-11IKB this is very early during bootup).
      
      Fix that by always mapping the first page of physical memory into the EFI
      pagetables. We're free to hand this page to the BIOS, as trim_bios_range()
      will reserve the first page and isolate it away from memory allocators anyway.
      
      Note that just reverting 12976670 alone is not enough on v4.9-rc1+ to fix the
      regression on affected hardware, as this commit:
      
         ab72a27d ("x86/efi: Consolidate region mapping logic")
      
      later stopped the first physical frame from being mapped anyway.
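      
      A sketch of the unconditional mapping (modeled on the 64-bit EFI
      pagetable setup in arch/x86/platform/efi/efi_64.c, simplified):
      
        /*
         * Always 1:1-map the first physical page; some firmware scribbles
         * on it even though it is marked EFI_CONVENTIONAL_MEMORY, and
         * trim_bios_range() keeps it away from the allocators anyway.
         */
        if (kernel_map_pages_in_pgd(pgd, 0x0, 0x0, 1, _PAGE_RW)) {
                pr_err("Failed to create 1:1 mapping for the first page!\n");
                return 1;
        }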
      Reported-by: Hanka Pavlikova <hanka@ucw.cz>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vojtech Pavlik <vojtech@ucw.cz>
      Cc: Waiman Long <waiman.long@hpe.com>
      Cc: linux-efi@vger.kernel.org
      Cc: stable@kernel.org # v4.8+
      Fixes: 12976670 ("x86/efi: Only map RAM into EFI page tables if in mixed-mode")
      Link: http://lkml.kernel.org/r/20170127222552.22336-1-matt@codeblueprint.co.uk
      [ Tidied up the changelog and the comment. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      bf29bddf
  11. 24 Jan 2017 (1 commit)
    • x86/fpu/xstate: Fix xcomp_bv in XSAVES header · dffba9a3
      Authored by Yu-cheng Yu
      The compacted-format XSAVES area is determined at boot time and
      never changed after.  The field xsave.header.xcomp_bv indicates
      which components are in the fixed XSAVES format.
      
      In fpstate_init() we did not set xcomp_bv to reflect the XSAVES
      format since at that time there was no valid data.
      
      However, after we do copy_init_fpstate_to_fpregs() in fpu__clear(),
      as in commit:
      
        b22cbe40 ("x86/fpu: Fix invalid FPU ptrace state after execve()")
      
      and when __fpu_restore_sig() does fpu__restore() for a COMPAT-mode
      app, a #GP occurs.  This can be easily triggered by doing valgrind on
      a COMPAT-mode "Hello World," as reported by Joakim Tjernlund and
      others:
      
      	https://bugzilla.kernel.org/show_bug.cgi?id=190061
      
      Fix it by setting xcomp_bv correctly.
      
      This patch also moves the xcomp_bv initialization to the proper
      place, which was in copyin_to_xsaves() as of:
      
        4c833368 ("x86/fpu: Set the xcomp_bv when we fake up a XSAVES area")
      
      which fixed the bug too, but it's more efficient and cleaner to
      initialize things once per boot, not for every signal handling
      operation.
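      
      A sketch of the one-time initialization (simplified from the xstate
      init code of that era):
      
        if (boot_cpu_has(X86_FEATURE_XSAVES))
                /* Compacted-format flag (bit 63) plus the boot-time
                 * feature mask, set once so XRSTORS never sees an
                 * inconsistent header: */
                state->xsave.header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT |
                                               xfeatures_mask;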
      Reported-by: Kevin Hao <haokexin@gmail.com>
      Reported-by: Joakim Tjernlund <Joakim.Tjernlund@infinera.com>
      Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: haokexin@gmail.com
      Link: http://lkml.kernel.org/r/1485212084-4418-1-git-send-email-yu-cheng.yu@intel.com
      [ Combined it with 4c833368. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dffba9a3
  12. 23 Jan 2017 (2 commits)
    • x86/fpu: Set the xcomp_bv when we fake up a XSAVES area · 4c833368
      Authored by Kevin Hao
      I got the following calltrace on an Apollo Lake SoC with a 32-bit kernel:
      
        WARNING: CPU: 2 PID: 261 at arch/x86/include/asm/fpu/internal.h:363 fpu__restore+0x1f5/0x260
        [...]
        Hardware name: Intel Corp. Broxton P/NOTEBOOK, BIOS APLIRVPA.X64.0138.B35.1608091058 08/09/2016
        Call Trace:
         dump_stack()
         __warn()
         ? fpu__restore()
         warn_slowpath_null()
         fpu__restore()
         __fpu__restore_sig()
         fpu__restore_sig()
         restore_sigcontext.isra.9()
         sys_sigreturn()
         do_int80_syscall_32()
         entry_INT80_32()
      
      The reason is that a #GP occurs when executing XRSTORS. The root cause
      is that we forget to set the xcomp_bv when we fake up the XSAVES area
      in the copyin_to_xsaves() function.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Link: http://lkml.kernel.org/r/1485075023-30161-1-git-send-email-haokexin@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      4c833368
    • x86/microcode/intel: Drop stashed AP patch pointer optimization · c26665ab
      Authored by Borislav Petkov
      This was meant to save us the scanning of the microcode container in
      the initrd, since the first AP had already done that, but it can also
      hurt us:
      
      Imagine a single hyperthreaded CPU (Intel(R) Atom(TM) CPU N270, for
      example) which updates the microcode on the BSP but since the microcode
      engine is shared between the two threads, the update on CPU1 doesn't
      happen because it has already happened on CPU0 and we don't find a newer
      microcode revision on CPU1.
      
      That in turn doesn't set the intel_ucode_patch pointer, so at initrd
      jettisoning time we don't save the microcode patch for later
      application.
      
      Now, when we suspend to RAM, the loaded microcode gets cleared so we
      need to reload but there's no patch saved in the cache.
      
      Removing the optimization fixes this issue and all is fine and dandy.
      
      Fixes: 06b8534c ("x86/microcode: Rework microcode loading")
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170120202955.4091-2-bp@alien8.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      c26665ab
  13. 18 Jan 2017 (1 commit)
  14. 17 Jan 2017 (2 commits)
    • KVM: x86: fix fixing of hypercalls · ce2e852e
      Authored by Dmitry Vyukov
      emulator_fix_hypercall() replaces the hypercall with the vmcall
      instruction, but it does not handle the GP exception properly when it
      writes the new instruction. It can return X86EMUL_PROPAGATE_FAULT
      without setting exception information.
      This leads to incorrect emulation and triggers
      WARN_ON(ctxt->exception.vector > 0x1f) in x86_emulate_insn()
      as discovered by syzkaller fuzzer:
      
      WARNING: CPU: 2 PID: 18646 at arch/x86/kvm/emulate.c:5558
      Call Trace:
       warn_slowpath_null+0x2c/0x40 kernel/panic.c:582
       x86_emulate_insn+0x16a5/0x4090 arch/x86/kvm/emulate.c:5572
       x86_emulate_instruction+0x403/0x1cc0 arch/x86/kvm/x86.c:5618
       emulate_instruction arch/x86/include/asm/kvm_host.h:1127 [inline]
       handle_exception+0x594/0xfd0 arch/x86/kvm/vmx.c:5762
       vmx_handle_exit+0x2b7/0x38b0 arch/x86/kvm/vmx.c:8625
       vcpu_enter_guest arch/x86/kvm/x86.c:6888 [inline]
       vcpu_run arch/x86/kvm/x86.c:6947 [inline]
      
      Set the exception information when the write in emulator_fix_hypercall() fails.
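      
      A sketch of the fix (emulator_fix_hypercall() in arch/x86/kvm/x86.c,
      simplified): hand the context's exception record to the write so a
      fault fills in a valid vector.
      
        /* Was: emulator_write_emulated(ctxt, rip, instruction, 3, NULL) */
        return emulator_write_emulated(ctxt, rip, instruction, 3,
                                       &ctxt->exception);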
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Cc: kvm@vger.kernel.org
      Cc: syzkaller@googlegroups.com
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      ce2e852e
    • perf/x86/intel: Handle exclusive threadid correctly on CPU hotplug · 4e71de79
      Authored by Zhou Chengming
      The CPU hotplug function intel_pmu_cpu_starting() sets
      cpu_hw_events.excl_thread_id unconditionally to 1 when the shared exclusive
      counters data structure is already available for the sibling thread.
      
      This works during the boot process because the first sibling gets threadid
      0 assigned and the second sibling which shares the data structure gets 1.
      
      But when the first thread of the core is offlined and onlined again it
      shares the data structure with the second thread and gets exclusive thread
      id 1 assigned as well.
      
      Prevent this by checking the threadid of the already online thread.
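      
      A sketch of the check (intel_pmu_cpu_starting() in
      arch/x86/events/intel/core.c, simplified):
      
        /* When adopting the sibling's exclusive-counter state, derive the
         * thread id from the sibling's instead of hardcoding 1: */
        if (!sibling->excl_thread_id)
                cpuc->excl_thread_id = 1;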
      
      [ tglx: Rewrote changelog ]
      Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
      Cc: NuoHan Qiao <qiaonuohan@huawei.com>
      Cc: ak@linux.intel.com
      Cc: peterz@infradead.org
      Cc: kan.liang@intel.com
      Cc: dave.hansen@linux.intel.com
      Cc: eranian@google.com
      Cc: qiaonuohan@huawei.com
      Cc: davidcc@google.com
      Cc: guohanjun@huawei.com
      Link: http://lkml.kernel.org/r/1484536871-3131-1-git-send-email-zhouchengming1@huawei.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      4e71de79
  15. 14 Jan 2017 (5 commits)
    • efi/x86: Prune invalid memory map entries and fix boot regression · 0100a3e6
      Authored by Peter Jones
      Some machines, such as the Lenovo ThinkPad W541 with firmware GNET80WW
      (2.28), include memory map entries with phys_addr=0x0 and num_pages=0.
      
      These machines fail to boot after the following commit,
      
        commit 8e80632f ("efi/esrt: Use efi_mem_reserve() and avoid a kmalloc()")
      
      Fix this by removing such bogus entries from the memory map.
      
      Furthermore, currently the log output for this case (with efi=debug)
      looks like:
      
       [    0.000000] efi: mem45: [Reserved           |   |  |  |  |  |  |  |  |  |  |  |  ] range=[0x0000000000000000-0xffffffffffffffff] (0MB)
      
      This is clearly wrong, and also not as informative as it could be.  This
      patch changes it so that if we find obviously invalid memory map
      entries, we print an error and skip those entries.  It also detects
      overflow in the displayed address range calculation, so the new output is:
      
       [    0.000000] efi: [Firmware Bug]: Invalid EFI memory map entries:
       [    0.000000] efi: mem45: [Reserved           |   |  |  |  |  |  |  |   |  |  |  |  ] range=[0x0000000000000000-0x0000000000000000] (invalid)
      
      It also detects memory map sizes that would overflow the physical
      address, for example phys_addr=0xfffffffffffff000 and
      num_pages=0x0200000000000001, and prints:
      
       [    0.000000] efi: [Firmware Bug]: Invalid EFI memory map entries:
       [    0.000000] efi: mem45: [Reserved           |   |  |  |  |  |  |  |   |  |  |  |  ] range=[phys_addr=0xfffffffffffff000-0x20ffffffffffffffff] (invalid)
      
      It then removes these entries from the memory map.
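      
      A sketch of the validity check (names and overflow handling
      simplified; not the verbatim upstream helper):
      
        static bool efi_memmap_entry_valid(const efi_memory_desc_t *md)
        {
                u64 end = md->phys_addr +
                          (md->num_pages << EFI_PAGE_SHIFT) - 1;
      
                /* Bogus: no pages, or an end address that wrapped around. */
                return md->num_pages > 0 && end > md->phys_addr;
        }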
      Signed-off-by: Peter Jones <pjones@redhat.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [ardb: refactor for clarity with no functional changes, avoid PAGE_SHIFT]
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      [Matt: Include bugzilla info in commit log]
      Cc: <stable@vger.kernel.org> # v4.9+
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=191121
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0100a3e6
    • perf/x86: Reject non sampling events with precise_ip · 18e7a45a
      Authored by Jiri Olsa
      As Peter suggested [1], reject non-sampling PEBS events, because they
      don't make any sense and could cause bugs in the NMI handler [2].
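      
      The check itself is tiny; a sketch (x86_pmu_hw_config(), simplified):
      
        if (event->attr.precise_ip) {
                /* No sense in having PEBS for non-sampling events: */
                if (!is_sampling_event(event))
                        return -EINVAL;
        }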
      
        [1] http://lkml.kernel.org/r/20170103094059.GC3093@worktop
        [2] http://lkml.kernel.org/r/1482931866-6018-3-git-send-email-jolsa@kernel.org
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vince@deater.net>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/20170103142454.GA26251@krava
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      18e7a45a
    • perf/x86/intel: Account interrupts for PEBS errors · 475113d9
      Authored by Jiri Olsa
      It's possible to set up PEBS events to get only errors and not
      any data, like on SNB-X (model 45) and IVB-EP (model 62)
      via 2 perf commands running simultaneously:
      
          taskset -c 1 ./perf record -c 4 -e branches:pp -j any -C 10
      
      This leads to a soft lock up, because the error path of the
      intel_pmu_drain_pebs_nhm() does not account event->hw.interrupt
      for error PEBS interrupts, so in case you're getting ONLY
      errors you don't have a way to stop the event when it's over
      the max_samples_per_tick limit:
      
        NMI watchdog: BUG: soft lockup - CPU#22 stuck for 22s! [perf_fuzzer:5816]
        ...
        RIP: 0010:[<ffffffff81159232>]  [<ffffffff81159232>] smp_call_function_single+0xe2/0x140
        ...
        Call Trace:
         ? trace_hardirqs_on_caller+0xf5/0x1b0
         ? perf_cgroup_attach+0x70/0x70
         perf_install_in_context+0x199/0x1b0
         ? ctx_resched+0x90/0x90
         SYSC_perf_event_open+0x641/0xf90
         SyS_perf_event_open+0x9/0x10
         do_syscall_64+0x6c/0x1f0
         entry_SYSCALL64_slow_path+0x25/0x25
      
      Add perf_event_account_interrupt() which does the interrupt
      and frequency checks and call it from intel_pmu_drain_pebs_nhm()'s
      error path.
      
      We keep the pending_kill and pending_wakeup logic only in the
      __perf_event_overflow() path, because they make sense only if
      there's any data to deliver.
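      
      A sketch of the error-path accounting (intel_pmu_drain_pebs_nhm() in
      arch/x86/events/intel/ds.c, simplified):
      
        if (error[bit]) {
                perf_log_lost_samples(event, error[bit]);
      
                /* The fix: count the interrupt so throttling can kick in
                 * even when no data is ever delivered: */
                if (perf_event_account_interrupt(event))
                        x86_pmu_stop(event, 0);
        }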
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vince@deater.net>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/1482931866-6018-2-git-send-email-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      475113d9
    • x86/mpx: Use compatible types in comparison to fix sparse error · 45382862
      Authored by Tobias Klauser
      info->si_addr is of type void __user *, so it should be compared against
      something from the same address space.
      
      This fixes the following sparse error:
      
        arch/x86/mm/mpx.c:296:27: error: incompatible types in comparison expression (different address spaces)
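      
      The change is type-level; an illustrative (not verbatim)
      reconstruction of the idea:
      
        void __user *addr = info->si_addr;
      
        /* Both sides now live in the __user address space: */
        if (addr == (void __user *)-1)
                ret = -EINVAL;  /* illustrative branch */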
      Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      45382862
    • x86/tsc: Add the Intel Denverton Processor to native_calibrate_tsc() · 695085b4
      Authored by Len Brown
      The Intel Denverton microserver uses a 25 MHz TSC crystal,
      so we can derive its exact [*] TSC frequency
      using CPUID and some arithmetic, e.g.:
      
        TSC: 1800 MHz (25000000 Hz * 216 / 3 / 1000000)
      
      [*] 'exact' is only as good as the crystal, which should be +/- 20ppm
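      
      A sketch of the addition to native_calibrate_tsc() (the model macro
      name follows that kernel era; treat it as an assumption):
      
        case INTEL_FAM6_ATOM_DENVERTON:
                crystal_khz = 25000;    /* 25 MHz crystal */
                break;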
      Signed-off-by: Len Brown <len.brown@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/306899f94804aece6d8fa8b4223ede3b48dbb59c.1484287748.git.len.brown@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      695085b4
  16. 12 Jan 2017 (8 commits)
    • KVM: x86: fix emulation of "MOV SS, null selector" · 33ab9110
      Authored by Paolo Bonzini
      This is CVE-2017-2583.  On Intel this causes a failed vmentry because
      SS's type is neither 3 nor 7 (even though the manual says this check is
      only done for usable SS, and the dmesg splat says that SS is unusable!).
      On AMD it's worse: svm.c is confused and sets CPL to 0 in the vmcb.
      
      The fix fabricates a data segment descriptor when SS is set to a null
      selector, so that CPL and SS.DPL are set correctly in the VMCS/vmcb.
      Furthermore, only allow setting SS to a NULL selector if SS.RPL < 3;
      this in turn ensures CPL < 3 because RPL must be equal to CPL.
      
      Thanks to Andy Lutomirski and Willy Tarreau for help in analyzing
      the bug and deciphering the manuals.
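      
      A sketch of the fabricated descriptor, following the description
      above (modeled on __load_segment_descriptor() in
      arch/x86/kvm/emulate.c, simplified):
      
        if (null_selector && seg == VCPU_SREG_SS) {
                /* Only a 64-bit, CPL-consistent null SS is allowed: */
                if (ctxt->mode != X86EMUL_MODE_PROT64 || rpl != cpl)
                        goto exception;
      
                /* Fabricate a usable data segment so CPL and SS.DPL
                 * stay consistent in the VMCS/vmcb: */
                seg_desc.type = 3;      /* read/write data */
                seg_desc.p = 1;         /* present */
                seg_desc.s = 1;         /* code/data, not system */
                seg_desc.dpl = cpl;
                goto load;
        }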
      Reported-by: Xiaohan Zhang <zhangxiaohan1@huawei.com>
      Fixes: 79d5b4c3
      Cc: stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      33ab9110
    • KVM: x86: fix NULL deref in vcpu_scan_ioapic · 546d87e5
      Authored by Wanpeng Li
      Reported by syzkaller:
      
          BUG: unable to handle kernel NULL pointer dereference at 00000000000001b0
          IP: _raw_spin_lock+0xc/0x30
          PGD 3e28eb067
          PUD 3f0ac6067
          PMD 0
          Oops: 0002 [#1] SMP
          CPU: 0 PID: 2431 Comm: test Tainted: G           OE   4.10.0-rc1+ #3
          Call Trace:
           ? kvm_ioapic_scan_entry+0x3e/0x110 [kvm]
           kvm_arch_vcpu_ioctl_run+0x10a8/0x15f0 [kvm]
           ? pick_next_task_fair+0xe1/0x4e0
           ? kvm_arch_vcpu_load+0xea/0x260 [kvm]
           kvm_vcpu_ioctl+0x33a/0x600 [kvm]
           ? hrtimer_try_to_cancel+0x29/0x130
           ? do_nanosleep+0x97/0xf0
           do_vfs_ioctl+0xa1/0x5d0
           ? __hrtimer_init+0x90/0x90
           ? do_nanosleep+0x5b/0xf0
           SyS_ioctl+0x79/0x90
           do_syscall_64+0x6e/0x180
           entry_SYSCALL64_slow_path+0x25/0x25
          RIP: _raw_spin_lock+0xc/0x30 RSP: ffffa43688973cc0
      
      The syzkaller folks reported a NULL pointer dereference due to
      ENABLE_CAP succeeding even without an irqchip.  The Hyper-V
      synthetic interrupt controller is activated, resulting in a
      wrong request to rescan the ioapic and a NULL pointer dereference.
      
          #include <sys/ioctl.h>
          #include <sys/mman.h>
          #include <sys/types.h>
          #include <linux/kvm.h>
          #include <pthread.h>
          #include <stddef.h>
          #include <stdint.h>
          #include <stdlib.h>
          #include <string.h>
          #include <unistd.h>
      
          #ifndef KVM_CAP_HYPERV_SYNIC
          #define KVM_CAP_HYPERV_SYNIC 123
          #endif
      
          void* thr(void* arg)
          {
      	struct kvm_enable_cap cap;
      	cap.flags = 0;
      	cap.cap = KVM_CAP_HYPERV_SYNIC;
      	ioctl((long)arg, KVM_ENABLE_CAP, &cap);
      	return 0;
          }
      
          int main()
          {
      	void *host_mem = mmap(0, 0x1000, PROT_READ|PROT_WRITE,
      			MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
      	int kvmfd = open("/dev/kvm", 0);
      	int vmfd = ioctl(kvmfd, KVM_CREATE_VM, 0);
      	struct kvm_userspace_memory_region memreg;
      	memreg.slot = 0;
      	memreg.flags = 0;
      	memreg.guest_phys_addr = 0;
      	memreg.memory_size = 0x1000;
      	memreg.userspace_addr = (unsigned long)host_mem;
	((char *)host_mem)[0] = 0xf4;	/* host_mem is void *, so cast before dereferencing */
      	ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &memreg);
      	int cpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0);
      	struct kvm_sregs sregs;
      	ioctl(cpufd, KVM_GET_SREGS, &sregs);
      	sregs.cr0 = 0;
      	sregs.cr4 = 0;
      	sregs.efer = 0;
      	sregs.cs.selector = 0;
      	sregs.cs.base = 0;
      	ioctl(cpufd, KVM_SET_SREGS, &sregs);
      	struct kvm_regs regs = { .rflags = 2 };
      	ioctl(cpufd, KVM_SET_REGS, &regs);
      	ioctl(vmfd, KVM_CREATE_IRQCHIP, 0);
      	pthread_t th;
      	pthread_create(&th, 0, thr, (void*)(long)cpufd);
      	usleep(rand() % 10000);
      	ioctl(cpufd, KVM_RUN, 0);
      	pthread_join(th, 0);
      	return 0;
          }
      
      This patch fixes it by making ENABLE_CAP fail when no irqchip is present.
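      
      A sketch of the check (kvm_vcpu_ioctl_enable_cap() in
      arch/x86/kvm/x86.c, simplified):
      
        case KVM_CAP_HYPERV_SYNIC:
                /* SynIC needs an in-kernel irqchip to make sense: */
                if (!irqchip_in_kernel(vcpu->kvm))
                        return -EINVAL;
                return kvm_hv_activate_synic(vcpu);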
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Fixes: 5c919412 (kvm/x86: Hyper-V synthetic interrupt controller)
      Cc: stable@vger.kernel.org # 4.5+
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      546d87e5
    • KVM: x86: Introduce segmented_write_std · 129a72a0
      Authored by Steve Rutherford
      Introduces segmented_write_std.
      
      Switches from emulated reads/writes to standard read/writes in fxsave,
      fxrstor, sgdt, and sidt.  This fixes CVE-2017-2584, a longstanding
      kernel memory leak.
      
      Since commit 283c95d0 ("KVM: x86: emulate FXSAVE and FXRSTOR",
      2016-11-09), which is luckily not yet in any final release, this would
      also be an exploitable kernel memory *write*!
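      
      A sketch of the new helper, mirroring segmented_read_std() in
      arch/x86/kvm/emulate.c (simplified):
      
        static int segmented_write_std(struct x86_emulate_ctxt *ctxt,
                                       struct segmented_address addr,
                                       void *data, unsigned int size)
        {
                int rc;
                ulong linear;
      
                rc = linearize(ctxt, addr, size, true, &linear);
                if (rc != X86EMUL_CONTINUE)
                        return rc;
      
                /* A standard (non-emulated) write into guest memory: */
                return ctxt->ops->write_std(ctxt, linear, data, size,
                                            &ctxt->exception);
        }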
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: stable@vger.kernel.org
      Fixes: 96051572
      Fixes: 283c95d0
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Steve Rutherford <srutherford@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      129a72a0
    • KVM: x86: flush pending lapic jump label updates on module unload · cef84c30
      Authored by David Matlack
      KVM's lapic emulation uses static_key_deferred (apic_{hw,sw}_disabled).
      These are implemented with delayed_work structs which can still be
      pending when the KVM module is unloaded. We've seen this cause kernel
      panics when the kvm_intel module is quickly reloaded.
      
      Use the new static_key_deferred_flush() API to flush pending updates on
      module unload.
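      
      A sketch of the module-exit flush (kvm_lapic_exit(), simplified):
      
        void kvm_lapic_exit(void)
        {
                /* Flush pending deferred static-key updates before the
                 * module text they reference goes away: */
                static_key_deferred_flush(&apic_hw_disabled);
                static_key_deferred_flush(&apic_sw_disabled);
        }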
      Signed-off-by: David Matlack <dmatlack@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      cef84c30
    • x86/entry: Fix the end of the stack for newly forked tasks · ff3f7e24
      Authored by Josh Poimboeuf
      When unwinding a task, the end of the stack is always at the same offset
      right below the saved pt_regs, regardless of which syscall was used to
      enter the kernel.  That convention allows the unwinder to verify that a
      stack is sane.
      
      However, newly forked tasks don't always follow that convention, as
      reported by the following unwinder warning seen by Dave Jones:
      
        WARNING: kernel stack frame pointer at ffffc90001443f30 in kworker/u8:8:30468 has bad value           (null)
      
      The warning was due to the following call chain:
      
        (ftrace handler)
        call_usermodehelper_exec_async+0x5/0x140
        ret_from_fork+0x22/0x30
      
      The problem is that ret_from_fork() doesn't create a stack frame before
      calling other functions.  Fix that by carefully using the frame pointer
      macros.
      
      In addition to conforming to the end of stack convention, this also
      makes related stack traces more sensible by making it clear to the user
      that ret_from_fork() was involved.
      Reported-by: Dave Jones <davej@codemonkey.org.uk>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/8854cdaab980e9700a81e9ebf0d4238e4bbb68ef.1483978430.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ff3f7e24
    • x86/unwind: Include __schedule() in stack traces · 2c96b2fe
      Authored by Josh Poimboeuf
      In the following commit:
      
        0100301b ("sched/x86: Rewrite the switch_to() code")
      
      ... the layout of the 'inactive_task_frame' struct was designed to have
      a frame pointer header embedded in it, so that the unwinder could use
      the 'bp' and 'ret_addr' fields to report __schedule() on the stack (or
      ret_from_fork() for newly forked tasks which haven't actually run yet).
      
      Finish the job by changing get_frame_pointer() to return a pointer to
      inactive_task_frame's 'bp' field rather than 'bp' itself.  This allows
      the unwinder to start one frame higher on the stack, so that it properly
      reports __schedule().
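      
      A sketch of the change (get_frame_pointer() in
      arch/x86/include/asm/stacktrace.h, simplified): return the address
      of the embedded bp field rather than its value.
      
        static inline unsigned long *
        get_frame_pointer(struct task_struct *task, struct pt_regs *regs)
        {
                if (regs)
                        return (unsigned long *)regs->bp;
      
                if (task == current)
                        return __builtin_frame_address(0);
      
                /* Start one frame higher, at the embedded header, so the
                 * unwinder reports __schedule(): */
                return &((struct inactive_task_frame *)task->thread.sp)->bp;
        }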
      Reported-by: Miroslav Benes <mbenes@suse.cz>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/598e9f7505ed0aba86e8b9590aa528c6c7ae8dcd.1483978430.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2c96b2fe
    • x86/unwind: Disable KASAN checks for non-current tasks · 84936118
      Authored by Josh Poimboeuf
      There are a handful of callers to save_stack_trace_tsk() and
      show_stack() which try to unwind the stack of a task other than current.
      In such cases, it's remotely possible that the task is running on one
      CPU while the unwinder is reading its stack from another CPU, causing
      the unwinder to see stack corruption.
      
      These cases seem to be mostly harmless.  The unwinder has checks which
      prevent it from following bad pointers beyond the bounds of the stack.
      So it's not really a bug as long as the caller understands that
      unwinding another task will not always succeed.
      
      In such cases, it's possible that the unwinder may read a KASAN-poisoned
      region of the stack.  Account for that by using READ_ONCE_NOCHECK() when
      reading the stack of another task.
      
      Use READ_ONCE() when reading the stack of the current task, since KASAN
      warnings can still be useful for finding bugs in that case.
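      
      A sketch of the accessor selection the patch adds to the
      frame-pointer unwinder (simplified):
      
        /* Skip KASAN checks only for other tasks' stacks, where a racing
         * write may have legitimately poisoned what we read: */
        #define READ_ONCE_TASK_STACK(task, x)           \
        ({                                              \
                unsigned long val;                      \
                if (task == current)                    \
                        val = READ_ONCE(x);             \
                else                                    \
                        val = READ_ONCE_NOCHECK(x);     \
                val;                                    \
        })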
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/4c575eb288ba9f73d498dfe0acde2f58674598f1.1483978430.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      84936118
    • x86/unwind: Silence warnings for non-current tasks · 900742d8
      Authored by Josh Poimboeuf
      There are a handful of callers to save_stack_trace_tsk() and
      show_stack() which try to unwind the stack of a task other than current.
      In such cases, it's remotely possible that the task is running on one
      CPU while the unwinder is reading its stack from another CPU, causing
      the unwinder to see stack corruption.
      
      These cases seem to be mostly harmless.  The unwinder has checks which
      prevent it from following bad pointers beyond the bounds of the stack.
      So it's not really a bug as long as the caller understands that
      unwinding another task will not always succeed.
      
      Since stack "corruption" on another task's stack isn't necessarily a
      bug, silence the warnings when unwinding tasks other than current.
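      
      A sketch of the guard (the unwinder's bad-frame path, simplified;
      the printk arguments are illustrative):
      
        /* Another task's stack may be changing under us; that is not a
         * bug worth warning about: */
        if (state->task == current)
                printk_deferred_once(KERN_WARNING
                        "WARNING: kernel stack frame pointer at %p has bad value\n",
                        frame);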
      Reported-by: Dave Jones <davej@codemonkey.org.uk>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/00d8c50eea3446c1524a2a755397a3966629354c.1483978430.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      900742d8
  17. 11 Jan 2017 (1 commit)