1. 16 Feb 2017, 1 commit
  2. 07 Feb 2017, 2 commits
  3. 05 Feb 2017, 1 commit
    • x86/CPU/AMD: Bring back Compute Unit ID · 79a8b9aa
      Committed by Borislav Petkov
      Commit:
      
        a33d3317 ("x86/CPU/AMD: Fix Bulldozer topology")
      
      restored the initial approach we had with the Fam15h topology of
      enumerating CU (Compute Unit) threads as cores. And this is still
      correct - they're beefier than HT threads but still have some
      shared functionality.
      
      Our current approach has a problem with the Mad Max Steam game, for
      example. Yves Dionne reported a certain "choppiness" while playing on
      v4.9.5.
      
      That problem most likely stems from the fact that the CU threads share
      resources within one CU, and when we schedule to a thread of a different
      compute unit, this incurs latency due to migrating the working set to a
      different CU through the caches.
      
      When the thread siblings mask mirrors that aspect of the CUs and
      threads, the scheduler pays attention to it and tries to schedule within
      one CU first, which takes care of the latency, of course.
      Reported-by: Yves Dionne <yves.dionne@gmail.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: <stable@vger.kernel.org> # 4.9
      Cc: Brice Goglin <Brice.Goglin@inria.fr>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Yazen Ghannam <yazen.ghannam@amd.com>
      Link: http://lkml.kernel.org/r/20170205105022.8705-1-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      79a8b9aa
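      For reference, the compute unit ID being restored here is read from
      CPUID leaf 0x8000001E (EBX[7:0] on Fam15h). A minimal userspace sketch,
      assuming GCC's cpuid.h and a CPU that implements that leaf:

        #include <stdio.h>
        #include <cpuid.h>

        int main(void)
        {
                unsigned int eax, ebx, ecx, edx;

                /* Leaf 0x8000001E: extended topology / compute unit info */
                if (!__get_cpuid(0x8000001e, &eax, &ebx, &ecx, &edx)) {
                        fprintf(stderr, "CPUID leaf 0x8000001E not available\n");
                        return 1;
                }

                /* EBX[7:0]: compute unit ID on Fam15h (core ID on newer parts) */
                printf("compute unit id: %u\n", ebx & 0xff);
                return 0;
        }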
  4. 04 Feb 2017, 3 commits
  5. 01 Feb 2017, 1 commit
    • sched/cputime: Remove generic asm headers · b672592f
      Committed by Frederic Weisbecker
      cputime_t is now only used by two architectures:
      
      	* powerpc (when CONFIG_VIRT_CPU_ACCOUNTING_NATIVE=y)
      	* s390
      
      And since the core doesn't use it anymore, we don't need any arch support
      from the others. So we can remove their stub implementations.
      
      A final cleanup would be to provide an efficient pure arch
      implementation of cputime_to_nsec() for s390 and powerpc and finally
      remove include/linux/cputime.h.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Stanislaw Gruszka <sgruszka@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Link: http://lkml.kernel.org/r/1485832191-26889-36-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b672592f
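      To make the remaining conversion concrete: for a tick-based cputime_t,
      cputime_to_nsecs() is just a unit scale. A hedged sketch of what such a
      generic fallback looks like (the constants are assumptions for
      illustration, not the actual powerpc/s390 code, which uses native
      timebase units):

        #include <stdint.h>

        #define HZ           1000ULL          /* assumed tick rate */
        #define NSEC_PER_SEC 1000000000ULL

        /* Generic fallback: cputime counted in ticks, scaled to nanoseconds. */
        static inline uint64_t cputime_to_nsecs(uint64_t cputime)
        {
                return cputime * (NSEC_PER_SEC / HZ);
        }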
  6. 30 Jan 2017, 1 commit
    • x86/microcode: Do not access the initrd after it has been freed · 24c25032
      Committed by Borislav Petkov
      When we look for microcode blobs, we first try builtin and if that
      doesn't succeed, we fall back to the initrd supplied to the kernel.
      
      However, at some point during boot, that initrd gets jettisoned and we
      shouldn't access it anymore. But we do, as the KASAN report below shows.
      That's because find_microcode_in_initrd() doesn't check whether the
      initrd is still valid or not.
      
      So do that.
      
        ==================================================================
        BUG: KASAN: use-after-free in find_cpio_data
        Read of size 1 by task swapper/1/0
        page:ffffea0000db9d40 count:0 mapcount:0 mapping:          (null) index:0x1
        flags: 0x100000000000000()
        raw: 0100000000000000 0000000000000000 0000000000000001 00000000ffffffff
        raw: dead000000000100 dead000000000200 0000000000000000 0000000000000000
        page dumped because: kasan: bad access detected
        CPU: 1 PID: 0 Comm: swapper/1 Tainted: G        W       4.10.0-rc5-debug-00075-g2dbde22 #3
        Hardware name: Dell Inc. XPS 13 9360/0839Y6, BIOS 1.2.3 12/01/2016
        Call Trace:
         dump_stack
         ? _atomic_dec_and_lock
         ? __dump_page
         kasan_report_error
         ? pointer
         ? find_cpio_data
         __asan_report_load1_noabort
         ? find_cpio_data
         find_cpio_data
         ? vsprintf
         ? dump_stack
         ? get_ucode_user
         ? print_usage_bug
         find_microcode_in_initrd
         __load_ucode_intel
         ? collect_cpu_info_early
         ? debug_check_no_locks_freed
         load_ucode_intel_ap
         ? collect_cpu_info
         ? trace_hardirqs_on
         ? flat_send_IPI_mask_allbutself
         load_ucode_ap
         ? get_builtin_firmware
         ? flush_tlb_func
         ? do_raw_spin_trylock
         ? cpumask_weight
         cpu_init
         ? trace_hardirqs_off
         ? play_dead_common
         ? native_play_dead
         ? hlt_play_dead
         ? syscall_init
         ? arch_cpu_idle_dead
         ? do_idle
         start_secondary
         start_cpu
        Memory state around the buggy address:
         ffff880036e74f00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
         ffff880036e74f80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
        >ffff880036e75000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
                           ^
         ffff880036e75080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
         ffff880036e75100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
        ==================================================================
      Reported-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Tested-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170126165833.evjemhbqzaepirxo@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      24c25032
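      The shape of the fix is a validity flag: record when the initrd is
      jettisoned and refuse to touch it afterwards. A simplified sketch (the
      flag and function names here are illustrative, not the exact kernel
      code):

        #include <stdbool.h>
        #include <stddef.h>

        static bool initrd_gone;

        /* Hypothetical hook, called once the initrd memory is freed. */
        static void mark_initrd_gone(void)
        {
                initrd_gone = true;
        }

        /* Check validity before scanning, instead of dereferencing freed
         * memory as in the KASAN report above. */
        static const void *find_microcode_in_initrd(const void *initrd,
                                                    size_t size)
        {
                if (initrd_gone)
                        return NULL;

                /* ... scan the still-valid initrd for microcode blobs ... */
                (void)size;
                return initrd;
        }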
  7. 25 Jan 2017, 2 commits
  8. 24 Jan 2017, 3 commits
  9. 17 Jan 2017, 1 commit
  10. 14 Jan 2017, 2 commits
  11. 12 Jan 2017, 2 commits
    • x86/unwind: Include __schedule() in stack traces · 2c96b2fe
      Committed by Josh Poimboeuf
      In the following commit:
      
        0100301b ("sched/x86: Rewrite the switch_to() code")
      
      ... the layout of the 'inactive_task_frame' struct was designed to have
      a frame pointer header embedded in it, so that the unwinder could use
      the 'bp' and 'ret_addr' fields to report __schedule() on the stack (or
      ret_from_fork() for newly forked tasks which haven't actually run yet).
      
      Finish the job by changing get_frame_pointer() to return a pointer to
      inactive_task_frame's 'bp' field rather than 'bp' itself.  This allows
      the unwinder to start one frame higher on the stack, so that it properly
      reports __schedule().
      Reported-by: Miroslav Benes <mbenes@suse.cz>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/598e9f7505ed0aba86e8b9590aa528c6c7ae8dcd.1483978430.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2c96b2fe
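      The mechanics are easier to see in a reduced sketch of the struct and
      accessor (simplified: the real x86 struct also carries the saved
      callee-saved registers, and the real accessor takes a task/regs pair):

        struct inactive_task_frame {
                /* ... saved callee-saved registers omitted ... */
                unsigned long bp;        /* frame pointer header */
                unsigned long ret_addr;  /* points back into __schedule() */
        };

        /* Before, this returned the value of frame->bp, so the unwinder
         * began past the __schedule() frame. Returning the address of the
         * 'bp' field instead makes 'bp'/'ret_addr' look like a normal
         * frame-pointer/return-address pair, one frame higher. */
        static inline unsigned long *
        get_frame_pointer(struct inactive_task_frame *frame)
        {
                return &frame->bp;
        }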
    • x86/unwind: Disable KASAN checks for non-current tasks · 84936118
      Committed by Josh Poimboeuf
      There are a handful of callers to save_stack_trace_tsk() and
      show_stack() which try to unwind the stack of a task other than current.
      In such cases, it's remotely possible that the task is running on one
      CPU while the unwinder is reading its stack from another CPU, causing
      the unwinder to see stack corruption.
      
      These cases seem to be mostly harmless.  The unwinder has checks which
      prevent it from following bad pointers beyond the bounds of the stack.
      So it's not really a bug as long as the caller understands that
      unwinding another task will not always succeed.
      
      In such cases, it's possible that the unwinder may read a KASAN-poisoned
      region of the stack.  Account for that by using READ_ONCE_NOCHECK() when
      reading the stack of another task.
      
      Use READ_ONCE() when reading the stack of the current task, since KASAN
      warnings can still be useful for finding bugs in that case.
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/4c575eb288ba9f73d498dfe0acde2f58674598f1.1483978430.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      84936118
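      The gist of the fix is a task-aware stack read. This is roughly the
      wrapper the patch introduces (a sketch; READ_ONCE() and
      READ_ONCE_NOCHECK() are the kernel's annotated-read primitives, the
      latter exempt from KASAN checking):

        /* KASAN-checked reads for current, unchecked reads for remote
         * tasks whose stacks may be concurrently modified or poisoned. */
        #define READ_ONCE_TASK_STACK(task, x)           \
        ({                                              \
                unsigned long val;                      \
                if ((task) == current)                  \
                        val = READ_ONCE(x);             \
                else                                    \
                        val = READ_ONCE_NOCHECK(x);     \
                val;                                    \
        })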
  12. 10 Jan 2017, 2 commits
  13. 06 Jan 2017, 1 commit
  14. 05 Jan 2017, 1 commit
  15. 30 Dec 2016, 1 commit
    • mm: optimize PageWaiters bit use for unlock_page() · b91e1302
      Committed by Linus Torvalds
      In commit 62906027 ("mm: add PageWaiters indicating tasks are
      waiting for a page bit") Nick Piggin made our page locking no longer
      unconditionally touch the hashed page waitqueue, which not only helps
      performance in general, but is particularly helpful on NUMA machines
      where the hashed wait queues can bounce around a lot.
      
      However, the "clear lock bit atomically and then test the waiters bit"
      sequence turns out to be much more expensive than it needs to be,
      because you get a nasty stall when trying to access the same word that
      just got updated atomically.
      
      On architectures where locking is done with LL/SC, this would be trivial
      to fix with a new primitive that clears one bit and tests another
      atomically, but that ends up not working on x86, where the only atomic
      operations that return the result end up being cmpxchg and xadd.  The
      atomic bit operations return the old value of the same bit we changed,
      not the value of an unrelated bit.
      
      On x86, we could put the lock bit in the high bit of the byte, and use
      "xadd" with that bit (where the overflow ends up not touching other
      bits), and look at the other bits of the result.  However, an even
      simpler model is to just use a regular atomic "and" to clear the lock
      bit, and then the sign bit in eflags will indicate the resulting state
      of the unrelated bit #7.
      
      So by moving the PageWaiters bit up to bit #7, we can atomically clear
      the lock bit and test the waiters bit on x86 too.  And on architectures
      with LL/SC (which is all the usual RISC suspects), the particular bit
      doesn't matter, so they are fine with this approach too.
      
      This avoids the extra access to the same atomic word, and thus avoids
      the costly stall at page unlock time.
      
      The only downside is that the interface ends up being a bit odd and
      specialized: clear a bit in a byte, and test the sign bit.  Nick doesn't
      love the resulting name of the new primitive, but I'd rather make the
      name be descriptive and very clear about the limitation imposed by
      trying to work across all relevant architectures than make it be some
      generic thing that doesn't make the odd semantics explicit.
      
      So this introduces the new architecture primitive
      
          clear_bit_unlock_is_negative_byte();
      
      and adds the trivial implementation for x86.  We have a generic
      non-optimized fallback (that just does a "clear_bit()"+"test_bit(7)"
      combination) which can be overridden by any architecture that can do
      better.  According to Nick, Power has the same hiccup x86 has, for
      example, but some other architectures may not even care.
      
      All these optimizations mean that my page locking stress-test (which is
      just executing a lot of small short-lived shell scripts: "make test" in
      the git source tree) no longer makes our page locking look horribly bad.
      Before all these optimizations, the unlock_page() costs alone were just
      over 3% of all CPU overhead on "make test".  After this, it's down to
      0.66%, so just a quarter of the cost it used to be.
      
      (The difference on NUMA is bigger, but there this micro-optimization is
      likely less noticeable, since the big issue on NUMA was not the accesses
      to 'struct page', but the waitqueue accesses that were already removed
      by Nick's earlier commit).
      Acked-by: Nick Piggin <npiggin@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Bob Peterson <rpeterso@redhat.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Andrew Lutomirski <luto@kernel.org>
      Cc: Andreas Gruenbacher <agruenba@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b91e1302
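      The generic non-optimized fallback described above is essentially the
      following (a sketch of the clear_bit()+test_bit(7) combination; in the
      kernel it sits behind an #ifndef so architectures like x86 can override
      it with a fused atomic):

        /* Clear the lock bit with release semantics, then separately test
         * bit 7 (where PageWaiters now lives). Two accesses to the word,
         * hence slower; architectures override this when they can fuse the
         * clear and the sign test into one operation. */
        #ifndef clear_bit_unlock_is_negative_byte
        static inline bool
        clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *mem)
        {
                clear_bit_unlock(nr, mem);
                return test_bit(7, mem);
        }
        #endif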
  16. 28 Dec 2016, 1 commit
  17. 25 Dec 2016, 2 commits
  18. 19 Dec 2016, 6 commits
  19. 18 Dec 2016, 1 commit
  20. 17 Dec 2016, 2 commits
  21. 15 Dec 2016, 3 commits
    • x86/mm: Drop unused argument 'removed' from sync_global_pgds() · 5372e155
      Committed by Kirill A. Shutemov
      Since commit af2cf278 ("x86/mm/hotplug: Don't remove PGD entries in
      remove_pagetable()") there are no callers of sync_global_pgds() which set
      the 'removed' argument to 1.
      
      Remove the argument and the related conditionals in the function.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Link: http://lkml.kernel.org/r/20161214234403.137556-1-kirill.shutemov@linux.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      5372e155
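      The resulting interface change is just the dropped parameter (a sketch
      of the prototypes):

        /* Before:
         *   void sync_global_pgds(unsigned long start, unsigned long end,
         *                         int removed);
         * After: no caller passes removed=1 anymore, so the argument and
         * the conditionals keyed on it go away. */
        void sync_global_pgds(unsigned long start, unsigned long end);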
    • x86/tsc: Force TSC_ADJUST register to value >= zero · 5bae1562
      Committed by Thomas Gleixner
      Roland reported that his DELL T5810 sports a value-add BIOS which
      completely wrecks the TSC. The squirmware [(TM) Ingo Molnar] boots with
      random negative TSC_ADJUST values, different on all CPUs. That renders the
      TSC useless because the synchronization check fails.
      
      Roland tested the new TSC_ADJUST mechanism. While it manages to readjust
      the TSCs, he needs to disable the TSC deadline timer, otherwise the
      machine just stops booting.
      
      Deeper investigation unearthed that the TSC deadline timer is sensitive to
      the TSC_ADJUST value. Writing TSC_ADJUST to a negative value results in an
      interrupt storm caused by the TSC deadline timer.
      
      This does not make any sense and it's hard to imagine what kind of hardware
      wreckage is behind that misfeature, but it's reliably reproducible on other
      systems which have TSC_ADJUST and TSC deadline timer.
      
      While it would be understandable that a big enough negative value which
      moves the resulting TSC readout into the negative space could have the
      described effect, this happens even with an adjust value of -1, which keeps
      the TSC readout definitely in the positive space. The compare register for
      the TSC deadline timer is set to a positive value larger than the TSC, but
      despite not having reached the deadline the interrupt is raised
      immediately. If this happens on the boot CPU, then the machine dies
      silently because this setup happens before the NMI watchdog is armed.
      
      Further experiments showed that any other adjustment of TSC_ADJUST works as
      expected as long as it stays in the positive range. The direction of the
      adjustment has no influence either. See the lkml link for further analysis.
      
      Yet another proof for the theory that timers are designed by janitors and
      the underlying (obviously undocumented) mechanisms which allow BIOSes to
      wreckage them are considered a feature. Well done Intel - NOT!
      
      To address this wreckage, add the following sanity measures:
      
      - If the TSC_ADJUST value on the boot CPU is not 0, set it to 0
      
      - If the TSC_ADJUST value on any CPU is negative, set it to 0
      
      - Prevent the cross-package synchronization mechanism from setting
        negative TSC_ADJUST values.
      Reported-and-tested-by: Roland Scheidegger <rscheidegger_lists@hispeed.ch>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Bruce Schlobohm <bruce.schlobohm@intel.com>
      Cc: Kevin Stanton <kevin.b.stanton@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Allen Hung <allen_hung@dell.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Link: http://lkml.kernel.org/r/20161213131211.397588033@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      5bae1562
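      The sanity measures boil down to a little MSR fixup at CPU bringup. A
      hedged sketch using the kernel's rdmsrl()/wrmsrl() helpers (simplified;
      the cross-package synchronization part is not shown, and the function
      name is illustrative):

        /* Clamp TSC_ADJUST into sane territory when a CPU comes up. */
        static u64 tsc_sanitize_adjust(bool boot_cpu)
        {
                u64 bootval;

                rdmsrl(MSR_IA32_TSC_ADJUST, bootval);

                /* Boot CPU: force 0. Any CPU: never tolerate a negative
                 * value, it triggers the TSC deadline timer interrupt
                 * storm described above. */
                if ((boot_cpu && bootval != 0) || (s64)bootval < 0) {
                        wrmsrl(MSR_IA32_TSC_ADJUST, 0);
                        bootval = 0;
                }

                return bootval;
        }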
    • x86/tsc: Validate TSC_ADJUST after resume · 6a369583
      Committed by Thomas Gleixner
      Some 'feature' BIOSes fiddle with the TSC_ADJUST register during
      suspend/resume, which renders the TSC unusable.
      
      Add sanity checks into the resume path and restore the
      original value if it was adjusted.
      Reported-and-tested-by: Roland Scheidegger <rscheidegger_lists@hispeed.ch>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Bruce Schlobohm <bruce.schlobohm@intel.com>
      Cc: Kevin Stanton <kevin.b.stanton@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Allen Hung <allen_hung@dell.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Link: http://lkml.kernel.org/r/20161213131211.317654500@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      6a369583
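      A sketch of the resume-path check, under the same assumptions as the
      bringup sketch above (saved_adjust being the value recorded at boot):

        /* On resume, detect BIOS fiddling and restore the boot-time value. */
        static void tsc_verify_adjust_on_resume(u64 saved_adjust)
        {
                u64 cur;

                rdmsrl(MSR_IA32_TSC_ADJUST, cur);
                if (cur != saved_adjust)
                        wrmsrl(MSR_IA32_TSC_ADJUST, saved_adjust);
        }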
  22. 14 Dec 2016, 1 commit