1. 20 March 2018, 5 commits
  2. 17 March 2018, 4 commits
    • parisc: Handle case where flush_cache_range is called with no context · 9ef0f88f
      John David Anglin authored
      Just when I had decided that flush_cache_range() was always called with
      a valid context, Helge reported two cases where the
      "BUG_ON(!vma->vm_mm->context);" was hit on the phantom buildd:
      
       kernel BUG at /mnt/sdb6/linux/linux-4.15.4/arch/parisc/kernel/cache.c:587!
       CPU: 1 PID: 3254 Comm: kworker/1:2 Tainted: G D 4.15.0-1-parisc64-smp #1 Debian 4.15.4-1+b1
       Workqueue: events free_ioctx
        IAOQ[0]: flush_cache_range+0x164/0x168
        IAOQ[1]: flush_cache_page+0x0/0x1c8
        RP(r2): unmap_page_range+0xae8/0xb88
       Backtrace:
        [<00000000404a6980>] unmap_page_range+0xae8/0xb88
        [<00000000404a6ae0>] unmap_single_vma+0xc0/0x188
        [<00000000404a6cdc>] zap_page_range_single+0x134/0x1f8
        [<00000000404a702c>] unmap_mapping_range+0x1cc/0x208
        [<0000000040461518>] truncate_pagecache+0x98/0x108
        [<0000000040461624>] truncate_setsize+0x9c/0xb8
        [<00000000405d7f30>] put_aio_ring_file+0x80/0x100
        [<00000000405d803c>] aio_free_ring+0x8c/0x290
        [<00000000405d82c0>] free_ioctx+0x80/0x180
        [<0000000040284e6c>] process_one_work+0x21c/0x668
        [<00000000402854c4>] worker_thread+0x20c/0x778
        [<0000000040291d44>] kthread+0x2d4/0x2e0
        [<0000000040204020>] end_fault_vector+0x20/0xc0
      
      This indicates that we need to handle the no context case in
      flush_cache_range() as we do in flush_cache_mm().
      
      In thinking about this, I realized that we don't need to flush the TLB
      when there is no context.  So, I added context checks to the large flush
      cases in flush_cache_mm() and flush_cache_range().  The large flush case
      occurs frequently in flush_cache_mm() and the change should improve fork
      performance.
      
      The v2 version of this change removes the BUG_ON from flush_cache_page()
      by skipping the TLB flush when there is no context.  I also added code
      to flush the TLB in flush_cache_mm() and flush_cache_range() when we
      have a context that's not current.  Now all three routines handle TLB
      flushes in a similar manner.
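
      The resulting shape of the large-flush path is roughly the following
      (a simplified sketch, not the literal patch; parisc_cache_flush_threshold,
      flush_tlb_range() and flush_cache_all() are the existing helpers in
      arch/parisc/kernel/cache.c, details elided):

        void flush_cache_range(struct vm_area_struct *vma,
                               unsigned long start, unsigned long end)
        {
                if ((end - start) >= parisc_cache_flush_threshold) {
                        /* No context means there is nothing to flush
                         * from the TLB. */
                        if (vma->vm_mm->context)
                                flush_tlb_range(vma, start, end);
                        flush_cache_all();
                        return;
                }
                /* ... per-page flush path ... */
        }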
      Signed-off-by: John David Anglin <dave.anglin@bell.net>
      Cc: stable@vger.kernel.org # 4.9+
      Signed-off-by: Helge Deller <deller@gmx.de>
    • x86/microcode: Fix CPU synchronization routine · bb8c13d6
      Borislav Petkov authored
      Emanuel reported an issue with a hang during microcode update because my
      dumb idea to use one atomic synchronization variable for both rendezvous
      points - before and after the update - was simply bollocks:
      
        microcode: microcode_reload_late: late_cpus: 4
        microcode: __reload_late: cpu 2 entered
        microcode: __reload_late: cpu 1 entered
        microcode: __reload_late: cpu 3 entered
        microcode: __reload_late: cpu 0 entered
        microcode: __reload_late: cpu 1 left
        microcode: Timeout while waiting for CPUs rendezvous, remaining: 1
      
      CPU1 above would finish and leave, while the others would still spin,
      waiting for it to join.
      
      So do two synchronization atomics instead, which makes the code a lot more
      straightforward.
      
      Also, since the update is serialized and it also takes quite some time per
      microcode engine, increase the exit timeout by the number of CPUs on the
      system.
      
      That's ok because the moment all CPUs are done, that timeout will be cut
      short.
      
      Furthermore, panic when some of the CPUs time out when returning from a
      microcode update: we can't allow a system where only some cores are updated.
      
      Also, as an optimization, do not do the exit sync if microcode wasn't
      updated.
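
      A minimal sketch of the two-counter rendezvous (simplified names; the
      real code lives in arch/x86/kernel/cpu/microcode/core.c):

        static atomic_t late_cpus_in, late_cpus_out;

        /* Spin until all online CPUs have checked in on @cnt, or time out. */
        static bool wait_for_cpus(atomic_t *cnt, unsigned int timeout_us)
        {
                atomic_inc(cnt);
                while (atomic_read(cnt) < num_online_cpus()) {
                        if (!timeout_us--)
                                return false;
                        udelay(1);
                }
                return true;
        }

      Each CPU passes through wait_for_cpus(&late_cpus_in, ...) before the
      update and wait_for_cpus(&late_cpus_out, ...) after it, with the exit
      timeout scaled by num_online_cpus() since the updates are serialized.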
      Reported-by: Emanuel Czirai <xftroxgpx@protonmail.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Emanuel Czirai <xftroxgpx@protonmail.com>
      Tested-by: Ashok Raj <ashok.raj@intel.com>
      Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
      Link: https://lkml.kernel.org/r/20180314183615.17629-2-bp@alien8.de
    • x86/microcode: Attempt late loading only when new microcode is present · 2613f36e
      Borislav Petkov authored
      Return UCODE_NEW from the scanning functions to denote that new microcode
      was found and only then attempt the expensive synchronization dance.
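
      A sketch of the gate this adds in reload_store() (simplified; the exact
      surrounding code is elided):

        tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev, true);
        if (tmp_ret != UCODE_NEW)
                return size;    /* nothing new: skip the rendezvous */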
      Reported-by: Emanuel Czirai <xftroxgpx@protonmail.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Emanuel Czirai <xftroxgpx@protonmail.com>
      Tested-by: Ashok Raj <ashok.raj@intel.com>
      Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
      Link: https://lkml.kernel.org/r/20180314183615.17629-1-bp@alien8.de
    • perf: Fix sibling iteration · edb39592
      Peter Zijlstra authored
      Mark noticed that the change to sibling_list changed some iteration
      semantics; because previously we used group_entry as the list entry,
      sibling events would always have an empty sibling_list.
      
      But because we now use sibling_list for both list head and list entry,
      siblings will report as having siblings.
      
      Fix this with a custom for_each_sibling_event() iterator.
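
      The iterator is guarded so that it yields nothing for non-leader
      events (sketch matching the description above):

        #define for_each_sibling_event(sibling, event)                  \
                if ((event)->group_leader == (event))                   \
                        list_for_each_entry((sibling), &(event)->sibling_list, \
                                            sibling_list)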
      
      Fixes: 8343aae6 ("perf/core: Remove perf_event::group_entry")
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Suggested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: vincent.weaver@maine.edu
      Cc: alexander.shishkin@linux.intel.com
      Cc: torvalds@linux-foundation.org
      Cc: alexey.budankov@linux.intel.com
      Cc: valery.cherepennikov@intel.com
      Cc: eranian@google.com
      Cc: acme@redhat.com
      Cc: linux-tip-commits@vger.kernel.org
      Cc: davidcc@google.com
      Cc: kan.liang@intel.com
      Cc: Dmitry.Prohorov@intel.com
      Cc: jolsa@redhat.com
      Link: https://lkml.kernel.org/r/20180315170129.GX4043@hirez.programming.kicks-ass.net
  3. 16 March 2018, 7 commits
  4. 15 March 2018, 2 commits
  5. 14 March 2018, 3 commits
    • x86/speculation, objtool: Annotate indirect calls/jumps for objtool on 32-bit kernels · a14bff13
      Andy Whitcroft authored
      In the following commit:
      
        9e0e3c51 ("x86/speculation, objtool: Annotate indirect calls/jumps for objtool")
      
      ... we added annotations for CALL_NOSPEC/JMP_NOSPEC on 64-bit x86 kernels,
      but we did not annotate the 32-bit path.
      
      Annotate it similarly.
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20180314112427.22351-1-apw@canonical.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/vm86/32: Fix POPF emulation · b5069782
      Andy Lutomirski authored
      POPF would trap if VIP was set regardless of whether IF was set.  Fix it.
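
      In terms of the emulated flags, the corrected condition is roughly the
      following (a sketch of the semantics, not the literal diff;
      vm86_fault_return() is a hypothetical stand-in for the bounce back to
      userspace):

        /* Only trap when a virtual interrupt is pending (VIP) AND the
         * popped flags actually enable interrupts (IF). */
        if ((flags & X86_EFLAGS_VIP) && (flags & X86_EFLAGS_IF))
                vm86_fault_return(regs, VM86_STI);  /* hypothetical helper */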
      Suggested-by: Stas Sergeev <stsp@list.ru>
      Reported-by: Bart Oldeman <bartoldeman@gmail.com>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Fixes: 5ed92a8a ("x86/vm86: Use the normal pt_regs area for vm86")
      Link: http://lkml.kernel.org/r/ce95f40556e7b2178b6bc06ee9557827ff94bd28.1521003603.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • KVM: PPC: Book3S HV: Fix trap number return from __kvmppc_vcore_entry · a8b48a4d
      Paul Mackerras authored
      This fixes a bug where the trap number that is returned by
      __kvmppc_vcore_entry gets corrupted.  The effect of the corruption
      is that IPIs get ignored on POWER9 systems when the IPI is sent via
      a doorbell interrupt to a CPU which is executing in a KVM guest.
      The effect of the IPI being ignored is often that another CPU locks
      up inside smp_call_function_many() (and if that CPU is holding a
      spinlock, other CPUs then lock up inside raw_spin_lock()).
      
      The trap number is currently held in register r12 for most of the
      assembly-language part of the guest exit path.  In that path, we
      call kvmppc_subcore_exit_guest(), which is a C function, without
      restoring r12 afterwards.  Depending on the kernel config and the
      compiler, it may modify r12 or it may not, so some config/compiler
      combinations see the bug and others don't.
      
      To fix this, we arrange for the trap number to be stored on the
      stack from the 'guest_bypass:' label until the end of the function,
      then the trap number is loaded and returned in r12 as before.
      
      Cc: stable@vger.kernel.org # v4.8+
      Fixes: fd7bacbc ("KVM: PPC: Book3S HV: Fix TB corruption in guest exit path on HMI interrupt")
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
  6. 12 March 2018, 3 commits
  7. 10 March 2018, 1 commit
  8. 09 March 2018, 8 commits
    • x86/kprobes: Fix kernel crash when probing .entry_trampoline code · c07a8f8b
      Francis Deslauriers authored
      Disable the kprobe probing of the entry trampoline:
      
      .entry_trampoline is a code area that is used to ensure page table
      isolation between userspace and kernelspace.
      
      At the beginning of the execution of the trampoline, we load the
      kernel's CR3 register. This has the effect of enabling the translation
      of kernel virtual addresses to physical addresses. Before this
      happens, most kernel addresses cannot be translated, because the
      running process's CR3 is still in use.
      
      If a kprobe is placed on the trampoline code before that change of the
      CR3 register happens, the kernel crashes because the int3-handling
      pages are not accessible.
      
      To fix this, add the .entry_trampoline section to the kprobe blacklist
      to prohibit the probing of code before all the kernel pages are
      accessible.
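
      A sketch of the blacklist check (assuming __entry_trampoline_start/end
      are linker symbols bracketing the section):

        bool arch_within_kprobe_blacklist(unsigned long addr)
        {
                return (addr >= (unsigned long)__kprobes_text_start &&
                        addr < (unsigned long)__kprobes_text_end) ||
                       (addr >= (unsigned long)__entry_trampoline_start &&
                        addr < (unsigned long)__entry_trampoline_end);
        }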
      Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: mathieu.desnoyers@efficios.com
      Cc: mhiramat@kernel.org
      Link: http://lkml.kernel.org/r/1520565492-4637-2-git-send-email-francis.deslauriers@efficios.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Disable userspace RDPMC usage for large PEBS · 1af22eba
      Kan Liang authored
      Userspace RDPMC cannot possibly work for large PEBS, which was introduced in:
      
        b8241d20699e ("perf/x86/intel: Implement batched PEBS interrupt handling (large PEBS interrupt threshold)")
      
      When the PEBS interrupt threshold is larger than one, there is no way
      to get exact auto-reload times and values for userspace RDPMC.  Disable
      userspace RDPMC usage when large PEBS is enabled.
      
      The only exception is when the PEBS interrupt threshold is 1, in which
      case user-space RDPMC works well even with auto-reload events.
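
      A sketch of the gate at event-init time (the flag name is an
      assumption based on this era's code, where large PEBS was tracked
      as "freerunning"):

        if (READ_ONCE(x86_pmu.attr_rdpmc) &&
            !(event->hw.flags & PERF_X86_EVENT_FREERUNNING))
                event->hw.flags |= PERF_X86_EVENT_RDPMC_ALLOWED;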
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Fixes: b8241d20699e ("perf/x86/intel: Implement batched PEBS interrupt handling (large PEBS interrupt threshold)")
      Link: http://lkml.kernel.org/r/1518474035-21006-6-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Fix PMU read for auto-reload · ceb90d9e
      Kan Liang authored
      Auto-reload events need special handling when reading the event count.
      
      Auto-reload is only available for intel_pmu.
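
      A sketch of the intel_pmu ->read() implementation (simplified):

        static void intel_pmu_read_event(struct perf_event *event)
        {
                if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
                        intel_pmu_auto_reload_read(event);
                else
                        x86_perf_event_update(event);
        }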
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Fixes: b8241d20699e ("perf/x86/intel: Implement batched PEBS interrupt handling (large PEBS interrupt threshold)")
      Link: http://lkml.kernel.org/r/1518474035-21006-5-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/ds: Introduce ->read() function for auto-reload events and flush the PEBS buffer there · 5bee2cc6
      Kan Liang authored
      
      There is no way to get exact auto-reload times and values which are needed
      for event updates unless we flush the PEBS buffer.
      
      Introduce intel_pmu_auto_reload_read() to drain the PEBS buffer for
      auto-reload events. To prevent races with the hardware, we can only
      call drain_pebs() when the PMU is disabled.
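
      A sketch of the new helper (simplified):

        static void intel_pmu_auto_reload_read(struct perf_event *event)
        {
                WARN_ON(!(event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD));

                perf_pmu_disable(event->pmu);
                /* Safe now: the hardware cannot write new records. */
                x86_pmu.drain_pebs(NULL);
                perf_pmu_enable(event->pmu);
        }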
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Link: http://lkml.kernel.org/r/1518474035-21006-4-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86: Introduce a ->read() callback in 'struct x86_pmu' · bcfbe5c4
      Kan Liang authored
      Auto-reload needs to be specially handled when reading event counts.
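
      A sketch of the callback and its generic dispatch (simplified):

        struct x86_pmu {
                /* ... existing members ... */
                void    (*read)(struct perf_event *event);
        };

        static void x86_pmu_read(struct perf_event *event)
        {
                if (x86_pmu.read)
                        return x86_pmu.read(event);
                x86_perf_event_update(event);
        }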
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Link: http://lkml.kernel.org/r/1518474035-21006-3-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Fix event update for auto-reload · d31fc13f
      Kan Liang authored
      There is a bug when reading event->count with large PEBS enabled.
      
      Here is an example:
      
        # ./read_count
        0x71f0
        0x122c0
        0x1000000001c54
        0x100000001257d
        0x200000000bdc5
      
      In fixed period mode, the auto-reload mechanism could be enabled for
      PEBS events, but the calculation of event->count does not take the
      auto-reload values into account.
      
      Anyone who reads event->count will get the wrong result, e.g. x86_pmu_read().
      
      This bug has been present since the auto-reload mechanism was enabled,
      in commit:
      
        851559e3 ("perf/x86/intel: Use the PEBS auto reload mechanism when possible")
      
      Introduce intel_pmu_save_and_restart_reload() to calculate the
      event->count only for auto-reload.
      
      Since the counter increments a negative counter value and overflows on
      the sign switch, giving the interval:
      
              [-period, 0]
      
      the difference between two consecutive reads is:
      
       A) value2 - value1;
          when no overflows have happened in between,
       B) (0 - value1) + (value2 - (-period));
          when one overflow happened in between,
       C) (0 - value1) + (n - 1) * (period) + (value2 - (-period));
          when @n overflows happened in between.
      
      Here A) is the obvious difference, B) is the extension to the discrete
      interval, where the first term is to the top of the interval and the
      second term is from the bottom of the next interval; and C) is the
      extension to multiple intervals, where the middle term is the whole
      intervals covered.
      
      The equation for all cases is:
      
          value2 - value1 + n * period
      
      Previously, event->count was updated right before the sample output.
      But for case A, there is no PEBS record ready. It needs to be specially
      handled.
      
      Remove the auto-reload code from x86_perf_event_set_period() since
      we no longer call that function in this case.
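
      The core of the new helper is then roughly (sketch; @n is the number
      of PEBS records, i.e. reloads, observed since the last read):

        delta = (new_raw_count << shift) - (prev_raw_count << shift);
        delta >>= shift;        /* sign-extend to the counter width */
        local64_add(delta + n * period, &event->count);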
      
      Based-on-code-from: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Fixes: 851559e3 ("perf/x86/intel: Use the PEBS auto reload mechanism when possible")
      Link: http://lkml.kernel.org/r/1518474035-21006-2-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Properly save/restore the PMU state in the NMI handler · 82d71ed0
      Kan Liang authored
      The PMU is disabled in intel_pmu_handle_irq(), but cpuc->enabled is not updated
      accordingly.
      
      This is fine in current usage because no one checks it - but fix it
      for future code: for example, drain_pebs() will be modified to fix an
      auto-reload bug.
      
      Properly save/restore the old PMU state.
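
      A sketch of the save/restore (simplified from intel_pmu_handle_irq()):

        struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
        int pmu_enabled = cpuc->enabled;        /* save the old state */

        cpuc->enabled = 0;
        __intel_pmu_disable_all();

        /* ... handle counter overflows, drain PEBS ... */

        cpuc->enabled = pmu_enabled;            /* restore, don't assume */
        if (pmu_enabled)
                __intel_pmu_enable_all(0, true);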
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: kernel test robot <fengguang.wu@intel.com>
      Link: http://lkml.kernel.org/r/6f44ee84-56f8-79f1-559b-08e371eaeb78@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Fix large period handling on Broadwell CPUs · f605cfca
      Kan Liang authored
      Large fixed period values could be truncated on Broadwell, for example:
      
        perf record -e cycles -c 10000000000
      
      Here the fixed period is 0x2540BE400, but the period that finally gets
      applied is 0x540BE400, which is wrong.
      
      The reason is that x86_pmu::limit_period() uses a u32 parameter, so the
      high 32 bits of 'period' get truncated.
      
      This bug was introduced in:
      
        commit 294fe0f5 ("perf/x86/intel: Add INST_RETIRED.ALL workarounds")
      
      It's safe to use u64 instead of u32:
      
       - Although 'left' is s64, its value must be positive when
         limit_period() is called.
      
       - bdw_limit_period() only modifies the lowest 6 bits; it doesn't touch
         the upper 32 bits.
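
      A sketch of the widened callback (is_inst_retired_all() is a
      hypothetical stand-in for the event-match check):

        static u64 bdw_limit_period(struct perf_event *event, u64 left)
        {
                if (is_inst_retired_all(event)) {       /* assumed helper */
                        if (left < 128)
                                left = 128;
                        left &= ~0x3fULL;       /* ULL: keep the upper bits */
                }
                return left;
        }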
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 294fe0f5 ("perf/x86/intel: Add INST_RETIRED.ALL workarounds")
      Link: http://lkml.kernel.org/r/1519926894-3520-1-git-send-email-kan.liang@linux.intel.com
      [ Rewrote unacceptably bad changelog. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  9. 08 March 2018, 7 commits