1. 17 January 2020, 10 commits
  2. 15 January 2020, 8 commits
  3. 27 December 2019, 3 commits
    • CPX: x86/cpufeatures: Enumerate the new AVX512 BFLOAT16 instructions · 0743af97
      Committed by Fenghua Yu
      commit b302e4b176d00e1cbc80148c5d0aee36751f7480 upstream.
      
      AVX512 BFLOAT16 instructions support 16-bit BFLOAT16 floating-point
      format (BF16) for deep learning optimization.
      
      BF16 is a shortened version of the 32-bit single-precision floating-point
      format (FP32) and has several advantages over the 16-bit half-precision
      floating-point format (FP16). BF16 keeps FP32 accumulation after
      multiplication without loss of precision, offers more than enough
      range for deep learning training tasks, and doesn't need to handle
      hardware exceptions.
      
      AVX512 BFLOAT16 instructions are enumerated in CPUID.7.1:EAX[bit 5]
      AVX512_BF16.
      
      CPUID.7.1:EAX contains only feature bits. Reuse the currently empty
      word 12 as a pure features word to hold the feature bits including
      AVX512_BF16.
      
      Detailed information of the CPUID bit and AVX512 BFLOAT16 instructions
      can be found in the latest Intel Architecture Instruction Set Extensions
      and Future Features Programming Reference.
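      
      As a hedged illustration, a user-space check for this bit might look
      like the following (using the GCC/Clang <cpuid.h> helpers; the
      function name is ours, not the kernel's):
      
        #include <cpuid.h>
        #include <stdbool.h>
      
        static bool has_avx512_bf16(void)
        {
                unsigned int eax, ebx, ecx, edx;
      
                /* CPUID(7).EAX reports the highest supported subleaf,
                 * so validate subleaf 1 before querying it.
                 */
                if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) || eax < 1)
                        return false;
      
                /* AVX512_BF16 is enumerated in CPUID.7.1:EAX[bit 5]. */
                __get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx);
                return eax & (1u << 5);
        }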
      
       [ bp: Check CPUID(7) subleaf validity before accessing subleaf 1. ]
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: "Chang S. Bae" <chang.seok.bae@intel.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nadav Amit <namit@vmware.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: Radim Krcmar <rkrcmar@redhat.com>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: "Ravi V Shankar" <ravi.v.shankar@intel.com>
      Cc: Robert Hoo <robert.hu@linux.intel.com>
      Cc: "Sean J Christopherson" <sean.j.christopherson@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>
      Cc: x86 <x86@kernel.org>
      Link: https://lkml.kernel.org/r/1560794416-217638-3-git-send-email-fenghua.yu@intel.com
      Signed-off-by: Lin Wang <lin.x.wang@intel.com>
      Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Acked-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      0743af97
    • x86: uaccess: Inhibit speculation past access_ok() in user_access_begin() · 514f013a
      Committed by Will Deacon
      commit 6e693b3ffecb0b478c7050b44a4842854154f715 upstream.
      
      Commit 594cc251fdd0 ("make 'user_access_begin()' do 'access_ok()'")
      makes the access_ok() check part of the user_access_begin() preceding a
      series of 'unsafe' accesses.  This has the desirable effect of ensuring
      that all 'unsafe' accesses have been range-checked, without having to
      pick through all of the callsites to verify whether the appropriate
      checking has been made.
      
      However, the consolidated range check does not inhibit speculation, so
      it is still up to the caller to ensure that they are not susceptible to
      any speculative side-channel attacks for user addresses that ultimately
      fail the access_ok() check.
      
      This is an oversight, so use __uaccess_begin_nospec() to ensure that
      speculation is inhibited until the access_ok() check has passed.
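      
      In shape, the fixed helper looks roughly like this (paraphrased from
      the x86 uaccess code; details such as the access_ok() signature of
      the era are elided):
      
        static __always_inline bool user_access_begin(const void __user *ptr,
                                                      size_t len)
        {
                if (unlikely(!access_ok(ptr, len)))
                        return false;
                /* Speculation barrier: no "unsafe" access can be issued
                 * speculatively with a user address that failed, or would
                 * have failed, the access_ok() check.
                 */
                __uaccess_begin_nospec();
                return true;
        }
      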
      Reported-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com>
      Acked-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      514f013a
    • make 'user_access_begin()' do 'access_ok()' · 83460ef1
      Committed by Linus Torvalds
      commit 594cc251fdd0d231d342d88b2fdff4bc42fb0690 upstream.
      
      Originally, the rule used to be that you'd have to do access_ok()
      separately, and then user_access_begin() before actually doing the
      direct (optimized) user access.
      
      But experience has shown that people then decide not to do access_ok()
      at all, and instead rely on it being implied by other operations or
      similar.  Which makes it very hard to verify that the access has
      actually been range-checked.
      
      If you use the unsafe direct user accesses, hardware features (either
      SMAP - Supervisor Mode Access Protection - on x86, or PAN - Privileged
      Access Never - on ARM) do force you to use user_access_begin().  But
      nothing really forces the range check.
      
      By putting the range check into user_access_begin(), we actually force
      people to do the right thing (tm), and the range check will be visible
      near the actual accesses.  We have way too long a history of people
      trying to avoid them.
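      
      A hedged sketch of the resulting calling pattern (the function is a
      made-up example; the unsafe_*_user() API is the kernel's):
      
        static long read_user_flag(unsigned int __user *uflags,
                                   unsigned int *out)
        {
                long ret = -EFAULT;
      
                /* The range check now happens here, visibly, right next
                 * to the unsafe access it guards.
                 */
                if (!user_access_begin(uflags, sizeof(*uflags)))
                        return -EFAULT;
      
                unsafe_get_user(*out, uflags, efault);
                ret = 0;
        efault:
                user_access_end();
                return ret;
        }
      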
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      
      [ Shile: fix the following conflicts by adding dummy arguments ]
      Conflicts:
      	kernel/compat.c
      	kernel/exit.c
      Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com>
      83460ef1
  4. 01 December 2019, 1 commit
  5. 24 November 2019, 1 commit
    • x86/kexec: Correct KEXEC_BACKUP_SRC_END off-by-one error · 85f996c3
      Committed by Bjorn Helgaas
      [ Upstream commit 51fbf14f2528a8c6401290e37f1c893a2412f1d3 ]
      
      The only use of KEXEC_BACKUP_SRC_END is as an argument to
      walk_system_ram_res():
      
        int crash_load_segments(struct kimage *image)
        {
          ...
          walk_system_ram_res(KEXEC_BACKUP_SRC_START, KEXEC_BACKUP_SRC_END,
                              image, determine_backup_region);
      
      walk_system_ram_res() expects "start, end" arguments that are inclusive,
      i.e., the range to be walked includes both the start and end addresses.
      
      KEXEC_BACKUP_SRC_END was previously defined as (640 * 1024UL), which is the
      first address *past* the desired 0-640KB range.
      
      Define KEXEC_BACKUP_SRC_END as (640 * 1024UL - 1) so the KEXEC_BACKUP_SRC
      region is [0-0x9ffff], not [0-0xa0000].
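      
      The fix itself is the one-line change to the definition (before and
      after shown together for illustration):
      
        /* Before: exclusive end, one byte past the backup region. */
        #define KEXEC_BACKUP_SRC_END    (640 * 1024UL)          /* 0xa0000 */
      
        /* After: inclusive end, matching walk_system_ram_res(). */
        #define KEXEC_BACKUP_SRC_END    (640 * 1024UL - 1)      /* 0x9ffff */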
      
      Fixes: dd5f7260 ("kexec: support for kexec on panic using new system call")
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      CC: "H. Peter Anvin" <hpa@zytor.com>
      CC: Andrew Morton <akpm@linux-foundation.org>
      CC: Brijesh Singh <brijesh.singh@amd.com>
      CC: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      CC: Ingo Molnar <mingo@redhat.com>
      CC: Lianbo Jiang <lijiang@redhat.com>
      CC: Takashi Iwai <tiwai@suse.de>
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Tom Lendacky <thomas.lendacky@amd.com>
      CC: Vivek Goyal <vgoyal@redhat.com>
      CC: baiyaowei@cmss.chinamobile.com
      CC: bhe@redhat.com
      CC: dan.j.williams@intel.com
      CC: dyoung@redhat.com
      CC: kexec@lists.infradead.org
      Link: http://lkml.kernel.org/r/153805811578.1157.6948388946904655969.stgit@bhelgaas-glaptop.roam.corp.google.com
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      85f996c3
  6. 13 November 2019, 5 commits
    • kvm: x86: mmu: Recovery of shattered NX large pages · 46a4a014
      Committed by Junaid Shahid
      commit 1aa9b9572b10529c2e64e2b8f44025d86e124308 upstream.
      
      The page table pages corresponding to broken-down large pages are zapped in
      FIFO order, so that the large page can potentially be recovered, if it is
      no longer being used for execution.  This removes the performance penalty
      for walking deeper EPT page tables.
      
      By default, one large page will last about one hour once the guest
      reaches a steady state.
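      
      A hedged sketch of the recovery loop's shape (simplified; upstream
      runs this from a dedicated kthread and derives the zap count from a
      recovery-ratio module parameter):
      
        static void recover_nx_lpages(struct kvm *kvm, unsigned long to_zap)
        {
                struct kvm_mmu_page *sp;
                LIST_HEAD(invalid_list);
      
                while (to_zap-- &&
                       !list_empty(&kvm->arch.lpage_disallowed_mmu_pages)) {
                        /* FIFO: reclaim the page table that has been
                         * shattering its large page the longest.
                         */
                        sp = list_first_entry(&kvm->arch.lpage_disallowed_mmu_pages,
                                              struct kvm_mmu_page,
                                              lpage_disallowed_link);
                        kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
                }
                kvm_mmu_commit_zap_page(kvm, &invalid_list);
        }
      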
      Signed-off-by: Junaid Shahid <junaids@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      46a4a014
    • kvm: mmu: ITLB_MULTIHIT mitigation · 5219505f
      Committed by Paolo Bonzini
      commit b8e8c8303ff28c61046a4d0f6ea99aea609a7dc0 upstream.
      
      With some Intel processors, putting the same virtual address in the TLB
      as both a 4 KiB and 2 MiB page can confuse the instruction fetch unit
      and cause the processor to issue a machine check resulting in a CPU lockup.
      
      Unfortunately, when EPT page tables use huge pages, it is possible for a
      malicious guest to cause this situation.
      
      Add a knob to mark huge pages as non-executable. When the nx_huge_pages
      parameter is enabled (and we are using EPT), all huge pages are marked as
      NX. If the guest attempts to execute in one of those pages, the page is
      broken down into 4K pages, which are then marked executable.
      
      This is not an issue for shadow paging (except with nested EPT), because
      there the host is in control of TLB flushes and the problematic situation
      cannot happen.  With nested EPT the nested guest can again cause problems,
      so shadow and direct EPT are treated in the same way.
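      
      The gist of the mapping decision, as a hedged sketch with made-up
      helper names (the real logic is spread across the KVM MMU fault
      path):
      
        /* Refuse a huge executable mapping when the knob is on; the
         * exec fault is then satisfied with executable 4K pages.
         */
        static bool huge_exec_mapping_allowed(bool nx_huge_pages,
                                              bool exec_fault, int level)
        {
                if (nx_huge_pages && exec_fault && level > 1)
                        return false;
                return true;
        }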
      
      [ tglx: Fixup default to auto and massage wording a bit ]
      Originally-by: Junaid Shahid <junaids@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      5219505f
    • x86/bugs: Add ITLB_MULTIHIT bug infrastructure · f9aa6b73
      Committed by Vineela Tummalapalli
      commit db4d30fbb71b47e4ecb11c4efa5d8aad4b03dfae upstream.
      
      Some processors may incur a machine check error possibly resulting in an
      unrecoverable CPU lockup when an instruction fetch encounters a TLB
      multi-hit in the instruction TLB. This can occur when the page size is
      changed along with either the physical address or cache type. The relevant
      erratum can be found here:
      
         https://bugzilla.kernel.org/show_bug.cgi?id=205195
      
      There are other processors affected for which the erratum does not fully
      disclose the impact.
      
      This issue affects both bare-metal x86 page tables and EPT.
      
      It can be mitigated by either eliminating the use of large pages or by
      using careful TLB invalidations when changing the page size in the page
      tables.
      
      Just like Spectre, Meltdown, L1TF and MDS, a new bit has been allocated in
      MSR_IA32_ARCH_CAPABILITIES (PSCHANGE_MC_NO) and will be set on CPUs which
      are mitigated against this issue.
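      
      The enumeration check reduces to something like this (paraphrased
      from the shape of the cpu-bugs setup code; constants per the text
      above):
      
        u64 ia32_cap = 0;
      
        if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
                rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
      
        /* PSCHANGE_MC_NO set means no machine check on a page-size
         * change, i.e. the CPU is not affected by ITLB multi-hit.
         */
        if (!(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
                setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
      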
      Signed-off-by: Vineela Tummalapalli <vineela.tummalapalli@intel.com>
      Co-developed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
      Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      f9aa6b73
    • x86/speculation/taa: Add mitigation for TSX Async Abort · 6c58ea85
      Committed by Pawan Gupta
      commit 1b42f017415b46c317e71d41c34ec088417a1883 upstream.
      
      TSX Async Abort (TAA) is a side channel vulnerability of the internal
      buffers in some Intel processors, similar to Microarchitectural Data
      Sampling (MDS). In this case, certain loads may speculatively pass
      invalid data to dependent operations when an asynchronous abort
      condition is pending in a TSX transaction.
      
      This includes loads with no fault or assist condition. Such loads may
      speculatively expose stale data from the uarch data structures, as in
      MDS. Exposure is possible both same-thread and cross-thread. This
      issue affects all current processors that support TSX but do not have
      ARCH_CAP_TAA_NO (bit 8) set in MSR_IA32_ARCH_CAPABILITIES.
      
      On CPUs which have their IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0,
      CPUID.MD_CLEAR=1 and the MDS mitigation is clearing the CPU buffers
      using VERW or L1D_FLUSH, there is no additional mitigation needed for
      TAA. On affected CPUs with MDS_NO=1 this issue can be mitigated by
      disabling the Transactional Synchronization Extensions (TSX) feature.
      
      A new MSR, IA32_TSX_CTRL, available on future processors and on current
      processors after a microcode update, can be used to control the TSX
      feature. There are two bits in that MSR:
      
      * TSX_CTRL_RTM_DISABLE disables the TSX sub-feature Restricted
      Transactional Memory (RTM).
      
      * TSX_CTRL_CPUID_CLEAR clears the RTM enumeration in CPUID. The other
      TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
      disabled with updated microcode but still enumerated as present by
      CPUID(EAX=7).EBX{bit4}.
      
      The second mitigation approach is similar to MDS which is clearing the
      affected CPU buffers on return to user space and when entering a guest.
      Relevant microcode update is required for the mitigation to work.  More
      details on this approach can be found here:
      
        https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html
      
      The TSX feature can be controlled by the "tsx" command line parameter.
      If it is force-enabled then "Clear CPU buffers" (MDS mitigation) is
      deployed. The effective mitigation state can be read from sysfs.
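      
      Condensed to its core, the selection logic reads roughly like the
      sketch below (not the upstream code; the taa_mitigation states and
      the tsx_ctrl_supported() helper are named after the text above, and
      ia32_cap is the cached MSR_IA32_ARCH_CAPABILITIES value):
      
        if (!boot_cpu_has_bug(X86_BUG_TAA)) {
                taa_mitigation = TAA_MITIGATION_OFF;
        } else if (!(ia32_cap & ARCH_CAP_MDS_NO)) {
                /* MDS-affected part: the existing VERW buffer clearing
                 * on exit to user space / VM entry covers TAA as well.
                 */
                taa_mitigation = TAA_MITIGATION_VERW;
        } else if (tsx_ctrl_supported()) {
                /* MDS_NO=1: mitigate by disabling TSX via IA32_TSX_CTRL. */
                taa_mitigation = TAA_MITIGATION_TSX_DISABLED;
        }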
      
       [ bp:
         - massage + comments cleanup
         - s/TAA_MITIGATION_TSX_DISABLE/TAA_MITIGATION_TSX_DISABLED/g - Josh.
         - remove partial TAA mitigation in update_mds_branch_idle() - Josh.
         - s/tsx_async_abort_cmdline/tsx_async_abort_parse_cmdline/g
       ]
      Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      6c58ea85
    • x86/msr: Add the IA32_TSX_CTRL MSR · 4002d16a
      Committed by Pawan Gupta
      commit c2955f270a84762343000f103e0640d29c7a96f3 upstream.
      
      Transactional Synchronization Extensions (TSX) may be used on certain
      processors as part of a speculative side channel attack.  A microcode
      update for existing processors that are vulnerable to this attack will
      add a new MSR - IA32_TSX_CTRL to allow the system administrator the
      option to disable TSX as one of the possible mitigations.
      
      The CPUs which get this new MSR after a microcode upgrade are the ones
      which do not set MSR_IA32_ARCH_CAPABILITIES.MDS_NO (bit 5) because those
      CPUs have CPUID.MD_CLEAR, i.e., the VERW implementation which clears all
      CPU buffers takes care of the TAA case as well.
      
        [ Note that future processors that are not vulnerable will also
          support the IA32_TSX_CTRL MSR. ]
      
      Add defines for the new IA32_TSX_CTRL MSR and its bits.
      
      TSX has two sub-features:
      
      1. Restricted Transactional Memory (RTM) is an explicitly-used feature
         where new instructions begin and end TSX transactions.
      2. Hardware Lock Elision (HLE) is implicitly used when certain kinds of
         "old" style locks are used by software.
      
      Bit 7 of the IA32_ARCH_CAPABILITIES indicates the presence of the
      IA32_TSX_CTRL MSR.
      
      There are two control bits in the IA32_TSX_CTRL MSR:
      
        Bit 0: When set, it disables the Restricted Transactional Memory
               (RTM) sub-feature of TSX (forcing all transactions to abort
               at the XBEGIN instruction).
      
        Bit 1: When set, it disables the enumeration of the RTM and HLE
               features (i.e., it makes CPUID(EAX=7).EBX{bit4} and
               CPUID(EAX=7).EBX{bit11} read as 0).
      
      The other TSX sub-feature, Hardware Lock Elision (HLE), is
      unconditionally disabled by the new microcode but still enumerated
      as present by CPUID(EAX=7).EBX{bit4}, unless disabled by
      IA32_TSX_CTRL_MSR[1] - TSX_CTRL_CPUID_CLEAR.
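      
      The additions to arch/x86/include/asm/msr-index.h boil down to the
      following (the MSR index 0x122 is from the SDM; bit names as used
      in the text above):
      
        #define ARCH_CAP_TSX_CTRL_MSR   BIT(7)  /* MSR for TSX control is available. */
      
        #define MSR_IA32_TSX_CTRL       0x00000122
        #define TSX_CTRL_RTM_DISABLE    BIT(0)  /* Disable RTM feature */
        #define TSX_CTRL_CPUID_CLEAR    BIT(1)  /* Disable TSX enumeration */
      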
      Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
      Reviewed-by: Mark Gross <mgross@linux.intel.com>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      4002d16a
  7. 06 November 2019, 2 commits
  8. 18 October 2019, 1 commit
  9. 05 October 2019, 1 commit
  10. 21 September 2019, 2 commits
    • x86/uaccess: Don't leak the AC flags into __get_user() argument evaluation · 37135777
      Committed by Peter Zijlstra
      [ Upstream commit 9b8bd476e78e89c9ea26c3b435ad0201c3d7dbf5 ]
      
      Identical to __put_user(); the __get_user() argument evaluation will also
      leak UBSAN crud into the __uaccess_begin() / __uaccess_end() region.
      While uncommon, this was observed to happen for:
      
        drivers/xen/gntdev.c: if (__get_user(old_status, batch->status[i]))
      
      where UBSAN added array bound checking.
      
      This complements commit:
      
        6ae865615fc4 ("x86/uaccess: Dont leak the AC flag into __put_user() argument evaluation")
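      
      The shape of the fix, paraphrased as a sketch: evaluate the pointer
      argument, and any instrumentation it drags in, before opening the
      uaccess window.  __fetch_user_value() is a made-up stand-in for the
      real size-dispatching internals:
      
        #define __get_user(x, ptr)                                    \
        ({                                                            \
                int __gu_err;                                         \
                /* Argument evaluation, including UBSAN's checks,     \
                 * happens here, while AC is still clear.             \
                 */                                                   \
                __typeof__(ptr) __gu_ptr = (ptr);                     \
                                                                      \
                __uaccess_begin();                                    \
                __gu_err = __fetch_user_value(&(x), __gu_ptr);        \
                __uaccess_end();                                      \
                __gu_err;                                             \
        })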
      
      Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
      Reported-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: broonie@kernel.org
      Cc: sfr@canb.auug.org.au
      Cc: akpm@linux-foundation.org
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: mhocko@suse.cz
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Link: https://lkml.kernel.org/r/20190829082445.GM2369@hirez.programming.kicks-ass.net
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      37135777
    • perf/x86/amd/ibs: Fix sample bias for dispatched micro-ops · 7ec11cad
      Committed by Kim Phillips
      [ Upstream commit 0f4cd769c410e2285a4e9873a684d90423f03090 ]
      
      When counting dispatched micro-ops with cnt_ctl=1, in order to prevent
      sample bias, IBS hardware preloads the least significant 7 bits of
      current count (IbsOpCurCnt) with random values, such that, after the
      interrupt is handled and counting resumes, the next sample taken
      will be slightly perturbed.
      
      The current count bitfield is in the IBS execution control h/w register,
      alongside the maximum count field.
      
      Currently, the IBS driver writes that register with the maximum count,
      leaving zeroes to fill the current count field, thereby overwriting
      the random bits the hardware preloaded for itself.
      
      Fix the driver to actually retain and carry those random bits from the
      read of the IBS control register, through to its write, instead of
      overwriting the lower current count bits with zeroes.
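      
      A hedged sketch of the corrected write (the mask matches the
      IBS_OP_CUR_CNT_RAND define this patch introduces; the function is a
      made-up stand-in for the restart path in the IBS NMI handler):
      
        /* The 7 hardware-randomized bits of IbsOpCurCnt, bits [38:32]. */
        #define IBS_OP_CUR_CNT_RAND     (0x0007FULL << 32)
      
        static void ibs_restart(struct hw_perf_event *hwc, u64 config)
        {
                u64 cur;
      
                /* Carry the preloaded random low bits of the current
                 * count over into the write, instead of zeroing them.
                 */
                rdmsrl(hwc->config_base, cur);
                config |= cur & IBS_OP_CUR_CNT_RAND;
                wrmsrl(hwc->config_base, config);
        }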
      
      Tested with:
      
      perf record -c 100001 -e ibs_op/cnt_ctl=1/pp -a -C 0 taskset -c 0 <workload>
      
      'perf annotate' output before:
      
       15.70  65:   addsd     %xmm0,%xmm1
       17.30        add       $0x1,%rax
       15.88        cmp       %rdx,%rax
                    je        82
       17.32  72:   test      $0x1,%al
                    jne       7c
        7.52        movapd    %xmm1,%xmm0
        5.90        jmp       65
        8.23  7c:   sqrtsd    %xmm1,%xmm0
       12.15        jmp       65
      
      'perf annotate' output after:
      
       16.63  65:   addsd     %xmm0,%xmm1
       16.82        add       $0x1,%rax
       16.81        cmp       %rdx,%rax
                    je        82
       16.69  72:   test      $0x1,%al
                    jne       7c
        8.30        movapd    %xmm1,%xmm0
        8.13        jmp       65
        8.24  7c:   sqrtsd    %xmm1,%xmm0
        8.39        jmp       65
      
      Tested on Family 15h and 17h machines.
      
      Machines prior to family 10h Rev. C don't have the RDWROPCNT capability,
      and have the IbsOpCurCnt bitfield reserved, so this patch shouldn't
      affect their operation.
      
      It is unknown why commit db98c5fa ("perf/x86: Implement 64-bit
      counter support for IBS") ignored the lower 4 bits of the IbsOpCurCnt
      field; the number of preloaded random bits has always been 7, AFAICT.
      Signed-off-by: Kim Phillips <kim.phillips@amd.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: "Arnaldo Carvalho de Melo" <acme@kernel.org>
      Cc: <x86@kernel.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "Borislav Petkov" <bp@alien8.de>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: "Namhyung Kim" <namhyung@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Link: https://lkml.kernel.org/r/20190826195730.30614-1-kim.phillips@amd.com
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      7ec11cad
  11. 16 September 2019, 2 commits
  12. 10 September 2019, 1 commit
    • x86/boot: Preserve boot_params.secure_boot from sanitizing · ee271ead
      Committed by John S. Gruber
      commit 29d9a0b50736768f042752070e5cdf4e4d4c00df upstream.
      
      Commit
      
        a90118c445cc ("x86/boot: Save fields explicitly, zero out everything else")
      
      now zeroes the secure boot setting information (enabled/disabled/...)
      passed by the boot loader or by the kernel's EFI handover mechanism.
      
      The problem manifests itself with signed kernels using the EFI handoff
      protocol with GRUB: the kernel loses the information about whether secure
      boot is enabled in the firmware, i.e., the log message "Secure boot
      enabled" becomes "Secure boot could not be determined".
      
      efi_main() in arch/x86/boot/compressed/eboot.c sets this field early, but
      it is subsequently zeroed by the commit referenced above.
      
      Include boot_params.secure_boot in the preserve field list.
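      
      The fix is a one-line addition to the preserve list in
      arch/x86/include/asm/bootparam_utils.h (neighboring entries
      abridged; exact placement may differ):
      
                BOOT_PARAM_PRESERVE(eddbuf_entries),
                BOOT_PARAM_PRESERVE(edd_mbr_sig_buf_entries),
                BOOT_PARAM_PRESERVE(edd_mbr_sig_buffer),
                BOOT_PARAM_PRESERVE(secure_boot),       /* the fix */
                BOOT_PARAM_PRESERVE(hdr),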
      
       [ bp: restructure commit message and massage. ]
      
      Fixes: a90118c445cc ("x86/boot: Save fields explicitly, zero out everything else")
      Signed-off-by: John S. Gruber <JohnSGruber@gmail.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: John Hubbard <jhubbard@nvidia.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: stable <stable@vger.kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/CAPotdmSPExAuQcy9iAHqX3js_fc4mMLQOTr5RBGvizyCOPcTQQ@mail.gmail.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      ee271ead
  13. 29 August 2019, 3 commits