1. 02 Sep, 2018 1 commit
  2. 31 Aug, 2018 1 commit
  3. 30 Aug, 2018 2 commits
  4. 27 Aug, 2018 2 commits
  5. 24 Aug, 2018 2 commits
  6. 21 Aug, 2018 2 commits
    • x86/memory_failure: Introduce {set, clear}_mce_nospec() · 284ce401
      Dan Williams authored
      Currently memory_failure() returns zero if the error was handled. On
      that result mce_unmap_kpfn() is called to zap the page out of the kernel
      linear mapping to prevent speculative fetches of potentially poisoned
      memory. However, in the case of dax mapped devmap pages the page may be
      in active permanent use by the device driver, so it cannot be unmapped
      from the kernel.
      
      Instead of marking the page not present, marking the page UC should
      be sufficient for preventing poison from being pre-fetched into the
      cache. Convert mce_unmap_kpfn() to set_mce_nospec(), remapping the page as
      UC, to hide it from speculative accesses.
      
      Given that persistent memory errors can be cleared by the driver,
      include a facility to restore the page to cacheable operation,
      clear_mce_nospec().
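
      A minimal sketch of the idea, assuming the existing set_memory_uc()/
      set_memory_wb() helpers are usable here (the real implementation may
      address the poisoned page differently to avoid speculating on it):

        /* sketch: remap a poisoned pfn as uncacheable instead of unmapping it */
        static int set_mce_nospec(unsigned long pfn)
        {
                unsigned long addr = (unsigned long)pfn_to_kaddr(pfn);

                return set_memory_uc(addr, 1);
        }

        /* sketch: restore cacheability once the driver has cleared the poison */
        static int clear_mce_nospec(unsigned long pfn)
        {
                return set_memory_wb((unsigned long)pfn_to_kaddr(pfn), 1);
        }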
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: <linux-edac@vger.kernel.org>
      Cc: <x86@kernel.org>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Dave Jiang <dave.jiang@intel.com>
      284ce401
    • x86/process: Re-export start_thread() · dc76803e
      Rian Hunter authored
      The consolidation of the start_thread() functions removed the export
      unintentionally. This breaks binfmt handlers built as a module.
      
      Add it back.
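
      Presumably the fix amounts to restoring the export next to the 64-bit
      definition, along these lines (sketch):

        void start_thread(struct pt_regs *regs, unsigned long new_ip,
                          unsigned long new_sp)
        {
                start_thread_common(regs, new_ip, new_sp,
                                    __USER_CS, __USER_DS, 0);
        }
        EXPORT_SYMBOL_GPL(start_thread);        /* the export that went missing */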
      
      Fixes: e634d8fc ("x86-64: merge the standard and compat start_thread() functions")
      Signed-off-by: Rian Hunter <rian@alum.mit.edu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Dmitry Safonov <dima@arista.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180819230854.7275-1-rian@alum.mit.edu
      dc76803e
  7. 17 Aug, 2018 1 commit
  8. 16 Aug, 2018 1 commit
  9. 15 Aug, 2018 2 commits
  10. 10 Aug, 2018 1 commit
  11. 08 Aug, 2018 1 commit
    • x86/paravirt: Fix spectre-v2 mitigations for paravirt guests · 5800dc5c
      Peter Zijlstra authored
      Nadav reported that on guests we're failing to rewrite the indirect
      calls to CALLEE_SAVE paravirt functions. In particular the
      pv_queued_spin_unlock() call is left unpatched and that is all over the
      place. This obviously wrecks Spectre-v2 mitigation (for paravirt
      guests) which relies on not actually having indirect calls around.
      
      The reason is an incorrect clobber test in paravirt_patch_call(); this
      function rewrites an indirect call with a direct call to the _SAME_
      function, so there is no possible way the clobbers can differ.
      
      Therefore remove this clobber check. Also put WARNs on the other patch
      failure case (not enough room for the instruction) which I've not seen
      trigger in my (limited) testing.
      
      Three live kernel image disassemblies for lock_sock_nested (as a small
      function that illustrates the problem nicely). PRE is the current
      situation for guests, POST is with this patch applied and NATIVE is with
      or without the patch for !guests.
      
      PRE:
      
      (gdb) disassemble lock_sock_nested
      Dump of assembler code for function lock_sock_nested:
         0xffffffff817be970 <+0>:     push   %rbp
         0xffffffff817be971 <+1>:     mov    %rdi,%rbp
         0xffffffff817be974 <+4>:     push   %rbx
         0xffffffff817be975 <+5>:     lea    0x88(%rbp),%rbx
         0xffffffff817be97c <+12>:    callq  0xffffffff819f7160 <_cond_resched>
         0xffffffff817be981 <+17>:    mov    %rbx,%rdi
         0xffffffff817be984 <+20>:    callq  0xffffffff819fbb00 <_raw_spin_lock_bh>
         0xffffffff817be989 <+25>:    mov    0x8c(%rbp),%eax
         0xffffffff817be98f <+31>:    test   %eax,%eax
         0xffffffff817be991 <+33>:    jne    0xffffffff817be9ba <lock_sock_nested+74>
         0xffffffff817be993 <+35>:    movl   $0x1,0x8c(%rbp)
         0xffffffff817be99d <+45>:    mov    %rbx,%rdi
         0xffffffff817be9a0 <+48>:    callq  *0xffffffff822299e8
         0xffffffff817be9a7 <+55>:    pop    %rbx
         0xffffffff817be9a8 <+56>:    pop    %rbp
         0xffffffff817be9a9 <+57>:    mov    $0x200,%esi
         0xffffffff817be9ae <+62>:    mov    $0xffffffff817be993,%rdi
         0xffffffff817be9b5 <+69>:    jmpq   0xffffffff81063ae0 <__local_bh_enable_ip>
         0xffffffff817be9ba <+74>:    mov    %rbp,%rdi
         0xffffffff817be9bd <+77>:    callq  0xffffffff817be8c0 <__lock_sock>
         0xffffffff817be9c2 <+82>:    jmp    0xffffffff817be993 <lock_sock_nested+35>
      End of assembler dump.
      
      POST:
      
      (gdb) disassemble lock_sock_nested
      Dump of assembler code for function lock_sock_nested:
         0xffffffff817be970 <+0>:     push   %rbp
         0xffffffff817be971 <+1>:     mov    %rdi,%rbp
         0xffffffff817be974 <+4>:     push   %rbx
         0xffffffff817be975 <+5>:     lea    0x88(%rbp),%rbx
         0xffffffff817be97c <+12>:    callq  0xffffffff819f7160 <_cond_resched>
         0xffffffff817be981 <+17>:    mov    %rbx,%rdi
         0xffffffff817be984 <+20>:    callq  0xffffffff819fbb00 <_raw_spin_lock_bh>
         0xffffffff817be989 <+25>:    mov    0x8c(%rbp),%eax
         0xffffffff817be98f <+31>:    test   %eax,%eax
         0xffffffff817be991 <+33>:    jne    0xffffffff817be9ba <lock_sock_nested+74>
         0xffffffff817be993 <+35>:    movl   $0x1,0x8c(%rbp)
         0xffffffff817be99d <+45>:    mov    %rbx,%rdi
         0xffffffff817be9a0 <+48>:    callq  0xffffffff810a0c20 <__raw_callee_save___pv_queued_spin_unlock>
         0xffffffff817be9a5 <+53>:    xchg   %ax,%ax
         0xffffffff817be9a7 <+55>:    pop    %rbx
         0xffffffff817be9a8 <+56>:    pop    %rbp
         0xffffffff817be9a9 <+57>:    mov    $0x200,%esi
         0xffffffff817be9ae <+62>:    mov    $0xffffffff817be993,%rdi
         0xffffffff817be9b5 <+69>:    jmpq   0xffffffff81063aa0 <__local_bh_enable_ip>
         0xffffffff817be9ba <+74>:    mov    %rbp,%rdi
         0xffffffff817be9bd <+77>:    callq  0xffffffff817be8c0 <__lock_sock>
         0xffffffff817be9c2 <+82>:    jmp    0xffffffff817be993 <lock_sock_nested+35>
      End of assembler dump.
      
      NATIVE:
      
      (gdb) disassemble lock_sock_nested
      Dump of assembler code for function lock_sock_nested:
         0xffffffff817be970 <+0>:     push   %rbp
         0xffffffff817be971 <+1>:     mov    %rdi,%rbp
         0xffffffff817be974 <+4>:     push   %rbx
         0xffffffff817be975 <+5>:     lea    0x88(%rbp),%rbx
         0xffffffff817be97c <+12>:    callq  0xffffffff819f7160 <_cond_resched>
         0xffffffff817be981 <+17>:    mov    %rbx,%rdi
         0xffffffff817be984 <+20>:    callq  0xffffffff819fbb00 <_raw_spin_lock_bh>
         0xffffffff817be989 <+25>:    mov    0x8c(%rbp),%eax
         0xffffffff817be98f <+31>:    test   %eax,%eax
         0xffffffff817be991 <+33>:    jne    0xffffffff817be9ba <lock_sock_nested+74>
         0xffffffff817be993 <+35>:    movl   $0x1,0x8c(%rbp)
         0xffffffff817be99d <+45>:    mov    %rbx,%rdi
         0xffffffff817be9a0 <+48>:    movb   $0x0,(%rdi)
         0xffffffff817be9a3 <+51>:    nopl   0x0(%rax)
         0xffffffff817be9a7 <+55>:    pop    %rbx
         0xffffffff817be9a8 <+56>:    pop    %rbp
         0xffffffff817be9a9 <+57>:    mov    $0x200,%esi
         0xffffffff817be9ae <+62>:    mov    $0xffffffff817be993,%rdi
         0xffffffff817be9b5 <+69>:    jmpq   0xffffffff81063ae0 <__local_bh_enable_ip>
         0xffffffff817be9ba <+74>:    mov    %rbp,%rdi
         0xffffffff817be9bd <+77>:    callq  0xffffffff817be8c0 <__lock_sock>
         0xffffffff817be9c2 <+82>:    jmp    0xffffffff817be993 <lock_sock_nested+35>
      End of assembler dump.
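
      With the clobber comparison gone, paravirt_patch_call() reduces to
      roughly the following simplified sketch (the real code uses its own
      struct for the call instruction):

        unsigned paravirt_patch_call(void *insnbuf, const void *target,
                                     unsigned long addr, unsigned len)
        {
                struct { u8 opcode; s32 delta; } __packed *b = insnbuf;

                if (len < 5) {
                        /* not enough room for a 5-byte direct CALL */
                        WARN_ONCE(1, "failed to patch call site at %ps\n",
                                  (void *)addr);
                        return len;
                }

                b->opcode = 0xe8;                       /* CALL rel32 */
                b->delta  = (unsigned long)target - (addr + 5);
                return 5;
        }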
      
      
      Fixes: 63f70270 ("[PATCH] i386: PARAVIRT: add common patching machinery")
      Fixes: 3010a066 ("x86/paravirt, objtool: Annotate indirect calls")
      Reported-by: Nadav Amit <namit@vmware.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: stable@vger.kernel.org
      5800dc5c
  12. 07 Aug, 2018 2 commits
    • cpu/hotplug: Fix SMT supported evaluation · bc2d8d26
      Thomas Gleixner authored
      Josh reported that the late SMT evaluation in cpu_smt_state_init() sets
      cpu_smt_control to CPU_SMT_NOT_SUPPORTED in case that 'nosmt' was supplied
      on the kernel command line as it cannot differentiate between SMT disabled
      by BIOS and SMT soft disable via 'nosmt'. That wrecks the state and
      makes the sysfs interface unusable.
      
      Rework this so that during bringup of the non boot CPUs the availability of
      SMT is determined in cpu_smt_allowed(). If a newly booted CPU is not a
      'primary' thread then set the local cpu_smt_available marker and evaluate
      this explicitly right after the initial SMP bringup has finished.
      
      SMT evaluation on x86 is a trainwreck as the firmware has all the
      information _before_ booting the kernel, but there is no interface to query
      it.
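
      In simplified form the rework described above could look like this
      (a sketch; the real code carries additional hotplug details):

        static bool cpu_smt_available;

        /* called while bringing up each non-boot CPU */
        static bool cpu_smt_allowed(unsigned int cpu)
        {
                if (topology_is_primary_thread(cpu))
                        return true;

                /* A non-primary sibling proves the hardware has SMT. */
                cpu_smt_available = true;

                return cpu_smt_control == CPU_SMT_ENABLED;
        }

        /* evaluated once, right after the initial SMP bringup */
        void __init cpu_smt_check_topology(void)
        {
                if (!cpu_smt_available)
                        cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
        }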
      
      Fixes: 73d5e2b4 ("cpu/hotplug: detect SMT disabled by BIOS")
      Reported-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      bc2d8d26
    • xen/pv: Call get_cpu_address_sizes to set x86_virt/phys_bits · 405c018a
      M. Vefa Bicakci authored
      Commit d94a155c ("x86/cpu: Prevent cpuinfo_x86::x86_phys_bits
      adjustment corruption") has moved the query and calculation of the
      x86_virt_bits and x86_phys_bits fields of the cpuinfo_x86 struct
      from the get_cpu_cap function to a new function named
      get_cpu_address_sizes.
      
      One of the call sites related to Xen PV VMs was unfortunately missed
      in the aforementioned commit. This prevents successful boot-up of
      kernel versions 4.17 and up in Xen PV VMs if CONFIG_DEBUG_VIRTUAL
      is enabled, due to the following code path:
      
        enlighten_pv.c::xen_start_kernel
          mmu_pv.c::xen_reserve_special_pages
            page.h::__pa
              physaddr.c::__phys_addr
                physaddr.h::phys_addr_valid
      
      phys_addr_valid uses boot_cpu_data.x86_phys_bits to validate physical
      addresses. boot_cpu_data.x86_phys_bits is no longer populated before
      the call to xen_reserve_special_pages due to the aforementioned commit
      though, so the validation performed by phys_addr_valid fails, which
      causes __phys_addr to trigger a BUG, preventing boot-up.
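
      The fix presumably boils down to adding the missed call in the Xen PV
      boot path, roughly:

        /* xen_start_kernel() (sketch) */
        get_cpu_cap(&boot_cpu_data);
        get_cpu_address_sizes(&boot_cpu_data);  /* populate x86_virt/phys_bits
                                                   before the first __pa() user */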
      Signed-off-by: M. Vefa Bicakci <m.v.b@runbox.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: xen-devel@lists.xenproject.org
      Cc: x86@kernel.org
      Cc: stable@vger.kernel.org # for v4.17 and up
      Fixes: d94a155c ("x86/cpu: Prevent cpuinfo_x86::x86_phys_bits adjustment corruption")
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      405c018a
  13. 06 Aug, 2018 3 commits
  14. 05 Aug, 2018 4 commits
    • x86/speculation: Use ARCH_CAPABILITIES to skip L1D flush on vmentry · 8e0b2b91
      Paolo Bonzini authored
      Bit 3 of ARCH_CAPABILITIES tells a hypervisor that L1D flush on vmentry is
      not needed.  Add a new value to enum vmx_l1d_flush_state, which is used
      either if there is no L1TF bug at all, or if bit 3 is set in ARCH_CAPABILITIES.
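
      The check itself is a straightforward MSR read; a sketch:

        if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES)) {
                u64 msr;

                rdmsrl(MSR_IA32_ARCH_CAPABILITIES, msr);
                if (msr & ARCH_CAP_SKIP_VMENTRY_L1DFLUSH) {     /* bit 3 */
                        l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_NOT_REQUIRED;
                        return 0;
                }
        }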
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      8e0b2b91
    • x86/speculation: Simplify sysfs report of VMX L1TF vulnerability · ea156d19
      Paolo Bonzini authored
      Three changes to the content of the sysfs file:
      
       - If EPT is disabled, L1TF cannot be exploited even across threads on the
         same core, and SMT is irrelevant.
      
       - If mitigation is completely disabled, and SMT is enabled, print "vulnerable"
         instead of "vulnerable, SMT vulnerable"
      
       - Reorder the two parts so that the main vulnerability state comes first
         and the detail on SMT is second.
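
      A sketch of the resulting formatting logic (string table and message
      names are illustrative):

        static ssize_t l1tf_show_state(char *buf)
        {
                if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED ||
                    (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER &&
                     cpu_smt_control == CPU_SMT_ENABLED))
                        /* main state only; the SMT detail adds nothing here */
                        return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG,
                                       l1tf_vmx_states[l1tf_vmx_mitigation]);

                /* main vulnerability state first, SMT detail second */
                return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG,
                               l1tf_vmx_states[l1tf_vmx_mitigation],
                               cpu_smt_control == CPU_SMT_ENABLED ?
                                       "vulnerable" : "disabled");
        }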
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      ea156d19
    • x86/irq: Let interrupt handlers set kvm_cpu_l1tf_flush_l1d · ffcba43f
      Nicolai Stange authored
      The last missing piece to having vmx_l1d_flush() take interrupts after
      VMEXIT into account is to set the kvm_cpu_l1tf_flush_l1d per-cpu flag on
      irq entry.
      
      Issue calls to kvm_set_cpu_l1tf_flush_l1d() from entering_irq(),
      ipi_entering_ack_irq(), smp_reschedule_interrupt() and
      uv_bau_message_interrupt().
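
      For entering_irq() this is a one-line addition; a sketch:

        static inline void entering_irq(void)
        {
                irq_enter();
                kvm_set_cpu_l1tf_flush_l1d();   /* note the interrupt for vmx_l1d_flush() */
        }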
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Nicolai Stange <nstange@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      ffcba43f
    • x86: Don't include linux/irq.h from asm/hardirq.h · 447ae316
      Nicolai Stange authored
      The next patch in this series will have to make the definition of
      irq_cpustat_t available to entering_irq().
      
      Inclusion of asm/hardirq.h into asm/apic.h would cause circular header
      dependencies like
      
        asm/smp.h
          asm/apic.h
            asm/hardirq.h
              linux/irq.h
                linux/topology.h
                  linux/smp.h
                    asm/smp.h
      
      or
      
        linux/gfp.h
          linux/mmzone.h
            asm/mmzone.h
              asm/mmzone_64.h
                asm/smp.h
                  asm/apic.h
                    asm/hardirq.h
                      linux/irq.h
                        linux/irqdesc.h
                          linux/kobject.h
                            linux/sysfs.h
                              linux/kernfs.h
                                linux/idr.h
                                  linux/gfp.h
      
      and others.
      
      This causes compilation errors because of the header guards becoming
      effective in the second inclusion: symbols/macros that had been defined
      before wouldn't be available to intermediate headers in the #include chain
      anymore.
      
      A possible workaround would be to move the definition of irq_cpustat_t
      into its own header and include that from both, asm/hardirq.h and
      asm/apic.h.
      
      However, this wouldn't solve the real problem, namely asm/hardirq.h
      unnecessarily pulling in all the linux/irq.h cruft: nothing in
      asm/hardirq.h itself requires it. Also, note that there are some other
      archs, like e.g. arm64, which don't have that #include in their
      asm/hardirq.h.
      
      Remove the linux/irq.h #include from x86' asm/hardirq.h.
      
      Fix resulting compilation errors by adding appropriate #includes to *.c
      files as needed.
      
      Note that some of these *.c files could be cleaned up a bit with respect
      to their set of #includes, but that is better done in separate patches, if
      at all.
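
      Schematically, the change looks like this (sketch):

        /* arch/x86/include/asm/hardirq.h after the change */
        #include <linux/threads.h>
        /* #include <linux/irq.h> -- removed; nothing in this header needs it */

        /* a .c file that silently relied on the indirect include now does: */
        #include <linux/irq.h>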
      Signed-off-by: Nicolai Stange <nstange@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      447ae316
  15. 03 Aug, 2018 3 commits
    • x86/intel_rdt: Disable PMU access · 4a7a54a5
      Thomas Gleixner authored
      Peter is objecting to the direct PMU access in RDT. Right now the PMU usage
      is broken anyway as it is not coordinated with perf.
      
      Until this discussion is settled, disable the PMU mechanics by simply
      rejecting the type '2' measurement in the resctrl file.
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Reinette Chatre <reinette.chatre@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: fenghua.yu@intel.com
      Cc: tony.luck@intel.com
      Cc: vikas.shivappa@linux.intel.com
      CC: gavin.hindman@intel.com
      Cc: jithu.joseph@intel.com
      Cc: hpa@zytor.com
      4a7a54a5
    • x86/speculation: Support Enhanced IBRS on future CPUs · 706d5168
      Sai Praneeth authored
      Future Intel processors will support "Enhanced IBRS" which is an "always
      on" mode i.e. IBRS bit in SPEC_CTRL MSR is enabled once and never
      disabled.
      
      From the specification [1]:
      
       "With enhanced IBRS, the predicted targets of indirect branches
        executed cannot be controlled by software that was executed in a less
        privileged predictor mode or on another logical processor. As a
        result, software operating on a processor with enhanced IBRS need not
        use WRMSR to set IA32_SPEC_CTRL.IBRS after every transition to a more
        privileged predictor mode. Software can isolate predictor modes
        effectively simply by setting the bit once. Software need not disable
        enhanced IBRS prior to entering a sleep state such as MWAIT or HLT."
      
      If Enhanced IBRS is supported by the processor then use it as the
      preferred spectre v2 mitigation mechanism instead of Retpoline. Intel's
      Retpoline white paper [2] states:
      
       "Retpoline is known to be an effective branch target injection (Spectre
        variant 2) mitigation on Intel processors belonging to family 6
        (enumerated by the CPUID instruction) that do not have support for
        enhanced IBRS. On processors that support enhanced IBRS, it should be
        used for mitigation instead of retpoline."
      
      The reason why Enhanced IBRS is the recommended mitigation on processors
      which support it is that these processors also support CET which
      provides a defense against ROP attacks. Retpoline is very similar to ROP
      techniques and might trigger false positives in the CET defense.
      
      If Enhanced IBRS is selected as the mitigation technique for spectre v2,
      the IBRS bit in SPEC_CTRL MSR is set once at boot time and never
      cleared. The kernel also has to make sure that the IBRS bit remains set after
      VMEXIT because the guest might have cleared the bit. This is already
      covered by the existing x86_spec_ctrl_set_guest() and
      x86_spec_ctrl_restore_host() speculation control functions.
      
      Enhanced IBRS still requires IBPB for full mitigation.
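
      In the mitigation selection code this presumably reduces to something
      like the following sketch:

        if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
                mode = SPECTRE_V2_IBRS_ENHANCED;

                /* Set IBRS once at boot; it is never cleared afterwards. */
                x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
                wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
        }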
      
      [1] Speculative-Execution-Side-Channel-Mitigations.pdf
      [2] Retpoline-A-Branch-Target-Injection-Mitigation.pdf
      Both documents are available at:
      https://bugzilla.kernel.org/show_bug.cgi?id=199511
      Originally-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim C Chen <tim.c.chen@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Link: https://lkml.kernel.org/r/1533148945-24095-1-git-send-email-sai.praneeth.prakhya@intel.com
      706d5168
    • x86/cpufeatures: Add EPT_AD feature bit · 301d328a
      Peter Feiner authored
      Some Intel processors have an EPT feature whereby the accessed & dirty bits
      in EPT entries can be updated by HW. MSR IA32_VMX_EPT_VPID_CAP exposes the
      presence of this capability.
      
      There is no point in trying to use that new feature bit in the VMX code as
      VMX needs to read the MSR anyway to access other bits, but having the
      feature bit for EPT_AD in place helps virtualization management as it
      exposes "ept_ad" in /proc/cpuinfo/$proc/flags if the feature is present.
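
      A sketch of the detection behind that flag (bit position per the VMX
      capability MSR layout; where exactly this lands in the cpu code is a
      detail):

        u64 ept_vpid_cap;

        rdmsrl(MSR_IA32_VMX_EPT_VPID_CAP, ept_vpid_cap);
        if (ept_vpid_cap & VMX_EPT_AD_BIT)      /* A/D bits updated by HW */
                set_cpu_cap(c, X86_FEATURE_EPT_AD);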
      
      [ tglx: Amended changelog ]
      Signed-off-by: Peter Feiner <pfeiner@google.com>
      Signed-off-by: Peter Shier <pshier@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Link: https://lkml.kernel.org/r/20180801180657.138051-1-pshier@google.com
      301d328a
  16. 02 Aug, 2018 1 commit
  17. 31 Jul, 2018 6 commits
  18. 30 Jul, 2018 1 commit
    • x86/kexec: Allocate 8k PGDs for PTI · ca38dc8f
      Joerg Roedel authored
      Fuzzing the PTI-x86-32 code with trinity showed unhandled
      kernel paging request oops-messages that looked a lot like
      silent data corruption.
      
      Lots of debugging and testing led to the 32-bit kexec code,
      which is still allocating 4k PGDs when PTI is enabled. But
      since it uses native_set_pud() to build the page-table, it
      will inevitably call into __pti_set_user_pgtbl(), which
      writes beyond the allocated 4k page.
      
      Use PGD_ALLOCATION_ORDER to allocate PGDs in the kexec code
      to fix the issue.
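
      That is, roughly:

        /* was: get_zeroed_page(GFP_KERNEL), i.e. a single 4k page */
        image->arch.pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
                                                    PGD_ALLOCATION_ORDER);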
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: David H. Gutteridge <dhgutteridge@sympatico.ca>
      Cc: "H . Peter Anvin" <hpa@zytor.com>
      Cc: linux-mm@kvack.org
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Waiman Long <llong@redhat.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: joro@8bytes.org
      Link: https://lkml.kernel.org/r/1532533683-5988-4-git-send-email-joro@8bytes.org
      ca38dc8f
  19. 28 Jul, 2018 1 commit
    • dma-mapping: Generalise dma_32bit_limit flag · f07d141f
      Robin Murphy authored
      Whilst the notion of an upstream DMA restriction is most commonly seen
      in PCI host bridges saddled with a 32-bit native interface, a more
      general version of the same issue can exist on complex SoCs where a bus
      or point-to-point interconnect link from a device's DMA master interface
      to another component along the path to memory (often an IOMMU) may carry
      fewer address bits than the interfaces at both ends nominally support.
      In order to properly deal with this, the first step is to expand the
      dma_32bit_limit flag into an arbitrary mask.
      
      To minimise the impact on existing code, we'll make sure to only
      consider this new mask valid if set. That makes sense anyway, since a
      mask of zero would represent DMA not being wired up at all, and that
      would be better handled by not providing valid ops in the first place.
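
      Schematically, in struct device terms (a sketch; the new field name
      follows this series):

        u64 bus_dma_mask;       /* replaces "bool dma_32bit_limit";
                                   0 == no upstream limit expressed */

        /* consumers honour it only when set, e.g. when capping a DMA limit: */
        if (dev->bus_dma_mask)
                dma_limit = min_t(u64, dma_limit, dev->bus_dma_mask);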
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      f07d141f
  20. 27 Jul, 2018 1 commit
    • iommu: Add config option to set passthrough as default · 58d11317
      Olof Johansson authored
      This allows the default behavior to be controlled by a kernel config
      option instead of changing the commandline for the kernel to include
      "iommu.passthrough=on" or "iommu=pt" on machines where this is desired.
      
      Likewise, for machines where this config option is enabled, it can be
      disabled at boot time with "iommu.passthrough=off" or "iommu=nopt".
      
      Also corrected iommu=pt documentation for IA-64, since it has no code that
      parses iommu= at all.
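
      On the code side this is a small default-selection tweak; a sketch
      using the Kconfig symbol this change introduces:

        /* drivers/iommu/iommu.c: pick the default domain type at build time */
        #ifdef CONFIG_IOMMU_DEFAULT_PASSTHROUGH
        static unsigned int iommu_def_domain_type = IOMMU_DOMAIN_IDENTITY;
        #else
        static unsigned int iommu_def_domain_type = IOMMU_DOMAIN_DMA;
        #endif

      The iommu.passthrough= command line option still overrides the build
      time default either way.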
      Signed-off-by: Olof Johansson <olof@lixom.net>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      58d11317
  21. 24 Jul, 2018 2 commits