1. 21 May 2019, 3 commits
  2. 19 May 2019, 1 commit
  3. 18 May 2019, 7 commits
  4. 17 May 2019, 20 commits
  5. 16 May 2019, 9 commits
    • powerpc/mm: Drop VM_BUG_ON in get_region_id() · 6457f42e
      Committed by Aneesh Kumar K.V
      We call get_region_id() without validating the ea value. That means
      with a wrong ea value we hit the BUG as below.
      
        kernel BUG at arch/powerpc/include/asm/book3s/64/hash.h:129!
        Oops: Exception in kernel mode, sig: 5 [#1]
        LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
        CPU: 0 PID: 3937 Comm: access_tests Not tainted 5.1.0
        ....
        NIP [c00000000007ba20] do_slb_fault+0x70/0x320
        LR [c00000000000896c] data_access_slb_common+0x15c/0x1a0
      
      Fix this by removing the VM_BUG_ON. All callers make sure the returned
      region id is valid and error out otherwise (a minimal sketch of that
      caller-side check follows this entry).
      
      Fixes: 0034d395 ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range")
      Reported-by: Andrew Donnellan <ajd@linux.ibm.com>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
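
      A minimal, standalone C sketch of the caller-side pattern described in
      this commit message (illustrative only: the region layout, the constants
      and the name handle_fault() are assumptions, not the actual
      arch/powerpc code):

        #include <stdint.h>
        #include <stdio.h>

        #define EA_REGION_SHIFT 60   /* assumed: the top nibble of the ea selects the region */
        #define MAX_REGION_ID    4   /* assumed number of valid region ids */

        /* No BUG/assert here: simply report whatever the address encodes. */
        static int get_region_id(uint64_t ea)
        {
            return (int)(ea >> EA_REGION_SHIFT);
        }

        /* Caller-side validation: error out instead of crashing the kernel. */
        static int handle_fault(uint64_t ea)
        {
            int id = get_region_id(ea);

            if (id >= MAX_REGION_ID)
                return -1;   /* invalid ea: report an error to the caller */
            printf("ea 0x%llx -> region %d\n", (unsigned long long)ea, id);
            return 0;
        }

        int main(void)
        {
            handle_fault(0x0000000010000000ULL);   /* id 0: accepted */
            handle_fault(0xf000000000000000ULL);   /* id 15: rejected, no BUG */
            return 0;
        }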
    • nds32: Fix vDSO clock_getres() · af9abd65
      Committed by Vincenzo Frascino
      clock_getres in the vDSO library has to preserve the same behaviour
      as posix_get_hrtimer_res().
      
      In particular, posix_get_hrtimer_res() does:
          sec = 0;
          ns = hrtimer_resolution;
      and hrtimer_resolution depends on the enablement of the high
      resolution timers that can happen either at compile or at run time.
      
      Fix the nds32 vdso implementation of clock_getres by keeping a copy of
      hrtimer_resolution in the vdso data and using that directly (a minimal
      sketch of this approach follows the entry).
      
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Greentime Hu <greentime@andestech.com>
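
      A minimal standalone C sketch of the approach described above (the
      struct layout and names here are assumptions for illustration, not the
      real nds32 vDSO data page or ABI):

        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        /* Assumed stand-in for the kernel-updated vDSO data page. */
        struct vdso_data {
            uint32_t hrtimer_res;   /* copy of hrtimer_resolution kept by the kernel */
        };

        /*
         * vDSO-side clock_getres(): mirror posix_get_hrtimer_res(), i.e.
         * sec = 0 and ns = hrtimer_resolution, using the copied value.
         */
        static int vdso_clock_getres(const struct vdso_data *vd, struct timespec *res)
        {
            if (res) {
                res->tv_sec  = 0;
                res->tv_nsec = vd->hrtimer_res;
            }
            return 0;
        }

        int main(void)
        {
            /* 1 ns resolution, as when high resolution timers are enabled (assumed). */
            struct vdso_data vd = { .hrtimer_res = 1 };
            struct timespec ts;

            vdso_clock_getres(&vd, &ts);
            printf("res = %lld s %ld ns\n", (long long)ts.tv_sec, ts.tv_nsec);
            return 0;
        }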
    • x86/speculation/mds: Revert CPU buffer clear on double fault exit · 88640e1d
      Committed by Andy Lutomirski
      The double fault ESPFIX path doesn't return to user mode at all --
      it returns back to the kernel by simulating a #GP fault.
      prepare_exit_to_usermode() will run on the way out of
      general_protection before running user code.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Fixes: 04dcbdb8 ("x86/speculation/mds: Clear CPU buffers on exit to user")
      Link: http://lkml.kernel.org/r/ac97612445c0a44ee10374f6ea79c222fe22a5c4.1557865329.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • nds32: don't export low-level cache flushing routines · a771e922
      Committed by Christoph Hellwig
      None of these is used by modules, nor should they be, as we have better
      high-level primitives.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Greentime Hu <greentime@andestech.com>
      Signed-off-by: Greentime Hu <greentime@andestech.com>
    • ia64: Make sure that we have a mmiowb function real early · 8a635ffb
      Committed by Tony Luck
      Generic kernels feed many operations through the "machvec" logic to get
      the correct form of the operation for the current system.  "mmiowb()" is
      one of those operations.
      
      Although machvec is initialized very early in boot, it isn't early
      enough for a recent upstream kernel change that added mmiowb to the
      spin_unlock() path.
      
      Statically initialize the mmiowb field of machvec so that we won't die
      with a call through a NULL pointer (a minimal sketch of this pattern
      follows the entry).  This should be safe because we do the real
      initialization of machvec before bringing up any additional CPUs or
      doing any I/O.
      
      Fixes: 49ca6462 ("ia64/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()")
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
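
      A minimal standalone C sketch of the pattern this commit applies (the
      machine-vector structure and names below are placeholders, not the real
      ia64 machvec definitions):

        #include <stdio.h>

        /* Assumed stand-in for the ia64 machine vector. */
        struct machvec {
            void (*mmiowb)(void);
        };

        static void generic_mmiowb(void)
        {
            /* The real implementation issues the platform's I/O ordering barrier. */
        }

        /*
         * Statically initialize the mmiowb member so that the unconditional
         * mmiowb() added to arch_spin_unlock() works even before the
         * boot-time machvec setup runs; otherwise the call would go through
         * a NULL pointer and crash.
         */
        static struct machvec ia64_mv = {
            .mmiowb = generic_mmiowb,
        };

        static void spin_unlock_like_path(void)
        {
            ia64_mv.mmiowb();   /* safe even very early in boot */
        }

        int main(void)
        {
            spin_unlock_like_path();
            printf("mmiowb reached through a statically initialized machvec\n");
            return 0;
        }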
    • Revert "ARM: 8846/1: warn if divided syntax assembler is used" · b752bb40
      Committed by Russell King
      This reverts commit e8c24bbd.
      
      GCC 4.7, which is still a permitted compiler version, emits code using
      the original divided syntax.  This means we end up with lots of assembler
      warnings when building with a currently-supported version of gcc.
      
      Revert the commit (with fixups to keep the follow-on -mauto-it
      change) to avoid these warnings.
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    • MIPS: Alchemy: add DMA masks for on-chip ethernet · b1e479e3
      Committed by Manuel Lauss
      Makes au1000-eth work again; tested on DB1500 (a toy sketch of the
      DMA-mask setup follows this entry).
      Signed-off-by: Manuel Lauss <manuel.lauss@gmail.com>
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Cc: Linux-MIPS <linux-mips@linux-mips.org>
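
      A toy standalone C sketch of what "adding DMA masks" amounts to (the
      struct below only models the relevant platform-device fields; the actual
      au1000-eth resources and kernel types are not reproduced here):

        #include <stdint.h>
        #include <stdio.h>

        /* Assumed model of the two DMA-mask fields a platform device carries. */
        #define TOY_DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

        struct toy_device {
            uint64_t *dma_mask;           /* streaming DMA mask */
            uint64_t coherent_dma_mask;   /* mask for coherent allocations */
        };

        static uint64_t au1xxx_eth_dmamask = TOY_DMA_BIT_MASK(32);

        /* With both masks left at zero the driver cannot map DMA buffers. */
        static struct toy_device au1000_eth_dev = {
            .dma_mask          = &au1xxx_eth_dmamask,
            .coherent_dma_mask = TOY_DMA_BIT_MASK(32),
        };

        int main(void)
        {
            printf("dma_mask=0x%llx coherent=0x%llx\n",
                   (unsigned long long)*au1000_eth_dev.dma_mask,
                   (unsigned long long)au1000_eth_dev.coherent_dma_mask);
            return 0;
        }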
    • Revert "KVM: nVMX: Expose RDPMC-exiting only when guest supports PMU" · f93f7ede
      Committed by Sean Christopherson
      The RDPMC-exiting control is dependent on the existence of the RDPMC
      instruction itself, i.e. is not tied to the "Architectural Performance
      Monitoring" feature.  For all intents and purposes, the control exists
      on all CPUs with VMX support, since RDPMC also exists on all CPUs with
      VMX support.  Per Intel's SDM:
      
        The RDPMC instruction was introduced into the IA-32 Architecture in
        the Pentium Pro processor and the Pentium processor with MMX technology.
        The earlier Pentium processors have performance-monitoring counters, but
        they must be read with the RDMSR instruction.
      
      Because RDPMC-exiting always exists, KVM requires the control and refuses
      to load if it's not available.  As a result, hiding the PMU from a guest
      breaks nested virtualization if the guest attempts to use KVM.
      
      While it's not explicitly stated in the RDPMC pseudocode, the VM-Exit
      check for RDPMC-exiting follows standard fault vs. VM-Exit prioritization
      for privileged instructions, e.g. it occurs after the CPL/CR0.PE/CR4.PCE
      checks, but before the counter referenced in ECX is checked for validity
      (a toy model of this ordering follows the entry).
      
      In other words, the original KVM behavior of injecting a #GP was correct,
      and the KVM unit test needs to be adjusted accordingly, e.g. eat the #GP
      when the unit test guest (L3 in this case) executes RDPMC without
      RDPMC-exiting set in the unit test host (L2).
      
      This reverts commit e51bfdb6.
      
      Fixes: e51bfdb6 ("KVM: nVMX: Expose RDPMC-exiting only when guest supports PMU")
      Reported-by: David Hill <hilld@binarystorm.net>
      Cc: Saar Amar <saaramar@microsoft.com>
      Cc: Mihai Carabas <mihai.carabas@oracle.com>
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Liran Alon <liran.alon@oracle.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
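
      A toy standalone C model of the priority ordering described above (the
      function and parameter names are invented for illustration; this is not
      the KVM or hardware implementation):

        #include <stdbool.h>
        #include <stdio.h>

        enum rdpmc_outcome { INJECT_GP, VMEXIT_RDPMC, READ_COUNTER };

        /*
         * Privilege checks (#GP) come first, then the RDPMC-exiting VM-Exit,
         * and only then the validity check on the counter index in ECX.
         */
        static enum rdpmc_outcome rdpmc_in_guest(bool priv_ok, bool rdpmc_exiting,
                                                 bool ecx_valid)
        {
            if (!priv_ok)
                return INJECT_GP;      /* CPL/CR0.PE/CR4.PCE fault beats the VM-Exit */
            if (rdpmc_exiting)
                return VMEXIT_RDPMC;   /* exit to the hypervisor */
            if (!ecx_valid)
                return INJECT_GP;      /* bad counter index is checked last */
            return READ_COUNTER;
        }

        int main(void)
        {
            /* Without RDPMC-exiting set, an invalid index yields #GP, not a VM-Exit. */
            printf("outcome = %d\n", rdpmc_in_guest(true, false, false));
            return 0;
        }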
    • kvm: x86: Fix L1TF mitigation for shadow MMU · 61455bf2
      Committed by Kai Huang
      Currently KVM sets the 5 most significant physical address bits, as
      reported by CPUID (boot_cpu_data.x86_phys_bits), in nonpresent or
      reserved SPTEs to mitigate L1TF attacks from the guest when using the
      shadow MMU.  However, for some Intel CPUs the physical address width of
      the internal cache is greater than the physical address width reported
      by CPUID.
      
      Use the kernel's existing boot_cpu_data.x86_cache_bits to determine the
      five most significant bits. Doing so improves KVM's L1TF mitigation in
      the unlikely scenario that system RAM overlaps the high order bits of
      the "real" physical address space as reported by CPUID. This aligns with
      the kernel's warnings regarding L1TF mitigation, e.g. in the above
      scenario the kernel won't warn the user about lack of L1TF mitigation
      if x86_cache_bits is greater than x86_phys_bits.
      
      Also initialize shadow_nonpresent_or_rsvd_mask explicitly to make it
      consistent with the other 'shadow_{xxx}_mask' values, and opportunistically
      add a one-time WARN if KVM's L1TF mitigation cannot be applied on a system
      that is marked as being susceptible to L1TF (a minimal sketch of the mask
      computation follows the entry).
      Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
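
      A minimal standalone C sketch of the mask computation described above
      (the constants, the 52-bit bound and the helper name are illustrative
      assumptions, not the actual kvm/mmu code):

        #include <stdint.h>
        #include <stdio.h>

        #define RSVD_MASK_LEN 5   /* number of high bits KVM repurposes for the mitigation */

        /*
         * Derive the 5 high bits from the cache's physical address width
         * (x86_cache_bits) rather than x86_phys_bits, so the repurposed bits
         * sit above anything the cache can hold.
         */
        static uint64_t l1tf_rsvd_mask(int x86_cache_bits)
        {
            uint64_t mask = 0;

            /* 52 is used here as an assumed upper bound on usable SPTE address bits. */
            if (x86_cache_bits <= 52 - RSVD_MASK_LEN)
                mask = ((1ULL << RSVD_MASK_LEN) - 1)
                       << (x86_cache_bits - RSVD_MASK_LEN);
            return mask;
        }

        int main(void)
        {
            /* e.g. a CPU whose internal cache covers 46 physical address bits */
            printf("shadow_nonpresent_or_rsvd_mask = 0x%016llx\n",
                   (unsigned long long)l1tf_rsvd_mask(46));
            return 0;
        }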