1. 30 November 2019, 28 commits
  2. 21 November 2019, 3 commits
    • arm64: uaccess: Remove uaccess_*_not_uao asm macros · e50be648
      Committed by Pavel Tatashin
      It is safer and simpler to drop the uaccess assembly macros in favour of
      inline C functions. Although this bloats the Image size slightly, it
      aligns our user copy routines with '{get,put}_user()' and generally
      makes the code a lot easier to reason about.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
      [will: tweaked commit message and changed temporary variable names]
      Signed-off-by: Will Deacon <will@kernel.org>
      e50be648
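      
      As a rough illustration of the macro-to-inline-C pattern this commit
      describes (a minimal sketch only: the function names below are
      hypothetical, and the real arm64 code wraps the PAN/UAO handling in
      CPU-feature alternatives rather than unconditional inline asm; it
      assumes an AArch64 toolchain that accepts the ARMv8.1 'pan'
      pseudo-register):
      
        /* Hypothetical inline C replacements for the old asm macros. */
        static inline void uaccess_enable_sketch(void)
        {
                /* Clear PSTATE.PAN so kernel code may access userspace. */
                asm volatile("msr pan, #0" ::: "memory");
        }
      
        static inline void uaccess_disable_sketch(void)
        {
                /* Set PSTATE.PAN again once the user access is finished. */
                asm volatile("msr pan, #1" ::: "memory");
        }
      
      Being ordinary C functions, these inline into their callers and can be
      reviewed and instrumented like '{get,put}_user()', which is what the
      commit means by making the code easier to reason about.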
    • arm64: uaccess: Ensure PAN is re-enabled after unhandled uaccess fault · 94bb804e
      Committed by Pavel Tatashin
      A number of our uaccess routines ('__arch_clear_user()' and
      '__arch_copy_{in,from,to}_user()') fail to re-enable PAN if they
      encounter an unhandled fault whilst accessing userspace.
      
      For CPUs implementing both hardware PAN and UAO, this bug has no effect
      when both extensions are in use by the kernel.
      
      For CPUs implementing hardware PAN but not UAO, this means that a kernel
      using hardware PAN may execute portions of code with PAN inadvertently
      disabled, opening us up to potential security vulnerabilities that rely
      on userspace access from within the kernel which would usually be
      prevented by this mechanism. In other words, parts of the kernel run the
      same way as they would on a CPU without PAN implemented/emulated at all.
      
      For CPUs not implementing hardware PAN and instead relying on software
      emulation via 'CONFIG_ARM64_SW_TTBR0_PAN=y', the impact is unfortunately
      much worse. Calling 'schedule()' with software PAN disabled means that
      the next task will execute in the kernel using the page-table and ASID
      of the previous process even after 'switch_mm()', since the actual
      hardware switch is deferred until return to userspace. At this point, or
      if there is an intermediate call to 'uaccess_enable()', the page-table
      and ASID of the new process are installed. Sadly, due to the changes
      introduced by KPTI, this is not an atomic operation and there is a very
      small window (two instructions) where the CPU is configured with the
      page-table of the old task and the ASID of the new task; a speculative
      access in this state is disastrous because it would corrupt the TLB
      entries for the new task with mappings from the previous address space.
      
      As Pavel explains:
      
        | I was able to reproduce memory corruption problem on Broadcom's SoC
        | ARMv8-A like this:
        |
        | Enable software perf-events with PERF_SAMPLE_CALLCHAIN so userland's
        | stack is accessed and copied.
        |
        | The test program performed the following on every CPU and forking
        | many processes:
        |
        |	unsigned long *map = mmap(NULL, PAGE_SIZE, PROT_READ|PROT_WRITE,
        |				  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        |	map[0] = getpid();
        |	sched_yield();
        |	if (map[0] != getpid()) {
        |		fprintf(stderr, "Corruption detected!");
        |	}
        |	munmap(map, PAGE_SIZE);
        |
        | From time to time I was getting map[0] to contain pid for a
        | different process.
      
      Ensure that PAN is re-enabled when returning after an unhandled user
      fault from our uaccess routines.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Cc: <stable@vger.kernel.org>
      Fixes: 338d4f49 ("arm64: kernel: Add support for Privileged Access Never")
      Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
      [will: rewrote commit message]
      Signed-off-by: Will Deacon <will@kernel.org>
      94bb804e
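      
      For reference, the quoted snippet expands to roughly the following
      self-contained test (an illustrative reconstruction, not Pavel's
      original program; the per-CPU/forking harness and the perf-events
      setup he mentions are omitted):
      
        #include <stdio.h>
        #include <unistd.h>
        #include <sched.h>
        #include <sys/mman.h>
      
        #define PAGE_SIZE 4096UL
      
        int main(void)
        {
                for (;;) {
                        unsigned long *map = mmap(NULL, PAGE_SIZE,
                                                  PROT_READ | PROT_WRITE,
                                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
                        if (map == MAP_FAILED) {
                                perror("mmap");
                                return 1;
                        }
      
                        map[0] = getpid();      /* publish our pid          */
                        sched_yield();          /* give other tasks a turn  */
                        if (map[0] != (unsigned long)getpid())
                                fprintf(stderr, "Corruption detected!\n");
      
                        munmap(map, PAGE_SIZE);
                }
        }
      
      Run one instance per CPU while forking many children, with software
      perf-events sampling callchains as described in the quote; the stale
      page-table/ASID window then occasionally lets map[0] come back holding
      another process's pid.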
    • s390/cpumf: Adjust registration of s390 PMU device drivers · 6a82e23f
      Committed by Thomas Richter
      The linux-next commit titled "perf/core: Optimize perf_init_event()"
      changed the semantics of PMU device driver registration. It was done
      to speed up the lookup/handling of PMU device driver specific events.
      It also enforces that only one PMU device driver of type PERF_TYPE_RAW
      can be registered.
      
      This change added these lines in function perf_pmu_register():
      
        ...
        +       ret = idr_alloc(&pmu_idr, pmu, max, 0, GFP_KERNEL);
        +       if (ret < 0)
                      goto free_pdc;
        +
        +       WARN_ON(type >= 0 && ret != type);
      
      The WARN_ON generates a message. We have three PMU device drivers,
      each registered as type PERF_TYPE_RAW.
      The cf_diag device driver (arch/s390/kernel/perf_cpum_cf_diag.c)
      always hits the WARN_ON because it is the second PMU device driver
      (after the sampling device driver arch/s390/kernel/perf_cpum_sf.c)
      registered as type 4 (PERF_TYPE_RAW).
      So when the sampling device driver is registered, ret has the value 4.
      When the cf_diag device driver is then registered with type 4,
      ret has the value 5 and the WARN_ON fires.
      
      Adjust the PMU device drivers for s390 to support the new
      semantics required by perf_pmu_register().
      Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      6a82e23f
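      
      A hedged sketch of the registration pattern the new semantics push
      drivers towards: instead of claiming the fixed PERF_TYPE_RAW slot
      (which only one PMU may now own), a driver can pass -1 and let the
      core hand out a dynamic type id. The struct and init function below
      are illustrative, not the actual s390 code:
      
        #include <linux/init.h>
        #include <linux/perf_event.h>
        #include <linux/printk.h>
      
        static struct pmu example_pmu;  /* event callbacks omitted for brevity */
      
        static int __init example_pmu_init(void)
        {
                /* type == -1: ask perf_pmu_register() for a dynamic type id. */
                int rc = perf_pmu_register(&example_pmu, "example_pmu", -1);
      
                if (rc)
                        return rc;
      
                /* The id chosen by the core is reported back in pmu->type. */
                pr_info("example_pmu registered as type %d\n", example_pmu.type);
                return 0;
        }
      
      With the extra RAW-type PMUs moved over to dynamic ids, the value
      returned by idr_alloc() no longer has to match a hard-coded type and
      the WARN_ON shown above stays quiet.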
  3. 20 November 2019, 6 commits
  4. 14 November 2019, 3 commits
    • KVM: x86/mmu: Take slots_lock when using kvm_mmu_zap_all_fast() · ed69a6cb
      Committed by Sean Christopherson
      Acquire the per-VM slots_lock when zapping all shadow pages as part of
      toggling nx_huge_pages.  The fast zap algorithm relies on exclusivity
      (via slots_lock) to identify obsolete vs. valid shadow pages, because it
      uses a single bit for its generation number. Holding slots_lock also
      obviates the need to acquire a read lock on the VM's srcu.
      
      Failing to take slots_lock when toggling nx_huge_pages allows multiple
      instances of kvm_mmu_zap_all_fast() to run concurrently, as the other
      user, KVM_SET_USER_MEMORY_REGION, does not take the global kvm_lock.
      (kvm_mmu_zap_all_fast() does take kvm->mmu_lock, but it can be
      temporarily dropped by kvm_zap_obsolete_pages(), so it is not enough
      to enforce exclusivity).
      
      Concurrent fast zap instances cause obsolete shadow pages to be
      incorrectly identified as valid due to the single bit generation number
      wrapping, which results in stale shadow pages being left in KVM's MMU
      and leads to all sorts of undesirable behavior.
      The bug is easily confirmed by running with CONFIG_PROVE_LOCKING and
      toggling nx_huge_pages via its module param.
      
      Note, until commit 4ae5acbc4936 ("KVM: x86/mmu: Take slots_lock when
      using kvm_mmu_zap_all_fast()", 2019-11-13) the fast zap algorithm used
      an ulong-sized generation instead of relying on exclusivity for
      correctness, but all callers except the recently added set_nx_huge_pages()
      needed to hold slots_lock anyways.  Therefore, this patch does not have
      to be backported to stable kernels.
      
      Given that toggling nx_huge_pages is by no means a fast path, force it
      to conform to the current approach instead of reintroducing the previous
      generation count.
      
      Fixes: b8e8c830 ("kvm: mmu: ITLB_MULTIHIT mitigation", but NOT FOR STABLE)
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ed69a6cb
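      
      The shape of the fix, as a minimal sketch: walk the VM list under
      kvm_lock and hold each VM's slots_lock across the fast zap. This is
      loosely modelled on the nx_huge_pages toggle path; the helper name and
      the surrounding module-parameter handling are illustrative, not the
      exact KVM code:
      
        #include <linux/kvm_host.h>
      
        static void zap_all_vms_for_nx_huge_pages(void)
        {
                struct kvm *kvm;
      
                mutex_lock(&kvm_lock);
                list_for_each_entry(kvm, &vm_list, vm_list) {
                        /* slots_lock gives the fast zap the exclusivity it needs */
                        mutex_lock(&kvm->slots_lock);
                        kvm_mmu_zap_all_fast(kvm);
                        mutex_unlock(&kvm->slots_lock);
                }
                mutex_unlock(&kvm_lock);
        }
      
      Holding slots_lock here serialises against KVM_SET_USER_MEMORY_REGION,
      so only one fast zap at a time can observe and flip the single-bit
      generation, and taking the VM's srcu read lock becomes unnecessary.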
    • sparc: vdso: fix build error of vdso32 · 53472914
      Committed by Masahiro Yamada
      Since commit 54b8ae66 ("kbuild: change *FLAGS_<basetarget>.o to
      take the path relative to $(obj)"), sparc allmodconfig fails to build
      as follows:
      
        CC      arch/sparc/vdso/vdso32/vclock_gettime.o
      unrecognized e_machine 18 arch/sparc/vdso/vdso32/vclock_gettime.o
      arch/sparc/vdso/vdso32/vclock_gettime.o: failed
      
      The cause of the breakage is that the -pg flag is not being dropped.
      
      The vdso32 files are located in the vdso32/ subdirectory, but I missed
      updating the Makefile.
      
      I removed the meaningless CFLAGS_REMOVE_vdso-note.o since it is only
      effective for C files.
      
      vdso-note.o is compiled from the assembly files:
      
        arch/sparc/vdso/vdso-note.S
        arch/sparc/vdso/vdso32/vdso-note.S
      
      Fixes: 54b8ae66 ("kbuild: change *FLAGS_<basetarget>.o to take the path relative to $(obj)")
      Reported-by: Anatoly Pugachev <matorola@gmail.com>
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Tested-by: Anatoly Pugachev <matorola@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      53472914
    • arm64: Kconfig: add a choice for endianness · d8e85e14
      Committed by Anders Roxell
      When building allmodconfig with KCONFIG_ALLCONFIG=$(pwd)/arch/arm64/configs/defconfig,
      CONFIG_CPU_BIG_ENDIAN gets enabled, which tends not to be what most
      people want. Another concern that has come up is that ACPI isn't built
      for an allmodconfig kernel today, since it also depends on !CPU_BIG_ENDIAN.
      
      Rework so that we introduce a 'choice' and default it to
      CPU_LITTLE_ENDIAN. That means that when we build an allmodconfig
      kernel, it will default to CPU_LITTLE_ENDIAN, which is what most people
      tend to want.
      Reviewed-by: John Garry <john.garry@huawei.com>
      Acked-by: Will Deacon <will@kernel.org>
      Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d8e85e14