1. 05 December 2019, 3 commits
    • arm64: preempt: Fix big-endian when checking preempt count in assembly · 64694b27
      Committed by Will Deacon
      [ Upstream commit 7faa313f05cad184e8b17750f0cbe5216ac6debb ]
      
      Commit 396244692232 ("arm64: preempt: Provide our own implementation of
      asm/preempt.h") extended the preempt count field in struct thread_info
      to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag
      indicating whether or not the current task needs rescheduling.
      
      Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point
      to this new field, the assembly usage was left untouched meaning that a
      32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns
      the reschedule flag instead of the count.
      
      Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field,
      we're actually better off reworking the two assembly users so that they
      operate on the whole 64-bit value, rather than inspecting the thread
      flags separately, in order to determine whether a reschedule is needed
      (a paraphrased layout sketch follows this entry).
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: "kernelci.org bot" <bot@kernelci.org>
      Tested-by: Kevin Hilman <khilman@baylibre.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
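      A minimal sketch of the layout involved, paraphrased from the arm64
      thread_info/preempt headers of that era (field order and names here are
      recalled rather than quoted verbatim): the 64-bit preempt_count overlays
      a 32-bit count and a 32-bit need_resched word, with the flag stored
      inverted so that the whole 64-bit value is zero exactly when the count
      is zero and a reschedule is pending.

        #include <linux/types.h>

        struct thread_info_sketch {
                unsigned long   flags;
                /* other fields elided */
                union {
                        u64     preempt_count;  /* as a whole: 0 => preemptible and resched pending */
                        struct {
        #ifdef CONFIG_CPU_BIG_ENDIAN
                                u32     need_resched;   /* stored inverted: 0 when a reschedule is needed */
                                u32     count;
        #else
                                u32     count;
                                u32     need_resched;
        #endif
                        } preempt;
                };
        };

        /*
         * TSK_TI_PREEMPT points at the union, so a 32-bit load from that
         * offset on big-endian reads 'need_resched' rather than 'count'.
         * The fix has the assembly load the whole 64-bit 'preempt_count'
         * and only preempt when that value is zero, covering both halves
         * with a single access.
         */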
    • arm64: smp: Handle errors reported by the firmware · 4b40393b
      Committed by Suzuki K Poulose
      [ Upstream commit f357b3a7e17af7736d67d8267edc1ed3d1dd9391 ]
      
      The __cpu_up() routine ignores the errors reported by the firmware
      for a CPU bringup operation and instead looks for the error status set
      by the booting CPU. If the CPU never entered the kernel, we could end
      up assuming a stale error status, which would otherwise have been set
      or cleared appropriately by the booting CPU (see the flow sketch after
      this entry).
      Reported-by: Steve Capper <steve.capper@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
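      A paraphrased C sketch of the intended flow (modelled on __cpu_up() in
      arch/arm64/kernel/smp.c; the function name is illustrative and the
      boot-status handling is simplified). The point is the early return when
      the firmware itself reports a failure:

        static int cpu_up_sketch(unsigned int cpu, struct task_struct *idle)
        {
                int ret = boot_secondary(cpu, idle);    /* PSCI/spin-table request */

                if (ret) {
                        /*
                         * The firmware never started the CPU, so the secondary
                         * never wrote its boot status word: report the firmware
                         * error now instead of interpreting a stale status.
                         */
                        pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
                        return ret;
                }

                /* The CPU was started: wait for it and check what it reported. */
                wait_for_completion_timeout(&cpu_running, msecs_to_jiffies(1000));
                return cpu_online(cpu) ? 0 : -EIO;
        }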
    • arm64: mm: Prevent mismatched 52-bit VA support · e3d27b94
      Committed by Steve Capper
      [ Upstream commit a96a33b1ca57dbea4285893dedf290aeb8eb090b ]
      
      For cases where there is a mismatch in ARMv8.2-LVA support between CPUs
      we have to be careful in allowing secondary CPUs to boot if 52-bit
      virtual addresses have already been enabled on the boot CPU.
      
      This patch adds code to the secondary startup path. If the boot CPU has
      enabled 52-bit VAs then ID_AA64MMFR2_EL1 is checked to see if the
      secondary can also enable 52-bit support. If not, the secondary is
      prevented from booting and an error message is displayed indicating why.
      
      Technically this patch could be implemented using the cpufeature code
      when considering 52-bit userspace support. However, we employ low-level
      checks here as the cpufeature code won't be able to run if we have
      mismatched 52-bit kernel VA support (a conceptual rendering of the
      check follows this entry).
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
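      The in-tree check runs in assembly on the secondary's early startup
      path in head.S, before normal kernel facilities are available, but it
      amounts conceptually to the C below (the helper name is made up; the
      ID_AA64MMFR2_EL1.VARange field is the architectural indicator of
      ARMv8.2-LVA support):

        /* Conceptual only; not the in-tree implementation. */
        static bool secondary_supports_52bit_va(void)
        {
                u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

                /* VARange (bits [19:16]) non-zero => ARMv8.2-LVA implemented. */
                return ((mmfr2 >> 16) & 0xf) != 0;
        }

      If the boot CPU is already using 52-bit VAs and this check fails on a
      secondary, the secondary records a "stuck in kernel" status for the
      boot CPU to report and parks itself instead of continuing to boot.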
  2. 01 December 2019, 2 commits
  3. 24 November 2019, 2 commits
    • arm64/numa: Report correct memblock range for the dummy node · 06cb99e6
      Committed by Anshuman Khandual
      [ Upstream commit 77cfe950901e5c13aca2df6437a05f39dd9a929b ]
      
      The dummy node ID is assigned to every memory range on the system, so
      the dummy node really spans the whole of memblock.memory. Hence, report
      the correct extent for the dummy node using the memblock range helper
      functions instead of the range [0LLU, PFN_PHYS(max_pfn) - 1] (see the
      sketch after this entry).
      
      Fixes: 1a2db300 ("arm64, numa: Add NUMA support for arm64 platforms")
      Acked-by: Punit Agrawal <punit.agrawal@arm.com>
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
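      A sketch of the resulting report in dummy_numa_init(), paraphrased;
      memblock_start_of_DRAM() and memblock_end_of_DRAM() are the existing
      memblock helpers, and the surrounding function is abbreviated:

        /* Describe the single fake node by the memblock limits, not from 0. */
        pr_info("Faking a node at [mem %#018Lx-%#018Lx]\n",
                memblock_start_of_DRAM(), memblock_end_of_DRAM() - 1);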
    • arm64: uaccess: Ensure PAN is re-enabled after unhandled uaccess fault · 70366259
      Committed by Pavel Tatashin
      commit 94bb804e1e6f0a9a77acf20d7c70ea141c6c821e upstream.
      
      A number of our uaccess routines ('__arch_clear_user()' and
      '__arch_copy_{in,from,to}_user()') fail to re-enable PAN if they
      encounter an unhandled fault whilst accessing userspace.
      
      For CPUs implementing both hardware PAN and UAO, this bug has no effect
      when both extensions are in use by the kernel.
      
      For CPUs implementing hardware PAN but not UAO, this means that a kernel
      using hardware PAN may execute portions of code with PAN inadvertently
      disabled, opening us up to potential security vulnerabilities that rely
      on userspace access from within the kernel which would usually be
      prevented by this mechanism. In other words, parts of the kernel run the
      same way as they would on a CPU without PAN implemented/emulated at all.
      
      For CPUs not implementing hardware PAN and instead relying on software
      emulation via 'CONFIG_ARM64_SW_TTBR0_PAN=y', the impact is unfortunately
      much worse. Calling 'schedule()' with software PAN disabled means that
      the next task will execute in the kernel using the page-table and ASID
      of the previous process even after 'switch_mm()', since the actual
      hardware switch is deferred until return to userspace. At this point, or
      if there is an intermediate call to 'uaccess_enable()', the page-table
      and ASID of the new process are installed. Sadly, due to the changes
      introduced by KPTI, this is not an atomic operation and there is a very
      small window (two instructions) where the CPU is configured with the
      page-table of the old task and the ASID of the new task; a speculative
      access in this state is disastrous because it would corrupt the TLB
      entries for the new task with mappings from the previous address space.
      
      As Pavel explains:
      
        | I was able to reproduce memory corruption problem on Broadcom's SoC
        | ARMv8-A like this:
        |
        | Enable software perf-events with PERF_SAMPLE_CALLCHAIN so userland's
        | stack is accessed and copied.
        |
        | The test program performed the following on every CPU and forking
        | many processes:
        |
        |	unsigned long *map = mmap(NULL, PAGE_SIZE, PROT_READ|PROT_WRITE,
        |				  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        |	map[0] = getpid();
        |	sched_yield();
        |	if (map[0] != getpid()) {
        |		fprintf(stderr, "Corruption detected!");
        |	}
        |	munmap(map, PAGE_SIZE);
        |
        | From time to time I was getting map[0] to contain pid for a
        | different process.
      
      Ensure that PAN is re-enabled when returning after an unhandled user
      fault from our uaccess routines (a control-flow sketch follows this
      entry).
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Cc: <stable@vger.kernel.org>
      Fixes: 338d4f49 ("arm64: kernel: Add support for Privileged Access Never")
      Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
      [will: rewrote commit message]
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
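      A C-like sketch of the control flow being fixed. The real routines are
      assembly under arch/arm64/lib/ and use the uaccess_enable_not_uao /
      uaccess_disable_not_uao assembler macros; the function and copy-helper
      names below are made up purely for illustration:

        /* Conceptual only -- not the in-tree implementation. */
        unsigned long copy_from_user_sketch(void *to, const void __user *from,
                                            unsigned long n)
        {
                uaccess_enable_not_uao();       /* clear PSTATE.PAN / switch TTBR0 */

                n = do_copy_may_fault(to, from, n);     /* fault fixup lands on the epilogue */

                /*
                 * Before the fix, the fault fixup path returned without coming
                 * back through here, leaving PAN disabled for whatever ran
                 * next.  The fix routes every exit, including the fault path,
                 * through this re-enable step.
                 */
                uaccess_disable_not_uao();      /* set PSTATE.PAN again */
                return n;
        }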
  4. 21 November 2019, 22 commits
  5. 13 November 2019, 1 commit
  6. 10 November 2019, 4 commits
  7. 06 November 2019, 5 commits
  8. 29 October 2019, 1 commit