1. 01 Jun, 2018 (1 commit)
  2. 31 May, 2018 (1 commit)
    • A
      crypto: clarify licensing of OpenSSL asm code · c2e415fe
      Committed by Adam Langley
      Several source files have been taken from OpenSSL. In some of them a
      comment that "permission to use under GPL terms is granted" was
      included below a contradictory license statement. In several cases,
      there was no indication that the license of the code was compatible
      with the GPLv2.
      
      This change clarifies the licensing for all of these files. I've
      confirmed with the author (Andy Polyakov) that a) he has licensed the
      files with the GPLv2 comment under that license and b) that he's also
      happy to license the other files under GPLv2 too. In one case, the
      file is already contained in his CRYPTOGAMS bundle, which has a GPLv2
      option, and so no special measures are needed.
      
      In all cases, the license status of code has been clarified by making
      the GPLv2 license prominent.
      
      The .S files have been regenerated from the updated .pl files.
      
      This is a comment-only change. No code is changed.
      Signed-off-by: Adam Langley <agl@chromium.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      c2e415fe
  3. 27 May, 2018 (1 commit)
    • J
      arm64: dts: hikey: Fix eMMC corruption regression · 9c6d26df
      Committed by John Stultz
      This patch is a partial revert of
      commit abd7d097 ("arm64: dts: hikey: Enable HS200 mode on eMMC"),
      which has been causing eMMC corruption on my HiKey board.
      
      Symptoms usually looked like:
      
      mmc_host mmc0: Bus speed (slot 0) = 24800000Hz (slot req 400000Hz, actual 400000HZ div = 31)
      ...
      mmc_host mmc0: Bus speed (slot 0) = 148800000Hz (slot req 150000000Hz, actual 148800000HZ div = 0)
      mmc0: new HS200 MMC card at address 0001
      ...
      dwmmc_k3 f723d000.dwmmc0: Unexpected command timeout, state 3
      mmc_host mmc0: Bus speed (slot 0) = 24800000Hz (slot req 400000Hz, actual 400000HZ div = 31)
      mmc_host mmc0: Bus speed (slot 0) = 148800000Hz (slot req 150000000Hz, actual 148800000HZ div = 0)
      mmc_host mmc0: Bus speed (slot 0) = 24800000Hz (slot req 400000Hz, actual 400000HZ div = 31)
      mmc_host mmc0: Bus speed (slot 0) = 148800000Hz (slot req 150000000Hz, actual 148800000HZ div = 0)
      mmc_host mmc0: Bus speed (slot 0) = 24800000Hz (slot req 400000Hz, actual 400000HZ div = 31)
      mmc_host mmc0: Bus speed (slot 0) = 148800000Hz (slot req 150000000Hz, actual 148800000HZ div = 0)
      print_req_error: I/O error, dev mmcblk0, sector 8810504
      Aborting journal on device mmcblk0p10-8.
      mmc_host mmc0: Bus speed (slot 0) = 24800000Hz (slot req 400000Hz, actual 400000HZ div = 31)
      mmc_host mmc0: Bus speed (slot 0) = 148800000Hz (slot req 150000000Hz, actual 148800000HZ div = 0)
      mmc_host mmc0: Bus speed (slot 0) = 24800000Hz (slot req 400000Hz, actual 400000HZ div = 31)
      mmc_host mmc0: Bus speed (slot 0) = 148800000Hz (slot req 150000000Hz, actual 148800000HZ div = 0)
      mmc_host mmc0: Bus speed (slot 0) = 24800000Hz (slot req 400000Hz, actual 400000HZ div = 31)
      mmc_host mmc0: Bus speed (slot 0) = 148800000Hz (slot req 150000000Hz, actual 148800000HZ div = 0)
      mmc_host mmc0: Bus speed (slot 0) = 24800000Hz (slot req 400000Hz, actual 400000HZ div = 31)
      mmc_host mmc0: Bus speed (slot 0) = 148800000Hz (slot req 150000000Hz, actual 148800000HZ div = 0)
      EXT4-fs error (device mmcblk0p10): ext4_journal_check_start:61: Detected aborted journal
      EXT4-fs (mmcblk0p10): Remounting filesystem read-only
      
      And quite often this would result in a disk that wouldn't properly
      boot even with older kernels.
      
      It seems the max-frequency property added by the above patch is
      causing the problem, so remove it.
      
      Cc: Ryan Grachek <ryan@edited.us>
      Cc: Wei Xu <xuwei5@hisilicon.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Ulf Hansson <ulf.hansson@linaro.org>
      Cc: YongQin Liu <yongqin.liu@linaro.org>
      Cc: Leo Yan <leo.yan@linaro.org>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      Tested-by: Leo Yan <leo.yan@linaro.org>
      Signed-off-by: Wei Xu <xuwei04@gmail.com>
      9c6d26df
  4. 24 May, 2018 (1 commit)
  5. 23 May, 2018 (1 commit)
    • P
      arm64: fault: Don't leak data in ESR context for user fault on kernel VA · cc198460
      Committed by Peter Maydell
      If userspace faults on a kernel address, handing them the raw ESR
      value on the sigframe as part of the delivered signal can leak data
      useful to attackers who are using information about the underlying hardware
      fault type (e.g. translation vs permission) as a mechanism to defeat KASLR.
      
      However there are also legitimate uses for the information provided
      in the ESR -- notably the GCC and LLVM sanitizers use this to report
      whether wild pointer accesses by the application are reads or writes
      (since a wild write is a more serious bug than a wild read), so we
      don't want to drop the ESR information entirely.
      
      For faulting addresses in the kernel, sanitize the ESR. We choose
      to present userspace with the illusion that there is nothing mapped
      in the kernel's part of the address space at all, by reporting all
      faults as level 0 translation faults taken to EL1.
      
      These fields are safe to pass through to userspace as they depend
      only on the instruction that userspace used to provoke the fault:
       EC IL (always)
       ISV CM WNR (for all data aborts)
      All the other fields in ESR except DFSC are architecturally RES0
      for an L0 translation fault taken to EL1, so can be zeroed out
      without confusing userspace.
      
      The illusion is not entirely perfect, as there is a tiny wrinkle
      where we will report an alignment fault that was not due to the memory
      type (for instance a LDREX to an unaligned address) as a translation
      fault, whereas if you do this on real unmapped memory the alignment
      fault takes precedence. This is not likely to trip anybody up in
      practice, as the only users we know of for the ESR information who
      care about the behaviour for kernel addresses only really want to
      know about the WnR bit.
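      
      A minimal, single-path sketch of the masking described above, assuming the
      ESR_ELx_* field definitions from asm/esr.h (the actual patch distinguishes
      the abort classes; this is an illustration of the approach, not the patch):
      
        #include <asm/esr.h>
        
        /* Keep only the instruction-dependent fields, zero the rest, and
         * report a level 0 translation fault for faulting kernel addresses. */
        static unsigned int sanitize_esr_for_user(unsigned int esr)
        {
                unsigned int keep = ESR_ELx_EC_MASK | ESR_ELx_IL |          /* always safe */
                                    ESR_ELx_ISV | ESR_ELx_CM | ESR_ELx_WNR; /* data aborts */
        
                return (esr & keep) | ESR_ELx_FSC_FAULT; /* DFSC: translation fault, level 0 */
        }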
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      cc198460
  6. 22 May, 2018 (2 commits)
    • J
      arm64: export tishift functions to modules · 255845fc
      Committed by Jason A. Donenfeld
      Otherwise modules that use these arithmetic operations will fail to
      link. We accomplish this with the usual EXPORT_SYMBOL, which on most
      architectures goes in the .S file, but the ARM64 maintainers prefer that
      it instead go into arm64ksyms.
      
      While we're at it, we also fix this up to use SPDX, and I personally
      choose to relicense this as GPL2||BSD so that these symbols don't need
      to be EXPORT_SYMBOL_GPL; that way all modules can use the routines, since
      these are important general-purpose compiler-generated function calls.
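      
      As a rough sketch, the exports in arm64ksyms follow the usual pattern
      below (the __*ti3 names are the compiler's 128-bit shift helpers from
      arch/arm64/lib/tishift.S; the prototypes here are illustrative):
      
        #include <linux/export.h>
        
        /* 128-bit (TImode) shift helpers emitted by the compiler */
        extern __int128 __ashlti3(__int128 a, int b);
        extern __int128 __ashrti3(__int128 a, int b);
        extern unsigned __int128 __lshrti3(unsigned __int128 a, int b);
        
        /* Plain EXPORT_SYMBOL (GPL2||BSD), so non-GPL modules can link too. */
        EXPORT_SYMBOL(__ashlti3);
        EXPORT_SYMBOL(__ashrti3);
        EXPORT_SYMBOL(__lshrti3);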
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Reported-by: PaX Team <pageexec@freemail.hu>
      Cc: stable@vger.kernel.org
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      255845fc
    • W
      arm64: lse: Add early clobbers to some input/output asm operands · 32c3fa7c
      Committed by Will Deacon
      For LSE atomics that read and write a register operand, we need to
      ensure that these operands are annotated as "early clobber" if the
      register is written before all of the input operands have been consumed.
      Failure to do so can result in the compiler allocating the same register
      to both operands, leading to splats such as:
      
       Unable to handle kernel paging request at virtual address 11111122222221
       [...]
       x1 : 1111111122222222 x0 : 1111111122222221
       Process swapper/0 (pid: 1, stack limit = 0x000000008209f908)
       Call trace:
        test_atomic64+0x1360/0x155c
      
      where x0 has been allocated as both the value to be stored and also the
      atomic_t pointer.
      
      This patch adds the missing clobbers.
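      
      A standalone illustration of the constraint issue (not the kernel's atomics
      code): here the temporary is written before the last input has been read,
      so it needs the early-clobber '&' marker.
      
        /* Without the '&' in "=&r", the compiler may assign tmp and src the
         * same register, since outputs are normally assumed to be written
         * only after all inputs have been consumed. */
        static inline long early_clobber_demo(long src, long addend)
        {
                long tmp;
        
                asm("   mov     %[tmp], %[add]\n"         /* tmp written here ...       */
                    "   add     %[tmp], %[tmp], %[src]\n" /* ... before src is consumed */
                    : [tmp] "=&r" (tmp)
                    : [src] "r" (src), [add] "r" (addend));
        
                return tmp;
        }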
      
      Cc: <stable@vger.kernel.org>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Reported-by: Mark Salter <msalter@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      32c3fa7c
  7. 18 May, 2018 (2 commits)
  8. 15 May, 2018 (5 commits)
    • A
      KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock · bf308242
      Committed by Andre Przywara
      kvm_read_guest() will eventually look up in kvm_memslots(), which requires
      either holding the kvm->slots_lock or being inside a kvm->srcu critical
      section.
      In contrast to x86 and s390 we don't take the SRCU lock on every guest
      exit, so we have to do it individually for each kvm_read_guest() call.
      
      Provide a wrapper which does that and use that everywhere.
      
      Note that ending the SRCU critical section before returning from the
      kvm_read_guest() wrapper is safe, because the data has been *copied*, so
      we don't need to rely on valid references to the memslot anymore.
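      
      A sketch of such a wrapper, assuming the usual kvm_read_guest() signature
      and the per-VM kvm->srcu instance:
      
        /* Hold the SRCU read lock across the memslot lookup; it is safe to
         * drop it before returning because the data has already been copied. */
        static inline int kvm_read_guest_lock(struct kvm *kvm, gpa_t gpa,
                                              void *data, unsigned long len)
        {
                int srcu_idx = srcu_read_lock(&kvm->srcu);
                int ret = kvm_read_guest(kvm, gpa, data, len);
        
                srcu_read_unlock(&kvm->srcu, srcu_idx);
                return ret;
        }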
      
      Cc: Stable <stable@vger.kernel.org> # 4.8+
      Reported-by: Jan Glauber <jan.glauber@caviumnetworks.com>
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      Acked-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      bf308242
    • A
      locking/spinlocks/arm64: Remove smp_mb() from arch_spin_is_locked() · c6f5d02b
      Committed by Andrea Parri
      The following commit:
      
        38b850a7 ("arm64: spinlock: order spin_{is_locked,unlock_wait} against local locks")
      
      ... added an smp_mb() to arch_spin_is_locked(), in order
      "to ensure that the lock value is always loaded after any other locks have
      been taken by the current CPU", and reported one example (the "insane case"
      in ipc/sem.c) relying on such a guarantee.
      
      It is however understood that spin_is_locked() is not required to provide
      such an ordering guarantee (a guarantee that is currently not provided by
      all the implementations/archs), and that callers relying on such ordering
      should instead insert suitable memory barriers before acting on the result
      of spin_is_locked().
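      
      For reference, a sketch of what the arm64 primitive looks like around this
      change (the barrier on the marked line is what gets removed):
      
        static inline int arch_spin_is_locked(arch_spinlock_t *lock)
        {
                smp_mb();       /* removed: callers must provide their own ordering */
                return !arch_spin_value_unlocked(READ_ONCE(*lock));
        }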
      
      A recent audit [1] of the callers of {,raw_}spin_is_locked() revealed that
      none of them rely on the ordering guarantee anymore, so this commit removes
      the leading smp_mb() from the primitive, thus reverting 38b850a7.
      
      [1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
          https://marc.info/?l=linux-kernel&m=152042843808540&w=2
          https://marc.info/?l=linux-kernel&m=152043346110262&w=2
      Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: akiyks@gmail.com
      Cc: boqun.feng@gmail.com
      Cc: dhowells@redhat.com
      Cc: j.alglave@ucl.ac.uk
      Cc: linux-arch@vger.kernel.org
      Cc: luc.maranget@inria.fr
      Cc: npiggin@gmail.com
      Cc: parri.andrea@gmail.com
      Cc: stern@rowland.harvard.edu
      Link: http://lkml.kernel.org/r/1526338889-7003-2-git-send-email-paulmck@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c6f5d02b
    • D
      bpf, arm64: save 4 bytes in prologue when ebpf insns came from cbpf · 56ea6a8b
      Committed by Daniel Borkmann
      We can trivially save 4 bytes in the prologue for cBPF since tail calls
      can never be used from there. The register push/pop is pairwise here,
      x25 (fp) and x26 (tcc), so there is no point in changing that; only the
      reset to zero is not needed.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      56ea6a8b
    • D
      bpf, arm64: optimize 32/64 immediate emission · 6d2eea6f
      Committed by Daniel Borkmann
      Improve the JIT to emit 64 and 32 bit immediates more efficiently: the
      current algorithm is not optimal and we often emit more instructions
      than actually needed. arm64 has movz, movn and movk variants, but for
      the current 64 bit immediates we only use movz followed by a series of
      movk when needed.
      
      For example loading ffffffffffffabab emits the following 4
      instructions in the JIT today:
      
        * movz: abab, shift:  0, result: 000000000000abab
        * movk: ffff, shift: 16, result: 00000000ffffabab
        * movk: ffff, shift: 32, result: 0000ffffffffabab
        * movk: ffff, shift: 48, result: ffffffffffffabab
      
      Whereas after the patch the same load only needs a single
      instruction:
      
        * movn: 5454, shift:  0, result: ffffffffffffabab
      
      Another example where two extra instructions can be saved:
      
        * movz: abab, shift:  0, result: 000000000000abab
        * movk: 1f2f, shift: 16, result: 000000001f2fabab
        * movk: ffff, shift: 32, result: 0000ffff1f2fabab
        * movk: ffff, shift: 48, result: ffffffff1f2fabab
      
      After the patch:
      
        * movn: e0d0, shift: 16, result: ffffffff1f2fffff
        * movk: abab, shift:  0, result: ffffffff1f2fabab
      
      Another example with movz, before:
      
        * movz: 0000, shift:  0, result: 0000000000000000
        * movk: fea0, shift: 32, result: 0000fea000000000
      
      After:
      
        * movz: fea0, shift: 32, result: 0000fea000000000
      
      Moreover, reuse emit_a64_mov_i() for 32 bit immediates that
      are loaded via emit_a64_mov_i64(), which is a similar optimization
      to the one done in 6fe8b9c1 ("bpf, x64: save several bytes by using
      mov over movabsq when possible"). On arm64, the latter allows using
      a single instruction with movn due to zero extension, where
      otherwise two would be needed. And last but not least, add a
      missing optimization in emit_a64_mov_i() where movn is used but
      the subsequent movk is not needed. With some of the Cilium programs
      in use, this shrinks the needed instructions by about three
      percent. Tested on Cavium ThunderX CN8890.
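      
      The core heuristic can be sketched independently of the JIT: count which
      16-bit chunks of the immediate are 0x0000 versus 0xffff and start from
      movz or movn accordingly, so that fewer movk instructions are needed
      (helper names below are illustrative, not the JIT's):
      
        #include <stdbool.h>
        #include <stdint.h>
        
        /* Count how many of the four 16-bit chunks of val equal match. */
        static int count_hwords(uint64_t val, uint16_t match)
        {
                int i, n = 0;
        
                for (i = 0; i < 4; i++)
                        if (((val >> (16 * i)) & 0xffff) == match)
                                n++;
                return n;
        }
        
        /* Start from movn (an all-ones base) when more chunks are 0xffff,
         * e.g. ffffffffffffabab has three 0xffff chunks, so one movn suffices. */
        static bool prefer_movn(uint64_t val)
        {
                return count_hwords(val, 0xffff) > count_hwords(val, 0x0000);
        }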
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      6d2eea6f
    • D
      bpf, arm64: save 4 bytes of unneeded stack space · 09ece3d0
      Committed by Daniel Borkmann
      Follow-up to 816d9ef3 ("bpf, arm64: remove ld_abs/ld_ind"): the
      extra 4 byte JIT scratchpad is not needed anymore, since it was only
      used by ld_abs/ld_ind as a stack buffer for bpf_load_pointer().
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      09ece3d0
  9. 14 May, 2018 (1 commit)
    • M
      arm64: dts: exynos: Fix interrupt type for I2S1 device on Exynos5433 · 0d463d84
      Committed by Marek Szyprowski
      All interrupts from SoC internal modules are level triggered, so fix the
      incorrect trigger type for the I2S1 device on Exynos5433 SoCs.
      
      This fixes the following kernel warning:
      
      WARNING: CPU: 2 PID: 1 at drivers/irqchip/irq-gic.c:1016 gic_irq_domain_translate+0xb0/0xb8
      Modules linked in:
      CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.16.0-rc7-next-20180329 #646
      Hardware name: Samsung TM2 board (DT)
      pstate: 20000005 (nzCv daif -PAN -UAO)
      pc : gic_irq_domain_translate+0xb0/0xb8
      lr : irq_create_fwspec_mapping+0x64/0x328
      sp : ffff0000098b38d0
      ...
      Call trace:
       gic_irq_domain_translate+0xb0/0xb8
       irq_create_of_mapping+0x78/0xa0
       of_irq_get+0x6c/0xa0
       of_irq_to_resource+0x38/0x108
       of_irq_to_resource_table+0x50/0x78
       of_device_alloc+0x118/0x1b8
       of_platform_device_create_pdata+0x54/0xe0
       of_platform_bus_create+0x118/0x340
       of_platform_bus_create+0x17c/0x340
       of_platform_populate+0x74/0xd8
       of_platform_default_populate_init+0xb0/0xcc
       do_one_initcall+0x50/0x158
       kernel_init_freeable+0x184/0x22c
       kernel_init+0x10/0x108
       ret_from_fork+0x10/0x18
      ---[ end trace 6decb2b3078d73f0 ]---
      
      Fixes: d8d579c3 ("ARM: dts: exynos: Add I2S1 device node to exynos5433")
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
      Signed-off-by: Olof Johansson <olof@lixom.net>
      0d463d84
  10. 12 May, 2018 (10 commits)
  11. 09 May, 2018 (8 commits)
  12. 08 May, 2018 (3 commits)
  13. 07 May, 2018 (1 commit)
    • C
      PCI: remove PCI_DMA_BUS_IS_PHYS · 325ef185
      Committed by Christoph Hellwig
      This was used by the ide, scsi and networking code in the past to
      determine if they should bounce payloads. Now that DMA mapping
      always has to support DMA to all physical memory (thanks to swiotlb
      for non-iommu systems), there is no need for this crude hack any more.
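      
      For context, the removed pattern looked roughly like the following in the
      block/SCSI bounce-limit code (a simplified sketch, not a verbatim excerpt):
      
        /* Old-style bounce decision: if the platform has DMA translation
         * (PCI_DMA_BUS_IS_PHYS == 0), any page is DMA-able and no bounce
         * buffering is needed; otherwise bounce highmem pages. */
        static u64 calculate_bounce_limit_sketch(void)
        {
                if (!PCI_DMA_BUS_IS_PHYS)
                        return BLK_BOUNCE_ANY;
        
                return BLK_BOUNCE_HIGH;
        }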
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Palmer Dabbelt <palmer@sifive.com> (for riscv)
      Reviewed-by: Jens Axboe <axboe@kernel.dk>
      325ef185
  14. 05 May, 2018 (1 commit)
  15. 04 May, 2018 (2 commits)
    • J
      arm64: vgic-v2: Fix proxying of cpuif access · b220244d
      Committed by James Morse
      Proxying the cpuif accesses at EL2 makes use of vcpu_data_guest_to_host
      and co., which check the endianness and in turn call into
      vcpu_read_sys_reg()... which isn't mapped at EL2 (it was inlined before,
      and got moved out of line with the VHE optimizations).
      
      The result is of course a nice panic. Let's add some specialized
      cruft to keep the broken platforms that require this hack alive.
      
      But this code used vcpu_data_guest_to_host(), which expected us to
      write the value to host memory; instead we have trapped the guest's
      read or write to an mmio-device, and are about to replay it using the
      host's readl()/writel(), which also perform swabbing based on the host
      endianness. This goes wrong when both host and guest are big-endian,
      as readl()/writel() will undo the guest's swabbing, causing the
      big-endian value to be written to device-memory.
      
      What needs doing?
      A big-endian guest will have pre-swabbed the data before storing, so undo this.
      If it's necessary for the host, writel() will re-swab it.
      
      For a read, a big-endian guest expects to swab the data after the load.
      The host's readl() will correct for host endianness, giving us the
      device-memory's value in the register. For a big-endian guest, swab it
      as if we'd only done the load.
      
      For a little-endian guest, nothing needs doing as readl()/writel() leave
      the correct device-memory value in registers.
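      
      A sketch of the replay logic described above; guest_is_be() below stands in
      for an endianness check that must itself be callable at EL2 (which is the
      crux of this patch), and the surrounding fault decode is omitted:
      
        /* Replay a trapped 32-bit GICV access using the host's accessors.
         * readl()/writel() already handle host endianness, so only the
         * guest's own swab needs to be accounted for. */
        static void replay_gicv_access(struct kvm_vcpu *vcpu, bool is_write,
                                       int rd, void __iomem *addr)
        {
                if (is_write) {
                        u32 data = vcpu_get_reg(vcpu, rd);
        
                        /* guest_is_be() is a stand-in for an EL2-safe check */
                        if (guest_is_be(vcpu))
                                data = swab32(data);    /* undo the guest's pre-swab */
                        writel_relaxed(data, addr);
                } else {
                        u32 data = readl_relaxed(addr);
        
                        if (guest_is_be(vcpu))
                                data = swab32(data);    /* guest will expect swabbed data */
                        vcpu_set_reg(vcpu, rd, data);
                }
        }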
      
      Tested on Juno with that rarest of things: a big-endian 64K host.
      Based on a patch from Marc Zyngier.
      Reported-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Fixes: bf8feb39 ("arm64: KVM: vgic-v2: Add GICV access from HYP")
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      b220244d
    • J
      KVM: arm64: Fix order of vcpu_write_sys_reg() arguments · 1975fa56
      Committed by James Morse
      A typo in kvm_vcpu_set_be()'s call:
      | vcpu_write_sys_reg(vcpu, SCTLR_EL1, sctlr)
      causes us to use the 32bit register value as an index into the sys_reg[]
      array, and sail off the end of the linear map when we try to bring up
      big-endian secondaries.
      
      | Unable to handle kernel paging request at virtual address ffff80098b982c00
      | Mem abort info:
      |  ESR = 0x96000045
      |  Exception class = DABT (current EL), IL = 32 bits
      |   SET = 0, FnV = 0
      |   EA = 0, S1PTW = 0
      | Data abort info:
      |   ISV = 0, ISS = 0x00000045
      |   CM = 0, WnR = 1
      | swapper pgtable: 4k pages, 48-bit VAs, pgdp = 000000002ea0571a
      | [ffff80098b982c00] pgd=00000009ffff8803, pud=0000000000000000
      | Internal error: Oops: 96000045 [#1] PREEMPT SMP
      | Modules linked in:
      | CPU: 2 PID: 1561 Comm: kvm-vcpu-0 Not tainted 4.17.0-rc3-00001-ga912e2261ca6-dirty #1323
      | Hardware name: ARM Juno development board (r1) (DT)
      | pstate: 60000005 (nZCv daif -PAN -UAO)
      | pc : vcpu_write_sys_reg+0x50/0x134
      | lr : vcpu_write_sys_reg+0x50/0x134
      
      | Process kvm-vcpu-0 (pid: 1561, stack limit = 0x000000006df4728b)
      | Call trace:
      |  vcpu_write_sys_reg+0x50/0x134
      |  kvm_psci_vcpu_on+0x14c/0x150
      |  kvm_psci_0_2_call+0x244/0x2a4
      |  kvm_hvc_call_handler+0x1cc/0x258
      |  handle_hvc+0x20/0x3c
      |  handle_exit+0x130/0x1ec
      |  kvm_arch_vcpu_ioctl_run+0x340/0x614
      |  kvm_vcpu_ioctl+0x4d0/0x840
      |  do_vfs_ioctl+0xc8/0x8d0
      |  ksys_ioctl+0x78/0xa8
      |  sys_ioctl+0xc/0x18
      |  el0_svc_naked+0x30/0x34
      | Code: 73620291 604d00b0 00201891 1ab10194 (957a33f8)
      |---[ end trace 4b4a4f9628596602 ]---
      
      Fix the order of the arguments.
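      
      In sketch form, assuming the accessors introduced by 8d404c4c, i.e.
      vcpu_read_sys_reg(vcpu, reg) and vcpu_write_sys_reg(vcpu, val, reg):
      
        /* The 64-bit path of kvm_vcpu_set_be() after the fix. */
        static void vcpu_set_big_endian_sketch(struct kvm_vcpu *vcpu)
        {
                u64 sctlr = vcpu_read_sys_reg(vcpu, SCTLR_EL1);
        
                sctlr |= SCTLR_ELx_EE;  /* SCTLR_EL1.EE: big-endian data accesses */
        
                /* Value first, then the register index (the two were swapped). */
                vcpu_write_sys_reg(vcpu, sctlr, SCTLR_EL1);
        }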
      
      Fixes: 8d404c4c ("KVM: arm64: Rewrite system register accessors to read/write functions")
      CC: Christoffer Dall <cdall@cs.columbia.edu>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      1975fa56