1. 06 Mar 2019, 2 commits
    • x86: Deprecate a.out support · eac61655
      Committed by Borislav Petkov
      Linux has supported ELF binaries for ~25 years now.  a.out coredumping has
      bitrotted quite significantly and would need some fixing to get it into
      shape again, but considering that even the toolchains cannot create a.out
      executables in their default configurations, let's deprecate a.out support
      and remove it a couple of releases later, instead.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Richard Weinberger <richard@nod.at>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: <linux-api@vger.kernel.org>
      Cc: <linux-fsdevel@vger.kernel.org>
      Cc: lkml <linux-kernel@vger.kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: <x86@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      eac61655
    • a.out: remove core dumping support · 08300f44
      Committed by Linus Torvalds
      We're (finally) phasing out a.out support for good.  As Borislav Petkov
      points out, we've supported ELF binaries for about 25 years by now, and
      coredumping in particular has bitrotted over the years.
      
      None of the tool chains even support generating a.out binaries any more,
      and the plan is to deprecate a.out support entirely for the kernel.  But
      I want to start with just removing the core dumping code, because I can
      still imagine that somebody actually might want to support a.out as a
      simpler binary format.
      
      Particularly if you generate some random binaries on the fly, ELF is a
      much more complicated format (admittedly ELF also has a lot of
      toolchain support, which mitigates that complexity a lot, and you really
      should have moved over in the last 25 years).
      
      So it's at least somewhat possible that somebody out there has some
      workflow that still involves generating and running a.out executables.
      
      In contrast, it's very unlikely that anybody depends on debugging any
      legacy a.out core files.  But regardless, I want this phase-out to be
      done in two steps, so that we can resurrect a.out support (if needed)
      without having to resurrect the core file dumping that is almost
      certainly not needed.
      
      Jann Horn pointed to the <asm/a.out-core.h> file that my first trivial
      cut at this had missed.
      
      And Alan Cox points out that the a.out binary loader _could_ be done in
      user space if somebody wants to, but we might keep just the loader in
      the kernel if somebody really wants it, since the loader isn't that big
      and has no really odd special cases like the core dumping does.
      Acked-by: Borislav Petkov <bp@alien8.de>
      Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
      Cc: Jann Horn <jannh@google.com>
      Cc: Richard Weinberger <richard@nod.at>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      08300f44
  2. 05 Mar 2019, 2 commits
    • get rid of legacy 'get_ds()' function · 736706be
      Committed by Linus Torvalds
      Every in-kernel use of this function defined it to KERNEL_DS (either as
      an actual define, or as an inline function).  It's an entirely
      historical artifact, and long long long ago used to actually read the
      segment selector value of '%ds' on x86.
      
      Which in the kernel is always KERNEL_DS.
      
      Inspired by a patch from Jann Horn that just did this for a very small
      subset of users (the ones in fs/), along with Al who suggested a script.
      I then just took it to the logical extreme and removed all the remaining
      gunk.
      
      Roughly scripted with
      
         git grep -l '(get_ds())' -- :^tools/ | xargs sed -i 's/(get_ds())/(KERNEL_DS)/'
         git grep -lw 'get_ds' -- :^tools/ | xargs sed -i '/^#define get_ds()/d'
      
      plus manual fixups to remove a few unusual usage patterns, the couple of
      inline function cases and to fix up a comment that had become stale.
      
      The 'get_ds()' function remains in an x86 kvm selftest, since in user
      space it actually does something relevant.
      Inspired-by: Jann Horn <jannh@google.com>
      Inspired-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      736706be
    • x86-64: add warning for non-canonical user access address dereferences · 00c42373
      Committed by Linus Torvalds
      This adds a warning (once) for any kernel dereference that has a user
      exception handler, but accesses a non-canonical address.  It basically
      is a simpler - and more limited - version of commit 9da3f2b7
      ("x86/fault: BUG() when uaccess helpers fault on kernel addresses") that
      got reverted.
      
      Note that unlike that original commit, this only causes a warning,
      because there are real situations where we currently can do this
      (notably speculative argument fetching for uprobes etc).  Also, unlike
      that original commit, this _only_ triggers for #GP accesses, so the
      cases of valid kernel pointers that cross into a non-mapped page aren't
      affected.
      
      The intent of this is two-fold:
      
       - the uprobe/tracing accesses really do need to be more careful. In
         particular, from a portability standpoint it's just wrong to think
         that "a pointer is a pointer", and use the same logic for any random
         pointer value you find on the stack. It may _work_ on x86-64, but it
         doesn't necessarily work on other architectures (where the same
         pointer value can be either a kernel pointer _or_ a user pointer, and
         you really need to be much more careful in how you try to access it)
      
         The warning can hopefully end up being a reminder that just any
         random pointer access won't do.
      
       - Kees in particular wanted a way to actually report invalid uses of
         wild pointers to user space accessors, instead of just silently
         failing them. Automated fuzzers want a way to get reports if the
         kernel ever uses invalid values that the fuzzer fed it.
      
         The non-canonical address range is a fair chunk of the address space,
         and with this you can teach syzkaller to feed in invalid pointer
         values and find cases where we do not properly validate user
         addresses (possibly due to bad uses of "set_fs()").
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Jann Horn <jannh@google.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      00c42373
  3. 02 Mar 2019, 3 commits
    • MIPS: eBPF: Fix icache flush end address · d1a2930d
      Committed by Paul Burton
      The MIPS eBPF JIT calls flush_icache_range() in order to ensure the
      icache observes the code that we just wrote. Unfortunately it gets the
      end address calculation wrong due to some bad pointer arithmetic.
      
      The struct jit_ctx target field is of type pointer to u32, and as such
      adding one to it will increment the address being pointed to by 4 bytes.
      Therefore in order to find the address of the end of the code we simply
      need to add the number of 4 byte instructions emitted, but we mistakenly
      add the number of instructions multiplied by 4. This results in the call
      to flush_icache_range() operating on a memory region 4x larger than
      intended, which is always wasteful and can cause crashes if we overrun
      into an unmapped page.
      
      Fix this by correcting the pointer arithmetic to remove the bogus
      multiplication, and use braces to remove the need for a set of brackets
      whilst also making it obvious that the target field is a pointer.
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Fixes: b6bd53f9 ("MIPS: Add missing file for eBPF JIT.")
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Martin KaFai Lau <kafai@fb.com>
      Cc: Song Liu <songliubraving@fb.com>
      Cc: Yonghong Song <yhs@fb.com>
      Cc: netdev@vger.kernel.org
      Cc: bpf@vger.kernel.org
      Cc: linux-mips@vger.kernel.org
      Cc: stable@vger.kernel.org # v4.13+
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      d1a2930d
    • arm64: dts: fsl: ls1028a-rdb: Add ENETC external eth ports for the LS1028A RDB board · 0c805404
      Committed by Claudiu Manoil
      The LS1028A RDB board features an Atheros PHY connected over
      SGMII to the ENETC PF0 (or Port0).  ENETC Port1 (PF1) has no
      external connection on this board, so it can be disabled for now.
      Signed-off-by: Alex Marginean <alexandru.marginean@nxp.com>
      Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0c805404
    • arm64: dts: fsl: ls1028a: Add PCI IERC node and ENETC endpoints · 927d7f85
      Committed by Claudiu Manoil
      The LS1028A SoC features a PCI Integrated Endpoint Root Complex
      (IERC) defining several integrated PCI devices, including the ENETC
      ethernet controller integrated endpoints (IEPs). The IERC implements
      ECAM (Enhanced Configuration Access Mechanism) to provide access
      to the PCIe config space of the IEPs. This means that the IEPs
      (including ENETC) do not support the standard PCIe BARs, instead
      the Enhanced Allocation (EA) capability structures in the ECAM space
      are used to fix the base addresses in the system, and the PCI
      subsystem uses these structures for device enumeration and discovery.
      The "ranges" entries contain basic information from these EA capability
      structures required by the kernel for device enumeration.
      
      The current patch also enables the first 2 ENETC PFs (Physical
      Functions) and the associated VFs (Virtual Functions), 2 VFs for
      each PF.  Each of these ENETC PFs has an external ethernet port
      on the LS1028A SoC.
      Signed-off-by: Alex Marginean <alexandru.marginean@nxp.com>
      Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      927d7f85
  4. 28 Feb 2019, 5 commits
  5. 27 Feb 2019, 2 commits
    • arm64: dts: rockchip: move QCA6174A wakeup pin into its USB node · 5364a0b4
      Committed by Brian Norris
      Currently, we don't coordinate BT USB activity with our handling of the
      BT out-of-band wake pin, and instead just use gpio-keys. That causes
      problems because we have no way of distinguishing wake activity due to a
      BT device (e.g., mouse) vs. the BT controller (e.g., re-configuring wake
      mask before suspend). This can cause spurious wake events just because
      we, for instance, try to reconfigure the host controller's event mask
      before suspending.
      
      We can avoid these synchronization problems by handling the BT wake pin
      directly in the btusb driver -- for all activity up until BT controller
      suspend(), we simply listen to normal USB activity (e.g., to know the
      difference between device and host activity); once we're really ready to
      suspend the host controller, there should be no more host activity, and
      only *then* do we unmask the GPIO interrupt.
      
      This is already supported by btusb; we just need to describe the wake
      pin in the right node.
      
      We list 2 compatible properties, since both PID/VID pairs show up on
      Scarlet devices, and they're both essentially identical QCA6174A-based
      modules.
      
      Also note that the polarity was wrong before: Qualcomm implemented WAKE
      as active high, not active low. We only got away with this because
      gpio-keys always reconfigured us as bi-directional edge-triggered.
      
      Finally, we have an external pull-up and a level-shifter on this line
      (we didn't notice Qualcomm's polarity in the initial design), so we
      can't do pull-down. Switch to pull-none.
      Signed-off-by: Brian Norris <briannorris@chromium.org>
      Reviewed-by: Matthias Kaehlcke <mka@chromium.org>
      Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
      5364a0b4
    • arm64: dts: qcom: msm8998: Extend TZ reserved memory area · 6e533309
      Committed by Marc Gonzalez
      My console locks up as soon as Linux writes to [88800000, 88f00000).
      AFAIU, that memory area is reserved for trustzone.
      
      Extend TZ reserved memory range, to prevent Linux from stepping on
      trustzone's toes.
      
      Cc: stable@vger.kernel.org # 4.20+
      Reviewed-by: Sibi Sankar <sibis@codeaurora.org>
      Fixes: c7833949 ("arm64: dts: qcom: msm8998: Add smem related nodes")
      Signed-off-by: Marc Gonzalez <marc.w.gonzalez@free.fr>
      Signed-off-by: Andy Gross <andy.gross@linaro.org>
      6e533309
  6. 26 Feb 2019, 3 commits
    • MIPS: BCM63XX: provide DMA masks for ethernet devices · 18836b48
      Committed by Jonas Gorski
      The switch to the generic dma ops made dma masks mandatory, breaking
      devices that did not have them set. In the case of bcm63xx, it broke ethernet with
      the following warning when trying to up the device:
      
      [    2.633123] ------------[ cut here ]------------
      [    2.637949] WARNING: CPU: 0 PID: 325 at ./include/linux/dma-mapping.h:516 bcm_enetsw_open+0x160/0xbbc
      [    2.647423] Modules linked in: gpio_button_hotplug
      [    2.652361] CPU: 0 PID: 325 Comm: ip Not tainted 4.19.16 #0
      [    2.658080] Stack : 80520000 804cd3ec 00000000 00000000 804ccc00 87085bdc 87d3f9d4 804f9a17
      [    2.666707]         8049cf18 00000145 80a942a0 00000204 80ac0000 10008400 87085b90 eb3d5ab7
      [    2.675325]         00000000 00000000 80ac0000 000022b0 00000000 00000000 00000007 00000000
      [    2.683954]         0000007a 80500000 0013b381 00000000 80000000 00000000 804a1664 80289878
      [    2.692572]         00000009 00000204 80ac0000 00000200 00000002 00000000 00000000 80a90000
      [    2.701191]         ...
      [    2.703701] Call Trace:
      [    2.706244] [<8001f3c8>] show_stack+0x58/0x100
      [    2.710840] [<800336e4>] __warn+0xe4/0x118
      [    2.715049] [<800337d4>] warn_slowpath_null+0x48/0x64
      [    2.720237] [<80289878>] bcm_enetsw_open+0x160/0xbbc
      [    2.725347] [<802d1d4c>] __dev_open+0xf8/0x16c
      [    2.729913] [<802d20cc>] __dev_change_flags+0x100/0x1c4
      [    2.735290] [<802d21b8>] dev_change_flags+0x28/0x70
      [    2.740326] [<803539e0>] devinet_ioctl+0x310/0x7b0
      [    2.745250] [<80355fd8>] inet_ioctl+0x1f8/0x224
      [    2.749939] [<802af290>] sock_ioctl+0x30c/0x488
      [    2.754632] [<80112b34>] do_vfs_ioctl+0x740/0x7dc
      [    2.759459] [<80112c20>] ksys_ioctl+0x50/0x94
      [    2.763955] [<800240b8>] syscall_common+0x34/0x58
      [    2.768782] ---[ end trace fb1a6b14d74e28b6 ]---
      [    2.773544] bcm63xx_enetsw bcm63xx_enetsw.0: cannot allocate rx ring 512
      
      Fix this by adding appropriate DMA masks for the platform devices.
      
      Fixes: f8c55dc6 ("MIPS: use generic dma noncoherent ops for simple noncoherent platforms")
      Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: stable@vger.kernel.org # v4.19+
      18836b48
    • x86/uaccess: Don't leak the AC flag into __put_user() value evaluation · 2a418cf3
      Committed by Andy Lutomirski
      When calling __put_user(foo(), ptr), the __put_user() macro would call
      foo() in between __uaccess_begin() and __uaccess_end().  If that code
      were buggy, then those bugs would be run without SMAP protection.
      
      Fortunately, there seem to be few instances of the problem in the
      kernel. Nevertheless, __put_user() should be fixed to avoid doing this.
      Therefore, evaluate __put_user()'s argument before setting AC.
      
      This issue was noticed when an objtool hack by Peter Zijlstra complained
      about genregs_get() and I compared the assembly output to the C source.
      
       [ bp: Massage commit message and fixed up whitespace. ]
      
      Fixes: 11f1a4b9 ("x86: reorganize SMAP handling in user space accesses")
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20190225125231.845656645@infradead.org
      2a418cf3
    • Revert "x86/fault: BUG() when uaccess helpers fault on kernel addresses" · 53a41cb7
      Committed by Linus Torvalds
      This reverts commit 9da3f2b7.
      
      It was well-intentioned, but wrong.  Overriding the exception tables for
      instructions for random reasons is just wrong, and that is what the new
      code did.
      
      It caused problems for tracing, and it caused problems for strncpy_from_user(),
      because the new checks made perfectly valid use cases break, rather than
      catch things that did bad things.
      
      Unchecked user space accesses are a problem, but that's not a reason to
      add invalid checks that then people have to work around with silly flags
      (in this case, that 'kernel_uaccess_faults_ok' flag, which is just an
      odd way to say "this commit was wrong" and was sprinkled into random
      places to hide the wrongness).
      
      The real fix to unchecked user space accesses is to get rid of the
      special "let's not check __get_user() and __put_user() at all" logic.
      Make __{get|put}_user() be just aliases to the regular {get|put}_user()
      functions, and make it impossible to access user space without having
      the proper checks in places.
      
      The raison d'être of the special double-underscore versions used to be
      that the range check was expensive, and if you did multiple user
      accesses, you'd do the range check up front (like the signal frame
      handling code, for example).  But SMAP (on x86) and PAN (on ARM) have
      made that optimization pointless, because the _real_ expense is the "set
      CPU flag to allow user space access".
      
      Do let's not break the valid cases to catch invalid cases that shouldn't
      even exist.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Tobin C. Harding <tobin@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Jann Horn <jannh@google.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      53a41cb7
  7. 25 Feb 2019, 1 commit
  8. 23 Feb 2019, 3 commits
    • KVM: MMU: record maximum physical address width in kvm_mmu_extended_role · de3ccd26
      Committed by Yu Zhang
      Previously, commit 7dcd5755 ("x86/kvm/mmu: check if tdp/shadow
      MMU reconfiguration is needed") offered some optimization to avoid
      the unnecessary reconfiguration. Yet one scenario is broken - when
      cpuid changes VM's maximum physical address width, reconfiguration
      is needed to reset the reserved bits.  Also, the TDP may need to
      reset its shadow_root_level when this value is changed.
      
      To fix this, a new field, maxphyaddr, is introduced in the extended
      role structure to keep track of the configured guest physical address
      width.
      Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      de3ccd26
    • kvm: x86: Return LA57 feature based on hardware capability · 511da98d
      Committed by Yu Zhang
      Previously, commit 372fddf7 ("x86/mm: Introduce the 'no5lvl' kernel
      parameter") cleared X86_FEATURE_LA57 in boot_cpu_data, if Linux chooses
      to not run in 5-level paging mode. Yet boot_cpu_data is queried by
      do_cpuid_ent() as the host capability later when creating vcpus, and Qemu
      will not be able to detect this feature and create VMs with LA57 feature.
      
      As discussed earlier, VMs can still benefit from extended linear address
      width, e.g. to enhance features like ASLR. So we would like to fix this,
      by returning the true hardware capability when Qemu queries it.
      Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      511da98d
    • x86/kvm/mmu: fix switch between root and guest MMUs · ad7dc69a
      Committed by Vitaly Kuznetsov
      Commit 14c07ad8 ("x86/kvm/mmu: introduce guest_mmu") brought one subtle
      change: previously, when switching back from L2 to L1, we were resetting
      MMU hooks (like mmu->get_cr3()) in kvm_init_mmu() called from
      nested_vmx_load_cr3() and now we do that in nested_ept_uninit_mmu_context()
      when we re-target vcpu->arch.mmu pointer.
      The change itself looks logical: if nested_ept_init_mmu_context() changes
      something, then nested_ept_uninit_mmu_context() restores it. There is,
      however, one thing: the following call chain:
      
       nested_vmx_load_cr3()
        kvm_mmu_new_cr3()
          __kvm_mmu_new_cr3()
            fast_cr3_switch()
              cached_root_available()
      
      now happens with MMU hooks pointing to the new MMU (root MMU in our case)
      while previously it was happening with the old one. cached_root_available()
      tries to stash the current root, but it is incorrect to read the current
      CR3 with mmu->get_cr3(); we need to use old_mmu->get_cr3(), which when
      we're switching from L2 to L1 is guest_mmu. (BTW, in the shadow page
      table case this is a non-issue because we don't switch MMUs.)
      
      While we could've tried to guess that we're switching between MMUs and call
      the right ->get_cr3() from cached_root_available() this seems to be overly
      complicated. Instead, just stash the corresponding CR3 when setting
      root_hpa and make cached_root_available() use the stashed value.
      
      Fixes: 14c07ad8 ("x86/kvm/mmu: introduce guest_mmu")
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ad7dc69a
  9. 22 Feb 2019, 16 commits
    • crypto: arm/aes-ce - update IV after partial final CTR block · 511306b2
      Committed by Eric Biggers
      Make the arm ctr-aes-ce algorithm update the IV buffer to contain the
      next counter after processing a partial final block, rather than leave
      it as the last counter.  This makes ctr-aes-ce pass the updated AES-CTR
      tests.  This change also makes the code match the arm64 version in
      arch/arm64/crypto/aes-modes.S more closely.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      511306b2
    • crypto: arm64/aes-blk - update IV after partial final CTR block · fa5fd3af
      Committed by Eric Biggers
      Make the arm64 ctr-aes-neon and ctr-aes-ce algorithms update the IV
      buffer to contain the next counter after processing a partial final
      block, rather than leave it as the last counter.  This makes these
      algorithms pass the updated AES-CTR tests.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      fa5fd3af
    • crypto: sha512/arm - fix crash bug in Thumb2 build · c6431650
      Committed by Ard Biesheuvel
      The SHA512 code we adopted from the OpenSSL project uses a rather
      peculiar way to take the address of the round constant table: it
      takes the address of the sha256_block_data_order() routine, and
      subtracts a constant known quantity to arrive at the base of the
      table, which is emitted by the same assembler code right before
      the routine's entry point.
      
      However, recent versions of binutils have helpfully changed the
      behavior of references emitted via an ADR instruction when running
      in Thumb2 mode: it now takes the Thumb execution mode bit into
      account, which is bit 0 of the address. This means the produced
      table address also has bit 0 set, and so we end up with an address
      value pointing 1 byte past the start of the table, which results
      in crashes such as
      
        Unable to handle kernel paging request at virtual address bf825000
        pgd = 42f44b11
        [bf825000] *pgd=80000040206003, *pmd=5f1bd003, *pte=00000000
        Internal error: Oops: 207 [#1] PREEMPT SMP THUMB2
        Modules linked in: sha256_arm(+) sha1_arm_ce sha1_arm ...
        CPU: 7 PID: 396 Comm: cryptomgr_test Not tainted 5.0.0-rc6+ #144
        Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
        PC is at sha256_block_data_order+0xaaa/0xb30 [sha256_arm]
        LR is at __this_module+0x17fd/0xffffe800 [sha256_arm]
        pc : [<bf820bca>]    lr : [<bf824ffd>]    psr: 800b0033
        sp : ebc8bbe8  ip : faaabe1c  fp : 2fdd3433
        r10: 4c5f1692  r9 : e43037df  r8 : b04b0a5a
        r7 : c369d722  r6 : 39c3693e  r5 : 7a013189  r4 : 1580d26b
        r3 : 8762a9b0  r2 : eea9c2cd  r1 : 3e9ab536  r0 : 1dea4ae7
        Flags: Nzcv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
        Control: 70c5383d  Table: 6b8467c0  DAC: dbadc0de
        Process cryptomgr_test (pid: 396, stack limit = 0x69e1fe23)
        Stack: (0xebc8bbe8 to 0xebc8c000)
        ...
        unwind: Unknown symbol address bf820bca
        unwind: Index not found bf820bca
        Code: 441a ea80 40f9 440a (f85e) 3b04
        ---[ end trace e560cce92700ef8a ]---
      
      Given that this affects older kernels as well, in case they are built
      with a recent toolchain, apply a minimal backportable fix, which is
      to emit another non-code label at the start of the routine, and
      reference that instead. (This is similar to the current upstream state
      of this file in OpenSSL)
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      c6431650
    • crypto: sha256/arm - fix crash bug in Thumb2 build · 69216a54
      Committed by Ard Biesheuvel
      The SHA256 code we adopted from the OpenSSL project uses a rather
      peculiar way to take the address of the round constant table: it
      takes the address of the sha256_block_data_order() routine, and
      subtracts a constant known quantity to arrive at the base of the
      table, which is emitted by the same assembler code right before
      the routine's entry point.
      
      However, recent versions of binutils have helpfully changed the
      behavior of references emitted via an ADR instruction when running
      in Thumb2 mode: it now takes the Thumb execution mode bit into
      account, which is bit 0 of the address. This means the produced
      table address also has bit 0 set, and so we end up with an address
      value pointing 1 byte past the start of the table, which results
      in crashes such as
      
        Unable to handle kernel paging request at virtual address bf825000
        pgd = 42f44b11
        [bf825000] *pgd=80000040206003, *pmd=5f1bd003, *pte=00000000
        Internal error: Oops: 207 [#1] PREEMPT SMP THUMB2
        Modules linked in: sha256_arm(+) sha1_arm_ce sha1_arm ...
        CPU: 7 PID: 396 Comm: cryptomgr_test Not tainted 5.0.0-rc6+ #144
        Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
        PC is at sha256_block_data_order+0xaaa/0xb30 [sha256_arm]
        LR is at __this_module+0x17fd/0xffffe800 [sha256_arm]
        pc : [<bf820bca>]    lr : [<bf824ffd>]    psr: 800b0033
        sp : ebc8bbe8  ip : faaabe1c  fp : 2fdd3433
        r10: 4c5f1692  r9 : e43037df  r8 : b04b0a5a
        r7 : c369d722  r6 : 39c3693e  r5 : 7a013189  r4 : 1580d26b
        r3 : 8762a9b0  r2 : eea9c2cd  r1 : 3e9ab536  r0 : 1dea4ae7
        Flags: Nzcv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
        Control: 70c5383d  Table: 6b8467c0  DAC: dbadc0de
        Process cryptomgr_test (pid: 396, stack limit = 0x69e1fe23)
        Stack: (0xebc8bbe8 to 0xebc8c000)
        ...
        unwind: Unknown symbol address bf820bca
        unwind: Index not found bf820bca
        Code: 441a ea80 40f9 440a (f85e) 3b04
        ---[ end trace e560cce92700ef8a ]---
      
      Given that this affects older kernels as well, in case they are built
      with a recent toolchain, apply a minimal backportable fix, which is
      to emit another non-code label at the start of the routine, and
      reference that instead. (This is similar to the current upstream state
      of this file in OpenSSL)
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      69216a54
    • ARCv2: don't assume core 0x54 has dual issue · 7b2e932f
      Committed by Vineet Gupta
      The first release of core4 (0x54) was dual issue only (HS4x).
      Newer releases allow hardware to be configured as single issue (HS3x)
      or dual issue.
      
      Prevent accessing an HS4x-only aux register on HS3x, which otherwise
      leads to illegal instruction exceptions.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      7b2e932f
    • parisc: Fix ptrace syscall number modification · b7dc5a07
      Committed by Dmitry V. Levin
      Commit 910cd32e ("parisc: Fix and enable seccomp filter support")
      introduced a regression in ptrace-based syscall tampering: when tracer
      changes syscall number to -1, the kernel fails to initialize %r28 with
      -ENOSYS and subsequently fails to return the error code of the failed
      syscall to userspace.
      
      This erroneous behaviour could be observed with a simple strace syscall
      fault injection command which is expected to print something like this:
      
      $ strace -a0 -ewrite -einject=write:error=enospc echo hello
      write(1, "hello\n", 6) = -1 ENOSPC (No space left on device) (INJECTED)
      write(2, "echo: ", 6) = -1 ENOSPC (No space left on device) (INJECTED)
      write(2, "write error", 11) = -1 ENOSPC (No space left on device) (INJECTED)
      write(2, "\n", 1) = -1 ENOSPC (No space left on device) (INJECTED)
      +++ exited with 1 +++
      
      After commit 910cd32e it loops printing
      something like this instead:
      
      write(1, "hello\n", 6../strace: Failed to tamper with process 12345: unexpectedly got no error (return value 0, error 0)
      ) = 0 (INJECTED)
      
      This bug was found by strace test suite.
      
      Fixes: 910cd32e ("parisc: Fix and enable seccomp filter support")
      Cc: stable@vger.kernel.org # v4.5+
      Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
      Tested-by: Helge Deller <deller@gmx.de>
      Signed-off-by: Helge Deller <deller@gmx.de>
      b7dc5a07
    • ARC: define ARCH_SLAB_MINALIGN = 8 · b6835ea7
      Committed by Alexey Brodkin
      The default value of ARCH_SLAB_MINALIGN in "include/linux/slab.h" is
      "__alignof__(unsigned long long)", which for ARC unexpectedly turns
      out to be 4. This is not a compiler bug; it is how the ARC ABI [1]
      defines it.

      Thus the slab allocator would allocate a struct that is only 32-bit
      aligned, which is generally OK even if the struct has long long
      members. There was however a potential problem when it had any
      atomic64_t members, which use the LLOCKD/SCONDD instructions that the
      ISA requires to take 64-bit-aligned addresses. This is the problem we
      ran into:
      
      [    4.015732] EXT4-fs (mmcblk0p2): re-mounted. Opts: (null)
      [    4.167881] Misaligned Access
      [    4.172356] Path: /bin/busybox.nosuid
      [    4.176004] CPU: 2 PID: 171 Comm: rm Not tainted 4.19.14-yocto-standard #1
      [    4.182851]
      [    4.182851] [ECR   ]: 0x000d0000 => Check Programmer's Manual
      [    4.190061] [EFA   ]: 0xbeaec3fc
      [    4.190061] [BLINK ]: ext4_delete_entry+0x210/0x234
      [    4.190061] [ERET  ]: ext4_delete_entry+0x13e/0x234
      [    4.202985] [STAT32]: 0x80080002 : IE K
      [    4.207236] BTA: 0x9009329c   SP: 0xbe5b1ec4  FP: 0x00000000
      [    4.212790] LPS: 0x9074b118  LPE: 0x9074b120 LPC: 0x00000000
      [    4.218348] r00: 0x00000040  r01: 0x00000021 r02: 0x00000001
      ...
      ...
      [    4.270510] Stack Trace:
      [    4.274510]   ext4_delete_entry+0x13e/0x234
      [    4.278695]   ext4_rmdir+0xe0/0x238
      [    4.282187]   vfs_rmdir+0x50/0xf0
      [    4.285492]   do_rmdir+0x9e/0x154
      [    4.288802]   EV_Trap+0x110/0x114
      
      The fix is to make sure slab allocations are 64-bit aligned.
      
      Do note that atomic64_t is __attribute__((aligned(8))), which means
      gcc does generate 64-bit aligned references relative to the beginning
      of the containing struct. The issue, however, is that if the
      container itself is not 64-bit aligned, the atomic64_t ends up
      unaligned, which is what this patch prevents.
      
      [1] https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain/wiki/files/ARCv2_ABI.pdf
      Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
      Cc: <stable@vger.kernel.org> # 4.8+
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      [vgupta: reworked changelog, added dependency on LL64+LLSC]
      b6835ea7
    • E
      ARC: enable uboot support unconditionally · 493a2f81
      Eugeniy Paltsev committed
      After reworking the U-boot args handling code and adding a paranoid
      arguments check, we can eliminate CONFIG_ARC_UBOOT_SUPPORT and
      enable U-boot support unconditionally.

      For the JTAG case we can assume that core registers will come up
      with a reset value of 0, or in the worst case rely on the user
      passing '-on=clear_regs' to the Metaware debugger.
      
      Cc: stable@vger.kernel.org
      Tested-by: Corentin LABBE <clabbe@baylibre.com>
      Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      493a2f81
    • E
      ARC: U-boot: check arguments paranoidly · a66f2e57
      Eugeniy Paltsev committed
      Handle U-boot arguments paranoidly:
       * don't allow passing an unknown tag.
       * try to use an external device tree blob only if the corresponding
         tag (TAG_DTB) is set.
       * don't check uboot_tag if the kernel is built without
         ARC_UBOOT_SUPPORT.

      NOTE:
      If the U-boot args are invalid we skip them and try to use the
      embedded device tree blob. We can't panic on invalid U-boot args,
      because U-boot really does pass invalid args, due to a bug in the
      U-boot code. This happens if we don't provide an external DTB to
      U-boot and don't set the 'bootargs' U-boot environment variable
      (which is the default case at least for the HSDK board). In that
      case U-boot passes {r0 = 1 (bootargs in r2); r1 = 0; r2 = 0;} to
      Linux, which is invalid.

      While at it, refactor the U-boot arguments handling code.
      
      Cc: stable@vger.kernel.org
      Tested-by: Corentin LABBE <clabbe@baylibre.com>
      Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      a66f2e57
    • V
      ARCv2: support manual regfile save on interrupts · e494239a
      Vineet Gupta committed
      There's a hardware bug which affects the HSDK platform, triggered by
      the micro-ops for auto-saving the regfile on a taken interrupt. The
      workaround is to inhibit the autosave.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      e494239a
    • V
      ARC: uacces: remove lp_start, lp_end from clobber list · d5e3c55e
      Vineet Gupta committed
      Newer ARC gcc handles lp_start, lp_end in a different way and doesn't
      like them in the clobber list.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      d5e3c55e
    • E
      ARC: fix actionpoints configuration detection · cdf92962
      Eugeniy Paltsev committed
      Fix the reversed logic in the actionpoints configuration (full/min)
      detection.

      Fixes: 7dd380c3 ("ARC: boot log: print Action point details")
      Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      cdf92962
    • E
      ARCv2: lib: memcpy: fix doing prefetchw outside of buffer · f8a15f97
      Eugeniy Paltsev committed
      The ARCv2 optimized memcpy uses the PREFETCHW instruction to prefetch
      the next cache line, but doesn't ensure that the line is not past the
      end of the buffer. PREFETCHW changes the line ownership and marks it
      dirty, which can cause data corruption if this area is used for DMA
      I/O.

      Fix the issue by avoiding the PREFETCHW. This leads to performance
      degradation, but it is OK as we'll introduce a new memcpy
      implementation optimized for unaligned memory access.

      We also cut out all PREFETCH instructions, as they are quite useless
      here:
       * we call PREFETCH right before the LOAD instruction.
       * we copy 16 or 32 bytes of data (depending on CONFIG_ARC_HAS_LL64)
         in the main logical loop, so we call PREFETCH 4 times (or 2
         times) for each L1 cache line (in the case of a 64B L1 cache
         line, which is the default). Obviously this is not optimal.
      Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      f8a15f97
    • E
      ARCv2: Enable unaligned access in early ASM code · 252f6e8e
      Eugeniy Paltsev committed
      It is currently done in arc_init_IRQ(), which might be too late,
      considering that gcc 7.3.1 onwards (GNU 2018.03) generates unaligned
      memory accesses by default.
      
      Cc: stable@vger.kernel.org #4.4+
      Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      [vgupta: rewrote changelog]
      252f6e8e
    • H
      s390/net: convert pnetids to ascii · 390dde08
      Hans Wippel committed
      Pnetids are retrieved from the underlying hardware as EBCDIC. This patch
      converts pnetids to ASCII.
      Signed-off-by: Hans Wippel <hwippel@linux.ibm.com>
      Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      390dde08
    • A
      kasan: fix random seed generation for tag-based mode · 3f41b609
      Andrey Konovalov committed
      There are two issues with assigning random percpu seeds right now:
      
      1. We use for_each_possible_cpu() to iterate over cpus, but cpumask is
         not set up yet at the moment of kasan_init(), and thus we only set
         the seed for cpu #0.
      
      2. A call to get_random_u32() always returns the same number and produces
         a message in dmesg, since the random subsystem is not yet initialized.
      
      Fix 1 by calling kasan_init_tags() after cpumask is set up.
      
      Fix 2 by using get_cycles() instead of get_random_u32(). This gives us
      lower quality random numbers, but it's good enough, as KASAN is meant to
      be used as a debugging tool and not a mitigation.
      
      Link: http://lkml.kernel.org/r/1f815cc914b61f3516ed4cc9bfd9eeca9bd5d9de.1550677973.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3f41b609
  10. 20 Feb 2019, 3 commits