1. 14 Mar 2019, 1 commit
    • MIPS: Remove function size check in get_frame_info() · cd8520a2
      Authored by Jun-Ru Chang
      [ Upstream commit 2b424cfc69728224fcb5fad138ea7260728e0901 ]
      
      Commit b6c7a324 ("MIPS: Fix get_frame_info() handling of
      microMIPS function size.") introduced an additional function size
      check for microMIPS by only examining instructions between ip and
      ip + func_size.
      However, func_size in get_frame_info() is always 0 if KALLSYMS is not
      enabled. This causes get_frame_info() to return immediately without
      calculating correct frame_size, which in turn causes "Can't analyze
      schedule() prologue" warning messages at boot time.
      
      This patch removes the func_size check and lets the frame_size
      analysis run over up to 128 instructions for both MIPS and microMIPS.
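      
      A minimal sketch of the resulting behaviour (not the exact upstream
      diff; identifiers are illustrative): the prologue scan is bounded only
      by a fixed instruction budget rather than by the possibly-zero
      func_size.
      
        const unsigned int max_insns = 128;
        unsigned int i;
        
        for (i = 0; i < max_insns; i++) {
                /* decode the next MIPS or microMIPS instruction and look
                 * for the stack adjustment & ra save that describe the
                 * frame; stop once both have been found */
        }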
      Signed-off-by: Jun-Ru Chang <jrjang@realtek.com>
      Signed-off-by: Tony Wu <tonywu@realtek.com>
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Fixes: b6c7a324 ("MIPS: Fix get_frame_info() handling of microMIPS function size.")
      Cc: <ralf@linux-mips.org>
      Cc: <jhogan@kernel.org>
      Cc: <macro@mips.com>
      Cc: <yamada.masahiro@socionext.com>
      Cc: <peterz@infradead.org>
      Cc: <mingo@kernel.org>
      Cc: <linux-mips@vger.kernel.org>
      Cc: <linux-kernel@vger.kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
  2. 10 Mar 2019, 1 commit
  3. 06 Mar 2019, 1 commit
    • MIPS: fix truncation in __cmpxchg_small for short values · 3bfa6413
      Authored by Michael Clark
      commit 94ee12b507db8b5876e31c9d6c9d84f556a4b49f upstream.
      
      __cmpxchg_small erroneously uses a u8 local for the loaded comparison
      value, which may be either char or short sized. This patch changes the
      local variable to u32, which is sufficiently wide, as the loaded value
      is already masked and shifted appropriately. Using a full integer width
      avoids any unnecessary canonicalization caused by non-native widths.
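      
      A small illustration of the truncation (plain C, not the kernel's exact
      code): extracting a 16-bit value into a u8-sized local loses the upper
      byte, while a u32 local holds either width.
      
        unsigned int word  = 0xabcd0000;             /* aligned word holding the short */
        unsigned int mask  = 0xffff0000, shift = 16;
        
        unsigned char as_u8  = (word & mask) >> shift;   /* 0xcd - truncated   */
        unsigned int  as_u32 = (word & mask) >> shift;   /* 0xabcd - preserved */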
      
      This patch is part of a series that adapts the MIPS small word
      atomics code for xchg and cmpxchg on short and char to RISC-V.
      
      Cc: RISC-V Patches <patches@groups.riscv.org>
      Cc: Linux RISC-V <linux-riscv@lists.infradead.org>
      Cc: Linux MIPS <linux-mips@linux-mips.org>
      Signed-off-by: Michael Clark <michaeljclark@mac.com>
      [paul.burton@mips.com:
        - Fix variable typo per Jonas Gorski.
        - Consolidate load variable with other declarations.]
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Fixes: 3ba7f44d ("MIPS: cmpxchg: Implement 1 byte & 2 byte cmpxchg()")
      Cc: stable@vger.kernel.org # v4.13+
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  4. 15 Feb 2019, 1 commit
    • mips: cm: reprime error cause · 384cc5fd
      Authored by Vladimir Kondratiev
      commit 05dc6001af0630e200ad5ea08707187fe5537e6d upstream.
      
      According to the documentation
      ---cut---
      The GCR_ERROR_CAUSE.ERR_TYPE field and the GCR_ERROR_MULT.ERR_TYPE
      fields can be cleared by either a reset or by writing the current
      value of GCR_ERROR_CAUSE.ERR_TYPE to the
      GCR_ERROR_CAUSE.ERR_TYPE register.
      ---cut---
      Do exactly this. The original value of cm_error may safely be written
      back; doing so clears the error cause and leaves the other bits untouched.
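      
      A hedged sketch of the idea, using the GCR accessors from
      asm/mips-cm.h (surrounding reporting code elided):
      
        cm_error = read_gcr_error_cause();
        /* ... decode & report the error ... */
        /* writing the same ERR_TYPE value back clears the cause while
         * leaving the remaining bits untouched */
        write_gcr_error_cause(cm_error);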
      
      Fixes: 3885c2b4 ("MIPS: CM: Add support for reporting CM cache errors")
      Signed-off-by: Vladimir Kondratiev <vladimir.kondratiev@linux.intel.com>
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: linux-mips@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: stable@vger.kernel.org # v4.3+
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  5. 10 Jan 2019, 1 commit
    • MIPS: math-emu: Write-protect delay slot emulation pages · 62452b35
      Authored by Paul Burton
      commit adcc81f148d733b7e8e641300c5590a2cdc13bf3 upstream.
      
      Mapping the delay slot emulation page as both writeable & executable
      presents a security risk, in that if an exploit can write to & jump into
      the page then it can be used as an easy way to execute arbitrary code.
      
      Prevent this by mapping the page read-only for userland, and using
      access_process_vm() with the FOLL_FORCE flag to write to it from
      mips_dsemul().
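      
      A hedged sketch of the write path (variable names are illustrative,
      not necessarily the exact structures used by mips_dsemul()):
      
        /* the page is mapped read-only for userland, so write through
         * access_process_vm(), which can force write access */
        if (access_process_vm(current, fr_addr, &emul_insns,
                              sizeof(emul_insns),
                              FOLL_FORCE | FOLL_WRITE) != sizeof(emul_insns))
                return SIGBUS;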
      
      This will likely be less efficient due to copy_to_user_page() performing
      cache maintenance on a whole page, rather than a single line as in the
      previous use of flush_cache_sigtramp(). However this delay slot
      emulation code ought not to be running in any performance critical paths
      anyway so this isn't really a problem, and we can probably do better in
      copy_to_user_page() anyway in future.
      
      A major advantage of this approach is that the fix is small & simple to
      backport to stable kernels.
      Reported-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Fixes: 432c6bac ("MIPS: Use per-mm page to execute branch delay slot instructions")
      Cc: stable@vger.kernel.org # v4.8+
      Cc: linux-mips@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: Rich Felker <dalias@libc.org>
      Cc: David Daney <david.daney@cavium.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  6. 06 Dec 2018, 1 commit
  7. 21 Nov 2018, 1 commit
  8. 29 Sep 2018, 2 commits
    • MIPS: Fix CONFIG_CMDLINE handling · 951d223c
      Authored by Paul Burton
      Commit 8ce355cf ("MIPS: Setup boot_command_line before
      plat_mem_setup") fixed a problem for systems which have
      CONFIG_CMDLINE_BOOL=y & use a DT with a chosen node that has either no
      bootargs property or an empty one. In this configuration
      early_init_dt_scan_chosen() copies CONFIG_CMDLINE into
      boot_command_line, but the MIPS code doesn't know this so it appends
      CONFIG_CMDLINE (via builtin_cmdline) to boot_command_line again. The
      result is that boot_command_line contains the arguments from
      CONFIG_CMDLINE twice.
      
      That commit took the approach of simply setting up boot_command_line
      from the MIPS code before early_init_dt_scan_chosen() runs, causing it
      not to copy CONFIG_CMDLINE to boot_command_line if a chosen node with no
      bootargs property is found.
      
      Unfortunately this is problematic for systems which do have a non-empty
      bootargs property & CONFIG_CMDLINE_BOOL=y. There,
      early_init_dt_scan_chosen() will overwrite boot_command_line with the
      arguments from DT, which means we lose those from CONFIG_CMDLINE
      entirely. This breaks CONFIG_MIPS_CMDLINE_DTB_EXTEND. If we have
      CONFIG_MIPS_CMDLINE_FROM_BOOTLOADER or
      CONFIG_MIPS_CMDLINE_BUILTIN_EXTEND selected and the DT has a bootargs
      property which we should ignore, it will instead be honoured, breaking
      those configurations too.
      
      Fix this by reverting commit 8ce355cf ("MIPS: Setup
      boot_command_line before plat_mem_setup") to restore the former
      behaviour, and fixing the CONFIG_CMDLINE duplication issue by
      initializing boot_command_line to a non-empty string that
      early_init_dt_scan_chosen() will not overwrite with CONFIG_CMDLINE.
      
      This is a little ugly, but cleanup in this area is on its way. In the
      meantime this is at least easy to backport & contains the ugliness
      within arch/mips/.
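      
      One plausible shape of that workaround (a hedged sketch only; the exact
      helper and condition in the real patch may differ):
      
        /* make boot_command_line non-empty before the DT is scanned, so
         * early_init_dt_scan_chosen() won't copy CONFIG_CMDLINE into it
         * a second time */
        strlcpy(boot_command_line, " ", COMMAND_LINE_SIZE);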
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Fixes: 8ce355cf ("MIPS: Setup boot_command_line before plat_mem_setup")
      References: https://patchwork.linux-mips.org/patch/18804/
      Patchwork: https://patchwork.linux-mips.org/patch/20813/
      Cc: Frank Rowand <frowand.list@gmail.com>
      Cc: Jaedon Shin <jaedon.shin@gmail.com>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: Rob Herring <robh+dt@kernel.org>
      Cc: devicetree@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org # v4.16+
    • MIPS: VDSO: Always map near top of user memory · ea7e0480
      Authored by Paul Burton
      When using the legacy mmap layout, for example triggered using ulimit -s
      unlimited, get_unmapped_area() fills memory from bottom to top starting
      from a fairly low address near TASK_UNMAPPED_BASE.
      
      This placement is suboptimal if the user application wishes to allocate
      large amounts of heap memory using the brk syscall. With the VDSO being
      located low in the user's virtual address space, the amount of space
      available for access using brk is limited much more than it was prior to
      the introduction of the VDSO.
      
      For example:
      
        # ulimit -s unlimited; cat /proc/self/maps
        00400000-004ec000 r-xp 00000000 08:00 71436      /usr/bin/coreutils
        004fc000-004fd000 rwxp 000ec000 08:00 71436      /usr/bin/coreutils
        004fd000-0050f000 rwxp 00000000 00:00 0
        00cc3000-00ce4000 rwxp 00000000 00:00 0          [heap]
        2ab96000-2ab98000 r--p 00000000 00:00 0          [vvar]
        2ab98000-2ab99000 r-xp 00000000 00:00 0          [vdso]
        2ab99000-2ab9d000 rwxp 00000000 00:00 0
        ...
      
      Resolve this by adjusting STACK_TOP to reserve space for the VDSO &
      providing an address hint to get_unmapped_area() causing it to use this
      space even when using the legacy mmap layout.
      
      We reserve enough space for the VDSO, plus 1MB or 256MB for 32 bit & 64
      bit systems respectively within which we randomize the VDSO base
      address. Previously this randomization was taken care of by the mmap
      base address randomization performed by arch_mmap_rnd(). The 1MB & 256MB
      sizes are somewhat arbitrary but chosen such that we have some
      randomization without taking up too much of the user's virtual address
      space, which is often in short supply for 32 bit systems.
      
      With this the VDSO is always mapped at a high address, leaving lots of
      space for statically linked programs to make use of brk:
      
        # ulimit -s unlimited; cat /proc/self/maps
        00400000-004ec000 r-xp 00000000 08:00 71436      /usr/bin/coreutils
        004fc000-004fd000 rwxp 000ec000 08:00 71436      /usr/bin/coreutils
        004fd000-0050f000 rwxp 00000000 00:00 0
        00c28000-00c49000 rwxp 00000000 00:00 0          [heap]
        ...
        7f67c000-7f69d000 rwxp 00000000 00:00 0          [stack]
        7f7fc000-7f7fd000 rwxp 00000000 00:00 0
        7fcf1000-7fcf3000 r--p 00000000 00:00 0          [vvar]
        7fcf3000-7fcf4000 r-xp 00000000 00:00 0          [vdso]
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Reported-by: Huacai Chen <chenhc@lemote.com>
      Fixes: ebb5e78c ("MIPS: Initial implementation of a VDSO")
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org # v4.4+
  9. 01 Sep 2018, 1 commit
  10. 14 Aug 2018, 1 commit
  11. 11 Aug 2018, 1 commit
    • MIPS: Consistently declare TLB functions · 4bcb4ad6
      Authored by Paul Burton
      Since at least the beginning of the git era we've declared our TLB
      exception handling functions inconsistently. They're actually functions,
      but we declare them as arrays of u32 where each u32 is an encoded
      instruction. This has always been the case for arch/mips/mm/tlbex.c, and
      has also been true for arch/mips/kernel/traps.c since commit
      86a1708a ("MIPS: Make tlb exception handler definitions and
      declarations match.") which aimed for consistency but did so by
      consistently making our C code inconsistent with our assembly.
      
      This is all usually harmless, but when using GCC 7 or newer to build a
      kernel targeting microMIPS (ie. CONFIG_CPU_MICROMIPS=y) it becomes
      problematic. With microMIPS bit 0 of the program counter indicates the
      ISA mode. When bit 0 is zero instructions are decoded using the standard
      MIPS32 or MIPS64 ISA. When bit 0 is one instructions are decoded using
      microMIPS. This means that function pointers become odd - their least
      significant bit is one for microMIPS code. We work around this in cases
      where we need to access code using loads & stores with our
      msk_isa16_mode() macro which simply clears bit 0 of the value it is
      given:
      
        #define msk_isa16_mode(x) ((x) & ~0x1)
      
      For example we do this for our TLB load handler in
      build_r4000_tlb_load_handler():
      
        u32 *p = (u32 *)msk_isa16_mode((ulong)handle_tlbl);
      
      We then write code to p, expecting it to be suitably aligned (our LEAF
      macro aligns functions on 4 byte boundaries, so (ulong)handle_tlbl will
      give a value one greater than a multiple of 4 - ie. the start of a
      function on a 4 byte boundary, with the ISA mode bit 0 set).
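      
      A hedged before/after sketch of the declarations involved:
      
        /* old style - an array of encoded instructions, which lets GCC
         * assume the symbol's address has bit 0 clear:
         *   extern u32 handle_tlbl[];
         */
        
        /* new style - a real function, so msk_isa16_mode() is preserved */
        extern void handle_tlbl(void);
        
        u32 *p = (u32 *)msk_isa16_mode((unsigned long)handle_tlbl);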
      
      This worked fine up to GCC 6, but GCC 7 & onwards is smart enough to
      presume that handle_tlbl which we declared as an array of u32s must be
      aligned sufficiently that bit 0 of its address will never be set, and as
      a result optimize out msk_isa16_mode(). This leads to p having an
      address with bit 0 set, and when we go on to attempt to store code at
      that address we take an address error exception due to the unaligned
      memory access.
      
      This leads to an exception prior to the kernel having configured its own
      exception handlers, so we jump to whatever handlers the bootloader
      configured. In the case of QEMU this results in a silent hang, since it
      has no useful general exception vector.
      
      Fix this by consistently declaring our TLB-related functions as
      functions. For handle_tlbl(), handle_tlbs() & handle_tlbm() we do this
      in asm/tlbex.h & we make use of the existing declaration of
      tlbmiss_handler_setup_pgd() in asm/mmu_context.h. Our TLB handler
      generation code in arch/mips/mm/tlbex.c is adjusted to deal with these
      definitions, in most cases simply by casting the function pointers to
      u32 pointers.
      
      This allows us to include asm/mmu_context.h in arch/mips/mm/tlbex.c to
      get the definitions of tlbmiss_handler_setup_pgd & pgd_current, removing
      some needless duplication. Consistently using msk_isa16_mode() on
      function pointers means we no longer need the
      tlbmiss_handler_setup_pgd_start symbol so that is removed entirely.
      
      Now that we're declaring our functions as functions GCC stops optimizing
      out msk_isa16_mode() & a microMIPS kernel built with either GCC 7.3.0 or
      8.1.0 boots successfully.
      Signed-off-by: Paul Burton <paul.burton@mips.com>
  12. 02 Aug 2018, 3 commits
    • MIPS: Delete unused code in linux32.c · 48ae93fd
      Authored by Paul Burton
      The A() & AA() macros have been unused since commit 05e43966
      ("[MIPS] Use SYSVIPC_COMPAT to fix various problems on N32"), which
      switched to the more standard compat_ptr().
      
      RLIM_INFINITY32, RESOURCE32() & struct rlimit32 have been present but
      unused since the beginning of the git era.
      
      Remove the dead code.
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Patchwork: https://patchwork.linux-mips.org/patch/20108/
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
    • MIPS: Remove unused sys_32_mmap2 · 3a1c0fc5
      Authored by Paul Burton
      The sys_32_mmap2 function has been unused since we started using syscall
      wrappers in commit dbda6ac0 ("MIPS: CVE-2009-0029: Enable syscall
      wrappers."), and is indeed identical to the sys_mips_mmap2 function that
      replaced it in sys32_call_table.
      
      Remove the dead code.
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Patchwork: https://patchwork.linux-mips.org/patch/20107/
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
    • MIPS: Remove nabi_no_regargs · 96a68b14
      Authored by Paul Burton
      Our sigreturn functions make use of a macro named nabi_no_regargs to
      declare 8 dummy arguments to a function, forcing the compiler to expect
      a pt_regs structure on the stack rather than in argument registers. This
      is an ugly hack which unnecessarily causes these sigreturn functions to
      need to care about the calling convention of the ABI the kernel is built
      for. Although this is abstracted via nabi_no_regargs, it's still ugly &
      unnecessary.
      
      Remove nabi_no_regargs & the struct pt_regs argument from sigreturn
      functions, and instead use current_pt_regs() to find the struct pt_regs
      on the stack, which works cleanly regardless of ABI.
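      
      A hedged sketch of the resulting shape of a sigreturn handler:
      
        asmlinkage void sys_sigreturn(void)
        {
                /* no dummy register arguments needed any more */
                struct pt_regs *regs = current_pt_regs();
                
                /* ... restore the signal context found via regs ... */
        }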
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Patchwork: https://patchwork.linux-mips.org/patch/20106/
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
  13. 31 Jul 2018, 1 commit
    • MIPS: Allow auto-detection of ARCH_PFN_OFFSET & PHYS_OFFSET · 6c359eb1
      Authored by Paul Burton
      On systems where physical memory begins at a non-zero address, defining
      PHYS_OFFSET (which influences ARCH_PFN_OFFSET) can save us time & memory
      by avoiding book-keeping for pages from address zero to the start of
      memory.
      
      Some MIPS platforms already make use of this, but with the definition of
      PHYS_OFFSET being compile-time constant it hasn't been possible to
      enable this optimization for a kernel which may run on systems with
      varying physical memory base addresses.
      
      Introduce a new Kconfig option CONFIG_MIPS_AUTO_PFN_OFFSET which, when
      enabled, makes ARCH_PFN_OFFSET a variable & detects it from the boot
      memory map (which for example may have been populated from DT). The
      relationship with PHYS_OFFSET is reversed, with PHYS_OFFSET now being
      based on ARCH_PFN_OFFSET. This is because ARCH_PFN_OFFSET is used far
      more often, so avoiding the need for runtime calculation gives us a
      smaller impact on kernel text size (0.1% rather than 0.15% for
      64r6el_defconfig).
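      
      A hedged sketch of the reversed relationship (simplified from the real
      headers; exact macro placement may differ):
      
        #ifdef CONFIG_MIPS_AUTO_PFN_OFFSET
        extern unsigned long ARCH_PFN_OFFSET;             /* detected at boot  */
        # define PHYS_OFFSET     PFN_PHYS(ARCH_PFN_OFFSET)
        #else
        # define ARCH_PFN_OFFSET PFN_UP(PHYS_OFFSET)      /* compile-time      */
        #endif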
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Suggested-by: Vladimir Kondratiev <vladimir.kondratiev@intel.com>
      Patchwork: https://patchwork.linux-mips.org/patch/20048/
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
  14. 24 Jul 2018, 1 commit
  15. 20 Jul 2018, 3 commits
    • MIPS: Add FP_MODE regset support · 1ae22a0e
      Authored by Maciej W. Rozycki
      Define an NT_MIPS_FP_MODE core file note and implement a corresponding
      regset holding the state handled by PR_SET_FP_MODE and PR_GET_FP_MODE
      prctl(2) requests.  This lets debug software correctly interpret the
      contents of floating-point general registers both in live debugging and
      in core files, and also switch floating-point modes of a live process.
      
      [paul.burton@mips.com:
        - Changed NT_MIPS_FP_MODE to 0x801 to match first nibble of
          NT_MIPS_DSP, which was also changed to avoid a conflict.]
      Signed-off-by: Maciej W. Rozycki <macro@mips.com>
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Patchwork: https://patchwork.linux-mips.org/patch/19331/
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
    • MIPS: Add DSP ASE regset support · 44109c60
      Authored by Maciej W. Rozycki
      Define an NT_MIPS_DSP core file note type and implement a corresponding
      regset holding the DSP ASE register context, following the layout of the
      `mips_dsp_state' structure, except for the DSPControl register stored as
      a 64-bit rather than 32-bit quantity in a 64-bit note.
      
      The lack of DSP ASE register saving to core files can be considered a
      design flaw with commit e50c0a8f ("Support the MIPS32 / MIPS64 DSP
      ASE."), leading to an incomplete state being saved.  Consequently no DSP
      ASE regset has been created with commit 7aeb753b ("MIPS: Implement
      task_user_regset_view."), when regset support was added to the MIPS
      port.
      
      Additionally there is no way for ptrace(2) to correctly access the DSP
      accumulator registers in n32 processes with the existing interfaces.
      This is due to 32-bit truncation of data passed with PTRACE_PEEKUSR and
      PTRACE_POKEUSR requests, which cannot be avoided owing to how the data
      types for ptrace(3) have been defined.  This new NT_MIPS_DSP regset
      fills the missing interface gap.
      
      [paul.burton@mips.com:
        - Change NT_MIPS_DSP to 0x800 to avoid conflict with NT_VMCOREDD
          introduced by commit 2724273e ("vmcore: add API to collect
          hardware dump in second kernel").
        - Drop stable tag. Whilst I agree the lack of this functionality can
          be considered a flaw in earlier DSP ASE support, it's still new
          functionality which doesn't meet up to the requirements set out in
          Documentation/process/stable-kernel-rules.rst.]
      Signed-off-by: Maciej W. Rozycki <macro@mips.com>
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      References: 7aeb753b ("MIPS: Implement task_user_regset_view.")
      Patchwork: https://patchwork.linux-mips.org/patch/19330/
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
    • MIPS: Correct the 64-bit DSP accumulator register size · f5958b4c
      Authored by Maciej W. Rozycki
      Use the `unsigned long' rather than `__u32' type for DSP accumulator
      registers, like with the regular MIPS multiply/divide accumulator and
      general-purpose registers, as all are 64-bit in 64-bit implementations
      and using a 32-bit data type leads to contents truncation on context
      saving.
      
      Update `arch_ptrace' and `compat_arch_ptrace' accordingly, removing
      casts that are similarly not used with multiply/divide accumulator or
      general-purpose register accesses.
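      
      A hedged sketch of the data-type change (simplified; the accumulator
      type is assumed to be a typedef along these lines):
      
        typedef unsigned long dspreg_t;   /* was __u32, which truncated the
                                           * upper half on 64-bit kernels */
        
        struct mips_dsp_state {
                dspreg_t        dspr[NUM_DSP_REGS];
                unsigned int    dspcontrol;
        };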
      Signed-off-by: Maciej W. Rozycki <macro@mips.com>
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Fixes: e50c0a8f ("Support the MIPS32 / MIPS64 DSP ASE.")
      Patchwork: https://patchwork.linux-mips.org/patch/19329/
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Cc: stable@vger.kernel.org # 2.6.15+
  16. 18 Jul 2018, 1 commit
    • mips: unify prom_putchar() declarations · 5c93316c
      Authored by Alexander Sverdlin
      prom_putchar() is used centrally in the early printk infrastructure,
      so at a minimum all MIPS platforms should agree on the function's
      return type.
      
      [paul.burton@mips.com:
        - Include linux/types.h in asm/setup.h to gain the bool typedef before
          we start including asm/setup.h elsewhere.
        - Include asm/setup.h in all files that use or define prom_putchar().
        - Also standardise on signed rather than unsigned char argument.]
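      
      A hedged sketch of the resulting unified declaration in asm/setup.h:
      
        void prom_putchar(char c);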
      Signed-off-by: Alexander Sverdlin <alexander.sverdlin@nokia.com>
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Patchwork: https://patchwork.linux-mips.org/patch/19842/
      Cc: linux-mips@linux-mips.org
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Jonas Gorski <jonas.gorski@gmail.com>
      Cc: Florian Fainelli <f.fainelli@gmail.com>
      Cc: Kate Stewart <kstewart@linuxfoundation.org>
      Cc: Philippe Ombredanne <pombredanne@nexb.com>
  17. 29 Jun 2018, 3 commits
    • MIPS: Annotate cpu_wait implementations with __cpuidle · 97c8580e
      Authored by Paul Burton
      Annotate cpu_wait implementations using the __cpuidle macro which
      places these functions in the .cpuidle.text section. This allows
      cpu_in_idle() to return true for PC values which fall within these
      functions, allowing nmi_backtrace() to produce cleaner output for CPUs
      running idle functions. For example:
      
        # echo l >/proc/sysrq-trigger
        [   38.587170] sysrq: SysRq : Show backtrace of all active CPUs
        [   38.593657] NMI backtrace for cpu 1
        [   38.597611] CPU: 1 PID: 161 Comm: sh Not tainted 4.18.0-rc1+ #27
        [   38.604306] Stack : 00000000 00000004 00000006 80486724 00000000 00000000 00000000 00000000
        [   38.613647]         80e17eda 00000034 00000000 00000000 80d20000 80b67e98 8e559c90 0ffe1e88
        [   38.622986]         00000000 00000000 80e70000 00000000 8f61db18 38312e34 722d302e 202b3163
        [   38.632324]         8e559d3c 8e559adc 00000001 6b636162 80d20000 80000000 00000000 80d1cfa4
        [   38.641664]         00000001 80d20000 80d19520 00000000 00000003 80836724 00000004 80e10004
        [   38.650993]         ...
        [   38.653724] Call Trace:
        [   38.656499] [<8040cdd0>] show_stack+0xa0/0x144
        [   38.661475] [<80b67e98>] dump_stack+0xe8/0x120
        [   38.666455] [<80b6f6d4>] nmi_cpu_backtrace+0x1b4/0x1cc
        [   38.672189] [<80b6f81c>] nmi_trigger_cpumask_backtrace+0x130/0x1e4
        [   38.679081] [<808295d8>] __handle_sysrq+0xc0/0x180
        [   38.684421] [<80829b84>] write_sysrq_trigger+0x50/0x64
        [   38.690176] [<8061c984>] proc_reg_write+0xd0/0xfc
        [   38.695447] [<805aac1c>] __vfs_write+0x54/0x194
        [   38.700500] [<805aaf24>] vfs_write+0xe0/0x18c
        [   38.705360] [<805ab190>] ksys_write+0x7c/0xf0
        [   38.710238] [<80416018>] syscall_common+0x34/0x58
        [   38.715558] Sending NMI from CPU 1 to CPUs 0,2-3:
        [   38.720916] NMI backtrace for cpu 0 skipped: idling at r4k_wait_irqoff+0x2c/0x34
        [   38.729186] NMI backtrace for cpu 3 skipped: idling at r4k_wait_irqoff+0x2c/0x34
        [   38.737449] NMI backtrace for cpu 2 skipped: idling at r4k_wait_irqoff+0x2c/0x34
      
      Without this we get register value & backtrace output from all CPUs,
      which is generally useless for those running the idle function & serves
      only to overwhelm & obfuscate the meaningful output from non-idle CPUs.
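      
      A hedged sketch of the annotation (function body elided):
      
        static void __cpuidle r4k_wait_irqoff(void)
        {
                /* ... disable interrupts & execute the wait instruction;
                 * the function now lands in .cpuidle.text, so
                 * cpu_in_idle() matches its PC range ... */
        }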
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/19598/
    • MIPS: Use async IPIs for arch_trigger_cpumask_backtrace() · b63e132b
      Authored by Paul Burton
      The current MIPS implementation of arch_trigger_cpumask_backtrace() is
      broken because it attempts to use synchronous IPIs despite the fact that
      it may be run with interrupts disabled.
      
      This means that when arch_trigger_cpumask_backtrace() is invoked, for
      example by the RCU CPU stall watchdog, we may:
      
        - Deadlock due to use of synchronous IPIs with interrupts disabled,
          causing the CPU that's attempting to generate the backtrace output
          to hang itself.
      
        - Not succeed in generating the desired output from remote CPUs.
      
        - Produce warnings about this from smp_call_function_many(), for
          example:
      
          [42760.526910] INFO: rcu_sched detected stalls on CPUs/tasks:
          [42760.535755]  0-...!: (1 GPs behind) idle=ade/140000000000000/0 softirq=526944/526945 fqs=0
          [42760.547874]  1-...!: (0 ticks this GP) idle=e4a/140000000000000/0 softirq=547885/547885 fqs=0
          [42760.559869]  (detected by 2, t=2162 jiffies, g=266689, c=266688, q=33)
          [42760.568927] ------------[ cut here ]------------
          [42760.576146] WARNING: CPU: 2 PID: 1216 at kernel/smp.c:416 smp_call_function_many+0x88/0x20c
          [42760.587839] Modules linked in:
          [42760.593152] CPU: 2 PID: 1216 Comm: sh Not tainted 4.15.4-00373-gee058bb4d0c2 #2
          [42760.603767] Stack : 8e09bd20 8e09bd20 8e09bd20 fffffff0 00000007 00000006 00000000 8e09bca8
          [42760.616937]         95b2b379 95b2b379 807a0080 00000007 81944518 0000018a 00000032 00000000
          [42760.630095]         00000000 00000030 80000000 00000000 806eca74 00000009 8017e2b8 000001a0
          [42760.643169]         00000000 00000002 00000000 8e09baa4 00000008 808b8008 86d69080 8e09bca0
          [42760.656282]         8e09ad50 805e20aa 00000000 00000000 00000000 8017e2b8 00000009 801070ca
          [42760.669424]         ...
          [42760.673919] Call Trace:
          [42760.678672] [<27fde568>] show_stack+0x70/0xf0
          [42760.685417] [<84751641>] dump_stack+0xaa/0xd0
          [42760.692188] [<699d671c>] __warn+0x80/0x92
          [42760.698549] [<68915d41>] warn_slowpath_null+0x28/0x36
          [42760.705912] [<f7c76c1c>] smp_call_function_many+0x88/0x20c
          [42760.713696] [<6bbdfc2a>] arch_trigger_cpumask_backtrace+0x30/0x4a
          [42760.722216] [<f845bd33>] rcu_dump_cpu_stacks+0x6a/0x98
          [42760.729580] [<796e7629>] rcu_check_callbacks+0x672/0x6ac
          [42760.737476] [<059b3b43>] update_process_times+0x18/0x34
          [42760.744981] [<6eb94941>] tick_sched_handle.isra.5+0x26/0x38
          [42760.752793] [<478d3d70>] tick_sched_timer+0x1c/0x50
          [42760.759882] [<e56ea39f>] __hrtimer_run_queues+0xc6/0x226
          [42760.767418] [<e88bbcae>] hrtimer_interrupt+0x88/0x19a
          [42760.775031] [<6765a19e>] gic_compare_interrupt+0x2e/0x3a
          [42760.782761] [<0558bf5f>] handle_percpu_devid_irq+0x78/0x168
          [42760.790795] [<90c11ba2>] generic_handle_irq+0x1e/0x2c
          [42760.798117] [<1b6d462c>] gic_handle_local_int+0x38/0x86
          [42760.805545] [<b2ada1c7>] gic_irq_dispatch+0xa/0x14
          [42760.812534] [<90c11ba2>] generic_handle_irq+0x1e/0x2c
          [42760.820086] [<c7521934>] do_IRQ+0x16/0x20
          [42760.826274] [<9aef3ce6>] plat_irq_dispatch+0x62/0x94
          [42760.833458] [<6a94b53c>] except_vec_vi_end+0x70/0x78
          [42760.840655] [<22284043>] smp_call_function_many+0x1ba/0x20c
          [42760.848501] [<54022b58>] smp_call_function+0x1e/0x2c
          [42760.855693] [<ab9fc705>] flush_tlb_mm+0x2a/0x98
          [42760.862730] [<0844cdd0>] tlb_flush_mmu+0x1c/0x44
          [42760.869628] [<cb259b74>] arch_tlb_finish_mmu+0x26/0x3e
          [42760.877021] [<1aeaaf74>] tlb_finish_mmu+0x18/0x66
          [42760.883907] [<b3fce717>] exit_mmap+0x76/0xea
          [42760.890428] [<c4c8a2f6>] mmput+0x80/0x11a
          [42760.896632] [<a41a08f4>] do_exit+0x1f4/0x80c
          [42760.903158] [<ee01cef6>] do_group_exit+0x20/0x7e
          [42760.909990] [<13fa8d54>] __wake_up_parent+0x0/0x1e
          [42760.917045] [<46cf89d0>] smp_call_function_many+0x1a2/0x20c
          [42760.924893] [<8c21a93b>] syscall_common+0x14/0x1c
          [42760.931765] ---[ end trace 02aa09da9dc52a60 ]---
          [42760.938342] ------------[ cut here ]------------
          [42760.945311] WARNING: CPU: 2 PID: 1216 at kernel/smp.c:291 smp_call_function_single+0xee/0xf8
          ...
      
      This patch switches MIPS' arch_trigger_cpumask_backtrace() to use async
      IPIs & smp_call_function_single_async() in order to resolve this
      problem. We ensure use of the pre-allocated call_single_data_t
      structures is serialized by maintaining a cpumask indicating that
      they're busy, and refusing to attempt to send an IPI when a CPU's bit is
      set in this mask. This should only happen if a CPU hasn't responded to a
      previous backtrace IPI - ie. if it's hung - and we print a warning to
      the console in this case.
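      
      A hedged sketch of the serialization scheme (helper names are
      illustrative):
      
        static cpumask_t backtrace_csd_busy;
        static DEFINE_PER_CPU(call_single_data_t, backtrace_csd);
        
        static void raise_backtrace(cpumask_t *mask)
        {
                call_single_data_t *csd;
                int cpu;
                
                for_each_cpu(cpu, mask) {
                        /* a CPU that never acked a previous backtrace IPI
                         * is presumed hung - warn rather than re-queue */
                        if (cpumask_test_and_set_cpu(cpu, &backtrace_csd_busy)) {
                                pr_warn("Unable to send backtrace IPI to CPU%d - perhaps it hung?\n",
                                        cpu);
                                continue;
                        }
                        
                        csd = &per_cpu(backtrace_csd, cpu);
                        csd->func = handle_backtrace;
                        smp_call_function_single_async(cpu, csd);
                }
        }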
      
      I've marked this for stable branches as far back as v4.9, to which it
      applies cleanly. Strictly speaking the faulty MIPS implementation can be
      traced further back to commit 856839b7 ("MIPS: Add
      arch_trigger_all_cpu_backtrace() function") in v3.19, but kernel
      versions v3.19 through v4.8 will require further work to backport due to
      the rework performed in commit 9a01c3ed ("nmi_backtrace: add more
      trigger_*_cpu_backtrace() methods").
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Patchwork: https://patchwork.linux-mips.org/patch/19597/
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org # v4.9+
      Fixes: 856839b7 ("MIPS: Add arch_trigger_all_cpu_backtrace() function")
      Fixes: 9a01c3ed ("nmi_backtrace: add more trigger_*_cpu_backtrace() methods")
    • MIPS: Call dump_stack() from show_regs() · 5a267832
      Authored by Paul Burton
      The generic nmi_cpu_backtrace() function calls show_regs() when a struct
      pt_regs is available, and dump_stack() otherwise. If we were to make use
      of the generic nmi_cpu_backtrace() with MIPS' current implementation of
      show_regs() this would mean that we see only register data with no
      accompanying stack information, in contrast with our current
      implementation which calls dump_stack() regardless of whether register
      state is available.
      
      In preparation for making use of the generic nmi_cpu_backtrace() to
      implement arch_trigger_cpumask_backtrace(), have our implementation of
      show_regs() call dump_stack() and drop the explicit dump_stack() call in
      arch_dump_stack() which is invoked by arch_trigger_cpumask_backtrace().
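      
      A hedged sketch of the resulting show_regs():
      
        void show_regs(struct pt_regs *regs)
        {
                __show_regs(regs);
                dump_stack();
        }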
      
      This will allow the output we produce to remain the same after a later
      patch switches to using nmi_cpu_backtrace(). It may mean that we produce
      extra stack output in other uses of show_regs(), but this:
      
        1) Seems harmless.
        2) Is good for consistency between arch_trigger_cpumask_backtrace()
           and other users of show_regs().
        3) Matches the behaviour of the ARM & PowerPC architectures.
      
      Marked for stable back to v4.9 as a prerequisite of the following patch
      "MIPS: Use async IPIs for arch_trigger_cpumask_backtrace()".
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Patchwork: https://patchwork.linux-mips.org/patch/19596/
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org # v4.9+
  18. 25 Jun 2018, 5 commits
    • MIPS: Add ksig argument to rseq_{signal_deliver,handle_notify_resume} · 662d855c
      Authored by Paul Burton
      Commit 784e0300 ("rseq: Avoid infinite recursion when delivering
      SIGSEGV") added a new ksig argument to the rseq_signal_deliver() &
      rseq_handle_notify_resume() functions, and was merged in v4.18-rc2.
      Meanwhile MIPS support for restartable sequences was also merged in
      v4.18-rc2 with commit 9ea141ad ("MIPS: Add support for restartable
      sequences"), and therefore didn't get updated for the API change.
      
      This results in build failures like the following:
      
          CC      arch/mips/kernel/signal.o
        arch/mips/kernel/signal.c: In function 'handle_signal':
        arch/mips/kernel/signal.c:804:22: error: passing argument 1 of
          'rseq_signal_deliver' from incompatible pointer type
          [-Werror=incompatible-pointer-types]
          rseq_signal_deliver(regs);
                              ^~~~
        In file included from ./include/linux/context_tracking.h:5,
                         from arch/mips/kernel/signal.c:12:
        ./include/linux/sched.h:1811:56: note: expected 'struct ksignal *' but
          argument is of type 'struct pt_regs *'
          static inline void rseq_signal_deliver(struct ksignal *ksig,
                                                 ~~~~~~~~~~~~~~~~^~~~
        arch/mips/kernel/signal.c:804:2: error: too few arguments to function
          'rseq_signal_deliver'
          rseq_signal_deliver(regs);
          ^~~~~~~~~~~~~~~~~~~
      
      Fix this by adding the ksig argument as was done for other architectures
      in commit 784e0300 ("rseq: Avoid infinite recursion when delivering
      SIGSEGV").
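      
      The call-site change in handle_signal() then looks roughly like:
      
        rseq_signal_deliver(ksig, regs);   /* was: rseq_signal_deliver(regs) */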
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Patchwork: https://patchwork.linux-mips.org/patch/19603/
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
    • MIPS: Schedule on CPUs we need to lose FPU for a mode switch · 8c8d953c
      Authored by Paul Burton
      Commit 6b832257 ("MIPS: Force CPUs to lose FP context during mode
      switches") ensures that we react to PR_SET_FP_MODE prctl syscalls
      quickly by broadcasting an IPI in order to cause CPUs to lose FPU access
      when necessary. Whilst it achieves that, unfortunately it causes all
      sorts of strange race conditions because:
      
       1) The IPI may arrive at a point where the FPU is in the process of
          being enabled, but that process is not yet complete leading to a
          state we aren't prepared to handle. For example:
      
          [  370.215903] do_cpu invoked from kernel context![#1]:
          [  370.221064] CPU: 0 PID: 963 Comm: fp-prctl Not tainted 4.9.0-rc5-00323-g210db32-dirty #226
          [  370.229420] task: a8000000fd672e00 task.stack: a8000000fd630000
          [  370.235399] $ 0   : 0000000000000000 0000000000000001 0000000000000001 a8000000fd630000
          [  370.243882] $ 4   : a8000000fd672e00 0000000000000000 0000000000000453 0000000000000000
          [  370.252317] $ 8   : 0000000000000000 a8000000fd637c28 1000000000000000 0000000000000010
          [  370.260753] $12   : 00000000140084e0 ffffffff80109c00 0000000000000000 0000000000000002
          [  370.269179] $16   : ffffffff8092f080 a8000000fd672e00 ffffffff80107fe8 a8000000fd485000
          [  370.277612] $20   : ffffffff8084d328 ffffffff80940000 0000000000000009 ffffffff80930000
          [  370.286038] $24   : 0000000000000000 900000001612048c
          [  370.294476] $28   : a8000000fd630000 a8000000fd637ac0 ffffffff80937300 ffffffff8010807c
          [  370.302909] Hi    : 0000000000000000
          [  370.306595] Lo    : 0000000000000200
          [  370.310376] epc   : ffffffff80115d38 _save_fp+0x10/0xa0
          [  370.315784] ra    : ffffffff8010807c prepare_for_fp_mode_switch+0x94/0x1b0
          [  370.322707] Status: 140084e2 KX SX UX KERNEL EXL
          [  370.327980] Cause : 1080002c (ExcCode 0b)
          [  370.332091] PrId  : 0001a428 (MIPS P6600)
          [  370.336179] Modules linked in:
          [  370.339486] Process fp-prctl (pid: 963, threadinfo=a8000000fd630000, task=a8000000fd672e00, tls=00000000756e67d0)
          [  370.349724] Stack : 0000000000000000 a8000000fd557dc0 0000000000000000 ffffffff801ca8e0
          [  370.358161]         0000000000000000 a8000000fd637b9c 0000000000000009 ffffffff80923780
          [  370.366575]         ffffffff80850000 ffffffff8011610c 00000000000000b8 ffffffff801a5084
          [  370.374989]         ffffffff8084a370 ffffffff8084a388 ffffffff80923780 ffffffff80923828
          [  370.383395]         0000000000010000 ffffffff809237a8 0000000000020000 ffffffff80a40000
          [  370.391817]         000000000000007c 00000000004a0000 00000000756dedd0 ffffffff801a5188
          [  370.400230]         a800000002014900 0000000000000001 ffffffff80923780 0000000080923828
          [  370.408644]         ffffffff80923780 ffffffff80923780 ffffffff80923828 ffffffff801a521c
          [  370.417066]         ffffffff80923780 ffffffff80923828 0000000000010000 ffffffff801a8f84
          [  370.425472]         ffffffff80a40000 a8000000fd637c20 ffffffff80a39240 0000000000000001
          [  370.433885]         ...
          [  370.436562] Call Trace:
          [  370.439222] [<ffffffff80115d38>] _save_fp+0x10/0xa0
          [  370.444305] [<ffffffff8010807c>] prepare_for_fp_mode_switch+0x94/0x1b0
          [  370.451035] [<ffffffff801ca8e0>] flush_smp_call_function_queue+0xf8/0x230
          [  370.457991] [<ffffffff8011610c>] ipi_call_interrupt+0xc/0x20
          [  370.463814] [<ffffffff801a5084>] __handle_irq_event_percpu+0xc4/0x1a8
          [  370.470404] [<ffffffff801a5188>] handle_irq_event_percpu+0x20/0x68
          [  370.476734] [<ffffffff801a521c>] handle_irq_event+0x4c/0x88
          [  370.482486] [<ffffffff801a8f84>] handle_edge_irq+0x12c/0x210
          [  370.488316] [<ffffffff801a47a0>] generic_handle_irq+0x38/0x48
          [  370.494280] [<ffffffff804a2dbc>] gic_handle_shared_int+0x194/0x268
          [  370.500616] [<ffffffff801a47a0>] generic_handle_irq+0x38/0x48
          [  370.506529] [<ffffffff80107e60>] do_IRQ+0x18/0x28
          [  370.511445] [<ffffffff804a1524>] plat_irq_dispatch+0xc4/0x140
          [  370.517339] [<ffffffff80106230>] ret_from_irq+0x0/0x4
          [  370.522583] [<ffffffff8010fad4>] do_ri+0x4fc/0x7e8
          [  370.527546] [<ffffffff80106220>] ret_from_exception+0x0/0x10
      
       2) The IPI may arrive during kernel use of the FPU, since we generally
          only disable preemption around use of the FPU & leave interrupts
          enabled. This can lead to us unexpectedly losing access to the FPU
          in places where it previously had not been possible. For example:
      
          do_cpu invoked from kernel context![#2]:
          CPU: 2 PID: 7338 Comm: fp-prctl Tainted: G      D         4.7.0-00424-g49b0c82
          #2
          task: 838e4000 ti: 88d38000 task.ti: 88d38000
          $ 0   : 00000000 00000001 ffffffff 88d3fef8
          $ 4   : 838e4000 88d38004 00000000 00000001
          $ 8   : 3400fc01 801f8020 808e9100 24000000
          $12   : dbffffff 807b69d8 807b0000 00000000
          $16   : 00000000 80786150 00400fc4 809c0398
          $20   : 809c0338 0040273c 88d3ff28 808e9d30
          $24   : 808e9d30 00400fb4
          $28   : 88d38000 88d3fe88 00000000 8011a2ac
          Hi    : 0040273c
          Lo    : 88d3ff28
          epc   : 80114178 _restore_fp+0x10/0xa0
          ra    : 8011a2ac mipsr2_decoder+0xd5c/0x1660
          Status: 1400fc03    KERNEL EXL IE
          Cause : 1080002c (ExcCode 0b)
          PrId  : 0001a920 (MIPS I6400)
          Modules linked in:
          Process fp-prctl (pid: 7338, threadinfo=88d38000, task=838e4000, tls=766527d0)
          Stack : 00000000 00000000 00000000 88d3fe98 00000000 00000000 809c0398 809c0338
                808e9100 00000000 88d3ff28 00400fc4 00400fc4 0040273c 7fb69e18 004a0000
                004a0000 004a0000 7664add0 8010de18 00000000 00000000 88d3fef8 88d3ff28
                808e9100 00000000 766527d0 8010e534 000c0000 85755000 8181d580 00000000
                00000000 00000000 004a0000 00000000 766527d0 7fb69e18 004a0000 80105c20
                ...
          Call Trace:
          [<80114178>] _restore_fp+0x10/0xa0
          [<8011a2ac>] mipsr2_decoder+0xd5c/0x1660
          [<8010de18>] do_ri+0x90/0x6b8
          [<80105c20>] ret_from_exception+0x0/0x10
      
      At first glance a simple fix may seem to be to disable interrupts around
      kernel use of the FPU rather than merely preemption. However, this would
      introduce further overhead outside of the mode switch path & doesn't
      solve the third problem:
      
       3) The IPI may arrive whilst the kernel is running code that will lead
          to a preempt_disable() call & FPU usage soon. If this happens then
          the IPI will be serviced & we'll proceed to enable an FPU whilst the
          mode switch is in progress, leading to strange & inconsistent
          behaviour.
      
      Further to all of this is a separate but related problem:
      
       4) There are various paths through which we may enable the FPU without
          the user having triggered a coprocessor 1 disabled exception. These
          paths are those in which we emulate instructions & then enable the
          FPU with the expectation that the user might execute an FP
          instruction shortly afterwards. However these paths have not
          previously checked whether an FP mode switch is underway for the
          task, and therefore could enable the FPU whilst such a mode switch
          is in progress leading to strange & inconsistent behaviour for user
          code.
      
      This patch fixes all of the above by taking a step back & re-examining
      our approach to FP mode switches. Up until now we have taken these basic
      steps:
      
       a) Prevent any threads that are part of the affected process from being
          able to obtain ownership of the FPU.
      
       b) Cause any threads that are part of the affected process and already
          have ownership of an FPU to lose it.
      
       c) Set the thread flags for each thread that is part of the affected
          process to reflect the new FP mode.
      
       d) Allow threads to obtain ownership of the FPU again.
      
      This approach is however more complex than necessary. All that we really
      require is that the mode switch has occurred for all threads that are
      part of the affected process before mips_set_process_fp_mode(), and thus
      the PR_SET_FP_MODE prctl() syscall, returns. This doesn't require that
      we stop threads from owning or using an FPU whilst a mode switch occurs,
      only that we force them to relinquish it after the mode switch has
      occurred such that they next own an FPU with the correct mode
      configured. Our basic steps therefore simplify to:
      
       A) Set the thread flags for each thread that is part of the affected
          process to reflect the new FP mode.
      
       B) Cause any threads that are part of the affected process and already
          have ownership of an FPU to lose it.
      
      We implement B) by forcing each CPU which might be running a thread
      which is part of the affected process to schedule a no-op function,
      which causes the affected thread to lose its FPU ownership when it is
      descheduled.
      
      The end result is simpler FP mode switching with less overhead in the
      FPU enable path (ie. enable_restore_fp_context()) and fewer moving
      parts.
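      
      A heavily hedged sketch of step B) - forcing a reschedule on each CPU
      that may be running a thread of the affected process (helper and
      variable names are illustrative, not necessarily the real code):
      
        static long prepare_for_fp_mode_switch(void *unused)
        {
                /* deliberately a no-op: merely running it forces a context
                 * switch on the target CPU, so the previous FPU owner
                 * relinquishes the FPU and picks up the new mode later */
                return 0;
        }
        
        for_each_cpu_and(cpu, &process_cpus, cpu_online_mask)
                work_on_cpu(cpu, prepare_for_fp_mode_switch, NULL);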
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Fixes: 9791554b ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
      Fixes: 6b832257 ("MIPS: Force CPUs to lose FP context during mode switches")
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: stable <stable@vger.kernel.org> # v4.0+
    • MIPS: Fix ejtag handler on SMP · c8bf3805
      Authored by Heiher
      On SMP systems, the shared ejtag debug buffer may be overwritten by
      other cores, because every core can take an ejtag exception at the
      same time.
      
      Unfortunately, in that context it's difficult to free up additional
      registers to address per-CPU buffers, so use ll/sc to serialize the
      access.
      
      [paul.burton@mips.com:
        This could in theory be backported at least as far back as the
        beginning of the git era, however in general it's exceedingly rare
        that anyone would hit this without further changes, so it doesn't seem
        worthwhile marking for backport.]
      Signed-off-by: Heiher <r@hev.cc>
      Patchwork: https://patchwork.linux-mips.org/patch/19507/
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Cc: jhogan@kernel.org
      Cc: ralf@linux-mips.org
    • MIPS: move coherentio setup to setup.c · aa4db775
      Authored by Christoph Hellwig
      We want to be able to use it even when not building dma-default.c
      in the near future.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Patchwork: https://patchwork.linux-mips.org/patch/19543/
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Cc: Florian Fainelli <f.fainelli@gmail.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Kevin Cernekee <cernekee@gmail.com>
      Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
      Cc: Tom Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: iommu@lists.linux-foundation.org
      Cc: linux-mips@linux-mips.org
    • MIPS: kexec: fix typos · 28a87b45
      Authored by Yegor Yefremov
      Correct a couple of typos within comments in
      arch/mips/kernel/relocate_kernel.S.
      
      [paul.burton@mips.com: Add a commit message.]
      Signed-off-by: Yegor Yefremov <yegorslists@googlemail.com>
      Patchwork: https://patchwork.linux-mips.org/patch/19218/
      Signed-off-by: Paul Burton <paul.burton@mips.com>
  19. 21 Jun 2018, 3 commits
    • bpf/error-inject/kprobes: Clear current_kprobe and enable preempt in kprobe · cce188bd
      Authored by Masami Hiramatsu
      Clear current_kprobe and enable preemption in the kprobe core
      even if pre_handler returns !0.
      
      This simplifies function override using kprobes.
      
      Jprobes used to require that preemption stay disabled and that
      current_kprobe be kept until control returned to the original
      function entry. For this reason kprobe_int3_handler() and similar
      arch-dependent kprobe handlers check the pre_handler result and
      exit without enabling preemption if the result is !0.
      
      After removing jprobes, kprobes no longer needs to keep preemption
      disabled when a user handler returns !0.
      
      But since the function override handlers in error-inject and bpf
      also return !0 when they override a function, they currently have
      to enable preemption and reset current_kprobe themselves in order
      to balance the preempt count.
      
      That is a fragile, bug-prone design. This fixes the unbalanced
      preempt count and current_kprobe handling in kprobes, bpf and
      error-inject.
      
      Note: for powerpc and x86, this removes all preempt_disable calls
      from kprobe_ftrace_handler because ftrace callbacks are called
      with preemption already disabled.
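      
      A hedged sketch of the simplified arch-handler flow after this change
      (modelled loosely on kprobe_int3_handler(), details elided):
      
        if (!p->pre_handler || !p->pre_handler(p, regs)) {
                /* normal kprobe: continue with single-stepping */
                setup_singlestep(p, regs, kcb, 0);
        } else {
                /* pre_handler returned !0 (e.g. a function override):
                 * the core now cleans up unconditionally instead of
                 * expecting the handler to do it */
                reset_current_kprobe();
                preempt_enable_no_resched();
        }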
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Josef Bacik <jbacik@fb.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-ia64@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: linux-s390@vger.kernel.org
      Cc: linux-sh@vger.kernel.org
      Cc: linux-snps-arc@lists.infradead.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: sparclinux@vger.kernel.org
      Link: https://lore.kernel.org/lkml/152942494574.15209.12323837825873032258.stgit@devbox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • MIPS/kprobes: Don't call the ->break_handler() in MIPS kprobes code · 9b85753d
      Authored by Masami Hiramatsu
      Don't call ->break_handler() from the MIPS kprobes code,
      because it was only used by jprobes, which have been removed.
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Link: https://lore.kernel.org/lkml/152942482953.15209.843924518200700137.stgit@devbox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • MIPS/kprobes: Remove jprobe implementation · 8c2c3f2d
      Authored by Masami Hiramatsu
      Remove the arch-dependent setjump/longjump functions and the
      now-unused jprobe fields in kprobe_ctlblk from arch/mips.
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Link: https://lore.kernel.org/lkml/152942451058.15209.3459785416221980965.stgit@devbox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  20. 20 Jun 2018, 5 commits
  21. 15 Jun 2018, 1 commit
  22. 14 Jun 2018, 1 commit
    • Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables · 050e9baa
      Authored by Linus Torvalds
      The changes to automatically test for working stack protector compiler
      support in the Kconfig files removed the special STACKPROTECTOR_AUTO
      option that picked the strongest stack protector that the compiler
      supported.
      
      That was all a nice cleanup - it makes no sense to have the AUTO case
      now that the Kconfig phase can just determine the compiler support
      directly.
      
      HOWEVER.
      
      It also meant that doing "make oldconfig" would now _disable_ the strong
      stackprotector if you had AUTO enabled, because in a legacy config file,
      the sane stack protector configuration would look like
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_NONE is not set
        # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_STACKPROTECTOR_AUTO=y
      
      and when you ran this through "make oldconfig" with the Kbuild changes,
      it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
      been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
      CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
      used to be disabled (because it was really enabled by AUTO), and would
      disable it in the new config, resulting in:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      That's dangerously subtle - people could suddenly find themselves with
      the weaker stack protector setup without even realizing.
      
      The solution here is to rename not just the old REGULAR stack
      protector option, but also the strong one.  This does that by just
      removing the CC_ prefix entirely for the user choices, because it really
      is not about the compiler support (the compiler support now instead
      automatically impacts _visibility_ of the options to users).
      
      This results in "make oldconfig" actually asking the user for their
      choice, so that we don't have any silent subtle security model changes.
      The end result would generally look like this:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_STACKPROTECTOR=y
        CONFIG_STACKPROTECTOR_STRONG=y
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      where the "CC_" versions really are about internal compiler
      infrastructure, not the user selections.
      Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  23. 24 May 2018, 1 commit