1. 20 February 2019, 7 commits
  2. 16 January 2019, 3 commits
    • kasan, arm64: remove redundant ARCH_SLAB_MINALIGN define · 7fa1e2e6
      Andrey Konovalov committed
      Defining ARCH_SLAB_MINALIGN in arch/arm64/include/asm/cache.h when KASAN
      is off is not needed, as include/linux/slab.h already provides a default
      definition under #ifndef.
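      For context, a minimal sketch of the generic fallback this relies on, as
      it appears (modulo exact wording) in include/linux/slab.h:

        /* An architecture only needs its own ARCH_SLAB_MINALIGN when it
         * requires something stricter than this default, which is why the
         * arm64 copy for the !KASAN case is redundant. */
        #ifndef ARCH_SLAB_MINALIGN
        #define ARCH_SLAB_MINALIGN __alignof__(unsigned long long)
        #endif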
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      7fa1e2e6
    • arm64: kaslr: ensure randomized quantities are clean to the PoC · 1598ecda
      Ard Biesheuvel committed
      kaslr_early_init() is called with the kernel mapped at its
      link time offset, and if it returns with a non-zero offset,
      the kernel is unmapped and remapped again at the randomized
      offset.
      
      During its execution, kaslr_early_init() also randomizes the
      base of the module region and of the linear mapping of DRAM,
      and sets two variables accordingly. However, since these
      variables are assigned with the caches on, they may get lost
      during the cache maintenance that occurs when unmapping and
      remapping the kernel, so ensure that these values are cleaned
      to the PoC.
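      A minimal sketch of the idea, assuming the __flush_dcache_area() helper
      from that era's arm64 tree (the exact call sites are approximate):

        /* Clean the randomized values to the PoC so they survive the cache
         * maintenance performed while the kernel is unmapped and remapped. */
        __flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));
        __flush_dcache_area(&memstart_offset_seed, sizeof(memstart_offset_seed));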
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Fixes: f80fb3a3 ("arm64: add support for kernel ASLR")
      Cc: <stable@vger.kernel.org> # v4.6+
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      1598ecda
    • arm64: kpti: Update arm64_kernel_use_ng_mappings() when forced on · 2f979675
      James Morse committed
      Since commit b89d82ef ("arm64: kpti: Avoid rewriting early page
      tables when KASLR is enabled"), a kernel built with CONFIG_RANDOMIZE_BASE
      can decide early whether to use non-global mappings by checking the
      kaslr_offset().
      
      A kernel built without CONFIG_RANDOMIZE_BASE, instead checks the
      cpufeature static-key.
      
      This leaves a gap when CONFIG_RANDOMIZE_BASE is enabled but no kaslr
      seed was provided and kpti is forced on using the cmdline option.
      
      When the decision is made late, kpti_install_ng_mappings() will re-write
      the page tables, but arm64_kernel_use_ng_mappings()'s value does not
      change as it only tests the cpufeature static-key if
      CONFIG_RANDOMIZE_BASE is disabled.
      This function influences PROT_DEFAULT via PTE_MAYBE_NG, and causes
      pgattr_change_is_safe() to catch nG->G transitions when the unchanged
      PROT_DEFAULT is used as part of PAGE_KERNEL_RO:
      [    1.942255] alternatives: patching kernel code
      [    1.998288] ------------[ cut here ]------------
      [    2.000693] kernel BUG at arch/arm64/mm/mmu.c:165!
      [    2.019215] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
      [    2.020257] Modules linked in:
      [    2.020807] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.0.0-rc2 #51
      [    2.021917] Hardware name: linux,dummy-virt (DT)
      [    2.022790] pstate: 40000005 (nZcv daif -PAN -UAO)
      [    2.023742] pc : __create_pgd_mapping+0x508/0x6d0
      [    2.024671] lr : __create_pgd_mapping+0x500/0x6d0
      
      [    2.058059] Process swapper/0 (pid: 1, stack limit = 0x(____ptrval____))
      [    2.059369] Call trace:
      [    2.059845]  __create_pgd_mapping+0x508/0x6d0
      [    2.060684]  update_mapping_prot+0x48/0xd0
      [    2.061477]  mark_linear_text_alias_ro+0xdc/0xe4
      [    2.070502]  smp_cpus_done+0x90/0x98
      [    2.071216]  smp_init+0x100/0x114
      [    2.071878]  kernel_init_freeable+0xd4/0x220
      [    2.072750]  kernel_init+0x10/0x100
      [    2.073455]  ret_from_fork+0x10/0x18
      
      [    2.075414] ---[ end trace 3572f3a7782292de ]---
      [    2.076389] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
      
      If arm64_kernel_unmapped_at_el0() is true, arm64_kernel_use_ng_mappings()
      should also be true.
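      A hedged sketch of the intended relationship (simplified, not the
      verbatim diff):

        static inline bool arm64_kernel_use_ng_mappings(void)
        {
        	/* No kpti support configured: global mappings are fine. */
        	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
        		return false;

        	/* If kpti has already been applied (e.g. forced on via the
        	 * cmdline), honour it regardless of the early KASLR check. */
        	if (arm64_kernel_unmapped_at_el0())
        		return true;

        	/* ... early kaslr_offset() / cpufeature checks elided ... */
        	return false;
        }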
      Signed-off-by: James Morse <james.morse@arm.com>
      CC: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      CC: John Garry <john.garry@huawei.com>
      CC: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2f979675
  3. 11 January 2019, 2 commits
  4. 10 January 2019, 3 commits
  5. 09 January 2019, 2 commits
  6. 06 January 2019, 3 commits
  7. 05 January 2019, 1 commit
    • mm: treewide: remove unused address argument from pte_alloc functions · 4cf58924
      Joel Fernandes (Google) committed
      Patch series "Add support for fast mremap".
      
      This series speeds up the mremap(2) syscall by copying page tables at
      the PMD level even for non-THP systems.  There is a concern that the
      extra 'address' argument that mremap passes to pte_alloc may do something
      subtle and architecture-related in the future that would make the scheme
      not work.  We also find that there is no point in passing the 'address'
      to pte_alloc, since it is unused.  This patch therefore removes the
      argument tree-wide, resulting in a nice negative diff, while also
      verifying along the way that the enabled architectures do not do anything
      funky with the 'address' argument that would go unnoticed by the
      optimization.
      
      Build and boot tested on x86-64.  Build tested on arm64.  The config
      enablement patch for arm64 will be posted in the future after more
      testing.
      
      The changes were obtained by applying the following Coccinelle script.
      (thanks Julia for answering all Coccinelle questions!).
      The following fix-ups were done manually:
      * Removal of the address argument from pte_fragment_alloc
      * Removal of the pte_alloc_one_fast definitions from m68k and microblaze.
      
      // Options: --include-headers --no-includes
      // Note: I split the 'identifier fn' line, so if you are manually
      // running it, please unsplit it so it runs for you.
      
      virtual patch
      
      @pte_alloc_func_def depends on patch exists@
      identifier E2;
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      type T2;
      @@
      
       fn(...
      - , T2 E2
       )
       { ... }
      
      @pte_alloc_func_proto_noarg depends on patch exists@
      type T1, T2, T3, T4;
      identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      @@
      
      (
      - T3 fn(T1, T2);
      + T3 fn(T1);
      |
      - T3 fn(T1, T2, T4);
      + T3 fn(T1, T2);
      )
      
      @pte_alloc_func_proto depends on patch exists@
      identifier E1, E2, E4;
      type T1, T2, T3, T4;
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      @@
      
      (
      - T3 fn(T1 E1, T2 E2);
      + T3 fn(T1 E1);
      |
      - T3 fn(T1 E1, T2 E2, T4 E4);
      + T3 fn(T1 E1, T2 E2);
      )
      
      @pte_alloc_func_call depends on patch exists@
      expression E2;
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      @@
      
       fn(...
      -,  E2
       )
      
      @pte_alloc_macro depends on patch exists@
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      identifier a, b, c;
      expression e;
      position p;
      @@
      
      (
      - #define fn(a, b, c) e
      + #define fn(a, b) e
      |
      - #define fn(a, b) e
      + #define fn(a) e
      )
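      For illustration, the effect of the script on a hypothetical prototype
      and call site looks like this (not taken from any particular file):

        /* before */
        pte_t *pte_alloc_one(struct mm_struct *mm, unsigned long address);
        pte = pte_alloc_one(mm, address);

        /* after */
        pte_t *pte_alloc_one(struct mm_struct *mm);
        pte = pte_alloc_one(mm);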
      
      Link: http://lkml.kernel.org/r/20181108181201.88826-2-joelaf@google.com
      Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
      Suggested-by: Kirill A. Shutemov <kirill@shutemov.name>
      Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Julia Lawall <Julia.Lawall@lip6.fr>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: William Kucharski <william.kucharski@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4cf58924
  8. 04 January 2019, 9 commits
    • arm64: compat: Hook up io_pgetevents() for 32-bit tasks · 7e0b44e8
      Will Deacon committed
      Commit 73aeb2cb ("ARM: 8787/1: wire up io_pgetevents syscall")
      hooked up the io_pgetevents() system call for 32-bit ARM, so we can
      do the same for the compat wrapper on arm64.
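      The wiring itself is a table entry plus a syscall-count bump; a hedged
      sketch for arch/arm64/include/asm/unistd32.h (the number shown is the
      32-bit ARM allocation as I recall it, so treat it as an assumption):

        #define __NR_io_pgetevents 399
        __SYSCALL(__NR_io_pgetevents, compat_sys_io_pgetevents)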
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      7e0b44e8
    • arm64: compat: Don't pull syscall number from regs in arm_compat_syscall · 53290432
      Will Deacon committed
      The syscall number may have been changed by a tracer, so we should pass
      the actual number in from the caller instead of pulling it from the
      saved r7 value directly.
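      A hedged sketch of the shape of the change (function and parameter names
      approximate):

        /* before: the handler re-read the number from the saved r7 */
        long compat_arm_syscall(struct pt_regs *regs);

        /* after: the caller passes in the number it actually dispatched on,
         * which may have been rewritten by a tracer */
        long compat_arm_syscall(struct pt_regs *regs, int scno);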
      
      Cc: <stable@vger.kernel.org>
      Cc: Pi-Hsun Shih <pihsun@chromium.org>
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      53290432
    • arm64: compat: Avoid sending SIGILL for unallocated syscall numbers · 169113ec
      Will Deacon committed
      The ARM Linux kernel handles the EABI syscall numbers as follows:
      
        0           - NR_SYSCALLS-1	: Invoke syscall via syscall table
        NR_SYSCALLS - 0xeffff		: -ENOSYS (to be allocated in future)
        0xf0000     - 0xf07ff		: Private syscall or -ENOSYS if not allocated
        > 0xf07ff			: SIGILL
      
      Our compat code gets this wrong and ends up sending SIGILL in response
      to all syscalls greater than NR_SYSCALLS which have a value greater
      than 0x7ff in the bottom 16 bits.
      
      Fix this by defining the end of the ARM private syscall region and
      checking the syscall number against that directly. Update the comment
      while we're at it.
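      A hedged sketch of the corrected check (macro names approximate; the
      region boundaries follow the table above):

        /* End of the ARM private syscall region: 0xf0000 + 0x800 entries. */
        #define __ARM_NR_COMPAT_END	(__ARM_NR_COMPAT_BASE + 0x800)

        	if (scno < __ARM_NR_COMPAT_END)
        		return -ENOSYS;	/* unallocated: fail gracefully */

        	/* Only numbers beyond the private region fall through to SIGILL. */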
      
      Cc: <stable@vger.kernel.org>
      Cc: Dave Martin <Dave.Martin@arm.com>
      Reported-by: Pi-Hsun Shih <pihsun@chromium.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      169113ec
    • arm64/sve: Disentangle <uapi/asm/ptrace.h> from <uapi/asm/sigcontext.h> · 9966a05c
      Dave Martin committed
      Currently, <uapi/asm/sigcontext.h> provides common definitions for
      describing SVE context structures that are also used by the ptrace
      definitions in <uapi/asm/ptrace.h>.
      
      For this reason, a #include of <asm/sigcontext.h> was added in
      ptrace.h, but this turns out to interact badly with userspace code
      that tries to include ptrace.h on top of the libc headers (which may
      provide their own shadow definitions for sigcontext.h).
      
      To make the headers easier for userspace to consume, this patch
      bounces the common definitions into an __SVE_* namespace and moves
      them to a backend header <uapi/asm/sve_context.h> that can be
      included by the other headers as appropriate.  This should allow
      ptrace.h to be used alongside libc's sigcontext.h (if any) without
      ill effects.
      
      This should make the situation unambiguous: <asm/sigcontext.h> is
      the header to include for the sigframe-specific definitions, while
      <asm/ptrace.h> is the header to include for ptrace-specific
      definitions.
      
      To avoid conflicting with existing usage, <asm/sigcontext.h>
      remains the canonical way to get the common definitions for
      SVE_VQ_MIN, sve_vq_from_vl() etc., both in userspace and in the
      kernel: relying on these being defined as a side effect of
      including just <asm/ptrace.h> was never intended to be safe.
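      A hedged sketch of the resulting header layout (macro names illustrative
      of the __SVE_* namespace, not an exhaustive list):

        /* uapi/asm/sve_context.h: backend definitions shared by both headers */
        #define __SVE_VQ_BYTES	16	/* bytes per vector-length quantum */

        /* uapi/asm/sigcontext.h: keeps the canonical, unprefixed names */
        #include <asm/sve_context.h>
        #define SVE_VQ_BYTES	__SVE_VQ_BYTES

        /* uapi/asm/ptrace.h: now only needs the backend header */
        #include <asm/sve_context.h>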
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      9966a05c
    • arm64/sve: ptrace: Fix SVE_PT_REGS_OFFSET definition · ee1b465b
      Dave Martin committed
      SVE_PT_REGS_OFFSET is supposed to indicate the offset for skipping
      over the ptrace NT_ARM_SVE header (struct user_sve_header) to the
      start of the SVE register data proper.
      
      However, currently SVE_PT_REGS_OFFSET is defined in terms of struct
      sve_context, which is wrong: that structure describes the SVE
      header in the signal frame, not in the ptrace regset.
      
      This patch fixes the definition to use the ptrace header structure
      struct user_sve_header instead.
      
      By good fortune, the two structures are the same size anyway, so
      there is no functional or ABI change.
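      A hedged before/after sketch (the rounding to the vector-quantum granule
      is shown as expected rather than quoted, so treat the exact expression
      as approximate):

        /* before: sized from the signal-frame header */
        #define SVE_PT_REGS_OFFSET \
        	((sizeof(struct sve_context) + (SVE_VQ_BYTES - 1)) \
        		/ SVE_VQ_BYTES * SVE_VQ_BYTES)

        /* after: sized from the ptrace regset header */
        #define SVE_PT_REGS_OFFSET \
        	((sizeof(struct user_sve_header) + (SVE_VQ_BYTES - 1)) \
        		/ SVE_VQ_BYTES * SVE_VQ_BYTES)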
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      ee1b465b
    • arm64: replace arm64-obj-* in Makefile with obj-* · 2f328fea
      Masahiro Yamada committed
      Use the standard obj-$(CONFIG_...) syntax.  The behavior is still the
      same.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2f328fea
    • Remove 'type' argument from access_ok() function · 96d4f267
      Linus Torvalds committed
      Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument
      of the user address range verification function since we got rid of the
      old racy i386-only code to walk page tables by hand.
      
      It existed because the original 80386 would not honor the write protect
      bit when in kernel mode, so you had to do COW by hand before doing any
      user access.  But we haven't supported that in a long time, and these
      days the 'type' argument is a purely historical artifact.
      
      A discussion about extending 'user_access_begin()' to do the range
      checking resulted in this patch, because there is no way we're going to
      move the old VERIFY_xyz interface to that model.  And it's best done at
      the end of the merge window when I've done most of my merges, so let's
      just get this done once and for all.
      
      This patch was mostly done with a sed-script, with manual fix-ups for
      the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form.
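      An illustrative call-site conversion, mirroring what the sed script did
      (not taken from any particular file):

        /* before */
        if (!access_ok(VERIFY_WRITE, buf, len))
        	return -EFAULT;

        /* after */
        if (!access_ok(buf, len))
        	return -EFAULT;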
      
      There were a couple of notable cases:
      
       - csky still had the old "verify_area()" name as an alias.
      
       - the iter_iov code had magical hardcoded knowledge of the actual
         values of VERIFY_{READ,WRITE} (not that they mattered, since nothing
         really used it)
      
       - microblaze used the type argument for a debug printout
      
      but other than those oddities this should be a total no-op patch.
      
      I tried to fix up all architectures, did fairly extensive grepping for
      access_ok() uses, and the changes are trivial, but I may have missed
      something.  Any missed conversion should be trivially fixable, though.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      96d4f267
    • arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region · c8a43c18
      Yueyi Li committed
      When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), the top 4K of kernel
      virtual address space may be mapped to physical addresses despite being
      reserved for ERR_PTR values.
      
      Fix the randomization of the linear region so that we avoid mapping the
      last page of the virtual address space.
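      A conceptual sketch of the fix, assuming the memstart_offset_seed based
      randomization in arch/arm64/mm/init.c; names and arithmetic are
      approximate, not the verbatim diff:

        if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
        	/* Round the range *down* to whole ARM64_MEMSTART_ALIGN chunks
        	 * so the randomized offset can never reach the final chunk,
        	 * keeping the top page free for ERR_PTR values. */
        	range /= ARM64_MEMSTART_ALIGN;
        	memstart_addr -= ARM64_MEMSTART_ALIGN *
        			 ((range * memstart_offset_seed) >> 16);
        }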
      
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: liyueyi <liyueyi@live.com>
      [will: rewrote commit message; merged in suggestion from Ard]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      c8a43c18
    • arm64: entry: remove unused register aliases · 8c2c596f
      Mark Rutland committed
      In commit:
      
        3b714275 ("arm64: convert native/compat syscall entry to C")
      
      ... we moved the syscall invocation code from assembly to C, but left
      behind a number of register aliases which are now unused.
      
      Let's remove them before they confuse someone.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      8c2c596f
  9. 03 January 2019, 1 commit
    • arm64: smp: Fix compilation error · 1236cd2b
      Shaokun Zhang committed
      In the arm64 updates for 4.21 there is a compilation error:
      arch/arm64/kernel/head.S: Assembler messages:
      arch/arm64/kernel/head.S:824: Error: missing ')'
      arch/arm64/kernel/head.S:824: Error: missing ')'
      arch/arm64/kernel/head.S:824: Error: missing ')'
      arch/arm64/kernel/head.S:824: Error: unexpected characters following instruction at operand 2 -- `mov x2,#(2)|(2U<<(8))'
      scripts/Makefile.build:391: recipe for target 'arch/arm64/kernel/head.o' failed
      make[1]: *** [arch/arm64/kernel/head.o] Error 1
      GCC version is gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
      
      Let's fix it using the UL() macro.
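      A hedged sketch of the kind of change (the constant names come from the
      error message above; the exact header lines are approximate):

        /* UL() keeps the constant legal in both C and assembly, unlike the
         * C-only "U" suffix that the assembler rejects. */
        #define CPU_STUCK_REASON_SHIFT		(8)
        #define CPU_STUCK_REASON_52_BIT_VA	(UL(1) << CPU_STUCK_REASON_SHIFT)
        #define CPU_STUCK_REASON_NO_GRAN	(UL(2) << CPU_STUCK_REASON_SHIFT)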
      
      Fixes: 66f16a24 ("arm64: smp: Rework early feature mismatched detection")
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Tested-by: John Stultz <john.stultz@linaro.org>
      Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
      [will: consistent use of UL() for all shifts in asm constants]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      1236cd2b
  10. 01 January 2019, 1 commit
  11. 30 December 2018, 3 commits
    • kgdb/treewide: constify struct kgdb_arch arch_kgdb_ops · cc028297
      Christophe Leroy committed
      checkpatch.pl reports the following:
      
        WARNING: struct kgdb_arch should normally be const
        #28: FILE: arch/mips/kernel/kgdb.c:397:
        +struct kgdb_arch arch_kgdb_ops = {
      
      This report makes sense: like all the other ops structs, this one
      should also be const.  This patch makes that change.
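      A hedged sketch of the pattern (one architecture shown; ops fields
      elided):

        /* include/linux/kgdb.h */
        extern const struct kgdb_arch arch_kgdb_ops;

        /* arch/<arch>/kernel/kgdb.c */
        const struct kgdb_arch arch_kgdb_ops = {
        	/* .gdb_bpt_instr = { ... }, .flags = ..., per-arch hooks */
        };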
      
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Rich Felker <dalias@libc.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: x86@kernel.org
      Acked-by: Daniel Thompson <daniel.thompson@linaro.org>
      Acked-by: Paul Burton <paul.burton@mips.com>
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Acked-by: Borislav Petkov <bp@suse.de>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      cc028297
    • kgdb: Fix kgdb_roundup_cpus() for arches who used smp_call_function() · 3cd99ac3
      Douglas Anderson committed
      When I had lockdep turned on and dropped into kgdb I got a nice splat
      on my system.  Specifically it hit:
        DEBUG_LOCKS_WARN_ON(current->hardirq_context)
      
      Specifically it looked like this:
        sysrq: SysRq : DEBUG
        ------------[ cut here ]------------
        DEBUG_LOCKS_WARN_ON(current->hardirq_context)
        WARNING: CPU: 0 PID: 0 at .../kernel/locking/lockdep.c:2875 lockdep_hardirqs_on+0xf0/0x160
        CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.19.0 #27
        pstate: 604003c9 (nZCv DAIF +PAN -UAO)
        pc : lockdep_hardirqs_on+0xf0/0x160
        ...
        Call trace:
         lockdep_hardirqs_on+0xf0/0x160
         trace_hardirqs_on+0x188/0x1ac
         kgdb_roundup_cpus+0x14/0x3c
         kgdb_cpu_enter+0x53c/0x5cc
         kgdb_handle_exception+0x180/0x1d4
         kgdb_compiled_brk_fn+0x30/0x3c
         brk_handler+0x134/0x178
         do_debug_exception+0xfc/0x178
         el1_dbg+0x18/0x78
         kgdb_breakpoint+0x34/0x58
         sysrq_handle_dbg+0x54/0x5c
         __handle_sysrq+0x114/0x21c
         handle_sysrq+0x30/0x3c
         qcom_geni_serial_isr+0x2dc/0x30c
        ...
        ...
        irq event stamp: ...45
        hardirqs last  enabled at (...44): [...] __do_softirq+0xd8/0x4e4
        hardirqs last disabled at (...45): [...] el1_irq+0x74/0x130
        softirqs last  enabled at (...42): [...] _local_bh_enable+0x2c/0x34
        softirqs last disabled at (...43): [...] irq_exit+0xa8/0x100
        ---[ end trace adf21f830c46e638 ]---
      
      Looking closely at it, it seems like a really bad idea to be calling
      local_irq_enable() in kgdb_roundup_cpus().  If nothing else that seems
      like it could violate spinlock semantics and cause a deadlock.
      
      Instead, let's use a private csd alongside
      smp_call_function_single_async() to round up the other CPUs.  Using
      smp_call_function_single_async() doesn't require interrupts to be
      enabled so we can remove the offending bit of code.
      
      In order to avoid duplicating this across all the architectures that
      use the default kgdb_roundup_cpus(), we'll add a "weak" implementation
      to debug_core.c.
      
      Looking at all the people who previously had copies of this code,
      there were a few variants.  I've attempted to keep the variants
      working like they used to.  Specifically:
      * For arch/arc we passed NULL to kgdb_nmicallback() instead of
        get_irq_regs().
      * For arch/mips there was a bit of extra code around
        kgdb_nmicallback()
      
      NOTE: In this patch we will still get into trouble if we try to round
      up a CPU that failed to round up before.  We'll try to round it up
      again and potentially hang when we try to grab the csd lock.  That's
      not new behavior but we'll still try to do better in a future patch.
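      A hedged sketch of the approach described above (simplified; exact names
      and the failed-roundup handling are omitted):

        static DEFINE_PER_CPU(call_single_data_t, kgdb_roundup_csd);

        static void kgdb_call_nmi_hook(void *ignored)
        {
        	kgdb_nmicallback(raw_smp_processor_id(), get_irq_regs());
        }

        void __weak kgdb_roundup_cpus(void)
        {
        	int this_cpu = raw_smp_processor_id();
        	int cpu;

        	for_each_online_cpu(cpu) {
        		if (cpu == this_cpu)
        			continue;

        		/* The async IPI does not require interrupts to be
        		 * enabled locally, so the local_irq_enable() is gone. */
        		per_cpu(kgdb_roundup_csd, cpu).func = kgdb_call_nmi_hook;
        		smp_call_function_single_async(cpu,
        				&per_cpu(kgdb_roundup_csd, cpu));
        	}
        }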
      Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      3cd99ac3
    • kgdb: Remove irq flags from roundup · 9ef7fa50
      Douglas Anderson committed
      The function kgdb_roundup_cpus() was passed a parameter that was
      documented as:
      
      > the flags that will be used when restoring the interrupts. There is
      > local_irq_save() call before kgdb_roundup_cpus().
      
      Nobody used those flags.  Anyone who wanted to temporarily turn on
      interrupts just did local_irq_enable() and local_irq_disable() without
      looking at them.  So we can definitely remove the flags.
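      The interface change itself is just dropping the parameter; a small
      sketch:

        /* before */
        extern void kgdb_roundup_cpus(unsigned long flags);

        /* after */
        extern void kgdb_roundup_cpus(void);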
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      9ef7fa50
  12. 29 December 2018, 5 commits