1. 09 August 2019 (7 commits)
    • arm64: mm: Logic to make offset_ttbr1 conditional · c812026c
      Steve Capper authored
      When running with a 52-bit userspace VA and a 48-bit kernel VA we offset
      ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to
      be used for both userspace and kernel.
      
      Moving to a 52-bit kernel VA, we no longer require this offset to
      ttbr1_el1 when running on a system with HW support for 52-bit VAs.
      
      This patch introduces conditional logic to offset_ttbr1 to query
      SYS_ID_AA64MMFR2_EL1 whenever 52-bit VAs are selected. If there is HW
      support for 52-bit VAs then the ttbr1 offset is skipped.
      
      We choose to read a system register rather than vabits_actual because
      offset_ttbr1 can be called in places where the kernel data is not
      actually mapped.
      
      Calls to offset_ttbr1 appear to be made from rarely called code paths,
      so this extra logic is not expected to adversely affect performance.
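      
      A minimal C sketch of the check, assuming the usual sysreg accessor
      names (the in-tree offset_ttbr1 is an assembly macro, so this is
      illustrative only):
      
          static inline bool hw_supports_52bit_va(void)
          {
                  u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
      
                  /* VARange, ID_AA64MMFR2_EL1[19:16]: 1 => HW has 52-bit VAs */
                  return ((mmfr2 >> ID_AA64MMFR2_LVA_SHIFT) & 0xf) == 0x1;
          }
      
      If the register reports 52-bit support, the offset normally applied to
      ttbr1_el1 is simply skipped.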
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      c812026c
    • arm64: mm: Introduce vabits_actual · 5383cc6e
      Steve Capper authored
      In order to support 52-bit kernel addresses detectable at boot time,
      one needs to know the actual VA_BITS detected. A new variable,
      vabits_actual, is introduced in this commit and employed for the KVM
      hypervisor layout, KASAN, fault handling and phys-to/from-virt
      translation, where there would normally be compile-time constants.
      
      In order to maintain performance in phys_to_virt, another variable
      physvirt_offset is introduced.
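      
      A hedged sketch of the mechanism: with the offset computed once at
      boot, the linear-map translation stays a single subtraction:
      
          u64 vabits_actual __ro_after_init;      /* detected VA size */
          s64 physvirt_offset __ro_after_init;    /* PHYS_OFFSET - PAGE_OFFSET */
      
          #define __phys_to_virt(x)  ((unsigned long)((x) - physvirt_offset))
      
          void __init arm64_memblock_init(void)
          {
                  /* computed once boot code has detected the real VA size */
                  physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
          }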
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      5383cc6e
    • arm64: mm: Introduce VA_BITS_MIN · 90ec95cd
      Steve Capper authored
      In order to support 52-bit kernel addresses detectable at boot time, the
      kernel needs to know the most conservative VA_BITS possible should it
      need to fall back to this quantity due to lack of hardware support.
      
      A new compile time constant VA_BITS_MIN is introduced in this patch and
      it is employed in the KASAN end address, KASLR, and EFI stub.
      
      For Arm, if 52-bit VA support is unavailable, the fallback is 48 bits.
      
      In other words: VA_BITS_MIN = min(48, VA_BITS)
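      
      A hedged sketch of the new constant, built on the existing
      Kconfig-driven VA_BITS definition:
      
          #if VA_BITS > 48
          #define VA_BITS_MIN     (48)
          #else
          #define VA_BITS_MIN     (VA_BITS)
          #endif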
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      90ec95cd
    • arm64: dump: De-constify VA_START and KASAN_SHADOW_START · 99426e5e
      Steve Capper authored
      The kernel page table dumper assumes that the placement of VA regions is
      constant and determined at compile time. As we are about to introduce
      variable VA logic, we need to be able to determine certain regions at
      boot time.
      
      Specifically the VA_START and KASAN_SHADOW_START will depend on whether
      or not the system is booted with 52-bit kernel VAs.
      
      This patch adds logic to the kernel page table dumper such that these
      regions can be computed at boot time.
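      
      A hedged C sketch of the approach (marker table and init hook names
      assumed): the address-marker table loses its const qualifier and the
      variable entries are filled in once the VA layout is known:
      
          static struct addr_marker address_markers[] = {
                  { 0 /* VA_START */,           "Kernel VA start" },
                  { 0 /* KASAN_SHADOW_START */, "Kasan shadow start" },
          };
      
          static int __init ptdump_init(void)
          {
                  /* Patch in the values at boot, not at compile time. */
                  address_markers[0].start_address = VA_START;
                  address_markers[1].start_address = KASAN_SHADOW_START;
                  return 0;
          }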
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      99426e5e
    • arm64: kasan: Switch to using KASAN_SHADOW_OFFSET · 6bd1d0be
      Steve Capper authored
      KASAN_SHADOW_OFFSET is a constant that is supplied to GCC as a command
      line argument and affects the code generation of the inline address
      sanitiser.
      
      Essentially, for an example memory access:
          *ptr1 = val;
      The compiler will insert logic similar to the below:
          shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET)
          if (somethingWrong(shadowValue))
              flagAnError();
      
      This code sequence is inserted into many places, thus
      KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel
      text.
      
      If we want to run a single kernel binary with multiple address spaces,
      then we need to do this with KASAN_SHADOW_OFFSET fixed.
      
      Thankfully, due to the way the KASAN_SHADOW_OFFSET is used to provide
      shadow addresses we know that the end of the shadow region is constant
      w.r.t. VA space size:
          KASAN_SHADOW_END = (~0 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
      
      This means that if we increase the size of the VA space, the start of
      the KASAN region expands into lower addresses whilst the end of the
      KASAN region is fixed.
      
      Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time
      via build scripts, with the VA size as a parameter. (There are
      build-time checks in the C code too, to ensure that the expected
      values are derived.) It is sufficient, and indeed a simplification, to
      remove the build scripts (and build-time checks) entirely and instead
      provide the KASAN_SHADOW_OFFSET values directly.
      
      This patch removes the logic to compute KASAN_SHADOW_OFFSET in the
      arm64 Makefile, and instead adopts the approach used by x86 of
      supplying the offset values in Kconfig. To help debug/develop future
      VA space changes, the Makefile logic has been preserved in a script in
      the arm64 Documentation folder.
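      
      As a hedged illustration of the invariant the supplied constant must
      keep (the 48-bit value shown is what the end-address formula above
      implies for a flipped 48-bit layout; treat the exact number as
      illustrative):
      
          #define KASAN_SHADOW_SCALE_SHIFT  3
          #define KASAN_SHADOW_OFFSET       0xdfffa00000000000UL  /* illustrative */
      
          /* Constant regardless of the configured VA size: */
          #define KASAN_SHADOW_END \
                  ((~0UL >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET)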
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      6bd1d0be
    • arm64: mm: Flip kernel VA space · 14c127c9
      Steve Capper authored
      In order to allow for a KASAN shadow that changes size at boot time,
      one must fix KASAN_SHADOW_END for both 48- and 52-bit VAs and "grow"
      the start address. Also, it is highly desirable to maintain the same
      function addresses in the kernel .text between VA sizes. Both of these
      requirements necessitate flipping the kernel address-space halves such
      that the direct linear map occupies the lower addresses.
      
      This patch puts the direct linear map in the lower addresses of the
      kernel VA range and everything else in the higher ranges.
      
      We need to adjust:
       *) KASAN shadow region placement logic,
       *) KASAN_SHADOW_OFFSET computation logic,
       *) virt_to_phys, phys_to_virt checks,
       *) page table dumper.
      
      These are all small changes, that need to take place atomically, so they
      are bundled into this commit.
      
      As part of the re-arrangement, a 2MB guard region (to preserve
      alignment for the fixed map) is added after the vmemmap; otherwise the
      vmemmap could intersect with IS_ERR pointers.
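      
      A hedged sketch of the flipped layout constants (macro shapes
      assumed): the linear map now starts at the lowest kernel address and
      the rest of the kernel occupies the upper half:
      
          /* Lowest kernel address: the direct linear map starts here. */
          #define PAGE_OFFSET  (UL(0xffffffffffffffff) - (UL(1) << VA_BITS) + 1)
      
          /* Everything else (vmalloc, kimage, fixmap, ...) lives above this. */
          #define VA_START     (UL(0xffffffffffffffff) - (UL(1) << (VA_BITS - 1)) + 1)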
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      14c127c9
    • arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START · 9cb1c5dd
      Steve Capper authored
      Currently there are assumptions about the alignment of VMEMMAP_START
      and PAGE_OFFSET that won't be valid after this series is applied.
      
      These assumptions are in the form of bitwise operators being used
      instead of addition and subtraction when calculating addresses.
      
      This patch replaces these bitwise operators with addition/subtraction.
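      
      A hedged before/after sketch of the kind of change involved, using
      page_to_virt as the example: OR is only equivalent to addition when
      the base is aligned so the OR-ed bits are guaranteed clear; plain
      addition stays correct once that alignment assumption goes away:
      
          /* before: relies on PAGE_OFFSET alignment */
          #define page_to_virt(page) ((void *)((__page_to_voff(page)) | PAGE_OFFSET))
      
          /* after: no alignment assumption */
          #define page_to_virt(page) ((void *)((__page_to_voff(page)) + PAGE_OFFSET))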
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      9cb1c5dd
  2. 02 August 2019 (2 commits)
    • arm64: Make debug exception handlers visible from RCU · d8bb6718
      Masami Hiramatsu authored
      Make debug exceptions visible to RCU so that synchronize_rcu()
      correctly tracks the debug exception handler.
      
      This also introduces sanity checks for user-mode exceptions, similar
      to x86's ist_enter()/ist_exit().
      
      A debug exception can interrupt the idle task. For example, the kernel
      warns if we put a kprobe on a function called from the idle task, as
      below. The warning message suggests that rcu_read_lock() caused the
      problem, but what it actually means is that RCU has lost track of a
      context that is already in NMI/IRQ.
      
        /sys/kernel/debug/tracing # echo p default_idle_call >> kprobe_events
        /sys/kernel/debug/tracing # echo 1 > events/kprobes/enable
        /sys/kernel/debug/tracing # [  135.122237]
        [  135.125035] =============================
        [  135.125310] WARNING: suspicious RCU usage
        [  135.125581] 5.2.0-08445-g9187c508bdc7 #20 Not tainted
        [  135.125904] -----------------------------
        [  135.126205] include/linux/rcupdate.h:594 rcu_read_lock() used illegally while idle!
        [  135.126839]
        [  135.126839] other info that might help us debug this:
        [  135.126839]
        [  135.127410]
        [  135.127410] RCU used illegally from idle CPU!
        [  135.127410] rcu_scheduler_active = 2, debug_locks = 1
        [  135.128114] RCU used illegally from extended quiescent state!
        [  135.128555] 1 lock held by swapper/0/0:
        [  135.128944]  #0: (____ptrval____) (rcu_read_lock){....}, at: call_break_hook+0x0/0x178
        [  135.130499]
        [  135.130499] stack backtrace:
        [  135.131192] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.2.0-08445-g9187c508bdc7 #20
        [  135.131841] Hardware name: linux,dummy-virt (DT)
        [  135.132224] Call trace:
        [  135.132491]  dump_backtrace+0x0/0x140
        [  135.132806]  show_stack+0x24/0x30
        [  135.133133]  dump_stack+0xc4/0x10c
        [  135.133726]  lockdep_rcu_suspicious+0xf8/0x108
        [  135.134171]  call_break_hook+0x170/0x178
        [  135.134486]  brk_handler+0x28/0x68
        [  135.134792]  do_debug_exception+0x90/0x150
        [  135.135051]  el1_dbg+0x18/0x8c
        [  135.135260]  default_idle_call+0x0/0x44
        [  135.135516]  cpu_startup_entry+0x2c/0x30
        [  135.135815]  rest_init+0x1b0/0x280
        [  135.136044]  arch_call_rest_init+0x14/0x1c
        [  135.136305]  start_kernel+0x4d4/0x500
        [  135.136597]
      
      Making the debug exception visible to RCU fixes this warning.
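      
      A hedged sketch of the fix (helper names assumed): bracket kernel-mode
      debug exceptions with the NMI entry/exit hooks so that RCU treats them
      like NMIs:
      
          static void debug_exception_enter(struct pt_regs *regs)
          {
                  /* We may have interrupted an RCU extended quiescent state. */
                  if (!user_mode(regs))
                          rcu_nmi_enter();
                  preempt_disable();
          }
      
          static void debug_exception_exit(struct pt_regs *regs)
          {
                  preempt_enable_no_resched();
                  if (!user_mode(regs))
                          rcu_nmi_exit();
          }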
      Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
      Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
      d8bb6718
    • arm64: kprobes: Recover pstate.D in single-step exception handler · b3980e48
      Masami Hiramatsu authored
      kprobes manipulates the interrupted PSTATE for single-step and
      doesn't restore it. Thus, if we put a kprobe where pstate.D (debug)
      is masked, the mask will be cleared after the kprobe hits.
      
      Moreover, in the most complicated case, this can lead to a kernel
      crash with the message below when a nested kprobe hits.
      
      [  152.118921] Unexpected kernel single-step exception at EL1
      
      When the 1st kprobe hits, do_debug_exception() is called.
      At this point, the debug exception (pstate.D) must be masked (=1).
      But if another kprobe hits before the single-step of the first kprobe
      (e.g. inside the user pre_handler), it unmasks the debug exception
      (pstate.D = 0) and returns.
      Then, when the 1st kprobe sets up the single-step, it saves the
      current DAIF, masks DAIF, enables single-step, and restores DAIF.
      However, since the "D" flag in DAIF was cleared by the 2nd kprobe, the
      single-step exception happens soon after DAIF is restored.
      
      This was introduced by commit 7419333f ("arm64: kprobe:
      Always clear pstate.D in breakpoint exception handler").
      
      To solve this issue, store all DAIF bits and restore them after
      single-stepping.
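      
      A hedged sketch of the fix (field name assumed): save the complete
      DAIF set when arming the single-step and restore it verbatim once the
      step completes:
      
          /* arming the single-step */
          kcb->saved_irqflag = regs->pstate & DAIF_MASK;  /* all of D, A, I, F */
          regs->pstate |= PSR_I_BIT;      /* keep IRQs masked during the step */
          regs->pstate &= ~PSR_D_BIT;     /* unmask debug so the step can fire */
      
          /* after the single-step exception has been handled */
          regs->pstate &= ~DAIF_MASK;
          regs->pstate |= kcb->saved_irqflag;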
      Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
      Fixes: 7419333f ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler")
      Reviewed-by: James Morse <james.morse@arm.com>
      Tested-by: James Morse <james.morse@arm.com>
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
      b3980e48
  3. 01 August 2019 (8 commits)
  4. 31 July 2019 (1 commit)
  5. 29 July 2019 (4 commits)
    • arm64: module: Mark expected switch fall-through · eca92a53
      Anders Roxell authored
      When fall-through warnings were enabled by default, the following
      warnings started to show up:
      
      ../arch/arm64/kernel/module.c: In function ‘apply_relocate_add’:
      ../arch/arm64/kernel/module.c:316:19: warning: this statement may fall
       through [-Wimplicit-fallthrough=]
          overflow_check = false;
          ~~~~~~~~~~~~~~~^~~~~~~
      ../arch/arm64/kernel/module.c:317:3: note: here
         case R_AARCH64_MOVW_UABS_G0:
         ^~~~
      ../arch/arm64/kernel/module.c:322:19: warning: this statement may fall
       through [-Wimplicit-fallthrough=]
          overflow_check = false;
          ~~~~~~~~~~~~~~~^~~~~~~
      ../arch/arm64/kernel/module.c:323:3: note: here
         case R_AARCH64_MOVW_UABS_G1:
         ^~~~
      
      Rework so that the compiler doesn't warn about fall-through.
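      
      The usual fix is the comment annotation that GCC's
      -Wimplicit-fallthrough heuristic recognises; a hedged sketch, with the
      relocation handling elided into a hypothetical helper:
      
          switch (ELF64_R_TYPE(rel[i].r_info)) {
          case R_AARCH64_MOVW_UABS_G0_NC:
                  overflow_check = false;
                  /* Fall through */
          case R_AARCH64_MOVW_UABS_G0:
                  ovf = do_movw_reloc(loc, val);  /* hypothetical helper */
                  break;
          }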
      
      Fixes: d93512ef0f0e ("Makefile: Globally enable fall-through warning")
      Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
      Signed-off-by: Will Deacon <will@kernel.org>
      eca92a53
    • arm64: smp: Mark expected switch fall-through · 66554739
      Anders Roxell authored
      When fall-through warnings were enabled by default, the following
      warning started to show up:
      
      In file included from ../include/linux/kernel.h:15,
                       from ../include/linux/list.h:9,
                       from ../include/linux/kobject.h:19,
                       from ../include/linux/of.h:17,
                       from ../include/linux/irqdomain.h:35,
                       from ../include/linux/acpi.h:13,
                       from ../arch/arm64/kernel/smp.c:9:
      ../arch/arm64/kernel/smp.c: In function ‘__cpu_up’:
      ../include/linux/printk.h:302:2: warning: this statement may fall
       through [-Wimplicit-fallthrough=]
        printk(KERN_CRIT pr_fmt(fmt), ##__VA_ARGS__)
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      ../arch/arm64/kernel/smp.c:156:4: note: in expansion of macro ‘pr_crit’
          pr_crit("CPU%u: may not have shut down cleanly\n", cpu);
          ^~~~~~~
      ../arch/arm64/kernel/smp.c:157:3: note: here
         case CPU_STUCK_IN_KERNEL:
         ^~~~
      
      Rework so that the compiler doesn't warn about fall-through.
      
      Fixes: d93512ef0f0e ("Makefile: Globally enable fall-through warning")
      Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
      Signed-off-by: Will Deacon <will@kernel.org>
      66554739
    • arm64: hw_breakpoint: Fix warnings about implicit fallthrough · 75a382f1
      Will Deacon authored
      Now that -Wimplicit-fallthrough is passed to GCC by default, the kernel
      build has suddenly got noisy. Annotate the two fall-through cases in our
      hw_breakpoint implementation, since they are both intentional.
      Reported-by: Anders Roxell <anders.roxell@linaro.org>
      Signed-off-by: Will Deacon <will@kernel.org>
      75a382f1
    • arm64: compat: Allow single-byte watchpoints on all addresses · 849adec4
      Will Deacon authored
      Commit d968d2b8 ("ARM: 7497/1: hw_breakpoint: allow single-byte
      watchpoints on all addresses") changed the validation requirements for
      hardware watchpoints on arch/arm/. Update our compat layer to implement
      the same relaxation.
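      
      A hedged sketch of the relaxed validation, modelled on the arch/arm/
      change it mirrors (the exact switch in the compat path may differ): a
      single-byte watchpoint becomes acceptable at any byte offset within
      the word:
      
          switch (offset) {
          case 0:
                  /* Aligned */
                  break;
          case 1:
          case 2:
                  /* Allow halfword watchpoints and breakpoints. */
                  if (hw->ctrl.len == ARM_BREAKPOINT_LEN_2)
                          break;
                  /* Fall through */
          case 3:
                  /* Allow single byte watchpoint. */
                  if (hw->ctrl.len == ARM_BREAKPOINT_LEN_1)
                          break;
                  /* Fall through */
          default:
                  return -EINVAL;
          }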
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
      849adec4
  6. 25 July 2019 (1 commit)
    • treewide: add "WITH Linux-syscall-note" to SPDX tag of uapi headers · d9c52522
      Masahiro Yamada authored
      UAPI headers licensed under the GPL are supposed to carry the
      "WITH Linux-syscall-note" exception so that they can be included in
      non-GPL user-space application code.
      
      The exception note is missing in some UAPI headers.
      
      Some of them slipped in via the treewide conversion commit b2441318
      ("License cleanup: add SPDX GPL-2.0 license identifier to files with
      no license"). To see them, just run:
      
        $ git show --oneline b2441318 -- arch/x86/include/uapi/asm/
      
      I believe these omissions are unintentional and should be fixed too.
      
      This patch was generated by the following script:
      
        git grep -l --not -e Linux-syscall-note --and -e SPDX-License-Identifier \
          -- :arch/*/include/uapi/asm/*.h :include/uapi/ :^*/Kbuild |
        while read file
        do
                sed -i -e '/[[:space:]]OR[[:space:]]/s/\(GPL-[^[:space:]]*\)/(\1 WITH Linux-syscall-note)/g' \
                -e '/[[:space:]]or[[:space:]]/s/\(GPL-[^[:space:]]*\)/(\1 WITH Linux-syscall-note)/g' \
                -e '/[[:space:]]OR[[:space:]]/!{/[[:space:]]or[[:space:]]/!s/\(GPL-[^[:space:]]*\)/\1 WITH Linux-syscall-note/g}' $file
        done
      
      After this patch is applied, there are 5 UAPI headers that do not contain
      "WITH Linux-syscall-note". They are kept untouched since this exception
      applies only to GPL variants.
      
        $ git grep --not -e Linux-syscall-note --and -e SPDX-License-Identifier \
          -- :arch/*/include/uapi/asm/*.h :include/uapi/ :^*/Kbuild
        include/uapi/drm/panfrost_drm.h:/* SPDX-License-Identifier: MIT */
        include/uapi/linux/batman_adv.h:/* SPDX-License-Identifier: MIT */
        include/uapi/linux/qemu_fw_cfg.h:/* SPDX-License-Identifier: BSD-3-Clause */
        include/uapi/linux/vbox_err.h:/* SPDX-License-Identifier: MIT */
        include/uapi/linux/virtio_iommu.h:/* SPDX-License-Identifier: BSD-3-Clause */
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      d9c52522
  7. 23 July 2019 (2 commits)
  8. 22 July 2019 (11 commits)
  9. 19 July 2019 (2 commits)
    • mm/memory_hotplug: allow arch_remove_memory() without CONFIG_MEMORY_HOTREMOVE · 80ec922d
      David Hildenbrand authored
      We want to improve error handling while adding memory, by allowing the
      use of arch_remove_memory() and __remove_pages() even if
      CONFIG_MEMORY_HOTREMOVE is not set, to e.g. implement something like:
      
      	arch_add_memory()
      	rc = do_something();
      	if (rc) {
      		arch_remove_memory();
      	}
      
      We won't get rid of CONFIG_MEMORY_HOTREMOVE for now, as removing it
      would pull in quite a few dependencies for memory offlining.
      
      Link: http://lkml.kernel.org/r/20190527111152.16324-7-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Oscar Salvador <osalvador@suse.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: "mike.travis@hpe.com" <mike.travis@hpe.com>
      Cc: Andrew Banman <andrew.banman@hpe.com>
      Cc: Arun KS <arunks@codeaurora.org>
      Cc: Qian Cai <cai@lca.pw>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Logan Gunthorpe <logang@deltatee.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chintan Pandya <cpandya@codeaurora.org>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jun Yao <yaojun8558363@gmail.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yu Zhao <yuzhao@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      80ec922d
    • arm64/mm: add temporary arch_remove_memory() implementation · 22eb6346
      David Hildenbrand authored
      A proper arch_remove_memory() implementation is on its way; it will
      also cleanly remove the page tables created in arch_add_memory() in
      case something goes wrong.
      
      As we want to use arch_remove_memory() in case something goes wrong
      during memory hotplug after arch_add_memory() has finished, let's add
      a temporary hack that is sufficient until we get a proper
      implementation that cleans up the page table entries.
      
      We will remove CONFIG_MEMORY_HOTREMOVE around this code in follow up
      patches.
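      
      A hedged sketch of the temporary implementation (the FIXME is the
      point: page tables are deliberately left in place for now):
      
          #ifdef CONFIG_MEMORY_HOTREMOVE
          void arch_remove_memory(int nid, u64 start, u64 size,
                                  struct vmem_altmap *altmap)
          {
                  unsigned long start_pfn = start >> PAGE_SHIFT;
                  unsigned long nr_pages = size >> PAGE_SHIFT;
                  struct zone *zone = page_zone(pfn_to_page(start_pfn));
      
                  /* FIXME: page tables are not torn down or freed here yet. */
                  __remove_pages(zone, start_pfn, nr_pages, altmap);
          }
          #endif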
      
      Link: http://lkml.kernel.org/r/20190527111152.16324-5-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Chintan Pandya <cpandya@codeaurora.org>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Jun Yao <yaojun8558363@gmail.com>
      Cc: Yu Zhao <yuzhao@google.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: Andrew Banman <andrew.banman@hpe.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arun KS <arunks@codeaurora.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Logan Gunthorpe <logang@deltatee.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: "mike.travis@hpe.com" <mike.travis@hpe.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Oscar Salvador <osalvador@suse.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Qian Cai <cai@lca.pw>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Wei Yang <richard.weiyang@gmail.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      22eb6346
  10. 17 July 2019 (2 commits)