1. 23 Nov 2018, 1 commit
  2. 01 Nov 2018, 2 commits
  3. 31 Oct 2018, 2 commits
  4. 20 Oct 2018, 2 commits
    • powerpc/time: Only set CONFIG_ARCH_HAS_SCALED_CPUTIME on PPC64 · abcff86d
      Committed by Christophe Leroy
      Scaled cputime is only meaningful when the processor has
      SPURR and/or PURR, which means only on PPC64.
      
      Removing it on PPC32 significantly reduces the size of
      vtime_account_system() and vtime_account_idle() on an 8xx:
      
      Before:
      00000000 l     F .text	000000a8 vtime_delta
      00000280 g     F .text	0000010c vtime_account_system
      0000038c g     F .text	00000048 vtime_account_idle
      
      After:
      (vtime_delta gets inlined inside the two functions)
      000001d8 g     F .text	000000a0 vtime_account_system
      00000278 g     F .text	00000038 vtime_account_idle
      
      In terms of performance, we also get approximately 7% improvement on
      task switch. The following small benchmark app is run with perf stat:
      
      #define _GNU_SOURCE	/* for pthread_yield() */
      #include <pthread.h>
      #include <stdlib.h>

      void *thread(void *arg)
      {
      	int i;

      	for (i = 0; i < atoi((char *)arg); i++)
      		pthread_yield();

      	return NULL;
      }

      int main(int argc, char **argv)
      {
      	pthread_t th1, th2;

      	pthread_create(&th1, NULL, thread, argv[1]);
      	pthread_create(&th2, NULL, thread, argv[1]);
      	pthread_join(th1, NULL);
      	pthread_join(th2, NULL);

      	return 0;
      }
      
      Before the patch:
      
       Performance counter stats for 'chrt -f 98 ./sched 100000' (50 runs):
      
             8228.476465      task-clock (msec)         #    0.954 CPUs utilized            ( +-  0.23% )
                  200004      context-switches          #    0.024 M/sec                    ( +-  0.00% )
      
      After the patch:
      
       Performance counter stats for 'chrt -f 98 ./sched 100000' (50 runs):
      
             7649.070444      task-clock (msec)         #    0.955 CPUs utilized            ( +-  0.27% )
                  200004      context-switches          #    0.026 M/sec                    ( +-  0.00% )
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Add support for function error injection · 7cd01b08
      Committed by Naveen N. Rao
      We implement regs_set_return_value() and override_function_with_return()
      to support function error injection.

      On powerpc, a return from a function (blr) just branches to the location
      contained in the link register, so we can simply update pt_regs rather
      than redirecting execution to a dummy function that returns.
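
      A minimal sketch of what those two helpers boil down to on powerpc
      (using the gpr/nip/link field names of struct pt_regs; an illustration,
      not the verbatim patch):

      static inline void regs_set_return_value(struct pt_regs *regs,
      					 unsigned long rc)
      {
      	regs->gpr[3] = rc;	/* r3 carries the return value */
      }

      static inline void override_function_with_return(struct pt_regs *regs)
      {
      	regs->nip = regs->link;	/* "return" by jumping to the saved LR */
      }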
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Reviewed-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  5. 13 Oct 2018, 2 commits
  6. 03 Oct 2018, 2 commits
    • powerpc/64: add stack protector support · 06ec27ae
      Committed by Christophe Leroy
      On PPC64, as register r13 points to the paca_struct at all times,
      this patch adds a copy of the canary there, which is refreshed at
      task switch.

      That new canary is then consumed via the following GCC options:
      -mstack-protector-guard=tls
      -mstack-protector-guard-reg=r13
      -mstack-protector-guard-offset=offsetof(struct paca_struct, canary)
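
      A rough sketch of the mechanism (field and helper names here are
      illustrative, not the verbatim patch):

      struct paca_struct {
      	/* ... existing fields ... */
      	unsigned long canary;	/* mirror of current->stack_canary */
      };

      /* run on task switch so r13-relative loads see the new task's canary */
      static inline void update_paca_canary(struct task_struct *next)
      {
      	get_paca()->canary = next->stack_canary;
      }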
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/32: add stack protector support · c3ff2a51
      Committed by Christophe Leroy
      This functionality was tentatively added in the past
      (commit 6533b7c1 ("powerpc: Initial stack protector
      (-fstack-protector) support")) but had to be reverted
      (commit f2574030 ("powerpc: Revert the initial stack
      protector support")) because GCC implemented it differently
      depending on whether it had been built with libc support or not.
      
      Now, GCC offers the possibility to manually set the
      stack-protector mode (global or tls) regardless of libc support.
      
      This time, the patch selects HAVE_STACKPROTECTOR only if
      -mstack-protector-guard=tls is supported by GCC.
      
      On PPC32, as register r2 points to the current task_struct at
      all times, the stack_canary located inside task_struct can be
      used directly via the following GCC options:
      -mstack-protector-guard=tls
      -mstack-protector-guard-reg=r2
      -mstack-protector-guard-offset=offsetof(struct task_struct, stack_canary)
      
      The protector is disabled for prom_init and bootx_init as
      it is too early to handle it properly.
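
      As a sketch, the per-task canary those r2-relative accesses read is
      seeded at boot roughly like this (a simplified illustration, not the
      verbatim patch):

      /* arch hook run early on the boot CPU, before any protected call */
      static __always_inline void boot_init_stack_canary(void)
      {
      	current->stack_canary = get_random_canary();
      }

      The LKDTM test below shows the detection working: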
      
       $ echo CORRUPT_STACK > /sys/kernel/debug/provoke-crash/DIRECT
      [  134.943666] Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: lkdtm_CORRUPT_STACK+0x64/0x64
      [  134.943666]
      [  134.955414] CPU: 0 PID: 283 Comm: sh Not tainted 4.18.0-s3k-dev-12143-ga3272be41209 #835
      [  134.963380] Call Trace:
      [  134.965860] [c6615d60] [c001f76c] panic+0x118/0x260 (unreliable)
      [  134.971775] [c6615dc0] [c001f654] panic+0x0/0x260
      [  134.976435] [c6615dd0] [c032c368] lkdtm_CORRUPT_STACK_STRONG+0x0/0x64
      [  134.982769] [c6615e00] [ffffffff] 0xffffffff
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  7. 30 Aug 2018, 1 commit
    • powerpc: disable support for relative ksymtab references · ff69279a
      Committed by Ard Biesheuvel
      The newly added code that emits ksymtab entries as pairs of 32-bit
      relative references interacts poorly with the way powerpc lays out its
      address space: when a module exports a per-CPU variable, the primary
      module region covering the ksymtab entry (and thus the 32-bit relative
      reference) is too far away from the actual per-CPU variable's base
      address (to which the per-CPU offsets are applied to obtain each CPU's
      copy), resulting in corruption when the module loader attempts to
      resolve symbol references of modules that are loaded on top and link
      to the exported per-CPU symbol.
      
      So let's disable this feature on powerpc.  Even though it implements
      CONFIG_RELOCATABLE, it does not implement CONFIG_RANDOMIZE_BASE and so
      KASLR kernels (which are the main target of the feature) do not exist on
      powerpc anyway.
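
      For context, a 32-bit place-relative reference is resolved by adding
      the stored offset to the address of the field itself, roughly as the
      generic helper does; this only works when the target lies within
      +/- 2 GB of the reference, which is the distance powerpc's layout
      breaks:

      static inline void *offset_to_ptr(const int *off)
      {
      	/* the entry stores (target - &entry), so this recovers target */
      	return (void *)((unsigned long)off + *off);
      }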
      Reported-by: Andreas Schwab <schwab@linux-m68k.org>
      Suggested-by: Nicholas Piggin <nicholas.piggin@gmail.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 23 Aug 2018, 1 commit
    • arch: enable relative relocations for arm64, power and x86 · 271ca788
      Committed by Ard Biesheuvel
      Patch series "add support for relative references in special sections", v10.

      This adds support for emitting special sections such as initcall arrays,
      PCI fixups and tracepoints as relative references rather than absolute
      references.  This reduces the size of each entry by 50% on 64-bit
      architectures but, more importantly, it removes the need to carry
      relocation metadata for these sections in relocatable kernels (e.g., for
      KASLR), metadata which otherwise needs to be fixed up at boot time.  On
      arm64, this reduces the vmlinux footprint of such a reference by 8x
      (8-byte absolute reference + 24-byte RELA entry vs a 4-byte relative
      reference).
      
      Patch #3 was sent out before as a single patch.  This series supersedes
      the previous submission.  This version makes relative ksymtab entries
      dependent on the new Kconfig symbol HAVE_ARCH_PREL32_RELOCATIONS rather
      than trying to infer from kbuild test robot replies for which
      architectures it should be blacklisted.
      
      Patch #1 introduces the new Kconfig symbol HAVE_ARCH_PREL32_RELOCATIONS,
      and sets it for the main architectures that are expected to benefit the
      most from this feature, i.e., 64-bit architectures or ones that use
      runtime relocations.
      
      Patch #2 adds support for #define'ing __DISABLE_EXPORTS to get rid of
      ksymtab/kcrctab sections in decompressor and EFI stub objects when
      rebuilding existing C files to run in a different context.
      
      Patches #4 - #6 implement relative references for initcalls, PCI fixups
      and tracepoints, respectively, all of which produce sections on the
      order of ~1000 entries on an arm64 defconfig kernel with tracing
      enabled.  This means we save about 28 KB of vmlinux space for each of
      these patches.
      
      [From the v7 series blurb, which included the jump_label patches as well]:
      
        For the arm64 kernel, all patches combined reduce the memory footprint
        of vmlinux by about 1.3 MB (using a config copied from Ubuntu that has
        KASLR enabled), of which ~1 MB is the size reduction of the RELA section
        in .init, and the remaining 300 KB is reduction of .text/.data.
      
      This patch (of 6):
      
      Before updating certain subsystems to use place-relative 32-bit
      relocations in special sections, to save space and reduce the number of
      absolute relocations that need to be processed at runtime by relocatable
      kernels, introduce the Kconfig symbol and define it for some architectures
      that should be able to support and benefit from it.
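
      To make the size math concrete, here is a sketch of the two entry
      layouts on a 64-bit architecture (the struct names are illustrative):

      struct ksym_entry_abs {
      	unsigned long value;	/* 8 bytes, plus a 24-byte RELA record
      				   to fix up in a relocatable kernel */
      	const char *name;
      };

      struct ksym_entry_rel {
      	int value_offset;	/* 4 bytes, resolved at link time,
      				   no boot-time relocation needed */
      	int name_offset;
      };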
      
      Link: http://lkml.kernel.org/r/20180704083651.24360-2-ard.biesheuvel@linaro.org
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "Serge E. Hallyn" <serge@hallyn.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: James Morris <jmorris@namei.org>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: James Morris <james.morris@microsoft.com>
      Cc: Jessica Yu <jeyu@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 07 Aug 2018, 2 commits
  10. 03 Aug 2018, 1 commit
  11. 02 Aug 2018, 3 commits
  12. 04 Jul 2018, 1 commit
  13. 11 Jun 2018, 2 commits
  14. 08 Jun 2018, 1 commit
    • mm: introduce ARCH_HAS_PTE_SPECIAL · 3010a5ea
      Committed by Laurent Dufour
      Currently, PTE special support is turned on in per-architecture header
      files.  Most of the time, it is defined in
      arch/*/include/asm/pgtable.h, depending or not on some other
      per-architecture static definition.

      This patch introduces a new configuration variable to manage this
      directly in the Kconfig files.  It will later replace
      __HAVE_ARCH_PTE_SPECIAL.
      
      Here are notes for some architectures where the definition of
      __HAVE_ARCH_PTE_SPECIAL is not obvious:
      
      arm
      __HAVE_ARCH_PTE_SPECIAL is currently defined in
      arch/arm/include/asm/pgtable-3level.h, which is included by
      arch/arm/include/asm/pgtable.h when CONFIG_ARM_LPAE is set.
      So select ARCH_HAS_PTE_SPECIAL if ARM_LPAE.
      
      powerpc
      __HAVE_ARCH_PTE_SPECIAL is defined in 2 files:
       - arch/powerpc/include/asm/book3s/64/pgtable.h
       - arch/powerpc/include/asm/pte-common.h
      The first one is included if (PPC_BOOK3S & PPC64) while the second is
      included in all the other cases.
      So select ARCH_HAS_PTE_SPECIAL all the time.
      
      sparc:
      __HAVE_ARCH_PTE_SPECIAL is defined if defined(__sparc__) &&
      defined(__arch64__), which are defined through the compiler in
      sparc/Makefile if !SPARC32, which I assume to mean SPARC64.
      So select ARCH_HAS_PTE_SPECIAL if SPARC64.
      
      There is no functional change introduced by this patch.
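
      A sketch of what the new symbol ends up gating in an architecture's
      pgtable.h (the PTE bit name varies per architecture; this is
      illustrative, not any one arch's actual code):

      #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
      static inline int pte_special(pte_t pte)
      {
      	return !!(pte_val(pte) & _PAGE_SPECIAL);
      }
      #else
      static inline int pte_special(pte_t pte)
      {
      	return 0;	/* no special mappings without the capability */
      }
      #endif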
      
      Link: http://lkml.kernel.org/r/1523433816-14460-2-git-send-email-ldufour@linux.vnet.ibm.com
      Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Suggested-by: Jerome Glisse <jglisse@redhat.com>
      Reviewed-by: Jerome Glisse <jglisse@redhat.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Albert Ou <albert@sifive.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Christophe LEROY <christophe.leroy@c-s.fr>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 06 Jun 2018, 1 commit
    • powerpc: Add support for restartable sequences · 8a417c48
      Committed by Boqun Feng
      Call the rseq_handle_notify_resume() function on return to userspace
      if the TIF_NOTIFY_RESUME thread flag is set.

      Perform fixup on the pre-signal frame when a signal is delivered on
      top of a restartable sequence critical section.
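
      The exit-to-user hook described above looks roughly like this (a
      simplified sketch; powerpc's real do_notify_resume() handles more,
      and the rseq helper's signature is as of this series):

      void do_notify_resume(struct pt_regs *regs, unsigned long flags)
      {
      	if (flags & _TIF_NOTIFY_RESUME) {
      		clear_thread_flag(TIF_NOTIFY_RESUME);
      		tracehook_notify_resume(regs);
      		rseq_handle_notify_resume(regs);
      	}
      }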
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Joel Fernandes <joelaf@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Watson <davejwatson@fb.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: "H . Peter Anvin" <hpa@zytor.com>
      Cc: Chris Lameter <cl@linux.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Andrew Hunter <ahh@google.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ben Maurer <bmaurer@fb.com>
      Cc: linux-api@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: https://lkml.kernel.org/r/20180602124408.8430-9-mathieu.desnoyers@efficios.com
  16. 03 Jun 2018, 1 commit
  17. 17 May 2018, 1 commit
    • powerpc: Allow LD_DEAD_CODE_DATA_ELIMINATION to be selected · 4c1d9bb0
      Committed by Nicholas Piggin
      This requires further changes to the linker script to KEEP some tables
      and to wildcard compiler-generated sections into the right place. This
      includes ppc32 modifications from Christophe Leroy.
      
      When compiling powernv_defconfig with this option, the resulting
      kernel is almost 400kB smaller (and still boots):
      
          text      data       bss        dec   filename
      11827621   4810490   1341080   17979191   vmlinux
      11752437   4598858   1338776   17690071   vmlinux.dcde
      
      Mathieu's numbers for a custom Mac Mini G4 config show an almost 200kB
      saving. That build also saw some increase in vmlinux size, for as-yet
      unknown reasons.
      
          text      data       bss        dec   filename
       7461457   2475122   1428064   11364643   vmlinux
       7386425   2364370   1425432   11176227   vmlinux.dcde
      
      Tested-by: Christophe Leroy <christophe.leroy@c-s.fr> [8xx]
      Tested-by: Mathieu Malaterre <malat@debian.org> [32-bit powermac]
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  18. 10 May 2018, 1 commit
    • powerpc/livepatch: Implement reliable stack tracing for the consistency model · df78d3f6
      Committed by Torsten Duwe
      The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:
      
      [...] There are several rules that must be adhered to in order to ensure
      reliable and consistent call chain backtracing:
      
      * Before a function calls any other function, it shall establish its
        own stack frame, whose size shall be a multiple of 16 bytes.
      
       – In instances where a function’s prologue creates a stack frame, the
         back-chain word of the stack frame shall be updated atomically with
         the value of the stack pointer (r1) when a back chain is implemented.
         (This must be supported as default by all ELF V2 ABI-compliant
         environments.)
      [...]
       – The function shall save the link register that contains its return
         address in the LR save doubleword of its caller’s stack frame before
         calling another function.
      
      To me this sounds like the equivalent of HAVE_RELIABLE_STACKTRACE.
      This patch may be unnecessarily limited to ppc64le, but OTOH the only
      user of this flag so far is livepatching, which is only implemented on
      PPCs with 64-bit LE, a.k.a. ELF ABI v2.
      
      Feel free to add other ppc variants, but so far only ppc64le got tested.
      
      This change also implements save_stack_trace_tsk_reliable() for ppc64le
      that checks for the above conditions, where possible.
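
      A sketch of the back-chain walk those ABI rules make possible
      (simplified; the real code validates each frame before trusting it):

      static void walk_back_chain(unsigned long sp)
      {
      	while (sp) {
      		unsigned long *frame = (unsigned long *)sp;
      		unsigned long ip = frame[2];	/* LR save doubleword,
      						   16 bytes into the frame */
      		if (!ip)
      			break;
      		/* record/validate ip here ... */
      		sp = frame[0];			/* back-chain word */
      	}
      }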
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      Signed-off-by: Nicolai Stange <nstange@suse.de>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  19. 09 May 2018, 6 commits
  20. 08 May 2018, 1 commit
  21. 03 May 2018, 1 commit
  22. 17 Apr 2018, 1 commit
  23. 14 Apr 2018, 1 commit
    • kexec_file: make use of purgatory optional · b799a09f
      Committed by AKASHI Takahiro
      Patch series "kexec_file, x86, powerpc: refactoring for other
      architectures", v2.
      
      This is a preparatory patchset for adding kexec_file support on arm64.

      It was originally included in an arm64 patch set[1], but Philipp is also
      working on kexec_file support for s390[2] and some changes are now
      conflicting.

      So these common parts were extracted and put into a separate patch set
      for better integration.  What's more, my original patch #4 was split into
      a few small chunks for easier review after Dave's comment.

      As such, the resulting code is basically identical to my original, and
      the only *visible* differences are:
      
       - renaming of _kexec_kernel_image_probe() and  _kimage_file_post_load_cleanup()
      
       - change one of types of arguments at prepare_elf64_headers()
      
      Those, unfortunately, require a couple of trivial changes on the rest
      (#1, #6 to #13) of my arm64 kexec_file patch set[1].
      
      Patch #1 makes the use of purgatory optional, which is particularly
      useful for arm64.

      Patch #2 commonalizes arch_kexec_kernel_{image_probe, image_load,
      verify_sig}() and arch_kimage_file_post_load_cleanup() across
      architectures.

      Patches #3-#7 generalize prepare_elf64_headers(), along with
      exclude_mem_range(), so they can best be reused.
      
      [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2018-February/561182.html
      [2] http://lkml.iu.edu//hypermail/linux/kernel/1802.1/02596.html
      
      This patch (of 7):
      
      On arm64, the crash dump kernel's usable memory is protected by
      *unmapping* it from the kernel virtual space, unlike other architectures
      where the region is just made read-only.  It is highly unlikely that the
      region would be accidentally corrupted, and this observation justifies
      dropping the digest check code from the purgatory as well.  The resulting
      code is much simpler, as it doesn't require the somewhat ugly
      re-linking/relocation stuff, i.e. arch_kexec_apply_relocations_add().
      
      Please see:
      
         http://lists.infradead.org/pipermail/linux-arm-kernel/2017-December/545428.html
      
      All that the purgatory does is shuffle arguments and jump into the new
      kernel, while we still need to have some space for a hash value
      (purgatory_sha256_digest) which is never checked against.

      As such, it doesn't make sense to have trampoline code between the old
      kernel and the new kernel on arm64.
      
      This patch introduces a new configuration, ARCH_HAS_KEXEC_PURGATORY, and
      allows related code to be compiled in only if necessary.
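
      As a sketch, generic kexec_file code can then skip the digest machinery
      at compile time (the wrapper name here is hypothetical):

      static int maybe_store_digests(struct kimage *image)
      {
      	/* no purgatory means nothing will ever verify the digest */
      	if (!IS_ENABLED(CONFIG_ARCH_HAS_KEXEC_PURGATORY))
      		return 0;

      	return kexec_calculate_store_digests(image);
      }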
      
      [takahiro.akashi@linaro.org: fix trivial screwup]
        Link: http://lkml.kernel.org/r/20180309093346.GF25863@linaro.org
      Link: http://lkml.kernel.org/r/20180306102303.9063-2-takahiro.akashi@linaro.org
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Acked-by: Dave Young <dyoung@redhat.com>
      Tested-by: Dave Young <dyoung@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  24. 06 Feb 2018, 1 commit
    • powerpc, membarrier: Skip memory barrier in switch_mm() · 3ccfebed
      Committed by Mathieu Desnoyers
      Allow PowerPC to skip the full memory barrier in switch_mm(), and
      only issue the barrier when scheduling into a task belonging to a
      process that has registered to use private expedited membarrier.

      Threads targeting the same VM but belonging to different thread
      groups are a tricky case. This has a few consequences:
      
      It turns out that we cannot rely on get_nr_threads(p) to count the
      number of threads using a VM. We can use
      (atomic_read(&mm->mm_users) == 1 && get_nr_threads(p) == 1)
      instead to skip the synchronize_sched() for cases where the VM only has
      a single user, and that user only has a single thread.
      
      It also turns out that we cannot use for_each_thread() to set
      thread flags in all threads using a VM, as it only iterates on the
      thread group.
      
      Therefore, test the membarrier state variable directly rather than
      relying on thread flags. This means
      membarrier_register_private_expedited() needs to set the
      MEMBARRIER_STATE_PRIVATE_EXPEDITED flag, issue synchronize_sched(), and
      only then set MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY which allows
      private expedited membarrier commands to succeed.
      membarrier_arch_switch_mm() now tests for the
      MEMBARRIER_STATE_PRIVATE_EXPEDITED flag.
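
      The resulting switch_mm() hook looks roughly like this (a simplified
      sketch of the logic described above, not the verbatim patch):

      static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
      					     struct mm_struct *next,
      					     struct task_struct *tsk)
      {
      	if (likely(!(atomic_read(&next->membarrier_state) &
      		     MEMBARRIER_STATE_PRIVATE_EXPEDITED)))
      		return;

      	/* the incoming mm registered for private expedited: order the
      	   user-space memory accesses around the mm switch */
      	smp_mb();
      }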
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andrea Parri <parri.andrea@gmail.com>
      Cc: Andrew Hunter <ahh@google.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Avi Kivity <avi@scylladb.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Dave Watson <davejwatson@fb.com>
      Cc: David Sehr <sehr@google.com>
      Cc: Greg Hackmann <ghackmann@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Maged Michael <maged.michael@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-api@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Link: http://lkml.kernel.org/r/20180129202020.8515-3-mathieu.desnoyers@efficios.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  25. 01 Feb 2018, 1 commit
  26. 21 Jan 2018, 1 commit