1. 02 Aug, 2018 (1 commit)
  2. 13 Jun, 2018 (1 commit)
    • alpha: Remove custom dec_and_lock() implementation · f2ae6794
      Sebastian Andrzej Siewior committed
      Alpha provides a custom implementation of dec_and_lock(). The function
      is split into two parts:
      - atomic_add_unless() + return 0 (fast path in assembly)
      - remaining part including locking (slow path in C)
      
      Comparing the alpha implementation with the generic implementation
      compiled by gcc, the fast path is optimized by avoiding a stack frame
      (and the GP reload) and register stores; those only happen in the
      slow path. After marking the slow path (atomic_dec_and_lock_1()) as
      "noinline" and doing the slow path in C (the
      atomic_add_unless(atomic, -1, 1) part), I noticed differences in the
      resulting assembly:
      - the GP is still reloaded
      - atomic_add_unless() adds more memory barriers compared to the custom
        assembly
      - the custom assembly here does "load, sub, beq" while
        atomic_add_unless() does "load, cmpeq, add, bne". This is okay because
        it compares against zero after subtraction while the generic code
        compares against 1 before.
      
      I'm not sure that avoiding the stack frame (and GP reloading) buys
      much in terms of performance. Regarding the different barriers, Peter
      Zijlstra says:
      
      |refcount decrement needs to be a RELEASE operation, such that all the
      |load/stores to the object happen before we decrement the refcount.
      |
      |Otherwise things like:
      |
      |        obj->foo = 5;
      |        refcnt_dec(&obj->ref);
      |
      |can be re-ordered, which then allows fun scenarios like:
      |
      |        CPU0                            CPU1
      |
      |        refcnt_dec(&obj->ref);
      |                                        if (dec_and_test(&obj->ref))
      |                                                free(obj);
      |        obj->foo = 5; // oops UaF
      |
      |
      |This means (for alpha) that there should be a memory barrier _before_
      |the decrement, however the dec_and_lock asm thing only has one _after_,
      |which, per the above, is too late.
      |
      |The generic version using add_unless will result in memory barrier
      |before and after (because that is the rule for atomic ops with a return
      |value) which is strictly too many barriers for the refcount story, but
      |who knows what other ordering requirements code has.
      
      Remove the custom alpha implementation of dec_and_lock(); if this
      turns out to be a performance issue, the fast path can still be
      inlined (the generic shape is sketched after this entry).
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: linux-alpha@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180606115918.GG12198@hirez.programming.kicks-ass.net
      Link: https://lkml.kernel.org/r/20180612161621.22645-2-bigeasy@linutronix.de
      f2ae6794
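      For reference, the generic lib/dec_and_lock.c that replaces the custom
      assembly has roughly this shape (a minimal sketch, not the verbatim
      kernel source):

          #include <linux/spinlock.h>
          #include <linux/atomic.h>

          /*
           * Decrement @atomic and, only if it reaches zero, return 1 while
           * holding @lock.  atomic_add_unless(v, -1, 1) is the lock-free
           * fast path: it decrements unless the counter is 1, i.e. unless
           * this may be the final reference.
           */
          int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
          {
                  /* Fast path: counter was > 1, no lock needed. */
                  if (atomic_add_unless(atomic, -1, 1))
                          return 0;

                  /* Slow path: we may be dropping the last reference. */
                  spin_lock(lock);
                  if (atomic_dec_and_test(atomic))
                          return 1;       /* caller frees, lock held */
                  spin_unlock(lock);
                  return 0;
          }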
  3. 01 Jun, 2018 (1 commit)
  4. 23 May, 2018 (3 commits)
  5. 09 May, 2018 (4 commits)
  6. 07 May, 2018 (1 commit)
    • PCI: remove PCI_DMA_BUS_IS_PHYS · 325ef185
      Christoph Hellwig committed
      This was used by the ide, scsi and networking code in the past to
      determine if they should bounce payloads.  Now that the DMA mapping
      code always has to support DMA to all physical memory (thanks to
      swiotlb on non-IOMMU systems) there is no need for this crude hack
      any more (the old usage pattern is sketched after this entry).
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Palmer Dabbelt <palmer@sifive.com> (for riscv)
      Reviewed-by: Jens Axboe <axboe@kernel.dk>
      325ef185
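      A hedged sketch of the kind of check the macro enabled, loosely
      modeled on the old SCSI bounce-limit logic (not the verbatim kernel
      source; BLK_BOUNCE_* are the block layer's bounce thresholds):

          static u64 bounce_limit(struct device *host_dev)
          {
                  /* An IOMMU can remap any page; nothing needs bouncing. */
                  if (!PCI_DMA_BUS_IS_PHYS)
                          return BLK_BOUNCE_ANY;

                  /* Otherwise bounce whatever the DMA mask cannot reach. */
                  if (host_dev && host_dev->dma_mask)
                          return *host_dev->dma_mask;

                  return BLK_BOUNCE_HIGH;
          }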
  7. 25 Apr, 2018 (5 commits)
    • signal/alpha: Use force_sig_fault where appropriate · e4d90ee3
      Eric W. Biederman committed
      Filling in struct siginfo before calling force_sig_info is a tedious
      and error-prone process, where once in a great while the wrong fields
      are filled out and siginfo has been inconsistently cleared.

      Simplify this process by using the helper force_sig_fault, which
      takes as parameters all of the information it needs, ensures all of
      the fiddly bits of filling in struct siginfo are done properly, and
      then calls force_sig_info.

      In short, about a five-line reduction in code for every call site,
      which makes the calling function clearer (see the before/after
      sketch following this entry).
      
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: linux-alpha@vger.kernel.org
      Signed-off-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      e4d90ee3
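      A hedged before/after sketch of the conversion (the SIGSEGV/SEGV_MAPERR
      values and the address variable are illustrative, not from this patch;
      the 2018-era force_sig_fault() also took the task as its last argument,
      and on alpha it additionally carries a trap number):

          /* Before: fill every field by hand and hope none is missed. */
          struct siginfo info;
          clear_siginfo(&info);
          info.si_signo = SIGSEGV;
          info.si_errno = 0;
          info.si_code  = SEGV_MAPERR;
          info.si_addr  = (void __user *)address;
          force_sig_info(SIGSEGV, &info, current);

          /* After: one call carries the same information. */
          force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)address, current);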
    • signal/alpha: Use send_sig_fault where appropriate · 5f50245b
      Eric W. Biederman committed
      Filling in struct siginfo before calling send_sig_info is a tedious
      and error-prone process, where once in a great while the wrong fields
      are filled out and siginfo has been inconsistently cleared.

      Simplify this process by using the helper send_sig_fault, which
      takes as parameters all of the information it needs, ensures all of
      the fiddly bits of filling in struct siginfo are done properly, and
      then calls send_sig_info.

      In short, about a five-line reduction in code for every call site,
      which makes the calling function clearer; the conversion mirrors the
      force_sig_fault sketch above.
      
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: linux-alpha@vger.kernel.org
      Signed-off-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      5f50245b
    • signal/alpha: Replace TRAP_FIXME with TRAP_UNK · 535906c6
      Eric W. Biederman committed
      Using an si_code of 0 that aliases with SI_USER is clearly the wrong
      thing to do, and causes problems in interesting ways.

      It really is not clear to me whether TRAP_UNK for bugcheck or for
      the default case of gentrap is the best way to handle things; there
      is certainly enough information that a more specific si_code could
      potentially be used.  That said, TRAP_UNK is definitely an
      improvement over 0, as it removes the ambiguity of what an si_code
      of 0 with SIGTRAP means on alpha (the aliasing problem is sketched
      after this entry).

      Recent history suggests no one actually cares about crazy corner
      cases of kernel behavior like this, so I don't expect any regressions
      from this change.  However, if something does happen, the change is
      easy to revert.
      
      Cc: Helge Deller <deller@gmx.de>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: linux-alpha@vger.kernel.org
      Fixes: 0a635c7a84cf ("Fill in siginfo_t.")
      History Tree: https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      535906c6
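      Why an si_code of 0 is ambiguous, in a minimal userspace sketch (the
      handler is hypothetical; TRAP_UNK comes from the kernel uapi headers
      and may be absent from older libc headers):

          #include <signal.h>

          /* SI_USER is 0, so a trap reporting si_code 0 looks exactly
           * like a SIGTRAP sent via kill(2) or raise(3). */
          static void handler(int sig, siginfo_t *info, void *ctx)
          {
                  if (info->si_code == SI_USER) {
                          /* user-sent... or the old alpha trap with 0 */
                  } else if (info->si_code == TRAP_UNK) {
                          /* unambiguously an unknown hardware trap */
                  }
          }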
    • signal/alpha: Replace FPE_FIXME with FPE_FLTUNK · 4cc13e4f
      Eric W. Biederman committed
      Using an si_code of 0 that aliases with SI_USER is clearly the wrong
      thing to do, and causes problems in interesting ways.

      The newly defined FPE_FLTUNK semantically appears to fit the bill,
      so use it instead.

      Given recent experience in this area, odds are it will not break
      anything; fixing it removes a hazard to kernel maintenance.
      
      Cc: Helge Deller <deller@gmx.de>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: linux-alpha@vger.kernel.org
      History Tree: https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git
      Fixes: 0a635c7a84cf ("Fill in siginfo_t.")
      Signed-off-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      4cc13e4f
    • signal: Ensure every siginfo we send has all bits initialized · 3eb0f519
      Eric W. Biederman committed
      Call clear_siginfo to ensure every stack-allocated siginfo is properly
      initialized before being passed to the signal sending functions.

      Note: it is not safe to depend on C initializers to initialize struct
      siginfo on the stack, because C is allowed to skip padding holes when
      initializing a structure (see the sketch after this entry).

      The initialization of struct siginfo in tracehook_report_syscall_exit
      was moved from the helper user_single_step_siginfo into
      tracehook_report_syscall_exit itself, to make it clear that the local
      variable siginfo gets fully initialized.

      In a few cases the scope of struct siginfo has been reduced to make
      it clear that siginfo is not used on other paths in the function in
      which it is declared.

      Instances of using memset to initialize siginfo have been replaced
      with calls to clear_siginfo for clarity.
      Signed-off-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      3eb0f519
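      A minimal sketch of the hazard and of what clear_siginfo() amounts to
      (assuming the kernel helper is essentially a memset, which is how it
      was defined at the time):

          struct siginfo info1 = { .si_signo = SIGILL };
          /* Named fields are zeroed, but the compiler may leave padding
           * bytes between fields uninitialized; those bytes can later
           * leak to userspace through copy_to_user(). */

          struct siginfo info2;
          memset(&info2, 0, sizeof(info2));   /* zeroes padding too */

          struct siginfo info3;
          clear_siginfo(&info3);              /* helper wrapping the memset */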
  8. 20 Apr, 2018 (1 commit)
    • y2038: alpha: Remove unneeded ipc uapi header files · 469599f6
      Arnd Bergmann committed
      The alpha ipcbuf/msgbuf/sembuf/shmbuf header files are all identical
      to the versions from asm-generic.

      This patch removes the files and replaces them with 'generic-y'
      statements as part of the y2038 series.  Since there is no 32-bit
      syscall support for alpha, we don't need the other changes, but it's
      good to have this cleaned up anyway.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      469599f6
  9. 19 Apr, 2018 (1 commit)
    • time: Add an asm-generic/compat.h file · 2b5a9a37
      Arnd Bergmann committed
      We have a couple of files that try to include asm/compat.h on
      architectures where this is available. Those should generally use the
      higher-level linux/compat.h file, but that in turn fails to include
      asm/compat.h when CONFIG_COMPAT is disabled, unless we can provide
      that header on all architectures.
      
      This adds asm/compat.h for all remaining architectures to simplify
      the dependencies (the relationship is sketched after this entry).
      
      Architectures that are getting removed in linux-4.17 are not changed
      here, to avoid needless conflicts with the removal patches. Those
      architectures are broken by this patch, but we have already shown
      that they have no users.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      2b5a9a37
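      The dependency being untangled, in a hedged sketch (simplified from
      what linux/compat.h looked like at the time):

          /* linux/compat.h, simplified: asm/compat.h was only pulled in
           * when CONFIG_COMPAT was set, so it had to exist on every
           * architecture before this #ifdef could ever be dropped. */
          #ifdef CONFIG_COMPAT
          #include <asm/compat.h>
          #endif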
  10. 18 Apr, 2018 (1 commit)
  11. 17 Apr, 2018 (1 commit)
  12. 12 Apr, 2018 (1 commit)
    • mm: introduce MAP_FIXED_NOREPLACE · a4ff8e86
      Michal Hocko committed
      Patch series "mm: introduce MAP_FIXED_NOREPLACE", v2.

      This started as a follow-up discussion [3][4] to the runtime failure
      caused by the hardening patch [5], which removes MAP_FIXED from the
      ELF loader because MAP_FIXED is inherently dangerous: it might
      silently clobber an existing underlying mapping (e.g. the stack).
      The reason for the failure is that some architectures enforce an
      alignment for the given address hint when MAP_FIXED is not used
      (e.g. for shared or file-backed mappings).

      One way around this would be to exclude the architectures which do
      alignment tricks from the hardening [6].  The patch is really
      trivial, but it was objected, rightfully so, that this screams for a
      more generic solution.  We basically want a non-destructive
      MAP_FIXED.
      
      The first patch introduces MAP_FIXED_NOREPLACE, which enforces the
      given address but, unlike MAP_FIXED, fails with EEXIST if the given
      range conflicts with an existing one.  The flag is introduced as a
      completely new one rather than a MAP_FIXED extension because of
      backward compatibility.  We really want never-clobber semantics even
      on older kernels which do not recognize the flag.  Unfortunately
      mmap sucks wrt flags evaluation because we do not EINVAL on unknown
      flags.  On those kernels we would simply get the traditional
      hint-based semantics, so the caller can still get a different
      address (which sucks) but at least will not silently corrupt an
      existing mapping.  I do not see a good way around that, short of not
      exposing the new semantics to userspace at all.

      It seems there are users who would like to have something like that.
      Jemalloc has been mentioned by Michael Ellerman [7].
      
      Florian Weimer has mentioned the following:
      : glibc ld.so currently maps DSOs without hints.  This means that the
      : kernel will map them right next to each other, and the offsets
      : between them are completely predictable.  We would like to change
      : that and supply a random address in a window of the address space.
      : If there is a conflict, we do not want the kernel to pick a
      : non-random address.  Instead, we would try again with a random
      : address.
      
      John Hubbard has mentioned a CUDA example:
      : a) Searches /proc/<pid>/maps for a "suitable" region of available
      : VA space.  "Suitable" generally means it has to have a base address
      : within a certain limited range (a particular device model might
      : have odd limitations, for example), it has to be large enough, and
      : alignment has to be large enough (again, various devices may have
      : constraints that lead us to do this).
      :
      : This is of course subject to races with other threads in the process.
      :
      : Let's say it finds a region starting at va.
      :
      : b) Next it does:
      :     p = mmap(va, ...)
      :
      : *without* setting MAP_FIXED, of course (so va is just a hint), to
      : attempt to safely reserve that region. If p != va, then in most cases,
      : this is a failure (almost certainly due to another thread getting a
      : mapping from that region before we did), and so this layer now has to
      : call munmap(), before returning a "failure: retry" to upper layers.
      :
      :     IMPROVEMENT: --> if instead, we could call this:
      :
      :             p = mmap(va, ... MAP_FIXED_NOREPLACE ...)
      :
      :         , then we could skip the munmap() call upon failure. This
      :         is a small thing, but it is useful here. (Thanks to Piotr
      :         Jaroszynski and Mark Hairgrove for helping me get that detail
      :         exactly right, btw.)
      :
      : c) After that, CUDA suballocates from p, via:
      :
      :      q = mmap(sub_region_start, ... MAP_FIXED ...)
      :
      : Interestingly enough, "freeing" is also done via MAP_FIXED, and
      : setting PROT_NONE to the subregion. Anyway, I just included (c) for
      : general interest.
      
      Atomic address range probing in the multithreaded programs in general
      sounds like an interesting thing to me.
      
      The second patch simply replaces the MAP_FIXED use in the ELF loader
      with MAP_FIXED_NOREPLACE.  I believe other places which rely on
      MAP_FIXED should follow.  Actually, real MAP_FIXED usages should be
      documented properly, and they should be more of an exception.
      
      [1] http://lkml.kernel.org/r/20171116101900.13621-1-mhocko@kernel.org
      [2] http://lkml.kernel.org/r/20171129144219.22867-1-mhocko@kernel.org
      [3] http://lkml.kernel.org/r/20171107162217.382cd754@canb.auug.org.au
      [4] http://lkml.kernel.org/r/1510048229.12079.7.camel@abdul.in.ibm.com
      [5] http://lkml.kernel.org/r/20171023082608.6167-1-mhocko@kernel.org
      [6] http://lkml.kernel.org/r/20171113094203.aofz2e7kueitk55y@dhcp22.suse.cz
      [7] http://lkml.kernel.org/r/87efp1w7vy.fsf@concordia.ellerman.id.au
      
      This patch (of 2):
      
      MAP_FIXED is used quite often to enforce mapping at a particular
      range.  The main problem of this flag is, however, that it is
      inherently dangerous because it unmaps existing mappings covered by
      the requested range.  This can cause silent memory corruption, some
      of it even with serious security implications.  While the current
      semantics might be really desirable in many cases, there are others
      which would want to enforce the given range but would rather see a
      failure than silent memory corruption on a clashing range.  Please
      note that there is no guarantee that a given range is obeyed by mmap
      even when it is free - e.g. arch-specific code is allowed to apply
      an alignment.
      
      Introduce a new MAP_FIXED_NOREPLACE flag for mmap to achieve this
      behavior.  It has the same semantics as MAP_FIXED wrt the given
      address request, with a single exception: it fails with EEXIST if
      the requested address is already covered by an existing mapping.  We
      still rely on get_unmapped_area to handle all the arch-specific
      MAP_FIXED treatment, and check for a conflicting vma after it
      returns.

      The flag is introduced as a completely new one rather than a
      MAP_FIXED extension because of backward compatibility.  We really
      want never-clobber semantics even on older kernels which do not
      recognize the flag.  Unfortunately mmap sucks wrt flags evaluation
      because we do not EINVAL on unknown flags.  On those kernels we
      would simply get the traditional hint-based semantics, so the caller
      can still get a different address (which sucks) but at least will
      not silently corrupt an existing mapping.  I do not see a good way
      around that (a userspace sketch of both behaviors follows this
      entry).
      
      [mpe@ellerman.id.au: fix whitespace]
      [fail on clashing range with EEXIST as per Florian Weimer]
      [set MAP_FIXED before round_hint_to_min as per Khalid Aziz]
      Link: http://lkml.kernel.org/r/20171213092550.2774-2-mhocko@kernel.org
      Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Khalid Aziz <khalid.aziz@oracle.com>
      Cc: Russell King - ARM Linux <linux@armlinux.org.uk>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
      Cc: Joel Stanley <joel@jms.id.au>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Jason Evans <jasone@google.com>
      Cc: David Goldblatt <davidtgoldblatt@gmail.com>
      Cc: Edward Tomasz Napierała <trasz@FreeBSD.org>
      Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a4ff8e86
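      A minimal userspace sketch of the new flag's semantics, including the
      fallback caveat on kernels that predate it (the hint address and
      length are illustrative; the flag's value matches the generic uapi
      definition):

          #include <sys/mman.h>
          #include <errno.h>

          #ifndef MAP_FIXED_NOREPLACE
          #define MAP_FIXED_NOREPLACE 0x100000    /* generic uapi value */
          #endif

          int main(void)
          {
                  void *hint = (void *)0x70000000000UL;   /* illustrative */
                  size_t len = 1 << 20;

                  void *p = mmap(hint, len, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS |
                                 MAP_FIXED_NOREPLACE, -1, 0);

                  if (p == MAP_FAILED && errno == EEXIST) {
                          /* Range clashed with an existing mapping: no
                           * clobber, no munmap() needed; retry with a
                           * different hint. */
                          return 1;
                  }

                  /* Older kernels ignore unknown mmap flags and fall back
                   * to hint semantics; detect that via the address. */
                  if (p != MAP_FAILED && p != hint) {
                          munmap(p, len);
                          return 1;
                  }
                  return 0;
          }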
  13. 08 Apr, 2018 (4 commits)
  14. 03 Apr, 2018 (1 commit)
  15. 28 Mar, 2018 (2 commits)
    • alpha: get rid of pointless insn in ret_from_kernel_thread · 206b1c60
      Al Viro committed
      	It used to clear a3, so that signal handling on
      return to userland would've passed zero r0 to do_work_pending(),
      preventing the syscall restart logics from triggering.
      
      	It had been pointless all along, since we only go there
      after successful do_execve().  Which does clear regs->r0 on alpha,
      preventing the syscall restart logics just fine, no extra help
      needed.  Good thing, that, since back in 2012 do_work_pending()
      has lost the second argument, shifting the registers used to pass
      that thing from a3 to a2.  Commit that had done that adjusted the
      entry.S code accordingly, but missed that one.
      
      	As the result, we were left with useless insn in
      ret_from_kernel_thread and confusing comment to go with it.
      Get rid of both...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      206b1c60
    • alpha: switch pci syscalls to SYSCALL_DEFINE · e4eacd6b
      Al Viro committed
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      e4eacd6b
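      The conversion pattern, in a hedged sketch (sys_pciconfig_iobase is
      one of the alpha PCI syscalls; the parameter names are illustrative
      and the body is elided):

          /* Before: a bare asmlinkage definition. */
          asmlinkage long sys_pciconfig_iobase(long which, unsigned long bus,
                                               unsigned long dfn);

          /* After: SYSCALL_DEFINEn interleaves types and names, letting
           * the kernel generate correct argument-widening stubs and
           * syscall metadata. */
          SYSCALL_DEFINE3(pciconfig_iobase, long, which,
                          unsigned long, bus, unsigned long, dfn)
          {
                  /* ... original body unchanged; illustrative fallback ... */
                  return -ENODEV;
          }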
  16. 17 Mar, 2018 (1 commit)
  17. 16 Mar, 2018 (1 commit)
  18. 12 Mar, 2018 (2 commits)
    • perf/core: Remove perf_event::group_entry · 8343aae6
      Peter Zijlstra committed
      Now that all the grouping is done with RB trees, we no longer need
      group_entry and can replace the whole thing with sibling_list.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Carrillo-Cisneros <davidcc@google.com>
      Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Valery Cherepennikov <valery.cherepennikov@intel.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8343aae6
    • locking/xchg/alpha: Remove superfluous memory barriers from the _local() variants · fbfcd019
      Andrea Parri committed
      The following two commits:
      
        79d44246 ("locking/xchg/alpha: Clean up barrier usage by using smp_mb() in place of __ASM__MB")
        472e8c55 ("locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs")
      
      ... ended up adding unnecessary barriers to the _local() variants on Alpha,
      which the previous code took care to avoid.
      
      Fix them by adding the smp_mb() to the cmpxchg() macro rather than to
      the ____cmpxchg() variants (the resulting shape is sketched after
      this entry).
      Reported-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-alpha@vger.kernel.org
      Fixes: 472e8c55 ("locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs")
      Fixes: 79d44246 ("locking/xchg/alpha: Clean up barrier usage by using smp_mb() in place of __ASM__MB")
      Link: http://lkml.kernel.org/r/1519704058-13430-1-git-send-email-parri.andrea@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      fbfcd019
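      A minimal sketch of where the fix puts the barriers (simplified from
      arch/alpha/include/asm/cmpxchg.h; the real macros also handle xchg()
      and operand sizing):

          /* Fully ordered variant: barriers wrap the raw operation. */
          #define cmpxchg(ptr, o, n)                                    \
          ({                                                            \
                  __typeof__(*(ptr)) __ret;                             \
                  smp_mb();                                             \
                  __ret = ____cmpxchg(ptr, (o), (n), sizeof(*(ptr)));   \
                  smp_mb();                                             \
                  __ret;                                                \
          })

          /* _local variant: same raw operation, deliberately no barriers. */
          #define cmpxchg_local(ptr, o, n)                              \
                  ____cmpxchg(ptr, (o), (n), sizeof(*(ptr)))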
  19. 23 Feb, 2018 (2 commits)
  20. 21 Feb, 2018 (1 commit)
  21. 26 Jan, 2018 (2 commits)
    • alpha: osf_sys.c: use timespec64 where appropriate · ce4c2535
      Arnd Bergmann committed
      Some of the syscall helper functions (do_utimes, poll_select_set_timeout,
      core_sys_select) have changed over the past year or two to use
      'timespec64' pointers rather than 'timespec'. This was fine on alpha,
      since 64-bit architectures treat the two as the same type.
      
      However, I'd like to change that behavior and make 'timespec64' a proper
      type of its own even on 64-bit architectures, and that will introduce
      harmless type mismatch warnings here.
      
      Also, I'm trying to kill off the do_gettimeofday() helper in favor of
      ktime_get() and related interfaces throughout the kernel.
      
      This changes the get_tv32/put_tv32 helper functions to also take a
      timespec64 argument rather than a timeval, which allows us to
      simplify some of the syscall helpers a bit and avoid the type
      warnings (the new put_tv32() shape is sketched after this entry).

      For the moment, wait4 and adjtimex are still better off with the old
      behavior, so I'm adding a special put_tv_to_tv32() helper for those.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      ce4c2535
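      The new helper shape, in a hedged sketch (closely modeled on the
      osf_sys.c helper after this patch; timeval32 is alpha's OSF/1 timeval
      with 32-bit fields):

          /* Convert a kernel timespec64 to the OSF/1 userspace timeval32. */
          static int put_tv32(struct timeval32 __user *o, struct timespec64 *i)
          {
                  return copy_to_user(o, &(struct timeval32){
                                          .tv_sec  = i->tv_sec,
                                          .tv_usec = i->tv_nsec / NSEC_PER_USEC },
                                      sizeof(struct timeval32));
          }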
    • alpha: osf_sys.c: fix put_tv32 regression · 47669fb6
      Arnd Bergmann committed
      There was a typo in the new version of put_tv32() that caused an unguarded
      access of a user space pointer, and failed to return the correct result in
      gettimeofday(), wait4(), usleep_thread() and old_adjtimex().
      
      This fixes it to give the correct behavior again.
      
      Cc: stable@vger.kernel.org
      Fixes: 1cc6c463 ("osf_sys.c: switch handling of timeval32/itimerval32 to copy_{to,from}_user()")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      47669fb6
  22. 21 Jan, 2018 (3 commits)
    • alpha: fix crash if pthread_create races with signal delivery · 21ffceda
      Mikulas Patocka committed
      On alpha, a process will crash if it attempts to start a thread and a
      signal is delivered at the same time. The crash can be reproduced with
      this program: https://cygwin.com/ml/cygwin/2014-11/msg00473.html
      
      The reason for the crash is this:
      * we call the clone syscall
      * we go to the function copy_process
      * copy_process calls copy_thread_tls, a wrapper around copy_thread
      * copy_thread sets the tls pointer: childti->pcb.unique = regs->r20
      * copy_thread sets regs->r20 to zero
      * we go back to copy_process
      * copy_process checks "if (signal_pending(current))" and returns
        -ERESTARTNOINTR
      * the clone syscall is restarted, but this time regs->r20 is zero, so
        the new thread is created with a zero tls pointer
      * the new thread crashes in start_thread when attempting to access tls

      The comment in the code says that setting the register r20 is some
      compatibility with OSF/1.  But OSF/1 doesn't use the CLONE_SETTLS
      flag, so we don't have to zero r20 if CLONE_SETTLS is set.  This
      patch fixes the bug by zeroing regs->r20 only if CLONE_SETTLS is not
      set (sketched after this entry).
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Matt Turner <mattst88@gmail.com>
      21ffceda
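      A hedged sketch of the fix in alpha's copy_thread() (condensed; the
      surrounding register setup is omitted):

          /* OSF/1 clone semantics want r20 zeroed in the parent, but doing
           * so unconditionally destroys the TLS pointer if the syscall is
           * restarted.  Only clear it when no TLS pointer was passed. */
          if (!(clone_flags & CLONE_SETTLS))
                  regs->r20 = 0;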
    • alpha: fix formatting of stack content · 4b01abdb
      Mikulas Patocka committed
      Since version 4.9, the kernel automatically terminates each printk
      call with a newline unless pr_cont is used.  Fix the alpha stacktrace
      code so that it prints the stack trace in four columns, as initially
      intended (the required pattern is sketched after this entry).
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org	# v4.9+
      Signed-off-by: Matt Turner <mattst88@gmail.com>
      4b01abdb
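      A hedged sketch of the pattern the fix needs (illustrative, not the
      verbatim alpha stack-dump code):

          /* Four stack words per line: the first printk opens the line,
           * pr_cont() continues it, and an explicit "\n" closes it. */
          static void show_stack_words(unsigned long *stack, size_t n)
          {
                  size_t i, j;

                  for (i = 0; i < n; i += 4) {
                          printk(KERN_DEFAULT "%04zx:", i * sizeof(unsigned long));
                          for (j = 0; j < 4 && i + j < n; j++)
                                  pr_cont(" %016lx", stack[i + j]);
                          pr_cont("\n");
                  }
          }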
    • alpha: fix reboot on Avanti platform · 55fc633c
      Mikulas Patocka committed
      We need to define NEED_SRM_SAVE_RESTORE on the Avanti; otherwise we
      get a machine check exception when attempting to reboot the machine.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Matt Turner <mattst88@gmail.com>
      55fc633c