1. 10 January 2018, 1 commit
  2. 03 November 2017, 2 commits
    • arm64/sve: Support vector length resetting for new processes · 79ab047c
      Dave Martin committed
      It's desirable to be able to reset the vector length to some sane
      default for new processes, since the new binary and its libraries
      may or may not be SVE-aware.
      
      This patch tracks the desired post-exec vector length (if any) in a
      new thread member sve_vl_onexec, and adds a new thread flag
      TIF_SVE_VL_INHERIT to control whether to inherit or reset the
      vector length.  Currently these are inactive.  Subsequent patches
      will provide the capability to configure them.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
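      A minimal sketch of the policy this enables, assuming the sve_vl_onexec
      field and TIF_SVE_VL_INHERIT flag named above; the helper itself is
      hypothetical, not the kernel's code:

          /* Hypothetical helper: pick the vector length across exec. */
          static void sve_vl_onexec_apply(struct task_struct *task)
          {
                  /* Inherit: keep the pre-exec vector length. */
                  if (test_tsk_thread_flag(task, TIF_SVE_VL_INHERIT))
                          return;

                  /* Reset: 0 means no explicit post-exec default is set. */
                  if (task->thread.sve_vl_onexec)
                          task->thread.sve_vl = task->thread.sve_vl_onexec;
          }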
    • arm64/sve: Core task context handling · bc0ee476
      Dave Martin committed
      This patch adds the core support for switching and managing the SVE
      architectural state of user tasks.
      
      Calls to the existing FPSIMD low-level save/restore functions are
      factored out as new functions task_fpsimd_{save,load}(), since SVE
      state may or may not need to be handled at these points, depending
      on the kernel configuration, hardware features discovered at boot,
      and the runtime state of the task.  To make these decisions as
      fast as possible, const cpucaps are used where feasible, via the
      system_supports_sve() helper.
      
      The SVE registers are only tracked for threads that have explicitly
      used SVE, indicated by the new thread flag TIF_SVE.  Otherwise, the
      FPSIMD view of the architectural state is stored in
      thread.fpsimd_state as usual.
      
      When in use, the SVE registers are not stored directly in
      thread_struct due to their potentially large and variable size.
      Because the task_struct slab allocator must be configured very
      early during kernel boot, it is also tricky to configure it
      correctly to match the maximum vector length provided by the
      hardware, since this depends on examining secondary CPUs as well as
      the primary.  Instead, a pointer sve_state in thread_struct points
      to a dynamically allocated buffer containing the SVE register data,
      and code is added to allocate and free this buffer at appropriate
      times.
      
      TIF_SVE is set when taking an SVE access trap from userspace, if
      suitable hardware support has been detected.  This enables SVE for
      the thread: a subsequent return to userspace will disable the trap
      accordingly.  If such a trap is taken without sufficient
      system-wide hardware support, SIGILL is sent to the thread instead,
      as if an undefined instruction had been executed: this may happen,
      for example, if userspace tries to use SVE on a system where not
      all CPUs support it.
      
      The kernel will clear TIF_SVE and disable SVE for the thread
      whenever an explicit syscall is made by userspace.  For backwards
      compatibility reasons and conformance with the spirit of the base
      AArch64 procedure call standard, the subset of the SVE register
      state that aliases the FPSIMD registers is still preserved across a
      syscall even if this happens.  The remainder of the SVE register
      state logically becomes zero at syscall entry, though the actual
      zeroing work is currently deferred until the thread next tries to
      use SVE, causing another trap to the kernel.  This implementation
      is suboptimal: in the future, the fastpath case may be optimised
      to zero the registers in-place and leave SVE enabled for the task,
      where beneficial.
      
      TIF_SVE is also cleared in the following slowpath cases, which are
      taken as reasonable hints that the task may no longer use SVE:
       * exec
       * fork and clone
      
      Code is added to sync data between thread.fpsimd_state and
      thread.sve_state whenever enabling/disabling SVE, in a manner
      consistent with the SVE architectural programmer's model.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Alex Bennée <alex.bennee@linaro.org>
      [will: added #include to fix allnoconfig build]
      [will: use enable_daif in do_sve_acc]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
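      A simplified sketch of the save-side dispatch described above;
      sve_save_state() and fpsimd_save_state() stand in for the low-level
      register-file accessors:

          static void task_fpsimd_save(void)
          {
                  if (system_supports_sve() && test_thread_flag(TIF_SVE))
                          /* Full SVE state, into the dynamically
                           * allocated sve_state buffer. */
                          sve_save_state(current->thread.sve_state,
                                         &current->thread.fpsimd_state.fpsr);
                  else
                          /* FPSIMD-only view, in thread.fpsimd_state. */
                          fpsimd_save_state(&current->thread.fpsimd_state);
          }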
  3. 23 August 2017, 1 commit
  4. 16 August 2017, 2 commits
    • arm64: clean up THREAD_* definitions · dbc9344a
      Mark Rutland committed
      Currently we define THREAD_SIZE and THREAD_SIZE_ORDER separately, with
      the latter dependent on particular CONFIG_ARM64_*K_PAGES definitions.
      This is somewhat opaque, and will get in the way of future modifications
      to THREAD_SIZE.
      
      This patch cleans this up, defining both in terms of a common
      THREAD_SHIFT, and using PAGE_SHIFT to calculate THREAD_SIZE_ORDER,
      rather than using a number of definitions dependent on config symbols.
      Subsequent patches will make use of this to alter the stack size used in
      some configurations.
      
      At the same time, these are moved into <asm/memory.h>, which will avoid
      circular include issues in subsequent patches. To ensure that existing
      code isn't adversely affected, <asm/thread_info.h> is updated to
      transitively include these definitions.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
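      The shape of the cleanup looks roughly like this (the shift value
      is illustrative):

          /* One common shift drives both derived definitions. */
          #define THREAD_SHIFT            14      /* e.g. 16K stacks */

          #if THREAD_SHIFT >= PAGE_SHIFT
          #define THREAD_SIZE_ORDER       (THREAD_SHIFT - PAGE_SHIFT)
          #endif

          #define THREAD_SIZE             (UL(1) << THREAD_SHIFT)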
    • arm64: kernel: remove {THREAD,IRQ_STACK}_START_SP · 34be98f4
      Ard Biesheuvel committed
      For historical reasons, we leave the top 16 bytes of our task and IRQ
      stacks unused, a practice used to ensure that the SP can always be
      masked to find the base of the current stack (historically, where
      thread_info could be found).
      
      However, this is not necessary, as:
      
      * When an exception is taken from a task stack, we decrement the SP by
        S_FRAME_SIZE and stash the exception registers before we compare the
        SP against the task stack. In such cases, the SP must be at least
        S_FRAME_SIZE below the limit, and can be safely masked to determine
        whether the task stack is in use.
      
      * When transitioning to an IRQ stack, we'll place a dummy frame onto the
        IRQ stack before enabling asynchronous exceptions, or executing code
        we expect to trigger faults. Thus, if an exception is taken from the
        IRQ stack, the SP must be at least 16 bytes below the limit.
      
      * We no longer mask the SP to find the thread_info, which is now found
        via sp_el0. Note that historically, the offset was critical to ensure
        that cpu_switch_to() found the correct stack for new threads that
        hadn't yet executed ret_from_fork().
      
      Given that, this initial offset serves no purpose, and can be removed.
      This brings us in-line with other architectures (e.g. x86) which do not
      rely on this masking.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [Mark: rebase, kill THREAD_START_SP, commit msg additions]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
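      For reference, the masking invariant the argument above preserves
      is simply this (illustrative helper, not kernel code):

          static inline unsigned long stack_base_of(unsigned long sp)
          {
                  /* Any SP within a stack maps to that stack's base. */
                  return sp & ~(THREAD_SIZE - 1);
          }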
  5. 08 July 2017, 1 commit
    • arm64/syscalls: Check address limit on user-mode return · cf7de27a
      Thomas Garnier committed
      Ensure the address limit is a user-mode segment before returning to
      user-mode. Otherwise a process can corrupt kernel-mode memory and
      elevate privileges [1].
      
      The set_fs function sets the TIF_SETFS flag to force a slow path on
      return. In the slow path, the address limit is checked to be USER_DS if
      needed.
      
      [1] https://bugs.chromium.org/p/project-zero/issues/detail?id=990
      Signed-off-by: Thomas Garnier <thgarnie@google.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: kernel-hardening@lists.openwall.com
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Pratyush Anand <panand@redhat.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Will Drewry <wad@chromium.org>
      Cc: linux-api@vger.kernel.org
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Link: http://lkml.kernel.org/r/20170615011203.144108-3-thgarnie@google.com
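      A hedged sketch of the slow-path check; note the flag was spelled
      TIF_FSCHECK in the merged code, and this approximates the generic
      addr_limit_user_check() helper:

          static inline void addr_limit_user_check(void)
          {
                  if (!test_thread_flag(TIF_FSCHECK))
                          return;

                  /* A KERNEL_DS limit leaking back to userspace is fatal. */
                  if (unlikely(!segment_eq(get_fs(), USER_DS)))
                          force_sig(SIGKILL, current);

                  clear_thread_flag(TIF_FSCHECK);
          }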
  6. 22 November 2016, 1 commit
    • arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1 · 4b65a5db
      Catalin Marinas committed
      This patch adds the uaccess macros/functions to disable access to user
      space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
      written to TTBR0_EL1 must be a physical address, for simplicity this
      patch introduces a reserved_ttbr0 page at a constant offset from
      swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
      adjusted by the reserved_ttbr0 offset.
      
      Enabling access to user space is done by restoring TTBR0_EL1 with
      the value from the struct thread_info ttbr0 variable. Interrupts
      must be disabled
      during the uaccess_ttbr0_enable code to ensure the atomicity of the
      thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
      get_thread_info asm macro from entry.S to assembler.h for reuse in the
      uaccess_ttbr0_* macros.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
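      A conceptual C rendering of the mechanism (the real code is asm
      macros; reserved_ttbr0_phys is an assumed name for the reserved
      page's physical address):

          static inline void uaccess_ttbr0_disable(void)
          {
                  /* Point TTBR0_EL1 at the reserved zeroed page. */
                  write_sysreg(reserved_ttbr0_phys, ttbr0_el1);
                  isb();
          }

          static inline void uaccess_ttbr0_enable(void)
          {
                  unsigned long flags;

                  /* thread_info.ttbr0 read and TTBR0_EL1 write
                   * must happen atomically. */
                  local_irq_save(flags);
                  write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
                  isb();
                  local_irq_restore(flags);
          }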
  7. 12 November 2016, 3 commits
    • arm64: split thread_info from task stack · c02433dd
      Mark Rutland committed
      This patch moves arm64's struct thread_info from the task stack into
      task_struct. This protects thread_info from corruption in the case of
      stack overflows, and makes its address harder to determine if stack
      addresses are leaked, making a number of attacks more difficult. Precise
      detection and handling of overflow is left for subsequent patches.
      
      Largely, this involves changing code to store the task_struct in sp_el0,
      and acquire the thread_info from the task struct. Core code now
      implements current_thread_info(), and as noted in <linux/sched.h> this
      relies on offsetof(task_struct, thread_info) == 0, enforced by core
      code.
      
      This change means that the 'tsk' register used in entry.S now points to
      a task_struct, rather than a thread_info as it used to. To make this
      clear, the TI_* field offsets are renamed to TSK_TI_*, with asm-offsets
      appropriately updated to account for the structural change.
      
      Userspace clobbers sp_el0, and we can no longer restore this from the
      stack. Instead, the current task is cached in a per-cpu variable that we
      can safely access from early assembly as interrupts are disabled (and we
      are thus not preemptible).
      
      Both secondary entry and idle are updated to stash the sp and task
      pointer separately.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
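      Roughly, the layout this relies on (core code enforces the
      offset-zero rule when CONFIG_THREAD_INFO_IN_TASK is selected):

          struct task_struct {
                  struct thread_info      thread_info;    /* must stay first */
                  /* ... */
          };

          /* A task pointer therefore doubles as a thread_info pointer. */
          #define current_thread_info()   ((struct thread_info *)current)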
    • arm64: factor out current_stack_pointer · a9ea0017
      Mark Rutland committed
      We define current_stack_pointer in <asm/thread_info.h>, though other
      files and headers relying upon it do not have this necessary include,
      and are thus fragile to changes in the header soup.
      
      Subsequent patches will affect the header soup such that directly
      including <asm/thread_info.h> may result in a circular header include in
      some of these cases, so we can't simply include <asm/thread_info.h>.
      
      Instead, factor current_stack_pointer into its own header, and have
      all existing users include this explicitly.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
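      The factored-out definition is tiny; something like this named
      register variable is all the new header needs to carry:

          /* Evaluates to the current stack pointer wherever it's used. */
          register unsigned long current_stack_pointer asm ("sp");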
    • arm64: thread_info remove stale items · dcbe0285
      Mark Rutland committed
      We have a comment claiming __switch_to() cares about where cpu_context
      is located relative to cpu_domain in thread_info. However, arm64 has
      never had a thread_info::cpu_domain field, and neither __switch_to nor
      cpu_switch_to care where the cpu_context field is relative to others.
      
      Additionally, the init_thread_info alias is never used anywhere in the
      kernel, and will shortly become problematic when thread_info is moved
      into task_struct.
      
      This patch removes both.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  8. 08 November 2016, 1 commit
    • arm64: Add uprobe support · 9842ceae
      Pratyush Anand committed
      This patch adds support for uprobe on ARM64 architecture.
      
      Unit tests for the following have been done so far, and they have
      been found to work:
          1. Step-able instructions, like sub, ldr, add etc.
          2. Simulation-able like ret, cbnz, cbz etc.
          3. uretprobe
          4. Reject-able instructions like sev, wfe etc.
          5. trapped and abort xol path
          6. probe at unaligned user address.
          7. longjump test cases
      
      Currently it does not support aarch32 instruction probing.
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  9. 09 September 2016, 1 commit
    • arm64: simplify sysreg manipulation · adf75899
      Mark Rutland committed
      A while back we added {read,write}_sysreg accessors to handle accesses
      to system registers, without the usual boilerplate asm volatile,
      temporary variable, etc.
      
      This patch makes use of these across arm64 to make code shorter and
      clearer. For sequences with a trailing ISB, the existing isb() macro is
      also used so that asm blocks can be removed entirely.
      
      A few uses of inline assembly for msr/mrs are left as-is. Those
      manipulating sp_el0 for the current thread_info value have special
      clobber requirements.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
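      An illustrative before/after for the conversion (register and
      variable names are arbitrary; these are fragments, not a complete
      function):

          /* Before: boilerplate asm and a temporary. */
          u64 tp;
          asm volatile("mrs %0, tpidr_el1" : "=r" (tp));

          /* After: the accessor reads directly into the variable. */
          u64 tp2 = read_sysreg(tpidr_el1);

          /* Sequences with a trailing ISB use the existing macro. */
          write_sysreg(val, ttbr0_el1);
          isb();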
  10. 19 February 2016, 2 commits
    • arm64: Remove the get_thread_info() function · e950631e
      Catalin Marinas committed
      This function was introduced by previous commits implementing UAO.
      However, it can be replaced with task_thread_info() in
      uao_thread_switch() or get_fs() in do_page_fault() (the latter being
      called only on the current context, so no need for using the saved
      pt_regs).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: kernel: Add support for User Access Override · 57f4959b
      James Morse committed
      'User Access Override' is a new ARMv8.2 feature which allows the
      unprivileged load and store instructions to be overridden to behave in
      the normal way.
      
      This patch converts {get,put}_user() and friends to use ldtr*/sttr*
      instructions - so that they can only access EL0 memory, then enables
      UAO when fs==KERNEL_DS so that these functions can access kernel memory.
      
      This allows user space's read/write permissions to be checked against the
      page tables, instead of testing addr<USER_DS, then using the kernel's
      read/write permissions.
      Signed-off-by: James Morse <james.morse@arm.com>
      [catalin.marinas@arm.com: move uao_thread_switch() above dsb()]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
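      A simplified sketch of the set_fs() logic this describes, assuming
      the SET_PSTATE_UAO and ARM64_HAS_UAO names from the same series:

          static inline void set_fs(mm_segment_t fs)
          {
                  current_thread_info()->addr_limit = fs;

                  /* Patched to a real PSTATE.UAO write on UAO hardware,
                   * so uaccess can reach kernel memory for KERNEL_DS. */
                  if (IS_ENABLED(CONFIG_ARM64_UAO) && fs == KERNEL_DS)
                          asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO));
                  else
                          asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
          }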
  11. 08 December 2015, 1 commit
  12. 20 October 2015, 2 commits
  13. 07 October 2015, 1 commit
    • arm64: mm: rewrite ASID allocator and MM context-switching code · 5aec715d
      Will Deacon committed
      Our current switch_mm implementation suffers from a number of problems:
      
        (1) The ASID allocator relies on IPIs to synchronise the CPUs on a
            rollover event
      
        (2) Because of (1), we cannot allocate ASIDs with interrupts disabled
            and therefore make use of a TIF_SWITCH_MM flag to postpone the
            actual switch to finish_arch_post_lock_switch
      
        (3) We run context switch with a reserved (invalid) TTBR0 value, even
            though the ASID and pgd are updated atomically
      
        (4) We take a global spinlock (cpu_asid_lock) during context-switch
      
        (5) We use h/w broadcast TLB operations when they are not required
            (e.g. in flush_context)
      
      This patch addresses these problems by rewriting the ASID algorithm to
      match the bitmap-based arch/arm/ implementation more closely. This in
      turn allows us to remove much of the complications surrounding switch_mm,
      including the ugly thread flag.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
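      A greatly simplified sketch of the bitmap/generation scheme the
      rewrite adopts (names roughly follow arch/arm64/mm/context.c;
      rollover, per-cpu tracking and TLB-flush details are omitted):

          void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
          {
                  u64 asid = atomic64_read(&mm->context.id);

                  /* Fast path: the generation (upper bits) is current. */
                  if (asid && !((asid ^ atomic64_read(&asid_generation)) >> asid_bits)) {
                          cpu_switch_mm(mm->pgd, mm);
                          return;
                  }

                  /* Slow path: take a fresh ASID from the bitmap; a full
                   * bitmap bumps the generation and flushes stale entries. */
                  raw_spin_lock(&cpu_asid_lock);
                  asid = new_context(mm);  /* find_next_zero_bit(asid_map) */
                  atomic64_set(&mm->context.id, asid);
                  raw_spin_unlock(&cpu_asid_lock);
                  cpu_switch_mm(mm->pgd, mm);
          }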
  14. 13 April 2015, 1 commit
  15. 13 February 2015, 1 commit
    • all arches, signal: move restart_block to struct task_struct · f56141e3
      Andy Lutomirski committed
      If an attacker can cause a controlled kernel stack overflow, overwriting
      the restart block is a very juicy exploit target.  This is because the
      restart_block is held in the same memory allocation as the kernel stack.
      
      Moving the restart block to struct task_struct prevents this exploit by
      making the restart_block harder to locate.
      
      Note that there are other fields in thread_info that are also easy
      targets, at least on some architectures.
      
      It's also a decent simplification, since the restart code is more or less
      identical on all architectures.
      
      [james.hogan@imgtec.com: metag: align thread_info::supervisor_stack]
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: David Miller <davem@davemloft.net>
      Acked-by: Richard Weinberger <richard@nod.at>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Tested-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
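      The representative caller-side change is one level of indirection
      (illustrative lines, assuming a typical signal-restart site):

          /* Before: the block sat inside the stack allocation. */
          current_thread_info()->restart_block.fn = do_no_restart_syscall;

          /* After: it lives in task_struct, away from the stack. */
          current->restart_block.fn = do_no_restart_syscall;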
  16. 08 September 2014, 2 commits
  17. 10 July 2014, 1 commit
  18. 22 May 2014, 1 commit
  19. 12 May 2014, 1 commit
  20. 08 May 2014, 1 commit
    • arm64: defer reloading a task's FPSIMD state to userland resume · 005f78cd
      Ard Biesheuvel committed
      If a task gets scheduled out and back in again and nothing has touched
      its FPSIMD state in the meantime, there is really no reason to reload
      it from memory. Similarly, repeated calls to kernel_neon_begin() and
      kernel_neon_end() will preserve and restore the FPSIMD state every time.
      
      This patch defers the FPSIMD state restore to the last possible moment,
      i.e., right before the task returns to userland. If a task does not return to
      userland at all (for any reason), the existing FPSIMD state is preserved
      and may be reused by the owning task if it gets scheduled in again on the
      same CPU.
      
      This patch adds two more functions to abstract away from straight FPSIMD
      register file saves and restores:
      - fpsimd_restore_current_state -> ensure current's FPSIMD state is loaded
      - fpsimd_flush_task_state -> invalidate live copies of a task's FPSIMD state
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
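      A sketch of the deferred reload, close in shape to the new helper;
      TIF_FOREIGN_FPSTATE (from the same series) means "the registers do
      not hold current's state":

          void fpsimd_restore_current_state(void)
          {
                  preempt_disable();
                  if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
                          struct fpsimd_state *st = &current->thread.fpsimd_state;

                          fpsimd_load_state(st);
                          /* Record this CPU as owner of the live state. */
                          this_cpu_write(fpsimd_last_state, st);
                  }
                  preempt_enable();
          }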
  21. 14 November 2013, 1 commit
  22. 26 July 2013, 1 commit
  23. 17 September 2012, 1 commit
  24. 22 November 2011, 1 commit
  25. 17 March 2011, 1 commit
  26. 02 October 2010, 1 commit
  27. 14 May 2010, 1 commit
  28. 16 February 2010, 1 commit
  29. 02 September 2009, 1 commit
  30. 15 August 2009, 1 commit
    • ARM: 5677/1: ARM support for TIF_RESTORE_SIGMASK/pselect6/ppoll/epoll_pwait · 36984265
      Mikael Pettersson committed
      This patch adds support for TIF_RESTORE_SIGMASK to ARM's
      signal handling, which allows hooking up the pselect6, ppoll,
      and epoll_pwait syscalls on ARM.
      
      Tested here with eabi userspace and a test program with a
      deliberate race between a child's exit and the parent's
      sigprocmask/select sequence. Using sys_pselect6() instead
      of sigprocmask/select reliably prevents the race.
      
      Other arches' support for TIF_RESTORE_SIGMASK has evolved
      over time:
      
      In 2.6.16:
      - add TIF_RESTORE_SIGMASK which parallels TIF_SIGPENDING
      - test both when checking for pending signal [changed later]
      - reimplement sys_sigsuspend() to use current->saved_sigmask,
        TIF_RESTORE_SIGMASK [changed later], and -ERESTARTNOHAND;
        ditto for sys_rt_sigsuspend(), but drop private code and
        use common code via __ARCH_WANT_SYS_RT_SIGSUSPEND;
      - there are now no "extra" calls to do_signal() so its oldset
        parameter is always &current->blocked so need not be passed,
        also its return value is changed to void
      - change handle_signal() to return 0/-errno
      - change do_signal() to honor TIF_RESTORE_SIGMASK:
        + get oldset from current->saved_sigmask if TIF_RESTORE_SIGMASK
          is set
        + if handle_signal() was successful then clear TIF_RESTORE_SIGMASK
        + if no signal was delivered and TIF_RESTORE_SIGMASK is set then
          clear it and restore the sigmask
      - hook up sys_pselect6() and sys_ppoll()
      
      In 2.6.19:
      - hook up sys_epoll_pwait()
      
      In 2.6.26:
      - allow archs to override how TIF_RESTORE_SIGMASK is implemented;
        default set_restore_sigmask() sets both TIF_RESTORE_SIGMASK and
        TIF_SIGPENDING; archs need now just test TIF_SIGPENDING again
        when checking for pending signal work; some archs now implement
        TIF_RESTORE_SIGMASK as a secondary/non-atomic thread flag bit
      - call set_restore_sigmask() in sys_sigsuspend() instead of setting
        TIF_RESTORE_SIGMASK
      
      In 2.6.29-rc:
      - kill sys_pselect7() which no arch wanted
      
      So for 2.6.31-rc6/ARM this patch does the following:
      - Add TIF_RESTORE_SIGMASK. Use the generic set_restore_sigmask()
        which sets both TIF_SIGPENDING and TIF_RESTORE_SIGMASK, so
        TIF_RESTORE_SIGMASK need not claim one of the scarce low thread
        flags, and existing TIF_SIGPENDING and _TIF_WORK_MASK tests need
        not be extended for TIF_RESTORE_SIGMASK.
      - sys_sigsuspend() is reimplemented to use current->saved_sigmask
        and set_restore_sigmask(), making it identical to most other archs
      - The private code for sys_rt_sigsuspend() is removed, instead
        generic code supplies it via __ARCH_WANT_SYS_RT_SIGSUSPEND.
      - sys_sigsuspend() and sys_rt_sigsuspend() no longer need a pt_regs
        parameter, so their assembly code wrappers are removed.
      - handle_signal() is changed to return 0 on success or -errno.
      - The oldset parameter to do_signal() is now redundant and removed,
        and the return value is now also redundant and changed to void.
      - do_signal() is changed to honor TIF_RESTORE_SIGMASK:
        + get oldset from current->saved_sigmask if TIF_RESTORE_SIGMASK
          is set
        + if handle_signal() was successful then clear TIF_RESTORE_SIGMASK
        + if no signal was delivered and TIF_RESTORE_SIGMASK is set then
          clear it and restore the sigmask
      - Hook up sys_pselect6, sys_ppoll, and sys_epoll_pwait.
      Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
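      A sketch of the do_signal() flow spelled out above (generic
      2.6.31-era shape, not the exact ARM function):

          static void do_signal(struct pt_regs *regs)
          {
                  sigset_t *oldset = &current->blocked;
                  struct k_sigaction ka;
                  siginfo_t info;
                  int signr;

                  /* Deliver against the saved mask if one is pending. */
                  if (test_thread_flag(TIF_RESTORE_SIGMASK))
                          oldset = &current->saved_sigmask;

                  signr = get_signal_to_deliver(&info, &ka, regs, NULL);
                  if (signr > 0) {
                          if (handle_signal(signr, &ka, &info, oldset, regs) == 0)
                                  clear_thread_flag(TIF_RESTORE_SIGMASK);
                          return;
                  }

                  /* No signal delivered: restore a pending saved sigmask. */
                  if (test_and_clear_thread_flag(TIF_RESTORE_SIGMASK))
                          sigprocmask(SIG_SETMASK, &current->saved_sigmask, NULL);
          }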
  31. 11 July 2009, 1 commit
  32. 12 February 2009, 2 commits