1. 28 Apr 2022, 1 commit
  2. 11 Jan 2022, 3 commits
  3. 06 Jan 2022, 1 commit
  4. 11 Jun 2021, 1 commit
  5. 11 Mar 2021, 1 commit
  6. 09 Sep 2020, 1 commit
  7. 17 Aug 2020, 1 commit
  8. 14 Sep 2019, 1 commit
    • harden thread start with failed scheduling against broken __clone · f5eee489
      Committed by Rich Felker
      commit 8a544ee3 introduced a
      dependency of the failure path for explicit scheduling at thread
      creation on __clone's handling of the start function returning, which
      should result in SYS_exit.
      
      as noted in commit 05870abe, the arm
      version of __clone was broken in this case. in the past, the mips
      version was also broken; it was fixed in commit
      8b2b61e0.
      
      since this code path is pretty much entirely untested (previously only
      reachable in applications that call the public clone() and return from
      the start function) and consists of fragile per-arch asm, don't assume
      it works, at least not until it's been thoroughly tested. instead make
      the SYS_exit syscall from the start function's failure path.
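      A minimal sketch of the shape this adopts, assuming a hypothetical failure
      flag handed to the start function (illustrative only, not musl's actual
      pthread_create internals):

      ```c
      #define _GNU_SOURCE
      #include <unistd.h>
      #include <sys/syscall.h>

      /* illustrative start function: on the scheduling-failure path, make
       * the SYS_exit syscall here instead of returning and relying on the
       * per-arch __clone asm to do it */
      static int start(void *arg)
      {
          int *sched_failed = arg;      /* hypothetical failure flag */
          if (*sched_failed)
              syscall(SYS_exit, 0);     /* exits this task only, not the process */
          /* ... normal thread body would run here ... */
          return 0;
      }
      ```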
  9. 07 Sep 2019, 3 commits
    • synchronously clean up pthread_create failure due to scheduling errors · 8a544ee3
      Committed by Rich Felker
      previously, when pthread_create failed due to inability to set
      explicit scheduling according to the requested attributes, the nascent
      thread was detached and made responsible for its own cleanup via the
      standard pthread_exit code path. this left it consuming resources
      potentially well after pthread_create returned, in a way that the
      application could not see or mitigate, and unnecessarily exposed its
      existence to the rest of the implementation via the global thread
      list.
      
      instead, attempt explicit scheduling early and reuse the failure path
      for __clone failure if it fails. the nascent thread's exit futex is
      not needed for unlocking the thread list, since the thread calling
      pthread_create holds the thread list lock the whole time, so it can be
      repurposed to ensure the thread has finished exiting. no pthread_exit
      is needed, and freeing the stack, if needed, can happen just as it
      would if __clone failed.
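      The repurposed exit-futex wait, sketched with public syscall numbers
      (names hypothetical; the kernel clears the CLONE_CHILD_CLEARTID word and
      wakes it when the task exits):

      ```c
      #define _GNU_SOURCE
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <linux/futex.h>

      /* wait until the kernel clears *tid_addr (the CLONE_CHILD_CLEARTID
       * word) at task exit; afterwards the nascent thread's stack can be
       * freed just as if __clone itself had failed */
      static void wait_for_exit(volatile int *tid_addr)
      {
          int t;
          while ((t = *tid_addr))
              syscall(SYS_futex, tid_addr, FUTEX_WAIT, t, 0, 0, 0);
      }
      ```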
    • set explicit scheduling for new thread from calling thread, not self · 022f27d5
      Committed by Rich Felker
      if setting scheduling properties succeeds, the new thread may end up
      with lower priority than the caller, and may be unable to continue
      running due to another intermediate-priority thread. this produces a
      priority inversion situation for the thread calling pthread_create,
      since it cannot return until the new thread reports success.
      
      originally, the parent was responsible for setting the new thread's
      priority; commits b8742f32 and
      40bae2d3 changed it as part of
      trimming down the pthread structure. since then, commit
      04335d92 partly reversed the changes,
      but did not switch responsibilities back. do that now.
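      A sketch of the switched responsibility, assuming the creator already has
      the new thread's tid (error mapping simplified):

      ```c
      #define _GNU_SOURCE
      #include <sched.h>
      #include <unistd.h>
      #include <sys/syscall.h>

      /* the creating thread applies the requested policy to the new
       * thread's tid directly, so pthread_create can't be starved waiting
       * for a lower-priority child to report back */
      static long apply_explicit_sched(pid_t tid, int policy,
                                       const struct sched_param *param)
      {
          return syscall(SYS_sched_setscheduler, tid, policy, param);
      }
      ```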
    • fix unsynchronized decrement of thread count on pthread_create error · dd0a23dd
      Committed by Rich Felker
      commit 8f11e612 wrongly documented
      that all changes to libc.threads_minus_1 were guarded by the thread
      list lock, but the decrement for failed SYS_clone took place after the
      thread list lock was released.
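      The fix in miniature, with a pthread mutex standing in for musl's thread
      list lock:

      ```c
      #include <pthread.h>

      static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
      static int threads_minus_1;

      /* the decrement for a failed SYS_clone must happen while the thread
       * list lock is still held, so the documented invariant actually holds */
      static void undo_thread_count(void)
      {
          pthread_mutex_lock(&list_lock);
          threads_minus_1--;
          pthread_mutex_unlock(&list_lock);
      }
      ```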
  10. 11 Apr 2019, 1 commit
    • overhaul i386 syscall mechanism not to depend on external asm source · 22e5bbd0
      Committed by Rich Felker
      this is the first part of a series of patches intended to make
      __syscall fully self-contained in the object file produced using
      syscall.h, which will make it possible for crt1 code to perform
      syscalls.
      
      the (confusingly named) i386 __vsyscall mechanism, which this commit
      removes, was introduced before the presence of a valid thread pointer
      was mandatory; back then the thread pointer was set up lazily only if
      threads were used. the intent was to be able to perform syscalls using
      the kernel's fast entry point in the VDSO, which can use the sysenter
      (Intel) or syscall (AMD) instruction instead of int $128, but without
      inlining an access to the __syscall global at the point of each
      syscall, which would incur a significant size cost from PIC setup
      everywhere. the mechanism also shuffled registers/calling convention
      around to avoid spills of call-saved registers, and to avoid
      allocating ebx or ebp via asm constraints, since there are plenty of
      broken-but-supported compiler versions which are incapable of
      allocating ebx with -fPIC or ebp with -fno-omit-frame-pointer.
      
      the new mechanism preserves the properties of avoiding spills and
      avoiding allocation of ebx/ebp in constraints, but does it inline,
      using some fairly simple register shuffling, and uses a field of the
      thread structure rather than global data for the vdso-provided syscall
      code address.
      
      for now, the external __syscall function is refactored not to use the
      old __vsyscall so it can be kept, but the intent is to remove it too.
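      A simplified, i386-only sketch of the inline shuffle for the one-argument
      case (musl's real syscall.h covers 0-6 arguments and can dispatch through
      the thread-structure field to the vdso entry): the argument destined for
      ebx travels in edx and is swapped around the trap, so the constraints
      never have to allocate ebx.

      ```c
      /* i386 only: a 1-argument syscall without allocating ebx in asm
       * constraints; ebx is restored by the second xchg, edx ends unchanged */
      static inline long my_syscall1(long n, long a1)
      {
          unsigned long ret;
          __asm__ __volatile__ (
              "xchg %%ebx,%%edx ; int $128 ; xchg %%ebx,%%edx"
              : "=a"(ret)
              : "a"(n), "d"(a1)
              : "memory");
          return ret;
      }
      ```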
  11. 22 Feb 2019, 2 commits
    • add membarrier syscall wrapper, refactor dynamic tls install to use it · ba18c1ec
      Committed by Rich Felker
      the motivation for this change is twofold. first, it gets the fallback
      logic out of the dynamic linker, improving code readability and
      organization. second, it provides application code that wants to use
      the membarrier syscall, which depends on preregistration of intent
      before the process becomes multithreaded unless unbounded latency is
      acceptable, with a symbol that, when linked, ensures that this
      registration happens.
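      A sketch of the preregistration such a wrapper enables, via the raw
      syscall:

      ```c
      #define _GNU_SOURCE
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <linux/membarrier.h>

      /* register intent while the process is still single-threaded, so
       * later MEMBARRIER_CMD_PRIVATE_EXPEDITED barriers have bounded cost */
      static long register_membarrier(void)
      {
          return syscall(SYS_membarrier,
                         MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0);
      }
      ```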
    • make thread list lock a recursive lock · 7865d569
      Committed by Rich Felker
      this is a prerequisite for factoring the membarrier fallback code into
      a function that can be called from a context with the thread list
      already locked or independently.
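      One common shape for a recursive lock, sketched with C11 atomics and a
      plain spin (hypothetical; musl's real lock also handles waiters via
      futex): re-entry by the owning tid only bumps a depth counter.

      ```c
      #define _GNU_SOURCE
      #include <stdatomic.h>
      #include <unistd.h>
      #include <sys/syscall.h>

      static _Atomic int owner;   /* tid of holder, 0 if free */
      static int depth;           /* re-entry depth, guarded by ownership */

      static void tl_lock(void)
      {
          int tid = syscall(SYS_gettid);
          if (atomic_load(&owner) == tid) { depth++; return; }
          for (int expect = 0;
               !atomic_compare_exchange_weak(&owner, &expect, tid);
               expect = 0) ;      /* spin; a real lock would futex-wait */
          depth = 1;
      }

      static void tl_unlock(void)
      {
          if (--depth == 0) atomic_store(&owner, 0);
      }
      ```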
  12. 19 Feb 2019, 1 commit
    • install dynamic tls synchronously at dlopen, streamline access · 9d44b646
      Committed by Rich Felker
      previously, dynamic loading of new libraries with thread-local storage
      allocated the storage needed for all existing threads at load-time,
      precluding late failure that can't be handled, but left installation
      in existing threads to take place lazily on first access. this imposed
      an additional memory access and branch on every dynamic tls access,
      and imposed a requirement, which was not actually met, that the
      dynamic tlsdesc asm functions preserve all call-clobbered registers
      before calling C code to install new dynamic tls on first access.
      the x86[_64] versions of this code wrongly omitted saving and
      restoring of fpu/vector registers, assuming the compiler would not
      generate anything using them in the called C code. the arm and aarch64
      versions saved known existing registers, but failed to be future-proof
      against expansion of the register file.
      
      now that we track live threads in a list, it's possible to install the
      new dynamic tls for each thread at dlopen time. for the most part,
      synchronization is not needed, because if a thread has not
      synchronized with completion of the dlopen, there is no way it can
      meaningfully request access to a slot past the end of the old dtv,
      which remains valid for accessing slots which already existed.
      however, it is necessary to ensure that, if a thread sees its new dtv
      pointer, it sees correct pointers in each of the slots that existed
      prior to the dlopen. my understanding is that, on most real-world
      coherency architectures including all the ones we presently support, a
      built-in consume order guarantees this; however, don't rely on that.
      instead, the SYS_membarrier syscall is used to ensure that all threads
      see the stores to the slots of their new dtv prior to the installation
      of the new dtv. if it is not supported, the same is implemented in
      userspace via signals, using the same mechanism as __synccall.
      
      the __tls_get_addr function, variants, and dynamic tlsdesc asm
      functions are all updated to remove the fallback paths for claiming
      new dynamic tls, and are now all branch-free.
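      The ordering requirement, sketched with a hypothetical dtv
      representation; the barrier step is SYS_membarrier, or the signal-based
      fallback when it is unsupported:

      ```c
      #include <stdatomic.h>
      #include <stddef.h>
      #include <string.h>

      typedef void **dtv_t;                /* hypothetical dtv layout */
      static _Atomic(dtv_t) current_dtv;   /* stand-in for a thread's dtv pointer */

      static void install_new_dtv(dtv_t new_dtv, dtv_t old_dtv, size_t old_len)
      {
          memcpy(new_dtv, old_dtv, old_len * sizeof *old_dtv); /* existing slots */
          /* process-wide barrier here (SYS_membarrier or the __synccall-style
           * signal fallback): every thread that later loads current_dtv must
           * also see the slot contents stored above */
          atomic_store(&current_dtv, new_dtv);                 /* publish */
      }
      ```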
  13. 16 Feb 2019, 3 commits
    • rewrite __synccall in terms of global thread list · e4235d70
      Committed by Rich Felker
      the __synccall mechanism provides stop-the-world synchronous execution
      of a callback in all threads of the process. it is used to implement
      multi-threaded setuid/setgid operations, since Linux lacks them at the
      kernel level, and for some other less-critical purposes.
      
      this change eliminates dependency on /proc/self/task to determine the
      set of live threads, which in addition to being an unwanted dependency
      and a potential point of resource-exhaustion failure, turned out to be
      inaccurate. test cases provided by Alexey Izbyshev showed that it
      could fail to reflect newly created threads. due to how the
      presignaling phase worked, this usually yielded a deadlock if hit, but
      in the worst case it could also result in threads being silently
      missed (allowed to continue running without executing the callback).
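      With a complete, locked list, the presignaling pass reduces to a
      traversal; a sketch with a hypothetical node type:

      ```c
      #define _GNU_SOURCE
      #include <unistd.h>
      #include <sys/syscall.h>

      struct tnode { int tid; struct tnode *next; };  /* hypothetical node */

      /* signal every live thread except the caller; the caller holds the
       * thread list lock, so no thread can appear or vanish mid-walk */
      static void presignal_all(struct tnode *head, int self_tid, int sig)
      {
          for (struct tnode *t = head; t; t = t->next)
              if (t->tid != self_tid)
                  syscall(SYS_tgkill, getpid(), t->tid, sig);
      }
      ```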
    • track all live threads in an AS-safe, fully-consistent linked list · 8f11e612
      Committed by Rich Felker
      the hard problem here is unlinking threads from a list when they exit
      without creating a window of inconsistency where the kernel task for a
      thread still exists and is still executing instructions in userspace,
      but is not reflected in the list. the magic solution here is getting
      rid of per-thread exit futex addresses (set_tid_address), and instead
      using the exit futex to unlock the global thread list.
      
      since pthread_join can no longer see the thread enter a detach_state
      of EXITED (which depended on the exit futex address pointing to the
      detach_state), it must now observe the unlocking of the thread list
      lock before it can unmap the joined thread and return. it doesn't
      actually have to take the lock. for this, a __tl_sync primitive is
      offered, with a signature that will allow it to be enhanced for quick
      return even under contention on the lock, if needed. for now, the
      exiting thread always performs a futex wake on its detach_state. a
      future change could optimize this out except when there is already a
      joiner waiting.
      
      initial/dynamic variants of detached state no longer need to be
      tracked separately, since the futex address is always set to the
      global list lock, not a thread-local address that could become invalid
      on detached thread exit. all detached threads, however, must perform a
      second sigprocmask syscall to block implementation-internal signals,
      since locking the thread list with them already blocked is not
      permissible.
      
      the arch-independent C version of __unmapself no longer needs to take
      a lock or setup its own futex address to release the lock, since it
      must necessarily be called with the thread list lock already held,
      guaranteeing exclusive access to the temporary stack.
      
      changes to libc.threads_minus_1 no longer need to be atomic, since
      they are guarded by the thread list lock. it is largely vestigial at
      this point, and can be replaced with a cheaper boolean indicating
      whether the process is multithreaded at some point in the future.
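      A simplified sketch of a __tl_sync-style wait (the exiting task's exit
      futex clears and wakes the list lock word; the joiner never acquires the
      lock):

      ```c
      #define _GNU_SOURCE
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <linux/futex.h>

      /* wait until the thread-list lock word has been cleared by the
       * exiting thread's kernel task via its exit futex */
      static void tl_sync(volatile int *list_lock)
      {
          int v;
          while ((v = *list_lock))
              syscall(SYS_futex, list_lock, FUTEX_WAIT, v, 0, 0, 0);
      }
      ```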
    • always block signals for starting new threads, refactor start args · 04335d92
      Committed by Rich Felker
      whether signals need to be blocked at thread start, and whether
      unblocking is necessary in the entry point function, has historically
      depended on intricacies of the cancellation design and on whether
      there are scheduling operations to perform on the new thread before
      its successful creation can be committed. future changes to track an
      AS-safe list of live threads will require signals to be blocked
      whenever changes are made to the list, so ...
      
      prior to commits b8742f32 and
      40bae2d3, a signal mask for the entry
      function to restore was part of the pthread structure. it was removed
      to trim down the size of the structure, which both saved a small
      amount of stack space and improved code generation on archs where
      small immediate displacements are less costly than arbitrary ones, by
      limiting the range of offsets between the base of the thread
      structure, its members, and the thread pointer. these commits moved
      the saved mask to a special structure used only when special
      scheduling was needed, in which case the pthread_create caller and new
      thread had to synchronize with each other and could use this memory to
      pass a mask.
      
      this commit partially reverts the above two commits, but instead of
      putting the mask back in the pthread structure, it moves all "start
      argument" members out of the pthread structure, trimming it down
      further, and puts them in a separate structure passed on the new
      thread's stack. the code path for explicit scheduling of the new
      thread is also changed to synchronize with the calling thread in such
      a way as to avoid spurious futex wakes.
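      A sketch of such a start-args block, living on the new thread's stack
      only for the duration of startup (field names hypothetical):

      ```c
      #include <signal.h>

      /* passed to the new thread instead of stashing startup-only members
       * in the pthread structure */
      struct start_args {
          void *(*start_func)(void *);
          void *start_arg;
          volatile int control;   /* creator/new-thread handshake word */
          sigset_t sig_mask;      /* saved mask for the entry point to restore */
      };
      ```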
  14. 19 Sep 2018, 1 commit
  15. 13 Sep 2018, 3 commits
    • split internal lock API out of libc.h, creating lock.h · 5f12ffe1
      Committed by Rich Felker
      this further reduces the number of source files which need to include
      libc.h and thereby be potentially exposed to libc global state and
      internals.
      
      this will also facilitate further improvements like adding an inline
      fast-path, if we want to do so later.
    • overhaul internally-public declarations using wrapper headers · 13d1afa4
      Committed by Rich Felker
      commits leading up to this one have moved the vast majority of
      libc-internal interface declarations to appropriate internal headers,
      allowing them to be type-checked and setting the stage to limit their
      visibility. the ones that have not yet been moved are mostly
      namespace-protected aliases for standard/public interfaces, which
      exist to facilitate implementing plain C functions in terms of POSIX
      functionality, or C or POSIX functionality in terms of extensions that
      are not standardized. some don't quite fit this description, but are
      "internally public" interfacs between subsystems of libc.
      
      rather than create a number of newly-named headers to declare these
      functions, and having to add explicit include directives for them to
      every source file where they're needed, I have introduced a method of
      wrapping the corresponding public headers.
      
      parallel to the public headers in $(srcdir)/include, we now have
      wrappers in $(srcdir)/src/include that come earlier in the include
      path order. they include the public header they're wrapping, then add
      declarations for namespace-protected versions of the same interfaces
      and any "internally public" interfaces for the subsystem they
      correspond to.
      
      along these lines, the wrapper for features.h is now responsible for
      the definition of the hidden, weak, and weak_alias macros. this means
      source files will no longer need to include any special headers to
      access these features.
      
      over time, it is my expectation that the scope of what is "internally
      public" will expand, reducing the number of source files which need to
      include *_impl.h and related headers down to those which are actually
      implementing the corresponding subsystems, not just using them.
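      The pattern, sketched for a hypothetical stdio wrapper (guard name and
      declaration list are illustrative; the `hidden` macro comes from the
      features.h wrapper):

      ```c
      /* src/include/stdio.h -- wrapper earlier in the include path */
      #ifndef STDIO_H_WRAPPER
      #define STDIO_H_WRAPPER

      #include "../../include/stdio.h"   /* the public header being wrapped */

      /* namespace-protected aliases and "internally public" interfaces
       * (these declarations are illustrative, not an exact list) */
      hidden FILE *__ofl_add(FILE *);
      hidden int __lockfile(FILE *);
      hidden void __unlockfile(FILE *);

      #endif
      ```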
    • move declarations of tls setup/access functions to pthread_impl.h · 91c6a187
      Committed by Rich Felker
      it's already included in all places where these are needed, and aside
      from __tls_get_addr, they're all implementation internals.
  16. 17 Aug 2018, 1 commit
    • fix pthread_create return value with PTHREAD_EXPLICIT_SCHED · 91e1e29d
      Committed by Rich Felker
      due to moved code, commit b8742f32
      inadvertently used the return value of __clone, rather than the return
      value of SYS_sched_setscheduler in the new thread, to check whether it
      needed to report failure. since a successful __clone returns the tid
      of the new thread, which is never zero, this caused pthread_create
      always to return with an invalid error number in the code path for
      PTHREAD_EXPLICIT_SCHED.
      
      this regression was not present in any releases.
  17. 28 Jul 2018, 1 commit
    • make pthread_attr_init honor defaults set by pthread_setattr_default_np · 14992d43
      Committed by Rich Felker
      this fixes a major gap in the intended functionality of
      pthread_setattr_default_np. if application/library code creating a
      thread does not pass a null attribute pointer to pthread_create, but
      sets up an attribute object to change other properties while leaving
      the stack alone, the created thread will get a stack with size
      DEFAULT_STACK_SIZE. this makes pthread_setattr_default_np useless for
      working around stack overflow issues in such applications, and leaves
      a major risk of regression if previously-working code switches from
      using a null attribute pointer to an attribute object.
      
      this change aligns the behavior more closely with the glibc
      pthread_setattr_default_np functionality too, albeit via a different
      mechanism. glibc encodes "default" specially in the attribute object
      and reads the actual default at thread creation time. with this
      commit, we now copy the current default into the attribute object at
      pthread_attr_init time, so that applications that query the properties
      of the attribute object will see the right values.
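      A sketch of the init-time copy, with hypothetical globals standing in for
      the defaults maintained by pthread_setattr_default_np:

      ```c
      #include <pthread.h>
      #include <stddef.h>

      static size_t default_stacksize = 131072;  /* hypothetical current defaults */
      static size_t default_guardsize = 8192;

      /* copy the effective defaults into the object at init time, so code
       * that queries the attribute object sees the values actually used */
      int my_attr_init(pthread_attr_t *a)
      {
          int r = pthread_attr_init(a);
          if (r) return r;
          pthread_attr_setstacksize(a, default_stacksize);
          pthread_attr_setguardsize(a, default_guardsize);
          return 0;
      }
      ```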
  18. 09 May 2018, 2 commits
    • make linking of thread-start with explicit scheduling conditional · 40bae2d3
      Committed by Rich Felker
      the wrapper start function that performs scheduling operations is
      unreachable if pthread_attr_setinheritsched is never called, so move
      it there rather than the pthread_create source file, saving some code
      size for static-linked programs.
    • improve design of thread-start with explicit scheduling attributes · b8742f32
      Committed by Rich Felker
      eliminate the awkward startlock mechanism and corresponding fields of
      the pthread structure that were only used at startup.
      
      instead of having pthread_create perform the scheduling operations and
      having the new thread wait for them to be completed, start the new
      thread with a wrapper start function that performs its own scheduling,
      sending the result code back via a futex. this way the new thread can
      use storage from the calling thread's stack rather than permanent
      fields in the pthread structure.
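      A sketch of such a wrapper start function (simplified: the result
      encoding is hypothetical, and everything needed after the wake is copied
      out first, since the args block lives on the creator's stack):

      ```c
      #define _GNU_SOURCE
      #include <sched.h>
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <linux/futex.h>

      struct ssa {                         /* on the creator's stack */
          volatile int futex;              /* 0 = pending; creator waits here */
          int policy;
          struct sched_param param;
          void *(*real_start)(void *);
          void *arg;
      };

      static int start_with_sched(void *p)
      {
          struct ssa *ssa = p;
          void *(*f)(void *) = ssa->real_start;   /* copy before waking: the */
          void *arg = ssa->arg;                   /* creator may reuse its stack */
          int r = syscall(SYS_sched_setscheduler, syscall(SYS_gettid),
                          ssa->policy, &ssa->param);
          ssa->futex = r ? -1 : 1;                /* send the result code back */
          syscall(SYS_futex, &ssa->futex, FUTEX_WAKE, 1, 0, 0, 0);
          if (r) syscall(SYS_exit, 0);            /* creator handles cleanup */
          f(arg);
          return 0;
      }
      ```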
  19. 06 May 2018, 1 commit
    • improve joinable/detached thread state handling · cdba6b25
      Committed by Rich Felker
      previously, some accesses to the detached state (from pthread_join and
      pthread_getattr_np) were unsynchronized; they were harmless in
      programs with well-defined behavior, but ugly. other accesses (in
      pthread_exit and pthread_detach) were synchronized by a poorly named
      "exitlock", with an ad-hoc trylock operation on it open-coded in
      pthread_detach, whose only purpose was establishing protocol for which
      thread is responsible for deallocation of detached-thread resources.
      
      instead, use an atomic detach_state and unify it with the futex used
      to wait for thread exit. this eliminates 2 members from the pthread
      structure, gets rid of the hackish lock usage, and makes rigorous the
      trap added in commit 80bf5952 for
      catching attempts to join detached threads. it should also make
      attempts to detach an already-detached thread reliably trap.
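      A sketch of the unified state word (state values hypothetical): one CAS
      decides which side deallocates, and the same word doubles as the futex
      joiners wait on.

      ```c
      #include <stdatomic.h>

      enum { DT_JOINABLE, DT_DETACHED, DT_EXITING, DT_EXITED };  /* hypothetical */

      /* pthread_detach's race with pthread_exit becomes one CAS: whoever
       * moves the state first owns the cleanup decision */
      static int try_detach(_Atomic int *detach_state)
      {
          int expect = DT_JOINABLE;
          if (atomic_compare_exchange_strong(detach_state, &expect, DT_DETACHED))
              return 0;    /* thread will free its own resources on exit */
          return -1;       /* already detached or exiting: musl now traps here */
      }
      ```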
  20. 05 May 2018, 1 commit
    • improve pthread_exit synchronization with functions targeting tid · 526e64f5
      Committed by Rich Felker
      if the last thread exited via pthread_exit, the logic that marked it
      dead did not account for the possibility of it targeting itself via
      atexit handlers. for example, an atexit handler calling
      pthread_kill(pthread_self(), SIGKILL) would return success
      (previously, ESRCH) rather than causing termination via the signal.
      
      move the release of killlock after the determination is made whether
      the exiting thread is the last thread. in the case where it's not,
      move the release all the way to the end of the function. this way we
      can clear the tid rather than spending storage on a dedicated
      dead-flag. clearing the tid is also preferable in that it hardens
      against inadvertent use of the value after the thread has terminated
      but before it is joined.
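      The reordering, heavily simplified (a pthread mutex stands in for
      killlock, and the process-exit path is elided):

      ```c
      #include <pthread.h>

      static pthread_mutex_t killlock = PTHREAD_MUTEX_INITIALIZER;  /* stand-in */

      /* tail of a pthread_exit-like path: decide "last thread?" while the
       * lock is held, and clear the tid under it, so tid-targeting functions
       * (pthread_kill etc.) can never act on a stale value */
      static void exit_tail(volatile int *tid, int is_last_thread)
      {
          if (is_last_thread)
              return;       /* process-exit path; release handled there */
          *tid = 0;         /* cleared instead of keeping a dead-flag */
          pthread_mutex_unlock(&killlock);
      }
      ```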
  21. 03 May 2018, 1 commit
    • use a dedicated futex object for pthread_join instead of tid field · 9e2d820a
      Committed by Rich Felker
      the tid field in the pthread structure is not volatile, and really
      shouldn't be, so as not to limit the compiler's ability to reorder,
      merge, or split loads in code paths that may be relevant to
      performance (like controlling lock ownership).
      
      however, use of objects which are not volatile or atomic with futex
      wait is inherently broken, since the compiler is free to transform a
      single load into multiple loads, thereby using a different value for
      the controlling expression of the loop and the value passed to the
      futex syscall, leading the syscall to block instead of returning.
      
      reportedly glibc's pthread_join was actually affected by an
      equivalent issue on s390.
      
      add a separate, dedicated join_futex object for pthread_join to use.
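      The essential discipline, sketched: read the futex word exactly once per
      iteration through a volatile lvalue, so the loop condition and the value
      handed to the kernel cannot diverge.

      ```c
      #define _GNU_SOURCE
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <linux/futex.h>

      /* wait until the dedicated join futex is cleared by the exiting
       * thread; 'volatile' forbids the compiler from splitting the load */
      static void join_wait(volatile int *join_futex)
      {
          int v;
          while ((v = *join_futex))
              syscall(SYS_futex, join_futex, FUTEX_WAIT, v, 0, 0, 0);
      }
      ```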
  22. 03 Feb 2018, 1 commit
  23. 10 Jan 2018, 1 commit
    • consistently use the LOCK and UNLOCK macros · c4bc0b1a
      Committed by Jens Gustedt
      In some places the functions were used directly. Use the macros
      consistently everywhere, so that it will be easier later on to
      capture the fast path directly inside the macro and only have the call
      overhead on the slow path.
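      The macros are thin today; routing all callers through them leaves room
      to capture a fast path inline later (the second definition is a
      hypothetical future shape, not current musl code):

      ```c
      /* current shape: plain calls into the internal lock functions */
      #define LOCK(x)   __lock(x)
      #define UNLOCK(x) __unlock(x)

      /* possible later shape, inlining the uncontended case:
       * #define LOCK(x) (a_cas((x), 0, 1) ? __lock_slow(x) : (void)0)
       */
      ```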
  24. 07 Sep 2017, 1 commit
    • fix signal masking race in pthread_create with priority attributes · 9e01be6e
      Committed by Rich Felker
      if the parent thread was able to set the new thread's priority before
      it reached the check for 'startlock', the new thread failed to restore
      its signal mask and thus ran with all signals blocked.
      
      concept for patch by Sergei, who reported the issue; unnecessary
      changes were removed and comments added since the whole 'startlock'
      thing is non-idiomatic and confusing. eventually it should be replaced
      with use of idiomatic synchronization primitives.
  25. 09 Nov 2016, 2 commits
  26. 08 Nov 2016, 1 commit
    • simplify pthread_attr_t stack/guard size representation · 33ce9208
      Committed by Rich Felker
      previously, the pthread_attr_t object was always initialized all-zero,
      and stack/guard size were represented as differences versus their
      defaults. this required lots of confusing offset arithmetic everywhere
      they were used. instead, have pthread_attr_init fill in the default
      values, and work with absolute sizes everywhere.
  27. 28 Jun 2016, 1 commit
    • fix failure to obtain EOWNERDEAD status for process-shared robust mutexes · 384d103d
      Committed by Rich Felker
      Linux's documentation (robust-futex-ABI.txt) claims that, when a
      process dies with a futex on the robust list, bit 30 (0x40000000) is
      set to indicate the status. however, what actually happens is that
      bits 0-30 are replaced with the value 0x40000000, i.e. bits 0-29
      (containing the old owner tid) are cleared at the same time bit 30 is
      set.
      
      our userspace-side code for robust mutexes was written based on that
      documentation, assuming that kernel would never produce a futex value
      of 0x40000000, since the low (owner) bits would always be non-zero.
      commit d338b506 introduced this
      assumption explicitly while fixing another bug in how non-recoverable
      status for robust mutexes was tracked. presumably the tests conducted
      at that time only checked non-process-shared robust mutexes, which are
      handled in pthread_exit (which implemented the documented kernel
      protocol, not the actual one) rather than by the kernel.
      
      change pthread_exit robust list processing to match the kernel
      behavior, clearing bits 0-29 while setting bit 30, and use the value
      0x7fffffff instead of 0x40000000 to encode non-recoverable status. the
      choice of value here is arbitrary; any value with at least one of bits
      0-29 set should work just as well.
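      The resulting checks, sketched against the kernel's actual behavior:

      ```c
      #include <errno.h>
      #include <stdint.h>

      /* on owner death the kernel replaces bits 0-30 of the futex word with
       * 0x40000000, so compare the whole owner field, not just bit 30; bit
       * 31 (FUTEX_WAITERS) is masked off. 0x7fffffff is the chosen
       * non-recoverable encoding. */
      static int robust_status(uint32_t futex_val)
      {
          uint32_t v = futex_val & 0x7fffffffu;
          if (v == 0x40000000u) return EOWNERDEAD;
          if (v == 0x7fffffffu) return ENOTRECOVERABLE;
          return 0;
      }
      ```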
  28. 18 Jun 2015, 1 commit
  29. 16 Jun 2015, 1 commit
    • refactor stdio open file list handling, move it out of global libc struct · 1b0cdc87
      Committed by Rich Felker
      functions which open in-memory FILE stream variants all shared a tail
      with __fdopen, adding the FILE structure to stdio's open file list.
      replacing this common tail with a function call reduces code size and
      duplication of logic. the list is also partially encapsulated now.
      
      function signatures were chosen to facilitate tail call optimization
      and reduce the need for additional accessor functions.
      
      with these changes, static linked programs that do not use stdio no
      longer have an open file list at all.
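      The shared tail, sketched with a stand-in type (musl's real __ofl_add
      also takes the list lock, omitted here); returning the argument is what
      makes the tail call possible:

      ```c
      struct mfile { struct mfile *next, *prev; /* ...stream state... */ };
      static struct mfile *ofl_head;   /* stand-in for the open file list */

      /* returning the argument lets every opener end with
       * "return ofl_add(f);" -- a tail call, with no extra accessor needed */
      static struct mfile *ofl_add(struct mfile *f)
      {
          f->next = ofl_head;
          if (f->next) f->next->prev = f;
          ofl_head = f;
          return f;
      }
      ```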