1. 07 Sep 2014 (5 commits)
    • add C11 thread creation and related thread functions · 23614b0f
      Committed by Rich Felker
      based on patch by Jens Gustedt.
      
      the main difficulty here is handling the difference between start
      function signatures and thread return types for C11 threads versus
      POSIX threads. pointers to void are assumed to be able to represent
      faithfully all values of int. the function pointer for the thread
      start function is cast to an incorrect type for passing through
      pthread_create, but is cast back to its correct type before calling so
      that the behavior of the call is well-defined.
      
      changes to the existing threads implementation were kept minimal to
      reduce the risk of regressions, and duplication of code that carries
      implementation-specific assumptions was avoided for ease and safety of
      future maintenance.
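      The cast round-trip described above can be sketched as follows (a hypothetical trampoline for illustration, not musl's actual internals): the int-returning start function travels through pthread_create under the POSIX signature, is cast back to its correct type before the call, and its int result is carried in the void * thread return value.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

typedef int (*my_thrd_start_t)(void *);

struct c11_args { my_thrd_start_t func; void *arg; };

/* POSIX-signature wrapper: cast the stored pointer back to its correct
 * C11 type before calling, so the call itself is well-defined, and
 * carry the int result in the void * thread return value */
static void *c11_trampoline(void *p)
{
    struct c11_args a = *(struct c11_args *)p;
    free(p);
    return (void *)(intptr_t)a.func(a.arg);
}

/* create a C11-style thread, join it, and recover its int result */
int run_c11(my_thrd_start_t func, void *arg)
{
    pthread_t t;
    void *res;
    struct c11_args *a = malloc(sizeof *a);
    a->func = func;
    a->arg = arg;
    pthread_create(&t, 0, c11_trampoline, a);
    pthread_join(t, &res);
    return (int)(intptr_t)res;
}

/* example start function with the C11 int-returning signature */
int add_one(void *p) { return *(int *)p + 1; }
```

      This sketch passes the pair through the heap; the actual commit, as its message says, casts the function pointer itself through pthread_create and casts it back before the call, keeping changes to the existing implementation minimal.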
    • add C11 condition variable functions · 14397cec
      Committed by Jens Gustedt
      Because private pthread_cond_t objects are cleanly separated, these
      interfaces are quite simple and direct.
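      As a rough illustration (hypothetical names, not musl's sources), such a wrapper can be little more than a direct call plus return-value mapping:

```c
#include <pthread.h>

enum { my_thrd_success = 0, my_thrd_error = 2 };  /* illustrative values */

/* the pthread call does all the work; only the return-value
 * convention differs between POSIX and C11 */
int my_cnd_signal(pthread_cond_t *c)
{
    return pthread_cond_signal(c) ? my_thrd_error : my_thrd_success;
}

int my_cnd_broadcast(pthread_cond_t *c)
{
    return pthread_cond_broadcast(c) ? my_thrd_error : my_thrd_success;
}
```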
    • add C11 mutex functions · 8b047293
      Committed by Jens Gustedt
    • add C11 thread functions operating on tss_t and once_flag · e16f70f4
      Committed by Jens Gustedt
      These all have POSIX equivalents, but aside from tss_get, they all
      have minor changes to the signature or return value and thus need to
      exist as separate functions.
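      For example (hypothetical sketch), tss_create differs from pthread_key_create only in its return-value convention, so a thin separate function is needed even though the underlying work is identical:

```c
#include <pthread.h>

enum { my_thrd_success = 0, my_thrd_error = 2 };  /* illustrative values */

typedef pthread_key_t my_tss_t;
typedef void (*my_tss_dtor_t)(void *);

/* same work as pthread_key_create, but C11's success/error codes
 * instead of an errno value */
int my_tss_create(my_tss_t *key, my_tss_dtor_t dtor)
{
    return pthread_key_create(key, dtor) ? my_thrd_error : my_thrd_success;
}
```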
    • use weak symbols for the POSIX functions that will be used by C threads · df7d0dfb
      Committed by Jens Gustedt
      The intent of this is to avoid name space pollution of the C threads
      implementation.
      
      This has two sides to it. First we have to provide symbols that wouldn't
      pollute the name space for the C threads implementation. Second we have
      to clean up some internal uses of POSIX functions such that they don't
      implicitly drag in such symbols.
  2. 05 Sep 2014 (1 commit)
  3. 26 Aug 2014 (4 commits)
    • refrain from spinning on locks when there is already a waiter · f5fb20b0
      Committed by Rich Felker
      if there is already a waiter for a lock, spinning on the lock is
      essentially an attempt to steal it from whichever waiter would obtain
      it via any priority rules in place, and is therefore undesirable. in
      the current implementation, there is always an inherent race window at
      unlock during which a newly-arriving thread may steal the lock from
      the existing waiters, but we should aim to keep this window minimal
      rather than enlarging it.
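      A sketch of the policy (hypothetical helper, C11 atomics): spin only while no waiter is queued; once the waiters count is nonzero, give up spinning and fall through to the futex wait path.

```c
#include <stdatomic.h>

/* returns 1 if the lock was acquired by spinning, 0 if the caller
 * should proceed to a futex wait instead */
int try_spin_lock(atomic_int *lock, atomic_int *waiters, int spins)
{
    while (spins-- > 0) {
        if (atomic_load(waiters) > 0)
            return 0;  /* don't steal the lock from an existing waiter */
        int expect = 0;
        if (atomic_compare_exchange_strong(lock, &expect, 1))
            return 1;  /* acquired while uncontended */
    }
    return 0;
}
```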
    • 97a7512b
    • spin in sem_[timed]wait before performing futex wait · 2ff714c6
      Committed by Rich Felker
      empirically, this increases the maximum rate of wait/post operations
      between two threads by 20-150 times on machines I tested, including
      x86 and arm. conceptually, it makes sense to do some spinning because
      semaphores are intended to be usable as a notification mechanism
      between threads, not just as locks, and low-latency notification is a
      valuable property to have.
    • sanitize number of spins in userspace before futex wait · b8a9c90e
      Committed by Rich Felker
      the previous spin limit of 10000 was utterly unreasonable.
      empirically, it could consume up to 200000 cycles, whereas a failed
      futex wait (EAGAIN) typically takes 1000 cycles or less, and even a
      true wait/wake round seems much less expensive.
      
      the new counts (100 for general wait, 200 in barrier) were simply
      chosen to be in the range of what's reasonable without having adverse
      effects on casual micro-benchmark tests I have been running. they may
      still be too high, from a standpoint of not wasting cpu cycles, but at
      least they're a lot better than before. rigorous testing across
      different archs and cpu models should be performed at some point to
      determine whether further adjustments should be made.
  4. 24 Aug 2014 (1 commit)
    • fix false ownership of stdio FILEs due to tid reuse · 5345c9b8
      Committed by Rich Felker
      this is analogous to commit fffc5cda,
      which fixed the corresponding issue for mutexes.
      
      the robust list can't be used here because the locks do not share a
      common layout with mutexes. at some point it may make sense to simply
      incorporate a mutex object into the FILE structure and use it, but
      that would be a much more invasive change, and it doesn't mesh well
      with the current design that uses a simpler code path for internal
      locking and pulls in the recursive-mutex-like code when the flockfile
      API is used explicitly.
  5. 23 Aug 2014 (2 commits)
    • fix fallback checks for kernels without private futex support · b8ca9eb5
      Committed by Rich Felker
      for unknown syscall commands, the kernel produces ENOSYS, not EINVAL.
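      A sketch of such a fallback on Linux (hypothetical wrapper; it checks both errno values, so it also tolerates kernels that report EINVAL):

```c
#include <errno.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef FUTEX_WAKE
#define FUTEX_WAKE 1
#endif
#ifndef FUTEX_PRIVATE_FLAG
#define FUTEX_PRIVATE_FLAG 128
#endif

/* try the private op first; on ENOSYS (or EINVAL) retry without it */
long futex_wake_fallback(volatile int *addr, int cnt)
{
    long r = syscall(SYS_futex, addr, FUTEX_WAKE | FUTEX_PRIVATE_FLAG,
                     cnt, 0, 0, 0);
    if (r != -1 || (errno != ENOSYS && errno != EINVAL))
        return r;
    return syscall(SYS_futex, addr, FUTEX_WAKE, cnt, 0, 0, 0);
}
```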
    • fix use of uninitialized memory with application-provided thread stacks · a6293285
      Committed by Rich Felker
      the subsequent code in pthread_create and the code which copies TLS
      initialization images to the new thread's TLS space assume that the
      memory provided to them is zero-initialized, which is true when it's
      obtained by pthread_create using mmap. however, when the caller
      provides a stack using pthread_attr_setstack, pthread_create cannot
      make any assumptions about the contents. simply zero-filling the
      relevant memory in this case is the simplest and safest fix.
  6. 19 Aug 2014 (1 commit)
    • further simplify and optimize new cond var · 4992ace9
      Committed by Rich Felker
      the main idea of the changes made is to have waiters wait directly on
      the "barrier" lock that was used to prevent them from making forward
      progress too early rather than first waiting on the atomic state value
      and then attempting to lock the barrier.
      
      in addition, adjustments to the mutex waiter count are optimized.
      previously, each waking waiter decremented the count (unless it was
      the first) then immediately incremented it again for the next waiter
      (unless it was the last). this was a roundabout way of achieving the
      equivalent of incrementing it once for the first waiter and
      decrementing it once for the last.
  7. 18 Aug 2014 (2 commits)
    • simplify and improve new cond var implementation · 2c4b510b
      Committed by Rich Felker
      previously, wake order could be unpredictable: if a waiter happened to
      leave its futex wait on the state early, e.g. due to EAGAIN while
      restarting after a signal handler, it could acquire the mutex out of
      turn. handling this required ugly O(n) list walking in the unwait
      function and accounting to remove waiters that already woke from the
      list.
      
      with the new changes, the "barrier" locks in each waiter node are only
      unlocked in turn. in addition to simplifying the code, this seems to
      improve performance slightly, probably by reducing the number of
      accesses threads make to each other's stacks.
      
      as an additional benefit, unrecoverable mutex re-locking errors
      (mainly ENOTRECOVERABLE for robust mutexes) no longer need to be
      handled with deadlock; they can be reported to the caller, since the
      unlocking sequence makes it unnecessary to rely on the mutex to
      synchronize access to the waiter list.
    • redesign cond var implementation to fix multiple issues · 37195db8
      Committed by Rich Felker
      the immediate issue that was reported by Jens Gustedt and needed to be
      fixed was corruption of the cv/mutex waiter states when switching to
      using a new mutex with the cv after all waiters were unblocked but
      before they finished returning from the wait function.
      
      self-synchronized destruction was also handled poorly and may have had
      race conditions. and the use of sequence numbers for waking waiters
      admitted a theoretical missed-wakeup if the sequence number wrapped
      through the full 32-bit space.
      
      the new implementation is largely documented in the comments in the
      source. the basic principle is to use linked lists initially attached
      to the cv object, but detachable on signal/broadcast, made up of nodes
      residing in automatic storage (stack) on the threads that are waiting.
      this eliminates the need for waiters to access the cv object after
      they are signaled, and allows us to limit wakeup to one waiter at a
      time during broadcasts even when futex requeue cannot be used.
      
      performance is also greatly improved, roughly doubling in some tests.
      
      basically nothing is changed in the process-shared cond var case,
      where this implementation does not work, since processes do not have
      access to one another's local storage.
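      The list mechanics can be sketched like this (hypothetical, single-threaded illustration): each node lives in what would be the waiter's own stack frame, and broadcast detaches the whole chain from the cv, so waiters never need to touch the cv object after being signaled.

```c
#include <stddef.h>

struct waiter { struct waiter *next; int notified; };
struct cv { struct waiter *head; };

/* a waiter links its own (stack-resident) node onto the cv */
void cv_enqueue(struct cv *c, struct waiter *w)
{
    w->notified = 0;
    w->next = c->head;
    c->head = w;
}

/* broadcast detaches the chain in one step; waiters are then woken
 * from the detached list and the cv itself is no longer referenced */
struct waiter *cv_detach_all(struct cv *c)
{
    struct waiter *list = c->head;
    c->head = NULL;
    for (struct waiter *w = list; w; w = w->next)
        w->notified = 1;
    return list;
}
```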
  8. 17 Aug 2014 (4 commits)
    • fix possible failure-to-wake deadlock with robust mutexes · 4220d298
      Committed by Rich Felker
      when the kernel is responsible for waking waiters on a robust mutex
      whose owner died, it does not have a waiters count available and must
      rely entirely on the waiter bit of the lock value.
      
      normally, this bit is only set by newly arriving waiters, so it will
      be clear if no new waiters arrived after the current owner obtained
      the lock, even if there are other waiters present. leaving it clear is
      desirable because it allows timed-lock operations to remove themselves
      as waiters and avoid causing unnecessary futex wake syscalls. however,
      for process-shared robust mutexes, we need to set the bit whenever
      there are existing waiters so that the kernel will know to wake them.
      
      for non-process-shared robust mutexes, the wake happens in userspace
      and can look at the waiters count, so the bit does not need to be set
      in the non-process-shared case.
    • make pointers used in robust list volatile · de7e99c5
      Committed by Rich Felker
      when manipulating the robust list, the order of stores matters,
      because the code may be asynchronously interrupted by a fatal signal
      and the kernel will then access the robust list in what is essentially
      an async-signal context.
      
      previously, aliasing considerations made it seem unlikely that a
      compiler could reorder the stores, but proving that they could not be
      reordered incorrectly would have been extremely difficult. instead
      I've opted to make all the pointers used as part of the robust list,
      including those in the robust list head and in the individual mutexes,
      volatile.
      
      in addition, the format of the robust list has been changed to point
      back to the head at the end, rather than ending with a null pointer.
      this is to match the documented kernel robust list ABI. the null
      pointer, which was previously used, only worked because faults during
      access terminate the robust list processing.
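      The list shape can be sketched as follows (simplified; the real layout is the kernel's robust-futex ABI in linux/futex.h): next pointers are volatile so stores cannot be reordered relative to an asynchronous fatal signal, and the list is circular, with an empty list pointing back at the head rather than at null.

```c
#include <stddef.h>

struct rl_node { struct rl_node *volatile next; };
struct rl_head { struct rl_node *volatile list; };

/* an empty list points back at the head, matching the documented
 * kernel ABI, rather than ending with a null pointer */
void rl_init(struct rl_head *h)
{
    h->list = (struct rl_node *)h;
}

/* insert at the front; both stores go through volatile pointers */
void rl_insert(struct rl_head *h, struct rl_node *n)
{
    n->next = h->list;
    h->list = n;
}
```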
    • fix robust mutex unrecoverable status, and related clean-up · d338b506
      Committed by Rich Felker
      a robust mutex should not enter the unrecoverable status until it's
      unlocked without marking it consistent. previously, flag 8 in the type
      was used as an indication of unrecoverable, but only honored after
      successful locking; this resulted in a race window where the
      unrecoverable mutex could appear to a second thread as locked/busy
      again while the first thread was in the process of observing it as
      unrecoverable.
      
      now, flag 8 is used to mean that the mutex is in the process of being
      recovered, but not yet marked consistent. the flag only takes effect
      in pthread_mutex_unlock, where it causes the value 0x40000000 (owner
      dead flag, with old owner tid 0, an otherwise impossible state) to be
      stored in the lock. subsequent lock attempts will interpret this state
      as unrecoverable.
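      In outline (plain ints standing in for the atomics; hypothetical helper names), the unlock path stores the impossible owner-died-with-tid-0 value and the lock path interprets it:

```c
#include <errno.h>

#define OWNER_DEAD   0x40000000  /* owner-died flag with old-owner tid 0 */
#define RECOVER_FLAG 8           /* in recovery, not yet marked consistent */

/* unlock: a recovering mutex never marked consistent is moved to the
 * permanently unrecoverable state in a single store */
void mtx_unlock_sketch(int *lock, int type)
{
    *lock = (type & RECOVER_FLAG) ? OWNER_DEAD : 0;
}

/* trylock: interpret that otherwise-impossible state as unrecoverable */
int mtx_trylock_sketch(int *lock, int tid)
{
    if (*lock == OWNER_DEAD)
        return ENOTRECOVERABLE;
    if (*lock)
        return EBUSY;
    *lock = tid;
    return 0;
}
```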
    • fix false ownership of mutexes due to tid reuse, using robust list · fffc5cda
      Committed by Rich Felker
      per the resolution of Austin Group issue 755, the POSIX requirement
      that ownership be enforced for recursive and error-checking mutexes
      does not allow a random new thread to acquire ownership of an orphaned
      mutex just because it happened to be assigned the same tid as the
      original owner that exited with the mutex locked.
      
      one possible fix for this issue would be to prevent the kernel thread
      from terminating when it exits with mutexes held, permanently reserving
      the tid against reuse. however, this does not solve the problem for
      process-shared mutexes where lifetime cannot be controlled, so it was
      not used.
      
      the alternate approach I've taken is to reuse the robust mutex system
      for non-robust recursive and error-checking mutexes. when a thread
      exits, the kernel (or the new userspace robust-list code added in
      commit b092f1c5) will set the
      owner-died bit for these orphaned mutexes, but since the mutex-type is
      not robust, pthread_mutex_trylock will not allow a new owner to
      acquire them. instead, they remain in a state of being permanently
      locked, as desired.
  9. 16 Aug 2014 (2 commits)
    • enable private futex for process-local robust mutexes · b092f1c5
      Committed by Rich Felker
      the kernel always uses non-private wake when walking the robust list
      when a thread or process exits, so it's not able to wake waiters
      listening with the private futex flag. this problem is solved by doing
      the equivalent in userspace as the last step of pthread_exit.
      
      care is taken to remove mutexes from the robust list before unlocking
      them so that the kernel will not attempt to access them again,
      possibly after another thread locks them. this removal code can treat
      the list as singly-linked, since no further code which would add or
      remove items is able to run at this point. moreover, the pending
      pointer is not needed since the mutexes being unlocked are all
      process-local; in the case of asynchronous process termination, they
      all cease to exist.
      
      since a process-local robust mutex cannot come into existence without
      a call to pthread_mutexattr_setrobust in the same process, the code
      for userspace robust list processing is put in that source file, and
      a weak alias to a dummy function is used to avoid pulling in this
      bloat as part of pthread_exit in static-linked programs.
    • make futex operations use private-futex mode when possible · bc09d58c
      Committed by Rich Felker
      private-futex uses the virtual address of the futex int directly as
      the hash key rather than requiring the kernel to resolve the address
      to an underlying backing for the mapping in which it lies. for certain
      usage patterns it improves performance significantly.
      
      in many places, the code using futex __wake and __wait operations was
      already passing a correct fixed zero or nonzero flag for the priv
      argument, so no change was needed at the site of the call, only in the
      __wake and __wait functions themselves. in other places, especially
      where the process-shared attribute for a synchronization object was
      not previously tracked, additional new code is needed. for mutexes,
      the only place to store the flag is in the type field, so additional
      bit masking logic is needed for accessing the type.
      
      for non-process-shared condition variable broadcasts, the futex
      requeue operation is unable to requeue from a private futex to a
      process-shared one in the mutex structure, so requeue is simply
      disabled in this case by waking all waiters.
      
      for robust mutexes, the kernel always performs a non-private wake when
      the owner dies. in order not to introduce a behavioral regression in
      non-process-shared robust mutexes (when the owning thread dies), they
      are simply forced to be treated as process-shared for now, giving
      correct behavior at the expense of performance. this can be fixed by
      adding explicit code to pthread_exit to do the right thing for
      non-shared robust mutexes in userspace rather than relying on the
      kernel to do it, and will be fixed in this way later.
      
      since not all supported kernels have private futex support, the new
      code detects EINVAL from the futex syscall and falls back to making
      the call without the private flag. no attempt to cache the result is
      made; caching it and using the cached value efficiently is somewhat
      difficult, and not worth the complexity when the benefits would be
      seen only on ancient kernels which have numerous other limitations and
      bugs anyway.
  10. 19 Jul 2014 (1 commit)
    • add or1k (OpenRISC 1000) architecture port · 200d1547
      Committed by Stefan Kristiansson
      With the exception of a fenv implementation, the port is fully featured.
      The port has been tested in or1ksim, the golden reference functional
      simulator for OpenRISC 1000.
      It passes all libc-test tests (except the math tests that
      require a fenv implementation).
      
      The port assumes an or1k implementation that has support for
      atomic instructions (l.lwa/l.swa).
      
      Although it passes all the libc-test tests, the port is still
      in an experimental state, and has so far seen very little
      'real-world' use.
  11. 17 Jul 2014 (1 commit)
    • work around constant folding bug 61144 in gcc 4.9.0 and 4.9.1 · a6adb2bc
      Committed by Rich Felker
      previously we detected this bug in configure and issued advice for a
      workaround, but this turned out not to work. since then gcc 4.9.0 has
      appeared in several distributions, and now 4.9.1 has been released
      without a fix despite this being a wrong code generation bug which is
      supposed to be a release-blocker, per gcc policy.
      
      since the scope of the bug seems to affect only data objects (rather
      than functions) whose definitions are overridable, and there are only
      a very small number of these in musl, I am just changing them from
      const to volatile for the time being. simply removing the const would
      be sufficient to make gcc 4.9.1 work (the non-const case was
      inadvertently fixed as part of another change in gcc), and this would
      also be sufficient with 4.9.0 if we forced -O0 on the affected files
      or on the whole build. however it's cleaner to just remove all the
      broken compiler detection and use volatile, which will ensure that
      they are never constant-folded. the quality of a non-broken compiler's
      output should not be affected except for the fact that these objects
      are no longer const and thus possibly add a few bytes to data/bss.
      
      this change can be reconsidered and possibly reverted at some point in
      the future when the broken gcc versions are no longer relevant.
  12. 07 Jul 2014 (2 commits)
  13. 06 Jul 2014 (1 commit)
    • eliminate use of cached pid from thread structure · 83dc6eb0
      Committed by Rich Felker
      the main motivation for this change is to remove the assumption that
      the tid of the main thread is also the pid of the process. (the value
      returned by the set_tid_address syscall was used to fill both fields
      despite it semantically being the tid.) this is historically and
      presently true on linux and unlikely to change, but it conceivably
      could be false on other systems that otherwise reproduce the linux
      syscall api/abi.
      
      only a few parts of the code were actually still using the cached pid.
      in a couple places (aio and synccall) it was a minor optimization to
      avoid a syscall. caching could be reintroduced, but lazily as part of
      the public getpid function rather than at program startup, if it's
      deemed important for performance later. in other places (cancellation
      and pthread_kill) the pid was completely unnecessary; the tkill
      syscall can be used instead of tgkill. this is actually a rather
      subtle issue, since tgkill is supposedly a solution to race conditions
      that can affect use of tkill. however, as documented in the commit
      message for commit 7779dbd2, tgkill
      does not actually solve this race; it just limits it to happening
      within one process rather than between processes. we use a lock that
      avoids the race in pthread_kill, and the use in the cancellation
      signal handler is self-targeted and thus not subject to tid reuse
      races, so both are safe regardless of which syscall (tgkill or tkill)
      is used.
  14. 03 Jul 2014 (1 commit)
    • add locale framework · 0bc03091
      Committed by Rich Felker
      this commit adds non-stub implementations of setlocale, duplocale,
      newlocale, and uselocale, along with the data structures and minimal
      code needed for representing the active locale on a per-thread basis
      and optimizing the common case where thread-local locale settings are
      not in use.
      
      at this point, the data structures only contain what is necessary to
      represent LC_CTYPE (a single flag) and LC_MESSAGES (a name for use in
      finding message translation files). representation for the other
      categories will be added later; the expectation is that a single
      pointer will suffice for each.
      
      for LC_CTYPE, the strings "C" and "POSIX" are treated as special; any
      other string is accepted and treated as "C.UTF-8". for other
      categories, any string is accepted after being truncated to a maximum
      supported length (currently 15 bytes). for LC_MESSAGES, the name is
      kept regardless of whether libc itself can use such a message
      translation locale, since applications using catgets or gettext should
      be able to use message locales libc is not aware of. for other
      categories, names which are not successfully loaded as locales (which,
      at present, means all names) are treated as aliases for "C". setlocale
      never fails.
      
      locale settings are not yet used anywhere, so this commit should have
      no visible effects except for the contents of the string returned by
      setlocale.
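      The LC_CTYPE rule described above reduces to a tiny predicate (hypothetical helper name): "C" and "POSIX" select the byte-based C locale, and any other accepted name behaves as "C.UTF-8".

```c
#include <string.h>

/* sketch of the LC_CTYPE special-casing: returns 1 if the named
 * locale enables UTF-8, 0 for the plain C/POSIX byte locale */
int ctype_is_utf8(const char *name)
{
    if (!strcmp(name, "C") || !strcmp(name, "POSIX"))
        return 0;
    return 1;  /* any other string is treated as "C.UTF-8" */
}
```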
  15. 19 Jun 2014 (2 commits)
    • separate __tls_get_addr implementation from dynamic linker/init_tls · 5ba238e1
      Committed by Rich Felker
      such separation serves multiple purposes:
      
      - by having the common path for __tls_get_addr alone in its own
        function with a tail call to the slow case, code generation is
        greatly improved.
      
      - by having __tls_get_addr in its own file, it can be replaced on a
        per-arch basis as needed, for optimization or ABI-specific purposes.
      
      - by removing __tls_get_addr from __init_tls.c, a few bytes of code
        are shaved off of static binaries (which are unlikely to use this
        function unless the linker messed up).
    • optimize i386 ___tls_get_addr asm · 880c479f
      Committed by Rich Felker
  16. 10 Jun 2014 (3 commits)
    • simplify errno implementation · ac31bf27
      Committed by Rich Felker
      the motivation for the errno_ptr field in the thread structure, which
      this commit removes, was to allow the main thread's errno to keep its
      address when lazy thread pointer initialization was used. &errno was
      evaluated prior to setting up the thread pointer and stored in
      errno_ptr for the main thread; subsequently created threads would have
      errno_ptr pointing to their own errno_val in the thread structure.
      
      since lazy initialization was removed, there is no need for this extra
      level of indirection; __errno_location can simply return the address
      of the thread's errno_val directly. this does cause &errno to change,
      but the change happens before entry to application code, and thus is
      not observable.
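      With eager thread-pointer initialization, essentially all that remains is the sketch below (hypothetical names; musl keeps errno_val in its thread structure, shown here as a C11 _Thread_local variable): the location function returns the slot's address directly, with no errno_ptr indirection.

```c
/* per-thread errno slot; its address is stable for the thread's life */
_Thread_local int my_errno_val;

/* no extra level of indirection: just return the slot's address */
int *my_errno_location(void)
{
    return &my_errno_val;
}
```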
    • replace all remaining internal uses of pthread_self with __pthread_self · df15168c
      Committed by Rich Felker
      prior to version 1.1.0, the difference between pthread_self (the
      public function) and __pthread_self (the internal macro or inline
      function) was that the former would lazily initialize the thread
      pointer if it was not already initialized, whereas the latter would
      crash in this case. since lazy initialization is no longer supported,
      use of pthread_self no longer makes sense; it simply generates larger,
      slower code.
    • add thread-pointer support for pre-2.6 kernels on i386 · 64e32287
      Committed by Rich Felker
      such kernels cannot support threads, but the thread pointer is also
      important for other purposes, most notably stack protector. without a
      valid thread pointer, all code compiled with stack protector will
      crash. the same applies to any use of thread-local storage by
      applications or libraries.
      
      the concept of this patch is to fall back to using the modify_ldt
      syscall, which has been around since linux 1.0, to setup the gs
      segment register. since the kernel does not have a way to
      automatically assign ldt entries, use of slot zero is hard-coded. if
      this fallback path is used, __set_thread_area returns a positive value
      (rather than the usual zero for success, or negative for error)
      indicating to the caller that the thread pointer was successfully set,
      but only for the main thread, and that thread creation will not work
      properly. the code in __init_tp has been changed accordingly to record
      this result for later use by pthread_create.
  17. 16 Apr 2014 (1 commit)
    • fix deadlock race in pthread_once · 0d0c2f40
      Committed by Rich Felker
      at the end of successful pthread_once, there was a race window during
      which another thread calling pthread_once would momentarily change the
      state back from 2 (finished) to 1 (in-progress). in this case, the
      status was immediately changed back, but with no wake call, meaning
      that waiters which arrived during this short window could block
      forever. there are two possible fixes. one would be adding the wake to
      the code path where it was missing. but it's better just to avoid
      reverting the status at all, by using compare-and-swap instead of
      swap.
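      The fix can be sketched with C11 atomics (a hypothetical, simplified once with no futex wait shown): the entering thread uses compare-and-swap, so a control word already in state 2 is never disturbed.

```c
#include <stdatomic.h>

int once_runs;
void count_init(void) { once_runs++; }

/* states: 0 = not started, 1 = in progress, 2 = finished */
int once_sketch(atomic_int *control, void (*init)(void))
{
    for (;;) {
        int expect = 0;
        /* CAS instead of swap: never reverts state 2 back to 1 */
        if (atomic_compare_exchange_strong(control, &expect, 1)) {
            init();
            atomic_store(control, 2);
            /* real code would futex-wake waiters here */
            return 0;
        }
        if (expect == 2)
            return 0;   /* already finished; state left untouched */
        /* expect == 1: real code would futex-wait; this sketch retries */
    }
}
```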
  18. 25 Mar 2014 (2 commits)
    • fix pointer type mismatch and misplacement of const · 689e0e6b
      Committed by Rich Felker
    • always initialize thread pointer at program start · dab441ae
      Committed by Rich Felker
      this is the first step in an overhaul aimed at greatly simplifying and
      optimizing everything dealing with thread-local state.
      
      previously, the thread pointer was initialized lazily on first access,
      or at program startup if stack protector was in use, or at certain
      random places where inconsistent state could be reached if it were not
      initialized early. while believed to be fully correct, the logic was
      fragile and non-obvious.
      
      in the first phase of the thread pointer overhaul, support is retained
      (and in some cases improved) for systems/situation where loading the
      thread pointer fails, e.g. old kernels.
      
      some notes on specific changes:
      
      - the confusing use of libc.main_thread as an indicator that the
        thread pointer is initialized is eliminated in favor of an explicit
        has_thread_pointer predicate.
      
      - sigaction no longer needs to ensure that the thread pointer is
        initialized before installing a signal handler (this was needed to
        prevent a situation where the signal handler caused the thread
        pointer to be initialized and the subsequent sigreturn cleared it
        again) but it still needs to ensure that implementation-internal
        thread-related signals are not blocked.
      
      - pthread tsd initialization for the main thread is deferred in a new
        manner to minimize bloat in the static-linked __init_tp code.
      
      - pthread_setcancelstate no longer needs special handling for the
        situation before the thread pointer is initialized. it simply fails
        on systems that cannot support a thread pointer, which are
        non-conforming anyway.
      
      - pthread_cleanup_push/pop now check for missing thread pointer and
        nop themselves out in this case, so stdio no longer needs to avoid
        the cancellable path when the thread pointer is not available.
      
      a number of cases remain where certain interfaces may crash if the
      system does not support a thread pointer. at this point, these should
      be limited to pthread interfaces, and the number of such cases should
      be fewer than before.
  19. 28 Feb 2014 (1 commit)
    • rename superh port to "sh" for consistency · aacd3486
      Committed by Rich Felker
      linux, gcc, etc. all use "sh" as the name for the superh arch. there
      was already some inconsistency internally in musl: the dynamic linker
      was searching for "ld-musl-sh.path" as its path file despite its own
      name being "ld-musl-superh.so.1". there was some sentiment in both
      directions as to how to resolve the inconsistency, but overall "sh"
      was favored.
  20. 24 Feb 2014 (1 commit)
  21. 23 Feb 2014 (2 commits)