1. 11 Apr 2019, 1 commit
    • overhaul i386 syscall mechanism not to depend on external asm source · 22e5bbd0
      Committed by Rich Felker
      this is the first part of a series of patches intended to make
      __syscall fully self-contained in the object file produced using
      syscall.h, which will make it possible for crt1 code to perform
      syscalls.
      
      the (confusingly named) i386 __vsyscall mechanism, which this commit
      removes, was introduced before the presence of a valid thread pointer
      was mandatory; back then the thread pointer was set up lazily only if
      threads were used. the intent was to be able to perform syscalls using
      the kernel's fast entry point in the VDSO, which can use the sysenter
      (Intel) or syscall (AMD) instruction instead of int $128, but without
      inlining an access to the __syscall global at the point of each
      syscall, which would incur a significant size cost from PIC setup
      everywhere. the mechanism also shuffled registers/calling convention
      around to avoid spills of call-saved registers, and to avoid
      allocating ebx or ebp via asm constraints, since there are plenty of
      broken-but-supported compiler versions which are incapable of
      allocating ebx with -fPIC or ebp with -fno-omit-frame-pointer.
      
      the new mechanism preserves the properties of avoiding spills and
      avoiding allocation of ebx/ebp in constraints, but does it inline,
      using some fairly simple register shuffling, and uses a field of the
      thread structure rather than global data for the vdso-provided syscall
      code address.
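      
      as a rough illustration, here is a minimal sketch (not musl's actual
      syscall_arch.h) of the register-shuffling idea for one argument: the
      value that belongs in ebx is passed in a free register and exchanged
      around the trap, so no "b" asm constraint is ever needed:
      
      static inline long sketch_syscall1(long n, long a1)
      {
              unsigned long ret;
              __asm__ __volatile__ (
                      "xchg %%ebx,%%edx\n\t"  /* move the argument into ebx, save old ebx */
                      "int $128\n\t"          /* legacy entry; real code may call the vdso pointer */
                      "xchg %%ebx,%%edx"      /* restore ebx for the compiler */
                      : "=a"(ret)
                      : "a"(n), "d"(a1)
                      : "memory");
              return ret;
      }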
      
      for now, the external __syscall function is refactored not to use the
      old __vsyscall mechanism so that it can be kept, but the intent is to
      remove it too.
  2. 10 Apr 2019, 2 commits
    • release 1.1.22 · e97681d6
      Committed by Rich Felker
    • in membarrier fallback, allow for possibility that sigaction fails · a01ff71f
      Committed by Rich Felker
      this is a workaround to avoid a crashing regression on qemu-user when
      dynamic TLS is installed at dlopen time. the sigaction syscall should
      not be able to fail, but it does fail for implementation-internal
      signals under qemu user-level emulation if the host libc that qemu is
      running under reserves the same signals for implementation-internal
      use, since qemu makes no provision to redirect/emulate them. after
      sigaction fails, the subsequent tkill would terminate the process
      abnormally as the default action.
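      
      a minimal sketch of the kind of check involved (hypothetical helper,
      not musl's internal code): verify that sigaction succeeded before any
      thread is signalled, and skip the signal-based barrier otherwise:
      
      #include <signal.h>
      
      static void barrier_handler(int sig) { (void)sig; }
      
      /* returns 0 if the barrier handler is installed, -1 if the signal-based
       * fallback must be skipped (e.g. under qemu-user, where the host libc
       * may have reserved the signal for its own internal use) */
      static int install_barrier_signal(int sig)
      {
              struct sigaction sa = { .sa_handler = barrier_handler };
              sigemptyset(&sa.sa_mask);
              if (sigaction(sig, &sa, 0) < 0)
                      return -1;  /* do not tkill: the default action would kill the process */
              return 0;
      }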
      
      no provision to account for membarrier failing is made in the dynamic
      linker code that installs new TLS. at the formal level, the missing
      barrier in this case is incorrect, and perhaps we should fail the
      dlopen operation, but in practice all the archs we support (and
      probably all real-world archs except alpha, which isn't yet supported)
      should give the right behavior with no barrier at all as a consequence
      of consume-order properties.
      
      in the long term, this workaround should be supplemented or replaced
      by something better -- a different fallback approach to ensuring
      memory consistency, or dynamic allocation of implementation-internal
      signals. the latter is appealing in that it would allow cancellation
      to work under qemu-user too, and would even allow many levels of
      nested emulation.
  3. 06 Apr 2019, 2 commits
  4. 04 Apr 2019, 1 commit
    • fix unintended global symbols in atanl.c · 81868803
      Committed by Dan Gohman
      Mark atanhi, atanlo, and aT in atanl.c as static, as they're not
      intended to be part of the public API.
      
      These are already static in the LDBL_MANT_DIG == 64 code, so this
      patch is just making the LDBL_MANT_DIG == 113 code do the same thing.
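      
      The shape of the change, roughly (placeholder values only, not the
      real coefficient tables):
      
      static const long double atanhi[] = { 0 };  /* was defined without "static" */
      static const long double atanlo[] = { 0 };
      static const long double aT[]     = { 0 };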
  5. 02 Apr 2019, 3 commits
  6. 01 Apr 2019, 1 commit
    • implement priority inheritance mutexes · 54ca6779
      Committed by Rich Felker
      priority inheritance is a feature to mitigate priority inversion
      situations, where execution of a medium-priority thread can
      unboundedly block forward progress of a high-priority thread when a
      lock it needs is held by a low-priority thread.
      
      the natural way to do priority inheritance would be with a simple
      futex flag to donate the calling thread's priority to a target thread
      while it waits on the futex. unfortunately, linux does not offer such
      an interface, but instead insists on implementing the whole locking
      protocol in kernelspace with special futex commands that exist solely
      for the purpose of doing PI mutexes. this would require the entire
      "trylock" logic to be duplicated in the timedlock code path for PI
      mutexes, since, once the previous lock holder releases the lock and
      the futex call returns, the lock is already held by the caller.
      obviously such code duplication is undesirable.
      
      instead, I've made the PI timedlock success path set the mutex lock
      count to -1, which can be thought of as "not yet complete", since a
      lock count of 0 is "locked, with no recursive references". a simple
      branch in a non-hot path of pthread_mutex_trylock can then see and act
      on this state, skipping past the code that would check and take the
      lock to the same code path that runs after the lock is obtained for a
      non-PI mutex.
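      
      schematically, with hypothetical simplified names (not the real
      pthread internals), the shared completion path looks like this:
      
      struct mtx_sketch { volatile int lock, waiters; int count; int type; };
      
      static int trylock_or_complete(struct mtx_sketch *m)
      {
              if (m->count < 0) {
                      /* PI timedlock path: the kernel already made this thread
                       * the owner, so skip the take-the-lock logic and just
                       * finish the same bookkeeping as for a non-PI mutex */
                      m->count = 0;
                      return 0;
              }
              /* ... otherwise, normal trylock: CAS the futex word, handle
               * recursive / error-checking / robust cases, etc. ... */
              return 0;
      }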
      
      because we're forced to let the kernel perform the actual lock and
      unlock operations whenever the mutex is contended, we have to patch
      things up when it does the wrong thing:
      
      1. the lock operation is not aware of whether the mutex is
         error-checking, so it will always fail with EDEADLK rather than
         deadlocking.
      
      2. the lock operation is not aware of whether the mutex is robust, so
         it will successfully obtain mutexes in the owner-died state even if
         they're non-robust, whereas this operation should deadlock.
      
      3. the unlock operation always sets the lock value to zero, whereas
         for robust mutexes, we want to set it to a special value indicating
         that the mutex obtained after its owner died was unlocked without
         marking it consistent, so that future operations all fail with
         ENOTRECOVERABLE.
      
      the first of these is easy to solve, just by performing a futex wait
      on a dummy futex address to simulate deadlock or ETIMEDOUT as
      appropriate. but problems 2 and 3 interact in a nasty way. to solve
      problem 2, we need to back out the spurious success. but if waiters
      are present -- which we can't just ignore, because even if we don't
      want to wake them, the calling thread is incorrectly inheriting their
      priorities -- this requires using the kernel's unlock operation, which
      will zero the lock value, thereby losing the "owner died with lock
      held" state.
      
      to solve these problems, we overload the mutex's waiters field, which
      is unused for PI mutexes since they don't call the normal futex wait
      functions, as an indicator that the PI mutex is permanently
      non-lockable. originally I wanted to use the count field, but there is
      one code path that needs to access this flag without synchronization:
      trylock's CAS failure path needs to be able to decide whether to fail
      with EBUSY or ENOTRECOVERABLE. the waiters field is already treated as
      a relaxed-order atomic in our memory model, so this works out nicely.
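      
      for example, the CAS-failure decision can be as simple as the
      following sketch (hypothetical names; the flag is the overloaded
      waiters word described above, read as a relaxed-order atomic):
      
      #include <errno.h>
      
      /* what trylock reports when its CAS on the lock word fails */
      static int trylock_failure_code(int unrecoverable_flag)
      {
              return unrecoverable_flag ? ENOTRECOVERABLE : EBUSY;
      }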
  7. 30 Mar 2019, 1 commit
    • clean up access to mutex type in pthread_mutex_trylock · 2142cafd
      Committed by Rich Felker
      there was no point in masking off the pshared bit when first loading
      the type, since every subsequent access involves a mask anyway. not
      masking it may avoid a subsequent load to check the pshared flag, and
      it's just simpler.
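      
      a minimal sketch of the idea (hypothetical field and bit names, not
      musl's real layout):
      
      #define TYPE_MASK   3
      #define PSHARED_BIT 128
      
      struct mtx_type_sketch { int kind; };
      
      static int classify(const struct mtx_type_sketch *m)
      {
              int type = m->kind;                        /* no early "& TYPE_MASK" */
              int recursive = (type & TYPE_MASK) == 1;   /* mask only where needed */
              int pshared   = (type & PSHARED_BIT) != 0; /* no reload of m->kind */
              return recursive | (pshared << 1);
      }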
  8. 22 Mar 2019, 3 commits
  9. 15 Mar 2019, 1 commit
    • fix crash/out-of-bound read in sscanf · 8f12c4e1
      Committed by Rich Felker
      commit d6c855ca caused this "regression" (though the behavior was
      undefined before) by overlooking that f->shend=0 was being used as a
      sentinel for "EOF" status (actual
      EOF or hitting the scanf field width) of the stream helper (shgetc)
      functions.
      
      obviously the shgetc macro could be adjusted to check for a null
      pointer in addition to the != comparison, but it's the hot path, and
      adding extra code/branches to it begins to defeat the purpose.
      
      so instead of setting shend to a null pointer to block further reads,
      which no longer works, set it to the current position (rpos). this
      makes the shgetc macro work with no change, but it breaks shunget,
      which can no longer look at the value of shend to determine whether to
      back up. Szabolcs Nagy suggested a solution which I'm using here:
      setting shlim to a negative value is inexpensive to test at shunget
      time, and automatically re-trips the cnt>=shlim stop condition in
      __shgetc no matter what the original limit was.
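      
      schematically (simplified, with hypothetical field names rather than
      the real stdio internals):
      
      struct sh_sketch { unsigned char *rpos, *shend; long long shlim; };
      
      /* hot path unchanged: a pointer compare, no null check required */
      #define SH_GETC(f)  ((f)->rpos != (f)->shend ? *(f)->rpos++ : -1 /* slow path */)
      /* unget backs up only while reads are still permitted */
      #define SH_UNGET(f) ((f)->shlim >= 0 ? (void)((f)->rpos--) : (void)0)
      
      static void sh_stop_reads(struct sh_sketch *f)
      {
              f->shend = f->rpos;  /* the fast path now always falls through */
              f->shlim = -1;       /* re-trips the cnt >= shlim check in the slow path */
      }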
  10. 14 Mar 2019, 22 commits
  11. 13 Mar 2019, 3 commits
    • handle labels with 8-bit byte values in dn_skipname · 2a0ff45b
      Committed by Ryan Fairfax
      The original logic considered each byte until it either found a 0
      value or a value >= 192. This means if a string segment contained any
      byte >= 192 it was interpreted as a compressed segment marker even
      if it wasn't in a position where it should be interpreted as such.
      
      The fix is to adjust dn_skipname to increment by each segment's size
      rather than look at each character. This avoids misinterpreting
      string segment characters by not considering those bytes.
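      
      In outline, the corrected walk looks like this (a sketch, not the
      exact musl patch):
      
      /* returns the encoded length of the name starting at p, or -1 on error */
      static int skip_dns_name(const unsigned char *p, const unsigned char *end)
      {
              const unsigned char *s = p;
              while (s < end) {
                      if (*s == 0) return s + 1 - p;   /* root label: end of name */
                      if (*s >= 192) {                 /* compression pointer */
                              if (end - s < 2) return -1;
                              return s + 2 - p;
                      }
                      if (end - s < *s + 1) return -1; /* truncated label */
                      s += *s + 1;                     /* skip length byte plus label bytes */
              }
              return -1;
      }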
    • fix POSIX_FADV_DONTNEED/_NOREUSE on s390x · 4b125dd4
      Committed by Jonathan Neuschäfer
      On s390x, POSIX_FADV_DONTNEED and POSIX_FADV_NOREUSE have different
      values than on all other architectures that Linux supports.
      
      Handle this difference by wrapping their definitions in
      include/fcntl.h in #ifdef, so that arch/s390x/bits/fcntl.h can
      override them.
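      
      The pattern in include/fcntl.h looks roughly like this (generic Linux
      values shown; arch/s390x/bits/fcntl.h supplies its own):
      
      #define POSIX_FADV_NORMAL     0
      #define POSIX_FADV_RANDOM     1
      #define POSIX_FADV_SEQUENTIAL 2
      #define POSIX_FADV_WILLNEED   3
      #ifndef POSIX_FADV_DONTNEED
      #define POSIX_FADV_DONTNEED   4
      #define POSIX_FADV_NOREUSE    5
      #endif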
    • expose TSVTX unconditionally in tar.h · 81221e13
      Committed by Rich Felker
      as noted in Austin Group issue #1236, the XSI shading for TSVTX is
      misplaced in the html version of the standard; it was only supposed to
      be on the description text. the intent was that the definition always
      be visible, which is reflected in the pdf version of the standard.
      
      this reverts commits d93c0740 and
      729fef0a.
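      
      after the revert, tar.h simply defines the macro unconditionally,
      alongside the other mode bits:
      
      #define TSVTX 01000  /* on directories, restricted deletion flag */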