1. 18 Apr 2019 (4 commits)
    • math: remove sun copyright from libm.h · f107d34e
      Committed by Szabolcs Nagy
      Nothing is left from the original fdlibm header nor from the bsd
      modifications to it other than some internal api declarations.
      
      Comments are dropped that may be copyrightable content.
    • math: add asuint, asuint64, asfloat and asdouble · d59e5042
      Committed by Szabolcs Nagy
      Code generation for SET_HIGH_WORD slightly changes, but it only affects
      pow, otherwise the generated code is unchanged.
    • math: move complex math out of libm.h · 2d72b580
      Committed by Szabolcs Nagy
      This makes it easier to build musl math code with a compiler that
      does not support complex types (tcc) and in general more sensible
      factorization of the internal headers.
    • define FP_FAST_FMA* when fma* can be inlined · e980ca7a
      Committed by Szabolcs Nagy
      FP_FAST_FMA can be defined if "the fma function generally executes about
      as fast as, or faster than, a multiply and an add of double operands",
      which can only be true if the fma call is inlined as an instruction.
      
      gcc sets __FP_FAST_FMA if __builtin_fma is inlined as an instruction,
      but that does not mean an fma call will be inlined (e.g. it is defined
with -fno-builtin-fma); other compilers (clang) don't even have such a
macro, but this is the closest we can get.
      
      (even if the libc fma implementation is a single instruction, the extern
      call overhead is already too big when the macro is used to decide between
      x*y+z and fma(x,y,z) so it cannot be based on libc only, defining the
      macro unconditionally on targets which have fma in the base isa is also
      incorrect: the compiler might not inline fma anyway.)
      
      this solution works with gcc unless fma inlining is explicitly turned off.
2. 11 Apr 2019 (8 commits)
    • fcntl.h: define O_TTY_INIT to 0 · 65c8be38
      Committed by A. Wilcox
      POSIX: "[If] either O_TTY_INIT is set in oflag or O_TTY_INIT has the
      value zero, open() shall set any non-standard termios structure
      terminal parameters to a state that provides conforming behavior."
      
      The Linux kernel tty drivers always perform initialisation on their
      devices to set known good termios values during the open(2) call.  This
      means that setting O_TTY_INIT to zero is conforming.
    • remove external __syscall function and last remaining users · 788d5e24
      Committed by Rich Felker
      the weak version of __syscall_cp_c was using a tail call to __syscall
      to avoid duplicating the 6-argument syscall code inline in small
      static-linked programs, but now that __syscall no longer exists, the
      inline expansion is no longer duplication.
      
the syscall.h machinery supported up to 7 syscall arguments, only via
      an external __syscall function, but we presently have no syscall call
      points that actually make use of that many, and the kernel only
      defines 7-argument calling conventions for arm, powerpc (32-bit), and
      sh. if it turns out we need them in the future, they can easily be
      added.
    • implement inline 5- and 6-argument syscalls for mipsn32 and mips64 · 1bcdaeee
      Committed by Rich Felker
      n32 and n64 ABIs add new argument registers vs o32, so that passing on
      the stack is not necessary, so it's not clear why the 5- and
      6-argument versions were special-cased to begin with; it seems to have
      been pattern-copying from arch/mips (o32).
      
      i've treated the new argument registers like the first 4 in terms of
      clobber status (non-clobbered). hopefully this is correct.
    • cleanup mips64 syscall_arch functions · d3b4869c
      Committed by Rich Felker
    • implement inline 5- and 6-argument syscalls for mips · dcb18bea
      Committed by Rich Felker
      the OABI passes these on the stack, using the convention that their
      position on the stack is as if the first four arguments (in registers)
      also had stack slots. originally this was deemed too awkward to do
      inline, falling back to external __syscall, but it's not that bad and
      now that external __syscall is being removed, it's necessary.
    • use inline syscalls for powerpc (32-bit) · 6aeb9c67
      Committed by Rich Felker
      the inline syscall code is copied directly from powerpc64. the extent
      of register clobber specifiers may be excessive on both; if that turns
      out to be the case it can be fixed later.
    • remove cruft for supposedly-buggy clang from or1k & microblaze syscall_arch · f76d51a1
      Committed by Rich Felker
      it was never demonstrated to me that this workaround was needed, and
      seems likely that, if there ever was any clang version for which it
      was needed, it's old enough to be unusably buggy in other ways. if it
      turns out some compilers actually can't do the register allocation
      right, we'll need to replace this with inline shuffling code, since
      the external __syscall dependency is being removed.
    • overhaul i386 syscall mechanism not to depend on external asm source · 22e5bbd0
      Committed by Rich Felker
      this is the first part of a series of patches intended to make
      __syscall fully self-contained in the object file produced using
      syscall.h, which will make it possible for crt1 code to perform
      syscalls.
      
      the (confusingly named) i386 __vsyscall mechanism, which this commit
      removes, was introduced before the presence of a valid thread pointer
was mandatory; back then the thread pointer was set up lazily only if
      threads were used. the intent was to be able to perform syscalls using
      the kernel's fast entry point in the VDSO, which can use the sysenter
      (Intel) or syscall (AMD) instruction instead of int $128, but without
      inlining an access to the __syscall global at the point of each
      syscall, which would incur a significant size cost from PIC setup
      everywhere. the mechanism also shuffled registers/calling convention
      around to avoid spills of call-saved registers, and to avoid
      allocating ebx or ebp via asm constraints, since there are plenty of
      broken-but-supported compiler versions which are incapable of
      allocating ebx with -fPIC or ebp with -fno-omit-frame-pointer.
      
      the new mechanism preserves the properties of avoiding spills and
      avoiding allocation of ebx/ebp in constraints, but does it inline,
      using some fairly simple register shuffling, and uses a field of the
      thread structure rather than global data for the vdso-provided syscall
      code address.
      
      for now, the external __syscall function is refactored not to use the
      old __vsyscall so it can be kept, but the intent is to remove it too.
3. 10 Apr 2019 (2 commits)
    • release 1.1.22 · e97681d6
      Committed by Rich Felker
    • in membarrier fallback, allow for possibility that sigaction fails · a01ff71f
      Committed by Rich Felker
      this is a workaround to avoid a crashing regression on qemu-user when
      dynamic TLS is installed at dlopen time. the sigaction syscall should
      not be able to fail, but it does fail for implementation-internal
signals under qemu user-level emulation if the host libc that qemu is
running under reserves the same signals for implementation-internal
      use, since qemu makes no provision to redirect/emulate them. after
      sigaction fails, the subsequent tkill would terminate the process
      abnormally as the default action.
      
      no provision to account for membarrier failing is made in the dynamic
      linker code that installs new TLS. at the formal level, the missing
      barrier in this case is incorrect, and perhaps we should fail the
      dlopen operation, but in practice all the archs we support (and
      probably all real-world archs except alpha, which isn't yet supported)
      should give the right behavior with no barrier at all as a consequence
      of consume-order properties.
      
      in the long term, this workaround should be supplemented or replaced
      by something better -- a different fallback approach to ensuring
      memory consistency, or dynamic allocation of implementation-internal
      signals. the latter is appealing in that it would allow cancellation
      to work under qemu-user too, and would even allow many levels of
      nested emulation.
4. 06 Apr 2019 (2 commits)
5. 04 Apr 2019 (1 commit)
    • fix unintended global symbols in atanl.c · 81868803
      Committed by Dan Gohman
      Mark atanhi, atanlo, and aT in atanl.c as static, as they're not
      intended to be part of the public API.
      
      These are already static in the LDBL_MANT_DIG == 64 code, so this
      patch is just making the LDBL_MANT_DIG == 113 code do the same thing.
6. 02 Apr 2019 (3 commits)
7. 01 Apr 2019 (1 commit)
    • implement priority inheritance mutexes · 54ca6779
      Committed by Rich Felker
      priority inheritance is a feature to mitigate priority inversion
situations, where execution of a medium-priority thread can
      unboundedly block forward progress of a high-priority thread when a
      lock it needs is held by a low-priority thread.
      
      the natural way to do priority inheritance would be with a simple
      futex flag to donate the calling thread's priority to a target thread
      while it waits on the futex. unfortunately, linux does not offer such
      an interface, but instead insists on implementing the whole locking
      protocol in kernelspace with special futex commands that exist solely
      for the purpose of doing PI mutexes. this would require the entire
      "trylock" logic to be duplicated in the timedlock code path for PI
      mutexes, since, once the previous lock holder releases the lock and
      the futex call returns, the lock is already held by the caller.
      obviously such code duplication is undesirable.
      
      instead, I've made the PI timedlock success path set the mutex lock
      count to -1, which can be thought of as "not yet complete", since a
      lock count of 0 is "locked, with no recursive references". a simple
      branch in a non-hot path of pthread_mutex_trylock can then see and act
      on this state, skipping past the code that would check and take the
      lock to the same code path that runs after the lock is obtained for a
      non-PI mutex.
      
      because we're forced to let the kernel perform the actual lock and
      unlock operations whenever the mutex is contended, we have to patch
      things up when it does the wrong thing:
      
      1. the lock operation is not aware of whether the mutex is
         error-checking, so it will always fail with EDEADLK rather than
         deadlocking.
      
      2. the lock operation is not aware of whether the mutex is robust, so
         it will successfully obtain mutexes in the owner-died state even if
         they're non-robust, whereas this operation should deadlock.
      
      3. the unlock operation always sets the lock value to zero, whereas
         for robust mutexes, we want to set it to a special value indicating
         that the mutex obtained after its owner died was unlocked without
         marking it consistent, so that future operations all fail with
         ENOTRECOVERABLE.
      
      the first of these is easy to solve, just by performing a futex wait
      on a dummy futex address to simulate deadlock or ETIMEDOUT as
      appropriate. but problems 2 and 3 interact in a nasty way. to solve
      problem 2, we need to back out the spurious success. but if waiters
      are present -- which we can't just ignore, because even if we don't
      want to wake them, the calling thread is incorrectly inheriting their
      priorities -- this requires using the kernel's unlock operation, which
      will zero the lock value, thereby losing the "owner died with lock
      held" state.
      
      to solve these problems, we overload the mutex's waiters field, which
      is unused for PI mutexes since they don't call the normal futex wait
      functions, as an indicator that the PI mutex is permanently
      non-lockable. originally I wanted to use the count field, but there is
      one code path that needs to access this flag without synchronization:
      trylock's CAS failure path needs to be able to decide whether to fail
with EBUSY or ENOTRECOVERABLE. the waiters field is already treated as
      a relaxed-order atomic in our memory model, so this works out nicely.
8. 30 Mar 2019 (1 commit)
    • clean up access to mutex type in pthread_mutex_trylock · 2142cafd
      Committed by Rich Felker
      there was no point in masking off the pshared bit when first loading
      the type, since every subsequent access involves a mask anyway. not
      masking it may avoid a subsequent load to check the pshared flag, and
      it's just simpler.
9. 22 Mar 2019 (3 commits)
10. 15 Mar 2019 (1 commit)
    • fix crash/out-of-bound read in sscanf · 8f12c4e1
      Committed by Rich Felker
commit d6c855ca caused this
"regression", though the behavior was undefined before; it overlooked
that f->shend=0 was being used as a sentinel for "EOF" status (actual
      EOF or hitting the scanf field width) of the stream helper (shgetc)
      functions.
      
      obviously the shgetc macro could be adjusted to check for a null
      pointer in addition to the != comparison, but it's the hot path, and
      adding extra code/branches to it begins to defeat the purpose.
      
      so instead of setting shend to a null pointer to block further reads,
      which no longer works, set it to the current position (rpos). this
      makes the shgetc macro work with no change, but it breaks shunget,
      which can no longer look at the value of shend to determine whether to
      back up. Szabolcs Nagy suggested a solution which I'm using here:
      setting shlim to a negative value is inexpensive to test at shunget
      time, and automatically re-trips the cnt>=shlim stop condition in
      __shgetc no matter what the original limit was.
11. 14 Mar 2019 (14 commits)