1. 21 Mar 2015 (2 commits)
    • suppress backref processing in ERE regcomp · 7c8c86f6
      Authored by Rich Felker
      one of the features of ERE is that it's actually a regular language
      and does not admit expressions which cannot be matched in linear time.
      introduction of \n backref support into regcomp's ERE parsing was
      unintentional.
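      A small probe, not taken from the commit, that feeds a backref-containing
      pattern to regcomp under both BRE and ERE; POSIX leaves \1 undefined in
      ERE, so the printed result is informative rather than normative:

        #include <regex.h>
        #include <stdio.h>

        static void try(const char *pat, int cflags, const char *label)
        {
            regex_t re;
            char msg[128];
            int r = regcomp(&re, pat, cflags);
            regerror(r, &re, msg, sizeof msg);
            printf("%-6s %s\n", label, r ? msg : "compiled");
            if (!r) regfree(&re);
        }

        int main(void)
        {
            try("\\(a\\)\\1", 0, "BRE:");          /* backrefs are a BRE feature */
            try("(a)\\1", REG_EXTENDED, "ERE:");   /* undefined by POSIX for ERE */
            return 0;
        }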
    • fix memory-corruption in regcomp with backslash followed by high byte · 39dfd584
      Authored by Rich Felker
      the regex parser handles the (undefined) case of an unexpected byte
      following a backslash as a literal. however, instead of correctly
      decoding a character, it was treating the byte value itself as a
      character. this was not only semantically unjustified, but turned out
      to be dangerous on archs where plain char is signed: bytes in the
      range 252-255 alias the internal codes -4 through -1 used for special
      types of literal nodes in the AST.
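      A tiny illustration, not the musl code, of the aliasing: on archs where
      plain char is signed, a high byte stored into char takes a small negative
      value, exactly the range used for the parser's internal sentinel codes:

        #include <stdio.h>

        int main(void)
        {
            unsigned char byte = 0xfc;  /* a byte in the 252-255 range after '\' */
            char c = byte;              /* implementation-defined; -4 where char is signed */
            printf("byte=0x%x, as plain char=%d\n", (unsigned)byte, c);
            /* if -4 is also an internal code for a special literal node, the two
               become indistinguishable once the raw byte is used as a character */
            return 0;
        }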
  2. 20 Mar 2015 (1 commit)
  3. 19 Mar 2015 (1 commit)
  4. 18 Mar 2015 (1 commit)
    • fix MINSIGSTKSZ values for archs with large signal contexts · d5a50453
      Authored by Rich Felker
      the previous values (2k min and 8k default) were too small for some
      archs. aarch64 reserves 4k in the signal context for future extensions
      and requires about 4.5k total, and powerpc reportedly uses over 2k.
      the new minimums are chosen to fit the saved context and also allow a
      minimal signal handler to run.
      
      since the default (SIGSTKSZ) has always been 6k larger than the
      minimum, it is also increased to maintain the 6k usable by the signal
      handler. this happens to be able to store one pathname buffer and
      should be sufficient for calling any function in libc that doesn't
      involve conversion between floating point and decimal representations.
      
      x86 (both 32-bit and 64-bit variants) may also need a larger minimum
      (around 2.5k) in the future to support avx-512, but the values on
      these archs are left alone for now pending further analysis.
      
      the value for PTHREAD_STACK_MIN is not increased to match MINSIGSTKSZ
      at this time. this is so as not to preclude applications from using
      extremely small thread stacks when they know they will not be handling
      signals. unfortunately cancellation and multi-threaded set*id() use
      signals as an implementation detail and therefore require a stack
      large enough for a signal context, so applications which use extremely
      small thread stacks may still need to avoid using these features.
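      For context, these constants are consumed through sigaltstack(); a minimal
      sketch of installing an alternate stack sized with the (now larger) default:

        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>

        static void handler(int sig)
        {
            (void)sig;  /* must fit, together with the saved context, in the stack below */
        }

        int main(void)
        {
            stack_t ss;
            ss.ss_sp = malloc(SIGSTKSZ);
            ss.ss_size = SIGSTKSZ;
            ss.ss_flags = 0;
            if (!ss.ss_sp || sigaltstack(&ss, 0) < 0) { perror("sigaltstack"); return 1; }

            struct sigaction sa;
            sa.sa_handler = handler;
            sa.sa_flags = SA_ONSTACK;   /* deliver on the alternate stack */
            sigemptyset(&sa.sa_mask);
            if (sigaction(SIGSEGV, &sa, 0) < 0) { perror("sigaction"); return 1; }
            return 0;
        }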
  5. 17 Mar 2015 (2 commits)
    • block all signals (even internal ones) in cancellation signal handler · 76fd0117
      Authored by Rich Felker
      previously the implementation-internal signal used for multithreaded
      set*id operations was left unblocked during handling of the
      cancellation signal. however, on some archs, signal contexts are huge
      (up to 5k) and the possibility of nested signal handlers drastically
      increases the minimum stack requirement. since the cancellation signal
      handler will do its job and return in bounded time before possibly
      passing execution to application code, there is no need to allow other
      signals to interrupt it.
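      The general shape of the change, sketched with a hypothetical handler
      rather than musl's internal one: filling sa_mask means no other signal,
      internal or not, can nest while the handler runs:

        #include <signal.h>

        static void cancel_handler(int sig, siginfo_t *si, void *ctx)
        {
            (void)sig; (void)si; (void)ctx;
            /* does its bounded-time work and returns */
        }

        static void install(int signo)
        {
            struct sigaction sa;
            sa.sa_sigaction = cancel_handler;
            sa.sa_flags = SA_SIGINFO | SA_RESTART;
            sigfillset(&sa.sa_mask);   /* block every signal for the handler's duration */
            sigaction(signo, &sa, 0);
        }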
    • update authors/contributors list · eceaf1d2
      Authored by Rich Felker
      these additions were made based on scanning commit authors since the
      last update, at the time of the 1.1.4 release.
  6. 16 Mar 2015 (3 commits)
    • avoid sending huge names as nscd passwd/group queries · 4b5ca13f
      Authored by Rich Felker
      overly long user/group names are potentially a DoS vector and a source
      of other problems such as partial writes by sendmsg, and are not useful
      in any case.
    • simplify nscd lookup code for alt passwd/group backends · 49d1e7f9
      Authored by Rich Felker
      previously, a sentinel value of (FILE *)-1 was used to inform the
      caller of __nscd_query that nscd is not in use. aside from being an
      ugly hack, this resulted in duplicate code paths for two logically
      equivalent cases: no nscd, and "not found" result from nscd.
      
      now, __nscd_query simply skips closing the socket and returns a valid
      FILE pointer when nscd is not in use, and produces a fake "not found"
      response header. the caller is then responsible for closing the socket
      just like it would do if it had gotten a real "not found" response.
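      A schematic of the pattern with made-up names and field layout (not the
      real nscd wire format): instead of a (FILE *)-1 sentinel, the helper hands
      every caller a readable stream, synthesizing a not-found header when no
      daemon is available, so both cases are parsed identically:

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        #define RESP_WORDS 4   /* hypothetical header: version, found, counts... */

        static FILE *query_or_fake(int have_nscd, int32_t *resp)
        {
            if (!have_nscd) {
                memset(resp, 0, RESP_WORDS * sizeof *resp);   /* found == 0 */
                return fmemopen(resp, RESP_WORDS * sizeof *resp, "rb");
            }
            /* real case (omitted): connect to the daemon, read the header
               into resp, and return the connected stream */
            return 0;
        }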
    • add alternate backend support for getgrouplist · 2894a44b
      Authored by Josiah Worcester
      This completes the alternate backend support that was previously added
      to the getpw* and getgr* functions. Unlike those, though, it
      unconditionally queries nscd. Any groups from nscd that aren't in the
      /etc/group file are added to the returned list, and any that are
      present in the file are ignored. The purpose of this behavior is to
      provide a view of the group database consistent with what is observed
      by the getgr* functions. If group memberships reported by nscd were
      honored when the corresponding group already has a definition in the
      /etc/group file, the user's getgrouplist-based membership in the
      group would conflict with their non-membership in the reported
      gr_mem[] for the group.
      
      The changes made also make getgrouplist thread-safe and eliminate its
      clobbering of the global getgrent state.
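      A short usage example of the getgrouplist() interface this backs (error
      handling trimmed; "root" is just an example user); the retry-after-resize
      convention on a -1 return is the documented one:

        #define _GNU_SOURCE
        #include <grp.h>
        #include <pwd.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const char *user = "root";               /* example user */
            struct passwd *pw = getpwnam(user);
            if (!pw) return 1;

            int n = 8;
            gid_t *gids = malloc(n * sizeof *gids);
            if (getgrouplist(user, pw->pw_gid, gids, &n) == -1) {
                gids = realloc(gids, n * sizeof *gids);  /* n now holds the needed count */
                getgrouplist(user, pw->pw_gid, gids, &n);
            }
            for (int i = 0; i < n; i++)
                printf("%u\n", (unsigned)gids[i]);
            free(gids);
            return 0;
        }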
  7. 15 Mar 2015 (2 commits)
    • aarch64: fix typo in bits/ioctl.h · 962cbfbf
      Authored by Szabolcs Nagy
    • aarch64: add struct _aarch64_ctx to signal.h · 38bf2d7c
      Authored by Szabolcs Nagy
      The unwind code in libgcc uses this type for unwinding across signal
      handlers. On aarch64 the kernel may place a sequence of structs on the
      signal stack on top of the ucontext to provide additional information.
      The unwinder only needs the header, but all the types the kernel
      currently defines for this mechanism are added because they are part
      of the uapi.
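      The header is just a tagged-length record; roughly (following the kernel
      uapi, shown here for illustration):

        /* every record the kernel places after the ucontext starts with this
           header; magic identifies the record type, size covers the whole
           record, and a record with magic == 0 terminates the list */
        struct _aarch64_ctx {
            unsigned int magic;
            unsigned int size;
        };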
  8. 13 Mar 2015 (1 commit)
    • align x32 pthread type sizes to be common with 32-bit archs · 673cab5c
      Authored by Rich Felker
      previously, commit e7b9887e aligned
      the sizes with the glibc ABI. subsequent discussion during the merge
      of the aarch64 port reached a conclusion that we should reject larger
      arch-specific sizes, which have significant cost and no benefit, and
      stick with the existing common 32-bit sizes for all 32-bit/ILP32 archs
      and the x86_64 sizes for 64-bit archs.
      
      one peculiarity of this change is that x32 pthread_attr_t is now
      larger in musl than in the glibc x32 ABI, making it unsafe to call
      pthread_attr_init from x32 code that was compiled against glibc. with
      all the ABI issues of x32, it's not clear that ABI compatibility will
      ever work, but if it's needed, pthread_attr_init and related functions
      could be modified not to write to the last slot of the object.
      
      this is not a regression versus previous releases, since on previous
      releases the x32 pthread type sizes were all severely oversized
      already (due to incorrectly using the x86_64 LP64 definitions).
      moreover, x32 is still considered experimental and not ABI-stable.
  9. 12 Mar 2015 (4 commits)
    • add aarch64 port · 01ef3dd9
      Authored by Szabolcs Nagy
      This adds complete aarch64 target support including bigendian subarch.
      
      Some of the long double math functions are known to be broken; otherwise
      the interfaces should be fully functional, but at this point consider
      this port experimental.
      
      Initial work on this port was done by Sireesh Tripurari and Kevin Bortis.
    • math: add dummy implementations of 128 bit long double functions · f4e4632a
      Authored by Szabolcs Nagy
      This is in preparation for the aarch64 port, solely so that the long
      double math symbols are available on ld128 platforms. The implementations
      should be fixed up later once we have proper tests for these functions.
      
      Added bigendian handling for ld128 bit manipulations too.
    • math: add ld128 exp2l based on the freebsd implementation · 53cfe0c6
      Authored by Szabolcs Nagy
      Changed the special case handling and bit manipulation to better
      match the double version.
    • copy the dtv pointer to the end of the pthread struct for TLS_ABOVE_TP archs · 204a69d2
      Authored by Szabolcs Nagy
      There are two main abi variants for thread local storage layout:
      
       (1) TLS is above the thread pointer at a fixed offset and the pthread
       struct is below that. So the end of the struct is at known offset.
      
       (2) the thread pointer points to the pthread struct and TLS starts
       below it. So the start of the struct is at known (zero) offset.
      
      Assembly code for the dynamic TLSDESC callback needs to access the
      dynamic thread vector (dtv) pointer which is currently at the front
      of the pthread struct. So in case of (1) the asm code needs to hard
      code the offset from the end of the struct which can easily break if
      the struct changes.
      
      This commit adds a copy of the dtv at the end of the struct. New members
      must not be added after dtv_copy, only before it. The size of the struct
      is increased a bit, but there is opportunity for size optimizations.
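      A schematic of the layout (member names are illustrative, not musl's
      actual struct pthread): keeping a copy as the last member gives the
      TLS_ABOVE_TP asm a stable, small offset from the end of the struct:

        /* illustrative only */
        struct thread_desc {
            void **dtv;        /* canonical dtv pointer, at the front (known
                                  offset in layout (2), where tp points at
                                  the struct) */
            /* ... other members may be added or removed over time ... */
            void **dtv_copy;   /* kept as the LAST member: in layout (1) the
                                  end of the struct is at a known offset from
                                  tp, so the TLSDESC asm can load this without
                                  depending on whatever sits in between */
        };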
  10. 08 Mar 2015 (2 commits)
  11. 07 Mar 2015 (1 commit)
    • fix over-alignment of TLS, insufficient builtin TLS on 64-bit archs · bd67959f
      Authored by Rich Felker
      a conservative estimate of 4*sizeof(size_t) was used as the minimum
      alignment for thread-local storage, despite the only requirements
      being alignment suitable for struct pthread and void* (which struct
      pthread already contains). additional alignment required by the
      application or libraries is encoded in their headers and is already
      applied.
      
      over-alignment prevented the builtin_tls array from ever being used in
      dynamic-linked programs on 64-bit archs, thereby requiring allocation
      at startup even in programs with no TLS of their own.
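      Concretely, a hedged illustration (the struct is a stand-in, not musl's):
      on a 64-bit arch the old floor of 4*sizeof(size_t) is 32 bytes, while the
      stated requirement is only the alignment of the control structures,
      typically 8:

        #include <stddef.h>
        #include <stdio.h>

        struct pthread_like { void *dtv; void *self; long tid; };  /* stand-in */

        int main(void)
        {
            size_t old_min = 4 * sizeof(size_t);
            size_t needed  = _Alignof(struct pthread_like) > _Alignof(void *)
                           ? _Alignof(struct pthread_like) : _Alignof(void *);
            printf("old minimum: %zu, actually needed: %zu\n", old_min, needed);
            return 0;
        }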
  12. 05 Mar 2015 (7 commits)
  13. 04 Mar 2015 (4 commits)
    • remove useless check of bin match in malloc · 064898cf
      Authored by Rich Felker
      this re-check idiom seems to have been copied from the alloc_fwd and
      alloc_rev functions, which guess a bin based on non-synchronized
      memory access to adjacent chunk headers then need to confirm, after
      locking the bin, that the chunk is actually in the bin they locked.
      
      the check being removed, however, was being performed on a chunk
      obtained from the already-locked bin. there is no race to account for
      here; the check could only fail in the event of corrupt free lists,
      and even then it would not catch them but simply continue running.
      
      since the bin_index function is mildly expensive, it seems preferable
      to remove the check rather than trying to convert it into a useful
      consistency check. casual testing shows a 1-5% reduction in run time.
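      The distinction, sketched with hypothetical helpers (declarations only,
      not the allocator's real interfaces): the re-check belongs with the
      guess-then-lock pattern, not with chunks taken from a bin that is
      already locked:

        #include <stddef.h>

        /* hypothetical allocator internals, declarations only */
        struct chunk;
        size_t chunk_size(const struct chunk *);   /* may read unsynchronized memory */
        int    bin_index(size_t);
        void   lock_bin(int);
        void   unlock_bin(int);

        /* guess-then-verify: the bin was chosen from a racy read, so after
           locking it the guess must be confirmed (the alloc_fwd/alloc_rev shape) */
        int take_adjacent(struct chunk *c)
        {
            int i = bin_index(chunk_size(c));
            lock_bin(i);
            if (bin_index(chunk_size(c)) != i) {   /* chunk moved: caller retries */
                unlock_bin(i);
                return 0;
            }
            /* ... unlink c from bin i ... */
            unlock_bin(i);
            return 1;
        }

        /* by contrast, a chunk popped from an already-locked bin cannot
           disagree with that bin's index, so re-checking there was overhead */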
    • eliminate atomics in syslog setlogmask function · 6de071a0
      Authored by Rich Felker
    • fix init race that could lead to deadlock in malloc init code · 7a81fe37
      Authored by Rich Felker
      the malloc init code provided its own version of pthread_once type
      logic, including the exact same bug that was fixed in pthread_once in
      commit 0d0c2f40.
      
      since this code is called adjacent to expand_heap, which takes a lock,
      there is no reason to have pthread_once-type initialization. simply
      moving the init code into the interval where expand_heap already holds
      its lock on the brk achieves the same result with much less
      synchronization logic, and allows the buggy code to be eliminated
      rather than just fixed.
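      The shape of the fix, with hypothetical names: the one-time setup simply
      runs while the lock that expand_heap already takes is held, so no
      separate once construct is needed:

        #include <pthread.h>
        #include <stddef.h>

        static pthread_mutex_t brk_lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for the brk lock */
        static int heap_initialized;

        static void *expand_heap_sketch(size_t n)
        {
            void *p = 0;
            pthread_mutex_lock(&brk_lock);
            if (!heap_initialized) {
                /* one-time init, serialized for free by the lock we already hold */
                heap_initialized = 1;
            }
            /* ... grow the brk by n and set p to the new space (omitted) ... */
            (void)n;
            pthread_mutex_unlock(&brk_lock);
            return p;
        }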
    • make all objects used with atomic operations volatile · 56fbaa3b
      Authored by Rich Felker
      the memory model we use internally for atomics permits plain loads of
      values which may be subject to concurrent modification without
      requiring that a special load function be used. since a compiler is
      free to make transformations that alter the number of loads or the way
      in which loads are performed, the compiler is theoretically free to
      break this usage. the most obvious concern is with atomic cas
      constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be
      transformed to a_cas(p,*p,f(*p)); where the latter is intended to show
      multiple loads of *p whose resulting values might fail to be equal;
      this would break the atomicity of the whole operation. but even more
      fundamental breakage is possible.
      
      with the changes being made now, objects that may be modified by
      atomics are modeled as volatile, and the atomic operations performed
      on them by other threads are modeled as asynchronous stores by
      hardware which happens to be acting on the request of another thread.
      such modeling of course does not itself address memory synchronization
      between cores/cpus, but that aspect was already handled. this all
      seems less than ideal, but it's the best we can do without mandating a
      C11 compiler and using the C11 model for atomics.
      
      in the case of pthread_once_t, the ABI type of the underlying object
      is not volatile-qualified. so we are assuming that accessing the
      object through a volatile-qualified lvalue via casts yields volatile
      access semantics. the language of the C standard is somewhat unclear
      on this matter, but this is an assumption the linux kernel also makes,
      and seems to be the correct interpretation of the standard.
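      A reduced example of the concern and the mitigation (a_cas shown here via
      a compiler builtin as a stand-in for musl's per-arch asm):

        #include <stdio.h>

        /* stand-in for musl's a_cas: returns the old value of *p */
        static int a_cas(volatile int *p, int t, int s)
        {
            return __sync_val_compare_and_swap(p, t, s);
        }

        static volatile int counter;   /* volatile: the load below is a single,
                                          non-duplicable access to the object */

        static void atomic_increment(void)
        {
            int old;
            do old = counter;
            while (a_cas(&counter, old, old + 1) != old);
            /* without the volatile qualification, a compiler could in principle
               rewrite tmp=*p; a_cas(p,tmp,f(tmp)) as a_cas(p,*p,f(*p)), with two
               loads of *p that need not observe the same value */
        }

        int main(void)
        {
            atomic_increment();
            printf("%d\n", counter);
            return 0;
        }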
  14. 03 Mar 2015 (4 commits)
  15. 28 Feb 2015 (1 commit)
    • fix failure of internal futex __timedwait to report ECANCELED · 76ca7a54
      Authored by Rich Felker
      as part of abstracting the futex wait, this function suppresses all
      futex error values which callers should not see using a whitelist
      approach. when the masked cancellation mode was added, the new
      ECANCELED error was not whitelisted. this omission caused the new
      pthread_cond_wait code using masked cancellation to exhibit a spurious
      wake (rather than acting on cancellation) when the request arrived
      after blocking on the cond var.
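      The whitelist idea in schematic form (the function name is hypothetical;
      the real code wraps the futex syscall's return value):

        #include <errno.h>

        /* propagate only the errors callers are prepared to handle;
           everything else is reported as 0 (treated as a spurious wake) */
        static int filter_futex_error(int err)
        {
            switch (err) {
            case ETIMEDOUT:
            case EINTR:
            case ECANCELED:   /* the value this commit adds to the whitelist */
                return err;
            default:
                return 0;
            }
        }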
  16. 26 Feb 2015 (3 commits)
    • overhaul optimized x86_64 memset asm · e346ff86
      Authored by Rich Felker
      on most cpu models, "rep stosq" has high overhead that makes it
      undesirable for small memset sizes. the new code extends the
      minimal-branch fast path for short memsets from size 15 up to size
      126, and shrink-wraps this code path. in addition, "rep stosq" is
      sensitive to misalignment. the cost varies with size and with cpu
      model, but it has been observed performing 1.5 times slower when the
      destination address is not aligned mod 16. the new code thus ensures
      alignment mod 16, but also preserves any existing additional
      alignment, in case there are cpu models where it is beneficial.
      
      this version is based in part on changes proposed by Denys Vlasenko.
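      The alignment trick in C terms (a sketch of the pointer arithmetic only,
      with libc memset standing in for the actual stores): write an unaligned
      head first, then round the pointer up to the next 16-byte boundary for
      the bulk stores; rounding up never discards any stricter alignment the
      destination already had:

        #include <stdint.h>
        #include <stddef.h>
        #include <string.h>

        static void *memset_sketch(void *dest, int c, size_t n)
        {
            unsigned char *s = dest;
            if (n >= 32) {
                memset(s, c, 16);                   /* cover the possibly unaligned head */
                size_t head = -(uintptr_t)s & 15;   /* 0..15 bytes up to the next mod-16 */
                /* bulk stores start aligned mod 16, where "rep stosq" is fast;
                   the head and bulk regions overlap, so all n bytes are covered */
                memset(s + head, c, n - head);
            } else {
                memset(s, c, n);
            }
            return dest;
        }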
    • overhaul optimized i386 memset asm · 69858fa9
      Authored by Rich Felker
      on most cpu models, "rep stosl" has high overhead that makes it
      undesirable for small memset sizes. the new code extends the
      minimal-branch fast path for short memsets from size 15 up to size 62,
      and shrink-wraps this code path. in addition, "rep stosl" is very
      sensitive to misalignment. the cost varies with size and with cpu
      model, but it has been observed performing 1.5 to 4 times slower when
      the destination address is not aligned mod 16. the new code thus
      ensures alignment mod 16, but also preserves any existing additional
      alignment, in case there are cpu models where it is beneficial.
      
      this version is based in part on changes to the x86_64 memset asm
      proposed by Denys Vlasenko.
    • getloadavg: use sysinfo() instead of /proc/loadavg · 20cbd607
      Authored by Alexander Monakov
      Based on a patch by Szabolcs Nagy.
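      Roughly, the sysinfo()-based approach (the kernel reports load averages
      as fixed-point values with 16 fractional bits):

        #include <sys/sysinfo.h>
        #include <stdio.h>

        int main(void)
        {
            struct sysinfo si;
            if (sysinfo(&si) < 0) return 1;
            for (int i = 0; i < 3; i++)
                printf("%.2f ", si.loads[i] / 65536.0);   /* 1 << SI_LOAD_SHIFT */
            putchar('\n');
            return 0;
        }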
  17. 24 Feb 2015 (1 commit)
    • fix possible isatty false positives and unwanted device state changes · 2de85a98
      Authored by Rich Felker
      the equivalent checks for newly opened stdio output streams, used to
      determine buffering mode, are also fixed.
      
      on most archs, the TCGETS ioctl command shares a value with
      SNDCTL_TMR_TIMEBASE, part of the OSS sound API which was apparently
      used with certain MIDI and timer devices. for file descriptors
      referring to such a device, TCGETS will not fail with ENOTTY as
      expected; it may produce a different error, or may succeed, and if it
      succeeds it changes the mode of the device. while it's unlikely that
      such devices are in use, this is in principle very harmful behavior
      for an operation which is supposed to do nothing but query whether the
      fd refers to a tty.
      
      TIOCGWINSZ, used to query logical window size for a terminal, was
      chosen as an alternate ioctl to perform the isatty check. it does not
      share a value with any other ioctl commands, and it succeeds on any
      tty device.
      
      this change also cleans up strace output to be less ugly and
      misleading.
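      The resulting check is essentially the following sketch (musl's real
      isatty additionally normalizes errno on failure):

        #include <sys/ioctl.h>

        /* nonzero if fd refers to a terminal */
        static int isatty_sketch(int fd)
        {
            struct winsize wsz;
            return ioctl(fd, TIOCGWINSZ, &wsz) == 0;
        }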