- 03 June 2018, 1 commit
-
-
Committed by Szabolcs Nagy
In TLS variant I the TLS is above TP (or above a fixed offset from TP) but on some targets there is a reserved gap above TP before TLS starts. This matters for the local-exec tls access model when the offsets of TLS variables from the TP are hard coded by the linker into the executable, so the libc must compute these offsets the same way as the linker. The tls offset of the main module has to be alignup(GAP_ABOVE_TP, main_tls_align). If there is no TLS in the main module then the gap can be ignored since musl does not use it and the tls access models of shared libraries are not affected. The previous setup only worked if (tls_align & -GAP_ABOVE_TP) == 0 (i.e. TLS did not require large alignment) because the gap was treated as a fixed offset from TP. Now the TP points at the end of the pthread struct (which is aligned) and there is a gap above it (which may also need alignment). The fix required changing TP_ADJ and __pthread_self on affected targets (aarch64, arm and sh) and in the tlsdesc asm the offset to access the dtv changed too.
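For illustration, a minimal sketch of the offset computation described above; alignup() is a hypothetical helper, and the two parameters stand in for the per-target GAP_ABOVE_TP value and the main module's TLS alignment (assumed to be a power of two):

    #include <stddef.h>

    /* hypothetical helper: round n up to a power-of-two alignment */
    static size_t alignup(size_t n, size_t align)
    {
        return (n + align - 1) & -align;
    }

    /* offset of the main module's TLS block from the thread pointer: the gap
     * is no longer treated as a fixed offset, it is rounded up to the main
     * TLS alignment so the libc's offsets match the linker's local-exec ones */
    static size_t main_tls_offset(size_t gap_above_tp, size_t main_tls_align)
    {
        return alignup(gap_above_tp, main_tls_align);
    }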
-
- 02 June 2018, 2 commits
-
-
Committed by Rich Felker
since this iconv implementation's output is stateless, it's necessary to know before writing anything to the output buffer whether the conversion of the current input character will fit. previously we used a hard-coded table of the output size needed for each supported output encoding, but failed to update the table when adding support for conversion to jis-based encodings and again when adding separate encoding identifiers for implicit-endianness utf-16/32 and ucs-2/4 variants, resulting in out-of-bound table reads and incorrect size checks. no buffer overflow was possible, but the affected characters could be converted incorrectly, and iconv could potentially produce an incorrect return value as a result. remove the hard-coded table, and instead perform the recursive iconv conversion to a temporary buffer, measuring the output size and transferring it to the actual output buffer only if the whole converted result fits.
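A hedged sketch of the measure-then-commit pattern described above (the helper name, scratch-buffer size, and error handling are illustrative, not musl's internal code):

    #include <errno.h>
    #include <iconv.h>
    #include <string.h>

    /* convert the current input into a scratch buffer first; copy the result
     * to the caller's buffer only if the whole converted output fits */
    static size_t emit_converted(iconv_t cd, char **in, size_t *inleft,
                                 char **out, size_t *outleft)
    {
        char tmp[64], *tp = tmp;          /* illustrative scratch size */
        size_t tleft = sizeof tmp;

        if (iconv(cd, in, inleft, &tp, &tleft) == (size_t)-1)
            return (size_t)-1;
        size_t n = sizeof tmp - tleft;    /* measured output size */
        if (n > *outleft) {
            errno = E2BIG;                /* output buffer too small */
            return (size_t)-1;
        }
        memcpy(*out, tmp, n);
        *out += n;
        *outleft -= n;
        return 0;
    }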
-
Committed by Rich Felker
this case is handled with a recursive call to iconv using a specially-constructed conversion descriptor. the constant 0 was used as the offset for utf-8, since utf-8 appears first in the charmaps table, but the offset used needs to point into the charmap entry, past the name/aliases at the beginning, to the byte identifying the encoding. as a result of this error, junk was produced. instead, call find_charmap so we don't have to hard-code a nontrivial offset. with this change, the code has been tested and found to work in the case of converting the affected hkscs characters to utf-8.
-
- 10 May 2018, 2 commits
-
-
Committed by Will Dietz
maintainer's notes: commit 95c6044e split UTF-32 and UTF-32BE but neglected to add a case for the former as a destination encoding, resulting in it wrongly being handled by the default case. the intent was that the value of the macro be chosen to encode "big endian" in the low bits, so that no code would be needed, but this was botched; instead, handle it the way UCS2 is handled.
-
Committed by Will Dietz
maintainer's notes: commit a223dbd2 added the reverse conversions to JIS-based encodings, but omitted the check for remaining buffer space in the case where the next character to be written was single-byte, allowing conversion to continue past the end of the destination buffer.
-
- 09 May 2018, 2 commits
-
-
Committed by Rich Felker
the wrapper start function that performs scheduling operations is unreachable if pthread_attr_setinheritsched is never called, so move it there rather than leaving it in the pthread_create source file, saving some code size for static-linked programs.
-
Committed by Rich Felker
eliminate the awkward startlock mechanism and corresponding fields of the pthread structure that were only used at startup. instead of having pthread_create perform the scheduling operations and having the new thread wait for them to be completed, start the new thread with a wrapper start function that performs its own scheduling, sending the result code back via a futex. this way the new thread can use storage from the calling thread's stack rather than permanent fields in the pthread structure.
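A rough sketch of the mechanism (hypothetical names, written against the public pthread and futex interfaces rather than musl's internals): the argument block lives on the creating thread's stack, and the new thread applies its own scheduling attributes and posts the result through a futex.

    #include <linux/futex.h>
    #include <pthread.h>
    #include <sched.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    struct start_sched_args {
        void *(*start)(void *);
        void *arg;
        int policy;
        struct sched_param param;
        volatile int futex;            /* 0 = pending, >0 = ok, <0 = -error */
    };

    /* wrapper start function, only used when explicit scheduling was requested */
    static void *start_with_sched(void *p)
    {
        struct start_sched_args *a = p;
        int err = pthread_setschedparam(pthread_self(), a->policy, &a->param);
        a->futex = err ? -err : 1;                       /* publish the result */
        syscall(SYS_futex, &a->futex, FUTEX_WAKE, 1);    /* wake the creator */
        if (err) return 0;                               /* creator reports failure */
        return a->start(a->arg);
    }

    /* creator side: wait until the new thread has posted its result */
    static int wait_for_sched_result(struct start_sched_args *a)
    {
        while (a->futex == 0)
            syscall(SYS_futex, &a->futex, FUTEX_WAIT, 0, 0);
        return a->futex < 0 ? -a->futex : 0;
    }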
-
- 08 May 2018, 1 commit
-
-
Committed by Rich Felker
over time the pthread structure has accumulated a lot of cruft taking up size. this commit removes unused fields and packs booleans and other small data more efficiently. changes which would also require changing code are not included at this time. non-volatile booleans are packed as unsigned char bitfield members. the canceldisable and cancelasync fields need volatile qualification due to how they're accessed from the cancellation signal handler and cancellable syscalls called from signal handlers. since volatile bitfield semantics are not clearly defined, discrete char objects are used instead. the pid field is completely removed; it has been unused since commit 83dc6eb0. the tid field's type is changed to int because its use is as a value in futexes, which are defined as plain int. it has no conceptual relationship to pid_t. also, its position is not ABI. startlock is reduced to a length-1 array. the second element was presumably intended as a waiter count, but it was never used and made no sense, since there is at most one waiter.
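An illustrative sketch of the packing rules mentioned above (member names are examples, not the actual pthread layout):

    struct pthread_flags_sketch {
        /* non-volatile booleans: packed as unsigned char bitfields */
        unsigned char tsd_used : 1;
        unsigned char unblock_cancel : 1;
        unsigned char dlerror_flag : 1;

        /* accessed from the cancellation signal handler, so they stay
         * volatile; discrete chars are used because volatile bitfield
         * semantics are not clearly defined */
        volatile signed char canceldisable;
        volatile signed char cancelasync;

        /* used directly as a futex value, so plain int rather than pid_t */
        int tid;
    };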
-
- 06 May 2018, 1 commit
-
-
Committed by Rich Felker
previously, some accesses to the detached state (from pthread_join and pthread_getattr_np) were unsynchronized; they were harmless in programs with well-defined behavior, but ugly. other accesses (in pthread_exit and pthread_detach) were synchronized by a poorly named "exitlock", with an ad-hoc trylock operation on it open-coded in pthread_detach, whose only purpose was establishing protocol for which thread is responsible for deallocation of detached-thread resources. instead, use an atomic detach_state and unify it with the futex used to wait for thread exit. this eliminates 2 members from the pthread structure, gets rid of the hackish lock usage, and makes rigorous the trap added in commit 80bf5952 for catching attempts to join detached threads. it should also make attempts to detach an already-detached thread reliably trap.
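A hedged sketch of the combined detach-state/exit-futex idea, using C11 atomics and the raw futex syscall in place of musl's internal helpers; the state names and helper functions are illustrative only.

    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    enum { DT_EXITED = 0, DT_EXITING, DT_JOINABLE, DT_DETACHED };

    struct thread_sketch { _Atomic int detach_state; };

    /* join: the same word is the futex waited on until the thread has exited */
    static int join_sketch(struct thread_sketch *t)
    {
        int state;
        while ((state = atomic_load(&t->detach_state)) != DT_EXITED) {
            if (state == DT_DETACHED)
                __builtin_trap();    /* joining a detached thread is a hard error */
            syscall(SYS_futex, (int *)&t->detach_state, FUTEX_WAIT, state, 0);
        }
        return 0;
    }

    /* detach: JOINABLE -> DETACHED; if the thread already began exiting, the
     * caller joins it instead so its resources still get freed, and if it was
     * already detached the join path traps */
    static int detach_sketch(struct thread_sketch *t)
    {
        int expected = DT_JOINABLE;
        if (!atomic_compare_exchange_strong(&t->detach_state, &expected, DT_DETACHED))
            return join_sketch(t);
        return 0;
    }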
-
- 05 May 2018, 2 commits
-
-
Committed by Rich Felker
if the last thread exited via pthread_exit, the logic that marked it dead did not account for the possibility of it targeting itself via atexit handlers. for example, an atexit handler calling pthread_kill(pthread_self(), SIGKILL) would return success (previously, ESRCH) rather than causing termination via the signal. move the release of killlock after the determination is made whether the exiting thread is the last thread. in the case where it's not, move the release all the way to the end of the function. this way we can clear the tid rather than spending storage on a dedicated dead-flag. clearing the tid is also preferable in that it hardens against inadvertent use of the value after the thread has terminated but before it is joined.
-
Committed by Rich Felker
posix documents in the rationale and future directions for pthread_kill that, since the lifetime of the thread id for a joinable thread lasts until it is joined, ESRCH is not a correct error for pthread_kill to produce when the target thread has exited but not yet been joined, and that conforming applications cannot attempt to detect this state. future versions of the standard may explicitly require that ESRCH not be returned for this case.
-
- 03 May 2018, 1 commit
-
-
Committed by Rich Felker
the tid field in the pthread structure is not volatile, and really shouldn't be, so as not to limit the compiler's ability to reorder, merge, or split loads in code paths that may be relevant to performance (like controlling lock ownership). however, use of objects which are not volatile or atomic with futex wait is inherently broken, since the compiler is free to transform a single load into multiple loads, thereby using a different value for the controlling expression of the loop and the value passed to the futex syscall, leading the syscall to block instead of returning. reportedly glibc's pthread_join was actually affected by an equivalent issue on s390. add a separate, dedicated join_futex object for pthread_join to use.
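A small sketch of the correct pattern (the struct and field name here are hypothetical): the controlling value is loaded exactly once per iteration, so the loop condition and the expected value passed to FUTEX_WAIT cannot diverge.

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    struct exiting_thread_sketch { volatile int join_futex; };

    static void wait_for_exit(struct exiting_thread_sketch *t)
    {
        int val;
        /* one load per iteration; the same snapshot is tested and passed on */
        while ((val = t->join_futex) != 0)
            syscall(SYS_futex, &t->join_futex, FUTEX_WAIT, val, 0);
    }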
-
- 02 May 2018, 3 commits
-
-
Committed by Rich Felker
the static const zero set ended up getting put in bss instead of rodata, wasting writable memory, and the call to memcmp was size-inefficient. generally for nonstandard extension functions we try to avoid poking at any internals directly, but the way the zero set was set up was arguably already doing so.
-
Committed by Rich Felker
to support the GNU extension of allocating a buffer for getcwd's result when a null pointer is passed without incurring a link dependency on free, we use a PATH_MAX-sized buffer on the stack and only duplicate it to allocated storage after the operation succeeds. unfortunately this imposed excessive stack usage on all callers, including those not making use of the GNU extension. instead, use a VLA to make stack allocation conditional.
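The conditional stack allocation looks roughly like the sketch below (simplified; the checks and error handling in the real function differ):

    #include <limits.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    char *getcwd_sketch(char *buf, size_t size)
    {
        /* size-1 VLA when the caller supplied a buffer: the PATH_MAX stack
         * allocation only happens for the allocate-on-NULL GNU extension */
        char tmp[buf ? 1 : PATH_MAX];
        if (!buf) {
            buf = tmp;
            size = sizeof tmp;
        }
        if (syscall(SYS_getcwd, buf, size) < 0)
            return 0;
        return buf == tmp ? strdup(buf) : buf;
    }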
-
Committed by Rich Felker
in thumb mode, r7 is the ABI frame pointer register, and unless frame pointer is disabled, gcc insists on treating it as a fixed register, refusing to spill it to satisfy constraints. unfortunately, r7 is also used in the syscall ABI for passing the syscall number. up til now we just treated this as a requirement to disable frame pointer when generating code as thumb, but it turns out gcc forcibly enables frame pointer, and the fixed register constraint that goes with it, for functions which contain VLAs. this produces an unacceptable arch-specific constraint that (non-arm-specific) source files making syscalls cannot use VLAs. as a workaround, avoid r7 register constraints when producing thumb code and instead save/restore r7 in a temp register as part of the asm block. at some point we may want/need to support armv6-m/thumb1, so the asm has been tweaked to be thumb1-compatible while also near-optimal for thumb2: it allows the temp and/or syscall number to be in high registers (necessary since r0-r5 may all be used for syscall args) and in thumb2 mode allows the syscall number to be an 8-bit immediate.
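A hedged single-argument sketch of the save/restore idea (the real macro covers all argument counts and tighter thumb1/thumb2 constraints):

    static inline long thumb_syscall1(long nr, long arg)
    {
        register long r0 __asm__("r0") = arg;   /* first argument / return value */
        long tmp;
        __asm__ __volatile__(
            "mov %1,r7\n\t"      /* save r7, the thumb frame pointer */
            "mov r7,%2\n\t"      /* load the syscall number */
            "svc 0\n\t"
            "mov r7,%1"          /* restore r7 before returning to C */
            : "+r"(r0), "=&r"(tmp)
            : "r"(nr)
            : "memory");
        return r0;
    }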
-
- 27 April 2018, 1 commit
-
-
Committed by Rich Felker
for getopt_long, partial (prefix) matches of long options always begin with "--" and thus can never be ambiguous with a short option. for getopt_long_only, though, a single-character option can match both a short option and a prefix of a long option. in this case, we wrongly interpreted it as a prefix for the long option. introduce a new pass, only in long-only mode, to check the prefix match against short options before accepting it. the only reason there's a slightly nontrivial loop being introduced rather than strchr is that our getopt already supports multibyte short options, and getopt_long_only should handle them consistently. a temp buffer and strstr could have been used, but the code to set it up would be just as large as what's introduced here and it would unnecessarily pull in relatively large code for strstr.
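The new pass amounts to something like the following sketch (a hypothetical standalone helper; the real loop lives inside the option parsing and shares its state):

    #include <stdlib.h>
    #include <wchar.h>

    /* return nonzero if the single (possibly multibyte) character at the
     * start of arg is one of the short options in optstring */
    static int matches_short_option(const char *arg, const char *optstring)
    {
        wchar_t want, have;
        int n = mbtowc(&want, arg, MB_CUR_MAX);
        if (n < 1) return 0;
        for (size_t i = 0; optstring[i]; ) {
            int k = mbtowc(&have, optstring + i, MB_CUR_MAX);
            if (k < 1) { i++; continue; }   /* skip ':' flags / invalid bytes */
            if (have == want) return 1;
            i += k;
        }
        return 0;
    }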
-
- 20 April 2018, 10 commits
-
-
Committed by Rich Felker
commit 618b18c7 removed the previous detection and hardening since it was incorrect. commit 72141795 already handled all that remained for hardening the static-linked case. in the dynamic-linked case, have the dynamic linker check whether malloc was replaced and make that information available. with these changes, the properties documented in commit c9f415d7 are restored: if calloc is not provided, it will behave as malloc+memset, and any of the memalign-family functions not provided will fail with ENOMEM.
-
Committed by Rich Felker
this change serves multiple purposes: 1. it ensures that static linking of memalign-family functions will pull in the system malloc implementation, thereby causing link errors if an attempt is made to link the system memalign functions with a replacement malloc (incomplete allocator replacement). 2. it eliminates calls to free that are unpaired with allocations, which are confusing when setting breakpoints or tracing execution. as a bonus, making __bin_chunk external may discourage aggressive and unnecessary inlining of it.
-
Committed by Rich Felker
the generated code should be mostly unchanged, except for explicit use of C_INUSE in place of copying the low bits from existing chunk headers/footers. these changes also remove mild UB due to dubious arithmetic on pointers into imaginary size_t[] arrays.
-
Committed by Rich Felker
-
Committed by Rich Felker
commit c9f415d7 included checks to make calloc fall back to memset if used with a replaced malloc that didn't also replace calloc, and to make the memalign family fail if free has been replaced. however, the checks gave false positives for replacement whenever malloc or free resolved to a PLT entry in the main program. for now, disable the checks so as not to leave libc in a broken state. this means that the properties documented in the above commit are no longer satisfied; failure to replace calloc and the memalign family along with malloc is unsafe if they are ever called. the calloc checks were correct but useless for static linking. in both cases (simple or full malloc), calloc and malloc are in a source file together, so replacement of one but not the other would give linking errors. the memalign-family check was useful for static linking, but broken for dynamic as described above, and can be replaced with a better link-time check.
-
Committed by Will Dietz
-
Committed by Andre McCurdy
ARMv6 cores with support for Thumb2 can take advantage of the "ldrex" and "strex" based implementations of a_ll and a_sc.
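The ll/sc primitives for these cores look roughly like this (a sketch following the pattern described; the exact constraints in the real header may differ):

    static inline int a_ll(volatile int *p)
    {
        int v;
        __asm__ __volatile__ ("ldrex %0, %1" : "=r"(v) : "Q"(*p));
        return v;
    }

    static inline int a_sc(volatile int *p, int v)
    {
        int r;
        /* strex writes 0 on success, 1 on failure; invert for a boolean result */
        __asm__ __volatile__ ("strex %0,%2,%1" : "=&r"(r), "=Q"(*p) : "r"(v) : "memory");
        return !r;
    }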
-
Committed by Andre McCurdy
__ARM_ARCH_6ZK__ is a gcc specific historical typo which may not be defined by other compilers. https://gcc.gnu.org/ml/gcc-patches/2015-07/msg02237.html To avoid unexpected results when building for ARMv6KZ with clang, the correct form of the macro (ie 6KZ) needs to be tested. The incorrect form of the macro (ie 6ZK) still needs to be tested for compatibility with pre-2015 versions of gcc.
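A minimal illustration of what the feature test ends up looking like:

    /* accept both the correct form (6KZ) and the historical gcc typo (6ZK) */
    #if __ARM_ARCH_6KZ__ || __ARM_ARCH_6ZK__
    /* ARMv6KZ-specific code path */
    #endif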
-
Committed by Andre McCurdy
Provide an ARM specific a_ctz_32 helper function for architecture versions for which it can be implemented efficiently via the "rbit" instruction (ie all Thumb-2 capable versions of ARM v6 and above).
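A sketch of the rbit-based helper, paired with a clz-based a_clz_32 that these cores also have: counting trailing zeros of x is the same as counting leading zeros of its bit-reverse.

    #include <stdint.h>

    static inline int a_clz_32(uint32_t x)
    {
        __asm__ ("clz %0, %1" : "=r"(x) : "r"(x));
        return x;
    }

    static inline int a_ctz_32(uint32_t x)
    {
        uint32_t xr;
        __asm__ ("rbit %0, %1" : "=r"(xr) : "r"(x));
        return a_clz_32(xr);
    }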
-
Committed by Andre McCurdy
Update atomic.h to provide a_ctz_l in all cases (atomic_arch.h should now only provide a_ctz_32 and/or a_ctz_64). The generic version of a_ctz_32 now takes advantage of a_clz_32 if available and the generic a_ctz_64 now makes use of a_ctz_32.
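Roughly, the generic fallbacks follow this shape (a sketch that assumes an arch-provided a_clz_32; the real generic path also has a table-based fallback for archs with neither helper):

    #include <stdint.h>

    static inline int a_ctz_32(uint32_t x)
    {
        /* isolate the lowest set bit and count leading zeros of it */
        return 31 - a_clz_32(x & -x);
    }

    static inline int a_ctz_64(uint64_t x)
    {
        uint32_t y = x;
        if (!y)
            return 32 + a_ctz_32((uint32_t)(x >> 32));  /* low word empty */
        return a_ctz_32(y);
    }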
-
- 19 April 2018, 8 commits
-
-
Committed by Marc André Tanner
-
Committed by Rich Felker
-
Committed by Rich Felker
bring these functions up to date with the current idioms we use/prefer in fmemopen and fopencookie.
-
Committed by Rich Felker
rather than manually performing pointer arithmetic to carve multiple objects out of one allocation, use a containing struct that encompasses them all.
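The idiom is essentially the sketch below (member types are stand-ins; inside the libc the first member is the real FILE object):

    #include <stdio.h>
    #include <stdlib.h>

    struct memstream_alloc {
        struct { int placeholder; } f;                       /* stand-in for FILE */
        struct { char **bufp; size_t *sizep; size_t pos; } cookie;
        unsigned char buf[BUFSIZ];
    };

    static struct memstream_alloc *alloc_memstream(void)
    {
        /* one allocation; the offsets and alignment of f, cookie and buf are
         * handled by the compiler instead of manual pointer arithmetic */
        return malloc(sizeof(struct memstream_alloc));
    }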
-
Committed by Rich Felker
assign entire struct rather than member-at-a-time. don't repeat buffer sizes; always use sizeof to ensure consistency.
-
Committed by Rich Felker
replacement is subject to conditions on the replacement functions. they may only call functions which are async-signal-safe, as specified either by POSIX or as an implementation-defined extension. if any allocator functions are replaced, at least malloc, realloc, and free must be provided. if calloc is not provided, it will behave as malloc+memset. any of the memalign-family functions not provided will fail with ENOMEM. in order to implement the above properties, calloc and __memalign check that they are using their own malloc or free, respectively. choice to check malloc or free is based on considerations of supporting __simple_malloc. in order to make this work, calloc is split into separate versions for __simple_malloc and full malloc; commit ba819787 already did most of the split anyway, and completing it saves an extra call frame. previously, use of -Bsymbolic-functions made dynamic interposition impossible. now, we are using an explicit dynamic-list, so add allocator functions to the list. most are not referenced anyway, but all are added for completeness.
-
Committed by Rich Felker
-
Committed by Rich Felker
instead of using a waiters count, add a bit to the lock field indicating that the lock may have waiters. threads which obtain the lock after contending for it will perform a potentially-spurious wake when they release the lock.
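A hedged sketch of the waiter-bit scheme, written with C11 atomics and the raw futex syscall standing in for the internal a_cas/__wait helpers; the constant and names are illustrative.

    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define MAYBE_WAITERS 0x40000000    /* illustrative waiter bit */

    static void lock_sketch(_Atomic int *l)
    {
        int old = 0;
        if (atomic_compare_exchange_strong(l, &old, 1))
            return;                                   /* uncontended fast path */
        for (;;) {
            old = 0;
            if (atomic_compare_exchange_strong(l, &old, 1 | MAYBE_WAITERS))
                return;                               /* acquired on the slow path */
            if (!(old & MAYBE_WAITERS))               /* tell the owner we're waiting */
                atomic_compare_exchange_strong(l, &old, old | MAYBE_WAITERS);
            syscall(SYS_futex, l, FUTEX_WAIT, old | MAYBE_WAITERS, 0);
        }
    }

    static void unlock_sketch(_Atomic int *l)
    {
        /* only wake when a waiter may exist; the wake is potentially spurious
         * if we ourselves took the lock on the contended path */
        if (atomic_exchange(l, 0) & MAYBE_WAITERS)
            syscall(SYS_futex, l, FUTEX_WAKE, 1);
    }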
-
- 18 April 2018, 6 commits
-
-
Committed by Rich Felker
commit e3bc22f1 removed all references to __brk.
-
Committed by Rich Felker
the existing laddr function for fdpic cannot translate ELF virtual addresses outside of the LOAD segments to runtime addresses because the fdpic loadmap only covers the logically-mapped part. however the whole point of reclaim_gaps is to recover the slack space up to the page boundaries, so it needs to work with such addresses. add a new laddr_pg function that accepts any address in the page range for the LOAD segment by expanding the loadmap records out to page boundaries. only use the new version for reclaim_gaps, so as not to impact performance of other address lookups. also, only use laddr_pg for the start address of a gap; the end address lies one byte beyond the end, potentially in a different page where it would get mapped differently. instead of mapping end, apply the length (end-start) to the mapped value of start.
-
Committed by Rich Felker
-
Committed by Alexander Monakov
Split 'free' into unmap_chunk and bin_chunk, use the latter to introduce __malloc_donate and use it in reclaim_gaps instead of calling 'free'.
-
Committed by Alexander Monakov
Fix an instance where the realloc code would overallocate by OVERHEAD bytes. Manually arrange for reuse of the memcpy-free-return exit sequence.
-
Committed by Rich Felker
we have always bound symbols at libc.so link time rather than runtime to minimize startup-time relocations and overhead of calls through the PLT, and possibly also to preclude interposition that would not work correctly anyway if allowed. historically, binding at link-time was also necessary for the dynamic linker to work, but the dynamic linker bootstrap overhaul in commit f3ddd173 made it unnecessary. our use of -Bsymbolic-functions, rather than -Bsymbolic, was chosen because the latter is incompatible with public global data: it breaks copy relocations in the main program. however, not all global data needs to be public. by using --dynamic-list instead with an explicit list, we can reduce the number of symbolic relocations left for runtime. this change will also allow us to permit interposition of specific functions (e.g. the allocator) if/when we want to, by adding them to the dynamic list.
-