- 11 Sep 2014, 1 commit
-
-
Committed by Rich Felker
the vast majority of these failures seem to have been oversights at the time _BSD_SOURCE was added, or perhaps shortly afterward. the one which may have had some reason behind it is omission of setpgrp from the _BSD_SOURCE feature profile, since the standard setpgrp interface conflicts with a legacy (pre-POSIX) BSD interface by the same name. however, such omission is not aligned with our general policy in this area (for example, handling of similar _GNU_SOURCE cases) and should not be preserved.
-
- 08 Sep 2014, 3 commits
-
-
Committed by Szabolcs Nagy
the previous commit was a no-op in exp10l because the LDBL_* macros were implicitly 0 (the preprocessor silently evaluates undefined identifiers as 0 without warning).
-
Committed by Szabolcs Nagy
__polevll, __p1evll and exp10l were provided on archs where long double is the same as double. The first two were completely unused, and exp10l can simply be a wrapper around exp10.
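A minimal sketch of the wrapper approach (not the literal musl source; the LDBL_* check is an assumption based on the previous commit's context):

    #include <float.h>
    #include <math.h>

    double exp10(double);  /* GNU extension; declared here for the sketch */

    #if LDBL_MANT_DIG == 53 && LDBL_MAX_EXP == 1024
    /* long double and double share a representation on this arch, so
     * exp10l can forward to exp10 with no loss of precision. */
    long double exp10l(long double x)
    {
        return exp10(x);
    }
    #endif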
-
Committed by Szabolcs Nagy
open file description locks are inherited across fork and are only dropped automatically after the last fd referring to the file description is closed; they can be used to synchronize between threads that open separate file descriptions for the same file. new in linux 3.15, commit 0d3f7a2dd2f5cf9642982515e020c1aee2cf7af6
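A hedged usage sketch of the new fcntl commands (F_OFD_SETLK / F_OFD_SETLKW / F_OFD_GETLK); the file name is arbitrary, and the constants require linux 3.15+ and headers that expose them:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("lockfile", O_RDWR | O_CREAT, 0644);
        if (fd < 0) return 1;

        /* the lock belongs to this open file description, not to the
         * process, so a second thread that open()s the same file gets
         * its own description and contends with this one. */
        struct flock fl = {
            .l_type = F_WRLCK,
            .l_whence = SEEK_SET,
            .l_start = 0,
            .l_len = 0,            /* 0 = lock the whole file */
            /* l_pid is left as 0, which the F_OFD_* commands require */
        };
        if (fcntl(fd, F_OFD_SETLKW, &fl) < 0) {
            perror("F_OFD_SETLKW");
            return 1;
        }

        /* ... exclusive access to the file here ... */

        fl.l_type = F_UNLCK;
        fcntl(fd, F_OFD_SETLK, &fl);
        close(fd);
        return 0;
    }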
-
- 07 Sep 2014, 7 commits
-
-
Committed by Rich Felker
based on patch by Jens Gustedt. the main difficulty here is handling the difference between start function signatures and thread return types for C11 threads versus POSIX threads. pointers to void are assumed to be able to represent faithfully all values of int. the function pointer for the thread start function is cast to an incorrect type for passing through pthread_create, but is cast back to its correct type before calling so that the behavior of the call is well-defined. changes to the existing threads implementation were kept minimal to reduce the risk of regressions, and duplication of code that carries implementation-specific assumptions was avoided for ease and safety of future maintenance.
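musl's actual code casts the start-function pointer itself and flags the thread descriptor; a portable illustration of the same return-type bridging, using only the public pthread API and a hypothetical function name, needs a small heap-allocated trampoline instead:

    #include <pthread.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef int (*c11_start_t)(void *);   /* same shape as thrd_start_t */

    struct c11_start { c11_start_t func; void *arg; };

    /* call the int-returning C11 start function and pack its result
     * into the void * pthread expects, relying on void * being able to
     * represent all values of int. */
    static void *c11_trampoline(void *p)
    {
        struct c11_start s = *(struct c11_start *)p;
        free(p);
        return (void *)(intptr_t)s.func(s.arg);
    }

    int thrd_create_sketch(pthread_t *t, c11_start_t func, void *arg)
    {
        struct c11_start *s = malloc(sizeof *s);
        if (!s) return -1;
        s->func = func;
        s->arg = arg;
        if (pthread_create(t, 0, c11_trampoline, s)) {
            free(s);
            return -1;
        }
        return 0;
    }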
-
Committed by Jens Gustedt
Because of the clear separation for private pthread_cond_t, these interfaces are quite simple and direct.
-
Committed by Jens Gustedt
-
Committed by Jens Gustedt
These all have POSIX equivalents, but aside from tss_get, they all have minor changes to the signature or return value and thus need to exist as separate functions.
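A hedged sketch of the mapping, assuming (as in musl) that tss_t and pthread_key_t are the same type; the _sketch names are hypothetical, and the return-value translation is the kind of minor difference meant above:

    #include <pthread.h>
    #include <threads.h>

    int tss_create_sketch(tss_t *key, tss_dtor_t dtor)
    {
        /* pthread_key_create reports 0 or an errno value; the C11
         * interface reports thrd_success or thrd_error instead. */
        return pthread_key_create(key, dtor) ? thrd_error : thrd_success;
    }

    void *tss_get_sketch(tss_t key)
    {
        /* identical signature and semantics, so tss_get itself could
         * simply be an alias. */
        return pthread_getspecific(key);
    }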
-
Committed by Rich Felker
based on patch by Jens Gustedt. mtx_t and cnd_t are defined in such a way that they are formally "compatible types" with pthread_mutex_t and pthread_cond_t, respectively, when accessed from a different translation unit. this makes it possible to implement the C11 functions using the pthread functions (which will dereference them with the pthread types) without having to use the same types, which would necessitate either namespace violations (exposing pthread type names in threads.h) or incompatible changes to the C++ name mangling ABI for the pthread types. for the rest of the types, things are much simpler; using identical types is possible without any namespace considerations.
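A hedged illustration of the compatible-types trick; the array sizes and member names below are placeholders, not musl's actual definitions:

    /* what pthread.h declares (simplified placeholder): */
    typedef struct { union { int __i[6]; volatile void *volatile __p[6]; } __u; } pthread_mutex_t;

    /* threads.h repeats the same member layout without mentioning any
     * pthread_* name. struct compatibility in C is judged member by
     * member across translation units, so the mtx_* functions (compiled
     * against pthread_mutex_t) may legitimately dereference an object
     * the application declared as mtx_t. */
    typedef struct { union { int __i[6]; volatile void *volatile __p[6]; } __u; } mtx_t;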
-
Committed by Jens Gustedt
The intent of this is to avoid namespace pollution in the C threads implementation. This has two sides to it. First, we have to provide symbols that don't pollute the namespace for the C threads implementation. Second, we have to clean up some internal uses of POSIX functions so that they don't implicitly drag in such symbols.
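One common shape for this kind of cleanup (a generic sketch, not a list of the functions musl actually renamed) is to implement the work under a reserved __-prefixed name that internal callers use, and expose the POSIX name only as a weak alias:

    /* musl-style weak_alias macro; the function names below are made up
     * for illustration. */
    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    /* implementation under a reserved name, safe for the C threads code
     * (or any other internal caller) to reference directly */
    int __example_wait(int x)
    {
        return x;
    }

    /* the POSIX-namespace name is only a weak alias, so internal callers
     * never reference the public identifier directly. */
    weak_alias(__example_wait, example_wait);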
-
Committed by Rich Felker
based on patch by Jens Gustedt for inclusion with C11 threads implementation, but committed separately since it's independent of threads.
-
- 06 Sep 2014, 7 commits
-
-
Committed by Rich Felker
-
Committed by Szabolcs Nagy
there is no blksize64_t (blksize_t is always long), but there are fsblkcnt64_t and fsfilcnt64_t types in sys/stat.h and sys/types.h, and glob.h was missing glob64_t.
-
Committed by Szabolcs Nagy
the versionsort64, aio*64 and lio*64 symbols were missing; they are only needed for glibc ABI compatibility, since at the source level dirent.h and aio.h already redirect them.
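The "source level" redirect mentioned here is just a macro alias in the headers; what was missing were the actual ABI-level symbols. Roughly (the guard macros shown are the usual glibc-compat convention, assumed here):

    /* dirent.h, roughly: the 64-suffixed names alias the regular
     * functions because off_t and friends are already 64-bit. */
    #if defined(_LARGEFILE64_SOURCE) || defined(_GNU_SOURCE)
    #define versionsort64 versionsort
    #define dirent64 dirent
    #endif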
-
Committed by Szabolcs Nagy
-
Committed by Rich Felker
this error resulted in an out-of-bounds read, as opposed to a reported error, when calling the function with an argument one greater than the max valid index.
-
Committed by Rich Felker
if the loop stopped due to reaching the end of the string, the subsequent increment could move the position one past the end of the buffer. no further writes happen, the reads cannot fault unless the stack completely lacks any zero bytes, and reading junk should not yield an incorrect result from the function either; nonetheless the code was wrong and needed to be fixed.
-
Committed by Rich Felker
the condition was probably intended to be !*p rather than !p, but neither is needed here. the subsequent code naturally handles the case where it's already at end of string.
-
- 05 Sep 2014, 6 commits
-
-
Committed by Rich Felker
U+00DF ('ß') has had an uppercase form (U+1E9E) available since Unicode 5.1, but Unicode lacks the case mappings for it due to stability policy. when I added support for the new character in commit 1a63a9fc, I omitted the mapping in the lowercase-to-uppercase direction. this choice was not based on any actual information, only assumptions. this commit adds bidirectional case mappings between U+00DF and U+1E9E, and removes the special-case hack that allowed U+00DF to be identified as lowercase despite lacking a mapping. aside from strong evidence that this is the "right" behavior for real-world usage of these characters, several factors informed this decision: - the other "potentially correct" mapping, to "SS", is not representable in the C case-mapping system anyway. - leaving one letter in lowercase form when transforming a string to uppercase is obviously wrong. - having a character which is nominally lowercase but which is fixed under case mapping violates reasonable invariants.
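The observable effect, as a small hedged example (the locale name is an assumption; any UTF-8 locale works):

    #include <locale.h>
    #include <stdio.h>
    #include <wctype.h>

    int main(void)
    {
        setlocale(LC_CTYPE, "C.UTF-8");
        wint_t lower = 0x00DF;   /* ß */
        wint_t upper = 0x1E9E;   /* ẞ */

        /* after this change the two characters map to each other, and
         * ß is classified as lowercase via its mapping rather than via
         * a special case. */
        printf("towupper(U+00DF) = U+%04X\n", (unsigned)towupper(lower));
        printf("towlower(U+1E9E) = U+%04X\n", (unsigned)towlower(upper));
        printf("iswlower(U+00DF) = %d\n", iswlower(lower) != 0);
        return 0;
    }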
-
Committed by Rich Felker
per POSIX these functions are both cancellation points, so they must act on any cancellation request which is pending prior to the call. previously, only the code path where actual waiting took place could act on cancellation.
-
Committed by Rich Felker
the outer getnameinfo function already has a properly-sized temporary buffer for storing the reverse dns (ptr) result. there is no reason for the callback to use a secondary buffer and copy it on success, and doing so potentially expanded the impact of the dn_expand bug that was fixed in commit 49d2c8c6. this change reduces the code size by a small amount, and also reduces the run-time stack space requirements by about 256 bytes.
-
Committed by Rich Felker
previously, fgets, fputs, fread, and fwrite completely omitted locking and access to the FILE object when their arguments yielded a zero length read or write operation independent of the FILE state. this optimization was invalid; it wrongly skipped marking the stream as byte-oriented (a C conformance bug) and exposed observably missing synchronization (a POSIX conformance bug) where one of these functions could wrongly complete despite another thread provably holding the lock.
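The byte-orientation part of this is observable from portable code; a small example of the behavior the fix establishes (per this commit's reading of the standard):

    #include <stdio.h>
    #include <wchar.h>

    int main(void)
    {
        FILE *f = tmpfile();
        if (!f) return 1;

        /* a zero-length write must still take the lock and bind the
         * stream to byte orientation, which fwide() can observe. */
        fwrite("", 1, 0, f);
        printf("orientation after zero-length fwrite: %d\n", fwide(f, 0));
        /* expected: negative (byte-oriented), not 0 (still unbound) */

        fclose(f);
        return 0;
    }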
-
Committed by Rich Felker
the C standard requires that "the contents of the array remain unchanged" in this case. this patch also changes the behavior on read errors, but in that case "the array contents are indeterminate", so the application cannot inspect them anyway.
-
Committed by Szabolcs Nagy
The empty name has been rejected in dn_expand since commit 56b57f37, which is a regression, as reported by Natanael Copa. Furthermore, if an offset pointer in a compressed name pointed to a terminating 0 byte (instead of a label), the returned name was not null terminated.
-
- 27 Aug 2014, 2 commits
-
-
Committed by Szabolcs Nagy
add static_assert and protect the other new C11 keyword macros with #ifndef __cplusplus so they don't conflict with C++ keywords.
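The header pattern in question, sketched (this is the standard C11 idiom; the exact musl wording may differ):

    /* assert.h: static_assert must be provided as a macro for C, but in
     * C++11 it is already a keyword, so defining it there would clash. */
    #ifndef __cplusplus
    #define static_assert _Static_assert
    #endif

    /* the same guard applies to the other keyword-like macros, e.g. in
     * stdalign.h: */
    #ifndef __cplusplus
    #define alignas _Alignas
    #define alignof _Alignof
    #endif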
-
Committed by Szabolcs Nagy
C11 introduced *_DECIMAL_DIG and *_HAS_SUBNORM macros.
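For reference, what the new macros look like in use; the values noted in the comment assume IEEE-754 binary32/binary64 (long double is arch-dependent and omitted):

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* 9 and 17 for IEEE single/double; *_HAS_SUBNORM is 1 when
         * subnormal values are supported. */
        printf("FLT_DECIMAL_DIG = %d\n", FLT_DECIMAL_DIG);
        printf("DBL_DECIMAL_DIG = %d\n", DBL_DECIMAL_DIG);
        printf("FLT_HAS_SUBNORM = %d\n", FLT_HAS_SUBNORM);
        printf("DBL_HAS_SUBNORM = %d\n", DBL_HAS_SUBNORM);
        return 0;
    }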
-
- 26 Aug 2014, 7 commits
-
-
Committed by Rich Felker
this function is needed for some important practical applications of ABI compatibility, and may be useful for supporting some non-portable software at the source level too. I was hesitant to add a function which imposes any constraints on malloc internals; however, it turns out that any malloc implementation which has realloc must already have an efficient way to determine the size of existing allocations, so no additional constraint is imposed. for now, some internal malloc definitions are duplicated in the new source file. if/when malloc is refactored to put them in a shared internal header file, these could be removed. since malloc_usable_size is conventionally declared in malloc.h, the empty stub version of this file was no longer suitable. it's updated to provide the standard allocator functions, nonstandard ones (even if stdlib.h would not expose them based on the feature test macros in effect), and any malloc-extension functions provided (currently, only malloc_usable_size).
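A short usage example; note the returned value may exceed the requested size, and relying on that excess is non-portable:

    #include <malloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        void *p = malloc(100);
        if (!p) return 1;

        /* at least 100, possibly more, depending on how the allocator
         * rounds request sizes */
        size_t n = malloc_usable_size(p);
        printf("requested 100, usable %zu\n", n);

        free(p);
        return 0;
    }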
-
Committed by Rich Felker
if there is already a waiter for a lock, spinning on the lock is essentially an attempt to steal it from whichever waiter would obtain it via any priority rules in place, and is therefore undesirable. in the current implementation, there is always an inherent race window at unlock during which a newly-arriving thread may steal the lock from the existing waiters, but we should aim to keep this window minimal rather than enlarging it.
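The policy, illustrated with a toy lock built on C11 atomics (musl's real mutex uses futexes and its own atomic primitives; this only shows the "don't spin past existing waiters" idea):

    #include <sched.h>
    #include <stdatomic.h>

    struct toy_lock {
        atomic_int locked;
        atomic_int waiters;
    };

    static void toy_lock_acquire(struct toy_lock *l)
    {
        int spins = 100;

        /* spin only while nobody is already queued as a waiter;
         * spinning past a waiter would just race to steal the lock
         * from whoever should get it next. */
        while (spins-- > 0
               && atomic_load(&l->locked)
               && !atomic_load(&l->waiters))
            ;

        for (;;) {
            int expected = 0;
            if (atomic_compare_exchange_strong(&l->locked, &expected, 1))
                return;
            /* contended path: register as a waiter and block; a real
             * implementation futex-waits here, we merely yield. */
            atomic_fetch_add(&l->waiters, 1);
            sched_yield();
            atomic_fetch_sub(&l->waiters, 1);
        }
    }

    static void toy_lock_release(struct toy_lock *l)
    {
        atomic_store(&l->locked, 0);
    }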
-
Committed by Rich Felker
-
Committed by Rich Felker
empirically, this increases the maximum rate of wait/post operations between two threads by 20-150 times on machines I tested, including x86 and arm. conceptually, it makes sense to do some spinning because semaphores are intended to be usable as a notification mechanism between threads, not just as locks, and low-latency notification is a valuable property to have.
-
Committed by Rich Felker
this was broken by commit ea818ea8.
-
Committed by Rich Felker
the previous spin limit of 10000 was utterly unreasonable. empirically, it could consume up to 200000 cycles, whereas a failed futex wait (EAGAIN) typically takes 1000 cycles or less, and even a true wait/wake round seems much less expensive. the new counts (100 for general wait, 200 in barrier) were simply chosen to be in the range of what's reasonable without having adverse effects on casual micro-benchmark tests I have been running. they may still be too high, from a standpoint of not wasting cpu cycles, but at least they're a lot better than before. rigorous testing across different archs and cpu models should be performed at some point to determine whether further adjustments should be made.
-
Committed by Rich Felker
conceptually, a_spin needs to be at least a compiler barrier, so the compiler will not optimize out loops (and the load on each iteration) while spinning. it should also be a memory barrier, or the spinning thread might keep spinning without noticing stores from other threads, thus delaying for longer than it should. ideally, an optimal a_spin implementation that avoids unnecessary cache/memory contention should be chosen for each arch, but for now, the easiest thing is to perform a useless a_cas on the calling thread's stack.
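A hedged sketch of such a generic fallback (arch-specific versions would use a dedicated pause/yield instruction; the builtin used here is GCC's, and the function name is made up):

    /* a compare-and-swap on a local variable is both a compiler barrier
     * and a full memory barrier, yet it only touches the calling
     * thread's own stack, so it does not contend on any shared line. */
    static inline void a_spin_sketch(void)
    {
        int tmp = 0;
        __sync_val_compare_and_swap(&tmp, 0, 0);
    }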
-
- 24 Aug 2014, 1 commit
-
-
Committed by Rich Felker
this is analogous to commit fffc5cda, which fixed the corresponding issue for mutexes. the robust list can't be used here because the locks do not share a common layout with mutexes. at some point it may make sense to simply incorporate a mutex object into the FILE structure and use it, but that would be a much more invasive change, and it doesn't mesh well with the current design that uses a simpler code path for internal locking and pulls in the recursive-mutex-like code when the flockfile API is used explicitly.
-
- 23 Aug 2014, 2 commits
-
-
Committed by Rich Felker
for unknown syscall commands, the kernel produces ENOSYS, not EINVAL.
-
Committed by Rich Felker
the subsequent code in pthread_create and the code which copies TLS initialization images to the new thread's TLS space assume that the memory provided to them is zero-initialized, which is true when it's obtained by pthread_create using mmap. however, when the caller provides a stack using pthread_attr_setstack, pthread_create cannot make any assumptions about the contents. simply zero-filling the relevant memory in this case is the simplest and safest fix.
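The scenario in question, from the caller's side (alignment and minimum-size details glossed over): the memory handed in below has arbitrary contents, which is why pthread_create must now zero-fill the portion it uses for TLS.

    #include <pthread.h>
    #include <stdlib.h>

    static void *start(void *arg) { return arg; }

    int main(void)
    {
        pthread_attr_t attr;
        pthread_t t;
        size_t size = 1 << 20;

        /* caller-provided stack: whatever malloc left in this block is
         * what pthread_create sees, so it cannot assume zeroed memory */
        void *stack = malloc(size);
        if (!stack) return 1;

        pthread_attr_init(&attr);
        pthread_attr_setstack(&attr, stack, size);
        if (pthread_create(&t, &attr, start, 0)) return 1;
        pthread_join(t, 0);
        pthread_attr_destroy(&attr);
        free(stack);
        return 0;
    }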
-
- 21 Aug 2014, 1 commit
-
-
Committed by Rich Felker
unfortunately this needs to be able to vary by arch, because of a huge mess GCC made: the GCC definition, which became the ABI, depends on quirks in GCC's definition of __alignof__, which does not match the formal alignment of the type. GCC's __alignof__ unexpectedly exposes an implementation detail, its "preferred alignment" for the type, rather than the formal/ABI alignment of the type, which it only actually uses in structures. on most archs the two values are the same, but on some (at least i386) the preferred alignment is greater than the ABI alignment. I considered using _Alignas(8) unconditionally, but on at least one arch (or1k), the alignment of max_align_t with GCC's definition is only 4 (even the "preferred alignment" for these types is only 4).
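A sketch of the resulting shape of the definitions (member names are illustrative, and only the generic form plus an i386-style override are shown):

    /* generic arch: the natural alignment of the most-aligned basic
     * types is already correct. */
    typedef struct {
        long long __ll;
        long double __ld;
    } max_align_t_generic_sketch;

    /* i386-style arch: GCC's __alignof__ ("preferred alignment") of
     * these types is 8 even though their in-struct ABI alignment is 4,
     * so matching the GCC/C++ ABI requires forcing 8-byte alignment. */
    typedef struct {
        _Alignas(8) long long __ll;
        _Alignas(8) long double __ld;
    } max_align_t_i386_sketch;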
-
- 19 Aug 2014, 1 commit
-
-
Committed by Rich Felker
the main idea of the changes made is to have waiters wait directly on the "barrier" lock that was used to prevent them from making forward progress too early, rather than first waiting on the atomic state value and then attempting to lock the barrier. in addition, adjustments to the mutex waiter count are optimized. previously, each waking waiter decremented the count (unless it was the first) then immediately incremented it again for the next waiter (unless it was the last). this was a roundabout way of achieving the equivalent of incrementing it once for the first waiter and decrementing it once for the last.
-
- 18 Aug 2014, 2 commits
-
-
Committed by Rich Felker
previously, wake order could be unpredictable: if a waiter happened to leave its futex wait on the state early, e.g. due to EAGAIN while restarting after a signal handler, it could acquire the mutex out of turn. handling this required ugly O(n) list walking in the unwait function and accounting to remove waiters that already woke from the list. with the new changes, the "barrier" locks in each waiter node are only unlocked in turn. in addition to simplifying the code, this seems to improve performance slightly, probably by reducing the number of accesses threads make to each other's stacks. as an additional benefit, unrecoverable mutex re-locking errors (mainly ENOTRECOVERABLE for robust mutexes) no longer need to be handled with deadlock; they can be reported to the caller, since the unlocking sequence makes it unnecessary to rely on the mutex to synchronize access to the waiter list.
-
Committed by Rich Felker
the immediate issue that was reported by Jens Gustedt and needed to be fixed was corruption of the cv/mutex waiter states when switching to using a new mutex with the cv after all waiters were unblocked but before they finished returning from the wait function. self-synchronized destruction was also handled poorly and may have had race conditions. and the use of sequence numbers for waking waiters admitted a theoretical missed-wakeup if the sequence number wrapped through the full 32-bit space. the new implementation is largely documented in the comments in the source. the basic principle is to use linked lists initially attached to the cv object, but detachable on signal/broadcast, made up of nodes residing in automatic storage (stack) on the threads that are waiting. this eliminates the need for waiters to access the cv object after they are signaled, and allows us to limit wakeup to one waiter at a time during broadcasts even when futex requeue cannot be used. performance is also greatly improved, roughly doubling in some tests. basically nothing is changed in the process-shared cond var case, where this implementation does not work, since processes do not have access to one another's local storage.
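A rough sketch of the data structure described (field names are illustrative, not musl's actual struct): each waiter contributes a node on its own stack, linked off the cv, and signal/broadcast detaches the list and releases each node's barrier in turn.

    /* illustrative only; the real nodes also track waiter state
     * transitions, the associated mutex, and futex-requeue details. */
    struct cv_waiter_sketch {
        struct cv_waiter_sketch *prev, *next;
        int state;       /* e.g. WAITING / SIGNALED / LEAVING */
        int barrier;     /* lock this waiter blocks on; released in turn
                          * by the signaler or by the previous waiter */
    };

    struct cv_sketch {
        struct cv_waiter_sketch *head;   /* newest waiter */
        struct cv_waiter_sketch *tail;   /* oldest waiter, woken first */
    };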
-