- 04 Oct 2022, 1 commit
-
-
Committed by olefirenque
Signed-off-by: Maxim Polyakov <polyakov.maksim@huawei.com>
-
- 29 Sep 2022, 3 commits
-
-
Committed by Maxim Polyakov
Signed-off-by: Maxim Polyakov <polyakov.maksim@huawei.com>
Change-Id: Ic5e182b6e70537e74c4d764aa9140d6fa932f1ab
-
Committed by Maxim Polyakov
Signed-off-by: Maxim Polyakov <polyakov.maksim@huawei.com>
Change-Id: Idc288e4cd9b467ada410f4f47126ad0fb1de8dfd
-
Committed by Maxim Polyakov
Add malloc_info, malloc_stats_print, mallinfo2, malloc_iterate, malloc_enable, malloc_disable, mallopt, and malloc_backtrace to the musl default allocator.
Signed-off-by: Maxim Polyakov <polyakov.maksim@huawei.com>
Change-Id: I9300684afd69750973a3b9046aeaaade72ee88fe
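For reference, these entry points are not musl inventions; they commonly follow the glibc, jemalloc, and bionic prototypes sketched below. Treat the signatures as an assumption about what this commit adds, not a quote of the OpenHarmony headers:

```c
#include <stdio.h>      /* FILE */
#include <stdint.h>     /* uintptr_t */
#include <sys/types.h>  /* ssize_t */

struct mallinfo2;  /* glibc-style counters struct, definition elided */

int    malloc_info(int options, FILE *fp);        /* glibc: XML heap dump */
struct mallinfo2 mallinfo2(void);                 /* glibc: usage counters */
int    mallopt(int param, int value);             /* SVID/glibc: tuning knob */
void   malloc_stats_print(void (*write_cb)(void *, const char *),
                          void *cbopaque, const char *opts); /* jemalloc-style */
/* bionic-style heap introspection, as used by leak detectors */
int    malloc_iterate(uintptr_t base, size_t size,
                      void (*callback)(uintptr_t base, size_t size, void *arg),
                      void *arg);
void   malloc_disable(void);                      /* block allocations... */
void   malloc_enable(void);                       /* ...while iterating */
ssize_t malloc_backtrace(void *pointer, uintptr_t *frames, size_t frame_count);
```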
-
- 15 Sep 2022, 2 commits
-
-
Committed by leixin
Signed-off-by: leixin <leixin19@huawei.com>
-
Committed by leixin
Change-Id: Iba2ad5a46f7fca175e77720a2cee97f401ffbafa
Signed-off-by: leixin <leixin19@huawei.com>
-
- 01 Sep 2022, 1 commit
-
-
Committed by Far
1. Two fields, usize and state, are added to the chunk overhead area, recording the size of the payload actually in use and the current state of the chunk. The state tracks whether the chunk is allocated to the user and whether it has been poisoned. Poisoning fills the chunk's memory outside the valid payload (i.e. the memory the user actually uses) with randomly generated data; checking these regions on malloc/free detects overflows and use-after-free.
2. For performance, not every chunk is poisoned; poisoning is performed once every POISON_COUNT_DOWN_BASE malloc/free operations.
Signed-off-by: Far <yesiyuan2@huawei.com>
Change-Id: Idb341c202d8ec99f5370d4f589ee261ded8b163f
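A minimal sketch of the scheme as described. Only POISON_COUNT_DOWN_BASE comes from the commit; every other name is illustrative, and a real implementation would key the pattern on a hidden random seed rather than the chunk address:

```c
#include <stddef.h>
#include <stdint.h>

#define POISON_COUNT_DOWN_BASE 32  /* assumed period; actual value unknown */

/* deterministic per-chunk byte pattern, so it can be regenerated at
 * check time without storing the poison bytes anywhere */
static unsigned char poison_byte(const void *payload, size_t i)
{
    return (unsigned char)((((uintptr_t)payload >> 4) + i) * 2654435761u);
}

/* fill the slack between the live payload (usize) and the chunk
 * capacity with pseudo-random data */
static void poison_slack(unsigned char *payload, size_t usize, size_t cap)
{
    for (size_t i = usize; i < cap; i++)
        payload[i] = poison_byte(payload, i);
}

/* on malloc/free: any modified slack byte implies an overflow or UAF */
static int poison_intact(const unsigned char *payload, size_t usize, size_t cap)
{
    for (size_t i = usize; i < cap; i++)
        if (payload[i] != poison_byte(payload, i)) return 0;
    return 1;
}

/* poison only every POISON_COUNT_DOWN_BASE-th operation, as the
 * commit message says, to keep the overhead bounded */
static unsigned poison_countdown = POISON_COUNT_DOWN_BASE;
static int should_poison(void)
{
    if (--poison_countdown) return 0;
    poison_countdown = POISON_COUNT_DOWN_BASE;
    return 1;
}
```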
-
- 19 Aug 2022, 1 commit
-
-
Committed by Far
After a chunk is freed, it is not immediately placed on the allocation queue or returned to the system; it is first put into a quarantine. Only when the quarantine is full are the chunks in it moved to the allocation queue or their physical memory returned to the operating system.
Signed-off-by: Far <yesiyuan2@huawei.com>
Change-Id: I019c065b2bc52f83655e516e13fcb14420a78861
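A sketch of that flow under the stated behavior; the slot count and release_chunk are hypothetical stand-ins:

```c
#include <stddef.h>

#define QUARANTINE_SLOTS 64          /* assumed capacity */

/* hypothetical: put a chunk back on the free bins, or return its
 * physical pages to the kernel */
extern void release_chunk(void *p);

static void *quarantine[QUARANTINE_SLOTS];
static size_t q_fill;

void quarantine_free(void *p)
{
    quarantine[q_fill++] = p;        /* freed chunks park here first */
    if (q_fill == QUARANTINE_SLOTS) {
        /* quarantine full: only now do the chunks become allocatable
         * again (or their memory goes back to the OS) */
        for (size_t i = 0; i < QUARANTINE_SLOTS; i++)
            release_chunk(quarantine[i]);
        q_fill = 0;
    }
}
```

Delaying reuse this way makes a use-after-free more likely to touch memory that is still in a detectable state rather than an already-recycled allocation.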
-
- 16 Aug 2022, 1 commit
-
-
Committed by ganlan
Signed-off-by: ganlan <tony.gan@huawei.com>
-
- 28 Jul 2022, 1 commit
-
-
Committed by Far
1. Pointer obfuscation: the next and prev pointers of a free chunk's doubly linked list are obfuscated by XORing each pointer with a key. Each bin has its own key, generated by a random number generator.
2. Safe unlink: during the unbin operation, the validity of the doubly linked list is verified by checking that the pointers to the current chunk held by the previous and next entries are intact; otherwise the process is terminated.
Both features are toggled by the MALLOC_FREELIST_HARDENED macro, which can be switched directly from the build framework (append --gn-args "musl_secure_level=1" to the build command to enable it).
Change-Id: I05fd4404aeebcb396c8471f181a30305fb9dbe74
Signed-off-by: Far <yesiyuan2@huawei.com>
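A condensed sketch of both measures; field and helper names are illustrative, and the real code guards them with MALLOC_FREELIST_HARDENED:

```c
#include <stdint.h>
#include <stdlib.h>

struct chunk { struct chunk *next, *prev; };  /* stored obfuscated */

static uintptr_t bin_key[64];  /* per-bin keys, drawn from a RNG at startup */

/* XOR is its own inverse, so one helper both encodes and decodes */
static struct chunk *ptr_mangle(struct chunk *p, int bin)
{
    return (struct chunk *)((uintptr_t)p ^ bin_key[bin]);
}

static void unbin(struct chunk *c, int bin)
{
    struct chunk *next = ptr_mangle(c->next, bin);
    struct chunk *prev = ptr_mangle(c->prev, bin);
    /* safe unlink: both neighbours must still point back at c,
     * otherwise the freelist was corrupted -- kill the process */
    if (ptr_mangle(prev->next, bin) != c || ptr_mangle(next->prev, bin) != c)
        abort();
    prev->next = c->next;  /* still-encoded pointers move over verbatim */
    next->prev = c->prev;
}
```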
-
- 18 Mar 2022, 1 commit
-
-
Committed by zhushengle
Signed-off-by: zhushengle <zhushengle@huawei.com>
Change-Id: I1e9c3ee45c16ce719450ab3fe9819e8f608eeb49
-
- 25 Jan 2022, 1 commit
-
-
Committed by Wang xiaoyuan
Signed-off-by: Wang xiaoyuan <wangxiaoyuan6@huawei.com>
-
- 18 Jan 2022, 1 commit
-
-
Committed by Wang xiaoyuan
Signed-off-by: Wang xiaoyuan <wangxiaoyuan6@huawei.com>
-
- 06 Jan 2022, 1 commit
-
-
Committed by chuxuezhe1111
Signed-off-by: chuxuezhe111 <hanjixiao@huawei.com>
-
- 07 Jul 2021, 1 commit
-
-
Committed by zhuoli
2. Add BUILD.gn and its components
Signed-off-by: zhuoli <pengzhuoli@huawei.com>
-
- 11 Jun 2021, 1 commit
-
-
Committed by Caoruihong
isolate changes, keep original musl sources clean.
Signed-off-by: Caoruihong <crh.cao@huawei.com>
Change-Id: Id7f3a5109771f93d397e30febba36e09ddaf4f36
-
- 11 Mar 2021, 1 commit
-
-
Committed by mamingshuai
-
- 09 Sep 2020, 1 commit
-
-
Committed by wenjun
-
- 17 Aug 2020, 1 commit
-
-
Committed by c00346986
Description: userspace musl code
Team: OTHERS
Feature or Bugfix: Feature
Binary Source: NA
PrivateCode(Yes/No): No
Change-Id: I1d445ef7d16285be98b1857f4c01b94c9759daea
Reviewed-on: http://mgit-tm.rnd.huawei.com/10274931
Reviewed-by: caoruihong 00546070 <crh.cao@huawei.com>
Tested-by: public jenkins <public_jenkins@notesmail.huawei.com>
Reviewed-by: shenwei 00579521 <denny.shenwei@huawei.com>
-
- 13 Sep 2018, 1 commit
-
-
Committed by Rich Felker
-
- 20 Apr 2018, 4 commits
-
-
Committed by Rich Felker
commit 618b18c7 removed the previous detection and hardening since it was incorrect. commit 72141795 already handled all that remained for hardening the static-linked case. in the dynamic-linked case, have the dynamic linker check whether malloc was replaced and make that information available.

with these changes, the properties documented in commit c9f415d7 are restored: if calloc is not provided, it will behave as malloc+memset, and any of the memalign-family functions not provided will fail with ENOMEM.
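__malloc_replaced is the flag musl's dynamic linker actually sets; the calloc below is a simplified sketch of how such a flag gets consumed, not musl's literal code:

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>
#include <stdlib.h>

extern int __malloc_replaced;   /* set by ldso if malloc was interposed */
void *__malloc0(size_t n);      /* libc-internal zero-filling allocator */

void *calloc(size_t m, size_t n)
{
    if (n && m > (size_t)-1/n) { errno = ENOMEM; return 0; }
    n *= m;
    if (!__malloc_replaced)
        return __malloc0(n);    /* our own malloc: cheap lazy zeroing */
    void *p = malloc(n);        /* interposed malloc, calloc not replaced: */
    return p ? memset(p, 0, n) : 0;  /* behave exactly as malloc+memset */
}
```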
-
Committed by Rich Felker
this change serves multiple purposes:

1. it ensures that static linking of memalign-family functions will pull in the system malloc implementation, thereby causing link errors if an attempt is made to link the system memalign functions with a replacement malloc (incomplete allocator replacement).

2. it eliminates calls to free that are unpaired with allocations, which are confusing when setting breakpoints or tracing execution.

as a bonus, making __bin_chunk external may discourage aggressive and unnecessary inlining of it.
-
Committed by Rich Felker
-
Committed by Rich Felker
commit c9f415d7 included checks to make calloc fall back to memset if used with a replaced malloc that didn't also replace calloc, and the memalign family fail if free has been replaced. however, the checks gave false positives for replacement whenever malloc or free resolved to a PLT entry in the main program.

for now, disable the checks so as not to leave libc in a broken state. this means that the properties documented in the above commit are no longer satisfied; failure to replace calloc and the memalign family along with malloc is unsafe if they are ever called.

the calloc checks were correct but useless for static linking. in both cases (simple or full malloc), calloc and malloc are in a source file together, so replacement of one but not the other would give linking errors. the memalign-family check was useful for static linking, but broken for dynamic as described above, and can be replaced with a better link-time check.
-
- 19 Apr 2018, 1 commit
-
-
Committed by Rich Felker
replacement is subject to conditions on the replacement functions. they may only call functions which are async-signal-safe, as specified either by POSIX or as an implementation-defined extension. if any allocator functions are replaced, at least malloc, realloc, and free must be provided. if calloc is not provided, it will behave as malloc+memset. any of the memalign-family functions not provided will fail with ENOMEM.

in order to implement the above properties, calloc and __memalign check that they are using their own malloc or free, respectively. choice to check malloc or free is based on considerations of supporting __simple_malloc. in order to make this work, calloc is split into separate versions for __simple_malloc and full malloc; commit ba819787 already did most of the split anyway, and completing it saves an extra call frame.

previously, use of -Bsymbolic-functions made dynamic interposition impossible. now, we are using an explicit dynamic-list, so add allocator functions to the list. most are not referenced anyway, but all are added for completeness.
-
- 18 Apr 2018, 3 commits
-
-
Committed by Rich Felker
-
Committed by Alexander Monakov
Split 'free' into unmap_chunk and bin_chunk, use the latter to introduce __malloc_donate and use it in reclaim_gaps instead of calling 'free'.
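In outline (the chunk macros below abbreviate musl's oldmalloc headers; the two helpers are stubbed and __malloc_donate's body is elided):

```c
#include <stddef.h>

struct chunk { size_t psize, csize; struct chunk *next, *prev; };
#define C_INUSE         ((size_t)1)
#define OVERHEAD        (2*sizeof(size_t))
#define MEM_TO_CHUNK(p) ((struct chunk *)((char *)(p) - OVERHEAD))
#define IS_MMAPPED(c)   (!((c)->csize & C_INUSE))

static void unmap_chunk(struct chunk *self) { (void)self; /* munmap elided */ }
static void bin_chunk(struct chunk *self)   { (void)self; /* merge+bin elided */ }

void free(void *p)
{
    if (!p) return;
    struct chunk *self = MEM_TO_CHUNK(p);
    if (IS_MMAPPED(self)) unmap_chunk(self);  /* whole-mapping chunks */
    else bin_chunk(self);                     /* merge neighbours, bin it */
}

/* reclaim_gaps can now donate ldso's unused memory straight to the
 * bins, with no malloc ever having been paired with it */
void __malloc_donate(char *start, char *end);
```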
-
Committed by Alexander Monakov
Fix an instance where the realloc code would overallocate by OVERHEAD bytes. Manually arrange for reuse of the memcpy-free-return exit sequence.
-
- 12 Apr 2018, 1 commit
-
-
Committed by Alexander Monakov
Implementation of __malloc0 in malloc.c takes care to preserve zero pages by overwriting only non-zero data. However, malloc must have already modified auxiliary heap data just before and beyond the allocated region, so we know that edge pages need not be preserved.

For allocations smaller than one page, pass them immediately to memset. Otherwise, use memset to handle partial pages at the head and tail of the allocation, and scan complete pages in the interior. Optimize the scanning loop by processing 16 bytes per iteration and handling rest of page via memset as soon as a non-zero byte is found.
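A sketch of the described scan, assuming 4K pages (musl derives the page size differently and operates on chunk granularity):

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096UL

static void malloc0_fill(char *p, size_t n)
{
    if (n < PAGE_SIZE) { memset(p, 0, n); return; }

    /* partial head and tail pages were definitely dirtied by the
     * allocator's own bookkeeping, so just memset them */
    char *head_end   = (char *)(((uintptr_t)p + PAGE_SIZE - 1) & -PAGE_SIZE);
    char *tail_start = (char *)(((uintptr_t)p + n) & -PAGE_SIZE);
    memset(p, 0, (size_t)(head_end - p));
    memset(tail_start, 0, (size_t)(p + n - tail_start));

    /* interior pages: scan 16 bytes per iteration; a page that is
     * already zero is left untouched, preserving the kernel's shared
     * zero page */
    for (char *q = head_end; q < tail_start; ) {
        uint64_t a, b;
        memcpy(&a, q, 8);
        memcpy(&b, q + 8, 8);
        if (a | b) {
            /* non-zero data found: memset the rest of this page */
            char *page_end = (char *)(((uintptr_t)q + PAGE_SIZE) & -PAGE_SIZE);
            memset(q, 0, (size_t)(page_end - q));
            q = page_end;
        } else {
            q += 16;
        }
    }
}
```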
-
- 05 Jul 2017, 1 commit
-
-
Committed by Alexander Monakov
-
- 16 Jun 2017, 1 commit
-
-
Committed by Rich Felker
mremap seems to always fail on nommu, and on some non-Linux implementations of the Linux syscall API, it at least fails to increase allocation size, and may fail to move (i.e. defragment) the existing mapping when shrinking it too. instead of failing realloc or leaving an over-sized allocation that may waste a large amount of memory, fall back to malloc-memcpy-free if mremap fails.
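The shape of that fallback, sketched for a standalone mmap-serviced allocation with musl's chunk bookkeeping stripped out (remap_or_copy is an illustrative name):

```c
#define _GNU_SOURCE   /* for mremap */
#include <sys/mman.h>
#include <stdlib.h>
#include <string.h>

/* old is assumed to be its own mapping of old_size bytes */
static void *remap_or_copy(void *old, size_t old_size, size_t new_size)
{
    void *p = mremap(old, old_size, new_size, MREMAP_MAYMOVE);
    if (p != MAP_FAILED) return p;
    /* nommu or a non-Linux kernel: don't fail realloc, emulate it */
    void *q = malloc(new_size);
    if (!q) return NULL;
    memcpy(q, old, old_size < new_size ? old_size : new_size);
    munmap(old, old_size);
    return q;
}
```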
-
- 18 Dec 2016, 1 commit
-
-
Committed by Szabolcs Nagy
float conversion is slow and big on soft-float targets. The lookup table increases code size a bit on most hard float targets (and adds 60 bytes of rodata), performance can be a bit slower because of position independent data access and cpu internal state dependence (cache, extra branches), but the overall effect should be minimal (common, small size allocations should be unaffected).
-
- 08 Aug 2015, 1 commit
-
-
Committed by Rich Felker
during calls to free, any free chunks adjacent to the chunk being freed are momentarily held in allocated state for the purpose of merging, possibly leaving little or no available free memory for other threads to allocate. under this condition, other threads will attempt to expand the heap rather than waiting to use memory that will soon be available. the race window where this happens is normally very small, but became huge when free chooses to use madvise to release unused physical memory, causing unbounded heap size growth.

this patch drastically shrinks the race window for unwanted heap expansion by performing madvise with the bin lock held and marking the bin non-empty in the binmask before making the expensive madvise syscall. testing by Timo Teräs has shown this approach to be a suitable mitigation.

more invasive changes to the synchronization between malloc and free would be needed to completely eliminate the problem. it's not clear whether such changes would improve or worsen typical-case performance, or whether this would be a worthwhile direction to take malloc development.
-
- 23 Jun 2015, 1 commit
-
-
Committed by Rich Felker
previously, calloc's implementation encoded assumptions about the implementation of malloc, accessing a size_t word just prior to the allocated memory to determine if it was obtained by mmap to optimize out the zero-filling. when __simple_malloc is used (static linking a program with no realloc/free), it doesn't matter if the result of this check is wrong, since all allocations are zero-initialized anyway. but the access could be invalid if it crosses a page boundary or if the pointer is not sufficiently aligned, which can happen for very small allocations.

this patch fixes the issue by moving the zero-fill logic into malloc.c with the full malloc, as a new function named __malloc0, which is provided by a weak alias to __simple_malloc (which always gives zero-filled memory) when the full malloc is not in use.
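The alias arrangement described, sketched with GCC-style attributes (musl's weak_alias macro looks essentially like this; the __simple_malloc body here is a stub):

```c
#include <errno.h>
#include <stddef.h>

#define weak_alias(old, new) \
    extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

/* stub standing in for musl's brk-based bump allocator, whose memory
 * is always zero-filled fresh from the kernel */
void *__simple_malloc(size_t n) { (void)n; return 0; }

/* full malloc.c defines a strong __malloc0 with the careful zero-fill;
 * when it isn't linked, calloc resolves to this alias instead */
weak_alias(__simple_malloc, __malloc0);

void *calloc(size_t m, size_t n)
{
    if (n && m > (size_t)-1/n) { errno = ENOMEM; return 0; }
    return __malloc0(m * n);
}
```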
-
- 14 Jun 2015, 1 commit
-
-
Committed by Rich Felker
this extends the brk/stack collision protection added to full malloc in commit 276904c2 to also protect the __simple_malloc function used in static-linked programs that don't reference the free function. it also extends support for using mmap when brk fails, which full malloc got in commit 54463033, to __simple_malloc.

since __simple_malloc may expand the heap by arbitrarily large increments, the stack collision detection is enhanced to detect interval overlap rather than just proximity of a single address to the stack.

code size is increased a bit, but this is partly offset by the sharing of code between the two malloc implementations, which due to linking semantics, both get linked in a program that needs the full malloc with realloc/free support.
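A sketch of the interval test; musl's helper of the same name also checks the main thread's stack via the saved auxv location, omitted here:

```c
#include <stdint.h>

static int traverses_stack_p(uintptr_t old_brk, uintptr_t new_brk)
{
    const uintptr_t gap = 8u << 20;      /* the 8 MB guard distance */
    uintptr_t sp = (uintptr_t)&sp;       /* approximates this thread's stack */
    uintptr_t lo = sp > gap ? sp - gap : 0;
    /* overlap test: growing the brk across [lo, sp) is refused no
     * matter how large the increment -- which is what "interval
     * overlap" buys over a single-address proximity check */
    return new_brk > lo && old_brk < sp;
}
```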
-
- 10 Jun 2015, 1 commit
-
-
Committed by Rich Felker
the linux/nommu fdpic ELF loader sets up the brk range to overlap entirely with the main thread's stack (but growing from opposite ends), so that the resulting failure mode for malloc is not to return a null pointer but to start returning pointers to memory that overlaps with the caller's stack. needless to say this is extremely dangerous and makes brk unusable.

since it's non-trivial to detect execution environments that might be affected by this kernel bug, and since the severity of the bug makes any sort of detection that might yield false-negatives unsafe, we instead check the proximity of the brk to the stack pointer each time the brk is to be expanded. both the main thread's stack (where the real known risk lies) and the calling thread's stack are checked. an arbitrary gap distance of 8 MB is imposed, chosen to be larger than linux default main-thread stack reservation sizes and larger than any reasonable stack configuration on nommu.

the effectiveness of this patch relies on an assumption that the amount by which the brk is being grown is smaller than the gap limit, which is always true for malloc's use of brk. reliance on this assumption is why the check is being done in malloc-specific code and not in __brk.
-
- 04 Mar 2015, 3 commits
-
-
Committed by Rich Felker
this re-check idiom seems to have been copied from the alloc_fwd and alloc_rev functions, which guess a bin based on non-synchronized memory access to adjacent chunk headers then need to confirm, after locking the bin, that the chunk is actually in the bin they locked. the check being removed, however, was being performed on a chunk obtained from the already-locked bin. there is no race to account for here; the check could only fail in the event of corrupt free lists, and even then it would not catch them but simply continue running.

since the bin_index function is mildly expensive, it seems preferable to remove the check rather than trying to convert it into a useful consistency check. casual testing shows a 1-5% reduction in run time.
-
Committed by Rich Felker
the malloc init code provided its own version of pthread_once type logic, including the exact same bug that was fixed in pthread_once in commit 0d0c2f40. since this code is called adjacent to expand_heap, which takes a lock, there is no reason to have pthread_once-type initialization. simply moving the init code into the interval where expand_heap already holds its lock on the brk achieves the same result with much less synchronization logic, and allows the buggy code to be eliminated rather than just fixed.
-
Committed by Rich Felker
the memory model we use internally for atomics permits plain loads of values which may be subject to concurrent modification without requiring that a special load function be used. since a compiler is free to make transformations that alter the number of loads or the way in which loads are performed, the compiler is theoretically free to break this usage. the most obvious concern is with atomic cas constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be transformed to a_cas(p,*p,f(*p)); where the latter is intended to show multiple loads of *p whose resulting values might fail to be equal; this would break the atomicity of the whole operation.

but even more fundamental breakage is possible. with the changes being made now, objects that may be modified by atomics are modeled as volatile, and the atomic operations performed on them by other threads are modeled as asynchronous stores by hardware which happens to be acting on the request of another thread. such modeling of course does not itself address memory synchronization between cores/cpus, but that aspect was already handled. this all seems less than ideal, but it's the best we can do without mandating a C11 compiler and using the C11 model for atomics.

in the case of pthread_once_t, the ABI type of the underlying object is not volatile-qualified. so we are assuming that accessing the object through a volatile-qualified lvalue via casts yields volatile access semantics. the language of the C standard is somewhat unclear on this matter, but this is an assumption the linux kernel also makes, and seems to be the correct interpretation of the standard.
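The hazard and the fix in miniature; a_cas mirrors the shape of musl's internal helper, and a_inc_sketch is an illustrative caller:

```c
/* with volatile, each occurrence of *p is a distinct load the compiler
 * must perform exactly once, exactly where written */
static inline int a_cas(volatile int *p, int t, int s)
{
    return __sync_val_compare_and_swap(p, t, s);
}

void a_inc_sketch(volatile int *p)
{
    int tmp;
    do tmp = *p;                      /* one guaranteed load into tmp */
    while (a_cas(p, tmp, tmp + 1) != tmp);
    /* without the volatile qualifier, the compiler could legally
     * rewrite this as a_cas(p, *p, *p + 1), reading *p twice with
     * possibly different values and breaking the atomic
     * read-modify-write described above */
}
```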
-
- 03 Apr 2014, 1 commit
-
-
Committed by Rich Felker
this issue mainly affects PIE binaries and execution of programs via direct invocation of the dynamic linker binary: depending on kernel behavior, in these cases the initial brk may be placed at a location where it cannot be extended, due to conflicting adjacent maps. when brk fails, mmap is used instead to expand the heap.

in order to avoid expensive bookkeeping for managing fragmentation by merging these new heap regions, the minimum size for new heap regions increases exponentially in the number of regions. this limits the number of regions, and thereby the number of fixed fragmentation points, to a quantity which is logarithmic with respect to the size of virtual address space and thus negligible. the exponential growth is tuned so as to avoid expanding the heap by more than approximately 50% of its current total size.
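The tuning reduces to a few lines; this sketch assumes 4K pages and mirrors the described policy rather than quoting expand_heap verbatim:

```c
#include <stddef.h>

#define PAGE_SIZE 4096UL

static unsigned mmap_step;  /* how many mmap'd heap regions exist so far */

static size_t next_region_size(size_t need)
{
    /* the floor doubles every two regions, i.e. a growth factor of
     * sqrt(2) per region, so each expansion adds roughly 41% (under
     * the ~50% bound) of the total heap size in the worst case */
    size_t min = PAGE_SIZE << (mmap_step / 2);
    if (need < min) need = min;
    mmap_step++;
    return need;
}
```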
-