1. 04 Oct 2022, 1 commit
  2. 29 Sep 2022, 3 commits
  3. 15 Sep 2022, 2 commits
  4. 01 Sep 2022, 1 commit
    • feat: add red zone/poison hardening to improve the musl allocator's defenses against overflow and UAF · 65228e15
      Committed by Far

      1. Two fields, usize and state, are added to the chunk overhead area, recording
      the size of the payload actually in use and the current state of the chunk.
      The state tracks whether the chunk is allocated to the user and whether it has
      been poisoned. Poisoning fills the chunk's memory outside the valid payload
      (i.e., the memory the user actually uses) with randomly generated data;
      checking these regions at malloc/free time detects overflow and UAF.

      2. For performance, not every chunk is poisoned: poisoning is performed once
      every POISON_COUNT_DOWN_BASE malloc/free operations. A sketch of the idea
      follows this entry.

      Signed-off-by: Far <yesiyuan2@huawei.com>
      Change-Id: Idb341c202d8ec99f5370d4f589ee261ded8b163f
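      A minimal sketch of the scheme described above, under assumptions of my own:
      the chunk layout, the helper names, and the single shared poison byte are all
      illustrative; only usize, state, and POISON_COUNT_DOWN_BASE come from the
      commit itself.

          #include <stdlib.h>
          #include <string.h>

          #define POISON_COUNT_DOWN_BASE 8      /* illustrative period */
          #define M_STATE_USED   1
          #define M_STATE_POISON 2

          struct chunk_hdr {
              size_t csize;   /* total chunk size, header included */
              size_t usize;   /* payload size the user actually requested */
              int    state;   /* M_STATE_* flags */
          };

          static unsigned char poison;          /* illustrative: one byte for all chunks */

          /* Fill the red zone between the end of the payload and the end of
           * the chunk with a random byte. */
          static void poison_chunk(struct chunk_hdr *c, unsigned char *payload)
          {
              poison = (unsigned char)rand();   /* stand-in for a real RNG */
              memset(payload + c->usize, poison, c->csize - sizeof *c - c->usize);
              c->state |= M_STATE_POISON;
          }

          /* At malloc/free time, a poisoned chunk whose red zone no longer
           * holds the poison byte has seen an overflow or a UAF write. */
          static void check_chunk(const struct chunk_hdr *c, const unsigned char *payload)
          {
              if (!(c->state & M_STATE_POISON)) return;
              for (size_t i = c->usize; i < c->csize - sizeof *c; i++)
                  if (payload[i] != poison) abort();   /* corruption detected */
          }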
  5. 19 Aug 2022, 1 commit
  6. 16 Aug 2022, 1 commit
  7. 28 Jul 2022, 1 commit
    • feat: malloc pointer obfuscation and safe unlink · 1d4c1642
      Committed by Far

      1. Pointer obfuscation:
         The next and prev doubly-linked-list pointers of free chunks are obfuscated
         by XORing each pointer with a key. Each bin has its own key, generated by a
         random number generator.
      2. Safe unlink:
         An unbin operation first validates the doubly linked list: it checks that
         the pointers of the previous and next entries that should point back to the
         current chunk are intact, and terminates the process otherwise.

      Both features are toggled by the MALLOC_FREELIST_HARDENED macro, which the
      build framework can switch directly (append --gn-args "musl_secure_level=1"
      to the build command to enable it). A sketch of both checks follows this entry.

      Change-Id: I05fd4404aeebcb396c8471f181a30305fb9dbe74
      Signed-off-by: Far <yesiyuan2@huawei.com>
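      A minimal sketch of both mechanisms, assuming an illustrative chunk layout and
      bin count; only the next/prev naming, the per-bin XOR key, and the
      abort-on-mismatch behavior come from the commit.

          #include <stdint.h>
          #include <stdlib.h>

          struct chunk {
              struct chunk *next, *prev;   /* stored XOR-encoded in free chunks */
          };

          static uintptr_t bin_key[64];    /* one key per bin, filled from a random
                                            * source at startup (illustrative) */

          /* XOR is its own inverse, so one routine both encodes and decodes. */
          static struct chunk *xor_ptr(struct chunk *p, int bin)
          {
              return (struct chunk *)((uintptr_t)p ^ bin_key[bin]);
          }

          /* Safe unlink: before removing c from its bin, verify that both
           * neighbors still point back at c; otherwise the freelist has been
           * corrupted and the process is terminated. */
          static void safe_unlink(struct chunk *c, int bin)
          {
              struct chunk *next = xor_ptr(c->next, bin);
              struct chunk *prev = xor_ptr(c->prev, bin);
              if (xor_ptr(next->prev, bin) != c || xor_ptr(prev->next, bin) != c)
                  abort();
              prev->next = xor_ptr(next, bin);
              next->prev = xor_ptr(prev, bin);
          }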
  8. 18 Mar 2022, 1 commit
  9. 25 Jan 2022, 1 commit
  10. 18 Jan 2022, 1 commit
  11. 06 Jan 2022, 1 commit
  12. 07 Jul 2021, 1 commit
  13. 11 Jun 2021, 1 commit
  14. 11 Mar 2021, 1 commit
  15. 09 Sep 2020, 1 commit
  16. 17 Aug 2020, 1 commit
  17. 13 Sep 2018, 1 commit
  18. 20 Apr 2018, 4 commits
    • reintroduce hardening against partially-replaced allocator · b4b1e103
      Committed by Rich Felker
      commit 618b18c7 removed the previous
      detection and hardening since it was incorrect. commit
      72141795 already handled all that
      remained for hardening the static-linked case. in the dynamic-linked
      case, have the dynamic linker check whether malloc was replaced and
      make that information available.
      
      with these changes, the properties documented in commit
      c9f415d7 are restored: if calloc is
      not provided, it will behave as malloc+memset, and any of the
      memalign-family functions not provided will fail with ENOMEM.
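      A sketch of the restored property, assuming only the flag name
      __malloc_replaced (set by the dynamic linker) and the internal zero-fill
      entry point __malloc0; the calloc body itself is illustrative, not musl's
      actual code.

          #include <errno.h>
          #include <stdint.h>
          #include <stdlib.h>
          #include <string.h>

          extern int __malloc_replaced;     /* nonzero if "malloc" resolved outside libc */
          extern void *__malloc0(size_t);   /* libc-internal zero-filling malloc */

          void *calloc(size_t m, size_t n)
          {
              if (n && m > SIZE_MAX / n) {
                  errno = ENOMEM;
                  return 0;
              }
              if (__malloc_replaced) {
                  /* interposed malloc: assume nothing about the memory it
                   * returns and behave exactly as malloc+memset */
                  void *p = malloc(m * n);
                  return p ? memset(p, 0, m * n) : 0;
              }
              /* otherwise use the internal path, which may skip zeroing
               * pages it knows are already zero */
              return __malloc0(m * n);
          }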
    • return chunks split off by memalign using __bin_chunk instead of free · 72141795
      Committed by Rich Felker
      this change serves multiple purposes:
      
      1. it ensures that static linking of memalign-family functions will
      pull in the system malloc implementation, thereby causing link errors
      if an attempt is made to link the system memalign functions with a
      replacement malloc (incomplete allocator replacement).
      
      2. it eliminates calls to free that are unpaired with allocations,
      which are confusing when setting breakpoints or tracing execution.
      
      as a bonus, making __bin_chunk external may discourage aggressive and
      unnecessary inlining of it.
    • 23389b19
    • revert detection of partially-replaced allocator · 618b18c7
      Committed by Rich Felker
      commit c9f415d7 included checks to
      make calloc fallback to memset if used with a replaced malloc that
      didn't also replace calloc, and the memalign family fail if free has
      been replaced. however, the checks gave false positives for
      replacement whenever malloc or free resolved to a PLT entry in the
      main program.
      
      for now, disable the checks so as not to leave libc in a broken state.
      this means that the properties documented in the above commit are no
      longer satisfied; failure to replace calloc and the memalign family
      along with malloc is unsafe if they are ever called.
      
      the calloc checks were correct but useless for static linking. in both
      cases (simple or full malloc), calloc and malloc are in a source file
      together, so replacement of one but not the other would give linking
      errors. the memalign-family check was useful for static linking, but
      broken for dynamic as described above, and can be replaced with a
      better link-time check.
  19. 19 Apr 2018, 1 commit
    • allow interposition/replacement of allocator (malloc) · c9f415d7
      Committed by Rich Felker
      replacement is subject to conditions on the replacement functions.
      they may only call functions which are async-signal-safe, as specified
      either by POSIX or as an implementation-defined extension. if any
      allocator functions are replaced, at least malloc, realloc, and free
      must be provided. if calloc is not provided, it will behave as
      malloc+memset. any of the memalign-family functions not provided will
      fail with ENOMEM.
      
      in order to implement the above properties, calloc and __memalign
      check that they are using their own malloc or free, respectively.
      choice to check malloc or free is based on considerations of
      supporting __simple_malloc. in order to make this work, calloc is
      split into separate versions for __simple_malloc and full malloc;
      commit ba819787 already did most of
      the split anyway, and completing it saves an extra call frame.
      
      previously, use of -Bsymbolic-functions made dynamic interposition
      impossible. now, we are using an explicit dynamic-list, so add
      allocator functions to the list. most are not referenced anyway, but
      all are added for completeness.
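      As a usage illustration (mine, not from the commit): the smallest conforming
      replacement provides malloc, realloc, and free together; calloc then behaves
      as malloc+memset and the memalign family fails with ENOMEM, per the rules
      above. A toy bump allocator:

          #include <stddef.h>
          #include <string.h>

          /* Toy arena. A real replacement must also be thread-safe and may
           * only call async-signal-safe functions. */
          static _Alignas(16) char arena[1 << 20];
          static size_t used;

          void *malloc(size_t n)
          {
              if (n >= sizeof arena) return 0;
              n = (n + 15) & ~(size_t)15;          /* keep 16-byte alignment */
              if (n > sizeof arena - used) return 0;
              void *p = arena + used;
              used += n;
              return p;
          }

          void free(void *p)
          {
              (void)p;                             /* bump allocators never reclaim */
          }

          void *realloc(void *p, size_t n)
          {
              void *q = malloc(n);
              if (q && p) memcpy(q, p, n);         /* toy: may over-copy, but the
                                                    * read stays inside the arena */
              return q;
          }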
  20. 18 Apr 2018, 3 commits
  21. 12 Apr 2018, 1 commit
    • optimize malloc0 · 424eab22
      Committed by Alexander Monakov
      Implementation of __malloc0 in malloc.c takes care to preserve zero
      pages by overwriting only non-zero data. However, malloc must have
      already modified auxiliary heap data just before and beyond the
      allocated region, so we know that edge pages need not be preserved.
      
      For allocations smaller than one page, pass them immediately to memset.
      Otherwise, use memset to handle partial pages at the head and tail of
      the allocation, and scan complete pages in the interior. Optimize the
      scanning loop by processing 16 bytes per iteration and handling rest of
      page via memset as soon as a non-zero byte is found.
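      A runnable sketch of the strategy, assuming 4096-byte pages and names of my
      own; musl's actual __malloc0 differs in detail.

          #include <stddef.h>
          #include <stdint.h>
          #include <string.h>

          #define PAGE_SIZE 4096

          static void *zero_fill(void *dest, size_t n)
          {
              if (n < PAGE_SIZE) return memset(dest, 0, n);

              char *start = dest, *end = start + n;
              char *lo = (char *)(((uintptr_t)start + PAGE_SIZE - 1) & -(uintptr_t)PAGE_SIZE);
              char *hi = (char *)((uintptr_t)end & -(uintptr_t)PAGE_SIZE);

              memset(start, 0, lo - start);        /* partial head page */
              memset(hi, 0, end - hi);             /* partial tail page */

              /* interior pages: scan 16 bytes per iteration; on the first
               * non-zero byte, memset the rest of the page and move on */
              for (char *page = lo; page < hi; page += PAGE_SIZE) {
                  for (size_t i = 0; i < PAGE_SIZE; i += 16) {
                      uint64_t a, b;
                      memcpy(&a, page + i, 8);     /* memcpy avoids alignment UB */
                      memcpy(&b, page + i + 8, 8);
                      if (a | b) {
                          memset(page + i, 0, PAGE_SIZE - i);
                          break;
                      }
                  }
              }
              return dest;
          }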
  22. 05 Jul 2017, 1 commit
  23. 16 Jun 2017, 1 commit
    • handle mremap failure in realloc of mmap-serviced allocations · 1c86c7f5
      Committed by Rich Felker
      mremap seems to always fail on nommu, and on some non-Linux
      implementations of the Linux syscall API, it at least fails to
      increase allocation size, and may fail to move (i.e. defragment) the
      existing mapping when shrinking it too. instead of failing realloc or
      leaving an over-sized allocation that may waste a large amount of
memory, fall back to malloc-memcpy-free if mremap fails.
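      A minimal sketch of the fallback, using the Linux mremap API directly; the
      function name and the whole-mapping bookkeeping are illustrative.

          #define _GNU_SOURCE
          #include <string.h>
          #include <sys/mman.h>

          /* Grow (or shrink) an mmap-serviced allocation: try mremap first;
           * if it fails (e.g. on nommu), allocate-copy-unmap by hand. */
          static void *remap_or_copy(void *old, size_t old_len, size_t new_len)
          {
              void *p = mremap(old, old_len, new_len, MREMAP_MAYMOVE);
              if (p != MAP_FAILED) return p;

              p = mmap(0, new_len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (p == MAP_FAILED) return 0;

              memcpy(p, old, old_len < new_len ? old_len : new_len);
              munmap(old, old_len);
              return p;
          }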
  24. 18 Dec 2016, 1 commit
    • use lookup table for malloc bin index instead of float conversion · 61ff1af7
      Committed by Szabolcs Nagy
      float conversion is slow and big on soft-float targets.
      
      The lookup table increases code size a bit on most hard float targets
      (and adds 60 bytes of rodata); performance can be a bit slower because of
      position-independent data access and cpu internal state dependence
      (cache, extra branches), but the overall effect should be minimal
      (common, small-size allocations should be unaffected).
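      The float trick being replaced derives the bin from the exponent left by an
      int-to-float conversion; a table-driven floor(log2(x)) does the same job
      without touching the FPU. This sketch uses a 16-byte table of my own, not the
      commit's exact 60-byte one.

          #include <stdint.h>

          static const unsigned char log2_tab[16] = {
              0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3
          };

          static unsigned floor_log2(uint32_t x)   /* assumes x != 0 */
          {
              unsigned n = 0;
              if (x >= 1u << 16) { n += 16; x >>= 16; }
              if (x >= 1u << 8)  { n += 8;  x >>= 8;  }
              if (x >= 1u << 4)  { n += 4;  x >>= 4;  }
              return n + log2_tab[x];              /* x is now in [1,15] */
          }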
  25. 08 Aug 2015, 1 commit
    • mitigate blow-up of heap size under malloc/free contention · c3761622
      Committed by Rich Felker
      during calls to free, any free chunks adjacent to the chunk being
      freed are momentarily held in allocated state for the purpose of
      merging, possibly leaving little or no available free memory for other
      threads to allocate. under this condition, other threads will attempt
      to expand the heap rather than waiting to use memory that will soon be
      available. the race window where this happens is normally very small,
      but became huge when free chooses to use madvise to release unused
      physical memory, causing unbounded heap size growth.
      
      this patch drastically shrinks the race window for unwanted heap
      expansion by performing madvise with the bin lock held and marking the
      bin non-empty in the binmask before making the expensive madvise
      syscall. testing by Timo Teräs has shown this approach to be a
      suitable mitigation.
      
      more invasive changes to the synchronization between malloc and free
      would be needed to completely eliminate the problem. it's not clear
      whether such changes would improve or worsen typical-case performance,
      or whether this would be a worthwhile direction to take malloc
      development.
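      The ordering can be sketched as follows; the helpers are hypothetical
      stand-ins for musl's bin bookkeeping, and only the binmask-before-madvise,
      madvise-under-lock ordering is the point.

          #include <stddef.h>
          #include <sys/mman.h>

          static void lock_bin(int i)            { (void)i; /* elided */ }
          static void unlock_bin(int i)          { (void)i; /* elided */ }
          static void set_binmask(int i)         { (void)i; /* elided */ }
          static void link_chunk(int i, void *c) { (void)i; (void)c; /* elided */ }

          static void bin_free_chunk(int i, void *c, size_t len)
          {
              lock_bin(i);
              set_binmask(i);                  /* advertise the bin as non-empty
                                                * before the expensive syscall */
              madvise(c, len, MADV_DONTNEED);  /* slow; the bin lock stays held,
                                                * so other threads wait for the
                                                * chunk instead of growing the heap */
              link_chunk(i, c);
              unlock_bin(i);
          }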
  26. 23 Jun 2015, 1 commit
    • fix calloc when __simple_malloc implementation is used · ba819787
      Committed by Rich Felker
      previously, calloc's implementation encoded assumptions about the
      implementation of malloc, accessing a size_t word just prior to the
      allocated memory to determine if it was obtained by mmap to optimize
      out the zero-filling. when __simple_malloc is used (static linking a
      program with no realloc/free), it doesn't matter if the result of this
      check is wrong, since all allocations are zero-initialized anyway. but
      the access could be invalid if it crosses a page boundary or if the
      pointer is not sufficiently aligned, which can happen for very small
      allocations.
      
      this patch fixes the issue by moving the zero-fill logic into malloc.c
      with the full malloc, as a new function named __malloc0, which is
      provided by a weak alias to __simple_malloc (which always gives
      zero-filled memory) when the full malloc is not in use.
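      The arrangement can be sketched like this; weak_alias is musl's real idiom,
      while the mmap-backed stub is my stand-in for the real __simple_malloc.

          #include <stddef.h>
          #include <sys/mman.h>

          #define weak_alias(old, new) \
              extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

          /* stand-in for __simple_malloc, which hands out memory taken
           * directly from fresh (hence zero-filled) pages */
          void *__simple_malloc(size_t n)
          {
              void *p = mmap(0, n, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              return p == MAP_FAILED ? 0 : p;
          }

          /* when the full malloc is not linked in, __malloc0 resolves here
           * and zero-filled memory comes for free */
          weak_alias(__simple_malloc, __malloc0);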
  27. 14 Jun 2015, 1 commit
    • refactor malloc's expand_heap to share with __simple_malloc · e3bc22f1
      Committed by Rich Felker
      this extends the brk/stack collision protection added to full malloc
      in commit 276904c2 to also protect the
      __simple_malloc function used in static-linked programs that don't
      reference the free function.
      
      it also extends support for using mmap when brk fails, which full
      malloc got in commit 54463033, to
      __simple_malloc.
      
      since __simple_malloc may expand the heap by arbitrarily large
      increments, the stack collision detection is enhanced to detect
      interval overlap rather than just proximity of a single address to the
      stack. code size is increased a bit, but this is partly offset by the
      sharing of code between the two malloc implementations, which due to
      linking semantics, both get linked in a program that needs the full
      malloc with realloc/free support.
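      The enhanced detection reduces to an interval-overlap test; this standalone
      sketch uses names and parameters of my own.

          #include <stdint.h>

          /* Does growing the brk from lo to hi cross the guard zone of `gap`
           * bytes around a stack address sp? Checking the whole [lo,hi)
           * interval matters because __simple_malloc may grow the heap by an
           * arbitrarily large step in a single call. */
          static int traverses_stack(uintptr_t lo, uintptr_t hi,
                                     uintptr_t sp, uintptr_t gap)
          {
              return lo < sp + gap && sp - gap < hi;
          }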
  28. 10 Jun 2015, 1 commit
    • in malloc, refuse to use brk if it grows into stack · 276904c2
      Committed by Rich Felker
      the linux/nommu fdpic ELF loader sets up the brk range to overlap
      entirely with the main thread's stack (but growing from opposite
      ends), so that the resulting failure mode for malloc is not to return
      a null pointer but to start returning pointers to memory that overlaps
with the caller's stack. needless to say this is extremely dangerous and
      makes brk unusable.
      
      since it's non-trivial to detect execution environments that might be
      affected by this kernel bug, and since the severity of the bug makes
      any sort of detection that might yield false-negatives unsafe, we
      instead check the proximity of the brk to the stack pointer each time
      the brk is to be expanded. both the main thread's stack (where the
      real known risk lies) and the calling thread's stack are checked. an
      arbitrary gap distance of 8 MB is imposed, chosen to be larger than
      linux default main-thread stack reservation sizes and larger than any
      reasonable stack configuration on nommu.
      
the effectiveness of this patch relies on an assumption that the amount
      by which the brk is being grown is smaller than the gap limit, which
      is always true for malloc's use of brk. reliance on this assumption is
      why the check is being done in malloc-specific code and not in __brk.
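      A sketch of the proximity check with the 8 MB gap from the commit; the names
      are illustrative, and the test is run against both the main thread's and the
      calling thread's stack pointer before each brk expansion.

          #include <stdint.h>

          #define BRK_STACK_GAP ((uintptr_t)8 << 20)   /* 8 MB, per the commit */

          /* refuse the expansion if the new brk would come within the gap
           * of the given stack address */
          static int too_close_to_stack(uintptr_t new_brk, uintptr_t sp)
          {
              return new_brk > sp - BRK_STACK_GAP && new_brk < sp + BRK_STACK_GAP;
          }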
  29. 04 Mar 2015, 3 commits
    • remove useless check of bin match in malloc · 064898cf
      Committed by Rich Felker
      this re-check idiom seems to have been copied from the alloc_fwd and
      alloc_rev functions, which guess a bin based on non-synchronized
      memory access to adjacent chunk headers then need to confirm, after
      locking the bin, that the chunk is actually in the bin they locked.
      
      the check being removed, however, was being performed on a chunk
      obtained from the already-locked bin. there is no race to account for
      here; the check could only fail in the event of corrupt free lists,
      and even then it would not catch them but simply continue running.
      
      since the bin_index function is mildly expensive, it seems preferable
      to remove the check rather than trying to convert it into a useful
      consistency check. casual testing shows a 1-5% reduction in run time.
    • fix init race that could lead to deadlock in malloc init code · 7a81fe37
      Committed by Rich Felker
      the malloc init code provided its own version of pthread_once type
      logic, including the exact same bug that was fixed in pthread_once in
      commit 0d0c2f40.
      
      since this code is called adjacent to expand_heap, which takes a lock,
      there is no reason to have pthread_once-type initialization. simply
      moving the init code into the interval where expand_heap already holds
      its lock on the brk achieves the same result with much less
      synchronization logic, and allows the buggy code to be eliminated
      rather than just fixed.
    • make all objects used with atomic operations volatile · 56fbaa3b
      Committed by Rich Felker
      the memory model we use internally for atomics permits plain loads of
      values which may be subject to concurrent modification without
      requiring that a special load function be used. since a compiler is
      free to make transformations that alter the number of loads or the way
      in which loads are performed, the compiler is theoretically free to
      break this usage. the most obvious concern is with atomic cas
      constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be
      transformed to a_cas(p,*p,f(*p)); where the latter is intended to show
      multiple loads of *p whose resulting values might fail to be equal;
      this would break the atomicity of the whole operation. but even more
      fundamental breakage is possible.
      
      with the changes being made now, objects that may be modified by
      atomics are modeled as volatile, and the atomic operations performed
      on them by other threads are modeled as asynchronous stores by
      hardware which happens to be acting on the request of another thread.
      such modeling of course does not itself address memory synchronization
      between cores/cpus, but that aspect was already handled. this all
      seems less than ideal, but it's the best we can do without mandating a
      C11 compiler and using the C11 model for atomics.
      
      in the case of pthread_once_t, the ABI type of the underlying object
      is not volatile-qualified. so we are assuming that accessing the
      object through a volatile-qualified lvalue via casts yields volatile
      access semantics. the language of the C standard is somewhat unclear
      on this matter, but this is an assumption the linux kernel also makes,
      and seems to be the correct interpretation of the standard.
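      A minimal sketch of the cas idiom quoted in the commit message, with the
      object volatile-qualified as described; a_cas is musl's primitive, emulated
      here with a GCC builtin.

          static int a_cas(volatile int *p, int t, int s)
          {
              return __sync_val_compare_and_swap(p, t, s);
          }

          static volatile int counter;   /* volatile: concurrent stores are
                                          * modeled as asynchronous hardware
                                          * writes the compiler cannot elide */

          static void increment(void)
          {
              int tmp;
              do tmp = counter;          /* one plain load the compiler may
                                          * not duplicate or reorder away */
              while (a_cas(&counter, tmp, tmp + 1) != tmp);
          }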
  30. 03 Apr 2014, 1 commit
    • avoid malloc failure for small requests when brk can't be extended · 54463033
      Committed by Rich Felker
      this issue mainly affects PIE binaries and execution of programs via
      direct invocation of the dynamic linker binary: depending on kernel
behavior, in these cases the initial brk may be placed at a location
      where it cannot be extended, due to conflicting adjacent maps.
      
      when brk fails, mmap is used instead to expand the heap. in order to
      avoid expensive bookkeeping for managing fragmentation by merging
      these new heap regions, the minimum size for new heap regions
      increases exponentially in the number of regions. this limits the
      number of regions, and thereby the number of fixed fragmentation
      points, to a quantity which is logarithmic with respect to the size of
      virtual address space and thus negligible. the exponential growth is
      tuned so as to avoid expanding the heap by more than approximately 50%
      of its current total size.
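      The sizing rule can be sketched as follows; the page size and shift counter
      mirror the description above, but the exact tuning is illustrative.

          #include <stddef.h>

          static unsigned mmap_step;

          /* Minimum size for the next mmap'd heap region: doubling every
           * second region grows the floor by roughly 41% per region, so no
           * single region expands the heap by much more than ~50% of its
           * total, while keeping the region count logarithmic in the size
           * of the address space. */
          static size_t next_region_min(void)
          {
              return (size_t)4096 << mmap_step++ / 2;
          }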