Commit b761bd19 authored by Rich Felker

fix rare but nasty under-allocation bug in malloc with large requests

the bug appeared only with requests roughly 2*sizeof(size_t) to
4*sizeof(size_t) bytes smaller than a multiple of the page size, and
only for requests large enough to be serviced by mmap instead of the
normal heap. it was only ever observed on 64-bit machines but
presumably could also affect 32-bit (albeit with a smaller window of
opportunity).
Parent 98c5583a
@@ -333,7 +333,7 @@ void *malloc(size_t n)
 	if (adjust_size(&n) < 0) return 0;
 	if (n > MMAP_THRESHOLD) {
-		size_t len = n + PAGE_SIZE - 1 & -PAGE_SIZE;
+		size_t len = n + OVERHEAD + PAGE_SIZE - 1 & -PAGE_SIZE;
 		char *base = __mmap(0, len, PROT_READ|PROT_WRITE,
 			MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
 		if (base == (void *)-1) return 0;
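
a minimal sketch of the arithmetic behind the fix, not musl source: it assumes musl's 64-bit layout where OVERHEAD is 2*sizeof(size_t), SIZE_ALIGN is 4*sizeof(size_t), PAGE_SIZE is 4096, and user data in an mmapped chunk begins SIZE_ALIGN bytes into the mapping. for a request in the problem window (here 3*sizeof(size_t) below a page multiple), the old formula rounds the mapping to a size too small for the data; the new formula maps one extra page.

/* sketch.c - illustrates the under-allocation; the constants are assumptions
 * mirroring musl's 64-bit mmap-chunk layout, not copied from malloc.c */
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE  4096UL
#define OVERHEAD   (2*sizeof(size_t))   /* per-chunk header */
#define SIZE_ALIGN (4*sizeof(size_t))   /* alignment; data offset in the mapping */

int main(void)
{
	/* request 3*sizeof(size_t) below a page multiple: inside the
	 * 2*sizeof(size_t)..4*sizeof(size_t) window described above */
	size_t request = 64*PAGE_SIZE - 3*sizeof(size_t);

	/* adjust_size: add header overhead, round up to SIZE_ALIGN */
	size_t n = (request + OVERHEAD + SIZE_ALIGN - 1) & -SIZE_ALIGN;

	size_t old_len = (n + PAGE_SIZE - 1) & -PAGE_SIZE;            /* before the fix */
	size_t new_len = (n + OVERHEAD + PAGE_SIZE - 1) & -PAGE_SIZE; /* after the fix  */

	/* the mapping must hold SIZE_ALIGN bytes of offset plus the user data */
	size_t needed = SIZE_ALIGN + request;

	printf("needed %zu, old len %zu (short by %zu), new len %zu\n",
	       needed, old_len, needed > old_len ? needed - old_len : 0, new_len);
	return 0;
}

on a 64-bit machine this prints a mapping 8 bytes smaller than what the returned pointer is allowed to address under the old formula; with OVERHEAD added before rounding, the mapping grows by one page and covers the full request.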