1. 18 January 2023 (1 commit)
    • binder: fix UAF of alloc->vma in race with munmap() · b84980c2
      Authored by Carlos Llamas
      stable inclusion
      from stable-v5.10.154
      commit 015ac18be7de25d17d6e5f1643cb3b60bfbe859e
      category: bugfix
      bugzilla: https://gitee.com/src-openeuler/kernel/issues/I68WW5
      CVE: CVE-2023-20928
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=015ac18be7de25d17d6e5f1643cb3b60bfbe859e
      
      --------------------------------
      
      In commit 720c2419 ("ANDROID: binder: change down_write to
      down_read") binder assumed the mmap read lock is sufficient to protect
      alloc->vma inside binder_update_page_range(). This used to be accurate
      until commit dd2283f2 ("mm: mmap: zap pages with read mmap_sem in
      munmap"), which now downgrades the mmap_lock after detaching the vma
      from the rbtree in munmap(). It then proceeds to tear down and free the
      vma with only the read lock held.
      
      This means that accesses to alloc->vma in binder_update_page_range()
      will now race with vm_area_free() in munmap() and can cause a UAF, as
      shown in the following KASAN trace:
      
        ==================================================================
        BUG: KASAN: use-after-free in vm_insert_page+0x7c/0x1f0
        Read of size 8 at addr ffff16204ad00600 by task server/558
      
        CPU: 3 PID: 558 Comm: server Not tainted 5.10.150-00001-gdc8dcf942daa #1
        Hardware name: linux,dummy-virt (DT)
        Call trace:
         dump_backtrace+0x0/0x2a0
         show_stack+0x18/0x2c
         dump_stack+0xf8/0x164
         print_address_description.constprop.0+0x9c/0x538
         kasan_report+0x120/0x200
         __asan_load8+0xa0/0xc4
         vm_insert_page+0x7c/0x1f0
         binder_update_page_range+0x278/0x50c
         binder_alloc_new_buf+0x3f0/0xba0
         binder_transaction+0x64c/0x3040
         binder_thread_write+0x924/0x2020
         binder_ioctl+0x1610/0x2e5c
         __arm64_sys_ioctl+0xd4/0x120
         el0_svc_common.constprop.0+0xac/0x270
         do_el0_svc+0x38/0xa0
         el0_svc+0x1c/0x2c
         el0_sync_handler+0xe8/0x114
         el0_sync+0x180/0x1c0
      
        Allocated by task 559:
         kasan_save_stack+0x38/0x6c
         __kasan_kmalloc.constprop.0+0xe4/0xf0
         kasan_slab_alloc+0x18/0x2c
         kmem_cache_alloc+0x1b0/0x2d0
         vm_area_alloc+0x28/0x94
         mmap_region+0x378/0x920
         do_mmap+0x3f0/0x600
         vm_mmap_pgoff+0x150/0x17c
         ksys_mmap_pgoff+0x284/0x2dc
         __arm64_sys_mmap+0x84/0xa4
         el0_svc_common.constprop.0+0xac/0x270
         do_el0_svc+0x38/0xa0
         el0_svc+0x1c/0x2c
         el0_sync_handler+0xe8/0x114
         el0_sync+0x180/0x1c0
      
        Freed by task 560:
         kasan_save_stack+0x38/0x6c
         kasan_set_track+0x28/0x40
         kasan_set_free_info+0x24/0x4c
         __kasan_slab_free+0x100/0x164
         kasan_slab_free+0x14/0x20
         kmem_cache_free+0xc4/0x34c
         vm_area_free+0x1c/0x2c
         remove_vma+0x7c/0x94
         __do_munmap+0x358/0x710
         __vm_munmap+0xbc/0x130
         __arm64_sys_munmap+0x4c/0x64
         el0_svc_common.constprop.0+0xac/0x270
         do_el0_svc+0x38/0xa0
         el0_svc+0x1c/0x2c
         el0_sync_handler+0xe8/0x114
         el0_sync+0x180/0x1c0
      
        [...]
        ==================================================================
      
      To prevent the race above, revert to taking the mmap write lock
      inside binder_update_page_range(). One might expect an increase in mmap
      lock contention; however, binder already serializes these calls via the
      top-level alloc->mutex, and the binder benchmark tests showed no
      performance impact.
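
      As a hedged illustration of the shape of that fix (a simplified
      fragment, not the verbatim patch), binder_update_page_range() goes back
      to holding the mmap lock for writing so munmap() cannot free the vma
      underneath it:

        if (need_mm && mmget_not_zero(alloc->vma_vm_mm))
                mm = alloc->vma_vm_mm;

        if (mm) {
                mmap_write_lock(mm);            /* was mmap_read_lock(mm) */
                vma = alloc->vma;
        }
        ...
        if (mm) {
                mmap_write_unlock(mm);          /* was mmap_read_unlock(mm) */
                mmput(mm);
        }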
      
      Note this patch is specific to the 5.4 and 5.10 stable branches, since
      in newer kernel releases binder no longer caches a pointer to the vma.
      Instead, it has been refactored to use vma_lookup(), which avoids the
      issue described here. This switch was introduced in commit a43cfc87
      ("android: binder: stop saving a pointer to the VMA").
      
      Fixes: dd2283f2 ("mm: mmap: zap pages with read mmap_sem in munmap")
      Reported-by: Jann Horn <jannh@google.com>
      Cc: <stable@vger.kernel.org> # 5.10.x
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Yang Shi <yang.shi@linux.alibaba.com>
      Cc: Liam Howlett <liam.howlett@oracle.com>
      Signed-off-by: Carlos Llamas <cmllamas@google.com>
      Acked-by: Todd Kjos <tkjos@google.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
      Reviewed-by: Liao Chang <liaochang1@huawei.com>
      Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
      Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
  2. 28 January 2022 (1 commit)
  3. 12 January 2021 (1 commit)
  4. 16 September 2020 (1 commit)
  5. 04 September 2020 (2 commits)
  6. 29 July 2020 (1 commit)
  7. 23 July 2020 (1 commit)
  8. 10 June 2020 (2 commits)
  9. 14 November 2019 (3 commits)
  10. 22 October 2019 (1 commit)
  11. 17 October 2019 (1 commit)
    • binder: Don't modify VMA bounds in ->mmap handler · 45d02f79
      Authored by Jann Horn
      binder_mmap() tries to prevent the creation of overly big binder mappings
      by silently truncating the size of the VMA to 4MiB. However, this violates
      the API contract of mmap(). If userspace attempts to create a large binder
      VMA, and later attempts to unmap that VMA, it will call munmap() on a range
      beyond the end of the VMA, which may have been allocated to another VMA in
      the meantime. This can lead to userspace memory corruption.
      
      The following sequence of calls leads to a segfault without this commit:
      
      #include <err.h>
      #include <fcntl.h>
      #include <stddef.h>
      #include <sys/mman.h>

      int main(void) {
        int binder_fd = open("/dev/binder", O_RDWR);
        if (binder_fd == -1) err(1, "open binder");
        void *binder_mapping = mmap(NULL, 0x800000UL, PROT_READ, MAP_SHARED,
                                    binder_fd, 0);
        if (binder_mapping == MAP_FAILED) err(1, "mmap binder");
        void *data_mapping = mmap(NULL, 0x400000UL, PROT_READ|PROT_WRITE,
                                  MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
        if (data_mapping == MAP_FAILED) err(1, "mmap data");
        munmap(binder_mapping, 0x800000UL);
        *(char*)data_mapping = 1;
        return 0;
      }
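
      A hedged sketch of the corresponding fix (simplified fragment, not the
      verbatim patch): instead of shrinking the caller's VMA, binder caps only
      its own view of the buffer size in binder_alloc_mmap_handler(), so a
      later munmap(addr, len) from userspace unmaps exactly the range that
      mmap() returned:

        alloc->buffer_size = min_t(unsigned long, vma->vm_end - vma->vm_start,
                                   SZ_4M);
        /* vma->vm_end is no longer modified by the ->mmap handler */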
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Jann Horn <jannh@google.com>
      Acked-by: Todd Kjos <tkjos@google.com>
      Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
      Link: https://lore.kernel.org/r/20191016150119.154756-1-jannh@google.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  12. 10 October 2019 (1 commit)
  13. 01 July 2019 (1 commit)
  14. 05 June 2019 (1 commit)
  15. 25 April 2019 (1 commit)
  16. 21 March 2019 (1 commit)
  17. 19 February 2019 (1 commit)
    • binder: reduce mmap_sem write-side lock · 3013bf62
      Authored by Minchan Kim
      binder has used the write side of the mmap_sem semaphore to release
      memory mapped into the address space of the process. However, the right
      lock for releasing pages is down_read, not down_write, because the page
      table lock already protects against parallel freeing.
      
      Please do not take the mmap_sem write-side lock, which is a well-known
      contended lock.
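
      A minimal sketch of the intended locking (a fragment assuming the page
      teardown path in binder_alloc, not the verbatim patch):

        down_read(&mm->mmap_sem);       /* was down_write(&mm->mmap_sem) */
        zap_page_range(vma, page_addr, PAGE_SIZE);
        up_read(&mm->mmap_sem);         /* was up_write(&mm->mmap_sem) */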
      
      Cc: Todd Kjos <tkjos@google.com>
      Cc: Martijn Coenen <maco@android.com>
      Cc: Arve Hjønnevåg <arve@android.com>
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  18. 12 February 2019 (5 commits)
  19. 27 November 2018 (2 commits)
    • binder: fix sparse warnings on locking context · 324fa64c
      Authored by Todd Kjos
      Add __acquire()/__release() annotations to fix warnings from sparse
      context checking.
      
      In one case the warning was due to the lack of a "default:" case in a
      switch statement where a lock was released in each of the cases, so the
      default case was added.
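
      A hedged example of the annotation style (hypothetical function, not
      lifted from the patch): a helper that releases a lock on behalf of its
      caller is annotated so sparse can track the context change, while the
      statement forms __acquire(lock)/__release(lock) are used on paths that
      sparse cannot follow by itself.

        static void sketch_unlock_proc(struct binder_proc *proc)
                __releases(&proc->inner_lock)
        {
                spin_unlock(&proc->inner_lock);
        }
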
      Signed-off-by: Todd Kjos <tkjos@google.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • binder: fix race that allows malicious free of live buffer · 7bada55a
      Authored by Todd Kjos
      Malicious code can attempt to free buffers using the BC_FREE_BUFFER
      ioctl to binder. There are protections against a user freeing a buffer
      while it is in use by the kernel; however, there was a window where
      BC_FREE_BUFFER could be used to free a recently allocated buffer that
      was not completely initialized. This resulted in a use-after-free
      detected by KASAN with a malicious test program.
      
      This window is closed by setting the buffer's allow_user_free attribute
      to 0 when the buffer is allocated, or when the user has previously freed
      it, instead of waiting for the caller to set it. The problem was that
      when the struct buffer was recycled, allow_user_free was stale and still
      set to 1, allowing a free to go through.
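
      A hedged sketch of the approach (simplified fragments, not the actual
      driver code): the flag is cleared whenever a buffer is (re)allocated or
      freed, so a recycled buffer can never carry a stale permission.

        /* at allocation time: */
        buffer->allow_user_free = 0;    /* not freeable until fully handed out */

        /* on BC_FREE_BUFFER from userspace: */
        if (!buffer->allow_user_free)
                return;                 /* reject: buffer is live or uninitialized */
        buffer->allow_user_free = 0;    /* consume the permission before freeing */
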
      Signed-off-by: Todd Kjos <tkjos@google.com>
      Acked-by: Arve Hjønnevåg <arve@android.com>
      Cc: stable <stable@vger.kernel.org> # 4.14
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  20. 12 September 2018 (1 commit)
    • android: binder: fix the race mmap and alloc_new_buf_locked · da1b9564
      Authored by Minchan Kim
      There is a RaceFuzzer report, shown below, because no lock closes the
      race between binder_mmap and binder_alloc_new_buf_locked. To close the
      race, use memory barriers so that if someone sees a non-NULL alloc->vma,
      alloc->vma_vm_mm can never be NULL.
      
      (I did not add a stable mark intentionally, because the standard android
      userspace libraries that interact with binder (libbinder & libhwbinder)
      prevent the mmap/ioctl race. - from Todd)
      
      "
      Thread interleaving:
      CPU0 (binder_alloc_mmap_handler)              CPU1 (binder_alloc_new_buf_locked)
      =====                                         =====
      // drivers/android/binder_alloc.c
      // #L718 (v4.18-rc3)
      alloc->vma = vma;
                                                    // drivers/android/binder_alloc.c
                                                    // #L346 (v4.18-rc3)
                                                    if (alloc->vma == NULL) {
                                                        ...
                                                        // alloc->vma is not NULL at this point
                                                        return ERR_PTR(-ESRCH);
                                                    }
                                                    ...
                                                    // #L438
                                                    binder_update_page_range(alloc, 0,
                                                            (void *)PAGE_ALIGN((uintptr_t)buffer->data),
                                                            end_page_addr);
      
                                                    // In binder_update_page_range() #L218
                                                    // But still alloc->vma_vm_mm is NULL here
                                                    if (need_mm && mmget_not_zero(alloc->vma_vm_mm))
      alloc->vma_vm_mm = vma->vm_mm;
      
      Crash Log:
      ==================================================================
      BUG: KASAN: null-ptr-deref in __atomic_add_unless include/asm-generic/atomic-instrumented.h:89 [inline]
      BUG: KASAN: null-ptr-deref in atomic_add_unless include/linux/atomic.h:533 [inline]
      BUG: KASAN: null-ptr-deref in mmget_not_zero include/linux/sched/mm.h:75 [inline]
      BUG: KASAN: null-ptr-deref in binder_update_page_range+0xece/0x18e0 drivers/android/binder_alloc.c:218
      Write of size 4 at addr 0000000000000058 by task syz-executor0/11184
      
      CPU: 1 PID: 11184 Comm: syz-executor0 Not tainted 4.18.0-rc3 #1
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
      Call Trace:
       __dump_stack lib/dump_stack.c:77 [inline]
       dump_stack+0x16e/0x22c lib/dump_stack.c:113
       kasan_report_error mm/kasan/report.c:352 [inline]
       kasan_report+0x163/0x380 mm/kasan/report.c:412
       check_memory_region_inline mm/kasan/kasan.c:260 [inline]
       check_memory_region+0x140/0x1a0 mm/kasan/kasan.c:267
       kasan_check_write+0x14/0x20 mm/kasan/kasan.c:278
       __atomic_add_unless include/asm-generic/atomic-instrumented.h:89 [inline]
       atomic_add_unless include/linux/atomic.h:533 [inline]
       mmget_not_zero include/linux/sched/mm.h:75 [inline]
       binder_update_page_range+0xece/0x18e0 drivers/android/binder_alloc.c:218
       binder_alloc_new_buf_locked drivers/android/binder_alloc.c:443 [inline]
       binder_alloc_new_buf+0x467/0xc30 drivers/android/binder_alloc.c:513
       binder_transaction+0x125b/0x4fb0 drivers/android/binder.c:2957
       binder_thread_write+0xc08/0x2770 drivers/android/binder.c:3528
       binder_ioctl_write_read.isra.39+0x24f/0x8e0 drivers/android/binder.c:4456
       binder_ioctl+0xa86/0xf34 drivers/android/binder.c:4596
       vfs_ioctl fs/ioctl.c:46 [inline]
       do_vfs_ioctl+0x154/0xd40 fs/ioctl.c:686
       ksys_ioctl+0x94/0xb0 fs/ioctl.c:701
       __do_sys_ioctl fs/ioctl.c:708 [inline]
       __se_sys_ioctl fs/ioctl.c:706 [inline]
       __x64_sys_ioctl+0x43/0x50 fs/ioctl.c:706
       do_syscall_64+0x167/0x4b0 arch/x86/entry/common.c:290
       entry_SYSCALL_64_after_hwframe+0x49/0xbe
      "
      Signed-off-by: Todd Kjos <tkjos@google.com>
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Reviewed-by: Martijn Coenen <maco@android.com>
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  21. 08 August 2018 (1 commit)
  22. 24 July 2018 (1 commit)
  23. 13 June 2018 (1 commit)
    • treewide: kzalloc() -> kcalloc() · 6396bb22
      Authored by Kees Cook
      The kzalloc() function has a 2-factor argument form, kcalloc(). This
      patch replaces cases of:
      
              kzalloc(a * b, gfp)
      
      with:
        kcalloc(a, b, gfp)
      
      as well as handling cases of:
      
              kzalloc(a * b * c, gfp)
      
      with:
      
              kzalloc(array3_size(a, b, c), gfp)
      
      as it's slightly less ugly than:
      
              kzalloc_array(array_size(a, b), c, gfp)
      
      This does, however, attempt to ignore constant size factors like:
      
              kzalloc(4 * 1024, gfp)
      
      though any constants defined via macros get caught up in the conversion.
      
      Any factors with a sizeof() of "unsigned char", "char", and "u8" were
      dropped, since they're redundant.
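
      For example, a call site of the following shape (struct name
      hypothetical):

              ptr = kzalloc(count * sizeof(struct foo), GFP_KERNEL);

      becomes the overflow-checked two-argument form:

              ptr = kcalloc(count, sizeof(struct foo), GFP_KERNEL);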
      
      The Coccinelle script used for this was:
      
      // Fix redundant parens around sizeof().
      @@
      type TYPE;
      expression THING, E;
      @@
      
      (
        kzalloc(
      -	(sizeof(TYPE)) * E
      +	sizeof(TYPE) * E
        , ...)
      |
        kzalloc(
      -	(sizeof(THING)) * E
      +	sizeof(THING) * E
        , ...)
      )
      
      // Drop single-byte sizes and redundant parens.
      @@
      expression COUNT;
      typedef u8;
      typedef __u8;
      @@
      
      (
        kzalloc(
      -	sizeof(u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(__u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(char) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(unsigned char) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(u8) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(__u8) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(char) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(unsigned char) * COUNT
      +	COUNT
        , ...)
      )
      
      // 2-factor product with sizeof(type/expression) and identifier or constant.
      @@
      type TYPE;
      expression THING;
      identifier COUNT_ID;
      constant COUNT_CONST;
      @@
      
      (
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (COUNT_ID)
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * COUNT_ID
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * COUNT_CONST
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (COUNT_ID)
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * COUNT_ID
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * COUNT_CONST
      +	COUNT_CONST, sizeof(THING)
        , ...)
      )
      
      // 2-factor product, only identifiers.
      @@
      identifier SIZE, COUNT;
      @@
      
      - kzalloc
      + kcalloc
        (
      -	SIZE * COUNT
      +	COUNT, SIZE
        , ...)
      
      // 3-factor product with 1 sizeof(type) or sizeof(expression), with
      // redundant parens removed.
      @@
      expression THING;
      identifier STRIDE, COUNT;
      type TYPE;
      @@
      
      (
        kzalloc(
      -	sizeof(TYPE) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      )
      
      // 3-factor product with 2 sizeof(variable), with redundant parens removed.
      @@
      expression THING1, THING2;
      identifier COUNT;
      type TYPE1, TYPE2;
      @@
      
      (
        kzalloc(
      -	sizeof(TYPE1) * sizeof(TYPE2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kzalloc(
      -	sizeof(THING1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(THING1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      )
      
      // 3-factor product, only identifiers, with redundant parens removed.
      @@
      identifier STRIDE, SIZE, COUNT;
      @@
      
      (
        kzalloc(
      -	(COUNT) * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      )
      
      // Any remaining multi-factor products, first at least 3-factor products,
      // when they're not all constants...
      @@
      expression E1, E2, E3;
      constant C1, C2, C3;
      @@
      
      (
        kzalloc(C1 * C2 * C3, ...)
      |
        kzalloc(
      -	(E1) * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	(E1) * (E2) * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	(E1) * (E2) * (E3)
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	E1 * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      )
      
      // And then all remaining 2 factors products when they're not all constants,
      // keeping sizeof() as the second factor argument.
      @@
      expression THING, E1, E2;
      type TYPE;
      constant C1, C2, C3;
      @@
      
      (
        kzalloc(sizeof(THING) * C2, ...)
      |
        kzalloc(sizeof(TYPE) * C2, ...)
      |
        kzalloc(C1 * C2 * C3, ...)
      |
        kzalloc(C1 * C2, ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (E2)
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * E2
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (E2)
      +	E2, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * E2
      +	E2, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	(E1) * E2
      +	E1, E2
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	(E1) * (E2)
      +	E1, E2
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	E1 * E2
      +	E1, E2
        , ...)
      )
      Signed-off-by: Kees Cook <keescook@chromium.org>
  24. 14 May 2018 (1 commit)
    • ANDROID: binder: change down_write to down_read · 720c2419
      Authored by Minchan Kim
      binder_update_page_range needs down_write of mmap_sem because
      vm_insert_page needs to set VM_MIXEDMAP in vma->vm_flags unless it is
      already set. However, profiling binder shows that every binder buffer is
      mapped in advance by binder_mmap. That means we can set VM_MIXEDMAP at
      binder_mmap time, which already holds mmap_sem as down_write, so
      binder_update_page_range does not need to hold mmap_sem as down_write.
      Use the proper API, down_read, instead. It helps the mmap_sem contention
      problem as well as fixing the down_write abuse.
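
      A minimal sketch of the change (fragments, not the verbatim patch): set
      the flag once in binder_mmap(), which already holds mmap_sem for
      writing, and drop to the read side in binder_update_page_range():

        /* in binder_mmap(), already under down_write(&mm->mmap_sem): */
        vma->vm_flags |= VM_MIXEDMAP;

        /* in binder_update_page_range(): */
        down_read(&mm->mmap_sem);       /* was down_write(&mm->mmap_sem) */
        ...
        up_read(&mm->mmap_sem);         /* was up_write(&mm->mmap_sem) */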
      
      Ganesh Mahendran tested app launching and binder throughput and could
      not find any problem. I ran the binder latency test per Greg KH's
      request (thanks to Martijn for showing me how) and could not find any
      problem either.
      
      Cc: Ganesh Mahendran <opensource.ganesh@gmail.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Arve Hjønnevåg <arve@android.com>
      Cc: Todd Kjos <tkjos@google.com>
      Reviewed-by: Martijn Coenen <maco@android.com>
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  25. 25 January 2018 (1 commit)
    • android: binder: use VM_ALLOC to get vm area · aac6830e
      Authored by Ganesh Mahendran
      VM_IOREMAP is used to access hardware through a mechanism called
      I/O mapped memory. Android binder is an IPC mechanism which does not
      access I/O memory.
      
      VM_IOREMAP also has an alignment requirement that is not needed in
      binder:
          __get_vm_area_node()
          {
          ...
              if (flags & VM_IOREMAP)
                  align = 1ul << clamp_t(int, fls_long(size),
                     PAGE_SHIFT, IOREMAP_MAX_ORDER);
          ...
          }
      
      This patch will save some kernel vm area, especially for a 32-bit OS.
      
      On a 32-bit OS, the kernel vm area is only 240MB. We may get the error
      below when launching an app:
      
      <3>[ 4482.440053] binder_alloc: binder_alloc_mmap_handler: 15728 8ce67000-8cf65000 get_vm_area failed -12
      <3>[ 4483.218817] binder_alloc: binder_alloc_mmap_handler: 15745 8ce67000-8cf65000 get_vm_area failed -12
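
      A hedged sketch of the change in binder_alloc_mmap_handler()
      (simplified): request a plain vmalloc-style area so the VM_IOREMAP
      alignment padding shown above is avoided:

        area = get_vm_area(vma->vm_end - vma->vm_start, VM_ALLOC);
        /* was: area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP); */
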
      Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
      Acked-by: Martijn Coenen <maco@android.com>
      Acked-by: Todd Kjos <tkjos@google.com>
      Cc: stable <stable@vger.kernel.org>
      
      ----
      V3: update comments
      V2: update comments
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  26. 19 December 2017 (1 commit)
  27. 18 December 2017 (1 commit)
  28. 28 November 2017 (1 commit)
  29. 21 October 2017 (2 commits)
  30. 20 October 2017 (1 commit)