1. 18 Oct, 2021 (2 commits)
  2. 17 Oct, 2021 (2 commits)
  3. 16 Oct, 2021 (6 commits)
  4. 04 Oct, 2021 (1 commit)
  5. 02 Oct, 2021 (1 commit)
  6. 28 Sep, 2021 (1 commit)
  7. 25 Sep, 2021 (2 commits)
  8. 16 Sep, 2021 (2 commits)
    • blk-cgroup: fix UAF by grabbing blkcg lock before destroying blkg pd · 858560b2
      Committed by Li Jinlin
      KASAN reports a use-after-free report when doing fuzz test:
      
      [693354.104835] ==================================================================
      [693354.105094] BUG: KASAN: use-after-free in bfq_io_set_weight_legacy+0xd3/0x160
      [693354.105336] Read of size 4 at addr ffff888be0a35664 by task sh/1453338
      
      [693354.105607] CPU: 41 PID: 1453338 Comm: sh Kdump: loaded Not tainted 4.18.0-147
      [693354.105610] Hardware name: Huawei 2288H V5/BC11SPSCB0, BIOS 0.81 07/02/2018
      [693354.105612] Call Trace:
      [693354.105621]  dump_stack+0xf1/0x19b
      [693354.105626]  ? show_regs_print_info+0x5/0x5
      [693354.105634]  ? printk+0x9c/0xc3
      [693354.105638]  ? cpumask_weight+0x1f/0x1f
      [693354.105648]  print_address_description+0x70/0x360
      [693354.105654]  kasan_report+0x1b2/0x330
      [693354.105659]  ? bfq_io_set_weight_legacy+0xd3/0x160
      [693354.105665]  ? bfq_io_set_weight_legacy+0xd3/0x160
      [693354.105670]  bfq_io_set_weight_legacy+0xd3/0x160
      [693354.105675]  ? bfq_cpd_init+0x20/0x20
      [693354.105683]  cgroup_file_write+0x3aa/0x510
      [693354.105693]  ? ___slab_alloc+0x507/0x540
      [693354.105698]  ? cgroup_file_poll+0x60/0x60
      [693354.105702]  ? 0xffffffff89600000
      [693354.105708]  ? usercopy_abort+0x90/0x90
      [693354.105716]  ? mutex_lock+0xef/0x180
      [693354.105726]  kernfs_fop_write+0x1ab/0x280
      [693354.105732]  ? cgroup_file_poll+0x60/0x60
      [693354.105738]  vfs_write+0xe7/0x230
      [693354.105744]  ksys_write+0xb0/0x140
      [693354.105749]  ? __ia32_sys_read+0x50/0x50
      [693354.105760]  do_syscall_64+0x112/0x370
      [693354.105766]  ? syscall_return_slowpath+0x260/0x260
      [693354.105772]  ? do_page_fault+0x9b/0x270
      [693354.105779]  ? prepare_exit_to_usermode+0xf9/0x1a0
      [693354.105784]  ? enter_from_user_mode+0x30/0x30
      [693354.105793]  entry_SYSCALL_64_after_hwframe+0x65/0xca
      
      [693354.105875] Allocated by task 1453337:
      [693354.106001]  kasan_kmalloc+0xa0/0xd0
      [693354.106006]  kmem_cache_alloc_node_trace+0x108/0x220
      [693354.106010]  bfq_pd_alloc+0x96/0x120
      [693354.106015]  blkcg_activate_policy+0x1b7/0x2b0
      [693354.106020]  bfq_create_group_hierarchy+0x1e/0x80
      [693354.106026]  bfq_init_queue+0x678/0x8c0
      [693354.106031]  blk_mq_init_sched+0x1f8/0x460
      [693354.106037]  elevator_switch_mq+0xe1/0x240
      [693354.106041]  elevator_switch+0x25/0x40
      [693354.106045]  elv_iosched_store+0x1a1/0x230
      [693354.106049]  queue_attr_store+0x78/0xb0
      [693354.106053]  kernfs_fop_write+0x1ab/0x280
      [693354.106056]  vfs_write+0xe7/0x230
      [693354.106060]  ksys_write+0xb0/0x140
      [693354.106064]  do_syscall_64+0x112/0x370
      [693354.106069]  entry_SYSCALL_64_after_hwframe+0x65/0xca
      
      [693354.106114] Freed by task 1453336:
      [693354.106225]  __kasan_slab_free+0x130/0x180
      [693354.106229]  kfree+0x90/0x1b0
      [693354.106233]  blkcg_deactivate_policy+0x12c/0x220
      [693354.106238]  bfq_exit_queue+0xf5/0x110
      [693354.106241]  blk_mq_exit_sched+0x104/0x130
      [693354.106245]  __elevator_exit+0x45/0x60
      [693354.106249]  elevator_switch_mq+0xd6/0x240
      [693354.106253]  elevator_switch+0x25/0x40
      [693354.106257]  elv_iosched_store+0x1a1/0x230
      [693354.106261]  queue_attr_store+0x78/0xb0
      [693354.106264]  kernfs_fop_write+0x1ab/0x280
      [693354.106268]  vfs_write+0xe7/0x230
      [693354.106271]  ksys_write+0xb0/0x140
      [693354.106275]  do_syscall_64+0x112/0x370
      [693354.106280]  entry_SYSCALL_64_after_hwframe+0x65/0xca
      
      [693354.106329] The buggy address belongs to the object at ffff888be0a35580
                       which belongs to the cache kmalloc-1k of size 1024
      [693354.106736] The buggy address is located 228 bytes inside of
                       1024-byte region [ffff888be0a35580, ffff888be0a35980)
      [693354.107114] The buggy address belongs to the page:
      [693354.107273] page:ffffea002f828c00 count:1 mapcount:0 mapping:ffff888107c17080 index:0x0 compound_mapcount: 0
      [693354.107606] flags: 0x17ffffc0008100(slab|head)
      [693354.107760] raw: 0017ffffc0008100 ffffea002fcbc808 ffffea0030bd3a08 ffff888107c17080
      [693354.108020] raw: 0000000000000000 00000000001c001c 00000001ffffffff 0000000000000000
      [693354.108278] page dumped because: kasan: bad access detected
      
      [693354.108511] Memory state around the buggy address:
      [693354.108671]  ffff888be0a35500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
      [693354.116396]  ffff888be0a35580: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
      [693354.124473] >ffff888be0a35600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
      [693354.132421]                                                        ^
      [693354.140284]  ffff888be0a35680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
      [693354.147912]  ffff888be0a35700: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
      [693354.155281] ==================================================================
      
      blkgs are protected by both the queue and blkcg locks, and holding
      either should stabilize them. However, the path that destroys
      blkg policy data is only protected by the queue lock in
      blkcg_activate_policy()/blkcg_deactivate_policy(). Other tasks
      can fetch the blkg policy data before it is destroyed and use it
      afterwards, which results in a use-after-free.
      
      CPU0                             CPU1
      blkcg_deactivate_policy
        spin_lock_irq(&q->queue_lock)
                                       bfq_io_set_weight_legacy
                                         spin_lock_irq(&blkcg->lock)
                                         blkg_to_bfqg(blkg)
                                           pd_to_bfqg(blkg->pd[pol->plid])
                                           ^^^^^^blkg->pd[pol->plid] != NULL
                                                 bfqg != NULL
        pol->pd_free_fn(blkg->pd[pol->plid])
          pd_to_bfqg(blkg->pd[pol->plid])
          bfqg_put(bfqg)
            kfree(bfqg)
        blkg->pd[pol->plid] = NULL
        spin_unlock_irq(q->queue_lock);
                                         bfq_group_set_weight(bfqg, val, 0)
                                           bfqg->entity.new_weight
                                           ^^^^^^trigger uaf here
                                         spin_unlock_irq(&blkcg->lock);
      
      Fix by grabbing the matching blkcg lock before trying to
      destroy blkg policy data.
      Suggested-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Li Jinlin <lijinlin3@huawei.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Link: https://lore.kernel.org/r/20210914042605.3260596-1-lijinlin3@huawei.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: fix memory leak in blk_iolatency_init · 6f5ddde4
      Committed by Yanfei Xu
      BUG: memory leak
      unreferenced object 0xffff888129acdb80 (size 96):
        comm "syz-executor.1", pid 12661, jiffies 4294962682 (age 15.220s)
        hex dump (first 32 bytes):
          20 47 c9 85 ff ff ff ff 20 d4 8e 29 81 88 ff ff   G...... ..)....
          01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
        backtrace:
          [<ffffffff82264ec8>] kmalloc include/linux/slab.h:591 [inline]
          [<ffffffff82264ec8>] kzalloc include/linux/slab.h:721 [inline]
          [<ffffffff82264ec8>] blk_iolatency_init+0x28/0x190 block/blk-iolatency.c:724
          [<ffffffff8225b8c4>] blkcg_init_queue+0xb4/0x1c0 block/blk-cgroup.c:1185
          [<ffffffff822253da>] blk_alloc_queue+0x22a/0x2e0 block/blk-core.c:566
          [<ffffffff8223b175>] blk_mq_init_queue_data block/blk-mq.c:3100 [inline]
          [<ffffffff8223b175>] __blk_mq_alloc_disk+0x25/0xd0 block/blk-mq.c:3124
          [<ffffffff826a9303>] loop_add+0x1c3/0x360 drivers/block/loop.c:2344
          [<ffffffff826a966e>] loop_control_get_free drivers/block/loop.c:2501 [inline]
          [<ffffffff826a966e>] loop_control_ioctl+0x17e/0x2e0 drivers/block/loop.c:2516
          [<ffffffff81597eec>] vfs_ioctl fs/ioctl.c:51 [inline]
          [<ffffffff81597eec>] __do_sys_ioctl fs/ioctl.c:874 [inline]
          [<ffffffff81597eec>] __se_sys_ioctl fs/ioctl.c:860 [inline]
          [<ffffffff81597eec>] __x64_sys_ioctl+0xfc/0x140 fs/ioctl.c:860
          [<ffffffff843fa745>] do_syscall_x64 arch/x86/entry/common.c:50 [inline]
          [<ffffffff843fa745>] do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
          [<ffffffff84600068>] entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      Once blk_throtl_init() fails during queue init, blkcg_iolatency_exit()
      is never invoked for cleanup, which leads to a memory leak. Swapping
      the blk_throtl_init() and blk_iolatency_init() calls solves this.
      
      Reported-by: syzbot+01321b15cc98e6bf96d6@syzkaller.appspotmail.com
      Fixes: 19688d7f (block/blk-cgroup: Swap the blk_throtl_init() and blk_iolatency_init() calls)
      Signed-off-by: Yanfei Xu <yanfei.xu@windriver.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Link: https://lore.kernel.org/r/20210915072426.4022924-1-yanfei.xu@windriver.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  9. 15 Sep, 2021 (2 commits)
  10. 14 Sep, 2021 (1 commit)
  11. 13 Sep, 2021 (1 commit)
    • blk-mq: avoid iterating over stale requests · 67f3b2f8
      Committed by Ming Lei
      blk-mq can't allocate a driver tag and update ->rqs[tag]
      atomically; meanwhile, blk-mq doesn't clear ->rqs[tag] after the
      driver tag is released.
      
      So there is a chance of iterating over a stale request just after
      the tag is allocated and before ->rqs[tag] is updated.
      
      scsi_host_busy_iter() calls scsi_host_check_in_flight() to count scsi
      in-flight requests after the scsi host is blocked, so no new scsi command
      can be marked as SCMD_STATE_INFLIGHT. However, driver tag allocation can
      still be run by the blk-mq core. One request is marked as SCMD_STATE_INFLIGHT,
      but this request may have been kept in another slot of ->rqs[]; meanwhile
      the slot can be allocated out but ->rqs[] isn't updated yet. Then this
      in-flight request is counted twice as SCMD_STATE_INFLIGHT, which causes
      trouble in scsi error handling.
      
      Fix the issue by not iterating over stale requests.
      
      Cc: linux-scsi@vger.kernel.org
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Reported-by: luojiaxing <luojiaxing@huawei.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Link: https://lore.kernel.org/r/20210906065003.439019-1-ming.lei@redhat.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  12. 08 Sep, 2021 (1 commit)
  13. 07 Sep, 2021 (4 commits)
    • block: move fs/block_dev.c to block/bdev.c · 0dca4462
      Committed by Christoph Hellwig
      Move it together with the rest of the block layer.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Link: https://lore.kernel.org/r/20210907141303.1371844-3-hch@lst.de
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: split out operations on block special files · cd82cca7
      Committed by Christoph Hellwig
      Add a new block/fops.c for all the file and address_space operations
      that provide the block special file support.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Link: https://lore.kernel.org/r/20210907141303.1371844-2-hch@lst.de
      [axboe: correct trailing whitespace while at it]
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blk-throttle: fix UAF by deleting timer in blk_throtl_exit() · 884f0e84
      Committed by Li Jinlin
      The pending timer has been set up in blk_throtl_init(). However, the
      timer is not deleted in blk_throtl_exit(). This means that the timer
      handler may still be running after freeing the timer, which would
      result in a use-after-free.
      
      Fix by calling del_timer_sync() to delete the timer in blk_throtl_exit().
      Signed-off-by: Li Jinlin <lijinlin3@huawei.com>
      Link: https://lore.kernel.org/r/20210907121242.2885564-1-lijinlin3@huawei.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: genhd: don't call blkdev_show() with major_names_lock held · dfbb3409
      Committed by Tetsuo Handa
      If CONFIG_BLK_DEV_LOOP && CONFIG_MTD (at least; there might be other
      combinations), lockdep complains about a circular locking dependency
      at __loop_clr_fd(), because major_names_lock serves as a hub
      aggregating locking dependencies across multiple block modules.
      
       ======================================================
       WARNING: possible circular locking dependency detected
       5.14.0+ #757 Tainted: G            E
       ------------------------------------------------------
       systemd-udevd/7568 is trying to acquire lock:
       ffff88800f334d48 ((wq_completion)loop0){+.+.}-{0:0}, at: flush_workqueue+0x70/0x560
      
       but task is already holding lock:
       ffff888014a7d4a0 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0x4d/0x400 [loop]
      
       which lock already depends on the new lock.
      
       the existing dependency chain (in reverse order) is:
      
       -> #6 (&lo->lo_mutex){+.+.}-{3:3}:
              lock_acquire+0xbe/0x1f0
              __mutex_lock_common+0xb6/0xe10
              mutex_lock_killable_nested+0x17/0x20
              lo_open+0x23/0x50 [loop]
              blkdev_get_by_dev+0x199/0x540
              blkdev_open+0x58/0x90
              do_dentry_open+0x144/0x3a0
              path_openat+0xa57/0xda0
              do_filp_open+0x9f/0x140
              do_sys_openat2+0x71/0x150
              __x64_sys_openat+0x78/0xa0
              do_syscall_64+0x3d/0xb0
              entry_SYSCALL_64_after_hwframe+0x44/0xae
      
       -> #5 (&disk->open_mutex){+.+.}-{3:3}:
              lock_acquire+0xbe/0x1f0
              __mutex_lock_common+0xb6/0xe10
              mutex_lock_nested+0x17/0x20
              bd_register_pending_holders+0x20/0x100
              device_add_disk+0x1ae/0x390
              loop_add+0x29c/0x2d0 [loop]
              blk_request_module+0x5a/0xb0
              blkdev_get_no_open+0x27/0xa0
              blkdev_get_by_dev+0x5f/0x540
              blkdev_open+0x58/0x90
              do_dentry_open+0x144/0x3a0
              path_openat+0xa57/0xda0
              do_filp_open+0x9f/0x140
              do_sys_openat2+0x71/0x150
              __x64_sys_openat+0x78/0xa0
              do_syscall_64+0x3d/0xb0
              entry_SYSCALL_64_after_hwframe+0x44/0xae
      
       -> #4 (major_names_lock){+.+.}-{3:3}:
              lock_acquire+0xbe/0x1f0
              __mutex_lock_common+0xb6/0xe10
              mutex_lock_nested+0x17/0x20
              blkdev_show+0x19/0x80
              devinfo_show+0x52/0x60
              seq_read_iter+0x2d5/0x3e0
              proc_reg_read_iter+0x41/0x80
              vfs_read+0x2ac/0x330
              ksys_read+0x6b/0xd0
              do_syscall_64+0x3d/0xb0
              entry_SYSCALL_64_after_hwframe+0x44/0xae
      
       -> #3 (&p->lock){+.+.}-{3:3}:
              lock_acquire+0xbe/0x1f0
              __mutex_lock_common+0xb6/0xe10
              mutex_lock_nested+0x17/0x20
              seq_read_iter+0x37/0x3e0
              generic_file_splice_read+0xf3/0x170
              splice_direct_to_actor+0x14e/0x350
              do_splice_direct+0x84/0xd0
              do_sendfile+0x263/0x430
              __se_sys_sendfile64+0x96/0xc0
              do_syscall_64+0x3d/0xb0
              entry_SYSCALL_64_after_hwframe+0x44/0xae
      
       -> #2 (sb_writers#3){.+.+}-{0:0}:
              lock_acquire+0xbe/0x1f0
              lo_write_bvec+0x96/0x280 [loop]
              loop_process_work+0xa68/0xc10 [loop]
              process_one_work+0x293/0x480
              worker_thread+0x23d/0x4b0
              kthread+0x163/0x180
              ret_from_fork+0x1f/0x30
      
       -> #1 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}:
              lock_acquire+0xbe/0x1f0
              process_one_work+0x280/0x480
              worker_thread+0x23d/0x4b0
              kthread+0x163/0x180
              ret_from_fork+0x1f/0x30
      
       -> #0 ((wq_completion)loop0){+.+.}-{0:0}:
              validate_chain+0x1f0d/0x33e0
              __lock_acquire+0x92d/0x1030
              lock_acquire+0xbe/0x1f0
              flush_workqueue+0x8c/0x560
              drain_workqueue+0x80/0x140
              destroy_workqueue+0x47/0x4f0
              __loop_clr_fd+0xb4/0x400 [loop]
              blkdev_put+0x14a/0x1d0
              blkdev_close+0x1c/0x20
              __fput+0xfd/0x220
              task_work_run+0x69/0xc0
              exit_to_user_mode_prepare+0x1ce/0x1f0
              syscall_exit_to_user_mode+0x26/0x60
              do_syscall_64+0x4c/0xb0
              entry_SYSCALL_64_after_hwframe+0x44/0xae
      
       other info that might help us debug this:
      
       Chain exists of:
         (wq_completion)loop0 --> &disk->open_mutex --> &lo->lo_mutex
      
        Possible unsafe locking scenario:
      
              CPU0                    CPU1
              ----                    ----
         lock(&lo->lo_mutex);
                                      lock(&disk->open_mutex);
                                      lock(&lo->lo_mutex);
         lock((wq_completion)loop0);
      
        *** DEADLOCK ***
      
       2 locks held by systemd-udevd/7568:
        #0: ffff888012554128 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0x4c/0x1d0
        #1: ffff888014a7d4a0 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0x4d/0x400 [loop]
      
       stack backtrace:
       CPU: 0 PID: 7568 Comm: systemd-udevd Tainted: G            E     5.14.0+ #757
       Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
       Call Trace:
        dump_stack_lvl+0x79/0xbf
        print_circular_bug+0x5d6/0x5e0
        ? stack_trace_save+0x42/0x60
        ? save_trace+0x3d/0x2d0
        check_noncircular+0x10b/0x120
        validate_chain+0x1f0d/0x33e0
        ? __lock_acquire+0x953/0x1030
        ? __lock_acquire+0x953/0x1030
        __lock_acquire+0x92d/0x1030
        ? flush_workqueue+0x70/0x560
        lock_acquire+0xbe/0x1f0
        ? flush_workqueue+0x70/0x560
        flush_workqueue+0x8c/0x560
        ? flush_workqueue+0x70/0x560
        ? sched_clock_cpu+0xe/0x1a0
        ? drain_workqueue+0x41/0x140
        drain_workqueue+0x80/0x140
        destroy_workqueue+0x47/0x4f0
        ? blk_mq_freeze_queue_wait+0xac/0xd0
        __loop_clr_fd+0xb4/0x400 [loop]
        ? __mutex_unlock_slowpath+0x35/0x230
        blkdev_put+0x14a/0x1d0
        blkdev_close+0x1c/0x20
        __fput+0xfd/0x220
        task_work_run+0x69/0xc0
        exit_to_user_mode_prepare+0x1ce/0x1f0
        syscall_exit_to_user_mode+0x26/0x60
        do_syscall_64+0x4c/0xb0
        entry_SYSCALL_64_after_hwframe+0x44/0xae
       RIP: 0033:0x7f0fd4c661f7
       Code: 00 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 03 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 41 c3 48 83 ec 18 89 7c 24 0c e8 13 fc ff ff
       RSP: 002b:00007ffd1c9e9fd8 EFLAGS: 00000246 ORIG_RAX: 0000000000000003
       RAX: 0000000000000000 RBX: 00007f0fd46be6c8 RCX: 00007f0fd4c661f7
       RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000006
       RBP: 0000000000000006 R08: 000055fff1eaf400 R09: 0000000000000000
       R10: 00007f0fd46be6c8 R11: 0000000000000246 R12: 0000000000000000
       R13: 0000000000000000 R14: 0000000000002f08 R15: 00007ffd1c9ea050
      
      Commit 1c500ad7 ("loop: reduce the loop_ctl_mutex scope") breaks the
      "loop_ctl_mutex => &lo->lo_mutex" dependency chain. But enabling a
      different block module results in a circular locking dependency being
      formed via the shared major_names_lock mutex.
      
      The simplest fix would be to call the probe function without holding
      major_names_lock [1], but Christoph Hellwig does not like that idea.
      Therefore, instead of holding major_names_lock in blkdev_show(),
      introduce a different lock for blkdev_show() in order to break the
      "sb_writers#$N => &p->lock => major_names_lock" dependency chain.
      
      Link: https://lkml.kernel.org/r/b2af8a5b-3c1b-204e-7f56-bea0b15848d6@i-love.sakura.ne.jp [1]
      Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Link: https://lore.kernel.org/r/18a02da2-0bf3-550e-b071-2b4ab13c49f0@i-love.sakura.ne.jp
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  14. 04 Sep, 2021 (1 commit)
    • mm: remove flush_kernel_dcache_page · f358afc5
      Committed by Christoph Hellwig
      flush_kernel_dcache_page is a rather confusing interface that implements a
      subset of flush_dcache_page by not being able to properly handle page
      cache mapped pages.
      
      The only callers left are in the exec code, as all other previous callers
      were incorrect since they could have dealt with page cache pages.  Replace
      the calls to flush_kernel_dcache_page with calls to flush_dcache_page,
      which for all architectures either does exactly the same thing, or
      contains one or more of the following:
      
       1) an optimization to defer the cache flush for page cache pages not
          mapped into userspace
       2) additional flushing for mapped page cache pages if cache aliases
          are possible
      
      Link: https://lkml.kernel.org/r/20210712060928.4161649-7-hch@lst.de
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reviewed-by: Ira Weiny <ira.weiny@intel.com>
      Cc: Alex Shi <alexs@kernel.org>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Cercueil <paul@crapouillou.net>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Ulf Hansson <ulf.hansson@linaro.org>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Yoshinori Sato <ysato@users.osdn.me>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 03 Sep, 2021 (1 commit)
  16. 02 Sep, 2021 (2 commits)
  17. 27 Aug, 2021 (1 commit)
  18. 25 Aug, 2021 (7 commits)
  19. 24 Aug, 2021 (2 commits)