    md: Set prev_flush_start and flush_bio in an atomic way · dc5d17a3
    Committed by Xiao Ni
    A customer reported a crash caused by a flush request. A warning is
    triggered before the crash:
    
            /* new request after previous flush is completed */
            if (ktime_after(req_start, mddev->prev_flush_start)) {
                    WARN_ON(mddev->flush_bio);
                    mddev->flush_bio = bio;
                    bio = NULL;
            }
    
    The WARN_ON is triggered. We use a spin lock to protect prev_flush_start
    and flush_bio in md_flush_request, but there is no lock protection in
    md_submit_flush_data. There, flush_bio can be set to NULL before
    prev_flush_start is updated because the compiler may reorder the write
    instructions (see the sketch below).
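
    Before this patch, the update path in md_submit_flush_data looks roughly
    like the following (a simplified sketch, not the verbatim code; the rest
    of the function is omitted):

            /*
             * Two plain stores with no lock or barrier between them, so the
             * compiler (or CPU) is free to make the flush_bio = NULL store
             * visible before prev_flush_start is updated.
             */
            mddev->prev_flush_start = mddev->start_flush;
            mddev->flush_bio = NULL;
            wake_up(&mddev->sb_wait);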
    
    For example, flush bio1 sets flush_bio to NULL first in
    md_submit_flush_data. An interrupt, or a VMware-induced extended stall,
    happens between the updates to flush_bio and prev_flush_start. Because
    flush_bio is NULL, flush bio2 can take the lock and be submitted to the
    underlying disks. Then flush bio1 updates prev_flush_start after the
    interrupt or extended stall ends.
    
    Then flush bio3 enters md_flush_request. Its start time req_start is
    after prev_flush_start, and flush_bio is not NULL (flush bio2 hasn't
    finished), so the WARN_ON triggers and INIT_WORK is called again.
    INIT_WORK() re-initializes the list pointers in the work_struct, which
    can corrupt the work list and queue the work_struct a second time. With
    the work list corrupted, invalid work items may be used, causing a crash
    in process_one_work.
    
    We need to make sure only one flush bio can be handled at a time, so
    take the spin lock in md_submit_flush_data as well to update
    prev_flush_start and flush_bio atomically (sketched below).
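
    A sketch of the fixed update path, assuming the same mddev->lock spinlock
    (taken with spin_lock_irq) that md_flush_request already uses; this is a
    simplified illustration, not the verbatim patch:

            /*
             * Update both fields under mddev->lock so a concurrent
             * md_flush_request() either still sees the old flush_bio (and
             * waits) or sees both the new prev_flush_start and
             * flush_bio == NULL.
             */
            spin_lock_irq(&mddev->lock);
            mddev->prev_flush_start = mddev->start_flush;
            mddev->flush_bio = NULL;
            spin_unlock_irq(&mddev->lock);
            wake_up(&mddev->sb_wait);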
    Reviewed-by: David Jeffery <djeffery@redhat.com>
    Signed-off-by: Xiao Ni <xni@redhat.com>
    Signed-off-by: Song Liu <songliubraving@fb.com>