    xfs: fix split buffer vector log recovery support · 7d2ffe80
    Committed by Dave Chinner
    A long time ago in a galaxy far away....
    
    .. there was a commit made to fix a Linux-specific "fragmented
    buffer" log recovery problem:
    
    http://oss.sgi.com/cgi-bin/gitweb.cgi?p=archive/xfs-import.git;a=commitdiff;h=b29c0bece51da72fb3ff3b61391a391ea54e1603
    
    That problem occurred when a contiguous dirty region of a buffer was
    split across two pages of an unmapped buffer. It's been a long time
    since that has been done in XFS, and the changes to log entire inode
    buffers for CRC enabled filesystems have re-introduced that corner
    case.
    
    And, of course, it turns out that the above commit didn't actually
    fix anything - it just ensured that log recovery is guaranteed to
    fail when this situation occurs. And now for the gory details.
    
    xfstest xfs/085 is failing with this assert:
    
    XFS (vdb): bad number of regions (0) in inode log format
    XFS: Assertion failed: 0, file: fs/xfs/xfs_log_recover.c, line: 1583
    
    Largely undocumented factoid #1: Log recovery depends on all log
    buffer format items starting with this format:
    
    struct foo_log_format {
    	__uint16_t	type;
    	__uint16_t	size;
    	....
    
    Recovery uses the size field and assumptions about 32 bit alignment
    when decoding format items, so don't pay much attention to the fact
    that log recovery thinks it is decoding an inode log format item -
    it just uses these fields to determine what the size of the item is.
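
    As a rough illustration (these names are invented, not the kernel's
    actual helpers), generic recovery code can size any item by peeking
    at this common prefix alone:

    	struct xlog_common_format {
    		__uint16_t	type;	/* log item type */
    		__uint16_t	size;	/* number of regions in the item */
    	};

    	/* hypothetical helper: how many regions make up this item? */
    	static inline __uint16_t
    	xlog_item_region_count(void *item)
    	{
    		return ((struct xlog_common_format *)item)->size;
    	}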
    
    But why would it see a log format item with a zero size? Well,
    luckily enough xfs_logprint uses the same code and gives the same
    error, so with a bit of gdb magic, it turns out that it isn't a log
    format item that is being decoded. What logprint tells us is this:
    
    Oper (130): tid: a0375e1a  len: 28  clientid: TRANS  flags: none
    BUF:  #regs: 2   start blkno: 144 (0x90)  len: 16  bmap size: 2  flags: 0x4000
    Oper (131): tid: a0375e1a  len: 4096  clientid: TRANS  flags: none
    BUF DATA
    ----------------------------------------------------------------------------
    Oper (132): tid: a0375e1a  len: 4096  clientid: TRANS  flags: none
    xfs_logprint: unknown log operation type (4e49)
    **********************************************************************
    * ERROR: data block=2                                                 *
    **********************************************************************
    
    So we've got a buffer format item (oper 130) that has two regions:
    the format item itself and one dirty region. The subsequent region
    after the buffer format item and its data is then what we are
    tripping over, and the first bytes of it are an inode magic number,
    not a log opheader like there is supposed to be.
    
    That means there's a problem with the buffer format item. Its dirty
    data region is 4096 bytes, and it contains - you guessed it -
    initialised inodes. But inode buffers are 8k, not 4k, and we log
    them in their entirety. So something is wrong here. The buffer
    format item contains:
    
    (gdb) p /x *(struct xfs_buf_log_format *)in_f
    $22 = {blf_type = 0x123c, blf_size = 0x2, blf_flags = 0x4000,
           blf_len = 0x10, blf_blkno = 0x90, blf_map_size = 0x2,
           blf_data_map = {0xffffffff, 0xffffffff, .... }}
    
    Two regions, and a single contiguous dirty region of 64 bits.
    64 bits * 128 bytes per bit = 8k, so this should be followed by a
    single 8k region of data. And the blf_flags tell us that the type
    of buffer is a XFS_BLFT_DINO_BUF - it contains inodes. And because
    it doesn't have the XFS_BLF_INODE_BUF flag set, that means it's an
    inode allocation buffer. So, it should be followed by 8k of inode
    data.
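
    To make that arithmetic concrete, here is a sketch of how the
    expected data length falls out of the format item. XFS_BLF_CHUNK
    (128 bytes per dirty-map bit) is the real constant; the helper is
    invented for illustration:

    	/* count the dirty bits, 128 bytes of buffer per bit */
    	static unsigned int
    	blf_dirty_bytes(__uint32_t *data_map, int map_size)
    	{
    		unsigned int	bits = 0;
    		int		i;

    		for (i = 0; i < map_size; i++)
    			bits += __builtin_popcount(data_map[i]);
    		return bits * XFS_BLF_CHUNK;	/* 64 * 128 = 8k here */
    	}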
    
    But we know that the next region has a header of:
    
    (gdb) p /x *ohead
    $25 = {oh_tid = 0x1a5e37a0, oh_len = 0x100000, oh_clientid = 0x69,
           oh_flags = 0x0, oh_res2 = 0x0}
    
    and so be32_to_cpu(oh_len) = 0x1000 = 4096 bytes. That's simply not
    long enough to hold all the logged data, so there must be another
    region. There is - there's a following opheader for another 4k of
    data that contains the other half of the inode cluster data - the
    one we assert fail on because it's not a log format header.
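
    As an aside, the raw 0x100000 seen in gdb only looks odd because
    oh_len is stored big-endian on disk. A minimal sketch of the
    conversion (xlog_op_header and oh_len are the real names; the
    helper is made up):

    	static __uint32_t
    	ophead_region_len(struct xlog_op_header *ohead)
    	{
    		/* raw 0x00100000 on a little-endian host byte
    		 * swaps to 0x1000 = 4096 */
    		return be32_to_cpu(ohead->oh_len);
    	}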
    
    So why is the second part of the data not being accounted to the
    correct buffer log format structure? It took a little more work
    with gdb to work out that the buffer log format structure was
    expecting it to be there but hadn't accounted for it. At that point
    I went to the kernel code, as clearly this wasn't a bug in
    xfs_logprint and the kernel was writing bad stuff to the log.
    
    First port of call was the buffer item formatting code, and the
    discontiguous memory/contiguous dirty region handling code
    immediately stood out. I've wondered for a long time why the code
    had this comment in it:
    
                            vecp->i_addr = xfs_buf_offset(bp, buffer_offset);
                            vecp->i_len = nbits * XFS_BLF_CHUNK;
                            vecp->i_type = XLOG_REG_TYPE_BCHUNK;
    /*
     * You would think we need to bump the nvecs here too, but we do not
     * this number is used by recovery, and it gets confused by the boundary
     * split here
     *                      nvecs++;
     */
                            vecp++;
    
    And it didn't account for the extra vector pointer. The case being
    handled here is that a contiguous dirty region lies across a
    boundary that cannot be memcpy()d across, and so has to be split
    into two separate operations for xlog_write() to perform.
    
    What this code assumes is that what is written to the log is two
    consecutive blocks of data that are accounted in the buf log format
    item as the same contiguous dirty region and so will get decoded as
    such by the log recovery code.
    
    The thing is, xlog_write() knows nothing about this, and so just
    does its normal thing of adding an opheader for each vector. That
    means the 8k region gets written to the log as two separate regions
    of 4k each, but because nvecs has not been incremented, the buf log
    format item accounts for only one of them.
    
    Hence when we come to log recovery, we process the first 4k region
    and then expect to come across a new item that starts with a log
    format structure of some kind that tells us what the next data is
    going to be. Instead, we hit raw buffer data and things go bad real
    quick.
    
    So, the commit from 2002 that commented out nvecs++ is just plain
    wrong. It breaks log recovery completely, and it would seem the
    only reason this hasn't been noticed since then is that we don't
    log large contiguous regions of multi-page unmapped buffers very
    often. Never would be a closer estimate, at least until the CRC
    code came along....
    
    So, let's fix that by restoring the nvecs accounting for the extra
    region when we hit this case.....
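
    The fixed split branch then ends up looking something like this (a
    sketch of the change, not the verbatim hunk):

    			vecp->i_addr = xfs_buf_offset(bp, buffer_offset);
    			vecp->i_len = nbits * XFS_BLF_CHUNK;
    			vecp->i_type = XLOG_REG_TYPE_BCHUNK;
    			/*
    			 * The dirty region straddles a boundary we
    			 * cannot memcpy() across, so xlog_write()
    			 * emits two regions for it. Account for the
    			 * extra region in the format item too.
    			 */
    			nvecs++;
    			vecp++;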
    
    .... and there's the problem in log recovery it is apparently
    working around:
    
    XFS: Assertion failed: i == item->ri_total, file: fs/xfs/xfs_log_recover.c, line: 2135
    
    Yup, xlog_recover_do_reg_buffer() doesn't handle contiguous dirty
    regions being broken up into multiple regions by the log formatting
    code. That's an easy fix, though - if the number of contiguous dirty
    bits exceeds the length of the region being copied out of the log,
    only account for the number of dirty bits that region covers, and
    then loop again and copy more from the next region. It's a 2 line
    fix.
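
    Sketched in the style of xlog_recover_do_reg_buffer() (the names
    match the real code, but treat this as an illustration of the clamp
    rather than the exact patch):

    		nbits = xfs_contig_bits(buf_f->blf_data_map,
    					buf_f->blf_map_size, bit);
    		/*
    		 * A contiguous dirty range may have been split into
    		 * multiple log regions by the formatting code. Copy
    		 * no more chunks than this log region holds; the
    		 * loop picks up the rest from the next region.
    		 */
    		nbits = MIN(nbits,
    			    item->ri_buf[i].i_len / XFS_BLF_CHUNK);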
    
    Now xfstests xfs/085 passes, we have one less piece of mystery
    code, and one more important piece of knowledge about how to
    structure new log format items.
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Mark Tinguely <tinguely@sgi.com>
    Signed-off-by: Ben Myers <bpm@sgi.com>
    
    (cherry picked from commit 709da6a6)