    [PATCH] fix garbage instead of zeroes in UFS · d63b7090
    Evgeniy Dushistov committed
    Looks like this is the problem that Al Viro pointed out some time ago:
    
    ufs's get_block callback allocates 16k of disk at a time, and links that
    entire 16k into the file's metadata.  But because get_block is called for only
    a single buffer_head (a 2k buffer_head in this case?) we are only able to tell
    the VFS that this 2k is buffer_new().
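
    A minimal sketch of the pattern being described, for illustration only: the
    helper ufs_alloc_fragment() is hypothetical, while map_bh() and
    set_buffer_new() are the real buffer_head helpers.

        #include <linux/fs.h>
        #include <linux/buffer_head.h>

        /* Illustrative get_block-style callback: the whole 16k fragment is
         * allocated on disk, but only the single 2k buffer_head handed in by
         * the VFS can be flagged as new. */
        static int example_get_block(struct inode *inode, sector_t block,
                                     struct buffer_head *bh_result, int create)
        {
                int new = 0;
                sector_t phys;

                /* hypothetical helper: allocates a 16k fragment on first use */
                phys = ufs_alloc_fragment(inode, block, create, &new);
                if (!phys)
                        return -EIO;

                map_bh(bh_result, inode->i_sb, phys);
                if (new)
                        set_buffer_new(bh_result);  /* only this 2k is reported new */
                /* the remaining 14k of the fragment were also just allocated,
                 * but later calls mapping them return !buffer_new(), so the
                 * VFS never zeroes those pages */
                return 0;
        }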
    
    So when ufs_getfrag_block() is later called to map some more data in the file,
    and when that data resides within the remaining 14k of this fragment,
    ufs_getfrag_block() will incorrectly return a !buffer_new() buffer_head.
    
    I don't see a _right_ way to do nullification of the whole block: if we use
    the inode page cache, some pages may be outside of the inode limits (inode
    size) and will be lost; if we use the block device page cache, it is possible
    to zero real data if the inode page cache is used later.
    
    The simplest way, as far as I can see, is to use the block device page cache,
    and not only mark it dirty but also sync it during the "nullification".  I used
    my simple test collection, which I use to check that create, open, write, read
    and close work on ufs, and I see that this patch makes the ufs code 18% slower
    than before.
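
    As a sketch of that "nullification" through the block device page cache
    (the function name and call site here are illustrative only, not necessarily
    what the patch does verbatim):

        #include <linux/fs.h>
        #include <linux/buffer_head.h>

        /* Zero the n blocks starting at beg via the block device page cache,
         * mark them dirty and sync them so stale disk contents cannot show
         * up later through the inode page cache. */
        static void example_clear_frags(struct super_block *sb, sector_t beg,
                                        unsigned n)
        {
                sector_t end = beg + n;

                for (; beg < end; ++beg) {
                        struct buffer_head *bh = sb_getblk(sb, beg);

                        if (!bh)
                                continue;
                        lock_buffer(bh);
                        memset(bh->b_data, 0, bh->b_size);
                        set_buffer_uptodate(bh);
                        mark_buffer_dirty(bh);
                        unlock_buffer(bh);
                        sync_dirty_buffer(bh);  /* the sync is what costs ~18% */
                        brelse(bh);
                }
        }
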
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>