Commit da8b94ea authored by Artem Bityutskiy

UBIFS: fix recovery broken by the previous recovery fix

Unfortunately, the recovery fix d1606a59b6be4ea392eabd40d1250aa1eeb19efb
(UBIFS: fix extremely rare mount failure) broke recovery. That commit made
UBIFS drop the last min. I/O unit in all journal heads, but this is needed only
for the GC head. And it does not work for non-GC heads. For example, suppose we
have min. I/O units A and B, and A contains a valid node X, which was fsynced,
and then a group of nodes Y which spans the rest of A and B. In this case we'll
drop not only Y, but also X, which is obviously incorrect.

This patch fixes the issue and additionally makes recovery drop the last min.
I/O unit only for the GC head, leaving things as they have been for ages for
the other heads - this is safer.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Parent efcfde54
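
To make the A/B scenario concrete, here is a small, self-contained toy model of the drop policy this patch introduces. It is an illustration, not UBIFS code: the 2048-byte min. I/O unit, the offsets, the node names (X, Y1, Y2) and the gc_head flag are invented for the example; only the two-step policy itself - always drop an incomplete trailing group, and empty the corrupted min. I/O unit only for the GC head - is taken from the patch below.

    /* toy_drop.c - toy model of the fixed recovery tail-drop policy */
    #include <stdio.h>

    enum group_type { NO_NODE_GROUP, IN_NODE_GROUP, LAST_OF_NODE_GROUP };

    struct toy_node {
            const char *name;
            int offs;               /* node start offset within the LEB */
            enum group_type type;   /* mirrors the UBIFS_*_NODE_GROUP flags */
    };

    #define MIN_IO 2048             /* assumed min. I/O unit size */

    int main(void)
    {
            struct toy_node nodes[] = {
                    { "X",  0,    NO_NODE_GROUP }, /* fsynced, must survive   */
                    { "Y1", 512,  IN_NODE_GROUP }, /* group Y, tail of unit A */
                    { "Y2", 2048, IN_NODE_GROUP }, /* group Y, start of unit B */
            };
            int cnt = 3;
            int offs = 2560;        /* scanning stopped here: corruption in B */
            int min_io_unit = offs / MIN_IO * MIN_IO;   /* start of unit B */
            int gc_head = 0;        /* set to 1 to emulate jhead == GCHD */

            /* Step 1 (all heads): drop the incomplete group at the end. */
            while (cnt > 0 && nodes[cnt - 1].type == IN_NODE_GROUP)
                    offs = nodes[--cnt].offs;

            /* Step 2 (GC head only): empty the corrupted min. I/O unit. */
            if (gc_head)
                    while (cnt > 0 && offs > min_io_unit)
                            offs = nodes[--cnt].offs;

            printf("%d node(s) kept, LEB reusable from offset %d\n", cnt, offs);
            return 0;
    }

With gc_head == 0 this prints "1 node(s) kept, LEB reusable from offset 512": the incomplete group Y is gone, but the fsynced node X survives - which is exactly what the pre-fix code got wrong for non-GC heads.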
--- a/fs/ubifs/recovery.c
+++ b/fs/ubifs/recovery.c
@@ -564,19 +564,15 @@ static int fix_unclean_leb(struct ubifs_info *c, struct ubifs_scan_leb *sleb,
 }
 
 /**
- * drop_last_node - drop the last node or group of nodes.
+ * drop_last_group - drop the last group of nodes.
  * @sleb: scanned LEB information
  * @offs: offset of dropped nodes is returned here
- * @grouped: non-zero if whole group of nodes have to be dropped
  *
  * This is a helper function for 'ubifs_recover_leb()' which drops the last
- * node of the scanned LEB or the last group of nodes if @grouped is not zero.
- * This function returns %1 if a node was dropped and %0 otherwise.
+ * group of nodes of the scanned LEB.
  */
-static int drop_last_node(struct ubifs_scan_leb *sleb, int *offs, int grouped)
+static void drop_last_group(struct ubifs_scan_leb *sleb, int *offs)
 {
-        int dropped = 0;
-
         while (!list_empty(&sleb->nodes)) {
                 struct ubifs_scan_node *snod;
                 struct ubifs_ch *ch;
@@ -585,17 +581,39 @@ static int drop_last_node(struct ubifs_scan_leb *sleb, int *offs, int grouped)
                                   list);
                 ch = snod->node;
                 if (ch->group_type != UBIFS_IN_NODE_GROUP)
-                        return dropped;
-                dbg_rcvry("dropping node at %d:%d", sleb->lnum, snod->offs);
+                        break;
+
+                dbg_rcvry("dropping grouped node at %d:%d",
+                          sleb->lnum, snod->offs);
+                *offs = snod->offs;
+                list_del(&snod->list);
+                kfree(snod);
+                sleb->nodes_cnt -= 1;
+        }
+}
+
+/**
+ * drop_last_node - drop the last node.
+ * @sleb: scanned LEB information
+ * @offs: offset of dropped nodes is returned here
+ *
+ * This is a helper function for 'ubifs_recover_leb()' which drops the last
+ * node of the scanned LEB.
+ */
+static void drop_last_node(struct ubifs_scan_leb *sleb, int *offs)
+{
+        struct ubifs_scan_node *snod;
+
+        if (!list_empty(&sleb->nodes)) {
+                snod = list_entry(sleb->nodes.prev, struct ubifs_scan_node,
+                                  list);
+
+                dbg_rcvry("dropping last node at %d:%d", sleb->lnum, snod->offs);
                 *offs = snod->offs;
                 list_del(&snod->list);
                 kfree(snod);
                 sleb->nodes_cnt -= 1;
-                dropped = 1;
-                if (!grouped)
-                        break;
         }
-        return dropped;
 }
 
 /**
@@ -697,59 +715,62 @@ struct ubifs_scan_leb *ubifs_recover_leb(struct ubifs_info *c, int lnum,
                  * If nodes are grouped, always drop the incomplete group at
                  * the end.
                  */
-                drop_last_node(sleb, &offs, 1);
+                drop_last_group(sleb, &offs);
 
-        /*
-         * While we are in the middle of the same min. I/O unit keep dropping
-         * nodes. So basically, what we want is to make sure that the last min.
-         * I/O unit where we saw the corruption is dropped completely with all
-         * the uncorrupted nodes which may possibly sit there.
-         *
-         * In other words, let's name the min. I/O unit where the corruption
-         * starts B, and the previous min. I/O unit A. The below code tries to
-         * deal with a situation when half of B contains valid nodes or the end
-         * of a valid node, and the second half of B contains corrupted data or
-         * garbage. This means that UBIFS had been writing to B just before the
-         * power cut happened. I do not know how realistic is this scenario
-         * that half of the min. I/O unit had been written successfully and the
-         * other half not, but this is possible in our 'failure mode emulation'
-         * infrastructure at least.
-         *
-         * So what is the problem, why we need to drop those nodes? Whey can't
-         * we just clean-up the second half of B by putting a padding node
-         * there? We can, and this works fine with one exception which was
-         * reproduced with power cut emulation testing and happens extremely
-         * rarely. The description follows, but it is worth noting that that is
-         * only about the GC head, so we could do this trick only if the bud
-         * belongs to the GC head, but it does not seem to be worth an
-         * additional "if" statement.
-         *
-         * So, imagine the file-system is full, we run GC which is moving valid
-         * nodes from LEB X to LEB Y (obviously, LEB Y is the current GC head
-         * LEB). The @c->gc_lnum is -1, which means that GC will retain LEB X
-         * and will try to continue. Imagine that LEB X is currently the
-         * dirtiest LEB, and the amount of used space in LEB Y is exactly the
-         * same as amount of free space in LEB X.
-         *
-         * And a power cut happens when nodes are moved from LEB X to LEB Y. We
-         * are here trying to recover LEB Y which is the GC head LEB. We find
-         * the min. I/O unit B as described above. Then we clean-up LEB Y by
-         * padding min. I/O unit. And later 'ubifs_rcvry_gc_commit()' function
-         * fails, because it cannot find a dirty LEB which could be GC'd into
-         * LEB Y! Even LEB X does not match because the amount of valid nodes
-         * there does not fit the free space in LEB Y any more! And this is
-         * because of the padding node which we added to LEB Y. The
-         * user-visible effect of this which I once observed and analysed is
-         * that we cannot mount the file-system with -ENOSPC error.
-         *
-         * So obviously, to make sure that situation does not happen we should
-         * free min. I/O unit B in LEB Y completely and the last used min. I/O
-         * unit in LEB Y should be A. This is basically what the below code
-         * tries to do.
-         */
-        while (min_io_unit == round_down(offs, c->min_io_size) &&
-               min_io_unit != offs &&
-               drop_last_node(sleb, &offs, grouped));
+        if (jhead == GCHD) {
+                /*
+                 * If this LEB belongs to the GC head then while we are in the
+                 * middle of the same min. I/O unit keep dropping nodes. So
+                 * basically, what we want is to make sure that the last min.
+                 * I/O unit where we saw the corruption is dropped completely
+                 * with all the uncorrupted nodes which may possibly sit there.
+                 *
+                 * In other words, let's name the min. I/O unit where the
+                 * corruption starts B, and the previous min. I/O unit A. The
+                 * below code tries to deal with a situation when half of B
+                 * contains valid nodes or the end of a valid node, and the
+                 * second half of B contains corrupted data or garbage. This
+                 * means that UBIFS had been writing to B just before the power
+                 * cut happened. I do not know how realistic is this scenario
+                 * that half of the min. I/O unit had been written successfully
+                 * and the other half not, but this is possible in our 'failure
+                 * mode emulation' infrastructure at least.
+                 *
+                 * So what is the problem, why we need to drop those nodes? Why
+                 * can't we just clean-up the second half of B by putting a
+                 * padding node there? We can, and this works fine with one
+                 * exception which was reproduced with power cut emulation
+                 * testing and happens extremely rarely.
+                 *
+                 * Imagine the file-system is full, we run GC which starts
+                 * moving valid nodes from LEB X to LEB Y (obviously, LEB Y is
+                 * the current GC head LEB). The @c->gc_lnum is -1, which means
+                 * that GC will retain LEB X and will try to continue. Imagine
+                 * that LEB X is currently the dirtiest LEB, and the amount of
+                 * used space in LEB Y is exactly the same as amount of free
+                 * space in LEB X.
+                 *
+                 * And a power cut happens when nodes are moved from LEB X to
+                 * LEB Y. We are here trying to recover LEB Y which is the GC
+                 * head LEB. We find the min. I/O unit B as described above.
+                 * Then we clean-up LEB Y by padding min. I/O unit. And later
+                 * 'ubifs_rcvry_gc_commit()' function fails, because it cannot
+                 * find a dirty LEB which could be GC'd into LEB Y! Even LEB X
+                 * does not match because the amount of valid nodes there does
+                 * not fit the free space in LEB Y any more! And this is
+                 * because of the padding node which we added to LEB Y. The
+                 * user-visible effect of this which I once observed and
+                 * analysed is that we cannot mount the file-system with
+                 * -ENOSPC error.
+                 *
+                 * So obviously, to make sure that situation does not happen we
+                 * should free min. I/O unit B in LEB Y completely and the last
+                 * used min. I/O unit in LEB Y should be A. This is basically
+                 * what the below code tries to do.
+                 */
+                while (offs > min_io_unit)
+                        drop_last_node(sleb, &offs);
+        }
 
         buf = sbuf + offs;
         len = c->leb_size - offs;
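
Read together, the hunks above leave the tail of 'ubifs_recover_leb()' looking roughly like the sketch below. This is an assembled reading rather than a verbatim quote: the 'min_io_unit' computation and the 'if (grouped)' guard sit just above the excerpt, and the 'jhead' parameter comes from parts of the patch that are not shown here (GCHD is the GC journal head number from fs/ubifs/ubifs.h).

    min_io_unit = round_down(offs, c->min_io_size);
    if (grouped)
            /* All heads: drop an incomplete group at the end of the LEB. */
            drop_last_group(sleb, &offs);

    if (jhead == GCHD) {
            /*
             * GC head only: keep dropping whole nodes until the recovery
             * offset falls back into the previous min. I/O unit, so that
             * 'ubifs_rcvry_gc_commit()' can later find a dirty LEB whose
             * live data still fits this one.
             */
            while (offs > min_io_unit)
                    drop_last_node(sleb, &offs);
    }

    buf = sbuf + offs;
    len = c->leb_size - offs;

Restricting the aggressive min. I/O unit drop to the GC head is what makes it safe: nodes in the GC head were moved there by garbage collection and, until the commit completes, still exist in the LEB being cleaned, so dropping them loses nothing; on the other journal heads the same drop could discard an fsynced node such as X.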