- 23 April 2008, 4 commits
-
-
Committed by David Woodhouse
Just to keep the debug code happy when it's adding all the blocks up. Otherwise, they disappear for a while, while the locks are dropped to check them and write the cleanmarker.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
Committed by Anders Grafström
It looks like the error paths in jffs2_block_check_erase() have wrong return values. A block that failed to be erased never gets marked as bad.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
Committed by David Woodhouse
The problem fixed in commit 014b164e (space leak with in-band cleanmarkers) would have been caught a lot quicker if our paranoid debugging mode had included adding up the size counts from all the eraseblocks and comparing the totals with the counts in the superblock. Add that. Make jffs2_mark_erased_block() file the newly-erased block on the free_list before calling the debug function, to make it happy.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
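The check being added is conceptually simple: walk every eraseblock, sum its counters, and compare the totals against the superblock. A stand-alone sketch of that kind of consistency check (hypothetical, simplified structures and field names, not the actual jffs2 debug code):

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical, simplified counters. */
struct eraseblock { unsigned int free_size, used_size, dirty_size, wasted_size; };
struct superblock {
    unsigned int free_size, used_size, dirty_size, wasted_size;
    struct eraseblock *blocks;
    size_t nr_blocks;
};

/* Return 0 if the summed per-block counts match the superblock totals. */
static int check_block_totals(const struct superblock *sb)
{
    unsigned int free = 0, used = 0, dirty = 0, wasted = 0;
    for (size_t i = 0; i < sb->nr_blocks; i++) {
        free   += sb->blocks[i].free_size;
        used   += sb->blocks[i].used_size;
        dirty  += sb->blocks[i].dirty_size;
        wasted += sb->blocks[i].wasted_size;
    }
    if (free != sb->free_size || used != sb->used_size ||
        dirty != sb->dirty_size || wasted != sb->wasted_size) {
        fprintf(stderr, "accounting mismatch: summed free %u vs superblock %u\n",
                free, sb->free_size);
        return -1;
    }
    return 0;
}

int main(void)
{
    struct eraseblock blocks[2] = { { 100, 28, 0, 0 }, { 64, 64, 0, 0 } };
    struct superblock sb = { 164, 92, 0, 0, blocks, 2 };
    return check_block_totals(&sb) ? 1 : 0;
}
```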
-
Committed by David Woodhouse
We were accounting for the cleanmarker by calling jffs2_link_node_ref() (without locking!), which adjusted both superblock and per-eraseblock accounting, subtracting the size of the cleanmarker from {jeb,c}->free_size and adding it to {jeb,c}->used_size. But only _then_ were we adding the size of the newly-erased block back to the superblock counts, and we were adding each of jeb->{free,used}_size to the corresponding superblock counts. Thus, the size of the cleanmarker was effectively subtracted from the superblock's free_size _twice_. Fix this by always adding a full eraseblock size to c->free_size when we've erased a block. And call jffs2_link_node_ref() under the proper lock, while we're at it. Thanks to Alexander Yurchenko and/or Damir Shayhutdinov for (almost) pinpointing the problem.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
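The double subtraction is easiest to see as plain arithmetic. A minimal sketch of the before/after superblock deltas, using made-up block and cleanmarker sizes rather than the real structures:

```c
#include <stdio.h>

int main(void)
{
    const long block_size = 65536, cleanmarker = 12;

    /* Buggy order: account the cleanmarker first (free -= cm, used += cm),
     * then credit back only jeb->free_size, which already excludes the
     * cleanmarker, so free_size ends up short by one cleanmarker. */
    long buggy_free_delta = -cleanmarker + (block_size - cleanmarker);
    long buggy_used_delta =  cleanmarker + cleanmarker;

    /* Fixed order: credit a full eraseblock of free space first, then let
     * the cleanmarker accounting move its size from free to used. */
    long fixed_free_delta = block_size - cleanmarker;
    long fixed_used_delta = cleanmarker;

    printf("free delta: buggy %ld, fixed %ld (expected %ld)\n",
           buggy_free_delta, fixed_free_delta, block_size - cleanmarker);
    printf("used delta: buggy %ld, fixed %ld (expected %ld)\n",
           buggy_used_delta, fixed_used_delta, cleanmarker);
    return 0;
}
```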
-
- 22 April 2008, 1 commit
-
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
- 24 September 2007, 1 commit
-
-
Committed by Andy Lowe
Fix a couple of instances in JFFS2 where the unpoint() routine is being called with the wrong length in cases where the point() routine truncated a request.
Signed-off-by: Andy Lowe <alowe@mvista.com>
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
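The pattern being fixed, sketched with simplified stand-in functions (the real MTD point()/unpoint() operations have different signatures): when point() maps less than was asked for, the matching unpoint() has to be given the mapped length, not the requested one.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-ins for the MTD point()/unpoint() calls: point() may
 * map less than requested and reports the mapped length via *retlen. */
static uint8_t flash[256];

static int flash_point(unsigned int ofs, size_t len, size_t *retlen, uint8_t **virt)
{
    size_t avail = sizeof(flash) - ofs;
    *retlen = len < avail ? len : avail;   /* the request may be truncated */
    *virt = &flash[ofs];
    return 0;
}

static void flash_unpoint(unsigned int ofs, size_t len)
{
    printf("unpoint ofs=%u len=%zu\n", ofs, len);
}

int main(void)
{
    size_t retlen;
    uint8_t *virt;

    if (flash_point(200, 128, &retlen, &virt))
        return 1;
    /* ... read from virt[0..retlen) ... */
    (void)virt;

    /* The bug being fixed: passing the *requested* length (128) to unpoint.
     * When point() truncated the request, the mapped length must be used: */
    flash_unpoint(200, retlen);   /* correct: 56 here, not 128 */
    return 0;
}
```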
-
- 22 August 2007, 1 commit
-
-
Committed by Andrew Morton
fs/jffs2/erase.c: In function 'jffs2_block_check_erase':
fs/jffs2/erase.c:355: warning: format '%08x' expects type 'unsigned int', but argument 3 has type 'long unsigned int'
and
fs/jffs2/erase.c: In function 'jffs2_erase_pending_blocks':
fs/jffs2/erase.c:404: warning: 'bad_offset' may be used uninitialized in this function
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
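The commit only quotes the warnings; as a reminder, here is a small sketch of the two usual ways such warnings are silenced (not necessarily what the actual patch did): match the format specifier to the argument type, and give the possibly-uninitialized variable an initial value.

```c
#include <stdio.h>

int main(void)
{
    unsigned long ofs = 0xdeadbeefUL;
    unsigned int bad_offset = 0;   /* initialise to avoid "may be used
                                      uninitialized" on some code paths */

    /* printf("%08x", ofs) would warn: '%x' expects unsigned int but the
     * argument is unsigned long.  Either cast or use the 'l' modifier: */
    printf("%08x\n", (unsigned int)ofs);
    printf("%08lx\n", ofs);
    printf("bad_offset=%08x\n", bad_offset);
    return 0;
}
```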
-
- 10 July 2007, 1 commit
-
-
Committed by David Woodhouse
Convert many spaces to tabs; one or two other minor cosmetic fixes.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
- 29 June 2007, 3 commits
-
-
Committed by Joakim Tjernlund
With the current design, erase_free_sem is locked every time a flash block is being erased. For NOR flashes, ~1 second is needed to erase a single flash block. In the worst-case scenario, erase_free_sem may be locked for a couple of seconds when a number of blocks are being erased (e.g. after a large file was removed). While erase_free_sem is locked, all read/write operations for the given JFFS2 partition are locked too; in effect, from time to time access to the JFFS2 partition is blocked for a number of seconds. This fix makes the critical section in the flash erasing procedure shorter: now erase_free_sem is locked around the erase_completion_lock spinlock only. Originally from Radoslaw Bisewski.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
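An illustrative before/after of the locking change, using POSIX mutexes in place of the kernel semaphore and spinlock (not the jffs2 code itself; only the lock names come from the commit message):

```c
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t erase_free_sem = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t erase_completion_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the roughly one-second NOR block erase. */
static void slow_flash_erase(void) { usleep(100 * 1000); }

/* Before: readers and writers contending on erase_free_sem stall for the
 * whole erase. */
static void erase_block_old(void)
{
    pthread_mutex_lock(&erase_free_sem);
    slow_flash_erase();
    pthread_mutex_lock(&erase_completion_lock);
    /* ... move the block between lists, update counters ... */
    pthread_mutex_unlock(&erase_completion_lock);
    pthread_mutex_unlock(&erase_free_sem);
}

/* After: the erase runs unlocked; erase_free_sem is held only around the
 * short, erase_completion_lock-protected bookkeeping. */
static void erase_block_new(void)
{
    slow_flash_erase();
    pthread_mutex_lock(&erase_free_sem);
    pthread_mutex_lock(&erase_completion_lock);
    /* ... move the block between lists, update counters ... */
    pthread_mutex_unlock(&erase_completion_lock);
    pthread_mutex_unlock(&erase_free_sem);
}

int main(void) { erase_block_old(); erase_block_new(); return 0; }
```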
-
Committed by Joakim Tjernlund
Faster and won't trash the D-cache.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
Committed by Joakim Tjernlund
When pdflush is erasing lots of sectors, drivers calling mtd->sync will hang until all the blocks are erased. Be nicer.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
- 25 April 2007, 1 commit
-
-
Committed by David Woodhouse
In particular, remove the bit in the LICENCE file about contacting Red Hat for alternative arrangements. Their errant IS department broke that arrangement a long time ago: the policy of collecting copyright assignments from contributors came to an end when the plug was pulled on the servers hosting the project, without notice or reason. We do still dual-license it for use with eCos, with the GPL+exception licence approved by the FSF as being GPL-compatible. It's just that nobody has the right to license it differently.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
- 18 April 2007, 1 commit
-
-
Committed by Artem Bityutskiy
When the MTD driver returns a write failure, the following deadlock occurs: we are in __jffs2_flush_wbuf() and hold &c->wbuf_sem; the write fails; then
jffs2_wbuf_recover() -> jffs2_reserve_space_gc() -> jffs2_do_reserve_space() -> jffs2_erase_pending_blocks() -> jffs2_flash_read()
tries to lock &c->wbuf_sem again. Deadlock.
Reported-by: Adrian Hunter <ext-adrian.hunter@nokia.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
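A generic illustration of the pattern (not the jffs2 code): an error-recovery path re-acquires a non-recursive lock that the same thread already holds. Here trylock is used so the sketch reports the would-be deadlock instead of hanging.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t wbuf_sem = PTHREAD_MUTEX_INITIALIZER;

static void erase_pending_blocks(void)
{
    /* Deep in the recovery path, the read routine wants the same lock.
     * With a plain pthread_mutex_lock() this would self-deadlock. */
    if (pthread_mutex_trylock(&wbuf_sem) != 0) {
        puts("would deadlock: wbuf_sem is already held by this thread");
        return;
    }
    pthread_mutex_unlock(&wbuf_sem);
}

static void flush_wbuf(void)
{
    pthread_mutex_lock(&wbuf_sem);     /* held across the whole flush */
    /* write fails -> recover -> reserve space -> erase pending blocks */
    erase_pending_blocks();
    pthread_mutex_unlock(&wbuf_sem);
}

int main(void)
{
    flush_wbuf();
    return 0;
}
```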
-
- 27 June 2006, 2 commits
-
-
Committed by KaiGai Kohei
- When an xdatum is removed, a new xdatum with a 'delete marker' is written (version==0xffffffff means 'delete marker').
- When an xref is removed, a new xref with a 'delete marker' is written (an odd-numbered xseqno means 'delete marker').
- delete_xattr_(datum/xref)_delay() are new deletion functions which are added. We can only use them if we can detect that the target obsolete xdatum/xref is an orphan or erroneous one (e.g. on inode deletion, or when detecting a CRC error).
[1/3] jffs2-xattr-v6-01-delete_marker.patch
Signed-off-by: KaiGai Kohei <kaigai@ak.jp.nec.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
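A couple of hypothetical helper predicates reflecting the conventions described above (version 0xffffffff marks a deleted xdatum, an odd xseqno marks a deleted xref); the function names are made up for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define XDATUM_DELETE_MARKER 0xffffffffu

/* An xdatum written with version 0xffffffff is a deletion marker. */
static int xdatum_is_delete_marker(uint32_t version)
{
    return version == XDATUM_DELETE_MARKER;
}

/* An xref written with an odd xseqno is a deletion marker. */
static int xref_is_delete_marker(uint32_t xseqno)
{
    return (xseqno & 1) != 0;
}

int main(void)
{
    printf("xdatum version 0xffffffff: %d\n", xdatum_is_delete_marker(0xffffffffu));
    printf("xref xseqno 7: %d\n", xref_is_delete_marker(7));
    printf("xref xseqno 8: %d\n", xref_is_delete_marker(8));
    return 0;
}
```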
-
Committed by Akinobu Mita
This patch converts the combination of list_del(A) and list_add(A, B) to list_move(A, B) under fs/.
Cc: Ian Kent <raven@themaw.net>
Acked-by: Joel Becker <joel.becker@oracle.com>
Cc: Neil Brown <neilb@cse.unsw.edu.au>
Cc: Hans Reiser <reiserfs-dev@namesys.com>
Cc: Urban Widmark <urban@teststation.com>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Mark Fasheh <mark.fasheh@oracle.com>
Signed-off-by: Akinobu Mita <mita@miraclelinux.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
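A self-contained sketch of what the conversion amounts to, using a minimal kernel-style circular list rather than the real <linux/list.h>:

```c
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_del(struct list_head *e)
{
    e->prev->next = e->next;
    e->next->prev = e->prev;
}

static void list_add(struct list_head *e, struct list_head *head)
{
    e->next = head->next;
    e->prev = head;
    head->next->prev = e;
    head->next = e;
}

/* The helper being introduced: one call instead of the open-coded
 * list_del(A); list_add(A, B); pairs. */
static void list_move(struct list_head *e, struct list_head *head)
{
    list_del(e);
    list_add(e, head);
}

int main(void)
{
    struct list_head a_list, b_list, node;
    list_init(&a_list); list_init(&b_list); list_init(&node);

    list_add(&node, &a_list);
    list_move(&node, &b_list);  /* was: list_del(&node); list_add(&node, &b_list); */

    printf("node on b_list: %d\n", b_list.next == &node);
    return 0;
}
```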
-
- 27 May 2006, 1 commit
-
-
Committed by David Woodhouse
This allows us to drop another pointer from struct jffs2_raw_node_ref, shrinking it to 8 bytes on 32-bit machines (if the TEST_TOTLEN paranoia check is turned off, which will be committed soon).
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
- 25 May 2006, 2 commits
-
-
Committed by David Woodhouse
Preallocation of refs is shortly going to be a per-eraseblock thing, rather than per-filesystem. Add the required argument to the function.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
Committed by David Woodhouse
... to jffs2_free_jeb_node_refs(), since that's what it does.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
- 24 May 2006, 1 commit
-
-
Committed by David Woodhouse
As the first step towards eliminating the ref->next_phys member and saving memory by using an _array_ of struct jffs2_raw_node_ref per eraseblock, stop the write functions from allocating their own refs; have them just _reserve_ the appropriate number instead. Then jffs2_link_node_ref() can just fill them in. Use a linked list of pre-allocated refs in the superblock, for now. Once we switch to an array, it'll just be a case of extending that array.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
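A sketch of the reserve-then-fill idea under stated assumptions (hypothetical, much-simplified types and function names; the real code keeps the pool in the superblock and sizes the reservation from the node being written): reserve enough refs before the write so the later link step cannot fail.

```c
#include <stdlib.h>
#include <stdio.h>

struct raw_node_ref { unsigned int flash_offset, totlen; struct raw_node_ref *next; };

/* Pool of pre-allocated refs, kept as a simple singly linked free list. */
static struct raw_node_ref *ref_pool;

/* Make sure at least 'n' refs are available before the write starts. */
static int reserve_refs(unsigned int n)
{
    unsigned int have = 0;
    for (struct raw_node_ref *r = ref_pool; r; r = r->next)
        have++;
    while (have < n) {
        struct raw_node_ref *r = calloc(1, sizeof(*r));
        if (!r)
            return -1;
        r->next = ref_pool;
        ref_pool = r;
        have++;
    }
    return 0;
}

/* The link step just consumes one reserved ref and fills it in. */
static struct raw_node_ref *link_node_ref(unsigned int ofs, unsigned int len)
{
    struct raw_node_ref *r = ref_pool;
    ref_pool = r->next;
    r->flash_offset = ofs;
    r->totlen = len;
    r->next = NULL;
    return r;
}

int main(void)
{
    if (reserve_refs(2))
        return 1;
    struct raw_node_ref *a = link_node_ref(0, 68);
    struct raw_node_ref *b = link_node_ref(68, 40);
    printf("refs at %u and %u\n", a->flash_offset, b->flash_offset);
    free(a); free(b);
    return 0;
}
```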
-
- 22 May 2006, 2 commits
-
-
Committed by David Woodhouse
In a couple of places, we assume that what's at the end of the ->next_in_ino list is a struct jffs2_inode_cache. Let's check for that, since we expect it to change soon.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
Committed by David Woodhouse
Let's avoid the potential for forgetting to set ref->next_in_ino, by doing it within jffs2_link_node_ref() instead. This highlights the ugliness of what we're currently doing with the xattr_datum and xattr_ref structures; we should find a nicer way of dealing with that.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
- 21 May 2006, 2 commits
-
-
Committed by David Woodhouse
For RWCOMPAT and ROCOMPAT nodes, we should still allow the mount to succeed. Just abandon the summary and fall through to the full scan.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
-
Committed by David Woodhouse
The same sequence of code was repeated in many places, to add a new struct jffs2_raw_node_ref to an eraseblock and adjust the space accounting accordingly. Move it out-of-line.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
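A sketch of what such an out-of-line helper looks like, with hypothetical, much-simplified structures (the real jffs2_link_node_ref() handles more state): append the ref to the block's chain and move its length from free to used, in one place instead of many.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical, much-simplified structures. */
struct raw_node_ref { unsigned int flash_offset, totlen; struct raw_node_ref *next_phys; };
struct eraseblock   { unsigned int free_size, used_size; struct raw_node_ref *last_node; };
struct superblock   { unsigned int free_size, used_size; };

static void link_node_ref(struct superblock *c, struct eraseblock *jeb,
                          unsigned int ofs, unsigned int len)
{
    struct raw_node_ref *ref = calloc(1, sizeof(*ref));
    if (!ref)
        abort();
    ref->flash_offset = ofs;
    ref->totlen = len;

    /* Chain the ref onto the eraseblock's node list. */
    if (jeb->last_node)
        jeb->last_node->next_phys = ref;
    jeb->last_node = ref;

    /* Space accounting: the node consumes 'len' bytes of free space. */
    jeb->free_size -= len;  jeb->used_size += len;
    c->free_size   -= len;  c->used_size   += len;
}

int main(void)
{
    struct superblock c = { 65536, 0 };
    struct eraseblock jeb = { 65536, 0, NULL };
    link_node_ref(&c, &jeb, 0, 68);
    printf("jeb free %u used %u\n", jeb.free_size, jeb.used_size);
    return 0;
}
```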
-
- 07 November 2005, 3 commits
-
-
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Artem B. Bityutskiy
Simplify the debugging code further. Update the TODO list.
Signed-off-by: Artem B. Bityutskiy <dedekind@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Artem B. Bityutskiy
Various simplifications. printk format corrections. Convert more code to use the new debug functions.
Signed-off-by: Artem B. Bityutskiy <dedekind@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 06 November 2005, 1 commit
-
-
Committed by Artem B. Bityutskiy
Move the debug functions into a separate source file.
Signed-off-by: Artem B. Bityutskiy <dedekind@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 15 July 2005, 1 commit
-
-
Committed by Thomas Gleixner
In the rare case of failing to write the cleanmarker, the allocated node was not freed. Pointed out by Forrest Zhao; initial cleanup by Joern Engel.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 23 May 2005, 7 commits
-
-
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Artem B. Bityuckiy
Fix the bug that causes problems when compiling for NOR. We read a newly erased block, so we don't need to check ECC. Define jffs2_is_writebuffered() as zero if there is no wbuf.
Signed-off-by: Artem B. Bityuckiy <dedekind@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
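A sketch of the compile-out pattern described above (the config symbol and struct are placeholders, not the real JFFS2 names): with no write buffer configured, the predicate is a constant zero and the wbuf-only branches drop away.

```c
#include <stdio.h>

/* Placeholder struct, for illustration only. */
struct jffs2_sb_info_sketch { void *wbuf; };

#ifdef HAVE_WRITE_BUFFER
#define jffs2_is_writebuffered(c) ((c)->wbuf != NULL)
#else
/* No write buffer compiled in: a constant 0, so the compiler can drop the
 * write-buffer-only branches (e.g. re-reading and ECC-checking a freshly
 * erased block) entirely. */
#define jffs2_is_writebuffered(c) (0)
#endif

int main(void)
{
    struct jffs2_sb_info_sketch c = { NULL };
    (void)c;   /* unused when the predicate compiles to 0 */

    if (jffs2_is_writebuffered(&c))
        puts("write-buffered path");
    else
        puts("NOR path: newly erased block, no ECC check needed");
    return 0;
}
```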
-
Committed by Artem B. Bityuckiy
Signed-off-by: Artem B. Bityuckiy <dedekind@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Artem B. Bityuckiy
Signed-off-by: Artem B. Bityuckiy <dedekind@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by David Woodhouse
Don't remove the inocache for inodes which are in read_inode() or clear_inode() until they're done.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Andrew Victor
For DataFlash, can_mark_obsolete = false and the NAND write-buffering code (wbuf.c) is used. Since the DataFlash chip will automatically erase pages when writing, the cleanmarkers are not needed, so cleanmarker_oob = false and cleanmarker_size = 0. DataFlash page sizes are not a power of two (they're multiples of 528 bytes). The SECTOR_ADDR macro (added in the previous core patch) is replaced with a (slower) div/mod version if CONFIG_JFFS2_FS_DATAFLASH is selected.
Signed-off-by: Andrew Victor <andrew@sanpeople.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Andrew Victor
DataFlash page sizes are not a power of two (they're multiples of 528 bytes). There are a few places in the JFFS2 code where sector_size is used as a bitmask. A new macro (SECTOR_ADDR) was defined to calculate these sector addresses. For non-DataFlash devices, the original (faster) bitmask operation is still used. In scan.c, EMPTY_SCAN_SIZE was a constant of 1024. Since this could be larger than the sector size of the DataFlash, it is now basically set to MIN(sector_size, 1024). Addition of a jffs2_is_writebuffered() macro.
Signed-off-by: Andrew Victor <andrew@sanpeople.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
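A sketch of the two address calculations and the EMPTY_SCAN_SIZE clamp described above (simplified macros with made-up sector sizes, not the exact JFFS2 definitions):

```c
#include <stdio.h>

/* Rounding an offset down to the start of its sector: a bitmask works only
 * for power-of-two sector sizes; DataFlash sectors are multiples of 528
 * bytes, so a divide/multiply form is needed instead. */
#define SECTOR_ADDR_POW2(ofs, ss)  ((ofs) & ~((ss) - 1u))
#define SECTOR_ADDR_DIV(ofs, ss)   (((ofs) / (ss)) * (ss))
#define MIN(a, b)                  ((a) < (b) ? (a) : (b))

int main(void)
{
    unsigned int nor_sector = 65536;  /* power of two */
    unsigned int df_sector  = 528;    /* one DataFlash page, not a power of two */

    printf("%u\n", SECTOR_ADDR_POW2(70000u, nor_sector)); /* 65536 */
    printf("%u\n", SECTOR_ADDR_DIV(70000u, df_sector));   /* 69696 */

    /* EMPTY_SCAN_SIZE: a flat 1024 could exceed a small sector, so clamp. */
    printf("empty scan size: %u\n", MIN(df_sector, 1024u)); /* 528 */
    return 0;
}
```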
-
- 17 April 2005, 1 commit
-
-
Committed by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!
-