- 14 May 2006, 1 commit

Committed by David Woodhouse
It was just too painful to deal with.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>

- 13 May 2006, 2 commits

Committed by David Woodhouse
We were scanning for 0xFF through the entire chip -- which takes a while when it's a 512MiB device as I have on my current toy. The specs only say we need to check certain bytes -- so do only that.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
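
A minimal sketch of the idea (the helper name and offsets are illustrative, not the actual nand_bbt.c code): instead of demanding that every byte of a page reads back 0xFF, inspect only the bad-block marker bytes the datasheet nominates in the OOB area.

```c
#include <linux/types.h>

/* Hypothetical helper: check only the marker bytes at the offset the
 * datasheet specifies, rather than scanning the whole page for 0xFF. */
static int marker_is_clean(const uint8_t *oob, int offs, int len)
{
	while (len--)
		if (oob[offs++] != 0xff)
			return 0;	/* factory bad-block mark present */
	return 1;			/* marker erased: block looks good */
}
```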

Committed by David Woodhouse
These new chips have 128KiB blocks. Don't try to kmalloc that.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
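
A minimal sketch of the allocation pattern (the helper names are mine, not the kernel's): fall back to vmalloc() for buffers the size of a 128KiB erase block, since kmalloc() wants physically contiguous memory and large requests can fail under fragmentation.

```c
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Hypothetical helpers: use vmalloc() for erase-block-sized buffers,
 * which only need to be virtually contiguous; keep kmalloc() for
 * small ones. */
static void *alloc_erase_buf(size_t len)
{
	if (len > PAGE_SIZE)
		return vmalloc(len);
	return kmalloc(len, GFP_KERNEL);
}

static void free_erase_buf(void *buf, size_t len)
{
	if (len > PAGE_SIZE)
		vfree(buf);
	else
		kfree(buf);
}
```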

- 07 Nov 2005, 1 commit

Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 16 Jul 2005, 1 commit

Committed by Thomas Gleixner
The previous change to read a single byte from OOB breaks the bad block scan on 16-bit devices when the byte is at an odd address. Read the complete OOB for now. Remove the unused arguments from check_short_pattern(). Move the wait-for-ready function so it is only executed when consecutive reads happen.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
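
A sketch of the workaround's shape, written against today's MTD interface (mtd_read_oob() and struct mtd_oob_ops postdate this 2005 patch, so treat the code as illustrative): read the whole OOB area, which is always an even number of bytes, and index the marker out of the buffer.

```c
#include <linux/mtd/mtd.h>
#include <linux/slab.h>

/* Illustrative only: fetch the complete OOB area and pick the bad-block
 * marker out of the buffer, instead of issuing a one-byte read that a
 * 16-bit-wide chip cannot serve at an odd column address. */
static int read_bb_marker(struct mtd_info *mtd, loff_t offs,
			  int marker_offs, uint8_t *marker)
{
	struct mtd_oob_ops ops = {
		.mode	= MTD_OPS_RAW,
		.ooblen	= mtd->oobsize,
	};
	int ret;

	ops.oobbuf = kmalloc(mtd->oobsize, GFP_KERNEL);
	if (!ops.oobbuf)
		return -ENOMEM;

	ret = mtd_read_oob(mtd, offs, &ops);	/* whole OOB, even length */
	if (!ret)
		*marker = ops.oobbuf[marker_offs];
	kfree(ops.oobbuf);
	return ret;
}
```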

- 29 Jun 2005, 1 commit

Committed by Thomas Gleixner
Make the bad block table search functional again.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

- 23 May 2005, 2 commits

Committed by Artem B. Bityuckiy
Scan the 1st and 2nd pages of SP (small-page) devices for the BB marker by default. Fix scanning of more than one page in create_bbt().
Signed-off-by: Artem B. Bityuckiy <dedekind@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
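
A toy illustration of why both pages matter (hypothetical names): a small-page device may carry its factory bad-block mark in the OOB of either the first or the second page of a block, so the scan must consult both before declaring a block good.

```c
#include <linux/types.h>

/* Hypothetical helper: a block is bad if the marker byte in the OOB
 * of either of its first two pages is not 0xff. */
static int block_is_bad(const uint8_t *oob_page0, const uint8_t *oob_page1,
			int marker_offs)
{
	return oob_page0[marker_offs] != 0xff ||
	       oob_page1[marker_offs] != 0xff;
}
```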

Committed by Artem B. Bityuckiy
When scanning NAND for bad blocks, don't read the whole page; read only the needed OOB bytes instead. Also check the return code of the nand_read_raw() function. Correctly free the this->bbt array in case of failure. Tested with large-page NAND. Fix a debugging message.
Signed-off-by: Artem B. Bityuckiy <dedekind@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
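
The cleanup shape the message describes, as a sketch: only nand_read_raw() and this->bbt come from the patch itself; every other name here is hypothetical. The point is to check the read's return code and release the partially built table on failure.

```c
#include <linux/types.h>
#include <linux/slab.h>

/* All names are hypothetical except the pattern itself: abort on a
 * failed read and free the half-built bad-block table so the error
 * path neither leaks memory nor leaves a bogus table installed. */
struct bbt_scan {
	uint8_t *bbt;		/* stands in for this->bbt */
	size_t bbt_len;
};

static int read_markers(struct bbt_scan *s)
{
	/* imagine: read only the needed OOB bytes per block here */
	return 0;
}

static int build_bbt(struct bbt_scan *s)
{
	int ret;

	s->bbt = kzalloc(s->bbt_len, GFP_KERNEL);
	if (!s->bbt)
		return -ENOMEM;

	ret = read_markers(s);		/* check the return code */
	if (ret) {
		kfree(s->bbt);		/* free the table on failure */
		s->bbt = NULL;
	}
	return ret;
}
```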

- 17 Apr 2005, 1 commit

Committed by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it.
Let it rip!