1. 21 Apr 2009, 1 commit
    • Btrfs: use WRITE_SYNC for synchronous writes · ffbd517d
      Authored by Chris Mason
      Part of reducing fsync/O_SYNC/O_DIRECT latencies is using WRITE_SYNC for
      writes we plan on waiting on in the near future.  This patch
      mirrors recent changes in other filesystems and the generic code to
      use WRITE_SYNC when WB_SYNC_ALL is passed and to use WRITE_SYNC for
      other latency critical writes.
      
      Btrfs uses async worker threads for checksumming before the write is done,
      and then again to actually submit the bios.  The bio submission code just
      runs a per-device list of bios that need to be sent down the pipe.
      
      This list is split into low priority and high priority lists so that the
      WRITE_SYNC IO happens first (see the sketch below).
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
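      As a rough illustration of the two-list scheme described above, here is a
      minimal standalone C sketch with stand-in types; the names queue_bio and
      is_sync are hypothetical, not the patch's actual code:

      struct bio {
      	struct bio *bi_next;
      	int is_sync;	/* set for WRITE_SYNC bios, e.g. when the
      			 * caller passed WB_SYNC_ALL writeback */
      };

      struct pending_bios {
      	struct bio *head;
      	struct bio *tail;
      };

      /* Each device keeps two lists; the submission worker drains the
       * sync list first so latency-critical IO jumps the queue. */
      static void queue_bio(struct pending_bios *pending,
      		      struct pending_bios *pending_sync,
      		      struct bio *bio)
      {
      	struct pending_bios *list = bio->is_sync ? pending_sync : pending;

      	bio->bi_next = NULL;
      	if (list->tail)
      		list->tail->bi_next = bio;
      	else
      		list->head = bio;
      	list->tail = bio;
      }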
2. 14 Apr 2009, 3 commits
    • ext2: fix data corruption for racing writes · 316cb4ef
      Authored by Jan Kara
      If two writers allocating blocks to a file race with each other (e.g.
      because writepages races with an ordinary write, or two writepages calls
      race with each other), ext2_get_blocks() can be called on the same inode
      in parallel.  Before going on to allocate new blocks, we have to recheck
      the block chain that we obtained earlier without holding truncate_mutex;
      otherwise we could overwrite an indirect block pointer set by the other
      writer, leading to data loss.  The recheck pattern is sketched after the
      test program below.
      
      The test program below, by Ying, reproduces the data loss with ext2 on
      BRD (the RAM-backed block device) in a few minutes if the machine is
      under memory pressure:
      
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>
      #include <fcntl.h>
      #include <sys/mman.h>

      long kMemSize  = 50 << 20;
      int kPageSize = 4096;

      int main(int argc, char **argv) {
      	int status;
      	int count = 0;
      	int i;
      	char *fname = "/mnt/test.mmap";
      	char *mem;
      	unlink(fname);
      	int fd = open(fname, O_CREAT | O_EXCL | O_RDWR, 0600);
      	status = ftruncate(fd, kMemSize);
      	mem = mmap(0, kMemSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      	// Fill the memory with 1s.  Under memory pressure, writeback of
      	// these dirty pages allocates blocks in parallel.
      	memset(mem, 1, kMemSize);
      	sleep(2);
      	// A page whose first byte reads back as 0 lost its data in the
      	// race.
      	for (i = 0; i < kMemSize; i++) {
      		int byte_good = mem[i] != 0;
      		if (!byte_good && ((i % kPageSize) == 0)) {
      			//printf("%d ", i / kPageSize);
      			count++;
      		}
      	}
      	munmap(mem, kMemSize);
      	close(fd);
      	unlink(fname);

      	if (count > 0) {
      		printf("Found %d bad pages\n", count);
      		return 1;
      	}
      	return 0;
      }
      
      Cc: Ying Han <yinghan@google.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Mingming Cao <cmm@us.ibm.com>
      Cc: <linux-ext4@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
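      The shape of the fix, as a minimal sketch: the Indirect type and
      verify_chain() mirror helpers in fs/ext2/inode.c, but everything here is
      simplified, not the actual patch.

      /* Simplified stand-in for ext2's Indirect: p points into the
       * (indirect) block, key caches the value that was read from *p. */
      typedef struct {
      	unsigned *p;
      	unsigned key;
      } Indirect;

      /* The chain read before taking truncate_mutex is still valid only
       * if every cached key still matches what is on disk; a racing
       * writer changes *p under us. */
      static int verify_chain(Indirect *from, Indirect *to)
      {
      	while (from <= to && from->key == *from->p)
      		from++;
      	return from > to;
      }

      /* Allocation path, in outline:
       *
       *	mutex_lock(&ei->truncate_mutex);
       *	if (!verify_chain(chain, partial)) {
       *		mutex_unlock(&ei->truncate_mutex);
       *		goto changed;	// re-read the chain: the racer may
       *				// already have mapped the blocks
       *	}
       *	// safe: allocate new blocks and splice them in
       */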
    • jbd: update locking comments · 32433879
      Authored by Jan Kara
      Update information about locking in JBD revoke code.
      
      Reported-by: Lin Tan <tammy000@gmail.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hfs: fix memory leak when unmounting · eb2e5f45
      Authored by Dave Anderson
      When an HFS filesystem is unmounted, it leaks a 2-page bitmap.  Also,
      under extreme memory pressure, hfs_releasepage() may use a tree pointer
      that has not been initialized; if so, the release request should just be
      rejected.  Both fixes are sketched below.
      
      [akpm@linux-foundation.org: free_pages(0) is legal, remove obvious comment]
      Signed-off-by: Dave Anderson <anderson@redhat.com>
      Tested-by: Eugene Teo <eugeneteo@kernel.sg>
      Cc: Roman Zippel <zippel@linux-m68k.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
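      The shape of both fixes, as a standalone C sketch with stand-in types;
      the real changes are in fs/hfs/mdb.c and fs/hfs/inode.c and use
      free_pages() rather than free():

      #include <stdlib.h>

      struct btree { int unused; };

      struct hfs_sb_info {
      	void *bitmap;		/* two pages allocated at mount time */
      	struct btree *ext_tree;	/* may still be NULL under memory
      				 * pressure */
      };

      /* Unmount: release the bitmap that mount allocated, fixing the
       * two-page leak. */
      static void mdb_put(struct hfs_sb_info *sbi)
      {
      	free(sbi->bitmap);
      	sbi->bitmap = NULL;
      }

      /* releasepage: if the tree pointer was never initialized, reject
       * the release request instead of dereferencing it. */
      static int releasepage(struct hfs_sb_info *sbi)
      {
      	if (!sbi->ext_tree)
      		return 0;
      	/* ... normal btree node lookup and release ... */
      	return 1;
      }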
3. 13 Apr 2009, 8 commits
4. 10 Apr 2009, 1 commit
5. 09 Apr 2009, 6 commits
6. 08 Apr 2009, 2 commits
7. 07 Apr 2009, 19 commits