1. 29 October 2013, 9 commits
    • NFS: Add nfs4_update_server · 32e62b7c
      Committed by Chuck Lever
      New function nfs4_update_server() moves an nfs_server to a different
      nfs_client.  This is done as part of migration recovery.
      
      Though it may be appealing to think of them as the same thing,
      migration recovery is not the same as following a referral.
      
      For a referral, the client has not descended into the file system
      yet: it has no nfs_server, no super block, no inodes or open state.
      It is enough to simply instantiate the nfs_server and super block,
      and perform a referral mount.
      
      For a migration, however, we have all of those things already, and
      they have to be moved to a different nfs_client.  No local namespace
      changes are needed here.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      32e62b7c
    • NFSv4: don't reprocess cached open CLAIM_PREVIOUS · d2bfda2e
      Committed by Weston Andros Adamson
      Cached opens have already been handled by _nfs4_opendata_reclaim_to_nfs4_state
      and can safely skip being reprocessed, but must still call update_open_stateid
      to make sure that all active fmodes are recovered.
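
      A hedged sketch of that flow inside _nfs4_opendata_reclaim_to_nfs4_state()
      (condensed; treat the exact lines as illustrative rather than the upstream
      diff):

        if (!data->rpc_done) {
                if (data->rpc_status)
                        return ERR_PTR(data->rpc_status);
                /* Cached open: nothing to reprocess, but the recovered fmodes
                 * still have to be reflected in the open stateid. */
                goto update;
        }

        /* ... normal reclaim processing of the on-the-wire OPEN result ... */

        update:
        /* 'state' is the nfs4_state already carried by the nfs4_opendata */
        update_open_stateid(state, &data->o_res.stateid, NULL,
                            data->o_arg.fmode);
        return state;
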
      Signed-off-by: Weston Andros Adamson <dros@netapp.com>
      Cc: stable@vger.kernel.org # 3.7.x: f494a607: NFSv4: fix NULL dereference in open recover
      Cc: stable@vger.kernel.org # 3.7.x: a43ec98b: NFSv4: don't fail on missing fattr in open recover
      Cc: stable@vger.kernel.org # 3.7.x
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      d2bfda2e
    • NFSv4: Fix state reference counting in _nfs4_opendata_reclaim_to_nfs4_state · d49f042a
      Committed by Trond Myklebust
      Currently, if the call to nfs_refresh_inode fails, then we end up leaking
      a reference count, due to the call to nfs4_get_open_state.
      While we're at it, replace nfs4_get_open_state with a simple call to
      atomic_inc(); there is no need to do a full lookup of the struct nfs_state
      since it is passed as an argument in the struct nfs4_opendata, and
      is already assigned to the variable 'state'.
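
      A hedged before/after sketch of the two lines in question (surrounding
      error handling omitted):

        /* Before: a full lookup that takes a reference, which is leaked when an
         * earlier nfs_refresh_inode() failure bails out of the function. */
        state = nfs4_get_open_state(inode, data->owner);

        /* After: 'state' already points at the nfs4_state carried by the struct
         * nfs4_opendata, so take the reference directly, and only after the
         * error paths have been dealt with. */
        atomic_inc(&state->count);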
      
      Cc: stable@vger.kernel.org # 3.7.x: a43ec98b: NFSv4: don't fail on missing fattr in open recover
      Cc: stable@vger.kernel.org # 3.7.x
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      d49f042a
    • NFSv4: don't fail on missing fattr in open recover · a43ec98b
      Committed by Weston Andros Adamson
      This is an unneeded check that could cause the client to fail to recover
      opens.
      Signed-off-by: Weston Andros Adamson <dros@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      a43ec98b
    • NFSv4: fix NULL dereference in open recover · f494a607
      Committed by Weston Andros Adamson
      _nfs4_opendata_reclaim_to_nfs4_state doesn't expect to see a cached
      open CLAIM_PREVIOUS, but this can happen. An example is when there are
      RDWR openers and RDONLY openers on a delegation stateid. The recovery
      path will first try an open CLAIM_PREVIOUS for the RDWR openers; this
      marks the delegation as no longer needing RECLAIM, so the open
      CLAIM_PREVIOUS for the RDONLY openers will not actually send an rpc.
      
      The NULL dereference is due to _nfs4_opendata_reclaim_to_nfs4_state
      returning ERR_PTR(rpc_status) when !rpc_done. When the open is
      cached, rpc_done == 0 and rpc_status == 0, thus
      _nfs4_opendata_reclaim_to_nfs4_state returns ERR_PTR(0), i.e. NULL -
      this is unexpected by callers of nfs4_opendata_to_nfs4_state().
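
      A hedged sketch of the failure mode (condensed from the description above
      and the trace below; the caller shown loosely follows
      nfs4_open_recover_helper()):

        /* In _nfs4_opendata_reclaim_to_nfs4_state(): a cached open has
         * rpc_done == 0 and rpc_status == 0, so ... */
        if (!data->rpc_done)
                return ERR_PTR(data->rpc_status);     /* ERR_PTR(0) == NULL */

        /* ... while the caller only guards against IS_ERR(), not NULL: */
        state = nfs4_opendata_to_nfs4_state(data);
        if (IS_ERR(state))                            /* false for NULL */
                return PTR_ERR(state);
        nfs4_close_state(state, data->o_arg.fmode);   /* NULL dereference */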
      
      This can be reproduced easily by opening the same file twice on an
      NFSv4.0 mount with delegations enabled, once as RDWR and once as RDONLY,
      then sleeping for a long time.  While the files are held open, kick off
      state recovery and this NULL dereference will be hit every time.
      
      An example OOPS:
      
      [   65.003602] BUG: unable to handle kernel NULL pointer dereference at 0000000000000030
      [   65.005312] IP: [<ffffffffa037d6ee>] __nfs4_close+0x1e/0x160 [nfsv4]
      [   65.006820] PGD 7b0ea067 PUD 791ff067 PMD 0
      [   65.008075] Oops: 0000 [#1] SMP
      [   65.008802] Modules linked in: rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache snd_ens1371 gameport nfsd snd_rawmidi snd_ac97_codec ac97_bus btusb snd_seq snd_seq_device snd_pcm ppdev bluetooth auth_rpcgss coretemp snd_page_alloc crc32_pclmul crc32c_intel ghash_clmulni_intel microcode rfkill nfs_acl vmw_balloon serio_raw snd_timer lockd parport_pc e1000 snd soundcore parport i2c_piix4 shpchp vmw_vmci sunrpc ata_generic mperf pata_acpi mptspi vmwgfx ttm scsi_transport_spi drm mptscsih mptbase i2c_core
      [   65.018684] CPU: 0 PID: 473 Comm: 192.168.10.85-m Not tainted 3.11.2-201.fc19.x86_64 #1
      [   65.020113] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 07/31/2013
      [   65.022012] task: ffff88003707e320 ti: ffff88007b906000 task.ti: ffff88007b906000
      [   65.023414] RIP: 0010:[<ffffffffa037d6ee>]  [<ffffffffa037d6ee>] __nfs4_close+0x1e/0x160 [nfsv4]
      [   65.025079] RSP: 0018:ffff88007b907d10  EFLAGS: 00010246
      [   65.026042] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
      [   65.027321] RDX: 0000000000000050 RSI: 0000000000000001 RDI: 0000000000000000
      [   65.028691] RBP: ffff88007b907d38 R08: 0000000000016f60 R09: 0000000000000000
      [   65.029990] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
      [   65.031295] R13: 0000000000000050 R14: 0000000000000000 R15: 0000000000000001
      [   65.032527] FS:  0000000000000000(0000) GS:ffff88007f600000(0000) knlGS:0000000000000000
      [   65.033981] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [   65.035177] CR2: 0000000000000030 CR3: 000000007b27f000 CR4: 00000000000407f0
      [   65.036568] Stack:
      [   65.037011]  0000000000000000 0000000000000001 ffff88007b907d90 ffff88007a880220
      [   65.038472]  ffff88007b768de8 ffff88007b907d48 ffffffffa037e4a5 ffff88007b907d80
      [   65.039935]  ffffffffa036a6c8 ffff880037020e40 ffff88007a880000 ffff880037020e40
      [   65.041468] Call Trace:
      [   65.042050]  [<ffffffffa037e4a5>] nfs4_close_state+0x15/0x20 [nfsv4]
      [   65.043209]  [<ffffffffa036a6c8>] nfs4_open_recover_helper+0x148/0x1f0 [nfsv4]
      [   65.044529]  [<ffffffffa036a886>] nfs4_open_recover+0x116/0x150 [nfsv4]
      [   65.045730]  [<ffffffffa036d98d>] nfs4_open_reclaim+0xad/0x150 [nfsv4]
      [   65.046905]  [<ffffffffa037d979>] nfs4_do_reclaim+0x149/0x5f0 [nfsv4]
      [   65.048071]  [<ffffffffa037e1dc>] nfs4_run_state_manager+0x3bc/0x670 [nfsv4]
      [   65.049436]  [<ffffffffa037de20>] ? nfs4_do_reclaim+0x5f0/0x5f0 [nfsv4]
      [   65.050686]  [<ffffffffa037de20>] ? nfs4_do_reclaim+0x5f0/0x5f0 [nfsv4]
      [   65.051943]  [<ffffffff81088640>] kthread+0xc0/0xd0
      [   65.052831]  [<ffffffff81088580>] ? insert_kthread_work+0x40/0x40
      [   65.054697]  [<ffffffff8165686c>] ret_from_fork+0x7c/0xb0
      [   65.056396]  [<ffffffff81088580>] ? insert_kthread_work+0x40/0x40
      [   65.058208] Code: 5c 41 5d 5d c3 0f 1f 84 00 00 00 00 00 66 66 66 66 90 55 48 89 e5 41 57 41 89 f7 41 56 41 89 ce 41 55 41 89 d5 41 54 53 48 89 fb <4c> 8b 67 30 f0 41 ff 44 24 44 49 8d 7c 24 40 e8 0e 0a 2d e1 44
      [   65.065225] RIP  [<ffffffffa037d6ee>] __nfs4_close+0x1e/0x160 [nfsv4]
      [   65.067175]  RSP <ffff88007b907d10>
      [   65.068570] CR2: 0000000000000030
      [   65.070098] ---[ end trace 0d1fe4f5c7dd6f8b ]---
      
      Cc: <stable@vger.kernel.org> #3.7+
      Signed-off-by: Weston Andros Adamson <dros@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      f494a607
    • NFSv4.1: Don't change the security label as part of open reclaim. · 83c78eb0
      Committed by Trond Myklebust
      The current caching model calls for the security label to be set on
      first lookup and/or on any subsequent label changes. There is no
      need to do it as part of an open reclaim.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      83c78eb0
    • nfs: fix handling of invalid mount options in nfs_remount · 1966903f
      Committed by Jeff Layton
      nfs_parse_mount_options returns 0 on error, not -errno.
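
      A hedged sketch of the corresponding check in nfs_remount() (variable
      names are illustrative):

        /* nfs_parse_mount_options() follows a 0-on-failure convention, so the
         * failure has to be translated into a negative errno instead of being
         * passed back to the VFS as if 0 meant success. */
        if (!nfs_parse_mount_options(raw_options, parsed)) {
                error = -EINVAL;
                goto out;
        }
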
      Reported-by: Karel Zak <kzak@redhat.com>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      1966903f
    • NFSv4 Remove zeroing state kern warnings · 3660cd43
      Committed by Andy Adamson
      As of commit 5d422301 we no longer zero the
      state.
      Signed-off-by: Andy Adamson <andros@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      3660cd43
  2. 02 October 2013, 2 commits
  3. 01 October 2013, 2 commits
    • nilfs2: fix issue with race condition of competition between segments for dirty blocks · 7f42ec39
      Committed by Vyacheslav Dubeyko
      Many NILFS2 users have reported strange file system corruption, for
      example:
      
         NILFS: bad btree node (blocknr=185027): level = 0, flags = 0x0, nchildren = 768
         NILFS error (device sda4): nilfs_bmap_last_key: broken bmap (inode number=11540)
      
      But such error messages are a consequence of a file system issue that
      occurs earlier.  Fortunately, Jerome Poulin <jeromepoulin@gmail.com>
      and Anton Eliasson <devel@antoneliasson.se> reported another issue
      some time ago.  These reports describe a crash of the segctor thread:
      
        BUG: unable to handle kernel paging request at 0000000000004c83
        IP: nilfs_end_page_io+0x12/0xd0 [nilfs2]
      
        Call Trace:
         nilfs_segctor_do_construct+0xf25/0x1b20 [nilfs2]
         nilfs_segctor_construct+0x17b/0x290 [nilfs2]
         nilfs_segctor_thread+0x122/0x3b0 [nilfs2]
         kthread+0xc0/0xd0
         ret_from_fork+0x7c/0xb0
      
      These two issues have the same root cause, and that cause can give
      rise to a third issue as well: the segctor thread hanging while
      consuming 100% CPU.
      
      REPRODUCING PATH:
      
      One possible way of reproducing the issue was described by
      Jerome Poulin <jeromepoulin@gmail.com>:
      
      1. init S to get to single user mode.
      2. sysrq+E to make sure only my shell is running
      3. start network-manager to get my wifi connection up
      4. login as root and launch "screen"
      5. cd /boot/log/nilfs, which is an ext3 mount point and can log when NILFS dies.
      6. lscp | xz -9e > lscp.txt.xz
      7. mount my snapshot using mount -o cp=3360839,ro /dev/vgUbuntu/root /mnt/nilfs
      8. start a screen to dump /proc/kmsg to text file since rsyslog is killed
      9. start a screen and launch strace -f -o find-cat.log -t find
      /mnt/nilfs -type f -exec cat {} > /dev/null \;
      10. start a screen and launch strace -f -o apt-get.log -t apt-get update
      11. launch the last command again as it did not crash the first time
      12. apt-get crashes
      13. ps aux > ps-aux-crashed.log
      14. sysrq+W
      15. sysrq+E  wait for everything to terminate
      16. sysrq+SUSB
      
      A simpler way to reproduce the issue is to run a kernel compilation
      and "apt-get update" in parallel.
      
      REPRODUCIBILITY:
      
      The issue does not reproduce reliably (60% - 80% of attempts).  It is
      very important to have a proper environment for reproducing the issue.
      The critical conditions for successful reproduction are:
      
      (1) There should be a big file modified by means of mmap().
      
      (2) From time to time during processing, this file's count of dirty
          blocks should be greater than several segments in size (for
          example, two or three).
      
      (3) There should be intensive background activity of file modification
          in another thread.
      
      INVESTIGATION:
      
      First of all, it is possible to see that the reason for the crash is
      an invalid page address:
      
        NILFS [nilfs_segctor_complete_write]:2100 bh->b_count 0, bh->b_blocknr 13895680, bh->b_size 13897727, bh->b_page 0000000000001a82
        NILFS [nilfs_segctor_complete_write]:2101 segbuf->sb_segnum 6783
      
      Moreover, the value of b_page (0x1a82) is 6786 in decimal, which looks
      like a segment number, and the b_blocknr and b_size values look like
      block numbers.  So the buffer_head's page pointer holds an improper
      address value.
      
      Detailed investigation of the issue revealed the following picture:
      
        [-----------------------------SEGMENT 6783-------------------------------]
        NILFS [nilfs_segctor_do_construct]:2310 nilfs_segctor_begin_construction
        NILFS [nilfs_segctor_do_construct]:2321 nilfs_segctor_collect
        NILFS [nilfs_segctor_do_construct]:2336 nilfs_segctor_assign
        NILFS [nilfs_segctor_do_construct]:2367 nilfs_segctor_update_segusage
        NILFS [nilfs_segctor_do_construct]:2371 nilfs_segctor_prepare_write
        NILFS [nilfs_segctor_do_construct]:2376 nilfs_add_checksums_on_logs
        NILFS [nilfs_segctor_do_construct]:2381 nilfs_segctor_write
        NILFS [nilfs_segbuf_submit_bio]:464 bio->bi_sector 111149024, segbuf->sb_segnum 6783
      
        [-----------------------------SEGMENT 6784-------------------------------]
        NILFS [nilfs_segctor_do_construct]:2310 nilfs_segctor_begin_construction
        NILFS [nilfs_segctor_do_construct]:2321 nilfs_segctor_collect
        NILFS [nilfs_lookup_dirty_data_buffers]:782 bh->b_count 1, bh->b_page ffffea000709b000, page->index 0, i_ino 1033103, i_size 25165824
        NILFS [nilfs_lookup_dirty_data_buffers]:783 bh->b_assoc_buffers.next ffff8802174a6798, bh->b_assoc_buffers.prev ffff880221cffee8
        NILFS [nilfs_segctor_do_construct]:2336 nilfs_segctor_assign
        NILFS [nilfs_segctor_do_construct]:2367 nilfs_segctor_update_segusage
        NILFS [nilfs_segctor_do_construct]:2371 nilfs_segctor_prepare_write
        NILFS [nilfs_segctor_do_construct]:2376 nilfs_add_checksums_on_logs
        NILFS [nilfs_segctor_do_construct]:2381 nilfs_segctor_write
        NILFS [nilfs_segbuf_submit_bh]:575 bh->b_count 1, bh->b_page ffffea000709b000, page->index 0, i_ino 1033103, i_size 25165824
        NILFS [nilfs_segbuf_submit_bh]:576 segbuf->sb_segnum 6784
        NILFS [nilfs_segbuf_submit_bh]:577 bh->b_assoc_buffers.next ffff880218a0d5f8, bh->b_assoc_buffers.prev ffff880218bcdf50
        NILFS [nilfs_segbuf_submit_bio]:464 bio->bi_sector 111150080, segbuf->sb_segnum 6784, segbuf->sb_nbio 0
        [----------] ditto
        NILFS [nilfs_segbuf_submit_bio]:464 bio->bi_sector 111164416, segbuf->sb_segnum 6784, segbuf->sb_nbio 15
      
        [-----------------------------SEGMENT 6785-------------------------------]
        NILFS [nilfs_segctor_do_construct]:2310 nilfs_segctor_begin_construction
        NILFS [nilfs_segctor_do_construct]:2321 nilfs_segctor_collect
        NILFS [nilfs_lookup_dirty_data_buffers]:782 bh->b_count 2, bh->b_page ffffea000709b000, page->index 0, i_ino 1033103, i_size 25165824
        NILFS [nilfs_lookup_dirty_data_buffers]:783 bh->b_assoc_buffers.next ffff880219277e80, bh->b_assoc_buffers.prev ffff880221cffc88
        NILFS [nilfs_segctor_do_construct]:2367 nilfs_segctor_update_segusage
        NILFS [nilfs_segctor_do_construct]:2371 nilfs_segctor_prepare_write
        NILFS [nilfs_segctor_do_construct]:2376 nilfs_add_checksums_on_logs
        NILFS [nilfs_segctor_do_construct]:2381 nilfs_segctor_write
        NILFS [nilfs_segbuf_submit_bh]:575 bh->b_count 2, bh->b_page ffffea000709b000, page->index 0, i_ino 1033103, i_size 25165824
        NILFS [nilfs_segbuf_submit_bh]:576 segbuf->sb_segnum 6785
        NILFS [nilfs_segbuf_submit_bh]:577 bh->b_assoc_buffers.next ffff880218a0d5f8, bh->b_assoc_buffers.prev ffff880222cc7ee8
        NILFS [nilfs_segbuf_submit_bio]:464 bio->bi_sector 111165440, segbuf->sb_segnum 6785, segbuf->sb_nbio 0
        [----------] ditto
        NILFS [nilfs_segbuf_submit_bio]:464 bio->bi_sector 111177728, segbuf->sb_segnum 6785, segbuf->sb_nbio 12
      
        NILFS [nilfs_segctor_do_construct]:2399 nilfs_segctor_wait
        NILFS [nilfs_segbuf_wait]:676 segbuf->sb_segnum 6783
        NILFS [nilfs_segbuf_wait]:676 segbuf->sb_segnum 6784
        NILFS [nilfs_segbuf_wait]:676 segbuf->sb_segnum 6785
      
        NILFS [nilfs_segctor_complete_write]:2100 bh->b_count 0, bh->b_blocknr 13895680, bh->b_size 13897727, bh->b_page 0000000000001a82
      
        BUG: unable to handle kernel paging request at 0000000000001a82
        IP: [<ffffffffa024d0f2>] nilfs_end_page_io+0x12/0xd0 [nilfs2]
      
      Usually, for every segment we collect dirty files in a list.  Then,
      dirty blocks are gathered for every dirty file, prepared for write,
      and submitted by means of a nilfs_segbuf_submit_bh() call.  Finally,
      the complete-write phase takes place after nilfs_end_bio_write() is
      called on the block layer.  In this final phase, buffers/pages are
      marked clean and processed files are removed from the list of dirty
      files.
      
      It is possible to see that we had three prepare_write and submit_bio
      phases before the segbuf_wait and complete_write phase.  Moreover,
      segments compete with each other for dirty blocks, because on every
      iteration of segment processing dirty buffer_heads are added to
      several payload_buffers lists:
      
        [SEGMENT 6784]: bh->b_assoc_buffers.next ffff880218a0d5f8, bh->b_assoc_buffers.prev ffff880218bcdf50
        [SEGMENT 6785]: bh->b_assoc_buffers.next ffff880218a0d5f8, bh->b_assoc_buffers.prev ffff880222cc7ee8
      
      The next pointer is the same but the prev pointer has changed.  This
      means that the buffer_head has a next pointer from one list but a prev
      pointer from another.  Such modification can happen several times and
      can finally result in various issues: (1) segctor hanging, (2) segctor
      crashing, (3) file system metadata corruption.
      
      FIX:
      This patch adds (see the sketch after this list):

      (1) setting of the BH_Async_Write flag in nilfs_segctor_prepare_write()
          for every processed dirty block;

      (2) checking of the BH_Async_Write flag in
          nilfs_lookup_dirty_data_buffers() and
          nilfs_lookup_dirty_node_buffers();

      (3) clearing of the BH_Async_Write flag in nilfs_segctor_complete_write(),
          nilfs_abort_logs(), nilfs_forget_buffer(), and nilfs_clear_dirty_page().
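
      A hedged sketch of the flag discipline described in points (1)-(3)
      (call sites condensed; the helpers are the standard BH_Async_Write
      accessors generated in <linux/buffer_head.h>):

        /* (1) nilfs_segctor_prepare_write(): claim the block for this segment */
        set_buffer_async_write(bh);

        /* (2) nilfs_lookup_dirty_data_buffers() / _node_buffers(): skip a
         *     buffer already claimed by an earlier, still in-flight segment */
        if (buffer_async_write(bh))
                continue;

        /* (3) nilfs_segctor_complete_write(), nilfs_abort_logs(),
         *     nilfs_forget_buffer(), nilfs_clear_dirty_page(): drop the claim */
        clear_buffer_async_write(bh);
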
      Reported-by: Jerome Poulin <jeromepoulin@gmail.com>
      Reported-by: Anton Eliasson <devel@antoneliasson.se>
      Cc: Paul Fertser <fercerpav@gmail.com>
      Cc: ARAI Shun-ichi <hermes@ceres.dti.ne.jp>
      Cc: Piotr Szymaniak <szarpaj@grubelek.pl>
      Cc: Juan Barry Manuel Canham <Linux@riotingpacifist.net>
      Cc: Zahid Chowdhury <zahid.chowdhury@starsolutions.com>
      Cc: Elmer Zhang <freeboy6716@gmail.com>
      Cc: Kenneth Langga <klangga@gmail.com>
      Signed-off-by: Vyacheslav Dubeyko <slava@dubeyko.com>
      Acked-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f42ec39
    • fs/binfmt_elf.c: prevent a coredump with a large vm_map_count from Oopsing · 72023656
      Committed by Dan Aloni
      A high setting of max_map_count and a process core-dumping with a
      large enough vm_map_count could result in an NT_FILE note not being
      written, and the kernel crashing immediately afterwards because it
      had assumed otherwise.
      
      Reproduction of the oops-causing bug described here:
      
          https://lkml.org/lkml/2013/8/30/50
      
      The issue originated in commit 2aa362c4 ("coredump: extend core dump
      note section to contain file names of mapped file") from Oct 4, 2012.
      
      This patch makes that section optional in that case: fill_files_note()
      should signify the error, and the info struct in elf_core_dump() should
      be zero-initialized so that we can check for the optionally written
      note.
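
      A hedged sketch of the idea (condensed; the real patch touches both
      fill_note_info() variants as well as elf_core_dump()):

        /* In elf_core_dump(): zero-initialize, so a note that was never filled
         * in can be recognized later. */
        struct elf_note_info info = { };

        /* In fill_note_info(), once info.notes has been allocated: count the
         * NT_FILE note only if fill_files_note() actually succeeded; otherwise
         * the note is simply left out of the core dump. */
        if (fill_files_note(info.notes + info.numnote) == 0)
                info.numnote++;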
      
      [akpm@linux-foundation.org: avoid abusing E2BIG, remove a couple of not-really-needed local variables]
      [akpm@linux-foundation.org: fix sparse warning]
      Signed-off-by: Dan Aloni <alonid@stratoscale.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Denys Vlasenko <vda.linux@googlemail.com>
      Reported-by: Martin MOKREJS <mmokrejs@gmail.com>
      Tested-by: Martin MOKREJS <mmokrejs@gmail.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      72023656
  4. 30 September 2013, 7 commits
  5. 26 September 2013, 2 commits
    • xfs: fix node forward in xfs_node_toosmall · 997def25
      Committed by Mark Tinguely
      Commit f5ea1100 cleans up the disk to host conversions for
      node directory entries, but because a variable is reused in
      xfs_node_toosmall() the next node is not correctly found.
      If the original node is small enough (<= 3/8 of the node size),
      this change may incorrectly cause a node collapse when it should
      not. That will cause an assert in xfstest generic/319:
      
         Assertion failed: first <= last && last < BBTOB(bp->b_length),
         file: /root/newest/xfs/fs/xfs/xfs_trans_buf.c, line: 569
      
      Keep the original node header to get the correct forward node.
      
      (When a node is considered for a merge with a sibling, it overwrites the
       sibling pointers of the original incore nodehdr with the sibling's
       pointers.  This leads to the loop considering the original node as a
       merge candidate with itself in the second pass, and so it incorrectly
       determines that a merge should occur.)
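
      A hedged sketch of the shape of the fix (names loosely follow the
      surrounding xfs_da3_* code and are illustrative): the probed sibling is
      decoded into its own incore header instead of reusing nodehdr.

        struct xfs_da3_icnode_hdr thdr;             /* sibling's header only */

        node = bp->b_addr;
        xfs_da3_node_hdr_from_disk(&thdr, node);    /* nodehdr stays untouched */
        count -= thdr.count;                        /* merge decision uses thdr;
                                                       nodehdr still holds the
                                                       original node's sibling
                                                       pointers */
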
      Signed-off-by: Mark Tinguely <tinguely@sgi.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      
      [v3: added Dave Chinner's (slightly modified) suggestion to the commit header,
      	cleaned up whitespace.  -bpm]
      997def25
    • NFSv4: Honour the 'opened' parameter in the atomic_open() filesystem method · 5bc2afc2
      Committed by Trond Myklebust
      Determine if we've created a new file by examining the directory change
      attribute and/or the O_EXCL flag.
      
      This fixes a regression when doing a non-exclusive create of a new file.
      If the FILE_CREATED flag is not set, the atomic_open() command will
      perform full file access permissions checks instead of just checking
      for MAY_OPEN.
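
      A hedged sketch of that heuristic; nfs4_dir_changed_during_open() is a
      hypothetical helper standing in for the change-attribute comparison:

        /* Report FILE_CREATED only when we can tell the file did not exist
         * before: the open was O_EXCL, or the directory's change attribute
         * moved across the OPEN call. */
        if ((open_flags & O_EXCL) ||
            nfs4_dir_changed_during_open(dir, &o_res->cinfo))
                *opened |= FILE_CREATED;
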
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      5bc2afc2
  6. 25 September 2013, 6 commits
    • fs/ocfs2/super.c: use a bigger nodestr in ocfs2_dismount_volume · 99d7a882
      Committed by Goldwyn Rodrigues
      While printing 32-bit node numbers, an 8-byte string is not enough.
      Increase the size of the string to 12 chars.
      
      This got left out in commit 49fa8140 ("fs/ocfs2/super.c: Use bigger
      nodestr to accomodate 32-bit node numbers").
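
      For reference, an unsigned 32-bit node number can be up to 10 decimal
      digits ("4294967295"), so with the trailing NUL an 8-byte buffer can
      truncate it; 12 bytes is enough with a little slack (node_num below is
      just a stand-in for the value being printed):

        char nodestr[12];       /* 10 digits + NUL, plus slack */

        snprintf(nodestr, sizeof(nodestr), "%u", node_num);
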
      Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      99d7a882
    • block: Fix bio_copy_data() · 2f6cf0de
      Committed by Kent Overstreet
      The memcpy() in bio_copy_data() was using the wrong offset vars, leading
      to data corruption in weird unusual setups.
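
      The bug class is easy to model outside the block layer: when copying
      between two segmented buffers, each side must be addressed with its own
      offsets. A minimal self-contained sketch with hypothetical types (not the
      real bio/bio_vec API):

        #include <string.h>

        struct seg { unsigned char *base; unsigned int offset; };

        /* Copy 'bytes' between two segments; using one side's offsets for both
         * sides (the class of mix-up fixed here) silently corrupts the data. */
        static void copy_chunk(struct seg *dst, unsigned int dst_off,
                               const struct seg *src, unsigned int src_off,
                               size_t bytes)
        {
                memcpy(dst->base + dst->offset + dst_off,   /* destination side */
                       src->base + src->offset + src_off,   /* source side */
                       bytes);
        }
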
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: linux-stable <stable@vger.kernel.org> # >= v3.9
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2f6cf0de
    • xfs: log recovery lsn ordering needs uuid check · 566055d3
      Committed by Dave Chinner
      After a fair number of xfstests runs, xfs/182 started to fail
      regularly with a corrupted directory - a directory read verifier was
      failing after recovery because it found a block with a XARM magic
      number (remote attribute block) rather than a directory data block.
      
      The first time I saw this repeated failure I did /something/ and the
      problem went away, so I was never able to find the underlying
      problem. Test xfs/182 failed again today, and I found the root
      cause before I did /something else/ that made it go away.
      
      Tracing indicated that the block in question was being correctly
      logged, the log was being flushed by sync, but the buffer was not
      being written back before the shutdown occurred. Tracing also
      indicated that log recovery was also reading the block, but then
      never writing it before log recovery invalidated the cache,
      indicating that it was not modified by log recovery.
      
      More detailed analysis of the corpse indicated that the filesystem
      had a uuid of "a4131074-1872-4cac-9323-2229adbcb886" but the XARM
      block had a uuid of "8f32f043-c3c9-e7f8-f947-4e7f989c05d3", which
      indicated it was a block from an older filesystem. The reason that
      log recovery didn't replay it was that the LSN in the XARM block was
      larger than the LSN of the transaction being replayed, and so the
      block was not overwritten by log recovery.
      
      Hence, log recovery can't blindly trust the magic number and LSN in
      the block - it must verify that it belongs to the filesystem being
      recovered before using the LSN. i.e. if the UUIDs don't match, we
      need to unconditionally recover the change held in the log.
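
      A hedged sketch of the resulting rule in the recovery path (blk_uuid,
      blk_lsn and current_lsn are illustrative names, not the exact upstream
      variables):

        /* Trust the LSN stamped in a metadata block only if the block belongs
         * to the filesystem being recovered; a stale block left over from a
         * previous filesystem must be overwritten unconditionally. */
        if (uuid_equal(blk_uuid, &mp->m_sb.sb_uuid) &&
            blk_lsn >= current_lsn)
                return;         /* on-disk copy is newer than the log item */
        /* ... otherwise replay the change held in the log ... */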
      
      This patch was first tested on a block device that was repeatedly
      causing xfs/182 to fail with the same failure on the same block with
      the same directory read corruption signature (i.e. XARM block). It
      did not fail, and hasn't failed since.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Ben Myers <bpm@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      566055d3
    • xfs: fix XFS_IOC_FREE_EOFBLOCKS definition · b771af2f
      Committed by Dave Chinner
      It uses a kernel internal structure in its definition rather than
      the user visible structure that is passed to the ioctl.
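
      A hypothetical illustration of the principle (not the real XFS
      definition): the _IOR()/_IOW() macros encode sizeof() of the struct they
      are given, so the ioctl number must be generated from the structure that
      userspace actually passes.

        #include <linux/ioctl.h>
        #include <linux/types.h>

        /* User-visible ABI structure: the ioctl must be sized against this,
         * never against a kernel-internal equivalent. */
        struct foo_fs_eofblocks {
                __u32 eof_version;
                __u32 eof_flags;
                __u64 eof_min_file_size;
                __u64 eof_pad[12];
        };

        #define FOO_IOC_FREE_EOFBLOCKS  _IOR('F', 1, struct foo_fs_eofblocks)
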
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      b771af2f
    • xfs: asserting lock not held during freeing not valid · b313a5f1
      Committed by Dave Chinner
      When we free an inode, we do so via RCU. As an RCU lookup can occur
      at any time before we free an inode, and that lookup takes the inode
      flags lock, we cannot safely assert that the flags lock is not held
      just before marking it dead and running call_rcu() to free the
      inode.
      
      We check on allocation of a new inode structure that the lock is not
      held, so we still have protection against locks being leaked and
      hence not correctly initialised when allocated out of the slab.
      Hence just remove the assert...
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      b313a5f1
    • xfs: lock the AIL before removing the buffer item · 48852358
      Committed by Dave Chinner
      Regression introduced by commit 46f9d2eb ("xfs: aborted buf items can
      be in the AIL") which fails to lock the AIL before removing the
      item. Spinlock debugging throws a warning about this.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      48852358
  7. 24 September 2013, 3 commits
    • reiserfs: fix race with flush_used_journal_lists and flush_journal_list · 721a769c
      Committed by Jeff Mahoney
      There are two locks involved in managing the journal lists: the general
      reiserfs_write_lock and the journal->j_flush_mutex.
      
      While flush_journal_list is sleeping to acquire the j_flush_mutex or to
      submit a block for write, it will drop the write lock. This allows
      another thread to acquire the write lock and ultimately call
      flush_used_journal_lists to traverse the list of journal lists and
      select one for flushing. It can select the journal_list that has just
      had flush_journal_list called on it in the original thread and call it
      again with the same journal_list.
      
      The second thread then drops the write lock to acquire j_flush_mutex and
      the first thread reacquires it and continues execution and eventually
      clears and frees the journal list before dropping j_flush_mutex and
      returning.
      
      The second thread acquires j_flush_mutex and ends up operating on a
      journal_list that has already been released. If the memory hasn't
      been reused, we'll soon after hit a BUG_ON because the transaction id
      has already been cleared. If it's been reused, we'll crash in other
      fun ways.
      
      Since flush_journal_list will synchronize on j_flush_mutex, we can fix
      the race by taking a proper reference in flush_used_journal_lists
      and checking to see if it's still valid after the mutex is taken. It's
      safe to iterate the list of journal lists and pick a list with
      just the write lock as long as a reference is taken on the journal list
      before we drop the lock. We already have code to handle whether a
      transaction has been flushed already so we can use that to handle the
      race and get rid of the trans_id BUG_ON.
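
      A hedged sketch of the approach in flush_used_journal_lists();
      get_journal_list()/put_journal_list() stand for the reference-taking
      helpers the description implies:

        /* Pin the chosen journal_list while the write lock is still held, so it
         * cannot be freed while flush_journal_list() drops and retakes locks.
         * flush_journal_list() itself copes with a transaction that was already
         * flushed by the original caller. */
        get_journal_list(jl);
        flush_journal_list(sb, jl, 1);
        put_journal_list(sb, jl);       /* drop the pin; free if last user */
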
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      721a769c
    • reiserfs: remove useless flush_old_journal_lists · 7bc9cc07
      Committed by Jeff Mahoney
      Commit a3172027 introduced test_transaction as a requirement for
      flushing old lists -- but it can never return 1 unless the transaction
      has already been flushed.
      
      As a result, we have a routine that iterates the j_realblocks list but
      doesn't actually do anything. Since it's been this way since 2006 and
      the latency numbers were what Chris expected, let's just rip it out.
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      7bc9cc07
    • udf: Fortify LVID loading · 69d75671
      Committed by Jan Kara
      A user has reported an oops in udf_statfs() that was caused by
      numOfPartitions entry in LVID structure being corrupted. Fix the problem
      by verifying whether numOfPartitions makes sense at least to the extent
      that LVID fits into a single block as it should.
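
      A hedged sketch of the added sanity check (the LVID is followed by a
      free-space table and a size table, each with one 32-bit entry per
      partition, so everything must fit into the block it was read from; names
      are illustrative):

        /* Reject a corrupted LVID whose per-partition tables would run past the
         * end of the block. */
        if (le32_to_cpu(lvid->numOfPartitions) >
            (sb->s_blocksize - sizeof(*lvid)) / (2 * sizeof(uint32_t)))
                goto out_err;           /* treat the LVID as unusable */
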
      Reported-by: Juergen Weigert <jw@suse.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      69d75671
  8. 21 September 2013, 9 commits