1. 21 October 2008, 3 commits
  2. 20 October 2008, 2 commits
  3. 17 October 2008, 3 commits
    • cifs: don't use CREATE_DELETE_ON_CLOSE in cifs_rename_pending_delete · dd1db2de
      Committed by Jeff Layton
      
      CREATE_DELETE_ON_CLOSE apparently has different semantics than when you
      set the DELETE_ON_CLOSE bit after opening the file. Setting it in the
      open says "delete this file as soon as this filehandle is closed". That's
      not what we want for cifs_rename_pending_delete.
      
      Don't set this bit in the CreateFlags. Experimentation shows that
      setting this flag in the SET_FILE_INFO call has no effect.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
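
      A hedged C sketch of the resulting flow, using hypothetical smb_* helpers rather
      than the actual fs/cifs calls: the point is that the delete disposition is set on
      an already-open handle instead of being requested at create time.

          /* Hypothetical helpers; illustrates the semantics described above. */
          static int rename_pending_delete_sketch(const char *path)
          {
                  /*
                   * Open WITHOUT CREATE_DELETE_ON_CLOSE: that create flag
                   * would tell the server to delete the file as soon as
                   * THIS filehandle is closed.
                   */
                  struct smb_handle *fh = smb_open(path, DELETE | GENERIC_WRITE, 0);

                  if (!fh)
                          return -EIO;
                  smb_rename(fh, smb_silly_name(path));   /* hide it under a temp name */
                  smb_set_delete_disposition(fh);         /* delete when the LAST handle closes */
                  return smb_close(fh);
          }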
    • [CIFS] eliminate usage of kthread_stop for cifsd · 469ee614
      Committed by Jeff Layton
      When cifs_demultiplex_thread was converted to a kthread based kernel
      thread, great pains were taken to make it so that kthread_stop would be
      used to bring it down. This just added unnecessary complexity since we
      needed to use a signal anyway to break out of kernel_recvmsg.
      
      Also, cifs_demultiplex_thread does a bit of cleanup as it's exiting, and
      we need to be certain that this gets done. It's possible for a kthread
      to exit before its main function is ever run if kthread_stop is called
      soon after its creation. While I'm not sure that this is a real problem
      with cifsd now, it could be at some point in the future if cifs_mount is
      ever changed to bring down the thread quickly.
      
      The upshot here is that using kthread_stop to bring down the thread just
      adds extra complexity with no real benefit. This patch changes the code
      to use the original method to bring down the thread, but still leaves it
      so that the thread is actually started with kthread_run.
      
      This seems to fix the deadlock caused by the reproducer in this bug
      report:
      
      https://bugzilla.samba.org/show_bug.cgi?id=5720
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
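
      A hedged sketch of the resulting pattern in generic kernel C (the srv_ctx state
      and helpers are hypothetical, not the actual cifs_demultiplex_thread code): the
      thread is still started with kthread_run(), but it is torn down with an exit flag
      plus a signal and does its own cleanup, so nobody blocks in kthread_stop().

          #include <linux/kthread.h>
          #include <linux/signal.h>

          static int demux_thread_sketch(void *arg)
          {
                  struct srv_ctx *srv = arg;          /* hypothetical per-server state */

                  allow_signal(SIGKILL);              /* let a signal break kernel_recvmsg() */
                  while (!srv->tcp_closing) {
                          /* kernel_recvmsg() returns -EINTR once we are signalled,
                           * so we fall out and re-check the exit flag */
                          receive_one_pdu(srv);
                  }
                  demux_cleanup(srv);                 /* the thread cleans up after itself */
                  return 0;                           /* plain exit; no kthread_stop() handshake */
          }

          static void demux_shutdown_sketch(struct srv_ctx *srv)
          {
                  srv->tcp_closing = true;            /* ask the thread to exit */
                  send_sig(SIGKILL, srv->task, 1);    /* and break it out of recvmsg */
          }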
    • [CIFS] Add nodfs mount option · 2c1b8615
      Committed by Steve French
      Older Samba servers (e.g. 3.0.24 from Debian etch) don't work correctly
      if DFS paths are used. Such servers claim to support DFS, but fail
      to process some requests with DFS paths. Starting with Linux 2.6.26,
      the cifs client started sending DFS paths in those situations, rendering
      it unusable with older Samba servers.
      
      The nodfs mount option forces a share to be used with non-DFS paths,
      even if the server claims to support DFS.
      Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
      Acked-by: Jeff Layton <jlayton@redhat.com>
      Acked-by: Igor Mammedov <niallain@gmail.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
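
      A hedged C sketch of the client-side decision (the field and flag names here are
      hypothetical, not the actual fs/cifs code); the option itself is passed at mount
      time, e.g. mount -t cifs //server/share /mnt -o nodfs.

          /* nodfs wins even when the server advertises DFS support */
          static bool use_dfs_paths_sketch(const struct smb_vol *vol,
                                           const struct cifs_tcon *tcon)
          {
                  if (vol->nodfs)                 /* share mounted with -o nodfs */
                          return false;
                  return tcon->dfs_capable;       /* hypothetical capability flag */
          }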
  4. 14 October 2008, 8 commits
  5. 13 October 2008, 4 commits
  6. 12 October 2008, 1 commit
  7. 11 October 2008, 4 commits
    • ext4: add an option to control error handling on file data · 5bf5683a
      Committed by Hidehiro Kawai
      If the journal doesn't abort when it gets an IO error in file data
      blocks, the file data corruption will spread silently.  Because
      most applications and commands do buffered writes without fsync(),
      they don't notice the IO error.  That is dangerous for mission-critical
      systems.  On the other hand, if the journal aborts whenever it gets
      an IO error in file data blocks, the system easily becomes
      inoperable.  So this patch introduces a filesystem option that
      determines whether to abort the journal or just call printk() when
      an IO error occurs in file data.
      
      If you mount an ext4 filesystem with the data_err=abort option, the
      journal aborts on a file data write error.  If you mount it with
      data_err=ignore, it doesn't abort and just calls printk().
      data_err=ignore is the default.
      
      Here is the corresponding ext3 version of the patch:
      http://kerneltrap.org/mailarchive/linux-kernel/2008/9/9/3239374
      Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
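
      A hedged C sketch of the check the option controls (the flag and buffer helpers
      exist in jbd2, but this is not the exact diff); data_err=abort sets a journal
      flag at mount time, roughly:

          /* on a file-data write-back error inside the committing transaction: */
          static void data_error_sketch(journal_t *journal, struct buffer_head *bh)
          {
                  if (!buffer_write_io_error(bh))
                          return;
                  if (journal->j_flags & JBD2_ABORT_ON_SYNCDATA_ERR)   /* data_err=abort */
                          jbd2_journal_abort(journal, -EIO);
                  else                                                 /* data_err=ignore */
                          printk(KERN_ERR "JBD2: IO error writing file data\n");
          }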
    • jbd2: don't dirty original metadata buffer on abort · 7ad7445f
      Committed by Hidehiro Kawai
      Currently, original metadata buffers are dirtied when they are
      unfiled, whether the journal has aborted or not.  Eventually these
      buffers are written back to the filesystem by pdflush.  This
      means some metadata buffers are written to the filesystem without
      journaling if the journal aborts.  So if a journal abort and a
      system crash happen at the same time, the filesystem can be left
      in an inconsistent state.  Additionally, replaying journaled
      metadata can partly overwrite the latest metadata on the
      filesystem: when the journal aborts, journaled metadata is
      preserved and replayed during the next mount so that
      uncheckpointed metadata is not lost.  This, too, can break the
      consistency of the filesystem.
      
      This patch prevents original metadata buffers from being dirtied
      on abort by clearing the BH_JBDDirty flag on those buffers.  Thus,
      no metadata buffers are written to the filesystem without journaling.
      Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
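
      A hedged C sketch of the idea (these jbd2 buffer helpers are real, but this is
      not the exact diff): when a buffer is unfiled from an aborted journal, drop its
      jbddirty bit instead of promoting it to the normal dirty bit, so pdflush never
      writes the unjournaled metadata back.

          static void unfile_buffer_sketch(journal_t *journal, struct buffer_head *bh)
          {
                  if (!buffer_jbddirty(bh))
                          return;
                  if (!is_journal_aborted(journal))
                          mark_buffer_dirty(bh);        /* normal path: write back later */
                  else
                          clear_buffer_jbddirty(bh);    /* aborted: keep it off the disk */
          }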
    • ext4: add checks for errors from jbd2 · 7ffe1ea8
      Committed by Hidehiro Kawai
      If the journal has aborted due to a checkpointing failure, we
      have to keep the contents of the journal space.  Otherwise, the
      filesystem will lose uncheckpointed metadata completely and
      become inconsistent.  To avoid this, we need to keep the
      needs_recovery flag if checkpointing has failed.
      
      With this patch, ext4_put_super() detects a checkpointing failure
      from the return value of journal_destroy() and then invokes
      ext4_abort() to make the filesystem read-only and keep the
      needs_recovery flag.  Errors from jbd2_journal_flush() are also
      handled by this patch in a few places.
      Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
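
      A hedged C sketch of the put_super change (the function names come from the
      commit message; the exact signatures are assumed):

          static void ext4_put_super_sketch(struct super_block *sb)
          {
                  journal_t *journal = EXT4_SB(sb)->s_journal;

                  /* journal_destroy() now reports checkpoint failures */
                  if (jbd2_journal_destroy(journal) < 0)
                          /* go read-only; needs_recovery stays set, so the preserved
                           * journal contents are replayed on the next mount */
                          ext4_abort(sb, __func__, "Couldn't clean up the journal");
          }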
    • jbd2: fix error handling for checkpoint io · 44519faf
      Committed by Hidehiro Kawai
      When a checkpointing IO fails, the current JBD2 code doesn't check
      the error and continues journaling.  This means the latest metadata
      can be lost from both the journal and the filesystem.
      
      This patch leaves the failed metadata blocks in the journal space
      and aborts journaling in the case of jbd2_log_do_checkpoint().
      To achieve this, we need to do the following (a sketch of the core
      idea appears after this entry):
      
      1. don't remove the failed buffer from the checkpoint list in the
         case of __try_to_free_cp_buf(), because it may be released or
         overwritten by a later transaction
      2. jbd2_log_do_checkpoint() is the last chance: remove the failed
         buffer from the checkpoint list and abort the journal
      3. when checkpointing fails, don't update the journal superblock, to
         prevent the journaled contents from being cleaned.  For safety,
         don't update j_tail and j_tail_sequence either
      4. when checkpointing fails, report the error to the ext4 layer so
         that ext4 doesn't clear the needs_recovery flag; otherwise the
         journaled contents are ignored and cleaned in the recovery phase
      5. if recovery fails, keep the needs_recovery flag
      6. prevent jbd2_cleanup_journal_tail() from being called between
         __jbd2_journal_drop_transaction() and jbd2_journal_abort()
         (a possible race between the jbd2_log_do_checkpoint() calls made
         by jbd2_journal_flush() and __jbd2_log_wait_for_space())
      Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
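
      As referenced in the list above, here is a hedged C sketch of the heart of
      steps 2-4 (real jbd2 helpers, heavily simplified control flow, not the exact
      diff):

          static int checkpoint_one_sketch(journal_t *journal, struct journal_head *jh)
          {
                  struct buffer_head *bh = jh2bh(jh);
                  int ret = 0;

                  if (buffer_write_io_error(bh))         /* checkpoint write-back failed */
                          ret = -EIO;
                  __jbd2_journal_remove_checkpoint(jh);  /* last chance: drop it here... */
                  if (ret) {
                          jbd2_journal_abort(journal, ret);  /* ...but abort, so the journal
                                                                copy and needs_recovery survive */
                          return ret;                        /* and skip the superblock update */
                  }
                  jbd2_cleanup_journal_tail(journal);    /* advance j_tail only on success */
                  return 0;
          }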
  8. 13 October 2008, 1 commit
    • jbd2: abort when failed to log metadata buffers · 77e841de
      Committed by Hidehiro Kawai
      If we fail to write metadata buffers to the journal space but
      succeed in writing the commit record, stale data can be written
      back to the filesystem as metadata in the recovery phase.
      
      To avoid this, when we fail to write out metadata buffers, abort
      the journal before writing the commit record.
      
      We can also avoid this kind of corruption by using the journal
      checksum feature, because it can detect invalid metadata blocks in
      the journal and prevent them from being replayed.  So with a
      checksum we don't need to worry about asynchronous commit record
      writeout.
      Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
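
      A hedged C sketch of the ordering in the commit path (simplified;
      write_commit_record_sketch is a hypothetical stand-in for the real
      commit-record writer):

          static void commit_tail_sketch(journal_t *journal, transaction_t *tx, int meta_err)
          {
                  if (meta_err)                   /* metadata didn't all reach the journal */
                          jbd2_journal_abort(journal, meta_err);   /* abort FIRST... */
                  /*
                   * ...so a commit record is never written for a transaction whose
                   * body is incomplete, and recovery can never replay stale data.
                   */
                  if (!is_journal_aborted(journal))
                          write_commit_record_sketch(journal, tx);
          }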
  9. 11 October 2008, 2 commits
  10. 10 October 2008, 11 commits
  11. 09 October 2008, 1 commit