1. 07 Apr 2010, 2 commits
  2. 07 Feb 2010, 1 commit
  3. 29 Oct 2009, 1 commit
    • blkdev: flush disk cache on ->fsync · ab0a9735
      Christoph Hellwig authored
      Currently there is no barrier support in the block device code.  That
      means we cannot guarantee any sort of data integrity when using the
      block device node with disk write caches enabled.  Using the raw block
      device node is a typical use case for virtualization (and I assume
      databases, too).  This patch changes block_fsync to issue a cache flush
      and thus makes fsync on block device nodes actually useful.
      
      Note that in mainline we would also need to add such code to the
      ->aio_write method for O_SYNC handling, but assuming that Jan's patch
      series for the O_SYNC rewrite goes in, it will also call into ->fsync
      for 2.6.32.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
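
      A minimal sketch of the change described above (illustrative, not the
      verbatim patch; it assumes the 2.6.3x-era blkdev_issue_flush(bdev,
      error_sector) signature):

        #include <linux/fs.h>
        #include <linux/blkdev.h>

        /*
         * Make fsync() on a block device node actually flush the disk
         * write cache instead of being a no-op.
         */
        static int block_fsync(struct file *filp, struct dentry *dentry,
                               int datasync)
        {
                struct block_device *bdev = I_BDEV(filp->f_mapping->host);

                /* Send a cache-flush request to the underlying device. */
                return blkdev_issue_flush(bdev, NULL);
        }
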
  4. 26 Oct 2009, 1 commit
  5. 24 Sep 2009, 2 commits
    • freeze_bdev: grab active reference to frozen superblocks · 4504230a
      Christoph Hellwig authored
      Currently we hold s_umount while a filesystem is frozen, even though we
      might return to userspace and unlock it from a different process.
      Instead, grab an active reference to keep the file system busy, and add
      an explicit check for frozen filesystems in remount, rejecting the
      remount instead of blocking on s_umount.
      
      Add a new get_active_super helper to super.c for use by freeze_bdev that
      grabs an active reference to a superblock from a given block device.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
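
      A rough sketch of the two ideas above (illustrative only: the real code
      does more careful reference counting, and this assumes the 2.6.3x
      superblock fields s_frozen/s_active/s_umount plus the super_blocks list
      and sb_lock from fs/super.c):

        #include <linux/fs.h>
        #include <linux/list.h>
        #include <linux/spinlock.h>

        /* Reject remount of a frozen filesystem instead of sleeping on s_umount. */
        static int remount_frozen_check(struct super_block *sb)
        {
                if (sb->s_frozen != SB_UNFROZEN)
                        return -EBUSY;          /* fail fast */
                return 0;
        }

        /*
         * Shape of get_active_super(): find the superblock mounted on 'bdev'
         * and return it with an active reference held, so freeze_bdev() can
         * drop s_umount while the filesystem stays busy.
         */
        static struct super_block *get_active_super_sketch(struct block_device *bdev)
        {
                struct super_block *sb;

                spin_lock(&sb_lock);
                list_for_each_entry(sb, &super_blocks, s_list) {
                        if (sb->s_bdev != bdev)
                                continue;
                        atomic_inc(&sb->s_active);      /* pin the superblock */
                        spin_unlock(&sb_lock);
                        return sb;
                }
                spin_unlock(&sb_lock);
                return NULL;
        }
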
    • freeze_bdev: kill bd_mount_sem · 4fadd7bb
      Christoph Hellwig authored
      Now that we have the freeze count there is not much reason for bd_mount_sem
      anymore.  The actual freeze/thaw operations are serialized using
      bd_fsfreeze_mutex, and the only other place we take bd_mount_sem is
      get_sb_bdev, which tries to prevent mounting a filesystem while the block
      device is frozen.  Instead, add a check for bd_fsfreeze_count and
      return -EBUSY if the filesystem is frozen.  While that is a change in
      user-visible behaviour, a failing mount is much better for this case than
      having the mount process stuck uninterruptibly for a long time.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
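
      A sketch of the get_sb_bdev() behaviour described above (illustrative;
      bdev_mount_allowed() is a made-up helper name, the real check lives
      inline in get_sb_bdev()):

        #include <linux/fs.h>
        #include <linux/mutex.h>

        /*
         * Refuse to mount a filesystem whose block device is currently
         * frozen, instead of blocking on the now-removed bd_mount_sem.
         */
        static int bdev_mount_allowed(struct block_device *bdev)
        {
                int error = 0;

                mutex_lock(&bdev->bd_fsfreeze_mutex);
                if (bdev->bd_fsfreeze_count > 0)
                        error = -EBUSY;         /* frozen: fail the mount */
                mutex_unlock(&bdev->bd_fsfreeze_mutex);

                return error;
        }
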
  6. 22 Sep 2009, 1 commit
  7. 16 Sep 2009, 1 commit
  8. 14 Sep 2009, 1 commit
  9. 30 Jul 2009, 1 commit
  10. 12 Jun 2009, 4 commits
  11. 05 Jun 2009, 1 commit
  12. 23 May 2009, 1 commit
  13. 28 Apr 2009, 1 commit
  14. 01 Apr 2009, 1 commit
  15. 28 Mar 2009, 1 commit
  16. 10 Jan 2009, 1 commit
    • filesystem freeze: implement generic freeze feature · fcccf502
      Takashi Sato authored
      The ioctls for the generic freeze feature are below.
      o Freeze the filesystem
        int ioctl(int fd, int FIFREEZE, arg)
          fd: The file descriptor of the mountpoint
          FIFREEZE: request code for the freeze
          arg: Ignored
          Return value: 0 if the operation succeeds. Otherwise, -1
      
      o Unfreeze the filesystem
        int ioctl(int fd, int FITHAW, arg)
          fd: The file descriptor of the mountpoint
          FITHAW: request code for unfreeze
          arg: Ignored
          Return value: 0 if the operation succeeds. Otherwise, -1
          Error number: If the filesystem has already been unfrozen,
                        errno is set to EINVAL.
      
      [akpm@linux-foundation.org: fix CONFIG_BLOCK=n]
      Signed-off-by: Takashi Sato <t-sato@yk.jp.nec.com>
      Signed-off-by: Masayuki Hamaguchi <m-hamaguchi@ys.jp.nec.com>
      Cc: <xfs-masters@oss.sgi.com>
      Cc: <linux-ext4@vger.kernel.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Dave Kleikamp <shaggy@austin.ibm.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Alasdair G Kergon <agk@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
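
      A small user-space example of the interface described above (minimal
      error handling; FIFREEZE and FITHAW come from <linux/fs.h>):

        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/fs.h>           /* FIFREEZE, FITHAW */

        int main(int argc, char **argv)
        {
                int fd;

                if (argc < 2) {
                        fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
                        return 1;
                }

                fd = open(argv[1], O_RDONLY);   /* fd of the mountpoint */
                if (fd < 0) {
                        perror("open");
                        return 1;
                }

                if (ioctl(fd, FIFREEZE, 0) < 0)         /* arg is ignored */
                        perror("FIFREEZE");

                /* ... take a snapshot of the underlying device here ... */

                if (ioctl(fd, FITHAW, 0) < 0)           /* EINVAL if already thawed */
                        perror("FITHAW");

                close(fd);
                return 0;
        }
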
  17. 09 Jan 2009, 1 commit
    • md: make devices disappear when they are no longer needed. · d3374825
      NeilBrown authored
      Currently md devices, once created, never disappear until the module
      is unloaded.  This is essentially because the gendisk holds a
      reference to the mddev, and the mddev holds a reference to the
      gendisk: a circular reference.
      
      If we drop the reference from mddev to gendisk, then we need to ensure
      that the mddev is destroyed when the gendisk is destroyed.  However it
      is not possible to hook into the gendisk destruction process to enable
      this.
      
      So we drop the reference from the gendisk to the mddev and destroy the
      gendisk when the mddev gets destroyed.  However this has a
      complication.
      Between the call
         __blkdev_get->get_gendisk->kobj_lookup->md_probe
      and the call
         __blkdev_get->md_open
      
      there is no obvious way to hold a reference on the mddev any more, so
      unless something is done, it will disappear and gendisk will be
      destroyed prematurely.
      
      Also, once we decide to destroy the mddev, there will be an unlockable
      moment before the gendisk is unlinked (blk_unregister_region) during
      which a new reference to the gendisk can be created.  We need to
      ensure that this reference can not be used.  i.e. the ->open must
      fail.
      
      So:
       1/  in md_probe we set a flag in the mddev (hold_active) which
           indicates that the array should be treated as active, even
           though there are no references, and no appearance of activity.
           This is cleared by md_release when the device is closed if it
           is no longer needed.
           This ensures that the gendisk will survive between md_probe and
           md_open.
      
       2/  In md_open we check if the mddev we expect to open matches
           the gendisk that we did open.
           If there is a mismatch we return -ERESTARTSYS and modify
           __blkdev_get to retry from the top in that case.
           In the -ERESTARTSYS case we make sure to wait until
           the old gendisk (that we succeeded in opening) is really gone so
           we loop at most once.
      
      Some udev configurations will always open an md device when it first
      appears.  If we allow an md device that was just created by an open
      to disappear on an immediate close, then this can race with such udev
      configurations and result in an infinite loop of the device being opened
      and closed, then re-opened due to the 'ADD' event from the first open,
      and then closed, and so on.
      So we make sure an md device, once created by an open, remains active
      at least until some md 'ioctl' has been made on it.  This means that
      all normal usage of md devices will allow them to disappear promptly
      when not needed, but the worst that an incorrect usage will do is
      cause an inactive md device to be left in existence (it can easily be
      removed).
      
      As an array can be stopped by writing to a sysfs attribute
        echo clear > /sys/block/mdXXX/md/array_state
      we need to use scheduled work for deleting the gendisk and other
      kobjects.  This allows us to wait for any pending gendisk deletion to
      complete by simply calling flush_scheduled_work().
      Signed-off-by: NeilBrown <neilb@suse.de>
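
      A rough sketch of the __blkdev_get() retry described in point 2 above
      (illustrative; do_open() stands in for the real open path in
      fs/block_dev.c):

        /*
         * md_open() returns -ERESTARTSYS when the gendisk it was opened on
         * no longer matches the mddev it expected (the old one is being torn
         * down); the caller restarts the whole lookup so md_probe() can
         * create a fresh gendisk.  Waiting for the old gendisk to go away
         * before returning -ERESTARTSYS keeps this to at most one retry.
         */
        static int do_open(struct block_device *bdev, fmode_t mode); /* stand-in */

        static int blkdev_get_sketch(struct block_device *bdev, fmode_t mode)
        {
                int ret;

        restart:
                ret = do_open(bdev, mode);      /* ends up in disk->fops->open() */
                if (ret == -ERESTARTSYS)
                        goto restart;           /* stale gendisk: look it up again */
                return ret;
        }
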
  18. 07 Jan 2009, 1 commit
  19. 03 Jan 2009, 1 commit
  20. 01 Jan 2009, 1 commit
  21. 04 Dec 2008, 2 commits
  22. 06 Nov 2008, 1 commit
  23. 23 Oct 2008, 1 commit
  24. 21 Oct 2008, 8 commits
  25. 17 Oct 2008, 1 commit
    • block: fix current kernel-doc warnings · 496aa8a9
      Randy Dunlap authored
      Fix block kernel-doc warnings:
      
      Warning(linux-2.6.27-git4//fs/block_dev.c:1272): No description found for parameter 'path'
      Warning(linux-2.6.27-git4//block/blk-core.c:1021): No description found for parameter 'cpu'
      Warning(linux-2.6.27-git4//block/blk-core.c:1021): No description found for parameter 'part'
      Warning(/var/linsrc/linux-2.6.27-git4//block/genhd.c:544): No description found for parameter 'partno'
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
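
      For illustration, the fix for such a warning is just adding the missing
      parameter description to the kernel-doc comment; assuming the
      fs/block_dev.c warning refers to lookup_bdev(), the shape is:

        /**
         * lookup_bdev  - look up a block_device by name
         * @path:       special file representing the block device
         *
         * The "@path:" line is what silences the "No description found for
         * parameter 'path'" warning.
         */
        struct block_device *lookup_bdev(const char *path);
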
  26. 09 Oct 2008, 2 commits
    • block_dev: fix kernel-doc in new functions · 57d1b536
      Randy Dunlap authored
      Fix kernel-doc in new functions:
      
      Error(mmotm-2008-1002-1617//fs/block_dev.c:895): duplicate section name 'Description'
      Error(mmotm-2008-1002-1617//fs/block_dev.c:924): duplicate section name 'Description'
      Warning(mmotm-2008-1002-1617//fs/block_dev.c:1282): No description found for parameter 'pathname'
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Andrew Patterson <andrew.patterson@hp.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • Call flush_disk() after detecting an online resize. · 608aeef1
      Andrew Patterson authored
      We call flush_disk() to make sure the buffer cache for the disk is
      flushed after a disk resize.  There are two resize cases, growing and
      shrinking.  Given that users can shrink and then grow a disk before
      revalidate_disk() is called, we treat the grow case identically to
      shrinking.  We need to flush the buffer cache after an online shrink
      because, as James Bottomley puts it,
      
           The two use cases for shrinking I can see are
      
           1. planned: the fs is already shrunk to within the new boundaries
              and all data is relocated, so invalidate is fine (any dirty
              buffers that might exist in the shrunk region are there only
              because they were relocated but not yet written to their
              original location).
           2. unplanned:  In this case, the fs is probably toast, so whether
              we invalidate or not isn't going to make a whole lot of
              difference; it's still going to try to read or write from
              sectors beyond the new size and get I/O errors.
      
      Immediately invalidating shrunk disks will cause errors for outstanding
      I/Os for reads/writes beyond the new end of the disk to be generated
      earlier than if we waited for the normal buffer cache operation.  It also
      removes a potential security hole where we might keep old data around
      from beyond the end of the shrunk disk if the disk was not invalidated.
      Signed-off-by: Andrew Patterson <andrew.patterson@hp.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
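
      A sketch of where the flush fits (illustrative, not the verbatim patch):
      the size-change check run from revalidate_disk() flushes the device's
      buffer cache whenever the cached bdev size no longer matches the
      driver-reported capacity.

        #include <linux/fs.h>
        #include <linux/genhd.h>

        static void check_disk_size_change_sketch(struct gendisk *disk,
                                                  struct block_device *bdev)
        {
                loff_t disk_size = (loff_t)get_capacity(disk) << 9; /* sectors -> bytes */
                loff_t bdev_size = i_size_read(bdev->bd_inode);

                if (disk_size != bdev_size) {
                        i_size_write(bdev->bd_inode, disk_size);
                        flush_disk(bdev);       /* drop stale cached buffers */
                }
        }
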