1. 24 Sep 2009, 1 commit
    • freeze_bdev: kill bd_mount_sem · 4fadd7bb
      Christoph Hellwig committed
      Now that we have the freeze count there is not much reason for bd_mount_sem
      anymore.  The actual freeze/thaw operations are serialized using
      bd_fsfreeze_mutex, and the only other place we take bd_mount_sem is
      get_sb_bdev, which tries to prevent mounting a filesystem while the block
      device is frozen.  Instead, add a check for bd_fsfreeze_count there and
      return -EBUSY if a filesystem is frozen (a sketch of this check follows the
      entry).  While that is a change in user-visible behaviour, a failing mount
      is much better in this case than having the mount process stuck
      uninterruptibly for a long time.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      4fadd7bb
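      The change described above amounts to a short check in get_sb_bdev.  A
      minimal sketch under the assumptions in the commit text (the helper name
      sketch_mount_check_frozen is hypothetical; bd_fsfreeze_count and
      bd_fsfreeze_mutex are the fields named above):

        #include <linux/fs.h>
        #include <linux/mutex.h>

        /* Sketch: fail a mount attempt with -EBUSY while the block device
         * is frozen, instead of sleeping on the old bd_mount_sem. */
        static int sketch_mount_check_frozen(struct block_device *bdev)
        {
                int error = 0;

                mutex_lock(&bdev->bd_fsfreeze_mutex);
                if (bdev->bd_fsfreeze_count > 0)        /* freeze in progress */
                        error = -EBUSY;
                mutex_unlock(&bdev->bd_fsfreeze_mutex);
                return error;
        }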
  2. 22 Sep 2009, 1 commit
  3. 16 Sep 2009, 1 commit
  4. 14 Sep 2009, 1 commit
  5. 30 Jul 2009, 1 commit
  6. 12 Jun 2009, 4 commits
  7. 05 Jun 2009, 1 commit
  8. 23 May 2009, 1 commit
  9. 28 Apr 2009, 1 commit
  10. 01 Apr 2009, 1 commit
  11. 28 Mar 2009, 1 commit
  12. 10 Jan 2009, 1 commit
    • filesystem freeze: implement generic freeze feature · fcccf502
      Takashi Sato committed
      The ioctls for the generic freeze feature are below (a userspace usage
      example follows this entry).
      o Freeze the filesystem
        int ioctl(int fd, int FIFREEZE, arg)
          fd: The file descriptor of the mountpoint
          FIFREEZE: request code for the freeze
          arg: Ignored
          Return value: 0 if the operation succeeds. Otherwise, -1
      
      o Unfreeze the filesystem
        int ioctl(int fd, int FITHAW, arg)
          fd: The file descriptor of the mountpoint
          FITHAW: request code for unfreeze
          arg: Ignored
          Return value: 0 if the operation succeeds. Otherwise, -1
          Error number: If the filesystem has already been unfrozen,
                        errno is set to EINVAL.
      
      [akpm@linux-foundation.org: fix CONFIG_BLOCK=n]
      Signed-off-by: Takashi Sato <t-sato@yk.jp.nec.com>
      Signed-off-by: Masayuki Hamaguchi <m-hamaguchi@ys.jp.nec.com>
      Cc: <xfs-masters@oss.sgi.com>
      Cc: <linux-ext4@vger.kernel.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Dave Kleikamp <shaggy@austin.ibm.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Alasdair G Kergon <agk@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fcccf502
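      A hedged userspace example of driving the two ioctls described above.
      FIFREEZE and FITHAW come from <linux/fs.h>; the mount-point path is
      illustrative and the caller needs CAP_SYS_ADMIN:

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/fs.h>                   /* FIFREEZE, FITHAW */

        int main(void)
        {
                int fd = open("/mnt/data", O_RDONLY);   /* fd of the mountpoint */

                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                if (ioctl(fd, FIFREEZE, 0) == -1)       /* arg is ignored */
                        perror("FIFREEZE");
                else if (ioctl(fd, FITHAW, 0) == -1)    /* EINVAL if already thawed */
                        perror("FITHAW");
                close(fd);
                return 0;
        }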
  13. 09 Jan 2009, 1 commit
    • md: make devices disappear when they are no longer needed. · d3374825
      NeilBrown committed
      Currently md devices, once created, never disappear until the module
      is unloaded.  This is essentially because the gendisk holds a
      reference to the mddev, and the mddev holds a reference to the
      gendisk, so there is a circular reference.
      
      If we drop the reference from mddev to gendisk, then we need to ensure
      that the mddev is destroyed when the gendisk is destroyed.  However it
      is not possible to hook into the gendisk destruction process to enable
      this.
      
      So we drop the reference from the gendisk to the mddev and destroy the
      gendisk when the mddev gets destroyed.  However this has a
      complication.
      Between the call
         __blkdev_get->get_gendisk->kobj_lookup->md_probe
      and the call
         __blkdev_get->md_open
      
      there is no obvious way to hold a reference on the mddev any more, so
      unless something is done, it will disappear and gendisk will be
      destroyed prematurely.
      
      Also, once we decide to destroy the mddev, there will be an unlockable
      moment before the gendisk is unlinked (blk_unregister_region) during
      which a new reference to the gendisk can be created.  We need to
      ensure that this reference can not be used.  i.e. the ->open must
      fail.
      
      So:
       1/  in md_probe we set a flag in the mddev (hold_active) which
           indicates that the array should be treated as active, even
           though there are no references, and no appearance of activity.
           This is cleared by md_release when the device is closed if it
           is no longer needed.
           This ensures that the gendisk will survive between md_probe and
           md_open.
      
       2/  In md_open we check if the mddev we expect to open matches
           the gendisk that we did open.
            If there is a mismatch we return -ERESTARTSYS and modify
            __blkdev_get to retry from the top in that case (a sketch of this
            retry follows the entry).
            In the -ERESTARTSYS case we make sure to wait until
            the old gendisk (that we succeeded in opening) is really gone, so
            we loop at most once.
      
      Some udev configurations will always open an md device when it first
      appears.  If we allow an md device that was just created by an open
      to disappear on an immediate close, then this can race with such udev
      configurations and result in an infinite loop of the device being
      opened and closed, then re-opened due to the 'ADD' event from the
      first open, then closed again, and so on.
      So we make sure an md device, once created by an open, remains active
      at least until some md 'ioctl' has been made on it.  This means that
      all normal usage of md devices will allow them to disappear promptly
      when not needed, but the worst that an incorrect usage will do is
      cause an inactive md device to be left in existence (it can easily be
      removed).
      
      As an array can be stopped by writing to a sysfs attribute
        echo clear > /sys/block/mdXXX/md/array_state
      we need to use scheduled work for deleting the gendisk and other
      kobjects.  This allows us to wait for any pending gendisk deletion to
      complete by simply calling flush_scheduled_work().
      Signed-off-by: NeilBrown <neilb@suse.de>
      d3374825
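      A rough sketch of the retry described in 2/ above, under the assumption
      that __blkdev_get looks up the gendisk, calls the driver's ->open and
      starts over on -ERESTARTSYS (sketch_blkdev_get and its error handling are
      hypothetical, not the actual patch):

        #include <linux/blkdev.h>
        #include <linux/genhd.h>

        /* Sketch: if ->open reports that it raced with the device being
         * deleted, drop the stale gendisk and retry the lookup. */
        static int sketch_blkdev_get(struct block_device *bdev, fmode_t mode)
        {
                struct gendisk *disk;
                int partno, ret;

        restart:
                disk = get_gendisk(bdev->bd_dev, &partno);
                if (!disk)
                        return -ENXIO;
                ret = disk->fops->open(bdev, mode);
                if (ret == -ERESTARTSYS) {
                        /* The md device we looked up is being torn down; it is
                         * really gone by now, so this loops at most once. */
                        put_disk(disk);
                        goto restart;
                }
                return ret;
        }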
  14. 07 Jan 2009, 1 commit
  15. 03 Jan 2009, 1 commit
  16. 01 Jan 2009, 1 commit
  17. 04 Dec 2008, 2 commits
  18. 06 Nov 2008, 1 commit
  19. 23 Oct 2008, 1 commit
  20. 21 Oct 2008, 8 commits
  21. 17 Oct 2008, 1 commit
    • block: fix current kernel-doc warnings · 496aa8a9
      Randy Dunlap committed
      Fix block kernel-doc warnings (the general fix pattern is illustrated
      after this entry):
      
      Warning(linux-2.6.27-git4//fs/block_dev.c:1272): No description found for parameter 'path'
      Warning(linux-2.6.27-git4//block/blk-core.c:1021): No description found for parameter 'cpu'
      Warning(linux-2.6.27-git4//block/blk-core.c:1021): No description found for parameter 'part'
      Warning(/var/linsrc/linux-2.6.27-git4//block/genhd.c:544): No description found for parameter 'partno'
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      496aa8a9
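      Each warning above is silenced by giving the named parameter an "@name:"
      line in the function's kernel-doc block.  A generic illustration with a
      hypothetical function (not the actual patched code):

        /**
         * example_lookup - look up an example object by path
         * @path:   special file naming the object (this line is what a
         *          "No description found for parameter 'path'" fix adds)
         * @partno: partition number, 0 for the whole disk
         *
         * Returns the object on success or an ERR_PTR() on failure.
         */
        struct example *example_lookup(const char *path, int partno);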
  22. 09 Oct 2008, 8 commits
    • block_dev: fix kernel-doc in new functions · 57d1b536
      Randy Dunlap committed
      Fix kernel-doc in new functions:
      
      Error(mmotm-2008-1002-1617//fs/block_dev.c:895): duplicate section name 'Description'
      Error(mmotm-2008-1002-1617//fs/block_dev.c:924): duplicate section name 'Description'
      Warning(mmotm-2008-1002-1617//fs/block_dev.c:1282): No description found for parameter 'pathname'
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Andrew Patterson <andrew.patterson@hp.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      57d1b536
    • Call flush_disk() after detecting an online resize. · 608aeef1
      Andrew Patterson committed
      We call flush_disk() to make sure the buffer cache for the disk is
      flushed after a disk resize. There are two resize cases, growing and
      shrinking.  Given that users can shrink and then grow a disk before
      revalidate_disk() is called, we treat the grow case identically to
      shrinking. We need to flush the buffer cache after an online shrink
      because, as James Bottomley puts it,
      
           The two use cases for shrinking I can see are
      
           1. planned: the fs is already shrunk to within the new boundaries
              and all data is relocated, so invalidate is fine (any dirty
              buffers that might exist in the shrunk region are there only
              because they were relocated but not yet written to their
              original location).
           2. unplanned:  In this case, the fs is probably toast, so whether
              we invalidate or not isn't going to make a whole lot of
              difference; it's still going to try to read or write from
              sectors beyond the new size and get I/O errors.
      
      Immediately invalidating shrunk disks will cause errors for outstanding
      I/Os for reads/writes beyond the new end of the disk to be generated
      earlier than if we waited for the normal buffer cache operation.  It also
      removes a potential security hole where we might keep old data around
      from beyond the end of the shrunk disk if the disk was not invalidated.
      Signed-off-by: Andrew Patterson <andrew.patterson@hp.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      608aeef1
    • Added flush_disk to factor out common buffer cache flushing code. · 56ade44b
      Andrew Patterson committed
      We need to be able to flush the buffer cache for more than just the
      case when a disk is changed, so we factor out the common cache flush code
      in check_disk_change() into an internal flush_disk() routine.  This
      routine will then be used for both disk changes and disk resizes (in a
      later patch).
      
      Include the disk name in the text indicating that there are busy
      inodes on the device and increase the KERN severity of the message.
      Signed-off-by: Andrew Patterson <andrew.patterson@hp.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      56ade44b
    • Adjust block device size after an online resize of a disk. · c3279d14
      Andrew Patterson committed
      The revalidate_disk routine now checks whether a disk has been resized by
      comparing the gendisk capacity to the bdev inode size.  If they differ
      (usually because the disk has been resized underneath the kernel), the
      bdev inode size is adjusted to match the capacity; a sketch of that check
      follows this entry.
      Signed-off-by: Andrew Patterson <andrew.patterson@hp.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      c3279d14
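      A hedged sketch of the size check described above; the helper name is
      hypothetical and the locking around it is elided:

        #include <linux/fs.h>
        #include <linux/genhd.h>

        /* Sketch: bring the bdev inode size back in line with the gendisk
         * capacity after an online resize (get_capacity() is in 512-byte
         * sectors). */
        static void sketch_check_disk_size_change(struct gendisk *disk,
                                                  struct block_device *bdev)
        {
                loff_t disk_size = (loff_t)get_capacity(disk) << 9;
                loff_t bdev_size = i_size_read(bdev->bd_inode);

                if (disk_size != bdev_size)
                        i_size_write(bdev->bd_inode, disk_size);
        }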
    • Wrapper for lower-level revalidate_disk routines. · 0c002c2f
      Andrew Patterson committed
      This is a wrapper for the lower-level revalidate_disk call-backs such
      as sd_revalidate_disk(). It allows us to perform pre and post
      operations when calling them.
      
      We will use this wrapper in a later patch to adjust block device sizes
      after an online resize (a _post_ operation); the wrapper shape is sketched
      after this entry.
      Signed-off-by: Andrew Patterson <andrew.patterson@hp.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      0c002c2f
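      A hedged sketch of the wrapper described above (the post-operation hook is
      where the size adjustment from the previous entry would go; the function
      name is hypothetical):

        #include <linux/genhd.h>

        /* Sketch: call the driver's lower-level revalidate callback, then run
         * any post-operation. */
        static int sketch_revalidate_disk(struct gendisk *disk)
        {
                int ret = 0;

                if (disk->fops->revalidate_disk)        /* e.g. sd_revalidate_disk */
                        ret = disk->fops->revalidate_disk(disk);

                /* post-operation (e.g. adjust the bdev size) goes here */
                return ret;
        }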
    • block: always set bdev->bd_part · 0762b8bd
      Tejun Heo committed
      Till now, bdev->bd_part was set only if the bdev was for a partition
      other than part0.  This patch makes bdev->bd_part always set so that
      common code paths don't have to differentiate between the two cases.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      0762b8bd
    • block: move holder_dir from disk to part0 · 4c46501d
      Tejun Heo committed
      Move disk->holder_dir to part0->holder_dir.  Kill the now mostly
      superfluous bdev_get_holder().

      While at it, kill the superfluous kobject_get/put() around holder_dir,
      slave_dir and cmd_filter creation, and collapse
      disk_sysfs_add_subdirs() into register_disk().  These serve no purpose
      but to obfuscate the code.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      4c46501d
    • block: introduce partition 0 · b5d0b9df
      Tejun Heo committed
      The genhd and partition code handled the disk and its partitions
      separately: all information about the whole disk was in struct gendisk
      and about partitions in struct hd_struct.  However, the whole disk
      (part0) and the other partitions have a lot in common, and the data
      structures end up with a good number of common fields and thus separate
      code paths doing the same thing.  Also, the partition array was indexed
      by partno - 1, which gets pretty confusing at times.
      
      This patch introduces partition 0 and makes the partition array
      indexed by partno.  Following patches will unify the handling of disk
      and parts piece-by-piece.
      
      This patch also implements disk_partitionable(), which tests whether a
      disk is partitionable (sketched after this entry).  With the coming
      dynamic partition array change, the most common usage of disk_max_parts()
      will be testing whether a disk is partitionable, and the maximum number
      of partitions will become much less important.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      b5d0b9df
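      A hedged sketch of the disk_partitionable() test described above, assuming
      disk_max_parts() reports the size of the partition array including part0:

        #include <linux/genhd.h>

        /* Sketch: a disk is partitionable iff its partition array has room
         * for more than the whole-disk entry (part0). */
        static inline int sketch_disk_partitionable(struct gendisk *disk)
        {
                return disk_max_parts(disk) > 1;
        }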