  1. 03 Aug 2016, 3 commits
  2. 20 Jul 2016, 1 commit
  3. 18 May 2016, 2 commits
  4. 06 Apr 2016, 1 commit
  5. 05 Apr 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Committed by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to
      implement the page cache with bigger chunks than PAGE_SIZE.
      
      That promise never materialized, and it is unlikely it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it is a constant source of confusion whether a
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in the page cache are special.  They
      are not.
      
      The changes are straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      the script below.  For some reason coccinelle doesn't patch header
      files, so I've run spatch on them manually.
      
      The only adjustment after coccinelle is reverting the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
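      
      As a rough, hypothetical illustration of the rules above (the helper
      below is invented for this example and is not part of the patch), a
      typical caller ends up spelled like this after the conversion:
      
      #include <linux/mm.h>   /* get_page()/put_page(), PAGE_SHIFT, PAGE_MASK */
      
      /* Hypothetical helper, showing only the post-conversion spellings. */
      static void example_after_conversion(struct page *page, loff_t pos)
      {
              pgoff_t index   = pos >> PAGE_SHIFT;    /* was PAGE_CACHE_SHIFT */
              unsigned offset = pos & ~PAGE_MASK;     /* was ~PAGE_CACHE_MASK */
      
              get_page(page);                         /* was page_cache_get(page) */
              pr_debug("touching page %lu at offset %u\n", index, offset);
              put_page(page);                         /* was page_cache_release(page) */
      }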
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 02 Mar 2016, 1 commit
    • xfs: fix up inode32/64 (re)mount handling · 12c3f05c
      Committed by Eric Sandeen
      inode32/inode64 allocator behavior with respect to mount, remount
      and growfs is a little tricky.
      
      The inode32 mount option should only enable the inode32 allocator
      heuristics if the filesystem is large enough for 64-bit inodes to
      exist.  Today, it has this behavior on the initial mount, but a
      remount with inode32 unconditionally changes the allocation
      heuristics, even for a small fs.
      
      Also, an inode32 mounted small filesystem should transition to the
      inode32 allocator if the filesystem is subsequently grown to a
      sufficient size.  Today that does not happen.
      
      This patch consolidates xfs_set_inode32 and xfs_set_inode64 into a
      single new function, and moves the "is the maximum inode number big
      enough to matter" test into that function, so it doesn't rely on the
      caller to get it right - which remount did not do, previously.
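      
      A simplified sketch of the consolidated check (the macro and flag names
      below follow the XFS code of that era but are assumptions here, not
      quotes from the patch, and the helper name is hypothetical):
      
      /*
       * Decide whether the inode32 allocation heuristics should be active:
       * only when the geometry can actually produce inode numbers that do
       * not fit in 32 bits.  Called on mount, remount and growfs.
       */
      static bool
      xfs_inode32_applies(struct xfs_mount *mp, xfs_agnumber_t agcount)
      {
              xfs_agino_t     agino;
              xfs_ino_t       max_ino;
      
              /* highest inode number the current geometry can produce */
              agino = XFS_OFFBNO_TO_AGINO(mp, mp->m_sb.sb_agblocks - 1, 0);
              max_ino = XFS_AGINO_TO_INO(mp, agcount - 1, agino);
      
              /* a small filesystem never needs the inode32 heuristics */
              if (max_ino <= XFS_MAXINUMBER_32)
                      return false;
      
              /* a large one needs them only if inode32 was requested */
              return (mp->m_flags & XFS_MOUNT_SMALL_INUMS) != 0;
      }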
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  7. 10 Feb 2016, 1 commit
  8. 09 Feb 2016, 1 commit
  9. 03 Nov 2015, 1 commit
  10. 12 Oct 2015, 1 commit
    • xfs: per-filesystem stats in sysfs · 225e4635
      Committed by Bill O'Donnell
      This patch implements per-filesystem stats objects in sysfs. It
      depends on the application of the previous patch series that
      develops the infrastructure to support both xfs global stats and
      xfs per-fs stats in sysfs.
      
      Stats objects are instantiated when an xfs filesystem is mounted
      and deleted on unmount. With this patch, the stats directory is
      created and populated with the familiar stats and stats_clear files.
      Example:
              /sys/fs/xfs/sda9/stats/stats
              /sys/fs/xfs/sda9/stats/stats_clear
      
      With this patch, the individual counts within the new per-fs
      stats file(s) remain at zero. Functions that use the macros
      to increment, decrement, and add to the per-fs stats counts will
      be covered in a separate new patch to follow this one. Note that
      the counts within the global stats file (/sys/fs/xfs/stats/stats)
      advance normally and can be cleared just as before this patch.
      
      [dchinner: move setup/teardown to xfs_fs_{fill|put}_super() so
      it is done before/after any path that uses the per-mount stats.]
      Signed-off-by: Bill O'Donnell <billodo@redhat.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  11. 19 Aug 2015, 2 commits
    • xfs: clean up root inode properly on mount failure · 0ae120f8
      Committed by Brian Foster
      The root inode is read as part of the xfs_mountfs() sequence and the
      reference is dropped in the event of failure after we grab the
      inode.  The reference drop doesn't necessarily free the inode,
      however. It marks it for reclaim and potentially kicks off the
      reclaim workqueue.  The workqueue is destroyed further up the error
      path, which means we are subject to a crash if the workqueue job runs
      after this point, or to a memory leak that is identified when the
      xfs_inode_zone is destroyed (e.g., on module removal). Both of these
      outcomes are reproducible via manual instrumentation of a mount
      error after the root inode xfs_iget() call in xfs_mountfs().
      
      Update the xfs_mountfs() error path to cancel any potential reclaim
      work items and to run a synchronous inode reclaim if the root inode
      is marked for reclaim. This ensures that no jobs remain on the queue
      before it is destroyed and that the root inode is freed before the
      reclaim mechanism is torn down.
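      
      A hedged sketch of the added error-path cleanup (an excerpt rather than
      a complete function; the calls shown follow the XFS inode reclaim APIs
      of that era):
      
              /* error return from xfs_mountfs() after the root inode was grabbed */
              IRELE(rip);
              /* make sure no reclaim work is left queued on the workqueue ... */
              cancel_delayed_work_sync(&mp->m_reclaim_work);
              /* ... and reclaim the root inode now if it was tagged for reclaim */
              xfs_reclaim_inodes(mp, SYNC_WAIT);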
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: don't leave EFIs on AIL on mount failure · f0b2efad
      Committed by Brian Foster
      Log recovery occurs in two phases at mount time. In the first phase,
      EFIs and EFDs are processed and potentially cancelled out. EFIs without
      EFD objects are inserted into the AIL for processing and recovery in the
      second phase. xfs_mountfs() runs various other operations between the
      phases and is thus subject to failure. If failure occurs after the first
      phase but before the second, pending EFIs sit on the AIL, pin it and
      cause the mount to hang.
      
      Update the mount sequence to ensure that pending EFIs are cancelled in
      the event of failure. Add a recovery cancellation mechanism to iterate
      the AIL and cancel all EFI items when requested. Plumb cancellation
      support through the log mount finish helper and update xfs_mountfs() to
      invoke cancellation in the event of failure after recovery has started.
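      
      A minimal sketch of how that looks in the mount error path (the
      function name here reflects my reading of the patch and should be
      treated as an assumption rather than a quote of the change):
      
              /* xfs_mountfs() error path, after log recovery has started */
              xfs_log_mount_cancel(mp);       /* walk the AIL and cancel pending EFIs */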
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  12. 29 May 2015, 2 commits
    • xfs: sparse inode chunks feature helpers and mount requirements · e5376fc1
      Committed by Brian Foster
      The sparse inode chunks feature uses a feature helper function to
      enable the allocation of sparse inode chunks. The incompatible feature
      bit is set on disk at mkfs time to prevent the filesystem from being
      mounted by unsupported kernels.
      
      Also, enforce the inode alignment requirements for sparse inode chunks
      at mount time. When enabled, full inode chunk (and thus all inode
      record) alignment is increased from the cluster size to the inode chunk
      size. Sparse inode alignment must match the cluster size of the fs.
      Both superblock alignment fields are set as such by mkfs when sparse
      inode support is enabled.
      
      Finally, warn that sparse inode chunks is an experimental feature until
      further notice.
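      
      A hedged sketch of the feature helper and the mount-time alignment
      check described above (the macro and field names follow the XFS
      superblock feature-bit conventions and are assumptions here, and the
      check is shown as an excerpt):
      
      static inline bool
      xfs_sb_version_hassparseinodes(struct xfs_sb *sbp)
      {
              return xfs_sb_version_hascrc(sbp) &&
                     xfs_sb_has_incompat_feature(sbp,
                              XFS_SB_FEAT_INCOMPAT_SPARSE_INODES);
      }
      
              /* mount time: sparse inode alignment must match the cluster size;
               * cluster_size_fsb is assumed to have been computed earlier */
              if (xfs_sb_version_hassparseinodes(&mp->m_sb) &&
                  mp->m_sb.sb_spino_align != cluster_size_fsb)
                      return -EINVAL;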
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • D
      xfs: inode and free block counters need to use __percpu_counter_compare · 8c1903d3
      Dave Chinner 提交于
      Because the counters use a custom batch size, the comparison
      functions need to be aware of that batch size otherwise the
      comparison does not work correctly. This leads to ASSERT failures
      on generic/027 like this:
      
       XFS: Assertion failed: 0, file: fs/xfs/xfs_mount.c, line: 1099
       ------------[ cut here ]------------
      ....
       Call Trace:
        [<ffffffff81522a39>] xfs_mod_icount+0x99/0xc0
        [<ffffffff815285cb>] xfs_trans_unreserve_and_mod_sb+0x28b/0x5b0
        [<ffffffff8152f941>] xfs_log_commit_cil+0x321/0x580
        [<ffffffff81528e17>] xfs_trans_commit+0xb7/0x260
        [<ffffffff81503d4d>] xfs_bmap_finish+0xcd/0x1b0
        [<ffffffff8151da41>] xfs_inactive_ifree+0x1e1/0x250
        [<ffffffff8151dbe0>] xfs_inactive+0x130/0x200
        [<ffffffff81523a21>] xfs_fs_evict_inode+0x91/0xf0
        [<ffffffff811f3958>] evict+0xb8/0x190
        [<ffffffff811f433b>] iput+0x18b/0x1f0
        [<ffffffff811e8853>] do_unlinkat+0x1f3/0x320
        [<ffffffff811d548a>] ? filp_close+0x5a/0x80
        [<ffffffff811e999b>] SyS_unlinkat+0x1b/0x40
        [<ffffffff81e0892e>] system_call_fastpath+0x12/0x71
      
      This is a regression introduced by commit 501ab323 ("xfs: use generic
      percpu counters for inode counter").
      
      This patch fixes the same problem for both the inode counter and the
      free block counter in the superblocks.
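      
      A short illustration of the difference (the batch constant is a
      placeholder, and the m_icount field name follows the percpu counter
      conversion patches listed below):
      
      #define EXAMPLE_ICOUNT_BATCH    128
      
              /* buggy: ignores up to (batch * nr_cpus) worth of per-cpu error */
              ASSERT(percpu_counter_compare(&mp->m_icount, 0) >= 0);
      
              /* fixed: compare with the same batch the counter is modified with */
              ASSERT(__percpu_counter_compare(&mp->m_icount, 0,
                                              EXAMPLE_ICOUNT_BATCH) >= 0);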
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  13. 23 Feb 2015, 7 commits
    • xfs: remove xfs_mod_incore_sb API · 964aa8d9
      Committed by Dave Chinner
      Now that there are no users of the bitfield based incore superblock
      modification API, just remove the whole damn lot of it, including
      all the bitfield definitions. This finally removes a lot of cruft
      that has been around for a long time.
      
      Credit goes to Christoph Hellwig for providing a great patch
      connecting all the dots to enable us to do this. This patch is
      derived from that work.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: replace xfs_mod_incore_sb_batched · 0bd5dded
      Committed by Dave Chinner
      Introduce helper functions for modifying fields in the superblock
      into xfs_trans.c, the only caller of xfs_mod_incore_sb_batch().  We
      can then use these directly in xfs_trans_unreserve_and_mod_sb() and
      so remove another user of the xfs_mode_incore_sb() API without
      losing any functionality or scalability of the transaction commit
      code..
      
      Based on a patch from Christoph Hellwig.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: introduce xfs_mod_frextents · bab98bbe
      Committed by Dave Chinner
      Add a new helper to modify the incore counter of free realtime
      extents. This matches the helpers used for inode and data block
      counters, and removes a significant user of the xfs_mod_incore_sb()
      interface.
      
      Based on a patch originally from Christoph Hellwig.
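      
      A sketch of the shape such a helper takes (the realtime extent counter
      is not performance critical, so it can stay a plain counter under the
      superblock lock; the details here are illustrative rather than exact):
      
      int
      xfs_mod_frextents(struct xfs_mount *mp, int64_t delta)
      {
              int64_t lcounter;
              int     ret = 0;
      
              spin_lock(&mp->m_sb_lock);
              lcounter = mp->m_sb.sb_frextents + delta;
              if (lcounter < 0)
                      ret = -ENOSPC;          /* would underflow - reject */
              else
                      mp->m_sb.sb_frextents = lcounter;
              spin_unlock(&mp->m_sb_lock);
              return ret;
      }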
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: Remove icsb infrastructure · 5681ca40
      Committed by Dave Chinner
      Now that the in-core superblock infrastructure has been replaced with
      generic per-cpu counters, we don't need it anymore. Nuke it from
      orbit so we are sure that it won't haunt us again...
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: use generic percpu counters for free block counter · 0d485ada
      Committed by Dave Chinner
      XFS has hand-rolled per-cpu counters for the superblock since before
      there was any generic implementation. The free block counter is
      special in that it is used for ENOSPC detection outside transaction
      contexts for delayed allocation. This means that the counter
      needs to be accurate at zero. The current per-cpu counter code jumps
      through lots of hoops to ensure we never run past zero, but we don't
      need to make all those jumps with the generic counter
      implementation.
      
      The generic counter implementation allows us to pass a "batch"
      threshold at which the addition/subtraction to the counter value
      will be folded back into global value under lock. We can use this
      feature to reduce the batch size as we approach 0 in a very similar
      manner to the existing counters and their rebalance algorithm. If we
      use a batch size of 1 as we approach 0, then every addition and
      subtraction will be done against the global value and hence allow
      accurate detection of zero threshold crossing.
      
      Hence we can replace the handrolled, accurate-at-zero counters with
      generic percpu counters.
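      
      A hedged sketch of that "shrink the batch near zero" idea (the batch
      constant, the thresholds and the m_fdblocks field name are
      illustrative; the real helper's error handling is more involved):
      
      #define SKETCH_FDBLOCKS_BATCH   1024
      
      static int
      sketch_mod_fdblocks(struct xfs_mount *mp, int64_t delta)
      {
              int32_t batch = SKETCH_FDBLOCKS_BATCH;
      
              /*
               * Far from zero a large batch keeps modifications cheap; close to
               * zero drop to a batch of 1 so every change is folded into the
               * global count and the zero crossing is detected exactly.
               */
              if (__percpu_counter_compare(&mp->m_fdblocks,
                                           2 * SKETCH_FDBLOCKS_BATCH,
                                           SKETCH_FDBLOCKS_BATCH) <= 0)
                      batch = 1;
      
              __percpu_counter_add(&mp->m_fdblocks, delta, batch);
              if (__percpu_counter_compare(&mp->m_fdblocks, 0,
                                           SKETCH_FDBLOCKS_BATCH) >= 0)
                      return 0;
      
              /* we crossed zero: back the change out and report ENOSPC */
              percpu_counter_add(&mp->m_fdblocks, -delta);
              return -ENOSPC;
      }
      
      (__percpu_counter_add() was the batch-aware modification primitive at
      the time; it has since been renamed percpu_counter_add_batch().)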
      
      Note: this removes just enough of the icsb infrastructure to compile
      without warnings. The rest will go in subsequent commits.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: use generic percpu counters for free inode counter · e88b64ea
      Committed by Dave Chinner
      XFS has hand-rolled per-cpu counters for the superblock since before
      there was any generic implementation. The free inode counter is not
      used for any limit enforcement - the per-AG free inode counters are
      used during allocation to determine if there are inode available for
      allocation.
      
      Hence we don't need any of the complexity of the hand-rolled
      counters and we can simply replace them with generic per-cpu
      counters similar to the inode counter.
      
      This version introduces a xfs_mod_ifree() helper function from
      Christoph Hellwig.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: use generic percpu counters for inode counter · 501ab323
      Committed by Dave Chinner
      XFS has hand-rolled per-cpu counters for the superblock since before
      there was any generic implementation. There are some warts around
      the use of them for the inode counter as the hand-rolled counter is
      designed to be accurate at zero, but has no specific accuracy at
      any other value. This design causes problems for the maximum inode
      count threshold enforcement, as there is no trigger that balances
      the counters as they get close to the maximum threshold.
      
      Instead of designing new triggers for balancing, just replace the
      handrolled per-cpu counter with a generic counter.  This enables us
      to update the counter through the normal superblock modification
      functions, but rather than do that we add an xfs_mod_icount() helper
      function (from Christoph Hellwig) and keep the percpu counter
      outside the superblock in the struct xfs_mount.
      
      This means we still need to initialise the per-cpu counter
      specifically when we read the superblock, and vice versa when we
      log/write it, but it does mean that we don't need to change any
      other code.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  14. 22 Jan 2015, 3 commits
    • xfs: sanitise sb_bad_features2 handling · 074e427b
      Committed by Dave Chinner
      We currently have to ensure that every time we update sb_features2
      that we update sb_bad_features2. Now that we log and format the
      superblock in its entirety we actually don't have to care because
      we can simply update the sb_bad_features2 when we format it into the
      buffer. This removes the need for anything but the mount and
      superblock formatting code to care about sb_bad_features2, and
      hence removes the possibility that we forget to update bad_features2
      when necessary in the future.
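      
      The mechanism amounts to one extra assignment in the in-core to
      on-disk superblock conversion, roughly like this (field names per the
      on-disk superblock definition; the surrounding conversion code is
      elided):
      
              to->sb_features2 = cpu_to_be32(from->sb_features2);
              /* keep the historically misplaced copy in sync unconditionally */
              to->sb_bad_features2 = cpu_to_be32(from->sb_features2);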
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: consolidate superblock logging functions · 61e63ecb
      Committed by Dave Chinner
      We now have several superblock logging functions that are identical
      except for the transaction reservation and whether it should be a
      synchronous transaction or not. Consolidate these all into a single
      function, a single reservation and a sync flag, and call it
      xfs_sync_sb().
      
      Also, xfs_mod_sb() is not really a modification function - it's the
      operation of logging the superblock buffer. Hence change its name to
      reflect this.
      
      Note that we have to change the mp->m_update_flags that are passed
      around at mount time to a boolean simply to indicate a superblock
      update is needed.
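      
      The consolidated helper ends up roughly of this shape (a sketch: the
      transaction allocation, reservation and commit calling conventions
      shown here are simplified and may not match that kernel version
      exactly, and xfs_log_sb() stands in for the renamed xfs_mod_sb()):
      
      int
      xfs_sync_sb(struct xfs_mount *mp, bool wait)
      {
              struct xfs_trans        *tp;
              int                     error;
      
              tp = _xfs_trans_alloc(mp, XFS_TRANS_SB_CHANGE, KM_SLEEP);
              error = xfs_trans_reserve(tp, &M_RES(mp)->tr_sb, 0, 0);
              if (error) {
                      xfs_trans_cancel(tp, 0);
                      return error;
              }
      
              xfs_log_sb(tp);                 /* format and log the whole superblock */
              if (wait)
                      xfs_trans_set_sync(tp); /* one sync flag replaces the variants */
              return xfs_trans_commit(tp, 0);
      }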
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: remove bitfield based superblock updates · 4d11a402
      Committed by Dave Chinner
      When we log changes to the superblock, we first have to write them
      to the on-disk buffer, and then log that. Right now we have a
      complex bitfield based arrangement to only write the modified field
      to the buffer before we log it.
      
      This used to be necessary as a performance optimisation because we
      logged the superblock buffer in every extent or inode allocation or
      freeing, and so performance was extremely important. We haven't done
      this for years, however, ever since the lazy superblock counters
      pulled the superblock logging out of the transaction commit
      fast path.
      
      Hence we have a bunch of complexity that is not necessary that makes
      writing the in-core superblock to disk much more complex than it
      needs to be. We only need to log the superblock now during
      management operations (e.g. during mount, unmount or quota control
      operations) so it is not a performance critical path anymore.
      
      As such, remove the complex field based logging mechanism and
      replace it with a simple conversion function similar to what we use
      for all other on-disk structures.
      
      This means we always log the entirety of the superblock, but again
      because we rarely modify the superblock this is not an issue for log
      bandwidth or CPU time. Indeed, if we do log the superblock
      frequently, delayed logging will minimise the impact of this
      overhead.
      
      [Fixed gquota/pquota inode sharing regression noticed by bfoster.]
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  15. 24 Dec 2014, 1 commit
    • xfs: Keep sb_bad_features2 consistent with sb_features2 · 1a43ec03
      Committed by Jan Kara
      Currently when we modify sb_features2, we store the same value also in
      sb_bad_features2. However in most places we forget to mark field
      sb_bad_features2 for logging and thus it can happen that a change to it
      is lost. This results in an inconsistent sb_features2 and
      sb_bad_features2 fields e.g. after xfstests test xfs/187.
      
      Fix the problem by changing XFS_SB_FEATURES2 to actually mean both
      sb_features2 and sb_bad_features2 fields since this is always what we
      want to log. This isn't ideal because the fact that XFS_SB_FEATURES2
      means two fields could cause some problems in the future; however, the
      code is hopefully less error prone than it is now.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  16. 04 Dec 2014, 1 commit
  17. 28 Nov 2014, 4 commits
  18. 02 Oct 2014, 1 commit
  19. 29 Sep 2014, 1 commit
  20. 04 Aug 2014, 2 commits
    • xfs: fix coccinelle warnings · 6eee8972
      Committed by kbuild test robot
      Removes unneeded semicolon, introduced by commit a70a4fa5 ("xfs: fix
      a couple error sequence jumps in xfs_mountfs"):
      
      fs/xfs/xfs_mount.c:858:24-25: Unneeded semicolon
      
      Generated by: scripts/coccinelle/misc/semicolon.cocci
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: avoid false quotacheck after unclean shutdown · 5ef828c4
      Committed by Eric Sandeen
      The commit
      
      83e782e1 xfs: Remove incore use of XFS_OQUOTA_ENFD and XFS_OQUOTA_CHKD
      
      added a new function xfs_sb_quota_from_disk() which swaps
      on-disk XFS_OQUOTA_* flags for in-core XFS_GQUOTA_* and XFS_PQUOTA_*
      flags after the superblock is read.
      
      However, if log recovery is required, the superblock is read again,
      and the modified in-core flags are re-read from disk, so we have
      XFS_OQUOTA_* flags in memory again.  This causes the
      XFS_QM_NEED_QUOTACHECK() test to be true, because the XFS_OQUOTA_CHKD
      is still set, and not XFS_GQUOTA_CHKD or XFS_PQUOTA_CHKD.
      
      Change xfs_sb_from_disk to call xfs_sb_quota_from_disk and always
      convert the disk flags to in-memory flags.
      
      Add a lower-level function which can be called with "false" to
      not convert the flags, so that the sb verifier can verify
      exactly what was on disk, per Brian Foster's suggestion.
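      
      The core of the in-memory conversion looks roughly like this (a
      simplified sketch wrapped in a hypothetical helper; the accounting
      flag handling of the real function is omitted):
      
      /* translate old combined OQUOTA flags into separate group/project flags */
      static void
      example_quota_flags_from_disk(struct xfs_sb *sbp)
      {
              if (sbp->sb_qflags & XFS_OQUOTA_ENFD)
                      sbp->sb_qflags |= (sbp->sb_qflags & XFS_PQUOTA_ACCT) ?
                                              XFS_PQUOTA_ENFD : XFS_GQUOTA_ENFD;
              if (sbp->sb_qflags & XFS_OQUOTA_CHKD)
                      sbp->sb_qflags |= (sbp->sb_qflags & XFS_PQUOTA_ACCT) ?
                                              XFS_PQUOTA_CHKD : XFS_GQUOTA_CHKD;
              sbp->sb_qflags &= ~(XFS_OQUOTA_ENFD | XFS_OQUOTA_CHKD);
      }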
      Reported-by: Cyril B. <cbay@excellency.fr>
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
  21. 30 Jul 2014, 1 commit
  22. 24 Jul 2014, 1 commit
    • xfs: allow inode allocations in post-growfs disk space · 9de67c3b
      Committed by Eric Sandeen
      Today, if we perform an xfs_growfs which adds allocation groups,
      mp->m_maxagi is not properly updated when the growfs is complete.
      
      Therefore inodes will continue to be allocated only in the
      AGs which existed prior to the growfs, and the new space
      won't be utilized.
      
      This is because of this path in xfs_growfs_data_private():
      
      xfs_growfs_data_private
      	xfs_initialize_perag(mp, nagcount, &nagimax);
      		if (mp->m_flags & XFS_MOUNT_32BITINODES)
      			index = xfs_set_inode32(mp);
      		else
      			index = xfs_set_inode64(mp);
      
      		if (maxagi)
      			*maxagi = index;
      
      where xfs_set_inode* iterates over the (old) agcount in
      mp->m_sb.sb_agblocks, which has not yet been updated
      in the growfs path.  So "index" will be returned based on
      the old agcount, not the new one, and new AGs are not available
      for inode allocation.
      
      Fix this by explicitly passing the proper AG count (which
      xfs_initialize_perag() already has) down another level,
      so that xfs_set_inode* can make the proper decision about
      acceptable AGs for inode allocation in the potentially
      newly-added AGs.
      
      This has been broken since 3.7, when these two
      xfs_set_inode* functions were added in commit 2d2194f6.
      Prior to that, we looped over "agcount" not sb_agblocks
      in these calculations.
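      
      After the fix, the same path passes the caller's AG count down
      explicitly, so the decision no longer depends on the stale superblock
      value (a sketch mirroring the excerpt above):
      
      xfs_growfs_data_private
      	xfs_initialize_perag(mp, nagcount, &nagimax);
      		if (mp->m_flags & XFS_MOUNT_32BITINODES)
      			index = xfs_set_inode32(mp, agcount);
      		else
      			index = xfs_set_inode64(mp, agcount);
      
      		if (maxagi)
      			*maxagi = index;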
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  23. 15 Jul 2014, 1 commit
    • xfs: add xfs_mount sysfs kobject · a31b1d3d
      Committed by Brian Foster
      Embed a base kobject into xfs_mount. This creates a kobject associated
      with each XFS mount and a subdirectory in sysfs with the name of the
      filesystem. The subdirectory lifecycle matches that of the mount. Also
      add the new xfs_sysfs.[c,h] source files with some XFS sysfs
      infrastructure to facilitate attribute creation.
      
      Note that there are currently no attributes exported as part of the
      xfs_mount kobject. It exists solely to serve as a per-mount container
      for child objects.
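      
      A hedged sketch of the registration and teardown (the names follow the
      xfs_sysfs.[c,h] infrastructure the patch introduces, but the exact
      signatures shown here are assumptions):
      
              /* in xfs_mountfs(): create /sys/fs/xfs/<fsname> for this mount */
              error = xfs_sysfs_init(&mp->m_kobj, &xfs_mp_ktype, NULL,
                                     mp->m_fsname);
              if (error)
                      return error;
      
              /* on unmount the kobject is removed along with the mount */
              xfs_sysfs_del(&mp->m_kobj);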
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>