1. 10 September 2010 (4 commits)
    • ocfs2: Remove obscure error handling in direct_write. · 95fa859a
      Tao Ma authored
      In ocfs2 we don't actually allow any direct write past i_size; see
      ocfs2_prepare_inode_for_write(). So we don't need the bogus
      simple_setsize() call.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
    • ocfs2: Add some trace log for orphan scan. · 3c3f20c9
      Tao Ma authored
      The orphan scan worker currently has no trace logging, so it is very
      hard to tell whether it has finished or is blocked. Add two mlog
      trace messages so that we can tell whether the current orphan scan
      worker is blocked or not. This helped when I analyzed an orphan scan
      bug (see the sketch after this entry).
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
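      The patch itself is not shown here; as a rough illustration, the two
      trace points could bracket the scan using the standard ocfs2 mlog()
      macro. The message strings and the exact placement below are
      assumptions, not the merged code.

      /* Sketch (assumed placement and wording, not the merged patch): */
      mlog(0, "Begin orphan scan for device (%d,%d)\n",
           MAJOR(osb->sb->s_dev), MINOR(osb->sb->s_dev));

      /* the worker then scans every slot's orphan directory */

      mlog(0, "Orphan scan completed for device (%d,%d)\n",
           MAJOR(osb->sb->s_dev), MINOR(osb->sb->s_dev));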
    • Ocfs2: Add new OCFS2_IOC_INFO ioctl for ocfs2 v8. · ddee5cdb
      Tristan Ye authored
      The reason we need this ioctl is to give non-privileged end users a
      way to gather filesystem information.

      We use OCFS2_IOC_INFO to drive the new interface: userspace passes
      the kernel a structure containing an array of request pointers and a
      request count, for example:
      
      * From userspace:
      
      struct ocfs2_info_blocksize oib = {
              .ib_req = {
                      .ir_magic = OCFS2_INFO_MAGIC,
                      .ir_code = OCFS2_INFO_BLOCKSIZE,
                      ...
              },
              ...
      };

      struct ocfs2_info_clustersize oic = {
              ...
      };

      /* userspace request pointers are passed as 64-bit values */
      uint64_t reqs[2] = {(unsigned long)&oib,
                          (unsigned long)&oic};

      struct ocfs2_info info = {
              .oi_requests = reqs,
              .oi_count = 2,
      };
      
      ret = ioctl(fd, OCFS2_IOC_INFO, &info);
      
      * In kernel:
      
      Get the request pointers from *info*, then handle each request one
      by one (a rough kernel-side sketch follows this entry).
      
      The idea is to keep each separate request small enough to guarantee
      better backward and forward compatibility, since a small request is
      less likely to break if the filesystem layout on disk changes.

      Currently the following 7 requests are supported, per the
      requirements of the userspace tool o2info, and I believe the list
      will grow over time :-)
      
              OCFS2_INFO_CLUSTERSIZE
              OCFS2_INFO_BLOCKSIZE
              OCFS2_INFO_MAXSLOTS
              OCFS2_INFO_LABEL
              OCFS2_INFO_UUID
              OCFS2_INFO_FS_FEATURES
              OCFS2_INFO_JOURNAL_SIZE
      
      This ioctl is specific to OCFS2.
      Signed-off-by: Tristan Ye <tristan.ye@oracle.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
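      As a rough illustration of the kernel-side pattern described above
      (not the merged code), the handler can copy the top-level structure
      in, then walk the request array and hand each entry to a per-request
      handler. The helper name ocfs2_info_handle_request() and the exact
      field layout are assumptions for this sketch.

      static long ocfs2_ioctl_info(struct inode *inode,
                                   struct ocfs2_info __user *uinfo)
      {
              struct ocfs2_info info;
              u64 req_addr;
              int i;

              if (copy_from_user(&info, uinfo, sizeof(info)))
                      return -EFAULT;

              for (i = 0; i < info.oi_count; i++) {
                      /* oi_requests holds userspace pointers encoded as u64 */
                      if (get_user(req_addr,
                                   (u64 __user *)(unsigned long)info.oi_requests + i))
                              return -EFAULT;

                      /* each handler checks ir_magic/ir_code and copies its
                       * own small reply back to userspace */
                      if (ocfs2_info_handle_request(inode,
                              (struct ocfs2_info_request __user *)(unsigned long)req_addr))
                              return -EFAULT;
              }

              return 0;
      }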
    • mm: Move vma_stack_continue into mm.h · 39aa3cb3
      Stefan Bader authored
      So it can be used by all callers that need to check for this.
      Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 08 September 2010 (14 commits)
  3. 07 September 2010 (2 commits)
    • fuse: fix lock annotations · b9ca67b2
      Miklos Szeredi authored
      Sparse doesn't understand lock annotations of the form
      __releases(&foo->lock); change them to __releases(foo->lock). The
      same goes for __acquires() (see the sketch after this entry).
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
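      A minimal sketch of the annotation change, using a made-up structure
      and function rather than fuse's own code:

      #include <linux/spinlock.h>

      struct foo {
              spinlock_t lock;
      };

      #if 0   /* before: sparse cannot parse the '&' form */
      static void foo_drop(struct foo *f)
              __releases(&f->lock)
      {
              spin_unlock(&f->lock);
      }
      #endif

      /* after: the form sparse understands */
      static void foo_drop(struct foo *f)
              __releases(f->lock)
      {
              spin_unlock(&f->lock);
      }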
    • fuse: flush background queue on connection close · 595afaf9
      Miklos Szeredi authored
      David Bartley reported that fuse can hang in fuse_get_req_nofail()
      when the connection to the filesystem server is no longer active.
      
      If bg_queue is not empty then flush_bg_queue() called from
      request_end() can put more requests onto the pending queue. If this
      happens while ending requests on the processing queue, those
      background requests will be queued to the pending list and never
      ended.
      
      Another problem is that fuse_dev_release() didn't wake up processes
      sleeping on blocked_waitq.
      
      Solve this by:
      
       a) flushing the background queue before calling end_requests() on the
          pending and processing queues
      
       b) setting blocked = 0 and waking up processes waiting on
          blocked_waitq (see the sketch after this entry)
      
      Thanks to David for an excellent bug report.
      Reported-by: David Bartley <andareed@gmail.com>
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      CC: stable@kernel.org
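      A rough sketch of the ordering the fix establishes on connection
      teardown. The helper name end_queued_requests() and the exact
      locking details are assumptions; the real change touches
      fuse_dev_release() and fuse_abort_conn() and may differ in detail.

      /* Sketch only: illustrates points (a) and (b) above. */
      static void end_queued_requests(struct fuse_conn *fc)
      {
              /* (a) drain the background queue first, so ending the pending
               * and processing queues cannot re-populate the pending list
               * with background requests that would then never be ended.
               * Raising max_background is an assumption about how to let
               * flush_bg_queue() move everything. */
              fc->max_background = UINT_MAX;
              flush_bg_queue(fc);
              end_requests(fc, &fc->pending);
              end_requests(fc, &fc->processing);
      }

      static int fuse_dev_release(struct inode *inode, struct file *file)
      {
              struct fuse_conn *fc = fuse_get_conn(file);

              if (fc) {
                      spin_lock(&fc->lock);
                      fc->connected = 0;
                      end_queued_requests(fc);
                      /* (b) let sleepers in fuse_get_req_nofail() proceed */
                      fc->blocked = 0;
                      wake_up_all(&fc->blocked_waitq);
                      spin_unlock(&fc->lock);
                      fuse_conn_put(fc);
              }
              return 0;
      }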
  4. 04 September 2010 (1 commit)
  5. 03 September 2010 (3 commits)
    • xfs: Make fiemap work with sparse files · 9af25465
      Tao Ma authored
      In xfs_vn_fiemap, we set bmv_count to fi_extent_max + 1 and expect
      to get fi_extent_max extents back, but that does not work for a
      sparse file. The reason is that xfs_getbmap also records holes in
      'out', while 'out' is allocated with bmv_count (fi_extent_max + 1)
      entries, which does not account for holes. So in the worst case, if
      the 'out' vector looks like
      [hole, extent, hole, extent, hole, ... hole, extent, hole],
      we will only return half of fi_extent_max extents.
      
      This patch adds a new flag, BMV_IF_NO_HOLES, for bmv_iflags. With
      this flag set, we don't consume an 'out' entry in xfs_getbmap for a
      hole. The solution is a bit ugly in that it simply avoids
      incrementing the index into the 'out' vector (see the sketch after
      this entry). Skipping holes at the very beginning is not easy
      because of the complicated checks and helpers such as
      xfs_getbmapx_fix_eof_hole that adjust 'out'.
      
      Cc: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Alex Elder <aelder@sgi.com>
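      A sketch of the kind of check this describes, inside the extent loop
      of xfs_getbmap(); the surrounding variable names (iflags, cur_ext,
      map[i]) are assumptions about the loop, not the exact patch:

      /* Sketch only: after filling out[cur_ext] for the current mapping,
       * only advance the index if the caller wants holes reported or the
       * entry is a real extent (HOLESTARTBLOCK marks a hole). */
      if (!((iflags & BMV_IF_NO_HOLES) &&
            map[i].br_startblock == HOLESTARTBLOCK))
              cur_ext++;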
    • xfs: prevent 32bit overflow in space reservation · 72656c46
      Dave Chinner authored
      If we attempt to preallocate more than 2^32 blocks of space in a
      single syscall, the transaction block reservation overflows, leading
      to hangs in the superblock block accounting code. This is trivially
      reproduced with xfs_io. Fix the problem by capping the allocation
      reservation to the maximum number of blocks a single xfs_bmapi()
      call can allocate (2^21 blocks); see the sketch after this entry.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
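      A minimal sketch of the cap, assuming it sits in the preallocation
      path (xfs_alloc_file_space()); the variable names are illustrative
      and MAXEXTLEN is the existing per-extent limit of roughly 2^21
      blocks:

      /* Sketch only: never reserve more blocks for one transaction than a
       * single xfs_bmapi() call can allocate, so a huge preallocation
       * request cannot overflow the 32-bit reservation. */
      resblks = min_t(xfs_fileoff_t, allocatesize_fsb, MAXEXTLEN);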
    • nfsd4: mask out non-access bits in nfs4_access_to_omode · 8f34a430
      J. Bruce Fields authored
      This fixes an unnecessary BUG() (see the sketch after this entry).
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
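      The message gives no detail, but the title describes the change. A
      sketch of what masking out the non-access bits can look like, with
      the NFSv4 share-access constants assumed to form the relevant mask:

      /* Sketch only: mask to the access bits so stray deny or extension
       * bits cannot fall through the switch and trip the BUG(). */
      static inline int nfs4_access_to_omode(u32 access)
      {
              switch (access & NFS4_SHARE_ACCESS_BOTH) {
              case NFS4_SHARE_ACCESS_READ:
                      return O_RDONLY;
              case NFS4_SHARE_ACCESS_WRITE:
                      return O_WRONLY;
              case NFS4_SHARE_ACCESS_BOTH:
                      return O_RDWR;
              }
              BUG();
      }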
  6. 02 September 2010 (2 commits)
    • xfs: Disallow 32bit project quota id · 23963e54
      Arkadiusz Miśkiewicz authored
      Currently the on-disk structure can hold only a 16-bit project quota
      id, so disallow 32-bit ones. This fixes a problem where parts of the
      kernel structures holding the project quota id are 32-bit while the
      on-disk fields are 16-bit, which made project quota member files
      inaccessible for some operations (like mv/rm). A sketch of the check
      follows this entry.
      Signed-off-by: Arkadiusz Miśkiewicz <arekm@maven.pl>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
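      A sketch of the range check implied above, assuming it lives in the
      FSSETXATTR ioctl path where the requested project id arrives as a
      32-bit value (the field name fa->fsx_projid is an assumption):

      /* Sketch only: reject project quota ids that cannot be stored in
       * the 16-bit on-disk field, instead of silently truncating them. */
      if (fa->fsx_projid > (__uint16_t)-1)
              return XFS_ERROR(EINVAL);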
    • xfs: improve buffer cache hash scalability · 9bc08a45
      Dave Chinner authored
      When doing large parallel file creates on a 16p machine, a large
      amount of time is spent in _xfs_buf_find(). A system-wide profile
      with perf top shows this:
      
                1134740.00 19.3% _xfs_buf_find
                 733142.00 12.5% __ticket_spin_lock
      
      The problem is that the hash contains 45,000 buffers, and the hash table width
      is only 256 buffers. That means we've got around 200 buffers per chain, and
      searching it is quite expensive. The hash table size needs to increase.
      
      Secondly, every time we do a lookup, we promote the buffer we find
      to the head of the hash chain. This dirties cachelines and causes
      invalidation of cachelines across all CPUs that may have walked the
      hash chain recently, so every walk of the hash chain is effectively
      a cold-cache walk. Remove the promotion to avoid this invalidation
      (see the sketch after this entry).
      
      The results are:
      
                1045043.00 21.2% __ticket_spin_lock
                 326184.00  6.6% _xfs_buf_find
      
      A 70% drop in CPU usage when looking up buffers. Unfortunately that
      does not result in an increase in performance under this workload,
      as contention on the inode_lock soaks up most of the reduction in
      CPU usage.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
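      A sketch of the lookup with the promotion removed; the list head and
      field names (bh_list, b_hash_list, b_file_offset, b_buffer_length)
      are assumptions about the hash chain in _xfs_buf_find(), not the
      exact code:

      /* Sketch only: walk the hash chain looking for the block. */
      list_for_each_entry(bp, &hash->bh_list, b_hash_list) {
              if (bp->b_file_offset == range_base &&
                  bp->b_buffer_length == range_length) {
                      atomic_inc(&bp->b_hold);
                      goto found;
              }
      }
      /* Previously a hit was also moved to the head of the chain with
       * list_move(&bp->b_hash_list, &hash->bh_list); that write dirtied
       * the chain's cachelines on every lookup, so the promotion is
       * dropped and the chain order is left alone. */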
  7. 30 August 2010 (2 commits)
  8. 28 August 2010 (3 commits)
  9. 27 August 2010 (9 commits)