1. 25 Jun, 2015 5 commits
  2. 22 Apr, 2015 1 commit
  3. 20 Apr, 2015 1 commit
  4. 12 Apr, 2015 1 commit
  5. 19 Feb, 2015 1 commit
  6. 11 Feb, 2015 1 commit
  7. 09 Jan, 2015 1 commit
  8. 18 Dec, 2014 6 commits
  9. 15 Oct, 2014 1 commit
    • ceph: remove redundant code for max file size verification · a4483e8a
      Committed by Chao Yu
      Both ceph_update_writeable_page and ceph_setattr verify the file size
      against the maximum size ceph supports.
      ceph_update_writeable_page has two callers, ceph_write_begin and
      ceph_page_mkwrite. For ceph_write_begin, the size has already been verified
      in generic_write_checks via ceph_write_iter; ceph_page_mkwrite has no
      chance to change the file size, since it operates on an mmapped page. Likewise,
      the size has already been verified in inode_change_ok by the time ceph_setattr
      is called.
      So let's remove the redundant code for max file size verification.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Reviewed-by: Yan, Zheng <zyan@redhat.com>
      a4483e8a
  10. 07 Jun, 2014 1 commit
  11. 06 Jun, 2014 1 commit
  12. 07 May, 2014 1 commit
  13. 29 Jan, 2014 1 commit
  14. 01 Jan, 2014 2 commits
  15. 14 Dec, 2013 2 commits
  16. 24 Nov, 2013 1 commit
  17. 07 Sep, 2013 3 commits
    • ceph: page still marked private_2 · d4d3aa38
      Committed by Milosz Tanski
      A previous patch allowed us to clean up most of the issues with pages marked
      as private_2 when calling ceph_readpages. However, there seems to be a case
      in the error-path cleanup in start_read that still triggers this from time
      to time; I've only seen it a couple of times.
      
      BUG: Bad page state in process petabucket  pfn:335b82
      page:ffffea000cd6e080 count:0 mapcount:0 mapping:          (null) index:0x0
      page flags: 0x200000000001000(private_2)
      Call Trace:
       [<ffffffff81563442>] dump_stack+0x46/0x58
       [<ffffffff8112c7f7>] bad_page+0xc7/0x120
       [<ffffffff8112cd9e>] free_pages_prepare+0x10e/0x120
       [<ffffffff8112e580>] free_hot_cold_page+0x40/0x160
       [<ffffffff81132427>] __put_single_page+0x27/0x30
       [<ffffffff81132d95>] put_page+0x25/0x40
       [<ffffffffa02cb409>] ceph_readpages+0x2e9/0x6f0 [ceph]
       [<ffffffff811313cf>] __do_page_cache_readahead+0x1af/0x260
      Signed-off-by: Milosz Tanski <milosz@adfin.com>
      Signed-off-by: Sage Weil <sage@inktank.com>
      d4d3aa38
    • ceph: clean PgPrivate2 on returning from readpages · 76be778b
      Committed by Milosz Tanski
      In some cases the ceph readpages code bails without filling all the pages
      already marked by fscache. When we return to the readahead code, this causes
      a BUG.
      Signed-off-by: Milosz Tanski <milosz@adfin.com>
      76be778b
    • ceph: use fscache as a local persistent cache · 99ccbd22
      Committed by Milosz Tanski
      Add support for fscache to the Ceph filesystem. This brings it on par with
      some of the other network filesystems in Linux (such as NFS, AFS, etc.).
      
      To mount the filesystem with fscache, the 'fsc' mount option must be
      passed.
      Signed-off-by: Milosz Tanski <milosz@adfin.com>
      Signed-off-by: Sage Weil <sage@inktank.com>
      99ccbd22
  18. 28 Aug, 2013 1 commit
    • ceph: use vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem · 7d6e1f54
      Committed by Sha Zhengju
      We will soon add memcg dirty page accounting around
      __set_page_dirty_{buffers,nobuffers} in the VFS layer, so we'd better use the
      VFS interface to avoid exporting those details to filesystems.
      
      Since VFS set_page_dirty() must be called under the page lock, we no longer
      need elaborate code to handle races here, and two WARN_ON()s are added to
      detect such exceptions.
      Thanks very much to Sage and Yan, Zheng for their coaching!
      
      I tested it in a two-server ceph environment, where one server is the client
      and the other runs the mds/osd/mon, and ran the following fsx tests from
      xfstests:
      
        ./fsx   1MB -N 50000 -p 10000 -l 1048576
        ./fsx  10MB -N 50000 -p 10000 -l 10485760
        ./fsx 100MB -N 50000 -p 10000 -l 104857600
      
      fsx performs lots of mmap-read/mmap-write/truncate operations, and the tests
      completed successfully without triggering either WARN_ON().
      Signed-off-by: Sha Zhengju <handai.szj@taobao.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      7d6e1f54
  19. 16 Aug, 2013 1 commit
  20. 10 Aug, 2013 1 commit
  21. 04 Jul, 2013 2 commits
  22. 22 May, 2013 2 commits
    • ceph: use ->invalidatepage() length argument · 569d39fc
      Committed by Lukas Czerner
      The ->invalidatepage() aop now accepts the range to invalidate, so we can
      make use of it in ceph_invalidatepage().
      Signed-off-by: Lukas Czerner <lczerner@redhat.com>
      Acked-by: Sage Weil <sage@inktank.com>
      Cc: ceph-devel@vger.kernel.org
      569d39fc
    • mm: change invalidatepage prototype to accept length · d47992f8
      Committed by Lukas Czerner
      Currently there is no way to truncate a partial page where the end truncate
      point is not at the end of the page. This is because it was not needed: the
      existing functionality was enough for the file system truncate operation to
      work properly. However, more file systems now support the punch hole
      feature, and they can benefit from mm supporting truncation of a page just
      up to a certain point.
      
      Specifically, with this functionality truncate_inode_pages_range() can
      be changed so it supports truncating partial page at the end of the
      range (currently it will BUG_ON() if 'end' is not at the end of the
      page).
      
      This commit changes the invalidatepage() address space operation prototype
      to accept the range to be invalidated, and updates all its instances.
      
      We also change block_invalidatepage() in the same way and actually make use
      of the new length argument to implement range invalidation.
      
      Actual file system implementations will follow, except for the file systems
      where the changes are really simple and should not change the behaviour in
      any way. An implementation of truncate_page_range() which will be able to
      accept page-unaligned ranges will follow as well.
      Signed-off-by: Lukas Czerner <lczerner@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Hugh Dickins <hughd@google.com>
      d47992f8
  23. 02 May, 2013 3 commits
    • libceph: kill off osd data write_request parameters · 406e2c9f
      Committed by Alex Elder
      In the incremental move toward supporting distinct data items in an osd
      request, some of the functions had "write_request" parameters to indicate,
      basically, whether the data belonged to in_data or out_data. Now that we
      maintain the data fields in the op structure, there is no need to indicate
      the direction, so get rid of the "write_request" parameters.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      406e2c9f
    • ceph: fix race between writepages and truncate · 1ac0fc8a
      Committed by Yan, Zheng
      ceph_writepages_start() reads inode->i_size in two places. It can get
      different values between successive reads, because truncate can change
      inode->i_size at any time. The race can lead to a mismatch between the data
      length of the osd request and the pages marked as writeback. When the osd
      request finishes, it clears the writeback state of pages according to its
      data length, so some pages can be left in the writeback state forever. The
      fix is to read inode->i_size only once, save its value in a local variable,
      and use the local variable wherever i_size is needed.
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
      Reviewed-by: Alex Elder <elder@inktank.com>
      1ac0fc8a
    • libceph: combine initializing and setting osd data · a4ce40a9
      Committed by Alex Elder
      This ends up being a rather large patch but what it's doing is
      somewhat straightforward.
      
      Basically, this is replacing two calls with one.  The first of the
      two calls is initializing a struct ceph_osd_data with data (either a
      page array, a page list, or a bio list); the second is setting an
      osd request op so it associates that data with one of the op's
      parameters.  In place of those two will be a single function that
      initializes the op directly.
      
      That means we sort of fan out a set of the needed functions:
          - extent ops with pages data
          - extent ops with pagelist data
          - extent ops with bio list data
      and
          - class ops with page data for receiving a response
      
      We also define another one, but it's only used internally:
          - class ops with pagelist data for request parameters
      
      Note that we *still* haven't gotten rid of the osd request's
      r_data_in and r_data_out fields.  All the osd ops refer to them for
      their data.  For now, these data fields are pointers assigned to the
      appropriate r_data_* field when these new functions are called.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
      a4ce40a9