1. 09 Jan, 2014 7 commits
  2. 17 Dec, 2013 1 commit
  3. 24 Nov, 2013 2 commits
    • block: Abstract out bvec iterator · 4f024f37
      Kent Overstreet authored
      Immutable biovecs are going to require an explicit iterator. To
      implement immutable bvecs, a later patch is going to add a bi_bvec_done
      member to this struct; for now, this patch effectively just renames
      things.
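      For reference, the iterator introduced here looks roughly like the
      sketch below (based on the description above; the comments are mine,
      and bi_bvec_done is deliberately absent since it only arrives in the
      later patch). After the rename, code that touched bio->bi_sector
      directly reads bio->bi_iter.bi_sector instead:

          #include <linux/types.h>   /* sector_t */

          struct bvec_iter {
                  sector_t     bi_sector;  /* device address, in 512-byte sectors */
                  unsigned int bi_size;    /* residual I/O count */
                  unsigned int bi_idx;     /* current index into the bvec array */
          };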
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: "Ed L. Cashin" <ecashin@coraid.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Lars Ellenberg <drbd-dev@lists.linbit.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Matthew Wilcox <willy@linux.intel.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Yehuda Sadeh <yehuda@inktank.com>
      Cc: Sage Weil <sage@inktank.com>
      Cc: Alex Elder <elder@inktank.com>
      Cc: ceph-devel@vger.kernel.org
      Cc: Joshua Morris <josh.h.morris@us.ibm.com>
      Cc: Philip Kelleher <pjk1939@linux.vnet.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: dm-devel@redhat.com
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: linux390@de.ibm.com
      Cc: Boaz Harrosh <bharrosh@panasas.com>
      Cc: Benny Halevy <bhalevy@tonian.com>
      Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Chris Mason <chris.mason@fusionio.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Andreas Dilger <adilger.kernel@dilger.ca>
      Cc: Jaegeuk Kim <jaegeuk.kim@samsung.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Dave Kleikamp <shaggy@kernel.org>
      Cc: Joern Engel <joern@logfs.org>
      Cc: Prasad Joshi <prasadjoshi.linux@gmail.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: KONISHI Ryusuke <konishi.ryusuke@lab.ntt.co.jp>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Ben Myers <bpm@sgi.com>
      Cc: xfs@oss.sgi.com
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Guo Chao <yan@linux.vnet.ibm.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Asai Thambi S P <asamymuthupa@micron.com>
      Cc: Selvan Mani <smani@micron.com>
      Cc: Sam Bradshaw <sbradshaw@micron.com>
      Cc: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
      Cc: "Roger Pau Monné" <roger.pau@citrix.com>
      Cc: Jan Beulich <jbeulich@suse.com>
      Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Cc: Ian Campbell <Ian.Campbell@citrix.com>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchand@redhat.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Peng Tao <tao.peng@emc.com>
      Cc: Andy Adamson <andros@netapp.com>
      Cc: fanchaoting <fanchaoting@cn.fujitsu.com>
      Cc: Jie Liu <jeff.liu@oracle.com>
      Cc: Sunil Mushran <sunil.mushran@gmail.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Namjae Jeon <namjae.jeon@samsung.com>
      Cc: Pankaj Kumar <pankaj.km@samsung.com>
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Mel Gorman <mgorman@suse.de>
      4f024f37
    • bcache: Kill unaligned bvec hack · ed9c47be
      Kent Overstreet authored
      Bcache has a hack to avoid cloning the biovec if it's all full pages -
      but with immutable biovecs coming this won't be necessary anymore.
      
      For now, we remove the special case and always clone the bvec array so
      that the immutable biovec patches are simpler.
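      A minimal sketch of the simplification (the helper name is
      illustrative, not the actual bcache function):

          /* Before: share the original biovec when every segment was a
           * full, aligned page; otherwise clone.  After: always clone. */
          static struct bio *clone_bvec_array(struct bio *bio)
          {
                  return bio_clone(bio, GFP_NOIO); /* no more full-pages fast path */
          }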
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      ed9c47be
  4. 11 Nov, 2013 18 commits
  5. 12 Jul, 2013 4 commits
  6. 27 Jun, 2013 8 commits
    • bcache: Send label uevents · ab9e1400
      Gabriel de Perthuis authored
      Signed-off-by: Gabriel de Perthuis <g2p.code@gmail.com>
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      ab9e1400
    • a25c32be
    • bcache: Track dirty data by stripe · 279afbad
      Kent Overstreet authored
      To make background writeback aware of raid5/6 stripes, we first need
      to track the amount of dirty data within each stripe; we do this by
      breaking up the existing sectors_dirty into per-stripe atomic_ts.
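      A rough sketch of the data-structure change (field names are a guess
      at the shape of the patch, not necessarily the exact code):

          #include <linux/atomic.h>

          /* Before: one dirty-sector count for the whole cached device. */
          struct bcache_device_before {
                  atomic_long_t sectors_dirty;
          };

          /* After: one counter per stripe, so background writeback can
           * later prefer flushing out complete raid5/6 stripes. */
          struct bcache_device_after {
                  unsigned  nr_stripes;
                  atomic_t  *stripe_sectors_dirty; /* array of nr_stripes counters */
          };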
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      279afbad
    • bcache: Initialize sectors_dirty when attaching · 444fc0b6
      Kent Overstreet authored
      Previously, dirty_data wouldn't get initialized until the first garbage
      collection... which was a bit of a problem for background writeback (as
      the PD controller keys off of it) and also confusing for users.
      
      This is also prep work for making background writeback aware of raid5/6
      stripes.
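      Conceptually, the fix is to compute the count once at attach time
      rather than waiting for GC. A hedged sketch: the real patch walks the
      btree, while the flat key array here just keeps the sketch
      self-contained (KEY_DIRTY/KEY_SIZE are bcache's key accessors):

          /* Seed the dirty counter the writeback PD controller reads, so
           * it starts from an accurate value instead of zero. */
          static void sectors_dirty_init(struct cached_dev *dc,
                                         const struct bkey *keys,
                                         size_t nr_keys)
          {
                  uint64_t dirty = 0;
                  size_t i;

                  for (i = 0; i < nr_keys; i++)
                          if (KEY_DIRTY(&keys[i]))
                                  dirty += KEY_SIZE(&keys[i]);

                  atomic_long_set(&dc->disk.sectors_dirty, dirty);
          }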
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      444fc0b6
    • bcache: Improve lazy sorting · 6ded34d1
      Kent Overstreet authored
      The old lazy sorting code was kind of hacky - rewrite it in a way
      that makes more sense mathematically: the idea is that the sizes of
      the sets of keys in a btree node should increase by a more or less
      fixed ratio from smallest to biggest.
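      In other words, set sizes should form a rough geometric progression.
      A toy illustration of that criterion (plain arithmetic, not the
      bcache code; the ratio constant is made up):

          #define SET_GROWTH_RATIO 4

          /* Merge a set into its larger neighbour once their sizes are no
           * longer separated by the target ratio, keeping the progression
           * roughly geometric, e.g. 64, 256, 1024, 4096 keys. */
          static int sets_should_merge(unsigned smaller, unsigned larger)
          {
                  return smaller * SET_GROWTH_RATIO > larger;
          }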
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      6ded34d1
    • bcache: Rip out pkey()/pbtree() · 85b1492e
      Kent Overstreet authored
      Old gcc doesn't like the struct hack, and it is kind of ugly. So
      finish off the work of converting pr_debug() statements to
      tracepoints, and delete pkey()/pbtree().
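      For readers unfamiliar with the term, the "struct hack" is the
      pre-C99 idiom of a trailing array extended by over-allocating
      (illustration only, not the deleted bcache code):

          struct keybuf_gnu { unsigned n; char data[0]; }; /* GNU zero-length array */
          struct keybuf_c99 { unsigned n; char data[];  }; /* C99 flexible array member */

          /* Extended by over-allocating, e.g.:
           *   struct keybuf_c99 *b = malloc(sizeof(*b) + len); */

      Which spellings a given gcc version accepts, and where, varies -
      hence the complaint above.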
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      85b1492e
    • bcache: Fix/revamp tracepoints · c37511b8
      Kent Overstreet authored
      The tracepoints were reworked to be more sensible, and a null
      pointer deref in one of them was fixed.
      
      Converted some of the pr_debug()s to tracepoints - this is partly a
      performance optimization: with DEBUG and CONFIG_DYNAMIC_DEBUG
      disabled, pr_debug() used to be an empty macro, but at some point it
      was changed to an empty inline function.
      
      Some of the pr_debug() statements had rather expensive function
      calls among their arguments, so this code was getting run
      unnecessarily even on non-debug kernels - in some fast paths, too.
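      The distinction matters because an empty macro discards its
      arguments at preprocessing time, while an empty inline function
      still has to evaluate them; a minimal illustration:

          static const char *expensive_summary(void)
          {
                  return "expensive";   /* stand-in for a costly formatter */
          }

          #define debug_macro(fmt, ...) do { } while (0)

          static inline void debug_inline(const char *fmt, ...) { (void)fmt; }

          void example(void)
          {
                  debug_macro("%s", expensive_summary());  /* argument vanishes */
                  debug_inline("%s", expensive_summary()); /* still evaluated */
          }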
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      c37511b8
    • bcache: Refactor btree io · 57943511
      Kent Overstreet authored
      The most significant change is that btree reads are now done
      synchronously, instead of asynchronously and doing the post read stuff
      from a workqueue.
      
      This was originally done because we can't block on IO under
      generic_make_request(). But we already have a mechanism to punt
      cache lookups to a workqueue if needed, so if we just use that we
      don't have to deal with the complexity of doing things
      asynchronously.
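      A hedged sketch of what the synchronous path can look like using
      bcache's closure primitives (the submit helper is hypothetical):

          void submit_btree_read_bio(struct btree *b, struct closure *cl); /* hypothetical */
          void btree_node_read_done(struct btree *b);

          static void btree_node_read(struct btree *b)
          {
                  struct closure cl;

                  closure_init_stack(&cl);
                  submit_btree_read_bio(b, &cl);  /* kick off the read bio */
                  closure_sync(&cl);              /* block until it completes */
                  btree_node_read_done(b);        /* parse in the caller's context */
          }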
      
      The main benefit is that this makes the locking situation saner; we
      can hold our write lock on the btree node until we're finished
      reading it, and we don't need that btree_node_read_done() flag
      anymore.
      
      Also, for writes, btree_write() was broken out into btree_node_write()
      and btree_leaf_dirty() - the old code with the boolean argument was dumb
      and confusing.
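      A sketch of the interface change (signatures approximate):

          /* Old: one entry point whose boolean changed what it did. */
          void btree_write(struct btree *b, bool now, struct btree_op *op);

          /* New: two self-describing entry points. */
          void btree_node_write(struct btree *b, struct closure *parent); /* do the IO */
          void btree_leaf_dirty(struct btree *b, struct btree_op *op);    /* just mark dirty */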
      
      The prio_blocked mechanism was improved a bit too: now the only
      counter is in struct btree_write, and we don't mess with
      transferring a count from struct btree anymore.
      
      This required changing garbage collection to block prios at the
      start and unblock when it finishes, which is cleaner than what it
      was doing anyway (the old code had mostly the same effect, but did
      it in a convoluted way).
      
      And the btree iter that btree_node_read_done() uses was converted to
      a real mempool.
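      A hedged sketch of what "a real mempool" buys here, using the
      standard kernel mempool API (the field name is illustrative):

          /* Setup (e.g. at cache-set allocation): reserve one iterator so
           * allocation can always make progress under memory pressure. */
          c->fill_iter = mempool_create_kmalloc_pool(1, iter_size);

          /* Use (in btree_node_read_done()): block on the reserve instead
           * of failing outright. */
          struct btree_iter *iter = mempool_alloc(c->fill_iter, GFP_NOIO);
          /* ... sort/merge the node's key sets via iter ... */
          mempool_free(iter, c->fill_iter);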
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      57943511