1. 01 May 2014, 2 commits
  2. 17 Feb 2014, 21 commits
  3. 24 Nov 2013, 1 commit
    • block: Convert bio_for_each_segment() to bvec_iter · 7988613b
      Committed by Kent Overstreet
      More prep work for immutable biovecs - with immutable bvecs drivers
      won't be able to use the biovec directly, they'll need to use helpers
      that take into account bio->bi_iter.bi_bvec_done.
      
      This updates callers for the new usage without changing the
      implementation yet.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: "Ed L. Cashin" <ecashin@coraid.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Lars Ellenberg <drbd-dev@lists.linbit.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Paul Clements <Paul.Clements@steeleye.com>
      Cc: Jim Paris <jim@jtan.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Yehuda Sadeh <yehuda@inktank.com>
      Cc: Sage Weil <sage@inktank.com>
      Cc: Alex Elder <elder@inktank.com>
      Cc: ceph-devel@vger.kernel.org
      Cc: Joshua Morris <josh.h.morris@us.ibm.com>
      Cc: Philip Kelleher <pjk1939@linux.vnet.ibm.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: linux390@de.ibm.com
      Cc: Nagalakshmi Nandigama <Nagalakshmi.Nandigama@lsi.com>
      Cc: Sreekanth Reddy <Sreekanth.Reddy@lsi.com>
      Cc: support@lsi.com
      Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Guo Chao <yan@linux.vnet.ibm.com>
      Cc: Asai Thambi S P <asamymuthupa@micron.com>
      Cc: Selvan Mani <smani@micron.com>
      Cc: Sam Bradshaw <sbradshaw@micron.com>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Stephen Hemminger <shemminger@vyatta.com>
      Cc: Quoc-Son Anh <quoc-sonx.anh@intel.com>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: linux-m68k@lists.linux-m68k.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: drbd-user@lists.linbit.com
      Cc: nbd-general@lists.sourceforge.net
      Cc: cbe-oss-dev@lists.ozlabs.org
      Cc: xen-devel@lists.xensource.com
      Cc: virtualization@lists.linux-foundation.org
      Cc: linux-raid@vger.kernel.org
      Cc: linux-s390@vger.kernel.org
      Cc: DL-MPTFusionLinux@lsi.com
      Cc: linux-scsi@vger.kernel.org
      Cc: devel@driverdev.osuosl.org
      Cc: linux-fsdevel@vger.kernel.org
      Cc: cluster-devel@redhat.com
      Cc: linux-mm@kvack.org
      Acked-by: Geoff Levand <geoff@infradead.org>
      7988613b
  4. 29 Mar 2013, 2 commits
  5. 23 Mar 2013, 2 commits
  6. 09 Nov 2012, 8 commits
  7. 08 Nov 2012, 4 commits
    • drbd: do not reset rs_pending_cnt too early · a324896b
      Committed by Lars Ellenberg
      Fix asserts like
        block drbd0: in got_BlockAck:4634: rs_pending_cnt = -35 < 0 !
      
      We reset the resync lru cache and related information (rs_pending_cnt),
      once we successfully finished a resync or online verify, or if the
      replication connection is lost.
      
      We also need to reset it if a resync or online verify is aborted
      because a lower level disk failed.
      
      In that case the replication link is still established,
      and we may still have packets queued in the network buffers
      which want to touch rs_pending_cnt.
      
      We do not have any synchronization mechanism to know for sure when all
      such pending resync related packets have been drained.
      
      To keep this counter from going negative (and violating the ASSERT
      that it is always >= 0), simply do not reset it when we lose a disk.
      
      It is good enough to make sure it is re-initialized before the next
      resync can start: reset it when we re-attach a disk.
      Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
      Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
      a324896b
    • drbd: differentiate between normal and forced detach · 0c849666
      Committed by Lars Ellenberg
      Aborting local requests (not waiting for completion from the lower level
      disk) is dangerous: if the master bio has been completed to upper
      layers, data pages may be re-used for other things already.
      If local IO is still pending and later completes,
      this may cause crashes or corrupt unrelated data.
      
      Only abort local IO if explicitly requested.
      The intended use case is a lower-level device that has turned into a
      tarpit, not completing IO requests, not even with error completions.
      Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
      Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
      0c849666
    • drbd: remove struct drbd_tl_epoch objects (barrier works) · b6dd1a89
      Committed by Lars Ellenberg
      cherry-picked and adapted from drbd 9 devel branch
      
      DRBD requests (struct drbd_request) are already on the per resource
      transfer log list, and carry their epoch number. We do not need to
      additionally link them on other ring lists in other structs.
      
      The drbd sender thread can recognize on its own when to send a
      P_BARRIER, by tracking the currently processed epoch and how many
      writes have been processed for that epoch.
      
      If the epoch of the request to be processed does not match the
      currently processed epoch, and any writes have been processed in that
      epoch, a P_BARRIER for the last processed epoch is sent out first.
      The new epoch then becomes the currently processed epoch.
      
      To avoid getting stuck in drbd_al_begin_io() waiting for a
      P_BARRIER_ACK, the sender thread also needs to handle the case where
      the current epoch has already been closed but no new requests are
      queued yet, and send out the P_BARRIER as soon as possible.
      
      This is done by comparing the per resource "current transfer log epoch"
      (tconn->current_tle_nr) with the per connection "currently processed
      epoch number" (tconn->send.current_epoch_nr), while waiting for
      new requests to be processed in wait_for_work().
      Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
      Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
      b6dd1a89
    • drbd: move the drbd_work_queue from drbd_socket to drbd_connection · d5b27b01
      Committed by Lars Ellenberg
      cherry-picked and adapted from drbd 9 devel branch
      In 8.4, we don't distinguish between "resource work" and "connection
      work" yet; we have one worker for both, as we still have only one connection.
      
      We only ever used the "data.work",
      no need to keep the "meta.work" around.
      
      Move tconn->data.work to tconn->sender_work.
      Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
      Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
      d5b27b01