1. May 21, 2008 (1 commit)
    • mm: bdi: fix race in bdi_class device creation · 19051c50
      Authored by Greg Kroah-Hartman
      There is a race window between when a device is created with
      device_create() and when its drvdata is set with a call to
      dev_set_drvdata(): during that window a sysfs file can be opened
      while drvdata is still NULL, causing all sorts of bad things to
      happen.
      
      This patch fixes the problem by using the new function,
      device_create_vargs().
      
      Many thanks to Arthur Jones <ajones@riverbed.com> for reporting the bug,
      and testing patches out.
      
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Cc: Arthur Jones <ajones@riverbed.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Miklos Szeredi <mszeredi@suse.cz>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      19051c50
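The window the patch closes can be illustrated with a small userspace mock. This is a minimal sketch, not the real driver-core API: `struct device`, `create_then_set`, and `create_with_data` below are illustrative stand-ins, with a `visible` flag modeling "sysfs files can now be opened".

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in only -- not the real struct device. */
struct device {
    void *drvdata;
    int   visible;   /* models "sysfs files can now be opened" */
};

/* Racy pattern: device_create() then dev_set_drvdata().
 * Between the two steps a reader can open a sysfs file and
 * observe drvdata == NULL. */
static void create_then_set(struct device *dev, void *data)
{
    dev->drvdata = NULL;
    dev->visible = 1;      /* published: race window opens here */
    dev->drvdata = data;   /* too late for an early reader */
}

/* device_create_vargs()-style fix: the drvdata is attached before
 * the device becomes visible, so no NULL window ever exists. */
static void create_with_data(struct device *dev, void *data)
{
    dev->drvdata = data;   /* set first */
    dev->visible = 1;      /* then publish */
}
```

The fix is purely an ordering change: because device_create_vargs() takes the drvdata as an argument, the data is in place before any sysfs file can be opened.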
  2. Apr 30, 2008 (4 commits)
  3. Dec 06, 2007 (1 commit)
  4. Oct 17, 2007 (3 commits)
    • mm: per device dirty threshold · 04fbfdc1
      Authored by Peter Zijlstra
      Scale writeback cache per backing device, proportional to its writeout speed.
      
      By decoupling the BDI dirty thresholds a number of problems we currently have
      will go away, namely:
      
       - mutual interference starvation (for any number of BDIs);
       - deadlocks with stacked BDIs (loop, FUSE and local NFS mounts).
      
      It might be that all dirty pages are for a single BDI while other BDIs are
      idling. By giving each BDI a 'fair' share of the dirty limit, each one can have
      dirty pages outstanding and make progress.
      
      A global threshold also creates a deadlock for stacked BDIs; when A writes to
      B, and A generates enough dirty pages to get throttled, B will never start
      writeback until the dirty pages go away. Again, by giving each BDI its own
      'independent' dirty limit, this problem is avoided.
      
      So the problem is to determine how to distribute the total dirty limit across
      the BDIs fairly and efficiently. A BDI that has a large dirty limit but does
      not have any dirty pages outstanding is a waste.
      
      What is done is to keep a floating proportion between the BDIs based on
      writeback completions. This way faster/more active devices get a larger share
      than slower/idle devices.
      
      [akpm@linux-foundation.org: fix warnings]
      [hugh@veritas.com: Fix occasional hang when a task couldn't get out of balance_dirty_pages]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      04fbfdc1
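The proportional split can be sketched with simple integer arithmetic. This is a minimal model under stated assumptions: the real kernel code keeps a decaying floating-proportion structure so the shares age toward recently active devices, and `bdi_share` is a hypothetical helper name, not a kernel function.

```c
#include <assert.h>

/* Sketch: give each BDI a share of the global dirty limit in
 * proportion to its recent writeback completions.  The real code
 * ages the completion counts so the proportion "floats" toward
 * recently active devices; that decay is omitted here. */
static unsigned long bdi_share(unsigned long total_limit,
                               unsigned long bdi_completions,
                               unsigned long total_completions)
{
    if (total_completions == 0)
        return total_limit;   /* nothing written back yet: no split */
    return (unsigned long)((unsigned long long)total_limit *
                           bdi_completions / total_completions);
}
```

A fast disk that performed 75 of the last 100 completions gets 75% of the limit, while an idle device's share shrinks toward zero; each active BDI can therefore always make progress, which is what avoids the starvation and stacked-BDI deadlock described above.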
    • mm: scalable bdi statistics counters · b2e8fb6e
      Authored by Peter Zijlstra
      Provide scalable per backing_dev_info statistics counters.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b2e8fb6e
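The idea behind scalable per-bdi counters can be sketched as per-CPU slots that are summed on read. This is a simplification: the kernel's actual `percpu_counter` keeps a central count plus per-CPU deltas folded in at a batch threshold, and the names below (`pc_counter`, `pc_add`, `pc_read`) are illustrative, not the real API.

```c
#include <assert.h>

#define NR_CPUS 4   /* fixed small number for the sketch */

struct pc_counter {
    long cpu[NR_CPUS];   /* each CPU updates only its own slot */
};

/* Fast path: a CPU touches only its own slot, so updates do not
 * bounce a shared cache line between CPUs. */
static void pc_add(struct pc_counter *c, int cpu, long delta)
{
    c->cpu[cpu] += delta;
}

/* Slow path: sum over all CPUs (approximate under concurrency
 * in real life; exact in this single-threaded sketch). */
static long pc_read(const struct pc_counter *c)
{
    long sum = 0;
    for (int i = 0; i < NR_CPUS; i++)
        sum += c->cpu[i];
    return sum;
}
```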
    • nfs: remove congestion_end() · c4dc4bee
      Authored by Peter Zijlstra
      These patches aim to improve balance_dirty_pages() and directly address three
      issues:
        1) inter device starvation
        2) stacked device deadlocks
        3) inter process starvation
      
      1 and 2 are a direct result of removing the global dirty limit and using
      per device dirty limits. By giving each device its own dirty limit, it will
      no longer starve another device, and the cyclic dependency on the dirty limit
      is broken.
      
      In order to efficiently distribute the dirty limit across the independent
      devices a floating proportion is used; this allocates a share of the total
      limit proportional to the device's recent activity.
      
      3 is done by also scaling the dirty limit proportional to the current task's
      recent dirty rate.
      
      This patch:
      
      nfs: remove congestion_end().  It is redundant: clear_bdi_congested()
      already wakes the waiters.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c4dc4bee
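The redundancy can be seen in a toy model of the congestion API. This is a single-threaded mock under stated assumptions: a `wakeups` counter stands in for the real wake_up() on the congestion wait queue, and `struct bdi_mock` is an illustrative type, not kernel code.

```c
#include <assert.h>

struct bdi_mock {
    int congested;
    int wakeups;   /* stands in for wake_up() on the wait queue */
};

/* clear_bdi_congested() clears the bit AND wakes the waiters... */
static void clear_bdi_congested(struct bdi_mock *bdi)
{
    bdi->congested = 0;
    bdi->wakeups++;    /* waiters are already woken here */
}

/* ...so a separate congestion_end() that only wakes again adds
 * nothing a caller could observe -- hence its removal. */
static void congestion_end(struct bdi_mock *bdi)
{
    bdi->wakeups++;    /* redundant second wakeup */
}
```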
  5. Jul 17, 2007 (1 commit)
  6. Mar 17, 2007 (1 commit)
  7. Oct 21, 2006 (1 commit)
    • [PATCH] separate bdi congestion functions from queue congestion functions · 3fcfab16
      Authored by Andrew Morton
      Separate out the concept of "queue congestion" from "backing-dev congestion".
      Congestion is a backing-dev concept, not a queue concept.
      
      The blk_* congestion functions are retained, as wrappers around the core
      backing-dev congestion functions.
      
      This proper layering is needed so that NFS can cleanly use the congestion
      functions, and so that CONFIG_BLOCK=n actually links.
      
      Cc: "Thomas Maier" <balagi@justmail.de>
      Cc: "Jens Axboe" <jens.axboe@oracle.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Peter Osterlund <petero2@telia.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      3fcfab16
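The layering can be sketched as follows. This is a simplified model: the field and macro names follow the kernel's style but are abbreviated, and `blk_queue_congested` is an illustrative wrapper name chosen for the sketch, not a claim about the exact retained blk_* API.

```c
#include <assert.h>

/* Core concept: congestion state lives on the backing device. */
struct backing_dev_info {
    unsigned long state;
};

#define BDI_write_congested (1UL << 0)
#define BDI_read_congested  (1UL << 1)

static int bdi_congested(struct backing_dev_info *bdi, unsigned long bits)
{
    return (bdi->state & bits) != 0;
}

/* A request queue embeds a bdi; the queue itself is not where
 * congestion lives. */
struct request_queue {
    struct backing_dev_info backing_dev_info;
};

/* Retained blk_*-style wrapper: it just delegates to the bdi layer.
 * NFS, which has a bdi but no request queue, can call the core
 * bdi_congested() directly, and CONFIG_BLOCK=n builds still link
 * because the core no longer depends on the block layer. */
static int blk_queue_congested(struct request_queue *q, unsigned long bits)
{
    return bdi_congested(&q->backing_dev_info, bits);
}
```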