1. 26 November 2015 (1 commit)
    • Revert "blk-flush: Queue through IO scheduler when flush not required" · d7cf931d
      Committed by Jens Axboe
      This reverts commit 1b2ff19e.
      
      Jan writes:
      
      --
      
      Thanks for the report! After some investigation I found out that we allocate
      elevator-specific data in __get_request() only for non-flush requests. And
      this is actually required since the flush machinery uses the space in
      struct request for something else. Doh. So my patch is just wrong and not
      easy to fix since at the time __get_request() is called we are not sure
      whether the flush machinery will be used in the end. Jens, please revert
      1b2ff19e. Thanks!
      
      I'm somewhat surprised that you can reliably hit the race where flushing
      gets disabled for the device just while the request is in flight. But I
      guess during boot it makes some sense.
      
      --
      
      So let's just revert it; we can fix the queue run manually after the
      fact. This race is rare enough that it didn't trigger in testing; it
      requires the specific disable-while-in-flight scenario to trigger.
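
      To illustrate the constraint Jan describes, here is a minimal, hypothetical
      sketch (the sketch_* helpers are stand-ins, not the real block-layer
      functions): elevator-private data can only be set up for requests that will
      never enter the flush machinery, because the flush state shares space with
      the elevator fields in struct request.

      /* Hedged sketch only; sketch_* helpers are hypothetical stand-ins. */
      struct request *sketch_get_request(struct request_queue *q,
                                         unsigned int rw_flags, struct bio *bio)
      {
              struct request *rq = sketch_alloc_request(q);
              bool elvpriv = !(rw_flags & (REQ_FLUSH | REQ_FUA));

              if (!rq)
                      return NULL;
              if (elvpriv) {
                      /* Elevator-private data shares a union with the flush
                       * state in struct request, so it must only be set up
                       * for requests that bypass the flush machinery. */
                      sketch_elv_set_request(q, rq, bio);
                      rq->cmd_flags |= REQ_ELVPRIV;
              }
              return rq;
      }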
2. 25 November 2015 (6 commits)
3. 24 November 2015 (3 commits)
4. 21 November 2015 (1 commit)
    • blk-mq: fix calling unplug callbacks with preempt disabled · b094f89c
      Committed by Jens Axboe
      Liu reported that running certain parts of xfstests threw the
      following error:
      
      BUG: sleeping function called from invalid context at mm/page_alloc.c:3190
      in_atomic(): 1, irqs_disabled(): 0, pid: 6, name: kworker/u16:0
      3 locks held by kworker/u16:0/6:
       #0:  ("writeback"){++++.+}, at: [<ffffffff8107f083>] process_one_work+0x173/0x730
       #1:  ((&(&wb->dwork)->work)){+.+.+.}, at: [<ffffffff8107f083>] process_one_work+0x173/0x730
       #2:  (&type->s_umount_key#44){+++++.}, at: [<ffffffff811e6805>] trylock_super+0x25/0x60
      CPU: 5 PID: 6 Comm: kworker/u16:0 Tainted: G           OE   4.3.0+ #3
      Hardware name: Red Hat KVM, BIOS Bochs 01/01/2011
      Workqueue: writeback wb_workfn (flush-btrfs-108)
       ffffffff81a3abab ffff88042e282ba8 ffffffff8130191b ffffffff81a3abab
       0000000000000c76 ffff88042e282ba8 ffff88042e27c180 ffff88042e282bd8
       ffffffff8108ed95 ffff880400000004 0000000000000000 0000000000000c76
      Call Trace:
       [<ffffffff8130191b>] dump_stack+0x4f/0x74
       [<ffffffff8108ed95>] ___might_sleep+0x185/0x240
       [<ffffffff8108eea2>] __might_sleep+0x52/0x90
       [<ffffffff811817e8>] __alloc_pages_nodemask+0x268/0x410
       [<ffffffff8109a43c>] ? sched_clock_local+0x1c/0x90
       [<ffffffff8109a6d1>] ? local_clock+0x21/0x40
       [<ffffffff810b9eb0>] ? __lock_release+0x420/0x510
       [<ffffffff810b534c>] ? __lock_acquired+0x16c/0x3c0
       [<ffffffff811ca265>] alloc_pages_current+0xc5/0x210
       [<ffffffffa0577105>] ? rbio_is_full+0x55/0x70 [btrfs]
       [<ffffffff810b7ed8>] ? mark_held_locks+0x78/0xa0
       [<ffffffff81666d50>] ? _raw_spin_unlock_irqrestore+0x40/0x60
       [<ffffffffa0578c0a>] full_stripe_write+0x5a/0xc0 [btrfs]
       [<ffffffffa0578ca9>] __raid56_parity_write+0x39/0x60 [btrfs]
       [<ffffffffa0578deb>] run_plug+0x11b/0x140 [btrfs]
       [<ffffffffa0578e33>] btrfs_raid_unplug+0x23/0x70 [btrfs]
       [<ffffffff812d36c2>] blk_flush_plug_list+0x82/0x1f0
       [<ffffffff812e0349>] blk_sq_make_request+0x1f9/0x740
       [<ffffffff812ceba2>] ? generic_make_request_checks+0x222/0x7c0
       [<ffffffff812cf264>] ? blk_queue_enter+0x124/0x310
       [<ffffffff812cf1d2>] ? blk_queue_enter+0x92/0x310
       [<ffffffff812d0ae2>] generic_make_request+0x172/0x2c0
       [<ffffffff812d0ad4>] ? generic_make_request+0x164/0x2c0
       [<ffffffff812d0ca0>] submit_bio+0x70/0x140
       [<ffffffffa0577b29>] ? rbio_add_io_page+0x99/0x150 [btrfs]
       [<ffffffffa0578a89>] finish_rmw+0x4d9/0x600 [btrfs]
       [<ffffffffa0578c4c>] full_stripe_write+0x9c/0xc0 [btrfs]
       [<ffffffffa057ab7f>] raid56_parity_write+0xef/0x160 [btrfs]
       [<ffffffffa052bd83>] btrfs_map_bio+0xe3/0x2d0 [btrfs]
       [<ffffffffa04fbd6d>] btrfs_submit_bio_hook+0x8d/0x1d0 [btrfs]
       [<ffffffffa05173c4>] submit_one_bio+0x74/0xb0 [btrfs]
       [<ffffffffa0517f55>] submit_extent_page+0xe5/0x1c0 [btrfs]
       [<ffffffffa0519b18>] __extent_writepage_io+0x408/0x4c0 [btrfs]
       [<ffffffffa05179c0>] ? alloc_dummy_extent_buffer+0x140/0x140 [btrfs]
       [<ffffffffa051dc88>] __extent_writepage+0x218/0x3a0 [btrfs]
       [<ffffffff810b7ed8>] ? mark_held_locks+0x78/0xa0
       [<ffffffffa051e2c9>] extent_write_cache_pages.clone.0+0x2f9/0x400 [btrfs]
       [<ffffffffa051e422>] extent_writepages+0x52/0x70 [btrfs]
       [<ffffffffa05001f0>] ? btrfs_set_inode_index+0x70/0x70 [btrfs]
       [<ffffffffa04fcc17>] btrfs_writepages+0x27/0x30 [btrfs]
       [<ffffffff81184df3>] do_writepages+0x23/0x40
       [<ffffffff81212229>] __writeback_single_inode+0x89/0x4d0
       [<ffffffff81212a60>] ? writeback_sb_inodes+0x260/0x480
       [<ffffffff81212a60>] ? writeback_sb_inodes+0x260/0x480
       [<ffffffff8121295f>] ? writeback_sb_inodes+0x15f/0x480
       [<ffffffff81212ad2>] writeback_sb_inodes+0x2d2/0x480
       [<ffffffff810b1397>] ? down_read_trylock+0x57/0x60
       [<ffffffff811e6805>] ? trylock_super+0x25/0x60
       [<ffffffff810d629f>] ? rcu_read_lock_sched_held+0x4f/0x90
       [<ffffffff81212d0c>] __writeback_inodes_wb+0x8c/0xc0
       [<ffffffff812130b5>] wb_writeback+0x2b5/0x500
       [<ffffffff810b7ed8>] ? mark_held_locks+0x78/0xa0
       [<ffffffff810660a8>] ? __local_bh_enable_ip+0x68/0xc0
       [<ffffffff81213362>] ? wb_do_writeback+0x62/0x310
       [<ffffffff812133c1>] wb_do_writeback+0xc1/0x310
       [<ffffffff8107c3d9>] ? set_worker_desc+0x79/0x90
       [<ffffffff81213842>] wb_workfn+0x92/0x330
       [<ffffffff8107f133>] process_one_work+0x223/0x730
       [<ffffffff8107f083>] ? process_one_work+0x173/0x730
       [<ffffffff8108035f>] ? worker_thread+0x18f/0x430
       [<ffffffff810802ed>] worker_thread+0x11d/0x430
       [<ffffffff810801d0>] ? maybe_create_worker+0xf0/0xf0
       [<ffffffff810801d0>] ? maybe_create_worker+0xf0/0xf0
       [<ffffffff810858df>] kthread+0xef/0x110
       [<ffffffff8108f74e>] ? schedule_tail+0x1e/0xd0
       [<ffffffff810857f0>] ? __init_kthread_worker+0x70/0x70
       [<ffffffff816673bf>] ret_from_fork+0x3f/0x70
       [<ffffffff810857f0>] ? __init_kthread_worker+0x70/0x70
      
      The issue is that we've got the software queue context pinned while
      calling blk_flush_plug_list(), which flushes callbacks that
      are allowed to sleep. btrfs and raid have such callbacks.

      Flip the checks around a bit, so we can enable preemption a bit
      earlier and flush plugs without preemption disabled.
      
      This only affects blk-mq driven devices, and only those that
      register a single queue.
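
      A hedged sketch of the ordering change (condensed, not the literal diff):
      drop the pinned software queue context, which re-enables preemption, before
      running the plug callbacks.

      /* Hedged sketch of the ordering; the surrounding function is condensed. */
      static void sketch_issue_plugged(struct request *rq, struct blk_plug *plug)
      {
              /* Previously the software queue context (taken via get_cpu() in
               * blk_mq_get_ctx()) was still held at this point, so the unplug
               * callbacks ran with preemption disabled and could not sleep. */
              blk_mq_put_ctx(rq->mq_ctx);               /* re-enable preemption */

              if (plug)
                      blk_flush_plug_list(plug, false); /* callbacks (e.g. the
                                                         * btrfs raid56 unplug)
                                                         * may now sleep safely */
      }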
      Reported-by: Liu Bo <bo.li.liu@oracle.com>
      Tested-by: Liu Bo <bo.li.liu@oracle.com>
      Cc: stable@kernel.org
      Signed-off-by: Jens Axboe <axboe@fb.com>
5. 20 November 2015 (2 commits)
    • mac: validate mac_partition is within sector · 02e2a5bf
      Committed by Kees Cook
      If md->signature == MAC_DRIVER_MAGIC and md->block_size == 1023, a single
      512-byte sector would be read (secsize / 512). However, the partition
      structure would be located past the end of that buffer (at offset secsize % 512).
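
      A hedged sketch of the kind of bounds check the fix adds (a fragment;
      variable names follow the description above rather than the exact patch):

      /* Only round_down(secsize, 512) bytes of the sector are actually read,
       * so reject partition entries that would start or end past that. */
      unsigned int datasize   = round_down(secsize, 512);
      unsigned int partoffset = secsize % 512;

      if (partoffset + sizeof(struct mac_partition) > datasize)
              return -1;      /* entry would lie beyond the buffer we read */
      part = (struct mac_partition *)(data + partoffset);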
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • block: protect rw_page against device teardown · 2e6edc95
      Committed by Dan Williams
      Fix use-after-free crashes like the following:
      
       general protection fault: 0000 [#1] SMP
       Call Trace:
        [<ffffffffa0050216>] ? pmem_do_bvec.isra.12+0xa6/0xf0 [nd_pmem]
        [<ffffffffa0050ba2>] pmem_rw_page+0x42/0x80 [nd_pmem]
        [<ffffffff8128fd90>] bdev_read_page+0x50/0x60
        [<ffffffff812972f0>] do_mpage_readpage+0x510/0x770
        [<ffffffff8128fd20>] ? I_BDEV+0x20/0x20
        [<ffffffff811d86dc>] ? lru_cache_add+0x1c/0x50
        [<ffffffff81297657>] mpage_readpages+0x107/0x170
        [<ffffffff8128fd20>] ? I_BDEV+0x20/0x20
        [<ffffffff8128fd20>] ? I_BDEV+0x20/0x20
        [<ffffffff8129058d>] blkdev_readpages+0x1d/0x20
        [<ffffffff811d615f>] __do_page_cache_readahead+0x28f/0x310
        [<ffffffff811d6039>] ? __do_page_cache_readahead+0x169/0x310
        [<ffffffff811c5abd>] ? pagecache_get_page+0x2d/0x1d0
        [<ffffffff811c76f6>] filemap_fault+0x396/0x530
        [<ffffffff811f816e>] __do_fault+0x4e/0xf0
        [<ffffffff811fce7d>] handle_mm_fault+0x11bd/0x1b50
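
       The commit title describes the shape of the fix; a hedged sketch of that
       pattern (assuming blk_queue_enter()/blk_queue_exit() as the guard and
       simplifying the real bdev_read_page()):

       /* Hedged sketch: hold a queue reference across ->rw_page so the driver
        * cannot be torn down while the page I/O is in flight. */
       int sketch_bdev_read_page(struct block_device *bdev, sector_t sector,
                                 struct page *page)
       {
               const struct block_device_operations *ops = bdev->bd_disk->fops;
               int result;

               if (!ops->rw_page || bdev_get_integrity(bdev))
                       return -EOPNOTSUPP;

               result = blk_queue_enter(bdev->bd_queue, GFP_NOIO);
               if (result)
                       return result;  /* queue is dying or being removed */

               result = ops->rw_page(bdev, sector + get_start_sect(bdev),
                                     page, READ);
               blk_queue_exit(bdev->bd_queue);
               return result;
       }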
      
      Cc: <stable@vger.kernel.org>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Reported-by: kbuild test robot <lkp@intel.com>
      Acked-by: Matthew Wilcox <willy@linux.intel.com>
      [willy: symmetry fixups]
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
6. 17 November 2015 (2 commits)
    • blk-flush: Queue through IO scheduler when flush not required · 1b2ff19e
      Committed by Jan Kara
      Currently blk_insert_flush() just adds the flush request to q->queue_head
      when no flush is required. That completely bypasses the IO scheduler, so
      e.g. CFQ can be idling, waiting for a new request to arrive, and will idle
      through the whole window unnecessarily. Luckily this only happens in
      rare cases, as the checks in generic_make_request_checks() usually clear the
      FLUSH and FUA flags early if they are not needed.
      
      When no flushing is actually required, we can easily fix the problem by
      properly queueing the request through the IO scheduler. Ideally the IO
      scheduler should also be made aware of requests queued via
      blk_flush_queue_rq(). However, inserting a flush request through the IO
      scheduler can have unwanted side effects: because of flush batching,
      delaying the flush request in the IO scheduler would delay all flush
      requests, possibly including those coming from other processes. So we keep
      adding those requests directly to q->queue_head.
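
      A hedged sketch of the branch this changes in blk_insert_flush() for the
      legacy (non-mq) path; the exact elevator insertion type used by the patch
      is an assumption here:

      /* Request carries data but needs neither a pre- nor a post-flush:
       * queue it like any other request instead of bypassing the scheduler. */
      if ((policy & REQ_FSEQ_DATA) &&
          !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
              if (q->mq_ops)
                      blk_mq_insert_request(rq, false, false, true);
              else
                      /* was: list_add_tail(&rq->queuelist, &q->queue_head); */
                      __elv_add_request(q, rq, ELEVATOR_INSERT_SORT);
              return;
      }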
      Signed-off-by: Jan Kara <jack@suse.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • elevator: use list_{first,prev,next}_entry · 4736346b
      Committed by Geliang Tang
      To make the intention clearer, use list_{first,prev,next}_entry
      instead of list_entry.
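
      A representative before/after pair of the substitution (illustrative, not
      the full diff):

      /* before: open-coded list_entry() on the ->next / ->prev pointers */
      rq_first = list_entry(q->queue_head.next, struct request, queuelist);
      rq_prev  = list_entry(rq->queuelist.prev, struct request, queuelist);

      /* after: the helpers state the intent directly */
      rq_first = list_first_entry(&q->queue_head, struct request, queuelist);
      rq_prev  = list_prev_entry(rq, queuelist);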
      Signed-off-by: Geliang Tang <geliangtang@163.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
7. 12 November 2015 (2 commits)
8. 08 November 2015 (3 commits)
9. 07 November 2015 (3 commits)
    • pidns: fix set/getpriority and ioprio_set/get in PRIO_USER mode · 8639b461
      Committed by Ben Segall
      setpriority(PRIO_USER, 0, x) will change the priority of tasks outside of
      the current pid namespace.  This is in contrast to both the other modes of
      setpriority and the example of kill(-1).  Fix this.  getpriority and
      ioprio have the same failure mode, fix them too.
      
      Eric said:
      
      : After some more thinking about it this patch sounds justifiable.
      :
      : My goal with namespaces is not to build perfect isolation mechanisms
      : as that can get into ill defined territory, but to build well defined
      : mechanisms.  And to handle the corner cases so you can use only
      : a single namespace with well defined results.
      :
      : In this case you have found the two interfaces I am aware of that
      : identify processes by uid instead of by pid.  Which quite frankly is
      : weird.  Unfortunately the weird unexpected cases are hard to handle
      : in the usual way.
      :
      : I was hoping for a little more information.  Changes like this one we
      : have to be careful of because someone might be depending on the current
      : behavior.  I don't think they are and I do think this makes sense as part
      : of the pid namespace.
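
      A hedged sketch of the PRIO_USER branch with the namespace guard; the
      task_pid_vnr() test reflects my reading of the fix (tasks from other pid
      namespaces have no pid number in the caller's namespace, so task_pid_vnr()
      returns 0 for them):

      /* Hedged, simplified sketch of the setpriority(PRIO_USER, ...) loop. */
      case PRIO_USER:
              uid = make_kuid(cred->user_ns, who);
              if (!who)
                      uid = cred->uid;        /* who == 0 means "current user" */
              do_each_thread(g, p) {
                      /* Only touch tasks visible in the caller's pid namespace;
                       * task_pid_vnr() is 0 for tasks outside it. */
                      if (uid_eq(task_uid(p), uid) && task_pid_vnr(p))
                              error = set_one_prio(p, niceval, error);
              } while_each_thread(g, p);
              break;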
      Signed-off-by: Ben Segall <bsegall@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Ambrose Feinstein <ambrose@google.com>
      Acked-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
    • mm, page_alloc: rename __GFP_WAIT to __GFP_RECLAIM · 71baba4b
      Committed by Mel Gorman
      Clearing __GFP_WAIT was how a caller signalled that it was in atomic context
      and could not sleep.  Now it is possible to distinguish between true atomic
      context and callers that are not willing to sleep.  The latter should
      clear __GFP_DIRECT_RECLAIM so kswapd will still wake.  As clearing
      __GFP_WAIT behaves differently, there is a risk that people will clear the
      wrong flags.  This patch renames __GFP_WAIT to __GFP_RECLAIM to clearly
      indicate what it does: setting it allows all reclaim activity, and clearing
      it prevents it.
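
      For reference, the renamed flag ends up as the union of the two new reclaim
      bits (as I read the resulting include/linux/gfp.h; shown here as a sketch):

      /* Setting __GFP_RECLAIM allows all reclaim activity; clearing it forbids
       * both direct reclaim and waking kswapd. */
      #define __GFP_RECLAIM \
              ((__force gfp_t)(___GFP_DIRECT_RECLAIM | ___GFP_KSWAPD_RECLAIM))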
      
      [akpm@linux-foundation.org: fix build]
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd · d0164adc
      Committed by Mel Gorman
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
      have access to one of two watermarks lower than "min" which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
      were available.  Some have abused __GFP_WAIT leading to a situation where
      an optimistic allocation with a fallback option can access atomic
      reserves.
      
      This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
      are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
      callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
      redefined to mean that the caller is willing both to enter direct reclaim
      and to wake kswapd for background reclaim.
      
      This patch then converts a number of sites:
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
      o Callers that are checking if they are non-blocking should use the
        helper gfpflags_allow_blocking() where possible (see the sketch after
        this list). This is because checking for __GFP_WAIT, as was done
        historically, can now trigger false positives. Some exceptions like
        dm-crypt.c exist where the code intent is clearer if __GFP_DIRECT_RECLAIM
        is used instead of the helper due to flag manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
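
      A small sketch of the helper mentioned above and a few of the resulting
      flag combinations, as I understand the series (the authoritative
      definitions live in include/linux/gfp.h):

      /* Can this allocation context block, i.e. enter direct reclaim? */
      static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
      {
              return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
      }

      /* Representative combinations after the split (sketch): */
      #define GFP_ATOMIC      (__GFP_HIGH | __GFP_ATOMIC | __GFP_KSWAPD_RECLAIM)
      #define GFP_KERNEL      (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
      #define GFP_NOWAIT      (__GFP_KSWAPD_RECLAIM)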
      
      The first key hazard to watch out for is callers that removed __GFP_WAIT
      and were depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
      now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
      if it's missed in most cases as other activity will wake kswapd.
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
10. 03 November 2015 (1 commit)
    • blk-mq: avoid excessive boot delays with large lun counts · 2404e607
      Committed by Jeff Moyer
      Zhangqing Luo reported long boot times on a system with thousands of
      LUNs when scsi-mq was enabled.  He narrowed the problem down to
      blk_mq_add_queue_tag_set, where every queue is frozen in order to set
      the BLK_MQ_F_TAG_SHARED flag.  Each added device will freeze all queues
      added before it in sequence, which involves waiting for an RCU grace
      period for each one.  We don't need to do this.  After the second queue
      is added, only new queues need to be initialized with the shared tag.
      We can do that by percolating the flag up to the blk_mq_tag_set, and
      updating the newly added queue's hctxs if the flag is set.
      
      This problem was introduced by commit 0d2602ca (blk-mq: improve
      support for shared tags maps).
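
      A hedged sketch of the approach (field names match blk-mq of that era, the
      sketch_* helpers are hypothetical, and the control flow is condensed):

      static void sketch_add_queue_tag_set(struct blk_mq_tag_set *set,
                                           struct request_queue *q)
      {
              mutex_lock(&set->tag_list_lock);

              /* Going from one queue to two is the only point where the tag
               * map becomes shared; mark the set and update (freeze) only the
               * queues that already exist, instead of on every later add. */
              if (!list_empty(&set->tag_list) &&
                  !(set->flags & BLK_MQ_F_TAG_SHARED)) {
                      set->flags |= BLK_MQ_F_TAG_SHARED;
                      sketch_mark_existing_queues_shared(set);  /* hypothetical */
              }

              /* A queue added later simply inherits the flag from the set;
               * no need to freeze everything added before it. */
              if (set->flags & BLK_MQ_F_TAG_SHARED)
                      sketch_mark_queue_shared(q);              /* hypothetical */

              list_add_tail(&q->tag_set_list, &set->tag_list);
              mutex_unlock(&set->tag_list_lock);
      }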
      Reported-and-tested-by: Jason Luo <zhangqing.luo@oracle.com>
      Reviewed-by: Ming Lei <ming.lei@canonical.com>
      Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
11. 28 October 2015 (1 commit)
12. 22 October 2015 (15 commits)