1. 21 Jun 2015: 1 commit
  2. 18 Jun 2015: 1 commit
    • irq: Add irq_set_chained_handler_and_data() · 3b0f95be
      Committed by Russell King
      Driver authors seem to get the ordering of irq_set_chained_handler()
      and irq_set_handler_data() wrong - ordering the former before the
      latter.  This opens a race window where, if there is an interrupt
      pending, the handler will be called between these two calls,
      potentially resulting in an oops.
      
      Provide a single interface to set both of these together, especially
      as that's commonly what is required.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Alexandre Courbot <gnurou@gmail.com>
      Cc: Hans Ulli Kroll <ulli.kroll@googlemail.com>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: Lee Jones <lee.jones@linaro.org>
      Cc: Linus Walleij <linus.walleij@linaro.org>
      Cc: Thierry Reding <thierry.reding@gmail.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Link: http://lkml.kernel.org/r/E1Z4yzs-0002Rw-4B@rmk-PC.arm.linux.org.uk
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      3b0f95be
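      A minimal sketch of the race and of the new helper, for illustration only; my_gpio_irq_handler, struct my_gpio_chip and parent_irq are hypothetical driver details, and the flow-handler prototype follows the kernels of this era.

      #include <linux/irq.h>

      struct my_gpio_chip;                                          /* hypothetical */
      static void my_gpio_irq_handler(unsigned int irq,
                                      struct irq_desc *desc);       /* hypothetical */

      static void my_gpio_setup_parent_irq(unsigned int parent_irq,
                                           struct my_gpio_chip *chip)
      {
              /*
               * Racy ordering the changelog warns about: if parent_irq is
               * already pending, my_gpio_irq_handler() may run between the
               * two calls and see handler data that has not been set yet:
               *
               *      irq_set_chained_handler(parent_irq, my_gpio_irq_handler);
               *      irq_set_handler_data(parent_irq, chip);
               *
               * The new helper installs the handler and its data together:
               */
              irq_set_chained_handler_and_data(parent_irq, my_gpio_irq_handler, chip);
      }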
  3. 12 Jun 2015: 4 commits
  4. 22 May 2015: 1 commit
  5. 20 May 2015: 1 commit
  6. 19 May 2015: 3 commits
    • genirq: Introduce irq_set_vcpu_affinity() to target an interrupt to a VCPU · 0a4377de
      Committed by Jiang Liu
      With Posted-Interrupts support in Intel CPUs and IOMMUs, an external
      interrupt from an assigned device can be delivered directly to a
      virtual CPU in a virtual machine. Instead of hacking the KVM and Intel
      IOMMU drivers, we propose a platform-independent interface to target
      an interrupt at a specific virtual CPU in a virtual machine, or to set
      the virtual CPU affinity for an interrupt.
      
      By adopting this new interface and the hierarchy irqdomain, we can
      easily support posted-interrupts on Intel platforms, and also provide
      interfaces flexible enough for other platforms to support similar
      features.
      
      Here is the usage scenario for this interface:
      Guest updates the MSI/MSI-X interrupt configuration
              --> QEMU and KVM handle this
              --> KVM calls this interface (passing the posted-interrupt
                  descriptor and guest vector)
              --> the irq core transfers control to the IOMMU
              --> the IOMMU does the real work of updating the IRTE (the IRTE
                  has a new format for VT-d Posted-Interrupts)
      Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
      Signed-off-by: Feng Wu <feng.wu@intel.com>
      Link: http://lkml.kernel.org/r/1432026437-16560-2-git-send-email-feng.wu@intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      0a4377de
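      A hedged sketch of the KVM-side step in the scenario above, assuming the x86 posted-interrupts series: struct vcpu_data (pi_desc_addr, vector) comes from asm/irq_remapping.h in that series, while host_irq, pi_desc and guest_vector are hypothetical caller-side values.

      #include <linux/interrupt.h>            /* irq_set_vcpu_affinity() */
      #include <asm/irq_remapping.h>          /* struct vcpu_data (x86) */

      static int post_irq_to_vcpu(unsigned int host_irq, void *pi_desc,
                                  u32 guest_vector)
      {
              struct vcpu_data vcpu_info = {
                      .pi_desc_addr = __pa(pi_desc),  /* posted-interrupt descriptor */
                      .vector       = guest_vector,   /* vector as seen by the guest */
              };

              /*
               * The irq core hands vcpu_info down the irqdomain hierarchy to
               * the IOMMU driver, which rewrites the IRTE into posted format.
               */
              return irq_set_vcpu_affinity(host_irq, &vcpu_info);
      }

      Later patches in the series use the same call with a NULL vcpu_info to revert the interrupt to ordinary remapped delivery.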
    • sched: always use blk_schedule_flush_plug in io_schedule_out · 10d784ea
      Committed by Shaohua Li
      Block plug callbacks can sleep, so we introduced a parameter
      'from_schedule' that the corresponding drivers can use to distinguish
      a schedule-time plug flush from a plug finish. Unfortunately,
      io_schedule_out still uses blk_flush_plug(). This causes the output
      below (note: I added a might_sleep() in raid1_unplug to make it
      trigger faster, but the problem exists regardless of the added
      might_sleep). In raid1/10, this can cause a deadlock.
      
      This patch makes io_schedule_out always use blk_schedule_flush_plug.
      This should only impact drivers (as far as I know, raid1/10) that are
      sensitive to the 'from_schedule' parameter.
      
      [  370.817949] ------------[ cut here ]------------
      [  370.817960] WARNING: CPU: 7 PID: 145 at ../kernel/sched/core.c:7306 __might_sleep+0x7f/0x90()
      [  370.817969] do not call blocking ops when !TASK_RUNNING; state=2 set at [<ffffffff81092fcf>] prepare_to_wait+0x2f/0x90
      [  370.817971] Modules linked in: raid1
      [  370.817976] CPU: 7 PID: 145 Comm: kworker/u16:9 Tainted: G        W       4.0.0+ #361
      [  370.817977] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140709_153802- 04/01/2014
      [  370.817983] Workqueue: writeback bdi_writeback_workfn (flush-9:1)
      [  370.817985]  ffffffff81cd83be ffff8800ba8cb298 ffffffff819dd7af 0000000000000001
      [  370.817988]  ffff8800ba8cb2e8 ffff8800ba8cb2d8 ffffffff81051afc ffff8800ba8cb2c8
      [  370.817990]  ffffffffa00061a8 000000000000041e 0000000000000000 ffff8800ba8cba28
      [  370.817993] Call Trace:
      [  370.817999]  [<ffffffff819dd7af>] dump_stack+0x4f/0x7b
      [  370.818002]  [<ffffffff81051afc>] warn_slowpath_common+0x8c/0xd0
      [  370.818004]  [<ffffffff81051b86>] warn_slowpath_fmt+0x46/0x50
      [  370.818006]  [<ffffffff81092fcf>] ? prepare_to_wait+0x2f/0x90
      [  370.818008]  [<ffffffff81092fcf>] ? prepare_to_wait+0x2f/0x90
      [  370.818010]  [<ffffffff810776ef>] __might_sleep+0x7f/0x90
      [  370.818014]  [<ffffffffa0000c03>] raid1_unplug+0xd3/0x170 [raid1]
      [  370.818024]  [<ffffffff81421d9a>] blk_flush_plug_list+0x8a/0x1e0
      [  370.818028]  [<ffffffff819e3550>] ? bit_wait+0x50/0x50
      [  370.818031]  [<ffffffff819e21b0>] io_schedule_timeout+0x130/0x140
      [  370.818033]  [<ffffffff819e3586>] bit_wait_io+0x36/0x50
      [  370.818034]  [<ffffffff819e31b5>] __wait_on_bit+0x65/0x90
      [  370.818041]  [<ffffffff8125b67c>] ? ext4_read_block_bitmap_nowait+0xbc/0x630
      [  370.818043]  [<ffffffff819e3550>] ? bit_wait+0x50/0x50
      [  370.818045]  [<ffffffff819e3302>] out_of_line_wait_on_bit+0x72/0x80
      [  370.818047]  [<ffffffff810935e0>] ? autoremove_wake_function+0x40/0x40
      [  370.818050]  [<ffffffff811de744>] __wait_on_buffer+0x44/0x50
      [  370.818053]  [<ffffffff8125ae80>] ext4_wait_block_bitmap+0xe0/0xf0
      [  370.818058]  [<ffffffff812975d6>] ext4_mb_init_cache+0x206/0x790
      [  370.818062]  [<ffffffff8114bc6c>] ? lru_cache_add+0x1c/0x50
      [  370.818064]  [<ffffffff81297c7e>] ext4_mb_init_group+0x11e/0x200
      [  370.818066]  [<ffffffff81298231>] ext4_mb_load_buddy+0x341/0x360
      [  370.818068]  [<ffffffff8129a1a3>] ext4_mb_find_by_goal+0x93/0x2f0
      [  370.818070]  [<ffffffff81295b54>] ? ext4_mb_normalize_request+0x1e4/0x5b0
      [  370.818072]  [<ffffffff8129ab67>] ext4_mb_regular_allocator+0x67/0x460
      [  370.818074]  [<ffffffff81295b54>] ? ext4_mb_normalize_request+0x1e4/0x5b0
      [  370.818076]  [<ffffffff8129ca4b>] ext4_mb_new_blocks+0x4cb/0x620
      [  370.818079]  [<ffffffff81290956>] ext4_ext_map_blocks+0x4c6/0x14d0
      [  370.818081]  [<ffffffff812a4d4e>] ? ext4_es_lookup_extent+0x4e/0x290
      [  370.818085]  [<ffffffff8126399d>] ext4_map_blocks+0x14d/0x4f0
      [  370.818088]  [<ffffffff81266fbd>] ext4_writepages+0x76d/0xe50
      [  370.818094]  [<ffffffff81149691>] do_writepages+0x21/0x50
      [  370.818097]  [<ffffffff811d5c00>] __writeback_single_inode+0x60/0x490
      [  370.818099]  [<ffffffff811d630a>] writeback_sb_inodes+0x2da/0x590
      [  370.818103]  [<ffffffff811abf4b>] ? trylock_super+0x1b/0x50
      [  370.818105]  [<ffffffff811abf4b>] ? trylock_super+0x1b/0x50
      [  370.818107]  [<ffffffff811d665f>] __writeback_inodes_wb+0x9f/0xd0
      [  370.818109]  [<ffffffff811d69db>] wb_writeback+0x34b/0x3c0
      [  370.818111]  [<ffffffff811d70df>] bdi_writeback_workfn+0x23f/0x550
      [  370.818116]  [<ffffffff8106bbd8>] process_one_work+0x1c8/0x570
      [  370.818117]  [<ffffffff8106bb5b>] ? process_one_work+0x14b/0x570
      [  370.818119]  [<ffffffff8106c09b>] worker_thread+0x11b/0x470
      [  370.818121]  [<ffffffff8106bf80>] ? process_one_work+0x570/0x570
      [  370.818124]  [<ffffffff81071868>] kthread+0xf8/0x110
      [  370.818126]  [<ffffffff81071770>] ? kthread_create_on_node+0x210/0x210
      [  370.818129]  [<ffffffff819e9322>] ret_from_fork+0x42/0x70
      [  370.818131]  [<ffffffff81071770>] ? kthread_create_on_node+0x210/0x210
      [  370.818132] ---[ end trace 7b4deb71e68b6605 ]---
      
      V2: don't change ->in_iowait
      
      Cc: NeilBrown <neilb@suse.de>
      Signed-off-by: Shaohua Li <shli@fb.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      10d784ea
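      For context, a sketch of the two plug-flush helpers as they look in this era's include/linux/blkdev.h; the only difference is the from_schedule flag passed to blk_flush_plug_list(), which tells unplug callbacks such as raid1_unplug() that they must not sleep.

      /* Sketch of the helpers from include/linux/blkdev.h of this era. */
      static inline void blk_flush_plug(struct task_struct *tsk)
      {
              struct blk_plug *plug = tsk->plug;

              if (plug)
                      blk_flush_plug_list(plug, false);  /* callbacks may sleep */
      }

      static inline void blk_schedule_flush_plug(struct task_struct *tsk)
      {
              struct blk_plug *plug = tsk->plug;

              if (plug)
                      blk_flush_plug_list(plug, true);   /* from_schedule: no sleeping */
      }

      The patch switches the io_schedule path from the first helper to the second, so the plug is flushed with from_schedule = true while the task is no longer in TASK_RUNNING.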
    • watchdog: Fix merge 'conflict' · ab992dc3
      Committed by Peter Zijlstra
      Two watchdog changes that came through different trees had a
      non-conflicting conflict: one changed the semantics of a variable,
      but no actual code conflict happened. So the merge appeared fine,
      but the resulting code did not behave as expected.
      
      Commit 195daf66 ("watchdog: enable the new user interface of the
      watchdog mechanism") changes the semantics of watchdog_user_enabled,
      which thereafter is only used by the functions introduced by
      b3738d29 ("watchdog: Add watchdog enable/disable all functions").
      
      There further appears to be a distinct lack of serialization between
      setting and using watchdog_enabled, so perhaps we should wrap the
      {en,dis}able_all() things in watchdog_proc_mutex.
      
      This patch fixes an s2r failure reported by Michal, which I cannot
      readily explain. But it does make the code internally consistent
      again.
      Reported-and-tested-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ab992dc3
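      Purely to illustrate the serialization the changelog suggests (this is not the actual diff), a sketch of keying the enable-all helper off the watchdog_enabled bits and wrapping it in watchdog_proc_mutex; the helper and variable names follow kernel/watchdog.c of this era.

      /* Sketch only; illustrates the changelog's suggestion, not the real patch. */
      void watchdog_nmi_enable_all(void)
      {
              int cpu;

              mutex_lock(&watchdog_proc_mutex);
              if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
                      goto unlock;

              get_online_cpus();
              for_each_online_cpu(cpu)
                      watchdog_nmi_enable(cpu);
              put_online_cpus();
      unlock:
              mutex_unlock(&watchdog_proc_mutex);
      }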
  7. 18 May 2015: 4 commits
  8. 13 May 2015: 1 commit
  9. 08 May 2015: 3 commits
  10. 07 May 2015: 1 commit
  11. 05 May 2015: 3 commits
  12. 01 May 2015: 1 commit
  13. 29 Apr 2015: 1 commit
  14. 28 Apr 2015: 1 commit
  15. 27 Apr 2015: 1 commit
  16. 25 Apr 2015: 2 commits
  17. 23 Apr 2015: 1 commit
    • kexec: allocate the kexec control page with KEXEC_CONTROL_MEMORY_GFP · 7e01b5ac
      Committed by Martin Schwidefsky
      Introduce KEXEC_CONTROL_MEMORY_GFP to allow the architecture code
      to override the gfp flags of the allocation for the kexec control
      page. The loop in kimage_alloc_normal_control_pages allocates pages
      with GFP_KERNEL until a page is found that happens to have an
      address smaller than the KEXEC_CONTROL_MEMORY_LIMIT. On systems
      with a large memory size but a small KEXEC_CONTROL_MEMORY_LIMIT,
      the loop will keep allocating memory until the OOM killer steps in.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      7e01b5ac
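      A sketch of the shape of the change, assuming the usual pattern of an overridable define in include/linux/kexec.h and an architecture (such as s390) picking a more restrictive zone in its asm/kexec.h; the exact override value shown is illustrative.

      /* include/linux/kexec.h: default unless the architecture overrides it;
       * an arch with a low KEXEC_CONTROL_MEMORY_LIMIT (e.g. s390) can define
       * this to GFP_DMA in its asm/kexec.h instead. */
      #ifndef KEXEC_CONTROL_MEMORY_GFP
      #define KEXEC_CONTROL_MEMORY_GFP GFP_KERNEL
      #endif

      /* kernel/kexec.c, kimage_alloc_normal_control_pages(), abridged: the
       * allocation in the search loop now honours the per-arch gfp flags, so
       * it is no longer stuck retrying GFP_KERNEL pages that can never
       * satisfy the limit. */
      pages = kimage_alloc_pages(KEXEC_CONTROL_MEMORY_GFP, order);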
  18. 20 Apr 2015: 1 commit
  19. 17 Apr 2015: 9 commits