1. 31 Jul 2014, 1 commit
    • perf evlist: Don't run workload if not told to · 5f1c4225
      Committed by Arnaldo Carvalho de Melo
      The perf_evlist__prepare_workload() method works by forking and then
      waiting on a file descriptor that must be written to before the
      workload is exec()ed.
      
      But if the tool calling it fails to, say, set up the events with which
      it wants to sample the workload, it will not call
      perf_evlist__start_workload(), but even in this case the workload ended
      up running:
      
        [acme@zoo linux]$ trace /bin/echo workload ends up running, it should not...
        Couldn't mmap the events: Operation not permitted
        workload ends up running, it should not...
        [acme@zoo linux]$
      
      So check if at least one byte was written before letting exec() be
      called.
      
      Now the expected behaviour:
      
        [acme@zoo linux]$ trace /bin/echo workload ends up running, it should not...
        Couldn't mmap the events: Operation not permitted
        [acme@zoo linux]$
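
      As an illustration of the handshake (a minimal standalone sketch, not
      the actual tools/perf code: prepare_workload(), start_workload() and
      go_pipe here are simplified stand-ins for the perf_evlist__* versions),
      the child blocks on a pipe and exec()s only if exactly one "go" byte
      was written; if the parent exits without starting the workload, the
      child sees EOF and bails out:

        /* sketch: fork a workload that runs only after an explicit "go" byte */
        #include <stdio.h>
        #include <unistd.h>

        static int go_pipe[2];          /* parent writes "go", child reads it */

        static pid_t prepare_workload(char **argv)
        {
                char bf;
                pid_t pid;

                if (pipe(go_pipe))
                        return -1;

                pid = fork();
                if (pid)                /* parent, or fork() failure */
                        return pid;

                /* child: wait for the go byte before exec()ing the workload */
                close(go_pipe[1]);
                if (read(go_pipe[0], &bf, 1) != 1)
                        _exit(1);       /* nothing written: workload never started */

                execvp(argv[0], argv);
                _exit(127);             /* exec() itself failed */
        }

        static void start_workload(void)
        {
                char bf = 0;

                if (write(go_pipe[1], &bf, 1) != 1)
                        perror("write");
                close(go_pipe[1]);
        }

        int main(int argc, char **argv)
        {
                if (argc < 2 || prepare_workload(&argv[1]) <= 0)
                        return 1;
                /*
                 * If event setup failed here we would simply return without
                 * calling start_workload(); the child would then read EOF
                 * and exit, so the workload would never run.
                 */
                start_workload();
                return 0;
        }
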
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/n/tip-oh1ixo8m74rf295a05gfjw8b@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  2. 30 Jul 2014, 2 commits
  3. 28 Jul 2014, 10 commits
  4. 27 Jul 2014, 2 commits
    • Fix gcc-4.9.0 miscompilation of load_balance() in scheduler · 2062afb4
      Committed by Linus Torvalds
      Michel Dänzer and a couple of other people reported inexplicable random
      oopses in the scheduler, and the cause turns out to be gcc mis-compiling
      the load_balance() function when debugging is enabled.  The gcc bug
      apparently goes back to gcc-4.5, but slight optimization changes mean
      that it now shows up as a problem in 4.9.0 and 4.9.1.
      
      The instruction scheduling problem causes gcc to schedule a spill
      operation to before the stack frame has been created, which in turn can
      corrupt the spilled value if an interrupt comes in.  There may be other
      effects of this bug too, but that's the code generation problem seen in
      Michel's case.
      
      This is fixed in current gcc HEAD, but the workaround as suggested by
      Markus Trippelsdorf is pretty simple: use -fno-var-tracking-assignments
      when compiling the kernel, which disables the gcc code that causes the
      problem.  This can result in slightly worse debug information for
      variable accesses, but that is infinitely preferable to actual code
      generation problems.
      
      Doing this unconditionally (not just for CONFIG_DEBUG_INFO) also allows
      non-debug builds to verify that the debug build would be identical: we
      can do
      
          export GCC_COMPARE_DEBUG=1
      
      to make gcc internally verify that the result of the build is
      independent of the "-g" flag (it will make the compiler build everything
      twice, toggling the debug flag, and compare the results).
      
      Without the "-fno-var-tracking-assignments" option, the build would fail
      (even with 4.8.3 that didn't show the actual stack frame bug) with a gcc
      compare failure.
      
      See also gcc bugzilla:
      
        https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61801
      Reported-by: Michel Dänzer <michel@daenzer.net>
      Suggested-by: Markus Trippelsdorf <markus@trippelsdorf.de>
      Cc: Jakub Jelinek <jakub@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: fix direct reclaim writeback regression · 8bdd6380
      Committed by Hugh Dickins
      Shortly before 3.16-rc1, Dave Jones reported:
      
        WARNING: CPU: 3 PID: 19721 at fs/xfs/xfs_aops.c:971
                 xfs_vm_writepage+0x5ce/0x630 [xfs]()
        CPU: 3 PID: 19721 Comm: trinity-c61 Not tainted 3.15.0+ #3
        Call Trace:
          xfs_vm_writepage+0x5ce/0x630 [xfs]
          shrink_page_list+0x8f9/0xb90
          shrink_inactive_list+0x253/0x510
          shrink_lruvec+0x563/0x6c0
          shrink_zone+0x3b/0x100
          shrink_zones+0x1f1/0x3c0
          try_to_free_pages+0x164/0x380
          __alloc_pages_nodemask+0x822/0xc90
          alloc_pages_vma+0xaf/0x1c0
          handle_mm_fault+0xa31/0xc50
        etc.
      
       970   if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
       971                   PF_MEMALLOC))
      
      I did not respond at the time, because a glance at the PageDirty block
      in shrink_page_list() quickly shows that this is impossible: we don't do
      writeback on file pages (other than tmpfs) from direct reclaim nowadays.
      Dave was hallucinating, but it would have been disrespectful to say so.
      
      However, my own /var/log/messages now shows similar complaints
      
        WARNING: CPU: 1 PID: 28814 at fs/ext4/inode.c:1881 ext4_writepage+0xa7/0x38b()
        WARNING: CPU: 0 PID: 27347 at fs/ext4/inode.c:1764 ext4_writepage+0xa7/0x38b()
      
      from stressing some mmotm trees during July.
      
      Could a dirty xfs or ext4 file page somehow get marked PageSwapBacked,
      so fail shrink_page_list()'s page_is_file_cache() test, and so proceed
      to mapping->a_ops->writepage()?
      
      Yes, 3.16-rc1's commit 68711a74 ("mm, migration: add destination
      page freeing callback") has provided such a way to compaction: if
      migrating a SwapBacked page fails, its newpage may be put back on the
      list for later use with PageSwapBacked still set, and nothing will clear
      it.
      
      Whether that can do anything worse than issue WARN_ON_ONCEs, and get
      some statistics wrong, is unclear: easier to fix than to think through
      the consequences.
      
      Fixing it here, before the put_new_page(), addresses the bug directly,
      but is probably the worst place to fix it.  Page migration is doing too
      many parts of the job on too many levels: fixing it in
      move_to_new_page() to complement its SetPageSwapBacked would be
      preferable, except why is it (and newpage->mapping and newpage->index)
      done there, rather than down in migrate_page_move_mapping(), once we are
      sure of success? Not a cleanup to get into right now, especially not
      with memcg cleanups coming in 3.17.
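
      A sketch of where the clearing lands (hedged: written in the style of
      mm/migrate.c's unmap_and_move() around v3.16, simplified rather than
      quoted from the actual patch): when migration did not succeed and the
      destination page goes back through the put_new_page() callback, drop
      any stale PageSwapBacked bit first so a recycled newpage cannot later
      pass for a swap-backed page:

        /* failure path after the migration attempt, simplified */
        if (rc != MIGRATEPAGE_SUCCESS && put_new_page) {
                /*
                 * A SwapBacked source page may have left PageSwapBacked set
                 * on newpage; clear it before handing the page back to the
                 * destination-freeing callback, otherwise a later reuse of
                 * newpage for a file page trips the writepage warnings
                 * quoted above.
                 */
                ClearPageSwapBacked(newpage);
                put_new_page(newpage, private);
        } else
                putback_lru_page(newpage);
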
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 26 Jul 2014, 13 commits
  6. 25 Jul 2014, 11 commits
  7. 24 Jul 2014, 1 commit