1. 04 Sep, 2015 (1 commit)
  2. 03 Sep, 2015 (1 commit)
    • perf tools: Store the cpu socket and core ids in the perf.data header · 2bb00d2f
      Committed by Kan Liang
      This patch stores the cpu socket_id and core_id in a perf.data header,
      and reads them into the perf_env struct when processing perf.data files.
      
      The change modifies the CPU_TOPOLOGY section, making sure it is
      backward/forward compatible.
      
      The patch checks the section size before reading the core and socket ids.
      
      It never reads data crossing the section boundary. An old perf binary
      without this patch can also correctly read a perf.data file produced by
      a new perf with this patch.
      
      Because the new info is added at the end of the cpu_topology section, an
      old perf tool ignores the extra data.
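
      The compatibility rule above boils down to a size-bounded read. Here is a
      minimal sketch of that idea, with a made-up struct and helper name; the
      real code lives in the perf header read/write routines and may differ:

        #include <stdint.h>
        #include <sys/types.h>
        #include <unistd.h>

        /* Illustrative only: two per-cpu ids appended at the end of the
         * CPU_TOPOLOGY section. */
        struct cpu_ids { uint32_t core_id; uint32_t socket_id; };

        static int read_cpu_ids(int fd, off_t offset, off_t section_end,
                                struct cpu_ids *ids)
        {
                /* Old perf.data files end before the new fields: report
                 * "not available" instead of reading past the section. */
                if (offset + (off_t)sizeof(*ids) > section_end)
                        return -1;
                if (pread(fd, ids, sizeof(*ids), offset) != (ssize_t)sizeof(*ids))
                        return -1;
                return 0;
        }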
      
      Examples:
      
      1. New perf with this patch reads perf.data from an old perf without the
         patch:
      
        $ perf_new report -i perf_old.data --header-only -I
        ......
        # sibling threads : 33
        # sibling threads : 34
        # sibling threads : 35
        # Core ID and Socket ID information is not available
        # node0 meminfo  : total = 32823872 kB, free = 29315548 kB
        # node0 cpu list : 0-17,36-53
        ......
      
      2. Old perf without the patch reads perf.data from a new perf with the
         patch:
      
        $ perf_old report -i perf_new.data --header-only -I
        ......
        # sibling threads : 33
        # sibling threads : 34
        # sibling threads : 35
        # node0 meminfo  : total = 32823872 kB, free = 29190932 kB
        # node0 cpu list : 0-17,36-53
        ......
      
      3. New perf reads new perf.data:
      
        $ perf_new report -i perf_new.data --header-only -I
        ......
        # sibling threads : 33
        # sibling threads : 34
        # sibling threads : 35
        # CPU 0: Core ID 0, Socket ID 0
        # CPU 1: Core ID 1, Socket ID 0
        ......
        # CPU 61: Core ID 10, Socket ID 1
        # CPU 62: Core ID 11, Socket ID 1
        # CPU 63: Core ID 16, Socket ID 1
        # node0 meminfo  : total = 32823872 kB, free = 29190932 kB
        # node0 cpu list : 0-17,36-53
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Link: http://lkml.kernel.org/r/1441115893-22006-2-git-send-email-kan.liang@intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  3. 29 Aug, 2015 (1 commit)
  4. 07 Aug, 2015 (1 commit)
  5. 29 Jul, 2015 (1 commit)
    • perf session env: Rename exit method · 4c7de49a
      Committed by Arnaldo Carvalho de Melo
      The semantics associated in tools/perf/ with foo__delete(instance) are to
      release all resources referenced by 'instance' members and then release
      the memory of 'instance' itself.
      
      The perf_session_env__delete() function isn't doing this: it only does
      the first part, but the space used by 'instance' itself isn't freed, as
      it is embedded in a larger structure that will be freed at a later stage.
      
      For these cases we use foo__exit(), i.e. the usage is:
      
       void foo__delete(foo)
       {
               if (foo) {
                       foo__exit(foo);
                       free(foo);
               }
       }
      
      But when we have something like:
      
       struct bar {
               struct foo foo;
               . . .
       }
      
      Then we can't really call foo__delete(&bar.foo), we must have this
      instead:
      
       void bar__exit(bar)
       {
               foo__exit(&bar.foo);
               /* free other bar-> resources */
       }
      
       void bar__delete(bar)
       {
                if (bar) {
                        bar__exit(bar);
                        free(bar);
                }
       }
      
      So just rename perf_session_env__delete() to perf_session_env__exit().
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/n/tip-djbgpcfo5udqptx3q0flwtmk@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  6. 24 Jul, 2015 (1 commit)
  7. 22 Jul, 2015 (1 commit)
  8. 26 Jun, 2015 (1 commit)
  9. 24 Jun, 2015 (2 commits)
  10. 20 Jun, 2015 (3 commits)
  11. 18 Jun, 2015 (1 commit)
    • perf tools: Fix a problem when opening old perf.data with different byte order · b30b6172
      Committed by Wang Nan
      The following error occurs when trying to use 'perf report' on x86_64 to
      cross-analyze a perf.data file generated by an old perf on a big-endian
      machine:
      
       # perf report
       *** Error in `/home/w00229757/perf': free(): invalid next size (fast): 0x00000000032c99f0 ***
       ======= Backtrace: =========
       /lib64/libc.so.6(+0x6eeef)[0x7ff6ff7e2eef]
       /lib64/libc.so.6(+0x78cae)[0x7ff6ff7eccae]
       /lib64/libc.so.6(+0x79987)[0x7ff6ff7ed987]
       /path/to/perf[0x4ac734]
       /path/to/perf[0x4ac829]
       /path/to/perf(perf_header__process_sections+0x129)[0x4ad2c9]
       /path/to/perf(perf_session__read_header+0x2e1)[0x4ad9e1]
       /path/to/perf(perf_session__new+0x168)[0x4bd458]
       /path/to/perf(cmd_report+0xfa0)[0x43eb70]
       /path/to/perf[0x47adc3]
       /path/to/perf(main+0x5f6)[0x42fd06]
       /lib64/libc.so.6(__libc_start_main+0xf5)[0x7ff6ff795bd5]
       /path/to/perf[0x42fe35]
       ======= Memory map: ========
       [SNIP]
      
      The bug is in perf_event__attr_swap(): it swaps all fields in 'struct
      perf_event_attr' without checking whether the swapped fields exist or
      not. In addition, read_event_desc() allocates memory for the attr
      according to the size read from the perf.data file.
      
      Therefore, if the perf.data file was collected by an old perf (without
      aux_watermark, for example), perf_event__attr_swap() destroys malloc's
      metadata when it swaps attr->aux_watermark.
      
      This patch introduces boundary checking in perf_event__attr_swap(): it
      adds the bswap_field_64 and bswap_field_32 macros so that only fields
      that actually exist in the recorded attr are swapped.
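
      The size-bounded swap can be sketched like this; the macro below is an
      illustration written for this note, not necessarily the exact
      bswap_field_32/bswap_field_64 definitions from the patch:

        #include <byteswap.h>
        #include <stddef.h>
        #include <linux/perf_event.h>

        /* Only swap a field if the recorded attr->size says the old perf
         * actually wrote it. */
        #define bswap_field(attr, f, sz)                                          \
                do {                                                              \
                        if ((attr)->size >= offsetof(struct perf_event_attr, f) + \
                                            sizeof((attr)->f))                    \
                                (attr)->f = bswap_##sz((attr)->f);                \
                } while (0)

        static void attr_swap_bounded(struct perf_event_attr *attr)
        {
                attr->type = bswap_32(attr->type);
                attr->size = bswap_32(attr->size);    /* swap size first */
                bswap_field(attr, sample_period, 64);
                bswap_field(attr, aux_watermark, 32); /* absent in old files */
        }
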
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Zefan Li <lizefan@huawei.com>
      Cc: pi3orama@163.com
      Link: http://lkml.kernel.org/r/1434534999-85347-1-git-send-email-wangnan0@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  12. 11 Jun, 2015 (1 commit)
    • perf tools: Fix build failure on 32-bit arch · 6ba29c2f
      Committed by He Kuang
      The build failed on 32-bit arches like this:
      
          CC       /opt/h00206996/output/perf/arm32/builtin-record.o
        util/session.c: In function ‘perf_session__warn_about_errors’:
        util/session.c:1304:9: error: format ‘%lu’ expects argument of type ‘long unsigned int’,
                               but argument 2 has type ‘long long unsigned int’ [-Werror=format=]
      
        builtin-report.c: In function ‘perf_evlist__tty_browse_hists’:
        builtin-report.c:323:2: error: format ‘%lu’ expects argument of type ‘long unsigned int’,
                                but argument 3 has type ‘u64’ [-Werror=format=]
      
      Replace the %lu format strings in the warning messages with PRIu64 for
      the u64 'total_lost_samples' to fix this problem.
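
      A minimal standalone sketch of the portable format pattern (the message
      text here is made up for the example):

        #include <inttypes.h>
        #include <stdio.h>

        int main(void)
        {
                uint64_t total_lost_samples = 1629;

                /* PRIu64 expands to the right conversion specifier on both
                 * 32-bit and 64-bit targets. */
                printf("lost %" PRIu64 " samples\n", total_lost_samples);
                return 0;
        }
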
      Signed-off-by: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lkml.kernel.org/r/1434026664-71642-1-git-send-email-hekuang@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  13. 07 Jun, 2015 (1 commit)
    • perf tools: handle PERF_RECORD_LOST_SAMPLES · c4937a91
      Committed by Kan Liang
      This patch modifies the perf tool to handle the new RECORD type,
      PERF_RECORD_LOST_SAMPLES.
      
      The number of lost-sample events is stored in
      .nr_events[PERF_RECORD_LOST_SAMPLES]. The exact number of samples
      which the kernel dropped is stored in total_lost_samples.
      
      When the percentage of dropped samples is greater than 5%, a warning
      is printed.
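
      A hedged sketch of that 5% check (the struct and function are
      illustrative stand-ins, not the actual perf stats code):

        #include <inttypes.h>
        #include <stdio.h>

        struct sample_stats {
                uint64_t nr_samples;         /* SAMPLE events processed */
                uint64_t total_lost_samples; /* samples the kernel dropped */
        };

        static void warn_about_lost_samples(const struct sample_stats *st)
        {
                uint64_t total = st->nr_samples + st->total_lost_samples;
                double drop_rate;

                if (!total)
                        return;
                drop_rate = (double)st->total_lost_samples / (double)total;
                if (drop_rate > 0.05)
                        fprintf(stderr,
                                "Warning: Processed %" PRIu64 " samples and lost %3.2f%% samples!\n",
                                st->nr_samples, drop_rate * 100.0);
        }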
      
      Here are some examples:
      
      Example 1: Recording different frequently-occurring events is safe with
            the patch. Only a very low drop rate is associated with such use.
      
      $ perf record -e '{cycles:p,instructions:p}' -c 20003 --no-time ~/tchain ~/tchain
      
      $ perf report -D | tail
                SAMPLE events:     120243
                 MMAP2 events:          5
          LOST_SAMPLES events:         24
        FINISHED_ROUND events:         15
      cycles:p stats:
                 TOTAL events:      59348
                SAMPLE events:      59348
      instructions:p stats:
                 TOTAL events:      60895
                SAMPLE events:      60895
      
      $ perf report --stdio --group
       # To display the perf.data header info, please use --header/--header-only options.
       #
       #
       # Total Lost Samples: 24
       #
       # Samples: 120K of event 'anon group { cycles:p, instructions:p }'
       # Event count (approx.): 24048600000
       #
       #         Overhead  Command      Shared Object     Symbol
       # ................  ...........  ................
       ..................................
       #
          99.74%  99.86%  tchain_edit  tchain_edit       [.] f3
           0.09%   0.02%  tchain_edit  tchain_edit       [.] f2
           0.04%   0.00%  tchain_edit  [kernel.vmlinux]  [k] ixgbe_read_reg
      
      Example 2: Recording the same event multiple times can lead to a high
            drop rate, but it is not a useful configuration.
      
      $ perf record -e '{cycles:p,cycles:p}' -c 20003 --no-time ~/tchain
      Warning: Processed 600592 samples and lost 99.73% samples!
      [perf record: Woken up 148 times to write data]
      [perf record: Captured and wrote 36.922 MB perf.data (1206322 samples)]
      [perf record: Woken up 1 times to write data]
      [perf record: Captured and wrote 0.121 MB perf.data (1629 samples)]
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1431285195-14269-9-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  14. 27 May, 2015 (1 commit)
    • perf session: Fix perf_session__peek_event() · 554e92ed
      Committed by Adrian Hunter
      perf_session__peek_event() generally relies on there being a single mmap
      of the perf.data file; however, on 32-bit platforms, when there is more
      than 32MiB of data, there are multiple mmaps, so
      perf_session__peek_event() reads from the file instead.
      
      In that case a couple of bugs were exposed (note how the seg. fault
      appears with >32M of data):
      
         $ perf record --per-thread -e intel_bts// ../rtit-tests/loopy 1000000
         [ perf record: Woken up 13 times to write data ]
         [ perf record: Captured and wrote 24.568 MB perf.data ]
         $ perf script > /dev/null
         $ perf record --per-thread -e intel_bts// ../rtit-tests/loopy 10000000
         [ perf record: Woken up 136 times to write data ]
         [ perf record: Captured and wrote 270.794 MB perf.data ]
         $ perf script > /dev/null
         Segmentation fault (core dumped)
      
      The wrong address was being passed to the readn() function and the
      buffer size was not being checked.
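
      A hedged sketch of the corrected read path (plain read() stands in for
      perf's readn() helper, and the function here is illustrative, not the
      actual perf_session__peek_event() code):

        #include <stddef.h>
        #include <sys/types.h>
        #include <unistd.h>

        static int peek_event_from_file(int fd, void *buf, size_t buf_sz,
                                        size_t hdr_sz, size_t event_sz)
        {
                size_t rest;

                /* The missing check: the event must fit in the caller's buffer. */
                if (event_sz < hdr_sz || event_sz > buf_sz)
                        return -1;

                rest = event_sz - hdr_sz;
                /* Read the payload right after the already-read header;
                 * pass the buffer itself, not its address. */
                if (read(fd, (char *)buf + hdr_sz, rest) != (ssize_t)rest)
                        return -1;
                return 0;
        }
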
      Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Link: http://lkml.kernel.org/r/1432040746-1755-5-git-send-email-adrian.hunter@intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  15. 06 May, 2015 (2 commits)
  16. 05 May, 2015 (1 commit)
    • perf tools: Add AUX area tracing index · 99fa2984
      Committed by Adrian Hunter
      Add an index of AUX area tracing events within a perf.data file.
      
      perf record uses a special user event PERF_RECORD_FINISHED_ROUND to
      enable sorting of events in chunks instead of having to sort all events
      altogether.
      
      AUX area tracing events contain data that can span back to the very
      beginning of the recording period, i.e. they do not obey the rules of
      PERF_RECORD_FINISHED_ROUND.
      
      By adding an index, AUX area tracing events can be found in advance and
      the PERF_RECORD_FINISHED_ROUND approach works as usual.
      
      The index is recorded with the auxtrace feature in the perf.data file.
      A session reads the index but does not process it.  An AUX area decoder
      can queue all the AUX area data in advance using
      auxtrace_queues__process_index() or otherwise process the index in some
      custom manner.
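
      Conceptually, each index entry only has to locate an AUX area event in
      the file so that a decoder can queue the data up front; a hedged,
      illustrative sketch (field and function names are assumptions, not the
      real auxtrace structures):

        #include <stddef.h>
        #include <stdint.h>

        /* Illustrative index entry: where one AUX area event sits in perf.data. */
        struct aux_index_entry {
                uint64_t file_offset;
                uint64_t size;
        };

        /* Walk the index and queue every AUX area event in advance, instead
         * of waiting for PERF_RECORD_FINISHED_ROUND ordering. */
        static void queue_all_aux_data(const struct aux_index_entry *idx, size_t nr,
                                       void (*queue_one)(uint64_t off, uint64_t sz))
        {
                for (size_t i = 0; i < nr; i++)
                        queue_one(idx[i].file_offset, idx[i].size);
        }
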
      Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/r/1430404667-10593-3-git-send-email-adrian.hunter@intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  17. 29 Apr, 2015 (5 commits)
  18. 08 Apr, 2015 (1 commit)
  19. 01 Apr, 2015 (2 commits)
    • perf ordered_samples: Remove references to perf_{evlist,tool} and machines · 9870d780
      Committed by Arnaldo Carvalho de Melo
      These can be obtained from the ordered_events pointer via container_of()
      instead, which reduces the cross section of ordered_samples.
      
      These were added to ordered_samples in:
      
       commit b7b61cbe
       Author: Arnaldo Carvalho de Melo <acme@redhat.com>
       Date:   Tue Mar 3 11:58:45 2015 -0300
      
          perf ordered_events: Shorten function signatures
      
          By keeping pointers to machines, evlist and tool in ordered_events.
      
      But that was more of a transitional patch while moving stuff out from
      perf_session.c to ordered_events.c, and possibly not even needed by then:
      we could use the container_of() method, and instead of having the
      nr_unordered_samples stats in events_stats, we can have it in
      ordered_samples.
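
      The container_of() idiom relied on here can be sketched as follows (the
      struct layout is illustrative, not the exact perf definitions):

        #include <stddef.h>

        #define container_of(ptr, type, member) \
                ((type *)((char *)(ptr) - offsetof(type, member)))

        struct ordered_events { int dummy; };          /* stand-in */
        struct session_like {                          /* stand-in for perf_session */
                struct ordered_events ordered_events;  /* embedded member */
                /* ... evlist, tool, machines ... */
        };

        /* Given only the embedded ordered_events, recover its owner. */
        static struct session_like *oe_to_session(struct ordered_events *oe)
        {
                return container_of(oe, struct session_like, ordered_events);
        }
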
      Based-on-a-patch-by: Jiri Olsa <jolsa@kernel.org>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/n/tip-4lk0t9js82g0tfc0x1onpkjt@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf session: Always initialize ordered_events · aae59fab
      Committed by Arnaldo Carvalho de Melo
      Even when it is not used to actually reorder events, some of its fields
      are used, like session->ordered_events->tool, to shorten function
      signatures where 'tool', for instance, was being passed. As the tool is
      needed by the ordered_events code, we need it there anyway and might as
      well use it for other perf_session needs.
      
      This fixes a problem where 'perf script' had some condition that made
      session->ordered_events not be initialized even though its script->tool
      ordered_events related flags asked for it to be, which looks like another
      bug and needs to be investigated further.
      
      Always initializing session->ordered_events at least leaves the current
      assumptions in place, so do it now.
      Reported-by: David Ahern <dsahern@gmail.com>
      Reviewed-by: David Ahern <dsahern@gmail.com>
      Tested-by: David Ahern <dsahern@gmail.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/n/tip-b1xxk0rwkz2a0gip1uufmjqg@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  20. 12 Mar, 2015 (3 commits)
  21. 11 Mar, 2015 (2 commits)
  22. 03 Mar, 2015 (1 commit)
    • perf tools: Reference count struct thread · f3b623b8
      Committed by Arnaldo Carvalho de Melo
      We need to do that to stop accumulating entries in the dead_threads
      linked list, i.e. we were keeping references to threads in struct hists
      that continue to exist even after a thread exited and was removed from
      the machine threads rbtree.
      
      We still keep the dead_threads list, but just for debugging, allowing us
      to iterate at any given point over the threads that still are referenced
      by things like struct hist_entry.
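
      A hedged sketch of the get/put reference counting pattern being
      introduced (simplified: the real thread__get()/thread__put() also manage
      the dead_threads list):

        #include <stdlib.h>

        struct thread_like {          /* stand-in for struct thread */
                int refcnt;
                /* ... pid, tid, maps, comm ... */
        };

        static struct thread_like *thread_like__get(struct thread_like *t)
        {
                if (t)
                        t->refcnt++;  /* each holder keeps the thread alive */
                return t;
        }

        static void thread_like__put(struct thread_like *t)
        {
                if (t && --t->refcnt == 0)
                        free(t);      /* last reference gone: really delete */
        }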
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/n/tip-3ejvfyed0r7ue61dkurzjux4@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  23. 23 Feb, 2015 (5 commits)
  24. 19 Feb, 2015 (1 commit)
    • perf tools: Construct LBR call chain · 384b6055
      Committed by Kan Liang
      The LBR call stack only has user-space callchains; it is output in the
      PERF_SAMPLE_BRANCH_STACK data format. Kernel callchains are still
      delivered in the PERF_SAMPLE_CALLCHAIN form.
      
      The perf tool has to handle both data sources to construct a
      complete callstack.
      
      For the "perf report -D" option, both LBR and FP information will be
      displayed.
      
      A new call chain recording option, "lbr", is introduced into the perf
      tool for the LBR call stack. The user can use --call-graph lbr to get
      call stack information from the hardware.
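
      A hedged sketch of how the two sources are combined conceptually: kernel
      entries come from the PERF_SAMPLE_CALLCHAIN part up to the user-space
      context marker, user entries from the LBR call stack (an illustration of
      the idea, not the actual resolve code in perf):

        #include <stddef.h>
        #include <stdint.h>

        #define PERF_CONTEXT_USER ((uint64_t)-512) /* user-space context marker */

        static size_t merge_callchain(const uint64_t *kchain, size_t knr,
                                      const uint64_t *lbr_user, size_t lnr,
                                      uint64_t *out, size_t out_max)
        {
                size_t n = 0;

                /* Kernel-side entries until the user context marker. */
                for (size_t i = 0; i < knr && n < out_max; i++) {
                        if (kchain[i] == PERF_CONTEXT_USER)
                                break;
                        out[n++] = kchain[i];
                }
                /* User-space entries recovered from the LBR call stack. */
                for (size_t i = 0; i < lnr && n < out_max; i++)
                        out[n++] = lbr_user[i];

                return n;
        }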
      
      Here are some examples.
      
      When profiling bc(1) on Fedora 19:
      
        echo 'scale=2000; 4*a(1)' > cmd; perf record --call-graph lbr bc -l < cmd
      
      If enabling LBR, perf report output looks like:
      
          50.36%       bc  bc                 [.] bc_divide
                       |
                       --- bc_divide
                           execute
                           run_code
                           yyparse
                           main
                           __libc_start_main
                           _start
          33.66%       bc  bc                 [.] _one_mult
                       |
                       --- _one_mult
                           bc_divide
                           execute
                           run_code
                           yyparse
                           main
                           __libc_start_main
                           _start
           7.62%       bc  bc                 [.] _bc_do_add
                       |
                       --- _bc_do_add
                          |
                          |--99.89%-- 0x2000186a8
                           --0.11%-- [...]
           6.83%       bc  bc                 [.] _bc_do_sub
                       |
                       --- _bc_do_sub
                          |
                          |--99.94%-- bc_add
                          |          execute
                          |          run_code
                          |          yyparse
                          |          main
                          |          __libc_start_main
                          |          _start
                           --0.06%-- [...]
           0.46%       bc  libc-2.17.so       [.] __memset_sse2
                       |
                       --- __memset_sse2
                          |
                          |--54.13%-- bc_new_num
                          |          |
                          |          |--51.00%-- bc_divide
                          |          |          execute
                          |          |          run_code
                          |          |          yyparse
                          |          |          main
                          |          |          __libc_start_main
                          |          |          _start
                          |          |
                          |          |--30.46%-- _bc_do_sub
                          |          |          bc_add
                          |          |          execute
                          |          |          run_code
                          |          |          yyparse
                          |          |          main
                          |          |          __libc_start_main
                          |          |          _start
                          |          |
                          |           --18.55%-- _bc_do_add
                          |                     bc_add
                          |                     execute
                          |                     run_code
                          |                     yyparse
                          |                     main
                          |                     __libc_start_main
                          |                     _start
                          |
                           --45.87%-- bc_divide
                                     execute
                                     run_code
                                     yyparse
                                     main
                                     __libc_start_main
                                     _start
      
      If using FP, perf report output looks like:
      
        echo 'scale=2000; 4*a(1)' > cmd; perf record --call-graph fp bc -l < cmd
      
          50.49%       bc  bc                 [.] bc_divide
                       |
                       --- bc_divide
          33.57%       bc  bc                 [.] _one_mult
                       |
                       --- _one_mult
           7.61%       bc  bc                 [.] _bc_do_add
                       |
                       --- _bc_do_add
                           0x2000186a8
           6.88%       bc  bc                 [.] _bc_do_sub
                       |
                       --- _bc_do_sub
           0.42%       bc  libc-2.17.so       [.] __memcpy_ssse3_back
                       |
                       --- __memcpy_ssse3_back
      
      If using LBR, perf report -D output looks like:
      
      3458145275743 0x2fd750 [0xd8]: PERF_RECORD_SAMPLE(IP, 0x2): 9748/9748: 0x408ea8 period: 609644 addr: 0
      ... LBR call chain: nr:8
      .....  0: fffffffffffffe00
      .....  1: 0000000000408e50
      .....  2: 000000000040a458
      .....  3: 000000000040562e
      .....  4: 0000000000408590
      .....  5: 00000000004022c0
      .....  6: 00000000004015dd
      .....  7: 0000003d1cc21b43
      ... FP chain: nr:2
      .....  0: fffffffffffffe00
      .....  1: 0000000000408ea8
       ... thread: bc:9748
       ...... dso: /usr/bin/bc
      
      The LBR call stack has the following known limitations:
      
       - Zero-length calls are not filtered out by the hardware
      
       - Exception handling such as setjmp/longjmp will have calls/returns that
         do not match
      
       - Pushing a different return address onto the stack will have
         calls/returns that do not match
      
       - If the call stack is deeper than the LBR, only the last entries are
         captured
      Tested-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Simon Que <sque@chromium.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/r/1420482185-29830-3-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>