1. 16 Aug 2017, 2 commits
  2. 07 May 2016, 1 commit
    • perf script: Update export-to-postgresql to support callchain export · 3521f3bc
      Committed by Chris Phlipot
      Update the export-to-postgresql.py to support the newly introduced
      callchain export.
      
      Callchains are added into the existing call_paths table and can now
      be associated with samples when the "callchains" commandline option
      is used with the script.
      
      Ex.:
      
        $ perf script -s export-to-postgresql.py example_db all callchains
      
      Includes the following changes to enable callchain export via the python export
      APIs:
      
      - Add the "callchains" commandline option, which is used to enable
        callchain export by setting the perf_db_export_callchains global
      - Add perf_db_export_callchains checks for call_path table creation
        and population.
      - Add call_path_id to samples_table to conform with the new API (see the
        schema sketch below)
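      A hedged sketch of the schema these changes imply (the column names come
      from the example query further down; the types and any extra columns are
      assumptions, not the script's literal DDL):

      	-- Sketch only: one call_paths row per unique call path, plus a column
      	-- on samples that records which call path the sample was taken in.
      	CREATE TABLE call_paths (
      		id        bigint NOT NULL,  -- call path id
      		parent_id bigint,           -- call path of the caller
      		symbol_id bigint            -- symbol at this node of the call tree
      	);

      	ALTER TABLE samples ADD COLUMN call_path_id bigint;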
      
      Example usage and output using a small test app:
      
        test_app.c:
      
      	volatile int x = 0;
      	void inc_x_loop()
      	{
      		int i;
      		for(i=0; i<100000000; i++)
      			x++;
      	}
      
      	void a()
      	{
      		inc_x_loop();
      	}
      
      	void b()
      	{
      		inc_x_loop();
      	}
      
      	int main()
      	{
      		a();
      		b();
      		return 0;
      	}
      
      Example usage:
      
        $ gcc -g -O0 test_app.c
        $ perf record --call-graph=dwarf ./a.out
        [ perf record: Woken up 77 times to write data ]
        [ perf record: Captured and wrote 19.373 MB perf.data (2404 samples) ]
      
        $ perf script -s scripts/python/export-to-postgresql.py
      	example_db all callchains
      
        $ psql example_db
      
        example_db=#
        SELECT
        (SELECT name FROM symbols WHERE id = cps.symbol_id) as symbol,
        (SELECT name FROM symbols WHERE id =
      	(SELECT symbol_id from call_paths where id = cps.parent_id))
      	as parent_symbol,
        sum(period) as event_count
        FROM samples join call_paths as cps on call_path_id = cps.id
        GROUP BY cps.id,evsel_id
        ORDER BY event_count DESC
        LIMIT 5;
      
              symbol      |      parent_symbol       | event_count
        ------------------+--------------------------+-------------
         inc_x_loop       | a                        |   734250982
         inc_x_loop       | b                        |   731028057
         unknown          | unknown                  |     1335858
         task_tick_fair   | scheduler_tick           |     1238842
         update_wall_time | tick_do_update_jiffies64 |      650373
        (5 rows)
      
      The above data shows total "self time" in cycles for each call path that was
      sampled. It is intended to demonstrate how it accounts separately for the two
      ways to reach the "inc_x_loop" function (via "a" and "b").  Recursive common
      table expressions can also be used to get cumulative time spent in a
      function, but that is beyond the scope of this basic example (a sketch is
      included below for reference).
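      A hedged sketch of such a recursive query, written against the tables used
      in the example above (per-event separation via evsel_id is omitted for
      brevity, and the guard against a self-parented root row is an assumption
      about how the root call path is stored):

      	-- Cumulative (children-inclusive) event count per call path: expand
      	-- each call path into itself plus all of its descendants, then sum
      	-- the periods of the samples attributed to any of them.
      	WITH RECURSIVE descendants(root_id, id) AS (
      	    SELECT id, id FROM call_paths
      	  UNION ALL
      	    SELECT d.root_id, cp.id
      	    FROM call_paths cp
      	    JOIN descendants d ON cp.parent_id = d.id
      	    WHERE cp.id <> cp.parent_id  -- assumed guard for a self-parented root
      	)
      	SELECT
      	  (SELECT name FROM symbols WHERE id =
      		(SELECT symbol_id FROM call_paths WHERE id = d.root_id)) AS symbol,
      	  sum(s.period) AS cumulative_event_count
      	FROM descendants d
      	JOIN samples s ON s.call_path_id = d.id
      	GROUP BY d.root_id
      	ORDER BY cumulative_event_count DESC
      	LIMIT 5;
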
      Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1461831551-12213-7-git-send-email-cphlipot0@gmail.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  3. 19 Apr 2016, 1 commit
  4. 29 Sep 2015, 1 commit
  5. 21 Aug 2015, 1 commit
    • perf tools: Add example call-graph script · 4b715d24
      Committed by Adrian Hunter
      Add a script to produce a call-graph from data exported to a postgresql
      database and derived from a processor trace event like intel_pt or intel_bts.
      
      Refer to comments in the scripts call-graph-from-postgresql.py and
      export-to-postgresql.py for more details on how to set up the environment,
      install the required packages, etc.
      
      Committer note:
      
      From the scripts, for convenience while reading 'git log':
      
        An example of using this script with Intel PT:
      
        $ perf record -e intel_pt//u ls
        $ perf script -s ~/libexec/perf-core/scripts/python/export-to-postgresql.py pt_example branches calls
        2015-05-29 12:49:23.464364 Creating database...
        2015-05-29 12:49:26.281717 Writing to intermediate files...
        2015-05-29 12:49:27.190383 Copying to database...
        2015-05-29 12:49:28.140451 Removing intermediate files...
        2015-05-29 12:49:28.147451 Adding primary keys
        2015-05-29 12:49:28.655683 Adding foreign keys
        2015-05-29 12:49:29.365350 Done
        $ python tools/perf/scripts/python/call-graph-from-postgresql.py pt_example
        # The result is a GUI window with a tree representing a context-sensitive
        # call-graph.  Expanding a couple of levels of the tree and adjusting column
        # widths to suit will display something like:
      
                                               Call Graph: pt_example
        Call Path                        |Object     |Count|Time(ns)|Time(%)|Branch Count|Branch Count(%)
        v- ls
           v- 2638:2638
               v- _start                  ld-2.19.so    1   10074071  100.0        211135          100.0
                 |- unknown               unknown       1      13198    0.1             1            0.0
                 >- _dl_start             ld-2.19.so    1    1400980   13.9         19637            9.3
                >- _dl_init_internal     ld-2.19.so    1     448152    4.4         11094            5.3
                 v-__libc_start_main@plt  ls            1    8211741   81.5        180397           85.4
                    >- _dl_fixup          ld-2.19.so    1       7607    0.1           108            0.1
                    >- __cxa_atexit       libc-2.19.so  1      11737    0.1            10            0.0
                    >- __libc_csu_init    ls            1      10354    0.1            10            0.0
                    |- _setjmp            libc-2.19.so  1          0    0.0             4            0.0
                    v- main               ls            1    8182043   99.6        180254           99.9
      Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Link: http://lkml.kernel.org/r/1437150840-31811-11-git-send-email-adrian.hunter@intel.com
      [ Added 'python-pyside qt-postgresql' to the yum cmdline installing required packages ]
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  6. 04 Nov 2014, 2 commits
  7. 29 Oct 2014, 1 commit
    • perf script: Add Python script to export to postgresql · 2987e32f
      Committed by Adrian Hunter
      Add a Python script to export to a postgresql database.
      
      The script requires the Python pyside module and the Qt PostgreSQL
      driver.  The packages needed are probably named "python-pyside" and
      "libqt4-sql-psql".
      
      The caller of the script must be able to create postgresql databases.
      
      The script takes the database name as a parameter.  The database and
      database tables are created.  Data is written to flat files which are
      then imported using SQL COPY FROM.
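
      For reference, the bulk-load step looks roughly like the sketch below; the
      file path is an illustrative assumption, and the script's actual intermediate
      file format and column layout may differ:

      	-- Sketch only: each intermediate flat file is loaded into its table
      	-- with a single COPY FROM statement (default tab-delimited text format).
      	COPY samples FROM '/tmp/perf_export/samples.dat';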
      
      Example:
      
        $ perf record ls
        ...
        $ perf script report export-to-postgresql lsdb
        2014-02-14 10:55:38.631431 Creating database...
        2014-02-14 10:55:39.291958 Writing to intermediate files...
        2014-02-14 10:55:39.350280 Copying to database...
        2014-02-14 10:55:39.358536 Removing intermediate files...
        2014-02-14 10:55:39.358665 Adding primary keys
        2014-02-14 10:55:39.658697 Adding foreign keys
        2014-02-14 10:55:39.667412 Done
        $ psql lsdb
        lsdb-# \d
                    List of relations
         Schema |      Name       | Type  | Owner
        --------+-----------------+-------+-------
         public | comm_threads    | table | acme
         public | comms           | table | acme
         public | dsos            | table | acme
         public | machines        | table | acme
         public | samples         | table | acme
         public | samples_view    | view  | acme
         public | selected_events | table | acme
         public | symbols         | table | acme
         public | threads         | table | acme
        (9 rows)
        lsdb-# \d samples
               Table "public.samples"
            Column     |  Type   | Modifiers
        ---------------+---------+-----------
         id            | bigint  | not null
         evsel_id      | bigint  |
         machine_id    | bigint  |
         thread_id     | bigint  |
         comm_id       | bigint  |
         dso_id        | bigint  |
         symbol_id     | bigint  |
         sym_offset    | bigint  |
         ip            | bigint  |
         time          | bigint  |
         cpu           | integer |
         to_dso_id     | bigint  |
         to_symbol_id  | bigint  |
         to_sym_offset | bigint  |
         to_ip         | bigint  |
         period        | bigint  |
         weight        | bigint  |
         transaction   | bigint  |
         data_src      | bigint  |
        Indexes:
            "samples_pkey" PRIMARY KEY, btree (id)
        Foreign-key constraints:
            "commfk" FOREIGN KEY (comm_id) REFERENCES comms(id)
            "dsofk" FOREIGN KEY (dso_id) REFERENCES dsos(id)
            "evselfk" FOREIGN KEY (evsel_id) REFERENCES selected_events(id)
            "machinefk" FOREIGN KEY (machine_id) REFERENCES machines(id)
            "symbolfk" FOREIGN KEY (symbol_id) REFERENCES symbols(id)
            "threadfk" FOREIGN KEY (thread_id) REFERENCES threads(id)
            "todsofk" FOREIGN KEY (to_dso_id) REFERENCES dsos(id)
            "tosymbolfk" FOREIGN KEY (to_symbol_id) REFERENCES symbols(id)
      
        lsdb-# \d samples_view
                       View "public.samples_view"
              Column       |          Type           | Modifiers
        -------------------+-------------------------+-----------
         id                | bigint                  |
         time              | bigint                  |
         cpu               | integer                 |
         pid               | integer                 |
         tid               | integer                 |
         command           | character varying(16)   |
         event             | character varying(80)   |
         ip_hex            | text                    |
         symbol            | character varying(2048) |
         sym_offset        | bigint                  |
         dso_short_name    | character varying(256)  |
         to_ip_hex         | text                    |
         to_symbol         | character varying(2048) |
         to_sym_offset     | bigint                  |
         to_dso_short_name | character varying(256)  |
      
          lsdb=# select * from samples_view;
      
         id| time       |cpu | pid  | tid  |command| event  |   ip_hex      |           symbol    |sym_off| dso_name|to_ip_hex|to_symbol|to_sym_off|to_dso_name
         --+------------+----+------+------+-------+--------+---------------+---------------------+-------+---------+---------+---------+----------+----------
         1 |12202825015 | -1 | 7339 | 7339 |:17339 | cycles | fffff8104d24a |native_write_msr_safe|    10 | [kernel]| 0       | unknown |         0| unknown
         2 |12203258804 | -1 | 7339 | 7339 |:17339 | cycles | fffff8104d24a |native_write_msr_safe|    10 | [kernel]| 0       | unknown |         0| unknown
         3 |12203988119 | -1 | 7339 | 7339 |:17339 | cycles | fffff8104d24a |native_write_msr_safe|    10 | [kernel]| 0       | unknown |         0| unknown
      
      My notes (which may be out-of-date) on setting up postgresql so you can
      create databases:
      
      fedora:
      
              $ sudo yum install postgresql postgresql-server python-pyside qt-postgresql
              $ sudo su - postgres -c initdb
              $ sudo service postgresql start
              $ sudo su - postgres
              $ createuser -s <your username>
      
      I used the unix user name in createuser.
      
      If it fails, try createuser without -s and answer the following question
      to allow your user to create tables:
      
              Shall the new role be a superuser? (y/n) y
      
      ubuntu:
      
              $ sudo apt-get install postgresql
              $ sudo su - postgres
              $ createuser <your username>
              Shall the new role be a superuser? (y/n) y
      
      You may want to disable automatic startup.  One way is to edit
      /etc/postgresql/9.3/main/start.conf.  Another is to disable the init
      script, e.g. "sudo update-rc.d postgresql disable".
      Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/r/1414061124-26830-8-git-send-email-adrian.hunter@intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>