Commit 1058db9f authored by Chuck Litzell, committed by David Yozie

Relocates gpperfmon reference from GPCC (#2388)

* Relocates gpperfmon reference from GPCC

* Remove emcconnect history and iterators from PDF ditamap

* Remove iterator metrics subtopic from PDF ditamap

* Additional edits to acknowledge gpperfmon without GPCC

* Incorporate gpperfmon.conf reference into gpperfmon_install reference
Parent dd311f02
......@@ -9,8 +9,8 @@ GEM
addressable (2.5.1)
public_suffix (~> 2.0, >= 2.0.2)
ansi (1.5.0)
backports (3.7.0)
bookbindery (10.1.2)
backports (3.8.0)
bookbindery (10.1.4)
ansi (~> 1.4)
css_parser
elasticsearch
......@@ -49,15 +49,15 @@ GEM
sass (>= 3.2, < 3.5)
concurrent-ruby (1.0.5)
contracts (0.13.0)
css_parser (1.4.10)
css_parser (1.5.0)
addressable
dotenv (2.2.0)
elasticsearch (5.0.3)
elasticsearch-api (= 5.0.3)
elasticsearch-transport (= 5.0.3)
elasticsearch-api (5.0.3)
dotenv (2.2.1)
elasticsearch (5.0.4)
elasticsearch-api (= 5.0.4)
elasticsearch-transport (= 5.0.4)
elasticsearch-api (5.0.4)
multi_json
elasticsearch-transport (5.0.3)
elasticsearch-transport (5.0.4)
faraday
multi_json
em-websocket (0.5.1)
......@@ -67,7 +67,7 @@ GEM
eventmachine (1.2.3)
excon (0.55.0)
execjs (2.7.0)
faraday (0.12.0.1)
faraday (0.12.1)
multipart-post (>= 1.2, < 3)
fast_blank (1.0.0)
fastimage (2.1.0)
......@@ -77,7 +77,7 @@ GEM
fog-json (~> 1.0)
fog-xml (~> 0.1)
ipaddress (~> 0.8)
fog-core (1.43.0)
fog-core (1.44.1)
builder
excon (~> 0.49)
formatador (~> 0.2)
......@@ -91,7 +91,8 @@ GEM
sass (>= 3.2)
formatador (0.2.5)
git (1.2.9.1)
haml (4.0.7)
haml (5.0.1)
temple (>= 0.8.0)
tilt
hamster (3.0.0)
concurrent-ruby (~> 1.0)
......@@ -181,6 +182,7 @@ GEM
sprockets (3.7.1)
concurrent-ruby (~> 1.0)
rack (> 1, < 3)
temple (0.8.0)
therubyracer (0.12.2)
libv8 (~> 3.16.14.0)
ref
......@@ -189,7 +191,7 @@ GEM
tilt (1.4.1)
tzinfo (1.2.3)
thread_safe (~> 0.1)
uglifier (3.1.13)
uglifier (3.2.0)
execjs (>= 0.3.0, < 3)
PLATFORMS
......
......@@ -289,6 +289,22 @@
</chapter>
<chapter href="ref_guide/gp_toolkit.xml" navtitle="The gp_toolkit Administrative Schema"
id="gp_toolkit"/>
<chapter href="ref_guide/gpperfmon/dbref.xml" navtitle="The gpperfmon Database">
<topicref href="ref_guide/gpperfmon/db-database.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-diskspace.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-filerep.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-health.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-interface-stats.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-log-alert.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-queries.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-segment.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-socket-stats.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-system.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-tcp-stats.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-udp-stats.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-dynamic-memory-info.xml" type="topic"/>
<topicref href="ref_guide/gpperfmon/db-memory-info.xml" type="topic"/>
</chapter>
<chapter href="ref_guide/data_types.xml" navtitle="Greenplum Database Data Types"
id="data_types"/>
<chapter href="ref_guide/character_sets.xml" navtitle="Character Set Support"
......
......@@ -63,8 +63,8 @@
<p>In Greenplum Database, the <i>segments</i> are where data is stored and the majority
of query processing takes place. When a user connects to the database and issues a
query, processes are created on each segment to handle the work of that query. For
more information about query processes, see the Greenplum Database
Administrator Guide. </p>
more information about query processes, see the Greenplum Database Administrator
Guide. </p>
<p>User-defined tables and their indexes are distributed across the available segments
in a Greenplum Database system; each segment contains a distinct portion of data.
The database server processes that serve segment data run under the corresponding
......@@ -241,15 +241,22 @@
<p>System State Reporting </p>
</li>
</ul>
<p>Greenplum provides an optional system monitoring and management tool that
administrators can install and enable with Greenplum Database. Greenplum Command
Center uses data collection agents on each segment host to collect and store
Greenplum system metrics in a dedicated database. Segment data collection agents
send their data to the Greenplum master at regular intervals (typically every 15
seconds). Users can query the Command Center database to see query and system
metrics. Greenplum Command Center has a graphical web-based user interface for
viewing system metrics, which administrators can install separately from Greenplum
Database. For more information, see the Greenplum Command Center documentation.</p>
<p>Pivotal provides an optional system monitoring and management tool, Greenplum Command
Center, which administrators can install and enable with Greenplum Database.
Greenplum Command Center depends upon a dedicated database named
<codeph>gpperfmon</codeph>, which is used to collect and store system metrics.
Data collection agents on the segments send their data to the Greenplum master at
regular intervals (typically every 15 seconds).</p>
<p> Greenplum Database includes a <codeph>gpperfmon_install</codeph> management utility,
which creates the <codeph>gpperfmon</codeph> database and enables the data
collection agents. Users can query the <codeph>gpperfmon</codeph> database to see
query and system metrics. </p>
<p>Administrators can install Greenplum Command Center, available separately from
Greenplum Database, to provide a graphical web-based user interface for viewing the
system metrics and to perform additional system management tasks. For more
information about Greenplum Command Center, see the <xref
href="https://gpcc.docs.pivotal.io" format="html" scope="external">Greenplum
Command Center documentation</xref>.</p>
<fig id="kf145043">
<title>Greenplum Command Center Architecture</title>
<image href="../graphics/cc_arch_gpdb.png" placement="break" width="299px"
......
......@@ -249,15 +249,22 @@
<li id="iw157657">Loading Data in Parallel</li>
<li id="iw157658">System State Reporting</li>
</ul>
<p>Greenplum provides an optional system monitoring and management tool that
administrators can install and enable with Greenplum Database. Greenplum Command
Center uses data collection agents on each segment host to collect and store
Greenplum system metrics in a dedicated database. Segment data collection agents
send their data to the Greenplum master at regular intervals (typically every 15
seconds). Users can query the Command Center database to see query and system
metrics. Greenplum Command Center has a graphical web-based user interface for
viewing system metrics, which administrators can install separately from Greenplum
Database. For more information, see the Greenplum Command Center documentation.</p>
<p>Pivotal provides an optional system monitoring and management tool, Greenplum Command
Center, which administrators can install and enable with Greenplum Database.
Greenplum Command Center depends upon a dedicated database named
<codeph>gpperfmon</codeph>, which is used to collect and store system metrics.
Data collection agents on the segments send their data to the Greenplum master at
regular intervals (typically every 15 seconds).</p>
<p> Greenplum Database includes a <codeph>gpperfmon_install</codeph> management utility,
which creates the <codeph>gpperfmon</codeph> database and enables the data
collection agents. Users can query the <codeph>gpperfmon</codeph> database to see
query and system metrics. </p>
<p>Administrators can install Greenplum Command Center, available separately from
Greenplum Database, to provide a graphical web-based user interface for viewing the
system metrics and to perform additional system management tasks. For more
information about Greenplum Command Center, see the <xref
href="https://gpcc.docs.pivotal.io" format="html" scope="external">Greenplum
Command Center documentation</xref>.</p>
<fig id="iw157682">
<title>Greenplum Command Center Architecture</title>
<image height="304px" href="../../graphics/cc_arch_gpdb.png" placement="break"
......
......@@ -21,14 +21,22 @@
<li>Transferring data between Greenplum databases</li>
<li>System state reporting</li>
</ul>
<p>Greenplum provides an optional system monitoring and management tool that administrators can
install and enable with Greenplum Database. Greenplum Command Center uses data collection
agents on each segment host to collect and store Greenplum system metrics in a dedicated
database. Segment data collection agents send their data to the Greenplum master at regular
intervals (typically every 15 seconds). Users can query the Command Center database to see
query and system metrics. Greenplum Command Center has a graphical web-based user interface
for viewing system metrics, which administrators can install separately from Greenplum
Database. For more information, see the Greenplum Command Center documentation.</p>
<p>Pivotal provides an optional system monitoring and management tool, Greenplum Command
Center, which administrators can install and enable with Greenplum Database.
Greenplum Command Center depends upon a dedicated database named
<codeph>gpperfmon</codeph>, which is used to collect and store system metrics.
Data collection agents on the segments send their data to the Greenplum master at
regular intervals (typically every 15 seconds).</p>
<p> Greenplum Database includes a <codeph>gpperfmon_install</codeph> management utility,
which creates the <codeph>gpperfmon</codeph> database and enables the data
collection agents. Users can query the <codeph>gpperfmon</codeph> database to see
query and system metrics. </p>
<p>Administrators can install Greenplum Command Center, available separately from
Greenplum Database, to provide a graphical web-based user interface for viewing the
system metrics and to perform additional system management tasks. For more
information about Greenplum Command Center, see the <xref
href="https://gpcc.docs.pivotal.io" format="html" scope="external">Greenplum
Command Center documentation</xref>.</p>
<image href="../graphics/cc_arch_gpdb.png" id="image_mt4_ltv_fp"/>
</body>
</topic>
......@@ -863,8 +863,7 @@
<topic id="topic36" xml:lang="en">
<title>Greenplum Command Center Agent</title>
<body>
<p>The following parameters configure the data collection agents for Greenplum Command
Center.</p>
<p>The following parameters configure the data collection agents for the Command Center (<codeph>gpperfmon</codeph>) database.</p>
<simpletable id="kh171891">
<strow>
<stentry>
......
......@@ -17,18 +17,19 @@
<title id="kj157177">Monitoring Database Activity and Performance</title>
<body>
<p>Pivotal provides an optional system monitoring and management tool, Greenplum Command
Center, that administrators can enable within Greenplum Database.</p>
<p>Enabling Greenplum Command Center is a two-part process. First, enable the Greenplum
Database server to collect and store system metrics. Next, install and configure the
Greenplum Command Center Console, an online application used to view the system metrics
collected and store them in the Command Center's dedicated Greenplum database. </p>
<p>The Greenplum Command Center Console ships separately from your Greenplum Database
installation. Download the Greenplum Command Center Console package from <xref
href="https://network.pivotal.io" scope="external" format="html">Pivotal Network
</xref>and documentation from <xref href="http://gpdb.docs.pivotal.io" scope="external"
format="html">Pivotal Documentation</xref>. See the <cite>Greenplum Database Command
Center Administrator Guide</cite> for more information on installing and using the
Greenplum Command Center Console.</p>
Center, which administrators can enable with Greenplum Database. Greenplum Command Center
depends upon a dedicated database, <codeph>gpperfmon</codeph>, and segment data collection
agents that collect and store system metrics in the database. </p>
<p>The Greenplum Database <codeph>gpperfmon_install</codeph> management utility creates the
<codeph>gpperfmon</codeph> database and manages the data collection agents on the segment
hosts. Administrators can query metrics in the <codeph>gpperfmon</codeph> database.</p>
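<p>As a minimal sketch (the password, port, and table name below are examples only; see the
<codeph>gpperfmon_install</codeph> reference for the full option list), you might create the
<codeph>gpperfmon</codeph> database, restart Greenplum Database so the agents begin
collecting, and then query a metrics table with <codeph>psql</codeph>:</p>
<p>
<codeblock># Create the gpperfmon database and enable the data collection agents
$ gpperfmon_install --enable --password changeme --port 5432
# Restart Greenplum Database so the agents start collecting
$ gpstop -r
# Query current system metrics
$ psql -d gpperfmon -c "SELECT * FROM system_now;"
</codeblock>
</p>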
<p>The Greenplum Command Center Console is a web-based interface that graphically displays
metrics collected in the <codeph>gpperfmon</codeph> database and provides additional system
management tools. Greenplum Command Center ships separately from the Greenplum Database
distribution. Download the Greenplum Command Center package from <xref
href="https://network.pivotal.io" scope="external" format="html">Pivotal Network</xref>
and view documentation at the <xref href="http://gpcc.docs.pivotal.io" scope="external"
format="html">Greenplum Command Center Documentation</xref> web site. </p>
</body>
</topic>
<topic id="topic3" xml:lang="en">
......@@ -729,7 +730,8 @@ Distributed by: (sale_id)
</entry>
<entry colname="col2">timestamptz</entry>
<entry colname="col3"/>
<entry colname="col4">The last time a query process in this session became idle.</entry>
<entry colname="col4">The last time a query process in this session became
idle.</entry>
</row>
</tbody>
</tgroup>
......
......@@ -14,12 +14,13 @@
<topic id="topic2" xml:lang="en">
<title>Checking System State</title>
<body>
<p>Use the <codeph>gpstate</codeph> utility to identify failed segments. A Greenplum Database system will incur performance degradation when segment
instances are down because other hosts must pick up the processing responsibilities
of the down segments.</p>
<p>Use the <codeph>gpstate</codeph> utility to identify failed segments. A Greenplum
Database system will incur performance degradation when segment instances are down
because other hosts must pick up the processing responsibilities of the down
segments.</p>
<p>Failed segments can indicate a hardware failure, such as a failed disk drive or
network card. Greenplum Database provides the hardware verification
tool <codeph>gpcheckperf</codeph> to help identify the segment hosts with hardware
network card. Greenplum Database provides the hardware verification tool
<codeph>gpcheckperf</codeph> to help identify the segment hosts with hardware
issues.</p>
</body>
</topic>
......@@ -94,21 +95,33 @@ a.current_query
<p>You can use system monitoring utilities such as <codeph>ps</codeph>,
<codeph>top</codeph>, <codeph>iostat</codeph>, <codeph>vmstat</codeph>,
<codeph>netstat</codeph> and so on to monitor database activity on the hosts
in your Greenplum Database array. These tools can help identify
Greenplum Database processes (<codeph>postgres</codeph>
processes) currently running on the system and the most resource intensive tasks
with regards to CPU, memory, disk I/O, or network activity. Look at these system
statistics to identify queries that degrade database performance by overloading
the system and consuming excessive resources. Greenplum Database's
management tool <codeph>gpssh</codeph> allows you to run these system monitoring
commands on several hosts simultaneously.</p>
in your Greenplum Database array. These tools can help identify Greenplum
Database processes (<codeph>postgres</codeph> processes) currently running on
the system and the most resource intensive tasks with regards to CPU, memory,
disk I/O, or network activity. Look at these system statistics to identify
queries that degrade database performance by overloading the system and
consuming excessive resources. Greenplum Database's management tool
<codeph>gpssh</codeph> allows you to run these system monitoring commands on
several hosts simultaneously.</p>
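<p>For example, assuming a host file that lists the segment hosts (the file name here is
only an illustration), you might run a monitoring command on every host at once with
<codeph>gpssh</codeph>:</p>
<p>
<codeblock># Report virtual memory statistics from all segment hosts: 5 samples at 1-second intervals
$ gpssh -f hostfile_gpssh -e 'vmstat 1 5'
</codeblock>
</p>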
<p>You can create and use the Greenplum Database
<i>session_level_memory_consumption</i> view that provides information about the
current memory utilization and idle time for sessions that are running queries on Greenplum Database. For information about the view, see <xref
<i>session_level_memory_consumption</i> view that provides information about
the current memory utilization and idle time for sessions that are running
queries on Greenplum Database. For information about the view, see <xref
href="managing/monitor.xml#topic_slt_ddv_1q"/>.</p>
<p>The Greenplum Command Center collects query and system
utilization metrics. See the <i>Greenplum Command Center Administrator Guide</i>
for procedures to enable Greenplum Command Center.</p>
<p>You can enable a dedicated database, <codeph>gpperfmon</codeph>, in which data
collection agents running on each segment host save query and system utilization
metrics. Refer to the <codeph>gpperfmon_install</codeph> management utility
reference in the <cite>Greenplum Database Management Utility Reference
Guide</cite> for help creating the <codeph>gpperfmon</codeph> database and
managing the agents. See documentation for the tables and views in the
<codeph>gpperfmon</codeph> database in the <cite>Greenplum Database
Reference Guide</cite>.</p>
<p>The optional Greenplum Command Center web-based user interface graphically
displays query and system utilization metrics saved in the
<codeph>gpperfmon</codeph> database. See the <xref
href="https://gpcc.docs.pivotal.io" format="html" scope="external">Greenplum
Command Center Documentation</xref> web site for procedures to enable
Greenplum Command Center.</p>
</body>
</topic>
</topic>
......@@ -119,19 +132,20 @@ a.current_query
<codeph>EXPLAIN</codeph> command shows the query plan for a given query. See
<xref href="query/topics/query-profiling.xml#topic39"/> for more information
about reading query plans and identifying problems.</p>
<p>When an out of memory event occurs during query execution, the Greenplum Database memory accounting framework reports detailed memory
consumption of every query running at the time of the event. The information is
written to the Greenplum Database segment logs. </p>
<p>When an out of memory event occurs during query execution, the Greenplum Database
memory accounting framework reports detailed memory consumption of every query
running at the time of the event. The information is written to the Greenplum
Database segment logs. </p>
</body>
</topic>
<topic id="topic8" xml:lang="en">
<title id="jc155511">Investigating Error Messages</title>
<body>
<p>Greenplum Database log messages are written to files in the
<codeph>pg_log</codeph> directory within the master's or segment's data
directory. Because the master log file contains the most information, you should
always check it first. Log files roll over daily and use the naming convention:
<codeph>gpdb-</codeph><i><codeph>YYYY</codeph></i><codeph>-</codeph><i><codeph>MM</codeph></i><codeph>-</codeph><i><codeph>DD_hhmmss.csv</codeph></i>.
<p>Greenplum Database log messages are written to files in the <codeph>pg_log</codeph>
directory within the master's or segment's data directory. Because the master log
file contains the most information, you should always check it first. Log files roll
over daily and use the naming convention:
<codeph>gpdb-</codeph><i><codeph>YYYY</codeph></i><codeph>-</codeph><i><codeph>MM</codeph></i><codeph>-</codeph><i><codeph>DD_hhmmss.csv</codeph></i>.
To locate the log files on the master host:</p>
<p>
<codeblock>$ cd $MASTER_DATA_DIRECTORY/pg_log
......@@ -143,10 +157,10 @@ a.current_query
</i></codeblock>
<p>You may want to focus your search for <codeph>WARNING</codeph>,
<codeph>ERROR</codeph>, <codeph>FATAL</codeph> or <codeph>PANIC</codeph> log
level messages. You can use the Greenplum utility
<codeph>gplogfilter</codeph> to search through Greenplum Database
log files. For example, when you run the following command on the master host, it
checks for problem log messages in the standard logging locations:</p>
level messages. You can use the Greenplum utility <codeph>gplogfilter</codeph> to
search through Greenplum Database log files. For example, when you run the following
command on the master host, it checks for problem log messages in the standard
logging locations:</p>
<p>
<codeblock>$ gplogfilter -t
</codeblock>
......
......@@ -16,6 +16,7 @@
<topic id="topic_kgn_vxl_vp">
<title>Unsupported SQL Query Features</title>
<body>
<p>These are unsupported features when GPORCA is enabled (the default): <ul id="ul_ndm_gyl_vp">
<li>Indexed expressions (an index defined as expression based on one or more columns of
the table)</li>
......@@ -46,8 +47,8 @@
<title>Performance Regressions</title>
<body>
<p>The following features are known performance regressions that occur with GPORCA
enabled::<ul id="ul_zp2_4yl_vp">
<li>Short running queries - For GPORCA, short running queries might encounter additional
enabled:<ul id="ul_zp2_4yl_vp">
<li>Short running queries - For GPORCA, short running queries might encounter additional
overhead due to GPORCA enhancements for determining an optimal query execution
plan.</li>
<li><cmdname>ANALYZE</cmdname> - For GPORCA, the <cmdname>ANALYZE</cmdname> command
......@@ -57,8 +58,7 @@
partition and distribution keys might require additional overhead. </li>
</ul></p>
<p>Also, enhanced functionality of the features from previous versions could result in
additional time required when GPORCA executes SQL statements with the
features. </p>
additional time required when GPORCA executes SQL statements with the features. </p>
</body>
</topic>
</topic>
......@@ -4,8 +4,8 @@
<topic id="topic36">
<title>Greenplum Command Center Agent</title>
<body>
<p>The following parameters configure the data collection agents for Greenplum Command
Center.</p>
<p>The following parameters configure the data collection agents that populate the
<codeph>gpperfmon</codeph> database used by Greenplum Command Center.</p>
<simpletable id="kh171891">
<strow>
<stentry>
......@@ -27,4 +27,4 @@
</strow>
</simpletable>
</body>
</topic>
\ No newline at end of file
</topic>
......@@ -3254,7 +3254,7 @@
<topic id="gp_enable_gpperfmon">
<title>gp_enable_gpperfmon</title>
<body>
<p>Enables or disables the data collection agents of Greenplum Command Center.</p>
<p>Enables or disables the data collection agents that populate the <codeph>gpperfmon</codeph> database for Greenplum Command Center.</p>
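<p>This parameter is normally set by the <codeph>gpperfmon_install</codeph> utility, and
changing it by hand typically requires a restart. As an illustrative check (not a required
procedure), you can display the current setting with <codeph>gpconfig</codeph>:</p>
<p>
<codeblock># Show the current setting on the master and segments
$ gpconfig -s gp_enable_gpperfmon
</codeblock>
</p>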
<table id="gp_enable_gpperfmon_table">
<tgroup cols="3">
<colspec colnum="1" colname="col1" colwidth="1*"/>
......@@ -3840,7 +3840,7 @@
<title>gp_gpperfmon_send_interval</title>
<body>
<p>Sets the frequency that the Greenplum Database server processes send query execution
updates to the data collection agent processes used by Command Center. Query operations
updates to the data collection agent processes used to populate the <codeph>gpperfmon</codeph> database for Command Center. Query operations
(iterators) executed during this interval are sent through UDP to the segment monitor
agents. If you find that an excessive number of UDP packets are dropped during long-running,
complex queries, you may consider increasing this value.</p>
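<p>As an illustrative example only (the value shown is arbitrary, not a recommendation), you
might raise the interval with <codeph>gpconfig</codeph> and then reload or restart as the
parameter's set classification (shown in the table below) requires:</p>
<p>
<codeblock># Check the current setting, set a new value, then reload the configuration
$ gpconfig -s gp_gpperfmon_send_interval
$ gpconfig -c gp_gpperfmon_send_interval -v 5
$ gpstop -u
</codeblock>
</p>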
......@@ -3872,7 +3872,7 @@
<body>
<p>Controls which message levels are written to the gpperfmon log. Each level includes all the
levels that follow it. The later the level, the fewer messages are sent to the log. </p>
<note>If the Greenplum Database Command Center is installed and is monitoring the database,
<note>If the <codeph>gpperfmon</codeph> database is installed and is monitoring the database,
the default value is warning.</note>
<table id="gpperfmon_log_alert_level_table">
<tgroup cols="3">
......
......@@ -874,9 +874,8 @@
<topic id="topic36" xml:lang="en">
<title>Greenplum Command Center Agent</title>
<body>
<p>The following parameters configure the data collection agents for Greenplum Command
Center.</p>
<simpletable id="kh171891" frame="none">
<p>The following parameters configure the data collection agents that populate the <codeph>gpperfmon</codeph> database for Greenplum Command Center.</p>
<simpletable id="kh171891" frame="none">
<strow>
<stentry>
<p>
......
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="db-database">
<title> database_* </title>
<body>
<p>The <codeph>database_*</codeph> tables store query workload information for a Greenplum
Database instance. There are three database tables, all having the same columns:</p>
<ul>
<li>
<codeph>database_now</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. Current query workload data
is stored in <codeph>database_now</codeph> during the period between data collection
from the data collection agents and automatic commitment to the
<codeph>database_history</codeph> table.</li>
<li>
<codeph>database_tail</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. This is a transitional table
for query workload data that has been cleared from <codeph>database_now</codeph> but has
not yet been committed to <codeph>database_history</codeph>. It typically only contains
a few minutes worth of data.</li>
<li>
<codeph>database_history</codeph> is a regular table that stores historical
database-wide query workload data. It is pre-partitioned into monthly partitions.
Partitions are automatically added in two month increments as needed. Administrators
must drop old partitions for the months that are no longer needed.</li>
</ul>
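<p>For example (a sample query, not part of the table definition), you can chart query
concurrency over the last day from <codeph>database_history</codeph>:</p>
<p>
<codeblock>-- Average running and queued queries per hour for the last 24 hours
SELECT date_trunc('hour', ctime) AS hour,
       avg(queries_running) AS avg_running,
       avg(queries_queued)  AS avg_queued
FROM database_history
WHERE ctime > now() - interval '1 day'
GROUP BY 1
ORDER BY 1;
</codeblock>
</p>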
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<codeph>ctime</codeph>
</entry>
<entry>timestamp</entry>
<entry>Time this row was created.</entry>
</row>
<row>
<entry>
<codeph>queries_total</codeph>
</entry>
<entry>int</entry>
<entry>The total number of queries in Greenplum Database at data collection
time.</entry>
</row>
<row>
<entry>
<codeph>queries_running</codeph>
</entry>
<entry>int</entry>
<entry>The number of active queries running at data collection time.</entry>
</row>
<row>
<entry>
<codeph>queries_queued</codeph>
</entry>
<entry>int</entry>
<entry>The number of queries waiting in a resource queue at data collection
time.</entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="db-diskspac">
<title> diskspace_* </title>
<body>
<p>The <codeph>diskspace_*</codeph> tables store diskspace metrics.</p>
<ul>
<li>
<codeph>diskspace_now</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. Current diskspace metrics are
stored in <codeph>diskspace_now</codeph> during the period between data collection from
the Command Center agents and automatic commitment to the
<codeph>diskspace_history</codeph> table.</li>
<li>
<codeph>diskspace_tail</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. This is a transitional table
for diskspace metrics that have been cleared from <codeph>diskspace_now</codeph> but have
not yet been committed to <codeph>diskspace_history</codeph>. It typically only contains
a few minutes worth of data.</li>
<li>
<codeph>diskspace_history</codeph> is a regular table that stores historical diskspace
metrics. It is pre-partitioned into monthly partitions. Partitions are automatically
added in two month increments as needed. Administrators must drop old partitions for the
months that are no longer needed.</li>
</ul>
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<codeph>ctime</codeph>
</entry>
<entry>timestamp(0) without time zone </entry>
<entry>Time of diskspace measurement.</entry>
</row>
<row>
<entry>
<codeph>hostname</codeph>
</entry>
<entry> varchar(64)</entry>
<entry>The hostname associated with the diskspace measurement.</entry>
</row>
<row>
<entry>
<codeph>Filesystem</codeph>
</entry>
<entry>text</entry>
<entry>Name of the filesystem for the diskspace measurement.</entry>
</row>
<row>
<entry>
<codeph>total_bytes</codeph>
</entry>
<entry>bigint</entry>
<entry>Total bytes in the file system.</entry>
</row>
<row>
<entry>
<codeph>bytes_used</codeph>
</entry>
<entry>bigint</entry>
<entry>Total bytes used in the file system.</entry>
</row>
<row>
<entry>
<codeph>bytes_available</codeph>
</entry>
<entry>bigint</entry>
<entry>Total bytes available in file system.</entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="CommandCenterDatabaseReference-dynamic_memory_info">
<title>dynamic_memory_info </title>
<body>
<p>The <codeph>dynamic_memory_info</codeph> view shows a sum of the used and available dynamic
memory for all segment instances on a segment host. Dynamic memory refers to the maximum
amount of memory that a Greenplum Database instance will allow the query processes of a
single segment instance to consume before it starts cancelling processes. This limit is set
by the <codeph>gp_vmem_protect_limit</codeph> server configuration parameter, and is
evaluated on a per-segment basis.</p>
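<p>A minimal sample query (for illustration only) that shows the most recent dynamic memory
readings per host:</p>
<p>
<codeblock>-- Latest dynamic memory usage per segment host
SELECT ctime, hostname, dynamic_memory_used_mb, dynamic_memory_available_mb
FROM dynamic_memory_info
ORDER BY ctime DESC, hostname
LIMIT 20;
</codeblock>
</p>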
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<codeph>ctime</codeph>
</entry>
<entry>timestamp(0) without time zone</entry>
<entry>Time this row was created in the <codeph>segment_history</codeph>
table.</entry>
</row>
<row>
<entry>
<codeph>hostname</codeph>
</entry>
<entry>varchar(64)</entry>
<entry>Segment or master hostname associated with these system memory
metrics.</entry>
</row>
<row>
<entry>
<codeph>dynamic_memory_used_mb</codeph>
</entry>
<entry>numeric</entry>
<entry>The amount of dynamic memory in MB allocated to query processes running on
this segment.</entry>
</row>
<row>
<entry>
<codeph>dynamic_memory_available_mb</codeph>
</entry>
<entry>numeric</entry>
<entry>The amount of additional dynamic memory (in MB) available to the query
processes running on this segment host. Note that this value is a sum of the
available memory for all segments on a host. Even though this value reports
available memory, it is possible that one or more segments on the host have
exceeded their memory limit as set by the
<codeph>gp_vmem_protect_limit</codeph> parameter.</entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
This diff has been collapsed.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="db-health">
<title> health_* </title>
<body>
<p>The <codeph>health_*</codeph> tables store system health metrics for the EMC Data Computing
Appliance. There are three health tables, all having the same columns:</p>
<note type="note">This table only applies to Greenplum Data Computing Appliance
platforms.</note>
<ul>
<li>
<codeph>health_now</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. Current system health data is
stored in <codeph>health_now</codeph> during the period between data collection from the
data collection agents and automatic commitment to the <codeph>health_history</codeph>
table.</li>
<li>
<codeph>health_tail</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. This is a transitional table
for system health data that has been cleared from <codeph>health_now</codeph> but has
not yet been committed to <codeph>health_history</codeph>. It typically only contains a
few minutes worth of data.</li>
<li>
<codeph>health_history</codeph> is a regular table that stores historical system health
metrics. It is pre-partitioned into monthly partitions. Partitions are automatically
added in two month increments as needed. Administrators must drop old partitions for the
months that are no longer needed.</li>
</ul>
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>ctime</entry>
<entry>timestamp(0) without time zone</entry>
<entry>Time this snapshot of health information about this system was
created.</entry>
</row>
<row>
<entry>hostname</entry>
<entry>varchar(64)</entry>
<entry>Segment or master hostname associated with this health information.</entry>
</row>
<row>
<entry>
<codeph>symptom_code</codeph>
</entry>
<entry>int</entry>
<entry>The symptom code related to the current health/status of an element or
component of the system.</entry>
</row>
<row>
<entry>
<codeph>detailed_symptom_code</codeph>
</entry>
<entry>int</entry>
<entry>A more granular symptom code related to the health/status of a element or
component of the system.</entry>
</row>
<row>
<entry>
<codeph>description</codeph>
</entry>
<entry>text</entry>
<entry>A description of the health/status of this symptom code.</entry>
</row>
<row>
<entry>
<codeph>snmp_oid</codeph>
</entry>
<entry>text</entry>
<entry>The SNMP object ID of the element/component where the event occurred, where
applicable.</entry>
</row>
<row>
<entry>
<codeph>status</codeph>
</entry>
<entry>text</entry>
<entry>The current status of the system. The status is always <codeph>OK</codeph>
unless a connection to the server/switch cannot be made, in which case the
status is <codeph>FAILED</codeph>.</entry>
</row>
<row>
<entry>
<codeph>message</codeph>
</entry>
<entry>text</entry>
<entry>The text of the error message created as a result of this event.</entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="db-interface_stats">
<title> interface_stats_* </title>
<body>
<p>The <codeph>interface_stats_*</codeph> tables store statistical metrics about
communications over each active interface for a Greenplum Database instance.</p>
<p>These tables are in place for future use and are not currently populated.</p>
<p>There are three <codeph>interface_stats</codeph> tables, all having the same columns:</p>
<ul>
<li>
<codeph>interface_stats_now</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>.</li>
<li>
<codeph>interface_stats_tail</codeph> is an external table whose data files are stored
in <codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. This is a transitional table
for statistical interface metrics that have been cleared from
<codeph>interface_stats_now</codeph> but have not yet been committed to
<codeph>interface_stats_history</codeph>. It typically only contains a few minutes
worth of data.</li>
<li>
<codeph>interface_stats_history</codeph> is a regular table that stores statistical
interface metrics. It is pre-partitioned into monthly partitions. Partitions are
automatically added in one month increments as needed. Administrators must drop old
partitions for the months that are no longer needed.</li>
</ul>
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<codeph>interface_name</codeph>
</entry>
<entry>string</entry>
<entry>Name of the interface. For example: eth0, eth1, lo.</entry>
</row>
<row>
<entry>
<codeph>bytes_received</codeph>
</entry>
<entry>bigint</entry>
<entry>Amount of data received in bytes.</entry>
</row>
<row>
<entry>
<codeph>packets_received</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of packets received.</entry>
</row>
<row>
<entry>
<codeph>receive_errors</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of errors encountered while data was being received.</entry>
</row>
<row>
<entry>
<codeph>receive_drops</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of times packets were dropped while data was being received.</entry>
</row>
<row>
<entry>
<codeph>receive_fifo_errors</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of times FIFO (first in first out) errors were encountered while
data was being received.</entry>
</row>
<row>
<entry>
<codeph>receive_frame_errors</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of frame errors while data was being received.</entry>
</row>
<row>
<entry>
<codeph>receive_compressed_packets</codeph>
</entry>
<entry>int</entry>
<entry>Number of packets received in compressed format.</entry>
</row>
<row>
<entry>
<codeph>receive_multicast_packets</codeph>
</entry>
<entry>int</entry>
<entry>Number of multicast packets received.</entry>
</row>
<row>
<entry>
<codeph>bytes_transmitted</codeph>
</entry>
<entry>bigint</entry>
<entry>Amount of data transmitted in bytes.</entry>
</row>
<row>
<entry>
<codeph>packets_transmitted</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of packets transmitted.</entry>
</row>
<row>
<entry>
<codeph>transmit_errors</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of errors encountered during data transmission.</entry>
</row>
<row>
<entry>
<codeph>transmit_drops</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of times packets were dropped during data transmission.</entry>
</row>
<row>
<entry>
<codeph>transmit_fifo_errors</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of times fifo errors were encountered during data
transmission.</entry>
</row>
<row>
<entry>
<codeph>transmit_collision_errors</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of times collision errors were encountered during data
transmission.</entry>
</row>
<row>
<entry>
<codeph>transmit_carrier_errors</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of times carrier errors were encountered during data
transmission.</entry>
</row>
<row>
<entry>
<codeph>transmit_compressed_packets</codeph>
</entry>
<entry>int</entry>
<entry>Number of packets transmitted in compressed format.</entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="CommandCenterDatabaseReference-log_alert">
<title> log_alert_* </title>
<body>
<p>The <codeph>log_alert_*</codeph> tables store <codeph>pg_log</codeph> errors and warnings. </p>
<p>There are three <codeph>log_alert</codeph> tables, all having the same columns:</p>
<ul>
<li><codeph>log_alert_now</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. Current
<codeph>pg_log</codeph> errors and warnings data is stored in
<codeph>log_alert_now</codeph> during the period between data collection from the
Command Center agents and automatic commitment to the <codeph>log_alert_history</codeph>
table.</li>
<li><codeph>log_alert_tail</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. This is a transitional table
for alert log data that has been cleared from <codeph>log_alert_now</codeph> but
has not yet been committed to <codeph>log_alert_history</codeph>. It typically only
contains a few minutes worth of data.</li>
<li><codeph>log_alert_history</codeph> is a regular table that stores historical
database-wide errors and warnings data. It is pre-partitioned into monthly partitions.
Partitions are automatically added in two month increments as needed. Administrators
must drop old partitions for the months that are no longer needed.</li>
</ul>
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<codeph>logtime</codeph>
</entry>
<entry>timestamp with time zone</entry>
<entry>Timestamp for this log</entry>
</row>
<row>
<entry>
<codeph>loguser</codeph>
</entry>
<entry>text</entry>
<entry>User of the query</entry>
</row>
<row>
<entry>
<codeph>logdatabase</codeph>
</entry>
<entry>text</entry>
<entry>The accessed database</entry>
</row>
<row>
<entry>
<codeph>logpid</codeph>
</entry>
<entry>text</entry>
<entry>Process id</entry>
</row>
<row>
<entry>
<codeph>logthread</codeph>
</entry>
<entry>text</entry>
<entry>Thread number</entry>
</row>
<row>
<entry>
<codeph>loghost</codeph>
</entry>
<entry>text</entry>
<entry>Host name or IP address</entry>
</row>
<row>
<entry>
<codeph>logport</codeph>
</entry>
<entry>text</entry>
<entry>Port number</entry>
</row>
<row>
<entry>
<codeph>logsessiontime </codeph>
</entry>
<entry>timestamp with time zone</entry>
<entry>Session timestamp</entry>
</row>
<row>
<entry>
<codeph>logtransaction</codeph>
</entry>
<entry>integer</entry>
<entry>Transaction id</entry>
</row>
<row>
<entry>
<codeph>logsession</codeph>
</entry>
<entry>text</entry>
<entry>Session id</entry>
</row>
<row>
<entry>
<codeph>logcmdcount</codeph>
</entry>
<entry>text</entry>
<entry>Command count</entry>
</row>
<row>
<entry>
<codeph>logsegment</codeph>
</entry>
<entry>text</entry>
<entry>Segment number</entry>
</row>
<row>
<entry>
<codeph>logslice</codeph>
</entry>
<entry>text</entry>
<entry>Slice number</entry>
</row>
<row>
<entry>
<codeph>logdistxact</codeph>
</entry>
<entry>text</entry>
<entry>Distributed transaction</entry>
</row>
<row>
<entry>
<codeph>loglocalxact</codeph>
</entry>
<entry>text</entry>
<entry>Local transaction</entry>
</row>
<row>
<entry>
<codeph>logsubxact</codeph>
</entry>
<entry>text</entry>
<entry>Subtransaction</entry>
</row>
<row>
<entry>
<codeph>logseverity</codeph>
</entry>
<entry>text</entry>
<entry>Log severity</entry>
</row>
<row>
<entry>
<codeph>logstate</codeph>
</entry>
<entry>text</entry>
<entry>State</entry>
</row>
<row>
<entry>
<codeph>logmessage</codeph>
</entry>
<entry>text</entry>
<entry>Log message</entry>
</row>
<row>
<entry>
<codeph>logdetail</codeph>
</entry>
<entry>text</entry>
<entry>Detailed message</entry>
</row>
<row>
<entry>
<codeph>loghint</codeph>
</entry>
<entry>text</entry>
<entry>Hint info</entry>
</row>
<row>
<entry>
<codeph>logquery</codeph>
</entry>
<entry>text</entry>
<entry>Executed query</entry>
</row>
<row>
<entry>
<codeph>logquerypos</codeph>
</entry>
<entry>text</entry>
<entry>Query position</entry>
</row>
<row>
<entry>
<codeph>logcontext</codeph>
</entry>
<entry>text</entry>
<entry>Context info</entry>
</row>
<row>
<entry>
<codeph>logdebug</codeph>
</entry>
<entry>text</entry>
<entry>Debug</entry>
</row>
<row>
<entry>
<codeph>logcursorpos</codeph>
</entry>
<entry>text</entry>
<entry>Cursor position</entry>
</row>
<row>
<entry>
<codeph>logfunction</codeph>
</entry>
<entry>text</entry>
<entry>Function info</entry>
</row>
<row>
<entry>
<codeph>logfile</codeph>
</entry>
<entry>text</entry>
<entry>Source code file</entry>
</row>
<row>
<entry>
<codeph>logline</codeph>
</entry>
<entry>text</entry>
<entry>Source code line</entry>
</row>
<row>
<entry>
<codeph>logstack</codeph>
</entry>
<entry>text</entry>
<entry>Stack trace</entry>
</row>
</tbody>
</tgroup>
</table>
<section id="rotation">
<title>Log Processing and Rotation</title>
<p>The Greenplum Database system logger writes alert logs in the
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/logs</codeph> directory.</p>
<p>The agent process (<codeph>gpmmon</codeph>) performs the following steps to consolidate
log files and load them into the <codeph>gpperfmon</codeph> database:</p>
<ol>
<li>Gathers all of the <codeph>gpdb-alert-*</codeph> files in the logs directory (except
the latest, which the syslogger has open and is writing to) into a single file,
<codeph>alert_log_stage</codeph>.</li>
<li>Loads the <codeph>alert_log_stage</codeph> file into the
<codeph>log_alert_history</codeph> table in the <codeph>gpperfmon</codeph>
database.</li>
<li>Truncates the <codeph>alert_log_stage</codeph> file.</li>
<li>Removes all of the <codeph>gpdb-alert-*</codeph> files, except the latest.</li>
</ol>
<p>The syslogger rotates the alert log every 24 hours or when the current log file reaches
or exceeds 1MB. A rotated log file can exceed 1MB if a single error message contains a
large SQL statement or a large stack trace. Also, the syslogger processes error messages
in chunks, with a separate chunk for each logging process. The size of a chunk is
OS-dependent; on Red Hat Enterprise Linux, for example, it is 4096 bytes. If many
Greenplum Database sessions generate error messages at the same time, the log file can
grow significantly before its size is checked and log rotation is triggered.</p>
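<p>To review the alert data after it has been loaded (an illustrative query; the severity
strings shown are examples), you can select recent errors from
<codeph>log_alert_history</codeph>:</p>
<p>
<codeblock>-- Most recent errors and panics captured from pg_log
SELECT logtime, logdatabase, logseverity, logmessage
FROM log_alert_history
WHERE logseverity IN ('ERROR', 'FATAL', 'PANIC')
ORDER BY logtime DESC
LIMIT 20;
</codeblock>
</p>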
</section>
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="CommandCenterDatabaseReference-memory_info">
<title> memory_info </title>
<body>
<p>The <codeph>memory_info</codeph> view shows per-host memory information from the
<codeph>system_history</codeph> and <codeph>segment_history</codeph> tables. This allows
administrators to compare the total memory available on a segment host, total memory used
on a segment host, and dynamic memory used by query processes.</p>
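<p>For example (a sample query only), you can compare system memory and dynamic memory usage
for each host at the latest collection times:</p>
<p>
<codeblock>-- Latest memory snapshot per segment host
SELECT ctime, hostname, mem_total_mb, mem_used_mb,
       dynamic_memory_used_mb, dynamic_memory_available_mb
FROM memory_info
ORDER BY ctime DESC, hostname
LIMIT 20;
</codeblock>
</p>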
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<codeph>ctime</codeph>
</entry>
<entry>timestamp(0) without time zone</entry>
<entry>Time this row was created in the <codeph>segment_history</codeph>
table.</entry>
</row>
<row>
<entry>
<codeph>hostname</codeph>
</entry>
<entry>varchar(64)</entry>
<entry>Segment or master hostname associated with these system memory
metrics.</entry>
</row>
<row>
<entry>
<codeph>mem_total_mb</codeph>
</entry>
<entry>numeric</entry>
<entry>Total system memory in MB for this segment host.</entry>
</row>
<row>
<entry>
<codeph>mem_used_mb</codeph>
</entry>
<entry>numeric</entry>
<entry>Total system memory used in MB for this segment host.</entry>
</row>
<row>
<entry>
<codeph>mem_actual_used_mb</codeph>
</entry>
<entry>numeric</entry>
<entry>Actual system memory used in MB for this segment host.</entry>
</row>
<row>
<entry>
<codeph>mem_actual_free_mb</codeph>
</entry>
<entry>numeric</entry>
<entry>Actual system memory free in MB for this segment host.</entry>
</row>
<row>
<entry>
<codeph>swap_total_mb</codeph>
</entry>
<entry>numeric</entry>
<entry>Total swap space in MB for this segment host.</entry>
</row>
<row>
<entry>
<codeph>swap_used_mb</codeph>
</entry>
<entry>numeric</entry>
<entry>Total swap space used in MB for this segment host.</entry>
</row>
<row>
<entry>
<codeph>dynamic_memory_used_mb</codeph>
</entry>
<entry>numeric</entry>
<entry>The amount of dynamic memory in MB allocated to query processes running on
this segment.</entry>
</row>
<row>
<entry>
<codeph>dynamic_memory_available_mb</codeph>
</entry>
<entry>numeric</entry>
<entry>The amount of additional dynamic memory (in MB) available to the query
processes running on this segment host. Note that this value is a sum of the
available memory for all segments on a host. Even though this value reports
available memory, it is possible that one or more segments on the host have
exceeded their memory limit as set by the
<codeph>gp_vmem_protect_limit</codeph> parameter.</entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="overview">
<title>Overview</title>
<body>
<p>The <codeph>gpperfmon</codeph> database consists of three sets of tables:</p>
<ul>
<li>
<codeph>now</codeph> tables store data on current system metrics such as active
queries.</li>
<li>
<codeph>history</codeph> tables store data on historical metrics.</li>
<li>
<codeph>tail</codeph> tables are for data in transition. These tables are for internal
use only and should not be queried by end users.</li>
</ul>
<p>The <codeph>now</codeph> and <codeph>tail</codeph> data are stored as text files on the
master host file system and accessed by the <codeph>gpperfmon</codeph> database via
external tables. The <codeph>history</codeph> tables are regular database tables stored
within the <codeph>gpperfmon</codeph> database.</p>
<p>The tables fall into the following categories:</p>
<ul>
<li>The <codeph>database_*</codeph> tables store query workload information for a Greenplum
Database instance.</li>
<li>The <codeph>emcconnect_history</codeph> table displays information about ConnectEMC
events and alerts. ConnectEMC events are triggered based on a hardware failure, a fix to
a failed hardware component, or a Greenplum Database startup. Once a ConnectEMC event is
triggered, an alert is sent to EMC Support.</li>
<li>The <codeph>diskspace_*</codeph> tables store diskspace metrics.</li>
<li>The <codeph>filerep_*</codeph> tables store health and status metrics for the file
replication process. This process is how high availability/mirroring is achieved in a
Greenplum Database instance. Statistics are maintained for each primary-mirror
pair.</li>
<li>The <codeph>health_*</codeph> tables store system health metrics for the EMC Data
Computing Appliance.</li>
<li>The <codeph>interface_stats_*</codeph> tables store statistical metrics for each active
interface of a Greenplum Database instance. Note: These tables are in place for future
use and are not currently populated.</li>
<li>The <codeph>log_alert_*</codeph> tables store information about pg_log errors and
warnings.</li>
<li>The <codeph>queries_*</codeph> tables store high-level query status information.</li>
<li>The <codeph>segment_*</codeph> tables store memory allocation statistics for the
Greenplum Database segment instances.</li>
<li>The <codeph>socket_stats_*</codeph> tables store statistical metrics about socket usage
for a Greenplum Database instance. Note: These tables are in place for future use and
are not currently populated.</li>
<li>The <codeph>system_*</codeph> tables store system utilization metrics.</li>
<li>The <codeph>tcp_stats_*</codeph> tables store statistical metrics about TCP
communications for a Greenplum Database instance. Note: These tables are in place for
future use and are not currently populated.</li>
<li>The <codeph>udp_stats_*</codeph> tables store statistical metrics about UDP
communications for a Greenplum Database instance. Note: These tables are in place for
future use and are not currently populated.</li>
</ul>
<p>The <codeph>gpperfmon</codeph> database also contains the following views:</p>
<ul>
<li>The <codeph>dynamic_memory_info</codeph> view shows an aggregate of all the segments
per host and the amount of dynamic memory used per host.</li>
<li>The <codeph>memory_info</codeph> view shows per-host memory information from the
<codeph>system_history</codeph> and <codeph>segment_history</codeph> tables.</li>
</ul>
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="db-queries">
<title>queries_*</title>
<body>
<p>The <codeph>queries_*</codeph> tables store high-level query status information.</p>
<p>The <codeph>tmid</codeph>, <codeph>ssid</codeph> and <codeph>ccnt</codeph> columns are the
composite key that uniquely identifies a particular query.</p>
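<p>For example (an illustrative query, not part of the schema), you can list the
longest-running completed queries, keyed by <codeph>tmid</codeph>, <codeph>ssid</codeph>,
and <codeph>ccnt</codeph>:</p>
<p>
<codeblock>-- Ten longest completed queries in the history table
SELECT tmid, ssid, ccnt, username, db,
       tfinish - tstart AS duration, query_text
FROM queries_history
WHERE status = 'done'
ORDER BY duration DESC
LIMIT 10;
</codeblock>
</p>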
<p>There are three queries tables, all having the same columns:</p>
<ul>
<li>
<codeph>queries_now</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. Current query status is
stored in <codeph>queries_now</codeph> during the period between data collection from
the Command Center agents and automatic commitment to the
<codeph>queries_history</codeph> table.</li>
<li>
<codeph>queries_tail</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. This is a transitional table
for query status data that has been cleared from <codeph>queries_now</codeph> but has
not yet been committed to <codeph>queries_history</codeph>. It typically only contains a
few minutes worth of data.</li>
<li>
<codeph>queries_history</codeph> is a regular table that stores historical query status
data. It is pre-partitioned into monthly partitions. Partitions are automatically added
in two month increments as needed. Administrators must drop old partitions for the
months that are no longer needed.</li>
</ul>
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<codeph>ctime</codeph>
</entry>
<entry>timestamp</entry>
<entry>Time this row was created.</entry>
</row>
<row>
<entry>
<codeph>tmid</codeph>
</entry>
<entry>int</entry>
<entry>A time identifier for a particular query. All records associated with the
query will have the same <codeph>tmid</codeph>.</entry>
</row>
<row>
<entry>
<codeph>ssid</codeph>
</entry>
<entry>int</entry>
<entry>The session id as shown by <codeph>gp_session_id</codeph>. All records
associated with the query will have the same <codeph>ssid</codeph>.</entry>
</row>
<row>
<entry>
<codeph>ccnt</codeph>
</entry>
<entry>int</entry>
<entry>The command number within this session as shown by
<codeph>gp_command_count</codeph>. All records associated with the query
will have the same <codeph>ccnt</codeph>.</entry>
</row>
<row>
<entry>
<codeph>username</codeph>
</entry>
<entry>varchar(64)</entry>
<entry>Greenplum role name that issued this query.</entry>
</row>
<row>
<entry>
<p>
<codeph>db</codeph>
</p>
</entry>
<entry>varchar(64)</entry>
<entry>Name of the database queried.</entry>
</row>
<row>
<entry>
<codeph>cost</codeph>
</entry>
<entry>int</entry>
<entry>Not implemented in this release.</entry>
</row>
<row>
<entry>
<codeph>tsubmit</codeph>
</entry>
<entry>timestamp</entry>
<entry>Time the query was submitted.</entry>
</row>
<row>
<entry>
<codeph>tstart</codeph>
</entry>
<entry>timestamp</entry>
<entry>Time the query was started.</entry>
</row>
<row>
<entry>
<codeph>tfinish</codeph>
</entry>
<entry>timestamp</entry>
<entry>Time the query finished.</entry>
</row>
<row>
<entry>
<codeph>status</codeph>
</entry>
<entry>varchar(64)</entry>
<entry>Status of the query -- <codeph>start</codeph>, <codeph>done</codeph>, or
<codeph>abort</codeph>.</entry>
</row>
<row>
<entry>
<codeph>rows_out</codeph>
</entry>
<entry>bigint</entry>
<entry>Rows out for the query.</entry>
</row>
<row>
<entry>
<codeph>cpu_elapsed</codeph>
</entry>
<entry>bigint</entry>
<entry>
<p>CPU usage by all processes across all segments executing this query (in
seconds). It is the sum of the CPU usage values taken from all active
primary segments in the database system.</p>
<p>Note that the value is logged as 0 if the query runtime
is shorter than the value for the quantum. This occurs even if the query
runtime is greater than the values for <codeph>min_query_time</codeph> and
<codeph>min_detailed_query</codeph>, and these values are lower than the
value for the quantum.</p>
</entry>
</row>
<row>
<entry>
<codeph>cpu_currpct</codeph>
</entry>
<entry>float</entry>
<entry>
<p>Current CPU percent average for all processes executing this query. The
percentages for all processes running on each segment are averaged, and then
the average of all those values is calculated to render this metric.</p>
<p>Current CPU percent average is always zero in historical and tail data.</p>
</entry>
</row>
<row>
<entry>
<codeph>skew_cpu</codeph>
</entry>
<entry>float</entry>
<entry>
<p>Displays the amount of processing skew in the system for this query.
Processing/CPU skew occurs when one segment performs a disproportionate
amount of processing for a query. This value is the coefficient of variation
in the CPU% metric of all iterators across all segments for this query,
multiplied by 100. For example, a value of .95 is shown as 95.</p>
</entry>
</row>
<row>
<entry>
<codeph>skew_rows</codeph>
</entry>
<entry>float</entry>
<entry>Displays the amount of row skew in the system. Row skew occurs when one
segment produces a disproportionate number of rows for a query. This value is
the coefficient of variation for the <codeph>rows_in</codeph> metric of all
iterators across all segments for this query, multiplied by 100. For example, a
value of .95 is shown as 95.</entry>
</row>
<row>
<entry>
<codeph>query_hash</codeph>
</entry>
<entry>bigint</entry>
<entry>Not implemented in this release.</entry>
</row>
<row>
<entry>
<codeph>query_text</codeph>
</entry>
<entry>text</entry>
<entry>The SQL text of this query.</entry>
</row>
<row>
<entry>
<codeph>query_plan</codeph>
</entry>
<entry>text</entry>
<entry>Text of the query plan. Not implemented in this release.</entry>
</row>
<row>
<entry>
<codeph>application_name</codeph>
</entry>
<entry>varchar(64)</entry>
<entry>The name of the application.</entry>
</row>
<row>
<entry>
<codeph>rsqname</codeph>
</entry>
<entry>varchar(64)</entry>
<entry>The name of the resource queue.</entry>
</row>
<row>
<entry>
<codeph>rqppriority</codeph>
</entry>
<entry>varchar(64)</entry>
<entry>The priority of the query -- <codeph>max, high, med, low, or
min</codeph>.</entry>
</row>
</tbody>
</tgroup>
</table>
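<p>For example, the following query is a minimal sketch that assumes the
  <codeph>gpperfmon</codeph> database is installed and that queries exceeding
  <codeph>min_query_time</codeph> have already been recorded. It lists the ten
  longest-running completed queries in <codeph>queries_history</codeph>:</p>
<codeblock>SELECT username, db, tsubmit, tfinish,
       tfinish - tsubmit AS duration,
       cpu_elapsed, rows_out
FROM   queries_history
WHERE  status = 'done'
ORDER  BY tfinish - tsubmit DESC
LIMIT  10;</codeblock>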
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="db-segment">
<title>segment_*</title>
<body>
<p>The <codeph>segment_*</codeph> tables contain memory allocation statistics for the
Greenplum Database segment instances. These tables track the amount of memory consumed by all
postgres processes of a particular segment instance, and the remaining amount of memory
available to a segment under the <codeph>gp_vmem_protect_limit</codeph> setting in
<codeph>postgresql.conf</codeph>. Query processes that cause
a segment to exceed this limit will be cancelled in order to prevent system-level
out-of-memory errors. See the <i>Greenplum Database Reference Guide</i> for more
information about this parameter.</p>
<p>There are three segment tables, all having the same columns:</p>
<ul>
<li>
<codeph>segment_now</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. Current memory allocation
data is stored in <codeph>segment_now</codeph> during the period between data collection
from the Command Center agents and automatic commitment to the
<codeph>segment_history</codeph> table.</li>
<li>
<codeph>segment_tail</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. This is a transitional table
for memory allocation data that has been cleared from <codeph>segment_now</codeph> but
has not yet been committed to <codeph>segment_history</codeph>. It typically only
contains a few minutes worth of data.</li>
<li>
<codeph>segment_history</codeph> is a regular table that stores historical memory
allocation metrics. It is pre-partitioned into monthly partitions. Partitions are
automatically added in two month increments as needed. Administrators must drop old
partitions for the months that are no longer needed, as shown in the example following
this list.</li>
</ul>
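<p>As an illustration only (the partition to drop depends on your retention policy; the
  oldest partition is referenced here by rank), the oldest monthly partition of
  <codeph>segment_history</codeph> could be dropped with a command such as:</p>
<codeblock>ALTER TABLE segment_history DROP PARTITION FOR (RANK(1));</codeblock>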
<p>A particular segment instance is identified by its <codeph>hostname</codeph> and
<codeph>dbid</codeph> (the unique segment identifier as per the
<codeph>gp_segment_configuration</codeph> system catalog table).</p>
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<codeph>ctime</codeph>
</entry>
<entry>
<p>timestamp(0)</p>
<p>(without time zone)</p>
</entry>
<entry>The time the row was created.</entry>
</row>
<row>
<entry>
<codeph>dbid</codeph>
</entry>
<entry>int</entry>
<entry>The segment ID (<codeph>dbid</codeph> from
<codeph>gp_segment_configuration</codeph>).</entry>
</row>
<row>
<entry>
<codeph>hostname</codeph>
</entry>
<entry>varchar(64)</entry>
<entry>The segment hostname.</entry>
</row>
<row>
<entry>
<codeph>dynamic_memory_used</codeph>
</entry>
<entry>bigint</entry>
<entry>The amount of dynamic memory (in bytes) allocated to query processes
running on this segment.</entry>
</row>
<row>
<entry>
<codeph>dynamic_memory_available</codeph>
</entry>
<entry>bigint</entry>
<entry>The amount of additional dynamic memory (in bytes) that the segment can
request before reaching the limit set by the
<codeph>gp_vmem_protect_limit</codeph> parameter.</entry>
</row>
</tbody>
</tgroup>
</table>
<p>See also the views <codeph>memory_info</codeph> and <codeph>dynamic_memory_info</codeph>
for aggregated memory allocation and utilization by host.</p>
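<p>As an example sketch, assuming <codeph>segment_history</codeph> has collected recent data,
  the following query shows the segments with the least dynamic memory remaining under
  <codeph>gp_vmem_protect_limit</codeph> during the last hour:</p>
<codeblock>SELECT hostname, dbid,
       dynamic_memory_used      / (1024*1024) AS used_mb,
       dynamic_memory_available / (1024*1024) AS available_mb
FROM   segment_history
WHERE  ctime > now() - interval '1 hour'
ORDER  BY dynamic_memory_available
LIMIT  10;</codeblock>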
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="db-socket_stats">
<title>socket_stats_*</title>
<body>
<p>The <codeph>socket_stats_*</codeph> tables store statistical metrics about socket usage for
a Greenplum Database instance.</p>
<p>These tables are in place for future use and are not currently populated.</p>
<p>There are three system tables, all having the same columns:</p>
<ul>
<li>
<codeph>socket_stats_now</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>.</li>
<li>
<codeph>socket_stats_tail</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. This is a transitional table
for socket statistical metrics that has been cleared from
<codeph>socket_stats_now</codeph> but has not yet been committed to
<codeph>socket_stats_history</codeph>. It typically only contains a few minutes worth
of data.</li>
<li>
<codeph>socket_stats_history</codeph> is a regular table that stores historical socket
statistical metrics. It is pre-partitioned into monthly partitions. Partitions are
automatically added in two month increments as needed. Administrators must drop old
partitions for the months that are no longer needed.</li>
</ul>
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<codeph>total_sockets_used</codeph>
</entry>
<entry>int</entry>
<entry>Total sockets used in the system.</entry>
</row>
<row>
<entry>
<codeph>tcp_sockets_inuse</codeph>
</entry>
<entry>int</entry>
<entry>Number of TCP sockets in use.</entry>
</row>
<row>
<entry>
<codeph>tcp_sockets_orphan</codeph>
</entry>
<entry>int</entry>
<entry>Number of TCP sockets orphaned.</entry>
</row>
<row>
<entry>
<codeph>tcp_sockets_timewait</codeph>
</entry>
<entry>int</entry>
<entry>Number of TCP sockets in Time-Wait.</entry>
</row>
<row>
<entry>
<codeph>tcp_sockets_alloc</codeph>
</entry>
<entry>int</entry>
<entry>Number of TCP sockets allocated.</entry>
</row>
<row>
<entry>
<codeph>tcp_sockets_memusage_inbytes</codeph>
</entry>
<entry>int</entry>
<entry>Amount of memory consumed by TCP sockets.</entry>
</row>
<row>
<entry>
<codeph>udp_sockets_inuse</codeph>
</entry>
<entry>int</entry>
<entry>Number of UDP sockets in use.</entry>
</row>
<row>
<entry>
<codeph>udp_sockets_memusage_inbytes</codeph>
</entry>
<entry>int</entry>
<entry>Amount of memory consumed by UDP sockets.</entry>
</row>
<row>
<entry>
<codeph>raw_sockets_inuse</codeph>
</entry>
<entry>int</entry>
<entry>Number of RAW sockets in use.</entry>
</row>
<row>
<entry>
<codeph>frag_sockets_inuse</codeph>
</entry>
<entry>int</entry>
<entry>Number of FRAG sockets in use.</entry>
</row>
<row>
<entry>
<codeph>frag_sockets_memusage_inbytes</codeph>
</entry>
<entry>int</entry>
<entry>Amount of memory consumed by FRAG sockets.</entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="db-system">
<title>system_*</title>
<body>
<p>The <codeph>system_*</codeph> tables store system utilization metrics. There are three
system tables, all having the same columns:</p>
<ul>
<li>
<codeph>system_now</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. Current system utilization
data is stored in <codeph>system_now</codeph> during the period between data collection
from the Command Center agents and automatic commitment to the
<codeph>system_history</codeph> table.</li>
<li>
<codeph>system_tail</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. This is a transitional table
for system utilization data that has been cleared from <codeph>system_now</codeph> but
has not yet been committed to <codeph>system_history</codeph>. It typically only
contains a few minutes worth of data.</li>
<li>
<codeph>system_history</codeph> is a regular table that stores historical system
utilization metrics. It is pre-partitioned into monthly partitions. Partitions are
automatically added in two month increments as needed. Administrators must drop old
partitions for the months that are no longer needed.</li>
</ul>
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<codeph>ctime</codeph>
</entry>
<entry>timestamp</entry>
<entry>Time this row was created.</entry>
</row>
<row>
<entry>
<codeph>hostname</codeph>
</entry>
<entry>varchar(64)</entry>
<entry>Segment or master hostname associated with these system metrics.</entry>
</row>
<row>
<entry>
<codeph>mem_total</codeph>
</entry>
<entry>bigint</entry>
<entry>Total system memory in Bytes for this host.</entry>
</row>
<row>
<entry>
<codeph>mem_used</codeph>
</entry>
<entry>bigint</entry>
<entry>Used system memory in Bytes for this host.</entry>
</row>
<row>
<entry>
<codeph>mem_actual_used</codeph>
</entry>
<entry>bigint</entry>
<entry>Used actual memory in Bytes for this host (not including the memory
reserved for cache and buffers).</entry>
</row>
<row>
<entry>
<codeph>mem_actual_free</codeph>
</entry>
<entry>bigint</entry>
<entry>Free actual memory in Bytes for this host (not including the memory
reserved for cache and buffers).</entry>
</row>
<row>
<entry>
<codeph>swap_total</codeph>
</entry>
<entry>bigint</entry>
<entry>Total swap space in Bytes for this host.</entry>
</row>
<row>
<entry>
<codeph>swap_used</codeph>
</entry>
<entry>bigint</entry>
<entry>Used swap space in Bytes for this host.</entry>
</row>
<row>
<entry>
<codeph>swap_page_in</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of swap pages in.</entry>
</row>
<row>
<entry>
<codeph>swap_page_out</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of swap pages out.</entry>
</row>
<row>
<entry>
<codeph>cpu_user</codeph>
</entry>
<entry>float</entry>
<entry>CPU usage by the Greenplum system user.</entry>
</row>
<row>
<entry>
<codeph>cpu_sys</codeph>
</entry>
<entry>float</entry>
<entry>CPU usage for this host.</entry>
</row>
<row>
<entry>
<codeph>cpu_idle</codeph>
</entry>
<entry>float</entry>
<entry>Idle CPU capacity at metric collection time.</entry>
</row>
<row>
<entry>
<codeph>load0</codeph>
</entry>
<entry>float</entry>
<entry>CPU load average for the prior one-minute period.</entry>
</row>
<row>
<entry>
<codeph>load1</codeph>
</entry>
<entry>float</entry>
<entry>CPU load average for the prior five-minute period.</entry>
</row>
<row>
<entry>
<codeph>load2</codeph>
</entry>
<entry>float</entry>
<entry>CPU load average for the prior fifteen-minute period.</entry>
</row>
<row>
<entry>
<codeph>quantum</codeph>
</entry>
<entry>int</entry>
<entry>Interval between metric collection for this metric entry.</entry>
</row>
<row>
<entry>
<codeph>disk_ro_rate</codeph>
</entry>
<entry>bigint</entry>
<entry>Disk read operations per second.</entry>
</row>
<row>
<entry>
<codeph>disk_wo_rate</codeph>
</entry>
<entry>bigint</entry>
<entry>Disk write operations per second.</entry>
</row>
<row>
<entry>
<codeph>disk_rb_rate</codeph>
</entry>
<entry>bigint</entry>
<entry>Bytes per second for disk read operations.</entry>
</row>
<row>
<entry>
<codeph>net_rp_rate</codeph>
</entry>
<entry>bigint</entry>
<entry>Packets per second on the system network for read operations.</entry>
</row>
<row>
<entry>
<codeph>net_wp_rate</codeph>
</entry>
<entry>bigint</entry>
<entry>Packets per second on the system network for write operations.</entry>
</row>
<row>
<entry>
<codeph>net_rb_rate</codeph>
</entry>
<entry>bigint</entry>
<entry>Bytes per second on the system network for read operations.</entry>
</row>
<row>
<entry>
<codeph>net_wb_rate</codeph>
</entry>
<entry>bigint</entry>
<entry>Bytes per second on the system network for write operations.</entry>
</row>
</tbody>
</tgroup>
</table>
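<p>For example, the following query is a sketch that assumes <codeph>system_history</codeph>
  contains data for the last day. It summarizes average CPU idle time, average memory use,
  and peak five-minute load per host:</p>
<codeblock>SELECT hostname,
       round(avg(cpu_idle)::numeric, 1)                   AS avg_cpu_idle_pct,
       round(avg(mem_used::numeric / mem_total) * 100, 1) AS avg_mem_used_pct,
       max(load1)                                         AS max_load_5min
FROM   system_history
WHERE  ctime > now() - interval '1 day'
GROUP  BY hostname
ORDER  BY avg_cpu_idle_pct;</codeblock>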
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="db-tcp_stats">
<title>tcp_stats_*</title>
<body>
<p>The <codeph>tcp_stats_*</codeph> tables store statistical metrics about TCP communications
for a Greenplum Database instance.</p>
<p>These tables are in place for future use and are not currently populated.</p>
<p>There are three system tables, all having the same columns:</p>
<ul>
<li>
<codeph>tcp_stats_now</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>.</li>
<li>
<codeph>tcp_stats_tail</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. This is a transitional table
for TCP statistical data that has been cleared from <codeph>tcp_stats_now</codeph> but
has not yet been committed to <codeph>tcp_stats_history</codeph>. It typically only
contains a few minutes worth of data.</li>
<li>
<codeph>tcp_stats_history</codeph> is a regular table that stores historical TCP
statistical data. It is pre-partitioned into monthly partitions. Partitions are
automatically added in two month increments as needed. Administrators must drop old
partitions for the months that are no longer needed.</li>
</ul>
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<codeph>segments_received</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of TCP segments received.</entry>
</row>
<row>
<entry>
<codeph>segments_sent</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of TCP segments sent.</entry>
</row>
<row>
<entry>
<codeph>segments_retransmitted</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of TCP segments retransmitted.</entry>
</row>
<row>
<entry>
<codeph>active_connections</codeph>
</entry>
<entry>int</entry>
<entry>Number of active TCP connections.</entry>
</row>
<row>
<entry>
<codeph>passive_connections</codeph>
</entry>
<entry>int</entry>
<entry>Number of passive TCP connections.</entry>
</row>
<row>
<entry>
<codeph>failed_connection_attempts</codeph>
</entry>
<entry>int</entry>
<entry>Number of failed TCP connection attempts.</entry>
</row>
<row>
<entry>
<codeph>connections_established</codeph>
</entry>
<entry>int</entry>
<entry>Number of TCP connections established.</entry>
</row>
<row>
<entry>
<codeph>connection_resets_received</codeph>
</entry>
<entry>int</entry>
<entry>Number of TCP connection resets received.</entry>
</row>
<row>
<entry>
<codeph>connection_resets_sent</codeph>
</entry>
<entry>int</entry>
<entry>Number of TCP connection resets sent.</entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="db-udp_stats">
<title>udp_stats_*</title>
<body>
<p>The <codeph>udp_stats_*</codeph> tables store statistical metrics about UDP communications
for a Greenplum Database instance.</p>
<p>These tables are in place for future use and are not currently populated.</p>
<p>There are three system tables, all having the same columns:</p>
<ul>
<li>
<codeph>udp_stats_now</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>.</li>
<li>
<codeph>udp_stats_tail</codeph> is an external table whose data files are stored in
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/data</codeph>. This is a transitional table
for UDP statistical data that has been cleared from <codeph>udp_stats_now</codeph> but
has not yet been committed to <codeph>udp_stats_history</codeph>. It typically only
contains a few minutes worth of data.</li>
<li>
<codeph>udp_stats_history</codeph> is a regular table that stores historical UDP
statistical metrics. It is pre-partitioned into monthly partitions. Partitions are
automatically added in two month increments as needed. Administrators must drop old
partitions for the months that are no longer needed.</li>
</ul>
<table>
<tgroup cols="2">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<codeph>packets_received</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of UDP packets received.</entry>
</row>
<row>
<entry>
<codeph>packets_sent</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of UDP packets sent.</entry>
</row>
<row>
<entry>
<codeph>packets_received_unknown_port</codeph>
</entry>
<entry>int</entry>
<entry>Number of UDP packets received on unknown ports.</entry>
</row>
<row>
<entry>
<codeph>packet_receive_errors</codeph>
</entry>
<entry>bigint</entry>
<entry>Number of errors encountered during UDP packet receive.</entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic id="overview">
<title>The gpperfmon Database</title>
<body>
<p>The <codeph>gpperfmon</codeph> database is a dedicated database where data collection agents
on Greenplum segment hosts save statistics. The optional Greenplum Command Center management
tool requires the database. The <codeph>gpperfmon</codeph> database is created using the
<codeph>gpperfmon_install</codeph> command-line utility. The utility creates the database
and the <codeph>gpmon</codeph> database role, and enables the monitoring agents on the segment
hosts. See the <codeph>gpperfmon_install</codeph> reference in the <cite>Greenplum Database
Utility Guide</cite> for information about using the utility and configuring the data
collection agents.</p>
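<p>For example, a typical installation (shown as an illustration; the password and port values
  are placeholders) enables the agents and then restarts Greenplum Database so that data
  collection begins:</p>
<codeblock>$ gpperfmon_install --enable --password changeme --port 5432
$ gpstop -r</codeblock>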
<p>The <codeph>gpperfmon</codeph> database consists of three sets of tables.</p>
<ul id="ul_oqn_xsy_tz">
<li>
<codeph>now</codeph> tables store data on current system metrics such as active
queries.</li>
<li><codeph>history</codeph> tables store data on historical metrics.</li>
<li><codeph>tail</codeph> tables are for data in transition. The <codeph>tail</codeph> tables
are for internal use only and should not be queried by users. The <codeph>now</codeph> and
<codeph>tail</codeph> data are stored as text files on the master host file system and
accessed by the <codeph>gpperfmon</codeph> database via external tables. The
<codeph>history</codeph> tables are regular database tables stored within the
<codeph>gpperfmon</codeph> database.</li>
</ul>
<p>The database contains the following categories of tables:</p>
<ul>
<li>The <codeph><xref href="db-database.xml#db-database">database_*</xref></codeph> tables
store query workload information for a Greenplum Database instance.</li>
<li>The <codeph><xref href="db-diskspace.xml#db-diskspac">diskspace_*</xref></codeph> tables
store diskspace metrics.</li>
<li>The <codeph><xref href="db-filerep.xml#db-filerep">filerep_*</xref></codeph> tables store
health and status metrics for the file replication process. This process is how
high-availability/mirroring is achieved in a Greenplum Database instance. Statistics are
maintained for each primary-mirror pair.</li>
<li>The <codeph><xref href="db-health.xml#db-health">health_*</xref></codeph> tables store
system health metrics for the EMC Data Computing Appliance.</li>
<li>The <codeph><xref href="db-interface-stats.xml#db-interface_stats"
>interface_stats_*</xref></codeph> tables store statistical metrics for each active
interface of a Greenplum Database instance. Note: These tables are in place for future use
and are not currently populated.</li>
<li>The <codeph><xref href="db-log-alert.xml#CommandCenterDatabaseReference-log_alert"
>log_alert_*</xref></codeph> tables store information about pg_log errors and
warnings.</li>
<li>The <codeph><xref href="db-queries.xml#db-queries">queries_*</xref></codeph> tables store
high-level query status information.</li>
<li>The <codeph><xref href="db-segment.xml#db-segment">segment_*</xref></codeph> tables store
memory allocation statistics for the Greenplum Database segment instances.</li>
<li>The <codeph><xref href="db-socket-stats.xml#db-socket_stats"
>socket_stats_*</xref></codeph> tables store statistical metrics about socket usage for a
Greenplum Database instance. Note: These tables are in place for future use and are not
currently populated.</li>
<li>The <codeph><xref href="db-system.xml#db-system">system_*</xref></codeph> tables store
system utilization metrics.</li>
<li>The <codeph><xref href="db-tcp-stats.xml#db-tcp_stats">tcp_stats_*</xref></codeph> tables
store statistical metrics about TCP communications for a Greenplum Database instance. Note:
These tables are in place for future use and are not currently populated.</li>
<li>The <codeph><xref href="db-udp-stats.xml#db-udp_stats">udp_stats_*</xref></codeph> tables
store statistical metrics about UDP communications for a Greenplum Database instance. Note:
These tables are in place for future use and are not currently populated.</li>
</ul>
<p>The <codeph>gpperfmon</codeph> database also contains the following views:</p>
<ul>
<li>The <codeph><xref
href="db-dynamic-memory-info.xml#CommandCenterDatabaseReference-dynamic_memory_info"
>dynamic_memory_info</xref></codeph> view shows an aggregate of all the segments per
host and the amount of dynamic memory used per host.</li>
<li>The <codeph><xref href="db-memory-info.xml#CommandCenterDatabaseReference-memory_info"
>memory_info</xref></codeph> view shows per-host memory information from the
<codeph>system_history</codeph> and <codeph>segment_history</codeph> tables.</li>
</ul>
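<p>Once created, the <codeph>gpperfmon</codeph> database can be queried with any SQL client.
  As a minimal sketch, the following <codeph>psql</codeph> session reads current host
  utilization from <codeph>system_now</codeph> and recent records from
  <codeph>queries_history</codeph>:</p>
<codeblock>$ psql -d gpperfmon
gpperfmon=# SELECT * FROM system_now;
gpperfmon=# SELECT * FROM queries_history ORDER BY tfinish DESC LIMIT 10;</codeblock>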
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map title="The gpperfmon Database">
<topicref href="dbref.xml" navtitle="The gpperfmon Database">
<topicref href="db-database.xml" type="topic"/>
<topicref href="db-diskspace.xml" type="topic"/>
<topicref href="db-filerep.xml" type="topic"/>
<topicref href="db-health.xml" type="topic"/>
<topicref href="db-interface-stats.xml" type="topic"/>
<topicref href="db-log-alert.xml" type="topic"/>
<topicref href="db-queries.xml" type="topic"/>
<topicref href="db-segment.xml" type="topic"/>
<topicref href="db-socket-stats.xml" type="topic"/>
<topicref href="db-system.xml" type="topic"/>
<topicref href="db-tcp-stats.xml" type="topic"/>
<topicref href="db-udp-stats.xml" type="topic"/>
<topicref href="db-dynamic-memory-info.xml" type="topic"/>
<topicref href="db-memory-info.xml" type="topic"/>
</topicref>
</map>
......@@ -220,6 +220,7 @@
</topicref>
</topicref>
<topicref href="gp_toolkit.ditamap" format="ditamap"/>
<topicref href="gpperfmon/gpperfmon.ditamap" format="ditamap"/>
<topicref href="data_types.xml" navtitle="Greenplum Database Data Types" id="data_types"/>
<topicref href="character_sets.xml" navtitle="Character Set Support" id="character_sets"/>
<topicref href="config_params/guc_config.ditamap" navtitle="Server Configuration Parameters"
......@@ -240,4 +241,6 @@
<topicref href="feature_summary.xml" navtitle="Summary of Greenplum Features"
id="feature_summary"/>
</topicref>
<topicref href="../../homenav.html" scope="external" navtitle="Greenplum Database Docs Home" format="html" otherprops="pivotal"/></map>
<topicref href="../../homenav.html" scope="external" navtitle="Greenplum Database Docs Home"
format="html" otherprops="pivotal"/>
</map>
......@@ -4,8 +4,8 @@
<topic id="topic1">
<title>gpperfmon_install</title>
<body>
<p>Installs the Command Center database (<codeph>gpperfmon</codeph>) and optionally enables the
data collection agents.</p>
<p>Installs the <codeph>gpperfmon</codeph> database, which is used by Greenplum Command Center,
and optionally enables the data collection agents.</p>
<section id="section2">
<title>Synopsis</title>
<codeblock><b>gpperfmon_install</b>
......@@ -83,8 +83,11 @@ host all gpmon 127.0.0.1/28 password</codeblock></p>
<li><codeph>gpperfmon_port=8888</codeph> (in all <codeph>postgresql.conf</codeph> files) </li>
<li><codeph>gp_external_enable_exec=on</codeph> (in the master
<codeph>postgresql.conf</codeph> file)</li>
</ul><p>For information about the Greenplum Command Center, see the <cite>Greenplum
Command Center Administrator Guide</cite>.</p></li>
</ul><p>Data collection agents can be configured by setting parameters in the
<codeph>gpperfmon.conf</codeph> configuration file. See <xref
href="#topic1/section_p51_bxc_wz" format="dita"/> for details.</p><p>For information
about the Greenplum Command Center, see the <cite>Greenplum Command Center Administrator
Guide</cite>.</p></li>
</ol>
</section>
<section id="section4">
......@@ -124,15 +127,103 @@ host all gpmon 127.0.0.1/28 password</codeblock></p>
</plentry>
</parml>
</section>
<section id="section_p51_bxc_wz">
<title>Data Collection Agent Configuration</title>
<p>The <codeph>$MASTER_DATA_DIRECTORY/gpperfmon/conf/gpperfmon.conf</codeph> file stores
configuration parameters for the data collection agents. For configuration changes to these
options to take effect, you must save <codeph>gpperfmon.conf</codeph> and then restart
the Greenplum Database server (<codeph>gpstop -r</codeph>).</p>
<p>The <codeph>gpperfmon.conf</codeph> file contains the following configuration
parameters.</p>
<simpletable frame="all" id="simpletable_xhc_qtc_wz">
<sthead>
<stentry>Parameter</stentry>
<stentry>Description</stentry>
</sthead>
<strow>
<stentry>log_location</stentry>
<stentry>Specifies a directory location for gpperfmon log files. Default is
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/logs</codeph>.</stentry>
</strow>
<strow>
<stentry>min_query_time</stentry>
<stentry>
<p>Specifies the minimum query run time in seconds for statistics collection. All
queries that run longer than this value are logged in the
<codeph>queries_history</codeph> table. For queries with shorter run times, no
historical data is collected. Defaults to 20 seconds.</p>
<p>If you know that you want to collect data for all queries, you can set this parameter
to a low value. Setting the minimum query run time to zero, however, collects data
even for the numerous queries run by Greenplum Command Center, creating a large amount
of data that may not be useful.</p>
</stentry>
</strow>
<strow>
<stentry>min_detailed_query_time</stentry>
<stentry>
<p>Specifies the minimum iterator run time in seconds for statistics collection.
Iterators that run longer than this value are logged in the
<codeph>iterators_history</codeph> table. For iterators with shorter run times, no
data is collected. Minimum value is 10 seconds.</p>
<p>This parameter’s value must always be equal to, or greater than, the value of
<codeph>min_query_time</codeph>. Setting <codeph>min_detailed_query_time</codeph>
higher than <codeph>min_query_time</codeph> allows you to log detailed query plan
iterator data only for especially complex, long-running queries, while still logging
basic query data for shorter queries.</p>
<p>Given the complexity and size of iterator data, you may want to adjust this parameter
according to the size of data collected. If the <codeph>iterators_*</codeph> tables
are growing to excessive size without providing useful information, you can raise the
value of this parameter to log iterator detail for fewer queries.</p>
</stentry>
</strow>
<strow>
<stentry>max_log_size</stentry>
<stentry>
<p>This parameter is not included in <codeph>gpperfmon.conf</codeph>, but it may be
added to this file.</p>
<p>To prevent the log files from growing to excessive size, you can add the
<codeph>max_log_size</codeph> parameter to <codeph>gpperfmon.conf</codeph>. The
value of this parameter is measured in bytes. For example:</p>
<codeblock>max_log_size = 10485760</codeblock>
<p>With this setting, the log files will grow to 10MB before the system rolls over to a
new log file.</p>
</stentry>
</strow>
<strow>
<stentry>partition_age</stentry>
<stentry>The number of months that gpperfmon statistics data is retained. The default is
0, which means no data is dropped.</stentry>
</strow>
<strow>
<stentry>quantum</stentry>
<stentry>Specifies the time in seconds between updates from data collection agents on all
segments. Valid values are 10, 15, 20, 30, and 60. Defaults to 15 seconds.<p>If you
prefer a less granular view of performance, or want to collect and analyze minimal
amounts of data for system metrics, choose a higher quantum. To collect data more
frequently, choose a lower value.</p></stentry>
</strow>
<strow>
<stentry>smdw_aliases</stentry>
<stentry>This parameter allows you to specify additional host names for the standby
master. For example, if the standby master has two NICs, you can
enter:<codeblock>smdw_aliases=smdw-1,smdw-2</codeblock><p>This optional fault
tolerance parameter is useful if the Greenplum Command Center loses connectivity with
the standby master. Instead of continuously retrying to connect to host smdw, it will
try to connect to the NIC-based aliases of <codeph>smdw-1</codeph> and/or
<codeph>smdw-2</codeph>. This ensures that the Command Center Console can
continuously poll and monitor the standby master.</p></stentry>
</strow>
</simpletable>
</section>
<section>
<title>Notes</title>
<p>Greenplum Command Center requires the <i>gpperfmon</i> the database role
<codeph>gpmon</codeph>. After the <i>gpperfmon</i> database and <codeph>gpmon</codeph>
role have been created, you can change the password for the <codeph>gpmon</codeph> role and
update the information that Greenplum Command Center uses to connect to the <i>gpperfmon</i>
database: </p>
<p>The <i>gpperfmon</i> database and Greenplum Command Center require the
<codeph>gpmon</codeph> role. After the <i>gpperfmon</i> database and
<codeph>gpmon</codeph> role have been created, you can change the password for the
<codeph>gpmon</codeph> role and update the information that Greenplum Command Center uses
to connect to the <i>gpperfmon</i> database: </p>
<ol id="ol_wss_knp_wr">
<li>Log into Greenplum Database as a superuser and change the <codeph>gpmon</codeph>
<li>Log in to Greenplum Database as a superuser and change the <codeph>gpmon</codeph>
password with the <codeph>ALTER ROLE</codeph>
command.<codeblock># ALTER ROLE gpmon WITH PASSWORD '<varname>new_password</varname>' ;</codeblock></li>
<li>Update the password in <codeph>.pgpass</codeph> file that is used by Greenplum Command
......@@ -143,9 +234,9 @@ host all gpmon 127.0.0.1/28 password</codeblock></p>
<li>Restart the Greenplum Command Center with the Command Center <codeph>gpcmdr</codeph>
utility. <codeblock> $ gpcmdr --restart</codeblock></li>
</ol>
<p>This gpperfmon monitoring system requires some initialization after startup,
so expect monitoring information to appear after a few minutes have passed, and not
immediately after installation and startup of the gpperfmon system.</p>
<p>This gpperfmon monitoring system requires some initialization after startup, so expect
monitoring information to appear after a few minutes have passed, and not immediately after
installation and startup of the gpperfmon system.</p>
</section>
<section id="section5">
<title>Examples</title>
......