Commit 45d7af51 authored by Lisa Owen, committed by dyozie

docs - RG/RQ-qualify gpperfmon, other content where appropriate (#3800)

* docs - RG/RQ-qualify gpperfmon, other content where appropriate

* edits from david
Parent 2558c215
......@@ -600,14 +600,16 @@ Distributed by: (sale_id)
<body>
<p>The <i>session_level_memory_consumption</i> view provides information about memory
consumption and idle time for sessions that are running SQL queries. </p>
<p>In the view, the column <codeph>is_runaway</codeph> indicates whether Greenplum
<p>When resource queue-based resource management is active, the column
<codeph>is_runaway</codeph> indicates whether Greenplum
Database considers the session a runaway session based on the vmem memory consumption of
the session's queries. When the queries consume an excessive amount of memory, Greenplum
Database considers the session a runaway. The Greenplum Database server configuration
parameter <codeph>runaway_detector_activation_percent</codeph> controlling when
Greenplum Database considers a session a runaway session. </p>
<p>For information about the parameter, see "Server Configuration Parameters" in the
<cite>Greenplum Database Reference Guide</cite>. </p>
the session's queries. Under the resource queue-based resource management scheme, Greenplum
Database considers the session a runaway when the queries consume an excessive amount of
memory. The Greenplum Database server configuration
parameter <codeph>runaway_detector_activation_percent</codeph> governs the
conditions under which Greenplum Database considers a session a runaway session. </p>
<p>The <codeph>is_runaway</codeph>, <codeph>runaway_vmem_mb</codeph>, and
<codeph>runaway_command_cnt</codeph> columns are not applicable when resource group-based resource management is active.</p>
<table id="session_level_memory_consumption_table">
<title>session_level_memory_consumption</title>
<tgroup cols="4">
......
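For example, under resource queue-based resource management you might check the current runaway threshold and look for sessions already flagged as runaways (an illustrative sketch; the view is assumed here to be exposed through the session_state schema):
<codeblock>SHOW runaway_detector_activation_percent;

SELECT * FROM session_state.session_level_memory_consumption WHERE is_runaway;
</codeblock>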
......@@ -52,7 +52,7 @@
<codeblock>SELECT * FROM pg_stat_activity;
</codeblock>
</p>
<p>Note the information does not update instantaneously.</p>
<p>Note that the information does not update instantaneously.</p>
</body>
</topic>
<topic id="topic5" xml:lang="en">
......@@ -79,7 +79,14 @@ a.current_query
        ORDER BY c.relname;
</codeblock>
</p>
<p>If you use resource queues, queries that are waiting in a queue will also show in
<p>If you use resource groups, queries that are waiting will also show in
<i>pg_locks</i>. To see how many queries are waiting to run in a resource
group, use the<i> gp_resgroup_status </i>system catalog view. For example:</p>
<p>
<codeblock>SELECT * FROM gp_toolkit.gp_resgroup_status;
</codeblock>
</p>
<p>Similarly, if you use resource queues, queries that are waiting in a queue also show in
<i>pg_locks</i>. To see how many queries are waiting to run from a resource
queue, use the<i> gp_resqueue_status </i>system catalog view. For example:</p>
<p>
......
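For example, to see how many statements are waiting in, or holding slots from, each resource queue, a query along these lines can be used (an illustrative sketch; the rsqwaiters and rsqholders column names are assumed):
<codeblock>SELECT rsqname, rsqwaiters, rsqholders FROM gp_toolkit.gp_resqueue_status;
</codeblock>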
......@@ -61,7 +61,8 @@
<codeph>queries_queued</codeph>
</entry>
<entry>int</entry>
<entry>The number of queries waiting in a resource queue at data collection
<entry>The number of queries waiting in a resource group or resource queue,
depending upon which resource management scheme is active, at data collection
time.</entry>
</row>
</tbody>
......
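Which of the two counts applies can be confirmed by checking the active resource management scheme with the gp_resource_manager server configuration parameter (its value is queue or group), for example:
<codeblock>SHOW gp_resource_manager;
</codeblock>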
......@@ -7,8 +7,9 @@
<p>The <codeph>dynamic_memory_info</codeph> view shows a sum of the used and available dynamic
memory for all segment instances on a segment host. Dynamic memory refers to the maximum
amount of memory that a Greenplum Database instance will allow the query processes of a
single segment instance to consume before it starts cancelling processes. This limit is set
by the <codeph>gp_vmem_protect_limit</codeph> server configuration parameter, and is
single segment instance to consume before it starts cancelling processes. This limit,
determined by the currently active resource management scheme (resource group-based
or resource queue-based), is
evaluated on a per-segment basis.</p>
<table>
<tgroup cols="2">
......@@ -53,8 +54,7 @@
processes running on this segment host. Note that this value is a sum of the
available memory for all segments on a host. Even though this value reports
available memory, it is possible that one or more segments on the host have
exceeded their memory limit as set by the
<codeph>gp_vmem_protect_limit</codeph> parameter.</entry>
exceeded their memory limit.</entry>
</row>
</tbody>
</tgroup>
......
......@@ -93,8 +93,7 @@
processes running on this segment host. Note that this value is a sum of the
available memory for all segments on a host. Even though this value reports
available memory, it is possible that one or more segments on the host have
exceeded their memory limit as set by the
<codeph>gp_vmem_protect_limit</codeph> parameter.</entry>
exceeded their memory limit.</entry>
</row>
</tbody>
</tgroup>
......
......@@ -211,14 +211,16 @@
<codeph>rsqname</codeph>
</entry>
<entry>varchar(64)</entry>
<entry>The name of the resource queue.</entry>
<entry>If the resource queue-based resource management scheme is active,
this column specifies the name of the resource queue.</entry>
</row>
<row>
<entry>
<codeph>rqppriority</codeph>
</entry>
<entry>varchar(64)</entry>
<entry>The priority of the query -- <codeph>max, high, med, low, or
<entry>If the resource queue-based resource management scheme is active,
this column specifies the priority of the query -- <codeph>max, high, med, low, or
min</codeph>.</entry>
</row>
</tbody>
......
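For example, under resource queue-based management the queue and priority recorded for in-flight queries might be inspected with a query along these lines (a sketch against the gpperfmon queries_now table; the exact column list varies by release):
<codeblock>SELECT ssid, ccnt, username, rsqname, rqppriority, status FROM queries_now;
</codeblock>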
......@@ -7,11 +7,9 @@
<p>The <codeph>segment_*</codeph> tables contain memory allocation statistics for the
Greenplum Database segment instances. This tracks the amount of memory consumed by all
postgres processes of a particular segment instance, and the remaining amount of memory
available to a segment as per the setting of the <codeph>postgresql.conf</codeph>
configuration parameter: <codeph>gp_vmem_protect_limit</codeph>. Query processes that cause
a segment to exceed this limit will be cancelled in order to prevent system-level
out-of-memory errors. See the <i>Greenplum Database Reference Guide</i> for more
information about this parameter.</p>
available to a segment as per the settings configured by the currently active resource management scheme (resource group-based or resource queue-based).
See the <cite>Greenplum Database Administrator Guide</cite> for more
information about resource management schemes.</p>
<p>There are three segment tables, all having the same columns:</p>
<ul>
<li>
......@@ -83,8 +81,8 @@
</entry>
<entry>bigint</entry>
<entry>The amount of additional dynamic memory (in bytes) that the segment can
request before reaching the limit set by the
<codeph>gp_vmem_protect_limit</codeph> parameter.</entry>
request before reaching the limit set by the currently active
resource management scheme (resource group-based or resource queue-based). </entry>
</row>
</tbody>
</tgroup>
......
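For example, per-segment memory use and the remaining headroom before the active limit can be reviewed with a query such as the following (a sketch against the gpperfmon segment_now table; column names assumed):
<codeblock>SELECT hostname, dbid, dynamic_memory_used, dynamic_memory_available
FROM segment_now
ORDER BY hostname, dbid;
</codeblock>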
......@@ -44,7 +44,7 @@
high-availability/mirroring is achieved in a Greenplum Database instance. Statistics are
maintained for each primary-mirror pair.</li>
<li>The <codeph><xref href="db-log-alert.xml#CommandCenterDatabaseReference-log_alert"
>log_alert_*</xref></codeph> tables store error and warning messages from pg_log.</li>
>log_alert_*</xref></codeph> tables store error and warning messages from <codeph>pg_log</codeph>.</li>
<li>The <codeph><xref href="db-queries.xml#db-queries">queries_*</xref></codeph> tables store
high-level query status information.</li>
<li>The <codeph><xref href="db-segment.xml#db-segment">segment_*</xref></codeph> tables store
......
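For example, recent error and warning messages collected from pg_log might be reviewed with a query like this (a sketch against the gpperfmon log_alert_history table; column names assumed):
<codeblock>SELECT logtime, logseverity, logmessage
FROM log_alert_history
ORDER BY logtime DESC
LIMIT 20;
</codeblock>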