Unverified commit 825fd2ce authored by: M Mel Kiyama, committed by: GitHub

docs - resource group support of runaway query detection (#9508)

* docs - resource group support of runaway query detection

update GUC runaway_detector_activation_percent
Add cross reference in
--Admin Guide resource group memory management topic
--CREATE RESOURCE GROUP parameter MEMORY_AUDITOR

This will be backported to 5X_STABLE

* docs - minor edit

* docs - review comment updates

* docs - simplified description for resource groups
--replaced requirement for vmtracker mem. auditor w/ admin_group and default_group
--Added global shared memory example from Simon

* docs - created an Admin Guide section for resource group automatic query termination.

* docs - fix math error
Parent f24c228c
......@@ -50,13 +50,14 @@
<li id="im16806f">
<xref href="#topic10" type="topic" format="dita"/>
</li>
<li><xref href="#topic_jlz_hzg_pkb" format="dita"/></li>
<li id="im16806g">
<xref href="#topic17" type="topic" format="dita"/>
</li>
</ul></li>
<li id="im16806h">
<xref href="#topic22" type="topic" format="dita"/>
</li>
</ul></li>
<li>
<xref href="#topic777999" type="topic" format="dita"/>
</li>
......@@ -228,6 +229,14 @@
</tbody>
</tgroup>
</table>
<p>
<note>For queries managed by resource groups that are configured to use the
<codeph>vmtracker</codeph> memory auditor, Greenplum Database supports the automatic
termination of queries based on the amount of memory the queries are using. See the server
configuration parameter <codeph><xref
href="../ref_guide/config_params/guc-list.xml#runaway_detector_activation_percent"
type="section"/></codeph>. </note>
</p>
</body>
</topic>
<topic id="topic8339717179" xml:lang="en">
......@@ -547,7 +556,7 @@ SET statement_mem='10 MB';</codeblock></p>
</body>
</topic>
<topic id="topic71717999" xml:lang="en">
<title>Using Resource Groups</title>
<title>Configuring and Using Resource Groups</title>
<body>
<note type="important">Significant Greenplum Database performance degradation has been
observed when enabling resource group-based workload management on RedHat 6.x and CentOS 6.x
......@@ -822,6 +831,24 @@ gpstart
</p>
</body>
</topic>
<topic id="topic_jlz_hzg_pkb">
<title>Configuring Automatic Query Termination</title>
<body>
<p>When resource groups have a global shared memory pool, the server configuration parameter
<codeph><xref
href="../ref_guide/config_params/guc-list.xml#runaway_detector_activation_percent"
type="section"/></codeph> sets the percent of utilized global shared memory that
triggers the termination of queries that are managed by resource groups that are configured
to use the <codeph>vmtracker</codeph> memory auditor, such as <codeph>admin_group</codeph>
and <codeph>default_group</codeph>. </p>
<p>Resource groups have a global shared memory pool when the sum of the
<codeph>MEMORY_LIMIT</codeph> attribute values configured for all resource groups is less
than 100. For example, if you have 3 resource groups configured with
<codeph>MEMORY_LIMIT</codeph> values of 10, 20, and 30, then global shared memory is 40%
= 100% - (10% + 20% + 30%). </p>
<p>For information about global shared memory, see <xref href="#topic833glob"/>.</p>
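      <p>The following is a minimal SQL sketch; it assumes that only the default
        <codeph>admin_group</codeph> and <codeph>default_group</codeph> exist beforehand, and the
        group names, <codeph>CONCURRENCY</codeph>, and <codeph>CPU_RATE_LIMIT</codeph> values are
        illustrative only:<codeblock>-- Illustrative groups; their MEMORY_LIMIT values, plus those of
-- admin_group and default_group, must sum to less than 100 for a
-- global shared memory pool to exist.
CREATE RESOURCE GROUP rg_reports WITH (CONCURRENCY=10, CPU_RATE_LIMIT=20, MEMORY_LIMIT=10);
CREATE RESOURCE GROUP rg_etl     WITH (CONCURRENCY=5,  CPU_RATE_LIMIT=20, MEMORY_LIMIT=20);

-- The termination threshold that applies to the global shared memory pool:
SHOW runaway_detector_activation_percent;</codeblock></p>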
</body>
</topic>
<topic id="topic17" xml:lang="en">
<title id="iz172210">Assigning a Resource Group to a Role</title>
<body>
......
......@@ -4328,7 +4328,8 @@
<title>gp_resource_manager</title>
<body>
<p>Identifies the resource management scheme currently enabled in the Greenplum Database
cluster. The default scheme is to use resource queues.</p>
cluster. The default scheme is to use resource queues. For information about Greenplum
Database resource management, see <xref href="../../admin_guide/wlmgmt.xml"/>.</p>
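    <p>For example, you can confirm which scheme is currently active from a
      <codeph>psql</codeph> session; the value returned depends on your
      configuration:<codeblock>SHOW gp_resource_manager;
-- Returns 'queue' (the default) or 'group'.</codeblock></p>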
<table id="gp_resource_manager_table">
<tgroup cols="3">
<colspec colnum="1" colname="col1" colwidth="1*"/>
......@@ -8001,20 +8002,49 @@
<topic id="runaway_detector_activation_percent">
<title>runaway_detector_activation_percent</title>
<body>
<note>The <codeph>runaway_detector_activation_percent</codeph> server configuration parameter
is enforced only when resource queue-based resource management is active.</note>
<p>Sets the percentage of Greenplum Database vmem memory that triggers the termination of
queries. If the percentage of vmem memory that is utilized for a Greenplum Database segment
exceeds the specified value, Greenplum Database terminates queries based on memory usage,
starting with the query consuming the largest amount of memory. Queries are terminated until
the percentage of utilized vmem is below the specified percentage.</p>
<p>For queries that are managed by resource queues or resource groups, this parameter
determines when Greenplum Database terminates running queries based on the amount of memory
the queries are using. A value of 100 disables the automatic termination of queries based on
the percentage of memory that is utilized.</p>
<p>Either the resource queue or the resource group management scheme can be active in
Greenplum Database; both schemes cannot be active at the same time. The server configuration
parameter <codeph><xref href="#gp_resource_manager"/></codeph> controls which scheme is
active.</p>
<p><b>When resource queues are enabled -</b> This parameter sets the percent of utilized
Greenplum Database vmem memory that triggers the termination of queries. If the percentage
of vmem memory that is utilized for a Greenplum Database segment exceeds the specified
value, Greenplum Database terminates queries managed by resource queues based on memory
usage, starting with the query consuming the largest amount of memory. Queries are
terminated until the percentage of utilized vmem is below the specified percentage.</p>
<p>Specify the maximum vmem value for active Greenplum Database segment instances with the
server configuration parameter <xref href="#gp_vmem_protect_limit" format="dita"/>.</p>
<p>For example, if vmem memory is set to 10GB, and the value of
<codeph>runaway_detector_activation_percent</codeph> is 90 (90%), Greenplum Database
starts terminating queries when the utilized vmem memory exceeds 9 GB.</p>
<p>A value of 0 disables the automatic termination of queries based on percentage of vmem that
is utilized.</p>
server configuration parameter <codeph><xref href="#gp_vmem_protect_limit" format="dita"
/></codeph>.</p>
      <p>For example, if vmem memory is set to 10 GB, and this parameter is 90 (90%), Greenplum
        Database starts terminating queries when the utilized vmem memory exceeds 9 GB.</p>
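      <p>A quick way to inspect both settings together from a <codeph>psql</codeph> session; the
        values in the comments assume the 10 GB / 90% configuration described
        above:<codeblock>SHOW gp_vmem_protect_limit;                -- e.g. 10240 (in MB), i.e. 10 GB per segment
SHOW runaway_detector_activation_percent;  -- e.g. 90
-- Termination threshold per segment: 10 GB * 0.90 = 9 GB of utilized vmem.</codeblock></p>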
<p>For information about resource queues, see <xref href="../../admin_guide/workload_mgmt.xml"
/>.</p>
<p><b>When resource groups are enabled -</b> This parameter sets the percent of utilized
resource group global shared memory that triggers the termination of queries that are
managed by resource groups that are configured to use the <codeph>vmtracker</codeph> memory
auditor, such as <codeph>admin_group</codeph> and <codeph>default_group</codeph>. For
information about memory auditors, see <xref
href="../../admin_guide/workload_mgmt_resgroups.xml#topic8339777"/>.</p>
<p>Resource groups have a global shared memory pool when the sum of the
<codeph>MEMORY_LIMIT</codeph> attribute values configured for all resource groups is less
than 100. For example, if you have 3 resource groups configured with
        <codeph>MEMORY_LIMIT</codeph> values of 10, 20, and 30, then global shared memory is 40%
= 100% - (10% + 20% + 30%). See <xref
href="../../admin_guide/workload_mgmt_resgroups.xml#topic833glob"/>.</p>
<p>If the percentage of utilized global shared memory exceeds the specified value, Greenplum
Database terminates queries based on memory usage, selecting from queries managed by the
resource groups that are configured to use the <codeph>vmtracker</codeph> memory auditor.
Greenplum Database starts with the query consuming the largest amount of memory. Queries are
terminated until the percentage of utilized global shared memory is below the specified
percentage. </p>
      <p>For example, if global shared memory is 10 GB, and this parameter is 90 (90%), Greenplum
        Database starts terminating queries when the utilized global shared memory exceeds 9 GB.</p>
<p>For information about resource groups, see <xref
href="../../admin_guide/workload_mgmt_resgroups.xml"/>.</p>
<table id="runaway_detector_activation_percent_table">
<tgroup cols="3">
<colspec colnum="1" colname="col1" colwidth="1*"/>
......
......@@ -1190,6 +1190,10 @@
<p>
<xref href="guc-list.xml#memory_spill_ratio" type="section">memory_spill_ratio</xref>
</p>
<p>
<xref href="guc-list.xml#runaway_detector_activation_percent" type="section"
>runaway_detector_activation_percent</xref>
</p>
<p>
<xref href="guc-list.xml#statement_mem" type="section">statement_mem</xref>
</p>
......
......@@ -88,8 +88,17 @@
</plentry>
<plentry>
<pt>MEMORY_AUDITOR {vmtracker | cgroup}</pt>
<pd>The memory auditor for the resource group. Greenplum Database employs virtual memory tracking for role resources and cgroup memory tracking for resources used by external components. The default <codeph>MEMORY_AUDITOR</codeph> is <codeph>vmtracker</codeph>. When you create a resource group with <codeph>vmtracker</codeph> memory auditing, Greenplum Database tracks that resource group's memory internally.</pd>
<pd>When you create a resource group specifying the <codeph>cgroup</codeph> <codeph>MEMORY_AUDITOR</codeph>, Greenplum Database defers the accounting of memory used by that resource group to cgroups. <codeph>CONCURRENCY</codeph> must be zero (0) for a resource group that you create for external components such as PL/Container. You cannot assign a resource group that you create for external components to a Greenplum Database role.</pd>
<pd>The memory auditor for the resource group. Greenplum Database employs virtual memory
tracking for role resources and cgroup memory tracking for resources used by external
components. The default <codeph>MEMORY_AUDITOR</codeph> is <codeph>vmtracker</codeph>.
When you create a resource group with <codeph>vmtracker</codeph> memory auditing,
Greenplum Database tracks that resource group's memory internally.</pd>
<pd>When you create a resource group specifying the <codeph>cgroup</codeph>
<codeph>MEMORY_AUDITOR</codeph>, Greenplum Database defers the accounting of memory used
by that resource group to cgroups. <codeph>CONCURRENCY</codeph> must be zero (0) for a
resource group that you create for external components such as PL/Container. You cannot
assign a resource group that you create for external components to a Greenplum Database
role.</pd>
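        <pd>For example, a sketch of a resource group intended for an external component such as
          PL/Container; the group name and the numeric limits are illustrative
          only:<codeblock>CREATE RESOURCE GROUP rgroup_extcomp WITH
  (MEMORY_AUDITOR=cgroup, CONCURRENCY=0, CPU_RATE_LIMIT=10, MEMORY_LIMIT=10);</codeblock></pd>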
</plentry>
</parml>
</section>
......