Commit 6a07ecf0 authored by Lisa Owen, committed by David Yozie

docs - describe the new resource group CPUSET feature (#5100)

* docs - resource group cpuset feature

* alter and create resource group sgml ref page updates

* gp_resource_group_cpu_limit applies to both CPU alloc modes

* add cpuset usage considerations

* restore ... fail, not backup

* misc edits, move note
Parent cf316e0a
......@@ -25,6 +25,7 @@ ALTER RESOURCE GROUP name SET group_attribute value
where group_attribute is one of:
CONCURRENCY integer
CPU_RATE_LIMIT integer
CPUSET tuple
MEMORY_LIMIT integer
MEMORY_SHARED_QUOTA integer
MEMORY_SPILL_RATIO integer
......
......@@ -23,7 +23,7 @@ PostgreSQL documentation
CREATE RESOURCE GROUP name WITH (group_attribute=value [, ... ])
where group_attribute is:
CPU_RATE_LIMIT=integer
CPU_RATE_LIMIT=integer | CPUSET=tuple
MEMORY_LIMIT=integer
[ CONCURRENCY=integer ]
[ MEMORY_SHARED_QUOTA=integer ]
......
......@@ -5,7 +5,7 @@
<title id="iz173472">Using Resource Groups</title>
<body>
<p>You use resource groups to set and enforce CPU, memory, and concurrent transaction limits in Greenplum Database. After you define a resource group, you can then assign the group to one or more Greenplum Database roles, or to an external component such as PL/Container, in order to control the resources used by those roles or components.</p>
<p>When you assign a resource group to a role (a role-based resource group), the resource limits that you define for the group apply to all of the roles to which you assign the group. For example, the CPU limit for a resource group identifies the maximum CPU usage for all running transactions submitted by Greenplum Database users in all roles to which you assign the group.</p>
<p>When you assign a resource group to a role (a role-based resource group), the resource limits that you define for the group apply to all of the roles to which you assign the group. For example, the memory limit for a resource group identifies the maximum memory usage for all running transactions submitted by Greenplum Database users in all roles to which you assign the group.</p>
<p>Similarly, when you assign a resource group to an external component, the group limits apply to all running instances of the component. For example, if you create a resource group for a PL/Container external component, the memory limit that you define for the group specifies the maximum memory usage for all running instances of each PL/Container runtime to which you assign the group.</p>
<p>This topic includes the following subtopics:</p>
......@@ -109,6 +109,11 @@
<entry colname="col2">The percentage of CPU resources available to this resource
group.</entry>
</row>
<row>
<entry colname="col1">CPUSET</entry>
<entry colname="col2">The CPU cores to reserve for this resource
group.</entry>
</row>
<row>
<entry colname="col1">MEMORY_LIMIT</entry>
<entry colname="col2">The percentage of memory resources available to this resource
......@@ -164,6 +169,11 @@
<entry colname="col2">Yes</entry>
<entry colname="col3">Yes</entry>
</row>
<row>
<entry colname="col1">CPUSET</entry>
<entry colname="col2">Yes</entry>
<entry colname="col3">Yes</entry>
</row>
<row>
<entry colname="col1">MEMORY_LIMIT</entry>
<entry colname="col2">Yes</entry>
......@@ -203,13 +213,17 @@
</topic>
<topic id="topic833971717" xml:lang="en">
<title>CPU Limit</title>
<title>CPU Limits</title>
<body>
<p>You configure the share of CPU resources to reserve for a resource group on a segment host by assigning specific CPU core(s) to the group, or by identifying the percentage of segment CPU resources to allocate to the group. Greenplum Database uses the <codeph>CPUSET</codeph> and <codeph>CPU_RATE_LIMIT</codeph> resource group limits to identify the CPU resource allocation mode. You must specify only one of these limits when you configure a resource group.</p>
<p>You may employ both modes of CPU resource allocation simultaneously in your Greenplum Database cluster. You may also change the CPU resource allocation mode for a resource group at runtime.</p>
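<p>For example, assuming a hypothetical resource group named <codeph>rgroup_sales</codeph> that was originally created with a <codeph>CPU_RATE_LIMIT</codeph>, the following sketch switches the group to core-based allocation at runtime and then back again; as described later in this topic, setting one CPU limit type disables the other:</p>
<codeblock>=# ALTER RESOURCE GROUP rgroup_sales SET CPUSET '1';
=# ALTER RESOURCE GROUP rgroup_sales SET CPU_RATE_LIMIT 10;
</codeblock>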
<p>The <codeph><xref
href="../ref_guide/config_params/guc-list.xml#gp_resource_group_cpu_limit"
type="section"/></codeph> server configuration parameter identifies the maximum
percentage of system CPU resources to allocate to resource groups on each Greenplum Database
segment host. The remaining CPU resources are used for the OS kernel and the Greenplum
segment host. This limit governs the maximum CPU usage of all resource groups on
a segment host regardless of the CPU allocation mode configured for the group.
The remaining unreserved CPU resources are used for the OS kernel and the Greenplum
Database auxiliary daemon processes. The default
<codeph>gp_resource_group_cpu_limit</codeph> value is .9 (90%).</p>
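<p>A quick way to confirm the value in effect on your cluster is to display the parameter from <codeph>psql</codeph> (a minimal sketch):</p>
<codeblock>=# SHOW gp_resource_group_cpu_limit;
</codeblock>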
<note>The default <codeph>gp_resource_group_cpu_limit</codeph> value may not leave sufficient
......@@ -219,15 +233,38 @@
higher than .9. Doing so may result in high workload
queries taking near all CPU resources, potentially starving Greenplum
Database auxiliary processes.</note>
<p>The Greenplum Database node CPU percentage is further divided equally among each segment on
the Greenplum node. Each resource group reserves a percentage of the segment CPU for
resource management. You identify this percentage via the <codeph>CPU_RATE_LIMIT</codeph>
value that you provide when you create the resource group.</p>
</body>
<topic id="cpuset" xml:lang="en">
<title>Assigning CPU Resources by Core</title>
<body>
<p>You identify the CPU cores that you want to reserve for a resource group with the <codeph>CPUSET</codeph> property. The CPU cores that you specify must be available in the system and cannot overlap with any CPU cores that you reserved for other resource groups. (Although Greenplum Database uses the cores that you assign to a resource group exclusively for that group, note that those CPU cores may also be used by non-Greenplum processes in the system.)</p>
<p>Specify a comma-separated list of single core numbers or number intervals when you configure <codeph>CPUSET</codeph>. You must enclose the core numbers/intervals in single quotes, for example, '1,3-4'.</p>
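<p>For example, the following hypothetical statement creates a resource group that reserves cores 1, 3, and 4; the group name and memory limit shown are illustrative only:</p>
<codeblock>=# CREATE RESOURCE GROUP rgroup_cores WITH (CPUSET='1,3-4', MEMORY_LIMIT=10);
</codeblock>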
<p>When you assign CPU cores to <codeph>CPUSET</codeph> groups, consider the following:<ul>
<li>A resource group that you create with <codeph>CPUSET</codeph> uses the specified cores exclusively. If there are no running queries in the group, the reserved cores are idle and cannot be used by queries in other resource groups. Consider minimizing the number of <codeph>CPUSET</codeph> groups to avoid wasting system CPU resources.</li>
<li>Consider keeping CPU core 0 unassigned. CPU core 0 is used as a fallback mechanism in the following cases:<ul>
<li><codeph>admin_group</codeph> and <codeph>default_group</codeph> require at least one CPU core. When all CPU cores are reserved, Greenplum Database assigns CPU core 0 to these default groups. In this situation, the resource group to which you assigned CPU core 0 shares the core with <codeph>admin_group</codeph> and <codeph>default_group</codeph>.</li>
<li>If you restart your Greenplum Database cluster after replacing a node and the new node does not have enough cores to service all <codeph>CPUSET</codeph> resource groups, the groups are automatically assigned CPU core 0 to avoid system start failure.</li></ul></li>
<li>Use the lowest possible core numbers when you assign cores to resource groups. If you replace a Greenplum Database node and the new node has fewer CPU cores than the original, or if you back up the database and want to restore it to a cluster whose nodes have fewer CPU cores, the operation may fail. For example, if your Greenplum Database cluster has 16 cores, assigning cores 1-7 is optimal. If you create a resource group and assign CPU core 9 to this group, restoring the database to an 8-core node will fail.</li></ul></p>
<p>Resource groups that you configure with <codeph>CPUSET</codeph> have a higher priority on CPU resources. The maximum CPU resource usage percentage for all resource groups configured with <codeph>CPUSET</codeph> on a segment host is the number of CPU cores reserved divided by the number of all CPU cores, multiplied by 100.</p>
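<p>For instance, on a hypothetical segment host with 16 CPU cores where <codeph>CPUSET</codeph> groups together reserve 4 cores, this ceiling is 4 / 16 * 100 = 25% of the host CPU; the arithmetic can be checked directly in <codeph>psql</codeph>:</p>
<codeblock>=# SELECT round((4::numeric / 16) * 100, 1) AS max_cpuset_cpu_pct;  -- returns 25.0
</codeblock>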
<p>When you configure <codeph>CPUSET</codeph> for a resource group, Greenplum Database disables <codeph>CPU_RATE_LIMIT</codeph> for the group and sets the value to -1.</p>
<note>You must configure <codeph>CPUSET</codeph> for a resource group <i>after</i> you have enabled resource group-based resource management for your Greenplum Database cluster.</note>
</body>
</topic>
<topic id="cpu_rate_limit" xml:lang="en">
<title>Assigning CPU Resources by Percentage</title>
<body>
<p>The Greenplum Database node CPU percentage is divided equally among each segment on
the Greenplum node. Each resource group that you configure with a <codeph>CPU_RATE_LIMIT</codeph> reserves the specified percentage of the segment CPU for
resource management.</p>
<p>The minimum <codeph>CPU_RATE_LIMIT</codeph> percentage you can specify for a resource group
is 1, the maximum is 100.</p>
<p>The sum of <codeph>CPU_RATE_LIMIT</codeph>s specified for all resource groups that you define in
your Greenplum Database cluster must not exceed 100.</p>
<p>CPU resource assignment is elastic in that Greenplum Database may allocate the CPU
<p>The maximum CPU resource usage for all resource groups configured with a <codeph>CPU_RATE_LIMIT</codeph> on a segment host is the minimum of:<ul>
<li>The number of non-reserved CPU cores divided by the number of all CPU cores, multiplied by 100, and</li>
<li>The <codeph>gp_resource_group_cpu_limit</codeph> value.</li></ul></p>
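<p>Continuing the hypothetical 16-core host above, with 4 cores reserved by <codeph>CPUSET</codeph> groups and the default <codeph>gp_resource_group_cpu_limit</codeph> of .9, the ceiling for all <codeph>CPU_RATE_LIMIT</codeph> groups is the smaller of 12 / 16 * 100 = 75% and 90%, that is, 75%:</p>
<codeblock>=# SELECT least(round((12::numeric / 16) * 100, 1), 90) AS max_rate_limit_cpu_pct;  -- returns 75.0
</codeblock>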
<p>CPU resource assignment for resource groups configured with a <codeph>CPU_RATE_LIMIT</codeph> is elastic in that Greenplum Database may allocate the CPU
resources of an idle resource group to busier ones. In such situations, CPU resources
are re-allocated to the previously idle resource group when that resource group next becomes
active. If multiple resource groups are busy, they are allocated the CPU resources of any
......@@ -235,7 +272,9 @@
example, a resource group created with a <codeph>CPU_RATE_LIMIT</codeph> of 40 will be
allocated twice as much extra CPU resource as a resource group that you create with a
<codeph>CPU_RATE_LIMIT</codeph> of 20.</p>
</body>
<p>When you configure <codeph>CPU_RATE_LIMIT</codeph> for a resource group, Greenplum Database disables <codeph>CPUSET</codeph> for the group and sets the value to -1.</p>
</body>
</topic>
</topic>
<topic id="topic8339717" xml:lang="en">
<title>Memory Limits</title>
......@@ -353,7 +392,7 @@ rg_perseg_mem = ((RAM * (vm.overcommit_ratio / 100) + SWAP) * gp_resource_group_
<section id="topic833low" xml:lang="en">
<title>memory_spill_ratio and Low Memory Queries </title>
<p>A low <codeph>memory_spill_ratio</codeph> setting (for example,
in the 0-2 range) has been shown
in the 0-2% range) has been shown
to increase the performance of queries with low memory requirements. Use
the <codeph>memory_spill_ratio</codeph> server configuration parameter
to override the setting on a per-query basis. For example:
......@@ -426,10 +465,12 @@ rg_perseg_mem = ((RAM * (vm.overcommit_ratio / 100) + SWAP) * gp_resource_group_
}
cpuacct {
}
cpuset {
}
memory {
}
} </codeblock>
<p>This content configures CPU, CPU accounting, and memory control groups
} </codeblock>
<p>This content configures CPU, CPU accounting, CPU core set, and memory control groups
managed by the <codeph>gpadmin</codeph> user. Greenplum Database uses
the memory control group only for those resource groups created with the
<codeph>cgroup</codeph> <codeph>MEMORY_AUDITOR</codeph>.</p>
......@@ -461,6 +502,7 @@ sudo cgconfigparser -l /etc/cgconfig.d/gpdb.conf </codeblock>
running the following commands. Replace &lt;cgroup_mount_point&gt; with the
mount point that you identified in the previous step: <codeblock>ls -l &lt;cgroup_mount_point&gt;/cpu/gpdb
ls -l &lt;cgroup_mount_point&gt;/cpuacct/gpdb
ls -l &lt;cgroup_mount_point&gt;/cpuset/gpdb
ls -l &lt;cgroup_mount_point&gt;/memory/gpdb</codeblock>
<p>If these directories exist and are owned by <codeph>gpadmin:gpadmin</codeph>, you
have successfully configured cgroups for Greenplum Database CPU resource
......@@ -550,6 +592,11 @@ gpstart
<entry colname="col2">10</entry>
<entry colname="col3">30</entry>
</row>
<row>
<entry colname="col1">CPUSET</entry>
<entry colname="col2">-1</entry>
<entry colname="col3">-1</entry>
</row>
<row>
<entry colname="col1">MEMORY_LIMIT</entry>
<entry colname="col2">10</entry>
......@@ -565,6 +612,11 @@ gpstart
<entry colname="col2">20</entry>
<entry colname="col3">20</entry>
</row>
<row>
<entry colname="col1">MEMORY_AUDITOR</entry>
<entry colname="col2">vmtracker</entry>
<entry colname="col3">vmtracker</entry>
</row>
</tbody>
</tgroup>
</table>
......@@ -580,12 +632,12 @@ gpstart
<topic id="topic10" xml:lang="en">
<title id="iz139857">Creating Resource Groups</title>
<body>
<p><i>When you create a resource group for a role</i>, you provide a name, CPU limit, and memory limit. You can
<p><i>When you create a resource group for a role</i>, you provide a name, a CPU resource allocation mode, and a memory limit. You can
optionally provide a concurrent transaction limit and memory shared quota and spill ratio.
Use the <codeph><xref href="../ref_guide/sql_commands/CREATE_RESOURCE_GROUP.xml#topic1"
type="topic" format="dita"/></codeph> command to create a new resource group. </p>
<p id="iz152723">When you create a resource group for a role, you must provide
<codeph>CPU_RATE_LIMIT</codeph> and <codeph>MEMORY_LIMIT</codeph> limit values. These
<codeph>CPU_RATE_LIMIT</codeph> or <codeph>CPUSET</codeph> and <codeph>MEMORY_LIMIT</codeph> limit values. These
limits identify the percentage of Greenplum Database resources to allocate to this resource
group. For example, to create a resource group named <i>rgroup1</i> with a CPU limit of 20
and a memory limit of 25:</p>
......@@ -598,12 +650,12 @@ gpstart
is assigned. <codeph>rgroup1</codeph> utilizes the default <codeph>MEMORY_AUDITOR</codeph> <codeph>vmtracker</codeph> and the default <codeph>CONCURRENCY</codeph>
setting of 20.</p>
<p id="iz1527231"><i>When you create a resource group for an external component</i>, you must provide
<codeph>CPU_RATE_LIMIT</codeph> and <codeph>MEMORY_LIMIT</codeph> limit values. You
must also provide the <codeph>MEMORY_AUDITOR</codeph> and explicitly set <codeph>CONCURRENCY</codeph> to zero (0). For example, to create a resource group named <i>rgroup_extcomp</i> with a CPU limit of 10
and a memory limit of 15:</p>
<codeph>CPU_RATE_LIMIT</codeph> or <codeph>CPUSET</codeph> and <codeph>MEMORY_LIMIT</codeph> limit values. You
must also provide the <codeph>MEMORY_AUDITOR</codeph> and explicitly set <codeph>CONCURRENCY</codeph> to zero (0). For example, to create a resource group named <i>rgroup_extcomp</i> for which you reserve CPU core 1
and assign a memory limit of 15:</p>
<p>
<codeblock>=# CREATE RESOURCE GROUP <i>rgroup_extcomp</i> WITH (MEMORY_AUDITOR=cgroup, CONNCURENCY=0,
CPU_RATE_LIMIT=10, MEMORY_LIMIT=15);
<codeblock>=# CREATE RESOURCE GROUP <i>rgroup_extcomp</i> WITH (MEMORY_AUDITOR=cgroup, CONCURRENCY=0,
CPUSET='1', MEMORY_LIMIT=15);
</codeblock>
</p>
<p>The <codeph><xref href="../ref_guide/sql_commands/ALTER_RESOURCE_GROUP.xml#topic1"
......@@ -613,6 +665,7 @@ gpstart
<p>
<codeblock>=# ALTER RESOURCE GROUP <i>rg_role_light</i> SET CONCURRENCY 7;
=# ALTER RESOURCE GROUP <i>exec</i> SET MEMORY_LIMIT 25;
=# ALTER RESOURCE GROUP <i>rgroup1</i> SET CPUSET '2,4';
</codeblock>
</p>
<note>You cannot set or alter the <codeph>CONCURRENCY</codeph> value for the
......
......@@ -1597,7 +1597,7 @@
<row class="- topic/row ">
<entry colname="col1" class="- topic/entry ">cpu_rate_limit</entry>
<entry colname="col2" class="- topic/entry ">The CPU limit (<codeph>CPU_RATE_LIMIT</codeph>) value specified for the resource group.</entry>
<entry colname="col2" class="- topic/entry ">The CPU limit (<codeph>CPU_RATE_LIMIT</codeph>) value specified for the resource group, or -1.</entry>
</row>
<row class="- topic/row ">
<entry colname="col1" class="- topic/entry ">memory_limit</entry>
......@@ -1627,6 +1627,10 @@
<entry colname="col1" class="- topic/entry ">memory_auditor</entry>
<entry colname="col2" class="- topic/entry ">The memory auditor for the resource group.</entry>
</row>
<row class="- topic/row ">
<entry colname="col1" class="- topic/entry ">cpuset</entry>
<entry colname="col2" class="- topic/entry ">The CPU cores reserved for the resource group, or -1.</entry>
</row>
</tbody>
</tgroup>
</table>
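<p>Assuming that this table describes the <codeph>gp_toolkit.gp_resgroup_config</codeph> view (the view name and its <codeph>groupname</codeph> column are defined outside this excerpt), a query such as the following sketch shows at a glance which groups use percentage-based versus core-based CPU allocation:</p>
<codeblock>=# SELECT groupname, cpu_rate_limit, cpuset FROM gp_toolkit.gp_resgroup_config;
</codeblock>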
......
......@@ -10,7 +10,8 @@
<codeblock id="sql_command_synopsis">ALTER RESOURCE GROUP <varname>name</varname> SET <varname>group_attribute</varname> <varname>value</varname></codeblock>
<p>where <varname>group_attribute</varname> is one of:</p>
<codeblock>CONCURRENCY <varname>integer</varname>
CPU_RATE_LIMIT <varname>integer</varname>
CPU_RATE_LIMIT <varname>integer</varname>
CPUSET <varname>tuple</varname>
MEMORY_LIMIT <varname>integer</varname>
MEMORY_SHARED_QUOTA <varname>integer</varname>
MEMORY_SPILL_RATIO <varname>integer</varname></codeblock>
......@@ -21,9 +22,9 @@ MEMORY_SPILL_RATIO <varname>integer</varname></codeblock>
Only a superuser can alter a resource group.</p>
<p>You can set or reset the concurrency limit of a resource group that you create for roles to control the maximum
number of active concurrent statements in that group. You can also reset the memory or CPU
rate limit of a resource group to control the amount of memory or CPU resources that all
resources of a resource group to control the amount of memory or CPU resources that all
queries submitted through the group can consume on each segment host.</p>
<p>When you alter the CPU limit of a resource group, the new limit is immediately applied.</p>
<p>When you alter the CPU resource management mode or limit of a resource group, the new mode or limit is immediately applied.</p>
<p>When you alter a memory limit of a resource group that you create for roles, the new resource limit is immediately applied if current resource usage is less than or equal to the new value and there are no running transactions in the resource group. If the current resource usage exceeds the new memory limit value, or if there are running transactions in other resource groups that hold some of the resource, then Greenplum Database defers assigning the new limit until resource usage falls within the range of the new value.</p>
<p>When you increase the memory limit of a resource group that you create for external components, the new resource limit is phased in as system memory resources become available. If you decrease the memory limit of a resource group that you create for external components, the behavior is component-specific. For example, if you decrease the memory limit of a resource group that you create for a PL/Container runtime, queries in a running container may fail with an out of memory error.</p>
<p>You can alter one limit type in a single <codeph>ALTER RESOURCE GROUP</codeph> call.</p>
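<p>For example, to change both the concurrency and the memory limit of a hypothetical resource group named <codeph>rgroup2</codeph>, issue two separate commands:</p>
<codeblock>ALTER RESOURCE GROUP rgroup2 SET CONCURRENCY 10;
ALTER RESOURCE GROUP rgroup2 SET MEMORY_LIMIT 30;</codeblock>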
......@@ -53,6 +54,16 @@ MEMORY_SPILL_RATIO <varname>integer</varname></codeblock>
The maximum is 100. The sum of the
<codeph>CPU_RATE_LIMIT</codeph>s of all resource groups defined in the
Greenplum Database cluster must not exceed 100.</pd>
<pd>If you alter the <codeph>CPU_RATE_LIMIT</codeph> of a resource group in which
you previously configured a <codeph>CPUSET</codeph>, <codeph>CPUSET</codeph> is disabled, the reserved CPU cores are returned to Greenplum Database, and <codeph>CPUSET</codeph> is set to -1.</pd>
</plentry>
<plentry>
<pt>CPUSET <varname>tuple</varname></pt>
<pd>The CPU cores to reserve for this resource group. The CPU cores that you specify in <varname>tuple</varname> must be available in the system and cannot overlap with any CPU cores that you specify for other resource groups.</pd>
<pd><varname>tuple</varname> is a comma-separated list of single core numbers or core intervals. You must enclose <varname>tuple</varname> in single quotes, for example, '1,3-4'.</pd>
<pd>If you alter the <codeph>CPUSET</codeph> value of a resource group for which
you previously configured a <codeph>CPU_RATE_LIMIT</codeph>, <codeph>CPU_RATE_LIMIT</codeph> is disabled, the reserved CPU resources are returned to Greenplum Database, and <codeph>CPU_RATE_LIMIT</codeph> is set to -1.</pd>
<pd>You can alter <codeph>CPUSET</codeph> for a resource group only after you have enabled resource group-based resource management for your Greenplum Database cluster.</pd>
</plentry>
<plentry>
<pt>MEMORY_LIMIT <varname>integer</varname></pt>
......@@ -95,6 +106,8 @@ MEMORY_SPILL_RATIO <varname>integer</varname></codeblock>
<codeblock>ALTER RESOURCE GROUP rgroup3 SET MEMORY_LIMIT 30;</codeblock>
<p>Increase the memory spill ratio for a resource group from the default: </p>
<codeblock>ALTER RESOURCE GROUP rgroup4 SET MEMORY_SPILL_RATIO 25;</codeblock>
<p>Reserve CPU core 1 for a resource group: </p>
<codeblock>ALTER RESOURCE GROUP rgroup5 SET CPUSET '1';</codeblock>
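<p>Switch a resource group that was previously configured with <codeph>CPUSET</codeph> back to a CPU rate limit (a sketch reusing <codeph>rgroup5</codeph> from the previous example): </p>
<codeblock>ALTER RESOURCE GROUP rgroup5 SET CPU_RATE_LIMIT 15;</codeblock>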
</section>
<section id="section7">
<title>Compatibility</title>
......
......@@ -9,7 +9,7 @@
<title>Synopsis</title>
<codeblock id="sql_command_synopsis">CREATE RESOURCE GROUP <varname>name</varname> WITH (<varname>group_attribute</varname>=<varname>value</varname> [, ... ])</codeblock>
<p>where <varname>group_attribute</varname> is:</p>
<codeblock>CPU_RATE_LIMIT=<varname>integer</varname>
<codeblock>CPU_RATE_LIMIT=<varname>integer</varname> | CPUSET=<varname>tuple</varname>
MEMORY_LIMIT=<varname>integer</varname>
[ CONCURRENCY=<varname>integer</varname> ]
[ MEMORY_SHARED_QUOTA=<varname>integer</varname> ]
......@@ -23,7 +23,7 @@ MEMORY_LIMIT=<varname>integer</varname>
<p>A resource
group that you create to manage a user role identifies concurrent transaction, memory,
and CPU limits for the role when resource groups are enabled. You may assign such resource groups to one or more roles.</p>
<p>A resource group that you create to manage the resources of a Greenplum Database external component such as PL/Container identifies the memory and CPU limits for the component when resource groups are enabled. These resource groups use cgroups to manage both CPU and memory management. Assignment of resource groups to external components is component-specific. For example, you assign a PL/Container resource group when you configure a PL/Container runtime. You cannot assign a resource group that you create for external components to a role, nor can you assign a resource group that you create for roles to an external component.</p>
<p>A resource group that you create to manage the resources of a Greenplum Database external component such as PL/Container identifies the memory and CPU limits for the component when resource groups are enabled. These resource groups use cgroups for both CPU and memory management. Assignment of resource groups to external components is component-specific. For example, you assign a PL/Container resource group when you configure a PL/Container runtime. You cannot assign a resource group that you create for external components to a role, nor can you assign a resource group that you create for roles to an external component.</p>
<p>You must have <codeph>SUPERUSER</codeph> privileges to create a resource group. The maximum number of resource groups allowed in your Greenplum Database cluster is 100.</p>
<p>Greenplum Database pre-defines two default resource groups: <codeph>admin_group</codeph>
and <codeph>default_group</codeph>. These group names, as well as the group name <codeph>none</codeph>, are reserved.</p>
......@@ -54,7 +54,11 @@ MEMORY_LIMIT=<varname>integer</varname>
</plentry>
<plentry>
<pt>CPU_RATE_LIMIT <varname>integer</varname></pt>
<pd>Required. The percentage of CPU resources to allocate to this resource group. The minimum CPU percentage you can specify for a resource group is 1. The maximum is 100. The sum of the <codeph>CPU_RATE_LIMIT</codeph> values specified for all resource groups defined in the Greenplum Database cluster must be less than or equal to 100.
<pt>CPUSET <varname>tuple</varname></pt>
<pd>Required. You must specify only one of <codeph>CPU_RATE_LIMIT</codeph> or <codeph>CPUSET</codeph> when you create a resource group.</pd>
<pd><codeph>CPU_RATE_LIMIT</codeph> is the percentage of CPU resources to allocate to this resource group. The minimum CPU percentage you can specify for a resource group is 1. The maximum is 100. The sum of the <codeph>CPU_RATE_LIMIT</codeph> values specified for all resource groups defined in the Greenplum Database cluster must be less than or equal to 100.
</pd>
<pd><codeph>CPUSET</codeph> identifies the CPU cores to reserve for this resource group. The CPU cores that you specify in <varname>tuple</varname> must be available in the system and cannot overlap with any CPU cores that you specify for other resource groups.</pd><pd><varname>tuple</varname> is a comma-separated list of single core numbers or core number intervals. You must enclose <varname>tuple</varname> in single quotes, for example, '1,3-4'.</pd><pd><note>You can configure <codeph>CPUSET</codeph> for a resource group only after you have enabled resource group-based resource management for your Greenplum Database cluster.</note>
</pd>
</plentry>
<plentry>
......@@ -92,6 +96,8 @@ MEMORY_LIMIT=<varname>integer</varname>
<p>Create a resource group to manage PL/Container resources specifying a memory limit of 10, and a CPU limit of 10:</p>
<codeblock>CREATE RESOURCE GROUP plc_run1 WITH (MEMORY_LIMIT=10, CPU_RATE_LIMIT=10,
CONCURRENCY=0, MEMORY_AUDITOR=cgroup);</codeblock>
<p>Create a resource group with a memory limit percentage of 11 to which you assign CPU cores 1 to 3:</p>
<codeblock>CREATE RESOURCE GROUP rgroup3 WITH (CPUSET='1-3', MEMORY_LIMIT=11);</codeblock>
</section>
<section id="section7">
<title>Compatibility</title>
......
......@@ -4,7 +4,7 @@
<topic id="topic1"><title id="dc20941">DROP RESOURCE GROUP</title><body>
<p id="sql_command_desc">Removes a resource group.</p><section id="section2"><title>Synopsis</title><codeblock id="sql_command_synopsis">DROP RESOURCE GROUP <varname>group_name</varname></codeblock></section><section id="section3"><title>Description</title><p>This command removes a resource group from Greenplum
Database. Only a superuser
can drop a resource group.</p><p>To drop a role resource group, the group cannot be assigned to any roles,
can drop a resource group. When you drop a resource group, the memory and CPU resources reserved by the group are returned to Greenplum Database.</p><p>To drop a role resource group, the group cannot be assigned to any roles,
nor can it have any statements pending or running in the group. If you drop a resource group that you created for an external component, the behavior is determined by the external component. For example, dropping a resource group that you assigned to a PL/Container runtime kills running containers in the group. </p><p>You cannot drop the pre-defined <codeph>admin_group</codeph> and <codeph>default_group</codeph> resource groups.</p>
</section><section id="section4"><title>Parameters</title><parml><plentry><pt><varname>group_name</varname></pt><pd>The name of the resource group to remove.</pd></plentry></parml></section><section id="section5"><title>Notes</title><p>You cannot submit a <codeph>DROP RESOURCE GROUP</codeph> command in an explicit transaction or sub-transaction.</p><p>Use <codeph><xref href="ALTER_ROLE.xml#topic1" type="topic" format="dita"/></codeph> to remove a resource group assigned
to a specific user/role.</p><p> Perform the following query to view all of the currently active
......
......@@ -63,7 +63,7 @@
</entry>
<entry colname="col2">text</entry>
<entry colname="col3">pg_resgroupcapability.value for pg_resgroupcapability.reslimittype = 2</entry>
<entry colname="col4">The CPU limit (<codeph>CPU_RATE_LIMIT</codeph>) value specified for the resource group.</entry>
<entry colname="col4">The CPU limit (<codeph>CPU_RATE_LIMIT</codeph>) value specified for the resource group, or -1.</entry>
</row>
<row>
<entry colname="col1">
......@@ -121,6 +121,14 @@
<entry colname="col3">pg_resgroupcapability.value for pg_resgroupcapability.reslimittype = 6</entry>
<entry colname="col4">The memory auditor in use for the resource group.</entry>
</row>
<row>
<entry colname="col1">
<codeph>cpuset</codeph>
</entry>
<entry colname="col2">text</entry>
<entry colname="col3">pg_resgroupcapability.value for pg_resgroupcapability.reslimittype = 7</entry>
<entry colname="col4">The CPU cores reserved for the resource group, or -1.</entry>
</row>
</tbody>
</tgroup>
</table>
......
......@@ -47,7 +47,7 @@
<entry colname="col3">
<codeph></codeph>
</entry>
<entry colname="col4">The resource group limit type:<p>0 - Unknown</p><p>1 - Concurrency</p><p>2 - CPU</p><p>3 - Memory</p><p>4 - Memory shared quota</p><p>5 - Memory spill ratio</p><p>6 - Memory auditor</p>
<entry colname="col4">The resource group limit type:<p>0 - Unknown</p><p>1 - Concurrency</p><p>2 - CPU</p><p>3 - Memory</p><p>4 - Memory shared quota</p><p>5 - Memory spill ratio</p><p>6 - Memory auditor</p><p>7 - CPU set</p>
</entry>
</row>
<row>
......
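<p>A minimal query sketch against this catalog lists the CPU core assignments (limit type 7); the <codeph>resgroupid</codeph> column name is assumed here, while <codeph>reslimittype</codeph> and <codeph>value</codeph> appear in the mappings above:</p>
<codeblock>SELECT resgroupid, value FROM pg_resgroupcapability WHERE reslimittype = 7;</codeblock>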