<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
  PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="topic1" xml:lang="en">
  <title id="iz173472">Using Resource Groups</title>
  <body>
    <p>You can use resource groups to manage the number of active queries that may execute
      concurrently in your Greenplum Database cluster. With resource groups, you can also manage the
      amount of CPU and memory resources Greenplum allocates to each query.</p>
    <p>This topic includes the following subtopics:</p>
    <ul id="ul_wjf_1wy_sp">
      <li id="im168064">
        <xref href="#topic8339intro" type="topic" format="dita"/>
      </li>
      <li id="im16806a">
        <xref href="#topic8339717179" type="topic" format="dita"/>
      </li>
      <li id="im16806b">
        <xref href="#topic833971717" type="topic" format="dita"/>
      </li>
      <li id="im16806c">
        <xref href="#topic8339717" type="topic" format="dita"/>
      </li>
      <li id="im16806d">
        <xref href="#topic71717999" type="topic" format="dita"/>
        <ul id="ul_wjf_1wy_spXX">
          <li id="im16806e">
            <xref href="#topic8" type="topic" format="dita"/>
          </li>
          <li id="im16806f">
            <xref href="#topic10" type="topic" format="dita"/>
          </li>
          <li id="im16806g">
            <xref href="#topic17" type="topic" format="dita"/>
          </li>
          <li id="im16806h">
            <xref href="#topic22" type="topic" format="dita"/>
          </li>
        </ul>
      </li>
    </ul>
  </body>
  <topic id="topic8339intro" xml:lang="en">
    <title>Introduction</title>
    <body>
    <p>When a user executes a query, Greenplum Database evaluates the query against a set of
      limits defined for the resource group. Greenplum Database executes the query immediately if
      the group's resource limits have not yet been reached and the query does not cause the group
      to exceed the concurrent transaction limit. If these conditions are not met, Greenplum
      Database queues the query. For example, if the maximum number of concurrent transactions for
      the resource group has already been reached, a subsequent query is queued and must wait until
      other queries complete before it runs. Greenplum Database may also execute a pending query
      when the resource group's concurrency and memory limits are altered to large enough
      values.</p>
    <p>Within a resource group, transactions are evaluated on a first-in, first-out basis. Greenplum
      Database periodically assesses the active workload of the system, reallocating resources and
      starting or queuing jobs as necessary.</p>
    <p>When you create a resource group, you provide a set of limits that determine the amount of
      CPU and memory resources available to transactions executed within the group. These limits
      are:</p>
    <table id="resgroup_limit_descriptions">
      <tgroup cols="3">
        <colspec colnum="1" colname="col1" colwidth="1*"/>
        <colspec colnum="2" colname="col2" colwidth="1*"/>
        <thead>
          <row>
            <entry colname="col1">Limit Type</entry>
            <entry colname="col2">Description</entry>
          </row>
        </thead>
        <tbody>
          <row>
            <entry colname="col1">CONCURRENCY</entry>
            <entry colname="col2">The maximum number of concurrent transactions, including active
              and idle transactions, that are permitted for this resource group. </entry>
          </row>
          <row>
            <entry colname="col1">CPU_RATE_LIMIT</entry>
            <entry colname="col2">The percentage of CPU resources available to this resource
              group.</entry>
          </row>
          <row>
            <entry colname="col1">MEMORY_LIMIT</entry>
            <entry colname="col2">The percentage of memory resources available to this resource
              group.</entry>
          </row>
          <row>
            <entry colname="col1">MEMORY_SHARED_QUOTA</entry>
            <entry colname="col2">The percentage of memory to share across transactions submitted in
              this resource group.</entry>
          </row>
          <row>
            <entry colname="col1">MEMORY_SPILL_RATIO</entry>
            <entry colname="col2">The memory usage threshold for memory-intensive transactions. When
              a transaction reaches this threshold, it spills to disk.</entry>
          </row>
        </tbody>
      </tgroup>
    </table>
    <note>Resource limits are not enforced on <codeph>SET</codeph>, <codeph>RESET</codeph>, and <codeph>SHOW</codeph> commands.</note>
    </body>
  </topic>

  <topic id="topic8339717179" xml:lang="en">
    <title>Transaction Concurrency Limit</title>
    <body>
      <p>The <codeph>CONCURRENCY</codeph> limit controls the maximum number of concurrent
        transactions permitted for the resource group. Each resource group is logically divided into
        a fixed number of slots equal to the <codeph>CONCURRENCY</codeph> limit. Greenplum Database
        allocates these slots an equal, fixed percentage of memory resources.</p>
      <p>The default <codeph>CONCURRENCY</codeph> limit value for a resource group is 20.</p>
      <p>Greenplum Database queues any transactions submitted after the resource group reaches its
          <codeph>CONCURRENCY</codeph> limit. When a running transaction completes, Greenplum
        Database un-queues and executes the earliest queued transaction if sufficient memory
        resources exist.</p>
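      <p>For example, assuming a hypothetical resource group named <i>rg_example</i>, you can
        adjust the number of transaction slots at any time with the <codeph>ALTER RESOURCE
        GROUP</codeph> command:</p>
      <p>
        <codeblock>=# ALTER RESOURCE GROUP <i>rg_example</i> SET CONCURRENCY 10;
</codeblock>
      </p>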
    </body>
  </topic>

  <topic id="topic833971717" xml:lang="en">
    <title>CPU Limit</title>
    <body>
      <p>The <codeph><xref
            href="../ref_guide/config_params/guc-list.xml#gp_resource_group_cpu_limit"
            type="section"/></codeph> server configuration parameter identifies the maximum
        percentage of system CPU resources to allocate to resource groups on each Greenplum Database
        segment host. The remaining CPU resources are used for the OS kernel and the Greenplum
        Database auxiliary daemon processes. The default
        <codeph>gp_resource_group_cpu_limit</codeph> value is .9 (90%).</p>
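      <p>For example, one way to view the value currently in effect is to query it from a
        <codeph>psql</codeph> session:</p>
      <p>
        <codeblock>=# SHOW gp_resource_group_cpu_limit;
</codeblock>
      </p>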
      <note>The default <codeph>gp_resource_group_cpu_limit</codeph> value may not leave sufficient
        CPU resources if you are running other workloads on your Greenplum Database cluster nodes,
        so be sure to adjust this server configuration parameter accordingly.</note>
      <note type="warning">Avoid setting <codeph>gp_resource_group_cpu_limit</codeph> to a value
         higher than .9. Doing so may result in high workload
         queries taking near all CPU resources, potentially starving Greenplum
         Database auxiliary processes.</note>
      <p>The Greenplum Database node CPU percentage is further divided equally among the segments
        on the Greenplum node. Each resource group reserves a percentage of the segment CPU for
        resource management. You identify this percentage via the <codeph>CPU_RATE_LIMIT</codeph>
        value you provide when you create the resource group.</p>
      <p>The minimum <codeph>CPU_RATE_LIMIT</codeph> percentage you can specify for a resource group
        is 1, the maximum is 100.</p>
      <p>The sum of <codeph>CPU_RATE_LIMIT</codeph>s specified for all resource groups you define in
        your Greenplum Database cluster must not exceed 100.</p>
      <p>CPU resource assignment is elastic in that Greenplum Database may allocate the CPU
        resources of an idle resource group to one or more busier groups. In such situations, CPU resources
        are re-allocated to the previously idle resource group when that resource group next becomes
        active. If multiple resource groups are busy, they are allocated the CPU resources of any
        idle resource groups based on the ratio of their <codeph>CPU_RATE_LIMIT</codeph>s. For
        example, a resource group created with a <codeph>CPU_RATE_LIMIT</codeph> of 40 will be
        allocated twice as much extra CPU resource as a resource group you create with a
          <codeph>CPU_RATE_LIMIT</codeph> of 20.</p>
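      <p>A minimal sketch of this scenario, using two hypothetical resource groups (the
        <codeph>MEMORY_LIMIT</codeph> values shown are arbitrary):</p>
      <p>
        <codeblock>=# CREATE RESOURCE GROUP <i>rg_a</i> WITH (CPU_RATE_LIMIT=40, MEMORY_LIMIT=20);
=# CREATE RESOURCE GROUP <i>rg_b</i> WITH (CPU_RATE_LIMIT=20, MEMORY_LIMIT=10);
</codeblock>
      </p>
      <p>When both groups are busy while other groups are idle, <codeph>rg_a</codeph> is allocated
        twice as much of the idle groups' CPU resources as <codeph>rg_b</codeph>.</p>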
    </body>
  </topic>
  <topic id="topic8339717" xml:lang="en">
    <title>Memory Limits</title>
    <body>
      <p>When resource groups are enabled, memory usage is managed at the Greenplum Database node,
        segment, resource group, and transaction levels.</p>
      <p>The <codeph><xref
            href="../ref_guide/config_params/guc-list.xml#gp_resource_group_memory_limit"
            type="section"/></codeph> server configuration parameter identifies the maximum
        percentage of system memory resources to allocate to resource groups on each Greenplum
        Database segment host. The default <codeph>gp_resource_group_memory_limit</codeph> value is
        .7 (70%).</p>
      <p>The memory resource available on a Greenplum Database node is further divided equally
        among the segments on the node. When resource group-based resource management is active,
        the amount of memory allocated to each segment on a segment host is the memory available
        to Greenplum Database multiplied by the <codeph>gp_resource_group_memory_limit</codeph>
        server configuration parameter and divided by the number of active primary segments on
        the host:</p>
        <p><codeblock>rg_perseg_mem = ((RAM * (vm.overcommit_ratio / 100) + SWAP) * gp_resource_group_memory_limit) / num_active_primary_segments</codeblock></p>
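      <p>For example, on a hypothetical segment host with 256GB of RAM, 64GB of swap, a
        <codeph>vm.overcommit_ratio</codeph> of 50, the default
        <codeph>gp_resource_group_memory_limit</codeph> of .7, and 8 active primary segments, each
        segment is allocated:</p>
        <p><codeblock>rg_perseg_mem = ((256GB * (50 / 100) + 64GB) * .7) / 8 = (192GB * .7) / 8 = 16.8GB</codeblock></p>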
      <p>Each resource group reserves a percentage of the segment memory
        for resource management. You identify this percentage via the <codeph>MEMORY_LIMIT</codeph>
        value you specify when you create the resource group. The minimum
          <codeph>MEMORY_LIMIT</codeph> percentage you can specify for a resource group is 1, the
        maximum is 100.</p>
      <p>The sum of <codeph>MEMORY_LIMIT</codeph>s specified for all resource groups you define in
        your Greenplum Database cluster must not exceed 100.</p>
      <p>The memory reserved by the resource group is divided into fixed and shared components. The
          <codeph>MEMORY_SHARED_QUOTA</codeph> value you specify when you create the resource group
        identifies the percentage of reserved resource group memory that may be shared among the
        currently running transactions. This memory is allotted on a first-come, first-served basis.
        A running transaction may use none, some, or all of the
        <codeph>MEMORY_SHARED_QUOTA</codeph>.</p>
      <p>The minimum <codeph>MEMORY_SHARED_QUOTA</codeph> you can specify is 0, the maximum is 100.
        The default <codeph>MEMORY_SHARED_QUOTA</codeph> is 20.</p>
      <p>As mentioned previously, <codeph>CONCURRENCY</codeph> identifies the maximum number of
        concurrently running transactions permitted in the resource group. The fixed memory reserved
        by a resource group is divided into <codeph>CONCURRENCY</codeph> number of transaction
        slots. Each slot is allocated a fixed, equal amount of resource group memory. Greenplum
        Database guarantees this fixed memory to each transaction. <fig id="fig_py5_1sl_wlrg">
          <title>Resource Group Memory Allotments</title>
          <image href="graphics/resgroupmem.png" id="image_iqn_dsl_wlrg"/>
        </fig></p>
      <p>When a query's memory usage exceeds the fixed per-transaction memory usage amount,
        Greenplum Database allocates available resource group shared memory to the query. The
        maximum amount of resource group memory available to a specific transaction slot is the sum
        of the transaction's fixed memory and the full resource group shared memory allotment.</p>
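      <p>Continuing the hypothetical example above, a resource group created with
        <codeph>MEMORY_LIMIT=25</codeph>, <codeph>MEMORY_SHARED_QUOTA=20</codeph>, and
        <codeph>CONCURRENCY=5</codeph> on a segment with 16.8GB of resource group memory would be
        apportioned as follows:</p>
      <p><codeblock>group memory          = 16.8GB * .25   = 4.2GB
shared memory         = 4.2GB * .20    = .84GB
fixed memory          = 4.2GB - .84GB  = 3.36GB
fixed memory per slot = 3.36GB / 5     = .672GB
max per transaction   = .672GB + .84GB = 1.512GB</codeblock></p>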
      <note>Greenplum Database tracks, but does not actively monitor, transaction memory usage
        in resource groups. A transaction submitted in a resource group will fail and exit
        when memory usage exceeds its fixed memory allotment, no available resource group
        shared memory exists, and the transaction requests more memory.</note>
      <section id="topic833sp" xml:lang="en">
        <title>Query Operator Memory</title>
        <p>Most query operators are non-memory-intensive; that is, during processing, Greenplum
          Database can hold their data in allocated memory. When memory-intensive query operators
          such as join and sort process more data than can be held in memory, data is spilled to
          disk.</p>
        <p>The <codeph><xref href="../ref_guide/config_params/guc-list.xml#gp_resgroup_memory_policy" type="section"/></codeph>
          server configuration parameter governs the memory allocation and distribution algorithm
          for all query operators. Greenplum Database supports <codeph>eager_free</codeph> (the 
          default) and <codeph>auto</codeph> memory policies for resource groups. When you specify
          the <codeph>auto</codeph> policy, Greenplum Database uses resource group memory limits to
          distribute memory across query operators, allocating a fixed size of memory to 
          non-memory-intensive operators and the rest to memory-intensive operators. When the 
          <codeph>eager_free</codeph> policy is in place, Greenplum Database distributes memory 
          among operators more optimally by re-allocating memory released by operators that have 
          completed their processing to operators in a later query stage.</p>
        <p><codeph>MEMORY_SPILL_RATIO</codeph> identifies the memory usage threshold for
          memory-intensive operators in a transaction. When the transaction reaches this memory
          threshold, it spills to disk. Greenplum Database uses the
            <codeph>MEMORY_SPILL_RATIO</codeph> to determine the initial memory to allocate to a
          transaction.</p>
        <p> The minimum <codeph>MEMORY_SPILL_RATIO</codeph> percentage you can specify for a
          resource group is 0. The maximum is 100. The default <codeph>MEMORY_SPILL_RATIO</codeph>
          is 20.</p>
        <p>You define the <codeph>MEMORY_SPILL_RATIO</codeph> when you create a resource group. You
          can selectively set this limit on a per-query basis at the session level with the
              <codeph><xref href="../ref_guide/config_params/guc-list.xml#memory_spill_ratio"
              type="section"/></codeph> server configuration parameter.</p>
      </section>
      <section id="topic833cons" xml:lang="en">
        <title>Other Memory Considerations</title>
        <p>Resource groups track all Greenplum Database memory allocated via the <codeph>palloc()</codeph> function. Memory that you allocate using the Linux <codeph>malloc()</codeph> function is not managed by resource groups. To ensure that resource groups are accurately tracking memory usage, avoid <codeph>malloc()</codeph>ing large amounts of memory in custom Greenplum Database user-defined functions.</p>
      </section>
    </body>
  </topic>

  <topic id="topic71717999" xml:lang="en">
    <title>Using Resource Groups</title>
    <body>
      <note type="important">Significant Greenplum Database performance degradation has been
        observed when enabling resource group-based workload management on RedHat 6.x, CentOS 6.x,
        and SuSE 11 systems. This issue is caused by a Linux cgroup kernel bug. This kernel bug has
        been fixed in CentOS 7.x and Red Hat 7.x systems, and on SuSE 12 SP2/SP3 systems with kernel
        version 4.4.73-5.1 or newer. <p>If you use RedHat 6 and the performance with resource groups
          is acceptable for your use case, upgrade your kernel to version 2.6.32-696 or higher to
          benefit from other fixes to the cgroups implementation. </p><p>SuSE 11 does not have a
          kernel version that resolves this issue; resource groups are still considered to be an
          experimental feature on this platform. <ph otherprops="pivotal">Resource groups are not
            supported on SuSE 11 for production use.</ph></p></note>
      <section id="topic833" xml:lang="en">
        <title>Prerequisite</title>
        <p>Greenplum Database resource groups use Linux Control Groups (cgroups) to manage CPU
          resources. (cgroups are <b>not</b> used for resource group memory management.) 
          With cgroups, Greenplum isolates the CPU usage of your Greenplum processes from
          other processes on the node. This allows Greenplum to support CPU usage restrictions on a
          per-resource-group basis.</p>
        <p>For detailed information about cgroups, refer to the Control Groups documentation for
          your Linux distribution.</p>
        <p>Complete the following tasks on each node in your Greenplum Database cluster to set up
          cgroups for use with resource groups:</p>
        <ol>
          <li>If you are running the SuSE 11+ operating system on your Greenplum Database cluster
            nodes, you must enable swap accounting on each node and restart your Greenplum Database
            cluster. The <codeph>swapaccount</codeph> kernel boot parameter governs the swap
            accounting setting on SuSE 11+ systems. After setting this boot parameter, you must
            reboot your systems. For details, refer to the <xref
              href="https://www.suse.com/releasenotes/x86_64/SUSE-SLES/11-SP2/#fate-310471"
              format="html" scope="external">Cgroup Swap Control</xref> discussion in the SuSE 11
            release notes. You must be the superuser or have <codeph>sudo</codeph> access to
            configure kernel boot parameters and reboot systems. </li>
          <li>Create the Greenplum Database cgroups configuration file
              <codeph>/etc/cgconfig.d/gpdb.conf</codeph>. You must be the superuser or have
              <codeph>sudo</codeph> access to create this file:
            <codeblock>sudo vi /etc/cgconfig.d/gpdb.conf</codeblock>
          </li>
          <li>Add the following configuration information to
              <codeph>/etc/cgconfig.d/gpdb.conf</codeph>: <codeblock>group gpdb {
     perm {
         task {
             uid = gpadmin;
             gid = gpadmin;
         }
         admin {
             uid = gpadmin;
             gid = gpadmin;
         }
     }
     cpu {
     }
     cpuacct {
     }
 } </codeblock>
            <p>This content configures CPU and CPU accounting control groups managed by the
                <codeph>gpadmin</codeph> user.</p>
          </li>
          <li>If not already installed and running, install the Control Groups operating system
            package and start the cgroups service on each Greenplum Database node. The commands you
            run to perform these tasks will differ based on the operating system installed on the
            node. You must be the superuser or have <codeph>sudo</codeph> access to run these
            commands: <ul>
              <li> Redhat/CentOS 7.x systems:
                <codeblock>sudo yum install libcgroup-tools
sudo cgconfigparser -l /etc/cgconfig.d/gpdb.conf </codeblock>
              </li>
              <li> Redhat/CentOS 6.x systems:
                <codeblock>sudo yum install libcgroup
sudo service cgconfig start </codeblock>
              </li>
              <li> SuSE 11+ systems:
                <codeblock>sudo zypper install libcgroup-tools
sudo cgconfigparser -l /etc/cgconfig.d/gpdb.conf </codeblock>
              </li>
            </ul>
          </li>
          <li>Identify the <codeph>cgroup</codeph> directory mount point for the node:
              <codeblock>grep cgroup /proc/mounts</codeblock><p>The first line of output identifies
              the <codeph>cgroup</codeph> mount point; a typical mount point is shown in the
              example following this list.</p>
          </li>
          <li>Verify that you set up the Greenplum Database cgroups configuration correctly by
            running the following commands. Replace &lt;cgroup_mount_point&gt; with the
            mount point that you identified in the previous step: <codeblock>ls -l &lt;cgroup_mount_point&gt;/cpu/gpdb
ls -l &lt;cgroup_mount_point&gt;/cpuacct/gpdb</codeblock>
            <p>If these directories exist and are owned by <codeph>gpadmin:gpadmin</codeph>, you
              have successfully configured cgroups for Greenplum Database CPU resource
              management.</p>
          </li>
          <li>To automatically recreate the cgroup hierarchies and parameters that Greenplum
             Database requires when your system is restarted, configure your
             system to enable the Linux cgroup service daemon <codeph>cgconfig.service</codeph> 
            (Redhat/CentOS 7.x and SuSE 11+) or <codeph>cgconfig</codeph> (Redhat/CentOS 6.x)
            at node start-up. For example, configure one of the following cgroup service start
            commands in your preferred service auto-start tool:
            <ul>
              <li> Redhat/CentOS 7.x systems:
                <codeblock>sudo systemctl start cgconfig.service</codeblock>
              </li>
              <li> Redhat/CentOS 6.x systems:
                <codeblock>sudo service cgconfig start </codeblock>
              </li>
              <li> SuSE 11+ systems:
                <codeblock>sudo systemctl start cgconfig.service</codeblock>
              </li>
            </ul>
            <p>You may choose a different method to recreate the Greenplum Database resource group
              cgroup hierarchies.</p>
          </li>
        </ol>
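        <p>For example, on many Linux distributions the <codeph>cgroup</codeph> file systems are
          mounted under <codeph>/sys/fs/cgroup</codeph>. Assuming that mount point, the
          verification commands in the steps above would be:</p>
        <codeblock>ls -l /sys/fs/cgroup/cpu/gpdb
ls -l /sys/fs/cgroup/cpuacct/gpdb</codeblock>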
      </section>
      <section id="topic8339191" xml:lang="en">
        <title>Procedure</title>
        <p>To use resource groups in your Greenplum Database cluster, you:</p>
        <ol>
          <li><xref href="#topic8" type="topic" format="dita">Enable resource groups for your
              Greenplum Database cluster</xref>.</li>
          <li><xref href="#topic10" type="topic" format="dita">Create resource groups</xref>.</li>
          <li><xref href="#topic17" type="topic" format="dita">Assign the resource groups to one or
              more roles</xref>.</li>
          <li><xref href="#topic22" type="topic" format="dita">Use resource management system views
              to monitor and manage the resource groups</xref>.</li>
        </ol>
      </section>
    </body>
  </topic>

  <topic id="topic8" xml:lang="en">
    <title id="iz153124">Enabling Resource Groups</title>
    <body>
      <p>When you install Greenplum Database, resource queues are enabled by default. To use
        resource groups instead of resource queues, you must set the <codeph><xref
            href="../ref_guide/config_params/guc-list.xml#gp_resource_manager" type="section"
          /></codeph> server configuration parameter.</p>
      <ol id="ol_ec5_4dy_wq">
        <li>Set the <codeph>gp_resource_manager</codeph> server configuration parameter to the value
            <codeph>"group"</codeph>:
          <codeblock>gpconfig -s gp_resource_manager
gpconfig -c gp_resource_manager -v "group"
</codeblock>
        </li>
        <li>Restart Greenplum Database: <codeblock>gpstop
gpstart
</codeblock>
        </li>
      </ol>
      <p>Once enabled, any transaction submitted by a role is directed to the resource group
        assigned to the role, and is governed by that resource group's concurrency, memory, and CPU
        limits.</p>
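      <p>To confirm which resource management scheme is active, you can, for example, run the
        following command; it returns <codeph>group</codeph> when resource groups are enabled:</p>
      <p>
        <codeblock>=# SHOW gp_resource_manager;
</codeblock>
      </p>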
      <p>Greenplum Database creates two default resource groups named <codeph>admin_group</codeph>
        and <codeph>default_group</codeph>. When you enable resource groups, any role that was not
        explicitly assigned a resource group is assigned the default group for the role's
        capability. <codeph>SUPERUSER</codeph> roles are assigned the <codeph>admin_group</codeph>;
        non-admin roles are assigned the group named <codeph>default_group</codeph>.</p>
      <p>The default resource groups <codeph>admin_group</codeph> and <codeph>default_group</codeph>
        are created with the following resource limits:</p>
      <table id="default_resgroup_limits">
        <tgroup cols="3">
          <colspec colnum="1" colname="col1" colwidth="1*"/>
          <colspec colnum="2" colname="col2" colwidth="1*"/>
          <colspec colnum="3" colname="col3" colwidth="1*"/>
          <thead>
            <row>
              <entry colname="col1">Limit Type</entry>
              <entry colname="col2">admin_group</entry>
              <entry colname="col3">default_group</entry>
            </row>
          </thead>
          <tbody>
            <row>
              <entry colname="col1">CONCURRENCY</entry>
              <entry colname="col2">10</entry>
              <entry colname="col3">20</entry>
            </row>
            <row>
              <entry colname="col1">CPU_RATE_LIMIT</entry>
              <entry colname="col2">10</entry>
              <entry colname="col3">30</entry>
            </row>
            <row>
              <entry colname="col1">MEMORY_LIMIT</entry>
              <entry colname="col2">10</entry>
              <entry colname="col3">30</entry>
            </row>
            <row>
              <entry colname="col1">MEMORY_SHARED_QUOTA</entry>
              <entry colname="col2">50</entry>
              <entry colname="col3">50</entry>
            </row>
            <row>
              <entry colname="col1">MEMORY_SPILL_RATIO</entry>
              <entry colname="col2">20</entry>
              <entry colname="col3">20</entry>
            </row>
          </tbody>
        </tgroup>
      </table>
      <p>Keep in mind that the <codeph>CPU_RATE_LIMIT</codeph> and <codeph>MEMORY_LIMIT</codeph>
        values for the default resource groups <codeph>admin_group</codeph> and
        <codeph>default_group</codeph> contribute to the total percentages on a segment host.
        You may find that you need to adjust these limits for <codeph>admin_group</codeph>
        and/or <codeph>default_group</codeph> as you create and add new resource groups to
        your Greenplum Database deployment.</p>
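      <p>For example, a hypothetical adjustment that reduces the CPU and memory percentages
        reserved by <codeph>default_group</codeph>, freeing capacity for new resource groups:</p>
      <p>
        <codeblock>=# ALTER RESOURCE GROUP default_group SET CPU_RATE_LIMIT 10;
=# ALTER RESOURCE GROUP default_group SET MEMORY_LIMIT 10;
</codeblock>
      </p>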
    </body>
  </topic>

  <topic id="topic10" xml:lang="en">
    <title id="iz139857">Creating Resource Groups</title>
    <body>
      <p>When you create a resource group, you provide a name, CPU limit, and memory limit. You can
        optionally provide a concurrent transaction limit, a memory shared quota, and a spill ratio.
        Use the <codeph><xref href="../ref_guide/sql_commands/CREATE_RESOURCE_GROUP.xml#topic1"
            type="topic" format="dita"/></codeph> command to create a new resource group. </p>
      <p id="iz152723">When you create a resource group, you must provide
          <codeph>CPU_RATE_LIMIT</codeph> and <codeph>MEMORY_LIMIT</codeph> limit values. These
        limits identify the percentage of Greenplum Database resources to allocate to this resource
        group. For example, to create a resource group named <i>rgroup1</i> with a CPU limit of 20
        and a memory limit of 25:</p>
      <p>
        <codeblock>=# CREATE RESOURCE GROUP <i>rgroup1</i> WITH (CPU_RATE_LIMIT=20, MEMORY_LIMIT=25);
</codeblock>
      </p>
      <p>The CPU limit of 20 is shared by every role to which <codeph>rgroup1</codeph> is assigned.
        Similarly, the memory limit of 25 is shared by every role to which <codeph>rgroup1</codeph>
        is assigned. <codeph>rgroup1</codeph> utilizes the default <codeph>CONCURRENCY</codeph>
        setting of 20.</p>
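      <p>You can also specify the optional limits explicitly when you create the group. For
        example, a hypothetical group that permits only 5 concurrent transactions and reserves a
        larger shared memory pool:</p>
      <p>
        <codeblock>=# CREATE RESOURCE GROUP <i>rgroup2</i> WITH (CPU_RATE_LIMIT=10, MEMORY_LIMIT=10,
   CONCURRENCY=5, MEMORY_SHARED_QUOTA=40, MEMORY_SPILL_RATIO=30);
</codeblock>
      </p>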
      <p>The <codeph><xref href="../ref_guide/sql_commands/ALTER_RESOURCE_GROUP.xml#topic1"
            type="topic" format="dita"/></codeph> command updates the limits of a resource group. To
        change the limits of a resource group, specify the new values you want for the group. For
        example:</p>
      <p>
        <codeblock>=# ALTER RESOURCE GROUP <i>rg_light</i> SET CONCURRENCY 7;
=# ALTER RESOURCE GROUP <i>exec</i> SET MEMORY_LIMIT 25;
</codeblock>
      </p>
      <note>You cannot set or alter the <codeph>CONCURRENCY</codeph> value for the
          <codeph>admin_group</codeph> to zero (0).</note>
      <p>The <codeph><xref href="../ref_guide/sql_commands/DROP_RESOURCE_GROUP.xml#topic1"
            type="topic" format="dita"/></codeph> command drops a resource group. To drop a resource
        group, the group cannot be assigned to any role, nor can there be any transactions active or
        waiting in the resource group. To drop a resource group:</p>
      <p>
        <codeblock>=# DROP RESOURCE GROUP <i>exec</i>; </codeblock>
      </p>
    </body>
  </topic>

  <topic id="topic17" xml:lang="en">
    <title id="iz172210">Assigning a Resource Group to a Role</title>
    <body>
      <p id="iz172211">When you create a resource group, the group is available for assignment to
        one or more roles (users). You assign a resource group to a database role using the
          <codeph>RESOURCE GROUP</codeph> clause of the <codeph><xref
            href="../ref_guide/sql_commands/CREATE_ROLE.xml#topic1" type="topic" format="dita"
          /></codeph> or <codeph><xref href="../ref_guide/sql_commands/ALTER_ROLE.xml#topic1"
            type="topic" format="dita"/></codeph> commands. If you do not specify a resource group
        for a role, the role is assigned the default group for the role's capability.
          <codeph>SUPERUSER</codeph> roles are assigned the <codeph>admin_group</codeph>; non-admin
        roles are assigned the group named <codeph>default_group</codeph>.</p>
      <p>Use the <codeph>ALTER ROLE</codeph> or <codeph>CREATE ROLE</codeph> commands to assign a
        resource group to a role. For example:</p>
      <p>
        <codeblock>=# ALTER ROLE <i>bill</i> RESOURCE GROUP <i>rg_light</i>;
=# CREATE ROLE <i>mary</i> RESOURCE GROUP <i>exec</i>;
</codeblock>
      </p>
      <p>You can assign a resource group to one or more roles. If you have defined a role hierarchy,
        assigning a resource group to a parent role does not propagate down to the members of that
        role group.</p>
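      <p>For example, assigning a resource group to a hypothetical parent role <i>analysts</i> does
        not affect its member role <i>analyst1</i>; assign the group to each member role
        explicitly:</p>
      <p>
        <codeblock>=# ALTER ROLE <i>analysts</i> RESOURCE GROUP <i>rg_light</i>;
=# ALTER ROLE <i>analyst1</i> RESOURCE GROUP <i>rg_light</i>;
</codeblock>
      </p>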
      <p>If you wish to remove a resource group assignment from a role and assign the role the
        default group, change the role's group name assignment to <codeph>NONE</codeph>. For
        example:</p>
      <p>
        <codeblock>=# ALTER ROLE <i>mary</i> RESOURCE GROUP NONE;
</codeblock>
      </p>
    </body>
  </topic>


  <topic id="topic22" xml:lang="en">
    <title id="iz152239">Monitoring Resource Group Status</title>
    <body>
      <p>Monitoring the status of your resource groups and queries may involve the following
        tasks:</p>
      <ul>
        <li id="iz153669">
          <xref href="#topic221" type="topic" format="dita"/>
        </li>
        <li id="iz153670">
          <xref href="#topic23" type="topic" format="dita"/>
        </li>
        <li id="iz153671">
          <xref href="#topic25" type="topic" format="dita"/>
        </li>
        <li id="iz15367125">
          <xref href="#topic252525" type="topic" format="dita"/>
        </li>
        <li id="iz153679">
          <xref href="#topic27" type="topic" format="dita"/>
        </li>
      </ul>

    </body>

    <topic id="topic221" xml:lang="en">
      <title id="iz152239">Viewing Resource Group Limits</title>
      <body>
        <p>The <codeph><xref href="../ref_guide/system_catalogs/gp_resgroup_config.xml" type="topic"
              format="dita"/></codeph>
          <codeph>gp_toolkit</codeph> system view displays the current and proposed limits for a
          resource group. The proposed limit differs from the current limit when you alter the limit
          but the new value cannot be immediately applied. To view the limits of all resource
          groups:</p>
        <p>
          <codeblock>=# SELECT * FROM gp_toolkit.gp_resgroup_config;
</codeblock>
        </p>
      </body>
    </topic>

    <topic id="topic23" xml:lang="en">
      <title id="iz152239">Viewing Resource Group Query Status and CPU/Memory Usage</title>
      <body>
        <p>The <codeph><xref href="../ref_guide/gp_toolkit.xml#topic31x" type="topic"
              format="dita"/></codeph>
          <codeph>gp_toolkit</codeph> system view enables you to view the status and activity of a
          resource group. The view displays the number of running and queued transactions. It also
          displays the real-time CPU and memory usage of the resource group. To view this
          information:</p>
        <p>
          <codeblock>=# SELECT * FROM gp_toolkit.gp_resgroup_status;
</codeblock>
        </p>
      </body>
    </topic>

    <topic id="topic25" xml:lang="en">
      <title id="iz152239">Viewing the Resource Group Assigned to a Role</title>
      <body>
        <p>To view the resource group-to-role assignments, perform the following query on the
              <codeph><xref href="../ref_guide/system_catalogs/pg_roles.xml" type="topic"
              format="dita"/></codeph> and <codeph><xref
              href="../ref_guide/system_catalogs/pg_resgroup.xml" type="topic" format="dita"
            /></codeph> system catalog tables:</p>
        <p>
          <codeblock>=# SELECT rolname, rsgname FROM pg_roles, pg_resgroup
     WHERE pg_roles.rolresgroup=pg_resgroup.oid;
</codeblock>
        </p>
      </body>
    </topic>

    <topic id="topic252525" xml:lang="en">
      <title id="iz15223925">Viewing a Resource Group's Running and Pending Queries</title>
      <body>
        <p>To view a resource group's running queries, pending queries, and how long the pending
          queries have been queued, examine the <codeph><xref
              href="../ref_guide/system_catalogs/pg_stat_activity.xml" type="topic" format="dita"
            /></codeph> system catalog table:</p>
        <p>
          <codeblock>=# SELECT current_query, waiting, rsgname, rsgqueueduration 
     FROM pg_stat_activity;
</codeblock>
        </p>
      </body>
    </topic>

    <topic id="topic27" xml:lang="en">
      <title id="iz153732">Cancelling a Running or Queued Transaction in a Resource Group</title>
      <body>
        <p>There may be cases when you want to cancel a running or queued transaction in a
          resource group. For
          example, you may want to remove a query that is waiting in the resource group queue but
          has not yet been executed. Or, you may want to stop a running query that is taking too
          long to execute, or one that is sitting idle in a transaction and taking up resource group
          transaction slots that are needed by other users.</p>
        <p>To cancel a running or queued transaction, you must first determine the process
          id (pid) associated
          with the transaction. Once you have obtained the process id, you can invoke
            <codeph>pg_cancel_backend()</codeph> to end that process, as shown below.</p>
        <p>For example, to view the process information associated with all statements currently
          active or waiting in all resource groups, run the following query. If the query returns no
          results, then there are no running or queued transactions in any resource group.</p>
        <p>
          <codeblock>=# SELECT rolname, g.rsgname, procpid, waiting, current_query, datname 
     FROM pg_roles, gp_toolkit.gp_resgroup_status g, pg_stat_activity 
     WHERE pg_roles.rolresgroup=g.groupid
        AND pg_stat_activity.usename=pg_roles.rolname;
</codeblock>
        </p>

        <p>Sample partial query output:</p>
        <codeblock> rolname | rsgname  | procpid | waiting |     current_query     | datname 
---------+----------+---------+---------+-----------------------+---------
  sammy  | rg_light |  31861  |    f    | &lt;IDLE&gt; in transaction | testdb
  billy  | rg_light |  31905  |    t    | SELECT * FROM topten; | testdb</codeblock>
        <p>Use this output to identify the process id (<codeph>procpid</codeph>) of the transaction
          you want to cancel, and then cancel the process. For example, to cancel the pending query
          identified in the sample output above:</p>
        <p>
          <codeblock>=# SELECT pg_cancel_backend(31905);</codeblock>
        </p>
        <p>You can provide an optional message in a second argument to
            <codeph>pg_cancel_backend()</codeph> to indicate to the user why the process was
          cancelled.</p>
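        <p>For example, to cancel the query above and inform the user why:</p>
        <p>
          <codeblock>=# SELECT pg_cancel_backend(31905, 'Cancelling query to free resource group slots');</codeblock>
        </p>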
        <note type="note">
          <p>Do not use an operating system <codeph>KILL</codeph> command to cancel any Greenplum
            Database process.</p>
        </note>
      </body>
    </topic>

  </topic>
</topic>