<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
  PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="topic1" xml:lang="en">
  <title id="iz173472">Using Resource Groups</title>
  <body>
    <p>You can use resource groups to manage the number of active queries that may execute
      concurrently in your Greenplum Database cluster. With resource groups, you can also manage the
      amount of CPU and memory resources Greenplum allocates to each query.</p>
    <p>When a user executes a query, Greenplum Database evaluates the query against a set of
      limits defined for the resource group. Greenplum Database executes the query immediately if
      the group's resource limits have not yet been reached and the query does not cause the group
      to exceed the concurrent transaction limit. If these conditions are not met, Greenplum
      Database queues the query. For example, if the maximum number of concurrent transactions for
      the resource group has already been reached, a subsequent query is queued and must wait until
      other queries complete before it runs. Greenplum Database may also execute a pending query
      when the resource group's concurrency and memory limits are increased to sufficiently large
      values.</p>
    <p>Within a resource group, transactions are evaluated on a first in, first out basis. Greenplum
      Database periodically assesses the active workload of the system, reallocating resources and
      starting/queuing jobs as necessary.</p>
    <p>When you create a resource group, you provide a set of limits that determine the amount of
      CPU and memory resources available to transactions executed within the group. These limits
      are:</p>
    <table id="resgroup_limit_descriptions">
      <tgroup cols="3">
        <colspec colnum="1" colname="col1" colwidth="1*"/>
        <colspec colnum="2" colname="col2" colwidth="1*"/>
        <thead>
          <row>
            <entry colname="col1">Limit Type</entry>
            <entry colname="col2">Description</entry>
          </row>
        </thead>
        <tbody>
          <row>
            <entry colname="col1">CONCURRENCY</entry>
            <entry colname="col2">The maximum number of concurrent transactions, including active
              and idle transactions, that are permitted for this resource group. </entry>
          </row>
          <row>
            <entry colname="col1">CPU_RATE_LIMIT</entry>
            <entry colname="col2">The percentage of CPU resources available to this resource
              group.</entry>
          </row>
          <row>
            <entry colname="col1">MEMORY_LIMIT</entry>
            <entry colname="col2">The percentage of memory resources available to this resource
              group.</entry>
          </row>
          <row>
            <entry colname="col1">MEMORY_SHARED_QUOTA</entry>
            <entry colname="col2">The percentage of memory to share across transactions submitted in
              this resource group.</entry>
          </row>
          <row>
            <entry colname="col1">MEMORY_SPILL_RATIO</entry>
            <entry colname="col2">The memory usage threshold for memory-intensive transactions. When
              a transaction reaches this threshold, it spills to disk.</entry>
          </row>
        </tbody>
      </tgroup>
    </table>
    <note>Resource limits are not enforced on <codeph>SET</codeph>, <codeph>RESET</codeph>, and <codeph>SHOW</codeph> commands.</note>
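    <p>For example, a resource group might be created with all five limits set explicitly. The
      group name <i>rg_example</i> and the limit values below are illustrative only; the
      <codeph>CREATE RESOURCE GROUP</codeph> syntax is described in Creating Resource Groups,
      below:</p>
    <p>
      <codeblock>=# CREATE RESOURCE GROUP <i>rg_example</i> WITH (CONCURRENCY=10, CPU_RATE_LIMIT=20,
     MEMORY_LIMIT=20, MEMORY_SHARED_QUOTA=20, MEMORY_SPILL_RATIO=20);
</codeblock>
    </p>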
  </body>

  <topic id="topic8339717179" xml:lang="en">
    <title>Transaction Concurrency Limit</title>
    <body>
      <p>The <codeph>CONCURRENCY</codeph> limit controls the maximum number of concurrent
        transactions permitted for the resource group. Each resource group is logically divided into
        a fixed number of slots equal to the <codeph>CONCURRENCY</codeph> limit. Greenplum Database
        allocates these slots an equal, fixed percentage of memory resources.</p>
      <p>The default <codeph>CONCURRENCY</codeph> limit value for a resource group is 20.</p>
      <p>Greenplum Database queues any transactions submitted after the resource group reaches its
          <codeph>CONCURRENCY</codeph> limit. When a running transaction completes, Greenplum
        Database un-queues and executes the earliest queued transaction if sufficient memory
        resources exist.</p>
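      <p>You can raise or lower this limit with <codeph>ALTER RESOURCE GROUP</codeph>, described in
        Creating Resource Groups below. For example, to set the concurrency limit of a hypothetical
        resource group named <i>rg_light</i> to 10:</p>
      <p>
        <codeblock>=# ALTER RESOURCE GROUP <i>rg_light</i> SET CONCURRENCY 10;
</codeblock>
      </p>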
    </body>
  </topic>

  <topic id="topic833971717" xml:lang="en">
    <title>CPU Limit</title>
    <body>
      <p>The <codeph><xref
            href="../ref_guide/config_params/guc-list.xml#gp_resource_group_cpu_limit"
            type="section"/></codeph> server configuration parameter identifies the maximum
        percentage of system CPU resources to allocate to resource groups on each Greenplum Database
        segment node. The remaining CPU resources are used for the OS kernel and the Greenplum
        Database auxiliary daemon processes. The default
        <codeph>gp_resource_group_cpu_limit</codeph> value is .9 (90%).</p>
      <note>The default <codeph>gp_resource_group_cpu_limit</codeph> value may not leave sufficient
        CPU resources if you are running other workloads on your Greenplum Database cluster nodes,
        so be sure to adjust this server configuration parameter accordingly.</note>
      <note type="warning">Avoid setting <codeph>gp_resource_group_cpu_limit</codeph> to a value
         higher than .9. Doing so may result in high workload
         queries taking near all CPU resources, potentially starving Greenplum
         Database auxiliary processes.</note>
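      <p>For example, if other workloads share your cluster nodes, you might lower the limit with
        <codeph>gpconfig</codeph> and then restart Greenplum Database. The value 0.8 below is
        illustrative only; choose a value appropriate for your environment:</p>
      <p>
        <codeblock>gpconfig -c gp_resource_group_cpu_limit -v 0.8
gpstop
gpstart
</codeblock>
      </p>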
      <p>The Greenplum Database node CPU percentage is further divided equally among the segments on
        the Greenplum node. Each resource group reserves a percentage of the segment CPU for
        resource management. You identify this percentage via the <codeph>CPU_RATE_LIMIT</codeph>
        value you provide when you create the resource group.</p>
      <p>The minimum <codeph>CPU_RATE_LIMIT</codeph> percentage you can specify for a resource group
        is 1, the maximum is 100.</p>
      <p>The sum of <codeph>CPU_RATE_LIMIT</codeph>s specified for all resource groups you define in
        your Greenplum Database cluster must not exceed 100.</p>
      <p>CPU resource assignment is elastic in that Greenplum Database may allocate the CPU
        resources of an idle resource group to one or more busier resource groups. In such situations, CPU resources
        are re-allocated to the previously idle resource group when that resource group next becomes
        active. If multiple resource groups are busy, they are allocated the CPU resources of any
        idle resource groups based on the ratio of their <codeph>CPU_RATE_LIMIT</codeph>s. For
        example, a resource group created with a <codeph>CPU_RATE_LIMIT</codeph> of 40 will be
        allocated twice as much extra CPU resource as a resource group you create with a
          <codeph>CPU_RATE_LIMIT</codeph> of 20.</p>
    </body>
  </topic>
  <topic id="topic8339717" xml:lang="en">
    <title>Memory Limits</title>
    <body>
      <p>When resource groups are enabled, memory usage is managed at the Greenplum Database node,
        segment, resource group, and transaction levels.</p>
      <p>The <codeph><xref
            href="../ref_guide/config_params/guc-list.xml#gp_resource_group_memory_limit"
            type="section"/></codeph> server configuration parameter identifies the maximum
        percentage of system memory resources to allocate to resource groups on each Greenplum
        Database segment node. The default <codeph>gp_resource_group_memory_limit</codeph> value is
        .7 (70%).</p>
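      <p>The following sketch adjusts this setting with <codeph>gpconfig</codeph> and restarts
        Greenplum Database; the value 0.8 is illustrative only:</p>
      <p>
        <codeblock>gpconfig -s gp_resource_group_memory_limit
gpconfig -c gp_resource_group_memory_limit -v 0.8
gpstop
gpstart
</codeblock>
      </p>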
      <p>The memory resource available on a Greenplum Database node is further divided equally among
        the segments on the node. Each resource group reserves a percentage of the segment memory
        for resource management. You identify this percentage via the <codeph>MEMORY_LIMIT</codeph>
        value you specify when you create the resource group. The minimum
          <codeph>MEMORY_LIMIT</codeph> percentage you can specify for a resource group is 1, the
        maximum is 100.</p>
      <p>The sum of <codeph>MEMORY_LIMIT</codeph>s specified for all resource groups you define in
        your Greenplum Database cluster must not exceed 100.</p>
      <p>The memory reserved by the resource group is divided into fixed and shared components. The
          <codeph>MEMORY_SHARED_QUOTA</codeph> value you specify when you create the resource group
        identifies the percentage of reserved resource group memory that may be shared among the
        currently running transactions. This memory is allotted on a first-come, first-served basis.
        A running transaction may use none, some, or all of the
        <codeph>MEMORY_SHARED_QUOTA</codeph>.</p>
      <p>The minimum <codeph>MEMORY_SHARED_QUOTA</codeph> you can specify is 0, the maximum is 100.
        The default <codeph>MEMORY_SHARED_QUOTA</codeph> is 20.</p>
      <p>As mentioned previously, <codeph>CONCURRENCY</codeph> identifies the maximum number of
        concurrently running transactions permitted in the resource group. The fixed memory reserved
        by a resource group is divided into <codeph>CONCURRENCY</codeph> number of transaction
        slots. Each slot is allocated a fixed, equal amount of resource group memory. Greenplum
        Database guarantees this fixed memory to each transaction. <fig id="fig_py5_1sl_wlrg">
          <title>Resource Group Memory Allotments</title>
          <image href="graphics/resgroupmem.png" id="image_iqn_dsl_wlrg"/>
        </fig></p>
      <p>When a query's memory usage exceeds the fixed per-transaction memory usage amount,
        Greenplum Database allocates available resource group shared memory to the query. The
        maximum amount of resource group memory available to a specific transaction slot is the sum
        of the transaction's fixed memory and the full resource group shared memory allotment.</p>
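      <p>The following sketch illustrates the arithmetic using a hypothetical group and an assumed
        reservation of 1000MB of segment memory for the group; the actual amount depends on your
        segment memory and the group's <codeph>MEMORY_LIMIT</codeph>:</p>
      <p>
        <codeblock>=# CREATE RESOURCE GROUP <i>rg_mem_example</i> WITH (CPU_RATE_LIMIT=10, MEMORY_LIMIT=25,
     MEMORY_SHARED_QUOTA=20, CONCURRENCY=4);
-- Assume the group's MEMORY_LIMIT of 25 corresponds to 1000MB of segment memory:
--   Shared memory:            1000MB * 20% = 200MB (first-come, first-served)
--   Fixed memory:             1000MB - 200MB = 800MB
--   Fixed memory per slot:    800MB / 4 slots = 200MB guaranteed per transaction
--   Maximum per transaction:  200MB fixed + up to 200MB shared = 400MB
</codeblock>
      </p>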
      <section id="topic833sp" xml:lang="en">
        <title>Query Operator Memory</title>
        <p>Most query operators are non-memory-intensive; that is, during processing, Greenplum
          Database can hold their data in allocated memory. When memory-intensive query operators
          such as join and sort process more data than can be held in memory, data is spilled to
          disk.</p>
        <p>The <codeph><xref href="../ref_guide/config_params/guc-list.xml#gp_resgroup_memory_policy" type="section"/></codeph>
          server configuration parameter governs the memory allocation and distribution algorithm
          for all query operators. Greenplum Database supports <codeph>eager_free</codeph> (the 
          default) and <codeph>auto</codeph> memory policies for resource groups. When you specify
          the <codeph>auto</codeph> policy, Greenplum Database uses resource group memory limits to
          distribute memory across query operators, allocating a fixed size of memory to 
          non-memory-intensive operators and the rest to memory-intensive operators. When the 
          <codeph>eager_free</codeph> policy is in place, Greenplum Database distributes memory 
          among operators more optimally by re-allocating memory released by operators that have 
          completed their processing to operators in a later query stage.</p>
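        <p>A sketch of switching the policy with <codeph>gpconfig</codeph> follows; whether the
          change takes effect with a configuration reload or requires a restart depends on how the
          parameter is classified in your Greenplum Database version:</p>
        <p>
          <codeblock>gpconfig -c gp_resgroup_memory_policy -v "auto"
gpstop -u
</codeblock>
        </p>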
        <p><codeph>MEMORY_SPILL_RATIO</codeph> identifies the memory usage threshold for
          memory-intensive operators in a transaction. When the transaction reaches this memory
          threshold, it spills to disk. Greenplum Database uses the
            <codeph>MEMORY_SPILL_RATIO</codeph> to determine the initial memory to allocate to a
          transaction.</p>
        <p> The minimum <codeph>MEMORY_SPILL_RATIO</codeph> percentage you can specify for a
          resource group is 0. The maximum is 100. The default <codeph>MEMORY_SPILL_RATIO</codeph>
          is 20.</p>
        <p>You define the <codeph>MEMORY_SPILL_RATIO</codeph> when you create a resource group. You
          can selectively set this limit on a per-query basis at the session level with the
              <codeph><xref href="../ref_guide/config_params/guc-list.xml#memory_spill_ratio"
              type="section"/></codeph> server configuration parameter.</p>
      </section>
      <section id="topic833cons" xml:lang="en">
        <title>Other Memory Considerations</title>
        <p>Resource groups track all Greenplum Database memory allocated via the <codeph>palloc()</codeph> function. Memory that you allocate using the Linux <codeph>malloc()</codeph> function is not managed by resource groups. To ensure that resource groups are accurately tracking memory usage, avoid <codeph>malloc()</codeph>ing large amounts of memory in custom Greenplum Database user-defined functions.</p>
      </section>
    </body>
  </topic>

  <topic id="topic71717999" xml:lang="en">
    <title>Using Resource Groups</title>
    <body>
      <note type="warning">Significant Greenplum Database performance degradation has been observed
        when enabling resource group-based workload management on Red Hat 6.x, CentOS 6.x, and SuSE
        11 systems. This issue is caused by a Linux cgroup kernel bug. This kernel bug has been
        fixed in CentOS 7.x and Red Hat 7.x systems. The performance problem is mitigated in Red Hat
        6 systems that use kernel version 2.6.32-696 or higher. <p>SuSE 11 does not have a kernel
          version that resolves this issue; resource groups are considered to be an experimental
          feature on this platform. <ph otherprops="pivotal">Resource groups are not supported on
            SuSE 11 for production use.</ph></p></note>
      <section id="topic833" xml:lang="en">
        <title>Prerequisite</title>
        <p>Greenplum Database resource groups use Linux Control Groups (cgroups) to manage CPU
          resources. With cgroups, Greenplum isolates the CPU usage of your Greenplum processes from
          other processes on the node. This allows Greenplum to support CPU usage restrictions on a
          per-resource-group basis.</p>
        <p>For detailed information about cgroups, refer to the Control Groups documentation for
          your Linux distribution.</p>
        <p>Complete the following tasks on each node in your Greenplum Database cluster to set up
          cgroups for use with resource groups:</p>
        <ol>
          <li>If you are running the SuSE 11+ operating system on your Greenplum Database cluster
            nodes, you must enable swap accounting on each node and restart your Greenplum Database
            cluster. The <codeph>swapaccount</codeph> kernel boot parameter governs the swap
            accounting setting on SuSE 11+ systems. After setting this boot parameter, you must
            reboot your systems. For details, refer to the <xref
              href="https://www.suse.com/releasenotes/x86_64/SUSE-SLES/11-SP2/#fate-310471"
              format="html" scope="external">Cgroup Swap Control</xref> discussion in the Suse 11
            release notes. You must be the superuser or have <codeph>sudo</codeph> access to
            configure kernel boot parameters and reboot systems. </li>
          <li>Create the Greenplum Database cgroups configuration file
              <codeph>/etc/cgconfig.d/gpdb.conf</codeph>. You must be the superuser or have
              <codeph>sudo</codeph> access to create this file:
            <codeblock>sudo vi /etc/cgconfig.d/gpdb.conf</codeblock>
          </li>
          <li>Add the following configuration information to
              <codeph>/etc/cgconfig.d/gpdb.conf</codeph>: <codeblock>group gpdb {
     perm {
         task {
             uid = gpadmin;
             gid = gpadmin;
         }
         admin {
             uid = gpadmin;
             gid = gpadmin;
         }
     }
     cpu {
     }
     cpuacct {
     }
 } </codeblock>
            <p>This content configures CPU and CPU accounting control groups managed by the
                <codeph>gpadmin</codeph> user.</p>
          </li>
          <li>If not already installed and running, install the Control Groups operating system
            package and start the cgroups service on each Greenplum Database node. The commands you
            run to perform these tasks will differ based on the operating system installed on the
            node. You must be the superuser or have <codeph>sudo</codeph> access to run these
            commands: <ul>
              <li>Red Hat/CentOS 7.x systems:
                <codeblock>sudo yum install libcgroup-tools
sudo cgconfigparser -l /etc/cgconfig.d/gpdb.conf </codeblock>
              </li>
              <li>Red Hat/CentOS 6.x systems:
                <codeblock>sudo yum install libcgroup
sudo service cgconfig start </codeblock>
              </li>
              <li>SuSE 11+ systems:
                <codeblock>sudo zypper install libcgroup-tools
sudo cgconfigparser -l /etc/cgconfig.d/gpdb.conf </codeblock>
              </li>
            </ul>
          </li>
          <li>Identify the <codeph>cgroup</codeph> directory mount point for the node:
              <codeblock>grep cgroup /proc/mounts</codeblock><p>The first line of output identifies
              the <codeph>cgroup</codeph> mount point.</p>
          </li>
          <li>Verify that you set up the Greenplum Database cgroups configuration correctly by
            running the following commands. Replace <varname>cgroup_mount_point</varname> with the
            mount point you identified in the previous step: <codeblock>ls -l <i>cgroup_mount_point</i>/cpu/gpdb
ls -l <i>cgroup_mount_point</i>/cpuacct/gpdb
        </codeblock>
            <p>If these directories exist and are owned by <codeph>gpadmin:gpadmin</codeph>, you
              have successfully configured cgroups for Greenplum Database CPU resource
              management.</p>
          </li>
        </ol>
      </section>
      <section id="topic8339191" xml:lang="en">
        <title>Procedure</title>
        <p>To use resource groups in your Greenplum Database cluster, you:</p>
        <ol>
          <li><xref href="#topic8" type="topic" format="dita">Enable resource groups for your
              Greenplum Database cluster</xref>.</li>
          <li><xref href="#topic10" type="topic" format="dita">Create resource groups</xref>.</li>
          <li><xref href="#topic17" type="topic" format="dita">Assign the resource groups to one or
              more roles</xref>.</li>
          <li><xref href="#topic22" type="topic" format="dita">Use resource management system views
              to monitor and manage the resource groups</xref>.</li>
        </ol>
      </section>
    </body>
  </topic>

  <topic id="topic8" xml:lang="en">
    <title id="iz153124">Enabling Resource Groups</title>
    <body>
      <p>When you install Greenplum Database, resource queues are enabled by default. To use
        resource groups instead of resource queues, you must set the <codeph><xref
            href="../ref_guide/config_params/guc-list.xml#gp_resource_manager" type="section"
          /></codeph> server configuration parameter.</p>
      <ol id="ol_ec5_4dy_wq">
        <li>Set the <codeph>gp_resource_manager</codeph> server configuration parameter to the value
            <codeph>"group"</codeph>:
          <codeblock>gpconfig -s gp_resource_manager
gpconfig -c gp_resource_manager -v "group"
</codeblock>
        </li>
        <li>Restart Greenplum Database: <codeblock>gpstop
gpstart
</codeblock>
        </li>
      </ol>
      <p>Once enabled, any transaction submitted by a role is directed to the resource group
        assigned to the role, and is governed by that resource group's concurrency, memory, and CPU
        limits.</p>
      <p>Greenplum Database creates two default resource groups named <codeph>admin_group</codeph>
        and <codeph>default_group</codeph>. When you enable resource groups, any role that was not
        explicitly assigned a resource group is assigned the default group for the role's
        capability. <codeph>SUPERUSER</codeph> roles are assigned the <codeph>admin_group</codeph>, and
        non-admin roles are assigned the group named <codeph>default_group</codeph>.</p>
      <p>The default resource groups <codeph>admin_group</codeph> and <codeph>default_group</codeph>
        are created with the following resource limits:</p>
      <table id="default_resgroup_limits">
        <tgroup cols="3">
          <colspec colnum="1" colname="col1" colwidth="1*"/>
          <colspec colnum="2" colname="col2" colwidth="1*"/>
          <colspec colnum="3" colname="col3" colwidth="1*"/>
          <thead>
            <row>
              <entry colname="col1">Limit Type</entry>
              <entry colname="col2">admin_group</entry>
              <entry colname="col3">default_group</entry>
            </row>
          </thead>
          <tbody>
            <row>
              <entry colname="col1">CONCURRENCY</entry>
              <entry colname="col2">10</entry>
              <entry colname="col3">20</entry>
            </row>
            <row>
              <entry colname="col1">CPU_RATE_LIMIT</entry>
              <entry colname="col2">10</entry>
              <entry colname="col3">30</entry>
            </row>
            <row>
              <entry colname="col1">MEMORY_LIMIT</entry>
              <entry colname="col2">10</entry>
              <entry colname="col3">30</entry>
            </row>
            <row>
              <entry colname="col1">MEMORY_SHARED_QUOTA</entry>
              <entry colname="col2">50</entry>
              <entry colname="col3">50</entry>
            </row>
            <row>
              <entry colname="col1">MEMORY_SPILL_RATIO</entry>
              <entry colname="col2">20</entry>
              <entry colname="col3">20</entry>
            </row>
          </tbody>
        </tgroup>
      </table>
    </body>
  </topic>

  <topic id="topic10" xml:lang="en">
    <title id="iz139857">Creating Resource Groups</title>
    <body>
      <p>When you create a resource group, you provide a name, CPU limit, and memory limit. You can
        optionally provide a concurrent transaction limit, a memory shared quota, and a memory spill ratio.
        Use the <codeph><xref href="../ref_guide/sql_commands/CREATE_RESOURCE_GROUP.xml#topic1"
            type="topic" format="dita"/></codeph> command to create a new resource group. </p>
      <p id="iz152723">When you create a resource group, you must provide
          <codeph>CPU_RATE_LIMIT</codeph> and <codeph>MEMORY_LIMIT</codeph> limit values. These
        limits identify the percentage of Greenplum Database resources to allocate to this resource
        group. For example, to create a resource group named <i>rgroup1</i> with a CPU limit of 20
        and a memory limit of 25:</p>
      <p>
        <codeblock>=# CREATE RESOURCE GROUP <i>rgroup1</i> WITH (CPU_RATE_LIMIT=20, MEMORY_LIMIT=25);
</codeblock>
      </p>
      <p>The CPU limit of 20 is shared by every role to which <codeph>rgroup1</codeph> is assigned.
        Similarly, the memory limit of 25 is shared by every role to which <codeph>rgroup1</codeph>
        is assigned. <codeph>rgroup1</codeph> utilizes the default <codeph>CONCURRENCY</codeph>
        setting of 20.</p>
      <p>The <codeph><xref href="../ref_guide/sql_commands/ALTER_RESOURCE_GROUP.xml#topic1"
            type="topic" format="dita"/></codeph> command updates the limits of a resource group. To
        change the limits of a resource group, specify the new values you want for the group. For
        example:</p>
      <p>
        <codeblock>=# ALTER RESOURCE GROUP <i>rg_light</i> SET CONCURRENCY 7;
=# ALTER RESOURCE GROUP <i>exec</i> SET MEMORY_LIMIT 25;
</codeblock>
      </p>
      <note>You cannot set or alter the <codeph>CONCURRENCY</codeph> value for the
          <codeph>admin_group</codeph> to zero (0).</note>
      <p>The <codeph><xref href="../ref_guide/sql_commands/DROP_RESOURCE_GROUP.xml#topic1"
            type="topic" format="dita"/></codeph> command drops a resource group. To drop a resource
        group, the group cannot be assigned to any role, nor can there be any transactions active or
        waiting in the resource group. To drop a resource group:</p>
      <p>
        <codeblock>=# DROP RESOURCE GROUP <i>exec</i>; </codeblock>
      </p>
    </body>
  </topic>

  <topic id="topic17" xml:lang="en">
    <title id="iz172210">Assigning a Resource Group to a Role</title>
    <body>
      <p id="iz172211">When you create a resource group, the group is available for assignment to
        one or more roles (users). You assign a resource group to a database role using the
          <codeph>RESOURCE GROUP</codeph> clause of the <codeph><xref
            href="../ref_guide/sql_commands/CREATE_ROLE.xml#topic1" type="topic" format="dita"
          /></codeph> or <codeph><xref href="../ref_guide/sql_commands/ALTER_ROLE.xml#topic1"
            type="topic" format="dita"/></codeph> commands. If you do not specify a resource group
        for a role, the role is assigned the default group for the role's capability.
          <codeph>SUPERUSER</codeph> roles are assigned the <codeph>admin_group</codeph>, and non-admin
        roles are assigned the group named <codeph>default_group</codeph>.</p>
      <p>Use the <codeph>ALTER ROLE</codeph> or <codeph>CREATE ROLE</codeph> commands to assign a
        resource group to a role. For example:</p>
      <p>
        <codeblock>=# ALTER ROLE <i>bill</i> RESOURCE GROUP <i>rg_light</i>;
=# CREATE ROLE <i>mary</i> RESOURCE GROUP <i>exec</i>;
</codeblock>
      </p>
      <p>You can assign a resource group to one or more roles. If you have defined a role hierarchy,
        assigning a resource group to a parent role does not propagate down to the members of that
        role group.</p>
      <p>If you wish to remove a resource group assignment from a role and assign the role the
        default group, change the role's group name assignment to <codeph>NONE</codeph>. For
        example:</p>
      <p>
        <codeblock>=# ALTER ROLE <i>mary</i> RESOURCE GROUP NONE;
</codeblock>
      </p>
    </body>
  </topic>


  <topic id="topic22" xml:lang="en">
    <title id="iz152239">Monitoring Resource Group Status</title>
    <body>
      <p>Monitoring the status of your resource groups and queries may involve the following
        tasks:</p>
      <ul>
        <li id="iz153669">
          <xref href="#topic221" type="topic" format="dita"/>
        </li>
        <li id="iz153670">
          <xref href="#topic23" type="topic" format="dita"/>
        </li>
        <li id="iz153671">
          <xref href="#topic25" type="topic" format="dita"/>
        </li>
        <li id="iz15367125">
          <xref href="#topic252525" type="topic" format="dita"/>
        </li>
        <li id="iz153679">
          <xref href="#topic27" type="topic" format="dita"/>
        </li>
      </ul>

    </body>

    <topic id="topic221" xml:lang="en">
      <title id="iz152239">Viewing Resource Group Limits</title>
      <body>
        <p>The <codeph><xref href="../ref_guide/system_catalogs/gp_resgroup_config.xml" type="topic"
              format="dita"/></codeph>
          <codeph>gp_toolkit</codeph> system view displays the current and proposed limits for a
          resource group. The proposed limit differs from the current limit when you alter the limit
          but the new value can not be immediately applied. To view the limits of all resource
          groups:</p>
        <p>
          <codeblock>=# SELECT * FROM gp_toolkit.gp_resgroup_config;
</codeblock>
        </p>
      </body>
    </topic>

    <topic id="topic23" xml:lang="en">
      <title id="iz152239">Viewing Resource Group Query Status and CPU/Memory Usage</title>
      <body>
        <p>The <codeph><xref href="../ref_guide/system_catalogs/gp_resgroup_status.xml" type="topic"
              format="dita"/></codeph>
          <codeph>gp_toolkit</codeph> system view enables you to view the status and activity of a
          resource group. The view displays the number of running and queued transactions. It also
          displays the real-time CPU and memory usage of the resource group. To view this
          information:</p>
        <p>
          <codeblock>=# SELECT * FROM gp_toolkit.gp_resgroup_status;
</codeblock>
        </p>
      </body>
    </topic>

    <topic id="topic25" xml:lang="en">
      <title id="iz152239">Viewing the Resource Group Assigned to a Role</title>
      <body>
        <p>To view the resource group-to-role assignments, perform the following query on the
              <codeph><xref href="../ref_guide/system_catalogs/pg_roles.xml" type="topic"
              format="dita"/></codeph> and <codeph><xref
              href="../ref_guide/system_catalogs/pg_resgroup.xml" type="topic" format="dita"
            /></codeph> system catalog tables:</p>
        <p>
          <codeblock>=# SELECT rolname, rsgname FROM pg_roles, pg_resgroup
     WHERE pg_roles.rolresgroup=pg_resgroup.oid;
</codeblock>
        </p>
      </body>
    </topic>

    <topic id="topic252525" xml:lang="en">
      <title id="iz15223925">Viewing a Resource Group's Running and Pending Queries</title>
      <body>
        <p>To view a resource group's running queries, pending queries, and how long the pending
          queries have been queued, examine the <codeph><xref
              href="../ref_guide/system_catalogs/pg_stat_activity.xml" type="topic" format="dita"
            /></codeph> system catalog table:</p>
        <p>
          <codeblock>=# SELECT current_query, waiting, rsgname, rsgqueueduration 
     FROM pg_stat_activity;
</codeblock>
        </p>
      </body>
    </topic>

    <topic id="topic27" xml:lang="en">
      <title id="iz153732">Cancelling a Running or Queued Transaction in a Resource Group</title>
      <body>
        <p>There may be cases when you want to cancel a running or queued transaction in a
          resource group. For
          example, you may want to remove a query that is waiting in the resource group queue but
          has not yet been executed. Or, you may want to stop a running query that is taking too
          long to execute, or one that is sitting idle in a transaction and taking up resource group
          transaction slots that are needed by other users.</p>
        <p>To cancel a running or queued transaction, you must first determine the process
          id (pid) associated
          with the transaction. Once you have obtained the process id, you can invoke
            <codeph>pg_cancel_backend()</codeph> to end that process, as shown below.</p>
        <p>For example, to view the process information associated with all statements currently
          active or waiting in all resource groups, run the following query. If the query returns no
          results, then there are no running or queued transactions in any resource group.</p>
        <p>
          <codeblock>=# SELECT rolname, g.rsgname, procpid, waiting, current_query, datname 
     FROM pg_roles, gp_toolkit.gp_resgroup_status g, pg_stat_activity 
     WHERE pg_roles.rolresgroup=g.groupid
        AND pg_stat_activity.usename=pg_roles.rolname;
</codeblock>
        </p>

        <p>Sample partial query output:</p>
        <codeblock> rolname | rsgname  | procpid | waiting |     current_query     | datname 
---------+----------+---------+---------+-----------------------+---------
  sammy  | rg_light |  31861  |    f    | &lt;IDLE&gt; in transaction | testdb
  billy  | rg_light |  31905  |    t    | SELECT * FROM topten; | testdb</codeblock>
        <p>Use this output to identify the process id (<codeph>procpid</codeph>) of the transaction
          you want to cancel, and then cancel the process. For example, to cancel the pending query
          identified in the sample output above:</p>
        <p>
          <codeblock>=# SELECT pg_cancel_backend(31905);</codeblock>
        </p>
        <p>You can provide an optional message in a second argument to
            <codeph>pg_cancel_backend()</codeph> to indicate to the user why the process was
          cancelled.</p>
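        <p>For example, to cancel the pending query from the sample output above and record the
          reason for the cancellation:</p>
        <p>
          <codeblock>=# SELECT pg_cancel_backend(31905, 'terminating query to free resource group slots');</codeblock>
        </p>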
        <note type="note">
          <p>Do not use an operating system <codeph>KILL</codeph> command to cancel any Greenplum
            Database process.</p>
        </note>
      </body>
    </topic>

  </topic>

</topic>