<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
  PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="topic1" xml:lang="en">
  <title id="iz173472">Using Resource Groups</title>
  <body>
    <note type="warning">Resource groups are an experimental feature and are not intended for use in a production environment. Experimental features are subject to change without notice in future releases.</note>
    <p>You can use resource groups to manage the number of active queries that may execute concurrently in your Greenplum Database cluster. With resource groups, you can also manage the amount of CPU and memory resources Greenplum allocates to each query.</p>

    <p>When a user executes a query, Greenplum Database evaluates the query against a set of limits defined for the resource group. Greenplum Database executes the query immediately if the group's resource limits have not yet been reached and the query does not cause the group to exceed its concurrent transaction limit. If these conditions are not met, Greenplum Database queues the query. For example, if the maximum number of concurrent transactions for the resource group has already been reached, a subsequent query is queued and must wait until other queries complete before it runs. Greenplum Database may also execute a pending query if the resource group's concurrency or memory limits are increased to sufficiently large values.</p>
    <p>Within a resource group, transactions are evaluated on a first in, first out basis. Greenplum Database periodically assesses the active workload of the system, reallocating resources and starting or queuing jobs as necessary.</p>
    <p>When you create a resource group, you provide a set of limits that determine the amount of CPU and memory resources available to transactions executed within the group. These limits are:</p>
     <table id="resgroup_limit_descriptions">
        <tgroup cols="3">
          <colspec colnum="1" colname="col1" colwidth="1*"/>
          <colspec colnum="2" colname="col2" colwidth="1*"/>
          <thead>
            <row>
              <entry colname="col1">Limit Type</entry>
              <entry colname="col2">Description</entry>
            </row>
          </thead>
          <tbody>
            <row>
              <entry colname="col1">CONCURRENCY</entry>
              <entry colname="col2">The maximum number of concurrent transactions, including active and idle transactions, that are permitted for this resource group. </entry>
            </row>
            <row>
              <entry colname="col1">CPU_RATE_LIMIT</entry>
              <entry colname="col2">The percentage of CPU resources available to this resource group.</entry>
            </row>
            <row>
              <entry colname="col1">MEMORY_LIMIT</entry>
              <entry colname="col2">The percentage of memory resources available to this resource group.</entry>
            </row>
            <row>
              <entry colname="col1">MEMORY_SHARED_QUOTA</entry>
              <entry colname="col2">The percentage of memory to share across transactions submitted in this resource group.</entry>
            </row>
            <row>
              <entry colname="col1">MEMORY_SPILL_RATIO</entry>
              <entry colname="col2">The memory usage threshold for memory-intensive transactions. When a transaction reaches this threshold, it spills to disk.</entry>
            </row>
          </tbody>
        </tgroup>
      </table>
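      <p>For example, the following command creates a resource group that specifies all five limits. The group name <i>rg_sample</i> and the limit values are illustrative only; the topics that follow describe how to choose appropriate values for each limit:</p>
      <p>
        <codeblock>=# CREATE RESOURCE GROUP <i>rg_sample</i> WITH (CONCURRENCY=10, CPU_RATE_LIMIT=20,
     MEMORY_LIMIT=25, MEMORY_SHARED_QUOTA=20, MEMORY_SPILL_RATIO=20);</codeblock>
      </p>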

  </body>

  <topic id="topic8339717179" xml:lang="en">
    <title>Transaction Concurrency Limit</title>
    <body>
      <p>The <codeph>CONCURRENCY</codeph> limit controls the maximum number of concurrent transactions permitted for the resource group. Each resource group is logically divided into a fixed number of slots equal to the <codeph>CONCURRENCY</codeph> limit. Greenplum Database allocates an equal, fixed percentage of the group's memory resources to each slot.</p>
      <p>The default <codeph>CONCURRENCY</codeph> limit value for a resource group is 20.</p>
      <p>Greenplum Database queues any transactions submitted after the resource group reaches its <codeph>CONCURRENCY</codeph> limit. When a running transaction completes, Greenplum Database dequeues and executes the earliest queued transaction if sufficient memory resources exist.</p>
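      <p>For example, to lower the concurrent transaction limit of a hypothetical resource group named <i>rg_sample</i> from the default of 20 to 10:</p>
      <p>
        <codeblock>=# ALTER RESOURCE GROUP <i>rg_sample</i> SET CONCURRENCY 10;</codeblock>
      </p>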
    </body>
  </topic>

  <topic id="topic833971717" xml:lang="en">
    <title>CPU Limit</title>
    <body>
      <p>The <codeph><xref href="../ref_guide/config_params/guc-list.xml#gp_resource_group_cpu_limit" type="section"/></codeph> server configuration parameter identifies the maximum percentage of system CPU resources to allocate to resource groups on each Greenplum Database segment node. The remaining CPU resources are used for the OS kernel and the Greenplum Database daemon processes. The default <codeph>gp_resource_group_cpu_limit</codeph> value is .9 (90%).</p>
      <note>The default <codeph>gp_resource_group_cpu_limit</codeph> value may not leave sufficient CPU resources if you are running other workloads on your Greenplum Database cluster nodes, so be sure to adjust this server configuration parameter accordingly.</note>
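      <p>For example, if other workloads on your nodes require roughly 20% of the CPU, you might lower the limit to .7. The value shown here is an assumption for illustration; set the parameter with <codeph>gpconfig</codeph>, and then restart Greenplum Database for the change to take effect:</p>
      <codeblock>gpconfig -c gp_resource_group_cpu_limit -v 0.7</codeblock>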
      <p>The Greenplum Database node CPU percentage is further divided equally among each segment on the Greenplum node. Each resource group reserves a percentage of the segment CPU for resource management. You identify this percentage via the <codeph>CPU_RATE_LIMIT</codeph> value you provide when you create the resource group.</p>
      <p>The minimum <codeph>CPU_RATE_LIMIT</codeph> percentage you can specify for a resource group is 1; the maximum is 100.</p>
      <p>The sum of <codeph>CPU_RATE_LIMIT</codeph>s specified for all resource groups you define in your Greenplum Database cluster must not exceed 100.</p>
      <p>CPU resource assignment is elastic in that Greenplum Database may allocate the CPU resources of idle resource groups to busier ones. In such situations, CPU resources are re-allocated to a previously idle resource group when that group next becomes active. If multiple resource groups are busy, they are allocated the CPU resources of any idle resource groups based on the ratio of their <codeph>CPU_RATE_LIMIT</codeph>s. For example, a resource group created with a <codeph>CPU_RATE_LIMIT</codeph> of 40 is allocated twice as much extra CPU resource as a resource group created with a <codeph>CPU_RATE_LIMIT</codeph> of 20.</p>
    </body>
  </topic>
  <topic id="topic8339717" xml:lang="en">
    <title>Memory Limits</title>
    <body>
      <p>When resource groups are enabled, memory usage is managed at the Greenplum Database node, segment, resource group, and transaction levels.</p>

      <p>The <codeph><xref href="../ref_guide/config_params/guc-list.xml#gp_resource_group_memory_limit" type="section"/></codeph> server configuration parameter identifies the maximum percentage of system memory resources to allocate to resource groups on each Greenplum Database segment node. The default <codeph>gp_resource_group_memory_limit</codeph> value is .9 (90%).</p>
      <p>The memory resource available on a Greenplum Database node is further divided equally among each segment on the node. Each resource group reserves a percentage of the segment memory for resource management. You identify this percentage via the <codeph>MEMORY_LIMIT</codeph> value you specify when you create the resource group. The minimum <codeph>MEMORY_LIMIT</codeph> percentage you can specify for a resource group is 1; the maximum is 100.</p>
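      <p>As a worked example, assume a node with 256 GB of RAM, 8 segments per node, the default <codeph>gp_resource_group_memory_limit</codeph> of .9, and a resource group created with <codeph>MEMORY_LIMIT=25</codeph>. All of these values are assumptions chosen for illustration:</p>
      <codeblock>memory per segment       = 256 GB * 0.9 / 8 = 28.8 GB
memory reserved by group = 28.8 GB * 0.25   = 7.2 GB per segment</codeblock>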
      <p>The sum of <codeph>MEMORY_LIMIT</codeph>s specified for all resource groups you define in your Greenplum Database cluster must not exceed 100.</p>
      <p>The memory reserved by the resource group is divided into fixed and shared components. The <codeph>MEMORY_SHARED_QUOTA</codeph> value you specify when you create the resource group identifies the percentage of reserved resource group memory that may be shared among the currently running transactions. This memory is allotted on a first-come, first-served basis. A running transaction may use none, some, or all of the <codeph>MEMORY_SHARED_QUOTA</codeph>.</p>

      <p>The minimum <codeph>MEMORY_SHARED_QUOTA</codeph> you can specify is 0; the maximum is 100. The default <codeph>MEMORY_SHARED_QUOTA</codeph> is 20.</p>

      <p>As mentioned previously, <codeph>CONCURRENCY</codeph> identifies the maximum number of concurrently running transactions permitted in the resource group. The fixed memory reserved by a resource group is divided into a number of transaction slots equal to the <codeph>CONCURRENCY</codeph> limit. Each slot is allocated a fixed, equal amount of resource group memory. Greenplum Database guarantees this fixed memory to each transaction.

         <fig id="fig_py5_1sl_wlrg">
            <title>Resource Group Memory Allotments</title>
            <image href="graphics/resgroupmem.png" id="image_iqn_dsl_wlrg"/>
          </fig></p>

      <p>When a query's memory usage exceeds the fixed per-transaction memory usage amount, Greenplum Database allocates available resource group shared memory to the query. The maximum amount of resource group memory available to a specific transaction slot is the sum of the transaction's fixed memory and the full resource group shared memory allotment.</p>
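      <p>Continuing the worked example above with an assumed <codeph>MEMORY_SHARED_QUOTA</codeph> of 20 and the default <codeph>CONCURRENCY</codeph> of 20, the group's 7.2 GB per-segment reservation divides as follows:</p>
      <codeblock>shared memory                    = 7.2 GB * 0.20      = 1.44 GB
fixed memory                     = 7.2 GB * 0.80      = 5.76 GB
fixed memory per slot            = 5.76 GB / 20       = 0.288 GB
maximum for a single transaction = 0.288 GB + 1.44 GB = 1.728 GB</codeblock>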

    <section id="topic833sp" xml:lang="en">
     <title>Spill to Disk</title>
      <p><codeph>MEMORY_SPILL_RATIO</codeph> identifies the memory usage threshold for memory-intensive operators in a transaction. When the transaction reaches this memory threshold, it spills to disk. Greenplum Database uses the <codeph>MEMORY_SPILL_RATIO</codeph> to determine the initial memory to allocate to a transaction.</p>
      <p> The minimum <codeph>MEMORY_SPILL_RATIO</codeph> percentage you can specify for a resource group is 0. The maximum is 100. The default <codeph>MEMORY_SPILL_RATIO</codeph> is 20.</p>
      <p>You define the <codeph>MEMORY_SPILL_RATIO</codeph> when you create a resource group. You can selectively set this limit on a per-query basis at the session level with the <codeph><xref href="../ref_guide/config_params/guc-list.xml#memory_spill_ratio" type="section"/></codeph> server configuration parameter.</p>
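      <p>For example, to raise the spill threshold to 30 for subsequent queries in the current session (an illustrative value):</p>
      <p>
        <codeblock>=# SET memory_spill_ratio=30;</codeblock>
      </p>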
    </section>
 
    </body>
  </topic>

  <topic id="topic71717999" xml:lang="en">
    <title>Using Resource Groups</title>
    <body>
    <section id="topic833" xml:lang="en">
      <title>Prerequisite</title>
      <p>Greenplum Database resource groups use Linux Control Groups (cgroups) to manage CPU resources. With cgroups, Greenplum isolates the CPU usage of your Greenplum processes from other processes on the node. This allows Greenplum to support CPU usage restrictions on a per-resource-group basis.</p>
      <p>For detailed information about cgroups, refer to the Control Groups documentation for your Linux distribution.</p>
        <p>Complete the following tasks on each node in your Greenplum Database cluster to set up cgroups for use with resource groups:</p>
        <ol>
          <li>If you are running the SUSE 11+ operating system on your Greenplum Database cluster nodes, you must enable swap accounting on each node and restart your Greenplum Database cluster. The <codeph>swapaccount</codeph> kernel boot parameter governs the swap accounting setting on SUSE 11+ systems. After setting this boot parameter, you must reboot your systems. For details, refer to the <xref href="https://www.suse.com/releasenotes/x86_64/SUSE-SLES/11-SP2/#fate-310471" format="html" scope="external">Cgroup Swap Control</xref> discussion in the SUSE 11 release notes. You must be the superuser or have <codeph>sudo</codeph> access to configure kernel boot parameters and reboot systems.
          </li>
          <li>Create the Greenplum Database cgroups configuration file <codeph>/etc/cgconfig.d/gpdb.conf</codeph>. You must be the superuser or have <codeph>sudo</codeph> access to create this file:
           <codeblock>sudo vi /etc/cgconfig.d/gpdb.conf</codeblock>
          </li>
          <li>Add the following configuration information to <codeph>/etc/cgconfig.d/gpdb.conf</codeph>:
           <codeblock>group gpdb {
     perm {
         task {
             uid = gpadmin;
             gid = gpadmin;
         }
         admin {
             uid = gpadmin;
             gid = gpadmin;
         }
     }
     cpu {
     }
     cpuacct {
     }
}</codeblock> <p>This content configures CPU and CPU accounting control groups managed by the <codeph>gpadmin</codeph> user.</p>
        </li>
        <li>If not already installed and running, install the Control Groups operating system package and start the cgroups service on each Greenplum Database node. The commands you run to perform these tasks will differ based on the operating system installed on the node. You must be the superuser or have <codeph>sudo</codeph> access to run these commands:
        <ul>
          <li> Red Hat/CentOS 7.x systems:
            <codeblock>sudo yum install libcgroup-tools
sudo cgconfigparser -l /etc/cgconfig.d/gpdb.conf </codeblock>
          </li>
          <li> Red Hat/CentOS 6.x systems:
            <codeblock>sudo yum install libcgroup
sudo service cgconfig start </codeblock>
          </li>
          <li> SUSE 11+ systems:
            <codeblock>sudo zypper install libcgroup-tools
sudo cgconfigparser -l /etc/cgconfig.d/gpdb.conf </codeblock>
          </li>
        </ul>
      </li>
          <li>Identify the <codeph>cgroup</codeph> directory mount point for the node:
        <codeblock>grep cgroup /proc/mounts</codeblock><p>The first line of output identifies the <codeph>cgroup</codeph> mount point.</p>
          </li>
          <li>Verify that you set up the Greenplum Database cgroups configuration correctly by running the following commands. Replace <varname>cgroup_mount_point</varname> with the mount point you identified in the previous step:
        <codeblock>ls -l <i>cgroup_mount_point</i>/cpu/gpdb
ls -l <i>cgroup_mount_point</i>/cpuacct/gpdb
        </codeblock> <p>If these directories exist and are owned by <codeph>gpadmin:gpadmin</codeph>, you have successfully configured cgroups for Greenplum Database CPU resource management.</p>
          </li>
        </ol>
    </section>
    <section id="topic8339191" xml:lang="en">
      <title>Procedure</title>

      <p>To use resource groups in your Greenplum Database cluster, you:</p>
      <ol>
        <li><xref href="#topic8" type="topic" format="dita">Enable resource groups for your Greenplum Database cluster</xref>.</li>
        <li><xref href="#topic10" type="topic" format="dita">Create resource groups</xref>.</li>
        <li><xref href="#topic17" type="topic" format="dita">Assign the resource groups to one or more roles</xref>.</li>
        <li><xref href="#topic22" type="topic" format="dita">Use resource management system views to monitor and manage the resource groups</xref>.</li>
      </ol>
    </section>
    </body>
  </topic>

  <topic id="topic8" xml:lang="en">
    <title id="iz153124">Enabling Resource Groups</title>
    <body>
      <p>When you install Greenplum Database, resource queues are enabled by default. To use resource groups instead of resource queues, you must set the <codeph><xref href="../ref_guide/config_params/guc-list.xml#gp_resource_manager" type="section"/></codeph> server configuration parameter.</p>
      <ol id="ol_ec5_4dy_wq">
        <li>Check the current value of the <codeph>gp_resource_manager</codeph> server configuration parameter, and then set it to the value <codeph>"group"</codeph>:
          <codeblock>gpconfig -s gp_resource_manager
gpconfig -c gp_resource_manager -v "group"
</codeblock>
        </li>
        <li>Restart Greenplum Database:
            <codeblock>gpstop
gpstart
</codeblock>
        </li>
      </ol>
      <p>Once enabled, any transaction submitted by a role is directed to the resource group assigned to the role, and is governed by that resource group's concurrency, memory, and CPU limits.</p>
      <p>Greenplum Database creates two default resource groups named <codeph>admin_group</codeph> and <codeph>default_group</codeph>. When you enable resource groups, any role that was not explicitly assigned a resource group is assigned the default group for the role's capability. <codeph>SUPERUSER</codeph> roles are assigned the <codeph>admin_group</codeph>; non-admin roles are assigned the group named <codeph>default_group</codeph>.</p>
      <p>The default resource groups <codeph>admin_group</codeph> and <codeph>default_group</codeph> are created with the following resource limits:</p>
       <table id="default_resgroup_limits">
        <tgroup cols="3">
          <colspec colnum="1" colname="col1" colwidth="1*"/>
          <colspec colnum="2" colname="col2" colwidth="1*"/>
          <colspec colnum="3" colname="col3" colwidth="1*"/>
          <thead>
            <row>
              <entry colname="col1">Limit Type</entry>
              <entry colname="col2">admin_group</entry>
              <entry colname="col3">default_group</entry>
            </row>
          </thead>
          <tbody>
            <row>
              <entry colname="col1">CONCURRENCY</entry>
              <entry colname="col2">10</entry>
              <entry colname="col3">20</entry>
            </row>
            <row>
              <entry colname="col1">CPU_RATE_LIMIT</entry>
              <entry colname="col2">10</entry>
              <entry colname="col3">30</entry>
            </row>
            <row>
              <entry colname="col1">MEMORY_LIMIT</entry>
              <entry colname="col2">10</entry>
              <entry colname="col3">30</entry>
            </row>
            <row>
              <entry colname="col1">MEMORY_SHARED_QUOTA</entry>
              <entry colname="col2">50</entry>
              <entry colname="col3">50</entry>
            </row>
            <row>
              <entry colname="col1">MEMORY_SPILL_RATIO</entry>
              <entry colname="col2">20</entry>
              <entry colname="col3">20</entry>
            </row>
          </tbody>
        </tgroup>
       </table>
    </body>
  </topic>

  <topic id="topic10" xml:lang="en">
    <title id="iz139857">Creating Resource Groups</title>
    <body>
      <p>When you create a resource group, you provide a name, a CPU limit, and a memory limit. You can optionally provide a concurrent transaction limit, a memory shared quota, and a memory spill ratio. Use the <codeph><xref href="../ref_guide/sql_commands/CREATE_RESOURCE_GROUP.xml#topic1" type="topic" format="dita"/></codeph> command to create a new resource group.</p>
      <p> You must have <codeph>SUPERUSER</codeph> privileges to create a resource group. The maximum number of resource groups allowed in your Greenplum Database cluster is 100.</p>
      <p id="iz152723">When you create a resource group, you must provide <codeph>CPU_RATE_LIMIT</codeph> and <codeph>MEMORY_LIMIT</codeph> limit values.
          These limits identify the percentage of Greenplum Database resources
          to allocate to this resource group. For example, to create a resource group named
          <i>rgroup1</i> with a CPU limit of 20 and a memory limit of 25:</p>
        <p>
          <codeblock>=# CREATE RESOURCE GROUP <i>rgroup1</i> WITH (CPU_RATE_LIMIT=20, MEMORY_LIMIT=25);
</codeblock>
        </p>
        <p>The CPU limit of 20 is shared by every role to which <codeph>rgroup1</codeph> is assigned. Similarly, the memory limit of 25 is shared by every role to which <codeph>rgroup1</codeph> is assigned. <codeph>rgroup1</codeph> utilizes the default <codeph>CONCURRENCY</codeph> setting of 20.</p>
        <p>The <codeph><xref href="../ref_guide/sql_commands/ALTER_RESOURCE_GROUP.xml#topic1" type="topic" format="dita"/></codeph> command updates the limits of a resource group.
          To change the limits of a resource group, specify the new values you want for the group.
          For example:</p>
        <p>
          <codeblock>=# ALTER RESOURCE GROUP <i>rg_light</i> SET CONCURRENCY 7;
=# ALTER RESOURCE GROUP <i>exec</i> SET MEMORY_LIMIT 25;
</codeblock>
        </p>
        <note>You cannot set or alter the <codeph>CONCURRENCY</codeph> value for the <codeph>admin_group</codeph> to zero (0).</note>
        <p>The <codeph><xref href="../ref_guide/sql_commands/DROP_RESOURCE_GROUP.xml#topic1" type="topic" format="dita"/></codeph> command drops a resource group. To drop a resource group, the group cannot be assigned to any role, nor can there be any transactions active or waiting in the resource group.
          To drop a resource group:</p>
        <p>
          <codeblock>=# DROP RESOURCE GROUP <i>exec</i>; </codeblock>
        </p>
    </body>
  </topic>

  <topic id="topic17" xml:lang="en">
    <title id="iz172210">Assigning a Resource Group to a Role</title>
    <body>
      <p id="iz172211">When you create a resource group, the group is available for assignment to one or more roles (users). You assign a resource group to a database role using the <codeph>RESOURCE GROUP</codeph> clause of the <codeph><xref href="../ref_guide/sql_commands/CREATE_ROLE.xml#topic1" type="topic" format="dita"/></codeph> or <codeph><xref href="../ref_guide/sql_commands/ALTER_ROLE.xml#topic1" type="topic" format="dita"/></codeph> commands. If you do not specify a resource group for a role, the role is assigned the default group for the role's capability. <codeph>SUPERUSER</codeph> roles are assigned the <codeph>admin_group</codeph>, non-admin roles are assigned the group named <codeph>default_group</codeph>.</p>
      <p>Use the <codeph>ALTER ROLE</codeph> or <codeph>CREATE ROLE</codeph> commands to assign a resource group to a role. For example:</p>
      <p>
        <codeblock>=# ALTER ROLE <i>bill</i> RESOURCE GROUP <i>rg_light</i>;
=# CREATE ROLE <i>mary</i> RESOURCE GROUP <i>exec</i>;
</codeblock>
      </p>
      <p>You can assign a resource group to one or more roles. If you have defined a role hierarchy, assigning a resource group to a parent role does not propagate the assignment down to the members of that role group.</p>
        <p>If you wish to remove a resource group assignment from a role and assign the role the default group, change the role's resource group assignment to <codeph>NONE</codeph>.
          For example:</p>
        <p>
          <codeblock>=# ALTER ROLE <i>mary</i> RESOURCE GROUP NONE;
</codeblock>
        </p>
      </body>
    </topic>


  <topic id="topic22" xml:lang="en">
      <title id="iz152239">Monitoring Resource Group Status</title>
      <body>
        <p>Monitoring the status of your resource groups and queries may involve the following tasks:</p>
        <ul>
          <li id="iz153669"> <xref href="#topic221" type="topic" format="dita"/> </li>
          <li id="iz153670"> <xref href="#topic23" type="topic" format="dita"/> </li>
          <li id="iz153671"> <xref href="#topic25" type="topic" format="dita"/> </li>
          <li id="iz15367125"> <xref href="#topic252525" type="topic" format="dita"/> </li>
          <li id="iz153679"> <xref href="#topic27" type="topic" format="dita"/> </li>
        </ul>
        
      </body>

    <topic id="topic221" xml:lang="en">
      <title id="iz152239">Viewing Resource Group Limits</title>
      <body>
        <p>The <codeph><xref href="../ref_guide/system_catalogs/gp_resgroup_config.xml" type="topic" format="dita"/></codeph> <codeph>gp_toolkit</codeph> system view displays the current and proposed limits for a resource group. The proposed limit differs from the current limit when you alter the limit but the new value can not be immediately applied. To view the limits of all resource groups:</p>
        <p>
          <codeblock>=# SELECT * FROM gp_toolkit.gp_resgroup_config;
</codeblock>
        </p>
      </body>
    </topic>

    <topic id="topic23" xml:lang="en">
      <title id="iz152239">Viewing Resource Group Query Status and CPU/Memory Usage</title>
      <body>
        <p>The <codeph><xref href="../ref_guide/system_catalogs/gp_resgroup_status.xml" type="topic" format="dita"/></codeph> <codeph>gp_toolkit</codeph> system view enables you to view the status and activity of a resource group. The view displays the number of running and queued transactions. It also displays the real-time CPU and memory usage of the resource group. To view this information:</p>
        <p>
          <codeblock>=# SELECT * FROM gp_toolkit.gp_resgroup_status;
</codeblock>
        </p>
      </body>
    </topic>

    <topic id="topic25" xml:lang="en">
      <title id="iz152239">Viewing the Resource Group Assigned to a Role</title>
      <body>
        <p>To view the resource group-to-role assignments, perform the following query on the
            <codeph><xref href="../ref_guide/system_catalogs/pg_roles.xml" type="topic" format="dita"/></codeph> and
            <codeph><xref href="../ref_guide/system_catalogs/pg_resgroup.xml" type="topic" format="dita"/></codeph> system catalog tables:</p>
        <p>
          <codeblock>=# SELECT rolname, rsgname FROM pg_roles, pg_resgroup
     WHERE pg_roles.rolresgroup=pg_resgroup.oid;
</codeblock>
        </p>
      </body>
    </topic>

    <topic id="topic252525" xml:lang="en">
      <title id="iz15223925">Viewing a Resource Group's Running and Pending Queries</title>
      <body>
        <p>To view a resource group's running queries, pending queries, and how long
            the pending queries have been queued, examine the
            <codeph><xref href="../ref_guide/system_catalogs/pg_stat_activity.xml" type="topic" format="dita"/></codeph>
            system catalog table:</p>
        <p>
          <codeblock>=# SELECT current_query, waiting, rsgname, rsgqueueduration 
     FROM pg_stat_activity;
</codeblock>
        </p>
      </body>
    </topic>

    <topic id="topic27" xml:lang="en">
      <title id="iz153732">Canceling a Queued Transaction in a Resource Group</title>
      <body>
        <p>There may be cases when you want to cancel a queued transaction in a resource group. For
          example, you may want to remove a query that is waiting in the resource group queue but
          has not yet been executed. Or, you may want to stop a running query that is taking too
          long to execute, or one that is sitting idle in a transaction and taking up resource
          group transaction slots that are needed by other users.</p>
        <p>To cancel a queued transaction, you must first determine the process id (pid) associated with the transaction. Once you have obtained the process id, you can invoke <codeph>pg_cancel_backend()</codeph> to end that process, as shown below.</p>
        <p>For example, to view the process information associated with all statements currently
          active or waiting in all resource groups, run the following query. If the query
          returns no results, then there are no running or queued transactions
          in any resource group.</p>
        <p>
          <codeblock>=# SELECT rolname, g.rsgname, procpid, waiting, current_query, datname 
     FROM pg_roles, gp_toolkit.gp_resgroup_status g, pg_stat_activity 
     WHERE pg_roles.rolresgroup=g.groupid
        AND pg_stat_activity.usename=pg_roles.rolname;
</codeblock>
        </p>

        <p>Sample partial query output:</p>
        <codeblock> rolname | rsgname  | procpid | waiting |     current_query     | datname 
---------+----------+---------+---------+-----------------------+---------
  sammy  | rg_light |  31861  |    f    | &lt;IDLE&gt; in transaction | testdb
  billy  | rg_light |  31905  |    t    | SELECT * FROM topten; | testdb</codeblock>
        <p>Use this output to identify the process id (<codeph>procpid</codeph>) of the transaction you want to cancel, and then cancel the process. For example, to cancel the pending query identified in the sample output above:</p>
        <p>
          <codeblock>=# SELECT pg_cancel_backend(31905);</codeblock>
        </p>
        <p>You can provide an optional message in a second argument to <codeph>pg_cancel_backend()</codeph> to indicate to the user why the process was canceled.</p>
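        <p>For example, to cancel the same pending query and record a reason (the message text is illustrative):</p>
        <p>
          <codeblock>=# SELECT pg_cancel_backend(31905, 'canceling to free a resource group transaction slot');</codeblock>
        </p>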
        <note type="note">
          <p>Do not use an operating system <codeph>KILL</codeph> command to cancel any Greenplum Database process.</p>
        </note>
      </body>
    </topic>

  </topic>

</topic>