Unverified commit 684fe032 authored by Lisa Owen, committed by GitHub

docs - additional updates for removal of gpperfmon (#9564)

* docs - additional updates for removal of gpperfmon

* remove XXX placeholder

* edits requested by david

* remove an unused file
Parent 7c1d77b3
......@@ -105,7 +105,6 @@ FROM (SELECT count(*) c, gp_segment_id FROM facts GROUP BY 2) AS a;</codeblock>
1 | template1
10898 | template0
38817 | pws
39682 | gpperfmon
(6 rows)
</codeblock></p></li>
<li>Run a <codeph>gpssh</codeph> command to check file sizes across all of the segment
......
......@@ -86,8 +86,7 @@
id="image_ojd_t1m_2r"/>
</entry>
<entry>Validate that the master data directory has no extremely large files in the
<codeph>pg_log</codeph> or <codeph>gpperfmon/data</codeph>
<ph>directories</ph>.</entry>
<codeph>pg_log</codeph> directory.</entry>
</row>
<row>
<entry namest="c1" nameend="c2"><p><b>Offline Pre-Expansion Tasks</b></p>
......
......@@ -21,15 +21,6 @@
<li>Transferring data between Greenplum databases</li>
<li>System state reporting</li>
</ul>
<p>Greenplum Database includes an optional performance management database that contains query
status information and system metrics. The <codeph>gpperfmon_install</codeph> management
utility creates the database, named <codeph>gpperfmon</codeph>, and enables data collection
agents that execute on the Greenplum Database master and segment hosts. Data collection agents on the
segment hosts collect query status from the segments, as well as system metrics such as CPU
and memory utilization. An agent on the master host periodically (typically every 15 seconds)
retrieves the data from the segment host agents and updates the <codeph>gpperfmon</codeph>
database. Users can query the <codeph>gpperfmon</codeph> database to see the query and system
metrics. </p>
<p otherprops="pivotal">Pivotal provides an optional system monitoring and management tool,
Greenplum Command Center, which administrators can install and enable with Greenplum Database.
Greenplum Command Center provides a web-based user interface for viewing system metrics and
......
......@@ -775,36 +775,6 @@
</body>
</topic>
</topic>
<topic id="topic33" xml:lang="en">
<title id="kh171299">System Monitoring Parameters</title>
<topic id="topic36" xml:lang="en">
<title>Greenplum Performance Monitoring Data Collection Agents</title>
<body>
          <p>The following parameters configure the data collection agents that populate the
            <codeph>gpperfmon</codeph> database.</p>
<simpletable id="kh171891">
<strow>
<stentry>
<p>
<codeph>gp_enable_gpperfmon</codeph>
</p>
<p>
<codeph>gp_gpperfmon_send_interval</codeph>
</p>
</stentry>
<stentry>
<p>
<codeph>gpperfmon_log_alert_level</codeph>
</p>
<p>
<codeph>gpperfmon_port</codeph>
</p>
</stentry>
</strow>
</simpletable>
</body>
</topic>
</topic>
<topic id="topic37" xml:lang="en">
<title id="kh171364">Runtime Statistics Collection Parameters</title>
<body>
......
......@@ -12,18 +12,10 @@
<p>Also, be sure to review <xref href="../monitoring/monitoring.dita#topic_kmz_lbg_rp"/> for
monitoring activities you can script to quickly detect problems in the system. </p>
</body>
<topic id="topic2" xml:lang="en">
<topic id="topic2" xml:lang="en" otherprops="pivotal">
<title id="kj157177">Monitoring Database Activity and Performance</title>
<body>
<p>Greenplum Database includes an optional system monitoring and management database,
<codeph>gpperfmon</codeph>, that administrators can enable. The
<codeph>gpperfmon_install</codeph> command-line utility creates the
<codeph>gpperfmon</codeph> database and enables data collection agents that collect and
store query and system metrics in the database. Administrators can query metrics in the
<codeph>gpperfmon</codeph> database. See the documentation for the
<codeph>gpperfmon</codeph> database in the <cite>Greenplum Database Reference
Guide</cite>.</p>
<p otherprops="pivotal">Pivotal Greenplum Command Center, an optional web-based interface,
<p>Pivotal Greenplum Command Center, an optional web-based interface,
provides cluster status information, graphical administrative tools, real-time query
monitoring, and historical cluster and query data. Download the Greenplum Command Center
package from <xref href="https://network.pivotal.io/products/pivotal-gpdb" scope="external"
......
......@@ -193,45 +193,6 @@ FROM gp_master_mirroring;</codeblock></p>
</table>
</body>
</topic>
<topic id="topic_atb_b2g_rp">
<title>Database Alert Log Monitoring </title>
<body>
<table frame="all" id="table_dvp_d2g_rp">
<title>Database Alert Log Monitoring Activities</title>
<tgroup cols="3">
<colspec colname="c1" colnum="1"/>
<colspec colname="c2" colnum="2"/>
<colspec colname="c3" colnum="3"/>
<thead>
<row>
<entry>Activity</entry>
<entry>Procedure</entry>
<entry>Corrective Actions</entry>
</row>
</thead>
<tbody>
<row>
<entry>Check for FATAL and ERROR log messages from the
system.<p>Recommended frequency: run every 15
minutes</p><p>Severity: WARNING</p><p><i>This activity and
the next are two methods for monitoring messages in the
log_alert_history table. It is only necessary to set up
one or the other.</i></p></entry>
<entry>
<p>Run the following query in the <codeph>gpperfmon</codeph>
database:<codeblock>SELECT * FROM log_alert_history
WHERE logseverity in ('FATAL', 'ERROR')
AND logtime &gt; (now() - interval '15 minutes');</codeblock></p>
</entry>
<entry>Send an alert to the DBA to analyze the alert. You may want
to add additional filters to the query to ignore certain
messages of low interest.</entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<topic id="topic_y4c_4gg_rp">
<title>Hardware and Operating System Monitoring</title>
<body>
......
......@@ -114,17 +114,9 @@ a.query
the current memory utilization and idle time for sessions that are running
queries on Greenplum Database. For information about the view, see <xref
href="managing/monitor.xml#topic_slt_ddv_1q"/>.</p>
<p>You can enable a dedicated database, <codeph>gpperfmon</codeph>, in which data
collection agents running on each segment host save query and system utilization
metrics. Refer to the <codeph>gpperfmon_install</codeph> management utility
reference in the <cite>Greenplum Database Management Utility Reference
Guide</cite> for help creating the <codeph>gpperfmon</codeph> database and
managing the agents. See documentation for the tables and views in the
<codeph>gpperfmon</codeph> database in the <cite>Greenplum Database
Reference Guide</cite>.</p>
<p otherprops="pivotal">The optional Greenplum Command Center web-based user
interface graphically displays query and system utilization metrics saved in the
<codeph>gpperfmon</codeph> database. See the <xref
interface graphically displays query and system utilization metrics.
See the <xref
href="https://gpcc.docs.pivotal.io" format="html" scope="external">Greenplum
Command Center Documentation</xref> web site for procedures to enable
Greenplum Command Center.</p>
......
......@@ -77,12 +77,6 @@
</entry>
<entry>segment instance start log</entry>
</row>
<row>
<entry>
<codeph>$MASTER_DATA_DIRECTORY/gpperfmon/logs/gpmon.*.log</codeph>
</entry>
<entry>gpperfmon logs</entry>
</row>
<row>
<entry>
<codeph>$MASTER_DATA_DIRECTORY/pg_log/*.csv</codeph>,
......
......@@ -244,7 +244,6 @@ FROM (SELECT count(*) c, gp_segment_id FROM facts GROUP BY 2) AS a;</codeblock>
1 | template1
10898 | template0
38817 | pws
39682 | gpperfmon
(6 rows)
</codeblock></p></li>
<li>Run a <codeph>gpssh</codeph> command to check file sizes across all of the segment
......
......@@ -107,7 +107,7 @@
utilize the system resources of those hosts. The resource consumption of the data
collection agent processes on these hosts is minimal and should not significantly impact
database performance. Historical data collected by the collection agents is stored in its
own Command Center database (named <codeph>gpperfmon</codeph>) within your Greenplum
own Command Center database within your Greenplum
Database system. Collected data is distributed just like regular database data, so you
will need to account for disk space in the data directory locations of your Greenplum
segment instances. The amount of space required depends on the amount of historical data
......
......@@ -290,15 +290,8 @@
<title>Greenplum Performance Monitoring</title>
<!--Pivotal-->
<body>
<p>Greenplum Database includes a dedicated system monitoring and management database, named
gpperfmon, that administrators can install and enable. When this database is enabled, data
collection agents on each segment host collect query status and system metrics. At regular
intervals (typically every 15 seconds), an agent on the Greenplum master requests the data
from the segment agents and updates the gpperfmon database. Users can query the gpperfmon
database to see the stored query and system metrics. For more information see the "gpperfmon
Database Reference" in the <cite>Greenplum Database Reference Guide</cite>.</p>
<p>Greenplum Command Center is an optional web-based performance monitoring and management
tool for Greenplum Database, based on the gpperfmon database. Administrators can install
tool for Greenplum Database. Administrators can install
Command Center separately from Greenplum Database.</p>
<fig id="fig_f5t_whm_kbb">
<title>Greenplum Performance Monitoring Architecture</title>
......
......@@ -209,9 +209,6 @@
<xref href="#gp_enable_fast_sri"/>
</li>
<li><xref href="#gp_enable_global_deadlock_detector"/></li>
<li>
<xref href="#gp_enable_gpperfmon"/>
</li>
<li>
<xref href="#gp_enable_groupext_distinct_gather"/>
</li>
......@@ -264,9 +261,6 @@
<li>
<xref href="#gp_global_deadlock_detector_period"/>
</li>
<li>
<xref href="#gp_gpperfmon_send_interval"/>
</li>
<li>
<xref href="#gp_hashjoin_tuples_per_bucket"/>
</li>
......@@ -282,12 +276,6 @@
<li><xref href="#gp_workfile_limit_files_per_query"/></li>
<li><xref href="#gp_workfile_limit_per_query"/></li>
<li><xref href="#gp_workfile_limit_per_segment"/></li>
<li>
<xref href="#gpperfmon_log_alert_level"/>
</li>
<li>
<xref href="#gpperfmon_port"/>
</li>
<li><xref href="#ignore_checksum_failure" format="dita"/></li>
<li>
<xref href="#integer_datetimes"/>
......@@ -2816,34 +2804,6 @@
</table>
</body>
</topic>
<topic id="gp_enable_gpperfmon">
<title>gp_enable_gpperfmon</title>
<body>
<p>Enables or disables the data collection agents that populate the <codeph>gpperfmon</codeph>
database.</p>
<table id="gp_enable_gpperfmon_table">
<tgroup cols="3">
<colspec colnum="1" colname="col1" colwidth="1*"/>
<colspec colnum="2" colname="col2" colwidth="1*"/>
<colspec colnum="3" colname="col3" colwidth="1*"/>
<thead>
<row>
<entry colname="col1">Value Range</entry>
<entry colname="col2">Default</entry>
<entry colname="col3">Set Classifications</entry>
</row>
</thead>
<tbody>
<row>
<entry colname="col1">Boolean</entry>
<entry colname="col2">off</entry>
<entry colname="col3">local<p>system</p><p>restart</p></entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<topic id="gp_enable_groupext_distinct_gather">
<title>gp_enable_groupext_distinct_gather</title>
<body>
......@@ -3480,68 +3440,6 @@
</table>
</body>
</topic>
<topic id="gp_gpperfmon_send_interval">
<title>gp_gpperfmon_send_interval</title>
<body>
    <p>Sets the frequency at which the Greenplum Database server processes send query execution
updates to the data collection agent processes used to populate the
<codeph>gpperfmon</codeph> database. Query operations executed during
this interval are sent through UDP to the segment monitor agents. If you find that an
excessive number of UDP packets are dropped during long-running, complex queries, you may
consider increasing this value.</p>
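As a hedged illustration of the tuning advice above, the interval could be raised in <codeph>postgresql.conf</codeph> on the master; the <codeph>5sec</codeph> value here is an assumed example, not a recommendation from the source:

```
# postgresql.conf (master host) -- hypothetical value; the documented default is 1sec
gp_gpperfmon_send_interval = 5sec
```

A restart is required for the change to take effect, per the `restart` set classification in the table that follows.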
<table id="gp_gpperfmon_send_interval_table">
<tgroup cols="3">
<colspec colnum="1" colname="col1" colwidth="1*"/>
<colspec colnum="2" colname="col2" colwidth="1*"/>
<colspec colnum="3" colname="col3" colwidth="1*"/>
<thead>
<row>
<entry colname="col1">Value Range</entry>
<entry colname="col2">Default</entry>
<entry colname="col3">Set Classifications</entry>
</row>
</thead>
<tbody>
<row>
<entry colname="col1">Any valid time expression (number and unit)</entry>
<entry colname="col2">1sec</entry>
<entry colname="col3">master<p>system</p><p>restart</p><p>superuser</p></entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<topic id="gpperfmon_log_alert_level">
<title>gpperfmon_log_alert_level</title>
<body>
<p>Controls which message levels are written to the gpperfmon log. Each level includes all the
levels that follow it. The later the level, the fewer messages are sent to the log. </p>
<note>If the <codeph>gpperfmon</codeph> database is installed and is monitoring the database,
the default value is warning.</note>
<table id="gpperfmon_log_alert_level_table">
<tgroup cols="3">
<colspec colnum="1" colname="col1" colwidth="1*"/>
<colspec colnum="2" colname="col2" colwidth="1*"/>
<colspec colnum="3" colname="col3" colwidth="1*"/>
<thead>
<row>
<entry colname="col1">Value Range</entry>
<entry colname="col2">Default</entry>
<entry colname="col3">Set Classifications</entry>
</row>
</thead>
<tbody>
<row>
<entry colname="col1">none <p>warning</p><p>error</p><p>fatal</p><p>panic</p></entry>
<entry colname="col2">none</entry>
<entry colname="col3">local<p>system</p><p>restart</p></entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<topic id="gp_hashjoin_tuples_per_bucket">
<title>gp_hashjoin_tuples_per_bucket</title>
<body>
......@@ -5079,34 +4977,6 @@
</table>
</body>
</topic>
<topic id="gpperfmon_port">
<title>gpperfmon_port</title>
<body>
<p>Sets the port on which all data collection agents communicate with the
master. </p>
<table id="gpperfmon_port_table">
<tgroup cols="3">
<colspec colnum="1" colname="col1" colwidth="1*"/>
<colspec colnum="2" colname="col2" colwidth="1*"/>
<colspec colnum="3" colname="col3" colwidth="1*"/>
<thead>
<row>
<entry colname="col1">Value Range</entry>
<entry colname="col2">Default</entry>
<entry colname="col3">Set Classifications</entry>
</row>
</thead>
<tbody>
<row>
<entry colname="col1">integer</entry>
<entry colname="col2">8888</entry>
<entry colname="col3">master<p>system</p><p>restart</p></entry>
</row>
</tbody>
</tgroup>
</table>
</body>
</topic>
<topic id="ignore_checksum_failure">
<title>ignore_checksum_failure</title>
<body>
......
......@@ -817,40 +817,6 @@
</body>
</topic>
</topic>
<topic id="topic33" xml:lang="en">
<title id="kh171299">System Monitoring Parameters</title>
<abstract>These configuration parameters control Greenplum Database data collection and
notifications related to database monitoring.</abstract>
<topic id="topic36" xml:lang="en">
<title>Greenplum Performance Database</title>
<body>
<p>The following parameters configure the data collection agents that populate the
<codeph>gpperfmon</codeph> database.</p>
<simpletable id="kh171891" frame="none">
<strow>
<stentry>
<p>
<xref href="guc-list.xml#gp_enable_gpperfmon" type="section"
>gp_enable_gpperfmon</xref>
</p>
<p>
<xref href="guc-list.xml#gp_gpperfmon_send_interval" type="section"
>gp_gpperfmon_send_interval</xref>
</p>
</stentry>
<stentry>
<p>
<xref href="guc-list.xml#gpperfmon_log_alert_level" type="section"
>gpperfmon_log_alert_level</xref>
</p>
<p>
<xref href="guc-list.xml#gpperfmon_port" type="section">gpperfmon_port</xref>
</p>
</stentry>
</strow>
</simpletable>
</body>
</topic>
<topic id="query-metrics">
<title>Query Metrics Collection Parameters</title>
<body>
......@@ -876,7 +842,6 @@
</simpletable>
</body>
</topic>
</topic>
<topic id="topic37" xml:lang="en">
<title id="kh171364">Runtime Statistics Collection Parameters</title>
<body>
......
......@@ -93,7 +93,6 @@
<topicref href="guc-list.xml#gp_enable_direct_dispatch"/>
<topicref href="guc-list.xml#gp_enable_exchange_default_partition"/>
<topicref href="guc-list.xml#gp_enable_fast_sri"/>
<topicref href="guc-list.xml#gp_enable_gpperfmon"/>
<topicref href="guc-list.xml#gp_enable_groupext_distinct_gather"/>
<topicref href="guc-list.xml#gp_enable_groupext_distinct_pruning"/>
<topicref href="guc-list.xml#gp_enable_multiphase_agg"/>
......@@ -112,8 +111,6 @@
<topicref href="guc-list.xml#gp_fts_probe_threadcount"/>
<topicref href="guc-list.xml#gp_fts_probe_timeout"/>
<topicref href="guc-list.xml#gp_global_deadlock_detector_period"/>
<topicref href="guc-list.xml#gp_gpperfmon_send_interval"/>
<topicref href="guc-list.xml#gpperfmon_log_alert_level"/>
<topicref href="guc-list.xml#gp_hashjoin_tuples_per_bucket"/>
<topicref href="guc-list.xml#gp_ignore_error_table"/>
<topicref href="guc-list.xml#topic_lvm_ttc_3p"/>
......@@ -165,7 +162,6 @@
<topicref href="guc-list.xml#gp_workfile_limit_files_per_query"/>
<topicref href="guc-list.xml#gp_workfile_limit_per_query"/>
<topicref href="guc-list.xml#gp_workfile_limit_per_segment"/>
<topicref href="guc-list.xml#gpperfmon_port"/>
<topicref href="guc-list.xml#ignore_checksum_failure"/>
<topicref href="guc-list.xml#integer_datetimes"/>
<topicref href="guc-list.xml#IntervalStyle"/>
......
......@@ -283,7 +283,6 @@
</topicref>
</topicref>
<topicref href="gp_toolkit.ditamap" format="ditamap"/>
<topicref href="gpperfmon/gpperfmon.ditamap" format="ditamap"/>
<topicref href="extensions/serverapi.ditamap" format="ditamap"/>
<topicref href="misc.ditamap" format="ditamap"/>
</topicref>
......
......@@ -17,10 +17,8 @@
reserved.</p>
<p>The tablespace names <codeph>pg_default</codeph> and <codeph>pg_global</codeph> are
reserved.</p>
<p>The role names <codeph>gpadmin</codeph> and <codeph>gpmon</codeph> are reserved.
<codeph>gpadmin</codeph> is the default Greenplum Database superuser role. The
<codeph>gpmon</codeph> role owns the <codeph>gpperfmon</codeph> database<ph
otherprops="pivotal"> and is also used by Greenplum Command Center</ph>.</p>
<p>The role name <codeph>gpadmin</codeph> is reserved.
<codeph>gpadmin</codeph> is the default Greenplum Database superuser role.</p>
<p>In data files, the characters that delimit fields (columns) and rows have a special meaning.
If they appear within the data you must escape them so that Greenplum Database treats them as
data and not as delimiters. The backslash character (<codeph>\</codeph>) is the default escape
......
......@@ -10,7 +10,6 @@
<topicref href="topics/ports_and_protocols.xml"/>
<topicref href="topics/Authenticate.xml"/>
<topicref href="topics/Authorization.xml"/>
<topicref href="topics/gpcc.xml" otherprops="pivotal"/>
<topicref href="topics/Auditing.xml"/>
<topicref href="topics/Encryption.xml"/>
<topicref href="topics/BestPractices.xml"/>
......
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic id="topic_zyt_rxp_f5">
<title>Greenplum Command Center Security</title>
<body>
<p>Greenplum Command Center is a web-based application for monitoring and managing Greenplum
clusters. Command Center works with data collected by agents running on the segment hosts and
saved to the gpperfmon database. Installing Command Center creates the gpperfmon database and
the <codeph>gpmon</codeph> database role if they do not already exist. It creates the
<codeph>gpmetrics</codeph> schema in the gpperfmon database, which contains metrics and
query history tables populated by the Greenplum Database metrics collector module.</p>
<note>The <codeph>gpperfmon_install</codeph> utility also creates the gpperfmon database and
<codeph>gpmon</codeph> role, but Command Center no longer requires the history tables it creates
in the database. Do not use <codeph>gpperfmon_install</codeph> unless you need the old query
history tables for some other purpose. <codeph>gpperfmon_install</codeph> enables the
<codeph>gpmmon</codeph> and <codeph>gpsmon</codeph> agents, which add unnecessary load to
the Greenplum Database system if you do not need the old history tables.</note>
<section>
<title>The gpmon User</title>
<p>The Command Center installer creates the <codeph>gpmon</codeph> database role and adds the
role to the <codeph>pg_hba.conf</codeph> file with the following
entries:<codeblock>local gpperfmon gpmon md5
host all gpmon 127.0.0.1/28 md5
host all gpmon ::1/128 md5</codeblock>These
entries allow <codeph>gpmon</codeph> to establish a local socket connection to the gpperfmon
database and a TCP/IP connection to any database.</p>
<p>The <codeph>gpmon</codeph> database role is a superuser. In a secure or production
environment, it may be desirable to restrict the <codeph>gpmon</codeph> user to just the
gpperfmon database. Do this by editing the <codeph>gpmon</codeph> host entry in the
<codeph>pg_hba.conf</codeph> file and changing <codeph>all</codeph> in the database field
to
<codeph>gpperfmon</codeph>:<codeblock>local gpperfmon gpmon md5
host gpperfmon gpmon 127.0.0.1/28 md5
host gpperfmon gpmon ::1/128 md5</codeblock></p>
<p>The password used to authenticate the <codeph>gpmon</codeph> user is stored in the
<codeph>gpadmin</codeph> home directory in the <codeph>~/.pgpass</codeph> file. The
<codeph>~/.pgpass</codeph> file must be owned by the <codeph>gpadmin</codeph> user and be
RW-accessible only by the <codeph>gpadmin</codeph> user. The Command Center installer
creates the <codeph>gpmon</codeph> role with the default password "changeme". Be sure to
change the password immediately after you install Command Center. Use the <codeph>ALTER
ROLE</codeph> command to change the password in the database, change the password in the
<codeph>~/.pgpass</codeph> file, and then restart Command Center with the <codeph>gpcc
start</codeph> command. </p>
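The rotation steps above (change the password with <codeph>ALTER ROLE</codeph>, update <codeph>~/.pgpass</codeph>, restart with <codeph>gpcc start</codeph>) can be sketched; the file-editing step, for example, as a small helper. This is a minimal sketch: the function name and the password values are illustrative and not part of Greenplum or Command Center.

```python
# Hedged sketch: rewrite the password field for one user in a .pgpass file.
# .pgpass line format: hostname:port:database:username:password
from pathlib import Path

def update_pgpass(path: Path, user: str, new_password: str) -> None:
    """Replace the password field of every line for `user`, then lock permissions."""
    lines = []
    for line in path.read_text().splitlines():
        parts = line.split(":")
        if len(parts) == 5 and parts[3] == user:
            parts[4] = new_password
        lines.append(":".join(parts))
    path.write_text("\n".join(lines) + "\n")
    # The file must be readable/writable only by its owner, as noted above.
    path.chmod(0o600)
```

After running <codeph>ALTER ROLE gpmon WITH PASSWORD '...'</codeph> in the database, a call such as <codeph>update_pgpass(Path.home() / ".pgpass", "gpmon", "new-secret")</codeph> would bring the file in sync before restarting Command Center.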
<p>Because the <codeph>.pgpass</codeph> file contains the plain-text password of the
<codeph>gpmon</codeph> user, you may want to remove it and supply the
<codeph>gpmon</codeph> password using a more secure method. The <codeph>gpmon</codeph> password is needed
when you run the <codeph>gpcc start</codeph>, <codeph>gpcc stop</codeph>, or <codeph>gpcc
status</codeph> commands. You can add the <codeph>-W</codeph> option to the
<codeph>gpcc</codeph> command to have the command prompt you to enter the password.
Alternatively, you can set the <codeph>PGPASSWORD</codeph> environment variable to the gpmon
password before you run the <codeph>gpcc</codeph> command.</p>
<p>Command Center does not allow logins from any role configured with trust authentication,
including the <codeph>gpadmin</codeph> user. </p>
<p>The <codeph>gpmon</codeph> user can log in to the Command Center Console and has access to
all of the application's features. You can allow other database roles access to Command
Center so that you can secure the <codeph>gpmon</codeph> user and restrict other users'
access to Command Center features. Setting up other Command Center users is described in the
next section. </p>
</section>
<section>
<title>Greenplum Command Center Users</title>
<p>To log in to the Command Center web application, a user must be allowed access to the
gpperfmon database in <codeph>pg_hba.conf</codeph>. For example, to make
<codeph>user1</codeph> a regular Command Center user, edit the
<codeph>pg_hba.conf</codeph> file and either add or edit a line for the user so that the
gpperfmon database is included in the database field. For example:</p>
<codeblock>host gpperfmon,accounts user1 127.0.0.1/28 md5</codeblock>
<p>The Command Center web application includes an Admin interface to add, remove, and edit entries
in the <codeph>pg_hba.conf</codeph> file and reload the file into Greenplum Database. </p>
<p>Command Center has the following types of users:<ul id="ul_tdv_qnt_g5">
<li><i>Self Only</i> users can view metrics and view and cancel their own queries. Any
Greenplum Database user successfully authenticated through the Greenplum Database
authentication system can access Greenplum Command Center with Self Only permission.
            Higher permission levels are required to view and cancel others’ queries and to access
the System and Admin Control Center features.</li>
<li><i>Basic</i> users can view metrics, view all queries, and cancel their own queries.
Users with Basic permission are members of the Greenplum Database
<codeph>gpcc_basic</codeph> group. </li>
<li><i>Operator Basic</i> users can view metrics, view their own and others’ queries,
cancel their own queries, and view the System and Admin screens. Users with Operator
Basic permission are members of the Greenplum Database
<codeph>gpcc_operator_basic</codeph> group.</li>
<li><i>Operator</i> users can view their own and others’ queries, cancel their own and
            others’ queries, and view the System and Admin screens. Users with Operator permission
are members of the Greenplum Database <codeph>gpcc_operator</codeph> group.</li>
<li><i>Admin</i> users can access all views and capabilities in the Command Center.
Greenplum Database users with the <codeph>SUPERUSER</codeph> privilege have Admin
permissions in Command Center.</li>
</ul></p>
<p>The Command Center web application has an Admin interface you can use to change a Command
Center user's access level. </p>
</section>
<section>
<title>Enabling SSL for Greenplum Command Center</title>
<p>The Command Center web server can be configured to support SSL so that client connections
are encrypted. To enable SSL, install a <codeph>.pem</codeph> file containing the web
server's certificate and private key on the web server host and then enter the full path to
the <codeph>.pem</codeph> file when prompted by the Command Center installer.</p>
</section>
<section>
<title>Enabling Kerberos Authentication for Greenplum Command Center Users</title>
<p>If Kerberos authentication is enabled for Greenplum Database, Command Center users can also
authenticate with Kerberos. Command Center supports three Kerberos authentication modes:
<i>strict</i>, <i>normal</i>, and <i>gpmon-only</i>. </p>
<parml>
<plentry>
<pt>Strict</pt>
<pd>Command Center has a Kerberos keytab file containing the Command Center service
principal and a principal for every Command Center user. If the principal in the
client’s connection request is in the keytab file, the web server grants the client
access and the web server connects to Greenplum Database using the client’s principal
name. If the principal is not in the keytab file, the connection request fails.</pd>
</plentry>
<plentry>
<pt>Normal</pt>
<pd>The Command Center Kerberos keytab file contains the Command Center principal and may
contain principals for Command Center users. If the principal in the client’s connection
request is in Command Center’s keytab file, it uses the client’s principal for database
connections. Otherwise, Command Center uses the <codeph>gpmon</codeph> user for database
connections.</pd>
</plentry>
<plentry>
<pt>gpmon-only</pt>
<pd>The Command Center uses the <codeph>gpmon</codeph> database role for all Greenplum
Database connections. No client principals are needed in the Command Center’s keytab
file.</pd>
</plentry>
</parml>
</section>
<p>See the <xref href="http://gpcc.docs.pivotal.io" format="html" scope="external">Greenplum
Command Center documentation</xref> for instructions to enable Kerberos authentication with
      Greenplum Command Center.</p>
</body>
</topic>
......@@ -126,14 +126,6 @@ MIRROR_PORT_BASE = 7000</codeblock></note>
server. <p>The gpload utility runs one or more instances of gpfdist with ports or port
ranges specified in a configuration file.</p></entry>
</row>
<row>
<entry>Gpperfmon agents</entry>
<entry>TCP 8888</entry>
<entry>Connection port for gpperfmon agents (<codeph>gpmmon</codeph> and
<codeph>gpsmon</codeph>) executing on Greenplum Database hosts. Configure by setting
the <codeph>gpperfmon_port</codeph> configuration variable in
<filepath>postgresql.conf</filepath> on master and segment hosts.</entry>
</row>
<row>
<entry>Backup completion notification</entry>
<entry>TCP 25, TCP 587, SMTP</entry>
......
......@@ -70,7 +70,6 @@
<!-- hidden until testing is complete -msk
<p><xref href="ref/gpmovemirrors.xml#topic1"/>
</p -->
<p><xref href="ref/gpperfmon_install.xml#topic1"/></p>
<p><xref href="ref/gppkg.xml#topic1" type="topic" format="dita"/></p>
<p><xref href="ref/gprecoverseg.xml#topic1" type="topic" format="dita"/></p>
<p><xref href="ref/gpreload.xml#topic1"/></p>
......
......@@ -41,7 +41,6 @@
<!-- hidden until testing is complete -msk
<topicref href="ref/gpmovemirrors.xml"/>
-->
<topicref href="ref/gpperfmon_install.xml"/>
<topicref href="ref/gppkg.xml"/>
<topicref href="ref/gprecoverseg.xml"/>
<topicref href="ref/gpreload.xml"/>
......