Commit f026ac0f authored by M mkiyama, committed by David Yozie

GPDB DOCS - add/update warnings (#2136)

* GPDB DOCS - add/update warnings

[ci skip]

* GPDB DOCS - add/update warnings - conditionalized Pivotal information.

[ci skip]
Parent 7b148864
......@@ -8,7 +8,14 @@
configured.</shortdesc>
</abstract>
<body>
<p>For information about the utilities that are used to enable high
availability, see the <i>Greenplum Database Utility Guide</i>. </p>
<note type="important"> When data loss is not acceptable for a <ph otherprops="pivotal"
>Pivotal </ph>Greenplum Database cluster, master and segment mirroring must be
enabled<ph otherprops="pivotal"> in order for the cluster to be supported by
Pivotal</ph>. Without mirroring, system and data availability is not guaranteed<ph
otherprops="pivotal">, Pivotal will make best efforts to restore a cluster in this
case</ph>. For information about master and segment mirroring, see <xref
href="../../intro/about_ha.xml#about_ha">About Redundancy and Failover</xref>.</note>
<p>For information about the utilities that are used to enable high availability, see the
<i>Greenplum Database Utility Guide</i>. </p>
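    <p>As an illustrative sketch only (the utilities and their options are documented in the
      <i>Greenplum Database Utility Guide</i>), segment and master mirroring are typically
      enabled with the <codeph>gpaddmirrors</codeph> and <codeph>gpinitstandby</codeph>
      utilities; the port offset and standby host name below are placeholder values:</p>
    <codeblock># add mirrors for existing segments, deriving mirror ports from a port offset
$ gpaddmirrors -p 10000

# initialize a standby master on another host (placeholder host name)
$ gpinitstandby -s smdw</codeblock>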
</body>
</topic>
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Composite//EN"
"ditabase.dtd">
<topic id="about_ha" xml:lang="en">
<title id="iw157531">About Redundancy and Failover in Greenplum Database</title>
<shortdesc>This topic provides a high-level overview of Greenplum Database high availability
features.</shortdesc>
......@@ -14,10 +12,13 @@
<xref
href="../highavail/topics/g-overview-of-high-availability-in-greenplum-database.xml#topic2"
/>.</p>
<note type="important"> When data loss is not acceptable for a <ph otherprops="pivotal">Pivotal
</ph>Greenplum Database cluster, master and segment mirroring must be enabled<ph
otherprops="pivotal"> in order for the cluster to be supported by Pivotal</ph>. Without
      mirroring, system and data availability is not guaranteed<ph otherprops="pivotal">; Pivotal
will make best efforts to restore a cluster in this case</ph>.</note>
<section id="segment_mirroring" xml:lang="en">
<title id="iw157552">About Segment Mirroring</title>
<p>When you deploy your Greenplum Database system, you can configure <i>mirror</i> segments.
Mirror segments allow database queries to fail over to a backup segment if the primary
segment becomes unavailable. Mirroring is strongly recommended for production systems and
......@@ -35,34 +36,27 @@
<p>
<xref format="dita" href="#about_ha/iw157574" type="fig"/> shows how table data is
distributed across segments when spread mirroring is configured.</p>
<fig id="iw157574">
<title>Spread Mirroring in Greenplum Database</title>
<image href="../graphics/spread-mirroring.png" placement="break"/>
</fig>
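      <p>In a running system you can confirm how mirrors are placed (a sketch; output and the
        available options vary by release, see the <codeph>gpstate</codeph> reference):</p>
      <codeblock># list mirror segment details
$ gpstate -m

# show the primary-to-mirror mapping
$ gpstate -c</codeblock>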
</section>
<section id="segment_failover" xml:lang="en">
<title>Segment Failover and Recovery</title>
<p>When mirroring is enabled in a Greenplum Database system, the system will automatically
fail over to the mirror segment if a primary copy becomes unavailable. A Greenplum Database
system can remain operational if a segment instance or host goes down as long as all the
data is available on the remaining active segments.</p>
<p>If the master cannot connect to a segment instance, it marks that segment instance as down
in the Greenplum Database system catalog and brings up the mirror segment in its place. A
failed segment instance will remain out of operation until an administrator takes steps to
bring that segment back online. An administrator can recover a failed segment while the
system is up and running. The recovery process copies over only the changes that were missed
while the segment was out of operation.</p>
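      <p>As a sketch of this recovery workflow (see the utility reference for the options in
        your release), an administrator typically checks segment status and then runs an
        incremental recovery while the system remains online:</p>
      <codeblock># list segments with potential issues, such as failed primaries
$ gpstate -e

# incrementally recover failed segments; only the missed changes are copied
$ gprecoverseg</codeblock>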
<p>If you do not have mirroring enabled, the system will automatically shut down if a segment
instance becomes invalid. You must recover all failed segments before operations can
continue.</p>
</section>
<section id="master_mirroring" xml:lang="en">
<title id="iw157589">About Master Mirroring</title>
<p>You can also optionally deploy a <i>backup</i> or <i>mirror</i> of the master instance on a
......@@ -70,30 +64,24 @@
      the event that the primary master host becomes nonoperational. The standby master is kept up
to date by a transaction log replication process, which runs on the standby master host and
synchronizes the data between the primary and standby master hosts.</p>
<p>If the primary master fails, the log replication process stops, and the standby master can
be activated in its place. Upon activation of the standby master, the replicated logs are
used to reconstruct the state of the master host at the time of the last successfully
committed transaction. The activated standby master effectively becomes the Greenplum
Database master, accepting client connections on the master port (which must be set to the
same port number on the master host and the backup master host).</p>
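      <p>A minimal sketch of activating the standby master (the data directory shown is a
        placeholder; the utility is run on the standby master host):</p>
      <codeblock># run on the standby master host; -d is the standby master data directory
$ gpactivatestandby -d /data/master/gpseg-1</codeblock>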
<p>Since the master does not contain any user data, only the system catalog tables need to be
synchronized between the primary and backup copies. When these tables are updated, changes
are automatically copied over to the standby master to ensure synchronization with the
primary master.</p>
<fig id="iw157606">
<title>Master Mirroring in Greenplum Database</title>
<image height="165px" href="../graphics/standby_master.jpg" placement="break"
width="271px"/>
<image height="165px" href="../graphics/standby_master.jpg" placement="break" width="271px"
/>
</fig>
</section>
<section id="interconnect_redundancy" xml:lang="en">
<title id="iw157609">About Interconnect Redundancy</title>
<p>The <i>interconnect</i> refers to the inter-process communication between the segments and
the network infrastructure on which this communication relies. You can achieve a highly
      available interconnect by deploying dual 10-Gigabit Ethernet switches on your network
......
......@@ -3,36 +3,36 @@
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="topic1" xml:lang="en">
<title id="kg138244">Starting and Stopping Greenplum Database</title>
<shortdesc>In a Greenplum Database DBMS, the database server instances (the master
and all segments) are started or stopped across all of the hosts in the system in such a way
that they can work together as a unified DBMS. </shortdesc>
<shortdesc>In a Greenplum Database DBMS, the database server instances (the master and all
segments) are started or stopped across all of the hosts in the system in such a way that they
can work together as a unified DBMS. </shortdesc>
<body>
<p>Because a Greenplum Database system is distributed across many machines, the
process for starting and stopping a Greenplum Database system is different than
the process for starting and stopping a regular PostgreSQL DBMS.</p>
<p>Because a Greenplum Database system is distributed across many machines, the process for
      starting and stopping a Greenplum Database system is different from the process for starting
and stopping a regular PostgreSQL DBMS.</p>
<p>Use the <codeph>gpstart</codeph> and <codeph>gpstop</codeph> utilities to start and stop
Greenplum Database, respectively. These utilities are located in the
<filepath>$GPHOME/bin</filepath> directory on your Greenplum Database master
host.</p>
<note type="important">
<p>Do not issue a <codeph>KILL</codeph> command to end any Postgres process. Instead, use the
database command <codeph>pg_cancel_backend()</codeph>.</p>
</note>
<p>For information about <codeph>gpstart</codeph> and <codeph>gpstop</codeph>, see the <cite
>Greenplum Database Utility Guide</cite>.</p>
Greenplum Database, respectively. These utilities are located in the
<filepath>$GPHOME/bin</filepath> directory on your Greenplum Database master host.</p>
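    <p>For example (a sketch; the installation path below is a common default, not a
      requirement), source the Greenplum environment file so the utilities are on your
      <codeph>PATH</codeph>, then start the system:</p>
    <codeblock># set the environment for the Greenplum utilities, then start the cluster
$ source /usr/local/greenplum-db/greenplum_path.sh
$ gpstart</codeblock>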
<note type="important">Do not issue a <codeph>kill</codeph> command to end any Postgres process.
Instead, use the database command <codeph>pg_cancel_backend()</codeph>. <p>Issuing a
<codeph>kill -9</codeph> or <codeph>kill -11</codeph> might introduce database
corruption.<ph otherprops="pivotal"> If Pivotal Greenplum Database corruption occurs,
Pivotal will make best efforts to restore a cluster. A root cause analysis cannot be
performed.</ph></p></note>
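    <p>For example (a sketch; the process ID <codeph>12345</codeph> is a placeholder), cancel
      the query running in a backend from <codeph>psql</codeph> instead of signaling the
      process directly:</p>
    <codeblock># cancel the current query of backend process 12345 (placeholder PID)
$ psql -d postgres -c "SELECT pg_cancel_backend(12345);"</codeblock>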
<p>For information about <codeph>gpstart</codeph> and <codeph>gpstop</codeph>, see the
<cite>Greenplum Database Utility Guide</cite>.</p>
</body>
<task id="task_hkd_gzv_fp">
<title>Starting Greenplum Database</title>
<shortdesc>Start an initialized Greenplum Database system by running the
<codeph>gpstart</codeph> utility on the master instance.</shortdesc>
<taskbody>
<context>Use the <codeph>gpstart</codeph> utility to start a Greenplum Database
system that has already been initialized by the <codeph>gpinitsystem</codeph> utility, but
has been stopped by the <codeph>gpstop</codeph> utility. The <codeph>gpstart</codeph>
utility starts Greenplum Database by starting all the Postgres database
instances on the Greenplum Database cluster. <codeph>gpstart</codeph>
orchestrates this process and performs the process in parallel.</context>
<context>Use the <codeph>gpstart</codeph> utility to start a Greenplum Database system that
has already been initialized by the <codeph>gpinitsystem</codeph> utility, but has been
stopped by the <codeph>gpstop</codeph> utility. The <codeph>gpstart</codeph> utility starts
Greenplum Database by starting all the Postgres database instances on the Greenplum Database
cluster. <codeph>gpstart</codeph> orchestrates this process and performs the process in
parallel.</context>
<steps-unordered id="steps-unordered_ot5_ntk_gp">
<step>
<cmd>Run <codeph>gpstart</codeph> on the master host to start Greenplum Database:</cmd>
......@@ -51,8 +51,7 @@
then restart Greenplum Database after the shutdown completes. </context>
<steps-unordered id="steps-unordered_c51_ntk_gp">
<step>
<cmd>To restart Greenplum Database, enter the following command on the
master host:</cmd>
<cmd>To restart Greenplum Database, enter the following command on the master host:</cmd>
<stepxmp>
<codeblock>$ gpstop -r</codeblock>
</stepxmp>
......@@ -62,8 +61,8 @@
</task>
<task id="task_upload_config">
<title>Reloading Configuration File Changes Only</title>
<shortdesc>Reload changes to Greenplum Database configuration files without
interrupting the system.</shortdesc>
<shortdesc>Reload changes to Greenplum Database configuration files without interrupting the
system.</shortdesc>
<taskbody>
<context>The <codeph>gpstop</codeph> utility can reload changes to the
<filepath>pg_hba.conf</filepath> configuration file and to <i>runtime</i> parameters in
......@@ -125,13 +124,13 @@
<task id="task_gpdb_stop">
<title id="kg156168">Stopping Greenplum Database</title>
<taskbody>
<context>The <codeph>gpstop</codeph> utility stops or restarts your Greenplum Database system and always runs on the master host. When activated,
<codeph>gpstop</codeph> stops all <codeph>postgres</codeph> processes in the system,
including the master and all segment instances. The <codeph>gpstop</codeph> utility uses a
default of up to 64 parallel worker threads to bring down the Postgres instances that make
up the Greenplum Database cluster. The system waits for any active
transactions to finish before shutting down. To stop Greenplum Database
immediately, use fast mode.</context>
<context>The <codeph>gpstop</codeph> utility stops or restarts your Greenplum Database system
and always runs on the master host. When activated, <codeph>gpstop</codeph> stops all
<codeph>postgres</codeph> processes in the system, including the master and all segment
instances. The <codeph>gpstop</codeph> utility uses a default of up to 64 parallel worker
threads to bring down the Postgres instances that make up the Greenplum Database cluster.
The system waits for any active transactions to finish before shutting down. To stop
Greenplum Database immediately, use fast mode.</context>
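      <p>For example (a sketch; see the <codeph>gpstop</codeph> reference for the shutdown
        modes available in your release), fast mode cancels in-progress transactions instead
        of waiting for them to finish:</p>
      <codeblock># fast shutdown: roll back in-progress transactions rather than waiting
$ gpstop -M fast</codeblock>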
<steps-unordered>
<step>
<cmd>To stop Greenplum Database:</cmd>
......
......@@ -281,7 +281,7 @@ NEXT 10 ROWS ONLY; </codeblock><p>Greenplum
</row>
<row>
<entry colname="col1"><codeph>ALTER OPERATOR CLASS</codeph></entry>
<entry colname="col2"><b>NO</b></entry>
<entry colname="col2">YES</entry>
<entry colname="col3"/>
</row>
<row>
......@@ -470,7 +470,7 @@ NEXT 10 ROWS ONLY; </codeblock><p>Greenplum
</row>
<row>
<entry colname="col1"><codeph>CREATE OPERATOR CLASS</codeph></entry>
<entry colname="col2"><b>NO</b></entry>
<entry colname="col2">YES</entry>
<entry colname="col3"/>
</row>
<row>
......@@ -576,8 +576,8 @@ NEXT 10 ROWS ONLY; </codeblock><p>Greenplum
<entry colname="col3"><b>Unsupported Clauses /
Options:</b><p><codeph>SCROLL</codeph></p><p><codeph>FOR UPDATE [ OF column [,
...] ]</codeph></p><p><b>Limitations:</b></p><p>Cursors cannot be
backward-scrolled. Forward scrolling is supported.</p><p>PL/pgSQL does
not have support for updatable cursors. </p></entry>
backward-scrolled. Forward scrolling is supported.</p><p>PL/pgSQL does not have
support for updatable cursors. </p></entry>
</row>
<row>
<entry colname="col1"><codeph>DELETE</codeph></entry>
......@@ -656,7 +656,7 @@ NEXT 10 ROWS ONLY; </codeblock><p>Greenplum
</row>
<row>
<entry colname="col1"><codeph>DROP OPERATOR CLASS</codeph></entry>
<entry colname="col2"><b otherprops="red">NO</b></entry>
<entry colname="col2">YES</entry>
<entry colname="col3"/>
</row>
<row>
......
......@@ -4,12 +4,17 @@
<topic id="topic1" xml:lang="en">
<title id="eu135496">System Catalog Reference</title>
<body>
<p>This reference describes the Greenplum Database system catalog tables and
views. System tables prefixed with <codeph>gp_</codeph> relate to the parallel features of
Greenplum Database. Tables prefixed with <codeph>pg_</codeph> are either
standard PostgreSQL system catalog tables supported in Greenplum Database, or
are related to features Greenplum that provides to enhance
PostgreSQL for data warehousing workloads. Note that the global system catalog for Greenplum Database resides on the master instance.</p>
<p>This reference describes the Greenplum Database system catalog tables and views. System
tables prefixed with <codeph>gp_</codeph> relate to the parallel features of Greenplum
Database. Tables prefixed with <codeph>pg_</codeph> are either standard PostgreSQL system
      catalog tables supported in Greenplum Database, or are related to features that Greenplum
provides to enhance PostgreSQL for data warehousing workloads. Note that the global system
catalog for Greenplum Database resides on the master instance.</p>
<note type="warning"> Changes to <ph otherprops="pivotal">Pivotal </ph>Greenplum Database system
catalog tables or views are not supported. If a catalog table or view is changed<ph
otherprops="pivotal"> by the customer</ph>, the <ph otherprops="pivotal">Pivotal Greenplum
Database cluster is not supported. The </ph>cluster must be reinitialized and restored<ph
otherprops="pivotal"> by the customer</ph>.</note>
<!--links are print-only-->
<ul id="ul_lyl_np1_1q" otherprops="op-print">
<li>
......
......@@ -4,10 +4,11 @@
<topic id="topic1">
<title id="km135496">Management Utility Reference</title>
<body>
<p>This reference describes the command-line management utilities provided with Greenplum Database. Greenplum Database uses the standard PostgreSQL
client and server programs and provides additional management utilities for administering a
distributed Greenplum Database DBMS. Greenplum Database management
utilities reside in <codeph>$GPHOME/bin</codeph>.<note>When referencing IPv6 addresses in
<p>This reference describes the command-line management utilities provided with Greenplum
Database. Greenplum Database uses the standard PostgreSQL client and server programs and
provides additional management utilities for administering a distributed Greenplum Database
DBMS. Greenplum Database management utilities reside in
<codeph>$GPHOME/bin</codeph>.<note>When referencing IPv6 addresses in
<codeph>gpfdist</codeph> URLs or when using numeric IP addresses instead of hostnames in
any management utility, always enclose the IP address in brackets. For command prompt use,
the best practice is to escape any brackets or put them inside quotation marks. For example,
......@@ -137,10 +138,10 @@
<topic id="topic_zqp_5xm_cp">
<title>Backend Server Programs</title>
<body>
<p>The following standard PostgreSQL server management programs are provided with Greenplum Database and reside in <codeph>$GPHOME/bin</codeph>. They are modified to
handle the parallelism and distribution of a Greenplum Database system. You
access these programs only through the Greenplum Database management tools and
utilities.</p>
<p>The following standard PostgreSQL server management programs are provided with Greenplum
Database and reside in <codeph>$GPHOME/bin</codeph>. They are modified to handle the
parallelism and distribution of a Greenplum Database system. You access these programs only
through the Greenplum Database management tools and utilities.</p>
<table id="km164231">
<title>Greenplum Database Backend Server Programs</title>
<tgroup cols="3">
......@@ -160,8 +161,8 @@
<codeph id="km164249">initdb</codeph>
</entry>
<entry colname="col2">This program is called by <codeph>gpinitsystem</codeph> when
initializing a Greenplum Database array. It is used internally to
create the individual segment instances and the master instance.</entry>
initializing a Greenplum Database array. It is used internally to create the
individual segment instances and the master instance.</entry>
<entry colname="col3">
<codeph>
<xref href="gpinitsystem.xml#topic1" type="topic" format="dita"/>
......@@ -179,12 +180,12 @@
<entry colname="col1">
<codeph id="km164272">gpsyncmaster</codeph>
</entry>
<entry colname="col2">This is the Greenplum program that
starts the <codeph>gpsyncagent</codeph> process on the standby master host.
Administrators do not call this program directly, but do so through the management
scripts that initialize and/or activate a standby master for a Greenplum Database system. This process is responsible for keeping the
standby master up to date with the primary master via a transaction log replication
process.</entry>
<entry colname="col2">This is the Greenplum program that starts the
<codeph>gpsyncagent</codeph> process on the standby master host. Administrators do
not call this program directly, but do so through the management scripts that
initialize and/or activate a standby master for a Greenplum Database system. This
process is responsible for keeping the standby master up to date with the primary
master via a transaction log replication process.</entry>
<entry colname="col3"><codeph><xref href="gpinitstandby.xml#topic1" type="topic"
format="dita"/></codeph>, <codeph><xref href="gpactivatestandby.xml#topic1"
type="topic" format="dita"/></codeph></entry>
......@@ -205,8 +206,9 @@
<codeph id="km164304">pg_ctl</codeph>
</entry>
<entry colname="col2">This program is called by <codeph>gpstart</codeph> and
<codeph>gpstop</codeph> when starting or stopping a Greenplum Database array. It is used internally to stop and start the individual segment instances
and the master instance in parallel and with the correct options.</entry>
<codeph>gpstop</codeph> when starting or stopping a Greenplum Database array. It
is used internally to stop and start the individual segment instances and the master
instance in parallel and with the correct options.</entry>
<entry colname="col3"><codeph><xref href="gpstart.xml#topic1" type="topic"
format="dita"/></codeph>, <codeph><xref href="gpstop.xml#topic1" type="topic"
format="dita"/></codeph></entry>
......@@ -215,7 +217,11 @@
<entry colname="col1">
<codeph id="km164320">pg_resetxlog</codeph>
</entry>
<entry colname="col2">Not used in Greenplum Database</entry>
<entry colname="col2">DO NOT USE<p><b>Warning:</b> This program might cause data loss
or cause data to become unavailable. If this program is used, the <ph
otherprops="pivotal">Pivotal Greenplum Database cluster is not supported. The
</ph>cluster must be reinitialized and restored<ph otherprops="pivotal"> by the
customer</ph>.</p></entry>
<entry colname="col3">N/A</entry>
</row>
<row>
......@@ -233,9 +239,9 @@
<codeph id="km164337">postmaster</codeph>
</entry>
<entry colname="col2"><codeph>postmaster</codeph> starts the <codeph>postgres</codeph>
database server listener process that accepts client connections. In Greenplum Database, a <codeph>postgres</codeph> database listener process
runs on the Greenplum master Instance and on each
Segment Instance.</entry>
database server listener process that accepts client connections. In Greenplum
Database, a <codeph>postgres</codeph> database listener process runs on the
                Greenplum master instance and on each segment instance.</entry>
<entry colname="col3">In Greenplum Database, you use <codeph><xref
href="gpstart.xml#topic1" type="topic" format="dita"/></codeph> and
<codeph><xref href="gpstop.xml#topic1" type="topic" format="dita"/></codeph> to
......