diff --git a/gpdb-doc/dita/admin_guide/highavail/topics/g-enabling-high-availability-features.xml b/gpdb-doc/dita/admin_guide/highavail/topics/g-enabling-high-availability-features.xml
index 25b57f53642cc7749c660ba9dce8a06fb04500c1..01dcefffb381355c26615708bfecb6e7bc293b2b 100644
--- a/gpdb-doc/dita/admin_guide/highavail/topics/g-enabling-high-availability-features.xml
+++ b/gpdb-doc/dita/admin_guide/highavail/topics/g-enabling-high-availability-features.xml
@@ -8,7 +8,14 @@
 configured.
-For information about the utilities that are used to enable high
- availability, see the Greenplum Database Utility Guide.
+For information about the utilities that are used to enable high availability, see the
+ Greenplum Database Utility Guide.
diff --git a/gpdb-doc/dita/admin_guide/intro/about_ha.xml b/gpdb-doc/dita/admin_guide/intro/about_ha.xml
index e21a8db86890d5c81c595f2d1dafc47d16194c91..56d1b735643f2655255937a5d47f12a012757bce 100644
--- a/gpdb-doc/dita/admin_guide/intro/about_ha.xml
+++ b/gpdb-doc/dita/admin_guide/intro/about_ha.xml
@@ -1,9 +1,7 @@
-When you deploy your Greenplum Database system, you can configure mirror segments. Mirror segments allow database queries to fail over to a backup segment if the primary segment becomes unavailable. Mirroring is strongly recommended for production systems and
@@ -35,34 +36,27 @@
When mirroring is enabled in a Greenplum Database system, the system will automatically fail over to the mirror segment if a primary copy becomes unavailable. A Greenplum Database system can remain operational if a segment instance or host goes down as long as all the data is available on the remaining active segments.
-If the master cannot connect to a segment instance, it marks that segment instance as down in the Greenplum Database system catalog and brings up the mirror segment in its place. A failed segment instance will remain out of operation until an administrator takes steps to bring that segment back online. An administrator can recover a failed segment while the system is up and running. The recovery process copies over only the changes that were missed while the segment was out of operation.
-If you do not have mirroring enabled, the system will automatically shut down if a segment instance becomes invalid. You must recover all failed segments before operations can continue.
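The segment recovery workflow described above is typically driven with the `gprecoverseg` utility. A minimal sketch, assuming a cluster with mirroring enabled and a failed segment already marked down (flags as documented in the Greenplum Database Utility Guide):

```shell
# Review which segments are down and check mirror status.
gpstate -e        # show segments with error conditions
gpstate -m        # show mirror segment status

# Incremental recovery: copies over only the changes that were
# missed while the segment was out of operation; the system
# remains up and running during recovery.
gprecoverseg

# After recovery completes, segments may be running in non-preferred
# roles; return them to their original primary/mirror roles.
gprecoverseg -r
```

If the incremental approach is not possible, `gprecoverseg -F` performs a full copy of the segment instead.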
You can also optionally deploy a backup or mirror of the master instance on a
@@ -70,30 +64,24 @@
 the event that the primary master host becomes nonoperational. The standby master is kept up to date by a transaction log replication process, which runs on the standby master host and synchronizes the data between the primary and standby master hosts.
-If the primary master fails, the log replication process stops, and the standby master can be activated in its place. Upon activation of the standby master, the replicated logs are used to reconstruct the state of the master host at the time of the last successfully committed transaction. The activated standby master effectively becomes the Greenplum Database master, accepting client connections on the master port (which must be set to the same port number on the master host and the backup master host).
-Since the master does not contain any user data, only the system catalog tables need to be synchronized between the primary and backup copies. When these tables are updated, changes are automatically copied over to the standby master to ensure synchronization with the primary master.
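Activating the standby master described above is done with the `gpactivatestandby` utility, run on the standby master host. A hedged sketch; the data directory path and hostname below are placeholders, not values from this document:

```shell
# Run on the standby master host after the primary master has failed.
# -d points at the standby master's data directory (example path).
gpactivatestandby -d /data/master/gpseg-1

# The activated standby is now the Greenplum Database master.
# A replacement standby can then be configured on another host:
gpinitstandby -s new_standby_hostname
```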
-The interconnect refers to the inter-process communication between the segments and
the network infrastructure on which this communication relies. You can achieve a highly
available interconnect by deploying dual 10-Gigabit Ethernet switches on your network
diff --git a/gpdb-doc/dita/admin_guide/managing/startstop.xml b/gpdb-doc/dita/admin_guide/managing/startstop.xml
index 1c038de75c77342dcc3d59fe368353f37008fa69..998ba420db9a182c8a93f0c7de9cd7f348b023c6 100644
--- a/gpdb-doc/dita/admin_guide/managing/startstop.xml
+++ b/gpdb-doc/dita/admin_guide/managing/startstop.xml
@@ -3,36 +3,36 @@
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
-Because a Greenplum Database system is distributed across many machines, the
- process for starting and stopping a Greenplum Database system is different than
- the process for starting and stopping a regular PostgreSQL DBMS.
+Because a Greenplum Database system is distributed across many machines, the process for
+ starting and stopping a Greenplum Database system is different than the process for starting
+ and stopping a regular PostgreSQL DBMS.
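In practice, the distributed start/stop process this topic covers is handled by the `gpstart` and `gpstop` wrapper utilities rather than by `pg_ctl` directly. A brief sketch of common invocations (flags per the Greenplum Database Utility Guide; `-a` suppresses interactive prompts):

```shell
gpstart -a      # start the master and all segment instances
gpstop -a       # stop the entire Greenplum Database system
gpstop -r       # stop and then restart the system
gpstop -u       # reload pg_hba.conf and runtime parameters without a full stop
```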
 Limitations: Cursors cannot be
- backward-scrolled. Forward scrolling is supported. PL/pgSQL does
- not have support for updatable cursors.
+ backward-scrolled. Forward scrolling is supported. PL/pgSQL does not have
+ support for updatable cursors.
-This reference describes the Greenplum Database system catalog tables and
- views. System tables prefixed with
+This reference describes the Greenplum Database system catalog tables and views. System
+ tables prefixed with
-This reference describes the command-line management utilities provided with Greenplum Database. Greenplum Database uses the standard PostgreSQL
- client and server programs and provides additional management utilities for administering a
- distributed Greenplum Database DBMS. Greenplum Database management
- utilities reside in
+This reference describes the command-line management utilities provided with Greenplum
+ Database. Greenplum Database uses the standard PostgreSQL client and server programs and
+ provides additional management utilities for administering a distributed Greenplum Database
+ DBMS. Greenplum Database management utilities reside in
-The following standard PostgreSQL server management programs are provided with Greenplum Database and reside in
+The following standard PostgreSQL server management programs are provided with Greenplum
+ Database and reside in
 Warning: This program might cause data loss
+ or cause data to become unavailable. If this program is used, the