- 16 Nov 2018, 1 commit
-
-
By David Kimura
If a QE writer marks the transaction as aborted while a reader is still executing the query, the reader may affect shared memory state. For example, consider a transaction being aborted where a table must be dropped as part of abort processing: the writer has dropped all of the table's buffers from the shared buffer cache and is about to unlink the table's file, while a reader, unaware of the writer's abort, is still executing the query and may bring a page from that file back into the shared buffer cache.

To prevent such situations, the writer walks procArray to find the readers and sends them SIGINT. Walking procArray is expensive and is avoided as much as possible: the patch walks procArray only if the command being aborted (or the last command in a transaction that is being aborted) performed a write and at least one reader slice was part of the query plan.

To avoid confusion on the QD due to "canceling MPP operation" error messages emitted by the readers upon receiving the SIGINT, the readers do not emit them on the libpq channel.

Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/S2WL1FtJEJ0/Wh6DfJ-RBwAJ

Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
Co-authored-by: Asim R P <apraveen@pivotal.io>
Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
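A minimal sketch of the abort-path check described above (illustrative only, not the actual patch; find_reader_backends_for_session() is a hypothetical helper standing in for the real procArray walk):

    #include <stdbool.h>
    #include <signal.h>
    #include <sys/types.h>

    /* Hypothetical helper: returns the pids of reader backends belonging to
     * this session, standing in for the real procArray walk. */
    extern pid_t *find_reader_backends_for_session(int sessionId, int *npids);

    static void
    signal_readers_on_abort(int sessionId, bool commandDidWrite, bool planHadReaderSlice)
    {
        /* Skip the expensive procArray walk unless the aborted command wrote
         * data and the plan actually contained a reader slice. */
        if (!commandDidWrite || !planHadReaderSlice)
            return;

        int    npids;
        pid_t *pids = find_reader_backends_for_session(sessionId, &npids);

        for (int i = 0; i < npids; i++)
            kill(pids[i], SIGINT);   /* readers suppress the resulting "canceling
                                      * MPP operation" message on the libpq channel */
    }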
-
- 01 Nov 2018, 1 commit
-
-
By Daniel Gustafsson
Rename readRecoveryCommandFile() back to the name used in upstream, and remove the emode parameter and associated log entry. Also tidy up the xlog.h header by removing stale entries and making functions only used in a single context static, with associated small cleanups.

Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
- 25 Oct 2018, 1 commit
-
-
By Tang Pengzhou
* Don't use GpIdentity.numsegments directly for the number of segments; use getgpsegmentCount() instead.

* Unify the way to fetch/manage the number of segments.

  Commit e0b06678 lets us expand a GPDB cluster without a restart, so the number of segments may change during a transaction and we need to take care of numsegments. We used to have two ways to get the number of segments: 1) from GpIdentity.numsegments, and 2) from gp_segment_configuration (cdb_component_dbs), which the dispatcher uses to decide the range of segments to dispatch to. We did some hard work within e0b06678 to update GpIdentity.numsegments correctly, which made the management of segments more complicated; now we use an easier way:

  1. Segment info (including the number of segments) is only obtained through gp_segment_configuration, which always has the newest segment info. There is no need to update GpIdentity.numsegments; it is left only for debugging and can be removed entirely in the future.
  2. Each global transaction fetches/updates the newest snapshot of gp_segment_configuration and never changes it until the end of the transaction unless a writer gang is lost, so a global transaction sees a consistent state of segments. We used to use gxidDispatched to do the same thing; it can now be removed.

* Remove GpIdentity.numsegments.

  GpIdentity.numsegments has no effect now, so remove it. This commit does not remove gp_num_contents_in_cluster because that requires modifying utilities like gpstart, gpstop and gprecoverseg; let's do that cleanup in another PR.

* Exchange the default UP/DOWN value in the FTS cache.

  Previously, the FTS prober read gp_segment_configuration, checked the status of segments and then set the status of segments in the shared memory struct ftsProbeInfo->fts_status[], so other components (mainly the dispatcher) could detect that a segment was down. All segments were initialized as down and then updated to up in the most common cases, which brought two problems:

  1. fts_status is invalid until FTS completes its first loop, so the QD needed to check ftsProbeInfo->fts_statusVersion > 0.
  2. When gpexpand adds a new segment to gp_segment_configuration, the newly added segment may be marked DOWN if FTS hasn't scanned it yet.

  This commit changes the default value from DOWN to UP, which resolves the problems mentioned above.

* FTS should not be used to notify backends that a gpexpand occurred.

  As Ashwin mentioned in PR #5679, "I don't think giving FTS responsibility to provide new segment count is right. FTS should only be responsible for HA of the segments. The dispatcher should independently figure out the count based on catalog. gp_segment_configuration should be the only way to get the segment count", so FTS is decoupled from gpexpand.

* Access gp_segment_configuration inside a transaction.

* Upgrade the log level from ERROR to FATAL if the expand version changed.

* Modify gpexpand test cases according to the new design.
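A minimal sketch of the convention the first bullet settles on (illustrative only; the header path is an assumption, getgpsegmentCount() is the function named above):

    #include "postgres.h"
    #include "cdb/cdbutil.h"        /* getgpsegmentCount() (assumed location) */

    static int
    current_cluster_size(void)
    {
        /* Before this change: return GpIdentity.numsegments;
         * now the count always comes from the gp_segment_configuration
         * snapshot taken at the start of the transaction. */
        return getgpsegmentCount();
    }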
-
- 15 Oct 2018, 1 commit
-
-
By Ning Yu
* Protect catalog changes on master.

  To allow gpexpand to do its job without restarting the cluster we need to prevent concurrent catalog changes on master. A catalog lock is provided for this purpose: all inserts/updates/deletes to catalog tables should hold this lock in shared mode before making the changes; gpexpand can hold this lock in exclusive mode to 1) wait for in-progress catalog updates to commit/rollback and 2) prevent concurrent catalog updates. Add a UDF to hold the catalog lock in exclusive mode. Add test cases for the catalog lock.

  Co-authored-by: Jialun Du <jdu@pivotal.io>
  Co-authored-by: Ning Yu <nyu@pivotal.io>

* Numsegments management.

  GpIdentity.numsegments is a global variable saving the cluster size in each process, and it is important for the cluster. A change of the cluster size means the expansion has finished and the new segments have taken effect. FTS counts the number of primary segments from gp_segment_configuration and records it in shared memory. Later, other processes, including master, FTS, GDD and QDs, update GpIdentity.numsegments with this information. So far it is not easy to make old transactions run with new segments, so for a QD GpIdentity.numsegments can only be updated at the beginning of a transaction; old transactions only see the old segments. The QD dispatches GpIdentity.numsegments to the QEs, so they see the same cluster size.

  Catalog changes in old transactions are disallowed. Consider the workflow below:

  A: begin;
  B: gpexpand from N segments to M (M > N);
  A: create table t1 (c1 int);
  A: commit;
  C: drop table t1;

  Transaction A began when the cluster size was still N, so all its commands are dispatched to N segments only, even after the cluster has been expanded to M segments. Thus t1 was created on only N segments, not only its data distribution but also its catalog records. This causes the later DROP TABLE to fail on the new segments. To prevent this issue we currently disable all catalog updates in old transactions once the expansion is done. New transactions can update the catalog as they are already running on all M segments.

  Co-authored-by: Jialun Du <jdu@pivotal.io>
  Co-authored-by: Ning Yu <nyu@pivotal.io>

* Online gpexpand implementation.

  Do not restart the cluster during expansion:

  - Lock the catalog
  - Create the new segments from the master segment
  - Add the new segments to the cluster
  - Reload the cluster so new transactions and background processes can see the new segments
  - Unlock the catalog

  Add test cases.

  Co-authored-by: Jialun Du <jdu@pivotal.io>
  Co-authored-by: Ning Yu <nyu@pivotal.io>

* New job to run ICW after expansion.

  It is better to run ICW after online expansion to see if the cluster works well, so we add a new job that creates a cluster with two segments first, then expands it to three segments and runs all the ICW cases. Restarting the cluster is forbidden, since we must be sure all the cases pass after online expansion, so any case which restarts the cluster must be removed from this job.

  The new job is icw_planner_centos7_online_expand; it is the same as icw_planner_centos7 but adds two new params, EXCLUDE_TESTS and ONLINE_EXPAND. If ONLINE_EXPAND is set, the ICW shell takes a different branch: creating a cluster of size two, expanding, and so on. EXCLUDE_TESTS lists the cases which restart the cluster or would fail without a restart. After the whole run the master's pid is checked; if it changed, the cluster must have been restarted, so the job fails. As a result, any new test case that restarts the cluster should be added to EXCLUDE_TESTS.

* Add README.

  Co-authored-by: Jialun Du <jdu@pivotal.io>
  Co-authored-by: Ning Yu <nyu@pivotal.io>

* Small changes per review comments.

  - Delete confusing test case
  - Rename updateBackendGpIdentityNumsegments to updateSystemProcessGpIdentityNumsegments
  - Fix some typos

* Remove useless Asserts.

  These two asserts caused a failure in isolation2/uao/compaction_utility_insert, which checks AO tables in utility mode. numsegments is meaningless on a sole segment, so the asserts must be removed.
-
- 12 Sep 2018, 1 commit
-
-
By Pengzhou Tang
In dispatch test cases we need a way to put a segment into in-recovery status to test the gang-recreation logic of the dispatcher. We used to trigger a panic fault on a segment and suspend quickdie() to simulate in-recovery status. To avoid the segment staying in recovery mode for a long time, we used a 'sleep' fault instead of 'suspend' in quickdie(), so the segment could accept new connections after 5 seconds. 5 seconds works fine most of the time but is still not stable enough, so we decided to use a more straightforward means of simulating in-recovery mode: report POSTMASTER_IN_RECOVERY_MSG directly in ProcessStartupPacket(). To avoid affecting other backends, we create a new database so the fault injectors only affect the dispatch test cases.
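A hedged sketch of the simulation, not the actual diff; the fault-check helper is a placeholder, and POSTMASTER_IN_RECOVERY_MSG is the constant named above:

    /* Fragment assumed to live inside ProcessStartupPacket(): when the fault
     * is armed for the dedicated test database, report the standard "in
     * recovery" startup error immediately, so the dispatcher sees a segment
     * that appears to still be recovering. */
    if (dispatch_test_fault_armed_for_db(port->database_name))   /* placeholder check */
        ereport(FATAL,
                (errcode(ERRCODE_CANNOT_CONNECT_NOW),
                 errmsg(POSTMASTER_IN_RECOVERY_MSG)));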
-
- 24 Aug 2018, 2 commits
-
-
By Heikki Linnakangas
* Fix whitespace to match upstream.
* Fix log message to match upstream.
* Remove a chunk of broken code for platforms that don't have strsignal or sys_siglist. Now the code matches upstream. (Apparently all platforms we test on have them, or we would have gotten a compiler error.)
* Remove unused 'didServiceProcessWork' local variable.
* Remove unused 'EXIT_STATUS_2()' macro.
-
By Heikki Linnakangas
It was removed in commit eae1ee3f, but a couple of comments and function prototypes for removed functions were left behind.
-
- 15 Aug 2018, 1 commit
-
-
By David Kimura
The purpose of this refactor is to more closely align the GUC with postgres. It started as a suggestion in https://github.com/greenplum-db/gpdb/pull/4790. There are still differences, particularly around when this GUC can be set: in GPDB it can be set by anyone at any time (PGC_USERSET), while in postgres it is limited to postmaster restart (PGC_POSTMASTER). This difference was kept on purpose until we have more buy-in, as it is a bigger change for the end user.

Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
-
- 14 Aug 2018, 1 commit
-
-
By Richard Guo
This fixes a 'stack_base_ptr' assertion failure with '--enable-testutils' and also revises related code to keep it in sync with upstream.
-
- 03 Aug 2018, 1 commit
-
-
By Karen Huddleston
This reverts commit 4750e1b6.
-
- 02 Aug 2018, 1 commit
-
-
By Richard Guo
This is the final batch of commits from PostgreSQL 9.2 development, up to the point where the REL9_2_STABLE branch was created and 9.3 development started on the PostgreSQL master branch.

Notable upstream changes:

* Index-only scans were included in this batch of upstream commits. They allow queries to retrieve data from indexes alone, avoiding heap access.
* Group commit was improved to work effectively under heavy load. Previously, batching of commits became ineffective as the write workload increased, because of internal lock contention.
* A new fast-path lock mechanism was added to reduce the overhead of taking and releasing certain types of locks which are taken and released very frequently but rarely conflict.
* The new "parameterized path" mechanism was added. It allows inner index scans to use values from relations that are more than one join level up from the scan. This can greatly improve performance in situations where semantic restrictions (such as outer joins) limit the allowed join orderings.
* The SP-GiST (Space-Partitioned GiST) index access method was added to support unbalanced partitioned search structures. For suitable problems, SP-GiST can be faster than GiST in both index build time and search time.
* Checkpoints are now performed by a dedicated background process. Formerly the background writer did both dirty-page writing and checkpointing. Separating this into two processes allows each goal to be accomplished more predictably.
* Custom plans are now supported for specific parameter values even when using prepared statements.
* The FDW API was improved to allow foreign data wrappers to provide multiple access "paths" for their tables, allowing more flexibility in join planning.
* The security_barrier option was added for views to prevent optimizations that might allow view-protected data to be exposed to users.
* Range data types were added to store a lower and upper bound belonging to a base data type.
* CTAS (CREATE TABLE AS/SELECT INTO) is now treated as a utility statement. The SELECT query is planned during the execution of the utility. To conform to this change, GPDB executes the utility statement only on the QD and dispatches the plan of the SELECT query to the QEs.

Co-authored-by: Adam Lee <ali@pivotal.io>
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
Co-authored-by: Asim R P <apraveen@pivotal.io>
Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
Co-authored-by: Gang Xiong <gxiong@pivotal.io>
Co-authored-by: Haozhou Wang <hawang@pivotal.io>
Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
Co-authored-by: Jinbao Chen <jinchen@pivotal.io>
Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
Co-authored-by: Paul Guo <paulguo@gmail.com>
Co-authored-by: Richard Guo <guofenglinux@gmail.com>
Co-authored-by: Shujie Zhang <shzhang@pivotal.io>
Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
-
- 17 Jul 2018, 1 commit
-
-
By David Kimura
Handling sequences in an MPP setting is challenging. This patch refactors it, mainly to eliminate the shortcomings and pitfalls of the previous implementation. Let's first glance at the issues with the older implementation:

- required relfilenode == oid for all sequence relations
- "denial of service" risk, because a single dedicated process running on the QD served sequence values for all the tables in all the databases to all the QEs
- sequence tables were direct-opened as a result, instead of going through the relcache
- divergence from the upstream implementation

Many solutions were considered (refer to the mailing list discussion) before settling on the one here. The new implementation still generates sequence values in a centralized place, the QD. It now leverages the existing QD backend process connected to the QEs for the query to serve the nextval requests. As a result the need for relfilenode == oid is eliminated: based on the oid, the QD process can now look up the relfilenode from the catalog and also leverage the relcache. There are no more direct opens by a single process across databases.

For the communication between QD and QE for sequence nextval requests, an async notify message is used (Notify messages are not used in GPDB for anything else currently). The QD process, sitting idle while waiting for results from the QEs, sees the nextval request, calls nextval_internal() and responds with the value. Since the need for a separate sequence server process went away, all of its code is removed.

Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/hni7lS9xH4c/o_M3ddAeBgAJ

Co-authored-by: Asim R P <apraveen@pivotal.io>
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
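A hedged sketch of the QD-side handling described above (illustrative only; the wrapper and reply helper are placeholder names, nextval_internal() is the existing sequence.c entry point named in the message, and its declaration here is an assumption):

    #include "postgres.h"

    extern int64 nextval_internal(Oid relid);                  /* sequence.c entry point */
    extern void send_nextval_reply_to_qe(Oid seqrelid, int64 value);   /* placeholder reply path */

    /* Called on the QD while it waits for QE results: serve an incoming
     * nextval request for sequence 'seqrelid' inline, then reply to the QE. */
    static void
    handle_nextval_request(Oid seqrelid)
    {
        /* nextval_internal() resolves the relfilenode through the relcache,
         * so relfilenode == oid is no longer required. */
        int64 value = nextval_internal(seqrelid);

        send_nextval_reply_to_qe(seqrelid, value);
    }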
-
- 29 Jun 2018, 1 commit
-
-
By Ashwin Agrawal
GPDB only supports 1 replica currently. FTS and other components need to be adapted to support 1:n replication; until then, restrict the max_wal_senders GUC to 1. The maximum value of the GUC can be raised later, once the code can handle it. Also, remove the setting of max_wal_senders in the postmaster, which was added earlier for dealing with filerep/walrep co-existence.
-
- 27 Jun 2018, 1 commit
-
-
By Ashwin Agrawal
Currently the GPDB WAL replication code is a mix of multiple versions; once we reach the 9.3 merge we get the opportunity to again get in sync with the upstream version. This will be taken care of then; until that time we live with the GPDB-modified version of CheckPromoteSignal().
-
- 07 Jun 2018, 1 commit
-
-
By Pengzhou Tang
This is a quick fix to make the dispatch test pass; in the long term we need to redesign the dispatch test or make it a unit test.
-
- 12 May 2018, 1 commit
-
-
By Ashwin Agrawal
-
- 03 May 2018, 1 commit
-
-
By Zhenghua Lyu
To prevent distributed deadlock, Greenplum DB holds an exclusive table lock for UPDATE and DELETE commands, so concurrent updates on the same table are effectively disabled. We add a backend process that performs global deadlock detection so that we no longer lock the whole table for UPDATE/DELETE; this helps improve the concurrency of Greenplum DB.

The core idea of the algorithm is to divide locks into two types:

- Persistent: the lock can only be released after the transaction is over (abort/commit)
- Otherwise: non-persistent

This PR's implementation adds a persistent flag to the LOCK, set by the following rules:

- An xid lock is always persistent
- A tuple lock is never persistent
- A relation lock is persistent if the relation has been closed with the NoLock parameter, otherwise it is not persistent
- Other types of locks are not persistent

For more details please refer to the code and README.

There are several known issues to pay attention to:

- This PR's implementation only considers locks that can be shown in the pg_locks view.
- This PR's implementation does not support AO tables; we keep upgrading the locks for AO tables.
- This PR's implementation does not take networking waits into account, so we cannot detect the deadlock of GitHub issue #2837.
- SELECT FOR UPDATE still locks the whole table.

Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
Co-authored-by: Ning Yu <nyu@pivotal.io>
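A minimal sketch of the persistence rules listed above, restated over the upstream LOCKTAG types (illustrative only, not the actual GPDB code; the closed_with_nolock flag is a stand-in for however the patch tracks that state):

    #include "postgres.h"
    #include "storage/lock.h"

    static bool
    lock_is_persistent(const LOCKTAG *tag, bool closed_with_nolock)
    {
        switch (tag->locktag_type)
        {
            case LOCKTAG_TRANSACTION:
                return true;                /* xid locks: held until commit/abort */
            case LOCKTAG_TUPLE:
                return false;               /* tuple locks are never persistent */
            case LOCKTAG_RELATION:
                return closed_with_nolock;  /* persistent only if closed with NoLock */
            default:
                return false;               /* all other lock types: not persistent */
        }
    }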
-
- 02 May 2018, 1 commit
-
-
By Ashwin Agrawal
Previously, when the primary was in crash recovery, the FTS probe failed and hence the primary was marked down. This change provides a recovery progress metric so that FTS can detect progress: the last replayed LSN is added to the error message to indicate recovery progress. This allows FTS to distinguish between recovery in progress and a recovery hang or rolling panics. Only when FTS detects that recovery is not making progress does it mark the primary down.

For testing, a new fault injector is added to allow simulating a recovery hang and recovery in progress.

Just FYI, this reverts the reverted commit 7b7219a4.

Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
- 01 May 2018, 2 commits
-
-
By Ashwin Agrawal
This reverts commit 1b07e77a.
-
By Ashwin Agrawal
Previously, when the primary was in crash recovery, the FTS probe failed and hence the primary was marked down. This change provides a recovery progress metric so that FTS can detect progress: the last replayed LSN is added to the error message to indicate recovery progress. This allows FTS to distinguish between recovery in progress and a recovery hang or rolling panics. Only when FTS detects that recovery is not making progress does it mark the primary down.

For testing, a new fault injector is added to allow simulating a recovery hang and recovery in progress.

Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
- 20 Mar 2018, 2 commits
-
-
By Jimmy Yih
The FIXME comment asked whether or not we wanted to remove the check on CAC_WAITBACKUP. It is true that we do not currently use the WAITBACKUP state, but we should keep the code: it will prevent some merge conflicts during the postgres merge, and we may use it in the future if it becomes applicable to a Greenplum feature.

[ci skip]
-
By Asim R P
- Use the standard PMSignal mechanism to trigger FTS probes from the dispatcher.
- Use the statusVersion flag only to indicate whether a probe cycle resulted in a configuration update. It was previously overloaded to also unblock callers waiting for an FTS probe to be triggered and finished.
- Use a separate flag, "probeTick", for that purpose.

Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
- 14 Mar 2018, 1 commit
-
-
By Daniel Gustafsson
Commit ba5cfa93 introduced the upstream code for managing timezones, but it failed to include the below diff to initdb, which moves timezone handling from the postmaster to initdb. This is a partial backport of the below commit; some hunks didn't apply because they require code introduced in PostgreSQL 9.1 and 9.2.

commit ca4af308
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Fri Sep 9 17:59:11 2011 -0400

    Simplify handling of the timezone GUC by making initdb choose the default.

    We were doing some amazingly complicated things in order to avoid running
    the very expensive identify_system_timezone() procedure during GUC
    initialization. But there is an obvious fix for that, which is to do it
    once during initdb and have initdb install the system-specific default
    into postgresql.conf, as it already does for most other GUC variables
    that need system-environment-dependent defaults. This means that the
    timezone (and log_timezone) settings no longer have any magic behavior
    in the server. Per discussion.
-
- 10 Mar 2018, 2 commits
-
-
By Ashwin Agrawal
Added an additional global timestamp, PMAcceptingConnectionsStartTime, to track the end of recovery, i.e. the point where the database server is ready to accept connections. When checking mirror status, the grace period is now computed based not only on when the last WAL sender died, but also on when the recovery process finished. This fixes the issue where the mirror is marked down due to a long recovery time on the primary.

Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
Co-authored-by: Xin Zhang <xzhang@pivotal.io>
-
By Asim R P
The flag pm_launch_walreceiver helps determine whether a mirror has made at least one attempt to connect to the primary. For a small window right after a mirror is promoted, the flag was still in effect. If an FTS probe request arrived in this window, the following assertion would trip:

"FailedAssertion(""!(am_mirror)"", File: ""postmaster.c"", Line: 2201)"

Reset the flag before signaling promotion so that it cannot interfere with new connections after promotion.
-
- 06 Mar 2018, 1 commit
-
-
By xiong-gang
The first parameter, "node", of getaddrinfo() can be "*" on Linux, but that doesn't work on Mac OS X.
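A small, self-contained illustration of the portability point (a sketch, not the GPDB patch itself): instead of passing the literal "*" as the node argument, pass NULL together with AI_PASSIVE to request the wildcard listen address, which works on both Linux and Mac OS X.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int
    main(void)
    {
        struct addrinfo hints;
        struct addrinfo *res;
        int rc;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;     /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_flags = AI_PASSIVE;     /* NULL node => wildcard address */

        /* Portable wildcard: NULL node instead of "*" */
        rc = getaddrinfo(NULL, "5432", &hints, &res);
        if (rc != 0)
        {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return 1;
        }
        freeaddrinfo(res);
        return 0;
    }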
-
- 01 Mar 2018, 1 commit
-
-
By xiong-gang
listen_addresses in postgresql.conf currently has no effect; the postmaster and backends listen on all addresses. From a security point of view, the user should be able to specify the listen addresses.
-
- 01 Feb 2018, 8 commits
-
-
By Asim R P
The bgwriter and checkpointer were one process in 8.4. Because of this we may have missed shutting down the bgwriter during a smart shutdown request.

Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
Co-authored-by: Asim R P <apraveen@pivotal.io>
-
By Xin Zhang
The standby doesn't need to do a checkpoint at the end of shutdown, so the stop sequence stops the startup process and the WAL receiver process; the checkpointer and bgwriter processes are not running on the standby and mirrors. Upstream has PM_READONLY with similar logic, but in GPDB we don't have hot standby yet.

Author: Asim R P <apraveen@pivotal.io>
Author: Xin Zhang <xzhang@pivotal.io>
-
By Heikki Linnakangas
-
By Heikki Linnakangas
-
By Heikki Linnakangas
* Use SIGTERM, rather than SIGUSR2, to shut down the checkpointer, like in PG 9.2 and above, after upstream got a separate checkpointer process.
* Add a new postmaster state, between PM_WAIT_BACKUP and PM_WAIT_BACKENDS, to wait for all regular backends but not the seqserver. This is used at smart shutdown like PM_WAIT_BACKENDS, but keeps the seqserver running until all the regular backends have finished.
* Fix code in reaper() to signal the checkpointer, not the bgwriter, to do the final checkpoint cycle.
* Add missing code to handle PM_SHUTDOWN_2 in PostmasterStateMachine().
-
By Heikki Linnakangas
GPDB has an extra checkpointer process, compared to PG 8.4. A separate checkpointer process was added in a later PostgreSQL version. When I reverted much of the postmaster.c code to the way it is in upstream 8.4, I missed the code necessary to handle the checkpointer. Copy the missing code from PG 9.3, which has a checkpointer process, too.
-
By Heikki Linnakangas
And remove an obsolete comment.
-
By Heikki Linnakangas
Revert the state machine and other logic in postmaster.c to the way it is in upstream. Remove some GUCs related to mirrored and non-mirrored mode. Remove the -M, -x and -y postmaster options, and change the management scripts to not pass those options.
-
- 30 Jan 2018, 1 commit
-
-
By Wang Hao
Leverage the perfmon stats sender process to call a hook function for cluster-level info collection. Without changing the existing behavior of the stats sender process, this commit changes the interval of its main loop from 1 second to 100 ms, because the cluster info collector requires finer-grained sampling. With this change, the stats sender process is started when gp_enable_query_metrics is on.

Author: Wang Hao <haowang@pivotal.io>
Author: Zhang Teng <tezhang@pivotal.io>
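A rough sketch of the adjusted cadence described above (illustrative only; the hook pointer and sender call are placeholder names, pg_usleep() is the PostgreSQL sleep primitive):

    /* Sketch of the sender's main loop; only the 100 ms tick and the optional
     * cluster-metrics hook call reflect this commit, everything else is a
     * placeholder. */
    static void
    stats_sender_main_loop(void)
    {
        for (;;)
        {
            if (cluster_metrics_hook != NULL)
                (*cluster_metrics_hook)();   /* placeholder hook for cluster-level collection */

            send_perfmon_stats();            /* placeholder for the existing sender work */

            pg_usleep(100 * 1000L);          /* 100 ms tick, was 1 second */
        }
    }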
-
- 27 Jan 2018, 1 commit
-
-
By Ashwin Agrawal
-
- 24 Jan 2018, 1 commit
-
-
By Ashwin Agrawal
Previously, we did not use the -w pg_ctl flag for the mirror, which waits until the segment is able to accept connections. We found a promotion scenario, in which gpstart runs after FTS has updated the catalog but before the promotion signal is sent, that deadlocks during parallel segment start. To fix this, we add a new postmaster flag showing that the walreceiver has started at least once, and respond to libpq calls accordingly with the CAC_MIRROR_READY value.

Author: Ashwin Agrawal <aagrawal@pivotal.io>
Author: Jimmy Yih <jyih@pivotal.io>
-
- 19 Jan 2018, 1 commit
-
-
By Jimmy Yih
This was found in the dispatch ICG regress test, where the ProcessStartupPacket fault injector is used. In this recent version of 6.0, the FTS probe handler now uses ProcessStartupPacket, so in addition to blocking the backend, the fault also blocked the FTS probe handler, which caused a failover to occur.

Author: Xin Zhang <xzhang@pivotal.io>
Author: Jimmy Yih <jyih@pivotal.io>
-
- 13 Jan 2018, 1 commit
-
-
By Xin Zhang
- Detect that the primary goes down.
- Flip the roles to m/p/d/n and p/m/u/n (role/preferred_role/status/mode) in gp_segment_configuration.
- Send a promotion message to the mirror to promote it.

Author: Xin Zhang <xzhang@pivotal.io>
Author: Jacob Champion <pchampion@pivotal.io>
Author: Asim R P <apraveen@pivotal.io>
-