1. 08 Jan 2019, 1 commit
  2. 21 Dec 2018, 1 commit
    • Remove XLogLocationToString*() and guc Debug_print_qd_mirroring. · 6d902a61
      Ashwin Agrawal authored
      Most of the code under the GUC `Debug_print_qd_mirroring` was deleted when the
      special code for master mirroring (mmxlog and friends) was removed. Delete the
      remaining pieces and the GUC itself.
      
      Also, we were carrying a diff against upstream with the XLogLocationToString*()
      functions. Many of the variants were unused, and the ones that were used can be
      easily replaced, avoiding the need for a static string buffer to construct the
      string before printing it.
  3. 07 Dec 2018, 1 commit
    • Remove alert sending support via email and SNMP · 65822b80
      Daniel Gustafsson authored
      The support for sending alerts via email or SNMP was quite a kludge,
      and there are much better external tools for managing alerts than
      what we can supply in core anyway, so this retires the capability.
      
      All references to alert sending in the docs are removed, but a section
      still needs to be written about how to migrate off this feature
      in the release notes or a similar location.
      
      Discussion: https://github.com/greenplum-db/gpdb/pull/6384
  4. 04 Dec 2018, 1 commit
  5. 30 Nov 2018, 1 commit
    • Don't lose walreceiver start requests due to race condition in postmaster. · 5f0af5df
      Jacob Champion authored
      Backport of the following commit from upstream:
      
      commit 47fec424
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Mon Jun 26 17:31:56 2017 -0400
      
          Don't lose walreceiver start requests due to race condition in postmaster.
      
          When a walreceiver dies, the startup process will notice that and send
          a PMSIGNAL_START_WALRECEIVER signal to the postmaster, asking for a new
          walreceiver to be launched.  There's a race condition, which at least
          in HEAD is very easy to hit, whereby the postmaster might see that
          signal before it processes the SIGCHLD from the walreceiver process.
          In that situation, sigusr1_handler() just dropped the start request
          on the floor, reasoning that it must be redundant.  Eventually, after
          10 seconds (WALRCV_STARTUP_TIMEOUT), the startup process would make a
          fresh request --- but that's a long time if the connection could have
          been re-established almost immediately.
      
          Fix it by setting a state flag inside the postmaster that we won't
          clear until we do launch a walreceiver.  In cases where that results
          in an extra walreceiver launch, it's up to the walreceiver to realize
          it's unwanted and go away --- but we have, and need, that logic anyway
          for the opposite race case.
      
          I came across this through investigating unexpected delays in the
          src/test/recovery TAP tests: it manifests there in test cases where
          a master server is stopped and restarted while leaving streaming
          slaves active.
      
          This logic has been broken all along, so back-patch to all supported
          branches.
      
          Discussion: https://postgr.es/m/21344.1498494720@sss.pgh.pa.us
      
      We'll get there anyway as part of the REL9_4_STABLE merge, but this race
      has been injecting random 10-second delays into cluster startup for a
      while now. Let's fix it.
      
      For posterity, here's the order of operations that gets us into this
      situation, as hinted by the above commit message:
      
      - gpstart launches the cluster and waits for a COMMIT (which requires
        sync'd replication)
      - the primary and mirror begin to come up
      - mirror's startup process asks the WAL receiver to start, and
        eventually ends up waiting in WaitForWALToBecomeAvailable() on the
        XLogCtl->recoveryWakeupLatch
      - postmaster sees the request and forks off a new WAL receiver, setting
        WalReceiverPID
      - the new WAL receiver tries to connect to the primary, but it's told
            FATAL: the database system is starting up
        and so it starts to exit, setting the XLogCtl->recoveryWakeupLatch
      - the startup process wakes up again, sees that the WAL receiver is
        stopped, and once again requests WAL receiver startup from the
        postmaster
      - postmaster sees the startup request but it still has a valid
        WalReceiverPID; the request is ignored
      - The dying WAL receiver process finally exits, triggering a SIGCHLD on
        the postmaster
      - postmaster notes that the receiver is gone and clears WalReceiverPID
      - everything comes to a halt for 10 seconds, as gpstart waits for the
        mirror, the mirror's startup process waits for a WAL receiver that is
        not actually starting, and the mirror's postmaster waits for a
        START_WALRECEIVER request that was already ignored
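      A minimal sketch of the flag-based fix described above; the names below
      only approximate the upstream patch and fork/exec details are elided:
      
          #include <stdbool.h>
          #include <sys/types.h>
          
          static bool  WalReceiverRequested = false;  /* unserviced start request? */
          static pid_t WalReceiverPID = 0;            /* 0 = no walreceiver running */
          
          extern pid_t StartWalReceiverChild(void);   /* stand-in for the real fork */
          
          /*
           * Called from both the PMSIGNAL_START_WALRECEIVER handler and the
           * SIGCHLD reaper: the request flag survives until a walreceiver is
           * actually launched, so a request that arrives before the old child's
           * exit has been reaped is no longer dropped on the floor.
           */
          static void
          MaybeStartWalReceiver(void)
          {
              if (WalReceiverRequested && WalReceiverPID == 0)
              {
                  WalReceiverPID = StartWalReceiverChild();
                  if (WalReceiverPID != 0)
                      WalReceiverRequested = false;   /* request satisfied */
              }
          }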
  6. 29 Nov 2018, 1 commit
  7. 16 Nov 2018, 1 commit
    • QE writer should cancel QE readers before aborting · 206ffa6c
      David Kimura authored
      If a QE writer marks the transaction as aborted while a reader is
      still executing the query, the reader may affect shared memory state.
      For example, consider a transaction being aborted where a table needs
      to be dropped as part of abort processing: the writer has dropped all
      the buffers belonging to the table from the shared buffer cache and is
      about to unlink the file for the table.  Concurrently, a reader,
      unaware of the writer's abort, is still executing the query and may
      bring a page from the file that the writer is about to unlink back
      into the shared buffer cache.
      
      In order to prevent such situations the writer walks procArray to find
      the readers and sends SIGINT to them.
      
      Walking procArray is expensive, and is avoided as much as possible.
      The patch walks the procArray only if the command being aborted (or
      the last command in a transaction that is being aborted) performed a
      write and at least one reader slice was part of the query plan.
      
      To avoid confusion on the QD due to "canceling MPP operation" error
      messages emitted by the readers upon receiving the SIGINT, the readers
      do not emit them on the libpq channel.
      
      Discussion:
      https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/S2WL1FtJEJ0/Wh6DfJ-RBwAJ
      Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
      Co-authored-by: Asim R P <apraveen@pivotal.io>
      Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
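      A minimal sketch of the cancellation pass described above, with invented
      types and helper names standing in for the real procArray structures:
      
          #include <signal.h>
          #include <stdbool.h>
          #include <sys/types.h>
          
          /* Hypothetical stand-in for the per-backend entries in procArray. */
          typedef struct SessionProc
          {
              pid_t pid;
              int   sessionId;
              bool  isWriter;
          } SessionProc;
          
          /*
           * After marking the transaction aborted, the writer scans the proc
           * array and sends SIGINT to every reader backend of the same session,
           * so none of them can pull pages of soon-to-be-unlinked relations back
           * into the shared buffer cache.
           */
          static void
          CancelSessionReaders(const SessionProc *procs, int nprocs, int mySessionId)
          {
              for (int i = 0; i < nprocs; i++)
              {
                  if (procs[i].sessionId == mySessionId && !procs[i].isWriter)
                      kill(procs[i].pid, SIGINT);
              }
          }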
  8. 01 Nov 2018, 1 commit
  9. 25 Oct 2018, 1 commit
    • Unify the way to fetch/manage the number of segments (#6034) · 8eed4217
      Tang Pengzhou authored
      * Don't use GpIdentity.numsegments directly for the number of segments
      
      Use getgpsegmentCount() instead.
      
      * Unify the way to fetch/manage the number of segments
      
      Commit e0b06678 lets us expand a GPDB cluster without a restart, so the
      number of segments may now change during a transaction and we need to
      take care of numsegments.
      
      We currently have two ways to get the segment count: 1) from
      GpIdentity.numsegments, and 2) from gp_segment_configuration
      (cdb_component_dbs), which the dispatcher uses to decide the range of
      segments to dispatch to. We did some hard work in e0b06678 to keep
      GpIdentity.numsegments updated correctly, which made the management of
      segments more complicated; now we want an easier way to do it:
      
      1. Segment info (including the number of segments) may only be obtained
      through gp_segment_configuration, which always holds the newest segment
      info. There is then no need to update GpIdentity.numsegments; it is kept
      only for debugging and can be removed entirely in the future.
      
      2. Each global transaction fetches the newest snapshot of
      gp_segment_configuration and never changes it until the end of the
      transaction unless a writer gang is lost, so a global transaction sees a
      consistent view of the segments. We used to use gxidDispatched for the
      same purpose; it can now be removed.
      
      * Remove GpIdentity.numsegments
      
      GpIdentity.numsegments has no effect now, so remove it. This commit
      does not remove gp_num_contents_in_cluster because that also requires
      modifying utilities like gpstart, gpstop and gprecoverseg; let's do
      that cleanup work in another PR.
      
      * Exchange the default UP/DOWN value in fts cache
      
      Previously, the FTS prober read gp_segment_configuration, checked the
      status of each segment and then recorded it in the shared memory array
      named ftsProbeInfo->fts_status[], so that other components (mainly the
      dispatcher) could detect that a segment was down.
      
      All segments were initialized as DOWN and then, in the most common case,
      updated to UP. This brings two problems:
      
      1. fts_status is invalid until FTS completes its first loop, so the QD
      needs to check ftsProbeInfo->fts_statusVersion > 0.
      2. When gpexpand adds a new segment to gp_segment_configuration, the
      newly added segment may be reported as DOWN if FTS has not scanned it
      yet.
      
      This commit changes the default value from DOWN to UP, which resolves
      the problems mentioned above.
      
      * Fts should not be used to notify backends that a gpexpand occurs
      
      As Ashwin mentioned in PR #5679: "I don't think giving FTS responsibility to
      provide new segment count is right. FTS should only be responsible for HA
      of the segments. The dispatcher should independently figure out the count
      based on catalog. gp_segment_configuration should be the only way to get
      the segment count." FTS should therefore be decoupled from gpexpand.
      
      * Access gp_segment_configuration inside a transaction
      
      * Upgrade log level from ERROR to FATAL if the expand version changed
      
      * Modify gpexpand test cases according to new design
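      A minimal sketch of the convention this commit settles on, using the
      getgpsegmentCount() helper named in the commit; the surrounding dispatcher
      code is invented for illustration:
      
          /* getgpsegmentCount() returns the segment count from the
           * gp_segment_configuration snapshot taken by the transaction. */
          extern int getgpsegmentCount(void);
          
          static int
          choose_dispatch_targets(int *targets, int max_targets)
          {
              /* Before: int total = GpIdentity.numsegments;  (can go stale
               * once the cluster is expanded without a restart) */
              int total = getgpsegmentCount();
          
              if (total > max_targets)
                  total = max_targets;
              for (int i = 0; i < total; i++)
                  targets[i] = i;             /* content ids 0..total-1 */
              return total;
          }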
  10. 15 Oct 2018, 1 commit
    • Online expand without restarting gpdb cluster (#5679) · e0b06678
      Ning Yu authored
      * Protect catalog changes on master.
      
      To allow gpexpand to do the job without restarting the cluster, we need
      to prevent concurrent catalog changes on the master.  A catalog lock is
      provided for this purpose: all inserts/updates/deletes to catalog tables
      must hold this lock in shared mode before making the changes, while
      gpexpand holds the lock in exclusive mode to 1) wait for in-progress
      catalog updates to commit/rollback and 2) prevent concurrent catalog
      updates.
      
      Add UDF to hold catalog lock in exclusive mode.
      
      Add test cases for the catalog lock.
      Co-authored-by: Jialun Du <jdu@pivotal.io>
      Co-authored-by: Ning Yu <nyu@pivotal.io>
      
      * Numsegments management.
      
      GpIdentity.numsegments is a global variable holding the cluster size in
      each process. It is important for the cluster: a change of the cluster
      size means that the expansion has finished and the new segments have
      taken effect.
      
      FTS counts the number of primary segments from gp_segment_configuration
      and records it in shared memory. Later, other processes including the
      master, FTS, GDD and QDs update GpIdentity.numsegments with this
      information.
      
      So far it is not easy to make old transactions run with new segments,
      so for a QD GpIdentity.numsegments can only be updated at the beginning
      of a transaction. Old transactions can only see the old segments. The QD
      dispatches GpIdentity.numsegments to the QEs, so they see the same
      cluster size.
      
      Catalog changes in old transactions are disallowed.
      
      Consider below workflow:
      
              A: begin;
              B: gpexpand from N segments to M (M > N);
              A: create table t1 (c1 int);
              A: commit;
              C: drop table t1;
      
      Transaction A began when the cluster size was still N, so all of its
      commands are dispatched to N segments only, even after the cluster has
      been expanded to M segments. As a result t1 is created on only N
      segments, for both its data distribution and its catalog records.  This
      causes the later DROP TABLE to fail on the new segments.
      
      To prevent this issue we currently disable all catalog updates in old
      transactions once the expansion is done.  New transactions can update
      the catalog as they are already running on all M segments.
      Co-authored-by: Jialun Du <jdu@pivotal.io>
      Co-authored-by: Ning Yu <nyu@pivotal.io>
      
      * Online gpexpand implementation.
      
      Do not restart the cluster during expansion.
      - Lock the catalog
      - Create the new segments from master segment
      - Add new segments to cluster
      - Reload the cluster so new transaction and background processes
        can see new segments
      - Unlock the catalog
      
      Add test cases.
      Co-authored-by: Jialun Du <jdu@pivotal.io>
      Co-authored-by: Ning Yu <nyu@pivotal.io>
      
      * New job to run ICW after expansion.
      
      It is better to run ICW after online expansion to see whether the
      cluster still works well. So we add a new job that first creates a
      cluster with two segments, then expands it to three segments and runs
      all the ICW cases. Restarting the cluster is forbidden, since we must
      be sure all the cases pass after an online expansion alone; any case
      that restarts the cluster must therefore be removed from this job.
      
      The new job is icw_planner_centos7_online_expand. It is the same as
      icw_planner_centos7 but adds two new parameters, EXCLUDE_TESTS and
      ONLINE_EXPAND. If ONLINE_EXPAND is set, the ICW shell takes a different
      branch: creating a cluster of size two, expanding, and so on.
      EXCLUDE_TESTS lists the cases that restart the cluster, or that would
      fail without those restarting cases.
      
      After the whole run, the pid of the master is checked; if it has
      changed, the cluster must have been restarted, so the job fails. As a
      result, any new test case that restarts the cluster should be added to
      EXCLUDE_TESTS.
      
      * Add README.
      Co-authored-by: Jialun Du <jdu@pivotal.io>
      Co-authored-by: Ning Yu <nyu@pivotal.io>
      
      * Small changes per review comments.
      
      - Delete confusing test case
      - Change function name updateBackendGpIdentityNumsegments to
        updateSystemProcessGpIdentityNumsegments
      - Fix some typos
      
      * Remove useless Assert
      
      These two asserts caused a failure in isolation2/uao/
      compaction_utility_insert, which checks AO tables in utility mode.
      numsegments is meaningless when connected to a single segment, so the
      asserts must be removed.
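      A minimal sketch of the catalog-lock pattern from the first part of this
      commit; the lock API names below are invented for illustration:
      
          /* Hypothetical lock modes and API standing in for the real lock. */
          typedef enum { CATALOG_LOCK_SHARED, CATALOG_LOCK_EXCLUSIVE } CatalogLockMode;
          
          extern void AcquireCatalogLock(CatalogLockMode mode);
          extern void ReleaseCatalogLock(void);
          
          /* Every catalog insert/update/delete takes the lock in shared mode... */
          static void
          catalog_change(void)
          {
              AcquireCatalogLock(CATALOG_LOCK_SHARED);
              /* ... perform the catalog insert/update/delete ... */
              ReleaseCatalogLock();
          }
          
          /* ...while gpexpand takes it exclusively, so it both waits out
           * in-progress catalog updates and blocks new ones while the new
           * segments are created and added to gp_segment_configuration. */
          static void
          gpexpand_lock_catalog(void)
          {
              AcquireCatalogLock(CATALOG_LOCK_EXCLUSIVE);
              /* create segments, add them to the cluster, reload, ... */
              ReleaseCatalogLock();
          }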
  11. 12 Sep 2018, 1 commit
    • Fix intermittent failures in dispatch test cases · 1803e1eb
      Pengzhou Tang authored
      In the dispatch test cases we need a way to put a segment into in-recovery
      status to test the gang-recreation logic of the dispatcher.
      
      We used to trigger a panic fault on a segment and suspend quickdie()
      to simulate in-recovery status. To avoid the segment staying in recovery
      mode for a long time, we used a 'sleep' fault instead of 'suspend' in
      quickdie(), so the segment could accept new connections after 5 seconds.
      Five seconds works fine most of the time but is still not stable enough,
      so we decided to use a more straightforward means of simulating
      in-recovery mode: report POSTMASTER_IN_RECOVERY_MSG directly in
      ProcessStartupPacket(). To avoid affecting other backends, we create a
      new database so the fault injectors only affect the dispatch test cases.
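      A minimal sketch of the simulation described above; the fault-injector
      hook, helper names and message text here are placeholders, not the actual
      GPDB API:
      
          #include <stdbool.h>
          #include <stdio.h>
          
          /* Illustrative text; the real macro lives in the server sources. */
          #define POSTMASTER_IN_RECOVERY_MSG "the database system is in recovery mode"
          
          extern bool in_recovery_fault_set_for(const char *dbname);
          
          static bool
          process_startup_packet_sketch(const char *dbname)
          {
              if (in_recovery_fault_set_for(dbname))
              {
                  /* Report "in recovery" immediately instead of parking the
                   * segment in real recovery behind a sleeping quickdie(). */
                  fprintf(stderr, "FATAL: %s\n", POSTMASTER_IN_RECOVERY_MSG);
                  return false;                   /* reject this connection */
              }
              return true;                        /* continue normal startup */
          }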
  12. 24 Aug 2018, 2 commits
    • Minor cleanup in postmaster.c. · 34457fc3
      Heikki Linnakangas authored
      * Fix whitespace to match upstream
      * Fix log message to match upstream
      * Remove a chunk of broken code for platforms that don't have strsignal or
        sys_siglist. The code now matches upstream. (Apparently all platforms we
        test on have them, or we would have gotten a compiler error.)
      * Remove unused 'didServiceProcessWork' local variable
      * Remove unused 'EXIT_STATUS_2()' macro
    • Remove some stray references to "sequence server" · f555d670
      Heikki Linnakangas authored
      The sequence server was removed in commit eae1ee3f, but a couple of
      comments and function prototypes for the removed functions were left
      behind.
  13. 15 Aug 2018, 1 commit
  14. 14 Aug 2018, 1 commit
  15. 03 Aug 2018, 1 commit
  16. 02 Aug 2018, 1 commit
    • Merge with PostgreSQL 9.2beta2. · 4750e1b6
      Richard Guo authored
      This is the final batch of commits from PostgreSQL 9.2 development,
      up to the point where the REL9_2_STABLE branch was created, and 9.3
      development started on the PostgreSQL master branch.
      
      Notable upstream changes:
      
      * Index-only scan was included in the batch of upstream commits. It
        allows queries to retrieve data only from indexes, avoiding heap access.
      
      * Group commit was added to work effectively under heavy load. Previously,
        batching of commits became ineffective as the write workload increased,
        because of internal lock contention.
      
      * A new fast-path lock mechanism was added to reduce the overhead of
        taking and releasing certain types of locks which are taken and released
        very frequently but rarely conflict.
      
      * The new "parameterized path" mechanism was added. It allows inner index
        scans to use values from relations that are more than one join level up
        from the scan. This can greatly improve performance in situations where
        semantic restrictions (such as outer joins) limit the allowed join orderings.
      
      * SP-GiST (Space-Partitioned GiST) index access method was added to support
        unbalanced partitioned search structures. For suitable problems, SP-GiST can
        be faster than GiST in both index build time and search time.
      
      * Checkpoints are now performed by a dedicated background process. Formerly
        the background writer did both dirty-page writing and checkpointing; separating
        this into two processes allows each goal to be accomplished more predictably.
      
      * Custom plans for specific parameter values are now supported even when
        using prepared statements.
      
      * The FDW API was improved so that FDWs can provide multiple access "paths"
        for their tables, allowing more flexibility in join planning.
      
      * The security_barrier option was added for views to prevent optimizations
        that might allow view-protected data to be exposed to users.
      
      * Range data type was added to store a lower and upper bound belonging to its
        base data type.
      
      * CTAS (CREATE TABLE AS / SELECT INTO) is now treated as a utility statement.
        The SELECT query is planned during the execution of the utility. To conform
        to this change, GPDB executes the utility statement only on the QD and
        dispatches the plan of the SELECT query to the QEs.
      Co-authored-by: Adam Lee <ali@pivotal.io>
      Co-authored-by: Alexandra Wang <lewang@pivotal.io>
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      Co-authored-by: Asim R P <apraveen@pivotal.io>
      Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
      Co-authored-by: Gang Xiong <gxiong@pivotal.io>
      Co-authored-by: Haozhou Wang <hawang@pivotal.io>
      Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
      Co-authored-by: Jinbao Chen <jinchen@pivotal.io>
      Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
      Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
      Co-authored-by: Paul Guo <paulguo@gmail.com>
      Co-authored-by: Richard Guo <guofenglinux@gmail.com>
      Co-authored-by: Shujie Zhang <shzhang@pivotal.io>
      Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
      Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
  17. 17 Jul 2018, 1 commit
    • Refactor of Sequence handling. · eae1ee3f
      David Kimura authored
      Handling sequences in an MPP setting is challenging. This patch refactors
      sequence handling, mainly to eliminate the shortcomings and pitfalls of the
      previous implementation. Let's first glance at the issues with the older
      implementation:
      
        - it required relfilenode == oid for all sequence relations
        - "denial of service" was possible, because a single dedicated process on
          the QD served sequence values for all tables in all databases to all QEs
        - sequence tables were directly opened as a result, instead of going
          through the relcache
        - it diverged from the upstream implementation
      
      Many solutions were considered (see the mailing list discussion) before
      settling on the one implemented here. The new implementation still
      generates sequence values in a centralized place, the QD. It now leverages
      the existing QD backend process connected to the QEs for the query to
      serve nextval requests. As a result the need for relfilenode == oid goes
      away: based on the oid, the QD process can look up the relfilenode from
      the catalog and also leverage the relcache. There are no more direct opens
      by a single process across databases.
      
      For the communication between QD and QE for a sequence nextval request, an
      async notify message is used (notify messages are not currently used in
      GPDB for anything else). The QD process, while waiting for results from
      the QEs, is sitting idle; on seeing the nextval request it calls
      `nextval_internal()` and responds with the value.
      
      Since the need for a separate sequence server process went away, all of
      its code is removed.
      
      Discussion:
      https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/hni7lS9xH4c/o_M3ddAeBgAJ
      Co-authored-by: Asim R P <apraveen@pivotal.io>
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
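      A minimal sketch of the QD-side flow described above, with invented helper
      names; only nextval_internal() itself comes from the commit:
      
          #include <stdint.h>
          
          extern int64_t nextval_internal_for_oid(uint32_t seq_oid); /* wraps nextval_internal() */
          extern void    send_nextval_reply(uint32_t seq_oid, int64_t value);
          
          /*
           * While the dispatcher sits waiting for query results from the QEs, an
           * async-notify style message carrying a nextval request is answered
           * inline by this same backend: the sequence is found by oid through the
           * normal catalog/relcache path, so relfilenode == oid is no longer
           * required and no cross-database direct opens happen.
           */
          static void
          handle_qe_nextval_request(uint32_t seq_oid)
          {
              int64_t value = nextval_internal_for_oid(seq_oid);
              send_nextval_reply(seq_oid, value);
          }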
  18. 29 Jun 2018, 1 commit
    • Restrict max_wal_senders guc to 1 in GPDB. · db53d8cf
      Ashwin Agrawal authored
      GPDB currently supports only one replica. FTS and friends need to be
      adapted before 1:n replication can be supported; until then, restrict the
      max_wal_senders GUC to 1. Later, when the code can handle it, the maximum
      value of the GUC can be raised.
      
      Also, remove the setting of max_wal_senders in the postmaster, which was
      added earlier for dealing with filerep/walrep co-existence.
  19. 27 Jun 2018, 1 commit
    • Remove the GPDB_91_MERGE_FIXME in sigusr1_handler(). · 1a1f650f
      Ashwin Agrawal authored
      Currently the GPDB WAL replication code is a mix of multiple versions;
      once we reach 9.3 we get the opportunity to bring it back in sync with
      the upstream version. This will be taken care of then; until that time we
      live with the GPDB-modified version of CheckPromoteSignal().
  20. 07 Jun 2018, 1 commit
  21. 12 May 2018, 1 commit
  22. 03 May 2018, 1 commit
    • Add Global Deadlock Detector. · 03915d65
      Zhenghua Lyu authored
      To prevent distributed deadlocks, Greenplum DB holds an exclusive table
      lock for UPDATE and DELETE commands, so concurrent updates to the same
      table are effectively disabled.
      
      We add a backend process to perform global deadlock detection so that we
      no longer lock the whole table for UPDATE/DELETE; this helps improve the
      concurrency of Greenplum DB.
      
      The core idea of the algorithm is to divide locks into two types:
      
      - Persistent: the lock can only be released after the transaction is over
        (commit/abort)
      - Non-persistent: all other cases
      
      This PR's implementation adds a persistent flag to the LOCK, and the rule
      for setting it is:
      
      - An xid lock is always persistent
      - A tuple lock is never persistent
      - A relation lock is persistent if the relation has been closed with the
        NoLock parameter, otherwise it is not
      - Other types of locks are not persistent
      
      For more details please refer to the code and the README.
      
      There are several known issues to pay attention to:
      
      - This PR's implementation only cares about locks that can be shown
        in the view pg_locks.
      - This PR's implementation does not support AO tables; we keep upgrading
        the locks for AO tables.
      - This PR's implementation does not take network waits into account,
        so we cannot detect the deadlock of GitHub issue #2837.
      - SELECT FOR UPDATE still locks the whole table.
      Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
      Co-authored-by: Ning Yu <nyu@pivotal.io>
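      A minimal sketch of the persistent-flag rule listed above; the enum and
      function are invented for illustration and do not mirror the real LOCK
      structures:
      
          #include <stdbool.h>
          
          typedef enum { LOCK_XID, LOCK_TUPLE, LOCK_RELATION, LOCK_OTHER } LockKind;
          
          static bool
          lock_is_persistent(LockKind kind, bool relation_closed_with_nolock)
          {
              switch (kind)
              {
                  case LOCK_XID:
                      return true;    /* xid locks: always persistent */
                  case LOCK_TUPLE:
                      return false;   /* tuple locks: never persistent */
                  case LOCK_RELATION:
                      /* persistent only if the relation was closed with NoLock,
                       * i.e. the lock is held until the transaction ends */
                      return relation_closed_with_nolock;
                  default:
                      return false;   /* other lock types: not persistent */
              }
          }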
  23. 02 May 2018, 1 commit
    • FTS detects when primary is in recovery avoiding config change · d453a4aa
      Ashwin Agrawal authored
      Previously, when a primary was in crash recovery the FTS probe failed and
      hence the primary was marked down. This change provides a recovery
      progress metric so that FTS can detect progress: the last replayed LSN is
      added to the error message. This allows FTS to distinguish between
      recovery in progress and a recovery hang or rolling panics. Only when FTS
      detects that recovery is not making progress does it mark the primary
      down.
      
      For testing, a new fault injector is added to allow simulating a recovery
      hang and recovery in progress.
      
      Just FYI: this reverts the reverted commit 7b7219a4.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      Co-authored-by: David Kimura <dkimura@pivotal.io>
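      A minimal sketch of the progress check described above, with invented
      names; in the real code the replayed LSN is taken from the probe's error
      message:
      
          #include <stdbool.h>
          #include <stdint.h>
          
          typedef uint64_t XLogRecPtr;
          
          /* Recovery is making progress if replay advanced since the last probe. */
          static bool
          recovery_making_progress(XLogRecPtr prev_replayed, XLogRecPtr curr_replayed)
          {
              return curr_replayed > prev_replayed;
          }
          
          /* Only a replay LSN that stays put across enough probes marks the
           * primary down; a merely slow recovery keeps its configuration. */
          static bool
          should_mark_primary_down(XLogRecPtr prev_replayed, XLogRecPtr curr_replayed,
                                   int stalled_probes, int max_stalled_probes)
          {
              if (recovery_making_progress(prev_replayed, curr_replayed))
                  return false;
              return stalled_probes >= max_stalled_probes;
          }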
  24. 01 May 2018, 2 commits
  25. 20 Mar 2018, 2 commits
    • Remove GPDB_84_MERGE_FIXME from postmaster.c · 9ad42542
      Jimmy Yih authored
      The FIXME comment asked whether or not we wanted to remove the check
      on CAC_WAITBACKUP.  It is true that we do not currently use the WAITBACKUP
      state, but we should continue to have the code.  It will prevent some
      merge conflicts during PostgreSQL merges, and we may use it sometime in
      the future if it becomes applicable to a Greenplum feature.
      
      [ci skip]
    • FtsNotifyProber() shouldn't wait infinitely for FTS probes. · edc3914d
      Asim R P authored
      - Use the standard PMSignal mechanism to trigger FTS probes from the
        dispatcher.
      
      - Use the statusVersion flag only to indicate whether a probe cycle
        resulted in a configuration update. It was previously overloaded to also
        unblock callers waiting for an FTS probe to be triggered and finish.
      
      - Use a separate flag, "probeTick", for that purpose.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
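      A minimal sketch of the waiting scheme described above, with invented
      helper names around the probeTick/statusVersion flags from the commit:
      
          #include <stdint.h>
          
          static volatile uint32_t probeTick;      /* bumped once per completed probe  */
          static volatile uint32_t statusVersion;  /* bumped only on config changes    */
          
          extern void request_fts_probe(void);     /* stand-in for the PMSignal request */
          extern void short_sleep(void);           /* bounded wait between checks       */
          
          /* Callers wait for the probe counter to tick rather than for a
           * configuration-version bump, so a probe that changes nothing still
           * releases them instead of blocking indefinitely. */
          static void
          fts_notify_prober_sketch(void)
          {
              uint32_t started = probeTick;
          
              request_fts_probe();
              while (probeTick == started)
                  short_sleep();
          }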
  26. 14 Mar 2018, 1 commit
    • Simplify handling of the timezone GUC by making initdb choose the default. · a637f5c3
      Daniel Gustafsson authored
      Commit ba5cfa93 introduced the upstream
      code for managing timezones, but it failed to include the below diff
      to initdb for moving timezone handling from the postmaster to initdb.
      This is a partial backport of the commit below; some hunks didn't apply
      because they require code introduced in PostgreSQL 9.1 and 9.2.
      
        commit ca4af308
        Author: Tom Lane <tgl@sss.pgh.pa.us>
        Date:   Fri Sep 9 17:59:11 2011 -0400
      
          Simplify handling of the timezone GUC by making initdb choose the default.
      
          We were doing some amazingly complicated things in order to avoid running
          the very expensive identify_system_timezone() procedure during GUC
          initialization.  But there is an obvious fix for that, which is to do it
          once during initdb and have initdb install the system-specific default into
          postgresql.conf, as it already does for most other GUC variables that need
          system-environment-dependent defaults.  This means that the timezone (and
          log_timezone) settings no longer have any magic behavior in the server.
          Per discussion.
  27. 10 Mar 2018, 2 commits
    • Mirror should not get marked as DOWN after primary recovery. · 3c0e3493
      Ashwin Agrawal authored
      Added an additional global timestamp, PMAcceptingConnectionsStartTime, to
      track the end of recovery, i.e. the point at which the database server is
      ready to accept connections.
      
      When checking mirror status, we compute the grace period based not only
      on when the last WAL sender died, but also on when the recovery process
      finished.
      
      This fixes the issue where the mirror is marked down due to a long
      recovery time on the primary.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      Co-authored-by: Xin Zhang <xzhang@pivotal.io>
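      A minimal sketch of the grace-period anchoring described above; the
      function and parameter names are illustrative:
      
          #include <time.h>
          
          /* The grace period starts at whichever happened later: the last
           * walsender exit or the moment the primary finished recovery
           * (PMAcceptingConnectionsStartTime), so a long primary recovery no
           * longer eats into the mirror's grace period. */
          static time_t
          grace_period_start(time_t last_walsender_exit_time,
                             time_t pm_accepting_connections_start_time)
          {
              return (pm_accepting_connections_start_time > last_walsender_exit_time)
                      ? pm_accepting_connections_start_time
                      : last_walsender_exit_time;
          }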
    • Reset the flag to enable 'pg_ctl -w' on mirror before promotion · 466e72a1
      Asim R P authored
      The flag pm_launch_walreceiver helps to determine whether a mirror has made
      at least one attempt to connect to the primary.  For a small window right
      after a mirror is promoted, the flag was still in effect.  If an FTS probe
      request arrived in this window, the following assertion would trip:
      
      "FailedAssertion(""!(am_mirror)"", File: ""postmaster.c"", Line: 2201)"
      
      Reset the flag before signaling promotion so that it cannot interfere with
      new connections after promotion.
  28. 06 Mar 2018, 1 commit
  29. 01 Mar 2018, 1 commit
    • Enable listen_addresses · c93bb171
      xiong-gang authored
      listen_addresses in postgresql.conf currently has no effect: backends and
      the postmaster listen on all addresses. From a security point of view, the
      user should be able to specify the listen address.
  30. 01 Feb 2018, 7 commits