1. 13 Apr 2018, 1 commit
    • Maintain oldest xmin among distributed snapshots separately on QD · 878c7694
      Committed by Asim R P
      Commit b3f300b9 introduced the novel idea of tracking the oldest xmin
      among all distributed snapshots on QEs.  However, the idea is not
      applicable to QD because all distributed transactions can be found in
      ProcArray on QD.  Local oldest xmin is therefore the oldest xmin among
      all distributed snapshots on QD.  This patch fixes the maintenance of
      oldest xmin on QD by avoiding DistributedLog_AdvanceOldestXmin() and all
      the heavy-lifting that it performs.  Calling this on QD was also hitting
      the "local snapshot's xmin is older than recorded distributed
      oldestxmin" error occasionally in CI.
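      A minimal sketch of the resulting control flow (the helper names are
      illustrative stand-ins, not the actual GPDB routines):

          #include <stdint.h>

          typedef uint32_t TransactionId;

          /* Hypothetical stand-ins for the real GPDB routines. */
          extern TransactionId compute_local_oldest_xmin(void);
          extern TransactionId qe_advance_oldest_xmin(TransactionId local_xmin);

          /*
           * On the QD every distributed transaction has a ProcArray entry,
           * so the locally computed oldest xmin already covers all
           * distributed snapshots; the heavy advance step is QE-only.
           */
          TransactionId
          get_oldest_xmin_sketch(int am_qd)
          {
              TransactionId local_xmin = compute_local_oldest_xmin();

              if (am_qd)
                  return local_xmin;    /* nothing more to do on the QD */

              return qe_advance_oldest_xmin(local_xmin);
          }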
  2. 22 Mar 2018, 2 commits
    • Rename GPDB specific waiting status API · aad8cde8
      Committed by Taylor Vesely
      The GPDB-specific API for pgstat_report_waiting() accepts a waiting
      reason, unlike the upstream counterpart, which accepts only a boolean
      flag.  Renaming the API to gpstat_report_waiting() lets us catch
      new uses of the API introduced by upstream merges.
      Co-authored-by: Asim R P <apraveen@pivotal.io>
    • Fix ps display issues · f26b5799
      Committed by Taylor Vesely
      Without this change, the ps display of postmaster child processes may get
      mangled.  E.g.:
      
      postgres: 15432, gpadmin isolation2test [local] con14 cmd52 con14 cm?~??????X???
      
      This change uses the GPDB specific function get_real_act_ps_display()
      to get the ps display string before it is modified.
      
      This change also sets the Gp_role of the FTS daemon process to utility
      instead of the default value of dispatch.  That prevents appending
      "conXXX" to the FTS daemon's ps display.
      Co-authored-by: Asim R P <apraveen@pivotal.io>
  3. 17 Mar 2018, 1 commit
    • Remove GPDB_84_MERGE_FIXME from sinvaladt.c · 2c55325d
      Committed by Jimmy Yih
      The fixme comment was placed here to warn that the implementation
      might be backwards on Greenplum 4.X and 5.X.  The concern is not valid
      for Greenplum 4.X because there are no lazy xids.  However, the concern
      is indeed valid for Greenplum 5.X.  This is noted and will be changed
      on the 5X_STABLE branch.
      
      [ci skip]
  4. 10 Mar 2018, 2 commits
    • Enable autovacuum, but only for 'template0'. · 4e655714
      Committed by Heikki Linnakangas
      Autovacuum has been completely disabled so far. In the upstream, even if
      you set autovacuum=off, it would still run, if necessary, to prevent XID
      wraparound, but in GPDB we would not launch it even for that.
      
      That is problematic for template0, and any other databases with
      datallowconn=false. If you cannot connect to a database, you cannot
      manually VACUUM it. Therefore, its datfrozenxid is never advanced. We had
      hacked our way through that by letting XID wraparound happen for
      databases with datallowconn=false. The theory was that template0 - and
      hopefully any other such database! - was fully frozen, so there is no harm
      in letting the XID counter wrap around. However, you get in trouble if you
      create a new database, using template0 as the template, around the time
      that XID wraparound for template0 is about to happen. The new database will
      inherit the datfrozenxid value, and because it will have datallowconn=true,
      the system will immediately shut down because now it looks like XID
      wraparound happened.
      
      To fix, re-enable autovacuum, in a very limited fashion. The autovacuum
      launcher is now started, but it will only perform anti-wraparound vacuums,
      and only on databases with datallowconn=false.
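      A sketch of the launcher-side policy this describes (types and names are
      assumptions for illustration, not the actual autovacuum code):

          #include <stdbool.h>

          typedef struct db_entry
          {
              const char *datname;
              bool        datallowconn;
              bool        near_xid_wraparound;  /* datfrozenxid getting old */
          } db_entry;

          /*
           * With autovacuum re-enabled in this limited fashion, a database
           * qualifies only when it cannot be connected to (e.g. template0)
           * AND it needs an anti-wraparound vacuum.
           */
          static bool
          should_autovacuum(const db_entry *db)
          {
              return !db->datallowconn && db->near_xid_wraparound;
          }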
      
      This includes fixes for some garden-variety bugs that were introduced
      to autovacuum when merging with upstream, and that went unnoticed because
      the code was unused.
      
      Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/gqordopb6Gg/-GHXSE4qBwAJ
    • Track "oldestxmin" based on distributed snapshots. · b3f300b9
      Committed by Heikki Linnakangas
      Before this, in order to safely determine if a tuple can be vacuumed away,
      you would need an active distributed snapshot. Even if an XID was older
      than the locally-computed OldestXMin value, the XID might still be visible
      to some distributed snapshot that's active in the QD.
      
      This commit introduces a mechanism to track the "oldest xmin" across any
      distributed snapshots. That makes it possible to calculate an "oldest xmin"
      value in a QE that covers any such distributed snapshots, even if the
      distributed transaction doesn't currently have an active connection to this
      QE. Every distributed snapshot contains such an "oldest xmin" value, but
      now we track the latest such value that we've seen in this QE, in shared
      memory. Therefore, it's not always 100% up-to-date, but it will reflect
      the situation as of the latest query that was dispatched from QD to this
      QE.
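      A minimal model of that shared-memory tracking (a pthread mutex stands in
      for the real shared-memory locking, and a plain integer comparison for
      wraparound-aware XID comparison):

          #include <pthread.h>
          #include <stdint.h>

          typedef uint32_t TransactionId;

          /* Latest "oldest xmin" seen in any dispatched distributed
           * snapshot on this QE (would live in shared memory). */
          static TransactionId latest_dist_oldest_xmin = 0;
          static pthread_mutex_t xmin_lock = PTHREAD_MUTEX_INITIALIZER;

          /* Called for each query arriving with a distributed snapshot.
           * The value only moves forward, so it reflects the situation as
           * of the latest query dispatched from the QD to this QE. */
          void
          record_dist_oldest_xmin(TransactionId snapshot_oldest_xmin)
          {
              pthread_mutex_lock(&xmin_lock);
              if (snapshot_oldest_xmin > latest_dist_oldest_xmin)
                  latest_dist_oldest_xmin = snapshot_oldest_xmin;
              pthread_mutex_unlock(&xmin_lock);
          }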
      
      The value returned by GetOldestXmin(), as well as RecentGlobalXmin, now
      includes any distributed transactions. So the value can now be used to
      determine which tuples are dead, like in upstream, without doing the extra
      check with the localXidSatisfiesAnyDistributedSnapshot() function. This
      allows reverting some changes in heap_tuple_freeze.
      
      This allows utility-mode VACUUMs, launched independently in QE nodes, to
      reclaim space. Previously, it could not remove any dead tuples that were
      ever visible to anyone, because it could not determine whether they might
      still be needed by some distributed transaction.
  5. 09 Mar 2018, 2 commits
    • 17419849
    • Fix distributed snapshots to include just-started transactions. · 1d70a450
      Committed by Heikki Linnakangas
      All in-progress transactions, even those in DTX_STATE_ACTIVE_NOT_DISTRIBUTED
      state, must be included in a distributed snapshot. All transactions begin
      in DTX_STATE_ACTIVE_NOT_DISTRIBUTED state, and can become distributed later
      on.
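      Sketched with illustrative types, the corrected snapshot-creation loop
      simply stops filtering on the DTX state:

          typedef enum
          {
              DTX_STATE_ACTIVE_NOT_DISTRIBUTED,
              DTX_STATE_ACTIVE_DISTRIBUTED
          } DtxState;

          typedef struct gxact_sketch
          {
              unsigned gxid;
              DtxState state;
          } gxact_sketch;

          /* Collect every in-progress transaction into the snapshot.  The
           * buggy version skipped DTX_STATE_ACTIVE_NOT_DISTRIBUTED
           * entries, but such a transaction can become distributed later. */
          static int
          collect_in_progress(const gxact_sketch *arr, int n, unsigned *out)
          {
              int count = 0;

              for (int i = 0; i < n; i++)
                  out[count++] = arr[i].gxid;   /* no state filter here */
              return count;
          }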
      
      This bug was introduced in commit ff97c70b, which added the ill-advised
      optimization to skip DTX_STATE_ACTIVE_NOT_DISTRIBUTED transactions.
      
      This showed up occasionally in the regression tests as a failure in the
      'oidjoins' test, like this:
      
      @@ -230,9 +230,17 @@
       SELECT	ctid, attrelid
       FROM	pg_catalog.pg_attribute fk
       WHERE    attrelid != 0 AND
           NOT EXISTS(SELECT 1 FROM pg_catalog.pg_class pk WHERE pk.oid = fk.attrelid);
      - ctid | attrelid
      -------+----------
      -(0 rows)
      +  ctid   | attrelid
      +---------+----------
      + (20,10) |    17107
      + (20,11) |    17107
      + (20,12) |    17107
      + (20,13) |    17107
      + (20,14) |    17107
      + (20,15) |    17107
      + (20,8)  |    17107
      + (20,9)  |    17107
      +(8 rows)
      
      The plan for that is a hash anti-join, with 'pg_class' in the inner side,
      and 'pg_attribute' on the outer side. If a table was created concurrently
      with that query (with oid 17107 in the above case), you could get the
      failure. What happens is that the concurrent CREATE TABLE transaction was
      assigned a distributed XID that was incorrectly not included in the
      distributed snapshot that the query took. Hence, the transaction became
      visible to the query immediately, as soon as it committed. If the CREATE
      TABLE transaction committed between the full scan of pg_class, and the
      scan on pg_attribute, the query would not see the just-inserted pg_class
      row, but would see the pg_attribute rows.
  6. 01 Feb 2018, 1 commit
    • Remove primary_mirror_mode stuff. · ae760e25
      Committed by Heikki Linnakangas
      Revert the state machine and other logic in postmaster.c to the way it is in
      upstream. Remove some GUCs related to mirrored and non-mirrored mode. Remove
      the -M, -x and -y postmaster options, and change management scripts to not
      pass those options.
  7. 30 Jan 2018, 1 commit
    • Alloc Instrumentation in Shmem · 67db4274
      Committed by Wang Hao
      On postmaster start, additional space in Shmem is allocated for Instrumentation
      slots and a header.  The number of slots is controlled by a cluster-level GUC;
      the default is 5MB (approximately 30K slots).  The default is estimated as 250
      concurrent queries * 120 nodes per query.  If the slots are exhausted,
      instruments are allocated in local memory as fallback.
      
      These slots are organized as a free list:
        - Header points to the first free slot.
        - Each free slot points to next free slot.
        - The last free slot's next pointer is NULL.
      
      ExecInitNode calls GpInstrAlloc to pick an empty slot from the free list:
        - The free slot pointed to by the header is picked.
        - The picked slot's next pointer is assigned to the header.
        - A spinlock on the header prevents concurrent writes.
        - When the GUC gp_enable_query_metrics is off, Instrumentation is
          allocated in local memory instead.
      
      Slots are recycled by a resource owner callback function (see the sketch
      below).
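      A runnable sketch of this free-list scheme (simplified: a pthread spinlock
      stands in for the shared-memory spinlock, and malloc models the
      local-memory fallback):

          #include <pthread.h>
          #include <stdlib.h>

          typedef struct InstrSlot
          {
              struct InstrSlot *next;  /* next free slot, NULL at list end */
              /* ... instrumentation counters would live here ... */
          } InstrSlot;

          static InstrSlot *free_head;          /* header: first free slot */
          static pthread_spinlock_t slot_lock;  /* pthread_spin_init() omitted */

          /* Pop one slot off the free list, as GpInstrAlloc would; fall
           * back to local memory (malloc here) when slots are exhausted. */
          InstrSlot *
          instr_alloc_sketch(void)
          {
              InstrSlot *slot;

              pthread_spin_lock(&slot_lock);
              slot = free_head;
              if (slot)
                  free_head = slot->next;  /* header now points past it */
              pthread_spin_unlock(&slot_lock);

              return slot ? slot : calloc(1, sizeof(InstrSlot));
          }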
      
      Benchmark results with TPC-DS show that the performance impact of this
      commit is less than 0.1%.  To improve the performance of instrumenting,
      the following optimizations were added:
        - Introduce instrument_option to skip CDB info collection
        - Optimize tuplecount in Instrumentation from double to uint64
        - Replace the instrument tuple entry/exit functions with macros
        - Add need_timer to Instrumentation, to allow eliminating timing overhead.
          This is porting part of upstream commit:
      ------------------------------------------------------------------------
      commit af7914c6
      Author: Robert Haas <rhaas@postgresql.org>
      Date:   Tue Feb 7 11:23:04 2012 -0500
      
      Add TIMING option to EXPLAIN, to allow eliminating of timing overhead.
      ------------------------------------------------------------------------
      
      Author: Wang Hao <haowang@pivotal.io>
      Author: Zhang Teng <tezhang@pivotal.io>
  8. 23 Jan 2018, 1 commit
    • Fix assertion failure · 6b7aa068
      Committed by xiong-gang
      The entry DB process shares the snapshot with the QD process, but
      TransactionXmin was not updated accordingly.  The assert in
      SubTransGetData() will therefore fail in some cases:
         Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin));
      
      1. QD takes a snapshot which contains an in-progress transaction A.
      2. Transaction A commits.
      3. QD creates an entry DB gang.  The entry DB process takes a snapshot
      in InitPostgres and updates TransactionXmin.
      4. The entry DB process scans a tuple inserted by transaction A and
      finds it in the shared snapshot, although its xid is older than the
      entry DB's TransactionXmin, so the assert trips.
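      A sketch of the implied fix, with illustrative types: when the entry DB
      process adopts the QD's shared snapshot, TransactionXmin has to be pulled
      back to that snapshot's xmin so the assert holds:

          #include <stdint.h>

          typedef uint32_t TransactionId;
          typedef struct { TransactionId xmin; } SnapshotSketch;

          static TransactionId TransactionXmin;

          /* After copying the shared (QD) snapshot, pull TransactionXmin
           * back so Assert(xid >= TransactionXmin) in SubTransGetData()
           * holds for every xid listed in the snapshot.  (Real XID
           * comparisons handle wraparound; plain < is used for brevity.) */
          static void
          adopt_shared_snapshot(const SnapshotSketch *shared)
          {
              if (shared->xmin < TransactionXmin)
                  TransactionXmin = shared->xmin;
          }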
  9. 22 Jan 2018, 1 commit
    • Reduce the contention on tmLock · ff97c70b
      Committed by Gang Xiong
      1. Move TMGXACT to PGPROC.
      2. Creating a distributed snapshot and creating a checkpoint now
      traverse the procArray and acquire ProcArrayLock; shmControlLock
      is only used to serialize recoverTM().
      3. Get rid of shmGxactArray and maintain an array of TMGXACT_LOG
      for recovery.
      
      Author: Gang Xiong <gxiong@pivotal.io>
      Author: Asim R P <apraveen@pivotal.io>
      Author: Ashwin Agrawal <aagrawal@pivotal.io>
  10. 13 Jan 2018, 6 commits
  11. 22 Dec 2017, 1 commit
  12. 14 Dec 2017, 1 commit
    • Remove Startup Pass 4 PT verification code. · 5361041d
      Committed by Ashwin Agrawal
      This code is hidden behind a GUC and never turned on, so there is no point
      keeping it.  It was written in the past due to some inconsistency issues
      which have not surfaced for a long time now.  Besides, the PT needs to go
      away soon anyway.
  13. 30 Oct 2017, 1 commit
  14. 28 Oct 2017, 1 commit
    • When dispatching, send ActiveSnapshot along, not some random snapshot. · 4a95afc1
      Committed by Heikki Linnakangas
      If the caller specifies DF_WITH_SNAPSHOT, so that the command is dispatched
      to the segments with a snapshot, but it currently has no active snapshot in
      the QD itself, that seems like a mistake.
      
      In qdSerializeDtxContextInfo(), the comment talked about which snapshot to
      use when the transaction has already been aborted. I didn't quite
      understand that. I don't think the function is used to dispatch the "ABORT"
      statement itself, and we shouldn't be dispatching anything else in an
      already-aborted transaction.
      
      This makes it more clear which snapshot is dispatched along with the
      command. In theory, the latest or serializable snapshot can be different
      from the one being used when the command is dispatched, although I'm not
      sure if there are any such cases in practice.
      
      In the upcoming 8.4 merge, there are more changes coming up to snapshot
      management, which make it more difficult to get hold of the latest acquired
      snapshot in the transaction, so changing this now will ease the pain of
      merging that.
      
      I don't know why, but after making the change in qdSerializeDtxContextInfo,
      I started to get a lot of "Too many distributed transactions for snapshot
      (maxCount %d, count %d)" errors. Looking at the code, I don't understand
      how it ever worked. I don't see any guarantee that the array in
      TempQDDtxContextInfo or TempDtxContextInfo was pre-allocated correctly.
      Or maybe it got allocated big enough to hold max_prepared_xacts, which
      was always large enough, but it seemed rather haphazard to me. So in
      the spirit of "if you don't understand it, rewrite it until you do", I
      changed the way the allocation of the inProgressXidArray array works.
      In statically allocated snapshots, i.e. SerializableSnapshot and
      LatestSnapshot, the array is malloc'd. In a snapshot copied with
      CopySnapshot(), it points to a part of the palloc'd space for the
      snapshot. Nothing new so far, but I changed CopySnapshot() to set
      "maxCount" to -1 to indicate that it's not malloc'd. Then I modified
      DistributedSnapshot_Copy and DistributedSnapshot_Deserialize to not give up
      if the target array is not large enough, but enlarge it as needed. Finally,
      I made a little optimization in GetSnapshotData() when running in a QE, to
      move the copying of the distributed snapshot data to outside the section
      guarded by ProcArrayLock. ProcArrayLock can be heavily contended, so that's
      a nice little optimization anyway, but especially now that
      DistributedSnapshot_Copy() might need to realloc the array.
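      A sketch of that enlarge-as-needed behavior (illustrative struct; plain
      malloc/realloc stand in for the malloc'd-vs-palloc'd distinction):

          #include <stdint.h>
          #include <stdlib.h>
          #include <string.h>

          typedef struct
          {
              uint32_t *inProgressXidArray;
              int       count;
              int       maxCount;  /* -1: storage not malloc'd */
          } DistSnapshotSketch;

          /* Enlarge the target array instead of giving up.  maxCount == -1
           * marks storage carved out of a palloc'd snapshot copy, so it is
           * replaced by a fresh malloc'd array rather than realloc'd. */
          static void
          ensure_capacity(DistSnapshotSketch *s, int needed)
          {
              if (s->maxCount >= needed)
                  return;
              if (s->maxCount == -1)
              {
                  uint32_t *fresh = malloc(needed * sizeof(uint32_t));

                  memcpy(fresh, s->inProgressXidArray,
                         s->count * sizeof(uint32_t));
                  s->inProgressXidArray = fresh;
              }
              else
                  s->inProgressXidArray = realloc(s->inProgressXidArray,
                                                  needed * sizeof(uint32_t));
              s->maxCount = needed;
          }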
  15. 29 Aug 2017, 2 commits
  16. 25 Aug 2017, 1 commit
    • Use ereport, rather than elog, for performance. · 01dff3ba
      Committed by Heikki Linnakangas
      ereport() has one subtle but important difference from elog(): it doesn't
      evaluate its arguments, if the log level says that the message doesn't
      need to be printed. This makes a small but measurable difference in
      performance, if the arguments contain more complicated expressions, like
      function calls.
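      A self-contained model of the difference (these are not the real
      PostgreSQL macros, just the evaluation behavior they exhibit):

          #include <stdio.h>

          static int log_min_level = 20;   /* messages below 20 suppressed */

          static void log_it(int level, const char *fmt, const char *s)
          {
              if (level >= log_min_level)
                  printf(fmt, s);
          }

          /* elog-style: arguments are evaluated before the level check. */
          #define ELOG_LIKE(level, fmt, arg) log_it((level), (fmt), (arg))

          /* ereport-style: the level check short-circuits, so an expensive
           * argument (think DtxContextToString()) is never evaluated for a
           * suppressed message. */
          #define EREPORT_LIKE(level, fmt, arg) \
              do { \
                  if ((level) >= log_min_level) \
                      log_it((level), (fmt), (arg)); \
              } while (0)

          static const char *expensive_to_string(void)
          {
              return "context";   /* imagine real string-building work */
          }

          int main(void)
          {
              ELOG_LIKE(10, "%s\n", expensive_to_string());    /* still called */
              EREPORT_LIKE(10, "%s\n", expensive_to_string()); /* skipped */
              return 0;
          }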
      
      While performance testing a workload with very short queries, I saw some
      CPU time being used in DtxContextToString. Those calls were coming from the
      arguments to elog() statements, and the result was always thrown away,
      because the log level was not high enough to actually log anything. Turn
      those elog()s into ereport()s, for speed.
      
      The problematic case here was a few elogs containing DtxContextToString
      calls, in hot codepaths, but I changed a few surrounding ones too, for
      consistency.
      
      Simplify the mock test, to not bother mocking elog(), while we're at it.
      The real elog/ereport work just fine in the mock environment.
  17. 09 Aug 2017, 1 commit
    • Do not include gp-libpq-fe.h and gp-libpq-int.h in cdbconn.h · cf7cddf7
      Committed by Pengzhou Tang
      The whole cdb directory was shipped to end users, and all header files
      that cdb*.h includes also need to be shipped to make checkinc.py
      pass.  However, exposing gp_libpq_fe/*.h would confuse customers because
      those headers are almost the same as libpq/*; per Heikki's suggestion, we
      keep gp_libpq_fe/* unchanged.  So, to make the system work, we include
      gp-libpq-fe.h and gp-libpq-int.h directly in the .c files that need them.
  18. 02 Aug 2017, 1 commit
    • Make memory spill in resource group take effect · 68babac4
      Committed by Richard Guo
      Resource group memory spill is similar to 'statement_mem' in
      resource queues; the difference is that memory spill is calculated
      according to the memory quota of the resource group.
      
      The related GUCs, variables, and functions shared by both resource
      queues and resource groups are moved to the resource manager namespace.
      
      Resource queue code relating to memory policy is also refactored in this commit.
      Signed-off-by: Pengzhou Tang <ptang@pivotal.io>
      Signed-off-by: Ning Yu <nyu@pivotal.io>
  19. 06 Jul 2017, 1 commit
    • Support an optional message in backend cancel/terminate (#2729) · fa6c2d43
      Committed by Daniel Gustafsson
      This adds the ability for the caller of pg_terminate_backend() or
      pg_cancel_backend() to include an optional message to the process
      which is being signalled. The message will be appended to the error
      message returned to the killed process. The new syntax is overloaded
      as:
      
          SELECT pg_terminate_backend(<pid> [, msg]);
          SELECT pg_cancel_backend(<pid> [, msg]);
  20. 19 Jun 2017, 1 commit
  21. 07 Jun 2017, 1 commit
    • restore TCP interconnect · 353a937d
      Committed by Pengzhou Tang
      This commit restores the TCP interconnect and fixes some hang issues.
      
      * Restore the TCP interconnect code
      * Add a GUC called gp_interconnect_tcp_listener_backlog to control the backlog parameter of the TCP listen call
      * Use memmove instead of memcpy because the memory areas do overlap
      * Call checkForCancelFromQD() for the TCP interconnect if there is no data for a while; this prevents the QD from getting stuck
      * Revert the cancelUnfinished-related modification in 8d251945, otherwise some queries get stuck
      * Move and rename the fault injector "cursor_qe_reader_after_snapshot" to make test cases pass under the TCP interconnect
  22. 02 Jun 2017, 1 commit
    • Remove subtransaction information from SharedLocalSnapshotSlot · b52ca70f
      Committed by Xin Zhang
      Originally, the reader kept copies of subtransaction information in
      two places.  First, it copied SharedLocalSnapshotSlot to share between
      writer and reader.  Second, the reader kept another copy in subxbuf for
      better performance.  Due to lazy xid, subtransaction information can
      change in the writer asynchronously with respect to the reader.  This
      caused the reader's subtransaction information to become stale.
      
      This fix removes those copies of subtransaction information in the
      reader and adds a reference to the writer's PGPROC to
      SharedLocalSnapshotSlot.  The reader now refers to subtransaction
      information through the writer's PGPROC and pg_subtrans.
      
      Also added is an LWLock per shared snapshot slot.  The lock protects
      shared snapshot information between a writer and the readers belonging
      to the same session.
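      A sketch of the resulting sharing scheme (a pthread rwlock stands in for
      the per-slot LWLock; structures are illustrative):

          #include <pthread.h>
          #include <stdint.h>

          typedef struct PGPROC_sketch
          {
              uint32_t subxids[64];   /* writer-maintained subxid cache */
              int      nsubxids;
          } PGPROC_sketch;

          typedef struct SharedSnapshotSlot_sketch
          {
              PGPROC_sketch   *writer_proc;  /* a reference, not a copy */
              pthread_rwlock_t lock;         /* per-slot lock, LWLock-like */
          } SharedSnapshotSlot_sketch;

          /* Readers keep no private copy of subtransaction state; they
           * read it through the writer's PGPROC under the slot lock, so
           * lazy-xid updates by the writer cannot leave them stale. */
          int
          reader_count_subxids(SharedSnapshotSlot_sketch *slot)
          {
              int n;

              pthread_rwlock_rdlock(&slot->lock);
              n = slot->writer_proc->nsubxids;
              pthread_rwlock_unlock(&slot->lock);
              return n;
          }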
      
      Fixes github issues #2269 and #2284.
      Signed-off-by: Asim R P <apraveen@pivotal.io>
  23. 01 Jun 2017, 1 commit
    • Optimize DistributedSnapshot check and refactor to simplify. · 3c21b7d8
      Committed by Ashwin Agrawal
      Before this commit, the snapshot stored the distributed in-progress
      transactions populated during snapshot creation together with their
      corresponding localXids, found later during tuple visibility checks
      (used as a cache for the reverse mapping), in a single tightly coupled
      data structure, DistributedSnapshotMapEntry.  Storing the information
      this way posed a couple of problems:
      
      1] Only one localXid can be cached per distributedXid.  With
      sub-transactions, the same distribXid can be associated with multiple
      localXids, but since only one can be cached, the distributed log must
      be consulted for the other local xids associated with the
      distributedXid.
      
      2] While performing a tuple visibility check, the code must always
      first loop over the full distributed in-progress array to check
      whether a cached localXid can be used to avoid the reverse mapping.
      
      Now the distributed in-progress array and the local xid cache are
      decoupled.  This allows multiple localXids to be stored per
      distributedXid.  It also allows scanning the localXid cache only when
      the tuple xid is relevant to it, and scanning only as many entries as
      are actually cached, instead of always scanning the full distributed
      in-progress array even when nothing was cached.
      
      Along the way, refactored the relevant code a bit to simplify it further.
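      Sketched with illustrative types, the decoupled layout looks roughly like
      this: the in-progress array and the localXid cache are independent, so
      one distributedXid can map to several cached localXids, and only
      cachedCount entries ever need scanning:

          #include <stdint.h>

          typedef struct
          {
              /* distributed in-progress xids, as before */
              uint32_t *inProgressXidArray;
              int       inProgressCount;

              /* separate cache of (distribXid, localXid) pairs; several
               * entries may share one distribXid (sub-transactions) */
              struct
              {
                  uint32_t distribXid;
                  uint32_t localXid;
              }        *cache;
              int       cachedCount;   /* scan only this many entries */
          } DistSnapshotWithLocalCache;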
  24. 04 May 2017, 1 commit
  25. 28 Apr 2017, 1 commit
    • Correct calculation of xminAllDistributedSnapshots and set it on QE's. · d887fe0c
      Committed by Ashwin Agrawal
      For vacuum, page pruning, and freezing to do their job correctly on QEs,
      they need to know, globally, the lowest dxid that any transaction in the
      full cluster can still see.  Hence the QD must calculate that value and
      send it to the QEs.  For this purpose we use logic similar to that for
      calculating the global xmin from local snapshots: TMGXACT for global
      transactions serves a role similar to PROC, so it is leveraged to
      provide the lowest gxid for its snapshot.  Further, using its array,
      shmGxactArray, we can easily find the lowest value across all global
      snapshots and pass it down to the QEs via the snapshot.
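      The minimum computation itself, sketched (plain integers; real XID
      comparisons must handle wraparound):

          #include <stdint.h>

          typedef struct
          {
              uint32_t xminDistributedSnapshot;  /* per-gxact snapshot xmin */
          } TMGXACT_sketch;

          /* Lowest distributed xmin across all global snapshots, as the QD
           * computes it while scanning shmGxactArray; the result travels
           * to the QEs inside the distributed snapshot. */
          uint32_t
          xmin_all_distributed_snapshots(const TMGXACT_sketch *arr, int n,
                                         uint32_t next_gxid)
          {
              uint32_t oldest = next_gxid;   /* upper bound if none active */

              for (int i = 0; i < n; i++)
                  if (arr[i].xminDistributedSnapshot < oldest)
                      oldest = arr[i].xminDistributedSnapshot;
              return oldest;
          }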
      
      Added a unit test for createDtxSnapshot along with the change.
  26. 13 Apr 2017, 1 commit
    • Fix dereference after null check in ProcArrayEndTransaction. · 7a9af586
      Committed by Ashwin Agrawal
      Coverity reported: Either the check against null is unnecessary, or there may be
      a null pointer dereference.  In ProcArrayEndTransaction: the pointer is checked
      against null but then dereferenced anyway.
      
      While it is not a real issue (in the commit case the pointer is never null),
      simplify the code and stop using the pointer here.
  27. 07 Apr 2017, 2 commits
    • Implement concurrency limit of resource group. · d0c6a352
      Committed by Kenan Yao
      Work includes:
      * define structures used by resource groups in shared memory;
      * insert/remove the shared memory object on Create/Drop Resource Group;
      * clean up and restore state when Create/Drop Resource Group fails;
      * implement the concurrency slot acquire/release functionality;
      * sleep when no concurrency slot is available, and wake up others when
      releasing a concurrency slot if necessary (sketched below);
      * handle signals in resource groups properly;
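      A compact sketch of the acquire/release protocol referenced above (a
      condition variable stands in for the real sleep/wakeup machinery):

          #include <pthread.h>

          typedef struct
          {
              int             free_slots;  /* the group's concurrency limit */
              pthread_mutex_t mu;          /* initializers omitted */
              pthread_cond_t  cv;
          } ResGroupSketch;

          /* Sleep while no concurrency slot is available... */
          void
          slot_acquire(ResGroupSketch *g)
          {
              pthread_mutex_lock(&g->mu);
              while (g->free_slots == 0)
                  pthread_cond_wait(&g->cv, &g->mu);
              g->free_slots--;
              pthread_mutex_unlock(&g->mu);
          }

          /* ...and wake one waiter when a slot is released. */
          void
          slot_release(ResGroupSketch *g)
          {
              pthread_mutex_lock(&g->mu);
              g->free_slots++;
              pthread_cond_signal(&g->cv);
              pthread_mutex_unlock(&g->mu);
          }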
      
      Signed-off-by: Richard Guo <riguo@pivotal.io>
      Signed-off-by: Gang Xiong <gxiong@pivotal.io>
    • Since we added a GUC 'gp_resource_manager' to switch between resource queue · e630fb1f
      Committed by Kenan Yao
      and resource group when 'resource_scheduler' is on, we need to change the
      condition of the resource queue branches.  Also, tidy up error messages
      related to the resource manager under these different GUC settings.
      
      Signed-off-by: Richard Guo <riguo@pivotal.io>
      Signed-off-by: Gang Xiong <gxiong@pivotal.io>
  28. 01 Apr 2017, 3 commits
    • Cleanup LocalDistribXactData related code. · 8c20bc94
      Committed by Ashwin Agrawal
      Commit fb86c90d "Simplify management of
      distributed transactions" cleaned up a lot of code for LocalDistribXactData
      and introduced LocalDistribXactData in PROC for debugging purposes.  But it
      is only correctly maintained on QEs; the QD never populated
      LocalDistribXactData in MyProc.  Instead, TMGXACT also had
      LocalDistribXactData, which was set initially on the QD but never updated
      later, and confused more than it served its purpose.  Hence, remove
      LocalDistribXactData from TMGXACT, as TMGXACT already has other fields that
      provide the required information.  Also cleaned up QD-related states, as
      even in PROC only QEs use LocalDistribXactData.
    • Fully enable lazy XID allocation in GPDB. · 0932453d
      Committed by Ashwin Agrawal
      As part of the 8.3 merge, upstream commit 295e6398
      "Implement lazy XID allocation" was merged.  But transactionIds were still
      allocated in StartTransaction, as the code changes required to make the
      feature work for GPDB with distributed transactions were pending, so the
      feature remained disabled.  Some progress was made by commit
      a54d84a3 "Avoid assigning an XID to
      DTX_CONTEXT_QE_AUTO_COMMIT_IMPLICIT queries."  This commit now addresses
      the pending work needed to handle deferred xid allocation correctly with
      distributed transactions and fully enables the feature.
      
      Important highlights of the changes (a sketch of the lazy-assignment core
      follows the list):
      
      1] Modify the xlog write and xlog replay record for DISTRIBUTED_COMMIT.
      Even if a transaction is read-only on the master and no xid is allocated
      to it, it can still be a distributed transaction and hence needs to
      persist itself in such a case.  So, write the xlog record even if no
      local xid is assigned but the transaction is prepared.  Similarly, during
      xlog replay of the XLOG_XACT_DISTRIBUTED_COMMIT type, perform distributed
      commit recovery ignoring the local commit.  This also means that in this
      case we don't commit to the distributed log, as it is only used to
      perform the reverse map of local xid to distributed xid.
      
      2] Remove localXID from gxact, as it no longer needs to be maintained
      and used.
      
      3] Refactor the code for the QE reader's StartTransaction.  There used to
      be a wait-loop with sleep, checking whether SharedLocalSnapshotSlot had
      the same distributed XID as the reader, in order to assign the reader the
      writer's xid for SET-type commands until the reader actually performed
      GetSnapshotData().  Since now a) the writer will not have a valid xid
      until it performs some write, so the writer's transactionId here is
      always InvalidTransaction, and b) read operations like SET don't need an
      xid any more, the need for this wait is gone.
      
      4] Throw an error if using a distributed transaction without a
      distributed xid.  Earlier, AssignTransactionId() was called for this case
      in StartTransaction(), but such a scenario doesn't exist, hence convert
      it to an ERROR.
      
      5] The QD, during snapshot creation in createDtxSnapshot(), was earlier
      able to assign the localXid in inProgressEntryArray corresponding to
      distribXid, as the localXid was known by that time.  That's no longer the
      case, and the localXid will mostly get assigned after the snapshot is
      taken.  Hence, now even on the QD, as on QEs, the localXid is not
      populated at snapshot creation time but is found later in
      DistributedSnapshotWithLocalMapping_CommittedTest().  There is a chance
      to optimize and somewhat match the earlier behavior by populating gxact
      in AssignTransactionId() once the localXid is known, but currently it
      does not seem worth it, as QEs have to perform the lookups anyway.
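      A sketch of the lazy-assignment core this enables (names are stand-ins;
      the real logic lives in StartTransaction/AssignTransactionId):

          #include <stdint.h>

          typedef uint32_t TransactionId;
          #define InvalidTransactionId ((TransactionId) 0)

          static TransactionId current_xid = InvalidTransactionId;
          static TransactionId next_xid = 100;   /* stand-in for the counter */

          /* StartTransaction: no xid is allocated up front any more. */
          void
          start_transaction_sketch(void)
          {
              current_xid = InvalidTransactionId;
          }

          /* The first write calls this; read-only transactions never do,
           * which is why a read-only QD transaction can commit a
           * distributed transaction with no local xid, and why the commit
           * record must be written even then (see item 1] above). */
          TransactionId
          assign_transaction_id_sketch(void)
          {
              if (current_xid == InvalidTransactionId)
                  current_xid = next_xid++;
              return current_xid;
          }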
    • Optimize distributed xact commit check. · 692be1a1
      Committed by Ashwin Agrawal
      Leverage the fact that inProgressEntryArray is sorted by distribXid when
      the snapshot is created in createDtxSnapshot.  This lets
      DistributedSnapshotWithLocalMapping_CommittedTest() break out of its
      scan early.
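      A sketch of the early exit this enables (illustrative function; the
      ascending order by distribXid is the property that matters):

          #include <stdbool.h>
          #include <stdint.h>

          /* arr[] is sorted ascending by distribXid (as createDtxSnapshot
           * builds it), so the scan stops at the first entry past xid. */
          bool
          in_progress_sketch(const uint32_t *arr, int n, uint32_t xid)
          {
              for (int i = 0; i < n; i++)
              {
                  if (arr[i] > xid)
                      break;        /* sorted: no later entry can match */
                  if (arr[i] == xid)
                      return true;
              }
              return false;
          }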