1. 05 April 2017, 8 commits
    • A
      Fix ON COMMIT DELETE ROWS action for append optimized tables · 5ccdd6a2
      Committed by Asim R P

      The fix is to perform the same steps as a TRUNCATE command: assign new
      relfilenodes to the parent AO table as well as all its auxiliary tables, and
      drop the existing ones.

      This fixes issue #913.  Thank you, Tao-Ma, for reporting the issue and
      proposing a fix as PR #960.  This commit implements Tao-Ma's idea, but the
      implementation differs from the original proposal.
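      As a rough illustration of the behavior being fixed (table name hypothetical),
      an append-optimized temp table declared with ON COMMIT DELETE ROWS should come
      out empty after each transaction, just as if it had been truncated:

      ```sql
      -- Hypothetical example: once the transaction commits, the rows are gone and
      -- the AO table and its auxiliary tables have been given fresh relfilenodes.
      CREATE TEMP TABLE t_ao_demo (a int)
          WITH (appendonly=true)
          ON COMMIT DELETE ROWS;
      BEGIN;
      INSERT INTO t_ao_demo VALUES (1), (2), (3);
      COMMIT;
      SELECT count(*) FROM t_ao_demo;  -- expected: 0
      ```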
      5ccdd6a2
    • A
      Enhance UAO templating to recognize "aoseg" and "aocsseg" keywords. · 9fc5eee5
      Committed by Asim R P
      This is useful if a test wants to use gp_toolkit.__gp_{aoseg|aocsseg}*
      functions.
      9fc5eee5
    • B
      Ignore very wide columns in analyze sample · 8492807f
      Committed by Bhuvnesh Chaudhary and Ekta Khanna

      ANALYZE collects a sample from the table; if the sample contains columns
      holding very long values, memory usage can grow high enough to cancel the
      query.

      This commit masks wide values (i.e. pg_column_size(col) > WIDTH_THRESHOLD
      (1024)) in variable-length columns to avoid high memory usage while
      collecting the sample. Column values exceeding WIDTH_THRESHOLD are marked as
      NULL and are ignored in the collected sample tuples when computing stats on
      the relation.

      In case of expression/predicate indexes on the relation, the wide columns are
      treated as NULL and are not filtered out. It is rare to have such indexes on
      very wide columns, so the effect on stats (nullfrac etc.) should be minimal.
      Signed-off-by: Omer Arap <oarap@pivotal.io>
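      As a hedged illustration (table and column names hypothetical), a value wider
      than WIDTH_THRESHOLD simply contributes a NULL to the sample rather than its
      full contents:

      ```sql
      -- Values wider than 1024 bytes are masked (treated as NULL) in the sample,
      -- which keeps ANALYZE's memory usage bounded; narrow values are unaffected.
      CREATE TABLE wide_demo (id int, payload text);
      INSERT INTO wide_demo VALUES (1, 'short'), (2, repeat('x', 1000000));
      SELECT id, pg_column_size(payload) > 1024 AS wider_than_threshold FROM wide_demo;
      ANALYZE wide_demo;
      ```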
      8492807f
    • B
      Use STRICT for functions using textin/textout · 038457a5
      Committed by Bhuvnesh Chaudhary
      The breakin/out functions should be marked as STRICT,
      because the underlying C functions, textin/textout,
      don't expect a NULL to be passed to them.
      Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
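      A minimal sketch of the pattern (function and library names are hypothetical):
      a C-language SQL function whose implementation relies on textin/textout should
      be declared STRICT so the backend never calls it with NULL arguments:

      ```sql
      -- Without STRICT, a NULL argument would reach the underlying textin/textout
      -- calls, which do not expect NULL; with STRICT the call just returns NULL.
      CREATE FUNCTION demo_text_identity(text) RETURNS text
          AS '$libdir/demo_lib', 'demo_text_identity'
          LANGUAGE C IMMUTABLE STRICT;
      ```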
      038457a5
    • J
      Fix TINC storage tests after introducing relfilenode counter change. · adf3ed40
      Committed by Jimmy Yih
      A lot of tests assumed OID == relfilenode. We updated the tests to not assume
      that anymore.
      adf3ed40
    • J
      Fix ICW tests after introducing relfilenode counter change. · 974e78dc
      Committed by Jimmy Yih
      These tests assumed OID == relfilenode. We updated the tests to not assume it
      anymore.
      974e78dc
    • J
      Add O_EXCL to MirroredBufferPool_DoOpen create file · f6701956
      Committed by Jimmy Yih

      This is needed to prevent relations from possibly overwriting each
      other. The O_EXCL flag is present in Postgres's mdcreate() but for some
      reason we don't have it here. This adds it back.
      Signed-off-by: Xin Zhang <xzhang@pivotal.io>
      f6701956
    • J
      Decouple OID and relfilenode allocations with new relfilenode counter · 1fd11387
      Committed by Jimmy Yih

      The master allocates an OID and provides it to segments during
      dispatch. Each segment then checks whether it can use this OID as the
      relation's relfilenode. If a segment cannot use the preassigned OID as the
      relfilenode, it generates a new relfilenode value via the nextOid counter.
      This can result in a race condition between the generation of the new OID
      and the segment file being created on disk after being added to persistent
      tables. To combat this race condition, we have a small OID cache... but we
      have found in testing that it was not enough to prevent the issue.
      
      To fully solve the issue, we decouple OID and relfilenode on both QD and QE
      segments by introducing a nextRelfilenode counter which is similar to
      the nextOid counter. The QD segment will generate the OIDs and its own
      relfilenodes. The QE segments only use the preassigned OIDs from the QD
      dispatch and generate a relfilenode value from their own nextRelfilenode
      counter.
      
      Sequence generation is currently always done on the QD sequence server, which
      assumes the OID is always the same as the relfilenode when handling sequence
      client requests from QE segments. It is hard to change this assumption, so we
      have a special OID/relfilenode sync for sequence relations for
      GP_ROLE_DISPATCH and GP_ROLE_UTILITY.
      
      Reference gpdb-dev thread:
      https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/lv6Sb4I6iSI
      Signed-off-by: Xin Zhang <xzhang@pivotal.io>
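      A quick way to see the decoupling (table name hypothetical): after this change
      a relation's OID and relfilenode come from separate counters and need not
      match:

      ```sql
      CREATE TABLE decouple_demo (a int) DISTRIBUTED BY (a);
      -- On the QD; each QE segment likewise assigns relfilenodes from its own counter.
      SELECT oid, relfilenode, oid = relfilenode AS same_value
      FROM pg_class
      WHERE relname = 'decouple_demo';
      ```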
      1fd11387
  2. 04 April 2017, 8 commits
    • D
      Fix various typos in comments and docs · 1878fa73
      Committed by Daniel Gustafsson
      [ci skip]
      1878fa73
    • A
      Avoid could not open file pg_subtrans/xxx situation during recovery. · 6e715b6b
      Committed by Ashwin Agrawal

      Initialize TransactionXmin to avoid situations where scanning pg_authid or
      other tables, mostly in BuildFlatFiles() via SnapshotNow, may try to chase
      down pg_subtrans for an older "sub-committed" transaction whose pg_subtrans
      file may not, and is not supposed to, exist. Setting TransactionXmin avoids
      calling SubTransGetParent() in TransactionIdDidCommit() for older XIDs. Also,
      along the way, initialize RecentGlobalXmin, as the heap access method needs
      it set.
      
      Repro for record of one such case:
      ```
      CREATE ROLE foo;
      
      BEGIN;
      SAVEPOINT sp;
      DROP ROLE foo;
      RELEASE SAVEPOINT sp; -- this is the key step: it marks the subtransaction as sub-committed in clog.

      kill the postmaster, or run gpstop -air

      < run N transactions, at least enough to cross the single pg_subtrans file limit,
        roughly CLOG_XACTS_PER_BYTE * BLCKSZ * SLRU_PAGES_PER_SEGMENT >

      restart -- recovery errors out with a missing pg_subtrans file
      ```
      6e715b6b
    • D
      Fix gpfdist Makefile rules · f785aed1
      Committed by Daniel Gustafsson

      The extension for executable binaries is defined in the make variable X;
      replace the old (and now defunct) references to EXE_EXT. Also remove a
      commented-out, dead gpfdist rule in gpMgmt left over from before the move
      to core.
      f785aed1
    • D
      Revert "Remap transient typmods on receivers instead of on senders." · b1140e54
      Committed by Dhanashree Kashid and Jesse Zhang
      This reverts commit ab4398dd.
      [#142986717]
      b1140e54
    • D
      Explicitly hash-distribute in CTAS [#142986717] · 828b99c4
      Committed by Dhanashree Kashid and Jesse Zhang
      Similar to ea818f0e, we remove the
      sensitivity to segment count in test `dml_oids_delete`. Without this,
      this test was passing for the wrong reason:
      
      1. The table `dml_heap_r` was set up with 3 tuples, whose values in the
      distribution column `a` are 1, 2, and NULL respectively. On a 2-segment
      system, the 1-tuple and 2-tuple land on distinct segments, and because of
      a quirk of our local OID counter synchronization, they get the same OIDs.

      2. The table `tempoid` will be distributed randomly under ORCA, with
      tuples copied from `dml_heap_r`.

      3. The intent of the final assertion is to check that the OIDs are not
      changed by the DELETE. Also hidden in it is the assumption that the tuples
      stay on the same segments as in the source table.

      4. However, the compounding effect of that "same OID" with a randomly
      distributed `tempoid` leads to a passing test when we have two
      segments!
      
      This commit fixes that. So this test will pass for the right reason, and
      also on any segment count.
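      A hedged sketch of the kind of change (not the literal test diff): pin the
      distribution of the CTAS result explicitly, so the copied rows land on the
      same segments as the source rows regardless of the segment count:

      ```sql
      -- Explicit DISTRIBUTED BY instead of letting ORCA pick DISTRIBUTED RANDOMLY,
      -- so the OID comparison in the test is not sensitive to tuple placement.
      CREATE TABLE tempoid AS
          SELECT oid AS saved_oid, a FROM dml_heap_r
          DISTRIBUTED BY (a);
      ```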
      828b99c4
    • H
      Remove Orca translator dead code (#2138) · 0f7caefa
      Committed by Haisheng Yuan

      While trying to understand how Orca generates a plan for CTEs using shared
      input scans, I found that the shared input scan is generated during the CTE
      producer & consumer DXL node to PlannedStmt translation stage, instead of the
      Expr to DXL stage inside Orca. It turns out CDXLPhysicalSharedScan is not used
      anywhere, so remove all the related dead code.
      0f7caefa
    • H
      Fix duplicate typedefs. · 615b4c69
      Committed by Heikki Linnakangas
      It's an error in standard C - at least in older standards - to typedef
      the same type more than once, even if the definition is the same. Newer
      versions of gcc don't complain about it, but you can see the warnings
      with -pedantic (among a ton of other warnings, search for "redefinition").
      
      To fix, remove the duplicate typedefs. The ones in src/backend/gpopt and
      src/include/gpopt were actually OK, because a duplicate typedef is OK in
      C++, and those files are compiled with a C++ compiler. But many of the
      typedefs in those files were not used for anything, so I nevertheless
      removed the duplicate ones that caught my eye there too.
      
      In gpmon.h, we were redefining apr_*_t types when postgres.h had been
      included. But as far as I can tell, that was always the case: all the files
      that included gpmon.h also included postgres.h, directly or indirectly,
      before it. Search & replace the references to apr_*_t types in that file
      with the Postgres equivalents, to make it clearer what they actually are.
      615b4c69
    • H
      Remove CdbCellBuf and CdbPtrBuf facilities. · f78d0246
      Committed by Heikki Linnakangas
      CdbCellBuf was only used in hash aggregates, and it only used a fraction
      of the functionality. In essence, it was using it as a very simple memory
      allocator, where each allocation was fixed size, and the only way to free
      was to reset the whole cellbuf. But the same code was using a different,
      but similar, mpool_* mechanism for allocating other things stored in
      the hash buckets. We might as well use mpool_alloc for the HashAggEntry
      struct as well, and get rid of all the cellbuf code.
      
      CdbPtrBuf was completely unused.
      f78d0246
  3. 03 April 2017, 6 commits
    • D
      Remove outdated comment and clarify code · 3c04ddcf
      Committed by Daniel Gustafsson

      The comment about backporting to a 10-year-old version has passed
      its due date, so remove it. Also, actually use the referenced variable
      to make the code less confusing to readers (the compiler will be
      smart enough about stack allocations anyway). Also reflow and
      generally tidy up the comment a little.
      3c04ddcf
    • D
      Use appendStringInfoString() where possible · 54c38de6
      Committed by Daniel Gustafsson
      appendStringInfo() is a variadic function treating the passed
      string as a format specifier. This is wasteful processing when
      just adding a constant string which can be done faster with a
      call to appendStringInfoString() where no format processing is
      performed.
      
      This leaves lots of appendStringInfo() calls in the processing
      but they are from upstream and will be addressed when we merge
      with future versions of postgres. The calls in this patch are
      the GPDB specific ones.
      54c38de6
    • D
      Remove unused BugBuster leftovers · 7dbaace6
      Committed by Daniel Gustafsson
      With the last remaining testsuites moved over to ICW, there is no
      longer anything left running in BugBuster. Remove the remaining
      files and BugBuster makefile integration in one big swing of the
      git rm axe. The only thing left in use was a data file which was
      referenced from ICW, move this to regress/data instead.
      7dbaace6
    • D
      Fix typo and spelling in memory_quota util · c76b1c4b
      Committed by Daniel Gustafsson
      c76b1c4b
    • D
      Move BugBuster memory_quota test to ICW · 6cc722e0
      Committed by Daniel Gustafsson
      This moves the memory_quota tests more or less unchanged to ICW.
      Changes include removing ignore sections and minor formatting as
      well as a rename to bb_memory_quota.
      6cc722e0
    • D
      Migrate BugBuster mpph tests to ICW · 42b33d42
      Committed by Daniel Gustafsson
      This combines the various mpph tests in BugBuster into a single
      new ICW suite, bb_mpph. Most of the existing queries were moved
      over with a few pruned that were too uninteresting, or covered
      elsewhere.
      
      The BugBuster tests combined are: load_mpph, mpph_query,
      mpph_aopart, hashagg and opperf.
      42b33d42
  4. 01 April 2017, 18 commits
    • P
      Remap transient typmods on receivers instead of on senders. · ab4398dd
      Committed by Pengzhou Tang
      QD used to send a transient types table to QEs, then QE would remap the
      tuples with this table before sending them to QD. However in complex
      queries QD can't discover all the transient types so tuples can't be
      correctly remapped on QEs. One example is like below:
      
          SELECT q FROM (SELECT MAX(f1) FROM int4_tbl
                         GROUP BY f1 ORDER BY f1) q;
          ERROR:  record type has not been registered
      
      To fix this issue we changed the underlying logic: instead of sending
      the possibly incomplete transient types table from QD to QEs, we now
      send the tables from motion senders to motion receivers and do the remap
      on receivers. Receivers maintain a remap table for each motion so tuples
      from different senders can be remapped accordingly. In this way, queries
      containing multiple slices can also handle transient record types correctly
      between two QEs.
      
      The remap logic is derived from executor/tqueue.c in upstream
      Postgres. There is support for composite/record types and arrays, as well
      as range types; however, as range types are not yet supported in GPDB,
      that logic is put under a conditional compilation macro. In theory it
      should be automatically enabled once range types are supported in GPDB.
      
      One side effect of this approach is a performance penalty on receivers,
      as the remap requires recursive checks on each tuple containing record
      types. However, an optimization keeps this side effect minimal for
      non-record types.
      
      The old logic of building the transient types table on the QD and sending
      it to the QEs is retired.
      Signed-off-by: Gang Xiong <gxiong@pivotal.io>
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      ab4398dd
    • G
      Remove the record comparison functions in d9148a54. · 45e7669e
      Committed by Gang Xiong

      Commit d9148a54 enabled the record array as well as comparison of record
      types. However, the OIDs that the comparison functions/operators use in
      upstream Postgres are already taken by other functions in GPDB, and many
      test cases assume that comparison of record types should fail. As we don't
      actually need this comparison feature in GPDB at the moment, we simply
      remove these functions for now.
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      45e7669e
    • T
      Implement comparison of generic records (composite types), and invent a... · 02335757
      Committed by Tom Lane
      Implement comparison of generic records (composite types), and invent a pseudo-type record[] to represent arrays of possibly-anonymous composite types. Since composite datums carry their own type identification, no extra knowledge is needed at the array level.
      
      The main reason for doing this right now is that it is necessary to support
      the general case of detection of cycles in recursive queries: if you need to
      compare more than one column to detect a cycle, you need to compare a ROW()
      to an array built from ROW()s, at least if you want to do it as the spec
      suggests.  Add some documentation and regression tests concerning the cycle
      detection issue.
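      For illustration, the feature lets anonymous composite values be collected
      into a record[] array (note that the comparison operators themselves were
      removed again in GPDB by the commit above):

      ```sql
      -- Each composite datum carries its own type identification, so an array of
      -- anonymous ROW() values can be formed with no extra knowledge at the array level.
      SELECT ARRAY[ROW(1, 'a'::text), ROW(2, 'b'::text)] AS recs;
      -- Upstream also allows comparing records directly, e.g. for cycle detection in
      -- recursive queries (removed again in GPDB, so expected to fail there):
      -- SELECT ROW(1, 'a'::text) = ROW(1, 'a'::text);
      ```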
      02335757
    • H
      Use PartitionSelectors for partition elimination, even without ORCA. · e378d84b
      Committed by Heikki Linnakangas
      The old mechanism was to scan the complete plan, searching for a pattern
      with a Join, where the outer side included an Append node. The inner
      side was duplicated into an InitPlan, with the pg_partition_oid aggregate
      to collect the Oids of all the partitions that can match. That was
      inefficient and broken: if the duplicated plan was volatile, you might
      choose wrong partitions. And scanning the inner side twice can obviously
      be slow, if there are a lot of tuples.
      
      Rewrite the way such plans are generated. Instead of using an InitPlan,
      inject a PartitionSelector node into the inner side of the join.
      
      Fixes github issues #2100 and #2116.
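      A hedged sketch of the plan shape this affects (object names hypothetical): in
      a join whose outer side scans a partitioned table, a PartitionSelector is now
      injected on the inner side instead of duplicating that side into an InitPlan:

      ```sql
      CREATE TABLE facts (id int, part_key int)
          DISTRIBUTED BY (id)
          PARTITION BY RANGE (part_key) (START (1) END (11) EVERY (1));
      CREATE TABLE dims (part_key int) DISTRIBUTED BY (part_key);
      -- The plan should show a Partition Selector feeding the Append over facts,
      -- so only matching partitions are scanned.
      EXPLAIN SELECT * FROM facts f JOIN dims d ON f.part_key = d.part_key;
      ```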
      e378d84b
    • H
      Fix external table CR as end of line issue · 37a0b769
      Haozhou Wang 提交于
      This commit fix issue #1621. Current external table implementation
      only recognize LF as line end. If table is created with CR as line
      end then no data can be selected because whole data is never
      splitted into lines.
      Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
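      A hedged example of the scenario (host, port, and file are placeholders): an
      external table whose data files end lines with CR can declare that explicitly
      and now reads correctly:

      ```sql
      CREATE EXTERNAL TABLE ext_cr_demo (a int, b text)
          LOCATION ('gpfdist://etlhost:8081/data_cr.txt')
          FORMAT 'TEXT' (DELIMITER '|' NEWLINE 'CR');
      SELECT count(*) FROM ext_cr_demo;  -- previously returned no rows for CR-terminated files
      ```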
      37a0b769
    • F
      Adding last seen idle time in SessionState (#2137) · 1552c836
      Committed by foyzur
      * Adding last activity time in the SessionState.
      
      * Adding last activity time in the session_state_memory_entries_f and updating view session_level_memory_consumption.
      
      * Adding unit tests.
      
      * Adding SessionState initialization test.
      
      * Changing last_idle_time to idle_start as per PR suggestion.
      1552c836
    • H
      Rewrite kerberos tests (#2087) · 2415aff4
      Committed by Heikki Linnakangas
      * Rewrite Kerberos test suite
      
      * Remove obsolete Kerberos test stuff from pipeline and TINC
      
      We now have a rewritten Kerberos test script in installcheck-world.
      
      * Update ICW kerberos test to run on concourse container
      
      This adds a whole new test script in src/test/regress, implemented in plain bash. It sets up a temporary KDC as part of the script, and therefore doesn't rely on a pre-existing Kerberos server like the old MU_kerberos-smoke test job did. It does require MIT Kerberos server-side utilities to be installed, but no server needs to be running, and no superuser privileges are required.
      
      This supersedes the MU_kerberos-smoke behave tests. The new rewritten bash script tests the same things:
        1. You cannot connect to the server before running 'kinit' (to verify that the server doesn't just let anyone in, which could happen if the pg_hba.conf is misconfigured for the test, for example)
        2. You can connect, after running 'kinit'
        3. You can no longer connect, if the user account is expired
      
      The new test script is hooked up to the top-level installcheck-world target.
      
      There were also some Kerberos-related tests in TINC. Remove all that, too. They didn't seem interesting in the first place; it looks like they were just copies of a few random other tests, intended to be run as a smoke test after a connection had been authenticated with Kerberos, but there was nothing in there to actually set up the Kerberos environment in TINC.
      
      Other minor patches added:
      
      * Remove absolute path when calling kerberos utilities
      -- assume they are on path, so that they can be accessed from various installs
      -- add clarification message if sample kerberos utility is not found with 'which'
      
      * Specify empty load library for kerberos tools
      
      * Move kerberos test to its own script file
      -- this allows a failure to be recorded without exiting Make, and
      therefore the server can always be turned off
      
      * Add trap for stopping kerberos server in all cases
      * Use localhost for kerberos connection
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      Signed-off-by: Chumki Roy <croy@pivotal.io>
      Signed-off-by: Larry Hamel <lhamel@pivotal.io>
      2415aff4
    • H
      Fix error message, if EXCHANGE PARTITION with multiple constraints fails. · 30400ddc
      Committed by Heikki Linnakangas

      The loop to print each constraint's name was broken: it printed the name of
      the first constraint multiple times. Add a test case, as a matter of
      principle.

      In passing, change the set of tests around this error to all use the same
      partitioned table, rather than dropping and recreating it for each command,
      and reduce the number of partitions from 10 to 5. That shaves some
      milliseconds off the time to run the test.
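      A hedged sketch of the command involved (table names hypothetical): when the
      table being exchanged in has several CHECK constraints that don't match the
      partition, the error now names each offending constraint rather than repeating
      the first one:

      ```sql
      -- Exchange a staging table into a partition; each mismatching constraint on
      -- sales_stage is reported by name in the error message.
      ALTER TABLE sales EXCHANGE PARTITION FOR (RANK(1)) WITH TABLE sales_stage;
      ```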
      30400ddc
    • J
      Set max_stack_depth explicitly in subtransaction_limit ICG test · a5e26310
      Committed by Jingyi Mei

      This comes from the 4.3_STABLE repo.
      Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
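      A minimal illustration (the value is hypothetical) of pinning the GUC at the
      top of the test so the result does not depend on the environment's default:

      ```sql
      SET max_stack_depth = '2MB';
      ```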
      a5e26310
    • F
      Rule based partition selection for list (sub)partitions (#2076) · 5cecfcd1
      Committed by foyzur
      GPDB supports range and list partitions. Range partitions are represented as a set of rules. Each rule defines the boundaries of a part. E.g., a rule might say that a part contains all values between (0, 5], where the left bound 0 is exclusive but the right bound 5 is inclusive. List partitions are defined by the list of values that the part will contain.
      
      ORCA uses the above rule definition to generate expressions that determine which partitions need to be scanned. These expressions are of the following types:
      
      1. Equality predicate as in PartitionSelectorState->levelEqExpressions: If we have a simple equality on partitioning key (e.g., part_key = 1).
      
      2. General predicate as in PartitionSelectorState->levelExpressions: If we need more complex composition, including non-equality such as part_key > 1.
      
      Note:  We also have residual predicate, which the optimizer currently doesn't use. We are planning to remove this dead code soon.
      
      Prior to this PR, ORCA treated both range and list partitions as range partitions. This meant that each list part would be converted to a set of list values, and each of these values would become a single-point range partition.
      
      E.g., consider the DDL:
      
      ```sql
      CREATE TABLE DATE_PARTS (id int, year int, month int, day int, region text)
      DISTRIBUTED BY (id)
      PARTITION BY RANGE (year)
          SUBPARTITION BY LIST (month)
             SUBPARTITION TEMPLATE (
              SUBPARTITION Q1 VALUES (1, 2, 3), 
              SUBPARTITION Q2 VALUES (4 ,5 ,6),
              SUBPARTITION Q3 VALUES (7, 8, 9),
              SUBPARTITION Q4 VALUES (10, 11, 12),
              DEFAULT SUBPARTITION other_months )
      ( START (2002) END (2012) EVERY (1), 
        DEFAULT PARTITION outlying_years );
      ```
      
      Here we partition the months as list subpartitions using quarters, so each list part contains three months. Now consider a query on this table:
      
      ```sql
      select * from DATE_PARTS where month between 1 and 3;
      ```
      
      Prior to this, the ORCA-generated plan would consider each value of Q1 as a separate range part with just a one-point range, i.e., we would have 3 virtual parts to evaluate for just Q1: [1], [2], [3]. This approach is inefficient. The problem is further exacerbated when we have multi-level partitioning. Consider the list part of the above example: we have only 4 rules for 4 different quarters, but we would have 12 different virtual rules (aka constraints). For each such constraint, we would then evaluate the entire subtree of partitions.
      
      After this PR, we no longer decompose rules into constraints for list parts and then derive single-point virtual range partitions based on those constraints. Rather, the new ORCA changes use ScalarArrayOp to express selectivity on a list of values. So, the expression for the above SQL will look like 1 <= ANY {month_part} AND 3 >= ANY {month_part}, where month_part is substituted at runtime with a different list of values for each of the quarterly partitions. We end up evaluating that expression 4 times with the following lists of values:
      
      Q1: 1 <= ANY {1,2,3} AND 3 >= ANY {1,2,3}
      Q2: 1 <= ANY {4,5,6} AND 3 >= ANY {4,5,6}
      ...
      
      Compare this to the previous approach, where we would end up evaluating 12 different expressions, each for a single point value:
      
      First constraint of Q1: 1 <= 1 AND 3 >= 1
      Second constraint of Q1: 1 <= 2 AND 3 >= 2
      Third constraint of Q1: 1 <= 3 AND 3 >= 3
      First constraint of Q2: 1 <= 4 AND 3 >= 4
      ...
      
      The ScalarArrayOp depends on a new type of expression PartListRuleExpr that can convert a list rule to an array of values. ORCA specific changes can be found here: https://github.com/greenplum-db/gporca/pull/149
      5cecfcd1
    • A
      Fix XidlimitsTests, avoid going back after bumping the xid. · 5b2ea684
      Committed by Ashwin Agrawal

      The auto-vacuum limit would be reached first, then the warn limit, followed by
      the other limits. So there is no reason to roll back after bumping the xid to
      the auto-vacuum limit; doing so can land us in all kinds of weird issues.
      Practically, these tests need to be fully rewritten, maybe by modifying the
      GUCs and then actually generating XIDs to reach the limits instead of
      simulating it by bumping the counter, but that will be attempted in another
      commit.
      5b2ea684
    • A
      Avoid PANIC by fetching transaction ID outside critical section. · 6e8a00b3
      Committed by Ashwin Agrawal

      In some of the persistent table functions, GetTopTransactionId() was called
      inside a critical section. With lazy xid allocation, if the transactionId stop
      limit has been reached at DDL time, this upgrades a simple ERROR to a PANIC.
      Hence, modify the code to call GetTopTransactionId() before entering the
      critical section, so we get just an ERROR as before instead of a PANIC.
      6e8a00b3
    • A
      Cleanup LocalDistribXactData related code. · 8c20bc94
      Committed by Ashwin Agrawal

      Commit fb86c90d "Simplify management of
      distributed transactions." cleaned up a lot of code for LocalDistribXactData
      and introduced LocalDistribXactData in PROC for debugging purposes. But it is
      only correctly maintained for QEs; the QD never populated LocalDistribXactData
      in MyProc. Instead, TMGXACT also had LocalDistribXactData, which was only set
      initially for the QD, never updated later, and confused more than it served
      its purpose. Hence, remove LocalDistribXactData from TMGXACT, as it already
      has other fields which provide the required information. Also, clean up the
      QD-related states, as even in PROC only QEs use LocalDistribXactData.
      8c20bc94
    • A
      Fully enable lazy XID allocation in GPDB. · 0932453d
      Committed by Ashwin Agrawal

      As part of the 8.3 merge, upstream commit 295e6398
      "Implement lazy XID allocation" was merged. But transactionIds were still
      allocated in StartTransaction, as the code changes required to make it work
      for GPDB with distributed transactions were pending, so the feature remained
      disabled. Some progress was made by commit
      a54d84a3 "Avoid assigning an XID to
      DTX_CONTEXT_QE_AUTO_COMMIT_IMPLICIT queries." Now this commit addresses the
      pending work needed for handling deferred xid allocation correctly with
      distributed transactions and fully enables the feature.
      
      Important highlights of changes:
      
      1] Modify the xlog write and xlog replay record for DISTRIBUTED_COMMIT. Even if
      a transaction is read-only on the master and no xid is allocated to it, it can
      still be a distributed transaction and hence needs to persist itself in such a
      case. So, write the xlog record even if no local xid is assigned but the
      transaction is prepared. Similarly, during xlog replay of the
      XLOG_XACT_DISTRIBUTED_COMMIT type, perform distributed commit recovery while
      ignoring the local commit. This also means that in this case we don't commit
      to the distributed log, as it is only used to perform the reverse map of local
      xid to distributed xid.
      
      2] Remove localXID from gxact, as it no longer needs to be maintained and used.
      
      3] Refactor the code for QE Reader StartTransaction. There used to be a
      wait-loop with sleeps, checking whether SharedLocalSnapshotSlot has the same
      distributed XID as the reader, to assign the reader the writer's xid for
      SET-type commands until the reader actually performs GetSnapshotData(). Since
      now a) the writer will not have a valid xid until it performs some write, so
      the writer's transactionId always turns out to be InvalidTransaction here, and
      b) read operations like SET don't need an xid any more, the need for this wait
      is gone.
      
      4] Throw an error if using a distributed transaction without a distributed
      xid. Earlier, AssignTransactionId() was called for this case in
      StartTransaction(), but such a scenario doesn't exist, hence convert it to an
      ERROR.
      
      5] The QD, during snapshot creation in createDtxSnapshot(), was earlier able
      to assign localXid in inProgressEntryArray corresponding to distribXid, as
      localXid was known by that time. That is no longer the case, and localXid will
      mostly get assigned after the snapshot is taken. Hence now, even for the QD,
      similar to the QEs, localXid is not populated at snapshot creation time but is
      found later in DistributedSnapshotWithLocalMapping_CommittedTest(). There is a
      chance to optimize and try to match the earlier behavior somewhat by
      populating gxact in AssignTransactionId() once localXid is known, but
      currently it does not seem worth it, as the QEs have to perform the lookups
      anyway.
      0932453d
    • A
      bc967e0b
    • A
      Make storage test robust by checking if DB up. · 2cd7fd17
      Committed by Ashwin Agrawal
      2cd7fd17