1. 05 Apr 2017 (6 commits)
  2. 04 Apr 2017 (12 commits)
    • D
      Fix various typos in comments and docs · 1878fa73
      Committed by Daniel Gustafsson
      [ci skip]
      1878fa73
    • A
      Avoid "could not open file pg_subtrans/xxx" situation during recovery. · 6e715b6b
      Committed by Ashwin Agrawal
      Initialize TransactionXmin to avoid situations where scanning pg_authid
      or other tables, mostly in BuildFlatFiles() via SnapshotNow, may try to
      chase down pg_subtrans for an older "sub-committed" transaction, whose
      corresponding file may not, and is not supposed to, exist. Setting
      TransactionXmin avoids calling SubTransGetParent() in
      TransactionIdDidCommit() for older XIDs. Also, along the way, initialize
      RecentGlobalXmin, as the heap access method needs it set.
      
      Repro for record of one such case:
      ```
      CREATE ROLE foo;
      
      BEGIN;
      SAVEPOINT sp;
      DROP ROLE foo;
      RELEASE SAVEPOINT sp; -- the key step: marks the subtransaction as sub-committed in clog.
      
      kill or gpstop -air
      < N transactions, to cross at least the pg_subtrans single-file limit,
      roughly CLOG_XACTS_PER_BYTE * BLCKSZ * SLRU_PAGES_PER_SEGMENT >

      restart -- recovery errors out with missing pg_subtrans
      ```
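      For a sense of scale, the limit quoted above works out to roughly a
      million transactions per segment file, assuming the usual PostgreSQL
      defaults for these constants (values assumed here, not read from a
      GPDB build):

      ```python
      # Transactions covered by one SLRU segment file, per the formula above.
      # The constant values are the usual PostgreSQL defaults (assumptions).
      CLOG_XACTS_PER_BYTE = 4          # 2 status bits per transaction
      BLCKSZ = 8192                    # default block size in bytes
      SLRU_PAGES_PER_SEGMENT = 32

      xacts_per_segment = CLOG_XACTS_PER_BYTE * BLCKSZ * SLRU_PAGES_PER_SEGMENT
      print(xacts_per_segment)  # 1048576
      ```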
      6e715b6b
    • D
      Fix gpfdist Makefile rules · f785aed1
      Committed by Daniel Gustafsson
      The extension for executable binaries is defined in X; replace the
      old (and now defunct) references to EXE_EXT. Also remove a commented-out
      dead gpfdist rule in gpMgmt from before the move to core.
      f785aed1
    • D
      Fix typos in Python code · c3cbf89d
      Committed by Daniel Gustafsson
      [ci skip]
      c3cbf89d
    • D
      Revert "Remap transient typmods on receivers instead of on senders." · b1140e54
      Committed by Dhanashree Kashid and Jesse Zhang
      This reverts commit ab4398dd.
      [#142986717]
      b1140e54
    • D
      Explicitly hash-distribute in CTAS [#142986717] · 828b99c4
      Committed by Dhanashree Kashid and Jesse Zhang
      Similar to ea818f0e, we remove the
      sensitivity to segment count in the test `dml_oids_delete`. Without
      this, the test was passing for the wrong reason:
      
      0. The table `dml_heap_r` was set up with 3 tuples, whose values in the
      distribution column `a` are 1, 2, NULL respectively. On a 2-segment
      system, the 1-tuple and 2-tuple are on distinct segments, and because of
      a quirk of our local OID counter synchronization, they will get the same
      oids.
      
      0. The table `tempoid` will be distributed randomly under ORCA, with
      tuples copied from `dml_heap_r`
      
      0. The intent of the final assertion is to check that the OIDs are not
      changed by the DELETE. Also hidden in it is the assumption that the
      tuples stay on the same segments as in the source table.
      
      0. However, the compounding effect of that "same oid" with a randomly
      distributed `tempoid` will lead to a passing test when we have two
      segments!
      
      This commit fixes that, so the test will pass for the right reason, and
      on any segment count.
      828b99c4
    • T
      Upgrades to behave-1.2.4 · 7fdf6ef0
      Committed by Todd Sedano
      7fdf6ef0
    • T
      Fixes imports for behave 1.2.4 · c470a491
      Committed by Todd Sedano
      Signed-off-by: Chris Hajas <chajas@pivotal.io>
      c470a491
    • M
      GPDB DOCS Port 4.3.x updates to 5.0 (#2133) · 78a14481
      Committed by mkiyama
      [ci skip]
      78a14481
    • H
      Remove Orca translator dead code (#2138) · 0f7caefa
      Committed by Haisheng Yuan
      When I was trying to understand how Orca generates a plan for a CTE using
      shared input scan, I found that the shared input scan is generated during
      the CTE producer & consumer DXL node to PlannedStmt translation stage,
      not during the Expr to DXL stage inside Orca. It turns out
      CDXLPhysicalSharedScan is not used anywhere, so remove all the related
      dead code.
      0f7caefa
    • H
      Fix duplicate typedefs. · 615b4c69
      Committed by Heikki Linnakangas
      It's an error in standard C, at least in older standards, to typedef
      the same type more than once, even if the definition is the same. Newer
      versions of gcc don't complain about it, but you can see the warnings
      with -pedantic (among a ton of other warnings; search for "redefinition").
      
      To fix, remove the duplicate typedefs. The ones in src/backend/gpopt and
      src/include/gpopt were actually OK, because a duplicate typedef is OK in
      C++, and those files are compiled with a C++ compiler. But many of the
      typedefs in those files were not used for anything, so I nevertheless
      removed the duplicates there too, where they caught my eye.
      
      In gpmon.h, we were redefining apr_*_t types when postgres.h had been
      included. But as far as I can tell, that was always the case: all the
      files that included gpmon.h included postgres.h directly or indirectly
      before it. Search & replace the references to apr_*_t types in that file
      with the postgres equivalents, to make it clearer what they actually are.
      615b4c69
    • H
      Remove CdbCellBuf and CdbPtrBuf facilities. · f78d0246
      Committed by Heikki Linnakangas
      CdbCellBuf was only used in hash aggregates, which used only a fraction
      of its functionality. In essence, it served as a very simple memory
      allocator, where each allocation was fixed size and the only way to free
      was to reset the whole cellbuf. But the same code was using a different,
      but similar, mpool_* mechanism for allocating other things stored in
      the hash buckets. We might as well use mpool_alloc for the HashAggEntry
      struct as well, and get rid of all the cellbuf code.
      
      CdbPtrBuf was completely unused.
      f78d0246
  3. 03 Apr 2017 (7 commits)
    • D
      We don't ship jdbc, or odbc (#2057) · 0193c5f7
      Committed by Dave Cramer
      * We don't ship jdbc or odbc

      For building the installers, this repo is not gone, just unlinked
      from gpdb5

      * remove references to odbc and jdbc

      * remove more references to jdbc and odbc, as well as client documentation

      * correctly remove Windows-specific code
      0193c5f7
    • D
      Remove outdated comment and clarify code · 3c04ddcf
      Committed by Daniel Gustafsson
      The comment about backporting to a 10-year-old version has passed its
      due date, so remove it. Also actually use the referenced variable
      to make the code less confusing to readers (the compiler will be
      smart enough about stack allocations anyway). Also reflow and
      generally tidy up the comment a little.
      3c04ddcf
    • D
      Use appendStringInfoString() where possible · 54c38de6
      Committed by Daniel Gustafsson
      appendStringInfo() is a variadic function treating the passed
      string as a format specifier. This is wasteful when just
      appending a constant string, which can be done faster with a
      call to appendStringInfoString(), where no format processing
      is performed.

      This leaves lots of appendStringInfo() calls in the tree, but
      they are from upstream and will be addressed when we merge
      with future versions of postgres. The calls in this patch are
      the GPDB-specific ones.
      54c38de6
    • D
      Remove unused BugBuster leftovers · 7dbaace6
      Committed by Daniel Gustafsson
      With the last remaining test suites moved over to ICW, there is no
      longer anything left running in BugBuster. Remove the remaining
      files and the BugBuster makefile integration in one big swing of the
      git rm axe. The only thing left in use was a data file referenced
      from ICW; move it to regress/data instead.
      7dbaace6
    • D
      Fix typo and spelling in memory_quota util · c76b1c4b
      Committed by Daniel Gustafsson
      c76b1c4b
    • D
      Move BugBuster memory_quota test to ICW · 6cc722e0
      Committed by Daniel Gustafsson
      This moves the memory_quota tests more or less unchanged to ICW.
      Changes include removing ignore sections and minor formatting as
      well as a rename to bb_memory_quota.
      6cc722e0
    • D
      Migrate BugBuster mpph tests to ICW · 42b33d42
      Committed by Daniel Gustafsson
      This combines the various mpph tests in BugBuster into a single
      new ICW suite, bb_mpph. Most of the existing queries were moved
      over with a few pruned that were too uninteresting, or covered
      elsewhere.
      
      The BugBuster tests combined are: load_mpph, mpph_query,
      mpph_aopart, hashagg and opperf.
      42b33d42
  4. 01 Apr 2017 (15 commits)
    • P
      Remap transient typmods on receivers instead of on senders. · ab4398dd
      Committed by Pengzhou Tang
      The QD used to send a transient-types table to the QEs; a QE would then
      remap tuples with this table before sending them to the QD. However, in
      complex queries the QD can't discover all the transient types, so tuples
      can't be correctly remapped on the QEs. One example:
      
          SELECT q FROM (SELECT MAX(f1) FROM int4_tbl
                         GROUP BY f1 ORDER BY f1) q;
          ERROR:  record type has not been registered
      
      To fix this issue we changed the underlying logic: instead of sending
      the possibly incomplete transient-types table from the QD to the QEs, we
      now send the tables from motion senders to motion receivers and do the
      remap on the receivers. Receivers maintain a remap table for each motion,
      so tuples from different senders can be remapped accordingly. This way,
      queries containing multiple slices can also handle transient record
      types correctly between two QEs.
      
      The remap logic is derived from executor/tqueue.c in upstream postgres.
      There is support for composite/record types and arrays, as well as range
      types; however, as range types are not yet supported in GPDB, that logic
      is put under a conditional compilation macro. In theory it will be
      automatically enabled once range types are supported in GPDB.
      
      One side effect of this approach is a performance penalty on receivers,
      as the remap requires recursive checks on each tuple of a record type.
      However, an optimization is made to keep this side effect minimal for
      non-record types.
      
      The old logic of building the transient-types table on the QD and
      sending it to the QEs is retired.
      Signed-off-by: Gang Xiong <gxiong@pivotal.io>
      Signed-off-by: Ning Yu <nyu@pivotal.io>
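      The receiver-side remap described above can be sketched roughly as
      follows (a toy Python model; names like ReceiverRemap and resolve are
      illustrative, not GPDB's actual API):

      ```python
      # Toy model (assumed names, not GPDB code): a motion receiver keeps one
      # remap table per sender, translating the sender's transient record
      # typmods into locally registered ones on first sight.
      class ReceiverRemap:
          def __init__(self):
              self.local_types = []   # locally registered tuple descriptors
              self.remap = {}         # (sender_id, remote_typmod) -> local typmod

          def register_local(self, tupdesc):
              self.local_types.append(tupdesc)
              return len(self.local_types) - 1   # new local transient typmod

          def resolve(self, sender_id, remote_typmod, tupdesc):
              key = (sender_id, remote_typmod)
              if key not in self.remap:
                  # First tuple carrying this record type from this sender:
                  # register the descriptor locally and remember the mapping.
                  self.remap[key] = self.register_local(tupdesc)
              return self.remap[key]

      rx = ReceiverRemap()
      # Two senders may reuse the same remote typmod for different types...
      a = rx.resolve(sender_id=1, remote_typmod=0, tupdesc=("f1", "int4"))
      b = rx.resolve(sender_id=2, remote_typmod=0, tupdesc=("q", "record"))
      # ...and the receiver keeps them distinct.
      print(a, b)  # 0 1
      ```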
      ab4398dd
    • G
      Remove the record comparison functions in d9148a54. · 45e7669e
      Committed by Gang Xiong
      Commit d9148a54 enabled record arrays as well as comparison of record
      types. The OIDs of the comparison functions/operators used in upstream
      postgres are already taken by others in GPDB, and many test cases assume
      that comparison of record types fails. As we don't actually need this
      comparison feature in GPDB at the moment, we simply remove these
      functions for now.
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      45e7669e
    • T
      Implement comparison of generic records (composite types), and invent a... · 02335757
      Committed by Tom Lane
      Implement comparison of generic records (composite types), and invent a pseudo-type record[] to represent arrays of possibly-anonymous composite types. Since composite datums carry their own type identification, no extra knowledge is needed at the array level.
      
      The main reason for doing this right now is that it is necessary to support
      the general case of detection of cycles in recursive queries: if you need to
      compare more than one column to detect a cycle, you need to compare a ROW()
      to an array built from ROW()s, at least if you want to do it as the spec
      suggests.  Add some documentation and regression tests concerning the cycle
      detection issue.
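      The cycle-detection scheme the spec suggests, comparing the current
      ROW() of key columns against an array built from the ROW()s already on
      the path, can be mimicked in miniature (illustrative Python, not the
      server code):

      ```python
      # Toy model of spec-style cycle detection in a recursive query: each
      # step carries an "array of rows" (the path); revisiting a row means a
      # cycle, so that branch stops instead of recursing forever.
      def reachable(graph, start):
          out = []
          stack = [(start, [(start,)])]    # (node, path of ROW()-like tuples)
          while stack:
              node, path = stack.pop()
              out.append(node)
              for nxt in graph.get(node, ()):
                  row = (nxt,)             # with more key columns, the row grows
                  if row in path:          # ROW(...) = ANY(path): cycle found
                      continue
                  stack.append((nxt, path + [row]))
          return sorted(out)

      # A 3-cycle terminates cleanly:
      print(reachable({1: [2], 2: [3], 3: [1]}, 1))  # [1, 2, 3]
      ```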
      02335757
    • H
      Use PartitionSelectors for partition elimination, even without ORCA. · e378d84b
      Committed by Heikki Linnakangas
      The old mechanism was to scan the complete plan, searching for a pattern
      with a Join where the outer side included an Append node. The inner
      side was duplicated into an InitPlan, with the pg_partition_oid aggregate
      collecting the OIDs of all the partitions that can match. That was
      inefficient and broken: if the duplicated plan was volatile, you might
      choose the wrong partitions. And scanning the inner side twice can
      obviously be slow, if there are a lot of tuples.
      
      Rewrite the way such plans are generated. Instead of using an InitPlan,
      inject a PartitionSelector node into the inner side of the join.
      
      Fixes github issues #2100 and #2116.
      e378d84b
    • H
      Fix external table CR as end of line issue · 37a0b769
      Committed by Haozhou Wang
      This commit fixes issue #1621. The current external table implementation
      only recognizes LF as the line end. If a table is created with CR as the
      line end, then no data can be selected, because the data is never
      split into lines.
      Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
      37a0b769
    • F
      Adding last seen idle time in SessionState (#2137) · 1552c836
      Committed by foyzur
      * Adding last activity time in the SessionState.
      
      * Adding last activity time in the session_state_memory_entries_f and updating view session_level_memory_consumption.
      
      * Adding unit tests.
      
      * Adding SessionState initialization test.
      
      * Changing last_idle_time to idle_start as per PR suggestion.
      1552c836
    • C
      Describe common improvements to typical dev workflow · 9060ef77
      Committed by C.J. Jameson
      9060ef77
    • H
      Rewrite kerberos tests (#2087) · 2415aff4
      Committed by Heikki Linnakangas
      * Rewrite Kerberos test suite
      
      * Remove obsolete Kerberos test stuff from pipeline and TINC
      
      We now have a rewritten Kerberos test script in installcheck-world.
      
      * Update ICW kerberos test to run on concourse container
      
      This adds a whole new test script in src/test/regress, implemented in plain bash. It sets up a temporary KDC as part of the script, and therefore doesn't rely on a pre-existing Kerberos server like the old MU_kerberos-smoke test job did. It does require MIT Kerberos server-side utilities to be installed instead, but no server needs to be running, and no superuser privileges are required.
      
      This supersedes the MU_kerberos-smoke behave tests. The new rewritten bash script tests the same things:
        1. You cannot connect to the server before running 'kinit' (to verify that the server doesn't just let anyone in, which could happen if the pg_hba.conf is misconfigured for the test, for example)
        2. You can connect, after running 'kinit'
        3. You can no longer connect, if the user account is expired
      
      The new test script is hooked up to the top-level installcheck-world target.
      
      There were also some Kerberos-related tests in TINC. Remove all that, too. They didn't seem interesting in the first place, looks like they were just copies of a few random other tests, intended to be run as a smoke test, after a connection had been authenticated with Kerberos, but there was nothing in there to actually set up the Kerberos environment in TINC.
      
      Other minor patches added:
      
      * Remove absolute path when calling kerberos utilities
      -- assume they are on path, so that they can be accessed from various installs
      -- add clarification message if sample kerberos utility is not found with 'which'
      
      * Specify empty load library for kerberos tools
      
      * Move kerberos test to its own script file
      -- this allows a failure to be recorded without exiting Make, and
      therefore the server can be turned off always
      
      * Add trap for stopping kerberos server in all cases
      * Use localhost for kerberos connection
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      Signed-off-by: Chumki Roy <croy@pivotal.io>
      Signed-off-by: Larry Hamel <lhamel@pivotal.io>
      2415aff4
    • H
      Fix error message, if EXCHANGE PARTITION with multiple constraints fails. · 30400ddc
      Committed by Heikki Linnakangas
      The loop to print each constraint's name was broken: it printed the name
      of the first constraint multiple times. Add a test case, as a matter of
      principle.

      In passing, change the set of tests around this error to all use the
      same partitioned table, rather than dropping and recreating it for each
      command, and reduce the number of partitions from 10 to 5. That shaves
      some milliseconds off the time to run the test.
      30400ddc
    • T
      Move sles iwc job downstream from compile · a53c05d8
      Committed by Tom Meyer
      Signed-off-by: Jingyi Mei <jmei@pivotal.io>
      a53c05d8
    • T
      Add installer header for sles 11 · 830448fd
      Committed by Tom Meyer
      Signed-off-by: Jingyi Mei <jmei@pivotal.io>
      830448fd
    • J
      Set max_stack_depth explicitly in subtransaction_limit ICG test · a5e26310
      Committed by Jingyi Mei
      This comes from the 4.3_STABLE repo
      Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
      a5e26310
    • T
      test: point suse11 openssl to suse10 · 18e39aa7
      Committed by Tom Meyer
      Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
      18e39aa7