- 05 April 2017, 4 commits
-
-
Committed by Melanie Plageman
* Removing a deprecated utility. This utility was used in the 3.3 -> 4.0 partition upgrade.

Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by David Yozie
* removing some duplicate files
* correctly excluding pivotal condition from build; updating Gemfile.lock to use latest version of bookbinder
-
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Tushar Dadlani
[#137900495](https://www.pivotaltracker.com/n/projects/1321816)
-
- 04 April 2017, 12 commits
-
-
Committed by Daniel Gustafsson
[ci skip]
-
Committed by Ashwin Agrawal
Initialize TransactionXmin to avoid situations where scanning pg_authid or other tables, mostly in BuildFlatFiles() via SnapshotNow, may try to chase down pg_subtrans for an older "sub-committed" transaction, for which the corresponding file may not (and is not supposed to) exist. Setting TransactionXmin will avoid calling SubTransGetParent() in TransactionIdDidCommit() for older XIDs. Also, along the way, initialize RecentGlobalXmin, as the heap access method needs it set.

Repro, for the record, of one such case:
```
CREATE ROLE foo;
BEGIN;
SAVEPOINT sp;
DROP ROLE foo;
RELEASE SAVEPOINT sp; -- this is key: it marks the transaction as sub-committed in clog
kill or gpstop -air
<at least N transactions, enough to cross the single pg_subtrans file limit,
 roughly CLOG_XACTS_PER_BYTE * BLCKSZ * SLRU_PAGES_PER_SEGMENT>
restart
-- recovery errors out with missing pg_subtrans
```
-
Committed by Daniel Gustafsson
The extension for executable binaries is defined in X; replace the old (and now defunct) references to EXE_EXT. Also remove a commented-out dead gpfdist rule in gpMgmt, dating from before the move to core.
-
Committed by Daniel Gustafsson
[ci skip]
-
This reverts commit ab4398dd. [#142986717]
-
Similar to ea818f0e, we remove the sensitivity to segment count in the test `dml_oids_delete`. Without this, the test was passing for the wrong reason:

1. The table `dml_heap_r` was set up with 3 tuples, whose values in the distribution column `a` are 1, 2, and NULL respectively. On a 2-segment system, the 1-tuple and the 2-tuple land on distinct segments, and because of a quirk of our local OID counter synchronization, they will get the same OIDs.
2. The table `tempoid` will be distributed randomly under ORCA, with tuples copied from `dml_heap_r`.
3. The intent of the final assertion is to check that the OIDs are not changed by the DELETE. Also hidden in the assumption is that the tuples stay on the same segment as the source table.
4. However, the compounding effect of that "same OID" quirk with a randomly distributed `tempoid` leads to a passing test when we have two segments!

This commit fixes that, so the test passes for the right reason, and on any segment count.
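A minimal sketch of the OID-stability check the message describes; only the table names `dml_heap_r` and `tempoid` come from the commit, and the DDL and predicate are hypothetical:

```sql
-- Hypothetical reconstruction, not the actual regression test.
CREATE TABLE tempoid AS SELECT oid, a FROM dml_heap_r;  -- snapshot OIDs before the DML
DELETE FROM dml_heap_r WHERE a IS NULL;                 -- the DML under test
-- Assert: every surviving row keeps its original OID, so joining
-- back on oid must not lose any rows.
SELECT count(*) FROM dml_heap_r r JOIN tempoid t ON r.oid = t.oid;
```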
-
Committed by Todd Sedano
-
Committed by Todd Sedano
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
Committed by mkiyama
[ci skip]
-
Committed by Haisheng Yuan
While trying to understand how Orca generates plans for CTEs using shared input scans, I found that the share input scan is generated during the CTE producer & consumer DXL-node-to-PlannedStmt translation stage, not in the Expr-to-DXL stage inside Orca. It turns out CDXLPhysicalSharedScan is not used anywhere, so remove all the related dead code.
-
Committed by Heikki Linnakangas
It's an error in standard C, at least in older standards, to typedef the same type more than once, even if the definition is the same. Newer versions of gcc don't complain about it, but you can see the warnings with -pedantic (among a ton of other warnings; search for "redefinition"). To fix, remove the duplicate typedefs.

The ones in src/backend/gpopt and src/include/gpopt were actually OK, because a duplicate typedef is OK in C++, and those files are compiled with a C++ compiler. But many of the typedefs in those files were not used for anything, so I nevertheless removed the duplicate ones there too that caught my eye.

In gpmon.h, we were redefining apr_*_t types when postgres.h had been included. But as far as I can tell, that was always the case: all the files that included gpmon.h included postgres.h, directly or indirectly, before it. Search & replace the references to apr_*_t types in that file with the postgres equivalents, to make it clearer what they actually are.
-
Committed by Heikki Linnakangas
CdbCellBuf was only used in hash aggregates, and they used only a fraction of its functionality. In essence, it served as a very simple memory allocator, where each allocation was of fixed size, and the only way to free was to reset the whole cellbuf. But the same code was using a different, but similar, mpool_* mechanism for allocating other things stored in the hash buckets. We might as well use mpool_alloc for the HashAggEntry struct as well, and get rid of all the cellbuf code. CdbPtrBuf was completely unused.
-
- 03 April 2017, 7 commits
-
-
Committed by Dave Cramer
* We don't ship jdbc or odbc. For building the installers, this repo is not gone, just unlinked from gpdb5
* remove references to odbc and jdbc
* remove more references to jdbc and odbc, as well as client documentation
* correctly remove Windows-specific code
-
Committed by Daniel Gustafsson
The comment about backporting to a 10-year-old version has passed its due date, so remove it. Also actually use the referenced variable, to make the code less confusing to readers (the compiler will be smart enough about stack allocations anyway). Also reflow and generally tidy up the comment a little.
-
Committed by Daniel Gustafsson
appendStringInfo() is a variadic function that treats the passed string as a format specifier. This is wasteful processing when just adding a constant string, which can be done faster with a call to appendStringInfoString(), where no format processing is performed. This still leaves lots of appendStringInfo() calls in the codebase, but they are from upstream and will be addressed when we merge with future versions of postgres. The calls in this patch are the GPDB-specific ones.
-
Committed by Daniel Gustafsson
With the last remaining test suites moved over to ICW, there is no longer anything left running in BugBuster. Remove the remaining files and the BugBuster makefile integration in one big swing of the git rm axe. The only thing left in use was a data file which was referenced from ICW; move this to regress/data instead.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
This moves the memory_quota tests more or less unchanged to ICW. Changes include removing ignore sections and minor formatting as well as a rename to bb_memory_quota.
-
Committed by Daniel Gustafsson
This combines the various mpph tests in BugBuster into a single new ICW suite, bb_mpph. Most of the existing queries were moved over, with a few pruned that were too uninteresting or covered elsewhere. The BugBuster tests combined are: load_mpph, mpph_query, mpph_aopart, hashagg and opperf.
-
- 01 April 2017, 17 commits
-
-
Committed by Pengzhou Tang
QD used to send a transient-types table to QEs, and QEs would then remap the tuples with this table before sending them to QD. However, in complex queries the QD can't discover all the transient types, so tuples can't be correctly remapped on the QEs. One example:

```
SELECT q FROM (SELECT MAX(f1) FROM int4_tbl GROUP BY f1 ORDER BY f1) q;
ERROR:  record type has not been registered
```

To fix this issue we changed the underlying logic: instead of sending the possibly incomplete transient-types table from the QD to the QEs, we now send the tables from motion senders to motion receivers and do the remap on the receivers. Receivers maintain a remap table for each motion, so tuples from different senders can be remapped accordingly. In this way, queries containing multiple slices can also handle transient record types correctly between two QEs.

The remap logic is derived from executor/tqueue.c in upstream postgres. There is support for composite/record types and arrays, as well as range types; however, as range types are not yet supported in GPDB, that logic is put under a conditional compilation macro, and in theory it shall be automatically enabled when range types are supported in GPDB.

One side effect of this approach is a performance penalty on receivers, as the remap requires recursive checks on each tuple of record types. However, optimization was done to keep this side effect minimal for non-record types.

The old logic that built the transient-types table on the QD and sent it to the QEs is retired.

Signed-off-by: Gang Xiong <gxiong@pivotal.io>
Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Gang Xiong
Commit d9148a54 enabled the record array as well as the comparison of record types, but the comparison functions/operators' OIDs used in upstream postgres are already used by others in GPDB, and many test cases assume that comparison of record types should fail. As we don't actually need this comparison feature at the moment in GPDB, we simply remove these functions for now.

Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Tom Lane
Implement comparison of generic records (composite types), and invent a pseudo-type record[] to represent arrays of possibly-anonymous composite types. Since composite datums carry their own type identification, no extra knowledge is needed at the array level.

The main reason for doing this right now is that it is necessary to support the general case of detection of cycles in recursive queries: if you need to compare more than one column to detect a cycle, you need to compare a ROW() to an array built from ROW()s, at least if you want to do it as the spec suggests. Add some documentation and regression tests concerning the cycle detection issue.
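A minimal illustration of what the new comparison support permits; the query below is not from the commit, just a sketch of comparing a ROW() to an array built from ROW()s:

```sql
-- ARRAY[ROW(...), ROW(...)] now yields the record[] pseudo-type, and
-- generic record comparison makes the = ANY lookup work; this is the
-- building block for multi-column cycle detection in recursive queries.
SELECT ROW(1, 2) = ANY (ARRAY[ROW(1, 2), ROW(3, 4)]);
```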
-
Committed by Heikki Linnakangas
The old mechanism was to scan the complete plan, searching for a pattern with a Join where the outer side included an Append node. The inner side was duplicated into an InitPlan, with the pg_partition_oid aggregate to collect the OIDs of all the partitions that can match. That was inefficient and broken: if the duplicated plan was volatile, you might choose the wrong partitions. And scanning the inner side twice can obviously be slow, if there are a lot of tuples.

Rewrite the way such plans are generated: instead of using an InitPlan, inject a PartitionSelector node into the inner side of the join. Fixes github issues #2100 and #2116.
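A sketch of the kind of plan this affects, using hypothetical tables (a range-partitioned fact table joined on its partitioning key):

```sql
CREATE TABLE sales (id int, dt date)
DISTRIBUTED BY (id)
PARTITION BY RANGE (dt)
(START (date '2017-01-01') END (date '2018-01-01') EVERY (INTERVAL '1 month'));

CREATE TABLE dates (dt date) DISTRIBUTED RANDOMLY;

-- With the rewrite, the inner side of the join feeds a PartitionSelector
-- that tells the Append over sales which partitions can match, instead of
-- duplicating the inner plan into an InitPlan with pg_partition_oid.
EXPLAIN SELECT * FROM sales s JOIN dates d ON s.dt = d.dt;
```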
-
Committed by Ning Yu
-
Committed by Haozhou Wang
This commit fixes issue #1621. The current external table implementation only recognizes LF as the line ending. If a table is created with CR as the line ending, then no data can be selected, because the data is never split into lines.

Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
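A minimal sketch of the case being fixed, with a hypothetical gpfdist URL and file; the NEWLINE format option declares the line ending used by the source file:

```sql
-- Before this fix, a CR-terminated file yielded no rows, because the
-- input was never split into lines.
CREATE EXTERNAL TABLE ext_cr_data (id int, name text)
LOCATION ('gpfdist://etlhost:8081/data_cr.txt')
FORMAT 'TEXT' (DELIMITER '|' NEWLINE 'CR');

SELECT count(*) FROM ext_cr_data;
```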
-
Committed by foyzur
* Adding last activity time in the SessionState.
* Adding last activity time in the session_state_memory_entries_f and updating view session_level_memory_consumption.
* Adding unit tests.
* Adding SessionState initialization test.
* Changing last_idle_time to idle_start as per PR suggestion.
-
Committed by C.J. Jameson
-
Committed by Christopher Hajas
-
Committed by Heikki Linnakangas
* Rewrite Kerberos test suite
* Remove obsolete Kerberos test stuff from pipeline and TINC. We now have a rewritten Kerberos test script in installcheck-world.
* Update ICW kerberos test to run on concourse container

This adds a whole new test script in src/test/regress, implemented in plain bash. It sets up a temporary KDC as part of the script, and therefore doesn't rely on a pre-existing Kerberos server, like the old MU_kerberos-smoke test job did. It does require MIT Kerberos server-side utilities to be installed instead, but no server needs to be running, and no superuser privileges are required.

This supersedes the MU_kerberos-smoke behave tests. The new rewritten bash script tests the same things:

1. You cannot connect to the server before running 'kinit' (to verify that the server doesn't just let anyone in, which could happen if the pg_hba.conf is misconfigured for the test, for example)
2. You can connect after running 'kinit'
3. You can no longer connect if the user account is expired

The new test script is hooked up to the top-level installcheck-world target.

There were also some Kerberos-related tests in TINC. Remove all that, too. They didn't seem interesting in the first place; they look like they were just copies of a few random other tests, intended to be run as a smoke test after a connection had been authenticated with Kerberos, but there was nothing in there to actually set up the Kerberos environment in TINC.

Other minor patches added:
* Remove absolute paths when calling kerberos utilities; assume they are on PATH, so that they can be accessed from various installs, and add a clarifying message if a sample kerberos utility is not found with 'which'
* Specify an empty load library for kerberos tools
* Move the kerberos test to its own script file; this allows a failure to be recorded without exiting Make, so the server can always be turned off
* Add a trap for stopping the kerberos server in all cases
* Use localhost for the kerberos connection

Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: Chumki Roy <croy@pivotal.io>
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Heikki Linnakangas
The loop to print each constraint's name was broken: it printed the name of the first constraint multiple times. Add a test case, as a matter of principle. In passing, change the set of tests around this error to all use the same partitioned table, rather than dropping and recreating it for each command, and reduce the number of partitions from 10 to 5. That shaves some milliseconds off the time to run the test.
-
Committed by Tom Meyer
Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by Tom Meyer
Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by Jingyi Mei
This comes from the 4.3_STABLE repo.

Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Tom Meyer
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
Committed by Toolsmiths Team
This reverts commit 3531b6f681a05352f46f2f57861837f1ced2c6a0.
-
Committed by Tushar Dadlani
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-