- 17 January 2017, 5 commits
-
-
Committed by Marbin Tan
* Enable running behave tests with the installed binary instead of the source code.
-
Committed by Marbin Tan
* This is an effort to enable us to run behave tests without relying on the source code, and instead use the GPDB installed on the system, as the behave tests are integration tests.
-
Committed by Marbin Tan
* If the version has non-digit characters, then the float cast fails for the minor version.
-
Committed by Marbin Tan
Author: Chumki Roy <croy@pivotal.io>
Date:   Mon Dec 5 11:06:45 2016 -0800

    Remove error line - Aborting PQO plan generation.
    * Fix ORCA error message and make it the same as with Planner.

Author: Haisheng Yuan <hyuan@pivotal.io>
Date:   Wed Jul 20 14:30:38 2016 -0700

    Update kerberos tinc test answer files
-
Committed by Shreedhar Hardikar
-
- 16 January 2017, 4 commits
-
-
Committed by Tom Lane
Cherry-picked into GPDB ahead of merge to fix compiler warning from backporting of bytea HEX encode/decode.
-
Committed by Daniel Gustafsson
Coverity complained that the aoseg_namespace variable was potentially uninitialized at usage, and while technically correct, it would require the pg_aoseg namespace to be missing in the source cluster, which seems far-fetched. Fix by forcing initialization to invalid and ensuring that the Oid has been changed to a valid Oid before use. While looking at the code I also realized it had been blindly cargo-culted to the same design pattern as the remaining queries, which here was pointless. Simplify the logic and avoid allocating an extra PQExpBuffer.
-
Committed by Daniel Gustafsson
While the type cache used in binary upgrades isn't a dumpable object, it has a DO_ identifier associated with it in order to use the indexing code. Skip it when looking through the DO objects to keep compilers and static analyzers happy. This was already done in pg_dump; do so for cdb_dump_agent too, and expand the documentation on why.
-
Committed by Daniel Gustafsson
When adding operators to an existing operator family, ensure that the Oids of the created objects are dispatched to the segments. This also backports the upstream alt_opf17 test case from alter_generic. Reported by Dhanashree Kashid
-
- 14 January 2017, 4 commits
-
-
Committed by Abhijit Subramanya
- Update answer file for tincrepo/mpp/gpdb/tests/catalog/basic/security_definer/pre_script.ans
- Index on pg_aoseg was removed. Update tincrepo/mpp/gpdb/tests/catalog/mpp25256/mpp25256.sql and its answer file to reflect that.
- UDF exception handling test cases:
  - Some of the answer files had to be updated for the (file: line no) diff
  - The error messages during gang creation have changed. A few answer files had to be changed to account for the diff
- AOCO alter table:
  - Fix the diff in the \d+ output for aoco tables with cidr and inet columns
-
Committed by Shoaib Lari
Update the .ans file to conform to the test output for 5.0.
-
Committed by Abhijit Subramanya
The patch aims to clean up and fix tests in the tincrepo/mpp/gpdb/tests/storage/basic folder.
- Fix the external table tests.
- Fix the transaction management test and remove the max_prepared_transactions test since it is no longer valid.
- Fix the xidlimits compilation.
- Fix the partition ddl tests.
-
Committed by Shoaib Lari
Update the .ans file to conform to the test output for 5.0.
-
- 13 January 2017, 14 commits
-
-
Committed by Heikki Linnakangas
If a CTAS query uses a query with an InitPlan, and the InitPlan contains a call to a function that dispatches another query to the segments, the list of Oids is dispatched to the segments prematurely, in the inner query. As a result, when the CTAS query is dispatched, the OID of the new table is not included, and you get a "no pre-assigned OID for relation" error. To fix, attach the OID of the new table to the query descriptor before executing the InitPlans. Fixes github issue #1543, reported with test case by Abhijit Subramanya.
-
Committed by Jimmy Yih
Currently, the walsender process is not properly stopped during gpstop. The SignalSomeChildren function was changed in the Postgres 8.3 merge in commit 20ac0be1. It was previously at a Postgres 9.2 state to support sending signals to walsender as a part of our master and standby master replication.
-
Committed by Heikki Linnakangas
We don't build this as part of the CI, and we still won't pass the PL/TCL regression tests (doesn't seem like there's anything serious there at a quick glance though), but let's at least make it compile. Per github issue #1524
-
Committed by Heikki Linnakangas
For simplicity. This is less error-prone, too, in the face of future changes to ExprContext.
-
Committed by Heikki Linnakangas
DynamicTableScanInfo is an extension of EState, so always allocate it in the same memory context. The DynamicTableScanInfo.memoryContext field always pointed to es_query_cxt, so that is what we in fact always did anyway; this just removes the unnecessary abstraction, for simplicity.
-
Committed by Heikki Linnakangas
Pass the EState that contains it to where it's needed, instead.
-
Committed by Dhanashree Kashid
-
Committed by Jesse Zhang
This doesn't save too much time; it is just the pedant in me cringing when I looked at why we had a mandatory proprietary dependency. Alas, this probably should have been done when we first introduced the `--enable-netbackup` configure flag, but better late than never.
-
Committed by Jesse Zhang
The four BSA agents seem to (inherently, as opposed to incidentally) share the same dependencies. If that's true, we might as well declare them together in one place.
-
Committed by Jesse Zhang
Those binaries need the gpbsa shared object to build. This wasn't exposed until either:
0. we changed the order in which we build them (in the previous commit); or
0. we ran a parallel build.
This commit simply adds the proper dependency declaration to all the offending targets.
-
Committed by Jesse Zhang
This should not break anything, given the declarative nature of Make. But it does (especially noticeably if you are not running a parallel `make`), because we were not declaring a dependency on the gpbsa libraries for the dump agent executables.
-
Committed by Abhijit Subramanya
When building a scan descriptor for appendonly tables, the compression type for a table is copied from the pg_appendonly row in the relcache. When the pg_appendonly row for the AO table gets updated, the old relation object is destroyed as part of relcache invalidation. However, the new relation object would still point to the original compresstype object. So appendonly_beginrangescan_internal() should use pstrdup to copy the compression type instead of copying the pointer directly.
-
Committed by Heikki Linnakangas
The test case that's included is not for the bug that was fixed, but for an older bug with default_tablespace that doesn't seem to be covered by any existing tests. Seems good to add a test for it since we're touching the code. The bug that this actually fixes is difficult to test, hence no regression test for it is included. But to reproduce manually, connect to a segment in utility mode and run the following query:
---
PGOPTIONS="-c gp_session_role=utility" psql -p 25432 gptest
psql (8.3.23)
Type "help" for help.

gptest=# create table t1(a int);
NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'a' as the Greenplum Database data distribution key for this table.
HINT:  The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
CREATE TABLE
gptest=# insert into t1 values (1), (2), (3), (4);
INSERT 0 4
gptest=# create table t2 as select * from t1;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
---
Fixes github issue #1511, reported with test case by Abhijit Subramanya.
-
Committed by Abhijit Subramanya
When a superuser tries to split a partition table which was created by a non-superuser, the split should complete successfully and the resulting partitions should have their owner set to the owner of the parent table.
-
- 12 January 2017, 6 commits
-
-
Committed by Pengzhou Tang
To avoid a hang issue, rewrite AnalyzerWorker to follow the same logic as the Worker class. Besides, haltWork() needs to be called after join() to shut down the whole WorkerPool; fix some incorrect usages of this.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
Instead of logging errno, include the %m format specifier, which pulls in the human-readable errno translation. Also fix typos and make the error logging a bit more consistent with regards to style, string quoting and spelling. From a report by Github user laixiong in issue #1507
-
Committed by Daniel Gustafsson
If the format_str variable is left uninitialized we are in trouble (partly because palloc never returns NULL on failure to allocate). Remove the useless checks, and also remove initialization on declaration, since that can hide us subtly breaking the code as it stands today.
-
Committed by Dave Cramer
* Implement bytea HEX encode/decode; this is from upstream commit a2a8c7a6. The primary reason for back-patching is to allow pgadmin4 to work. Unfortunately there are a few hacks to make this work without Enum Config GUCs, mostly around setting the GUC, as the target is an integer but the GUC value is a string.
* Fixed regression tests; removed unused code.
-
Committed by Heikki Linnakangas
The code in rewriteDefine.c didn't know about AO tables, and tried to use heap_beginscan() on an AO table. It resulted in an ERROR, because even for an empty table RelationGetNumberOfBlocks() returned 1, but when we tried to scan it like a heap table, heap_getnext() couldn't find the block. To fix, add an explicit check that you can only turn heap tables into views this way. This also forbids turning an external table into a view. (We could possibly relax that limitation, but it's a pointless feature anyway, which we only support for backwards-compatibility reasons in PostgreSQL.) Also add a test case for this.
-
- 11 January 2017, 7 commits
-
-
Committed by Heikki Linnakangas
Constraints are not uniquely identified by namespace and name alone. It's possible to have multiple constraints with the same name, but attached to different tables or domains. So include parent relation and domain OIDs in the key. This was uncovered by a regression test in the TINC test suite. Move the OID consistency tests from TINC into the normal regression test suite, to have test coverage for this.
-
Committed by Adam Lee
-
Committed by Haozhou Wang
1. Add a dependency between custom external tables and their protocol to keep the dumped SQL ordered correctly (CREATE PROTOCOL must execute before creating the custom external table).
2. Remove \" from the dumped location URI.
3. Support dumping multiple locations.

Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
Committed by Daniel Gustafsson
Commit 2a3df5d0 introduced broken changes to the ereport() calls in the external table command. Fix by including the proper format specifier. Also concatenate the error message to a single line while at it, to improve readability. Spotted by Heikki Linnakangas
-
Committed by Shoaib Lari
Update the .ans file to conform to the test output for 5.0.
-
Committed by Ashwin Agrawal
We have seen failures multiple times in the pipeline at the point of checking whether the database is in changetracking or not, for failover_to_mirror scenarios. In this case the primary is panicked and the mirror takes over, but due to the workload running, the master may see panics due to 2PC erroring out after retries. The test should be able to continue working in such a condition, hence add wait_for_database to wait for the database to be up after the master panic.
-
Committed by Marbin Tan
* The timeout was removed from the WorkerPool in commit c7dfcc45.
-