- 18 Jan 2017, 9 commits
-
-
- `optimizer_analyze_root_partition` GUC functionality is introduced. When optimizer_analyze_root_partition is enabled, stats collection on the root partition is enabled when a plain ANALYZE is run on a root partition table.
- `optimizer_analyze_midlevel_partition` GUC functionality is introduced. When optimizer_analyze_midlevel_partition is enabled, stats collection on midlevel partitions is enabled.

Expectation:

Case 1:

```sql
set optimizer_analyze_root_partition=off;
analyze tablename; -- Stats should only be collected for the leaf tables
analyze rootpartition tablename; -- Stats should only be collected for the root table
```

Case 2:

```sql
set optimizer_analyze_root_partition=on;
analyze tablename; -- Stats should be collected for the root and the leaf tables
analyze rootpartition tablename; -- Stats should only be collected for the root table
```

Signed-off-by: Omer Arap <oarap@pivotal.io>
-
An uninitialized variable caused release and debug builds to behave differently for ANALYZE. This commit fixes the issue.
-
Committed by Marbin Tan

gptransfer used a SQL command with "STRING_AGG" that combined all the attribute names and generated the distribution key. However, this resulted in an issue when there was more than one distribution key: the combined distribution key was escaped incorrectly. Instead, we now escape each attribute name separately via Python.

Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Dhanashree Kashid

With PostgreSQL 8.3, there's a new concept called "operator families". An operator class is now part of an operator family, which can contain cross-datatype operators that are "compatible" with each other. ORCA doesn't know anything about that. This commit updates the translator files to refer to OpFamily instead of OpClasses. ORCA still doesn't take advantage of this, but at least we are using operator families instead of operator classes to make indexes work.

Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by Ashwin Agrawal

This addresses the GPDB_83_MERGE_FIXME comment in xact.c:1081. GPDB doesn't need the `haveNonTemp` check, since GPDB doesn't allow data loss. GPDB doesn't support asynchronous commits from upstream, because they might cause data inconsistency across segments in a cluster. We disable support for async commit using the macro IMPLEMENT_ASYNC_COMMIT, and mark the user GUC `synchronous_commit` as DEFUNCT_OPTIONS, so that its setting is ignored and a WARNING is generated. The original check for temp tables in smgrGetPendingFileSysWork() is not valid in GPDB, since GPDB temp tables use shared buffers to support access across slices. Once GPDB decides to support async commit, this macro can be removed.

Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Asim R P
-
Committed by Asim R P

This allows writing only one .source file for a UAO test. Create the .source file in an "input/uao*/" directory. Place the answer file, also with a .source extension, into the corresponding "output/uao*/" directory.

The .source files must contain the following header:

    create schema <filename_prefix>@orientation@;
    set search_path="$user",<filename_prefix>@orientation@,public;
    SET gp_default_storage_options='orientation=@orientation@';

Replace "<filename_prefix>" with the filename excluding the ".source" extension. Generated files are named <filename_prefix>_row.sql and <filename_prefix>_column.sql. Add the generated filenames to schedule files and run pg_regress as usual.

A new option "--ao-dir" is added to pg_regress. To enable row/column test generation, set it to the directory name containing generic UAO .source tests. The directory should be created under src/test/regress/input.
-
Committed by Olaf Flebbe
-
Committed by laixiong
-
- 17 Jan 2017, 8 commits
-
-
Committed by Marbin Tan
-
Committed by Marbin Tan
-
Committed by Marbin Tan
-
Committed by Marbin Tan

Enable running behave tests with the installed binary instead of the source code.
-
Committed by Marbin Tan

This is an effort to enable us to run behave tests without relying on the source code, instead using the GPDB installed on the system, since the behave tests are integration tests.
-
Committed by Marbin Tan

If the version has non-digit characters, then the float cast of the minor version fails.
-
Committed by Marbin Tan

Author: Chumki Roy <croy@pivotal.io>
Date: Mon Dec 5 11:06:45 2016 -0800

    Remove error line - Aborting PQO plan generation.
    * Fix ORCA error message and make it the same as with Planner.

Author: Haisheng Yuan <hyuan@pivotal.io>
Date: Wed Jul 20 14:30:38 2016 -0700

    Update kerberos tinc test answer files
-
Committed by Shreedhar Hardikar
-
- 16 Jan 2017, 4 commits
-
-
Committed by Tom Lane
Cherry-picked into GPDB ahead of merge to fix compiler warning from backporting of bytea HEX encode/decode.
-
Committed by Daniel Gustafsson

Coverity complained that the aoseg_namespace variable was potentially uninitialized at usage, and while technically correct, that would require the pg_aoseg namespace to be missing in the source cluster, which seems far-fetched. Fix by forcing initialization to invalid and ensuring that the Oid has been changed to a valid Oid before use. While looking at the code I also realized it had blindly cargo-culted the same design pattern as the remaining queries, which here was pointless. Simplify the logic and avoid allocating an extra PQExpBuffer.
-
Committed by Daniel Gustafsson

While the type cache used in binary upgrades isn't a dumpable object, it has a DO_ identifier associated with it so it can use the indexing code. Skip it when looking through the DO objects to keep compilers and static analyzers happy. This was already done in pg_dump; do so for cdb_dump_agent too, and expand the documentation on why.
-
Committed by Daniel Gustafsson
When adding operators to an existing operator family, ensure to dispatch the Oids of the created objects to the segments. This also backports the upstream alt_opf17 test case from alter_generic. Reported by Dhanashree Kashid
-
- 14 Jan 2017, 4 commits
-
-
Committed by Abhijit Subramanya

- Update the answer file for tincrepo/mpp/gpdb/tests/catalog/basic/security_definer/pre_script.ans
- The index on pg_aoseg was removed. Update tincrepo/mpp/gpdb/tests/catalog/mpp25256/mpp25256.sql and its answer file to reflect that.
- UDF exception handling test cases:
  - Some of the answer files had to be updated for the (file: line no) diff
  - The error messages during gang creation have changed. A few answer files had to be changed to account for the diff
- AOCO alter table: fix the diff in the \d+ output for an AOCO table with cidr and inet columns
-
Committed by Shoaib Lari
Update the .ans file to conform to the test output for 5.0.
-
Committed by Abhijit Subramanya

The patch aims to clean up and fix tests in the tincrepo/mpp/gpdb/tests/storage/basic folder.
- Fix the external table tests.
- Fix the transaction management test and remove the max_prepared_transactions test since it is no longer valid.
- Fix the xidlimits compilation.
- Fix the partition ddl tests.
-
Committed by Shoaib Lari
Update the .ans file to conform to the test output for 5.0.
-
- 13 Jan 2017, 14 commits
-
-
Committed by Heikki Linnakangas
If a CTAS query uses a query with an InitPlan, and the InitPlan contains a call to a function that dispatches another query to the segments, the list of Oids is dispatched to the segments prematurely, in the inner query. As a result, when the CTAS query is dispatched, the OID of the new table is not included, and you get a "no pre-assigned OID for relation" error. To fix, attach the OID of the new table to the query descriptor before executing the InitPlans. Fixes github issue #1543, reported with test case by Abhijit Subramanya.
-
Committed by Jimmy Yih
Currently, the walsender process is not properly stopped during gpstop. The SignalSomeChildren function was changed in the Postgres 8.3 merge in commit 20ac0be1. It was previously at a Postgres 9.2 state to support sending signals to walsender as a part of our master and standby master replication.
-
Committed by Heikki Linnakangas
We don't build this as part of the CI, and we still won't pass the PL/TCL regression tests (doesn't seem like there's anything serious there at a quick glance though), but let's at least make it compile. Per github issue #1524
-
Committed by Heikki Linnakangas
For simplicity. This is less error-prone, too, in the face of future changes to ExprContext.
-
Committed by Heikki Linnakangas
DynamicTableScanInfo is an extension of EState, so always allocate it in the same memory context. The DynamicTableScanInfo.memoryContext field always pointed to es_query_cxt, so that is what we in fact always did anyway, this just removes the unnecessary abstraction, for simplicity.
-
Committed by Heikki Linnakangas
Pass the EState that contains it to where it's needed, instead.
-
Committed by Dhanashree Kashid
-
Committed by Jesse Zhang
This doesn't save too much time, it is just the pedant in me cringing when I looked at why we had a mandatory proprietary dependency. Alas this probably should have been done when we first introduced the `--enable-netbackup` configure flag, but better late than never.
-
Committed by Jesse Zhang
The four BSA agents seem to (inherently, as opposed to incidentally) share the same dependencies. If that's true, we might as well declare them together in one place.
-
Committed by Jesse Zhang

Those binaries need the gpbsa shared object to build. This wasn't exposed until either:
0. we changed the order we build them in (in the previous commit); or
0. we ran a parallel build.

This commit simply adds the proper dependency declaration to all the offending targets.
-
Committed by Jesse Zhang

This should not break anything, given the declarative nature of Make. But it does (especially noticeable if you are not running a parallel `make`), because we were not declaring a dependency on the gpbsa libraries for the dump agent executables.
-
Committed by Abhijit Subramanya

When building a scan descriptor for appendonly tables, the compression type for a table is copied from the pg_appendonly row in the relcache. When the pg_appendonly row for the AO table gets updated, the old relation object is destroyed as part of relcache invalidation. However, the new relation object would still point to the original compresstype string. So appendonly_beginrangescan_internal() should use pstrdup to copy the compression type instead of copying the pointer directly.
-
Committed by Heikki Linnakangas

The test case that's included is not for the bug that was fixed, but for an older bug with default_tablespace that doesn't seem to be covered by any existing tests. Seems good to add a test for it since we're touching the code.

The bug that this actually fixes is difficult to test, hence no regression test for it is included. But to reproduce manually, connect to a segment in utility mode and run the following queries:

---
PGOPTIONS="-c gp_session_role=utility" psql -p 25432 gptest
psql (8.3.23)
Type "help" for help.

gptest=# create table t1(a int);
NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'a' as the Greenplum Database data distribution key for this table.
HINT:  The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
CREATE TABLE
gptest=# insert into t1 values (1), (2), (3), (4);
INSERT 0 4
gptest=# create table t2 as select * from t1;
server closed the connection unexpectedly
        This probably means the server terminated abnormally before or while processing the request.
---

Fixes github issue #1511, reported with test case by Abhijit Subramanya.
-
Committed by Abhijit Subramanya
When a superuser tries to split a partition table which was created by a non superuser, the split should complete successfully and the resulting partitions should have the owner set to the owner of the parent table.
-
- 12 Jan 2017, 1 commit
-
-
Committed by Pengzhou Tang

To avoid a hang issue, rewrite AnalyzerWorker to follow the same logic as the Worker class. Besides, haltWork() needs to be called after join() to shut down the whole WorkerPool; fix some incorrect usage of this.
-