- 12 Sep 2018, 3 commits
-
-
Submitted by Joao Pereira

This commit changed the way the grammar parses the subpartition information, to ensure that a template cannot exist without a `SUBPARTITION BY`. Test coverage was added for the case where the `SUBPARTITION TEMPLATE` expression is written before a `SUBPARTITION BY`. The error displayed when the template appears after a partition was changed to point to the TEMPLATE.
Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
Co-authored-by: Adam Berlin <aberlin@pivotal.io>
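As an illustration of the ordering the grammar now enforces (table and column names here are hypothetical, not from the commit), the `SUBPARTITION TEMPLATE` must follow a `SUBPARTITION BY` clause:

```
-- Valid: SUBPARTITION TEMPLATE appears after SUBPARTITION BY
CREATE TABLE sales (id int, year int, region text)
DISTRIBUTED BY (id)
PARTITION BY RANGE (year)
SUBPARTITION BY LIST (region)
SUBPARTITION TEMPLATE (
    SUBPARTITION usa VALUES ('usa'),
    SUBPARTITION europe VALUES ('europe'))
(START (2017) END (2019) EVERY (1));

-- Rejected after this change: a SUBPARTITION TEMPLATE written
-- before (or without) a SUBPARTITION BY clause.
```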
-
Submitted by Joao Pereira

This contrib module was converted to the Postgres extension framework. Ensure that the function gp_heap_distribution_check is called on all segments. In the previous implementation the function was not called on all segments, so it produced incorrect results unless you ran it directly on each segment. Added tests where some wrong data is inserted into a segment to ensure the function returns the correct result. Use the gp_execute_on_server() UDF in the test to simulate bad distribution.
Co-authored-by: Asim R P <apraveen@pivotal.io>
-
Submitted by Asim R P

This function is intended for tests to run DDL/DML commands on a specific GPDB segment.
Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
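A sketch of how such a test helper might be invoked; the exact signature (a segment content ID plus a SQL string) is an assumption based on the description above, and the table name is hypothetical:

```
-- Assumed signature: gp_execute_on_server(content int, sql text)
-- Insert a row directly on segment 0, bypassing the normal distribution
-- policy, to simulate badly distributed data for a distribution check.
SELECT gp_execute_on_server(0, 'INSERT INTO t_bad_dist VALUES (42)');
```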
-
- 11 Sep 2018, 9 commits
-
-
Submitted by Adam Berlin
-
Submitted by Adam Berlin

The only external manipulation of this field occurs in PortalStart, which we would also like to get rid of, but we're not sure how at the moment.
Co-authored-by: Asim R P <apraveen@pivotal.io>
-
Submitted by Adam Berlin

We have been using Portal->releaseResLock to decide whether a resource queue is locked for a given portal. Instead, we give the resource queue system the responsibility of deciding whether the portal is locked.
Co-authored-by: Asim R P <apraveen@pivotal.io>
-
Submitted by Adam Berlin

Avoid acquiring a resource queue lock for the same portal more than once while calling ProcessQuery for the portal. An example of where this situation occurs can be found in the provided test.
Co-authored-by: Asim R P <apraveen@pivotal.io>
-
Submitted by Daniel Gustafsson

Use EXPLAIN output to test the qual pushdown rather than just invoking the query, as it's hard to verify the pushdown by examining that (this introduces an _optimizer file). Also remove redundant DROP clauses, refactor a few things to avoid NOTICEs, and extend the documentation to not just list an internal Jira ticket.
-
Submitted by ZhangJackey

Fix test cases that were failing to pass; they were caused by a4cbf586.
-
Submitted by ZhangJackey

When doing an inner join, we test whether we can use a redistribute motion via the function cdbpath_partkeys_from_preds. But if a_partkey is NIL (it is NIL at the beginning of the function), we append nothing to it, so the function can only return false. This means the planner can only generate a broadcast motion for the inner relation. We fix this with the same logic as for an outer join. A WTS node is immovable; this commit adds some code to handle it.
Co-authored-by: Shujie Zhang <shzhang@pivotal.io>
Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
-
Submitted by Bhuvnesh Chaudhary

There are plan changes after commit 02af5c59, so update the output files with the valid plans. This was missed in the earlier commit.
-
Submitted by Dhanashree Kashid
-
- 10 Sep 2018, 4 commits
-
-
Submitted by Daniel Gustafsson
-
Submitted by Lei (Alexandra) Wang

Correct the cost calculation for the SplitUpdate plan. This partly addresses a GPDB_90_MERGE_FIXME introduced in 73801e8. As mentioned in the FIXME, this will not help generate a better plan, because we have no choice other than simply adding the SplitUpdate node. Note that only the cost is adjusted; the width is still incorrect. We will not fix the width for now, because upstream commit 3fc6e2d7 will fix it.
Co-authored-by: Shujie Zhang <shzhang@pivotal.io>
Co-authored-by: Alexandra Wang <leiwangcheme@gmail.com>
-
Submitted by Jesse Zhang

This is a leftover from refactoring done in commit 78a4890a.
-
Submitted by Pengzhou Tang

In commit fb9081fc, we introduced a fault injector to drop a stop ack. It released a pthread mutex lock by accident, which put the interconnect structures in a race condition. As a result, a FATAL error was reported: "FATAL: freelist NULL: count 2 max 1 buf (nil) (ic_udpifc.c:3501)".
-
- 08 Sep 2018, 10 commits
-
-
Submitted by Mel Kiyama

* docs - ANALYZE command - HLL statistics, incremental analyze. Also updates topics related to partitioned table statistics. This will be backported to 5X_STABLE with these changes: stakind5 is not available; stakind5 = 99 is moved to stakind4.
* docs - ANALYZE - HLL statistics, incremental analyze - updates based on review comments.
* docs - ANALYZE - HLL statistics, incremental analyze - minor late updates.
* docs - ANALYZE - HLL statistics, incremental analyze - fix x-refs.
-
Submitted by Dhanashree Kashid

Previously, while optimizing nestloop joins, ORCA always generated a blocking materialize node (cdb_strict=true). While this conservative behavior ensured that the join node produced by ORCA was always deadlock safe, we sometimes produced slow-running plans. ORCA is now capable of producing a blocking materialize only when needed, by detecting a motion hazard in the nestloop join; a streaming materialize is generated when there is no motion hazard. This commit adds a GUC to control this behavior: when set to off, we fall back to the old behavior of always producing a blocking materialize. Also bump the statement_mem for a test in segspace. After this change, for the test query, we produce a streaming spool, which changes the number of operator groups in the memory quota calculation, and the query fails with `ERROR: insufficient memory reserved for statement`; bump the statement_mem by 1MB to test the fault injection. Also bump the ORCA version to 2.72.0.
Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io>
-
Submitted by Goutam Tadi

Authored-by: Goutam Tadi <gtadi@pivotal.io>
-
Submitted by Xin Zhang

[#159742200]
Co-authored-by: Xin Zhang <xzhang@pivotal.io>
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
-
Submitted by Goutam Tadi

Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
-
Submitted by Goutam Tadi

[#159742200]
Co-authored-by: Xin Zhang <xzhang@pivotal.io>
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
-
Submitted by Goutam Tadi

Fix the standby filespace directory path.
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
Co-authored-by: Fei Yang <fyang@pivotal.io>
Co-authored-by: Jemish Patel <jpatel@pivotal.io>
-
Submitted by Goutam Tadi

[#159742200]
Co-authored-by: Xin Zhang <xzhang@pivotal.io>
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
Co-authored-by: Fei Yang <fyang@pivotal.io>
-
Submitted by Xin Zhang

Add a behave test for FQDN_HBA flag support.
Co-authored-by: Xin Zhang <xzhang@pivotal.io>
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
-
Submitted by Goutam Tadi

Behave tests for gpinitsystem with FQDN.
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
- 07 Sep 2018, 10 commits
-
-
Submitted by Paul Guo
-
Submitted by Richard Guo

The only consumer of RestrictInfo->ojscope_relids is gen_implied_qual(), to tell whether a RestrictInfo is an outer join clause. RestrictInfo->outer_relids can be used to do the same job, so remove ojscope_relids from RestrictInfo and from the arguments of make_restrictinfo(). PostgreSQL does not have ojscope_relids in RestrictInfo.
-
Submitted by Paul Guo

pg_upgrade: Fix a dump file diff caused by a default value of type bit varying in a relation column. (#4823)

Here is the repro case:

    psql -X -d regression -c "CREATE TABLE t111 ( a40 bit varying(5) DEFAULT '1');"
    psql -X -d regression -c "CREATE TABLE t222 ( a40 bit varying(5) DEFAULT B'1');"

After pg_upgrade testing, we see a failure, and the diff of the dump SQL files looks like this:

     CREATE TABLE t111 (
    -    a40 bit varying(5) DEFAULT B'1'::bit varying
    +    a40 bit varying(5) DEFAULT (B'1'::"bit")::bit varying
     ) DISTRIBUTED BY (a40);

There is no diff for table t222. From a functionality perspective, the difference seems to mean nothing, but it is annoying for CI. This issue was found when testing pg_upgrade using the pg_regression database, which is generated after running the regression tests, so we do not need to add a test case for it. This issue also exists on the latest PG; we submitted a patch upstream and it is under review. We'll check it into GP in advance.
Co-authored-by: Richard Guo <riguo@pivotal.io>
Co-authored-by: Paul Guo <pguo@pivotal.io>
-
Submitted by Heikki Linnakangas
We can manage without it. Convert them into human-oriented comments, and rely on the usual "compare with expected output" method for all of these tests. Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/lrJFgQR-KhI/KFTnrJj2BQAJ
-
Submitted by BaiShaoqi

The Storage column in the listTables function used to show nothing when pg_catalog.pg_class's relstorage is 'p' or 'f'. Add a comment for RELSTORAGE_PARQUET in pg_class.h. Review comments from Heikki Linnakangas and Daniel Gustafsson.
-
Submitted by Richard Guo

Previously, for the query below, we disabled ANY_SUBLINK pullup to work around the assertion failure that left_ec/right_ec were not set.

```
select * from A where exists
  (select * from B where A.i in
    (select C.i from C where C.i = B.i));
```

This commit sets left_ec/right_ec properly in gen_implied_qual() and re-enables the above ANY_SUBLINK pullup.
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Richard Guo <guofenglinux@gmail.com>
-
Submitted by Jimmy Yih

A `CREATE TABLE AS` without a `DISTRIBUTED BY` clause will create a randomly distributed table when optimized by ORCA. The plan for the CTAS will have a redistribute motion (random) between the scan and the insert. Depending on your data, this style of plan could be more even, equally even, or less even than a hash-distributed table (the kind of distribution usually assumed by the planner). This commit changes the test to explicitly distribute by the same column that the planner would guess.
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
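A minimal sketch of the kind of change described (table and column names hypothetical): pinning the distribution key explicitly keeps the plan, and therefore the test output, stable under both optimizers.

```
-- Without DISTRIBUTED BY, ORCA creates a randomly distributed table:
CREATE TABLE t_ctas AS SELECT a, b FROM src;

-- Explicitly distribute by the column the planner would guess:
CREATE TABLE t_ctas AS SELECT a, b FROM src DISTRIBUTED BY (a);
```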
-
Submitted by Daniel Gustafsson
-
Submitted by Heikki Linnakangas

The upstream pg_cancel_backend() function should work fine on GPDB. It's not exactly the same: with gp_cancel_query(), you would pass the session ID and command ID as arguments, while pg_cancel_backend() takes a PID. But it seems just as good from a usability point of view. Let's avoid the duplicate code, and remove gp_cancel_query(). Also remove the gp_cancel_query_print_log and gp_cancel_query_delay_time GUCs. They were not directly related to gp_cancel_query(); they would have an effect on cancellations caused by pg_cancel_backend() or statement_timeout, too. But they were marked as "developer options", and they printed the session and command ID, so I think they were meant to be used with gp_cancel_query(). I don't think anyone uses them, though, so let's just remove them, rather than change them to print PIDs.
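For reference, the upstream replacement is used like this; the PID comes from the pg_stat_activity view (column names as in PostgreSQL 9.2 and later, and the database name is illustrative):

```
-- Cancel the running query of every backend in database "regression",
-- other than our own session. pg_cancel_backend() takes a PID.
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE datname = 'regression' AND pid <> pg_backend_pid();
```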
-
Submitted by Jesse Zhang

My compiler (Clang-8) is complaining about this, and I agree:

```
cdbgroup.c:4193:24: warning: expression which evaluates to zero treated as a null pointer constant of type 'List *' (aka 'struct List *') [-Wnon-literal-null-conversion]
        fref->aggdistinct = false; /* handled in preliminary aggregation */
                            ^~~~~
../../../src/include/c.h:195:15: note: expanded from macro 'false'
#define false ((bool) 0)
              ^~~~~~~~~~
```

Commit 35e60338 introduced this, most likely accidentally. Come to think of it, Postgres (until 11) isn't even using a "real boolean": we just `typedef char bool`, which is why we are *not* getting more complaints from more compilers at the default warning level.
-
- 06 Sep 2018, 4 commits
-
-
Submitted by Taylor Vesely

In GPDB, AGGREGATE functions can be either 'ordered' or 'hypothetical', and as a result the aggr_args token carries more information than in upstream. Except for CREATE [ORDERED] AGGREGATE, the parser extracts the function arguments from the aggr_args token using extractAggrArgTypes(). The ALTER EXTENSION ADD/DROP AGGREGATE and SECURITY LIMIT syntax was added as part of the PostgreSQL 9.0-to-9.2 merge, so add a call to extract the function arguments.
Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
-
Submitted by Daniel Gustafsson
-
Submitted by Daniel Gustafsson

In the recent refactoring which separated PostgreSQL and Greenplum functionality in pg_upgrade, these header comments were overlooked and fell out of sync.
-
Submitted by Tang Pengzhou

* Simplify the AssignGangs() logic for init plans

Previously, AssignGangs() assigned gangs for both the main plan and the init plans in one shot. Because init plans and the main plan are executed sequentially, gangs can be reused between them; the function AccumSliceReq() was designed for this. This process can be simplified: since we already know the root slice index will be adjusted according to the init plan id, init plans only need to assign their own slices.

* Integrate gang management from the portal into the dispatcher

Previously, gangs were managed by the portal: freeGangsForPortal() was used to clean up gang resources, and DTM-related commands, which need a gang to dispatch a command outside of a portal, used freeGangsForPortal() too. There might be multiple commands/plans/utilities executed within one portal; all of them relied on a dispatcher routine like CdbDispatchCommand / CdbDispatchPlan / CdbDispatchUtility... to dispatch. Gangs were created by each dispatcher routine, but were not recycled or destroyed when a routine finished, except for the primary writer gang; one defect of this is that gang resources could not be reused between dispatcher routines. GPDB already had an optimization for init plans: if a plan contained init plans, AssignGangs() was called before execution of any of them, walked the whole slice tree, and created the maximum gang that both the main plan and the init plans needed. This was doable because init plans and the main plan were executed sequentially, but it also made the AssignGangs() logic complex; meanwhile, reusing an unclean gang was not safe. Another confusing thing was that the gang and the dispatcher were managed separately, which caused inconsistent context: when a dispatcher state was destroyed, its gang was not recycled; when a gang was destroyed by the portal, the dispatcher state was still in use and might refer to the context of a destroyed gang.

As described above, this commit integrates gang management with the dispatcher: a dispatcher state is responsible for creating and tracking gangs as needed, and destroys them when the dispatcher state is destroyed.

* Handle the case when the primary writer gang has gone

When members of the primary writer gang are gone, the writer gang is destroyed immediately (primaryWriterGang is set to NULL) when a dispatcher routine (e.g. CdbDispatchCommand) finishes. So when dispatching a two-phase DTM/DTX-related command, the QD doesn't know the writer gang has gone, and it may get unexpected errors like 'savepoint not exist', 'subtransaction level not match', or 'temp file not exist'. Previously, primaryWriterGang was not reset when DTM/DTX commands started even if it was pointing to invalid segments, so those DTM/DTX commands were not actually sent to segments, and a normal error was reported on the QD like 'could not connect to segment: initialization of segworker'. So we need a way to inform the global transaction that its writer gang has been lost, so that when aborting the transaction, the QD can:
1. disconnect all reader gangs; this is useful to skip dispatching "ABORT_NO_PREPARE";
2. reset the session and drop temp files, because the temp files on the segment are gone;
3. report an error when dispatching the "rollback savepoint" DTX, because the savepoint on the segment is gone;
4. report an error when dispatching the "abort subtransaction" DTX, because the subtransaction is rolled back when the writer segment is down.
-