- 08 Sep 2018, 3 commits
-
-
Committed by Goutam Tadi
[#159742200]
Co-authored-by: Xin Zhang <xzhang@pivotal.io>
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
Co-authored-by: Fei Yang <fyang@pivotal.io>
-
Committed by Xin Zhang
Add behave test for FQDN_HBA flag support
Co-authored-by: Xin Zhang <xzhang@pivotal.io>
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
-
Committed by Goutam Tadi
Behave tests for gpinitsystem with fqdn
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
- 07 Sep 2018, 10 commits
-
-
Committed by Paul Guo
-
Committed by Richard Guo
The only consumer of RestrictInfo->ojscope_relids is gen_implied_qual(), to tell if RestrictInfo is an outer join clause. RestrictInfo->outer_relids can be used to do the same job. So remove ojscope_relids from RestrictInfo and from arguments of make_restrictinfo(). PostgreSQL does not have ojscope_relids for RestrictInfo.
-
Committed by Paul Guo
pg_upgrade: Fix dump file diff caused by a default value of type bit varying in a column of a relation. (#4823)

Here is the repro case:

```
psql -X -d regression -c "CREATE TABLE t111 ( a40 bit varying(5) DEFAULT '1');"
psql -X -d regression -c "CREATE TABLE t222 ( a40 bit varying(5) DEFAULT B'1');"
```

After pg_upgrade testing, we see a failure, and the diff of the dump SQL files looks like this:

```
 CREATE TABLE t111 (
-    a40 bit varying(5) DEFAULT B'1'::bit varying
+    a40 bit varying(5) DEFAULT (B'1'::"bit")::bit varying
 ) DISTRIBUTED BY (a40);
```

There is no diff for table t222. From a functionality perspective, the difference seems to mean nothing, but it is annoying for CI. This issue was found when testing pg_upgrade against the regression database generated by running the regression tests, so we do not need to add a test case for it. The issue also exists on latest PostgreSQL; we submitted a patch upstream and it is under review. We'll check it into GPDB in advance.

Co-authored-by: Richard Guo <riguo@pivotal.io>
Co-authored-by: Paul Guo <pguo@pivotal.io>
-
Committed by Heikki Linnakangas
We can manage without it. Convert them into human-oriented comments, and rely on the usual "compare with expected output" method for all of these tests. Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/lrJFgQR-KhI/KFTnrJj2BQAJ
-
Committed by BaiShaoqi
Storage in the listTables function used to show nothing when pg_catalog.pg_class's relstorage is 'p' or 'f'. Add a comment for RELSTORAGE_PARQUET in pg_class.h.

Review comments from Heikki Linnakangas, Daniel Gustafsson.
-
Committed by Richard Guo
Previously, for the query below, we disabled ANY_SUBLINK pullup to work around an assertion failure caused by left_ec/right_ec not being set.

```
select * from A where exists
  (select * from B where A.i in
    (select C.i from C where C.i = B.i));
```

This commit sets left_ec/right_ec properly in gen_implied_qual() and re-enables the ANY_SUBLINK pullup for queries like the above.

Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Richard Guo <guofenglinux@gmail.com>
-
Committed by Jimmy Yih
A `CREATE TABLE AS` without a `DISTRIBUTED BY` clause will create a randomly distributed table when optimized by ORCA. The plan for the CTAS will have a redistribute motion (random) between the scan and the insert. Depending on your data, this style of plan could be more even, equally even, or less even than a hash-distributed table (the kind of distribution usually assumed by the planner). This commit changes the test to explicitly distribute by the same column that the planner would guess.
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
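A minimal sketch of the distinction described above; the table and column names are hypothetical:

```
-- Under ORCA, with no DISTRIBUTED BY clause the CTAS result is randomly
-- distributed, and the plan adds a random redistribute motion:
CREATE TABLE ctas_random AS SELECT * FROM src;

-- Explicitly distributing by a column yields the hash distribution the
-- planner would otherwise have guessed:
CREATE TABLE ctas_hash AS SELECT * FROM src DISTRIBUTED BY (id);
```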
-
Committed by Daniel Gustafsson
-
Committed by Heikki Linnakangas
The upstream pg_cancel_backend() function should work fine on GPDB. It's not exactly the same: in gp_cancel_query(), you would pass the session ID and command ID as arguments, while pg_cancel_backend() takes a PID. But it seems just as good from a usability point of view. Let's avoid the duplicate code, and remove gp_cancel_query(). Also remove the gp_cancel_query_print_log and gp_cancel_query_delay_time GUCs. They were not directly related to gp_cancel_query(); they would have an effect on cancellations caused by pg_cancel_backend() or statement_timeout, too. But they were marked as "developer options", and they printed the session and command ID, so I think they were meant to be used with gp_cancel_query(). I don't think anyone uses them, though, so let's just remove them, rather than change them to print PIDs.
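A sketch of the replacement usage; the username filter is hypothetical, and the pg_stat_activity column names assume the post-9.2 catalog:

```
-- Instead of gp_cancel_query(session_id, command_id), cancel by PID:
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE usename = 'report_user'
  AND pid <> pg_backend_pid();
```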
-
Committed by Jesse Zhang
My compiler (Clang-8) is complaining about this, and I agree:

```
cdbgroup.c:4193:24: warning: expression which evaluates to zero treated as a null pointer constant of type 'List *' (aka 'struct List *') [-Wnon-literal-null-conversion]
        fref->aggdistinct = false; /* handled in preliminary aggregation */
                            ^~~~~
../../../src/include/c.h:195:15: note: expanded from macro 'false'
#define false ((bool) 0)
              ^~~~~~~~~~
```

Commit 35e60338 introduced this, most likely accidentally. Come to think of it, Postgres (until 11) isn't even using a "real boolean": we just `typedef char bool`, which is why we are *not* getting more complaints from more compilers at the default warning level.
-
- 06 Sep 2018, 16 commits
-
-
Committed by Taylor Vesely
In GPDB, AGGREGATE functions can be either 'ordered' or 'hypothetical', and as a result the aggr_args token carries more information than in upstream. Except for CREATE [ORDERED] AGGREGATE, the parser extracts the function arguments from the aggr_args token using extractAggrArgTypes(). The ALTER EXTENSION ADD/DROP AGGREGATE and SECURITY LIMIT syntax was added as part of the PostgreSQL 9.0-to-9.2 merge, so add a call to extract the function arguments.
Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
In the recent refactoring which separated PostgreSQL and Greenplum functionality in pg_upgrade, these header comments were overlooked and fell out of sync.
-
Committed by Tang Pengzhou
* Simplify the AssignGangs() logic for init plans

Previously, AssignGangs() assigned gangs for both the main plan and init plans in one shot. Because init plans and the main plan are executed sequentially, gangs can be reused between them; the function AccumSliceReq() was designed for this. This process can be simplified: since we already know the root slice index will be adjusted to the corresponding init plan id, init plans only need to assign their own slices.

* Integrate gang management from portal into dispatcher

Previously, gangs were managed by the portal; freeGangsForPortal() was used to clean up gang resources. DTM-related commands also needed a gang to dispatch commands outside of a portal, and used freeGangsForPortal() too. There might be multiple commands/plans/utilities executed within one portal, all of which relied on a dispatcher routine like CdbDispatchCommand / CdbDispatchPlan / CdbDispatchUtility to dispatch. Gangs were created by each dispatcher routine, but were not recycled or destroyed when a routine finished, except for the primary writer gang. One defect of this is that gang resources could not be reused between dispatcher routines.

GPDB already had an optimization for init plans: if a plan contained init plans, AssignGangs() was called before execution of any of them; it went through the whole slice tree and created the maximum gang that both the main plan and init plans needed. This was doable because init plans and the main plan were executed sequentially, but it also made the AssignGangs() logic complex; meanwhile, reusing an unclean gang was not safe. Another confusing thing was that gangs and the dispatcher were managed separately, which caused context inconsistencies: when a dispatcher state was destroyed, its gang was not recycled; when a gang was destroyed by the portal, the dispatcher state was still in use and might refer to the context of a destroyed gang.

As described above, this commit integrates gang management with the dispatcher: a dispatcher state is responsible for creating and tracking gangs as needed, and destroys them when the dispatcher state is destroyed.

* Handle the case when the primary writer gang has gone

When members of the primary writer gang are gone, the writer gang is destroyed immediately (primaryWriterGang is set to NULL) when a dispatcher routine (e.g. CdbDispatchCommand) finishes. So when dispatching a two-phase-DTM/DTX-related command, the QD doesn't know the writer gang has gone, and may get unexpected errors like 'savepoint does not exist', 'subtransaction level does not match', or 'temp file does not exist'. Previously, primaryWriterGang was not reset when DTM/DTX commands started, even if it pointed to invalid segments, so those DTM/DTX commands were not actually sent to segments, and a normal error was reported on the QD, looking like 'could not connect to segment: initialization of segworker'. So we need a way to inform the global transaction that its writer gang has been lost, so that when aborting the transaction, the QD can:
1. disconnect all reader gangs; this is useful to skip dispatching "ABORT_NO_PREPARE"
2. reset the session and drop temp files, because the temp files on the segment are gone
3. report an error when dispatching the "rollback savepoint" DTX, because the savepoint on the segment is gone
4. report an error when dispatching the "abort subtransaction" DTX, because the subtransaction is rolled back when the writer segment is down
-
Committed by BaiShaoqi
Do not show usage of \dr, since there is no implementation of the command. Also modify an argument of PageOutput() to reflect the real number of lines in the help message.
-
Committed by Richard Guo
Without this FIXME, the current implementation for ALL SUBLINK will generate a problematic query tree. While further investigation is still needed, this is Greenplum-specific code and will not block merging with upstream Postgres.
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Richard Guo <guofenglinux@gmail.com>
-
Committed by Abhijit Subramanya
The space in between '--' and 'start_equiv' was causing gpdiff.pl to fail even though there was no actual diff in the output. Fix the space so that gpdiff.pl doesn't fail incorrectly.
Co-authored-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Heikki Linnakangas
It's been unused for years. It was broken by commit d334b016, which renamed functions in src/backend/gpopt, and while we could easily fix it, let's rather just remove it. This also allows us to remove a lot of supporting code in src/backend/gpopt/. Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/43391e5Pl1A/Ks9ccs_JBQAJ
-
Committed by Heikki Linnakangas
It used to always say "COPY 0", instead of the number of rows copied. This source line was added in PostgreSQL 9.0 (commit 8ddc05fb), but it was missed in the merge. Add a test case to check the command tags of different variants of COPY, including this one.
-
Committed by Heikki Linnakangas
Much of the time in qp_functions_in_* tests is spent on waiting, after an error occurred. There's a one second delay after an error happens in a segment, until the dispatcher reacts to it. We can therefore reduce the overall runtime of the regression suite by running them in parallel with other tests that take an even longer time, hiding the waits behind the other tests that do real work. This should shave off 10-20 seconds from the overall regression test suite time. Every little helps.
-
Committed by kaknikhil
MADlib gppkg job does not care about the gcc version anymore and always uses gcc 6.2.0 to compile madlib.
-
Committed by Heikki Linnakangas
makeGpPolicy() uses palloc(), which can throw an error, so wrappers are needed. Also add a comment to gpdb:IsAbortRequested to explain why it doesn't need wrappers. I suspect the missing wrappers in MakeGpPolicy() happened because it was copy-pasted from IsAbortRequested.
-
Committed by Omer Arap
This commit adds more log messages and updates existing log messages to increase logging verbosity.
Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
-
Committed by Ashwin Agrawal
CI can get into a weird state in which recovery takes a long time, affecting mirror promotion time. If the mirror takes a long time to promote, the master can panic if it can't complete commit-prepared in time. Hence, make the test more resilient by bumping the number of retries to 10. Also, to make sure gprecoverseg doesn't fail due to "can't start transaction in BEGIN", add GUCs to raise the retry count when the command fails because the segment is recovering.
-
Committed by Omer Arap
-
Committed by Omer Arap
Currently, `ANALYZE ROOTPARTITION` merges statistics when `optimizer_analyze_root_partition` is set to on and NOT when it is off. This is incorrect behavior: it should merge stats if possible regardless of the GUC. This commit fixes this issue.
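The command in question, as a minimal sketch; the table name is hypothetical:

```
-- After this fix, leaf-partition stats are merged into the root
-- partition's stats even with the GUC turned off:
SET optimizer_analyze_root_partition = off;
ANALYZE ROOTPARTITION sales;
```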
-
- 05 Sep 2018, 11 commits
-
-
Committed by Heikki Linnakangas
Leftover from filespaces.
-
Committed by Daniel Gustafsson
FORCE QUOTE * is a shorthand for specifying all columns of the relation in CSV COPY. This was added as part of the PostgreSQL 9.0 merge but was never added to the documentation.
-
Committed by Daniel Gustafsson
A few style nits fixed while hacking around in this file; no logical changes introduced.
* Concatenate broken-up error messages to align with current upstream convention (makes the messages easier to grep for when debugging).
* Fix alignment in error messages, and rejoin some previously broken lines that don't need to be broken.
* Move variable declarations to the top of the function.
-
Committed by Daniel Gustafsson
FORCE QUOTE * was added in PostgreSQL 9.0 as a shorthand for FORCE QUOTE <column_1 .. column_n> in the COPY command, where all columns of a relation are added for forced quoting. External tables use COPY and COPY options under the hood, so they too should support '*' to quote all columns. This resolves a FIXME added during the 9.0 merge.
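A sketch of the two forms described above; the table name, columns, and gpfdist location are hypothetical:

```
-- COPY form, from PostgreSQL 9.0: force-quote every column in CSV output.
COPY quotes_demo TO '/tmp/quotes_demo.csv' CSV FORCE QUOTE *;

-- External tables use COPY options under the hood, so '*' should now
-- work in the format options as well:
CREATE WRITABLE EXTERNAL TABLE quotes_demo_ext (LIKE quotes_demo)
    LOCATION ('gpfdist://etlhost:8081/quotes_demo.csv')
    FORMAT 'CSV' (FORCE QUOTE *);
```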
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
CdbDispatchDtxProtocolCommand() is defined to return struct pg_result **, but returned false in an error path, relying on the zero to imply a NULL pointer. The caller of this function has another error check and wouldn't actually read the returned value, but it should still be doing the right thing. Fix by returning NULL instead. This resolves the following clang compiler warning:

```
cdbdisp_dtx.c:157:10: warning: expression which evaluates to zero treated as a null pointer constant of type 'struct pg_result **' [-Wnon-literal-null-conversion]
        return false;
               ^~~~~
../../../../src/include/c.h:195:15: note: expanded from macro 'false'
#define false ((bool) 0)
              ^~~~~~~~~~
```
-
Committed by Daniel Gustafsson
The FIXME was added during the PostgreSQL 9.0 merge as a note to self to double-check the logic for GSS password expiry times. Verified that the code is indeed in sync, and removed the FIXME leaving most of the comment in place as a reminder to keep the two functions synchronized (consolidating them would be neat but it would also introduce risk in a very sensitive codepath so for now I think keeping them separate is the right thing to do). Also cleaned up a few style issues and moved to the same syscache convenience macro that the password check uses.
-
Committed by Heikki Linnakangas
The handling of DROP commands for various objects was refactored in PostgreSQL 9.2 to reduce code duplication (commit 82a4a777). This commit changes DROP PROTOCOL command handling to follow that model. No user-visible changes.
-
Committed by Richard Guo
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Richard Guo <guofenglinux@gmail.com>
-
Committed by Heikki Linnakangas
These files were full of conflict markers, which were not resolved in the 9.2 merge. No-one builds GPDB on Windows, and if someone wants to try that (which I wouldn't recommend), I'm pretty sure it would be better to start from scratch, rather than try to keep whatever useful changes vs upstream there might have been here. So let's just replace this all with the upstream version. It is more sensible to build just libpq, and other client tools, on Windows. For that, these scripts might just work as they are after this commit. I think all the GPDB changes we had here were backend-related. But again, if not, I think you're better off starting from scratch, rather than trying to clean up the mess here. After this commit, src/tools/msvc is identical to the upstream commit id that we're currently merged up to.
-
Committed by Heikki Linnakangas
This is similar to the cases fixed in commit a031b9c0, but was hidden before because the special DQA-planning was disabled altogether. Fixes https://github.com/greenplum-db/gpdb/issues/5670
-