- 26 Sep 2018, 8 commits
-
-
Committed by Joao Pereira
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Joao Pereira
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Joao Pereira
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Taylor Vesely
Create an extensions group and add gpcloud as part of it. The group will no longer be added as part of ICW; it now needs to be specifically added as a test section when calling gen_pipeline.py. Signed-off-by: David Kimura <dkimura@pivotal.io>
-
Committed by Joao Pereira
The all, install, and check-world targets are now recursively sent to gpcontrib, which executes them in all the extensions underneath it. Signed-off-by: David Kimura <dkimura@pivotal.io>
-
Committed by David Kimura
The file gpcheckcloud was in .gitignore; after the git mv, it started ignoring the folder. Added the full path of the binary and added back the files that were not checked in. Signed-off-by: Joao Pereira <jdealmeidapereira@pivotal.io>
-
Committed by Joao Pereira
Moved the googletest folder to gpcontrib/gpcloud/test. Signed-off-by: Taylor Vesely <tvesely@pivotal.io> Signed-off-by: David Kimura <dkimura@pivotal.io>
-
Committed by Taylor Vesely
Signed-off-by: Joao Pereira <jdealmeidapereira@pivotal.io>
-
- 25 Sep 2018, 10 commits
-
-
Committed by Adam Berlin
In GPDB, we only want an autovacuum worker to start once we know there is a database to vacuum. When we changed the default value of `autovacuum_start_daemon` from `true` to `false` for GPDB, we made AutoVacuumLauncherMain() immediately start an autovacuum worker from the launcher and exit, which is called 'emergency mode'. When 'emergency mode' is running, it is possible to continuously start autovacuum workers: within the worker, the PMSIGNAL_START_AUTOVAC_LAUNCHER signal is sent whenever a database is found that is old enough to be vacuumed, but we only autovacuum non-connectable databases (template0) in GPDB, and we do not have logic to filter out connectable databases in the autovacuum worker. This change allows the autovacuum launcher to do more up-front decision making about whether it should start an autovacuum worker, including GPDB-specific rules. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Paul Guo
create_unique_path() can be used to convert a semi join to an inner join. Previously, during the semi-join refactor in commit d4ce0921, creating a unique path was disabled for the case where duplicates might be on different QEs. In this patch we enable adding a Motion to unique-ify the path, but only if the unique method is not UNIQUE_PATH_NOOP. We don't create a unique path for that case because later, during plan creation, it is possible to create a Motion above this unique path whose subpath is a Motion; the unique path node would then be ignored and we would get a Motion plan node above a Motion plan node, and that is bad. We could improve that further, but not in this patch. Co-authored-by: Alexandra Wang <lewang@pivotal.io> Co-authored-by: Paul Guo <paulguo@gmail.com>
-
Committed by Daniel Gustafsson
The bkuprestore test was imported along with the source code during the initial open sourcing, but it has never been used and hasn't worked in a long time. Rather than trying to save this broken mess, let's remove it and start fresh with a pg_dump TAP test, which is a much better way to test backup/restore. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Dhanashree Kashid
-
Committed by Shivram Mani
The PXF client in gpdb uses PXF libraries from the apache hawq repo. These PXF libraries will continue to be developed in a new PXF repo, greenplum-db/pxf, which is in the process of being open sourced in the next few days. The PXF extension and gpdb-pxf client code will continue to remain in the gpdb repo. The following changes are included in this PR: transition from the old PXF namespace org.apache.hawq.pxf to org.greenplum.pxf (there is a separate PR in the PXF repo to address the package namespace refactor, greenplum-db/pxf#5), and doc updates to reflect the new PXF repo and the new package namespace.
-
Committed by Ashwin Agrawal
Regular fault injection doesn't work for mirrors. Hence, a fault injection mechanism using the SIGUSR2 signal coupled with an on-disk file was coded just for testing. This seems very hacky and intrusive, hence the plan is to get rid of it. Most of the tests using this framework were found not to be useful, as the majority of the code is upstream. Even if testing is needed, a better alternative will be explored.
-
Committed by Ashwin Agrawal
Most of the backup-block-related modifications for providing wal_consistency_checking were removed as part of the 9.3 merge, mainly to avoid merge conflicts. The masking functions are still used by the gp_replica_check tool to perform checking between primaries and mirrors, but the online version of checking during each replay of a record was let go. So, in this commit we clean up the remaining pieces which are not used. We will bring this back in properly working condition when we catch up to upstream.
-
Committed by Ashwin Agrawal
Removing the fault types which have no implementation, or have an implementation but don't seem usable. This helps keep only the working subset of faults. For example, the data corruption fault seems pretty useless; even if needed, it can easily be coded for a specific use case using the skip fault, instead of having a special one defined for it. The fault type "fault" is redundant with "error", hence removing it as well.
-
Committed by Ashwin Agrawal
-
Committed by Dhanashree Kashid
The following commits have been cherry-picked again: b1f543f3, b0359e69, a341621d. The contrib/dblink tests were failing with ORCA after the above commits. The issue has now been fixed in ORCA v3.1.0, hence we re-enable these commits and bump the ORCA version.
-
- 24 Sep 2018, 3 commits
-
-
Committed by Heikki Linnakangas
I couldn't find an easy way to make this assertion work with the "flattened" range table in 9.3. The information needed for it is zapped away in add_rte_to_flat_rtable(). I think we can live without this assertion.
-
Committed by Heikki Linnakangas
Updating a distribution key column is performed as a "split update", i.e. separate DELETE and INSERT operations, which may happen on different nodes. In the case of RETURNING, the DELETE operation was also returning a row, and it was also incorrectly counted in the row count returned to the client in the command tag (e.g. "UPDATE 2"). Fix, and add a regression test. Fixes https://github.com/greenplum-db/gpdb/issues/5839
-
Committed by Heikki Linnakangas
The reason we needed the pq_getmessage() call marked with the FIXME comment was that we were missing the pq_getmessage() call from ProcessStandbyMessage() that the corresponding upstream version, at the point we're caught up to in the merge, had. I believe it was missing from ProcessStandbyMessage() because we had earlier backported upstream commit cd19848bd55. That commit removed the pq_getmessage() call from ProcessStandbyMessage() and added one in ProcessRepliesIfAny() instead. Clarify this by changing the code to match upstream commit cd19848bd55. (Except that we don't have pq_startmsgread() yet; that will arrive when we merge the rest of commit cd19848bd55.)
-
- 23 Sep 2018, 3 commits
-
-
Committed by Daniel Gustafsson
getgpsegmentCount() was defined in both cdbvars.h and cdbutil.h. While this avoids another header include in some cases, getgpsegmentCount() is not a variable and the correct location is cdbutil.h. Remove the prototype from cdbvars.h and update includes as required. Also fix the function comment to match reality, with minor tweaking of the debug elog() performed. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Daniel Gustafsson
There is already an assertion in getgpsegmentCount() testing the count to be > 0 (and 0 can only be returned in utility mode, which still holds this assertion always true). Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Daniel Gustafsson
Running -j8 for all make invocations regardless of target is a good way to cause spectacular failures in installcheck-world testing, so remove the setting. Also remove the link to the workstation-setup repo, as it's not helpful for non-Pivotal hackers. Reviewed-by: Jacob Champion <pchampion@pivotal.io>
-
- 22 Sep 2018, 6 commits
-
-
Committed by Jesse Zhang
Commit 825ca1e3 didn't seem to work well when we hooked up ORCA's memory system to memory accounting: we are tripping multiple asserts in regression tests. The regression test failures seem to suggest we are double-freeing somewhere (or accounting incorrectly). Reverting for now to get master back to green. This reverts commit 825ca1e3.
-
Committed by Taylor Vesely
The memory accounting system generates a new memory account for every execution node initialized in ExecInitNode. The addresses of these memory accounts are stored in the shortLivingMemoryAccountArray. If the memory allocated for shortLivingMemoryAccountArray is full, we repalloc the array with double the number of available entries. After creating approximately 67000000 memory accounts, it will need to allocate more than 1GB of memory to increase the array size, and will throw an ERROR, canceling the running query.

PL/pgSQL and SQL functions create new executors/plan nodes that must be tracked by the memory accounting system. This level of detail is not necessary for tracking memory leaks, and creating a separate memory account for every executor would use a large amount of memory just to track these accounts. So instead of tracking millions of individual memory accounts, we consolidate any child executor account into a special 'X_NestedExecutor' account. If explain_memory_verbosity is set to 'detailed' or below, all child executors are consolidated into this account. If more detail is needed for debugging, set explain_memory_verbosity to 'debug', where, as was the previous behavior, every executor is assigned its own MemoryAccountId.

Originally we tried to remove nested execution accounts after they finish executing, but rolling those accounts over into an 'X_NestedExecutor' account was impracticable to accomplish without the possibility of a future regression. If any accounts created between nested executors are not rolled over to an 'X_NestedExecutor' account, the record of which accounts were rolled over can grow in the same way that the shortLivingMemoryAccountArray grows today, and would also become too large to reasonably fit in memory. And iterating through the SharedHeaders every time we finish a nested executor is not likely to be very performant.

While we were at it, convert some of the convenience macros dealing with memory accounting for executor/planner nodes into functions, and move them out of the memory accounting header files into the sole callers' compilation units. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io> Co-authored-by: Ekta Khanna <ekhanna@pivotal.io> Co-authored-by: Adam Berlin <aberlin@pivotal.io> Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io> Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Taylor Vesely
Functions using SQL and PL/pgSQL will plan and execute arbitrary SQL inside a running query. The first time we initialize a plan for an SQL block, the memory accounting system creates a new memory account for each executor/node. In the case that we are executing a cached plan (i.e. plancache.c), the memory accounts will have already been assigned in a previous execution of the plan. As a result, when explain_memory_verbosity is set to 'detail', it is not clear which memory account corresponds to which executor. Instead, move the memoryAccountId into PlanState/QueryDesc, which ensures that every time we initialize an executor, it is assigned a unique memoryAccountId. Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Heikki Linnakangas
The FIXME was added to GPDB in commit f86622d9, which backported the local cache of resource owners attached to LOCALLOCK. I think the comment was added because, in the upstream commit that added the cache, upstream didn't have the check guarding the pfree() yet. It was added later in upstream too, in commit 7e6e3bdd3c, and that had already been backported to GPDB. So it's all right: the guard on the pfree() is a good thing to have, and there's nothing further to do here.
-
Committed by Heikki Linnakangas
We had changed this in GPDB to print fewer parens. That's fine and dandy, but it hardly seems worth carrying a diff vs upstream for this. Which format is better is a matter of taste: the extra parens make some expressions clearer, but OTOH it's unnecessarily verbose for simple expressions. Let's follow upstream on this. These changes were made to GPDB back in 2006, as part of backporting EXPLAIN-related patches from PostgreSQL 8.2, but I didn't see any explanation for this particular change in output in that commit message. It's nice to match upstream, to make merging easier. However, this won't make much difference to that: almost all EXPLAIN plans in the regression tests differ from upstream anyway, because GPDB needs Motion nodes for most queries. But every little helps.
-
Committed by Heikki Linnakangas
I don't understand what all this was about, but people have compiled GPDB successfully after the merge commit, where this was commented out, so apparently it's not needed.
-
- 21 Sep 2018, 10 commits
-
-
Committed by Heikki Linnakangas
They were all treated the same, with the SeqScan code being duplicated for AppendOnlyScans and AOCSScans. That is a merge hazard: if some code is changed for SeqScans, we would have to remember to manually update the other copies. Small differences in the code had already crept in, although given that everything worked, I guess they had no effect, or only a small effect on the computed costs. To avoid the duplication, use SeqScan for all of them. Also get rid of TableScan as a separate node type, and have the ORCA translator also create SeqScans. The executor for the SeqScan node can handle heap, AO and AOCS tables, because we're not actually using the upstream SeqScan code for it: we're using the GPDB code in nodeTableScan.c, with a TableScanState rather than a SeqScanState as the executor node. That's how it worked before this patch already; what this patch changes is that we now use SeqScan *before* the executor phase, instead of SeqScan/AppendOnlyScan/AOCSScan/TableScan. To avoid having to change all the expected outputs for tests that use EXPLAIN, add code to still print the SeqScan as "Seq Scan", "Table Scan", "Append-only Scan" or "Append-only Columnar Scan", depending on whether the plan was generated by ORCA and what kind of table it is.
-
Committed by Heikki Linnakangas
As noted in the FIXME, having two copies of the function is bad. It's easy to avoid the duplication if we just put it in xlogdesc.c, so that it's available to xlog_desc() in client programs, too.
-
Committed by Daniel Gustafsson
Fixes a compiler warning about an unused variable left over from the 9.3 merge. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Alvaro Herrera
Clang 3.3 correctly complains that a variable of type enum MultiXactStatus cannot hold a value of -1, which makes sense. Change the declared type of the variable to int instead, and apply casting as necessary to avoid the warning. Per notice from Andres Freund
-
Committed by Heikki Linnakangas
Merge with PostgreSQL, up to the point where the REL9_3_STABLE branch was created and 9.4 development started on the PostgreSQL master branch. That is almost up to 9.3beta2. Notable upstream changes, from a GPDB point of view:

* LATERAL support. Mostly works in GPDB now, although performance might not be very good. LATERAL subqueries, except for degenerate cases that can be made non-LATERAL during optimization, typically use nested loop joins. Unless the data distribution is the same on both sides of the join, GPDB needs to add Motion nodes, and cannot push down the outer query parameter to the inner side through the Motion. That is the same problem we have with SubPlans and nested loop joins in general, but it happens frequently with LATERAL. Also, there are a couple of cases, covered by the upstream regression tests, where the planner currently throws an error. They have been disabled and marked with GPDB_93_MERGE_FIXME comments, and will need to be investigated later. Also, no ORCA support for LATERAL yet.

* Materialized views. They have not been made to work in GPDB yet. CREATE MATERIALIZED VIEW works, but REFRESH MATERIALIZED VIEW does not. The 'matviews' test has been temporarily disabled until that's fixed. There is a GPDB_93_MERGE_FIXME comment about this too.

* Support for background worker processes. Nothing special was done about them in the merge, but we could now make use of them for all the various GPDB-specific background processes, like the FTS prober and gpmon processes.

* Support for writable foreign tables was introduced. I believe foreign tables now have all the same functionality, at a high level, as external tables, so we could start merging the two concepts. But this merge commit doesn't do anything about that yet; external tables and foreign tables are still two entirely different beasts.

* A lot of expected output churn, thanks to a few upstream changes. We no longer print a NOTICE on implicitly created indexes and sequences (commit d7c73484), and the rules on when table aliases are printed were changed (commit 11e13185).

* Caught up to a bunch of features that we had already backported from 9.3: data page checksums, numeric datatype speedups, COPY FROM/TO PROGRAM, and pg_upgrade as a whole.

A couple of other noteworthy changes:

* The contrib/xlogdump utility is removed, in favor of the upstream contrib/pg_xlogdump utility.

* Removed the "idle session timeout" hook. The current implementation was badly broken by upstream refactoring of timeout handling (commit f34c68f0). We'll probably need to re-introduce it in some form, but it will look quite different, to fit more nicely with the new timeout APIs.

Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io> Co-authored-by: Asim R P <apraveen@pivotal.io> Co-authored-by: David Kimura <dkimura@pivotal.io> Co-authored-by: Ekta Khanna <ekhanna@pivotal.io> Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Co-authored-by: Jacob Champion <pchampion@pivotal.io> Co-authored-by: Jinbao Chen <jinchen@pivotal.io> Co-authored-by: Kalen Krempely <kkrempely@pivotal.io> Co-authored-by: Paul Guo <paulguo@gmail.com> Co-authored-by: Richard Guo <guofenglinux@gmail.com> Co-authored-by: Shaoqi Bai <sbai@pivotal.io>
-
Committed by Adam Lee

```
$ wget http://ftp.jaist.ac.jp/pub/apache/apr/${APR}.tar.gz
--2018-09-21 07:16:24--  http://ftp.jaist.ac.jp/pub/apache/apr/apr-1.6.3.tar.gz
Resolving ftp.jaist.ac.jp (ftp.jaist.ac.jp)... 150.65.7.130, 2001:df0:2ed:feed::feed
Connecting to ftp.jaist.ac.jp (ftp.jaist.ac.jp)|150.65.7.130|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2018-09-21 07:16:25 ERROR 404: Not Found.
```
-
Committed by Adam Lee
It happens if the COPY command errors out before assigning dispatcherState. Initialize the dispatcherState as NULL to fix it, and use palloc0() to avoid future new-member issues. 5X has no such problem.

```
(gdb) c
Continuing.
Detaching after fork from child process 25843.

Program received signal SIGSEGV, Segmentation fault.
0x0000000000aa04dd in getCdbCopyPrimaryGang (c=0x23d4150) at cdbcopy.c:44
44          return (Gang *)linitial(c->dispatcherState->allocatedGangs);
(gdb) bt
#0  0x0000000000aa04dd in getCdbCopyPrimaryGang (c=0x23d4150) at cdbcopy.c:44
#1  0x0000000000aa12d8 in cdbCopyEndAndFetchRejectNum (c=0x23d4150, total_rows_completed=0x0, abort_msg=0xd0c8f8 "aborting COPY in QE due to error in QD") at cdbcopy.c:642
#...
(gdb) p c->dispatcherState
$1 = (struct CdbDispatcherState *) 0x100000000
```
-
Committed by Heikki Linnakangas
In aligned format, there is an end-of-line marker at the end of each line, and its position depends on the longest line. If the width changes, all lines need to be adjusted for the moved end-of-line marker. While testing this, we found out that 'atmsort' had been doing bad things to the YAML output before:

```
-- Check Explain YAML output
EXPLAIN (FORMAT YAML) SELECT * from boxes LEFT JOIN apples ON apples.id = boxes.apple_id
LEFT JOIN box_locations ON box_locations.id = boxes.location_id;
 QUERY PLAN
 ___________
 { 'id' => 1, 'short' => '- Plan: +' }
GP_IGNORE:(1 row)
```

In other words, we were not comparing the output at all, except for that one line that says "Plan:". The access plan for one of the queries had changed, from a Left Join to a Right Join, and we still had the old plan memorized in the expected output, but the test was passing because atmsort hid the issue. This commit fixes the expected output for the new plan.
-
Committed by Heikki Linnakangas
When creating an ORCA plan for an "INSERT ... (<col list>) VALUES (<values>)" statement, the ORCA translator performed NULL checks for any columns not listed in the column list. Nothing wrong with that per se, but we needed to keep the error messages in sync, or we'd get regression test failures caused by different messages. To simplify that, remove the check from the ORCA translator and rely on the execution-time check. We bumped into this while working on the 9.3 merge, because 9.3 added a DETAIL to the error message in the executor:

```
postgres=# create table notnulls (a text NOT NULL, b text NOT NULL);
CREATE TABLE
postgres=# insert into notnulls (a) values ('x');
ERROR:  null value in column "b" violates not-null constraint
postgres=# insert into notnulls (a,b) values ('x', NULL);
ERROR:  null value in column "b" violates not-null constraint  (seg2 127.0.0.1:40002 pid=26547)
DETAIL:  Failing row contains (x, null).
```

Doing this now will avoid that inconsistency in the merge. One little difference is that EXPLAIN on an insert like the above now works, and you only get the error when you try to execute it. Before, with ORCA, even EXPLAIN would throw the error.
-
Committed by Huiliang.liu
The gpfdist --sslclean option was a platform-specific patch for the Solaris system: gpfdist delays cleaning the SSL buffer for a number of seconds configured by the --sslclean option. GPDB6 doesn't support Solaris now, and we don't think that solution has benefit for other platforms, so we remove the --sslclean option. Verified this patch manually; the default test cases cover this change.
-