- 21 Nov 2017, 4 commits
-
-
Committed by Heikki Linnakangas
For clarity, and to make merging easier. The code to manage the hash table of "pending resync EOFs" for append-only tables is moved to smgr_ao.c. One notable change here is that the pendingDeletesPerformed flag is removed. It was used to track whether there are any pending deletes or any pending AO table resyncs, but we might as well check the pending delete list and the pending syncs hash table directly; it's hardly any slower than checking a separate boolean. There are still plenty of GPDB changes in smgr.c, but this is a good step forward.
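To make that tradeoff concrete, here is a minimal sketch in plain C (the names and types are stand-ins, not the real smgr.c structures): with the boolean gone, "is anything pending?" is answered by inspecting the structures directly.

    #include <stdbool.h>
    #include <stddef.h>

    /* Stand-ins: upstream keeps pending unlinks in a linked list, and GPDB
     * keeps pending AO resyncs in a hash table (a counter stands in here). */
    typedef struct PendingDelete { struct PendingDelete *next; } PendingDelete;

    static PendingDelete *pendingDeletes = NULL;
    static long pendingSyncEntries = 0;

    /* Answers what the removed pendingDeletesPerformed flag used to track. */
    static bool
    have_pending_work(void)
    {
        return pendingDeletes != NULL || pendingSyncEntries > 0;
    }

An empty list and an empty hash table are both O(1) checks, which is why dropping the flag costs essentially nothing.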
-
Committed by Heikki Linnakangas
These were added in GPDB a long time ago, probably before gpdiff.pl was introduced to mask row-order differences in regression test output. Now that gpdiff.pl can do that, these are unnecessary. Revert to match the upstream more closely. This includes updates to the 'rules' and 'inherit' tests, although they are disabled and still don't pass after these changes.
-
Committed by Jesse Zhang
Fix it for realz. :(
-
Committed by Jesse Zhang
-
- 20 Nov 2017, 3 commits
-
-
Committed by Karen Huddleston
-
Committed by C.J. Jameson
These steps are necessary because: gpssh assumes that a public key is already in place if there's a private key, but when we create the cluster with CCP we don't propagate the public keys, so this is a workaround for that. Also, when we run gpseginstall, it creates a tarball of the binary ($GPHOME) in the same parent directory; since we have GPDB installed at /usr/local/greenplum-db, it tries to create the tarball at /usr/local, which means we need user permissions on that directory.

Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Adam Lee
The autocompress feature was dropped by mistake; this commit restores it and adds tests to cover all configuration options.
-
- 19 Nov 2017, 1 commit
-
-
Committed by Heikki Linnakangas
A long time ago, when updating our psql version to 8.3 (or something higher), we decided to keep the old single-line style when displaying access privileges, to avoid having to update regression tests. It's time to move forward, update the tests, and use the nicer 8.3 style for displaying access privileges.

Also, \d on a view no longer prints the View Definition; you need the verbose \d+ option for that. (I'm not a big fan of that change myself: when I want to look at a view, I'm almost always interested in the View Definition. But let's not second-guess decisions made almost 10 years ago in the upstream.)

Note: psql still defaults to the "old-ascii" style when printing multi-line fields. The new style was introduced only later, in 9.0, so to avoid changing all the expected output files, we should stick to the old style until we reach that point in the merge. This commit only changes the style for Access privileges, which is different from the multi-line style.
-
- 18 Nov 2017, 2 commits
-
-
Committed by Xin Zhang
-
Committed by Marbin Tan
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
- 17 Nov 2017, 6 commits
-
-
Committed by Heikki Linnakangas
According to git history, this was added a very long time ago, before GPDB was open sourced, because it was needed by some migration tool back then. The code that used it has since been removed.
-
Committed by Taylor Vesely
Initial work was done in collaboration with Kanaiya Kariya @kanhaiya7 and Jiangtian Nie @zhadan01.

Signed-off-by: Xin Zhang <xzhang@pivotal.io>
Signed-off-by: Asim Praveen <apraveen@pivotal.io>
Signed-off-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Ashwin Agrawal
-
Committed by Goutam Tadi
Signed-off-by: Goutam Tadi <gtadi@pivotal.io>
-
Committed by Mel Kiyama
* docs: gpbackup - add support for single-level partitioned table with external table leaf partition.
* docs: gpbackup edits based on review comments.
  - changed gprestore default --jobs to 1.
  - removed gprestore comment about faster restore of indexes, to match the admin guide.
* docs: gpbackup - fixed typo.
-
Committed by Bhuvnesh Chaudhary
- Add an option to pass additional configure flags, e.g. --disable-gpcloud for building gpdb in the ORCA pipeline.
- Merge test_gpdb.py functionality into build_gpdb.py with a parameter action=build or action=test. Both files had common steps, which made them difficult to keep in sync, so it is better to have them as different actions in one script.
- Add an optional gcc-env-file argument to build_gpdb.py for sourcing a gcc environment file. It is not used currently, but while working on this issue we used different images, which made us realize that moving across images makes it cumbersome to source the proper gcc environment, so it is better to have the option.
- Install ORCA in the same location as the GPDB installation, controlled by the --install-orca-in-gpdb-location parameter. In the ORCA CI we need to build ORCA from GitHub as opposed to building with conan, and test_gpdb.py expects gpdb packaged with all its dependencies.
- Minor refactor of GpBuild.py.

Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
-
- 16 Nov 2017, 24 commits
-
-
Committed by Daniel Gustafsson
This was lost in commit 152d1223, which has the corresponding update for the postgres planner output.
-
Committed by Daniel Gustafsson
Avoid the extra string copy by using the StringInfo buffer, and simplify the code a bit now that HASH partitions have been removed.
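As a rough sketch of the pattern (illustrative function and names, not the commit's actual code), appending straight into a StringInfo and returning its buffer avoids the intermediate copy; initStringInfo and appendStringInfoString are the stock PostgreSQL APIs.

    #include "postgres.h"
    #include "lib/stringinfo.h"

    /* Illustrative only: build the text directly in the StringInfo and hand
     * back its palloc'd buffer, instead of assembling a temporary string and
     * copying it into a second allocation. */
    static char *
    build_partition_text(const char *piece)
    {
        StringInfoData buf;

        initStringInfo(&buf);
        appendStringInfoString(&buf, piece);  /* grows buf.data in place */
        return buf.data;                      /* caller keeps the buffer */
    }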
-
Committed by Daniel Gustafsson
The REFERENCE partition type was never used anywhere except being defined in the parttyp enum. Remove it.
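Roughly what the cleanup amounts to; the member names here are assumptions based on the commit text, so treat this as a sketch rather than the actual enum.

    /* Sketch of the parttyp enum once the dead member is dropped; HASH was
     * removed in a separate commit, leaving the two supported types. */
    typedef enum PartitionByType
    {
        PARTTYP_RANGE,
        PARTTYP_LIST
        /* PARTTYP_REFERENCE: was defined but never referenced -- removed */
    } PartitionByType;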
-
Committed by Daniel Gustafsson
Hash partitioning was never fully implemented, and was never turned on by default. There has been no effort to complete the feature, so rather than carrying dead code, this removes all support for hash partitioning. Should we ever want this feature, we will most likely start from scratch anyway.

As an effect of removing the unsupported MERGE/MODIFY commands, this previously accepted query is no longer legal:

    create table t (a int, b int) distributed by (a)
    partition by range (b) (start () end(2));

The syntax was an effect of an incorrect rule in the parser which made the start boundary optional for CREATE TABLE, when it was only intended for MODIFY PARTITION. pg_upgrade was already checking for hash partitions, so no new check was required (upgrade would have been impossible anyway due to the hash algorithm change).
-
Committed by Heikki Linnakangas
A Concourse job just failed with a diff like this:

    *** ./expected/partition_pruning.out   2017-11-16 08:38:51.463996042 +0000
    --- ./results/partition_pruning.out    2017-11-16 08:38:51.691996042 +0000
    ***************
    *** 5785,5790 ****
    --- 5786,5793 ----
      -- More Equality and General Predicates
      ---
      drop table if exists foo;
    + ERROR:  "foo" is not a base table
    + HINT:  Use DROP EXTERNAL TABLE to remove an external table.
      create table foo(a int, b int) partition by list (b)
      (partition p1 values(1,3), partition p2 values(4,2), default partition other);

That's because both 'partition_pruning' and 'portals_updatable' use a test table called 'foo', and they run concurrently in the test schedule. If the above 'drop table' happens to run at just the right time, when the table has been created but not yet dropped in 'portals_updatable', you get that error.

The test table in 'partition_pruning' is created in a different schema, so creating two tables with the same name wouldn't actually be a problem, but the 'drop table' sees the table created in the public schema, too.

To fix, rename the table in 'portals_updatable', and also remove the above unnecessary 'drop table'. Either one would be enough to fix the race condition, but might as well do both.
-
Committed by Daniel Gustafsson
Remove commented-out code which is incorrect: resqueue comes from a defelem, which in turn is parsed with the IDENT rule and thus is guaranteed to be lower-cased. Also collapse the two identical if statements into one, since the latter could never fire due to the condition being changed in the former; this removes indentation and simplifies the code a bit. Also concatenate the error message, making it easier to grep for, and end the error hint with a period (following the common convention in postgres code).
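The premise, as a tiny self-contained sketch (the helper name is hypothetical): names produced by the IDENT rule arrive already lower-cased, so a plain strcmp suffices and defensive case-folding would be dead weight.

    #include <string.h>

    /* defelem names from the IDENT rule are guaranteed lower case, so an
     * exact comparison is enough; no case-insensitive matching needed. */
    static int
    is_resqueue_option(const char *defname)
    {
        return strcmp(defname, "resqueue") == 0;
    }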
-
Committed by xiong-gang
If a QD exits while the transaction is still active, the QD will abort the transaction and destroy all gangs without waiting for the QEs to finish the transaction. In this case, a new transaction (in the resource group slot woken up by this QD) may start executing before those QEs exit, so we cannot use the number of currently running QEs to calculate the memory quota.
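A toy illustration of the hazard (names are hypothetical; this is not the resgroup code): when the divisor is a count of QEs observed at wake-up time, QEs still exiting from the aborted transaction can be counted too, skewing the quota handed to the new transaction.

    /* Hypothetical sketch of the unsafe calculation the commit moves away
     * from: observed_qes can transiently include QEs of a destroyed gang
     * that have not exited yet, so the per-QE quota comes out wrong. */
    static int
    quota_per_qe(int group_quota, int observed_qes)
    {
        return group_quota / observed_qes;
    }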
-
Committed by Daniel Gustafsson
Any trailing commas were intended to be removed from the partition string, but due to a typo the fixed string was saved into a new variable instead. However, Chris Hajas realized that the step was no longer in use, so remove it rather than fix it.
-
Committed by Heikki Linnakangas
Compared to upstream, there were a bunch of changes to the error handling in EXPLAIN, to try to catch errors and produce EXPLAIN output even when an error happens. I don't understand how it was supposed to work, and I don't remember it ever doing anything too useful. It also has the strange effect that if an ERROR happens during EXPLAIN ANALYZE, it is reported to the client as merely a NOTICE. So, revert this to the way it is in the upstream.

Also reorder the end-of-EXPLAIN steps so that the "Total runtime" is printed at the very end, like in the upstream. I don't see any reason to differ from upstream on that, and this makes the "Total runtime" numbers more comparable with PostgreSQL's, for what it's worth.

I dug up the commit that made these changes in the old git repository, from before GPDB was open sourced:

---
commit 1413010e71cb8eae860160ac9d5246216b2a80b4
Date: Thu Apr 12 12:34:18 2007 -0800

MPP-1177. EXPLAIN ANALYZE now can usually produce a report, perhaps a partial one, even if the query encounters a runtime error. The Slice Statistics indicate 'ok' for slices whose worker processes all completed successfully; otherwise it shows how many workers returned errors, were canceled, or were not dispatched. The error message from EXPLAIN in such cases will appear as a NOTICE instead of an ERROR. The client is first sent a successful completion response, followed by the NOTICE. (psql displays the NOTICE before the EXPLAIN report, however.) Although presented to the client as a NOTICE, the backend handles it just like an ERROR with regard to logging, rollback, etc.
---

I couldn't figure out under what circumstances that would be helpful, and couldn't come up with an example. Perhaps it used to work differently when it was originally committed, but not anymore?

I also looked up the MPP-1177 ticket in the old bug tracking system. Its description was:

---
EXPLAIN ANALYZE should report on the amount of data spilled to temporary workfiles on disk, and how much work_mem would be required to complete the query without spilling to disk.
---

Unfortunately, that description, and the comments that followed it, didn't say anything about suppressing errors or being able to print EXPLAIN output even if an error happens. So I don't know why that change was made as part of MPP-1177; it was seemingly for a different issue.
-
Committed by Heikki Linnakangas
In Makefile.global, we set BISONFLAGS to "-Wno-deprecated". Don't override that in gpmapreduce's Makefile; put the -d flag directly on the bison invocation's command line, as is done e.g. in src/backend/parser.
-
Committed by Adam Lee
-
Committed by Adam Lee
The `kick-off` script is good enough for the gpcloud tests, so retire gpcloud_pipeline.
-
Committed by David Yozie
-
Committed by Michael Roth
-
Committed by David Yozie
* clarifying the enable chapter
* move existing root partition info to a new topic; add new best-practices info
-
Committed by Lisa Owen
* docs - add pxf json profile info
* add missing "an"
* incorporate edits from david
-
Committed by Lisa Owen
- requires that the pxf server/agent bits be built and installed separately
- refer to the pxf readmes in the greenplum database and apache hawq repos
-
Committed by Mel Kiyama
* docs: pl/container - add note about docker hang.
* docs: gprestore - review comment edits.
-
Committed by Mel Kiyama
-
Committed by Amil Khanzada
[ci skip]
-
Committed by Ben Christel
Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
`DR_intorel` had redundant fields `use_wal` and `is_bulkload` serving the same purpose, so clean up the code to use `use_wal` throughout and remove `is_bulkload`.
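Roughly the shape of the redundancy (fields abridged; a sketch, not the real struct): the two booleans always encoded the same fact, one as the negation of the other.

    #include <stdbool.h>

    /* Sketch only: after the cleanup a single flag remains. is_bulkload
     * carried no extra information -- it always meant !use_wal. */
    typedef struct DR_intorel_sketch
    {
        bool use_wal;   /* true: WAL-log the inserts; false: bulk load */
        /* bool is_bulkload;  -- removed as redundant */
    } DR_intorel_sketch;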
-
Committed by Ashwin Agrawal
Commit fa287d01 missed retaining the original negation of the `is_bulkload` argument to `heap_insert()`, so whenever bulkload was false, use_wal also became false and no xlog records were generated. Commit b8f8fccc attempted to fix this but didn't apply the negation in the right place.
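The invariant the fix restores, as a self-contained one-liner (the helper name is hypothetical; the real code passes this value as heap_insert()'s use_wal argument):

    #include <stdbool.h>

    /* A bulk load bypasses WAL, so use_wal must be the negation of
     * is_bulkload. With the '!' missing, use_wal tracked is_bulkload and
     * ordinary (non-bulkload) inserts generated no xlog records. */
    static bool
    use_wal_for_insert(bool is_bulkload)
    {
        return !is_bulkload;
    }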
-