- 16 Nov 2017, 21 commits
-
-
Committed by Heikki Linnakangas
A Concourse job just failed with a diff like this:

*** ./expected/partition_pruning.out	2017-11-16 08:38:51.463996042 +0000
--- ./results/partition_pruning.out	2017-11-16 08:38:51.691996042 +0000
***************
*** 5785,5790 ****
--- 5786,5793 ----
  -- More Equality and General Predicates
  ---
  drop table if exists foo;
+ ERROR: "foo" is not a base table
+ HINT: Use DROP EXTERNAL TABLE to remove an external table.
  create table foo(a int, b int) partition by list (b)
  (partition p1 values(1,3), partition p2 values(4,2), default partition other);

That's because both 'partition_pruning' and 'portals_updatable' use a test table called 'foo', and they are run concurrently in the test schedule. If the above 'drop table' happens to run at just the right time, when the table has been created but not yet dropped in 'portals_updatable', you get that error.

The test table in 'partition_pruning' is created in a different schema, so creating two tables with the same name wouldn't actually be a problem, but the 'drop table' sees the table created in the public schema, too.

To fix, rename the table in 'portals_updatable', and also remove the above unnecessary 'drop table'. Either one would be enough to fix the race condition, but might as well do both.
-
Committed by Daniel Gustafsson
Remove commented-out code which is incorrect: resqueue comes from a defelem, which in turn is parsed with the IDENT rule and is thus guaranteed to be lowercased. Also collapse the two identical if statements into one, since the latter will never fire, its condition having already been handled by the former. This removes indentation and simplifies the code a bit. Also concatenate the error message, making it easier to grep for, and end the error hint with a period (following the common convention in postgres code).
-
Committed by xiong-gang
If a QD exits while the transaction is still active, the QD will abort the transaction and destroy all gangs without waiting for the QEs to finish the transaction. In this case, a new transaction (the resource group slot woken up by this QD) may execute before the QEs exit, so we cannot use the number of currently running QEs to calculate the memory quota.
-
Committed by Daniel Gustafsson
Any trailing commas were intended to be removed from the partition string, but due to a typo the fixed string was saved into a new variable instead. However, Chris Hajas realized that the step was no longer in use, so it was removed rather than fixed.
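The bug class described here is easy to sketch. A hypothetical Python illustration (the actual code was not Python, and these names are invented):

```python
def strip_trailing_comma_buggy(partition_str):
    # Typo: the cleaned string is stored in a new variable...
    cleaned = partition_str.rstrip(",")
    # ...but the original, uncleaned string is what gets returned.
    return partition_str

def strip_trailing_comma_fixed(partition_str):
    # The intended behavior: drop any trailing commas.
    return partition_str.rstrip(",")
```

In the end the cleanup step turned out to be dead code, so deleting it entirely was the better fix.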
-
Committed by Heikki Linnakangas
Compared to upstream, there were a bunch of changes to the error handling in EXPLAIN, to try to catch errors and produce EXPLAIN output even when an error happens. I don't understand how it was supposed to work, and I don't remember it ever doing anything too useful. And it has the strange effect that if an ERROR happens during EXPLAIN ANALYZE, it is reported to the client as merely a NOTICE. So, revert this to the way it is in upstream.

Also reorder the end-of-EXPLAIN steps so that the "Total runtime" is printed at the very end, like in upstream. I don't see any reason to differ from upstream on that, and this makes the "Total runtime" numbers more comparable with PostgreSQL's, for what it's worth.

I dug up the commit that made these changes in the old git repository, from before GPDB was open sourced:

---
commit 1413010e71cb8eae860160ac9d5246216b2a80b4
Date:   Thu Apr 12 12:34:18 2007 -0800

    MPP-1177. EXPLAIN ANALYZE now can usually produce a report, perhaps a
    partial one, even if the query encounters a runtime error. The Slice
    Statistics indicate 'ok' for slices whose worker processes all completed
    successfully; otherwise it shows how many workers returned errors, were
    canceled, or were not dispatched. The error message from EXPLAIN in such
    cases will appear as a NOTICE instead of an ERROR. The client is first
    sent a successful completion response, followed by the NOTICE. (psql
    displays the NOTICE before the EXPLAIN report, however.) Although
    presented to the client as a NOTICE, the backend handles it just like an
    ERROR with regard to logging, rollback, etc.
---

I couldn't figure out under what circumstances that would be helpful, and couldn't come up with an example. Perhaps it used to work differently when it was originally committed, but not anymore? I also looked up the MPP-1177 ticket in the old bug tracking system.
The description for that was:

---
EXPLAIN ANALYZE should report on the amount of data spilled to temporary workfiles on disk, and how much work_mem would be required to complete the query without spilling to disk.
---

Unfortunately, that description, and the comments that followed it, didn't say anything about suppressing errors or being able to print out EXPLAIN output even if an error happens. So I don't know why that change was made as part of MPP-1177. It was seemingly for a different issue.
-
Committed by Heikki Linnakangas
In Makefile.global, we set BISONFLAGS to "-Wno-deprecated". Don't override that in gpmapreduce's Makefile. Instead, pass the -d flag directly on the bison invocation's command line, like it's done e.g. in src/backend/parser.
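A minimal sketch of the arrangement described, assuming a typical bison rule (illustrative only, not the actual gpmapreduce Makefile):

```makefile
# Makefile.global (shared): keep the warning suppression here.
BISONFLAGS = -Wno-deprecated

# Per-directory Makefile: pass -d on the invocation itself instead of
# overriding BISONFLAGS, so the globally set flags are preserved.
gram.c: gram.y
	$(BISON) -d $(BISONFLAGS) -o $@ $<
```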
-
Committed by Adam Lee
-
Committed by Adam Lee
The `kick-off` script is good enough for the gpcloud tests, so retire the gpcloud_pipeline.
-
Committed by David Yozie
-
Committed by Michael Roth
-
Committed by David Yozie
* clarifying enable chapter
* move existing root partition info to new topic; add new best practices info
-
Committed by Lisa Owen
* docs - add pxf json profile info
* add missing an
* incorporate edits from david
-
Committed by Lisa Owen
- requires that the pxf server/agent bits be built and installed separately - refer to pxf readmes in greenplum database and apache hawq repos
-
Committed by Mel Kiyama
* docs: pl/container - add note about docker hang.
* docs: gprestore - review comment edits.
-
Committed by Mel Kiyama
-
Committed by Amil Khanzada
[ci skip]
-
Committed by Ben Christel
Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
`DR_intorel` had redundant fields `use_wal` and `is_bulkload` serving the same purpose. So, clean up the code to use `use_wal` throughout and remove `is_bulkload`.
-
Committed by Ashwin Agrawal
Commit fa287d01 missed retaining the original negation of the `is_bulkload` argument to `heap_insert()`. So, when bulkload was false, `use_wal` also became false and no xlog records were generated. Commit b8f8fccc attempted to fix it but didn't fix the negation in the right place.
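The negation mix-up reads like this in a simplified sketch, with the real C call reduced to a hypothetical Python stand-in (only the WAL flag is modeled):

```python
def heap_insert(use_wal):
    # Stand-in for the real C function: only the WAL behavior matters here.
    return "xlog" if use_wal else "no xlog"

is_bulkload = False  # an ordinary insert, so WAL records are required

# Buggy: the flag was passed through without the original negation, so a
# non-bulkload insert generated no xlog records at all.
assert heap_insert(is_bulkload) == "no xlog"

# Fixed: restore the negation, so use_wal is true whenever not bulk-loading.
assert heap_insert(not is_bulkload) == "xlog"
```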
-
Committed by Kris Macoskey
The pattern has already been removed in other tests; in order to get to a matched state, it should be removed in the other locations as well. Also included is a slight change in how the mpp_interconnect job installs the correct kernel-devel package version, to fix a bug where the package version was not found.
-
- 15 Nov 2017, 11 commits
-
-
Committed by Kris Macoskey
-
Committed by Bhuvnesh Chaudhary
-
Committed by David Yozie
-
Committed by Mel Kiyama
* docs: S3 gpcheckcloud - add config parameter gpcheckcloud_newline
* docs: fix typo.
-
Committed by Mel Kiyama
-
Committed by Lisa Owen
-
Committed by Mel Kiyama
* docs: gpbackup/gprestore limitation for partitioned table with external table partition.
* docs: edit to fix a typo.
* docs: updated gpbackup/gprestore limitation. partitioned table w/ external table leaf part. not supported.
-
Committed by Alexandra Wang
With the removal of the installation of base packages in setup_ssh_to_cluster in CCP 1.4.4, the gppkg job's assumption that those base packages, specifically openssl, were installed was broken. Either the job's task can install the package on the base centos 6 image it was using on every execution, or we can just use the ccp image that has them preinstalled.

Signed-off-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Lisa Owen
* docs - add content for pxf writable profiles
* use consistent naming for PXF
* add writing to HDFS book subnav
* main pxf subnav title change
* edits from david
* use consistent terms when introducing pxf
* not writing is experimental
* style experimental warning on write page closer to rest of docs
* dir/file distinction, other edits from alex
* discuss distributed by
* rename read topic, more info to distributed by
-
Committed by Alexandra Wang
Signed-off-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Larry Hamel
- add regression test for gpcloud on ubuntu
- create pattern for reusing task across platforms
- remove configure step since it is not necessary
- remove unused files

Signed-off-by: Goutam Tadi <gtadi@pivotal.io>
-
- 14 Nov 2017, 8 commits
-
-
Committed by Michael Roth
* Initial setup of building on Ubuntu with Conan
* Updated Pipeline to use new/less Ubuntu jobs
* Updated PR pipeline
-
Committed by Daniel Gustafsson
-
Committed by dyozie
-
Committed by Mel Kiyama
* docs: gpbackup/gprestore limitation for partitioned table with external table partition.
* docs: edit to fix a typo.
-
Committed by Mel Kiyama
* docs: gpload - YAML ERROR_TABLE element backward compatibility

  In 5.x, ERROR_TABLE in the YAML file was removed and returns an error. For backward compatibility, issue a warning and use LOG_ERRORS and REUSE_TABLE set to true.

* docs: gpload - edits based on review.
-
Committed by Lisa Owen
* docs - add content for PXF HBase profile
* fix incorrect table name - indirect mapping now working w example
* use new page title in pxf overview xrefs
* incorporate edits from david
* note that hbase does not support predicate pushdown
-
Committed by Jimmy Yih
The sequence xlog record generated at the end of DefineSequence records the local tuple->t_data when the actual page item is available. Not using the actual page item caused the gp_replica_check extension to fail for sequences, because the tuple->t_data ctid would be replicated as (0,0) instead of the actual page item ctid (0,1). This is because tuple->t_data is used in the xlog record creation but its t_ctid is not set, even though tuple->t_self is fine. To fix this, we cherry-pick and modify the upstream fix. Reference commit message:

commit 8d34f686
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date:   Tue Apr 22 09:50:47 2014 +0300

    Avoid transient bogus page contents when creating a sequence.

    Don't use simple_heap_insert to insert the tuple to a sequence relation.
    simple_heap_insert creates a heap insertion WAL record, and replaying
    that will create a regular heap page without the special area containing
    the sequence magic constant, which is wrong for a sequence. That was not
    a bug because we always created a sequence WAL record after that, and
    replaying that overwrote the bogus heap page, and the transient state
    could never be seen by another backend because it was only done when
    creating a new sequence relation. But it's simpler and cleaner to avoid
    that in the first place.
-
Committed by Jimmy Yih
We were using heap_mask, which mostly worked, but we should be using the mask intended for sequences, which is seq_mask.
-