- 03 Aug 2018 (2 commits)
-
-
Committed by Joao Pereira
The error that was expected in this test was wrong: the expected error should have been that the type could not be found, rather than the one present. This correction was made in the previous commit.
-
Committed by Joao Pereira
Upstream commit 8ab6a6b4 removed the function check_pg_get_expr_args and all calls to it. Nevertheless, the merge brought it back, and with it the issue corrected in commit f223bb7a. Co-authored-by: Taylor Vesely <tvesely@pivotal.io> (cherry picked from commit 39560a95615771768b7381842fffe4af9b4284b6)
-
- 02 Aug 2018 (3 commits)
-
-
Committed by Ed Espino
Replace trigger conditions so nightly jobs will still trigger. Co-authored-by: Jason Vigil <jvigil@pivotal.io> Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io> Co-authored-by: Ed Espino <edespino@pivotal.io>
-
Committed by Kris Macoskey
This is the same version_id used in the 5.10.0 release from the 5X-release pipeline. Co-authored-by: Jason Vigil <jvigil@pivotal.io> Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io> Co-authored-by: Ed Espino <edespino@pivotal.io>
-
Committed by Jason Vigil
Rather than push to the gpdb5 bucket prefix, push release candidates for 5.10.X to a dedicated 5.10.X prefix. Co-authored-by: Jason Vigil <jvigil@pivotal.io> Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io> Co-authored-by: Ed Espino <edespino@pivotal.io>
-
- 25 Jul 2018 (1 commit)
-
-
Committed by Ed Espino
The following information will be saved in ${GREENPLUM_INSTALL_DIR}/etc/git-info.json:
* Root repo (uri, sha1)
* Submodules (submodule source path, sha1, tag)
Also save the git commits since the last release tag into ${GREENPLUM_INSTALL_DIR}/etc/git-current-changelog.txt.
-
- 24 Jul 2018 (2 commits)
-
-
Committed by David Kimura
If autovacuum is triggered before ShmemVariableCache->latestCompletedXid is updated by manually consuming xids, autovacuum may not vacuum template0 with a proper transaction id to compare against. We made the test more reliable by suspending a new fault injector (auto_vac_worker_before_do_autovacuum) right before the autovacuum worker sets recentXid and starts doing the autovacuum. This lets us guarantee that autovacuum compares against a proper xid. We also removed the loop in the test, because the vacuum_update_dat_frozen_xid fault injector ensures the pg_database table has been updated. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
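For context, fault injectors of this kind are typically driven from SQL through the gp_inject_fault extension; a minimal sketch of how this one might be used in a test (the dbid and surrounding steps are assumptions, not the actual test):

    -- Suspend the autovacuum worker just before it sets recentXid
    -- (fault name from the commit message; dbid 2 is a hypothetical segment).
    SELECT gp_inject_fault('auto_vac_worker_before_do_autovacuum', 'suspend', 2);
    -- ... manually consume xids so latestCompletedXid advances ...
    SELECT gp_inject_fault('auto_vac_worker_before_do_autovacuum', 'reset', 2);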
-
Committed by Lisa Owen
* docs - remove duplicate gphdfs/kerberos topic in best practices
* remove unused file
-
- 23 Jul 2018 (2 commits)
-
-
Committed by Huiliang.liu
Add a fast_match option to the gpload config file. If both reuse_tables and fast_match are true, gpload will try to fast-match an existing external table (without checking its columns). If reuse_tables is false and fast_match is true, gpload prints a warning message.
-
Committed by Hao Wang
1. When doing harvesting, raise gp_max_csv_line_length to the maximum legal value at the session level.
2. For queries longer than gp_max_csv_line_length, this workaround replaces line breaks in the query text with spaces to prevent load failure. It may alter long query statements when they are loaded into the history table, but that is still better than failing to load or truncating the query text.
Co-authored-by: Teng Zhang <tezhang@pivotal.io> Co-authored-by: Hao Wang <haowang@pivotal.io>
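To illustrate the effect of the workaround (the replacement happens in the harvester code, not in SQL; this is only a sketch of the transformation):

    -- A multi-line query text is flattened before being loaded:
    SELECT replace(E'SELECT *\nFROM t;', E'\n', ' ');
    -- result: 'SELECT * FROM t;'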
-
- 21 Jul 2018 (1 commit)
-
-
Committed by Lisa Owen
* docs - correct log file locations in best practices
* edit requested by David
-
- 20 Jul 2018 (3 commits)
-
-
Committed by David Kimura
The MyProc inDropTransaction flag was used to make sure concurrent AO vacuums would not conflict with each other during the drop phase. Two concurrent AO vacuums on the same relation were possible back in 4.3, where the different AO vacuum phases (prepare, compaction, drop, cleanup) could interleave with each other, and having two AO vacuum drop phases run concurrently on the same AO relation was dangerous. We now hold the ShareUpdateExclusiveLock through the entire AO vacuum, which renders the inDropTransaction flag useless and disallows the interleaving mechanism. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
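For reference, the lock can be observed from a second session while the vacuum runs; a minimal sketch (the table name ao_t is hypothetical):

    -- Session 2, while 'VACUUM ao_t;' is running in session 1:
    SELECT locktype, mode, granted
    FROM pg_locks
    WHERE relation = 'ao_t'::regclass;
    -- ShareUpdateExclusiveLock should now stay held for the whole vacuum.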
-
Committed by Larry Hamel
- Previously, DNS was queried within the `Ping` utility constructor, so a DNS failure would always raise an exception.
- Now the DNS query is in the standard `run()` method, so a DNS failure raises optionally, depending on the `validateAfter` parameter.
- `Command` is declared as a new-style class so that `super(Ping, self).run()` can be called.
Co-authored-by: Larry Hamel <lhamel@pivotal.io> Co-authored-by: Jemish Patel <jpatel@pivotal.io>
-
Committed by Heikki Linnakangas
We've seen a lot of failures in the 'sreh' test in the pipeline, like this:

    --- 263,269 ----
      FORMAT 'text' (delimiter '|')
      SEGMENT REJECT LIMIT 10000;
      SELECT * FROM sreh_ext;
    ! ERROR:  connection failed dummy_protocol://DUMMY_LOCATION
      INSERT INTO sreh_target SELECT * FROM sreh_ext;
      NOTICE:  Found 10 data formatting errors (10 or more input rows). Rejected related input data.
      SELECT count(*) FROM sreh_target;

I don't really know, but I'm guessing it could be because it sometimes takes more than one second for gpfdist to fully start up, if there's a lot of disk or other activity. Increase the sleep time from 1 to 3 seconds; we'll see if that helps. (cherry picked from commit bb8575a9)
-
- 19 Jul 2018 (4 commits)
-
-
Committed by Yandong Yao
We have lately seen a lot of failures in test cases related to partitioning, with errors like this:

    select tablename, partitionlevel, partitiontablename, partitionname,
           partitionrank, partitionboundary
    from pg_partitions where tablename = 'mpp3079a';
    ERROR:  cache lookup failed for relation 148532 (ruleutils.c:7172)

The culprit is that the view passes a relation OID to the pg_get_partition_rule_def() function, and the function tries to perform a syscache lookup on the relation (in flatten_reloptions()), but the lookup fails because the relation was dropped concurrently by another transaction. This race is possible because the query runs with an MVCC snapshot, but the syscache lookups use SnapshotNow. This commit doesn't eliminate the race completely, but at least it makes it narrower. A more reliable solution would have been to acquire a lock on the table, but that might block, which isn't nice either. Another solution would have been to modify flatten_reloptions() to return NULL instead of erroring out if the lookup fails. That approach is taken on the other lookups, but I'm reluctant to modify flatten_reloptions() because it's inherited from upstream. Let's see how well this works in practice first, before we take more drastic measures.
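A hedged two-session sketch of the race (timing-dependent, so it will not reproduce reliably; the table name comes from the failing test):

    -- Session 1:
    SELECT partitiontablename FROM pg_partitions WHERE tablename = 'mpp3079a';
    -- Session 2, concurrently:
    DROP TABLE mpp3079a;
    -- If the DROP commits between session 1's MVCC scan of the view and the
    -- SnapshotNow syscache lookup in flatten_reloptions(), session 1 fails
    -- with "cache lookup failed for relation <oid>".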
-
Committed by Mel Kiyama
* docs - update gpbackup API - add segment instance and update backup directory information. Also update the API version to 0.3.0. This will be ported to 5X_STABLE.
* docs - gpbackup API - review updates and fixes for scope information. Also, cleanup edits.
* docs - gpbackup API - more review updates and fixes to scope information.
-
Committed by mkiyama
-
Committed by Lisa Owen
-
- 18 Jul 2018 (4 commits)
-
-
Committed by Bhuvnesh Chaudhary
-
Committed by Huiliang Liu
-
Committed by Wang Hao
gp_max_csv_line_length is a session-level GUC. When changed in a session, it affects statements like SELECT * FROM <external_table>, but it did not work for INSERT INTO table SELECT * FROM <external_table>: for such statements, the scan of the external table happens in a QE backend process, not the QD. This fix adds GUC_GPDB_ADDOPT so that setting this GUC at the session level affects both QD and QE processes.
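A minimal sketch of the behavior being fixed (table names are hypothetical):

    SET gp_max_csv_line_length = 4194304;      -- e.g. raise toward the 4MB maximum
    SELECT * FROM my_ext_table;                -- honored the session value
    INSERT INTO t SELECT * FROM my_ext_table;  -- QE scans previously ignored it;
                                               -- with GUC_GPDB_ADDOPT the setting
                                               -- is forwarded to the QEs as well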
-
Committed by mkiyama
-
- 17 Jul 2018 (2 commits)
- 14 Jul 2018 (4 commits)
-
-
Committed by David Sharp
Authored-by: David Sharp <dsharp@pivotal.io> (cherry picked from commit a88bec32)
-
Committed by Larry Hamel
Remove the match for "localhost", because a Greenplum cluster defined with "localhost" as the name of a node will not work. Authored-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Larry Hamel
Previously, `gpstop -u` used `ssh` for all commands. This change adds a conditional: when the host == localhost, the command runs without `ssh`. Co-authored-by: Larry Hamel <lhamel@pivotal.io> Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Jimmy Yih
As part of the Postgres 8.3 merge, all heap tables now automatically create an array type. The array type will usually be created with typname '_<heap_name>', since the automatically created composite type already takes the typname '<heap_name>' first. If typname '_<heap_name>' is taken, the logic will continue to prepend underscores until there is no collision (truncating the end if the typname gets past the NAMEDATALEN of 64). This might be an oversight in upstream Postgres, since certain scenarios involving creating a large number of heap tables with similar names could result in so many typname collisions that no more heap tables with similar names can be created. This is very noticeable with Greenplum heap partition tables, because Greenplum has logic to automatically name child partitions with similar names instead of having the user name each child partition. To prevent typname collision failures when creating a heap partition table with a large number of child partitions, we now stop automatically creating the array type for child partitions. References: https://www.postgresql.org/message-id/flat/20070302234016.GF3665%40fetter.org https://github.com/postgres/postgres/commit/bc8036fc666a8f846b1d4b2f935af7edd90eb5aa
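A sketch of the kind of DDL that triggered the collisions (table and column names are hypothetical): each daily child partition gets an auto-generated name along the lines of sales_1_prt_2, and before this change each one also received an auto-created '_<name>' array type, prepending underscores on every collision:

    CREATE TABLE sales (id int, sale_date date)
    DISTRIBUTED BY (id)
    PARTITION BY RANGE (sale_date)
    (
        START (date '2018-01-01') INCLUSIVE
        END   (date '2019-01-01') EXCLUSIVE
        EVERY (INTERVAL '1 day')   -- 365 similarly named children
    );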
-
- 13 Jul 2018 (6 commits)
-
-
Committed by Jialun Du
- Remove async session before CREATE FUNCTION
- Change comment format from -- to /* */
(cherry picked from commit 1cc742247cc4de314941624cfac9e8676bc71100)
-
Committed by Jialun
* Add a resource group bypass memory limit:
- a bypassed query allocates all of its memory from the group / global shared memory, and is also enforced by the group's memory limit;
- a bypassed query also has a memory limit of 10 chunks per process.
* Add test cases for the resgroup bypass memory limit.
* Provide ORCA answer file.
* Adjust memory limit on QD.
(cherry picked from commit dc82ceea)
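Roughly, from the user's perspective (gp_resource_group_bypass is the GUC documented later in this log; the table name is hypothetical):

    -- The query skips the group's concurrency slot, but after this change its
    -- memory is still charged to the group and capped at 10 chunks per process.
    SET gp_resource_group_bypass = on;
    SELECT count(*) FROM small_lookup;
    SET gp_resource_group_bypass = off;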
-
Committed by Mel Kiyama
* docs - gpbackup S3 plugin - support for S3-compatible data stores. Link to HTML format on the GPDB docs review site: http://docs-gpdb-review-staging.cfapps.io/review/admin_guide/managing/backup-s3-plugin.html
* docs - gpbackup S3 plugin - review comment updates
* docs - gpbackup S3 plugin - add OSS/Pivotal support information
* docs - gpbackup S3 plugin - fix typos
* docs - gpbackup S3 plugin - updated information about S3-compatible data stores
-
Committed by Mel Kiyama
* docs - gpbackup API - update for scope argument. This will be ported to 5X_STABLE.
* docs - gpbackup API - correct API scope description based on review comments
* docs - gpbackup API - edit scope description based on review comments; updated API version
-
Committed by David Yozie
* fix problematic xrefs
* Consistency edits for list items
* edit, relocate filter pushdown section
* minor edits to guc description
* remove note about non-support in Hive doc
* Edits from Lisa's review
* Adding note about experimental status of HBase connector pushdown
* Adding note about experimental status of Hive connector pushdown
* Revert "Adding note about experimental status of Hive connector pushdown" (reverts commit 43dfe51526e19983835f7cbd25d540d3c0dec4ba)
* Revert "Adding note about experimental status of HBase connector pushdown" (reverts commit 3b143de058c7403c2bc141c11c61bf227c2abf3a)
* restoring HBase, Hive pushdown support
* slight wording change
* adding xref
-
Committed by Asim R P
We seemed to be doing that in this case. This was caught by enabling the memory_protect_buffer_pool GUC.
-
- 12 Jul 2018 (4 commits)
-
-
Committed by Mel Kiyama
* docs - gpcopy new options --dry-run and --no-distribution-check. Add a limitation and warning about copying some types of partitioned tables with --no-distribution-check; also added the limitation and warning to the Admin Guide.
* docs - gpcopy - fix typo for the new option --no-distribution-check
* docs - gpcopy new options - removed unneeded xref
-
Committed by Mel Kiyama
* docs - add GUC gp_resource_group_bypass. Link to HTML on the GPDB review doc site: http://docs-gpdb-review-staging.cfapps.io/review/ref_guide/config_params/guc-list.html#gp_resource_group_bypass GUC list by category: http://docs-gpdb-review-staging.cfapps.io/review/ref_guide/config_params/guc_category-list.html#topic444
* docs - new GUC gp_resource_group_bypass - edits based on review comments; add a link to the GUC from the Admin Guide; update the TOC for this GUC and other GUCs not in the TOC
* docs - review updates for GUC gp_resource_group_bypass
* docs - fix typos in the definition of GUC gp_resource_group_bypass
* docs - update GUC gp_resource_group_bypass based on dev changes
-
Committed by Mel Kiyama
* docs - update gphdfs parquet support information. Update parquet support to 1.7.0 and later; change the location of the parquet bundle jar files to https://mvnrepository.com/artifact/org.apache.parquet/parquet-hadoop-bundle (the previous location was http://parquet.apache.org/downloads/).
* docs - review updates of gphdfs parquet support information
-
Committed by Jim Doty
The domain changed; this update now points at the domain that is active at this moment in history.
-
- 11 Jul 2018 (2 commits)
-
-
Committed by Pengzhou Tang
To keep it consistent with the CREATE TABLE syntax, CTAS should also disallow duplicate distribution keys; otherwise backup and restore will get messed up.
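A minimal sketch of the now-consistent behavior:

    -- Already rejected by CREATE TABLE:
    CREATE TABLE t1 (a int, b int) DISTRIBUTED BY (a, a);  -- error: duplicate key
    -- After this change, CTAS rejects the same thing:
    CREATE TABLE t2 AS
    SELECT i AS a, i AS b FROM generate_series(1, 10) i
    DISTRIBUTED BY (a, a);                                 -- now also an error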
-
Committed by Jimmy Yih
TRUNCATE rewrites the relation by creating a temporary table and swapping it with the real relation. For AO, this includes the auxiliary tables, which is concerning for the AO relation's pg_aoseg table: it holds the information that an AO segment file is available for write or is waiting to be compacted/dropped. Since we do not currently invalidate the AppendOnlyHash cache entry, the entry could have invisible leaks in its AOSegfileStatus array that will be stuck in state AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts the cache entry, either by not using the table (so another AO table can cache itself in that slot) or by restarting the database. We fix this issue by invalidating the cache entry at the end of TRUNCATE on AO relations.

Conflicts:
src/backend/commands/tablecmds.c
- TRUNCATE is a bit different between Greenplum 5 and 6. Needed to move the heap_close() to the end, to invalidate the AppendOnlyHash entries after dispatch.
- The macros RelationIsAppendOptimized() and IS_QUERY_DISPATCHER() do not exist in 5X_STABLE.
src/test/isolation2/sql/truncate_after_ao_vacuum_skip_drop.sql
- Isolation2 utility-mode connections use dbid instead of content id as in Greenplum 6.
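A hedged sketch of the kind of sequence that could leave a stuck entry (the table name is hypothetical, and whether a segfile actually lands in the awaiting-drop state depends on concurrent activity):

    CREATE TABLE ao_t (a int) WITH (appendonly = true);
    INSERT INTO ao_t SELECT generate_series(1, 1000);
    DELETE FROM ao_t;
    VACUUM ao_t;    -- a segfile may be left awaiting drop, e.g. when a
                    -- concurrent reader prevents the drop phase
    TRUNCATE ao_t;  -- swaps in fresh auxiliary tables; before this fix the
                    -- cached AOSegfileStatus entries could stay in
                    -- AOSEG_STATE_AWAITING_DROP until eviction or restart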
-