- 19 Jul 2018, 4 commits

Committed by Yandong Yao

We have lately seen many failures in test cases related to partitioning, with errors like this:

    select tablename, partitionlevel, partitiontablename, partitionname, partitionrank, partitionboundary from pg_partitions where tablename = 'mpp3079a';
    ERROR: cache lookup failed for relation 148532 (ruleutils.c:7172)

The culprit is that the view passes a relation OID to the pg_get_partition_rule_def() function, and the function tries to perform a syscache lookup on the relation (in flatten_reloptions()), but the lookup fails because the relation was dropped concurrently by another transaction. This race is possible because the query runs with an MVCC snapshot, but the syscache lookups use SnapshotNow.

This commit doesn't eliminate the race completely, but at least it makes it narrower. A more reliable solution would have been to acquire a lock on the table, but that might block, which isn't nice either. Another solution would have been to modify flatten_reloptions() to return NULL instead of erroring out if the lookup fails. That approach is taken in the other lookups, but I'm reluctant to modify flatten_reloptions() because it's inherited from upstream. Let's see how well this works in practice before we take more drastic measures.
Committed by Mel Kiyama

* docs - update gpbackup API - add segment instance and update backup directory information. Also update API version to 0.3.0. This will be ported to 5X_STABLE.
* docs - gpbackup API - review updates and fixes for scope information. Also, cleanup edits.
* docs - gpbackup API - more review updates and fixes to scope information.
Committed by mkiyama

Committed by Lisa Owen
- 18 Jul 2018, 4 commits

Committed by Bhuvnesh Chaudhary
Committed by Huiliang Liu

Committed by Wang Hao

gp_max_csv_line_length is a session-level GUC. When it is changed in a session, it affects statements like SELECT * FROM <external_table>, but it does not work for INSERT INTO table SELECT * FROM <external_table>. For such a statement, the scan of the external table happens in a QE backend process, not the QD. This fix adds GUC_GPDB_ADDOPT so that setting this GUC at the session level affects both the QD and QE processes.
Committed by mkiyama

- 17 Jul 2018, 2 commits
- 14 Jul 2018, 4 commits
Committed by David Sharp

Authored-by: David Sharp <dsharp@pivotal.io>
(cherry picked from commit a88bec32)
Committed by Larry Hamel

Removes the match for "localhost", because a Greenplum cluster defined with "localhost" as the name of a node will not work.

Authored-by: Larry Hamel <lhamel@pivotal.io>
Committed by Larry Hamel

Previously, `gpstop -u` used `ssh` for all commands. This change adds a conditional for when the host is localhost, and runs the command without `ssh`.

Co-authored-by: Larry Hamel <lhamel@pivotal.io>
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Committed by Jimmy Yih

As part of the Postgres 8.3 merge, all heap tables now automatically create an array type. The array type will usually be created with typname '_<heap_name>', since the automatically created composite type already takes the typname '<heap_name>'. If typname '_<heap_name>' is taken, the logic will continue to prepend underscores until there is no collision (truncating the end if the typname gets past NAMEDATALEN of 64). This might be an oversight in upstream Postgres, since certain scenarios involving creating a large number of heap tables with similar names could result in many typname collisions, until no heap tables with similar names can be created at all. This is very noticeable with Greenplum heap partition tables, because Greenplum has logic to automatically name child partitions with similar names instead of having the user name each child partition. To prevent typname collision failures when creating a heap partition table with a large number of child partitions, we now stop automatically creating the array type for child partitions.

References:
https://www.postgresql.org/message-id/flat/20070302234016.GF3665%40fetter.org
https://github.com/postgres/postgres/commit/bc8036fc666a8f846b1d4b2f935af7edd90eb5aa
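The underscore-prepending scheme described above can be sketched roughly in Python. This is a simplified illustration only, not the server's actual makeArrayTypeName code; the function name, the loop cap, and the exact truncation rule here are assumptions:

```python
NAMEDATALEN = 64  # Postgres identifier limit (63 usable bytes + terminator)

def make_array_type_name(rel_name, existing_typnames):
    """Prepend '_' to the relation name; on collision, keep prepending
    underscores, truncating the tail so the candidate stays within
    NAMEDATALEN. A cap guards against looping forever on long names."""
    prefix = "_"
    while len(prefix) < NAMEDATALEN:
        candidate = (prefix + rel_name)[:NAMEDATALEN - 1]
        if candidate not in existing_typnames:
            return candidate
        prefix += "_"
    raise ValueError("could not form array type name for " + rel_name)
```

With many similarly named child partitions, each new table consumes one more underscore variant, which is why large partition hierarchies could exhaust the available names before this fix.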
- 13 Jul 2018, 6 commits
Committed by Jialun Du

- Remove async session before CREATE FUNCTION
- Change comment format from -- to /* */

(cherry picked from commit 1cc742247cc4de314941624cfac9e8676bc71100)
Committed by Jialun

* Add resource group bypass memory limit.
  - A bypassed query allocates all its memory in the group / global shared memory, and is also enforced by the group's memory limit;
  - A bypassed query also has a memory limit of 10 chunks per process;
* Add test cases for the resgroup bypass memory limit.
* Provide ORCA answer file.
* Adjust memory limit on QD.

(cherry picked from commit dc82ceea)
Committed by Mel Kiyama

* docs - gpbackup S3 plugin - support for S3-compatible data stores. Link to HTML format on the GPDB docs review site: http://docs-gpdb-review-staging.cfapps.io/review/admin_guide/managing/backup-s3-plugin.html
* docs - gpbackup S3 plugin - review comment updates
* docs - gpbackup S3 plugin - add OSS/Pivotal support information.
* docs - gpbackup S3 plugin - fix typos.
* docs - gpbackup S3 plugin - updated information about S3-compatible data stores.
Committed by Mel Kiyama

* docs - gpbackup API - update for scope argument. This will be ported to 5X_STABLE.
* docs - gpbackup API - correct API scope description based on review comments.
* docs - gpbackup API - edit scope description based on review comments. Updated API version.
Committed by David Yozie

* fix problematic xrefs
* Consistency edits for list items
* edit, relocate filter pushdown section
* minor edits to guc description
* remove note about non-support in Hive doc
* Edits from Lisa's review
* Adding note about experimental status of HBase connector pushdown
* Adding note about experimental status of Hive connector pushdown
* Revert "Adding note about experimental status of Hive connector pushdown" (reverts commit 43dfe51526e19983835f7cbd25d540d3c0dec4ba)
* Revert "Adding note about experimental status of HBase connector pushdown" (reverts commit 3b143de058c7403c2bc141c11c61bf227c2abf3a)
* restoring HBase, Hive pushdown support
* slight wording change
* adding xref
Committed by Asim R P

We seemed to be doing that in this case. This was caught by enabling the memory_protect_buffer_pool GUC.
- 12 Jul 2018, 4 commits
Committed by Mel Kiyama

* docs - gpcopy new options --dry-run and --no-distribution-check.
  - Add a limitation and warning about copying some types of partitioned tables with --no-distribution-check.
  - Also added the limitation and warning to the Admin Guide.
* docs - gpcopy fix typo for new option --no-distribution-check
* docs - gpcopy new options. Removed unneeded xref.
Committed by Mel Kiyama

* docs - add GUC gp_resource_group_bypass. Link to HTML on the GPDB review doc site: http://docs-gpdb-review-staging.cfapps.io/review/ref_guide/config_params/guc-list.html#gp_resource_group_bypass
  GUC list by category: http://docs-gpdb-review-staging.cfapps.io/review/ref_guide/config_params/guc_category-list.html#topic444
* docs - new GUC gp_resource_group_bypass
  - Edits based on review comments.
  - Add link to GUC from Admin Guide.
  - Update TOC for this GUC and other GUCs not in the TOC.
* docs - review updates for GUC gp_resource_group_bypass
* docs - fix typos in definition of GUC gp_resource_group_bypass
* docs - update to GUC gp_resource_group_bypass based on dev changes.
Committed by Mel Kiyama

* docs - update gphdfs parquet support information.
  - Update parquet support to 1.7.0 and later.
  - Change the location of the parquet bundle jar files to https://mvnrepository.com/artifact/org.apache.parquet/parquet-hadoop-bundle (the previous location was http://parquet.apache.org/downloads/).
* docs - review updates of gphdfs parquet support information.
Committed by Jim Doty

The domain changed; this update points at the domain that is active at this moment in history.
- 11 Jul 2018, 5 commits
Committed by Pengzhou Tang

To keep it consistent with the CREATE TABLE syntax, CTAS should also disallow duplicate distribution keys; otherwise backup and restore will break.
Committed by Jimmy Yih

TRUNCATE rewrites the relation by creating a temporary table and swapping it with the real relation. For AO tables, this includes the auxiliary tables, which is concerning for the AO relation's pg_aoseg table: it holds the information on whether an AO segment file is available for write or waiting to be compacted/dropped. Since we do not currently invalidate the AppendOnlyHash cache entry, the entry could have invisible leaks in its AOSegfileStatus array that will be stuck in state AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts the cache entry, either by not using the table (allowing another AO table to cache itself in that slot) or by restarting the database. We fix this issue by invalidating the cache entry at the end of TRUNCATE on AO relations.

Conflicts:
src/backend/commands/tablecmds.c
- TRUNCATE is a bit different between Greenplum 5 and 6. Needed to move the heap_close() to the end to invalidate the AppendOnlyHash entries after dispatch.
- Macros RelationIsAppendOptimized() and IS_QUERY_DISPATCHER() do not exist in 5X_STABLE.
src/test/isolation2/sql/truncate_after_ao_vacuum_skip_drop.sql
- Isolation2 utility mode connections use dbid instead of content id as in Greenplum 6.
Committed by Jimmy Yih

ALTER TABLE commands that are tagged as AT_SetDistributedBy require a gather motion and do their own variation of creating a temporary table for CTAS (basically bypassing the usual ATRewriteTable, which does do AppendOnlyHash cache entry invalidation). Without the AppendOnlyHash cache entry invalidation, the entry could have invisible leaks in its AOSegfileStatus array that will be stuck in state AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts the cache entry, either by not using the table (allowing another AO table to cache itself in that slot) or by restarting the database. We fix this issue by invalidating the cache entry at the end of AT_SetDistributedBy ALTER TABLE cases.

Conflicts:
src/backend/commands/tablecmds.c
- Macro IS_QUERY_DISPATCHER() does not exist in 5X_STABLE.
src/test/isolation2/sql/reorganize_after_ao_vacuum_skip_drop.sql
- Isolation2 utility mode connections use dbid instead of content id as in Greenplum 6.
Committed by David Yozie

Committed by Mel Kiyama

* docs - PL/Container - added note - domain objects not supported.
* docs - PL/Container - updated note for non-support of domain objects
- 10 Jul 2018, 2 commits
Committed by Soumyadeep Chakraborty

Repeated calls to CheckTableExists in gpcrondump were creating a performance issue where filtered backups took significantly longer than non-filtered ones. This was because CheckTableExists, which opens a DB connection, queries the database, and closes the connection, was being called in a loop. Simply hoisting the call out of the loop fixed the issue.

Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
Co-authored-by: Chris Hajas <chajas@pivotal.io>
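The fix amounts to hoisting the per-table existence check out of the loop. A minimal Python sketch of the before/after shapes (function and parameter names here are illustrative, not the actual gpcrondump code):

```python
def filter_tables_slow(tables, check_table_exists):
    # Before: check_table_exists opens a connection, runs a query,
    # and closes the connection -- once per table in the filter.
    return [t for t in tables if check_table_exists(t)]

def filter_tables_fast(tables, fetch_all_tables):
    # After: a single round trip fetches every table name up front,
    # then membership checks are in-memory set lookups.
    existing = set(fetch_all_tables())
    return [t for t in tables if t in existing]
```

The output is identical; only the number of database round trips changes, from one per table to one total.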
Committed by Chris Hajas

Previously, gpdbrestore attempted to truncate tables in the restore set when --truncate was passed. However, if one of these tables was an external table, the restore returned an error, as external tables cannot be truncated. We no longer attempt to truncate external tables at the beginning of the restore.

Authored-by: Chris Hajas <chajas@pivotal.io>
- 07 Jul 2018, 2 commits
Committed by Chris Hajas

The pg_get_partition_template_def and pg_get_partition_def functions take access share locks but do not release them until the end of the transaction. If a transaction is long-running, this can conflict with other user operations. It is not necessary to hold the lock indefinitely, as the lock is only needed for the duration of the function call.

Co-authored-by: Chris Hajas <chajas@pivotal.io>
Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>
Committed by dyozie
- 06 Jul 2018, 3 commits
Committed by Jialun

If a segment exists in gp_segment_configuration but its IP address cannot be resolved, we run into a runtime error on gang creation:

    ERROR: could not translate host name "segment-0a", port "40000" to address: Name or service not known (cdbutil.c:675)

This happens even if segment-0a is a mirror and is marked as down. With this error queries cannot be executed, and gpstart and gpstop also fail. One way to trigger the issue:

- create a cluster with multiple segments;
- remove sdw1's DNS entry from /etc/hosts on mdw;
- kill the postgres primary process on sdw1.

FTS can detect this error and automatically switch to the mirror, but queries still cannot be executed.

(cherry picked from commit dd861e72)
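The failure mode is an address lookup that hard-errors instead of being tolerated for hosts already marked down. A minimal Python illustration of a tolerant resolvability check (a sketch only; the actual fix lives in the server's C code around cdbutil.c, and the function name here is hypothetical):

```python
import socket

def can_resolve(host, port):
    """Return True if host:port resolves to an address, False on a
    DNS failure -- the error that a down mirror with a removed DNS
    entry would otherwise raise at gang creation time."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False
```

Tolerating (or skipping) the lookup for segments already marked down keeps a stale DNS entry from turning into a cluster-wide outage.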
Committed by Mel Kiyama

* docs - update system catalog maintenance information.
  - Updated the Admin Guide and Best Practices for running REINDEX, VACUUM, and ANALYZE.
  - Added a note to the REINDEX reference about running ANALYZE after REINDEX.
* docs - edits for system catalog maintenance updates
* docs - update recommendation for running VACUUM and ANALYZE, based on dev input.
Committed by Lisa Owen
- 03 Jul 2018, 2 commits
Committed by Jialun

Introduce a new GUC, gp_resource_group_bypass: when it is on, queries in the session are not limited by resource groups.
Committed by Jim Doty

The gen pipeline script outputs a suggested command when setting up a dev pipeline. Currently the git remote and git branch have to be edited before executing the command. Since the branch has often already been created and is tracking a remote, it is possible to guess those details. The case statements attempt to prevent suggesting the production branches, falling back to the same string as before.

Authored-by: Jim Doty <jdoty@pivotal.io>
(cherry picked from commit ea12d3a1)
- 30 Jun 2018, 2 commits
Committed by David Yozie
Committed by Ivan Leskin

* Extra docs for gp_external_enable_filter_pushdown. Add extra documentation for 'gp_external_enable_filter_pushdown' and the pushdown feature in the PXF extension.
* Minor doc text fixes, proposed by @dyozie.
* Clarify the pushdown support by PXF. Add the following information:
  - List the PXF connectors that support pushdown;
  - State that the GPDB PXF extension supports pushdown;
  - Add a list of conditions that need to be fulfilled for the pushdown feature to work when the PXF protocol is used.
* Correct the list of PXF connectors with pushdown:
  - State that the Hive and HBase PXF connectors support filter predicate pushdown;
  - Remove references to the JDBC and Apache Ignite PXF connectors, as proposed by @dyozie (these are not officially supported by Greenplum).