- 11 Jul 2018, 4 commits
-
-
Committed by Jimmy Yih
TRUNCATE rewrites the relation by creating a temporary table and swapping it with the real relation. For AO relations this includes the auxiliary tables, which is a concern for the AO relation's pg_aoseg table, which records whether an AO segment file is available for write or is waiting to be compacted/dropped. Since we do not currently invalidate the AppendOnlyHash cache entry, the entry could have invisible leaks in its AOSegfileStatus array that will be stuck in state AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts the cache entry, either by not using the table so that another AO table can cache itself in that slot, or by restarting the database. We fix this issue by invalidating the cache entry at the end of TRUNCATE on AO relations.
Conflicts:
src/backend/commands/tablecmds.c
- TRUNCATE is a bit different between Greenplum 5 and 6. Needed to move the heap_close() to the end to invalidate the AppendOnlyHash entries after dispatch.
- Macros RelationIsAppendOptimized() and IS_QUERY_DISPATCHER() do not exist in 5X_STABLE.
src/test/isolation2/sql/truncate_after_ao_vacuum_skip_drop.sql
- Isolation2 utility mode connections use dbid instead of content id as in Greenplum 6.
-
Committed by Jimmy Yih
ALTER TABLE commands tagged as AT_SetDistributedBy require a gather motion and do their own variation of creating a temporary table for CTAS (basically bypassing the usual ATRewriteTable, which does perform AppendOnlyHash cache entry invalidation). Without the AppendOnlyHash cache entry invalidation, the entry could have invisible leaks in its AOSegfileStatus array that will be stuck in state AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts the cache entry, either by not using the table so that another AO table can cache itself in that slot, or by restarting the database. We fix this issue by invalidating the cache entry at the end of AT_SetDistributedBy ALTER TABLE cases.
Conflicts:
src/backend/commands/tablecmds.c
- Macro IS_QUERY_DISPATCHER() does not exist in 5X_STABLE.
src/test/isolation2/sql/reorganize_after_ao_vacuum_skip_drop.sql
- Isolation2 utility mode connections use dbid instead of content id as in Greenplum 6.
-
Committed by David Yozie
-
Committed by Mel Kiyama
* docs - PL/Container - added note - domain objects not supported.
* docs - PL/Container - updated note for non-support of domain objects
-
- 10 Jul 2018, 2 commits
-
-
Committed by Soumyadeep Chakraborty
Repeated calls to CheckTableExists in gpcrondump were creating a performance issue where filtered backups took significantly longer than non-filtered ones. This was because CheckTableExists, which opens a DB connection, queries the database, and closes the connection, was being called in a loop. Simply hoisting the call out of the loop fixed the issue.
Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
Co-authored-by: Chris Hajas <chajas@pivotal.io>
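The fix described above is a standard loop-hoisting refactor. A minimal sketch, with hypothetical function names (gpcrondump's actual code differs):

```python
# Sketch of the performance fix: replace a per-table existence check
# (one DB connection and query per iteration) with a single up-front
# query whose result is reused across the loop.

def filter_tables_slow(tables, check_table_exists):
    # before: check_table_exists opens a connection, queries, and
    # closes the connection on every call -- O(n) connections
    return [t for t in tables if check_table_exists(t)]

def filter_tables_fast(tables, fetch_existing_tables):
    # after: one query up front, then in-memory membership tests
    existing = set(fetch_existing_tables())
    return [t for t in tables if t in existing]
```

The results are identical; only the number of round trips to the database changes, from one per table to one total.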
-
Committed by Chris Hajas
Previously, gpdbrestore attempted to truncate the tables in the restore set when --truncate was passed. However, if one of these tables was an external table, the restore returned an error, as external tables cannot be truncated. We no longer attempt to truncate external tables at the beginning of the restore.
Authored-by: Chris Hajas <chajas@pivotal.io>
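The exclusion described above amounts to filtering external tables out before issuing any TRUNCATE. A minimal sketch with hypothetical names (not gpdbrestore's actual code):

```python
def tables_to_truncate(restore_set, external_tables):
    # External tables cannot be TRUNCATEd, so drop them from the
    # restore set before the TRUNCATE phase of the restore begins.
    external = set(external_tables)
    return [t for t in restore_set if t not in external]
```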
-
- 07 Jul 2018, 2 commits
-
-
Committed by Chris Hajas
The pg_get_partition_template_def and pg_get_partition_def functions take access share locks but do not release them until the end of the transaction. If a transaction is long-running, this can conflict with other user operations. It is not necessary to hold the lock indefinitely, as the lock is only needed for the duration of the function call.
Co-authored-by: Chris Hajas <chajas@pivotal.io>
Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>
-
Committed by dyozie
-
- 06 Jul 2018, 3 commits
-
-
Committed by Jialun
If a segment exists in gp_segment_configuration but its IP address cannot be resolved, we run into a runtime error on gang creation:
ERROR: could not translate host name "segment-0a", port "40000" to address: Name or service not known (cdbutil.c:675)
This happens even if segment-0a is a mirror and is marked as down. With this error queries cannot be executed, and gpstart and gpstop also fail. One way to trigger the issue:
- create a cluster with multiple segments;
- remove sdw1's DNS entry from /etc/hosts on mdw;
- kill the postgres primary process on sdw1.
FTS can detect this error and automatically switch to the mirror, but queries still cannot be executed. (cherry picked from commit dd861e72)
-
Committed by Mel Kiyama
* docs - update system catalog maintenance information.
--Updated Admin Guide and Best Practices for running REINDEX, VACUUM, and ANALYZE
--Added note to REINDEX reference about running ANALYZE after REINDEX.
* docs - edits for system catalog maintenance updates
* docs - update recommendation for running VACUUM and ANALYZE. Updated based on dev input.
-
Committed by Lisa Owen
-
- 03 Jul 2018, 2 commits
-
-
Committed by Jialun
- Introduce a new GUC, gp_resource_group_bypass: when it is on, queries in this session are not limited by resource groups.
-
Committed by Jim Doty
The gen pipeline script outputs a suggested command when setting up a dev pipeline. Currently the git remote and git branch have to be edited before executing the command. Since the branch has often already been created and is tracking a remote, it is possible to guess those details. The case statements attempt to prevent suggesting the production branches, and fall back to the same string as before.
Authored-by: Jim Doty <jdoty@pivotal.io>
(cherry picked from commit ea12d3a1)
-
- 30 Jun 2018, 6 commits
-
-
Committed by David Yozie
-
Committed by Ivan Leskin
* Extra docs for gp_external_enable_filter_pushdown
Add extra documentation for 'gp_external_enable_filter_pushdown' and the pushdown feature in the PXF extension.
* Minor doc text fixes
Minor documentation text fixes, proposed by @dyozie.
* Clarify the pushdown support by PXF
Add the following information:
- list the PXF connectors that support pushdown;
- state that the GPDB PXF extension supports pushdown;
- add a list of conditions that must be fulfilled for the pushdown feature to work when the PXF protocol is used.
* Correct the list of PXF connectors with pushdown
- state that the Hive and HBase PXF connectors support filter predicate pushdown;
- remove references to the JDBC and Apache Ignite PXF connectors, as proposed by @dyozie (these are not officially supported by Greenplum).
-
Committed by Shreedhar Hardikar
-
Committed by Shreedhar Hardikar
The issue happens because of constant folding in the testexpr of the SUBPLAN expression node. The testexpr may be reduced to a const, and any PARAMs previously used in the testexpr disappear; however, the subplan still remains. This behavior is similar in upstream Postgres 10 and may be a performance consideration. Leaving that aside for now, the constant folding produces elog(ERROR)s when the plan has subplans and no PARAMs are used. The check in `addRemoteExecParamsToParamList()` uses `context.params`, which computes the PARAMs used in the plan, and `nIntPrm = list_length(root->glob->paramlist)`, which is the number of PARAMs declared/created. Given the ERROR messages generated, the above check makes no sense, especially since it won't even trip for the InitPlan bug (mentioned in the comments) as long as there is at least one PARAM in the query. This commit removes this check since it doesn't correctly capture the intent. In theory, it could be replaced by one specifically aimed at InitPlans, that is, find all the param ids used by InitPlans and then make sure they are used in the plan. But we already do this and remove any unused initplans in `remove_unused_initplans()`, so I don't see the point of adding that. Fixes #2839
-
Committed by Trevor Yacovone
This is related to the work we have done to fix the sles11 and windows compilation failures.
Co-authored-by: Lisa Oakley <loakley@pivotal.io>
Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
-
- 29 Jun 2018, 2 commits
-
-
Committed by David Yozie
-
Committed by Jamie McAtamney
This is related to the work we have done to fix the sles11 and windows compilation failures on master.
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Lisa Oakley <loakley@pivotal.io>
-
- 27 Jun 2018, 1 commit
-
-
Committed by Alexander Denissov
Added a new test job to the pipeline to certify GPHDFS with the MAPR Hadoop distribution, and renamed the existing GPHDFS certification job to state that it tests with generic Hadoop. The MAPR cluster consists of 1 node deployed by CCP scripts into GCE. Backported from GPDB master.
- MAPR 5.2
- Parquet 1.8.1
Co-authored-by: Alexander Denissov <adenissov@pivotal.io>
Co-authored-by: Shivram Mani <smani@pivotal.io>
Co-authored-by: Francisco Guerrero <aguerrero@pivotal.io>
-
- 26 Jun 2018, 2 commits
-
-
Committed by mkiyama
-
Committed by Shoaib Lari
For long-running commands such as gpinitstandby with a large master data directory, the server takes a long time, so there is no activity from the client to the server. If ClientAliveInterval is set, the server reports a timeout after ClientAliveInterval seconds. Setting a ServerAliveInterval value less than the ClientAliveInterval forces the client to send a null message to the server, avoiding the timeout.
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>
(cherry picked from commit 15493596)
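The keepalive relationship above can be illustrated with a client-side ssh_config fragment; the interval values here are illustrative, not the ones the Greenplum utilities actually set:

```
# Client side (~/.ssh/config): send an SSH keepalive every 60 seconds.
# This must be shorter than the server's ClientAliveInterval in
# sshd_config (say, 120) so the server always sees activity from the
# client before its idle timeout fires.
Host *
    ServerAliveInterval 60
```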
-
- 23 Jun 2018, 3 commits
-
-
Committed by Lav Jain
Co-authored-by: Lav Jain <ljain@pivotal.io>
Co-authored-by: Ben Christel <bchristel@pivotal.io>
-
Committed by Ivan Leskin
* Change src/backend/access/external functions to extract and pass query constraints;
* Add a field with constraints to 'ExtProtocolData';
* Add 'pxffilters' to gpAux/extensions/pxf and modify the extension to use pushdown.
* Remove duplicate '=' check in PXF
Remove the check for a duplicate '=' in the parameters of an external table. Some databases (MS SQL, for example) may use '=' in a database name or other parameters. Now the PXF extension finds the first '=' in a parameter and treats the whole remaining string as the parameter value.
* Disable pushdown by default
* Disallow passing of constraints of type boolean (the decoding fails on the PXF side);
* Fix implicit AND expressions addition
Fix the implicit addition of an extra 'BoolExpr' to a list of expression items. Before, there was a check that the expression items list did not contain logical operators (and if it did, no extra implicit AND operators were added). This behaviour is incorrect. Consider the following query:
SELECT * FROM table_ex WHERE bool1=false AND id1=60003;
Such a query is translated as a list of three items: 'BoolExpr', 'Var' and 'OpExpr'. Due to the presence of a 'BoolExpr', the extra implicit 'BoolExpr' is not added, and we get the error "stack is not empty ...". This commit changes the signatures of some internal pxffilters functions to fix this error. We pass the number of required extra 'BoolExpr's to 'add_extra_and_expression_items'. As 'BoolExpr's of different origin may be present in the list of expression items, the mechanism of freeing the BoolExpr node changes. The current mechanism of implicit AND expression addition is suitable only until OR operators are introduced (we will then have to add those expressions to different parts of the list, not just the end, as is done now).
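The first-'=' parsing rule described above can be sketched as follows (hypothetical function name; the actual PXF parameter parsing is C code in the extension):

```python
def parse_param(param):
    # Split only at the FIRST '=' and treat the whole remaining string
    # as the value, so values that themselves contain '=' (e.g. some
    # MS SQL database names) are preserved intact instead of being
    # rejected as a duplicate '='.
    name, sep, value = param.partition('=')
    if not sep:
        raise ValueError("missing '=' in parameter: %r" % param)
    return name, value
```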
-
Committed by Lisa Owen
* docs - create ... external ... temp table
* update CREATE EXTERNAL TABLE sgml docs
-
- 22 Jun 2018, 2 commits
-
-
Committed by Abhijit Subramanya
-
Committed by Chuck Litzell
* Edits to apply organizational improvements made in the HAWQ version, using consistent realm and domain names, and testing that procedures work.
* Convert tasks to topics to fix formatting. Clean up the pg_ident.conf topic.
* Convert another task to a topic
* Remove extraneous tag
* Formatting and minor edits
* Added $ or # prompts for all code blocks. Reworked the section "Mapping Kerberos Principals to Greenplum Database Roles" to describe, generally, a user's authentication process and to more clearly describe how a principal name is mapped to a gpdb role name.
* Add the krb_realm auth param; add a description of include_realm=1 for completeness
-
- 21 Jun 2018, 2 commits
-
-
Committed by Dhanashree Kashid
Add tests to ensure sane behavior when a subquery appears nested inside a scalar expression. The intent is to check for correct results. Bump ORCA version to 2.63.0
Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
(cherry picked from commit dd77c59c)
-
Committed by Lisa Owen
-
- 20 Jun 2018, 2 commits
-
-
Committed by Mel Kiyama
--change the command that tests email notification to a psql command.
--remove the old example that uses gmail's public SMTP server
-
Committed by mkiyama
-
- 19 Jun 2018, 2 commits
- 18 Jun 2018, 1 commit
-
-
Committed by Mel Kiyama
* docs - gpbackup/gprestore new functionality.
--gpbackup: new option --jobs to back up tables in parallel.
--gprestore: the --include-table* options support restoring views and sequences.
* docs - gpbackup/gprestore - fixed typos. Updated backup/restore of sequences and views
* docs - gpbackup/gprestore - clarified information on dependent objects.
* docs - gpbackup/gprestore - updated information on locking/quiescent state.
* docs - gpbackup/gprestore - clarify connections in the --jobs option.
-
- 16 Jun 2018, 1 commit
-
-
Committed by Ashwin Agrawal
For a CO table, storageAttributes.compress only conveys whether block compression should be applied. RLE is performed as stream compression within the block, so storageAttributes.compress being true or false doesn't relate to RLE at all. Hence, with rle_type compression, storageAttributes.compress is true for compression levels > 1, where block compression is performed along with the stream compression. For compress level = 1, storageAttributes.compress is always false, as no block compression is applied. Since RLE doesn't relate to storageAttributes.compress, there is no reason to modify it based on rle_type compression. The problem also manifests because the datumstream layer uses the AppendOnlyStorageAttributes in DatumStreamWrite (`acc->ao_attr.compress`) to decide the block type, whereas the cdb storage layer functions use the AppendOnlyStorageAttributes from AppendOnlyStorageWrite (`idesc->ds[i]->ao_write->storageAttributes.compress`). Due to this difference, changing just one of them, and unnecessarily at that, is bound to cause issues during insert. So, this removes the unnecessary and incorrect update to AppendOnlyStorageAttributes. The test case showcases the failing scenario without the patch.
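The rule the commit describes for when block compression applies can be summarized in a small sketch (hypothetical helper; the actual logic lives in Greenplum's C storage layer):

```python
def block_compression_enabled(compresstype, compresslevel):
    # Per the commit message: with rle_type, level 1 is stream (RLE)
    # compression only, so storageAttributes.compress stays false;
    # levels > 1 add block compression on top of stream compression.
    if compresstype == "rle_type":
        return compresslevel > 1
    # Other compression types (e.g. zlib) compress at the block level
    # whenever any compression type is set at all.
    return compresstype not in (None, "", "none")
```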
-
- 14 Jun 2018, 3 commits
-
-
Committed by Ming LI
Because of this bug, the external table catalog previously stored wrong info about 'LOG ERRORS', which can still cause the crash from this bug (this fix only corrects the info when storing; it cannot fix info that was already stored wrongly). In that case the user needs to re-create the external table.
-
Committed by Kris Macoskey
The netbackup jobs are paused because of an expired license, so the centos6 resource is not passing for the release candidate.
Co-authored-by: Lisa Oakley <loakley@pivotal.io>
Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Lisa Oakley
The Netbackup test depends on an external resource with a valid license. The license has expired. We're removing the jobs from blocking the release candidate until the license is renewed.
Co-authored-by: Lisa Oakley <loakley@pivotal.io>
Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-