- 26 Feb 2019, 4 commits
-
-
Committed by Oliver Albertini

Vagrant: Make Debian and CentOS builds functional

* Refactor the bash and ruby code in the Vagrant setup
* Give unique names to the VMs with and without GPORCA
* Add an example `vagrant-local.yml` that syncs `~/gpdb` with `rsync`
* Remove gp-xerces; use the out-of-the-box Xerces instead
* Use Bento box builds for Debian and CentOS
* Apply the host's GPDB git configuration in the VM: include git user.name and user.email, clone from the same origin as the user, add all the user's remotes, and attempt to check out the user's current remote/branch
* gitignore the .vagrant directories
-
Committed by David Krieger

Minor changes are made to pg_upgrade to allow GPDB5 to be upgraded to GPDB6:

1. pg_ctl arguments need to explicitly contain the dbid, contentid, and numcontents in order to start the GPDB5 cluster.
2. The type of the attnum field of gp_distribution_policy has changed.
3. gp_toolkit is a view, so datatypes it contains that were modified in GPDB6 (such as name) need not be flagged as errors during pg_upgrade's check of the old cluster.
4. The index access method type 'bitmap' needs to be added to the exclusion list to select only bpchar_pattern_ops index access methods.

Co-authored-by: Jesse Zhang <jzhang@pivotal.io>
-
Committed by David Krieger

Minor modifications are made to pg_dump to allow table attributes and functions to be correctly dumped from a GPDB-5 cluster. These changes essentially fix a bug, since the GPDB-6 pg_dump should be able to dump a GPDB-5 cluster.

Co-authored-by: Jesse Zhang <jzhang@pivotal.io>
-
Committed by Shreedhar Hardikar

They are renamed to start with a gp_/GP_/Gp prefix (as appropriate). This will prevent any namespace clashes when building & running external HLL-related GPDB extensions. I decided to keep the directory name the same, since that doesn't matter for name conflicts, and none of the other directories in utils start with the gp_ prefix.
-
- 25 Feb 2019, 1 commit
-
- 23 Feb 2019, 4 commits
-
-
Committed by Jacob Champion

The new "orphaned TOAST table" check performed queries against pg_depend by matching rows on OID, but those queries didn't check that the OID belonged to the expected catalog table. OIDs can collide across catalog tables (though it is rare) -- usually we don't have to think about it, because a catalog table is implied by the context, but in this case pg_depend can link any catalog table to any other. Fix the check by limiting results to OIDs that match within pg_class.

Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Alexandra Wang

Add a reminder of our mutual review offset obligations: ideally, we are expected to review at least one other PR when submitting our own.
-
Committed by Alexandra Wang

Reset the fault injector on dbid 2 only after re-verifying that the segment is down. Both gp_request_fts_probe_scan() and gp_inject_fault() call getCdbComponentInfo() in order to dispatch to QEs, which triggers the `GetDnsCachedAddress` fault on dbid 2, and gp_request_fts_probe_scan() returns true even before the probe finishes. There is therefore a race between the FTS probe and the reset of the fault injector: when the reset triggers the fault before the probe completes, the primary is taken down without the fault being removed, which caused all subsequent tests to fail after a `gprecoverseg -ar` with a double fault detected.

Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
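A minimal sketch of the corrected ordering, assuming the three-argument gp_inject_fault() overload and a snake_case spelling of the fault name (both are assumptions, not taken from the commit):

```sql
-- Ask FTS to probe; note this can return before the probe finishes.
SELECT gp_request_fts_probe_scan();
-- Re-verify that the segment really is marked down ('d') first...
SELECT status FROM gp_segment_configuration WHERE dbid = 2;
-- ...and only then reset the fault, so the reset's own dispatch
-- cannot re-trigger it mid-probe.
SELECT gp_inject_fault('get_dns_cached_address', 'reset', 2);
```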
-
Committed by Francisco Guerrero

We introduce a new structure, `ExternalSelectDesc`, in ExtProtocolData that encapsulates projection information (`ProjectionInfo`) as well as filter qualifiers. This allows external protocols to do filter pushdown and column projection (the contribution of this PR). We have modified the PXF protocol to make use of both of these attributes.

Co-authored-by: Francisco Guerrero <aguerrero@pivotal.io>
Co-authored-by: Shivram Mani <smani@pivotal.io>
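For instance (the table definition, path, and PXF profile below are illustrative, not from this commit), a query like the following can now ship both its column list and its qualifier to the external system:

```sql
-- Illustrative external table: only the projected columns (id, amount)
-- and the pushed-down filter on region need to cross the protocol.
CREATE EXTERNAL TABLE sales_ext (id int, region text, amount numeric)
LOCATION ('pxf://tmp/sales?PROFILE=HdfsTextSimple')
FORMAT 'TEXT';

SELECT id, amount FROM sales_ext WHERE region = 'EMEA';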
-
- 22 Feb 2019, 4 commits
-
-
Committed by Xiaoran Wang

gpload, gpfdist, createdb, dropdb, createuser, dropuser, createlang, droplang, psql, pg_dump, and pg_dumpall are all packaged into the clients package.

1. gpAux/Makefile:
   * Remove loaders from gpAux/Makefile.
   * The clients now use some system libraries, so there is no need to copy those dependencies into lib.
2. Modify compile_gpdb_clients.bash to export the gpdb_clients bin package for building the clients rpm.
3. Remove unused files under gpAux.
-
Committed by Peifeng Qiu

fe-misc.c belongs to the libpq frontend, and libpq is needed by pygresql. On Windows, Python 2.7 is compiled with VS 2008, and so must be all of its extensions, including pygresql. VS 2008 is not C99-compliant. Most of libpq comes from upstream and is friendly to non-C99 compilers; we only need to fix pqFlushNonBlocking(), which was introduced in commit 510a20b6 and is unique to GPDB.
-
Committed by Jacob Champion

This line was updated during the recent changes to gp_distribution_policy -- it used to be coalesce(1+array_upper(attrnums,1)-array_lower(attrnums,1),0), which at this point should simply be replaced with array_length() directly. The current implementation is off by one due to the retention of the old "+ 1", and we no longer need the COALESCE because distkey can't be NULL.

Co-authored-by: Jimmy Yih <jyih@pivotal.io>
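A quick check with a literal array (rather than the catalog column) shows why array_length() is the direct replacement for the old expression:

```sql
-- For any non-NULL one-dimensional array the two forms agree, so the
-- COALESCE guard and the bounds arithmetic are no longer needed:
SELECT coalesce(1 + array_upper(a, 1) - array_lower(a, 1), 0) AS old_form,
       array_length(a, 1)                                     AS new_form
FROM (SELECT ARRAY[2, 5, 7] AS a) AS t;
-- old_form | new_form
--     3    |    3
```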
-
Committed by Chuck Litzell

Update the ports and protocols page for gp-wlm, GPCC, and GPText:

- Remove GP/WLM
- Change GPCC entries to gpperfmon (for gpmmon/gpsmon)
- Add the GPCC gpccws http and rpc ports and ccagent
- Add the GPText ZooKeeper and Solr ports
- Clarify that GPText uses a range of ports
- Make the GPCC and GPText rows conditional
-
- 21 Feb 2019, 18 commits
-
-
Committed by Richard Guo

Previously we had assertions in adjust_selectivity_for_nulltest to ensure that a pushed qual of the form "Var IS (NOT) NULL" is applied on the nullable side of the outer join. Now that we no longer have the outer_rel and inner_rel passed in as parameters, we cannot make that assertion any more. But we know the assertion holds, because otherwise the qual would have been determined not outerjoin_delayed by check_outerjoin_delay() and been pushed down to the non-nullable side of the outer join.

Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Daniel Gustafsson

Commit 98627fe7 introduced a workaround for Oid collisions stemming from partition exchange of heap relations that lack an array type. Commit 73434db2 introduced a preallocation check in the Oid dispatcher, which is now responsible for avoiding these collisions. Remove the workaround from the pre-test script.

Reviewed-by: Jacob Champion <pchampion@pivotal.io>
Reviewed-by: Shaoqi Bai <sbai@pivotal.io>
-
Committed by Shaoqi Bai
-
Committed by David Yozie

* vacuumdb - add missing equal signs (=)
* CREATE/ALTER/DROP COLLATION - add new references for these commands
* ALTER CONVERSION - add SET SCHEMA variant
* ALTER OPERATOR - add SET SCHEMA
* ALTER OPERATOR CLASS - add SET SCHEMA
* ALTER OPERATOR FAMILY - add SET SCHEMA, FOR SEARCH/FOR ORDER BY
* ALTER ROLE - add REPLICATION/NOREPLICATION options
* ALTER TABLE - add collation order and table constraints; many edits
* ALTER TYPE - reorganize synopsis; edits
* ALTER USER - add REPLICATION/NOREPLICATION
* BEGIN - add DEFERRABLE options; note that DEFERRABLE has no effect in Greenplum Database
* CLUSTER - edits to usage notes
* COMMENT - add new object types and examples
* COPY - add encoding option; change literal to codeph
* CREATE DOMAIN - add collation option
* CREATE INDEX - add collation
* CREATE LANGUAGE - use CREATE EXTENSION instead for languages repackaged as extensions
* CREATE OPERATOR CLASS - add FOR SEARCH/ORDER BY
* CREATE ROLE - add REPLICATION/NOREPLICATION
* CREATE TABLE - add UNLOGGED table type and COLLATE for table columns; UNLOGGED information/warnings are per segment
* CREATE TYPE - add COLLATABLE; update Compatibility
* CREATE USER - add REPLICATION/NOREPLICATION; make consistent with CREATE ROLE
* CREATE VIEW - edits to usage
* DELETE - add WITH query clause
* DROP COLLATION - remove redundant privileges statement
* DROP TYPE - qualify/hedge type extension compatibility
* EXPLAIN - add JSON and YAML format examples; add missing query from example
* GRANT - minor edits
* INSERT - add WITH [RECURSIVE] clause
* LOCK - SERIALIZABLE transaction locking clarifications
* REVOKE - add missing spaces in syntax
* SELECT - add DISTINCT to several clauses; many edits
* SELECT INTO - add DISTINCT keyword (syntax only) and the UNLOGGED table keyword and definition
* SET TRANSACTION - add DEFERRABLE mode and the transaction_default_deferred GUC; note that DEFERRED syntax is inoperative in GPDB
* TRUNCATE - edits about RESTART and triggers
* UPDATE - add WITH [RECURSIVE] clause
* VACUUM - deprecation notice for unparenthesized syntax
* clusterdb - add equal signs
* createdb - misc edits; rearrange options; add equal signs
* createlang - misc edits; note; add equal signs
* createlang/droplang - remove deprecation note
* createuser - misc edits; add equal signs
* dropdb - add equal signs
* droplang - misc edits; add equal signs
* dropuser - add equal signs
* pg_dump - misc edits; new/rearranged options; new example; note that --serializable-deferrable is a no-op
* pg_dumpall - misc edits; new/rearranged options
* pg_restore - misc edits; reorganize and add some new options
* psql - add equal signs; many additions, reorgs, and edits
* reindexdb - add equal signs
* remove space
* note should be info, not warning
* Address review comments
* Edits from Chuck
-
Committed by Mel Kiyama
-
Committed by Shoaib Lari

Follow convention and use a single underscore for private methods instead of double underscores.

Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Jacob Champion
Every caller had to create the full file path to the repair script, but the only thing that ever changed was the script name, and the test case already stored the directory that was in use. Use that instead and simplify the callers.
-
Committed by Jacob Champion

There are four main types of orphan TOAST tables:

- Double-orphan TOAST tables, due to a missing reltoastrelid in pg_class and a missing pg_depend entry.
- Bad-reference orphan TOAST tables, due to a missing reltoastrelid in pg_class.
- Bad-dependency orphan TOAST tables, due to a missing entry in pg_depend.
- Mismatch orphan TOAST tables, due to the reltoastrelid in pg_class pointing to an incorrect TOAST table.

Repair scripts are generated on a per-segment basis, since catalog changes are required.

Note: A repair script cannot be generated for mismatch orphan TOAST tables, because the current repair implementation does not work with two or more tables; for safety, let the user repair these manually. The fix requires a manual catalog change: update the TOAST table's pg_depend entry, setting the refobjid field to the correct dependent table. Similarly, we do not attempt to repair double-orphan situations.

Authored-by: Kalen Krempely <kkrempely@pivotal.io>
Co-authored-by: Mark Sliva <msliva@pivotal.io>
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
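As a hedged sketch of that manual repair for the mismatch case (the OIDs are placeholders, and the catalog-edit guard differs between GPDB versions), something along these lines would be run on the affected segment:

```sql
-- Placeholder OIDs; do not run as-is. Point the TOAST table's
-- pg_depend entry back at the correct owning relation.
SET allow_system_table_mods = true;  -- boolean in GPDB 6; string-valued in GPDB 5
UPDATE pg_depend
   SET refobjid = 16384              -- OID of the correct dependent table
 WHERE classid    = 'pg_class'::regclass
   AND objid      = 16999            -- OID of the orphaned TOAST table
   AND refclassid = 'pg_class'::regclass;
```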
-
Committed by Mark Sliva

When a glob directory was passed in, not all repair scripts were run, since the command cd'd to one particular directory. Update the command to actually run all repair scripts.

Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
Committed by Jacob Champion

Since only one test method was using this mock, move it into that test method.

Co-authored-by: Jacob Champion <pchampion@pivotal.io>
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Kalen Krempely

The gpcheckcat test for orphan TOAST tables creates repair scripts on individual segments, since it requires catalog changes. This changes repair.py to allow that:

- Update the repair.py unit tests
- Add a unit test for create_segment_repair_scripts()
- Remove references to Repair.TIMESTAMP, which is now in the gpcheckcat GV

Co-authored-by: Jacob Champion <pchampion@pivotal.io>
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Kalen Krempely

This fixes an issue where multiple repair scripts were generated when running the behave tests, because filenames were based on a timestamp that was being instantiated multiple times.

Authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Kalen Krempely

gplog does 'import datetime', which re-exports the datetime module, while another module does 'from datetime import datetime'. Thus, when gpcheckcat does 'from gplog import *', datetime is the class rather than the expected module.

Authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Jacob Champion

The PYTHONPATH wasn't being set up correctly.

Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Jacob Champion

The summary pane handled an oid of None by turning it into an 'N/A' string... but then attempted to print that string as an integer. Always handle it as a string.

Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Alexandra Wang
-
Committed by Alexandra Wang

The ALTER DATABASE SET TABLESPACE command should alter the tablespace on the QD and all the QEs. This patch adds dispatching of the command to the QEs. In Greenplum, the transaction must be atomic across all the segments and the master, hence this operation needs to use two-phase commit. Because of the 2PC, the implementation has to differ from upstream: the transactions cannot be committed locally. Catalog changes are committed, and the old tablespace directory is later removed, via pending deletes, only on commit. The alternative considered was using a separate dispatch to delete the directory instead of a pending delete after committing the ALTER DATABASE SET TABLESPACE transaction, but that seemed like unnecessary overhead compared to the pending-deletes approach.

Fixes GitHub issue #5643.

Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
Co-authored-by: David Kimura <dkimura@pivotal.io>
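Usage is as in upstream (the names below are illustrative); what this patch adds is that the move is made atomic across the master and all segments via two-phase commit:

```sql
-- Illustrative names: create a tablespace and move a database into it.
CREATE TABLESPACE fastspace LOCATION '/data/fastdisk';
ALTER DATABASE mydb SET TABLESPACE fastspace;
```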
-
Committed by Alexandra Wang

This will help keep the table around for debugging an intermittent failure.

Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
- 20 Feb 2019, 9 commits
-
-
Committed by David Krieger

We tweak the existing 6-to-6 upgrade test, test_gpdb.sh, to work on a 5-to-6 upgrade. This allows anyone to upgrade a 5 demo cluster, with various user data, to a 6 demo cluster.

Co-authored-by: Jesse Zhang <jzhang@pivotal.io>
-
Committed by Georgios Kokolatos

The enabled section verifies only the serialized use case. Proper concurrent tests exist and are enabled in isolation <160984c8>. The exact test case from upstream could not be maintained, because the corresponding CREATE INDEX CONCURRENTLY use case is disabled. In order to keep future merge conflicts to a minimum, the tests have been moved to a Greenplum-specific test file. A new file was needed because the test case did not semantically fit the cases in the existing files.

Removes a GPDB_93_MERGE_FIXME.

Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Jialun

Previously, gpexpand would fail if a database, schema, or table name contained a special character, because it created the pending-tables data file with native python writes and loaded it into the working queue table gpexpand.status_detail with COPY, without escaping the data according to COPY's rules. So we now generate the data file with COPY instead of writing it natively from python, making the escape rules match. We have also added the other escape operations related to gpexpand.status_detail.
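To see why COPY has to produce the file, consider a relation name containing a backslash (a contrived example, not from the commit): written verbatim by python, the `\n` sequence in it would be reinterpreted as a newline on load, whereas COPY ... TO escapes it first:

```sql
-- Contrived name containing a backslash; quoted identifiers allow it.
CREATE TABLE "odd\name" (c int);
-- Generating the pending-tables file with COPY writes the name as
-- odd\\name, which COPY ... FROM will correctly unescape on load.
COPY (SELECT c.relname FROM pg_class c WHERE c.relname = E'odd\\name')
TO '/tmp/gpexpand_pending.dat';  -- path is illustrative
```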
-
Committed by Richard Guo

All callers of simplify_function() are using the same parameters as in upstream. This FIXME has presumably gone out of date, so retire it.

Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Sambitesh Dash
This reverts commit 203a0892.
-
Committed by Sambitesh Dash
This reverts commit 8758e2bf.
-
Committed by Ben Christel

This commit moves all of the intermediate Concourse resources (which are only used internally in the pipeline) to GCP buckets instead of AWS S3 storage.

Co-authored-by: Ben Christel <bchristel@pivotal.io>
Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
Co-authored-by: Sambitesh Dash <sdash@pivotal.io>
Co-authored-by: Oliver Albertini <oalbertini@pivotal.io>
-
Committed by Amil Khanzada

The ((pipeline-name)) variable is needed to support the naming conventions used for new GCS objects, which namespace artifacts by pipeline.

Co-authored-by: Ben Christel <bchristel@pivotal.io>
Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by Ashwin Agrawal

To keep the crash_recovery_dtm test from hanging -- this scenario does not expect the master to PANIC -- increase dtx_phase2_retry_count to a higher number, which should help phase 2 processing complete and avoid a master PANIC.
-