- 05 Mar 2019, 11 commits
-
-
Committed by Pengzhou Tang
checkPolicyForUniqueIndex() checks whether the distribution key conflicts with a unique/primary key: for example, a unique index is not allowed on a randomly distributed table but is allowed on a replicated table, and for a normal (hash-)distributed table the set of indexed columns must be a superset of the distribution key columns. What about an entry-distributed table (e.g. a table created in utility mode, which has no record in gp_distribution_policy and which GpPolicyFetch translates to entry-distributed)? Such tables are local to a single database, so adding a unique index should also be allowed. This was spotted by the assertion in checkPolicyForUniqueIndex() when checking the conflict for normal distributed tables. This fixes #5880
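The policy rules described above can be sketched as a small predicate. This is a hypothetical simplification, not the actual checkPolicyForUniqueIndex() code; the policy names and arguments are assumptions for illustration.

```python
# Illustrative sketch of the unique-index policy rules described above.
# Policy type strings and the function shape are assumptions, not GPDB code.

def unique_index_allowed(policy_type, dist_key_cols, index_cols):
    """Return True if a unique index on index_cols is allowed."""
    if policy_type == "random":
        # uniqueness cannot be enforced across segments for random distribution
        return False
    if policy_type in ("replicated", "entry"):
        # replicated: full copy on every segment; entry: local to a single db
        return True
    # hash-distributed: the indexed columns must cover the distribution key
    return set(dist_key_cols) <= set(index_cols)

print(unique_index_allowed("entry", [], ["id"]))
```

The entry-distributed case returning True is exactly the behavior this commit adds.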
-
Committed by Pengzhou Tang
Each QE in a session is now assigned a unique identifier; the QD dispatches a slice table to all QEs, and each slice in the slice table carries a bitmapset of QE identifiers. A QE walks the slices and decides which slice it belongs to by checking for its identifier in the bitmapset. The problem was that a QE identifier was never reused after the QE was destroyed, so identifiers grew monotonically until the bitmapset became inefficient and eventually insufficient to hold them. This commit fixes that by reusing QE identifiers, keeping them within a reasonable range. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
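The reuse scheme can be illustrated with a minimal free-list allocator: released identifiers go back into a pool, so the highest identifier ever handed out (and therefore the bitmapset size) stays bounded by the peak number of live QEs. This is an illustrative sketch, not the actual GPDB implementation.

```python
# Illustrative sketch of bounded identifier reuse (assumption: a simple
# free list; the real GPDB mechanism may differ in detail).

class IdAllocator:
    def __init__(self):
        self.next_id = 0      # next never-used identifier
        self.free_ids = []    # identifiers released by destroyed QEs

    def acquire(self):
        if self.free_ids:
            return self.free_ids.pop()   # reuse before growing the range
        ident = self.next_id
        self.next_id += 1
        return ident

    def release(self, ident):
        self.free_ids.append(ident)      # available for the next QE
```

Without `release()`, every new QE would get a strictly larger identifier, which is the unbounded growth this commit removes.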
-
Committed by Amil Khanzada
- add a rename task to make the artifacts use the new naming convention
- keep pushing to the s3 resources so that other pipelines using those published artifacts don't break (although those s3 resources are now deprecated).
Co-authored-by: Sambitesh Dash <sdash@pivotal.io> Co-authored-by: Amil Khanzada <akhanzada@pivotal.io> Co-authored-by: Ben Christel <bchristel@pivotal.io>
-
Committed by David Kimura
The issue is that we pointed the first snapshot's in-progress transaction array at the second block allocated for xips. As a result, every snapshot N pointed at snapshot N+1's block of allocated xips: the memory intended for snapshot 0 was never used, and, worse, the last snapshot's xip pointed at an address that was never allocated for xips. This bug manifested in interesting ways because it effectively corrupts a local snapshot passed from a writer to a reader whenever the last indexed shared local snapshot's in-progress transaction array is used. When the snapshot is corrupt we cannot guarantee correct visibility of tuples in a table. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
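The off-by-one can be reconstructed abstractly: with N snapshots and N xip blocks, the buggy wiring maps snapshot i to block i+1, leaving block 0 unused and sending the last snapshot out of bounds. This is a pure-Python illustration of the indexing error, not the C shared-memory code.

```python
# Illustrative reconstruction of the off-by-one described above
# (the real code wires C pointers into shared snapshot slots).

NUM_SNAPSHOTS = 4

# Buggy wiring: snapshot i points at xip block i + 1.
buggy = {i: i + 1 for i in range(NUM_SNAPSHOTS)}
# Block 0 is never used, and the last snapshot points past the allocation:
unused_blocks = set(range(NUM_SNAPSHOTS)) - set(buggy.values())
out_of_bounds = [b for b in buggy.values() if b >= NUM_SNAPSHOTS]

# Fixed wiring: snapshot i uses block i.
fixed = {i: i for i in range(NUM_SNAPSHOTS)}
```

`unused_blocks` is `{0}` and `out_of_bounds` is non-empty for the buggy mapping; the fixed mapping uses every block exactly once.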
-
Committed by Chuck Litzell
* Warn not to run gpbackup/gprestore during expand. * Updates from review
-
Committed by Soumyadeep Chakraborty
Co-authored-by: David Sharp <dsharp@pivotal.io> Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io> Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io> Co-authored-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Jacob Champion
We don't have Subject Alternative Name support, and won't until 9.5. Make sure to uncomment these tests when we get there. Co-authored-by: David Krieger <dkrieger@pivotal.io>
-
Committed by Jacob Champion
This has been broken since 94fcdf3f6, which disallowed setting a distribution policy for child partitions. Now that the behave tests correctly fail this test, we're disabling it for now while we decide what the correct repair action should be. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Jacob Champion
The two separate queries made as part of the distribution_policy checks overlapped when the table in question was randomly distributed. This led to two copies of the same DROP CONSTRAINT statement in the repair scripts, which failed. To fix, we remove the overlapping WHERE clause from the second query and focus it only on column-distributed tables. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Jacob Champion
Several layers were discarding failures. The bash script and the psql call itself need to stop on the first failure. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Jacob Champion
Commit 352eb65c had already fixed this bug, in a way that does not ignore failed return codes from the repair scripts being run. This reverts commit 061da8fc. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
- 04 Mar 2019, 2 commits
-
-
Committed by Zhang Shujie
We generate a fake ItemPointer for an AO table's tuples, but it differs slightly from a heap table's ItemPointer: the offset of a fake ItemPointer can be 0, while the previous code treated 0 as invalid. To work around that, we set the 16th bit of the offset to 1 as a flag so it passes those checks; the stored value is then not the real value, so code using a fake ItemPointer has to account for the flag. The bitmap index build code expects to see tuple IDs in order and gets confused when it sees offset number 32768 (0x8000) before offset number 1; this commit converts 32768 (0x8000) back to 0.
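The flag trick described above is plain bit arithmetic: OR in the 0x8000 flag so a real offset of 0 is stored as a nonzero value, and mask it back off when the true offset is needed. The constants mirror the commit message; the function names are illustrative, not GPDB code.

```python
# Illustrative bit arithmetic for the fake AO ItemPointer offset
# (constants from the commit message; helper names are assumptions).

FAKE_FLAG = 0x8000  # the "16th bit" flag marking a fake offset

def encode_offset(real_offset):
    # 0 becomes 0x8000 (32768), so "offset != 0" validity checks pass
    return real_offset | FAKE_FLAG

def decode_offset(stored_offset):
    # strip the flag: 32768 (0x8000) converts back to 0, as this commit does
    return stored_offset & ~FAKE_FLAG
```

Decoding before comparison restores the expected ordering for the bitmap index build, since 0x8000 no longer sorts ahead of offset 1.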
-
Committed by Oliver Albertini
* this way when you ssh onto the VM, you have e.g. psql available to you
Authored-by: Oliver Albertini <oalbertini@pivotal.io>
-
- 02 Mar 2019, 1 commit
-
-
Committed by Francisco Guerrero
- This will allow the existing PXF release (Java) to continue working with gpdb 6
-
- 01 Mar 2019, 9 commits
-
-
Committed by Ning Yu
To query the gpexpand schema we must now connect to the "postgres" database; update a dbname in the gpexpand behave tests that was forgotten in that change.
-
Committed by Zhenghua Lyu
The following utilities do not work while gpexpand is in phase 1:
* gppkg
* gpconfig
* gpcheckcat
Add a check for them so that if the cluster is expanding in phase 1, they print an error message and exit.
-
Committed by Ning Yu
This position is used by the gpstate tests to verify the output; we must reset it on each executed command to correctly match multiple commands in one scenario.
-
Committed by Ning Yu
Added a new gpstate argument `-x` to show gpexpand status. The status information is obtained from the gp_expand_get_status() function; progress information for the data expansion phase can also be shown. For `gpstate` or `gpstate -s`, a `Cluster Expansion = In Progress` summary is also displayed while gpexpand is in progress.
-
Committed by Ning Yu
`get_gpexpand_status()` is provided in gppylib.commands.gp. It returns an object whose attributes indicate the current gpexpand status:
- phase: a number specifying the current phase:
  - 0: expansion is not in progress;
  - 1: phase 1, the cluster setup phase;
  - 2: phase 2, the data redistribution phase;
- status: a string describing the current phase (not very meaningful).
The returned object also has a `get_progress()` method which can be used to further check the progress of the data redistribution phase.
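The documented shape of the status object can be sketched as below. The stub class only mimics the interface the commit message describes; it is not the gppylib implementation, and the progress payload is a made-up example.

```python
# Hypothetical stand-in for the object returned by get_gpexpand_status(),
# following the attribute/method shape documented above (not gppylib code).

class GpExpandStatus:
    def __init__(self, phase, status, progress=None):
        self.phase = phase        # 0 = not in progress, 1 = setup, 2 = redistribution
        self.status = status      # human-readable phase string
        self._progress = progress

    def get_progress(self):
        return self._progress

def describe_phase(status):
    names = {0: "not in progress", 1: "cluster setup", 2: "data redistribution"}
    return names.get(status.phase, "unknown")

status = GpExpandStatus(2, "EXPANSION IN PROGRESS", {"done": 3, "total": 10})
print(describe_phase(status))
```

A caller like gpstate would branch on `phase` and only consult `get_progress()` during the redistribution phase.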
-
Committed by Ning Yu
gpexpand used to store its schema tables in the database specified by the user with the -D option. This required some complicated handling; for example, gpexpand needed to scan all databases to ensure the schema tables in the specified database were correct. To simplify this we retired the -D option and always use the 'postgres' database to store the schema tables.
-
Committed by Lisa Owen
* docs - misc pxf updates * scp -> cp * misc edits requested by david * remove seg failure troubleshooting section, new PR for that
-
Committed by Lisa Owen
* docs - add new host/seg resource group usage views * clarifications requested by ning
-
- 28 Feb 2019, 4 commits
-
-
Committed by Paul Guo
Replace twophase_XLogPageRead() with read_local_xlog_page() for generic xlog page reads in the two-phase code. logical_read_local_xlog_page() was renamed to read_local_xlog_page() in postgres commit 422a55a6 for generic xlog page reads. We should use that for two-phase state xlog reading, since it comes from upstream, and twophase_XLogPageRead() also appears to be buggy (e.g. it does not consider crossing segment files). We backport 422a55a6 also because it is used by another patch that we will backport too (see 2 below). This patch also removes two FIXMEs:
1. Previously XLogCloseReadRecord() was needed to re-initialize for XLogPageRead() (typically after startup ends). With the new interface we of course no longer need that, so this comment is removed:
   * GPDB_93_MERGE_FIXME: GPDB used to do XLogCloseReadRecord() and then read,
   * do we need to perform something similar with new interface.
2. The FIXME below is deleted when replacing twophase_XLogPageRead(), but the concern is valid: a PREPARE xlog record may exist in an older segment file with a previous timeline id, because during mirror promotion some previous xlogs are copied when creating the new segment file for the new timeline id. We backport postgres patch 49e4196 to address this.
   // GPDB_93_MERGE_FIXME: Is ThisTimeLineID correct here? Do we need to
   // fetch prepare records for past timelines, after promotion?
-
Committed by Simon Riggs
Uses page-based mechanism to ensure we’re using the correct timeline. Tests are included to exercise the functionality using a cold disk-level copy of the master that's started up as a replica with slots intact, but the intended use of the functionality is with later features. Craig Ringer, reviewed by Simon Riggs and Andres Freund
-
Committed by Simon Riggs
Previously we didn’t have a generic WAL page read callback function, surprisingly. Logical decoding has logical_read_local_xlog_page(), which was actually generic, so move that to xlogfunc.c and rename to read_local_xlog_page(). Maintain logical_read_local_xlog_page() so existing callers still work. As requested by Michael Paquier, Alvaro Herrera and Andres Freund
-
Committed by Ben Christel
This commit moves all of the intermediate concourse resources (which are only used internally in the pipeline) from AWS S3 storage to GCP buckets. The ((pipeline-name)) variable added to gen_pipeline.py is needed to support the naming conventions used for the new GCS objects, which namespace artifacts by pipeline. Co-authored-by: Ben Christel <bchristel@pivotal.io> Co-authored-by: Amil Khanzada <akhanzada@pivotal.io> Co-authored-by: Sambitesh Dash <sdash@pivotal.io> Co-authored-by: Oliver Albertini <oalbertini@pivotal.io>
-
- 27 Feb 2019, 8 commits
-
-
Committed by Jialun
- Retire GP_POLICY_ALL_NUMSEGMENTS and GP_POLICY_ENTRY_NUMSEGMENTS, unifying on getgpsegmentCount
- Retire GP_POLICY_MINIMAL_NUMSEGMENTS & GP_POLICY_RANDOM_NUMSEGMENTS
- Change the NUMSEGMENTS-related macros from variable macros to function macros
- Change the default return value of getgpsegmentCount to 1, which represents a singleton postgresql in utility mode
- Change __GP_POLICY_INVALID_NUMSEGMENTS to GP_POLICY_INVALID_NUMSEGMENTS
-
Committed by Ning Yu
-
Committed by Ning Yu
When operator memory overuse happens we used to produce warning messages; this can be confusing, as we actually expect the overuse to happen. Reduced the log level of these messages.
-
Committed by Ning Yu
Each hashagg spill batch file needs about 16KB of memory to store its metadata; when there are many batch files the overall metadata size might exceed the assigned operator memory. Under resource groups we can allow this overuse, as there is better memory control at the transaction level and the resource group shared memory is designed to serve exactly these kinds of overuses.
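A back-of-the-envelope calculation shows why the metadata alone can exceed an operator's memory assignment. The ~16KB-per-batch figure comes from the commit message; the batch count is a made-up example.

```python
# Rough arithmetic for the hashagg spill metadata overuse described above.
# 16 KB per spill batch file is the figure from the commit message.

BATCH_META_BYTES = 16 * 1024

def spill_meta_bytes(n_batches):
    """Total metadata memory needed for n spill batch files."""
    return n_batches * BATCH_META_BYTES

# Example: 4096 batch files need 64 MB of metadata, which can exceed
# a small operator memory assignment all by itself.
print(spill_meta_bytes(4096) // (1024 * 1024), "MB")
```

This is why the overuse is tolerated under resource groups rather than treated as an error: the metadata requirement scales with the data, not with the configured operator memory.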
-
Committed by David Yozie
* vacuumdb - add missing equal signs (=)
* CREATE/ALTER/DROP COLLATION - adds new references for these commands
* remove space
* DROP COLLATION - remove redundant privileges statement
* SELECT - add DISTINCT to several clauses, and many edits
* CREATE TABLE - adds NO INHERIT, IF NOT EXISTS, column_reference_storage_directive syntax, edits
* ALTER DOMAIN - add new forms of command
* ALTER EXTENSION - small edit only
* ALTER FUNCTION - add LEAKPROOF
* ALTER INDEX - change SET/RESET FILLFACTOR to SET (fillfactor=)
* ALTER OPERATOR FAMILY - add SP-GiST to descriptions
* ALTER SEQUENCE - add IF EXISTS
* ALTER SERVER - small edits
* add Range Type section & related changes
* ALTER TYPE - add USAGE privilege requirement
* ALTER VIEW - add IF EXISTS keyword; add syntax for view settings
* ANALYZE - add info about foreign tables
* CHECKPOINT - remove WAL paragraph; other edits
* ALTER TABLE - add IF EXISTS, constraints, edits
* CREATE VIEW - add view option syntax
* CREATE OPERATOR TYPE - minor edits
* CREATE OPERATOR - add USAGE requirement
* createdb - new maintenance-db option; minor edits; simplify synopsis to be consistent with utility output & postgresql docs
* createlang - add note about lower-casing the name
* createuser - add default notices, --interactive option, update examples
* DELETE - fix codeph style
* DROP INDEX - add CONCURRENTLY option
* DROP TABLE - small edit to permissions required
* dropdb - add --maintenance-db option
* droplang - add lowercase notice
* dropuser - add --if-exists; edits around prompting
* clusterdb - add --maintenance-db connection option
* COMMENT - replace table_name argument with relation_name
* CREATE AGGREGATE - add privileges paragraph
* CREATE CAST - add privileges required
* CREATE DOMAIN - add required privileges, edit example
* CREATE FUNCTION - add LEAKPROOF function attribute
* CREATE INDEX - add BUFFERING storage parameter
* CREATE LANGUAGE - minor edit
* CREATE TABLE AS - edits; deprecate GLOBAL/LOCAL; also update SELECT and CREATE TABLE to enable links
* CREATE ROLE - minor edits
* GRANT - add USAGE ON DOMAIN, ON TYPE, with related notes
* EXPLAIN - add BUFFERS option; fix missing query in example
* PREPARE - edits
* SELECT - add VALUES clause to with_query, update select list, ORDER BY, LIMIT, FOR UPDATE/FOR SHARE clauses
* DELETE - updates to with_query for data-modifying commands
* INSERT - updates to with_query for data-modifying commands
* UPDATE - updates to with_query for data-modifying commands
* REVOKE - add DOMAIN and TYPE variants
* SET - minor edit
* pg_dump - add --section and related parameters
* pg_restore - add --section option and related text/edits
* reindexdb - add --maintenance-db connection option
* vacuumdb - add --maintenance-db connection option
* SET TRANSACTION - snapshot not supported; minor edit
* postgresql - edits, additions introduced in 9.2; major omissions and rewrites were sourced from the 9.4 docs
* changes from review
* misc edits
* remove outputclass tags from sgml conversion
* reformat converted files
-
Committed by Abhijit Subramanya
-
Committed by Abhijit Subramanya
Prior to this patch, the MCVs for text columns were not being passed to ORCA, so cardinality estimation for predicates involving text was inaccurate and led to suboptimal plans being picked. This patch passes the MCVs to ORCA so that it can estimate cardinality using MCVs for the equal and not-equal operators on text columns. Co-authored-by: Abhijit Subramanya <asubramanya@pivotal.io> Co-authored-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
-
Committed by David Krieger
All of these checks are done only when the old cluster version has support for gphdfs:
1) skip checking for gphdfs tables for cluster versions where gphdfs support is absent;
2) fail upon a successful check for gphdfs roles.
NOTE: we do not special-case the existence of the gphdfs.so library, even in the absence of gphdfs tables or roles. In other words, the existing library checks force the user to drop all gphdfs-dependent functions before upgrade. Co-authored-by: Francisco Guerrero <aguerrero@pivotal.io>
-
- 26 Feb 2019, 5 commits
-
-
Committed by Huiliang.liu
- For CentOS and Ubuntu, openssl.conf now comes from the system default path; etc/openssl.conf in the gpdb directory has been removed, so we should no longer export OPENSSL_CONF in greenplum_path.sh.
-
Committed by Oliver Albertini
Vagrant: Make Debian and Centos builds functional
* Refactor bash and ruby code in vagrant setup
* Give unique names to VMs with Gporca and without
* add an example `vagrant-local.yml` that syncs `~/gpdb` with `rsync`
* Remove gp-xerces, use out-of-the-box Xerces instead
* use Bentobox builds for Debian and Centos
* Apply host's GPDB git configuration in VM: includes git user.name, user.email, clones from the same origin as user, adds all the user's remotes, and attempts to check out user's current remote/branch
* gitignore the .vagrant directories
-
Committed by David Krieger
Minor changes are made to pg_upgrade to allow GPDB5 to be upgraded to GPDB6:
1) pg_ctl arguments need to explicitly contain the dbid, contentid and numcontents in order to start the GPDB5 cluster;
2) the type of the attnum field of gp_distribution_policy has changed;
3) gp_toolkit is a view, so datatypes it contains that were modified in GPDB6 (such as name) need not be flagged as errors during pg_upgrade's check of the old cluster;
4) index access method type 'bitmap' needs to be added to the exclusion list to select only bpchar_pattern_ops index access methods.
Co-authored-by: Jesse Zhang <jzhang@pivotal.io>
-
Committed by David Krieger
Minor modifications are made to pg_dump to allow table attributes and functions to be correctly dumped for a GPDB-5 cluster. These changes essentially fix a bug, as the GPDB-6 pg_dump should be able to dump a GPDB-5 cluster. Co-authored-by: Jesse Zhang <jzhang@pivotal.io>
-
Committed by Shreedhar Hardikar
They are renamed to start with a gp_/GP_/Gp prefix (as appropriate). This will prevent namespace clashes when building & running external HLL-related GPDB extensions. I decided to keep the directory name the same since that doesn't matter for name conflicts, and none of the other directories in utils start with the gp_ prefix.
-