- 05 Mar 2019, 1 commit
-
-
Committed by Jacob Champion
Commit 352eb65c had already fixed this bug, in a way that does not ignore failed return codes from the repair scripts being run. This reverts commit 061da8fc. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
- 04 Mar 2019, 2 commits
-
-
Committed by Zhang Shujie
We generate a fake ItemPointer for AO table tuples, but it differs slightly from a heap table's ItemPointer: the offset of a fake ItemPointer can be 0, while the previous code treats 0 as invalid. To work around this, we set the 16th bit of the offset to 1 as a flag so the validity checks pass; the stored value is therefore not the real offset, and any code using a fake ItemPointer must remember to process the flag. The bitmap index build code expects to see tuple IDs in order and gets confused when it sees offset number 32768 (0x8000) before offset number 1; this commit converts 32768 (0x8000) back to 0.
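The encoding described above can be sketched as follows. This is an illustrative model only; the function names and the helper structure are hypothetical, not GPDB's actual macros:

```python
# Sketch of the fake-offset encoding described in the commit message.
# A fake AO ItemPointer may logically carry offset 0, which generic
# validity checks reject, so bit 16 (0x8000) is set as a flag. The
# bitmap index build needs tuple IDs in order, so the flagged zero
# (0x8000) must be mapped back to 0 before comparison.

FAKE_OFFSET_FLAG = 0x8000  # 16th bit marks a fake offset (hypothetical name)

def encode_fake_offset(offset):
    """Set the flag bit so an offset of 0 still passes is-valid checks."""
    return offset | FAKE_OFFSET_FLAG

def decode_for_bitmap_build(offset):
    """Map the flagged zero offset back to 0 so tuple IDs sort correctly."""
    return 0 if offset == FAKE_OFFSET_FLAG else offset
```

Without the decode step, `0x8000` sorts after every real offset, which is exactly the out-of-order symptom the commit fixes.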
-
Committed by Oliver Albertini
* This way, when you ssh onto the VM, you have e.g. psql available to you. Authored-by: Oliver Albertini <oalbertini@pivotal.io>
-
- 02 Mar 2019, 1 commit
-
-
Committed by Francisco Guerrero
- This allows the existing PXF release (Java) to continue working with GPDB 6.
-
- 01 Mar 2019, 9 commits
-
-
Committed by Ning Yu
To query the gpexpand schema we must now connect to the "postgres" database; update a dbname in the gpexpand behave tests that was forgotten in that change.
-
Committed by Zhenghua Lyu
The following utilities do not work while we are in gpexpand phase 1:
* gppkg
* gpconfig
* gpcheckcat
A check is added so that if the cluster is expanding in phase 1, they print an error message and exit.
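The guard these utilities gained might look like the following. This is a hypothetical sketch based on the commit message, not the actual gppylib code; the phase value and error text are illustrative:

```python
# Illustrative guard: refuse to run a management utility while gpexpand
# is in phase 1 (cluster setup). The real utilities obtain the phase
# from the gpexpand status; here it is passed in directly.
import sys

def check_gpexpand_phase(phase):
    """Return an error string if the cluster is in expansion phase 1, else None."""
    if phase == 1:
        return ("ERROR: the cluster is expanding (gpexpand phase 1); "
                "retry after the expansion completes")
    return None

def run_utility(phase):
    """Exit with an error before doing any work if expansion is underway."""
    err = check_gpexpand_phase(phase)
    if err is not None:
        print(err, file=sys.stderr)
        sys.exit(1)
    # ... normal utility work would follow here ...
```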
-
Committed by Ning Yu
This position is used by the gpstate tests to verify the output; we must reset it on each executed command to correctly match multiple commands in one scenario.
-
Committed by Ning Yu
Added a new gpstate argument `-x` to show gpexpand status. The status information is obtained via the gp_expand_get_status() function; progress information for the data expansion phase can also be shown. For `gpstate` or `gpstate -s`, a `Cluster Expansion = In Progress` summary is also displayed if gpexpand is in progress.
-
Committed by Ning Yu
`get_gpexpand_status()` is provided in gppylib.commands.gp; it returns an object whose attributes indicate the current gpexpand status:
- phase: a number specifying the current phase:
  - 0: expansion is not in progress;
  - 1: phase 1, the cluster setup phase;
  - 2: phase 2, the data redistribution phase;
- status: a string describing the current phase (not especially meaningful).
The returned object also has a `get_progress()` method which can be used to further check the progress of the data redistribution phase.
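A minimal stand-in that mirrors the interface described above can make the contract concrete. The real object comes from `gppylib.commands.gp.get_gpexpand_status()`; the class below, its phase-name strings, and the progress payload are invented for illustration:

```python
# Stand-in for the status object returned by get_gpexpand_status(),
# following the attribute names in the commit message (phase, status,
# get_progress). Not the actual gppylib implementation.

class GpExpandStatus:
    PHASE_NAMES = {
        0: "expansion not in progress",
        1: "phase 1: cluster setup",
        2: "phase 2: data redistribution",
    }

    def __init__(self, phase, progress=None):
        self.phase = phase                       # 0, 1, or 2
        self.status = self.PHASE_NAMES[phase]    # human-readable phase string
        self._progress = progress or {}

    def get_progress(self):
        """Progress details are only meaningful during phase 2."""
        return self._progress if self.phase == 2 else {}
```

A caller such as `gpstate -x` would branch on `phase` and only consult `get_progress()` when redistribution is underway.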
-
Committed by Ning Yu
gpexpand used to store its schema tables in the database specified by the user with the -D option. This caused some complicated handling; for example, gpexpand needed to scan all databases to ensure the schema tables in the specified database were correct. To simplify this we retired the -D option and always use the 'postgres' database to store the schema tables.
-
Committed by Lisa Owen
* docs - misc pxf updates
* scp -> cp
* misc edits requested by david
* remove seg failure troubleshooting section; new PR for that
-
Committed by Lisa Owen
* docs - add new host/seg resource group usage views
* clarifications requested by ning
-
- 28 Feb 2019, 4 commits
-
-
Committed by Paul Guo
Replace twophase_XLogPageRead() with read_local_xlog_page() for generic xlog page reads in the two-phase code. logical_read_local_xlog_page() was renamed to read_local_xlog_page() in postgres commit 422a55a6 as a generic xlog page reader. We should use that for reading two-phase state xlog, both because it comes from upstream and because twophase_XLogPageRead() also appears buggy (e.g. it does not consider crossing segment files). We backport 422a55a6 as well, because it is used by another patch that we will also backport (see 2 below).

This patch removes two FIXMEs:

1. Previously XLogCloseReadRecord() was needed to re-initialize state for XLogPageRead() (typically after startup ends). With the new interface that is no longer needed, so the comment below is removed.
   * GPDB_93_MERGE_FIXME: GPDB used to do XLogCloseReadRecord() and then read,
   * do we need to perform something similar with new interface.

2. The FIXME below is deleted when we replace twophase_XLogPageRead(), but the concern is valid: a PREPARE xlog record may exist in an older segment file with a previous timeline id, since during mirror promotion some previous xlogs are copied when creating the new segment file for the new timeline id. We backport postgres patch 49e4196 to address this.
   // GPDB_93_MERGE_FIXME: Is ThisTimeLineID correct here? Do we need to
   // fetch prepare records for past timelines, after promotion?
-
Committed by Simon Riggs
Uses a page-based mechanism to ensure we're using the correct timeline. Tests are included to exercise the functionality using a cold disk-level copy of the master that's started up as a replica with slots intact, but the intended use of the functionality is with later features. Craig Ringer, reviewed by Simon Riggs and Andres Freund
-
Committed by Simon Riggs
Previously we didn't have a generic WAL page read callback function, surprisingly. Logical decoding has logical_read_local_xlog_page(), which was actually generic, so move that to xlogfunc.c and rename it to read_local_xlog_page(). Maintain logical_read_local_xlog_page() so existing callers still work. As requested by Michael Paquier, Alvaro Herrera and Andres Freund
-
Committed by Ben Christel
This commit moves all of the intermediate concourse resources (which are only used internally in the pipeline) to GCP buckets instead of AWS S3 storage. The ((pipeline-name)) variable added to gen_pipeline.py is needed to support the naming conventions used for new GCS objects, which namespace artifacts by pipeline. Co-authored-by: Ben Christel <bchristel@pivotal.io> Co-authored-by: Amil Khanzada <akhanzada@pivotal.io> Co-authored-by: Sambitesh Dash <sdash@pivotal.io> Co-authored-by: Oliver Albertini <oalbertini@pivotal.io>
-
- 27 Feb 2019, 8 commits
-
-
Committed by Jialun
- Retire GP_POLICY_ALL_NUMSEGMENTS and GP_POLICY_ENTRY_NUMSEGMENTS; unify on getgpsegmentCount
- Retire GP_POLICY_MINIMAL_NUMSEGMENTS and GP_POLICY_RANDOM_NUMSEGMENTS
- Change the NUMSEGMENTS-related macros from variable macros to function macros
- Change the default return value of getgpsegmentCount to 1, which represents a singleton postgresql in utility mode
- Rename __GP_POLICY_INVALID_NUMSEGMENTS to GP_POLICY_INVALID_NUMSEGMENTS
-
Committed by Ning Yu
-
Committed by Ning Yu
When operator memory overuse happens we used to produce warning messages; this can be confusing, because we actually expect the overuse to happen. Reduced the log levels for these messages.
-
Committed by Ning Yu
Each hashagg spill batch file needs about 16KB of memory to store its metadata; when there are many batch files, the overall metadata size might exceed the assigned operator memory. In resource groups we can allow this overuse, as there is better memory control at the transaction level and the resource group shared memory is designed to serve these kinds of overuses.
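A quick back-of-envelope calculation shows how the metadata alone can blow past the operator's quota. The batch counts and memory sizes here are illustrative, not taken from GPDB defaults:

```python
# ~16KB of metadata per hashagg spill batch file, per the commit message.
# With enough batch files, metadata alone exceeds the operator quota.
BATCH_METADATA_BYTES = 16 * 1024

def spill_metadata_overuse(n_batches, operator_memory_bytes):
    """Bytes by which batch-file metadata alone exceeds the operator
    memory quota; 0 if it fits."""
    metadata_total = n_batches * BATCH_METADATA_BYTES
    return max(0, metadata_total - operator_memory_bytes)
```

For example, 1000 batch files need ~16MB of metadata, roughly double an 8MB operator quota, which is the kind of overuse the resource group's shared memory is meant to absorb.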
-
Committed by David Yozie
* vacuumdb - add missing equal signs =
* CREATE/ALTER/DROP COLLATION. Adds new references for these commands.
* remove space
* DROP COLLATION. Remove redundant privileges statement.
* SELECT. Add DISTINCT to several clauses, and many edits.
* CREATE TABLE. Adds NO INHERIT, IF NOT EXISTS, column_reference_storage_directive syntax, edits.
* ALTER DOMAIN. Add new forms of the command.
* ALTER EXTENSION. Small edit only.
* ALTER FUNCTION. Add LEAKPROOF.
* ALTER INDEX. Change SET/RESET FILLFACTOR to SET (fillfactor=)
* ALTER OPERATOR FAMILY. Add SP-GiST to descriptions.
* ALTER SEQUENCE. Add IF EXISTS.
* ALTER SERVER. Small edits.
* Add Range Type section & related changes
* ALTER TYPE. Add USAGE privilege requirement.
* ALTER VIEW. Add IF EXISTS keyword; add syntax for view settings.
* ANALYZE. Add info about foreign tables.
* CHECKPOINT. Remove WAL paragraph; other edits.
* ALTER TABLE. Add IF EXISTS, constraints, edits.
* CREATE VIEW. Add view option syntax.
* CREATE OPERATOR TYPE. Minor edits.
* CREATE OPERATOR. Add USAGE requirement.
* createdb - new maintenance-db option; minor edits; simplify synopsis to be consistent with utility output & postgresql docs
* createlang - add note about lower-casing the name
* createuser - add default notices, --interactive option, update examples
* DELETE - fix codeph style
* DROP INDEX - add CONCURRENTLY option
* DROP TABLE - small edit to permissions required
* dropdb - add --maintenance-db option
* droplang - add lowercase notice
* dropuser - add --if-exists; edits around prompting
* clusterdb - add --maintenance-db connection option
* COMMENT. Replace table_name argument with relation_name.
* CREATE AGGREGATE. Add privileges paragraph.
* CREATE CAST. Add privileges required.
* CREATE DOMAIN. Add required privileges, edit example.
* CREATE FUNCTION. Add LEAKPROOF function attribute.
* CREATE INDEX. Add BUFFERING storage parameter.
* CREATE LANGUAGE. Minor edit.
* CREATE TABLE AS. Edits. Deprecate GLOBAL/LOCAL. Also update SELECT and CREATE TABLE to enable links.
* CREATE ROLE. Minor edits.
* GRANT - add USAGE ON DOMAIN, ON TYPE, with related notes
* EXPLAIN. Add BUFFERS option. Fix missing query in example.
* PREPARE. Edits.
* SELECT. Add VALUES clause to with_query; updates to the select list, ORDER BY, LIMIT, FOR UPDATE/FOR SHARE clauses.
* DELETE. Updates to with_query for data-modifying commands.
* INSERT. Updates to with_query for data-modifying commands.
* UPDATE. Updates to with_query for data-modifying commands.
* REVOKE. Add DOMAIN and TYPE variants.
* SET. Minor edit.
* pg_dump. Add --section and related parameters.
* pg_restore - add --section option and related text/edits
* reindexdb - add --maintenance-db connection option
* vacuumdb - add --maintenance-db connection option
* SET TRANSACTION. Snapshot not supported; minor edit.
* postgresql - edits, additions introduced in 9.2; major omissions and re-writes were sourced from the 9.4 docs
* Changes from review
* misc edits
* Remove outputclass tags from sgml conversion
* reformat converted files
-
Committed by Abhijit Subramanya
-
Committed by Abhijit Subramanya
Prior to this patch, the MCVs for text columns were not passed to ORCA. Hence the cardinality estimation for predicates involving text was inaccurate and led to suboptimal plans being picked. This patch passes the MCVs in to ORCA so that it can now estimate cardinality using MCVs for equality and inequality operators on text columns. Co-authored-by: Abhijit Subramanya <asubramanya@pivotal.io> Co-authored-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
-
Committed by David Krieger
All of these checks are done only when the old cluster version has support for gphdfs:
1) skip checking for gphdfs tables on cluster versions where gphdfs support is absent;
2) fail upon a successful check for gphdfs roles.
NOTE: we do not special-case the existence of the gphdfs.so library, even in the absence of gphdfs tables or roles. In other words, the existing library checks force the user to drop all gphdfs-dependent functions before upgrade. Co-authored-by: Francisco Guerrero <aguerrero@pivotal.io>
-
- 26 Feb 2019, 5 commits
-
-
Committed by Huiliang.liu
- On CentOS and Ubuntu, openssl.conf now comes from the system default path; etc/openssl.conf in the gpdb directory has been removed, so we should not export OPENSSL_CONF in greenplum_path.sh.
-
Committed by Oliver Albertini
Vagrant: Make Debian and Centos builds functional
* Refactor bash and ruby code in vagrant setup
* Give unique names to VMs with Gporca and without
* Add an example `vagrant-local.yml` that syncs `~/gpdb` with `rsync`
* Remove gp-xerces, use out-of-the-box Xerces instead
* Use Bentobox builds for Debian and Centos
* Apply the host's GPDB git configuration in the VM: includes git user.name and user.email, clones from the same origin as the user, adds all the user's remotes, and attempts to check out the user's current remote/branch
* gitignore the .vagrant directories
-
Committed by David Krieger
Minor changes are made to pg_upgrade to allow GPDB5 to be upgraded to GPDB6:
1) the pg_ctl arguments need to explicitly contain the dbid, contentid, and numcontents in order to start the GPDB5 cluster;
2) the type of the attnum field of gp_distribution_policy had changed;
3) gp_toolkit is a view, so datatypes it contains that have been modified in GPDB6 (such as name) need not be flagged as errors during pg_upgrade's check of the old cluster;
4) the index access method type 'bitmap' needs to be added to the exclusion list to select only bpchar_pattern_ops index access methods.
Co-authored-by: Jesse Zhang <jzhang@pivotal.io>
-
Committed by David Krieger
Minor modifications are made to pg_dump to allow table attributes and functions to be correctly dumped from a GPDB-5 cluster. These changes essentially fix a bug, since the GPDB-6 pg_dump should be able to dump a GPDB-5 cluster. Co-authored-by: Jesse Zhang <jzhang@pivotal.io>
-
Committed by Shreedhar Hardikar
They are renamed to start with a gp_/GP_/Gp prefix (as appropriate). This will prevent any namespace clashes when building and running external HLL-related GPDB extensions. I decided to keep the directory name the same, since that doesn't matter for name conflicts and none of the other directories in utils started with the gp_ prefix.
-
- 25 Feb 2019, 1 commit
-
- 23 Feb 2019, 4 commits
-
-
Committed by Jacob Champion
The new "orphaned TOAST table" check performed queries against pg_depend by matching rows on OID, but those queries didn't check that the OID belonged to the expected catalog table. OIDs can collide across catalog tables (though it is rare); usually we don't have to think about it because a catalog table is implied by the context, but pg_depend can link any catalog table to any other. Fix the check by limiting results to OIDs that match within pg_class. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
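A toy model makes the collision concrete. This is not the actual SQL from the check; the rows and OID values are made up for illustration, and only the classid constants correspond to real catalog OIDs:

```python
# pg_depend links objects by (classid, objid) pairs. Matching on objid
# alone can hit a colliding OID from a different catalog; restricting
# the match to pg_class (as the fix does) removes the false positive.

PG_CLASS, PG_PROC = 1259, 1255  # catalog classids, used illustratively

depend_rows = [
    {"classid": PG_CLASS, "objid": 5001},  # a TOAST-table dependency
    {"classid": PG_PROC,  "objid": 5001},  # unrelated function, colliding OID
]

def match_by_oid(rows, oid):
    """The buggy lookup: OID only, catalog ignored."""
    return [r for r in rows if r["objid"] == oid]

def match_in_pg_class(rows, oid):
    """The fixed lookup: OID must match within pg_class."""
    return [r for r in rows if r["objid"] == oid and r["classid"] == PG_CLASS]
```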
-
Committed by Alexandra Wang
Add a reminder of our mutual review obligations: ideally we are expected to review at least one other PR when submitting our own.
-
Committed by Alexandra Wang
Reset the fault injector on dbid 2 after re-verifying that the segment is down. Both gp_request_fts_probe_scan() and gp_inject_fault() call getCdbComponentInfo() in order to dispatch to QEs, which triggers the `GetDnsCachedAddress` fault on dbid 2, and gp_request_fts_probe_scan() returns true even before the probe finishes. Therefore, there is a race condition between the FTS probe and the reset of the fault injector: when the reset triggers the fault before the FTS probe completes, the primary is taken down without the fault being removed, which caused all the following tests to fail after a `gprecoverseg -ar` with a double fault detected. Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Francisco Guerrero
We introduce a new structure `ExternalSelectDesc` in ExtProtocolData that encapsulates projection information (`ProjectionInfo`) as well as filter qualifiers. This allows external protocols to do filter pushdown and column projection (which is the contribution of this PR). We have modified the PXF protocol to make use of both these attributes. Co-authored-by: Francisco Guerrero <aguerrero@pivotal.io> Co-authored-by: Shivram Mani <smani@pivotal.io>
-
- 22 Feb 2019, 4 commits
-
-
Committed by Xiaoran Wang
gpload, gpfdist, createdb, dropdb, createuser, dropuser, createlang, droplang, psql, pg_dump, and pg_dumpall are all packaged into the clients package.
1) gpAux/Makefile:
* Remove loaders from gpAux/Makefile.
* Clients now use some system libraries, so there is no need to copy those dependencies into lib.
2) Modify compile_gpdb_clients.bash to export the gpdb_clients bin package for building the clients rpm.
3) Remove unused files under gpAux.
-
Committed by Peifeng Qiu
fe-misc.c belongs to the libpq frontend, and libpq is needed by pygresql. On Windows, Python 2.7 is compiled with VS 2008, and so must all extensions be, including pygresql. VS 2008 is not C99-compliant. Most of libpq comes from upstream and is friendly to non-C99 compilers; we only need to fix pqFlushNonBlocking(), which was introduced in commit 510a20b6 and is unique to GPDB.
-
Committed by Jacob Champion
This line was updated during the recent changes to gp_distribution_policy -- it used to be coalesce(1+array_upper(attrnums,1)-array_lower(attrnums,1),0), which should now just be replaced with array_length() directly. The current implementation is off by one due to the retention of the old "+ 1", and we no longer need the COALESCE because distkey can't be NULL. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
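The arithmetic behind the off-by-one can be checked directly. Modeled here in Python for a 1-based array of n elements (the helper names are illustrative, not from the GPDB source):

```python
# For a 1-based SQL array of n elements: array_lower = 1, array_upper = n,
# so the old expression 1 + array_upper - array_lower already equals n.
# Keeping the stale "+ 1" after switching to array_length() over-counts.

def old_expression(n):
    """coalesce(1 + array_upper(a,1) - array_lower(a,1), 0), 1-based array."""
    if n == 0:
        return 0       # NULL bounds for an empty array -> coalesce to 0
    return 1 + n - 1   # upper = n, lower = 1

def buggy_expression(n):
    """array_length(a,1) with the stale '+ 1' retained: off by one."""
    return n + 1

def fixed_expression(n):
    """array_length(a,1) used directly, as in the fix."""
    return n
```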
-
Committed by Chuck Litzell
- Remove GP/WLM
- Change GPCC entries to gpperfmon (for gpmmon/gpsmon)
- Add GPCC gpccws http and rpc ports and ccagent
- Add GPText ZooKeeper and Solr ports
Update the ports and protocols page for gp-wlm, gpcc, and gptext. Clarify that GPText uses a range of ports. Make the GPCC and GPText rows conditional.
-
- 21 Feb 2019, 1 commit
-
-
Committed by Richard Guo
Previously we had assertions in adjust_selectivity_for_nulltest to ensure that a pushed qual of the form "Var IS (NOT) NULL" is applied on the nullable side of the outer join. Now that we no longer have outer_rel and inner_rel passed in as parameters, we cannot make that assertion anymore. But we know the assertion holds, because otherwise the qual would have been determined not outerjoin_delayed by check_outerjoin_delay() and pushed down to the non-nullable side of the outer join. Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-