- 03 Mar 2017, 1 commit
-
-
Committed by Pengzhou Tang
gp_resource_manager is reserved for later use, to switch the resource control strategy from resource queue to a strategy under development named resource group. E.g.: gpconfig -c gp_resource_manager -v 'group' or gpconfig -c gp_resource_manager -v 'queue'. For the change to take effect, a restart of the cluster is needed.
-
- 28 Feb 2017, 3 commits
-
-
Bad things happen otherwise. One case in point is CREATE DATABASE followed by a crash. CREATE DATABASE requests a checkpoint after inserting the new tuple into pg_database. The crash happens right after CREATE DATABASE commits and before the clog update is flushed to disk. Relcache initialization before xlog replay will then set the HEAP_XMIN_INVALID hint bit in the newly created database's tuple because the clog did not report the xmin as committed. The FileRep processes affected include:
- recovery process
- resync manager process
- resync worker process
-
Committed by Daniel Gustafsson
The error messages are developer- or debug-facing; no reason to believe this will break anyone's regexing of logfiles in prod.
-
Committed by Jimmy Yih
The current dbInfoRel hash table key only contains the relfilenode oid. However, relfilenode oids can be duplicated across different tablespaces, which can cause dropdb (and possibly persistent rebuild) to fail. This commit adds the tablespace oid as part of the dbInfoRel hash table key for more uniqueness. One thing to note is that a constructed tablespace/relfilenode key is compared with other keys using memcmp. Supposedly this should be fine, since the struct just contains two Oid variables and the keys are always palloc0'd; the alignment should be fine during comparison.
-
- 26 Feb 2017, 4 commits
-
-
Committed by Daniel Gustafsson
The gp_enable_alter_table_inherit_cols GUC was used to allow a list of columns to override the attribute discovery for inheritance in ALTER TABLE INHERIT. According to code comments, the only consumer of this was gpmigrator, but no callsite remains and no support in pg_dumpall remains either. Remove the leftovers to avoid a potential footgun and get us closer to upstream code.
-
Committed by Daniel Gustafsson
The gp_external_grant_privileges GUC was needed before 4.0 to let non-superusers create external tables for the gphdfs and http protocols. This GUC was however deprecated during the 4.3 cycle, so remove all traces of it. The utility of the GUC was replaced in 4.0 when rights management for external tables was implemented with the normal GRANT/REVOKE framework, so this has been dead code for quite some time. Remove the GUC, the code which handles it, all references to it from the documentation, and a release notes entry.
-
Committed by Daniel Gustafsson
The gp_eager_hashtable_release GUC was deprecated in version 4.2 in 2011 when the generic eager free framework was implemented. The leftover gp_eager_hashtable_release was asserted to be true and never intended to be turned off. The same body of work deprecated the max_work_mem setting, which was bounding the work_mem setting. While not technically tied to eager hashtable release, remove it as well since it's deprecated, undocumented and not terribly useful. The relevant commit in the closed source repo is 88986b7d.
-
Committed by Daniel Gustafsson
The gp_hashagg_compress_spill_files GUC was deprecated in 2010 when it was replaced by gp_workfile_compress_algorithm. The leftovers haven't done anything for quite some time, so remove the GUC. The relevant commit in the closed source repo is c1ce9f03.
-
- 24 Feb 2017, 1 commit
- 23 Feb 2017, 2 commits
-
-
Bad things happen otherwise. One case in point is CREATE DATABASE followed by a crash. CREATE DATABASE requests a checkpoint after inserting the new tuple into pg_database. The crash happens right after CREATE DATABASE commits and before the clog update is flushed to disk. Relcache initialization before xlog replay will then set the HEAP_XMIN_INVALID hint bit in the newly created database's tuple because the clog did not report the xmin as committed.
-
Committed by Kenan Yao
Signed-off-by: Gang Xiong <gxiong@pivotal.io>
-
- 14 Feb 2017, 3 commits
-
-
Committed by Heikki Linnakangas
It hasn't done anything since 2010. If I'm reading the commit log correctly, it was added and deprecated only a few months apart, and probably hasn't done anything in any released version.
-
Committed by Heikki Linnakangas
SendDummyPacket() is completely specific to the UDP interconnect implementation. Along the way, I couldn't resist some cosmetic cleanup: use %m rather than strerror(errno), avoid unnecessary variable initializations, and pgindent.
-
Committed by Heikki Linnakangas
And other misc cleanup.
-
- 13 Feb 2017, 1 commit
-
-
Committed by Heikki Linnakangas
This hasn't been tested for a while. And if someone wants to build GPDB on Solaris, they should use autoconf tests and the upstream "#ifdef _sparc" method to guard platform-dependent code, rather than the GPDB-specific "pg_on_solaris" flag.
-
- 10 Feb 2017, 1 commit
-
-
Committed by Heikki Linnakangas
-
- 04 Feb 2017, 1 commit
-
-
Committed by Omer Arap
Previously the gporca translator was only pruning the non-visible system columns from the table descriptor for non-partitioned `appendonly` tables, or if the partitioned table was marked as `appendonly` at the root level. If one of the leaf partitions is marked as `appendonly` but the root is not, the system columns still appear in the table descriptor. This commit fixes the issue by checking whether the root table has `appendonly` partitions and pruning system columns if it does.
-
- 03 Feb 2017, 1 commit
-
-
Committed by Ashwin Agrawal
Context: gp_fastsequence is used to generate and keep track of row numbers for AO and CO tables. Row numbers for AO/CO tables act as a component of the TID, stored in index tuples and used during index scans to look up the intended tuple. Hence this number must be a monotonically increasing value. It also must not roll back, irrespective of the insert/update transaction aborting for the AO/CO table, as reusing row numbers across aborted transactions would yield wrong results for index scans. Also, entries in gp_fastsequence must only exist for the lifespan of the corresponding table.
Change: Given those special needs, reserved entries in gp_fastsequence are now created as part of create table itself instead of deferring their creation to insert time. Insert within the same transaction as create table is the only scenario that needs coverage from these precreated entries, so "reserved entries" here means an entry for segfile 0 (used by CTAS or ALTER) and segfile 1 (used by insert within the same transaction as create). All other entries continue to use frozen inserts to gp_fastsequence, as they can only happen after the create table transaction has committed. Leveraging MVCC to handle cleanup of gp_fastsequence entries makes it possible to get rid of the special recovery and abort code performing frozen deletes. With that code gone, this fixes issues like:
1] `REINDEX DATABASE` or `REINDEX TABLE pg_class` hang on segment nodes if an error is encountered after PREPARE TRANSACTION.
2] Dangling gp_fastsequence entries in the scenario where a transaction created an AO table, inserted tuples, and aborted after the prepare phase was complete. To clean up gp_fastsequence, one must open the relation and perform a frozen heap delete to mark the entry as invisible. But if the backend performing the abort prepared is not connected to the same database, then the delete operation cannot be done, leaving dangling entries.
This is the output of helpful interaction with Heikki Linnakangas and Asim R P.
See discussion on gpdb-dev, thread 'reindex database abort hang': https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/ASml6lN0qRE
-
- 20 Jan 2017, 1 commit
-
-
Committed by alldefector
Binary COPY was previously disabled in Greenplum; this commit re-enables the binary mode by incorporating the upstream code from PostgreSQL. Patch by GitHub user alldefector, with additional hacking by Daniel Gustafsson.
-
- 13 Jan 2017, 2 commits
-
-
Committed by Heikki Linnakangas
For simplicity. This is less error-prone, too, in the face of future changes to ExprContext.
-
Committed by Heikki Linnakangas
Pass the EState that contains it to where it's needed, instead.
-
- 07 Jan 2017, 1 commit
-
-
Committed by Foyzur Rahman
This reverts commit 48c495a1.
-
- 04 Jan 2017, 1 commit
-
-
Committed by foyzur
Undo selected partitions before reselecting new partitions, to avoid unnecessary leftover partitions from previous selections. * Adding an ICG qp_dpe test to verify that the partitions are reset for each outer tuple.
-
- 27 Dec 2016, 1 commit
-
-
Committed by Daniel Gustafsson
Fixing incorrect filename identifiers in the header comment blocks around the code, mostly stemming from copy-paste it would seem.
-
- 22 Dec 2016, 1 commit
-
-
Committed by Ashwin Agrawal
With commit 324653d3, we stopped performing checkpoints within filerep backend processes like the ResyncManager process, and instead started using the RequestCheckpoint mechanism to get the job done by the checkpoint process. Since the CreateCheckpoint code also handles the resync-to-sync transition processing, due to locking constraints, the mechanism needs to be modified to convey the same to the checkpoint process. Hence, add a flag so that the ResyncManager process can tell the checkpoint process to perform the transition actions. Without this change, the primary segment would encounter a fault and transition back to changetracking from resync instead of moving to the sync state.
-
- 21 Dec 2016, 1 commit
-
-
Committed by Ashwin Agrawal
A QE reader leverages the SharedLocalSnapshot to perform visibility checks, and the QE writer is responsible for keeping the SharedLocalSnapshot up to date. Before this fix, the SharedLocalSnapshot was only updated by the writer while acquiring the snapshot; if a transaction id was assigned to a subtransaction after it had taken the snapshot, that was not reflected. Due to this, when a QE reader called TransactionIdIsCurrentTransactionId, it could sometimes get false, depending on timing, for subtransaction ids used by the QE writer to insert/update tuples. Hence, to fix the situation, the SharedLocalSnapshot is now updated when assigning a transaction id, and the id is deregistered if the subtransaction aborts. Also, adding a faultinjector to suspend the cursor QE reader instead of the guc/sleep used in the past. Moving cursor tests from bugbuster to ICG and adding a deterministic test to exercise the behavior. Fixes #1276, reported by @pengzhout
-
- 20 Dec 2016, 1 commit
-
-
Committed by Heikki Linnakangas
This commit substantially rewrites pg_upgrade to handle upgrading a Greenplum cluster from 4.3 to 5.0. The Greenplum specifics of pg_upgrade are documented in contrib/pg_upgrade/README.gpdb. A summary of the changes is listed below:
- Make pg_upgrade pass the pre-checks against GPDB 4.3.
- Restore the dumped schema in utility mode: pg_upgrade is executed on a single server in offline mode, so ensure we are using utility mode.
- Disable pg_upgrade checks that don't apply when upgrading to 8.3: When support for upgrading to Greenplum 6.0 is added, the checks that make sense to backport will need to be re-added.
- Support AO/AOCS tables: This bumps the AO table version number, and adds a conversion routine for numeric attributes. The on-disk format of numerics changed between PostgreSQL 8.3 and 8.4. With this commit, we can distinguish between AO segments created in the old format and the new, and read both formats. New AO segments are always created in the new format. This also performs a check for AO tables having NUMERIC attributes without free segfiles. Since AO table segments cannot be rewritten if there are no free segfiles, issue a warning if such a table is encountered during the upgrade.
- Add code to convert heap pages offline: Bumps the heap page format version number. While this isn't strictly necessary when we're doing the conversion offline, it reduces confusion if something goes wrong.
- Add a check for the money datatype: the upgrade doesn't support the money datatype, so check for its presence and abort the upgrade if found.
- Create new Oids in the QD and pass the new Oids in the dump for pg_upgrade on the QEs: When upgrading from GPDB4 to 5, we need to create new arraytypes for the base relation rowtypes in the QD, but we also need to dispatch these new OIDs to the QEs. Objects assigning InvalidOid in the Oid dispatcher will cause a new Oid to be assigned. Once the new cluster is restored, dump the new Oids into a separate dumpfile which isn't unlinked on exit. If this file is placed into the cwd of pg_upgrade on the QEs, it will be pulled into the db dump and used during restore, thus "dispatching" the Oids from the QD even though they are offline. pg_upgrade doesn't at this point know whether it's running on a QD or a QE, so it will always dump this file and include the InvalidOid markers.
- gp_relation_node is reset and rebuilt during upgrade once the data files from the old cluster are available to the new cluster. This change required altering how checkpoints are requested in the backend.
- Mark indexes as invalid to ensure they are rebuilt in the new cluster.
- Copy the pg_distributedlog from old to new during upgrade: We need the distributedlog in the new cluster to be able to start up once the upgrade has pulled over the clog.
- Don't delete dumps when running with --debug: While not specific to Greenplum, this is a local addition which greatly helps testing and development of pg_upgrade.
For testing purposes, a small test cluster created with Greenplum 4.3 is included in contrib/pg_upgrade/test.
Heikki Linnakangas, Daniel Gustafsson and Dave Cramer
-
- 13 Dec 2016, 1 commit
-
-
Committed by Asim R P
Refactor the phase 2 retry logic of distributed transactions so that the retry happens immediately after failure instead of happening inside EndCommand(). The patch also increases the number of retries in case of failure to 2, and introduces a GUC called dtx_phase2_retry_count to control the number of retries.
-
- 28 Nov 2016, 1 commit
-
-
Committed by Daniel Gustafsson
-
- 18 Nov 2016, 1 commit
-
-
Committed by Heikki Linnakangas
-
- 15 Nov 2016, 1 commit
-
-
Committed by Heikki Linnakangas
Macros like this might be a good idea, but all the upstream code just uses the "offsetof(<struct>, <member>) + sizeof(<elem type>) * <count>" idiom directly, so let's follow the example. The SIZEOF_FIELD and VARELEMENTS_TO_FIT macros were outright unused.
-
- 14 Nov 2016, 1 commit
-
-
Committed by xiong-gang
pqFlush sends data synchronously even though the socket is set O_NONBLOCK, which degrades performance. This commit uses pqFlushNonBlocking instead, and synchronizes the completion of dispatching to all Gangs before query execution. Signed-off-by: Kenan Yao <kyao@pivotal.io>
-
- 07 Nov 2016, 1 commit
-
-
Committed by Heikki Linnakangas
Instead of carrying a "new OID" field in all the structs that represent CREATE statements, introduce a generic mechanism for capturing the OIDs of all created objects, dispatching them to the QEs, and using those same OIDs when the corresponding objects are created in the QEs. This allows removing a lot of scattered changes in DDL command handling that were previously needed to ensure that objects are assigned the same OIDs in all the nodes. This also provides the groundwork for pg_upgrade to dictate the OIDs to use for upgraded objects. The upstream has mechanisms for pg_upgrade to dictate the OIDs for a few objects (relations and types, at least), but in GPDB, we need to preserve the OIDs of almost all object types.
-
- 04 Nov 2016, 1 commit
-
-
Committed by xiong-gang
Signed-off-by: Kenan Yao <kyao@pivotal.io>
-
- 02 Nov 2016, 3 commits
-
-
Committed by Heikki Linnakangas
This meant moving the version field from pg_appendonly to the pg_aoseg_<oid> table (or pg_aocsseg_<oid>, for AOCS). We can still read and write both formats, but new segments will always be created in the new format (except if you set the test_appendonly_version_default GUC).
-
Committed by Daniel Gustafsson
Ensure that the header guards match the actual name of the file.
-
Committed by Daniel Gustafsson
Remove unused fields from past version control systems and ensure that all filenames in the comments match the actual name of the file. Also fix some spelling and references.
-
- 20 Oct 2016, 3 commits
-
-
Committed by Ashwin Agrawal
As persistent tables start using the native heap free mechanism, the previous_free_tid field in the gp_persistent_* tables is no longer needed.
-
Committed by Ashwin Agrawal
It's very error-prone to externally maintain a variable to track the max TID for a table; one needs to worry about a lot of cases like vacuum. Persistent tables historically used this mechanism to perform certain checks, which seem irrelevant now that we are freeing tuples in the regular heap way for persistent tables as well.
-
Committed by Ashwin Agrawal
Persistent tables historically implemented a different mechanism to maintain free tuples, using an on-disk freelist chain of their own. It proved extremely hard to maintain this freelist, and it required a lot of supporting code to maintain and validate its integrity. Hence this commit leverages the native heap delete and vacuum framework to manage tuple deletion for persistent tables as well.
-