- 21 Dec 2016, 3 commits
-
-
Committed by Ashwin Agrawal
Relfilenode is only unique within a tablespace; across tablespaces, the same relfilenode may be allocated within a database. Currently, gp_relation_node stores only relfilenode and segment_num, and has a unique index on just those fields, without the tablespace. It therefore breaks when the same relfilenode is allocated to two tables within a database.
-
Committed by Ashwin Agrawal
Tests currently use gprecoverseg -r to rebalance the segments after performing failover to the mirror. This is mainly to set the stage correctly for the next tests, not to test gprecoverseg -r itself, so for speed gpstop -air is a much better alternative.
-
Committed by Ashwin Agrawal
The QE reader leverages SharedLocalSnapshot to perform visibility checks, and the QE writer is responsible for keeping the SharedLocalSnapshot up to date. Before this fix, the SharedLocalSnapshot was only updated by the writer while acquiring the snapshot; if a transaction id was assigned to a subtransaction after the snapshot had been taken, it was not reflected. As a result, when the QE reader called TransactionIdIsCurrentTransactionId, it could, depending on timing, return false for subtransaction ids used by the QE writer to insert/update tuples. To fix this, the SharedLocalSnapshot is now updated when a transaction id is assigned, and the id is deregistered if the subtransaction aborts. Also, add a fault injector to suspend the cursor QE reader, instead of the guc/sleep used in the past. Move the cursor tests from bugbuster to ICG and add a deterministic test to exercise the behavior. Fixes #1276, reported by @pengzhout
-
- 20 Dec 2016, 13 commits
-
-
Committed by Heikki Linnakangas
Remnants of workfile caching code that was removed earlier.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
Instead of performing lookups based on relation name/oid, ensure that the old and new database info structures are sorted identically, and match entries by array index in a single pass over the synchronized arrays. This backports parts of the commit below from upstream, introduced in PostgreSQL 9.1:

commit 002c105a
Author: Bruce Momjian <bruce@momjian.us>
Date: Sat Jan 8 13:44:44 2011 -0500

In pg_upgrade, remove functions that did sequential array scans looking up relations, but rather order old/new relations and use the same array index value for both. This should speed up pg_upgrade for databases with many relations.
-
Committed by Daniel Gustafsson
This works by building an index array containing all the pg_type entries on the first lookup; once built, the index array can be queried and further SQL queries avoided. The corresponding array type is cached together with the base type.
-
Committed by Daniel Gustafsson
Ensure that we are directing the checkpoint to the process which is responsible for checkpointing. CreateCheckpoint() should only be used by the checkpointer process, to ensure we aren't losing any requested operations.
-
Committed by Heikki Linnakangas
Backport another pg_upgrade-related change from upstream. Distinguishing between binary-upgrade and "normal" utility mode turns out to be rather handy. CAUTION: This removes the GPDB-specific postmaster command-line options -b and -C, because they clashed with the newly introduced -b flag, used to choose binary-upgrade mode! In order to invoke the GPDB functionality previously reached via -b -C, the long option forms need to be used. This commit switches pg_ctl over to using long options. This commit adds a substantial amount of Greenplum-specific code to handle binary upgrade in the database backend. Below is a list of the changes:
- When restoring a schema dump on a master node in binary-upgrade mode, we want to populate gp_distribution_policy, while in "normal" utility mode we don't. In passing, simplify the query used to fetch the entries from gp_distribution_policy in pg_dump.
- In binary-upgrade mode, pg_dump will preassign Oids for the Oid dispatcher, to ensure that object Oids are stable across dump/restore. This requires the Oid dispatcher to handle binary-upgrade mode. Dump files created in binary-upgrade mode will assign multiple Oids into the preassigned Oids list, sometimes for objects not created within the current transaction. Allow the preassigned_oids list to preserve the Oids across end-of-xact when in binary-upgrade mode.
- The binary-upgrade functions from pg_upgrade_support are used to preassign Oids during pg_dump in binary mode and are installed into a separate binary_upgrade schema. Due to a chicken-and-egg problem, installing these functions by themselves would fail Oid assignment, as they are required for Oid assignment. This reserves a block of Oids for use with the binary_upgrade schema, to ensure that the functions can be installed.
The original commit from upstream, which has been heavily amended:

commit 76dd09bb
Author: Bruce Momjian <bruce@momjian.us>
Date: Mon Apr 25 12:00:21 2011 -0400

Add postmaster/postgres undocumented -b option for binary upgrades. This option turns off autovacuum, prevents non-super-user connections, and enables oid setting hooks in the backend. The code continues to use the old autovacuum disable settings for servers with earlier catalog versions. This includes a catalog version bump to identify servers that support the -b option.

Heikki Linnakangas, Daniel Gustafsson and Dave Cramer
-
Committed by Heikki Linnakangas
This commit substantially rewrites pg_upgrade to handle upgrading a Greenplum cluster from 4.3 to 5.0. The Greenplum specifics of pg_upgrade are documented in contrib/pg_upgrade/README.gpdb. A summary of the changes is listed below:
- Make pg_upgrade pass the pre-checks against GPDB 4.3.
- Restore the dumped schema in utility mode: pg_upgrade is executed on a single server in offline mode, so ensure we are using utility mode.
- Disable pg_upgrade checks that don't apply when upgrading to 8.3: when support for upgrading to Greenplum 6.0 is added, the checks that make sense to backport will need to be re-added.
- Support AO/AOCS tables: this bumps the AO table version number and adds a conversion routine for numeric attributes. The on-disk format of numerics changed between PostgreSQL 8.3 and 8.4. With this commit, we can distinguish between AO segments created in the old format and the new, and read both formats. New AO segments are always created in the new format. Also perform a check for AO tables having NUMERIC attributes without free segfiles: since AO table segments cannot be rewritten if there are no free segfiles, issue a warning if such a table is encountered during the upgrade.
- Add code to convert heap pages offline: bumps the heap page format version number. While this isn't strictly necessary, since we're doing the conversion offline, it reduces confusion if something goes wrong.
- Add a check for the money datatype: the upgrade doesn't support the money datatype, so check for its presence and abort the upgrade if found.
- Create new Oids in the QD and pass new Oids in the dump for pg_upgrade on the QEs: when upgrading from GPDB 4 to 5, we need to create new array types for the base relation rowtypes in the QD, but we also need to dispatch these new Oids to the QEs. Objects assigning InvalidOid in the Oid dispatcher will cause a new Oid to be assigned. Once the new cluster is restored, dump the new Oids into a separate dumpfile which isn't unlinked on exit.
If this file is placed into the cwd of pg_upgrade on the QEs, it will be pulled into the db dump and used during restore, thus "dispatching" the Oids from the QD even though they are offline. pg_upgrade doesn't at this point know whether it's running on a QD or a QE, so it will always dump this file and include the InvalidOid markers.
- gp_relation_node is reset and rebuilt during upgrade once the data files from the old cluster are available to the new cluster. This change required altering how checkpoints are requested in the backend.
- Mark indexes as invalid to ensure they are rebuilt in the new cluster.
- Copy the pg_distributedlog from old to new during upgrade: we need the distributed log in the new cluster to be able to start up once the upgrade has pulled over the clog.
- Don't delete dumps when running with --debug: while not specific to Greenplum, this is a local addition which greatly helps testing and development of pg_upgrade.
For testing purposes, a small test cluster created with Greenplum 4.3 is included in contrib/pg_upgrade/test.
Heikki Linnakangas, Daniel Gustafsson and Dave Cramer
-
Committed by Heikki Linnakangas
The binary-upgrade mode in pg_dump, which was backported in a previous commit, is here extended to handle Greenplum clusters. The gist of the changes is to extend the Oid dumping to cover all object types, as upstream PostgreSQL only covers types. The changes include: Make "pg_dump --binary-upgrade" connect in utility mode; it's supposed to be run against a single segment server. Also make pg_dump work against 4.3 clusters, as the binary-upgrade mode will be used with pg_upgrade on the old cluster during the upgrade. Dump Oids for all object types in binary-upgrade mode: in PostgreSQL only type Oids are dumped in binary-upgrade mode, but for Greenplum we need to dump the Oids for all objects that require Oid dispatching (see oid_dispatch.c for details). Inject calls in the dump file for the Oid dispatcher to preassign Oids during restore. For partitioned tables, the Oids for each child table, as well as their constraints, are added to the dump file. The distribution check for partitions is skipped when restoring during binary upgrade: the distribution info is not available in the segments, so during binary upgrade we can't enforce this check. Avoid creating array types for AO relation types; they aren't useful for anything and only grow the catalog for no use. Heikki Linnakangas, Daniel Gustafsson and Dave Cramer
-
Committed by Heikki Linnakangas
It is needed by pg_upgrade. Note that pg_upgrade runs the *new* version's pg_dump binary, so there is no need to backport this to GPDB 4.3. Original commit:

Author: Bruce Momjian <bruce@momjian.us>
Date: Tue Feb 17 15:41:50 2009 +0000

Add pg_dump --binary-upgrade flag to be used by binary upgrade utilities. The new code allows transfer of dropped column information to the upgraded server.
-
Committed by Heikki Linnakangas
This commit just brings in the upstream code for pg_upgrade and pg_upgrade_support as is, with no changes. Subsequent work will change it to work for Greenplum. The goal is to make it usable for upgrading from Greenplum 4.3 to 5.0. pg_upgrade_support is a crucial part of pg_upgrade, but lives in a separate directory to make the build easier.
-
Committed by Dhanashree Kashid
Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by Daniel Gustafsson
Commit bbed4116 removed the explicit ignoring of 'DISTRIBUTED BY' NOTICEs from atmsort, as it made all distribution notices and hints consistent in its output. This added the requirement that all answer, and other output, files must include the NOTICE as it comes from GPDB, or add a proper DISTRIBUTED BY (..) clause. Add the relevant NOTICEs to the sql and expected files that were missed in bbed4116, since that commit focused on the misspelled ones, not the ones where the NOTICE was missing completely.
-
Committed by Heikki Linnakangas
If you perform a DELETE or UPDATE on an inherited table, but the qual rules out all inheritance children, the planner generates a dummy Result node with a false One-Time Filter, instead of an Append node. That dummy Result node did not have a Flow attached to it, which resulted in an assertion failure later. To fix, attach a Flow to the Result node, like we do for an Append node in the non-degenerate case. I wasn't exactly sure what kind of Flow is appropriate for a dummy node like this, but mark_plan_general(), i.e. FLOW_SINGLETON, seems to work. That's what we also use for the similar Result node that we generate for the case of a HAVING without aggregates or GROUP BY, in grouping_planner(). Fixes github issue #1433.
-
- 19 Dec 2016, 3 commits
-
-
Committed by Daniel Gustafsson
While it should be rare (and the ticket referred to in the original report indicates that it is), it's perfectly legal for a UDP buffer to fill up. Set the message level to LOG rather than WARNING.
-
Committed by Daniel Gustafsson
This is a matchsubs block; ensure the correct header is set for atmsort.pm parsing.
-
Committed by Daniel Gustafsson
The different kinds of NOTICE messages regarding table distribution were using a mix of upper and lower case for 'DISTRIBUTED BY'. Make them consistent by using upper case for all messages and update the test files, and atmsort regexes, to match.
-
- 18 Dec 2016, 2 commits
-
-
Committed by Daniel Gustafsson
This ignore regex was simply hiding the fact that one expected test output was broken. Remove the ignore and fix the test.
-
Committed by Daniel Gustafsson
Commit 0bf31cd6 ensured that we use proper error codes instead of internal errors with filename:location. Remove this left-over regex, which is no longer useful.
-
- 17 Dec 2016, 19 commits
-
-
Committed by Asim R P
This was causing icw_orca to fail. Removing the ORDER BY allows gpdiff's order-agnostic comparison. The ORDER BY was on a non-distinct set of keys, causing the results to differ between ORCA and the planner. Also add yet another ORCA-specific answer file that was missed earlier.
-
Committed by Asim R P
-
Committed by Ashwin Agrawal
* Clean up a bunch of SQL files and functions that are not used
* Add a robust mechanism to test whether the db is back up, to make the tests deterministic
* Optimize a bit by no longer restarting the db, removing drop table statements, etc.
-
Committed by Asim R P
* Merge the 4 alter_ao_part_exch* tests into one to avoid redundant partitioned table creation.
* Avoid creating a bitmap index when not required.
* Merge the analyze* tests into analyze_ao_table_every_dml.sql to reduce the number of tables that need to be created.
* Merge cluster.sql into alter_ao_table_index.sql.
* Merge auxiliary_tables.sql into create_ao_tables.sql.
* Add update and delete statements where they were missing.
* Remove DROP TABLE statements.
* Remove unnecessary "select * from ..." to reduce diff time.
* Merge truncate_ao_table.sql into create_ao_tables.sql.
* Merge truncate_ao_table_part.sql into alter_ao_part_tables.sql.
* Remove redundant COPY tests; the COPY command is already tested by analyze_ao_table_every_dml.sql.
* Remove "ALTER TABLE ... CLUSTER" tests; these commands should report an error if run against append-optimized tables.
* Remove fillfactor tests; fillfactor is not applicable to append-optimized tables.
* Merge the blocksize tests into one to minimize creating tables.
* Merge *ao_table_compresstype.sql into one compresstype.sql.
-
Committed by Stephen Wu
Use a single DB connection when dumping stats instead of starting a new one for each table. The previous commit included a bug causing unit test pollution which has since been fixed. The changes involve patching out the mocked method rather than directly replacing it in the module.
-
Committed by Daniel Gustafsson
Ensure that we free any allocated content on premature exit. These potential leaks are small, but we might as well clean up to make static analysis happy. Per defect in Coverity.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
The MemTupleNoOidSpace() macro wasn't implemented and just did an assertion rather than what it said on the tin. Remove it, since the intention of the macro isn't really possible: there is nothing like HEAP_HASOID for memtuples.
-
Committed by Daniel Gustafsson
If we error out, we need to properly close the filehandle. Per defect in Coverity.
-
Committed by Daniel Gustafsson
Passing -pthread to programs not using threading causes clang to complain about an unused flag.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
While these uses of strcpy() were safe in the sense that the limits were set to NAMEDATALEN, using strlcpy() is not wrong, and it makes the intent of the code clear, such that anyone looking for potential buffer overflows can skip over these. Per gripe by Coverity.
-
Committed by Daniel Gustafsson
transformPassThroughParms() was hardcoded to always return true, since the code that could return false has been removed. No callsite inspects the return value (for good reason). Remove the return value altogether and make the function return void.
-
Committed by Daniel Gustafsson
At this point we already know that pQry is non-NULL, as we have dereferenced it previously in the function. Remove the useless check to make the code clearer. Per gripe by Coverity.
-
Committed by Daniel Gustafsson
Remove commented-out code which clearly hasn't been in use for some time. Anyone interested in this code has the VCS archives at their disposal.
-
Committed by David Sharp
Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by Jingyi Mei
Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by Jingyi Mei
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-