- 28 Feb 2017, 17 commits
-
-
Committed by Daniel Gustafsson
The error messages are developer- or debug-facing; there is no reason to believe this will break anyone's regexing of log files in production.
-
Committed by Ashwin Agrawal
-
Committed by khannaekta
* Update the regex to reflect the output change [#140454709]: `pg_lock_status` changed its output format.
-
Committed by Bhuvnesh Chaudhary
-
Committed by Jesse Zhang
We cannot assume we are running in a special environment that is always in the UTC-8 time zone without DST. Make the setting explicit so the tests are portable, and unskip the tests since they will now succeed.
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary
Follow-up stories have been created to fix the issues.
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary
It depends on TPC-H datagen, which is not viable.
-
Committed by Bhuvnesh Chaudhary
-
Committed by Daniel Gustafsson
Fix various typos that seemed common.
-
Committed by Jimmy Yih
The current dbInfoRel hash table key contains only the relfilenode OID. However, relfilenode OIDs can be duplicated across different tablespaces, which can cause dropdb (and possibly persistent rebuild) to fail. This commit adds the tablespace OID to the dbInfoRel hash table key for more uniqueness. One thing to note is that a constructed tablespace/relfilenode key is compared with other keys using memcmp. This should be fine, since the struct contains just two OID variables and the keys are always palloc0'd, so alignment should not be a problem during comparison.
-
Committed by Chris Hajas
These tests are necessary to validate our backup and restore utilities on Data Domain. Unfortunately they are still in TINC, but we plan to move them to behave when possible.
Authors: Chris Hajas and Marbin Tan
-
Committed by Bhuvnesh Chaudhary
We may have UNKNOWN vars from untyped consts or params, which must be coerced.
Signed-off-by: Omer Arap <oarap@pivotal.io>
-
- 27 Feb 2017, 9 commits
-
-
Committed by Heikki Linnakangas
The test launched one backend and ran ALTER TABLE ADD COLUMN in it. Before committing the ALTER TABLE transaction, it launched another backend and ran a query against the table. That blocks, because the ALTER TABLE is holding an AccessExclusiveLock on the table. It then committed the first transaction, letting the second transaction continue. That's not a very interesting test case. ALTER TABLE grabs an AccessExclusiveLock very early, so it's hard to imagine it would fail to hold it. It's also hard to imagine what could go wrong in having the other query block on the lock; backends frequently block on all kinds of locks for short periods of time, so just blocking on a lock isn't very interesting. Furthermore, AFAICS, if the ALTER TABLE somehow failed to acquire the lock, so that the query didn't block, the test would return the same output, and therefore wouldn't catch that case anyway.
-
Committed by Heikki Linnakangas
We have sufficient coverage of external tables in the main regression suite and src/bin/gpfdist/regress already.
-
Committed by Lirong Jian
-
Committed by Heikki Linnakangas
I kept the tests using QuickLZ in TINC, as we no longer have QuickLZ support in the open source version. They ought to be moved to wherever we keep the code for reading legacy quicklz-compressed tables.
-
Committed by Heikki Linnakangas
There were three variants of the same test, combined with four compresslevels. The "*_nulls" variants insert rows such that the delta between some rows is positive and between others negative, and they also contain some NULLs. The other variants contained only positive, or positive and negative, deltas, without NULLs. It doesn't seem interesting to have separate tests for the narrower cases: if there's something wrong with the handling of positive or negative deltas, the "*_nulls" tests will catch that too.
-
Committed by Heikki Linnakangas
OIDS defaults to 'off' for user tables, so this test is effectively the same as not specifying OIDS at all. (And OIDS=true is not supported on AOCS tables.)
-
Committed by Heikki Linnakangas
Commit 714a8375 removed the Python script that referenced these.
-
Committed by Heikki Linnakangas
There is no tincrepo/mpp/gpdb/tests/utilities/upgrade directory.
-
Committed by Heikki Linnakangas
-
- 26 Feb 2017, 14 commits
-
-
Committed by Daniel Gustafsson
We must support versions older than 80300 when dumping pg_type OIDs in order to support upgrades from 4.3. Remove the bogus check inherited from upstream PostgreSQL, where 8.3 is the earliest version supported by pg_upgrade. Replace it with a check for 8.2 as the base version in the main binary-upgrade dump function and explain the difference from upstream.
-
Committed by Daniel Gustafsson
If there are no auxiliary relations associated with the rel OID, skip adding the Archive entry so as not to leave blank entries in the dump containing just a header comment.
-
Committed by Daniel Gustafsson
The conrelid is equal to the OID (it's a join criterion, after all). Perform a little cleanup for clarity (and micro-optimization) and don't project it into the result.
-
Committed by Daniel Gustafsson
This moves the queries worth keeping from Bugbuster gpsort over to ICW sort. Many of the queries returned a single row or no rows at all, or even had syntax errors, all of which are uninteresting cases for testing result sorting. All rows in the test table are distributed to a single segment to keep the sort order stable during testing. Also remove pointless ignore blocks from sort, and avoid dropping the schema at the end to shave some time off.
-
Committed by Daniel Gustafsson
The gp_enable_alter_table_inherit_cols GUC was used to allow a list of columns to override the attribute discovery for inheritance in ALTER TABLE INHERIT. According to code comments, the only consumer of this was gpmigrator, but no call site remains, and no support in pg_dumpall remains either. Remove the leftovers to avoid a potential footgun and get us closer to upstream code.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
TINC had a lot of gpmapreduce tests which weren't connected to any build or pipeline; resurrect (some of) these by porting them over to the mapred suite in ICW. This is by no means a direct copy of the existing TINC test suites, but rather a partial port of the tests that made sense to bring over. A lot of the tests were highly overlapping, some were completely broken, some tested functionality ICW already had tests for, and others were plain uninteresting.
-
Committed by Daniel Gustafsson
The gp_external_grant_privileges GUC was needed before 4.0 to let non-superusers create external tables for the gphdfs and http protocols. This GUC was, however, deprecated during the 4.3 cycle, so remove all traces of it. The utility of the GUC was replaced in 4.0 when rights management for external tables was implemented with the normal GRANT/REVOKE framework, so this has been dead code for quite some time. Remove the GUC, the code which handles it, all references to it from the documentation, and a release notes entry.
-
Committed by Daniel Gustafsson
Release notes entry for the removal of gp_hashagg_compress_spill_files, gp_eager_hashtable_release, max_work_mem and gp_hash_index.
-
Committed by Daniel Gustafsson
The gp_eager_hashtable_release GUC was deprecated in version 4.2 in 2011, when the generic eager free framework was implemented. The leftover gp_eager_hashtable_release was asserted to be true and never intended to be turned off. The same body of work deprecated the max_work_mem setting, which bounded the work_mem setting. While not technically tied to eager hashtable release, remove it as well since it's deprecated, undocumented and not terribly useful. The relevant commit in the closed source repo is 88986b7d.
-
Committed by Daniel Gustafsson
The gp_hashagg_compress_spill_files GUC was deprecated in 2010 when it was replaced by gp_workfile_compress_algorithm. The leftovers haven't done anything for quite some time, so remove the GUC. The relevant commit in the closed source repo is c1ce9f03.
-
Committed by Heikki Linnakangas
Revert commits ffc6226a and 1d31323a; they failed on Concourse.
-
Committed by Heikki Linnakangas
In commit ffc6226a, I removed DROP + CREATE DATABASE from these tests. That didn't work: you now get "relation already exists" errors after the first test. I'm trying to fix this blind one more time, by changing the tests to use a separate schema and to DROP + CREATE the schema between each test. That's still a lot cheaper than dropping and creating a database.
-
Committed by Heikki Linnakangas
The 'gptest' database was actually used by the tests. CREATE DATABASE is a fairly expensive operation, so let's avoid doing it without a good reason. I don't actually have the means to test this properly, so I'm going to push this blind and see what Concourse thinks of it. If something goes awry, I'll revert.
-