- 03 March 2017, 5 commits
-
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
Once upon a time, these UPDATEs used to give an "Append-only tables are not updatable" error. But these days they are updatable, and we have plenty of tests for updating append-only tables. Remove this test as obsolete or redundant.
-
In PostgreSQL, the unlink is deferred and handled by checkpoint. In GPDB, the unlink is always handled by persistent tables, and hence it is protected against duplicate relfilenode deletion during recovery. Functionally, there is no harm in sending the unlink to the checkpoint process, which cannot even find the relfilenode to be deleted. However, this has a performance impact in scenarios like deleting a table with a large number of partitions, where the fsync request queue is unnecessarily filled. The detailed discussions are at: https://groups.google.com/a/greenplum.org/forum/#!searchin/gpdb-dev/mdunlink%7Csort:relevance/gpdb-dev/PHKuQPNwWs0/1kIwDk-CEgAJ
-
Committed by Ashwin Agrawal
-
- 02 March 2017, 32 commits
-
-
Committed by Heikki Linnakangas
We have good coverage of this in the 'analyze' test in the main test suite now.
-
Committed by Heikki Linnakangas
REINDEX grabs a ShareLock on the table relation very early on, and TRUNCATE grabs an AccessExclusiveLock very early on. If we trust that the lock manager isn't broken, there's no need to test this combination of commands in particular. (For testing the lock manager, these tests give only very narrow coverage.) Furthermore, all these test cases are effectively the same, since it makes no difference what kind of table or index is involved; the locking works the same for all. ALTER and DROP TABLE also grab an AccessExclusiveLock on the table, which conflicts with the ShareLock that REINDEX takes. It works the same with all flavors of ALTER and DROP TABLE, with all kinds of tables, and all kinds of indexes.
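The lock conflict described above can be sketched in two sessions (the table name here is hypothetical):

```sql
-- Session 1: REINDEX TABLE takes a ShareLock on the table early on
BEGIN;
REINDEX TABLE some_table;
-- (transaction left open, so the ShareLock is still held)

-- Session 2: blocks until session 1 commits, because TRUNCATE needs
-- an AccessExclusiveLock, which conflicts with the ShareLock
TRUNCATE some_table;
```

Any flavor of ALTER TABLE or DROP TABLE in session 2 would block the same way, which is why the commands, table kinds, and index kinds are interchangeable for locking purposes.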
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
This was mostly already in bfv_partition. For completeness, move the code from the TINC test to bfv_partition to also execute the query, not just EXPLAIN it.
-
Committed by Heikki Linnakangas
There's a test in the main suite's 'partition' test for this already, with the 'deep_part' test table.
-
Committed by Heikki Linnakangas
There is already a similar test in the 'partition' test in the main test suite. Search for "-- Sub-partition insertion" in partition.sql to find it.
-
Committed by Heikki Linnakangas
It's not allowed. We already have tests for both CREATE TABLE and ALTER TABLE ADD PARTITION, with "WITH OIDS=false", in the 'partition' test of the main test suite.
-
Committed by Heikki Linnakangas
Rename the test tables in 'partition' test, so that they don't clash with the test tables in 'partition1' test. Change a few validation queries to not get confused if there are unrelated tables with partitions in the database. With these changes, we can run 'partition' and 'partition1' in the same parallel group, which is a more logical grouping.
-
Committed by Heikki Linnakangas
All of these tests used the same test table, but it was dropped and re-created for each test. To speed things up, create it once, and wrap each test in a begin-rollback block. The access plan of one of the tests varied depending on optimizer_segments, and it caused a difference in the ERROR message. The TINC tests were always run with 2 segments, but you got a different plan and message with 3 or more segments. Added a "SET optimizer_segments=2" to stabilize that, and a comment explaining the situation.
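The begin-rollback wrapping described above might look like this (table name, columns, and the UPDATE are hypothetical):

```sql
-- Create the shared test table once, instead of dropping and
-- re-creating it for each test
CREATE TABLE upd_test_table (a int, b int) DISTRIBUTED BY (a);

-- Each test runs against the same table and rolls back its changes
BEGIN;
-- Stabilize the plan (and hence the ERROR message) regardless of
-- how many segments the test cluster has
SET optimizer_segments = 2;
UPDATE upd_test_table SET b = b + 1;
ROLLBACK;  -- also undoes the SET, leaving the table clean for the next test
```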
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
The transformation of "NOT IN" or "= ALL" incorrectly added a condition that the outer side of the join must not be NULL. But that's not true, when the other side is an empty set. Discovered while moving old TINC tests to the main test suite. The 'lasj_hash_all_3' test exposed this issue, when run with ORCA disabled. This has apparently gone unnoticed, because no-one has run those tests without ORCA. There were also incorrect query results memorized, for tests 'lasj_notin_multiple_3' and 'lasj_hash_notin_multiple_3'. They are similar queries, but because ORCA falls back to the planner for them, we had memorized the Planner results for them, and hadn't noticed that the results were in fact incorrect. Fixes github issue #1907.
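The empty-set case is why the added NOT NULL condition was incorrect; a minimal illustration (table names hypothetical):

```sql
CREATE TABLE foo (a int);
CREATE TABLE bar (b int);
INSERT INTO foo VALUES (1), (NULL);

-- bar is empty, so "a NOT IN (...)" is true for every row of foo,
-- including the row where a is NULL
SELECT * FROM foo WHERE a NOT IN (SELECT b FROM bar);

-- a transformation that also adds "a IS NOT NULL" to the join
-- would wrongly drop the NULL row from the result
```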
-
Committed by Heikki Linnakangas
We have a bunch of tests on subpartitioned tables in the main suite, e.g. in the 'partition' and 'bfv_partition' tests. And these tests were marked with "@skip", anyway, so they were not run by TINC either.
-
Committed by Heikki Linnakangas
I moved them to the 'partition_pruning_with_fn' test, but because there are no functions involved in these tests, I renamed the test to just 'partition_pruning'. There is possibly some overlap between these new tests and the existing ones in the same file, but I will let future test archeologists decide that. These are cheap tests, in any case.
-
Committed by Haozhou Wang
Fix issues reported by Coverity:

*** CID 163038: Uninitialized members (UNINIT_CTOR)
/extensions/gpcloud/src/s3interface.cpp: 556 in S3MessageParser::S3MessageParser(const Response &)
Non-static class member "xmlptr" is not initialized in this constructor nor in any functions that it calls.

*** CID 163037: Uninitialized members (UNINIT_CTOR)
/extensions/gpcloud/include/restful_service.h: 37 in Response::Response(ResponseStatus, PGAllocator<unsigned char> &)
Non-static class member "responseCode" is not initialized in this constructor nor in any functions that it calls.

*** CID 163033: Uninitialized members (UNINIT_CTOR)
/extensions/gpcloud/include/s3params.h: 13 in S3Params::S3Params(const string &, bool, const string &, const string &)
Non-static class member "verifyCert" is not initialized in this constructor nor in any functions that it calls.

*** CID 163032: Error handling issues (UNCAUGHT_EXCEPT)
/extensions/gpcloud/src/compress_writer.cpp: 9 in CompressWriter::~CompressWriter()
An exception of type "S3RuntimeError" is thrown but the throw list "throw()" doesn't allow it to be thrown. This will cause a call to unexpected() which usually calls terminate().

*** CID 163029: Memory - illegal accesses (STRING_NULL)
/extensions/gpcloud/src/gpwriter.cpp: 49 in GPWriter::constructRandomStr()
Function "read" does not terminate string "randomData[randomDataLen]".

*** CID 163028: Performance inefficiencies (PASS_BY_VALUE)
/extensions/gpcloud/src/s3interface.cpp: 265 in S3InterfaceService::listBucket(S3Url)
Passing parameter s3Url of type "S3Url" (size 256 bytes) by value.

Signed-off-by: Haozhou Wang <hawang@pivotal.io>
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Adam Lee
-
Committed by Adam Lee
Without this commit, test scripts re-compile gpcloud anyway; in the main pipeline it is better to use the one from the package instead.
-
Committed by Lisa Owen
[ci skip]
-
Committed by Heikki Linnakangas
The storage_persistent_filerep_accessmethods_and_vacuum task contained two filerep tests:

FilerepResync.test_filerep_resysnc ... 585576.51 ms ... ok
FilerepProcArrayRemoveTestCase.test_verify_ct_after_procarray_removal ... 136553.63 ms ... ok

Move them together with the filerep_end_to_end_xlog_ctlog_cons task, so that all the filerep tests are in the same task. To reflect that, rename the 'filerep_end_to_end_xlog_ctlog_cons' task to just 'filerep'. The 'storage_persistent_filerep_accessmethods_and_vacuum' task was the slowest among the parallel tasks in the job, so moving the two filerep tests elsewhere should reduce the overall runtime of the storage job.
-
Committed by Shoaib Lari
If the segrelid, visimaprelid, or visimapidxid field is 0 for an AO table, error out.
-
Committed by Lirong Jian
-
Committed by Heikki Linnakangas
These are already covered by the 'appendonly' and 'uao_dml/uao_dml' tests in the main test suite.
-
Committed by Heikki Linnakangas
"int" and "int4" are the same datatype. They are mapped to the same early on, in the parsing stage. For those tests that have been duplicated for many different datatypes, remove the _int_ variant, leaving just the _int4_ variant.
-
Committed by Heikki Linnakangas
We have sufficient coverage for these in the 'geometry' test in the main test suite. There are no UPDATEs in 'geometry', but that seems uninteresting; there is no reason to believe that UPDATE on these types would behave differently from an UPDATE on any other type. The test descriptions were misleading, BTW. These are not "boundary" tests, the values used are not close to the min or max values the data types can represent.
-
Committed by Tom Meyer
PR pipeline: use 'max_in_flight: 1' and 'version: every' to ensure all PRs flow through the pipeline.

Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by Jingyi Mei
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Jingyi Mei
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
We have a bunch of existing tests that check whether ORCA falls back. They set optimizer_log_failure='all' and client_min_messages='log', and grep the output for "Planner" or "Planner produced plan". That's error-prone. This new GUC makes that kind of test easier and more robust: you can simply set the GUC, and if there are no extra INFO messages in the output, ORCA didn't fall back.
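For reference, the old, error-prone detection pattern that the new GUC replaces looks roughly like this (the query itself is hypothetical):

```sql
SET optimizer_log_failure = 'all';
SET client_min_messages = 'log';
-- run the query, then grep the emitted log output
-- for "Planner" or "Planner produced plan"
EXPLAIN SELECT * FROM some_table;
```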
-
Committed by Heikki Linnakangas
We had already disabled ORCA for some queries, but the query to estimate reltuples and relpages still used ORCA. This makes ANALYZE slightly faster (because ORCA is slow to start up), and avoids spurious messages about GPORCA falling back when the new optimizer_trace_failure option is used.
-
Committed by Jingyi Mei
Currently, we have a job compile_gpdb_custom_config_centos6 in pr_pipeline and master that compiles gpdb with a custom config, but the purpose of this job is unclear. It turns out that this job is for compiling gpdb the open-source way, so we renamed the job to compile_gpdb_open_source and changed the name of the task yml and scripts accordingly.

Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by Heikki Linnakangas
* Move all the udf_insert_* tests to the main suite.
* Move the 'volatilefn_dml_int8' test to the main suite.
* Remove all the other volatilefn_dml_* tests.

There's no reason to believe that the data types used in the table would make a difference, so keeping one variant of this is enough.
-
Committed by Marbin Tan
* Run behave tests in concourse
* Initial steps to enable us to run behave tests in concourse. This will only work for single node behave tests.

Signed-off-by: Chumki Roy <croy@pivotal.io>
-
- 01 March 2017, 3 commits
-
-
Committed by Ning Yu
Below columns are added:

* rsgid: resgroup id
* rsgname: resgroup name
* rsgqueueduration: queued duration in the resgroup

As resgroup is not fully implemented yet, there is only dummy output for these columns at the moment.
-
Committed by Ning Yu
Add view gp_toolkit.gp_resgroup_status for resource group runtime statistics information, such as CPU usage, memory usage, etc. Example:

-- query resource group runtime statistics information
SELECT * FROM gp_toolkit.gp_resgroup_status;

A helper function pg_resgroup_get_status_kv() is also provided to collect this information. A new directory src/backend/utils/resgroup/ is created to hold the resgroup source code. As resgroup is not fully implemented yet, there is only dummy output for this view at the moment.
-
Committed by Ning Yu
Add view gp_toolkit.gp_resgroup_config for resource group information, such as concurrency, CPU rate limitation, etc. Example:

-- query resource group config information
SELECT * FROM gp_toolkit.gp_resgroup_config;
-