- 23 Feb 2017, 2 commits
- 05 Feb 2017, 1 commit
-
Committed by Jimmy Yih
The gppersistentrebuild tool was trying to use string_agg without casting the oid to text. This used to work, but no longer does. The behave tests have only two scenarios, and the second scenario would always fail because it ran while a primary/mirror segment pair was still in resync mode.
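A minimal sketch of the kind of fix described (the catalog table and column below are illustrative, not gppersistentrebuild's actual query):

```python
# Newer PostgreSQL releases dropped the implicit oid -> text coercion for
# string_agg, so an oid column must be cast explicitly. Illustrative only.
BROKEN_QUERY = "SELECT string_agg(attrelid, ',') FROM pg_attribute"
FIXED_QUERY = "SELECT string_agg(attrelid::text, ',') FROM pg_attribute"
```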
-
- 23 Feb 2017, 15 commits
-
Committed by Larry Hamel
* When mocking, we may set up patches that are not cleaned up properly, causing test pollution. Ensure that we tear down properly, and raise an error if we do not. Signed-off-by: Marbin Tan <mtan@pivotal.io>
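As a sketch, a test base class in this spirit might track its patchers and verify in tearDown that the real function was restored (the class name and patch target here are illustrative, not the actual gpMgmt test code):

```python
import os
import unittest
from unittest import mock

class PatchedTestCase(unittest.TestCase):
    """Illustrative base class that guards against leaked mock patches."""

    def setUp(self):
        self.patchers = [mock.patch("os.getcwd", return_value="/tmp/fake")]
        for p in self.patchers:
            p.start()

    def tearDown(self):
        # Stop every patch we started; a leaked patch would pollute
        # later tests that expect the real os.getcwd.
        for p in self.patchers:
            p.stop()
        # Fail loudly if the real function was not restored.
        if os.getcwd() == "/tmp/fake":
            raise AssertionError("a mock patch leaked out of tearDown")

    def test_uses_patched_cwd(self):
        self.assertEqual(os.getcwd(), "/tmp/fake")
```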
-
Committed by Larry Hamel
When running gpconfig -s with the --file option, we can now view the postgresql.conf GUC settings instead of only the user GUC settings in the database. This option also reports any inconsistencies in postgresql.conf between the segments.

Reformat gpconfig:
* enable unit testing
* remove gpconfig whitespace

Add gpconfig behave tests:
* When you run gpconfig with -s and --file, you can sometimes hit a NoneType string-format error. This was due to not joining and closing all the threads properly.
* Added gpconfig behave tests to ensure we are not missing anything in the unit tests. Since we are mocking the pool, we need to make sure we are not doing anything silly, so add minimal behave tests. A single node is probably enough.

Signed-off-by: Chumki Roy <croy@pivotal.io>
Signed-off-by: Marbin Tan <mtan@pivotal.io>
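The thread-joining part of the fix can be sketched like this (the hostnames, worker body, and function name are illustrative; the real gpconfig uses its own worker pool):

```python
import threading
import queue

def read_guc_from_segments(seg_hosts):
    """Collect a GUC value from each segment in worker threads, joining
    every thread before reading results. (The gpconfig bug was a NoneType
    format error from reading results before all workers had finished.)"""
    results = queue.Queue()

    def worker(host):
        # Stand-in for reading postgresql.conf on the remote segment.
        results.put((host, "128MB"))

    threads = [threading.Thread(target=worker, args=(h,)) for h in seg_hosts]
    for t in threads:
        t.start()
    # Join *all* threads so every result is in the queue before we format.
    for t in threads:
        t.join()
    return dict(results.get() for _ in seg_hosts)
```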
-
Committed by Marbin Tan
-
Committed by Larry Hamel
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
-
Committed by Larry Hamel
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Larry Hamel
* Fix a hardcoded Python library path that may not be valid for other Python versions. Signed-off-by: Marbin Tan <mtan@pivotal.io>
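One way to avoid hardcoding such a path is to ask the interpreter for it; a sketch, not the actual gpMgmt code:

```python
import sysconfig

def python_lib_dir():
    # Derive the stdlib directory from the running interpreter instead of
    # hardcoding a version-specific path like /usr/lib/python2.7.
    return sysconfig.get_path("stdlib")
```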
-
Committed by Shreedhar Hardikar
* Using uninitialized value kstr: the field kstr.isPrefixOnly is uninitialized when calling memcpy on line 3536. Setting kstr is OK since the memcpy will copy the value to ret.
* Buffer not null-terminated: calling strncpy with a maximum size argument of 1024 bytes on the destination array work_set_path of size 1024 bytes might leave the destination string unterminated.

Upstream precedent: 586dd5d6
-
Committed by Heikki Linnakangas
Commit 821b8e10 changed pg_dump's shouldPrintColumn() function to return true for all columns in an external table, and changed dumpExternal() to use the shouldPrintColumn() function. However, that change was not made to the copy of shouldPrintColumn() in cdb_dump_include.c.

That oversight didn't cause any ill effects until commit 58b86683 came along and refreshed cdb_dump_agent.c's copy of the dumpExternal() function to match that in pg_dump.c. With that change, cdb_dump_agent's dumpExternal() started to call shouldPrintColumn() for external tables, but its version of shouldPrintColumn() didn't yet have the change to always return true for external tables. As a result, external tables didn't have any of their columns included.

To fix, update cdb_dump's copy of shouldPrintColumn() to match that in pg_dump.c.
-
Committed by Heikki Linnakangas
Many callers of get_funcs_for_compression(), like the one in appendonly_insert_init(), pass a reference to the relcache as the argument, like this: get_funcs_for_compression(NameStr(rel->rd_appendonly->compresstype)); If a relcache invalidation happens while looking up the compression type, that reference to rel->rd_appendonly is freed and becomes invalid. get_funcs_for_compression() performs a catalog lookup, and can therefore invalidate the reference itself, before using it.

Make get_funcs_for_compression (GetCompressionImplementation, actually) tolerate that, by making a copy of the argument before calling heap_open, which can cause the cache invalidation.

RelationData->rd_index potentially has the same problem, but I don't see any problematic call sites that would pass a rel->rd_index reference to a function like this. In rd_appendonly, all the other fields besides compresstype are pass-by-value types (Oid, bool, etc.) that don't have the same hazard.

The crash can be reproduced easily by compiling with --enable-testutils, setting gp_test_system_cache_flush_force=plain in a psql session, and performing any select or update on a compressed AO table. I am not adding a regression test, as it would only work with --enable-testutils, but it would be a good idea to run all the regression tests with that GUC enabled every once in a while.
-
`rowMarks` in a Query object represent relations whose tuples are subject to locking when a query contains `FOR SHARE` or `FOR UPDATE` clauses in a SELECT. Those rowMarks are carried over wholesale to an EState, via PlannedStmt. In Postgres the optimizer (planner) would add a junk TID column for each table in the rowMarks, and the executor will lock the tuples when executing the root node in the plan.

The check we are removing seems dormant even in Postgres: to actually cover the `elog(ERROR)`, the code path needs to have:
1. No junk TID columns
2. A non-empty rowMarks list

In Postgres the above conditions are all-or-nothing, so the check feels more like an `Assert` than an `elog(ERROR)`, and it seems to be dead code.

In Greenplum, the above condition does not hold. Greenplum has a different (but compliant) implementation, where we place a lock on each table in the rowMarks. Consequently, we do *not* generate the junk column in the plan (except for tables that are only on the master). This is reasonable because the root node is most likely executing in a different slice from the leaf nodes (which are most likely SCANs): by the time a (hypothetical) TID reaches the root node, there's no guarantee that the tuple has not passed through a motion node, hence it makes no sense to lock it.

This commit removes that check.
-
Committed by Heikki Linnakangas
These exact same tests were in both 'direct_dispatch', and in 'bfv_dd_types'. Oddly, one SELECT was missing from 'bfv_dd_types', though. Add the missing SELECT there, and remove all these type tests from 'direct_dispatch'. Now that there are no duplicate table names in these tests, also run 'direct_dispatch' in the same parallel group as the rest of the direct dispatch tests. That's a more logical grouping.
-
Committed by Heikki Linnakangas
The test disconnects, then immediately reconnects and checks that the temp table from the previous session has been removed. However, the old backend removes the tables only after the disconnection, so there's a race condition here: the new session might query pg_tables and still see the table, if the old session was not fast enough to remove it first.

Add a two-second sleep to make the race condition less likely to happen. This is not guaranteed to avoid the race altogether, but I don't see an easy way to wait for the specific event that the old backend has exited.

A sleep in a test that doesn't run in parallel with other tests is a bit annoying, as it directly makes the whole regression suite slower. To compensate, move the test to run in parallel with the other nearby bfv_* tests. Those other tests don't use temp tables, so it still works.

Also, modify the validation query in the test that checks that there are no temp tables, to return whole rows rather than just COUNT(*). That will make analysis easier if this test fails.
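An alternative to a fixed sleep is a poll-with-timeout loop; a generic sketch (the helper name and predicate are hypothetical, not part of the test harness):

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.1):
    """Poll predicate() until it returns True or the timeout expires.
    A sketch of waiting for an event (e.g. the old backend having
    cleaned up its temp tables) instead of sleeping a fixed time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()
```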
-
- 22 Feb 2017, 10 commits
-
Committed by Heikki Linnakangas
I removed these tests in commit 1398c14e, but forgot to remove them from this Makefile.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
The uao_dml and uaocs_dml tests were identical, except that there were a few additional queries in the uao_dml suite that were missing from the uaocs_dml suite, and various cosmetic differences in table names and such. Use the new "uao templating" mechanism in pg_regress to generate both cases from a single set of files. I didn't look for redundant or useless tests in these suites. At a quick glance, at least many of the select tests seem quite pointless, so it would be good to cull these in the future.
-
Committed by Heikki Linnakangas
This is a straightforward copy-paste, except for one thing: I changed all the quicklz-compressed tables to use zlib instead, so that these tests work without support for legacy quicklz compression. AFAICT, the compression isn't a very interesting aspect of these tests (I even considered making them uncompressed).
-
Committed by Heikki Linnakangas
These test queries were copy-pasted from the upstream test at src/pl/plperl/plperl_trigger.sql, so nothing interesting there. The test harness would try to install plperl with gppkg, but I don't think that's very interesting to test. AFAIK we don't plan to install plperl.so with the gppkg mechanism anymore. And if we do, the tests for that should live next to the gppkg scripts that create the package, not here.

test_spi.py referenced query01.sql, query02.sql and query03.sql, but the directory only contained query01.sql and query03.sql. I therefore believe this was actually broken as it was.
-
Committed by Heikki Linnakangas
We have more comprehensive tests for gpcheckcat in gpMgmt/test/behave/mgmt_utils/gpcheckcat.feature, and in gpMgmt/bin/gppylib/test/unit/test_unit_gpcheckcat.py.
-
Committed by Peifeng Qiu
gp_dump_agent is used by the management utilities for backup/restore operations. Apply a patch similar to pg_dump's to cdb_dump_agent. Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
Committed by Adam Lee
Due to changes in the external table "on master" feature. Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Adam Lee
Due to changes in the external table "on master" feature. Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Heikki Linnakangas
I typoed the INSERT command in this test, and even checked in the ERROR in the expected output. Fix that. Also, make the test case a bit less fragile by advancing the XID counter also after the INSERT+DELETEs in the second part of these tests. This ensures that the tables will look 'old' before the vacuum. Tweak the limits of 'old' and 'young' XIDs to better ensure that a just-created table is considered 'young', and that advancing the XID counter pushes the age to 'old' on all segments.
-
- 21 Feb 2017, 12 commits
-
Committed by Heikki Linnakangas
I'm not sure why this seemed to pass on my laptop without this file, but looking at the code, I'm pretty sure it's still needed for the test_uao_crash_update_with_ins_fault test. We'll see what the Concourse job thinks.
-
Committed by Heikki Linnakangas
These tests test for an old bug, where vacuuming an empty AO relation would not advance its relfrozenxid. Doesn't seem very likely for that particular bug to reappear, but seems like a good idea to have some coverage for freezing and relfrozenxid advancement. In this rewritten form, these tests are fairly quick, too (1-2 seconds on my laptop).
-
Committed by Heikki Linnakangas
Turns out that the remaining tests need both uao_crash_update2.sql and uaocs_crash_update2.sql. [puts on brown paper bag]
-
Committed by Heikki Linnakangas
The remaining tests needed uaocs_crash_update2.sql, not uao_crash_update2.sql.
-
Committed by Heikki Linnakangas
I removed these files in commit b39c1356, but it turns out that some of the remaining tests still used those files.
-
Committed by Heikki Linnakangas
The pattern in these tests was:
1. Create and populate a test append-only table
2. Induce a fault with gpfaultinjector, to crash the server at the beginning of the appendonly_insert/update/delete function
3. Try to INSERT, UPDATE or DELETE some rows, crashing the server

Crashing the server at the beginning of the operation isn't very interesting, because we haven't made any on-disk modifications at that point yet. There is no reason to believe that there would be a problem at crash recovery at that point. In particular, after the crash, the state on disk will look identical in the INSERT, UPDATE and DELETE scenarios. Therefore, remove the tests.
-
Committed by Heikki Linnakangas
These tests set the appendonly_delete fault injection trigger, and then executed a TRUNCATE or ALTER TABLE DROP COLUMN on the table. However, neither command performs any DELETEs on the target table, and therefore never even hits the fault. This is effectively the same as just testing TRUNCATE or ALTER TABLE DROP COLUMN on an AOCS table without any faults, and we already have tests for that elsewhere (e.g. src/test/regress/input/uao_dml/uao_dml.source and src/test/regress/sql/alter_table_aocs.sql).
-
Committed by xiong-gang
- Add 2 catalog tables for resource groups
- Add one column in pg_authid to record the resource group of a role
- Bump catalog version
- Hard-code the default resource group of roles temporarily

Signed-off-by: Kenan Yao <kyao@pivotal.io>
-
Committed by Heikki Linnakangas
ANALYZE used to be implemented with fairly complicated aggregate queries, to count the number of distinct values and gather many other statistics. Once upon a time, there was a bug in the planner: it tried to use a hash aggregate even if the data type wasn't hashable. That case was triggered by the internal queries that ANALYZE runs.

ANALYZE has since been rewritten and no longer issues such queries, so this test case isn't very interesting in its old form anymore. But the underlying problem, that we mustn't use a hash aggregate on a datatype that isn't hashable, would still be good to test. So write a new, more targeted test for that, one that doesn't involve ANALYZE.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
Subpartition names are SQL identifiers, so you cannot use a plain integer as a name unless you quote it. There isn't really anything special here, and never was. The original bug report was:

> I was trying to add subpartitions named 2001, 2002, 2003, etc. but was not
> allowed:
> ...
> However it was allowed when the name was changed from a number to a string

It was closed as "No fix needed", as that is expected behavior. I'm not sure why we bothered to add test cases for this originally, but I think we can just drop them now.
-
Committed by Heikki Linnakangas
The old test case was no longer testing what it was supposed to. The ancient bug reported as MPP-3115 was related to renaming a partition, but the behavior has since changed to not allow two partitions to have the same name. I couldn't see any tests checking that RENAME PARTITION enforces uniqueness, except for the special case that the new name is the same as the old, so I added a test for that to partition1.sql. Other than that, I think we can simply remove this test case as obsolete.
-