- 23 Feb 2017, 14 commits
-
-
The need for heap access methods before xlog replay was removed by commit e2d6aa1481f6cdbd846d4b17b68eb4387dae9211. This commit simply moves the relcache initialization to pass4, where it is really needed. Do not bother to remove the relcache init file at the end of crash recovery pass2. Error out if the relation cache is initialized at the wrong time.
-
Committed by Ashwin Agrawal
-
Committed by Heikki Linnakangas
There are only a few tests left in "uao"; most have now been moved to installcheck-world. The "uao" job now runs in about 4 minutes, of which 2 minutes are spent setting up the test cluster. That seems like overkill, so simplify things by merging the remaining uao fault injection tests into the larger "storage" job.
-
Committed by Heikki Linnakangas
These are the same test queries for column-oriented append-only tables as those moved by commit 11a5a807 for row-oriented append-only tables. There were two additional tests that were never executed for row-oriented tables, though: phantom_reads_update_serializable and phantom_reads_delete_serializable. I believe that was an oversight in the original test suite; they are now also executed for row-oriented tables. We use the UAO templating mechanism to run the same test files against row- and column-oriented tables. To make that work, fix a bug in the templating mechanism in pg_regress.c: if the --ao-dir argument was shorter than 7 characters, the uao directory was not detected correctly.
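To illustrate the class of bug involved, here is a hypothetical sketch, not the actual pg_regress.c code; the function names and the hard-coded length are placeholders:

```c
#include <string.h>

/*
 * Hypothetical illustration only, not the real pg_regress.c code.
 * Comparing directory names against the --ao-dir argument with a
 * hard-coded length silently assumes the argument is at least that long.
 */
static int
is_uao_dir_buggy(const char *dirname, const char *ao_dir)
{
    /* wrong: with ao_dir shorter than 7 chars (e.g. "uao"), a directory
     * like "uao_dml" compares '_' against '\0' and is never detected */
    return strncmp(dirname, ao_dir, 7) == 0;
}

static int
is_uao_dir_fixed(const char *dirname, const char *ao_dir)
{
    /* right: compare exactly as many characters as the argument has */
    return strncmp(dirname, ao_dir, strlen(ao_dir)) == 0;
}
```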
-
Committed by Heikki Linnakangas
We have sufficient test coverage for gpload in the Behave tests in gpMgmt/test/behave/mgmt_utils, and we have plenty of tests for external tables elsewhere. The combination of using an external table or gpload as the source and an append-only table as the target is not particularly interesting. So remove the tests.
-
Committed by Heikki Linnakangas
-
Committed by Haozhou Wang
Update pg_dump, cdb_dump_agent, psql to fix MU tests after ON MASTER patch
-
Committed by Adam Lee
cURL automatically adds the Content-Length header when it performs a PUT request. Before this commit we also added it ourselves; AWS is OK with that, but other servers would report a duplicate Content-Length error. Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
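For illustration, a minimal libcurl PUT setup that relies on curl's automatic Content-Length; this is a sketch of the general pattern, not the actual gpcloud code, and the helper name is made up:

```c
#include <curl/curl.h>

/*
 * Sketch only: when CURLOPT_UPLOAD is set and the payload size is given via
 * CURLOPT_INFILESIZE_LARGE, libcurl generates the Content-Length header by
 * itself.  Appending our own "Content-Length: ..." header on top of that is
 * what some S3-compatible servers rejected as a duplicate.
 */
static void
setup_put(CURL *curl, const char *url, curl_off_t payload_size)
{
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
    curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, payload_size);
    /* Do NOT also curl_slist_append(headers, "Content-Length: ...") here. */
}
```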
-
Committed by Yuan Zhao
palloc and repalloc check that the request size is under MaxAllocSize, but the implementation (AllocSetAllocImpl, AllocSetReallocImpl) allocates ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ more bytes to store block information. MemoryContextNoteAlloc is later called with the block size. If the palloc request size is between MaxAllocSize - (ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ) and MaxAllocSize, an internal error occurs. Fix: simply remove those assertions; the calculations in MemoryContextNoteAlloc and MemoryContextNoteFree should work with the larger values. This resolves https://github.com/greenplum-db/gpdb/issues/1090. Signed-off-by: Peifeng Qiu <pqiu@pivotal.io> Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
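A minimal sketch of the arithmetic behind the failure; MaxAllocSize matches the upstream definition, but the two header sizes are platform-dependent, so the values below are only illustrative:

```c
#include <stdio.h>
#include <stddef.h>

#define MaxAllocSize      ((size_t) 0x3fffffff)  /* as in memutils.h */
#define ALLOC_BLOCKHDRSZ  ((size_t) 32)          /* illustrative value */
#define ALLOC_CHUNKHDRSZ  ((size_t) 16)          /* illustrative value */

int
main(void)
{
    /* A request that palloc() itself accepts... */
    size_t request = MaxAllocSize - 8;

    /* ...but the allocator grabs header space on top of it, so the block
     * size later reported to MemoryContextNoteAlloc() exceeds MaxAllocSize
     * and used to trip the (now removed) assertion. */
    size_t block = request + ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ;

    printf("request within MaxAllocSize: %s\n",
           request <= MaxAllocSize ? "yes" : "no");
    printf("block   within MaxAllocSize: %s\n",
           block <= MaxAllocSize ? "yes" : "no");
    return 0;
}
```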
-
Committed by Adam Lee
AWS S3 occasionally returns internal errors (HTTP response code 500). This is unusual, but better not to ignore; this commit re-sends requests when it happens. Signed-off-by: Haozhou Wang <hawang@pivotal.io>
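A hedged sketch of the retry idea with libcurl; the helper name, retry count, and backoff are illustrative, not the actual gpcloud implementation:

```c
#include <curl/curl.h>
#include <unistd.h>

/* Re-send the request a few times when the server answers with HTTP 5xx,
 * instead of failing immediately.  Illustrative only. */
static CURLcode
perform_with_retry(CURL *curl, int max_attempts)
{
    CURLcode rc = CURLE_OK;

    for (int attempt = 1; attempt <= max_attempts; attempt++)
    {
        long http_code = 0;

        rc = curl_easy_perform(curl);
        if (rc != CURLE_OK)
            return rc;                   /* transport-level failure */

        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &http_code);
        if (http_code < 500)
            return CURLE_OK;             /* success, or a non-retryable error */

        sleep(1u << attempt);            /* simple exponential backoff */
    }
    return rc;   /* still 5xx after all attempts; caller re-checks the code */
}
```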
-
Committed by Marbin Tan
* We were accessing a variable that did not exist yet, because it is generated by a function called later. Swap the order of the two method calls. * Fix on top of a86e4901
-
Committed by Larry Hamel
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Kenan Yao
Signed-off-by: Gang Xiong <gxiong@pivotal.io>
-
Committed by Kenan Yao
Signed-off-by: Gang Xiong <gxiong@pivotal.io>
-
- 05 Feb 2017, 1 commit
-
-
Committed by Jimmy Yih
The gppersistentrebuild tool was trying to use string_agg without casting the oid to text. This used to work but no longer does. The behave tests only have two scenarios, but the second scenario would always fail because it would run while a primary/mirror segment pair was still in resync mode.
-
- 23 Feb 2017, 15 commits
-
-
Committed by Larry Hamel
* When mocking, we may set up patches that are not cleaned up properly, which causes test pollution. Ensure that we do tearDown properly and raise an error if we do not. Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Larry Hamel
When running gpconfig -s with the --file option, we can now view the postgresql.conf GUC settings instead of just the user GUC settings in the database. This option also reports any inconsistencies in postgresql.conf between the segments. Reformat gpconfig: * enable unit testing, * remove gpconfig whitespace. Add gpconfig behave tests: * When you run gpconfig with -s and --file, you can sometimes run into a NoneType string format error. This is due to us not joining and closing out all the threads properly. * Added gpconfig behave tests to ensure that we are not missing anything in the unit tests. As we are mocking the pool, we need to make sure that we are not doing anything silly, so add minimal behave tests; a single node is probably enough. Signed-off-by: Chumki Roy <croy@pivotal.io> Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
-
Committed by Larry Hamel
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
-
Committed by Larry Hamel
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Larry Hamel
* Fix a hardcoded Python library path that may not be valid for other Python versions. Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
-
Committed by Shreedhar Hardikar
* Using uninitialized value: field kstr.isPrefixOnly is uninitialized when memcpy is called on line 3536. Setting kstr is OK, since the memcpy will copy the value to ret. * Buffer not null terminated: calling strncpy with a maximum size argument of 1024 bytes on the destination array work_set_path, which is 1024 bytes, might leave the destination string unterminated. Upstream precedent: 586dd5d6
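Both findings follow common C patterns; a minimal sketch of the fixes, with names borrowed from the message but placeholder types and sizes:

```c
#include <string.h>
#include <stdbool.h>

/* Placeholder type for illustration; not the actual GPDB definition. */
typedef struct { bool isPrefixOnly; char data[64]; } keystr_t;

static void
example(keystr_t *ret, const char *path)
{
    /* Buffer not null terminated: strncpy() filled to capacity does not
     * terminate the string, so cap the copy and terminate explicitly. */
    char work_set_path[1024];
    strncpy(work_set_path, path, sizeof(work_set_path) - 1);
    work_set_path[sizeof(work_set_path) - 1] = '\0';

    /* Uninitialized value: zero the whole struct before it is memcpy'd
     * into ret, so no field (such as isPrefixOnly) is left undefined. */
    keystr_t kstr;
    memset(&kstr, 0, sizeof(kstr));
    /* ... fill in kstr.data here ... */
    memcpy(ret, &kstr, sizeof(kstr));

    (void) work_set_path;   /* unused in this sketch */
}
```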
-
-
Committed by Heikki Linnakangas
Commit 821b8e10 changed pg_dump's shouldPrintColumn() function to return true for all columns in an external table, and changed dumpExternal() to use the shouldPrintColumn() function. However, that change was not made to the copy of shouldPrintColumn() in cdb_dump_include.c. That oversight didn't cause any ill effects until commit 58b86683 came along and refreshed cdb_dump_agent.c's copy of the dumpExternal() function to match the one in pg_dump.c. With that change, cdb_dump_agent's dumpExternal() started to call shouldPrintColumn() for external tables, but its version of shouldPrintColumn() didn't yet have the change to always return true for external tables. As a result, external tables didn't have any of their columns included. To fix, update cdb_dump's copy of shouldPrintColumn() to match the one in pg_dump.c.
-
Committed by Heikki Linnakangas
Many callers of get_funcs_for_compression(), like the one in appendonly_insert_init(), pass a reference to the relcache as the argument, like this: get_funcs_for_compression(NameStr(rel->rd_appendonly->compresstype)); If a relcache invalidation happens while looking up the compression type, that reference to rel->rd_appendonly is freed and becomes invalid. get_funcs_for_compression() performs a catalog lookup, and can therefore invalidate the reference itself before using it. Make get_funcs_for_compression (GetCompressionImplementation, actually) tolerate that, by making a copy of the argument before calling heap_open, which can cause the cache invalidation. RelationData->rd_index potentially has the same problem, but I don't see any problematic call sites that would pass a rel->rd_index reference to a function like this. In rd_appendonly, all the other fields but compresstype are pass-by-value types (Oid, bool, etc.) that don't have the same hazard. The crash can be reproduced easily by compiling with --enable-testutils, setting gp_test_system_cache_flush_force=plain in a psql session, and performing any select or update on a compressed AO table. I am not adding a regression test, as it would only work with --enable-testutils, but it would be a good idea to run all the regression tests with that GUC enabled every once in a while.
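A minimal sketch of the defensive pattern described above; the function name and buffer size are stand-ins, not the actual GetCompressionImplementation signature:

```c
#include <string.h>

#define NAMEDATALEN 64   /* local stand-in for the PostgreSQL constant */

/*
 * The caller may pass a pointer into the relcache, e.g.
 * NameStr(rel->rd_appendonly->compresstype).  The catalog access performed
 * below can trigger a relcache invalidation and free that memory, so take
 * a private copy of the argument first.
 */
static void
lookup_compression_sketch(const char *comptype_arg)
{
    char comptype[NAMEDATALEN];

    strncpy(comptype, comptype_arg, sizeof(comptype) - 1);
    comptype[sizeof(comptype) - 1] = '\0';

    /*
     * heap_open() on the compression catalog and the scan would go here;
     * they may invalidate the relcache.  From this point on only the local
     * 'comptype' is used, never comptype_arg.
     */
    (void) comptype;
}
```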
-
`rowMarks` in a Query object represents the relations whose tuples are subject to locking when a query contains `FOR SHARE` or `FOR UPDATE` clauses in a SELECT. Those rowMarks are wholesale carried over to an EState, via PlannedStmt. In Postgres the optimizer (planner) would add a junk TID column for each table in the rowMarks, and the executor will lock the tuples when executing the root node of the plan. The check we are removing seems dormant even in Postgres: to actually hit the `elog(ERROR)`, the code path needs (1) no junk TID columns and (2) a non-empty rowMarks list. In Postgres the above conditions are all-or-nothing, so the check feels more like an `Assert` than an `elog(ERROR)`, and it seems to be dead code. In Greenplum, the above condition does not hold. Greenplum has a different (but compliant) implementation, where we place a lock on each table in the rowMarks. Consequently, we do *not* generate the junk column in the plan (except for tables that are only on the master). This is reasonable because the root node is most likely executing in a different slice from the leaf nodes (which are most likely SCANs): by the time a (hypothetical) TID reaches the root node, there's no guarantee that the tuple has not passed through a motion node, hence it makes no sense to lock it. This commit removes that check.
-
Committed by Heikki Linnakangas
These exact same tests were in both 'direct_dispatch' and 'bfv_dd_types'. Oddly, though, one SELECT was missing from 'bfv_dd_types'. Add the missing SELECT there, and remove all these type tests from 'direct_dispatch'. Now that there are no duplicate table names in these tests, also run 'direct_dispatch' in the same parallel group as the rest of the direct dispatch tests. That's a more logical grouping.
-
Committed by Heikki Linnakangas
The test disconnects, then immediately reconnects and checks that the temp table from the previous session has been removed. However, the old backend removes the tables only after the disconnection, so there's a race condition here: the new session might query pg_tables and still see the table, if the old session was not fast enough to remove it first. Add a two-second sleep to make the race condition less likely to happen. This is not guaranteed to avoid the race altogether, but I don't see an easy way to wait for the specific event of the old backend having exited. A sleep in a test that doesn't run in parallel with other tests is a bit annoying, as it directly makes the whole regression suite slower. To compensate for that, move the test to run in parallel with the other nearby bfv_* tests. Those other tests don't use temp tables, so it still works. Also, modify the validation query in the test that checks that there are no temp tables to return whole rows, not just COUNT(*). That will make analysis easier if this test fails.
-
- 22 Feb 2017, 10 commits
-
-
Committed by Heikki Linnakangas
I removed these tests in commit 1398c14e, but forgot to remove them from this Makefile.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
The uao_dml and uaocs_dml tests were identical, except that there were a few additional queries in the uao_dml suite that were missing from the uaocs_dml suite, and various cosmetic differences in table names and such. Use the new "uao templating" mechanism in pg_regress to generate both cases from a single set of files. I didn't look for redundant or useless tests in these suites. At a quick glance, at least many of the select tests seem quite pointless, so it would be good to cull these in the future.
-
Committed by Heikki Linnakangas
This is a straightforward copy-paste, except for one thing: I changed all the quicklz-compressed tables to use zlib instead, so that these tests work without support for legacy quicklz compression. AFAICT, the compression isn't a very interesting aspect of these tests (I even considered making them uncompressed).
-
Committed by Heikki Linnakangas
These test queries were copy-pasted from the upstream test at src/pl/plperl/plperl_trigger.sql, so there is nothing interesting there. The test harness would try to install plperl with gppkg, but I don't think that's very interesting to test. AFAIK we don't plan to install plperl.so with the gppkg mechanism anymore; and if we do, the tests for that should live next to the gppkg scripts that create the package, not here. test_spi.py referenced query01.sql, query02.sql and query03.sql, but the directory only contained query01.sql and query03.sql. Therefore, I believe this was actually broken as it stood.
-
Committed by Heikki Linnakangas
We have more comprehensive tests for gpcheckcat in gpMgmt/test/behave/mgmt_utils/gpcheckcat.feature, and in gpMgmt/bin/gppylib/test/unit/test_unit_gpcheckcat.py.
-
Committed by Peifeng Qiu
gp_dump_agent is used by the management utilities for backup/restore operations. Apply a patch similar to the one for pg_dump to cdb_dump_agent. Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
Committed by Adam Lee
Due to changes in the external table "on master" feature. Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Adam Lee
Due to changes in the external table "on master" feature. Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Heikki Linnakangas
I typoed the INSERT command in this test, and even checked the ERROR into the expected output. Fix. Also, make the test case a bit less fragile by advancing the XID counter also after the INSERT+DELETEs in the second part of these tests. This ensures that the tables will look 'old' before the vacuum. Tweak the limits of 'old' and 'young' XID to make it more certain that a just-created table is considered 'young', and that advancing the XID counter pushes the age to 'old' on all segments.
-