- 15 Feb 2017, 7 commits
-
-
Committed by Heikki Linnakangas
It's not necessary to clear the error buffer on every call. Also, GPOS_NEW_ARRAY() is a pretty slow way of clearing memory. This greatly reduces the overhead of planning simple queries with ORCA. On my laptop, this reduces the time for planning "SELECT 123;" from about 60 ms to 5 ms.
-
Committed by Karen Huddleston
-
Committed by Heikki Linnakangas
I'm not sure what exactly the purpose of each of these tests is, and they probably need more culling and cleanup, but for now, just move them out of bugbuster, so that we can get rid of bugbuster as a separate suite.
-
Committed by Heikki Linnakangas
I don't see any reason to expect that this would behave differently with an external table than a normal one. Move the test from bugbuster to the normal regression suite.
-
Committed by Heikki Linnakangas
This moves 'spi', 'spilltodisk', and 'wrkloadadmin' tests. The goal is to get rid of bugbuster as a separate suite altogether, and this commit is one step towards that goal.
-
Committed by Jingyi Mei
Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by Tom Meyer
- /etc/init.d/sshd doesn't exist
- disable newer host key types in sshd_config

Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
- 14 Feb 2017, 30 commits
-
-
Committed by Daniel Gustafsson
Also update all uses of the name in the code, as well as the pgindent file.
-
Committed by Daniel Gustafsson
-
Committed by Heikki Linnakangas
It was not enabled because there were a lot of errors in it, caused by features that have been disabled in Greenplum, like enforcing foreign keys, INSERT RETURNING, and doing SPI in functions on segments. Those limitations make these tests less interesting than in the upstream, but there are still many good tests in there too, like the test for anonymous DO blocks, which are not covered by any other tests.

The .sql file was almost identical to the upstream version. Clean it up to be even closer, by re-enabling some commented-out queries in it, removing the duplicated tests for RETURN QUERY, removing the READS/CONTAINS/MODIFIES SQL DATA noise words, etc. Memorize the expected output as it is. That's quite different from the upstream, because of all the disabled functionality, but also because of some cosmetic differences in error messages.

The 'schema_topology' test contained a copy of for_vect(), so remove that, now that we run the real thing.
-
Committed by Heikki Linnakangas
There was no difference between the "vacuumed data" and "unvacuumed data" tests, except that the first one performed a vacuum at the end. I think these databases were meant to be used for some kind of follow-up tests, comparing the behaviour of something with or without the vacuum. But we have no such follow-up tests, so this is rather pointless. It's questionable whether it makes sense to have even one copy of this, but I kept it for now.
-
Committed by Heikki Linnakangas
These tables and views were not used for any further testing. They are uninteresting in themselves; we have a lot more interesting tests elsewhere for tables, views, and comments on them. In the test for upper-case tables, one INSERT is enough to confirm that it works. We have more comprehensive testing of transactions in the 'transactions' test, and elsewhere. The 'employee' case is quite unremarkable, so just remove it.
-
Committed by Heikki Linnakangas
That's where we have all the other tests for this feature, let's keep them together.
-
Committed by Heikki Linnakangas
Don't bother to create a separate table for each ALTER test.
-
Committed by Heikki Linnakangas
And at the beginning, move it to a more logical place.
-
Committed by Heikki Linnakangas
* Calling current_database() inside an ignore block doesn't do anything interesting. We have a test, which is not inside an ignore block, already in qp_functions.
* No need to switch to template1 before dropping "admin" role. We can drop it from within the regression database, like all the other test roles.
* Setting optimizer_disable_missing_stats_collection had no effect on any of the actual tests, because switching the connection with \c, on the very next command, resets all GUCs.
-
Committed by Heikki Linnakangas
Most of these queries were already in the qp_correlated_query test. A few were not, moved those instead of outright removing them. This removed the last queries from schema_topology whose output was different with ORCA than without ORCA, so remove the ORCA-specific expected output file.
-
Committed by Heikki Linnakangas
We have tests like this in the main test suite's 'external_table' test, and in gpfdist's regression suite. Note that these external tables were never queried, only created and dropped.
-
Committed by Heikki Linnakangas
These roles were dropped just a few lines earlier already.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
It hasn't done anything since 2010. If I'm reading the commit log correctly, it was added and deprecated only a few months apart, and probably hasn't done anything in any released version.
-
Committed by Heikki Linnakangas
SendDummyPacket() is completely specific to the UDP interconnect implementation. Along the way, I couldn't resist some cosmetic cleanup: use %m rather than strerror(errno), avoid unnecessary variable initializations, and pgindent.
-
Committed by Heikki Linnakangas
And other misc cleanup.
-
Committed by Pengzhou Tang
With commit e28c84b2, queries get cancelled if the postmaster of one or more segments goes down, but some backends on those segments can still get stuck in FileRepPrimary_IsMirroringRequired(). FileRepPrimary_IsMirroringRequired() waits on the third coordinator for the next step; unfortunately, the third coordinator has already exited because the postmaster is gone, so the whole backend gets stuck.
-
Committed by Pengzhou Tang
Although the postmaster of one segment is killed, its QEs are still available, and due to some defects a query may hang. Improvements in this commit include:
1. Interconnect motion receivers and senders check segment status if no data is available for a long time, to avoid query hangs.
2. Add segment status checking to the gang sanity test.
3. Do not reuse gangs whose postmaster is not alive; recreate a new one instead.
4. Check segment status when creating a gang fails.
5. Close a connection if its peer is down.
-
Committed by Pengzhou Tang
Formerly, GPDB kept quiet when dispatching a DTX command to a busy gang, a case GPDB is not supposed to support. This hid some bugs in the planner and, even worse, could cause a SIGSEGV, because dispatch threads may access and modify the same connections without protection.
-
Committed by Pengzhou Tang
Within external_beginscan(), scan->errcontext.previous may link to local error contexts such as spierrcontext. If an error occurs, the abort routines call external_endscan(), which sets the global error_context_stack to those local error contexts whose stack frames no longer exist, so every subsequent elog() or ereport() call triggers a SIGSEGV. To avoid this, set up and restore error_context_stack within a single function.
-
Committed by Jimmy Yih
When rebasing https://github.com/greenplum-db/gpdb/commit/20332d98dc3f181f75db550b3d18ca5e979629ad on top of https://github.com/greenplum-db/gpdb/commit/bfb63ea83e379089f7e90ff9f3c2d6faab0ae722, I forgot to edit the new orca answer file that was added.
-
Committed by David Sharp
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Jamie McAtamney
-
Committed by Tushar Dadlani
In compile_gpdb.bash, simply rely on the variables being set in the environment. Comment that they are required rather than explicitly passing them. In gphdfs/Makefile, write the credentials to a property file in /tmp, and pass that file to ant.

Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by Jimmy Yih
Append-optimized tables do not support the CLUSTER command. For some unknown reason, the ALTER TABLE CLUSTER ON command was left alone. This commit properly disables the command for append-optimized (row- and column-oriented) tables and reports an error message when it is used.
-
Committed by Jimmy Yih
The gpactivatestandby utility was recently changed in commit https://github.com/greenplum-db/gpdb/commit/c42035ea6089ad9f447f2780b90e7b53413d5e7c to disable some validations on the master data directory. The commit was merged without running this test and made our CI pipeline red... so here's the test fix to update the expected error-handling stdout.
-
Committed by Shreedhar Hardikar
There is a similar assertion that has been observed to fail once in a while, but it is no longer reproducible. Since this is an exceptional case, we should error out sooner rather than later.

Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by Dhanashree
Transformation of `AlterTableStmt` is moved from `parse_analyze()` to `ProcessUtility()`.

GPDB4 flow:

```
ATPExecPartSplit()
parse_analyze()
transformStmt()
transformAlterTableStmt() -> may expand to multiple statements
transformAlterTable_all_PartitionStmt()
foreach statement in statementList
ProcessUtility()
```

GPDB Master flow:

```
ATPExecPartSplit()
parse_analyze()
transformStmt() -> no-op for AlterTableStmt
ProcessUtility()
transformAlterTableStmt() -> may expand to multiple statements
transformAlterTable_all_PartitionStmt()
foreach statement in statementList
AlterTable() or ProcessUtility()
```

Hence, in `ATPExecPartSplit()` it is guaranteed that we have not expanded into multiple statements, since no transformation happened in `parse_analyze()`. This commit removes the FIXME comment.
-
Committed by Haisheng Yuan
- The dpe.sql file was duplicated in bugbuster, so we remove that one.
- Result files were old, and hence updated.

Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
Committed by Heikki Linnakangas
Commit 46d9521b moved the test to the main regression suite, and re-enabled it.
-
- 13 Feb 2017, 3 commits
-
-
Committed by Heikki Linnakangas
In commit 3a795d25, I refactored things so that it was left unused, but I failed to actually remove it then.
-
Committed by Daniel Gustafsson
-pthread is coming as part of PTHREAD_CFLAGS, no need to add it separately.
-
Committed by Daniel Gustafsson
cdb_dumpall_agent.c is neither compiled, nor does it contain anything of interest. Kill it for now; if we want cdb_dumpall_agent, it can be resurrected or started from scratch.
-