- 14 Feb 2017, 20 commits
-
-
Committed by Heikki Linnakangas
We have tests like this in the main test suite's 'external_table' test, and in gpfdist's regression suite. Note that these external tables were never queried, only created and dropped.
-
Committed by Heikki Linnakangas
These roles were dropped just a few lines earlier already.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
It hasn't done anything since 2010. If I'm reading the commit log correctly, it was added and deprecated only a few months apart, and probably hasn't done anything in any released version.
-
Committed by Heikki Linnakangas
SendDummyPacket() is completely specific to the UDP interconnect implementation. Along the way, I couldn't resist some cosmetic cleanup: use %m rather than strerror(errno), avoid unnecessary variable initializations, and pgindent.
-
Committed by Heikki Linnakangas
And other misc cleanup.
-
Committed by Pengzhou Tang
With commit e28c84b2, queries get cancelled if the postmaster of one or more segments goes down, but some backends on those segments can still get stuck in FileRepPrimary_IsMirroringRequired(). FileRepPrimary_IsMirroringRequired() waits for the third coordinator before taking the next step; unfortunately, the third coordinator has already exited because the postmaster is gone, so the whole backend gets stuck.
-
Committed by Pengzhou Tang
Although the postmaster of one segment is killed, its QEs are still available, and due to some defects a query may hang. Improvements in this commit include:
1. Interconnect motion receivers and senders check segment status if no data is available for a long time, to avoid query hangs.
2. Add segment status checking to the gang sanity test.
3. Do not reuse gangs whose postmaster is not alive; recreate a new one instead.
4. Check segment status when creating a gang fails.
5. Close a connection if its peer is down.
-
Committed by Pengzhou Tang
Formerly, GPDB kept quiet when dispatching a DTX command to a busy gang, a case GPDB is not supposed to support. This hid some planner bugs and, even worse, could cause a SIGSEGV, because dispatch threads may access and modify the same connections without protection.
-
Committed by Pengzhou Tang
Within external_beginscan(), scan->errcontext.previous may link to local error contexts such as spierrcontext. If an error occurs, the abort routines call external_endscan(), which sets the global error_context_stack to those local error contexts, whose stack frames no longer exist; so every time elog() or ereport() is called, a SIGSEGV occurs. To avoid this, set up and restore error_context_stack within a single function.
-
Committed by Jimmy Yih
When rebasing https://github.com/greenplum-db/gpdb/commit/20332d98dc3f181f75db550b3d18ca5e979629ad on top of https://github.com/greenplum-db/gpdb/commit/bfb63ea83e379089f7e90ff9f3c2d6faab0ae722, I forgot to edit the new orca answer file that was added.
-
Committed by David Sharp
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Jamie McAtamney
-
Committed by Tushar Dadlani
In compile_gpdb.bash, simply rely on the variables being set in the environment, and comment that they are required, rather than passing them explicitly. In gphdfs/Makefile, write the credentials to a property file in /tmp, and pass that file to ant.
Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by Jimmy Yih
Append-optimized tables do not support the CLUSTER command. For some unknown reason, the ALTER TABLE CLUSTER ON command was left alone. This commit properly disables the command for append-optimized (row- and column-oriented) tables and reports an error message when it is used.
-
Committed by Jimmy Yih
The gpactivatestandby utility was recently changed in commit https://github.com/greenplum-db/gpdb/commit/c42035ea6089ad9f447f2780b90e7b53413d5e7c to disable some validations on the master data directory. The commit was merged without running this test and made our CI pipeline red... so here's the test fix to update the expected error-handling stdout.
-
Committed by Shreedhar Hardikar
There is a similar assertion that has been observed to fail once in a while, but it is no longer reproducible. Since this is an exceptional case, we should error out sooner rather than later.
Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by Dhanashree
Transformation of `AlterTableStmt` is moved from `parse_analyze()` to `ProcessUtility()`.

GPDB4 flow:
```
ATPExecPartSplit()
  parse_analyze()
    transformStmt()
      transformAlterTableStmt() -> may expand to multiple statements
        transformAlterTable_all_PartitionStmt()
  foreach statement in statementList
    ProcessUtility()
```

GPDB Master flow:
```
ATPExecPartSplit()
  parse_analyze()
    transformStmt() -> no-op for AlterTableStmt
  ProcessUtility()
    transformAlterTableStmt() -> may expand to multiple statements
      transformAlterTable_all_PartitionStmt()
    foreach statement in statementList
      AlterTable() or ProcessUtility()
```

Hence, in `ATPExecPartSplit()`, it is guaranteed that we have not expanded into multiple statements, since no transformation happened in `parse_analyze()`. This commit removes the FIXME comment.
-
Committed by Haisheng Yuan
- The dpe.sql file was duplicated in bugbuster, so we remove that one.
- Result files were old, and hence updated.
Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
Committed by Heikki Linnakangas
Commit 46d9521b moved the test to the main regression suite, and re-enabled it.
-
- 13 Feb 2017, 8 commits
-
-
Committed by Heikki Linnakangas
In commit 3a795d25, I refactored things so that it was left unused, but I failed to actually remove it then.
-
Committed by Daniel Gustafsson
-pthread is coming as part of PTHREAD_CFLAGS, no need to add it separately.
-
Committed by Daniel Gustafsson
cdb_dumpall_agent.c is neither compiled, nor does it contain anything of interest. Kill it for now; if we want cdb_dumpall_agent, it can be resurrected or started from scratch.
-
Committed by Heikki Linnakangas
This hasn't been tested for a while. And if someone wants to build GPDB on Solaris, they should use autoconf tests and the upstream "#ifdef _sparc" method to guard platform-dependent code, rather than the GPDB-specific "pg_on_solaris" flag.
-
Committed by Heikki Linnakangas
If you did "DROP USER IF EXISTS not_there", you got a notice from every segment:

    postgres=# DROP USER IF EXISTS testuser;
    NOTICE:  role "testuser" does not exist, skipping
    NOTICE:  role "testuser" does not exist, skipping  (seg0 127.0.0.1:40000 pid=1554)
    NOTICE:  role "testuser" does not exist, skipping  (seg1 127.0.0.1:40001 pid=1556)
    NOTICE:  role "testuser" does not exist, skipping  (seg2 127.0.0.1:40002 pid=1555)

That was quite noisy. Suppress the notices from the segments, so that you only get one NOTICE, from the master. We had done this for all other object types that support IF EXISTS, like tables, functions, etc.
-
Committed by Pengzhou Tang
Add the GUC_GPDB_ADDOPT flag for intervalstyle so it can be dispatched to QEs.
-
Committed by Adam Lee
-
Committed by Daniel Gustafsson
The tests in this suite simply wrap queries from other ICW suites (mainly qp_derived_table, qp_olap_window, qp_olap_mdqa) in plain CREATE VIEW statements with a SELECT * on the view. Remove it, as we already have the queries in testing and ample coverage of CREATE VIEW in the create_view suite.
-
- 12 Feb 2017, 3 commits
-
-
Committed by Heikki Linnakangas
This code was put in place back in 2007, to work around inconsistent formatting of the DETAIL lines in errors coming from external tables. From MPP-1557:

> The exttab1 test fails occasionally due to strangeness in the DETAIL
> formatting. Here's what happens when I select from my badly-formatted
> external table. Note that for the same select statement on the same table,
> the DETAIL record formatting is different:
>
> jc1=# select count from bad_whois ;
> ERROR:  missing data for column "domain_name"  (seg0 slice1 localhost:11002 pid=11535)
> DETAIL:  External table bad_whois, line 2 of gpfdist://localhost:8080/whois.csv: ""
>
> jc1=# select count from bad_whois ;
> ERROR:  missing data for column "subdomain"  (seg0 slice1 localhost:11002 pid=11535)
> DETAIL:
> External table bad_whois, line 3 of gpfdist://localhost:8080/whois.csv: "is provided by WEBCC for information purposes, and to assist in obtaining
> information about or rela..."
>
> The problem is that normally the DETAIL information comes immediately on
> the same line as the "DETAIL:" label, but if the DETAIL information
> contains a newline, then a newline is inserted after the "DETAIL:" label.
> It's a little tricky to handle this case cleanly for gpdiff comparison.
> If the discrepancy is due to some weirdness in external table message
> generation, it would be nice to fix, but if it is due to some strange
> postgresql message formatting I can probably live with it.

That doesn't seem to be a problem anymore, so remove the hack.
-
Committed by Heikki Linnakangas
We haven't used CVS, with its Revision tags, for a very long time.
-
Committed by Heikki Linnakangas
Commit fecad30b forgot to update this alternative expected output. (For when you build without ORCA)
-
- 11 Feb 2017, 9 commits
-
-
Committed by Heikki Linnakangas
In passing, I fixed the typo in the error message, but missed it in the expected output of these tests.
-
Committed by C.J. Jameson
* rename 5.0.json to 5.json
* the code expects the file to be named based on the major version number, which now only has a single digit
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Venkatesh Raghavan
-
Committed by Venkatesh Raghavan
In addition, delete unnecessary drop statements.
-
Committed by C.J. Jameson
We added make commands to clean up files that are gitignored, because git clean won't be able to delete them now.
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Jimmy Yih
This commit adds TINC aoco_compression to the GPDB 5.0_MASTER pipeline as a nightly trigger job.
-
Committed by Jimmy Yih
The storage type for cidr and inet was changed from plain to main around the beginning of open source Greenplum but still plagues some tests. These should be the last ones inside the TINC directory. Other than that, just some random ans file updates. This commit is just to make the tests green on CI. The tests will be refactored in a later commit to run more quickly and to reduce clutter.
-
Committed by Ashwin Agrawal
sync_tools is currently tar'd in the compile job, uploaded to S3, and downloaded by a whole bunch of jobs. That seems unnecessary, so let's stop doing it and cut down the wasted time and resources.
-
Committed by Ashwin Agrawal
Don't see a rationale for serializing job runs. Multiple commits should be able to run the job in parallel, to give faster feedback.
-