- 04 May 2019, 2 commits
-
-
Committed by Ashwin Agrawal
heaptuple_form_to() zeroes out the buffer, so this commit does the same for memtuple_form_to(). This is done to have zeros in the padding areas, which helps get good compression for AO tables. It also helps eliminate the flakiness seen in the appendonly test for compression ratio: now, given the same input, the compression ratio will be the same. Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/1N5Qi-4WPis/EAIdwvy9CAAJ
-
Committed by Lisa Owen
* docs - clarify pxf filter partitioning support for hive
* clarify the hadoop cfg update content
* remove cluster start
* edits requested by david
* remove disable statement per shivram
-
- 03 May 2019, 5 commits
-
-
Committed by Daniel Gustafsson
-
Committed by Shreedhar Hardikar
Includes ICG changes for ORCA commit: 'Convert FULL OUTER JOIN to LEFT JOIN when possible'
-
Committed by Chris Hajas
Authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Hans Zeller
Also add a few more tests for unions to ICG.
-
Committed by Lisa Owen
-
- 02 May 2019, 5 commits
-
-
Committed by Daniel Gustafsson
The functionality to send alerts via email (and snmp) was removed in commit 65822b80, but I missed removing the autoconf check for the required libcurl feature. Since there are no more consumers, remove the check and feature macro as well. Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Asim R P
The test should wait for the transactions to be in the right state before promoting the standby. This commit adds a wait step to ensure just that. One of the ICW jobs in CI failed because the test promoted the standby before the transactions were prepared on the master. This should no longer happen.
-
Committed by Lisa Owen
* docs - add pxf jdbc.connection.transactionIsolation server cfg property
* use read uncommitted in example; add external
-
Committed by Jacob Champion
-
Committed by Shoaib Lari
Because users may find it confusing when gpconfig prints '-' or 'None' when no GUC value is found in the file, this commit updates gpconfig to output a clearer message. Apply the out-of-band messaging to failure reports as well, and fix up the output of --file-compare. Add tests for each modified implementation. Co-Authored-By: Jamie McAtamney <jmcatamney@pivotal.io> Co-Authored-By: Mark Sliva <msliva@pivotal.io> Co-Authored-By: Shoaib Lari <slari@pivotal.io>
-
- 01 May 2019, 10 commits
-
-
Committed by Asim R P
Transactions that are in the middle of two-phase commit are suspended on the master, and the standby is promoted while they are suspended. Based on the XLOG records emitted by the master, the standby is expected to perform DTM recovery and complete the transactions upon promotion.
-
Committed by Asim R P
This feature enables tests to run SQL on the standby after it is promoted. Use "-1S: <sql>" to run the <sql> statement on the standby. It is assumed that the standby has already been promoted.
-
Committed by Chuck Litzell
-
Committed by Lisa Owen
* docs - jdbc cfg supports connection- and session-level properties
* some edits requested by david
* reword jdbc server cfg opening paragraph
* clarify rejected session prop/value chars as requested by ivan
-
Committed by Lisa Owen
-
Committed by Lisa Owen
* docs - discuss pxf fragment metadata caching
* a large number of
-
Committed by Kalen Krempely
Remove unused behave step: "the user waits for "{process_name}" to finish running"
-
Committed by Kalen Krempely
Remove the behave step "the database is killed on hosts mdw,sdw1,sdw2" in favor of "the database is not running". This shuts down the database more cleanly and avoids any potential race conditions.
-
Committed by Kalen Krempely
Remove "user kills a primary postmaster process" in favor of the more generic equivalent step "user stops all {segment_type} processes", which can take either a primary or a mirror. For clarity and accuracy, rename "user kills all {segment_type} processes" to "user stops all {segment_type} processes".
-
Committed by Kalen Krempely
Cleanly shut down segments using pg_ctl, avoiding the potential race conditions of grep'ing for the pid via ps. This is a much simpler approach that helps with maintainability and extensibility.
-
- 30 Apr 2019, 3 commits
-
-
Committed by Peifeng Qiu
S3KeyReaderTest.MTReadWithUnexpectedFetchDataAtSecondRound is a flaky case, related to multithread timing. The case sets up S3KeyReader and tries to download in parallel with 2 chunks (threads). When any of them encounters an error, all threads abort with the shared error. The case assumed that the first created thread would call fetchData() twice before the other thread fetches with an error. But if the first thread is never scheduled to run, the second thread calls fetchData() first and sets the shared error; the first thread then continues and exits at its first call to fetchData(), reporting the shared error. Modify the second call to fetchData() to be expected at most once.
-
Committed by Paul Guo
Recursively create tablespace directories if they do not exist but are needed when re-redoing tablespace-related xlogs (e.g. database create with a tablespace) on a mirror. It has been observed many times recently that the gp_replica_check test fails because some mirror nodes cannot be brought up before testing. The related log looks like this:
2019-04-17 14:52:14.951 CST [23030] FATAL: could not create directory "pg_tblspc/65546/PG_12_201904072/65547": No such file or directory
2019-04-17 14:52:14.951 CST [23030] CONTEXT: WAL redo at 0/3011650 for Database/CREATE: copy dir 1663/1 to 65546/65547
That is because some mirror nodes cannot be recovered after the previous testing, not because of gp_replica_check itself. The root cause is tablespace-recovery related. Pengzhou Tang and Hao Wu dug into this initially and kindly found a mini repro as below.
Run in a shell:
rm -rf /tmp/some_isolation2_pg_basebackup_tablespace
mkdir -p /tmp/some_isolation2_pg_basebackup_tablespace
Copy and run the SQL below in a psql client:
drop tablespace if exists some_isolation2_pg_basebackup_tablespace;
create tablespace some_isolation2_pg_basebackup_tablespace location '/tmp/some_isolation2_pg_basebackup_tablespace';
\!gpstop -ra -M fast;
drop database if exists some_database_with_tablespace;
create database some_database_with_tablespace tablespace some_isolation2_pg_basebackup_tablespace;
drop database some_database_with_tablespace;
drop tablespace some_isolation2_pg_basebackup_tablespace;
\!gpstop -ra -M immediate;
The root cause: on the mirror, after drop database & drop tablespace, the 'immediate' stop leaves the pg_control file not up to date with the latest redo start lsn (this is allowed). When the node restarts, it re-redoes 'create database some_database_with_tablespace tablespace some_isolation2_pg_basebackup_tablespace', but the tablespace directories were deleted during the previous redo. The 'could not create directory' error could also happen when re-redoing create table in a tablespace.
We've seen this case in the CI environment, but that is because of a missing get_parent_directory() call in the 'create two parents' code block in TablespaceCreateDbspace(). Change it to a simpler pg_mkdir_p() call instead. It also seems that src_path could be missing in dbase_redo(), as in the example below: re-redoing at the alter step fails since the tbs1 directory is deleted by the later 'drop tablespace tbs1'.
alter database db1 set tablespace tbs2;
drop tablespace tbs1;
There is discussion upstream about this: https://www.postgresql.org/message-id/flat/CAEET0ZGx9AvioViLf7nbR_8tH9-%3D27DN5xWJ2P9-ROH16e4JUA%40mail.gmail.com
In this patch the directories are recreated to avoid this error. Other solutions include ignoring the directory-not-existing error, or forcing a flush when redoing the kind of checkpoint xlogs normally added in drop database, etc. Let's revert or update this code change after the solution is finalized upstream.
-
Committed by Lisa Owen
* docs - pxf init/sync support to master standby
* edits requested by david
* edits requested by francisco and oliver
* pxf sync from master TO standby or seg host
* identify sync run on master in pxf sync option description
-
- 29 Apr 2019, 8 commits
-
-
Committed by Georgios Kokolatos
strncat unfortunately takes a misleading size argument: it means "at most size bytes from src", not the size of the destination. It is a bit of an antipattern in the string family of functions, and compilers will emit a warning when the size argument matches the size of src, since that is not what users of strncat usually want. The usage of the function in this code was correct; however, instead of silencing the compiler, strncat was replaced with the dynamic-buffer family of operations that postgres provides for the frontend. Also protect against an invalid read in the case that the size of the result is zero. Reviewed-by: Asim R P <apraveen@pivotal.io> Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Daniel Gustafsson
The curl slist API properly handles NULL, so we can be less verbose and skip the check before passing the list to the slist cleanup function. Reviewed-by: Francisco Guerrero <aguerrero@pivotal.io>
-
Committed by Daniel Gustafsson
The header callback was storing the response in the context, but since it was never used we might as well save the memory and just return the number of bytes we would have stored had we allocated them, as the callback contract requires. Reviewed-by: Francisco Guerrero <aguerrero@pivotal.io>
-
Committed by Daniel Gustafsson
The only time the internal buffer cleanup code was called was just before freeing the entire context, so individually zeroing out the members is pointless. Remove the function entirely and inline the buffer freeing into the context cleanup codepath. As for zeroing the error buffer: it is only done right after allocating the error buffer with palloc0() in the first place, so the memory is always already zeroed when reaching this point. Reviewed-by: Francisco Guerrero <aguerrero@pivotal.io>
-
Committed by Daniel Gustafsson
If strlen(addr) is zero then, based on how get_dest_address() works, addr will be NULL, and pfree() on NULL is not permitted. Also, since addr will either be a non-empty string or NULL, we can just as well test for addr being NULL and avoid a strlen() call. Fix by only pfreeing when addr is set. (This is in an elog(ERROR, ...) context, so freeing isn't terribly interesting, but it also doesn't hurt, so the current codepath is kept.) Reviewed-by: Francisco Guerrero <aguerrero@pivotal.io>
-
Committed by Daniel Gustafsson
The libchurl abstraction layer has many internal helper functions which weren't marked static and were thus exported. Fix by marking all of them static. Reviewed-by: Francisco Guerrero <aguerrero@pivotal.io>
-
Committed by Daniel Gustafsson
Spotted while reading code.
-
Committed by Adam Lee
CI reported some read failures just after the write action, probably because the written data is not yet flushed by AWS. Sleep and retry if AWS returns a "NoSuchKey" error. Update the workflow but not the test cases, because users might hit the same issue. Co-authored-by: Peifeng Qiu <pqiu@pivotal.io>
-
- 28 Apr 2019, 3 commits
-
-
Committed by Adam Lee
CI reported some read failures just after the write action, probably because the written data is not yet flushed by AWS. Sleep and retry if AWS returns a "NoSuchKey" error. Update the workflow but not the test cases, because users might hit the same issue.
-
Committed by Paul Guo
Previously, the gp_replica_check test errored out if the cluster was not in sync, which is usually due to bugs revealed by previous tests rather than gp_replica_check itself. Print a more detailed message so that people are not confused about the status and the failure reason of the gp_replica_check test.
- 27 Apr 2019, 2 commits
-
-
Committed by Chuck Litzell
* Updates to GUCs from Cyrille's reviews
* Revise description of temp_buffers to match 6.0 behavior
* Fix a couple of typos
* For the effective_cache_size GUC, show the default in blocks and size
-
Committed by Chuck Litzell
-
- 26 Apr 2019, 2 commits
-
-
Committed by Chuck Litzell
-
Committed by Chuck Litzell
-