- 01 May 2019, 7 commits
-
-
Committed by Lisa Owen
* docs - jdbc cfg supports connection- and session-level properties * some edits requested by david * reword jdbc server cfg opening paragraph * clarify rejected session prop/value chars as requested by ivan
-
Committed by Lisa Owen
-
Committed by Lisa Owen
* docs - discuss pxf fragment metadata caching * a large number of
-
Committed by Kalen Krempely
Remove unused behave step: "the user waits for "{process_name}" to finish running"
-
Committed by Kalen Krempely
Remove behave step "the database is killed on hosts mdw,sdw1,sdw2" in favor of "the database is not running". This shuts down the database more cleanly, and avoids any potential race conditions.
-
Committed by Kalen Krempely
Remove "user kills a primary postmaster process" in favor of the more generic equivalent step "user stops all {segment_type} processes" which can take either a primary or mirror. For clarity and accuracy rename "user kills all {segment_type} processes" to "user stops all {segment_type} processes".
-
Committed by Kalen Krempely
Cleanly shut down segments using pg_ctl, avoiding the potential race conditions of grep'ing for the pid via ps. This is a much simpler approach that helps with maintainability and extensibility.
-
- 30 Apr 2019, 3 commits
-
-
Committed by Peifeng Qiu
S3KeyReaderTest.MTReadWithUnexpectedFetchDataAtSecondRound is a flaky case related to multithread timing. The case sets up an S3KeyReader and tries to download in parallel with 2 chunks (threads). When any of them encounters an error, all threads abort with the shared error. The case assumed that the first created thread would call fetchData() twice before the other thread fetched with an error. But if the first thread is never scheduled to run, the second thread calls fetchData() first and sets the shared error. The first thread then continues and exits at its first call to fetchData(), reporting the shared error. Modify the second call to fetchData() to be expected at most once.
-
Committed by Paul Guo
Recursively create tablespace directories if they do not exist but are needed when re-redoing some tablespace-related xlogs (e.g. database create with a tablespace) on a mirror.

It has been observed many times recently that the gp_replica_check test fails because some mirror nodes cannot be brought up before testing. The related log looks like this:

  2019-04-17 14:52:14.951 CST [23030] FATAL: could not create directory "pg_tblspc/65546/PG_12_201904072/65547": No such file or directory
  2019-04-17 14:52:14.951 CST [23030] CONTEXT: WAL redo at 0/3011650 for Database/CREATE: copy dir 1663/1 to 65546/65547

That is because some mirror nodes cannot be recovered after previous testing, not due to gp_replica_check itself. The root cause is tablespace-recovery related. Pengzhou Tang and Hao Wu dug into that initially and kindly found a mini repro, below.

Run in a shell:

  rm -rf /tmp/some_isolation2_pg_basebackup_tablespace
  mkdir -p /tmp/some_isolation2_pg_basebackup_tablespace

Then copy and run the SQL below in a psql client:

  drop tablespace if exists some_isolation2_pg_basebackup_tablespace;
  create tablespace some_isolation2_pg_basebackup_tablespace location '/tmp/some_isolation2_pg_basebackup_tablespace';
  \!gpstop -ra -M fast;
  drop database if exists some_database_with_tablespace;
  create database some_database_with_tablespace tablespace some_isolation2_pg_basebackup_tablespace;
  drop database some_database_with_tablespace;
  drop tablespace some_isolation2_pg_basebackup_tablespace;
  \!gpstop -ra -M immediate;

The root cause: on the mirror, after drop database & drop tablespace, the 'immediate' stop leaves the pg_control file not up to date with the latest redo start lsn (this is allowed). When the node restarts, it re-redoes 'create database some_database_with_tablespace tablespace some_isolation2_pg_basebackup_tablespace', but the tablespace directories were deleted in the previous redoing. The 'could not create directory' error could also happen when re-redoing a create table in a tablespace.

We've seen this case in the CI environment, but that was because of a missing get_parent_directory() call in the 'create two parents' code block in TablespaceCreateDbspace(). Change it to a simpler pg_mkdir_p() call instead.

It also seems that src_path could be missing in dbase_redo(), for the example below: re-redoing at the alter step after the tbs1 directory has been deleted by the later 'drop tablespace tbs1'.

  alter database db1 set tablespace tbs2;
  drop tablespace tbs1;

There is discussion upstream about this:
https://www.postgresql.org/message-id/flat/CAEET0ZGx9AvioViLf7nbR_8tH9-%3D27DN5xWJ2P9-ROH16e4JUA%40mail.gmail.com

In this patch I recreate those directories to avoid the error. Other solutions include ignoring the directory-not-existing error, or forcing a flush when redoing the kind of checkpoint xlogs that are normally added in drop database, etc. Let's revert or update the code change after the solution is finalized upstream.
-
Committed by Lisa Owen
* docs - pxf init/sync support to master standby * edits requested by david * edits requested by francisco and oliver * pxf sync from master TO standby or seg host * identify sync run on master in pxf sync option description
-
- 29 Apr 2019, 8 commits
-
-
Committed by Georgios Kokolatos
strncat unfortunately takes a misleading size argument, which means "at most size bytes from src". It is a bit of an antipattern in the string family of functions, and compilers will emit a warning when the size argument matches the size of src, since that is usually not what callers of strncat want. The usage of the function in this code was correct. However, instead of silencing the compiler, strncat was replaced with the dynamic buffer family of operations that postgres provides for the frontend. Also protect against an invalid read in case the size of the result is zero. Reviewed-by: Asim R P <apraveen@pivotal.io> Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Daniel Gustafsson
The curl slist API properly handles NULLs, so we can be less verbose and skip the check before passing to the slist cleanup function. Reviewed-by: Francisco Guerrero <aguerrero@pivotal.io>
-
Committed by Daniel Gustafsson
The header callback was storing the response headers in the context, but since they were never used we might as well save the memory and just return the required count of bytes that we would have stored had we allocated. Reviewed-by: Francisco Guerrero <aguerrero@pivotal.io>
-
Committed by Daniel Gustafsson
The only time the internal buffer cleanup code was called was just before freeing the entire context, so individually zeroing out the members is pointless. Remove the function entirely and inline the buffer freeing into the context cleanup codepath. As for zeroing the error buffer: that was only done right after allocating the error buffer with palloc0() in the first place, so the memory is always already zeroed at that point. Reviewed-by: Francisco Guerrero <aguerrero@pivotal.io>
-
Committed by Daniel Gustafsson
If strlen(addr) is zero then, based on how get_dest_address() works, addr will be NULL, and pfree() on NULL is not permitted. Also, we know that addr will either be a non-empty string or NULL, so we can just as well test for addr being NULL and avoid a strlen() call. Fix by only pfreeing when addr is set. (This is in an elog(ERROR...) context, so freeing isn't terribly interesting, but it also doesn't hurt, so I'm keeping the current codepath.) Reviewed-by: Francisco Guerrero <aguerrero@pivotal.io>
-
Committed by Daniel Gustafsson
The libchurl abstraction layer has many internal helper functions which weren't marked static and were thus exported. Fix by marking all of them static. Reviewed-by: Francisco Guerrero <aguerrero@pivotal.io>
-
Committed by Daniel Gustafsson
Spotted while reading code.
-
Committed by Adam Lee
CI reported some read failures just after the write action, probably because the write has not yet been flushed by AWS. Sleep and retry if AWS returns a "NoSuchKey" error. Update the workflow but not the test cases, because users might hit the same issue. Co-authored-by: Peifeng Qiu <pqiu@pivotal.io>
-
- 28 Apr 2019, 3 commits
-
-
Committed by Adam Lee
CI reported some read failures just after the write action, probably because the write has not yet been flushed by AWS. Sleep and retry if AWS returns a "NoSuchKey" error. Update the workflow but not the test cases, because users might hit the same issue.
-
Committed by Paul Guo
Previously, the gp_replica_check test errored out if the cluster was not in sync, which is usually due to bugs revealed by previous tests rather than gp_replica_check itself. Print a more detailed message so that people are not confused about the status and the failure reason of the gp_replica_check test.
- 27 Apr 2019, 2 commits
-
-
Committed by Chuck Litzell
* Updates to GUCs from Cyrille's reviews * Revise description of temp_buffers to match 6.0 behavior * Fix a couple typos * For effective_cache_size GUC, show default in blocks and size.
-
Committed by Chuck Litzell
-
- 26 Apr 2019, 6 commits
-
-
Committed by Chuck Litzell
-
Committed by Chuck Litzell
-
Committed by Chuck Litzell
* Docs - update pg_class and pg_index catalog table references * dyozie review comments
-
Committed by Lisa Owen
-
Committed by Adam Berlin
-
Committed by Hans Zeller
The script used in pipelines that searches for explain plan changes lists those queries that have cost, row, or plan changes. In many cases the user will want to investigate those changes further. A new set of options generates two directories that are easy to compare: one contains the baseline plans, one file per plan, and the other contains the changed plans of the test.

  $ ~/workspace/gpdb/concourse/scripts/perfsummary.py --help
  usage: perfsummary.py [-h] [--baseLog BASELOG] [--diffDir DIFFDIR]
                        [--diffThreshold DIFFTHRESHOLD] [--diffLevel DIFFLEVEL]
                        [log_file]

  Summarize the test suite execute and explain log

  positional arguments:
    log_file              log file with explain/execute output

  optional arguments:
    -h, --help            show this help message and exit
    --baseLog BASELOG     specify a log file from a base version to compare to
    --diffDir DIFFDIR     request diff files to be created and specify a
                          directory to place diffs into
    --diffThreshold DIFFTHRESHOLD
                          specify a numerical threshold to record plan diffs
                          with a performance regression of more than n percent
    --diffLevel DIFFLEVEL
                          specify which diff files to generate: 1 = all diffs,
                          2 = ignore cost diffs, 3 = plan diffs only
-
- 25 Apr 2019, 11 commits
-
-
Committed by Peifeng Qiu
-
Committed by Peifeng Qiu
Connectivity and loaders are no longer vendored. Loaders will be merged into clients. Add script to create msi package.
-
Committed by Peifeng Qiu
Requires a working libpq.dll. Set CMAKE_PREFIX_PATH to a successful client build.
-
Committed by Peifeng Qiu
Behavior of stat() is not stable across C runtime versions. The implementation in msvcrt.dll calls FindFirstFile(), while the normal CRT that comes with Visual Studio or the Redistributable packages calls CreateFile(). CreateFile() is problematic here: if the path is a pipe, it will open the pipe once and then close it, causing the other side to connect to the wrong pipe. Skip stat() if the name is a pipe and pretend there is one.
-
Committed by Peifeng Qiu
- Don't include strings.h when building with MSVC. - C99 syntax fix for struct initializer. - Call event_set to initialize the event struct; later libevent versions expect this behavior. - Use recv instead of read; the latest CRT implementation of read only works on files, not sockets.
-
Committed by Peifeng Qiu
-
Committed by Peifeng Qiu
- Fix upstream build system for MSVC, add option to build.pl to only build client tools. - Add detection for MSVC SDK version. If this field is missing, 8.1 is assumed. The latest compiler will complain about this. - Various small fixes
-
Committed by Chuck Litzell
* Add a list of SQL keywords to the ref guide * Fix error from review * Update from review comment
-
Committed by Tingfang Bao
Story: https://www.pivotaltracker.com/story/show/164917628 The gp-integration-testing pipeline needs the gp-clients RC package as an input, and then creates the clients RPM based on it. The custom installer patch also needs it to build the client bin installer. Co-authored-by: Xiaoran Wang <xiwang@pivotal.io> Co-authored-by: Shaoqi Bai <sbai@pivotal.io> Co-authored-by: Bob Bao <bbao@pivotal.io>
-
Committed by Chris Hajas
ORCA Explain plans will now contain: `Optimizer: Pivotal Optimizer (GPORCA) version 3.35.0` instead of: `Optimizer: PQO version 3.35.0` Authored-by: Chris Hajas chajas@pivotal.io
-
Committed by Jacob Champion
Follow-up to the previous commit, which worked fine for CentOS6 but fell apart with CentOS7. Because our vendored Python doesn't contain an RPATH/RUNPATH pointer to the location of its libpython, trying to execute it directly will result in failures at link time. The previous commit took the approach that greenplum_path.sh takes, which is to hardcode an LD_LIBRARY_PATH that makes up for this bug. This approach works for CentOS6, which runs Python 2.6 as its system version. On CentOS7, which has Python 2.7, the LD_LIBRARY_PATH causes the system Python to use the vendored libpython.so.2.7, and virtualenv fails. Instead of forcing a cross-linking situation with LD_LIBRARY_PATH, fix the problem in the vendored Python binary by using patchelf to set up a proper RUNPATH. (We originally tried to build our vendored Python with an RPATH set at compile time, but the only way to do that without knowing the eventual installation prefix is by setting a relative RPATH using the `$ORIGIN` construct, and virtualenv is unfortunately incompatible with that.) We do this on any platform that provides a patchelf binary, and do our best to limp along on all others. Along the way, get rid of the run_behave.yml task, which has been confusing us for the entirety of this work. CCP jobs now use run_behave_on_ccp_cluster.yml consistently. Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-