- 07 Feb 2018, 9 commits
-
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
Since node has already been checked in the preceding if statement, the assertion will never be hit and only costs processing, so remove all instances of this pattern.
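The pattern being removed can be sketched in a few lines of C; the `Node` type and `walk()` function here are illustrative stand-ins, not the actual GPDB call sites:

```c
/* Minimal sketch of the redundant assert-after-check pattern the
 * commit removes; names are hypothetical, not real GPDB code. */
#include <assert.h>
#include <stddef.h>

typedef struct Node { int type; } Node;

static int
walk(Node *node)
{
    if (node == NULL)
        return 0;

    /* Redundant: the if above already guarantees node != NULL,
     * so this assertion can never fire and only costs cycles. */
    assert(node != NULL);

    return node->type;
}
```

Dropping the `assert` leaves behavior unchanged, since the guard above it already handles the NULL case.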
-
Committed by Shivram Mani
This reverts commit 8c538317.
-
Committed by Shivram Mani
-
Committed by Xin Zhang
There is a 5-second delay between the mirror's retries to connect to the primary. The original timeout for detecting the streaming state change was 10 seconds, which made the test unstable if the probe happened to fall between two mirror retries. The fix increases the timeout to 20 seconds (200), which is also consistent with the rest of the timeouts used in the test. Author: Xin Zhang <xzhang@pivotal.io>
-
Committed by Kris Macoskey
Pulse has been retired. Tests that once ran on Pulse now run entirely on CCP, so maintaining the Pulse code is no longer necessary.
-
Committed by Mel Kiyama
* docs: gpload - new YAML file parameter STAGING_TABLE. Ported from the 4.3.x documentation. PR for 5X_STABLE; will be ported to MAIN.
* docs: gpload - updates for STAGING_TABLE based on review and email comments.
-
Committed by C.J. Jameson
The output argument `-O` is now a filepath, not a filename. The template argument is still a filename within `templates/`, because that's how Jinja likes it. Co-Author: Kevin Yeap <kyeap@pivotal.io> Co-Author: Jim Doty <jdoty@pivotal.io> Co-Author: C.J. Jameson <cjameson@pivotal.io> Co-Author: Shoaib Lari <slari@pivotal.io>
-
Committed by Shoaib Lari
gpstart did a cluster-wide check of heap_checksum settings and refused to start the cluster if the setting was inconsistent. This meant a round of ssh'ing across the cluster, which caused OOM errors on large clusters. This commit moves the heap_checksum validation to gpsegstart.py and changes the logic so that only those segments whose heap_checksum setting matches the master's are started. Author: Jim Doty <jdoty@pivotal.io> Author: Nadeem Ghani <nghani@pivotal.io> Author: Shoaib Lari <slari@pivotal.io>
-
- 06 Feb 2018, 4 commits
-
-
Committed by Heikki Linnakangas
When I ran the regression tests with enable_mergejoin=on, I got:

+FATAL: Unexpected internal error (costsize.c:2010)
+DETAIL: FailedAssertion("!(outer_skip_rows <= outer_rows)", File: "costsize.c", Line: 2010)
+HINT: Process 26232 will wait for gp_debug_linger=120 seconds before termination.
+Note that its locks and other resources will not be released until then.
+server closed the connection unexpectedly
+ This probably means the server terminated abnormally
+ before or while processing the request.
+connection to server was lost

That happened in the 'gp_recursive_cte' test, with this query:

select recursive_table_1.id from recursive_table_1, recursive_table_2 where recursive_table_1.id = recursive_table_2.id and EXISTS (select * from r where r.i = recursive_table_2.id);

I tried to reduce that to a simpler test case, but gave up. I'm sure it could be done, but I wasn't able to with quick testing. This bug seems unlikely to reappear in the same form, and it's covered by the existing test, even if a bit accidentally, so that's good enough for me.
-
Committed by Ashwin Agrawal
This is just a preparation step/iteration before moving these tests to ICW. Reduce the number of combinations it runs, as nesting levels 3 and 4 don't matter. Plus, reload the config (gpstop -u) only once after setting all the GUCs instead of after each individual one. Ideally, when moving to ICW, the GUCs can be set at session level so no reload is needed at all, and all of these tests can run in parallel.
-
Committed by Todd Sedano
Sometime last year, we discussed setting GOPATH and PATH as follows:

export GOPATH=/Users/pivotal/go:/Users/pivotal/workspace/gpdb/gpMgmt/go-utils
export PATH=/Users/pivotal/go:/Users/pivotal/workspace/gpdb/gpMgmt/go-utils/bin:/Users/pivotal/.rbenv/shims:/Users/pivotal/go:/Users/pivotal/workspace/gpdb/gpMgmt/go-utils/bin:/usr/local/opt/python/libexec/bin:/usr/local/opt/python/libexec/bin:/Users/pivotal/.rbenv/shims:/Users/pivotal/.rbenv/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin

But I'm now realizing that this is tricky, since PATH does not see all the binaries in the GOPATH; only the last directory will have its bin:

/Users/pivotal/go:/Users/pivotal/workspace/gpdb/gpMgmt/go-utils/bin:/usr/local/opt/python/libexec/bin:....

Shell parameter expansion could help solve this problem, PATH=${GOPATH//://bin:}:$PATH, but this adds /bin to all directories except the last one. Instead, we will just hardcode both directories onto PATH.
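The expansion trick and its gap can be seen in a quick bash sketch; the two-entry GOPATH here is an illustrative stand-in, not the actual workstation layout:

```shell
#!/usr/bin/env bash
# Hypothetical GOPATH with two workspace entries (illustrative paths).
GOPATH="/a:/b"

# ${GOPATH//://bin:} replaces every ':' with '/bin:', so every entry
# except the last gets its /bin suffix appended.
echo "${GOPATH//://bin:}"        # -> /a/bin:/b

# The last entry has no trailing ':', so its /bin must be added by hand.
echo "${GOPATH//://bin:}/bin"    # -> /a/bin:/b/bin
```

With more than a handful of entries the manual fix-up for the last entry gets error-prone, which is why the commit simply hardcodes both bin directories onto PATH instead.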
-
Committed by David Yozie
-
- 04 Feb 2018, 1 commit
-
-
Committed by Lav Jain
-
- 03 Feb 2018, 7 commits
-
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
The uao tests were moved to isolation2, and the transactionmanagement tests are currently added to the storage_accessmethods_and_vacuum target.
-
Committed by Ashwin Agrawal
Crash tests during appendonly_insert and appendonly_update are not very interesting, hence they were not ported. Only the vacuum-related tests were moved.
-
Committed by Ashwin Agrawal
Without this change, the faults CompactionBeforeSegmentFileDropPhase and CompactionBeforeCleanupPhase, when set, affected all AO tables running vacuum. To enable writing parallel tests, make these faults trigger only for the specified tables.
-
Committed by Heikki Linnakangas
The test forcibly modifies pg_statistics.stavalues with an array that doesn't match the column's datatype. In general, if you muck around with the catalogs like that, all bets are off, but since we have a test like that in the test suite, let's at least make it work. The problem only arises with enable_mergejoin=on, because apparently the cost estimation of other join types follows a different codepath that happens to work.
-
Committed by Asim R P
The end of a multi-line as well as a single-line SQL command is marked by ';'. Two tests are modified to demonstrate the usage. Co-authored-by: Taylor Vesely <tvesely@pivotal.io> Co-authored-by: Asim R P <apraveen@pivotal.io>
-
- 02 Feb 2018, 10 commits
-
-
Committed by Daniel Gustafsson
Remove an extra semicolon which breaks the query for copy-paste. Per report by Michael Mulcahy.
-
Committed by Daniel Gustafsson
-
Committed by Heikki Linnakangas
* Change the API of various functions that create temporary files so that the caller is not supposed to include the 'pgsql_tmp/' prefix in the pathname. This includes ExecWorkFile_CreateUnique, BufFileCreateFile (renamed to BufFileCreateTempFile to make that clearer), and BufFileOpenFile (also renamed, to BufFileOpenTempFile).
* The above-mentioned functions, which take an exact file name as argument, still don't obey temp_tablespaces, because that would make it unpredictable which exact path the file gets created in, which in turn would make it hard to re-open the same file later. The file is always created in the current database's pgsql_tmp directory. (Normally that's base/pgsql_tmp, but if the database's default tablespace is something else, then pgsql_tmp is also under that tablespace.)
* The OpenTemporaryFile() function now respects temp_tablespaces. We left that out when we merged with PG 8.3, because it was not applicable while we still had filespaces. The upstream commit that implemented this was acfce502.

The whole situation with temporary files is still a bit messy. Some temporary files are created using the workfile API, while others are not. The workfile code uses OpenNamedTemporaryFile() rather than OpenTemporaryFile(), so those files still don't obey temp_tablespaces. On the other hand, the workfile code provides some extra features, like putting limits on disk space usage and compression. Temporary files opened without the workfile API still don't have those features, even though they will now obey temp_tablespaces. But this is a step in the right direction.
-
Committed by Heikki Linnakangas
Rename the BufFileCreateFile and BufFileOpenFile functions to BufFileCreateNamedTemp and BufFileOpenNamedTemp. This makes it clearer that these are for opening temporary files, not permanent ones; the "Named" means that they create/open a file with a particular name, unlike the upstream BufFileCreate() function, which constructs a unique filename on the fly. Remove the 'create' argument from BufFileOpenNamedTemp(): previously it could be used to create a new file or open an existing one, but now you must use BufFileCreateNamedTemp() to create a new file, which seems clearer. Remove the BufFileCreateTemp_ReaderWriter() function, and replace its use with BufFileCreateNamedTemp() and BufFileOpenNamedTemp(); they do the same thing, and the Create/Open names seem clearer.
-
Committed by Heikki Linnakangas
OpenTemporaryFile() is now just like the upstream OpenTemporaryFile(), except that it takes an extra filename-prefix argument for debugging purposes. The files it creates are automatically made unique, and are deleted on close. OpenNamedTemporaryFile() creates a new file, or opens an existing one, with a given name in the temp directory. This can be used for inter-process communication.
-
Committed by Heikki Linnakangas
Instead, avoid creating such Result nodes in the first place, by making plan_pushdown_tlist() check whether the Result node would have any work to do. With this, you get Result nodes in some cases where the old code could zap them away; on the other hand, this can avoid inserting Result nodes not only on top of Appends, but on top of any node. This can be seen in the included expected-output changes: some test queries lose a Result, some gain one. So performance-wise this is about a wash, but it is simpler. The reason to do this right now is that we ran into issues with the "zapping" code while working on the 9.0 merge. I'm sure we could fix those issues, but let's do this rather than spend time debugging and fixing the zapping code during the merge.
-
Committed by Jimmy Yih
The gprecoverseg tool has been broken since filerep and persistent tables were removed. This commit cleans it up a little and makes full mirror recovery work. Also change the mirror_promotion isolation2 test to use the gprecoverseg tool instead of the gpsegwalrep dev script.
-
Committed by Mel Kiyama
* Move the note in the Examples section to emphasize that the container ID depends on the configuration.
* Add creating a docker group to the Docker install instructions for CentOS 6.
-
Committed by Ashwin Agrawal
`repair_frag()` should consult the distributed snapshot (`localXidSatisfiesAnyDistributedSnapshot()`) while following and moving chains of updated tuples. Vacuum consults the distributed snapshot (`localXidSatisfiesAnyDistributedSnapshot()`) to find which tuples can be deleted and which cannot. For RECENTLY_DEAD tuples it used to make the decision based only on a comparison with OldestXmin, which is not sufficient; the distributed snapshot must be checked there as well. Fixes #4298
-
Committed by Lisa Owen
-
- 01 Feb 2018, 9 commits
-
-
Committed by Nikhil Kak
Commit 3520f6c8 removed all references to the parameter tf-bucket-path because it is now hardcoded to clusters.
-
Committed by Adam Lee
1. pipes might not exist in close_program_pipes(); check for that. For instance, if the relation doesn't exist, the COPY workflow fails before executing the program, and "cstate->program_pipes->pid" dereferences NULL.
2. the program might still be running, or hung, when COPY exits; kill it. This covers cases where the program hangs and doesn't take signals while the user is trying to cancel. Since it's already the end of COPY, and the program was started by COPY, it should be safe to kill it to clean up.
-
Committed by Lisa Owen
* docs - clarify some resource group information
* reword a bit to make the intent clearer
-
Committed by Lav Jain
-
Committed by Taylor Vesely
Co-authored-by: Taylor Vesely <tvesely@pivotal.io> Co-authored-by: Asim Praveen <apraveen@pivotal.io>
-
Committed by Asim R P
Co-authored-by: Asim R P <apraveen@pivotal.io> Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Asim R P
The bgwriter and checkpointer were one process in 8.4. Because of this, we may have missed shutting down the bgwriter during a smart shutdown request. Co-authored-by: Taylor Vesely <tvesely@pivotal.io> Co-authored-by: Asim R P <apraveen@pivotal.io>
-
Committed by Ashwin Agrawal
A lot more cleanup of the gpstate code still needs to happen; this is just a small initial attempt to fix the currently broken tests in the walrep schedule.
-
Committed by Ashwin Agrawal
Attempt to see if this fixes the current walrep_2 test failures.
-