- 25 Jun 2019, 1 commit
-
-
Committed by Nikolaos Kalampalikis
Buffer corruption was caused by not correctly terminating the result of readlink(), as of commit ccfa3ab7. We restored a previous line of code that terminates it correctly. Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
- 24 Jun 2019, 1 commit
-
-
Committed by Asim R P
-
- 22 Jun 2019, 3 commits
-
-
Committed by Mark Sliva
So that machines with the same user do not accidentally clobber an existing pipeline. Also remove the -b option, because it is now the default, and rework some logic. Co-authored-by: Mark Sliva <msliva@pivotal.io> Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by David Yozie
* gpmovemirrors: change input separator from colon to |
* gpaddmirrors: change input separator from colon to |, remove mirror string
* gprecoverseg: change input separator from colon to |
* gpexpand: change input separator
* fix from Chuck
-
Committed by Chuck Litzell
* docs - add diskquota module to reference guide
* Add missing quote; minor edits
* Make module doc more consistent with other module docs
* Updates from reviews
* Add section about shared memory and diskquota.max_active_tables
-
- 21 Jun 2019, 12 commits
-
-
Committed by Lisa Owen
* docs - add info about configuring hive access via JDBC
* add "About" to title
* remove some nesting, misc edits from review
* better table column names; clearer Authenticated User values
* edits requested by Alex, adjust heading levels
-
Committed by xiong-gang
If a transaction has only updated one QE, we can do a one-phase commit there. But if one-phase commit transactions don't write pg_distributedlog, the tuples' visibility will be checked only against the local snapshot. This produces incorrect results at the REPEATABLE READ isolation level. For example:

create table t(a int);
tx 1: BEGIN ISOLATION LEVEL REPEATABLE READ;
tx 2: insert into t values(1);
tx 1: select * from t where a = 1;
tx 2: insert into t values(1);
tx 2: insert into t values(2);
tx 1: select * from t;

The first SELECT of tx 1 creates a distributed snapshot on the QD and a local snapshot on segment 1, while the later SELECT of tx 1 creates a local snapshot on segment 2. As a result, the later SELECT sees the first and the third tuple but not the second one. Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Co-authored-by: Hubert Zhang <hzhang@pivotal.io> Co-authored-by: Gang Xiong <gxiong@pivotal.io>
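The anomaly above comes from taking per-segment local snapshots lazily, at different points in time. A minimal Python sketch (not Greenplum code; all names are illustrative) of why a snapshot frozen late on segment 2 sees the third insert while the snapshot frozen early on segment 1 misses the second:

```python
# Sketch: each local snapshot freezes the set of transactions already
# committed on that segment *when the snapshot is first taken*.

class Segment:
    def __init__(self):
        self.tuples = []          # (value, committing_xid)
        self.committed = set()    # xids committed on this segment

    def insert(self, value, xid):
        self.tuples.append((value, xid))
        self.committed.add(xid)

class LocalSnapshot:
    def __init__(self, segment):
        # freeze the commit set at snapshot-creation time
        self.visible_xids = set(segment.committed)

    def read(self, segment):
        return [v for v, xid in segment.tuples if xid in self.visible_xids]

seg1, seg2 = Segment(), Segment()

seg1.insert(1, xid=100)          # tx2's first insert lands on segment 1

snap1 = LocalSnapshot(seg1)      # tx1's first SELECT: snapshot on segment 1 only

seg1.insert(1, xid=101)          # tx2 inserts again: segment 1 ...
seg2.insert(2, xid=102)          # ... and segment 2

snap2 = LocalSnapshot(seg2)      # tx1's later SELECT takes a NEW snapshot
                                 # on segment 2, which already sees xid 102

result = snap1.read(seg1) + snap2.read(seg2)
print(result)  # [1, 2]: the second insert of value 1 is invisible, value 2 is seen
```

A single distributed snapshot recorded in pg_distributedlog is what lets both segments agree on which distributed transactions are visible.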
-
Committed by Adam Lee
The code doesn't call ExecCopySlotMemTupleTo() as the comments claim, and the two calls to memtuple_form_to() are confusing, at least to me; update it.
-
Committed by Adam Lee
This backports upstream commit bd1693e8, slightly modified to resolve a comment conflict.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Thu Jun 23 10:55:59 2016 -0400

Fix small memory leak in partial-aggregate deserialization functions.

A deserialize function's result is short-lived data during partial aggregation, since we're just going to pass it to the combine function and then it's of no use anymore. However, the built-in deserialize functions allocated their results in the aggregate state context, resulting in a query-lifespan memory leak. It's probably not possible for this to amount to anything much at present, since the number of leaked results would only be the number of worker processes. But it might become a problem in future. To fix, don't use the same convenience subroutine for setting up results that the aggregate transition functions use.

David Rowley

Report: <10050.1466637736@sss.pgh.pa.us>
-
Committed by Adam Berlin
-
Committed by Adam Berlin
Test does not include tablespace drop.
-
Committed by Adam Berlin
-
Committed by Jacob Champion
Commit a993ef03 inadvertently introduced gp_role=utility in the postmaster arguments when starting segments via gpsegstart. This is because the sixth argument to SegmentStart is a boolean (utilityMode, which defaults to False), but that commit incorrectly passed the master_checksum_version instead. If checksums are enabled on the master (which sets the checksum version to 1, a truthy value), this code enables utility mode for the started segments. SegmentStart does not take a checksum version argument, so remove it entirely. Ashwin noticed this in #6334, but we hadn't put two and two together until now. An ideal follow-up would be to remove the huge number of layers that make mistakes like this so easy to miss. Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
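The class of bug described here is easy to reproduce in miniature: a truthy integer passed positionally into a slot meant for a boolean flag silently flips behavior. A hypothetical sketch (the function name and signature are simplified, not the real gpsegstart code):

```python
# Sketch of the bug class: an int argument lands in a boolean
# parameter's position, and Python happily treats 1 as True.

def segment_start(host, port, utility_mode=False):
    """Start a segment; utility_mode is expected to be a bool."""
    return "utility" if utility_mode else "normal"

master_checksum_version = 1  # checksums enabled -> truthy

# Buggy call: the checksum version fills the utility_mode slot
assert segment_start("sdw1", 6000, master_checksum_version) == "utility"

# Fixed call: the extraneous argument is removed entirely
assert segment_start("sdw1", 6000) == "normal"
```

Passing such flags as keyword arguments (utility_mode=...) would have made the mismatch a visible error instead of a silent mode change.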
-
Committed by Nikolaos Kalampalikis
We skip the following tests for the following reasons:
- gpinitsystem: TZ interpretation difference for TZ '' vs unset
- gppkg: underlying package zip file contains the wrong OS version
- gpactivatestandby: cause unknown
- gprecoverseg: bash pipe interpretation difference
- gpconfig: demo cluster has LOCALE set to POSIX, not UTF-8
We plan to fix these in a subsequent commit. Co-authored-by: David Krieger <dkrieger@pivotal.io> Co-authored-by: Nikolaos Kalampalikis <nkalampalikis@pivotal.io>
-
Committed by Shoaib Lari
We have added Ubuntu 18.04 support for the behave CLI tests. Co-authored-by: Jacob Champion <pchampion@pivotal.io> Co-authored-by: David Krieger <dkrieger@pivotal.io> Co-authored-by: Nikolaos Kalampalikis <nkalampalikis@pivotal.io>
-
Committed by Kalen Krempely
This is a follow-up to commit 89f01461, "CLI code coverage: add coverage to the production pipeline". It updates the master-generated pipeline now that gpperfmon has been updated to collect and upload coverage.
-
Committed by Lisa Owen
* docs - add resgroup best practices section addressing low swap
* edits requested by David
-
- 20 Jun 2019, 13 commits
-
-
Committed by Your Name
Configure the tarball with specific configure flags for open source gpdb. Except for python, the other components (quicklz, sigar) are not built with open source gpdb. Includes python dependencies from pythonsrc-ext. For more info, refer to https://github.com/pivotal/gp-releng/blob/master/docs/OpenSource-Greenplum-Database-Server-Feature-Component-Packaging-Guidelines.md Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io> Co-authored-by: Shaoqi Bai <sbai@pivotal.io>
-
Committed by Ning Yu
Reorder the resource group struct fields to reduce padding and cache line misses.
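Field order matters because the compiler inserts padding to satisfy alignment. A small illustration (not the actual resource group struct) using Python's ctypes, which follows the platform's default C alignment rules:

```python
# Demo: the same four fields, ordered badly vs. well. On x86-64 the
# badly ordered struct pays 14 bytes of padding; the reordered one
# packs the wide fields first and pads only at the tail.
import ctypes

class Padded(ctypes.Structure):
    _fields_ = [("a", ctypes.c_char),    # 1 byte + 7 padding
                ("b", ctypes.c_int64),   # 8 bytes
                ("c", ctypes.c_char),    # 1 byte + 7 padding
                ("d", ctypes.c_int64)]   # 8 bytes

class Packed(ctypes.Structure):
    _fields_ = [("b", ctypes.c_int64),   # widest fields first
                ("d", ctypes.c_int64),
                ("a", ctypes.c_char),
                ("c", ctypes.c_char)]    # tail padding only

print(ctypes.sizeof(Padded), ctypes.sizeof(Packed))  # e.g. 32 vs 24 on x86-64
```

Smaller structs also mean more instances per cache line, which is the cache-miss half of the commit's rationale.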
-
Committed by Ning Yu
For SET/RESET/SHOW commands, resource group bypass mode is forcibly enabled. We have to check for these commands before the transaction begins, so we run an extra parse pass before the transaction. The parse tree should be released promptly to reduce memory usage.
-
Committed by Ning Yu
There are functions to check whether resource groups are enabled or activated. They are fast and simple, but because they are used in frequently called code such as CHECK_FOR_INTERRUPTS(), the calls to them are still too expensive. We can optimize by converting them to macros so they are inlined in the callers. The resource queue checker functions are converted as well. This improves select-only benchmark performance by 10%-20% in both resource group and resource queue modes.
-
Committed by Ning Yu
memory_spill_ratio can be set both via a catalog property and via a GUC; at the beginning of each transaction the value is synced from the catalog to the GUC. Setting a GUC is not too expensive, but doing so on every transaction is a significant overhead. Optimize by only syncing when the catalog and GUC values differ.
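The optimization is a classic "check before write" guard. A minimal sketch (names are illustrative, not the real GUC machinery), where the counter stands in for the expensive set-GUC path:

```python
# Sketch: skip the expensive sync when catalog and GUC already agree.

class GucSync:
    def __init__(self):
        self.guc_value = None
        self.set_calls = 0    # counts invocations of the expensive path

    def set_guc(self, value):
        self.set_calls += 1   # stands in for the costly GUC assignment
        self.guc_value = value

    def sync_from_catalog(self, catalog_value):
        if self.guc_value != catalog_value:  # the added guard
            self.set_guc(catalog_value)

s = GucSync()
for _ in range(100):           # 100 transactions, catalog value unchanged
    s.sync_from_catalog(20)    # e.g. memory_spill_ratio = 20
print(s.set_calls)  # 1: only the first transaction pays the cost
```

If the catalog value changes mid-stream, the guard still triggers exactly one additional sync, so correctness is preserved.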
-
Committed by Mark Sliva
Also update gpperfmon to collect and upload coverage. Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Mark Sliva
Update the production master pipeline to collect and report CLI coverage. We also add combine_cli_coverage as a job that should not block the release. Co-authored-by: Nikolaos Kalampalikis <nkalampalikis@pivotal.io>
-
Committed by Jacob Champion
We add a concourse job that runs the check_centos CLI unit tests. We also add functions to common.bash that install the python-requirements needed for generating coverage. We use the same gsutil_sync script as before to upload the generated coverage files. Co-authored-by: Mark Sliva <msliva@pivotal.io> Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Kalen Krempely
Create a concourse job that runs after all CLI behave coverage jobs pass. It downloads the collected coverage and combines it into a single file, then generates an HTML coverage report, accessible at the URL printed at the end of the combine coverage job. Python dependencies are omitted from the coverage output. The HTML report is uploaded to the same bucket path as the raw coverage files. Co-authored-by: Nikolaos Kalampalikis <nkalampalikis@pivotal.io> Co-authored-by: Jacob Champion <pchampion@pivotal.io> Co-authored-by: Mark Sliva <msliva@pivotal.io> Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Jacob Champion
We set up coverage collection on demo and ccp behave tests, then define new concourse steps that upload coverage files to the coverage bucket. This uses a generic script we added that uploads a directory to a GCS bucket URI. Co-authored-by: Nikolaos Kalampalikis <nkalampalikis@pivotal.io> Co-authored-by: Kalen Krempely <kkrempely@pivotal.io> Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
Committed by Mark Sliva
This makes it easier to maintain this part of the script, now that we no longer need the virtualenv to be active while running the tests. Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Jacob Champion
This installs the requirements on each host in a ccp cluster, and on a demo cluster. For the demo cluster, it copies the Python requirements, which are first installed into a temporary virtualenv, into the vendored Python stack. For ccp clusters, it copies the Python libraries from the virtualenv on mdw to each host in hostfile_all, including mdw. Co-authored-by: Kalen Krempely <kkrempely@pivotal.io> Co-authored-by: Nikolaos Kalampalikis <nkalampalikis@pivotal.io>
-
Committed by Soumyadeep Chakraborty
This fixes issue #7922. For a DROP on a relation, after the COMMIT PREPARED record is written on the QE, we remove the relation file from disk (the call to DropRelationFiles() inside FinishPreparedTransaction()). While doing so, we need to know whether the relation is a temp relation so that the file deletion code can prepend the 't_' prefix to the relfilenode name. Beyond that case, whenever we have code dropping temporary relation files as part of the 2PC mechanism, we must ensure that we have enough information to construct the correct filesystem path to the temp relation's relfilenode. This commit persists a flag specifying whether a relation is temporary through the 2PC infrastructure (a variety of XLOG records and the in-memory pending deletes structure), in order to access it while performing the file removal inside DropRelationFiles().
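The core of the fix is that the filename cannot be reconstructed from the relfilenode alone: the temp-ness of the relation changes the name. A simplified Python sketch (the real relpath logic also encodes the owning backend for temp relations; this keeps only the prefix distinction the commit message describes):

```python
# Sketch: why the 2PC deletion path must carry an "is temp" flag.
# Without it, the cleanup code cannot tell which of the two possible
# on-disk names a given relfilenode maps to.

def relfilenode_filename(relfilenode, is_temp):
    # simplified: real PostgreSQL temp files also embed a backend id
    return ("t_" if is_temp else "") + str(relfilenode)

# A pending-delete entry persisted through the 2PC machinery needs
# both pieces of information to build the correct path.
pending_delete = {"relfilenode": 16384, "is_temp": True}

path = relfilenode_filename(pending_delete["relfilenode"],
                            pending_delete["is_temp"])
print(path)  # t_16384

assert relfilenode_filename(16384, is_temp=False) == "16384"
```

Persisting the flag in the XLOG records and the in-memory pending-deletes structure is what makes this lookup possible at COMMIT PREPARED time, long after the original backend's catalog state is gone.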
-
- 19 Jun 2019, 5 commits
-
-
Committed by David Yozie
* add --port-range option to reference page
* Remove COPY query limitation for ON SEGMENT clause
* small edit to port range
* adding whitespace for consistency
* start --dest-table additions
* Change --port-range to --data-port-range
* Additional info about --dest-table
* Some initial work on query support
* clean up port variables
* Revert "Some initial work on query support" (reverts commit fd33babba243add74fb0632eb9b33c035b94f781)
* Edits from Chuck
* Removing limitation wording regarding renaming conflict resolution
* Updating --data-port-range based on review feedback
* Feedback from Lisa
* --include-table-file support for query, new JSON-format file
* Change minimum data-port range to 1024; add requirement to cover --jobs; add example port range to --job example
* Split JSON info into new command argument
* Add notice that query isn't supported with multiple table selections
* Add JSON limitations
* Indent --dest-table for better visibility in synopsis
* Regrouping related parameters in the synopsis
* Reorganizing options according to function; adding sections for each option category
* Removing SQL query info from --include-table-file
* Edits to clarify connection options
* --dest-table edits
* remove --validate limitation; add fix from Jerome
* Revert "Remove COPY query limitation for ON SEGMENT clause" (reverts commit 5a3119b27e8e29bf1ee8fbff9df41ce7154e2690)
* Revert "Revert "Remove COPY query limitation for ON SEGMENT clause"" (reverts commit 32b3afcae00a382413187e70c99bd05eb5bf0c0b)
* Remove limitation of --validate --append with data in destination table
* Remove semicolon warning, per Ming Li
* Revert "Remove limitation of --validate --append with data in destination table" (reverts commit 93046734aef4c28aed0c9e6bc7fcdcceb09ee7ab)
-
Committed by Adam Lee
```
WARNING:  skipping "__gp_log_master_ext" --- cannot vacuum non-tables or special system tables
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
connection to server was lost
```
External and foreign tables don't support the vacuum action, so skip them. The same checks as in upstream's vacuum_rel() need not be kept, since vacuumStatement_Relation() has already checked.
-
Committed by Asim R P
The test injects a skip fault on the standby and then starts a create table command in the background. The create table command is expected to block due to the fault. The test used to run the create table command before waiting for the fault to be triggered; sometimes the command completed without blocking if it ran before the previously started fault injection took effect. Make the test deterministic by ensuring that the skip fault is triggered on the standby, and only then starting the create table command. Introduce a wait loop until the create table command shows as waiting in pg_stat_activity. Commit 7be7e1b3 tried to fix this hastily; this patch should fix it properly. Reviewed by Jimmy Yih
-
Committed by David Yozie
-
Committed by Hans Zeller
-
- 18 Jun 2019, 3 commits
-
-
Committed by Asim R P
The create table command should be executed only after the fault to skip flush on the standby is triggered. This patch enforces that order. Reviewed by Paul Guo
-
Committed by David Yozie
* Add docs for auto_explain contrib module
* Feedback from Lisa
* Add postgres/greenplum feature statement
* fix copy/paste error
* Clarify that it executes only on master
* Restore alphabetical order
-
Committed by Lisa Owen
* docs - updates for changes to resgroup MAX_SPILL_RATIO/MEMORY_LIMIT
* some of the edits requested by Hubert
* add RG info to statement_mem GUC description; RGs use max_statement_mem
* more misc edits
* new topic about reserving memory vs. using the RG global shared memory pool
* more misc edits
* change title
-
- 17 Jun 2019, 2 commits
-
-
Committed by Asim R P
The test used to wait indefinitely for the master to shut down (PANIC) due to all retries failing in the second phase of 2PC. The wait would truly never end if the roles of the content 0 primary and mirror had been flipped by a previously failing test. Fix this by making the wait bounded. Also use the new fault injector API while at it. Reviewed by Pengzhou Tang.
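The bounded-wait pattern described here generalizes to any test that polls for a condition: poll with a deadline, and let the caller fail loudly on timeout instead of hanging. A generic sketch (not the actual isolation2 framework code):

```python
# Sketch of a bounded wait: poll a condition but give up after a
# deadline, so a role-flipped or otherwise broken cluster fails the
# test with an error instead of hanging the test run forever.
import time

def wait_until(condition, timeout_s=1.0, interval_s=0.01):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval_s)
    return False  # bounded: caller turns this into a clear test failure

state = {"panicked": False}

def master_panicked():
    state["panicked"] = True   # simulate the master eventually PANICking
    return state["panicked"]

assert wait_until(master_panicked) is True
assert wait_until(lambda: False, timeout_s=0.05) is False
```

Using time.monotonic() rather than time.time() keeps the deadline immune to wall-clock adjustments, which matters in long-running CI environments.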
-
Committed by Asim R P
When '<content_id>U:' is encountered in an isolation2 spec, the framework opens a connection to the master and obtains the hostname and port of the primary segment with the specified content ID. This connection to the master should be in utility mode: a normal-mode connection would wait for DTM recovery, and if a test is in the middle of testing DTM recovery, the connection may never complete. Reviewed by Pengzhou Tang
-