- 28 Jun 2019, 8 commits
-
-
Committed by Xiaoran Wang
-
Committed by Jesse Zhang
Two empty test files were introduced in commit 364cbce7. Must have been a think-o; remove them.
-
Committed by David Krieger
This reverts commit 559d9d9f.
-
Committed by Shoaib Lari
The recommended sysctl settings for user clusters are out of date, and after some investigation we have identified a good minimal set of recommended defaults. These will be updated in the documentation, but we also want to set them in the VMs used for our CLI Behave tests so that we use values similar to those that users will have in their environments. This commit adds a section to the Concourse cluster task file to do so.
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Pengzhou Tang
A Motion node may have a subplan whose targetlist needs to be expanded to make sure all entries of the subplan's hashExpr are in its targetlist. In that case, the Motion node should also update its own targetlist; otherwise the targetlist mismatch causes the Motion node to parse tuples (especially MemTuples) with a mismatched tuple description, producing wrong results or even crashing. For a subplan, if we can get an Expr for a distkey column from the targetlist, the Expr must either be in the targetlist or reference Vars in the targetlist. The old logic was: if the Expr is valid, an Expr that merely references Vars is also added to the targetlist. For example, given a subplan with targetlist (c1) and a locus whose distkey is (c1::float8), the returned Expr references a Var already in the targetlist, yet the old targetlist was extended to (c1, c1::float8). That extension is unnecessary: the Expr can be evaluated from the Vars, and it forces the Motion to transfer extra columns per tuple.
Co-authored-by: Adam Lee <ali@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Richard Guo <rguo@pivotal.io>
Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
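The decision described above can be sketched in Python (a toy model with invented names; the real logic lives in the planner's C code): a distkey expression needs no targetlist expansion when every Var it references is already carried by the targetlist.

```python
# Toy expression trees: ("var", name) leaves and ("cast", expr, typename) nodes.
def vars_of(expr):
    """Collect the Var names referenced by an expression."""
    if expr[0] == "var":
        return {expr[1]}
    if expr[0] == "cast":
        return vars_of(expr[1])
    return set()

def needs_expansion(targetlist, distkey_expr):
    """The distkey expression can be evaluated from the existing targetlist
    when all of its Vars are already present; only then is expansion skipped."""
    available = {e[1] for e in targetlist if e[0] == "var"}
    return not vars_of(distkey_expr) <= available
```

With targetlist (c1) and distkey c1::float8, the old logic appended c1::float8 anyway; the check above shows the expansion is unnecessary because c1 already covers it.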
-
Committed by Shreedhar Hardikar
This commit handles a case missed by the previous commit, "Fix algebrization of subqueries in queries with complex GROUP BYs". The logic inside RunExtractAggregatesMutator's Var case was intended to fix top-level Vars inside subqueries in the targetlist, but it also incorrectly fixed top-level Vars in subqueries inside aggregates.
-
Committed by Shoaib Lari
The gpconfig test was failing for UTF-8 characters when run on Ubuntu because our test containers use the POSIX locale. In this commit, we set `LC_TYPE` to `en_US.UTF-8` for the gpconfig test so that it behaves the same on all platforms.
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Shoaib Lari
The `gen_pipeline()` function called `suggested_git_remote()` and `suggested_git_branch()` to produce default values for its `git_remote` and `git_branch` parameters. For the `prod` pipeline, `gen_pipeline()` is called with the GPDB repo and `BASE_BRANCH`; however, because the `suggested_*()` functions were invoked in the function definition itself, they ran regardless and raised errors, as they are not applicable to production branches. Therefore, this commit uses `None` as the default and calls the `suggested_*()` functions only when the corresponding parameter is not provided by the caller.
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
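The fix above follows a standard Python idiom, sketched here (function names follow the commit message; the bodies are invented stubs). Writing `def gen_pipeline(git_remote=suggested_git_remote())` evaluates the helper once, at definition time, whether or not the caller supplies a value; using `None` defers the call.

```python
def suggested_git_remote():
    # In the real script this would inspect the local git checkout; stubbed here.
    return "origin"

def suggested_git_branch():
    return "my-feature-branch"

def gen_pipeline(git_remote=None, git_branch=None):
    # Resolve the suggested_*() defaults lazily, only when the caller did not
    # pass a value, so `prod` invocations never execute the helpers.
    if git_remote is None:
        git_remote = suggested_git_remote()
    if git_branch is None:
        git_branch = suggested_git_branch()
    return git_remote, git_branch
```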
-
- 27 Jun 2019, 11 commits
-
-
Committed by Lisa Owen
* docs - PXF is now bundled with the postgresql 42.2.5 jar
* remove postgresql jar registration from the example
-
Committed by Tingfang Bao
Authored-by: Tingfang Bao <bbao@pivotal.io>
-
Committed by Shaoqi Bai
greenplum-db/greenplum-database-release
Previously, the releng team merged PR https://github.com/greenplum-db/gpdb/pull/7906 to release open-source Greenplum in the gpdb repo. Later we decided to move that work to the new repo greenplum-db/greenplum-database-release, so this commit deletes the earlier work.
Reviewed-by: Mark Sliva <msliva@pivotal.io>
Reviewed-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by Chris Hajas
ORCA commit: Skip group expression containing circular references
Authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by David Yozie
* Docs: add note that gpcopy inherits encryption from client connection
* Change note to clarify that SSL encryption is not supported
* typo fix
-
Committed by Tingfang Bao
Authored-by: Tingfang Bao <bbao@pivotal.io>
-
Committed by Bradford D. Boyle
The resource rename is needed in order to fetch the correct version.
Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
Committed by Bradford D. Boyle
Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
Committed by Bradford D. Boyle
Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
Committed by Mark Sliva
We are introducing a new pattern of using BATS and Python unit testing for files inside the concourse/scripts folder. There are no unit tests yet, but this pattern allows them to be added easily.
Co-authored-by: Mark Sliva <msliva@pivotal.io>
Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
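As a sketch of what the Python half of that pattern might look like (the helper function and module layout are invented; the commit names no specific script):

```python
import unittest

# A small helper of the kind a concourse/scripts file might expose
# (hypothetical; used here only to illustrate the unit-testing pattern).
def parse_bool_env(value):
    """Interpret a CI environment-variable string as a boolean flag."""
    return value.strip().lower() in ("1", "true", "yes")

class ParseBoolEnvTest(unittest.TestCase):
    def test_truthy_values(self):
        for v in ("1", "true", "YES "):
            self.assertTrue(parse_bool_env(v))

    def test_falsy_values(self):
        for v in ("0", "false", ""):
            self.assertFalse(parse_bool_env(v))
```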
-
Committed by Mark Sliva
The task originally separated the files listed in NON_PRODUCTION_FILES.txt out of the bin_gpdb tarball, putting them into QAUTILS_TARBALL and leaving OUTPUT_TARBALL with the rest of the files. However, QAUTILS_TARBALL is never used; the only important function is removing the non-production files. With this commit, we refactor separate_qautils_files to only remove files. We also introduce a new pattern for TDD'ing bash scripts using the MIT-licensed BATS framework.
Co-authored-by: Mark Sliva <msliva@pivotal.io>
Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
(cherry-picked from 5458b94b)
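The simplified behavior, "copy the tarball through, dropping only the listed files", can be sketched in Python (a toy stand-in; the real task is a bash script driving tar(1) over bin_gpdb):

```python
import tarfile

def strip_non_production(src_path, dst_path, non_production_names):
    """Copy a tarball, dropping every member listed in non_production_names.

    Models the refactored separate_qautils_files: no QAUTILS_TARBALL is
    produced, the excluded files are simply removed from the output.
    """
    excluded = set(non_production_names)
    with tarfile.open(src_path, "r:*") as src, \
         tarfile.open(dst_path, "w:gz") as dst:
        for member in src.getmembers():
            if member.name in excluded:
                continue  # a non-production file: drop it
            fileobj = src.extractfile(member) if member.isfile() else None
            dst.addfile(member, fileobj)
```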
-
- 26 Jun 2019, 4 commits
-
-
Committed by Peifeng Qiu
OpenSSL loads a config file from the path it was configured with. If that path is writable by non-admin users, a malicious user can potentially inject code whenever OpenSSL is invoked. Change the dependency path to c:\windows\system32. Reference: https://curl.haxx.se/docs/CVE-2019-5443.html
-
Committed by David Yozie
-
Committed by Shoaib Lari
In the Behave test for gpinitsystem, database creation in the default timezone was tested by setting the TZ variable to an empty string. On Ubuntu, this resulted in the `date +"%Z"` command reporting `Universal` as the timezone rather than `UTC`. In this commit, the `TZ` variable is unset rather than set to an empty string, giving uniform behavior on Ubuntu and other platforms. Since we already have a Behave step that unsets a variable correctly, we remove the incorrect step.
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
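The distinction the test tripped over, "empty" versus "unset", can be shown with a small Python helper (a sketch; the real fix is a Behave step, not this function):

```python
import os

def env_without_tz(environ):
    """Return a copy of the environment with TZ truly unset, not empty.

    TZ="" still *defines* the variable, and on some platforms the C library
    then reports a zone name like "Universal" instead of falling back to the
    system default from /etc/localtime; popping the key avoids that.
    """
    env = dict(environ)
    env.pop("TZ", None)
    return env
```

A subprocess launched with this environment (e.g. `subprocess.run(["date", "+%Z"], env=env_without_tz(os.environ))`) then sees no TZ at all.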
-
Committed by Nikolaos Kalampalikis
We were not killing the postmaster processes, for the following reasons:
1. The `$` symbols passed to `Command` were expanded on the local machine due to the `RemoteExecutionContext` implementation. This made the awk filter a no-op.
2. We did not catch this on CentOS because the CentOS implementation of /bin/kill kills as many of the given processes as are valid, ignoring the invalid ones. Ubuntu's, however, fails fast if any of the arguments are invalid.
3. The test step did not validate the result of `Command.run()`, thus ignoring the failure.
We have replaced `Command` with `subprocess.check_call()`.
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>
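Two of the points above can be shown in miniature with the standard library (the shell commands here are invented placeholders, not the test's actual kill pipeline):

```python
import subprocess

# 1. Passing an argv list (no shell=True) hands "$HOME" to the command
#    verbatim; no local shell gets a chance to expand it first.
# 2. check_call() raises CalledProcessError on a non-zero exit, so a failed
#    command cannot be silently ignored the way an unchecked result was.
def run_checked(argv):
    """Run argv, returning 0 on success or the non-zero exit code."""
    try:
        subprocess.check_call(argv)
        return 0
    except subprocess.CalledProcessError as e:
        return e.returncode
```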
-
- 25 Jun 2019, 8 commits
-
-
Committed by Adam Berlin
- Ensure that indexes are placed into the default tablespace on the master and segments.
- The default tablespace GUC should not affect where a temporary table or its indexes are placed.
- The default tablespace should override an explicit database tablespace configuration.
- ALTER DATABASE SET TABLESPACE should not impact SET default_tablespace TO 'some_tablespace'.
- Objects created while default_tablespace is set should end up in 'some_tablespace'.
-
Committed by Tingfang Bao
Add the Windows clients testing jobs to the pipelines
Authored-by: Tingfang Bao <bbao@pivotal.io>
-
Committed by Tingfang Bao
1) Run TEST_REMOTE.py of gpload
2) Run psql SSL and gpfdist Windows pipe testing
Co-authored-by: Peifeng Qiu <pqiu@pivotal.io>
Co-authored-by: Tingfang Bao <bbao@pivotal.io>
Co-authored-by: Xiaoran Wang <xiwang@pivotal.io>
Co-authored-by: Xiaodong Huo <xhuo@pivotal.io>
-
Committed by xiong-gang
The bug was introduced in commit 'e00098' and happens in a case like this:
Tx 1: generate xid 1000 on segment 0 (xmin=1000, globalxmin=1000)
Tx 2: generate xid 2000 on segment 0 (xmin=1000, globalxmin=1000)
Tx 1: finish
Tx 3: generate xid 3000 on segment 0 (xmin=2000, globalxmin=1000)
Tx 2: finish
Tx 4: generate xid 4000 on segment 0 (xmin=3000, globalxmin=2000)
In this scenario, tx4 can advance DistributedLogShared->oldestXmin to 2000; tx3 should use its own globalxmin of 1000 instead of the value of DistributedLogShared->oldestXmin.
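The schedule above can be replayed with a toy model (not GPDB code; the snapshot fields are simplified): each new transaction records xmin = oldest xid still running on the segment, and globalxmin = oldest xmin among running transactions, including its own.

```python
running = {}  # xid -> (xmin, globalxmin) of transactions still in progress

def start(xid):
    """Start a transaction and compute its snapshot bounds."""
    xmin = min(running) if running else xid
    globalxmin = min([x for x, _ in running.values()] + [xmin])
    running[xid] = (xmin, globalxmin)
    return xmin, globalxmin

def finish(xid):
    running.pop(xid)
```

Replaying the four transactions reproduces the narrated values and shows why tx4's globalxmin (2000) can run ahead of tx3's (1000): tx3 must trust its own snapshot, not the shared oldestXmin that tx4 advanced.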
-
Committed by Chris Hajas
Improve PciIntervalFromScalarBoolOr() to use a faster algorithm
Only allow const expr evaluation for (const cmp const), ignoring casts
Remove the FDisjointLeft check in CRange::PrngExtend()
Co-authored-by: Chris Hajas <chajas@pivotal.io>
Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
Committed by Mark Sliva
It is superseded by NON_PRODUCTION_FILES.txt
Co-authored-by: Mark Sliva <msliva@pivotal.io>
Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by Jamie McAtamney
Prior to this commit, standard connections made to a primary segment would hang forever if that segment was not started in utility mode, because the call to cdb_setup() waits for a state that will never arrive. The hang exists in 6X as well, for slightly different reasons (an infinite wait on a semaphore). Some utilities use pg_isready to verify segment status, and a hung backend reports an incorrect status for the segment, which is how this bug was discovered. The hung backend also sticks around forever in the startup state.
Add an explicit check on QE segments that the incoming session does not expect a dispatcher role, and bail out at the same place where we would normally complain about a mismatched utility-mode connection. (If the incoming session is set to the executor role, we'll raise a FATAL for other reasons, but at least we won't hang.)
Add a regress test for the new behavior. We co-opt the new internal_connection test file for this and rename it gp_connections. We also add a more end-to-end regression test using the same reproduction case as the original bug.
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Nikolaos Kalampalikis
Buffer corruption was caused by not correctly null-terminating the result of readlink(), as of commit ccfa3ab7. We restored a previous line of code that terminates it correctly.
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
- 24 Jun 2019, 1 commit
-
-
Committed by Asim R P
-
- 22 Jun 2019, 3 commits
-
-
Committed by Mark Sliva
So that machines with the same user do not accidentally clobber an existing pipeline. Also remove the -b option, because it is now the default, and rework some logic.
Co-authored-by: Mark Sliva <msliva@pivotal.io>
Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by David Yozie
* gpmovemirrors: change input separator from colon to |
* gpaddmirrors: change input separator from colon to |, remove mirror string
* gprecoverseg: change input separator from colon to |
* gpexpand: change input separator
* fix from Chuck
-
Committed by Chuck Litzell
* docs - add diskquota module to reference guide
* Add missing quote; minor edits
* Make module doc more consistent with other module docs
* Updates from reviews
* Add section about shared memory and diskquota.max_active_tables
-
- 21 Jun 2019, 5 commits
-
-
Committed by Lisa Owen
* docs - add info about configuring Hive access via JDBC
* add "About" to title
* remove some nesting, misc edits from review
* better table column names; clearer Authenticated User values
* edits requested by Alex, adjust heading levels
-
Committed by xiong-gang
If a transaction has updated only one QE, we can do a one-phase commit there. But if one-phase commit transactions don't write pg_distributedlog, tuple visibility will be checked only against the local snapshot, which produces an incorrect result at the repeatable read isolation level. For example:
create table t(a int);
tx 1: BEGIN ISOLATION LEVEL REPEATABLE READ;
tx 2: insert into t values(1);
tx 1: select * from t where a = 1;
tx 2: insert into t values(1);
tx 2: insert into t values(2);
tx 1: select * from t;
The first SELECT of tx1 creates a distributed snapshot on the QD and a local snapshot on segment 1, and the later SELECT of tx1 creates a local snapshot on segment 2. As a result, the later SELECT sees the first and the third tuple but not the second one.
Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Co-authored-by: Hubert Zhang <hzhang@pivotal.io>
Co-authored-by: Gang Xiong <gxiong@pivotal.io>
-
Committed by Adam Lee
The code doesn't call ExecCopySlotMemTupleTo() as the comments claimed, and the two calls to memtuple_form_to() are confusing, at least to me; update the comments.
-
Committed by Adam Lee
This backports upstream commit bd1693e8, slightly modified to resolve a comment conflict.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Thu Jun 23 10:55:59 2016 -0400
Fix small memory leak in partial-aggregate deserialization functions.
A deserialize function's result is short-lived data during partial aggregation, since we're just going to pass it to the combine function and then it's of no use anymore. However, the built-in deserialize functions allocated their results in the aggregate state context, resulting in a query-lifespan memory leak. It's probably not possible for this to amount to anything much at present, since the number of leaked results would only be the number of worker processes. But it might become a problem in future. To fix, don't use the same convenience subroutine for setting up results that the aggregate transition functions use.
David Rowley
Report: <10050.1466637736@sss.pgh.pa.us>
-
Committed by Adam Berlin
-