- 28 Jun 2019, 1 commit
-
-
Committed by Shoaib Lari
The recommended sysctl settings for user clusters are out of date, and after some investigation we have found a good minimal set of recommended defaults. These will be updated in the documentation, but we also want to set them in the VMs used for our CLI Behave tests so that we use values similar to those users will have in their environments. This commit adds a section to the Concourse cluster task file to do so.
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
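As a rough sketch of what such a task section might do (the file name and the specific settings below are illustrative assumptions, not the exact values chosen in this commit), the task could write a sysctl fragment and apply it:

```shell
# Illustrative only: write a small sysctl fragment like the one the task
# might install on the test VMs. The settings below are common
# Greenplum-style defaults, not necessarily the commit's exact values.
cat > ./90-gpdb-test.conf <<'EOF'
vm.overcommit_memory = 2
vm.overcommit_ratio = 95
net.ipv4.ip_local_port_range = 10000 65535
EOF
# On a real VM this fragment would be copied to /etc/sysctl.d/ and
# applied with:  sudo sysctl --system
echo "wrote $(wc -l < ./90-gpdb-test.conf) settings"
```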
-
- 20 Jun 2019, 3 commits
-
-
Committed by Mark Sliva
Also update gpperfmon to collect and upload coverage.
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Jacob Champion
We set up coverage collection on the demo and CCP behave tests, then define new Concourse steps that upload the coverage files to the coverage bucket. This uses a generic script we added that uploads a directory to a GCS bucket URI.
Co-authored-by: Nikolaos Kalampalikis <nkalampalikis@pivotal.io>
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
Co-authored-by: Mark Sliva <msliva@pivotal.io>
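A minimal sketch of such a generic upload helper (the function name and arguments are assumptions, and the `echo` prints the command rather than running it; drop the `echo` to perform a real upload, which requires gsutil and credentials):

```shell
# Hypothetical sketch of an "upload a directory to a GCS URI" helper.
upload_dir_to_gcs() {
    local src_dir=$1 gcs_uri=$2
    # gsutil -m parallelizes; rsync -r mirrors the directory tree
    echo gsutil -m rsync -r "$src_dir" "$gcs_uri"
}

upload_dir_to_gcs ./coverage gs://example-coverage-bucket/build-1/
```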
-
Committed by Jacob Champion
This installs the requirements for each host in a CCP cluster, and on a demo cluster. For the demo cluster, the Python requirements are first installed into a temporary virtual env and then copied into the vendored Python stack. For CCP clusters, the Python libraries are copied from the virtual env on mdw to each host listed in hostfile_all, including mdw itself.
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
Co-authored-by: Nikolaos Kalampalikis <nkalampalikis@pivotal.io>
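The demo-cluster flow described above might look roughly like this (the paths, and the use of python3/venv here so the sketch runs anywhere, are assumptions for illustration; the actual tasks target the vendored Python stack):

```shell
# Sketch: create a throwaway virtual env, then copy its site-packages into
# a vendored Python tree. Paths are illustrative placeholders.
python3 -m venv --without-pip ./tmpvenv
# In the real task, the requirements would be installed first, e.g.:
#   ./tmpvenv/bin/pip install -r requirements.txt
mkdir -p ./vendored/site-packages
cp -r ./tmpvenv/lib/python3.*/site-packages/. ./vendored/site-packages/
echo "vendored: $(ls ./vendored)"
```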
-
- 05 Jun 2019, 1 commit
-
-
Committed by Bradford D. Boyle
This PR removes SLES11 from the gpAux build system and from Concourse CI scripts. SLES11 will not be supported for GPDB 6+.
-
- 25 Apr 2019, 1 commit
-
-
Committed by Jacob Champion
Follow-up to the previous commit, which worked fine for CentOS6 but fell apart with CentOS7. Because our vendored Python doesn't contain an RPATH/RUNPATH pointer to the location of its libpython, trying to execute it directly will result in failures at link time. The previous commit took the approach that greenplum_path.sh takes, which is to hardcode an LD_LIBRARY_PATH that makes up for this bug. This approach works for CentOS6, which runs Python 2.6 as its system version. On CentOS7, which has Python 2.7, the LD_LIBRARY_PATH causes the system Python to use the vendored libpython.so.2.7, and virtualenv fails.

Instead of forcing a cross-linking situation with LD_LIBRARY_PATH, fix the problem in the vendored Python binary by using patchelf to set up a proper RUNPATH. (We originally tried to build our vendored Python with an RPATH set at compile time, but the only way to do that without knowing the eventual installation prefix is to set a relative RPATH using the `$ORIGIN` construct, and virtualenv is unfortunately incompatible with that.) We do this on any platform that provides a patchelf binary, and do our best to limp along on all others.

Along the way, get rid of the run_behave.yml task, which has been confusing us for the entirety of this work. CCP jobs now use run_behave_on_ccp_cluster.yml consistently.
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
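In sketch form, the install-time fix could look like the following (the paths and messages are illustrative assumptions; the point is that once the installation prefix is known, an absolute RUNPATH can be written into the binary, avoiding both `$ORIGIN` and LD_LIBRARY_PATH):

```shell
# Sketch: patch an absolute RUNPATH into the vendored interpreter at
# install time. Paths are assumptions for illustration.
GPHOME=${GPHOME:-/usr/local/greenplum-db}
PYBIN="$GPHOME/ext/python/bin/python"
if command -v patchelf >/dev/null 2>&1 && [ -f "$PYBIN" ]; then
    # A RUNPATH, unlike LD_LIBRARY_PATH, affects only this binary, so the
    # system Python no longer picks up the vendored libpython.
    patchelf --set-rpath "$GPHOME/ext/python/lib" "$PYBIN"
    echo "RUNPATH set on $PYBIN"
else
    echo "patchelf or vendored python missing; limping along without RUNPATH"
fi
```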
-
- 20 Mar 2019, 1 commit
-
-
Committed by Wang Hao
For GP 6 beta, the release engineering team is removing the apr-util package from the list of bundled dependencies. Users will be asked to provide their own apr-util package, whose version can differ on each platform. It is therefore necessary to verify that gpperfmon works with the platform-provided apr-util on each supported platform. Originally, the gpperfmon test was part of the CLI test suite and only covered CentOS6. This commit moves it into a dedicated suite so that multiple platforms can be tested. Note: SLES12 does not need libapr-util1 installed to run gpmmon.
-
- 03 Apr 2018, 1 commit
-
-
Committed by Alexandra Wang
CCP 2.0 includes the following changes:
1) CCP migration from AWS to Google. CCP jobs (except for jobs that need connections to ddboost and netbackup) no longer need external workers, so the ccp tags for external workers are removed. The tfstate backends for AWS and Google are stored separately in the s3 bucket, `clusters-aws/` for AWS and `clusters-google/` for Google; set_failed also differs between the two cloud providers.
2) Separate gpinitsystem from the gen_cluster task. When gpinitsystem itself fails in production, it is important for a developer to be able to quickly distinguish a CCP failure from a problem with the binaries used to init the GPDB cluster. By separating the tasks, it is easier to see when gpinitsystem itself has failed.
3) The path to the scripts used in CCP has changed. Instead of all the generic scripts living in `ccp_src/aws/`, they are now in a better location, `ccp_src/scripts/`.
4) Parameter names have changed. `platform` is now `PLATFORM` for all references in CCP jobs.
5) NVMe jobs. Jobs that used NVMe in AWS have been migrated to an identical feature for NVMe in GCP, but this includes a change to the terraform path specified in the job.
6) Instance type mapping from EC2 to GCE. The new parameter name for specifying the instance type in GCP jobs is `instance_type`. There is not always a 1:1 match between instance types, so there are slight differences in available resources for some jobs.
Signed-off-by: Kris Macoskey <kmacoskey@pivotal.io>
-
- 21 Dec 2017, 1 commit
-
-
Committed by Kris Macoskey
Installing packages on every execution of a test suffers from any upstream flakiness. Installation of generic packages is therefore being moved into the underlying OS, in this case the AMI used for the CCP job. Rather than outright removing the package installation, it is a much better pattern to replace it with a validation of the assumptions made about the packages installed on the underlying OS the test will run within.

The call `yum --cacheonly list installed [list of packages]` does a number of things:
1. For the given list of packages, the command returns 0 if all are installed, and 1 if any are not.
2. The `--cacheonly` flag prevents the call from issuing an upstream repository metadata refresh. This is not a requirement, but it is an easy optimization that avoids upstream flakiness even further.

Note: `--cacheonly` assumes that the repository metadata cache has been refreshed at least once; if it has not, the flag will cause the command to fail. We assume this has been done at least once on the underlying OS in order to install the packages in the first place.
Signed-off-by: Alexandra Wang <lewang@pivotal.io>
Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
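The validation pattern described above can be sketched as follows (the package names are illustrative placeholders, not the commit's actual list):

```shell
# Sketch: assert that required packages are already present on the host
# instead of installing them during the test.
check_packages() {
    # --cacheonly avoids a repository metadata refresh; the call returns
    # non-zero if any listed package is not installed.
    yum --cacheonly list installed "$@" >/dev/null 2>&1
}

if check_packages openssh-clients rsync; then
    echo "all required packages are installed"
else
    echo "validation failed (or yum is unavailable on this host)"
fi
```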
-
- 04 Nov 2017, 1 commit
-
-
Committed by Karen Huddleston
-
- 06 Oct 2017, 1 commit
-
-
Committed by Chris Hajas
Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>
-
- 22 Jul 2017, 2 commits
-
-
Committed by Jim Doty
The source code for gpdb is copied to the master node, so there is no need to run scripts in the container that can be run directly on the master node. This means scripts in the GPDB source tree no longer depend on the connection the CI system has to the cluster, only on the test running on a cluster.
Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
-
Committed by Tom Meyer
Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-