- 06 Jun 2017, 2 commits
-
-
Committed by Jimmy Yih
We need some tests to see if binaries can be swapped around. This commit provides a simple framework and basic tests to check if binaries can be upgraded/downgraded by just swapping the Greenplum binary install paths. Running the tests requires the user to have another Greenplum build compiled, which is provided as an argument to the test script. It also requires the regression database to be built by running the tests in src/test/regress.
-
Committed by Shoaib Lari
Define an xlog format for AO operations. The xlog is generated when an AO block is written. A test is added to receive the AO log from the WAL Sender process after an INSERT operation and verify that the received xlog has the AO xlog record in it. Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io>
-
- 05 Jun 2017, 2 commits
-
-
Committed by Daniel Gustafsson
At some point in the past it seems the keyword list was merged from upstream 8.4 without merging the actual code, and keywords which weren't yet supported were instead commented out. The XML whitespace keywords for STRIP WHITESPACE and PRESERVE WHITESPACE were however not uncommented when XML support was merged, resulting in the below error when trying to restore the XML views in ICW after a dump: db=# SELECT XMLPARSE(CONTENT '<abc>x</abc>'::text PRESERVE WHITESPACE) AS "xmlparse"; ERROR: syntax error at or near "WHITESPACE" LINE 1: ...CT XMLPARSE(CONTENT '<abc>x</abc>'::text PRESERVE WHITESPACE... ^ Put the keywords back, and also remove commented-out keywords which we don't support but will get when merging 8.4, to reduce the diff with upstream. Also add a test case for XML whitespace syntax parsing.
-
Committed by Richard Guo
Record memory usage for resource groups. 1. Update the total memory usage for a resource group when a session belonging to this group allocates/frees memory. 2. Update the total memory usage for the related resource groups when a session enters or leaves a resource group. 3. Dispatch the current resource group ID from QD to QEs to keep track of the current resource group. 4. Show the total memory usage of a resource group. 5. Add a test case for memory usage recording of resource groups. Signed-off-by: xiong-gang <gxiong@pivotal.io> Signed-off-by: Kenan Yao <kyao@pivotal.io> Signed-off-by: Ning Yu <nyu@pivotal.io>
-
- 02 Jun 2017, 2 commits
-
-
Committed by Xin Zhang
In addition to renaming the files, a new test is added to cover the scenario where readers need to use the subtransaction xid cache in the writer's PGPROC. NOTE: Please use `git show -M10` to see the rename instead of a separate remove and add. Signed-off-by: Asim R P <apraveen@pivotal.io>
-
Committed by Xin Zhang
Now that subtransaction information is no longer maintained in the shared snapshot between readers and a writer, the tests are obsolete. Faults used by the tests are removed as well. This commit contains a gpdb_master pipeline change: the "subtransaction" job that used to run the tests removed by this commit is dropped. Signed-off-by: Asim R P <apraveen@pivotal.io>
-
- 01 Jun 2017, 4 commits
-
-
Committed by Bhuvnesh Chaudhary
During parallelization of nodes in cdbparallelize, if there are any SubPlan nodes in the plan which refer to the same plan_id, the parallelization step breaks, because it must process each node only once. This patch fixes the issue by generating a new subplan node in the glob subplans and updating the plan_id of the SubPlan to refer to the newly created node.
-
Committed by Bhuvnesh Chaudhary
The condition containing subplans would be duplicated as the partition selection key in the PartitionSelector node. It is not OK to duplicate the expression if it contains SubPlans, because the code that adds motion nodes to a subplan gets confused if there are multiple SubPlans referring to the same subplan ID.
-
Committed by Bhuvnesh Chaudhary
Skip duplicating subplan clauses, as multiple SubPlan nodes referring to the same plan_id break cdbparallelize. cdbparallelize expects to apply motion to a node only once, so if two SubPlan nodes refer to the same plan_id, the assertion fails. Signed-off-by: Jemish Patel <jpatel@pivotal.io>
-
Committed by Venkatesh Raghavan
-
- 31 May 2017, 3 commits
-
-
Committed by Ekta Khanna
A few minor updates to error messages and comments. Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
-
Committed by Dhanashree Kashid
This commit updates the `rangefuncs_cdb` tests to use `plpgsql` instead of `plperlu`, since `plperlu` is not supported on SLES. We do not need a separate answer file for ORCA because the results produced by the planner and ORCA are the same; hence this commit also removes the `rangefuncs_cdb_optimizer.out` file. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Karen Huddleston
* Move the ddboost cleanup step to be last, to ensure all files in the directory are removed from the Data Domain server
-
- 27 May 2017, 1 commit
-
-
Committed by Andreas Scherbaum
Provide an alternative answer file for the gp_metadata test when GPORCA is not compiled in.
-
- 26 May 2017, 1 commit
-
-
Committed by Ashwin Agrawal
Also, add a retry to check that the walsender is gone after walrcv_disconnect(), as it can take some time to detect the connection drop.
-
- 25 May 2017, 7 commits
-
-
Committed by Jimmy Yih
For the subselect_gp test, we were removing the distribution policy of a table to see whether it would do a gather motion or not. Since it's technically a corrupted table, we should delete it after we're done with it. We also remove a quicklz reference that should not have been there. The gppc tests were using the regression database. This made our gpcheckcat call at the end of ICW relatively useless, since all our data would have been deleted by the gppc tests recreating the regression database. For the gpload test, some generated files were previously committed. We should be actively cautious of this and remove them when we see them.
-
Committed by Venkatesh Raghavan
-
Committed by Venkatesh Raghavan
-
Committed by Venkatesh Raghavan
-
Committed by Daniel Gustafsson
PL/Perl is an optional component, and the main ICW should not use it as it may not be present. Move the tests that seem useful to the plperl test suite instead, and remove the ones for which we have ample coverage elsewhere.
-
Committed by Daniel Gustafsson
The gpmapreduce application is an optional install included via the --enable-mapreduce configure option. The tests were however still in src/test/regress and unconditionally included in the ICW schedule, thus causing test failures when mapreduce wasn't configured. Move all gpmapreduce tests to co-locate them with the mapreduce code and only test when configured. Also, add a dependency on Perl for gpmapreduce in autoconf, since it's a required component.
-
Committed by Bhuvnesh Chaudhary
- Before building the Index object (IMDIndex), we build LogicalIndexes by calling `gpdb::Plgidx(oidRel)`, in which a partitioned table is traversed and index information (such as logicalIndexOid, nColumns, indexKeys, indPred, indExprs, indIsUnique, partCons, defaultLevels) is captured. - For indexes which are available on all the partitions, partCons and defaultLevels are NULL/empty. - Later, in `CTranslatorRelcacheToDXL::PmdindexPartTable`, to build the Index object we use the derived LogicalIndexes information and populate the array holding the levels on which default partitions exist. But since defaultLevels is NIL in this case, pdrgpulDefaultLevels is set to empty, i.e. `default partitions on levels: {}`. - This causes an issue while trying to build the propagation expression: because of the wrong number of default partitions per level, we mark the scan as partial and try to construct a test propagation expression instead of a const propagation expression. - This patch fixes the issue by marking the default partitions on levels for the index equal to the default partitions on levels for the part relation, if the index exists on all the parts. Signed-off-by: Jemish Patel <jpatel@pivotal.io>
-
- 23 May 2017, 1 commit
-
-
Committed by Adam Lee
`outputdir` here is where to place the converted files, not a directory named `output`. PostgreSQL correctly places them into `outputdir`. commit 64cdbbc4 Author: Peter Eisentraut <peter_e@gmx.net> Date: Sat Feb 14 21:33:41 2015 -0500 pg_regress: Write processed input/*.source into output dir Before, it was writing the processed files into the input directory, which is incorrect in a vpath build.
-
- 22 May 2017, 2 commits
-
-
Committed by Adam Lee
The ON MASTER feature is not yet fully supported by built-in protocols other than s3, so disable it with those protocols for now.
-
Committed by Yuan Zhao
1. Add --with-gssapi to the sles configurations in gpAux/Makefile to enable the Kerberos build. 2. Add the Kerberos sbin path to PATH for sles. 3. Disable the psql pager to avoid a Concourse hang. Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
- 20 May 2017, 1 commit
-
-
Committed by Ashwin Agrawal
Just a start at having WAL replication tests in ICW. This adds simple protocol functions that mock the walreceiver side to help validate the walsender and the xlog stream. The main point is to show that something along these lines can easily be leveraged to validate, for example, xlog generation and streaming for AO tables when that is done, without having to fully instantiate a mirror or similar.
-
- 19 May 2017, 5 commits
-
-
Committed by Ning Yu
The resource_group cases were already moved to the standalone resgroup target; however, one entry was still left in the isolation2 schedule, which caused a failure in the pipeline. This commit is a cleanup to make the pipeline green.
-
Committed by Ning Yu
Since cgroup is now required to enable resgroup on Linux, and cgroup itself requires privileged permissions to set up and configure, the resgroup tests will fail, or at least produce extra warnings, in the ICW pipeline. We moved them to the installcheck-resgroup target, as there is a standalone privileged pipeline to run this target. The tests are also updated, as the psql output format differs between ICW and installcheck-resgroup.
-
Committed by Pengzhou Tang
Resource group cpu rate limitation is implemented with cgroup on Linux systems. When resource groups are enabled via GUC, we check whether cgroup is available and properly configured on the system. A sub cgroup is created for each resource group; its cpu quota and share weight are set depending on the resource group configuration. Queries run under these cgroups, and their cpu usage is restricted by cgroup. The cgroup directory structure: * /sys/fs/cgroup/{cpu,cpuacct}/gpdb: the toplevel gpdb cgroup * /sys/fs/cgroup/{cpu,cpuacct}/gpdb/*/: a cgroup for each resource group. The logic for cpu rate limitation: * in the toplevel gpdb cgroup we set the cpu quota and share weight as: cpu.cfs_quota_us := cpu.cfs_period_us * 256 * gp_resource_group_cpu_limit; cpu.shares := 1024 * ncores * for each sub group we set the cpu quota and share weight as: sub.cpu.cfs_quota_us := -1; sub.cpu.shares := top.cpu.shares * sub.cpu_rate_limit. The minimum and maximum cpu percentages for a sub cgroup: sub.cpu.min_percentage := gp_resource_group_cpu_limit * sub.cpu_rate_limit; sub.cpu.max_percentage := gp_resource_group_cpu_limit. The actual percentage depends on how busy the system is. gp_resource_group_cpu_limit is a GUC introduced to control the cpu assigned to resource groups on each host: gpconfig -c gp_resource_group_cpu_limit -v '0.9'. A new pipeline is created to perform the tests, as we need privileged permissions to enable and set up cgroups on the system. Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Venkatesh Raghavan
In the updated tests, we used functions like disable_xform and enable_xform to hint the optimizer to disallow/allow a particular physical node. However, these functions are only available when GPDB is built with GPORCA. The planner, on the other hand, accomplishes this via a GUC. To avoid the use of these functions in tests, I have introduced a couple of GUCs that mimic the same planner behavior, but now for GPORCA. As part of this effort I needed to add an API inside GPORCA.
-
Committed by Ashwin Agrawal
-
- 17 May 2017, 1 commit
-
-
Committed by Heikki Linnakangas
Fixes github issue #1774.
-
- 16 May 2017, 1 commit
-
-
Committed by Daniel Gustafsson
In the past there was functionality to decorate a stack trace with the developer in charge of the individual functions. This is not in use anymore, and it's not something we want either.
-
- 15 May 2017, 2 commits
-
-
Committed by Venkatesh Raghavan
* Enable analyzing root partitions * Ensure that the name of the guc is clear * Remove double negation (where possible) * Update comments * Co-locate gucs that have similar purpose * Remove dead gucs * Classify them correctly so that they are no longer hidden
-
Committed by Pengzhou Tang
Formerly, GPDB assumed Gp_interconnect_queue_depth was constant during the interconnect's lifetime, which was incorrect for cursors: if Gp_interconnect_queue_depth was changed after a cursor was declared, a panic occurred. To avoid this, we make a copy of Gp_interconnect_queue_depth when the interconnect is set up. Gp_interconnect_snd_queue_depth has no such problem because it is only used by senders, and the senders of a cursor will never receive the GUC change command.
-
- 13 May 2017, 2 commits
-
-
cache.
-
Committed by Shreedhar Hardikar
Earlier, a `workfile_set` was cleaned up only at the top transaction level. With a lot of sub-transactions, we found that we couldn't create more `workfile_set`s in shared memory. So we now use the resource owner to clean up the `workfile_set` at the respective transaction level itself. Thanks to Robert Mu <dbx_c@hotmail.com> for coming up with the initial fix through PR #2325. This resolves #1767. Signed-off-by: Karthikeyan Jambu Rajaraman <karthi.jrk@gmail.com>
-
- 12 May 2017, 3 commits
-
-
This reverts commit a5e26310.
-
This reverts commit 8c62e892.
-
Committed by Tom Lane
Make get_stack_depth_rlimit() handle RLIM_INFINITY more sanely. Rather than considering this result as meaning "unknown", report LONG_MAX. This won't change what superusers can set max_stack_depth to, but it will cause InitializeGUCOptions() to set the built-in default to 2MB not 100kB. The latter seems like a fairly unreasonable interpretation of "infinity". Per my investigation of odd buildfarm results as well as an old complaint from Heikki. Since this should persuade all the buildfarm animals to use a reasonable stack depth setting during "make check", revert previous patch that dumbed down a recursive regression test to only 5 levels.
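The change above can be sketched in miniature. This is an illustrative model in Python, not the actual C code (the real function uses getrlimit() and the RLIM_INFINITY macro from <sys/resource.h>): an infinite rlimit is now reported as LONG_MAX rather than as "unknown".

```python
# Illustrative sketch: treat an infinite stack rlimit as a very large
# number (LONG_MAX) instead of "unknown".
RLIM_INFINITY = -1       # sentinel for "no limit", as in <sys/resource.h>
LONG_MAX = 2**63 - 1     # LONG_MAX on a 64-bit platform

def get_stack_depth_rlimit(soft_limit):
    # Before the fix, RLIM_INFINITY meant "unknown", and the built-in
    # default for max_stack_depth fell back to a conservative 100kB.
    # After the fix, infinity yields LONG_MAX, allowing a 2MB default.
    if soft_limit == RLIM_INFINITY:
        return LONG_MAX
    return soft_limit

print(get_stack_depth_rlimit(RLIM_INFINITY))     # LONG_MAX, not "unknown"
print(get_stack_depth_rlimit(8 * 1024 * 1024))   # a finite limit passes through
```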
-