- 22 Jul 2017, 7 commits
-
-
Committed by Tom Meyer
Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Asim R P
regress.c cannot include fmgroids.h, because that header file is generated during the build. The ICW jobs in CI check out the gpdb source code and run make from within src/test/regress, which fails to find fmgroids.h. It seems we need a dedicated contrib module for gp_inject_fault. This reverts commit bd26a268.
-
Committed by Asim R P
This fixes ICW breakage caused by the "postgres" binary not being loadable as a shared library. To run the gp_fault_inject() function manually, generate regress.so by running make in src/test/regress. Thereafter, a CREATE FUNCTION command can be used to create the function, as in create_fault_function.source.
-
Committed by Asim R P
The gp_inject_fault() function is now available in pg_regress, so a contrib module is not required. The test was not being run and it trips an assertion, so it is not added to greenplum_schedule.
-
Committed by Asim R P
The function gp_inject_fault() was defined in a test-specific contrib module (src/test/dtm); now all tests can make use of it. Two pg_regress tests (dispatch and cursor) are modified to demonstrate the usage. The function can also inject a fault in any segment, specified by dbid. SQL files no longer need to invoke the gpfaultinjector python script.
-
- 21 Jul 2017, 3 commits
-
-
Committed by Jesse Zhang
Partition selection is the process of determining at runtime ("execution time") which leaf partitions we can skip scanning. Three types of Scan operators benefit from partition selection: DynamicTableScan, DynamicIndexScan, and BitmapTableScan. Currently there is a minimal amount of logging about which partitions are selected, and it is scattered between DynamicIndexScan and DynamicTableScan (so we missed BitmapTableScan). This commit moves the logging into the PartitionSelector operator itself, emitted when it exhausts its inputs. This also brings the nice side effect of more granular information: the log now attributes the partition selection to individual partition selectors.
-
Committed by Venkatesh Raghavan
Arguments to a function scan can themselves contain a subquery that creates new rtable entries. Therefore, first translate all arguments of the FunctionScan before setting its scanrelid.
-
Committed by Chris Hajas
The changes in commit bdafd0ce should not be tested against the backup43/restore5 test, since this code is not yet present in 4.3. Additionally, query the 'template1' database when verifying that roles exist.
-
- 20 Jul 2017, 7 commits
-
-
Committed by Andreas Scherbaum
-
Committed by Karen Huddleston
gpdbrestore with `-G only` no longer requires the database to be created. Previously, gpdbrestore -G required the database to already exist, or the -e flag to be passed. The `-G only` option now restores only globals, and the test is changed to reflect this.
-
Committed by Karen Huddleston
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Tom Meyer
Globals should be restored first, as createdb relies on roles being present.
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Tom Meyer
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Chris Hajas
This ping test can sometimes fail. We previously added retry logic, which significantly reduced the failures, but some still occur.
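The retry approach mentioned above can be sketched as follows. This is only an illustration of the pattern, not the actual test code; `ping_with_retry` and its parameters are hypothetical names.

```python
import time

def ping_with_retry(ping_fn, retries=3, delay=0.0):
    """Run a flaky check up to `retries` times, re-raising the last
    error only after every attempt has failed."""
    last_exc = None
    for _ in range(retries):
        try:
            return ping_fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

As the commit notes, retries shrink the failure rate but cannot eliminate it: if every attempt in the window fails, the test still fails.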
-
Committed by Chris Hajas
This is an internal utility called by gpdbrestore and should not have dedicated tests (except for testing with valgrind).
-
- 19 Jul 2017, 14 commits
-
-
Committed by Xiaoran Wang
Add a proxy option to the configuration and set the proxy on cURL; http, https, socks4, socks4a, socks5, and socks5h proxies are supported.
Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Peifeng Qiu
* Port the Postgres TAP SSL tests.
* Add configuration to enable TAP tests:
  1. Add an enable-tap-tests flag to control the tests.
  2. Add a Perl checking module.
  3. Enable TAP tests for the enterprise build by default.
* Adapt the Postgres TAP tests to gpdb:
  1. Assume a running GPDB cluster instance (gpdemo) instead of using a temp installation; remove most node init operations; disable environment variable overrides during test init.
  2. Replace node control operations with their GPDB counterparts: start -> gpstart -a, stop -> gpstop -a, restart -> gpstop -arf, reload -> gpstop -u; disable promote, add restart_qd (restart_qd -> pg_ctl -w -t 3 -D $MASTER_DATA_DIRECTORY).
  3. Add a default server key and certificate for GPDB.
  4. Update server setup to work with the running gpdemo.
  5. Disable the SSL alternative-names cases.
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
Committed by Jemish Patel
Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
-
Committed by Nadeem Ghani
If multiple segment hosts have failures during a "sync", we used to report only the first issue. Fix: accumulate all the failures before reporting to the user. Also add a unit test.
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
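The fail-fast versus accumulate distinction can be sketched like this. It is a hypothetical illustration of the pattern; `sync_hosts` and `sync_fn` are not the real gpMgmt function names.

```python
def sync_hosts(hosts, sync_fn):
    """Attempt a sync on every host, collecting every failure instead of
    stopping at the first, then report them all in one error."""
    failures = []
    for host in hosts:
        try:
            sync_fn(host)
        except Exception as exc:
            failures.append((host, str(exc)))
    if failures:
        details = "; ".join("%s: %s" % f for f in failures)
        raise RuntimeError(
            "sync failed on %d host(s): %s" % (len(failures), details))
```

With the old fail-fast behavior the user would fix one host, re-run, and only then discover the next failure; accumulating gives the full picture in a single run.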
-
Committed by Marbin Tan
Currently, gppkg --migrate migrates the gppkgs from one gphome to another, but this is not done on the segment hosts as well, so when a user tries to run a gppkg-installed SQL function, it breaks. Fix: sync the migrated packages from the master gphome to all segments.
- Add tests to cover MigratePackages
- Add more unit test coverage for gppkg
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Marbin Tan
This commit introduces BEHAVE_FLAGS as a new parameter in concourse. This lets us run specific tests within a specific scenario; it used to be all or nothing. Now we can separate multi-host testing from single-host testing. Add tests for gppkg --clean for multi-host:
* gppkg --clean should install to the segment host with no gppkg
* gppkg --clean should remove on all segment hosts when the gppkg does not exist on the master
-
Committed by Marbin Tan
Ensure that gppkg installs its RPM on all hosts -- this is just a backfill testing addition.
-
Committed by C.J. Jameson
-
Committed by Larry Hamel
-- Metadata about GUCs, like GUC_DISALLOW_IN_FILE, is not available via SQL. Work around that by parsing the C code that defines this metadata, and store the relevant information in a file to be read by gpconfig when it is called to change a GUC.
-- The Makefile in gpMgmt/bin will create a local file during 'make install', used by gpconfig to validate that a given GUC can be changed.
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
Signed-off-by: Shoaib Lari <slari@pivotal.io>
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
Signed-off-by: Marbin Tan <mtan@pivotal.io>
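The parse-the-C-source workaround could look roughly like this. It is a deliberately simplified sketch with a made-up one-line entry format; the real code parses PostgreSQL's guc.c, whose struct entries span many lines, and `find_disallowed_gucs` is a hypothetical name.

```python
import re

def find_disallowed_gucs(guc_source):
    """Scan C source that defines GUC entries and collect the names of
    GUCs whose entry carries the GUC_DISALLOW_IN_FILE flag."""
    disallowed = set()
    # Naively split on '},' so that each chunk holds one struct entry;
    # the GUC name is the first string literal in the entry.
    for entry in guc_source.split('},'):
        name = re.search(r'\{\s*"([A-Za-z0-9_]+)"', entry)
        if name and 'GUC_DISALLOW_IN_FILE' in entry:
            disallowed.add(name.group(1))
    return disallowed
```

Writing the resulting set to a file at 'make install' time, as the commit describes, means gpconfig can consult it without needing the C source at runtime.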
-
Committed by Bhuvnesh Chaudhary
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
This commit introduces a new operator for ValuesScan. Earlier we generated a `UNION ALL` for cases where the VALUES lists passed in are all constants; now a new operator, CLogicalConstTable, with an array of const tuples is generated. Once the plan is generated by ORCA, it is translated to a ValuesScan node in GPDB. This enhancement significantly improves the total run time of queries involving a values scan with const values in ORCA.
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Nadeem Ghani
gppkg --remove tried to match with the regex <package name>*.gppkg. This misses the case where one package name is a substring of another. gppkg --remove now does an equality check against <package name>.gppkg before falling back to the previous behavior, and messages the user if there is more than one match.
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
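The exact-match-first behavior can be illustrated as below. This is a hedged sketch of the matching logic, not the actual gppkg code; `find_packages_to_remove` is a hypothetical name.

```python
def find_packages_to_remove(name, installed):
    """Prefer an exact '<name>.gppkg' match; only fall back to the old
    prefix-style match when no exact match exists.  A result with more
    than one entry means the caller should ask the user to disambiguate."""
    exact = [p for p in installed if p == name + ".gppkg"]
    if exact:
        return exact
    return [p for p in installed
            if p.startswith(name) and p.endswith(".gppkg")]
```

With the old prefix-only matching, removing a package named `plr` could also sweep up `plr-extras.gppkg`; checking equality first removes that ambiguity.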
-
Committed by Xin Zhang
At `init` time, the `gp_segment_configuration` table needs to be updated with the mirror information. We use the `gp_add_segment_mirror` and `gp_remove_segment_mirror` functions to update the catalog directly. The `start`, `stop`, and `destroy` operations can now use the information present in the catalog instead of hard-coded values.
Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io>
-
Committed by Omer Arap
If gpdb and orca are built with debugging enabled, there is an assert in orca that checks whether the upper and lower bounds of a singleton bucket are both closed: `GPOS_ASSERT_IMP(FSingleton(), fLowerClosed && fUpperClosed);`. The histogram stored in pg_statistics may contain repeated bound values, such as `10, 20, 20, 30, 40`, which leads to buckets of this form: `[0,10), [10,20), [20,20), [20,30), [30,40]`. This causes the assert to fail, since [20,20) is a singleton bucket but its upper bound is open. With this fix, the generated buckets look like: `[0,10], [10,20), [20,20], (20,30), [30,40]`.
Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
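The invariant the assert enforces (a singleton bucket must be closed on both sides) can be sketched as below. This is an illustration only: the exact open/closed choices of the real translator may differ from this sketch, but the singleton rule is the one `GPOS_ASSERT_IMP` checks.

```python
def make_buckets(bounds):
    """Turn an ordered list of histogram bound values into buckets,
    closing both sides of every singleton bucket and opening the lower
    bound of the bucket that follows it, so the ranges stay disjoint.
    Each bucket is (lo, hi, lower_closed, upper_closed)."""
    buckets = []
    prev_singleton = False
    last = len(bounds) - 2          # index of the final bucket
    for i, (lo, hi) in enumerate(zip(bounds, bounds[1:])):
        if lo == hi:
            # Singleton bucket: must be [lo, lo], closed on both sides.
            buckets.append((lo, hi, True, True))
            prev_singleton = True
        else:
            lower_closed = not prev_singleton   # '(' right after a singleton
            upper_closed = (i == last)          # only the last bucket is ']'
            buckets.append((lo, hi, lower_closed, upper_closed))
            prev_singleton = False
    return buckets
```

Run on the bounds `0, 10, 20, 20, 30, 40` from the commit message, the repeated value 20 becomes the closed singleton `[20,20]` and the following bucket opens its lower bound as `(20,30)`, satisfying the assert.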
-
- 18 Jul 2017, 7 commits
-
-
Committed by Michael Roth
* Initial commit
* Updated instructions: added a note about building missing packages
-
Committed by Andreas Scherbaum
* Update the VACUUM documentation:
* VACUUM does not remove rows, it only marks the space for reuse
* No daily VACUUM is required; the need depends on the frequency of changes
* VACUUM FULL does not (yet) recreate the table, it rewrites it
-
Committed by Ming LI
If two external tables refer to the same PIPE file directly, using the gpfdist or file protocol, a concurrent read results in the wrong data format, or a hang for gpfdist. Now, before reading the pipe, we first flock the pipe file (Windows is not yet supported); other requests from gpdb will report an error.
Signed-off-by: Ming LI <mli@apache.org>
Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
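The locking idea can be sketched with Python's `fcntl.flock`. The actual fix lives in the C gpfdist/file-protocol code; this is only an illustration of taking an exclusive, non-blocking lock so a second concurrent reader errors out instead of corrupting the stream.

```python
import fcntl
import os

def read_locked(path, nbytes=4096):
    """Take an exclusive, non-blocking flock on the file before reading.
    If another process already holds the lock, flock raises
    BlockingIOError, so this reader can report an error rather than
    interleave reads with the lock holder."""
    fd = os.open(path, os.O_RDONLY)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return os.read(fd, nbytes)
    finally:
        os.close(fd)    # closing the fd also releases the lock
```

`flock` is a Unix facility, which matches the commit's note that Windows is not yet supported.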
-
Committed by Abhijit Subramanya
Exclude WAL sender process backends (whose application name is `walreceiver`) from being counted as leftover backends.
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Xin Zhang
Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io>
-
Committed by Abhijit Subramanya
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Abhijit Subramanya
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
- 16 Jul 2017, 1 commit
-
-
Committed by Roman Shaposhnik
-
- 15 Jul 2017, 1 commit
-
-
Committed by Heikki Linnakangas
* Remove PartOidExpr; it's not used in GPDB. The target lists of the DML nodes that ORCA generates include a column for the target partition OID, which can then be referenced by PartOidExprs. ORCA uses these to allow sorting the tuples by partition before inserting them into the underlying table. That feature is used by HAWQ, where grouping tuples that go to the same output partition is cheaper. Since commit adfad608, which removed the gp_parquet_insert_sort GUC, we don't do that in GPDB, however. GPDB can hold multiple result relations open at the same time, so there is no performance benefit to grouping the tuples first (or at least not enough benefit to counterbalance the cost of a sort). So remove the now-unused support for PartOidExpr in the executor.
* Bump ORCA version to 2.37
* Removed acceptedLeaf
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-