- 10 Nov 2017, 7 commits
-
-
Committed by Lisa Owen
-
Committed by Lisa Owen
* docs - RG/RQ-qualify gpperfmon, other content where appropriate
* edits from David
-
Committed by Ashwin Agrawal
In all, this kills 17 combinations running gpstop -a. These tests already have coverage for gpstop -i. In the gpstop -a tests, the injected fault is resumed, the command completes, and then gpstop -a is performed, which writes a checkpoint and shuts down cleanly. So there is really nothing to test: recovery will be normal (i.e., it does nothing), and we do not need so many combinations exercising this behavior.
-
Committed by Ashwin Agrawal
These tests already have coverage for gpstop -i. In the gpstop -a tests, the injected fault is resumed, the command completes, and then gpstop -a is performed, which writes a checkpoint and shuts down cleanly. So there is really nothing to test: recovery will be normal (i.e., it does nothing), and we do not need so many combinations exercising this behavior.
-
Committed by Ashwin Agrawal
test_switch_13_24.py used the fault injector `dtm_xlog_distributed_commit`, while test_switch_01_12.py has tests for `dtm_broadcast_commit_prepared`. In the code, `dtm_xlog_distributed_commit` is triggered after the commit record is written in `RecordTransactionCommit()`, and `dtm_broadcast_commit_prepared` is triggered just before the commit prepared is broadcast to segments in `doNotifyingCommitPrepared()`, which is called right after `RecordTransactionCommit()`. There is no 2PC state change between these two fault injection points, making the tests at these two points redundant. Hence, reduce one more box on CI by moving some combinations into test_switch.py and deleting test_switch_13_24.py.
-
Committed by Lisa Owen
* docs - add gp_toolkit discussion of resgroup views
* remove proposed_cpu_rate_limit
-
Committed by Shoaib Lari
For AO tables, users do not always want to run ANALYZE on a table when the analyzedb command is run, for example, when they have already ANALYZEd the table. The --gen_profile_only option saves the modification count of the specified AO table (or all AO tables if none is specified) so that a subsequent analyzedb command will not ANALYZE the AO table if its modification count has not changed from the saved value.
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: Shoaib Lari <slari@pivotal.io>
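The save-then-skip logic described above can be sketched as a toy shell model. This is only an illustration of the idea, not analyzedb's real bookkeeping: the state file, its format, and the modcount values here are all made up.

```shell
# Toy model of --gen_profile_only: record the current modification count,
# then skip ANALYZE on later runs while that count is unchanged.
STATE=$(mktemp)
current_modcount=42

save_profile() { echo "$current_modcount" > "$STATE"; }    # what --gen_profile_only records

needs_analyze() {
  # ANALYZE is needed if no profile was saved, or the table changed since.
  [ ! -s "$STATE" ] || [ "$(cat "$STATE")" != "$current_modcount" ]
}

save_profile
if needs_analyze; then first=analyze; else first=skip; fi

current_modcount=43      # simulate new DML against the AO table
if needs_analyze; then second=analyze; else second=skip; fi

echo "first=$first second=$second"
rm -f "$STATE"
```

With the profile freshly saved, the first run skips; after the simulated modification, the second run analyzes again.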
-
- 09 Nov 2017, 22 commits
-
-
Committed by Daniel Gustafsson
The releng NON_PRODUCTION_FILES and QAUTILS_FILES were referencing quite a few apps that were removed a long time ago (some of them in the 3.x cycle). Also, the Perl module split in 5.x had made `explain` in this list not work, since it lacked the corresponding module.
-
Committed by Daniel Gustafsson
Instead of having to remember to manually update the gppylib JSON file (which has frequently been forgotten), hook its generation into the src/backend/catalog "all" target so that it is generated automatically when needed and thus can be removed from the repo (removing the risk of using a stale file). Also update the documentation and make some minor comment fixes to the process_foreign_key script.
-
Committed by Daniel Gustafsson
Remove the trailing whitespace that catullus.pl appends to the comments on the DATA rows for no good reason. Also regenerate the pg_proc_gp.h file without the whitespace.
-
Committed by Pengzhou Tang
* Do UnassignResGroup() within prepareTransaction() too. prepareTransaction() puts the QE out of any transaction temporarily until the second commit command arrives, so any failure in this gap will leak resource group resources, including slots.
* Clean up the code: move UnassignResGroup() into AtEOXact_ResGroup() so that resource-group-related code does not spread across the prepare/commit/abort functions.
* Do not call callback functions in PrepareTransaction() because the transaction is not truly committed.
-
Committed by Pengzhou Tang
-
Committed by Adam Lee
-
Committed by Lisa Owen
* docs - PXF supports RPM install of clients
* edits per review comments from Alex
-
Committed by Lisa Owen
* docs - add PXF memory and thread config content
* edits to intro paragraph from Shivram
* edits from Alex re: Tomcat queueing
-
Committed by Lav Jain
-
Committed by Todd Sedano
-
Committed by Karen Huddleston
-
Committed by Melanie Plageman
In situations in which our available memory is much larger than the memory in our sort context, it was previously possible to overflow the maxNumEntries variable.
Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by David Yozie
-
Committed by Mel Kiyama
* docs: gpdbrestore - add information for the --noplan option
* docs: add gpdbrestore --noplan option information to the example text
-
Committed by Asim R P
Unfortunately we can't remove the code referenced by the FIXME yet, but we've pulled the existing context into the comment and moved it to GitHub for tracking. [ci skip]
Signed-off-by: Jacob Champion <pchampion@pivotal.io>
Signed-off-by: Asim R P <apraveen@pivotal.io>
-
Committed by Ekta Khanna
This commit achieves the same behaviour as before, ensuring backward compatibility for Python. Revert the copyfile changes from commit 640fd9d5 for regression.diffs and regression.out, as they are used for ICG regression diffs in the ORCA CI pipeline.
Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
-
Committed by Alexander Denissov
-
Committed by sambitesh
-
Committed by Xin Zhang
Signed-off-by: JiangTian Nie <jiangtian.nie@gmail.com>
-
Committed by Taylor Vesely
Running ALTER TABLE PARTITION SPLIT on range subpartitions results in both new partitions incorrectly having the same partition order value (parruleord in pg_partition_rule).

ALTER TABLE PARTITION SPLIT is accomplished by running multiple DDLs in sequence:
1. CREATE TEMP TABLE to match the data type/orientation of the partition we are splitting.
2. ALTER TABLE PARTITION EXCHANGE the partition with the new temporary table. The temporary table now contains the partition data, and the partition table is now empty.
3. ALTER TABLE DROP PARTITION on the exchanged partition (the new empty table):
   3a. Drop the partitioning rule on the empty partition.
   3b. DROP TABLE on the empty partition.

At this point (in the old behavior) we remove the partition rule from the in-memory copy of the partition metadata. We need to remove it from the context here, or ADD PARTITION will believe that a partition for the split range already exists and will fail to create a new partition.

Now, create two new partitions in place of the old one. For each partition:
4a. CREATE TABLE for the new range.
4b. ADD PARTITION - search for a hole in the partition order in which to place the partition, opening up a hole in parruleord if needed.

When adding a subpartition, ADD PARTITION relies on the partition rules passed to it to find any holes in the partition range. Previously, the metadata was not refreshed when adding the second partition, and this resulted in the ADD PARTITION command creating both tables with the same partition rule order (parruleord). This commit resolves the issue by refreshing the partition metadata (PgPartRule) passed to the CREATE TABLE/ADD PARTITION commands on each iteration.
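The hole search in step 4b can be illustrated with a small stand-alone sketch. This is a toy model only: the variable names are ours, and the real logic lives in the server's C partitioning code, not in shell.

```shell
# Toy model of the ADD PARTITION hole search: given the parruleord values
# already taken by sibling partition rules (sorted), find the first free slot.
taken="1 2 4 5"        # parruleord values already present
candidate=1
for o in $taken; do
  if [ "$o" -ne "$candidate" ]; then
    break              # a hole exists before this rule's slot
  fi
  candidate=$((candidate + 1))
done
echo "next parruleord: $candidate"
```

The point of the fix above is that this search is only correct if the list of taken slots is refreshed before each iteration; a stale list makes two new partitions land on the same slot.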
-
Committed by Lisa Owen
* docs - qualify some resource-queue-specific content (part 1)
* explicitly state that resource groups do not use gp_vmem_protect_limit
* RQ/RG-qualify some GUCs and system tables/views
* qualify gp_toolkit RQ content
* add RG segment memory calculation
* clarify that resgroup per-segment memory is based on active primary segments on the host
* remove max_resource_groups GUC again
-
Committed by Adam Lee
* Several small fixes to the tests:
  1. Ignore two generated test files.
  2. Remove the string containing unpredictable segment numbers.
  3. Drop tables in the external_table case, so we can run it multiple times in a row.
* Fix cases which became unpredictable after:

> commit 3bbedbe9
> Author: Heikki Linnakangas <hlinnakangas@pivotal.io>
> Date: Thu Nov 2 10:04:58 2017 +0200
>
> Wake up faster, if a segment returns an error.
>
> Previously, if a segment reported an error after starting up the
> interconnect, it would take up to 250 ms for the main thread in the QD
> process to wake up and poll the dispatcher connections, and to see that
> there was an error. Shorten that time, by waking up immediately if the
> QD->QE libpq socket becomes readable while we're waiting for data to
> arrive in a Motion node.
>
> This isn't a complete solution, because this will only wake up if one
> arbitrarily chosen connection becomes readable, and we still rely on
> polling for the others. But this greatly speeds up many common scenarios.
> In particular, the "qp_functions_in_select" test now runs in under 5 s
> on my laptop, when it took about 60 seconds before.

Before that commit, the master would only check every 250 ms whether one of the segments had reported an error. Now it wakes up and cancels the whole query as soon as it receives an error from the first segment. That makes it more likely that the other segments have not yet reached the same number of errors as what is memorized in the expected output.

These two cases check:
1. When selecting from a CTE fails because one of the external tables of the CTE reached the error limit, how many errors happened in the other external table of the CTE, which would not have reached the limit.
2. When selecting from an external table with two locations mapped to two segments each and one segment reached the reject limit, whether the other also reached the same.

We could not predict these two results without special test files, even without that commit, actually. This commit removes the CTE case and checks that at least one segment failed in the readable_query26 case.
-
- 08 Nov 2017, 11 commits
-
-
Committed by Heikki Linnakangas
This small batch of commits contains the changes to support hashing for SELECT DISTINCT queries, and UNION/INTERSECT/EXCEPT.
-
Committed by Heikki Linnakangas
add_slice_to_motion doesn't use the sortPathKeys argument for anything, so let's remove it.
-
Committed by Pengzhou Tang
Previously, to speed up dispatching, cdbdisp_dispatchToGang_async and cdbdisp_waitDispatchFinish_async were designed to use a nonblocking flush to dispatch commands in bulk. However, there is a risk that some commands are not fully dispatched in corner error cases, so the QD must do a forced flush before handling such connections; otherwise the QD will get stuck.
-
Committed by Pengzhou Tang
* Do UnassignResGroup() within prepareTransaction() too. prepareTransaction() puts the QE out of any transaction temporarily until the second commit command arrives, so any failure in this gap will leak resource group resources, including slots.
* Clean up the code: move UnassignResGroup() into AtEOXact_ResGroup() so that resource-group-related code does not spread across the prepare/commit/abort functions.
* Do not call callback functions in PrepareTransaction() because the transaction is not truly committed.
-
Committed by Heikki Linnakangas
This test was simply large, and also took quite a long time to run, so it's nice to split it up. Furthermore, only some of the tests produce different output with ORCA. Split the test so that the tests that use EXPLAIN, or produce different output with ORCA for some other reason, go to a new 'bfv_partition_plans' test, and the rest remain in 'bfv_partition'.
-
Committed by Ning Yu
In the resgroup concourse pipeline we use a bash here-doc to execute commands on a remote server, but once an ssh command is executed the remaining commands are all ignored. This is because the rest of the here-doc is hijacked, i.e. consumed, by the inner ssh. Fixed by redirecting the inner ssh's stdin from /dev/null.
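The failure mode is easy to reproduce with `cat` standing in for ssh, since both read their stdin. A minimal sketch of the bug and the fix (the loop and commands here are invented for illustration, not the actual pipeline script):

```shell
# An inner command that reads stdin (like ssh) swallows the rest of the here-doc.
run_remote() { cat > /dev/null; }    # stand-in for ssh: consumes all of stdin

buggy=0
while read -r cmd; do
  run_remote                         # eats the remaining here-doc lines
  buggy=$((buggy + 1))
done <<EOF
cmd1
cmd2
cmd3
EOF

fixed=0
while read -r cmd; do
  run_remote < /dev/null             # the fix: detach the inner command's stdin
  fixed=$((fixed + 1))
done <<EOF
cmd1
cmd2
cmd3
EOF

echo "iterations without fix: $buggy, with fix: $fixed"
```

The buggy loop runs only once because the inner command drains the here-doc; with stdin redirected from /dev/null it runs all three iterations. For real ssh, `ssh -n` has the same effect as the redirection.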
-
Committed by Ning Yu
The installcheck-resgroup (ICR) exit code was being replaced with the diff watcher's exit code. Now it is passed to the shell correctly.
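The general shape of this bug can be shown with a generic bash sketch (the actual CI script is not shown in the commit message; `run_icr` and the watcher below are stand-ins):

```shell
# When a test's output is piped into a watcher, plain $? reports the
# watcher's status and the test's own failure is lost.
run_icr() { echo "diff found"; return 3; }   # stand-in for installcheck-resgroup

run_icr | grep -q "diff"
lost=$?                      # 0: grep succeeded, so the ICR failure is gone

run_icr | grep -q "diff"
kept=${PIPESTATUS[0]}        # 3: the ICR exit code, preserved via bash's PIPESTATUS

echo "lost=$lost kept=$kept"
```

`PIPESTATUS` is bash-specific; `set -o pipefail` is another common way to keep a pipeline from masking an upstream failure.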
-
Committed by Xin Zhang
Signed-off-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Ashwin Agrawal
Moved a lot of the filerep code to ftsprobefilerep.c, so ftsprobe.c now becomes specific to walrep. Also, along the way, removed the calls to `probePollOut()` and `probePollIn()` from `probeSegmentHelper()` as they are not needed.