- 11 Nov, 2017: 3 commits
-
Committed by David Yozie
* removing docs for pgadmin (not included in build)
* remove pgadmin graphics
-
Committed by Heikki Linnakangas
The error message was changed accidentally in commit 78b0a42e. Probably a copy-paste mistake. Change it back.
-
Committed by David Yozie
* Moving module references to 'Additional Supplied Modules' section to better match PostgreSQL docs organization
* port postgresql passwordcheck docs
-
- 10 Nov, 2017: 15 commits
-
Committed by Ning Yu
These tests are not stable enough, so remove them for now. We will add them back after improvements.
-
Committed by Daniel Gustafsson
This was in part already supported, as the backend part of the commit below was already backported. This brings in the frontend changes, as well as the follow-up commit to remove backend support for SSLv3, since it no longer makes any sense to keep it around.

commit 4dddf8552801ef013c40b22915928559a6fb22a0
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Thu May 21 20:41:55 2015 -0400

    Back-patch libpq support for TLS versions beyond v1.

    Since 7.3.2, libpq has been coded in such a way that the only SSL protocol it would allow was TLS v1. That approach is looking increasingly obsolete. In commit 820f08ca we fixed it to allow TLS >= v1, but did not back-patch the change at the time, partly out of caution and partly because the question was confused by a contemporary server-side change to reject the now-obsolete SSL protocol v3.

    9.4 has now been out long enough that it seems safe to assume the change is OK; hence, back-patch into 9.0-9.3. (I also chose to back-patch some relevant comments added by commit 326e1d73, but did *not* change the server behavior; hence, pre-9.4 servers will continue to allow SSL v3, even though no remotely modern client will request it.)

    Per gripe from Jan Bilek.

commit 326e1d73
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Fri Jan 31 17:51:07 2014 -0500

    Disallow use of SSL v3 protocol in the server as well as in libpq.

    Commit 820f08ca claimed to make the server and libpq handle SSL protocol versions identically, but actually the server was still accepting SSL v3 protocol while libpq wasn't. Per discussion, SSL v3 is obsolete, and there's no good reason to continue to accept it. So make the code really equivalent on both sides. The behavior now is that we use the highest mutually-supported TLS protocol version.

    Marko Kreen, some comment-smithing by me
-
Committed by Ning Yu
* resgroup: correct the memory quota size on QE. A QE might have different caps from the QD, so the runtime status must also be considered when deciding the per-slot memory quota.
* resgroup: retire sessionId in slot.
* resgroup: also update memQuotaUsed on QEs. This value is needed by ALTER RESOURCE GROUP to decide the new memory capabilities, but it was not updated on QEs in the past, so the memory quota might be over-released on QEs. This does not lead to runtime errors, and the memory quota can be regained later, but performance can be affected in the worst case. To fix it, we now update this value on QEs as well.
* resgroup: reduce the duplicated code in groupApplyMemCaps(). Move the duplicated logic into mempoolAutoRelease().
* resgroup: validate that proc is in the right resgroup wait queue. We used to check the proc's queuing status; now we also validate that it is in the specific resgroup's wait queue. This check is expensive, so it is only enabled in debug builds.
* resgroup: retire MyProc->resWaiting. resWaiting is only true when the proc is in the wait queue and only false when it is not, so it is a redundant flag, and the proc's queuing status can be checked directly. Its helper functions procIsWaiting() and procIsInWaitQueue() are also merged into one.
* and other misc changes.
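As a rough sketch of the per-slot quota arithmetic involved here (the function name and the shared/non-shared split are hypothetical simplifications, not the actual resgroup code):

```python
def per_slot_mem_quota(group_mem_limit_mb, mem_shared_quota_pct, concurrency):
    """Divide the group's non-shared memory evenly among concurrent slots.

    Because a QE may compute different caps than the QD, each process
    must derive this value from its own runtime status, which is the
    essence of the fix described above.
    """
    non_shared_mb = group_mem_limit_mb * (100 - mem_shared_quota_pct) // 100
    return non_shared_mb // concurrency

# 1000 MB group limit, 20% shared, 4 concurrent slots -> 200 MB per slot
print(per_slot_mem_quota(1000, 20, 4))  # 200
```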
-
Committed by Richard Guo
* Set/Unset doMemCheck in Attach/Detach slot.
* Use IsResGroupEnabled() in ResGroupReleaseMemory().
* Remove AssertImply in Attach/Detach slot.
* Replace magic number 0 with a macro representing the cgroup top dir.
* Remove static global variable cpucores.
* Dump more info in ResGroupDumpMemoryInfo.
-
Committed by C.J. Jameson
-
Committed by Lisa Owen
* docs - add pxf upgrade procedure
* docs - add content for pxf upgrade procedure
* edits requested by david
* edits from alex, kong
-
Committed by Amil Khanzada
Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by Lisa Owen
-
Committed by Lisa Owen
* docs - RG/RQ-qualify gpperfmon, other content where appropriate
* edits from david
-
Committed by Ashwin Agrawal
In all, this removes 17 combinations running gpstop -a. These tests already have coverage for gpstop -i. In the gpstop -a tests, the injected fault is resumed, the command completes, and gpstop -a is performed, which writes a checkpoint and performs a clean shutdown. So there is really nothing to test, since recovery will be normal (i.e., it does not have to do anything); we do not need so many combinations testing this behavior.
-
Committed by Ashwin Agrawal
These tests already have coverage for gpstop -i. In the gpstop -a tests, the injected fault is resumed, the command completes, and gpstop -a is performed, which writes a checkpoint and performs a clean shutdown. So there is really nothing to test, since recovery will be normal (i.e., it does not have to do anything); we do not need so many combinations testing this behavior.
-
Committed by Ashwin Agrawal
test_switch_13_24.py was using the fault injector `dtm_xlog_distributed_commit`, while test_switch_01_12.py has tests for `dtm_broadcast_commit_prepared`. From the code, `dtm_xlog_distributed_commit` is set after the commit record is written in `RecordTransactionCommit()`, and `dtm_broadcast_commit_prepared` is set just before commit prepared is broadcast to the segments in `doNotifyingCommitPrepared()`, which is called right after `RecordTransactionCommit()`. There is no 2PC state change between these two fault injection points, making the tests at these two points redundant. Hence, reduce one more box on CI by moving some combinations to test_switch.py and deleting test_switch_13_24.py.
-
Committed by Lisa Owen
* docs - add gp_toolkit discussion of resgroup views
* remove proposed_cpu_rate_limit
-
Committed by Shoaib Lari
For AO tables, users do not always want to run ANALYZE on a table when the analyzedb command is run; for example, when they have already ANALYZEd the table. The --gen_profile_only option saves the modification count of the specified AO table (or of all AO tables if none is specified), so that a subsequent analyzedb command will not ANALYZE the AO table if its modification count has not changed from the saved value.

Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: Shoaib Lari <slari@pivotal.io>
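The skip logic this option enables amounts to comparing a saved modification count with the current one. A minimal sketch, with hypothetical function and table names (not analyzedb's actual internals):

```python
def should_analyze(table, saved_counts, current_counts):
    """ANALYZE only if the AO table's modification count changed
    since the profile was saved, or if no profile was ever saved."""
    return saved_counts.get(table) != current_counts.get(table)

saved = {"public.sales": 42}                      # written by --gen_profile_only
current = {"public.sales": 42, "public.orders": 7}

print(should_analyze("public.sales", saved, current))   # False: count unchanged, skip
print(should_analyze("public.orders", saved, current))  # True: no saved profile
```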
-
- 09 Nov, 2017: 22 commits
-
Committed by Daniel Gustafsson
The releng NON_PRODUCTION_FILES and QAUTILS_FILES lists were referencing quite a few apps that were removed a long time ago (some of them in the 3.x cycle). Also, the Perl module split in 5.x had made explain in this list not work, since it lacked the corresponding module.
-
Committed by Daniel Gustafsson
Instead of having to remember to manually update the gppylib JSON file (which has frequently been forgotten), hook the generation into the src/backend/catalog "all" target so that it is generated automatically when needed and can thus be removed from the repo (removing the risk of using a stale file). Also update the documentation and make some minor comment fixes to the process_foreign_key script.
-
Committed by Daniel Gustafsson
Remove the trailing whitespace that catullus.pl appends to the comments on the DATA rows for no good reason. Also regenerate the pg_proc_gp.h file without the whitespace.
-
Committed by Pengzhou Tang
* Do UnassignResGroup within prepareTransaction too. prepareTransaction() puts the QE out of any transaction temporarily until the second commit command arrives, so any failure in this gap would leak resource group resources, including slots.
* Clean up code: move UnassignResGroup() into AtEOXact_ResGroup() so that resource-group-related code does not spread across the prepare/commit/abort functions.
* Do not call callback functions in PrepareTransaction, because the transaction is not truly committed yet.
-
Committed by Pengzhou Tang
-
Committed by Adam Lee
-
Committed by Lisa Owen
* docs - PXF supports RPM install of clients
* edits per review comments from alex
-
Committed by Lisa Owen
* docs - add PXF memory and thread config content
* edits to intro paragraph from shivram
* edits from alex re: tomcat queueing
-
Committed by Lav Jain
-
Committed by Todd Sedano
-
Committed by Karen Huddleston
-
Committed by Melanie Plageman
In situations in which our available memory is much larger than the memory in our sort context, it was previously possible to overflow the maxNumEntries variable.

Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
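This kind of overflow can be illustrated by simulating C's 32-bit signed arithmetic in Python (Python integers do not overflow, so we wrap them manually; the numbers below are made up for illustration and are not taken from the actual code):

```python
def to_int32(n):
    """Wrap an arbitrary integer into C's signed 32-bit range."""
    n &= 0xFFFFFFFF
    return n - (1 << 32) if n >= (1 << 31) else n

avail_mem = 8 * 1024 ** 3   # 8 GB of available memory
entry_size = 2              # a tiny per-entry cost in the sort context
max_num_entries = avail_mem // entry_size   # 4294967296: exceeds int32 range

print(to_int32(max_num_entries))  # 0 -- the entry count silently wraps
```

When available memory dwarfs the per-entry cost, the computed entry count exceeds INT32_MAX and wraps, which is why the variable needed a wider type or a clamp.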
-
Committed by David Yozie
-
Committed by Mel Kiyama
* docs: gpdbrestore - add information for --noplan option
* docs: add gpdbrestore --noplan option information to example text
-
Committed by Asim R P
Unfortunately we can't remove the code referenced by the FIXME yet, but we've pulled the existing context into the comment and moved it to GitHub for tracking. [ci skip]

Signed-off-by: Jacob Champion <pchampion@pivotal.io>
Signed-off-by: Asim R P <apraveen@pivotal.io>
-
Committed by Ekta Khanna
This commit achieves the same behavior as before, ensuring backward compatibility for Python. Reverting the copyfile changes from commit 640fd9d5 for regression.diffs and regression.out, as they are used for ICG regression diffs in the ORCA CI pipeline.

Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
-
Committed by Alexander Denissov
-
Committed by sambitesh
-
Committed by Xin Zhang
Signed-off-by: JiangTian Nie <jiangtian.nie@gmail.com>
-
Committed by Taylor Vesely
Running ALTER TABLE PARTITION SPLIT on range subpartitions resulted in both new partitions incorrectly having the same partition order value (parruleord in pg_partition_rule).

ALTER TABLE PARTITION SPLIT is accomplished by running multiple DDLs in sequence:

1. CREATE TEMP TABLE to match the data type/orientation of the partition we are splitting.
2. ALTER TABLE PARTITION EXCHANGE the partition with the new temporary table. The temporary table now contains the partition data, and the partition table is now empty.
3. ALTER TABLE DROP PARTITION on the exchanged partition (the now-empty table):
   a. Drop the partitioning rule on the empty partition.
   b. DROP TABLE on the empty partition.

At this point (in the old behavior) we remove the partition rule from the in-memory copy of the partition metadata. We need to remove it from the context here, or ADD PARTITION will believe that a partition for the split range already exists and will fail to create a new partition.

Now, create two new partitions in place of the old one. For each partition:

4. a. CREATE TABLE for the new range.
   b. ADD PARTITION: search for a hole in the partition order in which to place the partition, opening up a hole in parruleord if needed.

When adding a subpartition, ADD PARTITION relies on the partition rules passed to it in order to find any holes in the partition range. Previously, the metadata was not refreshed when adding the second partition, which resulted in the ADD PARTITION command creating both tables with the same partition rule order (parruleord). This commit resolves the issue by refreshing the partition metadata (PgPartRule) passed to the CREATE TABLE/ADD PARTITION commands on each iteration.
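The "search for a hole in the partition order" step can be sketched as a gap search over the sorted parruleord values; this is an illustrative sketch, not the actual ADD PARTITION code:

```python
def find_parruleord_hole(parruleord_values):
    """Return the first missing value in a 1-based dense sequence,
    or one past the current maximum if there is no hole."""
    expected = 1
    for value in sorted(parruleord_values):
        if value != expected:
            return expected   # found a hole to place the new partition in
        expected += 1
    return expected           # no hole: append at the end

print(find_parruleord_hole([1, 2, 4, 5]))  # 3
print(find_parruleord_hole([1, 2, 3]))     # 4
```

The bug amounted to both new partitions being handed the same stale list of rules, so both found the same "hole" and received identical parruleord values.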
-
Committed by Lisa Owen
* docs - qualify some resource-queue specific content (part 1)
* explicitly state resource groups do not use gp_vmem_protect_limit
* RQ/RG qualify some gucs and system tables/views
* qualify gp_toolkit RQ content
* add RG segment memory calculation
* clarify resgroup per-segment memory based on active primary segs on host
* remove max_resource_groups guc again
-
Committed by Adam Lee
* Several small fixes to the tests:
  1. Ignore two generated test files.
  2. Remove the string containing unpredictable segment numbers.
  3. Drop tables in the external_table case, so we can run it multiple times in a row.
* Fix cases that are unpredictable.

> commit 3bbedbe9
> Author: Heikki Linnakangas <hlinnakangas@pivotal.io>
> Date: Thu Nov 2 10:04:58 2017 +0200
>
> Wake up faster, if a segment returns an error.
>
> Previously, if a segment reported an error after starting up the interconnect, it would take up to 250 ms for the main thread in the QD process to wake up and poll the dispatcher connections, and to see that there was an error. Shorten that time, by waking up immediately if the QD->QE libpq socket becomes readable while we're waiting for data to arrive in a Motion node.
>
> This isn't a complete solution, because this will only wake up if one arbitrarily chosen connection becomes readable, and we still rely on polling for the others. But this greatly speeds up many common scenarios. In particular, the "qp_functions_in_select" test now runs in under 5 s on my laptop, when it took about 60 seconds before.

Before that commit, the master would only check every 250 ms whether one of the segments had reported an error. Now it wakes up and cancels the whole query as soon as it receives an error from the first segment. That makes it more likely that the other segments have not yet reached the same number of errors as what is memorized in the expected output.

These two cases check:
1. When selecting from a CTE fails because one of the CTE's external tables reached the error limit, how many errors happened in the other external table of the CTE, which would not have reached the limit.
2. When selecting from an external table with two locations mapped to two segments each, and one segment reached the reject limit, whether the other reached it as well.

We could not predict these two results without special test files, even without that commit. This commit removes the CTE case, and checks that at least one segment failed in case readable_query26.
-