- 29 Jul 2020, 17 commits
-
-
Committed by Adam Berlin
(cherry picked from commit 3b7d6b85)
-
Committed by Ning Yu
We used to mark the GUC gp_interconnect_proxy_addresses as PGC_POSTMASTER, so the cluster had to be restarted to reload this setting. This can be a problem during gpexpand: the cluster expansion itself is online, but a restart is needed to configure the proxy addresses for the new segments. Now we have changed it to PGC_SIGHUP, so the setting can be reloaded on SIGHUP. Also changed the setting from a developer option to a normal one. (cherry picked from commit c2523232)
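A minimal sketch of reloading the setting without a restart; the address list and segment layout here are illustrative, not taken from the commit:
```bash
# Update the GUC on all hosts (example address string; adjust to your cluster layout)
gpconfig -c gp_interconnect_proxy_addresses \
         -v "'1:-1:10.0.0.1:2000,2:0:10.0.0.2:2001'" \
         --skipvalidation

# With PGC_SIGHUP, a configuration reload (SIGHUP) is enough; no restart required
gpstop -u
```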
-
Committed by Ning Yu
The code will be compiled with ic-proxy enabled, but the tests are still run in the default ic-udpifc mode. Authored-by: Daniel Gustafsson <dgustafsson@pivotal.io> (cherry picked from commit cdf4cdeb)
-
Committed by Ning Yu
We used to use the option --with-libuv to enable ic-proxy; it is not straightforward to understand the purpose of that option, though. So we renamed it to --enable-ic-proxy, and the default setting is changed to disabled. Suggested by Kris Macoskey <kmacoskey@pivotal.io> (cherry picked from commit 81810a20)
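As a sketch, the renamed option would be passed to configure like this; the prefix and make flags are illustrative, not from the commit:
```bash
# ic-proxy support is now off by default; opt in explicitly at build time
./configure --enable-ic-proxy --prefix=/usr/local/gpdb
make -j4 && make install
```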
-
Committed by Ning Yu
The interconnect proxy mode, a.k.a. ic-proxy, is a new interconnect mode. All the backends communicate via a proxy bgworker, and all the backends on the same segment share the same proxy bgworker, so every two segments only need one network connection between them, which reduces the network flows as well as the number of ports. To enable the proxy mode we need to first configure the GUC gp_interconnect_proxy_addresses, for example:

gpconfig \
    -c gp_interconnect_proxy_addresses \
    -v "'1:-1:10.0.0.1:2000,2:0:10.0.0.2:2001,3:1:10.0.0.3:2002'" \
    --skipvalidation

Then restart to take effect. (cherry picked from commit 6188fb1f)
-
Committed by Xiaoran Wang
* Upgrade pgbouncer to 1.13
* Make pgbouncer 1.13 work on centos6
* Use the pgbouncer master branch
-
Committed by David Yozie
-
Committed by Mel Kiyama
* docs - add GUC gp_add_column_inherits_table_setting (6.x only)
  - Add GUC
  - Also updated the ADD COLUMN clause of the ALTER TABLE command with GUC information
* docs - update based on review comment.
* docs - review comment update.
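A hypothetical illustration of the documented GUC; the table and column names are made up, and the session-level boolean usage is an assumption rather than something stated in the commit:
```bash
# Assumed usage: have ADD COLUMN inherit the table's storage settings for the new column
psql -c "SET gp_add_column_inherits_table_setting = on;
         ALTER TABLE sales_ao ADD COLUMN note text;"
```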
-
- 28 Jul 2020, 2 commits
-
-
Committed by Paul Guo
Here is the diff output of the test result:

drop database some_database_without_tablespace;
-DROP
+ERROR: database "some_database_without_tablespace" is being accessed by other users
+DETAIL: There is 1 other session using the database.
drop tablespace some_basebackup_tablespace;
-DROP
+ERROR: tablespace "some_basebackup_tablespace" is not empty

The reason is that after the client connection to the database exits, the server needs some time (the process might be scheduled out soon, and the operation needs to contend for the ProcArrayLock lock) to release the PGPROC in proc_exit()->ProcArrayRemove(). During dropdb() (for database drop), postgres will call CountOtherDBBackends() to see if there are still sessions using the database by checking proc->databaseId, and it will try for at most 5 sec. This test quits the db connection of some_database_without_tablespace and then drops the database immediately. This should be mostly fine, but if the system is slow or under heavy load, it could still lead to test flakiness. This issue could be simulated using gdb. Let's poll until the database drop command succeeds for the affected database. It seems the DROP DATABASE SQL command cannot be run in a transaction block, so I could not implement this with plpgsql; instead I use the dropdb utility and a bash loop to implement it. Reviewed-by: Asim R P <pasim@vmware.com> (cherry picked from commit c8b00ac7)
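A minimal sketch of the polling approach described above, assuming dropdb is on the PATH and connection defaults point at the test cluster; the retry budget is illustrative:
```bash
# Keep retrying the drop until the lingering session's PGPROC entry is released
for i in $(seq 1 30); do
    dropdb some_database_without_tablespace 2>/dev/null && break
    sleep 1
done
```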
-
Committed by mkiyama
Also, fix bad cross-ref.
-
- 23 Jul 2020, 4 commits
-
-
Committed by Hubert Zhang
ExecChooseHashTableSize() is a hot function which is called not only by the executor, but also by the planner. The planner calls this function when calculating the cost of each join path, and the number of join paths grows exponentially with the number of tables. As a result, do not use elog(LOG) there, to avoid generating too many logs. (cherry picked from commit 6b4d93c5)
-
Committed by Paul Guo
Previously it used max_prepared_xacts for the shared snapshot slot number. The reason that it does not use MaxBackends, per the comment, is that ideally on a QE we want to use the QD MaxBackends for the slot number, and note that usually QE MaxBackends should be greater than QD MaxBackends due to potentially multiple gangs per query. The code previously used max_prepared_xacts for the shared snapshot slot number calculation. That is not correct given that we have read-only queries, and we have one-phase commit now. Let's use MaxBackends for the shared snapshot slot number calculation for safety, though this might waste some memory. Reviewed-by: xiong-gang <gxiong@pivotal.io> (cherry picked from commit f6c59503)
-
Committed by Paul Guo
Previously we assigned it as max_prepared_xacts. It is used to initialize some 2pc related shared memory. For example, the array shmCommittedGxactArray is created with this length, and that array is used to collect not-yet "forgotten" distributed transactions during master/standby recovery, but the array length might be problematic since:

1. If master max_prepared_xacts is equal to segment max_prepared_xacts, as usual, it is possible that some distributed transactions use just a partial gang, so the total number of distributed transactions might be larger (and even much larger) than max_prepared_xacts. The documentation says max_prepared_xacts should be greater than max_connections, but there is no code to enforce that.
2. Also it is possible that master max_prepared_xacts is different from segment max_prepared_xacts (although the documentation does not suggest it, there is no code to enforce that).

To fix that we use MaxBackends for the gxact number on master. We might just use the GUC max_connections (MaxBackends additionally includes the number of autovacuum workers and bg workers besides the GUC max_connections), but I'm conservatively using MaxBackends, since this issue is annoying - the standby cannot recover due to the FATAL message below even after a postgres reboot, unless we temporarily increase the GUC max_prepared_transactions value.

2020-07-17 16:48:19.178667 CST,,,p33652,th1972721600,,,,0,,,seg-1,,,,,"FATAL","XX000","the limit of 3 distributed transactions has been reached","It should not happen. Temporarily increase max_connections (need postmaster reboot) on the postgres (master or standby) to work around this issue and then report a bug",,,,"xlog redo at 0/C339BA0 for Transaction/DISTRIBUTED_COMMIT: distributed commit 2020-07-17 16:48:19.101832+08 gid = 1594975696-0000000009, gxid = 9",,0,,"cdbdtxrecovery.c",571,"Stack trace:
1    0xb3a30f postgres errstart (elog.c:558)
2    0xc3da4d postgres redoDistributedCommitRecord (cdbdtxrecovery.c:565)
3    0x564227 postgres <symbol not found> (xact.c:6942)
4    0x564671 postgres xact_redo (xact.c:7080)
5    0x56fee5 postgres StartupXLOG (xlog.c:7207)

Reviewed-by: xiong-gang <gxiong@pivotal.io> (cherry picked from commit 2a961e65)
-
Committed by Paul Guo
We need that in more than one test. Reviewed-by: xiong-gang <gxiong@pivotal.io> (cherry picked from commit af942980)
-
- 22 Jul 2020, 4 commits
-
-
Committed by Zhenghua Lyu
General and segmentGeneral locus imply that the corresponding slice, if executed on many different segments, should provide the same result data set. Thus, in some cases, General and segmentGeneral can be treated like broadcast. But what if the segmentGeneral and general locus paths contain volatile functions? Volatile functions, by definition, do not guarantee the same result across invocations, so in such cases the paths lose that property and cannot be treated as *general. Previously, the Greenplum planner did not handle these cases correctly. A Limit over a general or segmentGeneral path has the same issue.

The fix idea of this commit is: when we find the pattern (a general or segmentGeneral locus path contains volatile functions), we create a motion path above it to turn its locus into singleQE and then create a projection path. Then the core job becomes choosing the places to check:

1. For a single base rel, we should only check its restriction; this is at the bottom of the planner, in the function set_rel_pathlist
2. When creating a join path, if the join locus is general or segmentGeneral, check its joinqual to see if it contains volatile functions
3. When handling a subquery, we invoke the set_subquery_pathlist function; at the end of this function, check the targetlist and havingQual
4. When creating a limit path, the same check-and-change algorithm should also be used
5. Correctly handle make_subplan

OrderBy clauses and Group clauses should be included in the targetlist and are handled by Step 3 above.

This commit also fixes DMLs on replicated tables. Update & Delete statements on a replicated table are special: these statements have to be dispatched to each segment to execute. So if they contain volatile functions in their targetList or where clause, we should reject such statements (see the sketch after this message):

1. For the targetList, we check it in the function create_motion_path_for_upddel
2. For the where clause, it is handled in the query planner; when we find the pattern and want to fix it, do another check to see whether we are updating or deleting a replicated table, and if so reject the statement.

Cherry-picked from commit d1f9b96b from master to 6X.
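A minimal sketch of the rejected-DML case described above, assuming a psql connection to a GPDB 6 cluster; the table name is made up and the exact error wording is not taken from the commit:
```bash
# A replicated table keeps a full copy of the data on every segment
psql -c "CREATE TABLE rep_t (a int, b int) DISTRIBUTED REPLICATED;"

# random() is volatile, so each segment would compute different values and the
# replicas would diverge; per this commit the planner rejects such an UPDATE
psql -c "UPDATE rep_t SET b = (random() * 100)::int;"
```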
-
Committed by Paul Guo
During testing, I encountered an incremental gprecoverseg hang issue. Incremental gprecoverseg is based on pg_rewind. pg_rewind launches a single mode postgres process and quits after crash recovery if the postgres instance was not cleanly shut down - this is used to ensure that the postgres is in a consistent state before doing incremental recovery. I found that the single mode postgres hangs with the below stack:

#1 0x00000000008cf2d6 in PGSemaphoreLock (sema=0x7f238274a4b0, interruptOK=1 '\001') at pg_sema.c:422
#2 0x00000000009614ed in ProcSleep (locallock=0x2c783c0, lockMethodTable=0xddb140 <default_lockmethod>) at proc.c:1347
#3 0x000000000095a0c1 in WaitOnLock (locallock=0x2c783c0, owner=0x2cbf950) at lock.c:1853
#4 0x0000000000958e3a in LockAcquireExtended (locktag=0x7ffde826aa60, lockmode=3, sessionLock=0 '\000', dontWait=0 '\000', reportMemoryError=1 '\001', locallockp=0x0) at lock.c:1155
#5 0x0000000000957e64 in LockAcquire (locktag=0x7ffde826aa60, lockmode=3, sessionLock=0 '\000', dontWait=0 '\000') at lock.c:700
#6 0x000000000095728c in LockSharedObject (classid=1262, objid=1, objsubid=0, lockmode=3) at lmgr.c:939
#7 0x0000000000b0152b in InitPostgres (in_dbname=0x2c769f0 "template1", dboid=0, username=0x2c59340 "gpadmin", out_dbname=0x0) at postinit.c:1019
#8 0x000000000097b970 in PostgresMain (argc=5, argv=0x2c51990, dbname=0x2c769f0 "template1", username=0x2c59340 "gpadmin") at postgres.c:4820
#9 0x00000000007dc432 in main (argc=5, argv=0x2c51990) at main.c:241

It tries to hold the lock for template1 on pg_database with lockmode 3, but that conflicts with the lockmode 5 lock which was held by a recovered dtx transaction in startup RecoverPreparedTransactions(). Typically the dtx transaction comes from "create database" (by default the template database is template1).

Fix this by using the postgres database for single mode postgres execution. The postgres database is commonly used in many background worker backends like dtx recovery, gdd and ftsprobe. With this change, we do not need to worry about "create database" with template postgres, etc., since they won't succeed, thus avoiding the lock conflict. We might be able to fix this in InitPostgres() by bypassing the locking code in single mode, but the current fix seems safer. Note that InitPostgres() locks/unlocks some other catalog tables as well, but almost all of them use lock mode 1 (except mode 3 on pg_resqueuecapability, per debugging output). It seems unusual in a real scenario to have a dtx transaction that locks a catalog with mode 8, which would conflict with mode 1. If we encounter that later we need to think out a better (possibly non-trivial) solution; for now let's fix the issue we encountered first.

Note that the code fixes in buildMirrorSegments.py and twophase.c in this patch are not related to this issue. They do not seem to be strict bugs, but we'd better fix them to avoid potential issues in the future. Reviewed-by: Ashwin Agrawal <aashwin@vmware.com> Reviewed-by: Asim R P <pasim@vmware.com> (cherry picked from commit 288908f3)
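For context, a sketch of the kind of single-user-mode run pg_rewind performs for its clean-shutdown check; the data directory and flags are assumptions, and only the switch of the target database from template1 to postgres reflects this commit:
```bash
# Before the fix, crash recovery connected to template1 and could block on its
# pg_database lock; connecting to the postgres database avoids the dtx lock conflict
echo "" | postgres --single -F -D /data/primary/gpseg0 postgres
```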
-
Committed by Paul Guo
Now that we do not have to use a full gang for a distributed transaction, the number of in-progress distributed transactions on master might be greater than max_prepared_xacts if max_prepared_xacts is configured with a small value. max_prepared_xacts is used as the inProgressXidArray length for the distributed snapshot. This might lead to distributed snapshot creation failure due to "Too many distributed transactions for snapshot" if the system is under heavy 2pc load. Fix this by using GetMaxSnapshotXidCount() for the length of the array inProgressXidArray, following the setting on master. This fixes github issue https://github.com/greenplum-db/gpdb/issues/10057 No new test for this since the test isolation2:prepare_limit already covers it. (I encountered this issue when backporting a PR that introduces the test isolation2:prepare_limit, so I need to push this first and then the backporting PR.) Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
-
Committed by Zhenghua Lyu
Greenplum uses a unique row id path as a candidate to implement semijoin. It was introduced long ago. But GPDB6 has upgraded the kernel version to Postgres 9.4 and introduced many new path types and new plan nodes, and cdbpath_dedup_fixup failed to consider them. Some typical issues are: https://github.com/greenplum-db/gpdb/issues/9427 On the master branch, Heikki's commit 9628a332 refactored this part of the code, so it is OK on master. And for 4X and 5X, we do not have many new kinds of plan nodes and path nodes, so they are also OK. It is very hard to backport commit 9628a332 to 6X; there is no concept of a Path's target list in 9.4. And totally removing this kind of path would be overkill. So the policy is to fix these issues one by one as they are reported.
-
- 21 Jul 2020, 2 commits
-
-
Committed by (Jerome) Junfeng Yang
Enlarge the sleep time for a query which will be canceled later, to avoid slow execution failing the test. Normally the test's running time should not be affected since the sleeping query will be terminated immediately. (cherry picked from commit bbccf20c)
-
Committed by Denis Smirnov
XLogReaderAllocate returns NULL if the xlogreader couldn't be allocated. These NULL checks were forgotten in several places of twophase.c and caused segmentation faults under heavy workloads.
-
- 20 Jul 2020, 1 commit
-
-
Committed by Hao Wu
Oracle Linux is compiled from Red Hat Enterprise Linux (RHEL) source code, replacing Red Hat branding with Oracle's[1]. The ICW jobs for oracle7 consume the GPDB binary compiled from centos7. [1]: Wiki https://en.wikipedia.org/wiki/Oracle_Linux
-
- 17 Jul 2020, 5 commits
-
-
Committed by Paul Guo
We've seen such a case on a stable release but it is hard to debug via the message only, so let's provide more details in the error message.
-
Committed by Jesse Zhang
Our implementations of memory pools have a hidden dependency on _the_ global memory pool manager: typically GPOS_NEW and GPOS_DELETE will reach for the memory pool manager singleton. This makes GPOS_DELETE on a memory pool manager undefined behavior because we call member functions on an object after its destructor finishes. On the Postgres 12 merge branch, this manifests itself in a crash during initdb. More concerning is that it only crashed when we set max connections and shared buffers to a specific number.
-
Committed by Mel Kiyama
* docs - update utility docs with IP/hostname information. Add information to gpinitsystem, gpaddmirrors, and gpexpand ref. docs
  --Information about using hostnames vs. IP addresses
  --Information about configuring hosts that are configured with multiple NICs
  Also updated some examples in gpinitsystem
* docs - review comment updates. Add more information from dev.
* docs - change examples to show valid configurations that support failover. Also fix typos and minor edits.
* docs - updates based on review comments.
-
Committed by Lisa Owen
- 16 Jul 2020, 5 commits
-
-
Committed by Pengzhou Tang
The failed test case tests that the command "copy lineitem to '/tmp/abort.csv'" can be cancelled after COPY is dispatched to the QEs. To verify this, it checks that /tmp/abort.csv has fewer rows than lineitem. The cancel logic in the code is: the QD dispatches the COPY command to the QEs, then if the QD gets a cancel interrupt, it sends a cancel request to the QEs; however, the QD keeps receiving data from the QEs even after it has gotten a cancel interrupt. The QD relies on the QEs to receive the cancel request and explicitly stop copying data to the QD. Obviously, the QEs may already have copied out all the data to the QD before they get the cancel requests, so the test case cannot guarantee that /tmp/abort.csv has fewer rows than lineitem. To fix this, we just verify that the COPY command can be aborted with the message 'ERROR: canceling statement due to user request'; the row count verification looks pointless here. It's a cherry-pick of 9480d631 from master
-
Committed by Ashuka Xue
Pull out the implementation of the binary heap into its own templated header file.
-
Committed by Ashuka Xue
Prior to this commit, merging two histograms was not commutative. Meaning histogram1->Union(histogram2) could result in a row estimate of 1500 rows, but histogram2->Union(histogram1) could result in a row estimate of 600 rows.

Now, MakeBucketMerged has been renamed to SplitAndMergeBuckets. This function, which calculates the statistics for the merged bucket, now consistently returns the same histogram buckets regardless of the order of input. This, in turn, makes MakeUnionHistogramNormalize and MakeUnionAllHistogramNormalize commutative.

Once we have successfully split the buckets and merged them as necessary, we may have generated up to 3X the number of buckets that were originally present. Thus we cap the number of buckets to either the max size of the two incoming bucket sets, or 100 buckets. CombineBuckets will then reduce the size of the histogram by combining consecutive buckets that have similar information. It does this by using a combination of two ratios: freq/ndv and freq/bucket_width. These two ratios were decided based on the following examples.

Assuming that we calculate row counts for selections like the following:
- For a predicate col = const: rows * freq / NDVs
- For a predicate col < const: rows * (sum of full or fractional frequencies)

Example 1 (rows = 100), freq/width, ndvs/width and ndvs/freq are all the same:
```
Bucket 1:  [0, 4)   freq .2  NDVs 2  width 4    freq/width = .05  ndv/width = .5  freq/ndv = .1
Bucket 2:  [4, 12)  freq .4  NDVs 4  width 8    freq/width = .05  ndv/width = .5  freq/ndv = .1
Combined:  [0, 12)  freq .6  NDVs 6  width 12
```
This should give the same estimates for various predicates, with separate or combined buckets:
```
pred        separate buckets        combined bucket    result
-------     ---------------------   ---------------    -----------
col = 3 ==> 100 * .2 / 2          = 100 * .6 / 6     = 10 rows
col = 5 ==> 100 * .4 / 4          = 100 * .6 / 6     = 10 rows
col < 6 ==> 100 * (.2 + .25 * .4) = 100 * .5 * .6    = 30 rows
```

Example 2 (rows = 100), freq and ndvs are the same, but width is different:
```
Bucket 1:  [0, 4)   freq .4  NDVs 4  width 4    freq/width = .1   ndv/width = 1   freq/ndv = .1
Bucket 2:  [4, 12)  freq .4  NDVs 4  width 8    freq/width = .05  ndv/width = .5  freq/ndv = .1
Combined:  [0, 12)  freq .8  NDVs 8  width 12
```
This will give different estimates with the combined bucket, but only for non-equal preds:
```
pred        separate buckets         combined bucket    results
-------     ---------------------    ---------------    --------------
col = 3 ==> 100 * .4 / 4           = 100 * .8 / 8     = 10 rows
col = 5 ==> 100 * .4 / 4           = 100 * .8 / 8     = 10 rows
col < 6 ==> 100 * (.4 + .25 * .4) != 100 * .5 * .8      50 vs. 40 rows
```

Example 3 (rows = 100), now NDVs / freq is different:
```
Bucket 1:  [0, 4)   freq .2  NDVs 4  width 4    freq/width = .05  ndv/width = 1   freq/ndv = .05
Bucket 2:  [4, 12)  freq .4  NDVs 4  width 8    freq/width = .05  ndv/width = .5  freq/ndv = .1
Combined:  [0, 12)  freq .6  NDVs 8  width 12
```
This will give different estimates with the combined bucket, but only for equal preds:
```
pred        separate buckets         combined bucket    results
-------     ---------------------    ---------------    ---------------
col = 3 ==> 100 * .2 / 4          != 100 * .6 / 8        5 vs. 7.5 rows
col = 5 ==> 100 * .4 / 4          != 100 * .6 / 8       10 vs. 7.5 rows
col < 6 ==> 100 * (.2 + .25 * .4)  = 100 * .5 * .6     = 30 rows
```

This commit also adds an attribute to the statsconfig for MaxStatsBuckets and changes the scaling method when creating singleton buckets.
-
Committed by Ashuka Xue
MergeHistogramMapsForDisjPreds: this commit refactors MakeStatsFilter to use MakeHistHashMapConjOrDisjFilter instead of individually calling MakeHistHashMapConj and MakeHistHashMapDisj. It also modifies MergeHistogramMapsForDisjPreds to avoid copying and creating unnecessary histogram buckets.
-
Committed by Mel Kiyama
* docs - add information for SSL with standby master
  --SSL file should not be in $MASTER_DATA_DIRECTORY
  Also:
  --Add note about not using NULL ciphers
  --Correct default directory for SSL files to $MASTER_DATA_DIRECTORY
* docs - review comment updates
-