- 17 Aug 2020 (2 commits)
-
-
Committed by Huiliang.Liu
-
Committed by xiaoxiaoHe-E
-
- 23 Jul 2020 (3 commits)
-
-
Committed by Paul Guo
Previously, max_prepared_xacts was used for the shared snapshot slot number. The reason it did not use MaxBackends, per the code comment, is that ideally on a QE we would want to use the QD's MaxBackends for the slot number; note that a QE's MaxBackends should usually be greater than the QD's due to potentially multiple gangs per query. In the end, the code still used max_prepared_xacts for the shared snapshot slot calculation. That is not correct now that we have read-only queries and one-phase commit. Use MaxBackends for the shared snapshot slot calculation to be safe, even though this might waste some memory.

Reviewed-by: xiong-gang <gxiong@pivotal.io>
(cherry picked from commit f6c59503)
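Purely as an illustration of the sizing change (this is not the actual GPDB shared-snapshot code; SnapshotSlotSketch and the *Sketch functions are hypothetical names), a minimal sketch of computing the slot array size from MaxBackends rather than max_prepared_xacts:

```
#include "postgres.h"
#include "miscadmin.h"      /* MaxBackends */
#include "storage/shmem.h"  /* ShmemInitStruct, mul_size */

/* Hypothetical slot type, standing in for the real shared snapshot slot. */
typedef struct SnapshotSlotSketch
{
	int			slotid;
	/* ... snapshot payload ... */
} SnapshotSlotSketch;

/*
 * Size the slot array by MaxBackends: every backend (including read-only
 * and one-phase-commit sessions) may need a slot, whereas max_prepared_xacts
 * only covers prepared transactions.
 */
static Size
SharedSnapshotShmemSizeSketch(void)
{
	return mul_size(MaxBackends, sizeof(SnapshotSlotSketch));
}

static void
SharedSnapshotShmemInitSketch(void)
{
	bool		found;

	(void) ShmemInitStruct("Shared Snapshot Slots (sketch)",
						   SharedSnapshotShmemSizeSketch(), &found);
}
```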
-
Committed by Paul Guo
Previously we assigned it as max_prepared_xacts. It is used to initialize some 2PC-related shared memory. For example, the array shmCommittedGxactArray is created with this length, and that array is used to collect not-yet-"forgotten" distributed transactions during master/standby recovery. The array length might be problematic because:

1. Even if the master's max_prepared_xacts equals the segments' max_prepared_xacts, as usual, some distributed transactions may use only a partial gang, so the total number of distributed transactions might be larger (even much larger) than max_prepared_xacts. The documentation says max_prepared_xacts should be greater than max_connections, but there is no code to enforce that.
2. It is also possible that the master's max_prepared_xacts differs from the segments' (the documentation does not suggest this, but there is no code to enforce it either).

To fix this, we use MaxBackends for the gxact number on the master. We could use just the GUC max_connections (MaxBackends additionally includes the number of autovacuum workers and background workers), but I'm conservatively using MaxBackends, since this issue is annoying: the standby cannot recover, due to the FATAL message below, even after a postgres reboot, unless we temporarily increase the max_prepared_transactions value.

    2020-07-17 16:48:19.178667 CST,,,p33652,th1972721600,,,,0,,,seg-1,,,,,"FATAL","XX000","the limit of 3 distributed transactions has been reached","It should not happen. Temporarily increase max_connections (need postmaster reboot) on the postgres (master or standby) to work around this issue and then report a bug",,,,"xlog redo at 0/C339BA0 for Transaction/DISTRIBUTED_COMMIT: distributed commit 2020-07-17 16:48:19.101832+08 gid = 1594975696-0000000009, gxid = 9",,0,,"cdbdtxrecovery.c",571,"Stack trace:
    1    0xb3a30f postgres errstart (elog.c:558)
    2    0xc3da4d postgres redoDistributedCommitRecord (cdbdtxrecovery.c:565)
    3    0x564227 postgres <symbol not found> (xact.c:6942)
    4    0x564671 postgres xact_redo (xact.c:7080)
    5    0x56fee5 postgres StartupXLOG (xlog.c:7207)

Reviewed-by: xiong-gang <gxiong@pivotal.io>
(cherry picked from commit 2a961e65)
-
Committed by Paul Guo
We need that in more than one test.

Reviewed-by: xiong-gang <gxiong@pivotal.io>
(cherry picked from commit af942980)
-
- 22 Jul 2020 (4 commits)
-
-
Committed by Zhenghua Lyu
General and segmentGeneral locus imply that the corresponding slice, if executed on many different segments, should produce the same result data set. Thus, in some cases, General and segmentGeneral can be treated like broadcast. But what if a segmentGeneral or general locus path contains volatile functions? Volatile functions, by definition, do not guarantee the same results across invocations, so in such cases the path loses this property and cannot be treated as *general. Previously, the Greenplum planner did not handle these cases correctly. Limit over a general or segmentGeneral path has the same issue.

The fix idea of this commit is: when we find the pattern (a general or segmentGeneral locus path containing volatile functions), we create a motion path above it to turn its locus into singleQE, and then create a projection path. The core job then becomes choosing the places to check:

1. For a single base rel, we only need to check its restrictions; this happens at the bottom of the planner, in the function set_rel_pathlist.
2. When creating a join path, if the join locus is general or segmentGeneral, check its joinqual for volatile functions.
3. When handling a subquery, we invoke the set_subquery_pathlist function; at the end of this function, check the targetlist and havingQual.
4. When creating a limit path, apply the same check-and-change algorithm.
5. Correctly handle make_subplan.

OrderBy and Group clauses are included in the targetlist and handled by step 3 above.

This commit also fixes DMLs on replicated tables. Update and Delete statements on a replicated table are special: they have to be dispatched to every segment to execute. So if they contain volatile functions in their targetList or where clause, we should reject such statements:

1. For the targetList, we check it in the function create_motion_path_for_upddel.
2. For the where clause, it is handled in the query planner: when we find the pattern and want to fix it, we additionally check whether we are updating or deleting a replicated table, and if so, we reject the statement.

Cherry-picked from commit d1f9b96b from master to 6X.
-
Committed by Paul Guo
During testing, I encountered an incremental gprecoverseg hang issue. Incremental gprecoverseg is based on pg_rewind. pg_rewind launches a single-mode postgres process and quits after crash recovery if the postgres instance was not cleanly shut down; this is used to ensure that postgres is in a consistent state before doing incremental recovery. I found that the single-mode postgres hangs with the stack below:

    #1 0x00000000008cf2d6 in PGSemaphoreLock (sema=0x7f238274a4b0, interruptOK=1 '\001') at pg_sema.c:422
    #2 0x00000000009614ed in ProcSleep (locallock=0x2c783c0, lockMethodTable=0xddb140 <default_lockmethod>) at proc.c:1347
    #3 0x000000000095a0c1 in WaitOnLock (locallock=0x2c783c0, owner=0x2cbf950) at lock.c:1853
    #4 0x0000000000958e3a in LockAcquireExtended (locktag=0x7ffde826aa60, lockmode=3, sessionLock=0 '\000', dontWait=0 '\000', reportMemoryError=1 '\001', locallockp=0x0) at lock.c:1155
    #5 0x0000000000957e64 in LockAcquire (locktag=0x7ffde826aa60, lockmode=3, sessionLock=0 '\000', dontWait=0 '\000') at lock.c:700
    #6 0x000000000095728c in LockSharedObject (classid=1262, objid=1, objsubid=0, lockmode=3) at lmgr.c:939
    #7 0x0000000000b0152b in InitPostgres (in_dbname=0x2c769f0 "template1", dboid=0, username=0x2c59340 "gpadmin", out_dbname=0x0) at postinit.c:1019
    #8 0x000000000097b970 in PostgresMain (argc=5, argv=0x2c51990, dbname=0x2c769f0 "template1", username=0x2c59340 "gpadmin") at postgres.c:4820
    #9 0x00000000007dc432 in main (argc=5, argv=0x2c51990) at main.c:241

It tries to take the lock for template1 on pg_database with lockmode 3, but that conflicts with the lockmode 5 lock held by a recovered dtx transaction in startup RecoverPreparedTransactions(). Typically the dtx transaction comes from "create database" (by default the template database is template1).

Fix this by using the postgres database for single-mode postgres execution. The postgres database is commonly used by many background-worker backends like dtx recovery, gdd, and ftsprobe. With this change, we do not need to worry about "create database" with template postgres and the like, since such commands won't succeed, thus avoiding the lock conflict.

We might be able to fix this in InitPostgres() by bypassing the locking code in single mode, but the current fix seems safer. Note that InitPostgres() locks/unlocks some other catalog tables as well, but almost all of them use lock mode 1 (except mode 3 on pg_resqueuecapability, per debugging output). It does not seem usual in a real scenario to have a dtx transaction that locks a catalog with mode 8, which would conflict with mode 1. If we encounter that later we will need a better (possibly non-trivial) solution; for now let's fix the issue we actually hit.

Note that the code fixes in buildMirrorSegments.py and twophase.c in this patch are not related to the main fix. They do not seem to be strict bugs, but we'd better fix them to avoid potential issues in the future.

Reviewed-by: Ashwin Agrawal <aashwin@vmware.com>
Reviewed-by: Asim R P <pasim@vmware.com>
(cherry picked from commit 288908f3)
-
Committed by Paul Guo
Now that we no longer have to use a full gang for a distributed transaction, the number of in-progress distributed transactions on the master might exceed max_prepared_xacts if max_prepared_xacts is configured with a small value. max_prepared_xacts was used as the inProgressXidArray length for the distributed snapshot, so this could lead to distributed snapshot creation failing with "Too many distributed transactions for snapshot" when the system is under heavy 2PC load.

Fix this by using GetMaxSnapshotXidCount() for the length of the inProgressXidArray array, following the setting on the master.

This fixes github issue https://github.com/greenplum-db/gpdb/issues/10057

No new test for this, since the test isolation2:prepare_limit already covers it. (I encountered this issue while backporting a PR that introduces the test isolation2:prepare_limit, so this needs to be pushed first, followed by the backporting PR.)

Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
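For illustration only, a rough sketch of sizing the in-progress array from GetMaxSnapshotXidCount(); DistSnapshotSketch and AllocInProgressXidArraySketch are hypothetical names, not the actual GPDB distributed-snapshot code:

```
#include "postgres.h"
#include "storage/procarray.h"   /* GetMaxSnapshotXidCount */

/* Hypothetical container for a distributed snapshot's in-progress xids. */
typedef struct DistSnapshotSketch
{
	int				count;
	TransactionId  *inProgressXidArray;
} DistSnapshotSketch;

static void
AllocInProgressXidArraySketch(DistSnapshotSketch *snap)
{
	/* GetMaxSnapshotXidCount() is the upper bound of xids a snapshot can hold. */
	int		maxCount = GetMaxSnapshotXidCount();

	snap->count = 0;
	snap->inProgressXidArray =
		(TransactionId *) palloc0(maxCount * sizeof(TransactionId));
}
```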
-
Committed by Zhenghua Lyu
Greenplum uses the unique row id path as a candidate for implementing semijoin. It was introduced long ago. But GPDB 6 upgraded the kernel to Postgres 9.4 and introduced many new path types and plan nodes, and cdbpath_dedup_fixup failed to consider them. Some typical issues are: https://github.com/greenplum-db/gpdb/issues/9427

On the master branch, Heikki's commit 9628a332 refactored this part of the code, so master is fine. For 4X and 5X we do not have many new kinds of plan nodes and path nodes, so they are also fine. It is very hard to backport commit 9628a332 to 6X, since there is no concept of a Path's target list in 9.4, and removing this kind of path entirely would be overkill. So the policy is to fix these issues one by one as they are reported.
-
- 21 Jul 2020 (2 commits)
-
-
Committed by (Jerome)Junfeng Yang
Enlarge the sleep time for a query that will be canceled later, so that slow execution does not fail the test. The test's running time should normally not be affected, since the sleeping query is terminated immediately.

(cherry picked from commit bbccf20c)
-
Committed by Denis Smirnov
XLogReaderAllocate returns NULL if the xlogreader could not be allocated. These NULL checks were forgotten in several places in twophase.c and caused segmentation faults under heavy workloads.
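A minimal sketch of the defensive pattern against the 9.4-era xlogreader API; open_xlog_reader_checked is a hypothetical helper, not code from the commit:

```
#include "postgres.h"
#include "access/xlogreader.h"

/*
 * XLogReaderAllocate() returns NULL when the reader cannot be allocated,
 * so never dereference the result without checking it first.
 */
static XLogReaderState *
open_xlog_reader_checked(XLogPageReadCB page_read_cb, void *private_data)
{
	XLogReaderState *xlogreader = XLogReaderAllocate(page_read_cb, private_data);

	if (xlogreader == NULL)
		ereport(ERROR,
				(errcode(ERRCODE_OUT_OF_MEMORY),
				 errmsg("out of memory"),
				 errdetail("Failed while allocating an XLog reading processor.")));

	return xlogreader;
}
```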
-
- 20 Jul 2020 (1 commit)
-
-
Committed by Hao Wu
Oracle Linux is compiled from Red Hat Enterprise Linux (RHEL) source code, with Red Hat branding replaced by Oracle's [1]. The ICW jobs for oracle7 consume the GPDB binary compiled on centos7.

[1]: https://en.wikipedia.org/wiki/Oracle_Linux
-
- 17 Jul 2020 (5 commits)
-
-
Committed by Paul Guo
We've seen such a case on a stable release, but it is hard to debug from the message alone, so let's provide more details in the error message.
-
Committed by Jesse Zhang
Our implementations of memory pools have a hidden dependency on _the_ global memory pool manager: typically GPOS_NEW and GPOS_DELETE will reach for the memory pool manager singleton. This makes GPOS_DELETE on a memory pool manager undefined behavior because we call member functions on an object after its destructor finishes. On the Postgres 12 merge branch, this manifests itself in a crash during initdb. More concerning is that it only crashed when we set max connections and shared buffers to a specific number.
-
Committed by Mel Kiyama
* docs - update utility docs with IP/hostname information. Add information to the gpinitsystem, gpaddmirrors, and gpexpand ref. docs:
  -- Information about using hostnames vs. IP addresses
  -- Information about configuring hosts that have multiple NICs
  Also updated some examples in gpinitsystem.
* docs - review comment updates. Add more information from dev.
* docs - change examples to show valid configurations that support failover. Also fix typos and minor edits.
* docs - updates based on review comments.
-
Committed by Lisa Owen
- 16 Jul 2020 (6 commits)
-
-
Committed by Pengzhou Tang
The failing test case verifies that the command "copy lineitem to '/tmp/abort.csv'" can be cancelled after the COPY is dispatched to the QEs. To verify this, it checks that /tmp/abort.csv has fewer rows than lineitem.

The cancel logic in the code is: the QD dispatches the COPY command to the QEs; then, if the QD gets a cancel interrupt, it sends a cancel request to the QEs. However, the QD keeps receiving data from the QEs even after it has gotten the cancel interrupt; the QD relies on the QEs to receive the cancel request and explicitly stop copying data to the QD. Obviously, the QEs may already have copied all data out to the QD before they get the cancel request, so the test case cannot guarantee that /tmp/abort.csv has fewer rows than lineitem.

To fix this, we just verify that the COPY command can be aborted with the message 'ERROR: canceling statement due to user request'; the row-count verification is pointless here.

This is a cherry-pick of 9480d631 from master.
-
Committed by Ashuka Xue
Pull out the implementation of the binary heap into its own templated header file.
-
Committed by Ashuka Xue
Prior to this commit, merging two histograms was not commutative, meaning histogram1->Union(histogram2) could result in a row estimate of 1500 rows while histogram2->Union(histogram1) could result in a row estimate of 600 rows.

Now, MakeBucketMerged has been renamed to SplitAndMergeBuckets. This function, which calculates the statistics for the merged bucket, now consistently returns the same histogram buckets regardless of the order of input. This, in turn, makes MakeUnionHistogramNormalize and MakeUnionAllHistogramNormalize commutative.

Once we have successfully split the buckets and merged them as necessary, we may have generated up to 3X the number of buckets that were originally present. Thus we cap the number of buckets at either the max size of the two incoming bucket sets, or 100 buckets.

CombineBuckets then reduces the size of the histogram by combining consecutive buckets that carry similar information. It does this using a combination of two ratios: freq/ndv and freq/bucket_width. These two ratios were chosen based on the following examples, assuming that we calculate row counts for selections like this:

- For a predicate col = const: rows * freq / NDVs
- For a predicate col < const: rows * (sum of full or fractional frequencies)

Example 1 (rows = 100), freq/width, ndv/width and freq/ndv are all the same:
```
Bucket 1: [0, 4)   freq .2  NDVs 2  width 4    freq/width = .05  ndv/width = .5  freq/ndv = .1
Bucket 2: [4, 12)  freq .4  NDVs 4  width 8    freq/width = .05  ndv/width = .5  freq/ndv = .1
Combined: [0, 12)  freq .6  NDVs 6  width 12
```
This should give the same estimates for various predicates, with separate or combined buckets:
```
pred         separate buckets            combined bucket     result
-------      ---------------------       ---------------     -----------
col = 3  ==> 100 * .2 / 2            =   100 * .6 / 6        = 10 rows
col = 5  ==> 100 * .4 / 4            =   100 * .6 / 6        = 10 rows
col < 6  ==> 100 * (.2 + .25 * .4)   =   100 * .5 * .6       = 30 rows
```

Example 2 (rows = 100), freq and NDVs are the same, but width is different:
```
Bucket 1: [0, 4)   freq .4  NDVs 4  width 4    freq/width = .1   ndv/width = 1   freq/ndv = .1
Bucket 2: [4, 12)  freq .4  NDVs 4  width 8    freq/width = .05  ndv/width = .5  freq/ndv = .1
Combined: [0, 12)  freq .8  NDVs 8  width 12
```
This will give different estimates with the combined bucket, but only for non-equality predicates:
```
pred         separate buckets            combined bucket     results
-------      ---------------------       ---------------     --------------
col = 3  ==> 100 * .4 / 4            =   100 * .8 / 8        = 10 rows
col = 5  ==> 100 * .4 / 4            =   100 * .8 / 8        = 10 rows
col < 6  ==> 100 * (.4 + .25 * .4)  !=   100 * .5 * .8       50 vs. 40 rows
```

Example 3 (rows = 100), now NDVs / freq is different:
```
Bucket 1: [0, 4)   freq .2  NDVs 4  width 4    freq/width = .05  ndv/width = 1   freq/ndv = .05
Bucket 2: [4, 12)  freq .4  NDVs 4  width 8    freq/width = .05  ndv/width = .5  freq/ndv = .1
Combined: [0, 12)  freq .6  NDVs 8  width 12
```
This will give different estimates with the combined bucket, but only for equality predicates:
```
pred         separate buckets            combined bucket     results
-------      ---------------------       ---------------     ---------------
col = 3  ==> 100 * .2 / 4           !=   100 * .6 / 8        5 vs. 7.5 rows
col = 5  ==> 100 * .4 / 4           !=   100 * .6 / 8        10 vs. 7.5 rows
col < 6  ==> 100 * (.2 + .25 * .4)   =   100 * .5 * .6       = 30 rows
```

This commit also adds an attribute to the statsconfig for MaxStatsBuckets and changes the scaling method when creating singleton buckets.
-
Committed by Ashuka Xue
This commit refactors MakeStatsFilter to use MakeHistHashMapConjOrDisjFilter instead of individually calling MakeHistHashMapConj and MakeHistHashMapDisj. It also modifies MergeHistogramMapsForDisjPreds to avoid copying and creating unnecessary histogram buckets.
-
Committed by Mel Kiyama
* docs - add information for SSL with the standby master
  -- SSL files should not be in $MASTER_DATA_DIRECTORY
  Also:
  -- Add a note about not using NULL ciphers
  -- Correct the default directory for SSL files to $MASTER_DATA_DIRECTORY
* docs - review comment updates
-
Committed by Tyler Ramer
Commit 0b2c7325 into 6X Stable, first added in PR #10451, did not require the git submodule update on Windows, because this had previously been added with PyGreSQL; this caused a build failure for Windows clients because the PyYAML source was not found. Adding the git submodule pull should resolve this.

Authored-by: Tyler Ramer <tramer@vmware.com>
-
- 15 Jul 2020 (6 commits)
-
-
Committed by Peifeng Qiu
The local fork at gpMgmt/bin/ext/yaml was removed by commit 03960e45333c3a7d8fe677b9015ce2a7c33c502b. Unpack it from gpMgmt/bin/pythonSrc/ext, just like pygresql.
-
Committed by Tyler Ramer
Use yaml.safe_load rather than yaml.load, as yaml.load is deprecated.

Co-authored-by: Tyler Ramer <tramer@vmware.com>
Co-authored-by: Jamie McAtamney <jmcatamney@vmware.com>
-
Committed by Tyler Ramer
Yaml was imported but unused in several locations. gpMgmt/test/behave/mgmt_utils/steps/mgmt_utils.py had numerous unused or duplicated imports.

Co-authored-by: Tyler Ramer <tramer@vmware.com>
Co-authored-by: Jamie McAtamney <jmcatamney@vmware.com>
-
Committed by Tyler Ramer
It seems this yaml class is dead code. Removing it for this reason.

Co-authored-by: Tyler Ramer <tramer@vmware.com>
Co-authored-by: Jamie McAtamney <jmcatamney@vmware.com>
-
Committed by Tyler Ramer
The version of PyYAML vendored in gpMgmt/bin/ext is old, unmaintained, and does not support python3. Actually, it does not even contain a `__version__` attribute, so it is not possible to know the version. We need to unvendor yaml and address the CVEs that have been found in the library since the version vendored in source. Also update yaml.load to use yaml.safe_load instead.

Co-authored-by: Tyler Ramer <tramer@vmware.com>
Co-authored-by: Jamie McAtamney <jmcatamney@vmware.com>
-
Committed by Richard Guo
Currently GPDB tries to pull up EXPR sublinks to inner joins. For the query

    select * from foo where foo.a > (select avg(bar.a) from bar where foo.b = bar.b);

GPDB would transform it to:

    select * from foo inner join (select bar.b, avg(bar.a) as avg from bar group by bar.b) sub on foo.b = sub.b and foo.a > sub.avg;

To do that, GPDB needs to recurse through the quals in the sub-select, extract quals of the form 'outervar = innervar', and then build new SortGroupClause items and TargetEntry items for the sub-select based on these quals. But for quals of the form 'function(outervar, innervar1) = innervar2', GPDB handles them incorrectly, which causes wrong results as described in issue #9615.

This patch fixes the issue by treating these kinds of quals as not compatible-correlated, so the sub-select is not converted to a join.

Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Asim R P <apraveen@pivotal.io>
(cherry picked from commit dcdc6c0b)
-
- 13 Jul 2020 (2 commits)
-
-
Committed by David Yozie
-
Committed by (Jerome)Junfeng Yang
Remove the `set gp_fts_probe_retries to 1`, which may cause the FTS probe to fail. This was originally added to reduce the test time, but a lower retry value may cause the test to fail while waiting for FTS to update the segment configuration. Since reducing `gp_fts_replication_attempt_count` also saves test time, skip altering `gp_fts_probe_retries`.

Also found an assertion that may not hold when marking the mirror down happens before the walsender exits: the replication status is freed before the walsender exits and tries to record the disconnect time and failure count, which leads the segment to crash and start recovery.
-
- 10 Jul 2020 (1 commit)
-
-
Committed by xiong-gang
When ALTER TABLE adds a column to an AOCS table, the storage settings (compresstype, compresslevel and blocksize) of the new column can be specified in the ENCODING clause; the column inherits the settings from the table if ENCODING is not specified; and it uses the values from the GUC 'gp_default_storage_options' when the table doesn't have a compression configuration.
-
- 09 Jul 2020 (4 commits)
-
-
Committed by Hao Wu
Currently, replicated tables are not allowed to inherit from a parent table, but ALTER TABLE .. INHERIT could get around the restriction. On the other hand, a replicated table is allowed to be inherited by a hash-distributed table, which makes things much more complicated. When the parent table is declared as a replicated table and is inherited by a hash-distributed table, the data on the parent is replicated while the data on the child is hash-distributed. When running `select * from parent;`, the generated plan is:

```
gpadmin=# explain select * from parent;
                                  QUERY PLAN
-----------------------------------------------------------------------------
 Gather Motion 3:1  (slice1; segments: 3)  (cost=0.00..4.42 rows=14 width=6)
   ->  Append  (cost=0.00..4.14 rows=5 width=6)
         ->  Result  (cost=0.00..1.20 rows=4 width=7)
               One-Time Filter: (gp_execution_segment() = 1)
               ->  Seq Scan on parent  (cost=0.00..1.10 rows=4 width=7)
         ->  Seq Scan on child  (cost=0.00..3.04 rows=2 width=4)
 Optimizer: Postgres query optimizer
(7 rows)
```

It is not particularly useful for the parent table to be replicated, so we disallow replicated tables from being inherited.

Reported-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
(cherry picked from commit dc4b839e)
-
Committed by xiong-gang
When there is a big lag between primary and mirror replay, gp_replica_check fails if the checkpoint is not replayed within about 60 seconds. Extend the timeout to 600 seconds to reduce flakiness.
-
Committed by Tyler Ramer
[Lockfile](https://pypi.org/project/lockfile/) has not been maintained since around 2015. Further, the functionality it provided seems poor: a review of the code indicated that it used the presence of the PID file itself as the lock. In Unix, checking a file's existence and then creating it is not atomic, so the lock is prone to race conditions.

The lockfile package also did not clean up after itself: a process that was destroyed unexpectedly would not clear the locks it created, so some faulty logic was added to mainUtils.py which checked whether a process with the same PID as the lockfile's creator was running. This is obviously failure prone, as a new process might be assigned the same PID as the old lockfile's owner without actually being the same process. (Of note, the SIG_DFL argument to os.kill() is not a signal at all, but rather of type signal.handler. It appears that Python casts this handler to the int 0, which, according to man 2 kill, means no signal is sent, but existence and permission checks are still performed. So it is a happy accident that this code worked at all.)

This commit removes lockfile from the codebase entirely. It also adds a "PIDLockFile" class which provides an atomicity-guaranteed lock via the mkdir and rmdir commands on Unix. It is therefore not safely portable to Windows, but this should not be an issue, as only Unix-based utilities use the "simple_main()" function.

PIDLockFile provides API-compatible classes to replace most of the functionality of lockfile.PidLockFile, but removes the timeout logic, as it was not used in any meaningful sense: a hard-coded timeout of 1 second was used, whereas an immediate answer to whether the lock is held is sufficient.

PIDLockFile also includes appropriate __enter__, __exit__, and __del__ attributes, so that, should we extend this class in the future, with-statement syntax is functional, and __del__ calls release, so a process reaped unexpectedly should still clean its own locks as part of the garbage collection process.

Authored-by: Tyler Ramer <tramer@pivotal.io>

Do not remove PYLIB_SRC_EXT during make clean/distclean

Commit 8190ed40 removed lockfile from mainUtils, but did not remove a reference to its source directory in the make clean/distclean target. As a result, because LOCKFILE_DIR is no longer defined, the make clean/distclean target removes the PYLIB_SRC_EXT directory.
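The mkdir/rmdir approach described above relies on a POSIX-level property: mkdir either creates the directory or fails atomically with EEXIST. The real PIDLockFile is Python; purely to illustrate the idea, a minimal C sketch with hypothetical names:

```
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Returns 1 if the lock was acquired, 0 if already held, -1 on error. */
static int
acquire_pid_lock(const char *lockdir)
{
	/* mkdir() either creates the directory or fails atomically with EEXIST. */
	if (mkdir(lockdir, 0700) != 0)
		return (errno == EEXIST) ? 0 : -1;

	/* Record the owner pid inside the lock directory. */
	char	path[1024];
	snprintf(path, sizeof(path), "%s/PID", lockdir);
	FILE   *f = fopen(path, "w");
	if (f)
	{
		fprintf(f, "%d\n", (int) getpid());
		fclose(f);
	}
	return 1;
}

static void
release_pid_lock(const char *lockdir)
{
	char	path[1024];

	snprintf(path, sizeof(path), "%s/PID", lockdir);
	unlink(path);
	rmdir(lockdir);		/* ignored if the directory is absent or not empty */
}
```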
-
Committed by Chris Hajas
Previously, the PdrgpcrsAddEquivClass function would modify the input colref set. This does not appear intentional, as this same reference may be accessed in other places. This caused Orca to fall back to planner in some cases during translation with "Attribute number 0 not found in project list".

Co-authored-by: mubo.fy <mubo.fy@alibaba-inc.com>
Co-authored-by: Chris Hajas <chajas@pivotal.io>
Co-authored-by: Hans Zeller <hzeller@vmware.com>
(cherry picked from commit 84c027afdff13f1d91447a5a88809f6e85399a1b)
-
- 08 Jul 2020 (4 commits)
-
-
Committed by xiong-gang
An entry in the aocsseg table might be compacted and awaiting drop, so we should use 'state' to filter out such unused entries.
-
Committed by xiong-gang
The column 'vpinfo' in pg_aoseg.pg_aocsseg_xxx records the 'eof' of each attribute of the AOCS table. Add a new check, 'aoseg_table', to gpcheckcat: it checks that the number of attributes in 'vpinfo' matches the number of attributes in 'pg_attribute'. This check is performed in parallel and independently on each segment, and it checks the aoseg table and pg_attribute in different transactions, so it should be run 'offline' to avoid false alarms.
-
Committed by (Jerome)Junfeng Yang
When ExecReScanBitmapHeapScan gets executed, the bitmap state (tbmiterator and tbmres) gets freed in freeBitmapState. So tbmres is NULL, and we need to reinitialize the bitmap state to start the scan from the beginning and reset the AO/AOCS bitmap pages' flags (baos_gotpage, baos_lossy, baos_cindex and baos_ntuples).

This matters especially when ExecReScan happens on a bitmap append-only scan and not all of the matched tuples in the bitmap were consumed, for example with a Bitmap Heap Scan as the inner plan of a Nest Loop Semi Join. If tbmres is not reinitialized and not all tuples in the last bitmap were read, BitmapAppendOnlyNext will assume the current bitmap page still has data to return, but the bitmap state has already been freed.

From the code, for a Nest Loop Semi Join, when a match is found, a new outer slot is requested and then `ExecReScanBitmapHeapScan` gets called; `node->tbmres` and `node->tbmiterator` are set to NULL while `node->baos_gotpage` still stays true. When `BitmapAppendOnlyNext` executes, it skips creating a new `node->tbmres` and jumps to accessing `tbmres->recheck`.

Reviewed-by: Jinbao Chen <jinchen@pivotal.io>
Reviewed-by: Asim R P <pasim@vmware.com>
(cherry picked from commit cb5d18d1)
-
Committed by (Jerome)Junfeng Yang
Make some utilities search_path safe, so that they will not call any external functions that have the same name as our built-in functions. This fix is not guaranteed to fully address CVE-2018-1058.

Backport from 070d6221.

Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>
-