- 13 May, 2020 (2 commits)
-
-
Committed by Hans Zeller
The scripts we use in Concourse pipelines download Apache xerces-c-3.1.2 and then apply a patch that is part of our source code tree. Abhijit has pointed out that this is no longer necessary. This commit removes the patch and uses the vanilla xerces-c-3.1.2 source code instead. Eventually, we want to stop including xerces into our releases and rely on the natively installed xerces. See also https://github.com/greenplum-db/gpdb/pull/10068. (cherry picked from commit 2448be9b)
-
- 12 May, 2020 (4 commits)
-
-
Committed by Hao Wu
workfile_shared->num_active may not be the actual size of the list workfile_shared->activeList in production. The best way would be to find the root cause and fix the inconsistency; however, we failed to find where the problem code is. The two variables are both in shared memory, and it is common to reset shared memory when part of it is corrupted. Reviewed-by: Hao Wang <haowang@pivotal.io> (cherry picked from commit 4c7854ee)
-
Committed by Peifeng Qiu
gpload in the latest Windows client package requires the VS redistributable package. Output a more meaningful message if pg.py fails to load.
-
Committed by Jesse Zhang
Looks like we were missing an "extern" in two places. While I was at it, I also tidied up guc_gp.c by moving the definition of Debug_resource_group into cdbvars.c, and added a declaration of gp_encoding_check_locale_compatibility to cdbvars.h. This was uncovered by building with GCC 10 and Clang 11, where -fno-common is the new default [1][2] (vis-à-vis -fcommon). I could also reproduce this by turning on "-fno-common" in older releases of GCC and Clang. We were relying on a myth (or legacy compiler behavior, rather) that C tentative definitions act _just like_ declarations -- in plain English: missing an "extern" on a global variable declaration-wannabe wouldn't harm you, as long as you don't put an initial value after it. This resolves #10072. [1] "3.17 Options for Code Generation Conventions: -fcommon" https://gcc.gnu.org/onlinedocs/gcc-10.1.0/gcc/Code-Gen-Options.html#index-tentative-definitions [2] "Porting to GCC 10" https://gcc.gnu.org/gcc-10/porting_to.html [3] "[Driver] Default to -fno-common for all targets" https://reviews.llvm.org/D75056 (cherry picked from commit ee7eb0e8)
-
Committed by Hans Zeller
DPE stats are computed when we have a dynamic partition selector that's applied on another child of a join. The current code continues to use DPE stats even for the common ancestor join and the nodes above it, but those nodes aren't affected by the partition selector. Regular Memo groups pick the best expression among several to compute stats, which makes row count estimates more reliable. We don't have that luxury with DPE stats, so they are often less reliable. By minimizing the places where we use DPE stats, we should get more reliable row count estimates overall with DPE stats enabled. The fix also ignores DPE stats with row counts greater than the group stats: partition selectors eliminate certain partitions, so it is impossible for them to increase the row count.
-
- 09 May, 2020 (11 commits)
-
-
Committed by Heikki Linnakangas
They use GPOS_RESET_EX, which needs ITask. Fix missing includes in unit tests. (cherry picked from commit 88f9744a)
-
Committed by Heikki Linnakangas
ops.h brings in the headers for *all* the operators in include/gpopt/operators/, which is far more than is needed in most cases. (cherry picked from commit 143dd82d)
-
Committed by Heikki Linnakangas
(cherry picked from commit 347fba32)
-
Committed by Heikki Linnakangas
Avoid including dxlops.h, which pulls in *all* the CParseHandler header files. Makes the postgres binary (with assertions and debugging information) about 1.5 MB smaller. (cherry picked from commit 529ce1a7)
-
Committed by Heikki Linnakangas
Let's keep base.h as slim as possible. (cherry picked from commit b88c8195)
-
Committed by Heikki Linnakangas
(cherry picked from commit af6431ad)
-
Committed by Heikki Linnakangas
CMemoryPool.h is included literally everywhere, because it comes with gpos/base.h. Every little bit helps. (cherry picked from commit 99a0066f)
-
Committed by Heikki Linnakangas
Try to not pull in unnecessary dependencies in header files. (cherry picked from commit 632ad764)
-
Committed by Heikki Linnakangas
With this, the xerces headers are no longer pulled into the xforms/ files. Makes each .o file about 100 kB smaller, and shrinks the postgres binary from about 128 MB to 121 MB, with assertions and debugging enabled. (cherry picked from commit 35cfc37d)
-
Committed by Heikki Linnakangas
(cherry picked from commit c9756796)
-
Committed by Heikki Linnakangas
(cherry picked from commit 4eebb0e1)
-
- 08 May, 2020 (3 commits)
-
-
Committed by Zhenghua Lyu
Target partitions need new ResultRelInfos and override the previous estate->es_result_relation_info in NextCopyFromExecute(). The new ResultRelInfo may leave its resultSlot as NULL. If sreh is on, parsing errors are caught and we loop back to parse another row; however, the estate->es_result_relation_info has already been changed. This can cause a crash. Reproduce:

```sql
CREATE TABLE partdisttest(id INT, t TIMESTAMP, d VARCHAR(4))
DISTRIBUTED BY (id)
PARTITION BY RANGE (t)
(
  PARTITION p2020 START ('2020-01-01'::TIMESTAMP) END ('2021-01-01'::TIMESTAMP),
  DEFAULT PARTITION extra
);

COPY partdisttest FROM STDIN LOG ERRORS SEGMENT REJECT LIMIT 2;
1 '2020-04-15' abcde
1 '2020-04-15' abc
\.
```

Authored-by: ggbq <taos.alias@outlook.com>
-
Committed by Pengzhou Tang
This is a backport of commit 2a7b2bf6 from master.
-
Committed by Soumyadeep Chakraborty
The dbms_pipe_session_{A,B} tests are flaky in CI: it can happen that session B calls receiveFrom() before session A has even called createImplicitPipe(). This leads to flaky test failures such as:

```diff
--- /tmp/build/e18b2f02/gpdb_src/gpcontrib/orafce/expected/dbms_pipe_session_B.out 2020-04-20 17:02:27.270832458 +0000
+++ /tmp/build/e18b2f02/gpdb_src/gpcontrib/orafce/results/dbms_pipe_session_B.out 2020-04-20 17:02:27.278832994 +0000
@@ -7,14 +7,6 @@
 -- Receives messages sent via an implicit pipe
 SELECT receiveFrom('named_pipe');
-NOTICE: RECEIVE 11: Message From Session A
-NOTICE: RECEIVE 12: 01-01-2013
-NOTICE: RECEIVE 13: Tue Jan 01 09:00:00 2013 PST
-NOTICE: RECEIVE 23: \201
-NOTICE: RECEIVE 24: (2,rob)
-NOTICE: RECEIVE 9: 12345
-NOTICE: RECEIVE 9: 12345.6789
-NOTICE: RECEIVE 9: 99999999999
  receivefrom
 -------------
@@ -152,12 +144,13 @@
 ORDER BY 1;
       name      | items | limit | private | owner
 ----------------+-------+-------+---------+-----------------
+ named_pipe     | 9     | 10    | f       |
  pipe_name_3    | 1     |       | f       |
  private_pipe_1 | 0     | 10    | t       | pipe_test_owner
  private_pipe_2 | 9     | 10    | t       | pipe_test_owner
  public_pipe_3  | 0     | 10    | f       |
  public_pipe_4  | 0     | 10    | f       |
-(5 rows)
+(6 rows)
```

This commit introduces an explicit sleep at the start of session B to give session A a better chance to run. Co-authored-by: Jesse Zhang <jzhang@pivotal.io> (cherry picked from commit 985c5e2b)
-
- 07 May, 2020 (3 commits)
-
-
Committed by Bhuvnesh Chaudhary
Previously, gpinitsystem did not allow the user to specify both a hostname and an address for each segment in the input file used with -I; it accepted only one value per segment and used it for both hostname and address. This commit changes the behavior so that the user can specify both. If the user specifies only the address (such as with an old config file), the old behavior is preserved and both hostname and address are set to that value. It also adds a few tests around input file parsing so SET_VAR is more resilient to further refactors. The specific changes are the following:
1) Change SET_VAR to parse either the old format (address only) or the new format (hostname and address) of the segment array representation.
2) Move SET_VAR from gpinitsystem to gp_bash_functions.sh and remove the redundant copy in gpcreateseg.sh.
3) Remove a hardcoded "~0" in QD_PRIMARY_ARRAY in gpinitsystem, representing a replication port value, that was left over from 5X.
4) Improve the check for the number of fields in the segment array representation.
Also, remove the use of the ignore-warning flag and use [[ ]] for the IGNORE_WARNING check.
-
Committed by Bhuvnesh Chaudhary
Previously, gpinitsystem was incorrectly filling the hostname field of each segment in gp_segment_configuration with the segment's address. This commit changes it to correctly resolve hostnames and update the catalog accordingly. This reverts commit 12ef7352, Revert "gpinitsystem: update catalog with correct hostname". Commit message from 12ef7352: The commit requires some additional tweaks to the input file logic for backwards compatibility purposes, so we're reverting this until the full fix is ready.
- 06 May, 2020 (1 commit)
-
-
Committed by Francisco Guerrero
This commit enables parallel writes for the Foreign Data Wrapper framework. This feature was previously missing: parallel scans are supported, but parallel writes were not. FDW parallel writes are analogous to writing to writable external tables that run on all segments. One caveat is that in the external table framework, writable tables support a distribution policy:

```sql
CREATE WRITABLE EXTERNAL TABLE foo (id int)
LOCATION ('....')
FORMAT 'CSV'
DISTRIBUTED BY (id);
```

For foreign tables, the distribution policy cannot be defined in the table definition, so we assume random distribution for all foreign tables. Parallel writes are enabled only when the foreign table's exec_location is set to FTEXECLOCATION_ALL_SEGMENTS. For foreign tables that run on the master or any single segment, the current policy behavior remains.
-
- 05 May, 2020 (1 commit)
-
-
Committed by Mel Kiyama
* docs - clarify text for resource group CPU core usage in catalog tables.
* docs - minor edits
-
- 02 May, 2020 (2 commits)
-
-
Committed by Mel Kiyama
* docs - update for gpstart when the standby master is not available
* docs - minor edits.
* Update gpstart.xml
Co-authored-by: David Yozie <dyozie@pivotal.io>
-
Committed by David Yozie
* Update docs around the ssl_ciphers default, default behavior, and TLS 1.2 recommendations
* Update cipher string per Stanley's feedback
-
- 01 May, 2020 (4 commits)
-
-
Committed by Robert Haas
Michael Paquier. Original Postgres commit: https://github.com/postgres/postgres/commit/3153b1a52f8f2d1efe67306257aec15aaaf9e94c
-
Committed by Robert Haas
Previously, some functions returned various fixed strings and others failed with a cache lookup error. Per discussion, standardize on returning NULL. Although user-exposed "cache lookup failed" error messages might normally qualify for bug-fix treatment, there is no back-patch; the risk of breaking user code accustomed to the current behavior seems too high. Michael Paquier. Original Postgres commit: https://github.com/postgres/postgres/commit/976b24fb477464907737d28cdf18e202fa3b1a5b
-
Committed by David Yozie
-
Committed by Denis Smirnov
According to plannode.h, "plan_node_id" should be unique across the entire final plan tree, but the ORCA DXL-to-PlanStatement translator returned uninitialized zero values for BitmapOr and BitmapAnd nodes. This behaviour differed from the Postgres planner and from all other node translations in this class. This commit fixes it. (cherry picked from commit 53a0b781)
-
- 30 Apr, 2020 (2 commits)
-
-
Committed by David Yozie
-
Committed by David Yozie
-
- 29 Apr, 2020 (4 commits)
-
-
Committed by Ning Yu
gpinitsystem will fail if its logs contain errors or warnings left over from previous tests.
-
Committed by xiong-gang
ALTER DATABASE SET ... FROM CURRENT dispatches an incorrect statement to the segments. Reported in https://github.com/greenplum-db/gpdb/issues/9823
-
Committed by Shreedhar Hardikar
Implements an algorithm in MakeHistArrayCmpAnyFilter() using CStatsPredArrayCmp:
1. Construct a histogram with the same bucket boundaries as present in base_histogram. This is better than using a singleton bucket per point, because in that case the frequency of each bucket is so small that it is often less than CStatistics::Epsilon and may be treated as 0, leading to cardinality misestimation. Using the same buckets as base_histogram also aids in joining histograms later.
2. Compute the frequency for each bucket based on the number of points (NDV) present within each bucket's boundaries. NB: the points must be de-duplicated beforehand to prevent double counting.
3. Join this "dummy_histogram" with base_histogram to determine the buckets from base_histogram that should be selected (using MakeJoinHistogram).
4. Compute and adjust the resultant scale factor for the filter.
Co-authored-by: Ashuka Xue <axue@pivotal.io> Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
Committed by Ashuka Xue
Functions renamed:
- CHistogram::Buckets -> GetNumBuckets
- CHistogram::ParseDXLToBucketsArray -> GetBuckets
Implemented DbgPrint for:
- CBucket
- CHistogram
Co-authored-by: Ashuka Xue <axue@pivotal.io> Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
- 28 Apr, 2020 (3 commits)
-
-
Committed by Paul Guo
The reason is that a newly created reader gang would fail on the QE due to a missing writer gang process in the locking code, and a retry would fail again for the same reason, since the cached writer gang is still used because the QD does not know or check the real libpq network status. See below for the repro case. Fix this by checking the error message and resetting all gangs when we see it, similar to the code logic that checks the startup/recovery message in the gang create function. We could have other fixes, e.g. checking the writer gang network status, but those fixes seemed ugly after trying them.

```sql
create table t1(f1 int, f2 text);
<kill -9 one idle QE>
insert into t1 values(2),(1),(5);
ERROR: failed to acquire resources on one or more segments
DETAIL: FATAL: reader could not find writer proc entry, lock [0,1260] AccessShareLock 0 (lock.c:874) (seg0 192.168.235.128:7002)
insert into t1 values(2),(1),(5);
ERROR: failed to acquire resources on one or more segments
DETAIL: FATAL: reader could not find writer proc entry, lock [0,1260] AccessShareLock 0 (lock.c:874) (seg0 192.168.235.128:7002)
<-- Above query fails again.
```

Cherry-picked from 24f16417 and a0a5b4d5
-
Committed by Paul Guo
Commit d453a4aa implemented that for the crash recovery case (not marking the node down and then not promoting the mirror). It seems that we should do that for the usual "starting up" case (i.e. CAC_STARTUP) as well, besides the existing "in recovery mode" case (i.e. CAC_RECOVERY). We've seen that fts promotes a "starting up" primary during isolation2 testing due to 'pg_ctl restart'. In this patch we check recovery progress for both CAC_STARTUP and CAC_RECOVERY during the fts probe and can thus avoid this. Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io> Cherry-picked from d71b3afd. On master the commit message was eliminated by mistake; added back on gpdb6.
-
Committed by Pengzhou Tang
In the TCP interconnect, the sender used to force an EOS message to the receiver in two cases: 1. cancelUnfinished is true in mppExecutorFinishup. 2. an error occurs. For case 1, the comment says: to finish a cursor, the QD used to send a cancel to the QEs; the QEs then set the cancelUnfinished flag and did a normal executor finish-up. We now use the QueryFinishPending mechanism to stop a cursor, so the case 1 logic has been invalid for a long time. For case 2, the purpose is: when an error occurs, we force an EOS to the receiver so the receiver doesn't report an interconnect error, and the QD will then check the dispatch results and report the errors from the QEs; from the view of the interconnect, we have selected to the end of the query with no error in the interconnect. This logic has two problems: 1. it doesn't work for initplans: an initplan will not check the dispatch results and throw the errors, so when an error occurs in the QEs for the initplan, the QD cannot notice it. 2. it doesn't work for cursors, for example:

```sql
DECLARE c1 cursor for select i from t1 where i / 0 = 1;
FETCH all from c1;
FETCH all from c1;
```

None of the FETCH commands report errors, which is not expected. This commit removes the forceEos mechanism. For case 2, the receiver will report an interconnect error without forceEos; this is acceptable because when multiple errors are reported from the QEs, the QD is inclined to report the non-interconnect error.
-