- 20 May 2020, 1 commit
-
-
Committed by Lisa Owen
* docs - enhance the pxf supported platforms section
* vendor -> bundle
-
- 19 May 2020, 5 commits
-
-
Committed by xiong-gang
It takes time for the walsender to start after gpinitstandby, so this commit adds a wait loop to reduce the flakiness. It also fixes the next test, commit_blocking_on_standby. Cherry-picked from 45328e5e
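The wait-loop technique can be sketched as a generic poll helper (a minimal sketch; the real test targets gpinitstandby, and the helper name, timeouts, and the pg_stat_replication probe mentioned in the comment are our assumptions, not the commit's code):

```python
import time

def wait_until(check, timeout=60.0, interval=1.0):
    """Poll check() until it returns True or the timeout expires.

    Returns True once the condition is met, False on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# In the standby test, check() would run a query such as
# "SELECT count(*) FROM pg_stat_replication" against the primary
# and report success once the walsender has started.
```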
-
Committed by Jinbao Chen
* Enable reorganization on partition leaf nodes. Modifying the distribution of a partition leaf table is not allowed since GPDB 6, because the distribution of the root table and its leaf tables must be the same. But reorganizing only a partition leaf table does not cause this problem, so we enable REORGANIZE when the distribution is not changed. * Ignore a test in gp_explain to fix the flaky case first. After merging commit d5254740, the test "explain SELECT * from information_schema.key_column_usage" in gp_explain sometimes fails. The likely reason is that the catalog tables involved in this query change during the test, so the query plan can change. Ignore this test for now to fix the pipeline.
-
Committed by Paul Guo
Fix a bug where a restartpoint could remove/recycle xlog segment files that still contain xlog for prepared but not yet committed/aborted transactions. This happens because the restartpoint does not consider the oldest prepared transaction when calculating the xlog segment number; note that checkpoint does the right thing. During recovery, prepared transaction information is maintained in the hash table crashRecoverPostCheckpointPreparedTransactions_map_ht; entries are added/removed while replaying the xlog records for checkpoint, prepare, and commit/abort prepared. In this patch we obtain the oldest prepared transaction LSN in the startup process when replaying the checkpoint xlog, and let the checkpointer use it (via a shared memory variable) for WAL removal/recycling when creating a restartpoint. Typically when the bug is encountered, the mirror FATALs during promotion with a stack like the one below.
"FATAL","58P01","requested WAL segment pg_xlog/000000010000000000000003 has already been removed",
1 0xae1453 postgres errstart (elog.c:557)
2 0x566225 postgres <symbol not found> (xlogutils.c:572)
3 0x5669b3 postgres read_local_xlog_page (xlogutils.c:870)
4 0x564777 postgres <symbol not found> (xlogreader.c:503)
5 0x56400e postgres XLogReadRecord (xlogreader.c:226)
6 0x54c0e0 postgres PrescanPreparedTransactions (twophase.c:1696)
7 0x559dcc postgres StartupXLOG (xlog.c:7595)
8 0x8e6b3b postgres StartupProcessMain (startup.c:248)
9 0x5b028a postgres AuxiliaryProcessMain (bootstrap.c:437)
10 0x8e58f6 postgres <symbol not found> (postmaster.c:5827)
11 0x8dfbe1 postgres PostmasterMain (postmaster.c:1500)
12 0x7d973f postgres <symbol not found> (main.c:264)
13 0x3397e1ed5d libc.so.6 __libc_start_main + 0xfd
14 0x490059 postgres <symbol not found> + 0x490059
Reviewed-by: Hao Wu <hawu@pivotal.io>
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Paul Guo
These data are stored in the extended checkpoint so that prepared transactions are not forgotten after a checkpoint. If we access them without a lock, we may read inconsistent data, which can lead to unknown behavior.
Reviewed-by: Hao Wu <hawu@pivotal.io>
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Kalen Krempely
For Greenplum 6X on CentOS 7 in a FIPS-enabled environment, our Python utilities would log the following error:
ERROR:root:code for hash md5 was not found.
Traceback (most recent call last):
  File "/usr/local/greenplum-db-devel/ext/python/lib/python2.7/hashlib.py", line 147, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/usr/local/greenplum-db-devel/ext/python/lib/python2.7/hashlib.py", line 109, in __get_openssl_constructor
    return __get_builtin_constructor(name)
  File "/usr/local/greenplum-db-devel/ext/python/lib/python2.7/hashlib.py", line 97, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type md5
This adds a regression test and was inspired by the following 5X commit: b07b2a23.
Co-authored-by: Mark Sliva <msliva@pivotal.io>
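The FIPS failure mode is easy to probe directly (a hedged sketch, not the regression test the commit adds; the helper name is ours):

```python
import hashlib

def md5_available():
    """Return True if the md5 constructor is usable.

    On a FIPS-enabled host, constructing an MD5 hash raises
    ValueError("unsupported hash type md5"), as in the traceback above.
    """
    try:
        hashlib.md5(b"probe")
        return True
    except ValueError:
        return False
```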
-
- 18 May 2020, 8 commits
-
-
Committed by Gang Xiong
- Add functions bt_index_parent_check_on_all, bt_index_check_on_all, bt_index_parent_check_on_segments, and bt_index_check_on_segments
- Add test cases for AO and AOCS tables
-
Committed by Peter Geoghegan
contrib/amcheck failed to consider the possibility that unlogged relations will not have any main relation fork files when running in hot standby mode. This led to low-level "can't happen" errors that complain about the absence of a relfilenode file. To fix, simply skip verification of unlogged index relations during recovery. In passing, add a direct check for the presence of a main fork just before verification proper begins, so that we cleanly verify the presence of the main relation fork file. Author: Andrey Borodin, Peter Geoghegan Reported-By: Andrey Borodin Diagnosed-By: Andrey Borodin Discussion: https://postgr.es/m/DA9B33AC-53CB-4643-96D4-7A0BBC037FA1@yandex-team.ru Backpatch: 10-, where amcheck was introduced.
-
Committed by Tom Lane
contrib/amcheck didn't get the memo either.
-
Committed by Gang Xiong
Cherry-pick 382ceffd from upstream, only the changes to amcheck.
-
Committed by Andres Freund
The previous coding of the test was vulnerable against autovacuum triggering work on one of the tables in check_btree.sql. For the purpose of the test it's entirely sufficient to check for locks taken by the current process, so add an appropriate restriction. While touching the test, expand it to also check for locks on the underlying relations, rather than just the indexes. Reported-By: Tom Lane Discussion: https://postgr.es/m/30354.1489434301@sss.pgh.pa.us
-
Committed by Andres Freund
No exclusive lock is taken anymore...
-
Committed by Andres Freund
This is the beginning of a collection of SQL-callable functions to verify the integrity of data files. For now it only contains code to verify B-Tree indexes. This adds two SQL-callable functions, validating B-Tree consistency to a varying degree. Check the extensive docs for details. The goal is to later extend the coverage of the module to further access methods, possibly including the heap. Once checks for additional access methods exist, we'll likely add some "dispatch" functions that cover multiple access methods. Author: Peter Geoghegan, editorialized by Andres Freund Reviewed-By: Andres Freund, Tomas Vondra, Thomas Munro, Anastasia Lubennikova, Robert Haas, Amit Langote Discussion: CAM3SWZQzLMhMwmBqjzK+pRKXrNUZ4w90wYMUWfkeV8mZ3Debvw@mail.gmail.com
-
- 16 May 2020, 4 commits
-
-
Committed by Mel Kiyama
Most functions have been updated to use regclass (OID or table name).
-
Committed by Mel Kiyama
* docs - clarify/fix CREATE TABLE syntax for partitioned tables. Also add more partitioned table examples.
* docs - minor updates to partitioned table syntax.
* docs - minor fix to syntax diagram.
-
Committed by Mel Kiyama
-
Committed by Mel Kiyama
* docs - update bloat best-practices information from dev.
--Remove copying or redistributing table data as alternatives to VACUUM FULL
--Mention that VACUUM (without FULL) maintenance is for both heap and AO tables.
Also reorganized the information and clarified that the ACCESS EXCLUSIVE lock is the reason users cannot access a table during VACUUM FULL.
* docs - updates based on review comments.
* docs - removed warning about stopping VACUUM FULL.
-
- 15 May 2020, 2 commits
-
-
Committed by Bradford D. Boyle
The current Python build artifact uses a semver pre-release segment to encode build metadata (e.g., 2.7.12-build.42). This pattern does not work well with the idea of post-release/revision numbers, where the revision number reflects additional deltas made on top of the upstream version. As a concrete example, the version string `1.2.3+gp.4.build.5` would indicate the fifth build of the binary artifact, with four modifications/patches made to upstream version `1.2.3`. The Greenplum Release Engineering team has updated our build dependency pipeline for Python to use this post-release/revision number convention, and this PR updates the 6X_STABLE pipeline to consume these newer artifacts. [#172829377]
Authored-by: Bradford D. Boyle <bboyle@pivotal.io>
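The convention can be illustrated by parsing the local-version segment (a sketch; the regular expression and function name are ours, not part of the pipeline code):

```python
import re

def parse_gp_version(version):
    """Split a version like '1.2.3+gp.4.build.5' into
    (upstream version, patch count, build number)."""
    m = re.match(
        r"^(?P<upstream>\d+(?:\.\d+)*)\+gp\.(?P<patches>\d+)\.build\.(?P<build>\d+)$",
        version)
    if m is None:
        raise ValueError("not a gp post-release version: %r" % version)
    return m.group("upstream"), int(m.group("patches")), int(m.group("build"))

# parse_gp_version("1.2.3+gp.4.build.5") -> ("1.2.3", 4, 5):
# the fifth build of the artifact, with four patches on upstream 1.2.3.
```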
-
Committed by David Yozie
-
- 14 May 2020, 1 commit
-
-
Committed by Ashuka Xue
In commit `Improve statistics calculation for exprs like "var = ANY (ARRAY[...])"`, we improved the performance of cardinality estimation for ArrayCmp. However, it caused ArrayCmp expressions with text-like types to fall back to NDV-based cardinality estimation in spite of present and valid histograms. This commit re-enables using histograms for text-like types provided it is safe to do so. Removed because non-singleton buckets for text are not valid:
- src/backend/gporca/data/dxl/minidump/CTE-12.mdp
- src/backend/gporca/data/dxl/statistics/Join-Statistics-Text-Input.xml
- src/backend/gporca/data/dxl/statistics/Join-Statistics-Text-Output.xml
Co-authored-by: Ashuka Xue <axue@pivotal.io>
Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
- 13 May 2020, 4 commits
-
-
Committed by Tingfang Bao
Greenplum 7 (master) may never target SLES 12 as a supported platform. We can backport this to 6X_STABLE as well because SLES 12 is not yet a supported platform there; it will be at some point in the future.
Authored-by: Tingfang Bao <baotingfang@gmail.com>
-
Committed by Ning Yu
We use "pkill postgres" to clean up leaked segments in the behave tests; if the postgres processes have already exited, the pkill command fails with exit code 1 ("No processes matched or none of them could be signalled"). Fixed by ignoring the return code of pkill. (cherry picked from commit a92e0a33)
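In effect the fix treats pkill's "no processes matched" exit code like success. A generic sketch of that pattern (hypothetical wrapper; the behave tests themselves invoke pkill from shell):

```python
import subprocess

def call_ignoring_no_match(argv):
    """Run a cleanup command, treating exit code 1 the same as 0.

    For pkill, exit 1 means 'no processes matched', which is fine
    when the processes we wanted to kill have already exited.
    """
    rc = subprocess.call(argv)
    return rc in (0, 1)

# The cleanup step would be roughly:
#   call_ignoring_no_match(["pkill", "postgres"])
```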
-
Committed by Hans Zeller
The scripts we use in Concourse pipelines download Apache xerces-c-3.1.2 and then apply a patch that is part of our source code tree. Abhijit has pointed out that this is no longer necessary. This commit removes the patch and uses the vanilla xerces-c-3.1.2 source code instead. Eventually, we want to stop including xerces in our releases and rely on the natively installed xerces. See also https://github.com/greenplum-db/gpdb/pull/10068. (cherry picked from commit 2448be9b)
-
- 12 May 2020, 4 commits
-
-
Committed by Hao Wu
workfile_shared->num_active may not match the actual length of the list workfile_shared->activeList in production. The best fix would be to find the root cause of the inconsistency, but we have not been able to find the offending code. Both variables are in shared memory, and it is common practice to reset shared memory when part of it is corrupted.
Reviewed-by: Hao Wang <haowang@pivotal.io>
(cherry picked from commit 4c7854ee)
-
Committed by Peifeng Qiu
gpload in the latest Windows client package requires the VS redistributable package. Output a more meaningful message if pg.py fails to load.
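The "more meaningful message" pattern might look like this (an illustrative sketch; the pg module is PyGreSQL, but the wrapper function and message text are our assumptions, not the gpload code):

```python
import sys

def import_pg_driver():
    """Import the pg module, printing an actionable hint on failure."""
    try:
        import pg  # PyGreSQL, bundled with the client package
        return pg
    except ImportError as exc:
        sys.stderr.write(
            "gpload: failed to load pg.py (%s).\n"
            "On Windows this usually means the Visual Studio "
            "redistributable package is not installed.\n" % exc)
        raise
```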
-
Committed by Jesse Zhang
Looks like we were missing an "extern" in two places. While I was at it, I also tidied up guc_gp.c by moving the definition of Debug_resource_group into cdbvars.c, and added a declaration of gp_encoding_check_locale_compatibility to cdbvars.h. This was uncovered by building with GCC 10 and Clang 11, where -fno-common is the new default [1][2] (vis-à-vis -fcommon). I could also reproduce this by turning on "-fno-common" in older releases of GCC and Clang. We were relying on a myth (or legacy compiler behavior, rather) that C tentative definitions act _just like_ declarations -- in plain English: missing an "extern" on a global variable declaration-wannabe wouldn't harm you, as long as you don't put an initial value after it. This resolves #10072.
[1] "3.17 Options for Code Generation Conventions: -fcommon" https://gcc.gnu.org/onlinedocs/gcc-10.1.0/gcc/Code-Gen-Options.html#index-tentative-definitions
[2] "Porting to GCC 10" https://gcc.gnu.org/gcc-10/porting_to.html
[3] "[Driver] Default to -fno-common for all targets" https://reviews.llvm.org/D75056
(cherry picked from commit ee7eb0e8)
-
Committed by Hans Zeller
DPE stats are computed when we have a dynamic partition selector that's applied on another child of a join. The current code continues to use DPE stats even for the common ancestor join and nodes above it, but those nodes aren't affected by the partition selector. Regular Memo groups pick the best expression among several to compute stats, which makes row count estimates more reliable. We don't have that luxury with DPE stats, therefore they are often less reliable. By minimizing the places where we use DPE stats, we should overall get more reliable row count estimates with DPE stats enabled. The fix also ignores DPE stats with row counts greater than the group stats. Partition selectors eliminate certain partitions, therefore it is impossible for them to increase the row count.
-
- 09 May 2020, 11 commits
-
-
Committed by Heikki Linnakangas
They use GPOS_RESET_EX, which needs ITask. Fix missing includes in unit tests. (cherry picked from commit 88f9744a)
-
Committed by Heikki Linnakangas
ops.h brings in the headers for *all* the operators in include/gpopt/operators/, which is far more than is needed in most cases. (cherry picked from commit 143dd82d)
-
Committed by Heikki Linnakangas
(cherry picked from commit 347fba32)
-
Committed by Heikki Linnakangas
Avoid including dxlops.h, which pulls in *all* the CParseHandler header files. Makes the postgres binary (with assertions and debugging information) about 1.5 MB smaller. (cherry picked from commit 529ce1a7)
-
Committed by Heikki Linnakangas
Let's keep base.h as slim as possible. (cherry picked from commit b88c8195)
-
Committed by Heikki Linnakangas
(cherry picked from commit af6431ad)
-
Committed by Heikki Linnakangas
CMemoryPool.h is included literally everywhere, because it comes with gpos/base.h. Every little bit helps. (cherry picked from commit 99a0066f)
-
Committed by Heikki Linnakangas
Try to not pull in unnecessary dependencies in header files. (cherry picked from commit 632ad764)
-
Committed by Heikki Linnakangas
With this, the xerces headers are not pulled into the xforms/ files. Makes each .o file about 100 kB shorter. Shrinks the postgres binary from about 128 MB to 121 MB, with assertions and debugging enabled. (cherry picked from commit 35cfc37d)
-
Committed by Heikki Linnakangas
(cherry picked from commit c9756796)
-
Committed by Heikki Linnakangas
(cherry picked from commit 4eebb0e1)
-