- 06 April 2019, 1 commit

Committed by Adam Berlin
User-defined functions cache their plans, so if we modify the plan during execution we risk having invalid data during the next execution of the cached plan. ExecuteTruncate modifies the plan's relations when declaring partitions that need truncation. Instead, we copy the list of relations that need truncation and add the partition relations to the copy.
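A minimal SQL sketch of the scenario, assuming a hypothetical partitioned table and wrapper function (names are not from the commit):

    CREATE TABLE sales (id int, region text)
        DISTRIBUTED BY (id)
        PARTITION BY LIST (region)
        (PARTITION usa VALUES ('usa'), PARTITION asia VALUES ('asia'));

    CREATE FUNCTION truncate_sales() RETURNS void AS $$
    BEGIN
        TRUNCATE sales;  -- expands to the partitions inside ExecuteTruncate
    END;
    $$ LANGUAGE plpgsql;

    -- The second call reuses the plan cached by the first call; before the fix,
    -- the first execution had already appended the partitions to that cached
    -- statement's relation list.
    SELECT truncate_sales();
    SELECT truncate_sales();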

- 05 April 2019, 1 commit

Committed by Daniel Gustafsson
The metadata_track suite was originally part of cdbfast, dating back around 10-11 years. It was later moved into bugbuster around six years ago, already then with doubts as to what it actually did. After the open sourcing we scrapped bugbuster, moving anything worthwhile (or just not yet analyzed for usefulness) into the normal regress schedule, which is where metadata_track remained till now. This is all according to memory and the old proprietary issue tracker.

Looking at metadata_track, it's entirely duplicative, only issuing lots of DDL already tested elsewhere without verifying the results, most likely since it was originally testing the metadata tracking of the operations. Since the latter part is no longer happening, move parts of the test into the existing pg_stat_last_operation test and remove the rest, as the remaining value of this quite slow test (spending ~10-12 minutes serially in the pipeline) is highly debatable. The existing pg_stat_last_operation test was, and I quote, "underwhelming", so most of it is replaced herein. There is still work to be done in order to boost metadata tracking test coverage, but this is at least a start.

Reviewed-by: Jimmy Yih
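For reference, metadata tracking is exposed through the pg_stat_last_operation catalog; a small sketch of the kind of check such a test performs (table name hypothetical):

    CREATE TABLE mdtrack_t (a int) DISTRIBUTED BY (a);
    ALTER TABLE mdtrack_t ADD COLUMN b int;

    -- Each tracked operation on the table should show up as a row here.
    SELECT staactionname, stasubtype
    FROM pg_stat_last_operation
    WHERE classid = 'pg_class'::regclass
      AND objid = 'mdtrack_t'::regclass;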

- 04 April 2019, 2 commits

Committed by Daniel Gustafsson
Joining nested RECURSIVE clauses is planned as a join between two WorkTableScan nodes, which we currently cannot do. Detect and disallow this for now, until we have the required infrastructure to handle this class of queries. The below query is an example of this:

    WITH RECURSIVE r1 AS (
        SELECT 1 AS a
        UNION ALL
        (
            WITH RECURSIVE r2 AS (
                SELECT 2 AS b
                UNION ALL
                SELECT b FROM r1, r2
            )
            SELECT b FROM r2
        )
    )
    SELECT * FROM r1 LIMIT 1;

In upstream PostgreSQL, the resulting plan exhibits the same behavior as in GPDB, but there is no restriction on WorkTableScan on the inner side of joins in PostgreSQL:

                            QUERY PLAN
    -------------------------------------------------------------
     Limit
       CTE r1
         ->  Recursive Union
               ->  Result
               ->  CTE Scan on r2 r2_1
                     CTE r2
                       ->  Recursive Union
                             ->  Result
                             ->  Nested Loop
                                   ->  WorkTable Scan on r1 r1_1
                                   ->  WorkTable Scan on r2
       ->  CTE Scan on r1
    (12 rows)

Backport to 6X_STABLE as it's a live bug there.

Committed by Paul Guo
It fails an assertion at this point:

    mdcreate (reln=0x2ac5548, forkNum=INIT_FORKNUM, isRedo=0 '\000') at md.c:288
    ExceptionalCondition (conditionName=0xe7cc00 "!(reln->md_fd[forkNum] == ((void *)0))",
        errorType=0xe7cbbe "FailedAssertion", fileName=0xe7cbb9 "md.c",
        lineNumber=288) at assert.c:46

This fixes https://github.com/greenplum-db/gpdb/issues/7340

Reported-by: zhbp366@github
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
Reviewed-by: Taylor Vesely <tvesely@pivotal.io>

- 03 April 2019, 2 commits

Committed by Zhenghua Lyu
`create rule <...> do instead select * from t for update` will dispatch a query with a LockingClause node. Add a deserialization method for it so that such queries are handled correctly.
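A sketch of the kind of rule that exercises this dispatch path (table and rule names are illustrative, not taken from the commit):

    CREATE TABLE t (a int) DISTRIBUTED BY (a);
    CREATE TABLE t_log (a int) DISTRIBUTED BY (a);

    -- The rewritten query carries a locking clause (FOR UPDATE) that the
    -- segments must be able to deserialize when the statement is dispatched.
    CREATE RULE lock_t AS ON INSERT TO t_log DO INSTEAD SELECT * FROM t FOR UPDATE;
    INSERT INTO t_log VALUES (1);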

Committed by Richard Guo
We've removed SRF-in-targetlist support for most node types in GPDB. Instead, we insert Result nodes to evaluate the set-returning functions, which can benefit performance. However, a Result node cannot evaluate a WindowFunc, which a WindowAgg's target list usually has, so we need to keep supporting SRF-in-targetlist for WindowAgg nodes. This is the same logic as for AggState.

Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Taylor Vesely <tvesely@pivotal.io>
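A sketch of a query with this shape, where a set-returning function and a window function share the target list (table and column names are illustrative):

    CREATE TABLE w (a int, b int) DISTRIBUTED BY (a);
    INSERT INTO w SELECT i, i % 3 FROM generate_series(1, 10) i;

    -- generate_series() is the SRF in the target list; row_number() is the
    -- WindowFunc that keeps the target list on the WindowAgg node.
    SELECT row_number() OVER (PARTITION BY b ORDER BY a),
           generate_series(1, 2)
    FROM w;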

- 30 March 2019, 1 commit

Committed by Adam Berlin
Postgres optimizes the storage of the ItemPointer in the GIN posting list. It only stores 11 bits for the offset number, because heap tables only have enough tuples per block to fit in 11 bits. However, Greenplum append-only tables store 16 bits worth of offset numbers. Initially we thought we'd need to modify decode_varbyte(), but it turns out that it is OK: it already handles 48 bits.

Co-authored-by: Alexandra Wang <lewang@pivotal.com>
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
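A sketch of the table/index combination this concerns, assuming GIN indexes on append-only tables (names are illustrative):

    CREATE TABLE ao_docs (id int, words tsvector)
        WITH (appendonly = true)
        DISTRIBUTED BY (id);

    INSERT INTO ao_docs
    SELECT i, to_tsvector('english', 'sample document ' || i)
    FROM generate_series(1, 100000) i;

    -- Append-only tuple identifiers use the full 16 bits of the offset number,
    -- unlike heap pages where 11 bits are enough.
    CREATE INDEX ao_docs_gin ON ao_docs USING gin (words);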

- 27 March 2019, 1 commit

Committed by Alexandra Wang
INSERT needs to lock the child table on the QD, otherwise the following deadlock scenario may happen if INSERT runs concurrently with the VACUUM drop phase on an AppendOnly partitioned table:

1. VACUUM on QD: acquired AccessExclusiveLock on the child table
2. INSERT on QE: acquired RowExclusiveLock on the child table
3. VACUUM on QE: waiting for AccessExclusiveLock on the child table
4. INSERT on QD: waiting for AccessShareLock at ExecutorEnd() on the child table; this happens after the QE sends back which child it inserted into

Note that in step 2, INSERT only locks the child table on the QE; it did not lock the child table on the QD in the previous code. This patch adds the lock on the QD as well to prevent the above deadlock.

Added `insert_while_vacuum_drop` and updated `partition_locking` to reflect the changes.
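A rough two-session sketch of the race, assuming an append-only partitioned table (names and timing are illustrative):

    CREATE TABLE ao_part (a int, b int)
        WITH (appendonly = true)
        DISTRIBUTED BY (a)
        PARTITION BY RANGE (b) (START (0) END (10) EVERY (5));

    -- Session 1 (through the QD):
    VACUUM ao_part;

    -- Session 2, concurrently; before this fix the QD held no lock on the
    -- chosen child while the QE took RowExclusiveLock on it:
    INSERT INTO ao_part VALUES (1, 1);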

- 26 March 2019, 1 commit

Committed by Adam Berlin
A warning message is displayed to a user who attempts to run `pg_create_physical_replication_slot()` on any individual segment, including the master segment, so that they are aware that this command is not distributed to the entire cluster.
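A sketch of the call the warning applies to; `pg_create_physical_replication_slot()` is the standard Postgres function and only acts on the instance it is run against (slot name illustrative):

    -- Run while connected to a single segment or the master; the slot is
    -- created only on that instance, not across the cluster.
    SELECT pg_create_physical_replication_slot('my_slot');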

- 15 March 2019, 1 commit

Committed by Ning Yu
We used to do the cost calculation with this property, treating it as equal to the segment count of the cluster; however this is wrong when the table is a partial one (this happens during gpexpand). We should always get numsegments from the motion.

The gangsize.sql test is updated as in some of its queries the order of the slices is different than before due to the change of the costs.

- 14 March 2019, 9 commits

Committed by Daniel Gustafsson
As we merge with upstream and by that keep refining the Postgres planner, legacy planner is no longer a suitable name. This changes all variations of the spelling (legacy planner, legacy optimizer, legacy query optimizer etc) to say "Postgres" rather than "legacy".

Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
Reviewed-by: David Yozie <dyozie@pivotal.io>
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>

Committed by Heikki Linnakangas
* Avoid unnecessary start/end_ignore blocks.
* The test tables were supposed to be created in a dedicated schema, but because of the RESET ALL commands they were created in the 'public' schema instead. Fix by replacing "RESET ALL" with "RESET test_print_direct_dispatch_info".
* Don't try to DROP tables that should not exist yet.
* Don't bother DROPping test tables as we go. They will be dropped at the end, when we drop the whole schema.
* Use fewer partitions in a test on a partitioned table.
* Use fewer rows in the 'tblexecutions' test table.

This reduces the execution time by about 20 s.

Committed by Heikki Linnakangas

Committed by Heikki Linnakangas
We had essentially the same test in 'partition' already.

Committed by Heikki Linnakangas
* Avoid using start/end_ignore blocks where not necessary.
* Remove unnecessary DROP commands.
* Use begin/commit when building test tables, to reduce 2PC overhead.
* Reuse test tables, rather than drop and recreate them.

Shaves a few seconds from the total execution time.

Committed by Heikki Linnakangas
Commit 90a957eb changed the test to create fewer partitions, but didn't update the NOTICEs in expected output accordingly. NOTICEs are ignored when comparing the expected output with actual output, but it's still nice to keep the expected output in sync with reality.

Committed by Shaoqi Bai
* Update relation's stats in pg_class during vacuum full.

  Hash index depends on estimation of numbers of tuples and pages of relations, an incorrect value could be a reason of significant growth of the index. Vacuum full recreates the heap and reindexes all indexes before renewing the stats. The patch fixes that, so indexes will see correct values.

  Backpatch to v10 only because earlier versions don't have a usable hash index, and growth of the hash index is the single user-visible symptom.

  Author: Amit Kapila
  Reviewed-by: Ashutosh Sharma, me
  Discussion: https://www.postgresql.org/message-id/flat/20171115232922.5tomkxnw3iq6jsg7@inml.weebeastie.net

* Collect QE's relpages and reltuples to QD

  Add logic in swap_relation_files() to collect the QEs' relpages and reltuples on the QD when doing vacuum full.

  Co-authored-by: Jimmy Yih <jyih@pivotal.io>
  Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>

* Add test

  Add a test to verify that relpages and reltuples become proper numbers after vacuum full.

  Co-authored-by: Taylor Vesely <tvesely@pivotal.io>

* Update PR pipeline failures

Reviewed-by: Adam Berlin <aberlin@pivotal.io>
Reviewed-by: Alexandra Wang <lewang@pivotal.io>
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Taylor Vesely <tvesely@pivotal.io>
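A small sketch of what the added test verifies (table name illustrative):

    CREATE TABLE vfull_t (a int) DISTRIBUTED BY (a);
    INSERT INTO vfull_t SELECT generate_series(1, 10000);

    VACUUM FULL vfull_t;

    -- After the fix, relpages and reltuples on the QD reflect the values
    -- collected from the QEs instead of stale estimates.
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'vfull_t';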

Committed by Paul Guo
Make sure array type is not missed after partition exchange.
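A sketch of the operation this covers, using GPDB's classic partition syntax (names illustrative):

    CREATE TABLE pt (a int, b int)
        DISTRIBUTED BY (a)
        PARTITION BY RANGE (b) (START (0) END (10) EVERY (5));

    CREATE TABLE candidate (a int, b int) DISTRIBUTED BY (a);

    -- After the exchange, the table that moved into the partition hierarchy
    -- must still have its corresponding array type in pg_type.
    ALTER TABLE pt EXCHANGE PARTITION FOR (1) WITH TABLE candidate;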

Committed by David Kimura
This commit ensures that we have basic coverage for creating and inserting data into appendonly row- or column-oriented tables with quicklz, zstd, zlib, or rle. It uses the get_ao_compression_ratio() function to check insert success by comparing the uncompressed/compressed sizes. It includes API testing of invalid scenarios, such as an invalid compress level or an unsupported compresstype for a given table format (e.g. rle is not supported with appendonly row-oriented tables).

Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
Co-authored-by: David Kimura <dkimura@pivotal.io>
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
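A sketch of the kind of check such a test performs (table name illustrative):

    CREATE TABLE ao_zlib (a int, b text)
        WITH (appendonly = true, compresstype = zlib, compresslevel = 1)
        DISTRIBUTED BY (a);

    INSERT INTO ao_zlib
    SELECT i, repeat('x', 100) FROM generate_series(1, 10000) i;

    -- A ratio above 1 indicates the inserted data was actually compressed.
    SELECT get_ao_compression_ratio('ao_zlib');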

- 13 March 2019, 5 commits

Committed by Jinbao Chen
We have an assert failure for COPY to a partition table if the root and child relation have different attnos. The root cause is that 'reconstructTupleValues' has a wrong value for the newNumAttrs length.
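A sketch of the attno mismatch scenario (table and file names are hypothetical); dropping a column on the root before adding a partition is one way the root and a child end up with different attribute numbers:

    CREATE TABLE cp (a int, junk int, b int)
        DISTRIBUTED BY (a)
        PARTITION BY RANGE (b) (START (0) END (10) EVERY (5));

    ALTER TABLE cp DROP COLUMN junk;
    ALTER TABLE cp ADD PARTITION extra START (10) END (20);

    -- COPY routes each row to its partition; reconstructing the tuple for a
    -- child with different attnos is where the assertion used to fire.
    COPY cp FROM '/tmp/cp_data.csv' CSV;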

Committed by Ning Yu
We need to do a cluster expansion, which will check whether there are partial tables; we need to drop the partial tables to keep the cluster expansion running correctly.

Committed by Jialun Du
If the target is not a table, it must error out. So it's better to do the permission check first, or the logic may access some fields which are nullable for non-table objects and cause a crash.

Committed by Ning Yu
Add tests to ensure that a table can be expanded correctly even if it contains misplaced tuples.

Committed by Tang Pengzhou
I noticed that we don't have a sanity check for the correctness of a table's "numsegments". The numsegments might be larger than the size of the cluster, e.g. when the table is expanded after a global transaction has started and the table is then accessed in that transaction, or when gp_distribution_policy is corrupt in some way. The dispatcher and interconnect cannot handle such a case, so add a sanity check to error it out.

This sanity check is skipped in UTILITY mode: getgpsegmentCount() returns -1 in UTILITY mode, which would always make the numsegments sanity check fail, so skip it for UTILITY mode.

- 12 March 2019, 8 commits

Committed by David Krieger
This commit is part of Add Partitioned Indexes #7047. For the partitioned index PR #7047, we add tests for the use of internal auto dependencies in both the existing index-backed constraints and in standalone partitioned indexes.

Co-authored-by: Mark Sliva <msliva@pivotal.io>

Committed by Taylor Vesely
This commit is part of Add Partitioned Indexes #7047. AttachPartitionEnsureIndexes iterates through all the indexes on an incoming partition and adds INTERNAL_AUTO dependencies to the ones that match the definition of the parent partition's partitioned indexes. These are all the tests that broke when we started exchanging/testing for regular indexes on partitioned tables.

Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>

Committed by Taylor Vesely
This commit is part of Add Partitioned Indexes #7047. After adding INTERNAL_AUTO dependencies between partitioned indexes, many tests that assumed we need to manually delete indexes added to leaf partitions need updating.

Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>

Committed by David Krieger
This commit is part of Add Partitioned Indexes #7047. Tests were added to verify:

- Index-backed constraint names have a matching index name
- Constraints on partition tables, including ADD PARTITION and EXCHANGE PARTITION
- Constraints and indexes can be upgraded; this includes testing directly in pg_regress, or creating tables to be used by pg_upgrade

Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>

Committed by Taylor Vesely
This commit is part of Add Partitioned Indexes #7047. Constraint names must now match their index. Fix ICW tests where this assumption no longer holds.

Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>

Committed by Taylor Vesely
This commit adds partitioned indexes from upstream Postgres. It is mostly cherry-picked from Postgres 11, plus bug fixes from Postgres 12.

Differences from upstream:
- Postgres has two additional relkinds - RELKIND_PARTITIONED_TABLE and RELKIND_PARTITIONED_INDEX - which have no on-disk storage. Greenplum does not have these additional relkinds, thus partitioned indexes have physical storage.
- CREATE INDEX ON ONLY <table> DDL has not yet been implemented
- ALTER INDEX ATTACH PARTITION DDL has not yet been implemented

Constraint changes:
- Constraints and their backing index have the same names. Thus, partitions of a table no longer share the same constraint name, and are instead related to their parent via INTERNAL_AUTO dependencies.

Index changes:
- Child partition indexes can no longer be directly dropped, and must be dropped from their root. This includes mid-level and leaf indexes.
- Adding indexes to mid-level partitions cascades to their children.

These changes are mostly cherry-picked from:

    commit 8b08f7d4
    Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
    Date:   Fri Jan 19 11:49:22 2018 -0300

        Local partitioned indexes

        When CREATE INDEX is run on a partitioned table, create catalog entries
        for an index on the partitioned table (which is just a placeholder since
        the table proper has no data of its own), and recurse to create actual
        indexes on the existing partitions; create them in future partitions
        also.

        As a convenience gadget, if the new index definition matches some
        existing index in partitions, these are picked up and used instead of
        creating new ones. Whichever way these indexes come about, they become
        attached to the index on the parent table and are dropped alongside it,
        and cannot be dropped on isolation unless they are detached first.

        To support pg_dump'ing these indexes, add commands CREATE INDEX ON ONLY
        <table> (which creates the index on the parent partitioned table, without
        recursing) and ALTER INDEX ATTACH PARTITION (which is used after the
        indexes have been created individually on each partition, to attach them
        to the parent index). These reconstruct prior database state exactly.

        Reviewed-by: (in alphabetical order) Peter Eisentraut, Robert Haas, Amit
        Langote, Jesper Pedersen, Simon Riggs, David Rowley
        Discussion: https://postgr.es/m/20171113170646.gzweigyrgg6pwsg4@alvherre.pgsql

Changes were also cherry-picked from the following Postgres commits:
    eb7ed3f3 - Allow UNIQUE indexes on partitioned tables
    ae366aa5 - Detach constraints when partitions are detached
    19184fcc - Simplify coding to detach constraints when detaching partition
    c7d43c4d - Correct attach/detach logic for FKs in partitions
    17f206fb - Set pg_class.relhassubclass for partitioned indexes

Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
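A short sketch of the new behavior described above, using GPDB's classic partition syntax (object names are illustrative):

    CREATE TABLE pidx (a int, b int)
        DISTRIBUTED BY (a)
        PARTITION BY RANGE (b) (START (0) END (10) EVERY (5));

    -- Creating the index on the root recurses to every partition; the child
    -- indexes are attached to the root index via INTERNAL_AUTO dependencies
    -- and can no longer be dropped individually.
    CREATE INDEX pidx_b ON pidx (b);

    -- Dropping from the root removes the whole index hierarchy.
    DROP INDEX pidx_b;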

Committed by Hans Zeller
* Bump ORCA version to 3.29.0
* ORCA: Updating subquery plans in ICG expected files

This change is needed for ORCA PR https://github.com/greenplum-db/gporca/pull/449. Some subquery plans changed in minor ways in the ICG test.

Co-authored-by: Chris Hajas <chajas@pivotal.io>

Committed by Jimmy Yih
Currently, a randomly distributed table cannot be created with a primary key or unique index. We should put this restriction on ALTER TABLE SET DISTRIBUTED RANDOMLY as well. This was caught by the gpcheckcat distribution_policy check.

Co-authored-by: Alexandra Wang <lewang@pivotal.io>
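A sketch of the statement the added restriction rejects (table name illustrative):

    CREATE TABLE dist_t (a int PRIMARY KEY) DISTRIBUTED BY (a);

    -- A unique constraint cannot be enforced across segments on a randomly
    -- distributed table, so this is now rejected, matching the existing
    -- CREATE TABLE restriction.
    ALTER TABLE dist_t SET DISTRIBUTED RANDOMLY;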

- 11 March 2019, 3 commits

Committed by Daniel Gustafsson
When altering a partitioned table and adding an incorrectly specified partition, an assertion was hit rather than gracefully erroring out. Make sure that the requested partition matches the underlying table definition before continuing down into the altering code. This also adds a testcase.

Reported-by: Kalen Krempely in #6967
Reviewed-by: Paul Guo <pguo@pivotal.io>
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>

Committed by Daniel Gustafsson
The GUC which enables recursive CTEs is called gp_recursive_cte_prototype in the currently released version, but in order to reflect the current state of the code it's now renamed to gp_recursive_cte. By default the GUC is still off, but that might change before we ship the next release. The previous GUC name is still supported, but marked as deprecated, in order to make upgrades easier.

Reviewed-by: Ivan Novick <inovick@pivotal.io>
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
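A sketch of the renamed setting in use; the old spelling keeps working but is deprecated:

    -- New name (the old gp_recursive_cte_prototype is still accepted).
    SET gp_recursive_cte = on;

    WITH RECURSIVE n(i) AS (
        SELECT 1
        UNION ALL
        SELECT i + 1 FROM n WHERE i < 5
    )
    SELECT * FROM n;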

Committed by Ning Yu
This method was introduced to improve the data redistribution performance during gpexpand phase 2, however per benchmark results the effect does not reach our expectation. For example, when expanding a table from 7 segments to 8 segments the reshuffle method is only 30% faster than the traditional CTAS method; when expanding from 4 to 8 segments reshuffle is even 10% slower than CTAS. When there are indexes on the table the reshuffle performance can be worse, and an extra VACUUM is needed to actually free the disk space.

According to our experiments the bottleneck of the reshuffle method is the tuple deletion operation; it is much slower than the insertion operation used by CTAS.

The reshuffle method does have some benefits: it requires less extra disk space, and it also requires less network bandwidth (similar to the CTAS method with the new JCH reduce method, but less than CTAS + MOD). It can also be faster in some cases, however as we can not automatically determine when it is faster it is not easy to get benefit from it in practice. On the other side the reshuffle method is less tested, and it is possible to have bugs in corner cases, so it is not production ready yet.

In such a case we decided to retire it entirely for now; we might add it back in the future if we can get rid of the slow deletion or find reliable ways to automatically choose between the reshuffle and CTAS methods.

Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/8xknWag-SkI/5OsIhZWdDgAJ
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>

- 08 March 2019, 1 commit

Committed by Ning Yu
Build mode does not use any information from the cluster and does not affect its running status; in fact it does not require a cluster to exist at all.

Co-authored-by: Hubert Zhang <hzhang@pivotal.io>
Co-authored-by: Ning Yu <nyu@pivotal.io>

- 06 March 2019, 2 commits

Committed by Daniel Gustafsson
A large set of tests were wrapped in an ignore block in the with suite due to them not working properly in the past. Since most of these have been addressed, it's time to break up the block and ensure testing coverage. This removes as much of the ignore as possible, and updates the underlying returned data to match. This will create merge conflicts with upstream, but since we won't merge more code before cutting the next release it's better to have sane tests for the lifecycle of the next release, and we can always revert this on master as we start merging again.

The trigger tests are left under ignore, even though they seem to work quite well, since atmsort cannot handle that output yet.

Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>

Committed by Daniel Gustafsson
An inner ModifyTable node must run to completion even if the outer node can be satisfied with a squelched inner. Ensure to run the node to completion when asked to squelch, to not risk losing modifications. This adds a testcase from the upstream with suite to the GPDB with_clause suite. The original test is under an ignore block, but even with lifting that, the output is different due to state being set up by prior tests which happen to fail in GPDB.

Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
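A sketch of the class of query this protects, where a data-modifying CTE sits below a node that may not need to read all of its rows (names illustrative, not the actual regression test):

    CREATE TABLE wq (a int) DISTRIBUTED BY (a);

    -- The outer LIMIT could be satisfied without draining the inner
    -- ModifyTable; squelching it early must not lose the inserted rows.
    WITH ins AS (
        INSERT INTO wq SELECT generate_series(1, 100) RETURNING a
    )
    SELECT * FROM ins LIMIT 1;

    SELECT count(*) FROM wq;  -- should still be 100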

- 05 March 2019, 1 commit

Committed by Pengzhou Tang
Commit 4eb65a53 brought GPDB the ability to distribute tables on a subset of segments. That commit took care of replicated tables for SELECT/UPDATE/INSERT on a subset, however COPY FROM was not handled.

For COPY FROM a replicated table, only one replica should be picked to provide data, and the QE whose segid matches gp_session_id % segment_size is chosen; obviously, for a table on a subset of segments, an invalid QE might be chosen. To fix it, the real numsegments of the replicated table should be used instead of the current segment size. What's more, the dispatcher can now allocate gangs on a subset of segments, so the QD can directly allocate the picked gang and the QE doesn't need to care about whether it should provide data anymore.

For COPY TO a replicated table, we should also allocate the correct QEs matching the numsegments of the replicated table.

Co-authored-by: Gang Xiong <gxiong@pivotal.io>
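A sketch of the affected operations on a replicated table (names illustrative; the interesting case is when the table lives on a subset of segments during gpexpand):

    CREATE TABLE rep_t (a int, b text) DISTRIBUTED REPLICATED;

    -- Both directions need gangs allocated on the segments that actually hold
    -- the replicated table, not picked from the full cluster size.
    COPY rep_t FROM '/tmp/rep_t.csv' CSV;
    COPY rep_t TO '/tmp/rep_t_out.csv' CSV;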

- 04 March 2019, 1 commit

Committed by Zhang Shujie
We generate a fake ItemPointer for an AO table's tuples, but it is a little different from a heap table's ItemPointer: the offset of the pointer can be 0 when it is a fake ItemPointer, but we treated 0 as invalid in the previous code. In order to run correctly, we set the 16th bit of the offset to 1 as a flag; then it can pass some checks, but the value is not the real value, so when using the fake ItemPointer we have to remember to process the flag. The bitmap index build code expects to see the tuple IDs in order, and it gets confused when it sees offset number 32768 (0x8000) before offset number 1; this commit converts 32768 (0x8000) to 0.