- 14 Mar 2019: 10 commits
-
-
Committed by Daniel Gustafsson
As we merge with upstream and thereby keep refining the Postgres planner, "legacy planner" is no longer a suitable name. This changes all variations of the spelling (legacy planner, legacy optimizer, legacy query optimizer, etc.) to say "Postgres" rather than "legacy".
Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
Reviewed-by: David Yozie <dyozie@pivotal.io>
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Heikki Linnakangas
* Avoid unnecessary start/end_ignore blocks.
* The test tables were supposed to be created in a dedicated schema, but because of the RESET ALL commands they were created in the 'public' schema instead. Fix by replacing "RESET ALL" with "RESET test_print_direct_dispatch_info".
* Don't try to DROP tables that should not exist yet.
* Don't bother DROPping test tables as we go. They will be dropped at the end, when we drop the whole schema.
* Use fewer partitions in a test on a partitioned table.
* Use fewer rows in the 'tblexecutions' test table.
This reduces the execution time by about 20 s.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
We had essentially the same test in 'partition' already.
-
Committed by Heikki Linnakangas
* Avoid using start/end_ignore blocks where not necessary.
* Remove unnecessary DROP commands.
* Use begin/commit when building test tables, to reduce 2PC overhead.
* Reuse test tables, rather than dropping and recreating them.
This shaves a few seconds from the total execution time.
-
Committed by Heikki Linnakangas
Commit 90a957eb changed the test to create fewer partitions, but didn't update the NOTICEs in expected output accordingly. NOTICEs are ignored when comparing the expected output with actual output, but it's still nice to keep the expected output in sync with reality.
-
Committed by Shaoqi Bai
* Update relation's stats in pg_class during VACUUM FULL. Hash index depends on the estimated numbers of tuples and pages of a relation; an incorrect value can be a reason for significant growth of the index. VACUUM FULL recreates the heap and reindexes all indexes before renewing the stats. The patch fixes that, so indexes will see correct values. Backpatch to v10 only, because earlier versions don't have a usable hash index and growth of the hash index is the single user-visible symptom.
Author: Amit Kapila
Reviewed-by: Ashutosh Sharma, me
Discussion: https://www.postgresql.org/message-id/flat/20171115232922.5tomkxnw3iq6jsg7@inml.weebeastie.net
* Collect QEs' relpages and reltuples to the QD. Add logic in swap_relation_files() to collect the QEs' relpages and reltuples on the QD when doing VACUUM FULL.
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
* Add test. Add a test to verify that relpages and reltuples become proper numbers after VACUUM FULL.
Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
* Update PR pipeline failures
Reviewed-by: Adam Berlin <aberlin@pivotal.io>
Reviewed-by: Alexandra Wang <lewang@pivotal.io>
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Taylor Vesely <tvesely@pivotal.io>
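The roll-up of per-QE stats to the QD can be sketched roughly as follows. This is a minimal illustration with an invented data shape, not GPDB's actual swap_relation_files() code; the idea is only that relpages and reltuples from each segment are combined so the QD's pg_class reflects the whole table:

```python
def aggregate_relation_stats(qe_stats):
    """Combine per-QE relation stats on the QD after VACUUM FULL.

    relpages and reltuples are summed across all segments, since each
    segment holds only its own slice of the table's data.
    """
    return {
        "relpages": sum(s["relpages"] for s in qe_stats),
        "reltuples": sum(s["reltuples"] for s in qe_stats),
    }

# Example: two segments report their local stats after VACUUM FULL.
total = aggregate_relation_stats([
    {"relpages": 10, "reltuples": 1000},
    {"relpages": 12, "reltuples": 1100},
])
```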
-
Committed by Jinbao Chen
This commit re-enables commit 1ec65820.
-
Committed by Paul Guo
Make sure array type is not missed after partition exchange.
-
Committed by David Kimura
This commit ensures that we have basic coverage for creating and inserting data into append-only row- or column-oriented tables with quicklz, zstd, zlib, or rle. It uses the get_ao_compression_ratio() function to check insert success by comparing the uncompressed/compressed sizes. It includes API testing of invalid scenarios, such as an invalid compress level or an unsupported compresstype for a table format (e.g. rle is not supported with append-only row-oriented tables).
Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
Co-authored-by: David Kimura <dkimura@pivotal.io>
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
-
- 13 Mar 2019: 9 commits
-
-
Committed by Jinbao Chen
We have an assertion failure when copying into a partitioned table if the root and child relation have different attnos. The root cause is that 'reconstructTupleValues' uses a wrong value for the newNumAttrs length.
-
Committed by Zhang Shujie
A writable external table has an entry in gp_distribution_policy, so it has a numsegments field. Previous code skipped all external tables, so their numsegments fields were not updated. This commit fixes this by:
1. Adding a column to the status_detail table to record whether the table is a writable external table, and invoking the correct SQL to expand such tables.
2. Supporting `Alter external table <tab> expand table` for writable external tables.
Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
-
Committed by Jinbao Chen
For the SQL "alter table parttab exchange partition for (1) with table y", we checked that the table has the same columns as the partitioned table. But for the SQL "alter table parttab alter partition for (1) exchange partition for (1) with table x", we forgot the check. Add the check back.
-
Committed by Ning Yu
We need to do a cluster expansion, which checks whether there are partial tables; we need to drop the partial tables so that the cluster expansion runs correctly.
-
Committed by Zhenghua Lyu
Previously, the test cases `partial_table` and `subselect_gp2` were in the same test group, so they might run concurrently. `partial_table` contains the statement `update gp_distribution_policy`; `subselect_gp2` contains the statement `VACUUM FULL pg_authid`. These two statements may lead to a local deadlock on the QD when running concurrently if GDD is disabled.
If GDD is disabled, `update gp_distribution_policy` acquires locks as follows:
1. at the parsing stage, lock `gp_distribution_policy` in Exclusive mode
2. later, when it needs to check authentication, lock `pg_authid` in AccessShare mode
`VACUUM FULL pg_authid` acquires locks as follows:
1. lock `pg_authid` in AccessExclusive mode
2. later, when rebuilding the heap, it might delete some dependencies; this calls GpPolicyRemove, which locks `gp_distribution_policy` in RowExclusive mode
So there is a potential local deadlock.
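The two opposite lock-acquisition orders described in this message form a classic ABBA deadlock. A minimal Python model of the situation (table names come from the message; the function and data shapes are invented for illustration only):

```python
# Lock order of `update gp_distribution_policy` (per the commit message):
update_policy_locks = [
    ("gp_distribution_policy", "ExclusiveLock"),    # taken at parse stage
    ("pg_authid", "AccessShareLock"),               # taken at auth check
]

# Lock order of `VACUUM FULL pg_authid`:
vacuum_full_locks = [
    ("pg_authid", "AccessExclusiveLock"),           # taken first
    ("gp_distribution_policy", "RowExclusiveLock"), # taken during heap rebuild
]

def abba_possible(a, b):
    """True when two sessions acquire the same pair of relations in
    reverse order, i.e. a classic ABBA deadlock can occur if each
    session grabs its first lock before the other takes its second."""
    ra = [rel for rel, _ in a]
    rb = [rel for rel, _ in b]
    shared_in_a = [r for r in ra if r in rb]
    shared_in_b = [r for r in rb if r in ra]
    return len(shared_in_a) >= 2 and shared_in_a == list(reversed(shared_in_b))

print(abba_possible(update_policy_locks, vacuum_full_locks))  # True
```

Running the two test cases in different test groups (as this commit does) removes the concurrency, so the reversed orders can never interleave.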
-
Committed by Jialun Du
If the target is not a table, it must error out. So it's better to do the permission check first, or the logic may access fields which are nullable for non-table objects and cause a crash.
-
Committed by Ning Yu
Add tests to ensure that a table can be expanded correctly even if it contains misplaced tuples.
-
Committed by Tang Pengzhou
I noticed that we don't have a sanity check for the correctness of a table's "numsegments": it might be larger than the size of the cluster, e.g. when the table is expanded after a global transaction has started and is then accessed in that transaction, or when gp_distribution_policy is corrupt in some way. The dispatcher and interconnect cannot handle such a case, so add a sanity check to error it out. The sanity check is skipped in utility mode: getgpsegmentCount() returns -1 in utility mode, which would always make the numsegments sanity check fail.
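The shape of the check described above can be sketched as follows. This is a hedged illustration with invented names, not the actual GPDB C code; it only captures the rule that numsegments must fit within the cluster, and that the check is skipped in utility mode where the segment count reads as -1:

```python
def check_numsegments(numsegments, cluster_size, utility_mode=False):
    """Error out when a table's numsegments is out of range for the cluster.

    In utility mode the segment count is reported as -1, so the check
    would always fail; skip it there, as the commit message describes.
    """
    if utility_mode:
        return  # cluster_size would be -1 here; check is meaningless
    if not (1 <= numsegments <= cluster_size):
        raise ValueError(
            f"invalid numsegments {numsegments} for a cluster of "
            f"{cluster_size} segments")

# A table expanded to 9 segments in an 8-segment cluster fails the check.
```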
-
Committed by Daniel Gustafsson
Spell out VACUUM ANALYZE rather than using a pointless, and silly, shortening. No functional change.
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
- 12 Mar 2019: 8 commits
-
-
Committed by David Krieger
This commit is part of Add Partitioned Indexes #7047. For the partitioned index PR #7047, we add tests for the use of internal auto dependencies in both the existing index-backed constraints and in standalone partitioned indexes.
Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
Committed by Taylor Vesely
This commit is part of Add Partitioned Indexes #7047. AttachPartitionEnsureIndexes iterates through all the indexes on an incoming partition and adds INTERNAL_AUTO dependencies to the ones that match the definition of the parent's partitioned indexes. These are all the tests that broke when we started exchanging/testing for regular indexes on partitioned tables.
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Taylor Vesely
This commit is part of Add Partitioned Indexes #7047. After adding INTERNAL_AUTO dependencies between partitioned indexes, many tests that assumed we need to manually delete indexes added to leaf partitions need updating.
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by David Krieger
This commit is part of Add Partitioned Indexes #7047. Tests were added to verify:
- Index-backed constraint names have a matching index name.
- Constraints on partitioned tables, including ADD PARTITION and EXCHANGE PARTITION.
- Constraints and indexes can be upgraded. This includes testing directly in pg_regress, or creating tables to be used by pg_upgrade.
Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Taylor Vesely
This commit is part of Add Partitioned Indexes #7047. Constraint names must now match their index. Fix ICW tests where this assumption no longer holds.
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Taylor Vesely
This commit adds partitioned indexes from upstream Postgres. It is mostly cherry-picked from Postgres 11, plus bug fixes from Postgres 12.
Differences from upstream:
- Postgres has two additional relkinds, RELKIND_PARTITIONED_TABLE and RELKIND_PARTITIONED_INDEX, which have no on-disk storage. Greenplum does not have these additional relkinds; thus, partitioned indexes have physical storage.
- CREATE INDEX ON ONLY <table> DDL has not yet been implemented.
- ALTER INDEX ATTACH PARTITION DDL has not yet been implemented.
Constraint changes:
- Constraints and their backing index have the same names. Thus, partitions of a table no longer share the same constraint name, and are instead related to their parent via INTERNAL_AUTO dependencies.
Index changes:
- Child partition indexes can no longer be directly dropped, and must be dropped from their root. This includes mid-level and leaf indexes.
- Adding indexes to mid-level partitions cascades to their children.
These changes are mostly cherry-picked from:
commit 8b08f7d4
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Fri Jan 19 11:49:22 2018 -0300
Local partitioned indexes
When CREATE INDEX is run on a partitioned table, create catalog entries for an index on the partitioned table (which is just a placeholder since the table proper has no data of its own), and recurse to create actual indexes on the existing partitions; create them in future partitions also. As a convenience gadget, if the new index definition matches some existing index in partitions, these are picked up and used instead of creating new ones. Whichever way these indexes come about, they become attached to the index on the parent table and are dropped alongside it, and cannot be dropped in isolation unless they are detached first.
To support pg_dump'ing these indexes, add commands CREATE INDEX ON ONLY <table> (which creates the index on the parent partitioned table, without recursing) and ALTER INDEX ATTACH PARTITION (which is used after the indexes have been created individually on each partition, to attach them to the parent index). These reconstruct prior database state exactly.
Reviewed-by: (in alphabetical order) Peter Eisentraut, Robert Haas, Amit Langote, Jesper Pedersen, Simon Riggs, David Rowley
Discussion: https://postgr.es/m/20171113170646.gzweigyrgg6pwsg4@alvherre.pgsql
Changes were also cherry-picked from the following Postgres commits:
eb7ed3f3 - Allow UNIQUE indexes on partitioned tables
ae366aa5 - Detach constraints when partitions are detached
19184fcc - Simplify coding to detach constraints when detaching partition
c7d43c4d - Correct attach/detach logic for FKs in partitions
17f206fb - Set pg_class.relhassubclass for partitioned indexes
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Hans Zeller
* Bump ORCA version to 3.29.0
* ORCA: Update subquery plans in ICG expected files. This change is needed for ORCA PR https://github.com/greenplum-db/gporca/pull/449. Some subquery plans changed in minor ways in the ICG tests.
Co-authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Jimmy Yih
Currently, a randomly distributed table cannot be created with a primary key or unique index. We should apply this restriction to ALTER TABLE SET DISTRIBUTED RANDOMLY as well. This was caught by the gpcheckcat distribution_policy check.
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
- 11 Mar 2019: 5 commits
-
-
Committed by Daniel Gustafsson
When altering a partitioned table and adding an incorrectly specified partition, an assertion was hit rather than gracefully erroring out. Make sure that the requested partition matches the underlying table definition before continuing down into the altering code. This also adds a testcase.
Reported-by: Kalen Krempely in #6967
Reviewed-by: Paul Guo <pguo@pivotal.io>
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Daniel Gustafsson
The GUC which enables recursive CTEs is called gp_recursive_cte_prototype in the currently released version, but in order to reflect the current state of the code it is now renamed to gp_recursive_cte. By default the GUC is still off, but that might change before we ship the next release. The previous GUC name is still supported, but marked as deprecated, in order to make upgrades easier.
Reviewed-by: Ivan Novick <inovick@pivotal.io>
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Ning Yu
This method was introduced to improve data redistribution performance during gpexpand phase 2; however, per benchmark results the effect does not reach our expectations. For example, when expanding a table from 7 segments to 8 segments the reshuffle method is only 30% faster than the traditional CTAS method, and when expanding from 4 to 8 segments reshuffle is even 10% slower than CTAS. When there are indexes on the table the reshuffle performance can be worse, and an extra VACUUM is needed to actually free the disk space. According to our experiments, the bottleneck of the reshuffle method is the tuple deletion operation, which is much slower than the insertion operation used by CTAS.
The reshuffle method does have some benefits: it requires less extra disk space, and it requires less network bandwidth (similar to the CTAS method with the new JCH reduce method, but less than CTAS + MOD). It can also be faster in some cases; however, as we cannot automatically determine when it is faster, it is not easy to benefit from it in practice. On the other hand, the reshuffle method is less tested and may have bugs in corner cases, so it is not production-ready yet.
In such a case we decided to retire it entirely for now; we might add it back in the future if we can get rid of the slow deletion or find reliable ways to automatically choose between the reshuffle and CTAS methods.
Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/8xknWag-SkI/5OsIhZWdDgAJ
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Zhenghua Lyu
Previously, when initializing ResultRelations in InitPlan on the QD, we always built relids as all the relation oids in a partitioned table (the root and all its inheritors). Sometimes we do not need all the relids. A typical case is an AO partitioned table: when we directly insert into a specific child partition, the plan's ResultRelation only contains the child partition. If we still build relids as the root and all its inheritors, `assignPerRelSegno` might lock each aoseg file in AccessShare mode on the QEs. It causes confusion that an insert statement targeting only a child partition holds other partitions' locks. This commit changes the relids-building logic to:
- if the ResultRelations contain the root partition, then relids is the root and all its inheritors
- otherwise, relids is a map over the ResultRelations taking each element's relation oid
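The two-branch relids-building rule can be sketched as below. This is a hedged illustration with invented names and oids standing in for relations, not the actual InitPlan code:

```python
def build_relids(result_relation_oids, root_oid, inheritor_oids):
    """Sketch of the relids-building rule described in the message.

    Only expand to the whole partition hierarchy when the root itself is
    a result relation; otherwise use exactly the result relations' oids,
    so a direct insert into one child partition locks only that child.
    """
    if root_oid in result_relation_oids:
        return [root_oid] + list(inheritor_oids)
    return list(result_relation_oids)

# Direct insert into child 202: relids stays [202], no sibling locks.
# Insert through root 100: relids covers the whole hierarchy.
```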
-
Committed by Daniel Gustafsson
Make sure all ereports() start with a lowercase letter, and move longer explanations to errdetail/errhint. Also fix expected error output to match.
Reviewed-by: Adam Berlin <aberlin@pivotal.io>
Reviewed-by: Jacob Champion <pchampion@pivotal.io>
-
- 08 Mar 2019: 1 commit
-
-
Committed by Ning Yu
Build mode does not use any information from the cluster and does not affect its running status; in fact, it does not require that a cluster exists at all.
Co-authored-by: Hubert Zhang <hzhang@pivotal.io>
Co-authored-by: Ning Yu <nyu@pivotal.io>
-
- 06 Mar 2019: 2 commits
-
-
Committed by Daniel Gustafsson
A large set of tests in the with suite were wrapped in an ignore block due to them not working properly in the past. Since most of these have been addressed, it's time to break up the block and ensure test coverage. This removes as much of the ignore as possible, and updates the underlying returned data to match. This will create merge conflicts with upstream, but since we won't merge more code before cutting the next release, it's better to have sane tests for the lifecycle of the next release, and we can always revert this on master as we start merging again. The trigger tests are left under ignore, even though they seem to work quite well, since atmsort cannot handle their output yet.
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
An inner ModifyTable node must run to completion even if the outer node can be satisfied with a squelched inner. Ensure the node runs to completion when asked to squelch, so as not to risk losing modifications. This adds a testcase from the upstream with test suite to the GPDB with_clause suite. The original test is under an ignore block, but even with that lifted, the output differs due to state set up by prior tests which happen to fail in GPDB.
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
- 05 Mar 2019: 3 commits
-
-
Committed by Pengzhou Tang
Commit 4eb65a53 brought GPDB the ability to distribute tables on a subset of segments. That commit took good care of replicated tables for SELECT/UPDATE/INSERT on a subset; however, COPY FROM did not. For COPY FROM on a replicated table, only one replica should be picked to provide data, and the QE whose segid matches gp_session_id % segment_size is chosen; obviously, for a table on a subset of segments, an invalid QE might be chosen. To fix this, the real numsegments of the replicated table should be used instead of the current segment size. What's more, the dispatcher can now allocate gangs on a subset of segments, so the QD can directly allocate the picked gang, and a QE no longer needs to care about whether it should provide data. For COPY TO on a replicated table, we should likewise allocate the correct QEs matching the numsegments of the replicated table.
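The replica-selection bug described here comes down to which divisor the modulo uses. A minimal sketch (function and variable names invented for illustration; only `gp_session_id % segment_size` is taken from the message):

```python
def pick_replica_qe(gp_session_id, cluster_size, table_numsegments):
    """Pick the segid of the QE that serves a replicated table.

    The old code used the full cluster size as the modulo divisor, which
    can land on a segment outside the table's subset. The fix is to use
    the table's own numsegments, so the result always holds a replica.
    """
    buggy_pick = gp_session_id % cluster_size       # may be >= numsegments
    fixed_pick = gp_session_id % table_numsegments  # always a valid replica
    return buggy_pick, fixed_pick

# A replicated table on 3 of 8 segments, session id 5:
# the old formula picks segid 5 (no replica there); the fix picks segid 2.
```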
-
Committed by Pengzhou Tang
checkPolicyForUniqueIndex() checks whether the distribution key conflicts with a unique/primary key. For example, a unique index is not allowed on a randomly distributed table but is allowed on a replicated table; for a normally distributed table, the set of columns being indexed should be a superset of the table's distribution key. What about an entry-distributed table (e.g. a table created in utility mode, which has no record in gp_distribution_policy, so GpPolicyFetch translates it to entry-distributed)? Such tables are localized in a single database, so adding a unique index should also be allowed. This was spotted by the assertion in checkPolicyForUniqueIndex() when checking the conflict for normally distributed tables. This fixes #5880.
-
Committed by Jacob Champion
We don't have Subject Alternative Name support, and won't until 9.5. Make sure we uncomment these tests when we get there.
Co-authored-by: David Krieger <dkrieger@pivotal.io>
-
- 04 Mar 2019: 1 commit
-
-
Committed by Zhang Shujie
We generate a fake ItemPointer for an AO table's tuples, but it differs slightly from a heap table's ItemPointer: the offset of a fake ItemPointer can be 0, whereas 0 is treated as invalid elsewhere in the code. To run correctly, we set the 16th bit of the offset to 1 as a flag so it passes those checks, but then the stored value is not the real value, so code using the fake ItemPointer has to remember to process the flag. The bitmap index build code expects to see the tuple IDs in order and gets confused when it sees offset number 32768 (0x8000) before offset number 1; this commit converts 32768 (0x8000) back to 0.
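The flag-bit encoding described here can be sketched in a few lines. This is an illustration of the bit manipulation only (function names invented); the constants 0x8000 and the "16th bit" flag come directly from the message:

```python
FAKE_OFFSET_FLAG = 0x8000  # the 16th bit, set so offset 0 passes validity checks

def encode_fake_offset(real_offset):
    """Store an AO fake ItemPointer offset with the flag bit set,
    so the otherwise-invalid offset 0 is accepted as valid."""
    return real_offset | FAKE_OFFSET_FLAG

def decode_fake_offset(stored_offset):
    """Recover the real offset by clearing the flag bit. The bare flag
    value 32768 (0x8000) decodes back to the real offset 0, which is what
    the bitmap index build needs to see tuple IDs in ascending order."""
    return stored_offset & ~FAKE_OFFSET_FLAG

print(decode_fake_offset(0x8000))  # 0
```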
-
- 01 Mar 2019: 1 commit
-
-
Committed by Zhenghua Lyu
The following utilities do not work when we are in gpexpand phase 1:
* gppkg
* gpconfig
* gpcheckcat
Add a check for them so that if the cluster is expanding in phase 1, they print an error message and exit.
-