- 14 March 2019, 16 commits
-
Committed by Daniel Gustafsson
As we merge with upstream and thereby keep refining the Postgres planner, "legacy planner" is no longer a suitable name. This changes all variations of the spelling (legacy planner, legacy optimizer, legacy query optimizer, etc.) to say "Postgres" rather than "legacy".
Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
Reviewed-by: David Yozie <dyozie@pivotal.io>
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Daniel Gustafsson
This just removes unused imports, variables, and functions, as well as cleaning up a little whitespace. No functional changes.
Reviewed-by: Adam Berlin <aberlin@pivotal.io>
Reviewed-by: Shoaib Lari <slari@pivotal.io>
Reviewed-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Heikki Linnakangas
* Avoid unnecessary start/end_ignore blocks.
* The test tables were supposed to be created in a dedicated schema, but because of the RESET ALL commands they were created in the 'public' schema instead. Fix by replacing "RESET ALL" with "RESET test_print_direct_dispatch_info".
* Don't try to DROP tables that should not exist yet.
* Don't bother DROPping test tables as we go; they will be dropped at the end, when we drop the whole schema.
* Use fewer partitions in a test on a partitioned table.
* Use fewer rows in the 'tblexecutions' test table.
This reduces the execution time by about 20 s.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
We had essentially the same test in 'partition' already.
-
Committed by Heikki Linnakangas
* Avoid using start/end_ignore blocks where not necessary.
* Remove unnecessary DROP commands.
* Use BEGIN/COMMIT when building test tables, to reduce 2PC overhead.
* Reuse test tables rather than dropping and recreating them.
Shaves a few seconds off the total execution time.
-
Committed by Heikki Linnakangas
Commit 90a957eb changed the test to create fewer partitions, but didn't update the NOTICEs in the expected output accordingly. NOTICEs are ignored when comparing the expected output with the actual output, but it's still nice to keep the expected output in sync with reality.
-
Committed by Shaoqi Bai
* Update relation's stats in pg_class during VACUUM FULL.
Hash indexes depend on the estimated numbers of tuples and pages of relations; an incorrect value can cause an index to grow significantly. VACUUM FULL recreates the heap and reindexes all indexes before renewing the stats. The patch fixes that, so indexes will see correct values. Backpatch to v10 only, because earlier versions don't have a usable hash index, and growth of the hash index is the only user-visible symptom.
Author: Amit Kapila
Reviewed-by: Ashutosh Sharma, me
Discussion: https://www.postgresql.org/message-id/flat/20171115232922.5tomkxnw3iq6jsg7@inml.weebeastie.net
* Collect the QEs' relpages and reltuples on the QD.
Add logic in swap_relation_files() to collect the QEs' relpages and reltuples on the QD when doing VACUUM FULL.
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
* Add a test.
Add a test to verify that relpages and reltuples become proper numbers after VACUUM FULL.
Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
* Update PR pipeline failures.
Reviewed-by: Adam Berlin <aberlin@pivotal.io>
Reviewed-by: Alexandra Wang <lewang@pivotal.io>
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Jinbao Chen
This commit re-enables commit 1ec65820.
-
Committed by Paul Guo
Reviewed-by: Adam Berlin
-
Committed by Zhenghua Lyu
gpexpand now uses the database `postgres` to store expansion information, instead of a user-specified database.
Co-authored-by: Jialun Du <jdu@pivotal.io>
-
Committed by Paul Guo
Make sure the array type is not missed after partition exchange.
-
Committed by David Kimura
This commit ensures that we have basic coverage for creating and inserting data into append-only row- or column-oriented tables with quicklz, zstd, zlib, or rle. It uses the get_ao_compression_ratio() function to check insert success by comparing the uncompressed and compressed sizes. It includes API testing of invalid scenarios, such as an invalid compress level or an unsupported compresstype for a given table format (e.g. rle is not supported with append-only row-oriented tables).
Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
Co-authored-by: David Kimura <dkimura@pivotal.io>
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Kalen Krempely
Reword the exception message to be clearer and more concise.
Authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Kalen Krempely
Authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by David Yozie
-
- 13 March 2019, 21 commits
-
Committed by Jinbao Chen
We had an assertion failure for COPY to a partitioned table when the root and child relations have different attnos. The root cause is that 'reconstructTupleValues' was given a wrong newNumAttrs length.
-
Committed by Bob Bao
* Add configure_flags_with_extensions to the pipeline template file.
* Regenerate the gpdb_master pipeline and use fly to set it up.
Co-authored-by: Bob Bao <bbao@pivotal.io>
Co-authored-by: Ning Fu <nfu@pivotal.io>
-
Committed by Zhang Shujie
A writable external table has an entry in gp_distribution_policy, so it has a numsegments field. The previous code skipped all external tables, so their numsegments fields were not updated. This commit fixes that by:
1. adding a column to the status_detail table to record whether a table is a writable external table, and invoking the correct SQL to expand such tables;
2. supporting `Alter external table <tab> expand table` for writable external tables.
Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
-
Committed by Georgios Kokolatos
Upstream commit <568d4138> introduced a proper MVCC model for catalog lookups. This change means that catalog lookups must be avoided when not in a proper transaction state. In PortalSetBackoffWeight(), a check validated whether the call was in a transaction; on failure, no catalog lookups were performed and no backend entry was initialized. This commit initializes a backend entry in all cases with a proper weight; no catalog lookups are performed outside of a transaction state. It also tidies up the interface for initializing backoff entries by removing the responsibility for calculating the backoff weight from the caller. Removes 94_MERGE_FIXME.
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
Reviewed-by: Gang Xiong <gxiong@pivotal.io>
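A minimal standalone sketch of the pattern this describes, with hypothetical names (DEFAULT_WEIGHT, weight_from_catalog, in_transaction stand in for the real GPDB backoff symbols and IsTransactionState()): the entry always gets a sane default weight, and the catalog is consulted only when a transaction is open.

```c
#include <stdbool.h>
#include <stdio.h>

#define DEFAULT_WEIGHT 100                 /* hypothetical default weight */

static bool in_transaction = false;        /* stand-in for IsTransactionState() */

static int weight_from_catalog(void) { return 42; }   /* stub catalog lookup */

/* Initialize the backoff entry in all cases; skip the catalog
 * lookup when no transaction is open. */
static int init_backoff_weight(void)
{
    int weight = DEFAULT_WEIGHT;

    if (in_transaction)                    /* catalog lookups are only safe here */
        weight = weight_from_catalog();

    return weight;
}

int main(void)
{
    printf("weight outside a transaction: %d\n", init_backoff_weight());
    return 0;
}
```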
-
Committed by Jinbao Chen
For the SQL "alter table parttab exchange partition for (1) with table y", we checked whether the table has the same columns as the partitioned table. But for the SQL "alter table parttab alter partition for (1) exchange partition for (1) with table x", we forgot the check. Add the check back.
-
Committed by Tingfang Bao
* Build the necessary extensions for release.
* Add a new '--enable-debug-extensions' configure option; when it is provided, the built extensions include: gp_distribution_policy, gp_internal_tools, gp_sparse_vector, gp_replica_check, gp_inject_fault, gp_debug_numsegments.
* Rename configure_flags to configure_flags_with_extensions. It SHOULD BE set to configure_flags_with_extensions: "--enable-cassert --enable-debug-extensions" in these secrets files: secrets/gpdb_master-ci-secrets.dev.yml and secrets/gpdb_master-ci-secrets.prod.yml.
Co-authored-by: Bob Bao <bbao@pivotal.io>
Co-authored-by: Ning Fu <nfu@pivotal.io>
-
Committed by Ming Li
If there is no python at $GPHOME/ext/python/bin/python, $PYTHONHOME will be set to an empty string, which breaks the Python environment.
Signed-off-by: Tingfang Bao <bbao@pivotal.io>
-
Committed by Ning Yu
Cluster expansion checks whether there are partial tables; we need to drop the partial tables so that the cluster expansion runs correctly.
-
Committed by Jialun Du
- Change the rollback-complete message, since online expand needn't restart.
- fsync the status file after writing it, to make sure the data has reached disk (see the sketch below).
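gpexpand itself is a Python utility, so the following is only a generic POSIX C analogue of the durable-write pattern the second item describes: write the status file, then fsync() it so the bytes reach disk before anything depends on them.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write a status string to 'path' and force it to stable storage.
 * Returns 0 on success, -1 on error (error handling abbreviated). */
static int write_status_durably(const char *path, const char *status)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    if (write(fd, status, strlen(status)) < 0 || fsync(fd) < 0)
    {
        close(fd);
        return -1;
    }
    return close(fd);   /* without the fsync, a crash could lose the write */
}
```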
-
Committed by Zhenghua Lyu
Previously, the test cases `partial_table` and `subselect_gp2` were in the same test group, so they might run concurrently. `partial_table` contains the statement `update gp_distribution_policy`; `subselect_gp2` contains the statement `VACUUM FULL pg_authid`. These two statements may lead to a local deadlock on the QD when running concurrently, if GDD is disabled.
With GDD disabled, `update gp_distribution_policy` acquires locks as follows:
1. at the parsing stage, lock `gp_distribution_policy` in Exclusive mode;
2. later, when it needs to check authentication, lock `pg_authid` in AccessShare mode.
`VACUUM FULL pg_authid` acquires locks as follows:
1. lock `pg_authid` in Access Exclusive mode;
2. later, when rebuilding the heap, it might delete some dependencies, which calls GpPolicyRemove and locks `gp_distribution_policy` in RowExclusive mode.
So there is a potential local deadlock.
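What the message describes is the classic ABBA deadlock: two sessions take the same two locks in opposite orders. An illustrative, self-contained C analogue, with mutexes standing in for the table locks:

```c
#include <pthread.h>

static pthread_mutex_t policy_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ gp_distribution_policy */
static pthread_mutex_t authid_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ pg_authid */

/* Session A: ~ "update gp_distribution_policy" */
static void *session_a(void *arg)
{
    pthread_mutex_lock(&policy_lock);   /* 1. lock gp_distribution_policy */
    pthread_mutex_lock(&authid_lock);   /* 2. then pg_authid: may block   */
    pthread_mutex_unlock(&authid_lock);
    pthread_mutex_unlock(&policy_lock);
    return arg;
}

/* Session B: ~ "VACUUM FULL pg_authid" */
static void *session_b(void *arg)
{
    pthread_mutex_lock(&authid_lock);   /* 1. lock pg_authid                     */
    pthread_mutex_lock(&policy_lock);   /* 2. then gp_distribution_policy: if    */
    pthread_mutex_unlock(&policy_lock); /*    both sessions hold their first     */
    pthread_mutex_unlock(&authid_lock); /*    lock, neither can take its second. */
    return arg;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, session_a, NULL);
    pthread_create(&b, NULL, session_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);   /* may never return: that's the deadlock */
    return 0;
}
```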
-
Committed by Jialun Du
If the target is not a table, it must error out. So it's better to do the permission check first; otherwise the logic may access fields which are nullable for non-table objects and cause a crash.
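A tiny standalone sketch of the reordering, with hypothetical types (none of these names are from the actual patch): error out on non-table objects before touching fields that only tables populate.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct Object
{
    int   is_table;          /* hypothetical discriminator */
    char *table_only_field;  /* NULL for non-table objects */
} Object;

static void handle(const Object *obj)
{
    /* Check first: erroring out here avoids ever reaching the
     * NULL dereference below for non-table objects. */
    if (!obj->is_table)
    {
        fprintf(stderr, "error: target is not a table\n");
        exit(EXIT_FAILURE);
    }
    puts(obj->table_only_field);   /* safe only after the check */
}

int main(void)
{
    Object view = { 0, NULL };
    handle(&view);   /* errors out cleanly instead of crashing */
    return 0;
}
```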
-
Committed by Ning Yu
Add tests to ensure that a table can be expanded correctly even if it contains misplaced tuples.
-
Committed by Jialun Du
Modify gpexpand phase 1 so that, if it fails after releasing the catalog lock, re-running gpexpand can retry the failed work. The current steps of gpexpand segment preparation are:
1. create the template based on the master;
2. lock the catalog;
3. build and start the new segments;
4. update gp_segment_configuration (new transactions will then see the new nodes);
5. unlock the catalog;
6. create the schema and tables for phase 2 (data redistribution).
If it fails before step 5, it can roll back to the original state by running gpexpand with -r. But if it fails after step 5, it cannot roll back, because new databases, tables, or schemas may be created after the catalog is unlocked, and new data may be inserted into the new segments. Today, if it fails in step 6, the DBA can do nothing: they can neither roll back nor continue to retry the failing work in step 6 without complex manual intervention. So we change the behavior here: if gpexpand finds that the last expansion didn't complete successfully and cannot be rolled back, it cleans up the schemas and tables built in step 6 last time and retries step 6.
-
Committed by Tang Pengzhou
I noticed that we don't have a sanity check for the correctness of a table's "numsegments". numsegments might be larger than the size of the cluster, e.g. when a table is expanded after a global transaction has started and is then accessed within that transaction, or when gp_distribution_policy is corrupted in some way. The dispatcher and interconnect cannot handle such a case, so add a sanity check to error out. The sanity check is skipped in UTILITY mode: getgpsegmentCount() returns -1 there, which would always make the numsegments check fail.
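A hedged, standalone sketch of such a sanity check (hypothetical names; the real check lives in the dispatcher path): reject a numsegments outside the live cluster size, and skip the check in utility mode, where the segment count is reported as -1.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static void check_numsegments(int numsegments, int cluster_size, bool utility_mode)
{
    if (utility_mode)
        return;   /* cluster_size would be -1 here, so skip the check */

    if (numsegments < 1 || numsegments > cluster_size)
    {
        fprintf(stderr, "invalid numsegments %d for a cluster of %d segments\n",
                numsegments, cluster_size);
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    check_numsegments(3, 3, false);   /* fine */
    check_numsegments(4, 3, false);   /* errors out: table wider than cluster */
    return 0;
}
```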
-
Committed by David Yozie
-
Committed by Daniel Gustafsson
Spell out VACUUM ANALYZE rather than using a pointless, and silly, shortening. No functional change.
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Lisa Owen
-
Committed by Jacob Champion
This reintroduces commit ba3eb5b4, which was reverted in 659f0ee5. After the WALrep changes, the previous -F (filespace) option was co-opted to be the new standby data directory option. This isn't a particularly obvious association. Change the option to -S. (-D would have been better, but that's already in use as a short alias for --debug.) Also document this option in the official gpinitstandby help.
-
Committed by Jacob Champion
This reintroduces commit c9c3c351, which was reverted in 659f0ee5. When a standby is initialized on the same host as the original master, remind the user that the data directory and port need to be explicitly set.
-
Committed by Jacob Champion
Commit 6610b941 removed the use of createTemplate() from gpexpand. There are no more callers, and as that commit pointed out, the implementation is unsafe. We can also get rid of DiskUsage and LocalDirCopy.
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
- 12 March 2019, 3 commits
-
Committed by Georgios Kokolatos
The function was using an anti-pattern where an argument is a fixed-length char array. Array arguments in C don't really exist, but compilers accept them, and without any 'const' or 'static' decorators they don't always emit a warning. The proposed patch only fixes the unsafeness of the function declaration; it does not address the function's usefulness.
Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
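The anti-pattern in isolation: a sized array parameter in C decays to a plain pointer, so the declared bound is neither checked at call sites nor visible in the callee.

```c
#include <stdio.h>

/* Looks bounded, but 'char buf[16]' is adjusted to 'char *buf';
 * the 16 is documentation only. */
static void fill(char buf[16])
{
    /* sizeof a decayed parameter is the pointer size, not 16 */
    printf("sizeof(buf) in callee: %zu\n", sizeof(buf));
    buf[0] = '\0';
}

int main(void)
{
    char small[4];
    fill(small);   /* compiles silently despite the size mismatch */
    return 0;
}
```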
-
Committed by Daniel Gustafsson
Make sure ereport() calls start with a lower-case letter and don't end with a period. Also remove superfluous mentions of Greenplum Database from messages.
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Adam Berlin <aberlin@pivotal.io>
-
Committed by Daniel Gustafsson
Commit dfee2ff7 exposed workfile_mgr_cache_entries_get_copy(), which returns a structure containing WorkFileUsagePerQuery entries, but the definition of WorkFileUsagePerQuery was not made public. A forward declaration was placed in workfile_mgr.h, but it suffered from a redefinition warning (in clang at least):
    workfile_mgr.c:93:3: warning: redefinition of typedef 'WorkFileUsagePerQuery' is a C11 feature [-Wtypedef-redefinition]
        } WorkFileUsagePerQuery;
    ../../../../src/include/utils/workfile_mgr.h:25:38: note: previous definition is here
        typedef struct WorkFileUsagePerQuery WorkFileUsagePerQuery;
Fix by moving the WorkFileUsagePerQuery definition to the header from which it was moved in f1ef3668. If we expose an API which returns data in this struct, we need to also make the definition available to callers.
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
Reviewed-by: Teng Zhang <tezhang@pivotal.io>
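A minimal reproduction of that warning: a forward typedef plus a repeated typedef name in the same translation unit is exactly what clang's -Wtypedef-redefinition flags before C11 (the struct body here is illustrative, not the real layout).

```c
/* In the header: forward declaration so callers can hold pointers. */
typedef struct WorkFileUsagePerQuery WorkFileUsagePerQuery;

/* In the .c file: repeating the typedef name is what clang reports as
 * "redefinition of typedef ... is a C11 feature" under -std=c99.
 * Moving the full definition into the header (keeping one typedef)
 * makes the warning go away. */
typedef struct WorkFileUsagePerQuery
{
    int num_files;   /* illustrative field only */
} WorkFileUsagePerQuery;

int main(void) { return 0; }
```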
-