27 September 2018 (2 commits)

Committed by David Kimura
Until we have replication slots, this will keep enough xlog segments around so that mirrors have an opportunity to reconnect when a checkpoint removes a segment while the mirror is not streaming. Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
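
A rough sketch of the kind of checkpoint-side clamp this implies, modeled on upstream's KeepLogSeg(); the function name and the use of the wal_keep_segments GUC here are illustrative assumptions, not the actual patch:

```c
#include "postgres.h"
#include "access/xlog_internal.h"

extern int  wal_keep_segments;      /* GUC: segments to hold back */

/*
 * Clamp the oldest xlog segment a checkpoint may remove/recycle, so a
 * mirror that is temporarily not streaming can still reconnect and
 * catch up.  Illustrative sketch only.
 */
static void
keep_segments_for_mirrors(XLogRecPtr recptr, XLogSegNo *logSegNo)
{
    XLogSegNo   segno;

    XLByteToSeg(recptr, segno);

    /* Hold back wal_keep_segments worth of xlog. */
    if (segno <= (XLogSegNo) wal_keep_segments)
        segno = 1;
    else
        segno -= wal_keep_segments;

    /* Only ever move the caller's cutoff backward, never forward. */
    if (segno < *logSegNo)
        *logSegNo = segno;
}
```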

Committed by Heikki Linnakangas
As far as I can see, the 'is_internal' flag is passed through to a possible object access hook, but it has no other effect. Mark the LOV index and heap created for bitmap indexes, as well as constraints created for exchanged partitions, as 'internal'.
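
For context, a hedged sketch of the only path where the flag matters; the helper name is illustrative, while InvokeObjectPostCreateHookArg() is the upstream 9.3 hook macro:

```c
#include "postgres.h"
#include "catalog/objectaccess.h"
#include "catalog/pg_class.h"

/*
 * is_internal is forwarded to any object access hook (e.g. sepgsql) at
 * creation time and has no other effect, so marking the bitmap LOV
 * index/heap as internal only changes what such a hook observes.
 */
static void
report_relation_created(Oid relOid, bool is_internal)
{
    InvokeObjectPostCreateHookArg(RelationRelationId, relOid, 0,
                                  is_internal);
}
```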

26 September 2018 (27 commits)

Committed by Heikki Linnakangas
I'm not entirely sure what was going on here before. I suspect we had backported some fixes from later upstream versions, and they caused merge conflicts and confusion now. But in any case, I see no reason to deviate from upstream now, so just remove the FIXME.

Committed by Heikki Linnakangas
We had backported upstream commits 425bef6ee7 and 2cd72ba42d earlier, but those got partially reverted in the 9.3 merge. Or earlier, or we hadn't backported them completely to begin with - I didn't investigate the exact path of how we got here. In any case, a partial backport is confusing, so take the code around this from the tip of 9.3 stable, so that we have both of those commits fully backported.

Committed by Adam Berlin

Committed by Adam Berlin

Committed by Adam Berlin

Committed by Adam Berlin

Committed by Asim R P
The functions allow obtaining or removing entries from the shared hash table maintained on the QD. The default size of this hash table is 1000, and entries are removed only after it is filled to capacity. The two functions should be helpful for testing as well as for troubleshooting issues with appendonly tables in production deployments. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
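
A hedged sketch of what removing one entry looks like against the backend's dynahash API; the AppendOnlyHash handle, the Oid key layout, and the function name are assumptions for illustration:

```c
#include "postgres.h"
#include "utils/hsearch.h"

/* Assumed handle to the shared append-only hash table on the QD. */
extern HTAB *AppendOnlyHash;

/*
 * Remove the cached entry for one appendonly relation; returns true if
 * an entry was present.  Illustrative sketch of what a removal UDF
 * would do under the covers.
 */
static bool
remove_ao_hash_entry(Oid relid)
{
    bool        found;

    (void) hash_search(AppendOnlyHash, (void *) &relid,
                       HASH_REMOVE, &found);
    return found;
}
```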

Committed by Asim R P
A segment file that is compacted by vacuum is left in awaiting drop state on QEs. Such a segment file should not be chosen for new inserts because it will never be considered for reading during scans. This patch fixes a bug in the logic that determines whether a segment file is in awaiting drop state. The precondition for the bug includes a specific interleaving of vacuum and insert transactions on the same appendonly table, manifested in the accompanying test. The fix is to use SnapshotNow instead of an MVCC snapshot: a segment file whose state is updated to awaiting drop by a vacuum compaction transaction may still be seen as available for inserts through an MVCC snapshot. When a vacuum compaction transaction is in progress, the aoentry for the relation in the appendonly hash cannot be evicted, so the need for obtaining state from QEs does not arise.
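
A minimal sketch of the visibility change, assuming the segment-file state lives in an aoseg catalog relation (names illustrative; SnapshotNow is the 9.3-era API this codebase is on). With SnapshotNow, state committed by a concurrent vacuum compaction is seen immediately rather than through the inserting transaction's MVCC snapshot:

```c
#include "postgres.h"
#include "access/heapam.h"
#include "utils/tqual.h"

/*
 * Scan the aoseg relation with SnapshotNow so that a segment file moved
 * to "awaiting drop" by a committed vacuum compaction is not seen as
 * available for new inserts.  Illustrative sketch.
 */
static void
scan_segfiles_with_snapshot_now(Relation aosegrel)
{
    HeapScanDesc scan;
    HeapTuple    tuple;

    scan = heap_beginscan(aosegrel, SnapshotNow, 0, NULL);
    while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
    {
        /* ... deform the tuple and check the segfile state column ... */
    }
    heap_endscan(scan);
}
```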

Committed by Asim R P
Spotted while reading.

Committed by Asim R P
This commit promotes a few assertions into elog(ERROR) so as to avoid new data being appended to a segment file that is not in available state. Scans on an AO table do not read segment files that are awaiting drop; new data, if inserted into such a segment file, would be lost forever. The accompanying isolation2 test demonstrates a bug that hits these errors. The test uses a newly added UDF to evict an entry from the appendonly hash table. In production, an entry is evicted when the appendonly hash table is filled (default capacity of 1000 entries). Note: the bug will be fixed in a separate patch. Co-authored-by: Adam Berlin <aberlin@pivotal.io>
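
A hedged sketch of the promotion described here; the struct, state constants, and function name are illustrative assumptions, not the actual GPDB definitions:

```c
#include "postgres.h"
#include "utils/rel.h"

/* Illustrative segment-file descriptor; not the actual GPDB struct. */
typedef struct SegFileEntry
{
    int     segno;
    int     state;              /* e.g. available, awaiting drop */
} SegFileEntry;

#define SEGFILE_STATE_AVAILABLE      1
#define SEGFILE_STATE_AWAITING_DROP  2

/*
 * Previously an Assert (a no-op in production builds); now a hard error
 * in all builds, so data can never be appended to a segment file that
 * scans will never read.
 */
static void
validate_segfile_for_insert(Relation rel, SegFileEntry *entry)
{
    if (entry->state != SEGFILE_STATE_AVAILABLE)
        elog(ERROR, "segno %d of relation \"%s\" is not available for inserts (state %d)",
             entry->segno, RelationGetRelationName(rel), entry->state);
}
```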

Committed by David Yozie

Committed by Ekta Khanna

Committed by David Kimura

Committed by David Kimura
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>

Committed by David Kimura
The test is included in "installcheck" under the zstd module. It should eventually be included as part of ICW. CAVEAT: the test for zstd, as it turns out, is wrong (nondeterministic) since its inclusion in commit 724f9d27. This move eliminates the need to have a separate "error: zstd not supported" answer file in ICG. Co-authored-by: Jesse Zhang <sbjesse@gmail.com>

Committed by David Kimura
After this patch, zstd functions are no longer part of the built-in catalog, and zstd is only built when enabled with `--with-zstd` in autoconf. Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
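
A hedged sketch of the build gating, assuming configure's `--with-zstd` defines a USE_ZSTD macro (the macro and function names are assumptions); without it, no libzstd symbols are referenced and the entry point fails cleanly:

```c
#include "postgres.h"

#ifdef USE_ZSTD
#include <zstd.h>
#endif

/*
 * Compression entry point that only references libzstd when the build
 * was configured with --with-zstd.  Illustrative sketch.
 */
static size_t
gp_zstd_compress(void *dst, size_t dstCapacity,
                 const void *src, size_t srcSize, int level)
{
#ifdef USE_ZSTD
    return ZSTD_compress(dst, dstCapacity, src, srcSize, level);
#else
    elog(ERROR, "zstd compression is not supported by this build");
    return 0;                   /* unreachable; keeps compilers quiet */
#endif
}
```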

Committed by Joao Pereira

Committed by Joao Pereira
Co-authored-by: Jimmy Yih <jyih@pivotal.io>

Committed by Joao Pereira
Co-authored-by: Jimmy Yih <jyih@pivotal.io>

Committed by Joao Pereira
Co-authored-by: Jimmy Yih <jyih@pivotal.io>

Committed by Joao Pereira
Co-authored-by: Jimmy Yih <jyih@pivotal.io>

Committed by Joao Pereira
Co-authored-by: Jimmy Yih <jyih@pivotal.io>

Committed by Taylor Vesely
Create an extensions group and add gpcloud as part of it. The group will no longer be added as part of ICW; it now needs to be specifically added as a test section when calling gen_pipeline.py. Signed-off-by: David Kimura <dkimura@pivotal.io>

Committed by Joao Pereira
The all, install, and check-world targets are now recursively sent to gpcontrib, which executes them in all the extensions underneath it. Signed-off-by: David Kimura <dkimura@pivotal.io>

Committed by David Kimura
The file gpcheckcloud was in .gitignore; after the git mv, that entry started ignoring the folder. Added the full path of the binary and added back the files that were not checked in. Signed-off-by: Joao Pereira <jdealmeidapereira@pivotal.io>

Committed by Joao Pereira
Moved the googletest folder to gpcontrib/gpcloud/test. Signed-off-by: Taylor Vesely <tvesely@pivotal.io> Signed-off-by: David Kimura <dkimura@pivotal.io>

Committed by Taylor Vesely
Signed-off-by: Joao Pereira <jdealmeidapereira@pivotal.io>

25 September 2018 (10 commits)

Committed by Adam Berlin
In GPDB, we only want an autovacuum worker to start once we know there is a database to vacuum. When we changed the default value of `autovacuum_start_daemon` from `true` to `false` for GPDB, we made AutoVacuumLauncherMain() immediately start an autovacuum worker and exit, which is called 'emergency mode'. In 'emergency mode' it is possible to continuously start autovacuum workers: within the worker, the PMSIGNAL_START_AUTOVAC_LAUNCHER signal is sent whenever a database is found that is old enough to be vacuumed, even though we only autovacuum non-connectable databases (template0) in GPDB, because the worker has no logic to filter out connectable databases. This change allows the autovacuum launcher to do more up-front decision making about whether it should start an autovacuum worker, including GPDB-specific rules. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
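
A hedged sketch of the launcher-side decision this enables; the struct and field names are illustrative stand-ins, not the actual autovacuum.c types:

```c
#include "postgres.h"

/* Illustrative stand-in for the launcher's per-database summary. */
typedef struct CandidateDatabase
{
    Oid     datid;
    bool    allows_connections;       /* pg_database.datallowconn */
    bool    needs_wraparound_vacuum;  /* datfrozenxid is old enough */
} CandidateDatabase;

/*
 * GPDB launcher-side decision: only start a worker for non-connectable
 * databases (e.g. template0) that actually need a wraparound vacuum,
 * instead of letting the worker discover there is nothing to do and
 * re-signal the postmaster forever.
 */
static bool
should_start_autovacuum_worker(const CandidateDatabase *db)
{
    if (db->allows_connections)
        return false;
    return db->needs_wraparound_vacuum;
}
```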

Committed by Paul Guo
create_unique_path() can be used to convert a semi-join to an inner join. Previously, during the semi-join refactoring in commit d4ce0921, creating a unique path was disabled for the case where duplicates might be on different QEs. In this patch we enable adding a motion to unique-ify the path, but only if the unique method is not UNIQUE_PATH_NOOP. We don't create a unique path in that case because, later on during plan creation, a motion could be created above this unique path whose subpath is itself a motion; the unique path node would then be ignored and we would get a motion plan node directly above another motion plan node, which is bad. We could improve that further, but not in this patch. Co-authored-by: Alexandra Wang <lewang@pivotal.io> Co-authored-by: Paul Guo <paulguo@gmail.com>
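
A minimal sketch of the guard, using the upstream UniquePath node and its umethod field; the helper name is illustrative and the actual patch involves building the motion path as well:

```c
#include "postgres.h"
#include "nodes/relation.h"

/*
 * Only add a Motion to unique-ify the path when a real unique method
 * (hash or sort) was chosen.  With UNIQUE_PATH_NOOP the UniquePath can
 * be dropped at plan creation, which would leave a Motion directly
 * above another Motion.  Illustrative sketch of the new condition.
 */
static bool
may_add_motion_for_unique_path(const UniquePath *upath)
{
    return upath->umethod != UNIQUE_PATH_NOOP;
}
```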

Committed by Daniel Gustafsson
The bkuprestore test was imported along with the source code during the initial open sourcing, but has never been used and hasn't worked in a long time. Rather than trying to save this broken mess, let's remove it and start fresh with a pg_dump TAP test, which is a much better way to test backup/restore. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Jimmy Yih <jyih@pivotal.io>

Committed by Dhanashree Kashid

Committed by Shivram Mani
The PXF client in gpdb uses pxf libraries from the apache hawq repo. These pxf libraries will continue being developed in a new PXF repo, greenplum-db/pxf, which is in the process of being open sourced in the next few days. The PXF extension and gpdb-pxf client code will remain in the gpdb repo. The following changes are included in this PR: transition from the old PXF namespace org.apache.hawq.pxf to org.greenplum.pxf (there is a separate PR in the PXF repo to address the package namespace refactor, greenplum-db/pxf#5), and doc updates to reflect the new PXF repo and the new package namespace.

Committed by Ashwin Agrawal
Regular fault injection doesn't work for mirrors, so a fault injection mechanism using the SIGUSR2 signal coupled with an on-disk file was coded just for testing. This is very hacky and intrusive, hence the plan is to get rid of it. Most of the tests using this framework were found not to be useful, as the majority of the code is upstream. Where testing is still needed, a better alternative will be explored.

Committed by Ashwin Agrawal
Most of the backup-block modifications for providing wal_consistency_checking were removed as part of the 9.3 merge, mainly to avoid merge conflicts. The masking functions are still used by the gp_replica_check tool to perform checking between primary and mirrors, but the online version of checking during each replay of a record was let go. So, in this commit we clean up the remaining pieces that are not used. We will bring this back in properly working condition when we catch up to upstream.

Committed by Ashwin Agrawal
Remove the fault types which have no implementation, or have an implementation but don't seem usable, so that only the working subset of faults remains. The data corruption fault, for instance, seems pretty useless: even if needed, it can easily be coded for a specific use case using the skip fault instead of having a special one defined for it. Fault type "fault" is redundant with "error", hence it is removed as well.

Committed by Ashwin Agrawal

Committed by Dhanashree Kashid
The following commits have been cherry-picked again: b1f543f3, b0359e69, a341621d. The contrib/dblink tests were failing with ORCA after the above commits. The issue has now been fixed in ORCA v3.1.0, hence we re-enabled these commits and bumped the ORCA version.

24 September 2018 (1 commit)

Committed by Heikki Linnakangas
I couldn't find an easy way to make this assertion work with the "flattened" range table in 9.3. The information needed for this is zapped away in add_rte_to_flat_rtable(). I think we can live without this assertion.