- 27 July 2018, 1 commit
-
-
Submitted by tyacovone
Between GPDB 5.8.0 and 5.9.0, the recommended Linux System Settings changed for kernel.sem. https://gpdb.docs.pivotal.io/580/install_guide/prep_os_install_gpdb.html#topic3 https://gpdb.docs.pivotal.io/590/install_guide/prep_os_install_gpdb.html#topic3 However, the gpdb repository had references to the kernel.sem parameter, as well as a couple of other parameters, which seemed to be inconsistent. Co-authored-by: NAmil Khanzada <akhanzada@pivotal.io> Co-authored-by: NTrevor Yacovone <tyacovone@pivotal.io>
-
- 26 July 2018, 25 commits
-
-
Submitted by Daniel Gustafsson
-
Submitted by Daniel Gustafsson
The regression database has shown itself to be full of questionable objects which we probably won't support for upgrades without manual intervention before the upgrade. With all the changes going into pg_upgrade, it's time to start with a known minimal dataset and expand from there to the full ICW leftover schema. As a stop-gap for now, drop the regression databases and only upgrade a small subset. Also disallow upgrades of orphaned toast tables, and clean up the invocation of the test_gpdb_pre.sql script to be less hacky.
-
Submitted by Daniel Gustafsson
This is extracted from a larger patch in 5e92c436adcb32d295e77b3f3. Co-authored-by: NPaul Guo <paulguo@gmail.com> Co-authored-by: NMax Yang <myang@pivotal.io>
-
Submitted by Daniel Gustafsson
Make sure we use the psql client from the new bindir (the one in PATH might well be an upstream postgres psql etc.), and use a port different from the standard postgres port since it's likely to be in use. Also fix an integer value comparison which caused a warning in my Bash, while it might work in other versions of Bash.
-
Submitted by Bruce Momjian
binary-upgrade mode; instead only skip dumping the current user. This bug was introduced during the removal of split_old_dump(). Bug discovered during local testing.
-
Submitted by Daniel Gustafsson
Previously each QE would dump/restore the schema before upgrading the datafiles, which consumes a lot of time on large databases. Instead, use the datadir which was created from the dump/restore on the QD for the QEs as well and bootstrap the segment upgrade to jump straight to copying files instead. This is a first stab at implementing this model of distribution, follow-up commits will be required to finalize the patch.
-
Submitted by Robert Haas
This doesn't appear to accomplish anything useful, and does make the restore fail if the postgres database happens to have been dropped.
-
Submitted by Daniel Gustafsson
Previously, the Greenplum pg_upgrade synchronized the Oids across upgrades using the backend Oid pre-allocation. All pre-allocations were generated in an Oid file, which was executed before the schema was restored. When doing parallel restores, this strategy becomes problematic as the Oid preallocation may be done in a backend other than the one where the Oid is required. This commit rips out the use of Oid preallocation in favor of using the upstream Oid handling in pg_upgrade, where the Oids are set just before being required. A major difference is that we use the Oid preassignment scaffolding under the covers, but in a more upstream-merge-friendly way. For now, a known regression is that toast tables whose names in the old cluster don't match the naming convention in the new cluster fail to restore. Drop these before running the test until we know how to handle them.
-
Submitted by Daniel Gustafsson
The previous commit backported the PostgreSQL 9.3 pg_upgrade code base and overwrote the hacks to allow for upgrading a Greenplum cluster. This commit brings back the patchsets that were lost, and adapts them to the new pg_upgrade code as well as mildly refactors them to take advantage of new opportunities.
-
Submitted by Daniel Gustafsson
This overwrites the Greenplum pg_upgrade version with the upstream 9.3.22 sources. No attempt is made to make this compile or be even remotely usable. Follow-up commits will re-introduce all functionality which was lost.
-
Submitted by Daniel Gustafsson
This attempts to minimize the diff to upstream in preparation for merging with a later version of pg_upgrade. In some cases this re-introduces quirks from small style cleanups we've performed, but only for a short period until the merge happens. PostgreSQL has in recent versions moved to just having a version.c file for version-specific functions. In this commit we move Greenplum-specific code to version_gp.c, since we would do that sooner or later anyway. No functionality is altered; this is limited to refactoring.
-
Submitted by Omer Arap
When there are no stats available for a table, ORCA was treating it as an empty table while planning. The planner, on the other hand, utilizes the GUC gp_enable_relsize_collection to obtain the estimated size of the table, though no other statistics. This commit enables ORCA to have the same behavior as the planner when the GUC is set. Signed-off-by: NSambitesh Dash <sdash@pivotal.io>
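The fallback described in this commit can be sketched as follows. This is a minimal illustration, not ORCA's actual API: the function and the rows-per-page constant are hypothetical, chosen only to show the decision between "assume empty" and "estimate from relation size".

```python
def estimate_rows(reltuples, relpages, gp_enable_relsize_collection):
    """Illustrative sketch: when a table has no statistics, fall back to
    a size-based estimate (as the planner does when the GUC is on)
    instead of treating the table as empty."""
    ROWS_PER_PAGE = 100  # hypothetical constant, not GPDB's actual value
    if reltuples is not None:
        return reltuples  # real statistics are available, use them
    if gp_enable_relsize_collection:
        # no stats, but the GUC allows a relation-size-based estimate
        return relpages * ROWS_PER_PAGE
    return 0  # previous ORCA behavior: no stats means an empty table

print(estimate_rows(None, 10, True))
print(estimate_rows(None, 10, False))
```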
-
Submitted by Lav Jain
* Set fragment index for PXF fragments per segment * Added X-GP-LAST-FRAGMENT header to the request to indicate that this is the last fragment for the segment. The header is only sent for the last fragment, with the value "true". Co-authored-by: NBen Christel <bchristel@pivotal.io> Co-authored-by: NFrancisco Guerrero <aguerrero@pivotal.io>
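The header behavior above can be sketched like this. Only X-GP-LAST-FRAGMENT and its "true" value come from the commit message; the index header name and the function are hypothetical, used here just to show that the last-fragment marker is sent only once.

```python
def pxf_fragment_headers(index, total):
    """Illustrative sketch: build per-fragment PXF request headers,
    marking only the final fragment for the segment."""
    headers = {"X-GP-FRAGMENT-INDEX": str(index)}  # hypothetical header name
    if index == total - 1:
        # the commit adds this header only for the last fragment
        headers["X-GP-LAST-FRAGMENT"] = "true"
    return headers

print(pxf_fragment_headers(2, 3))
```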
-
Submitted by Scott Kahler
-
Submitted by Chris Hajas
This is not used by any other utilities and isn't documented anywhere. Authored-by: NChris Hajas <chajas@pivotal.io>
-
Submitted by Chris Hajas
This isn't used by anything and isn't referenced anywhere in documentation. Authored-by: NChris Hajas <chajas@pivotal.io>
-
Submitted by Chris Hajas
This is not used or documented anywhere. Authored-by: NChris Hajas <chajas@pivotal.io>
-
Submitted by Chris Hajas
This is an OS-agnostic df utility that worked on both Solaris and Linux. However, we no longer support Solaris, and this isn't referenced or mentioned in the docs anywhere. Authored-by: NChris Hajas <chajas@pivotal.io>
-
Submitted by Chris Hajas
This utility was designed for DCAs with versions prior to 1.0.2 and isn't referenced or mentioned in the docs anywhere. Authored-by: NChris Hajas <chajas@pivotal.io>
-
Submitted by Chris Hajas
This isn't used by anything and isn't in the documentation. Authored-by: NChris Hajas <chajas@pivotal.io>
-
Submitted by Chris Hajas
This was added when we added the openbar gpcrondump tests in c9770456. Since these tests no longer exist, we can remove this too. Authored-by: NChris Hajas <chajas@pivotal.io>
-
Submitted by Chris Hajas
This isn't used by any utilities. Authored-by: NChris Hajas <chajas@pivotal.io>
-
Submitted by Chris Hajas
This is not used by anything. Authored-by: NChris Hajas <chajas@pivotal.io>
-
Submitted by Chris Hajas
This was only used by gprepairmirrorseg, which was removed in a previous commit. Authored-by: NChris Hajas <chajas@pivotal.io>
-
Submitted by Chris Hajas
Authored-by: NChris Hajas <chajas@pivotal.io>
-
- 25 July 2018, 7 commits
-
-
Submitted by Jacob Champion
The new optimized query introduced in commit 2ea6aec3 isn't as optimized as we'd like if enable_nestloop is on -- released versions of GPDB craft a plan that runs in polynomial time, and the dump grinds to a halt. Disable enable_nestloop temporarily as a workaround. This code may need to be somewhat rewritten to deal with external subpartition leaves anyway, in which case we may be able to get rid of some of the problematic JOINs. Co-authored-by: NJamie McAtamney <jmcatamney@pivotal.io>
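The workaround described above can be sketched as wrapping the problematic dump query in a temporary GUC change. This is only an illustration of the idea, not pg_dump's actual code; the helper name is hypothetical, and pg_dump issues these statements through its connection rather than by string concatenation.

```python
def with_nestloop_disabled(query):
    """Illustrative sketch: bracket a query with SET/RESET so nested-loop
    joins are disabled only while this statement runs."""
    return (
        "SET enable_nestloop = off;\n"
        + query.rstrip(";") + ";\n"
        + "RESET enable_nestloop;"
    )

print(with_nestloop_disabled("SELECT * FROM pg_partition"))
```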
-
Submitted by Ekta Khanna
-
Submitted by Lisa Owen
* docs - fix some pivnet links * add missing space
-
Submitted by Jimmy Yih
The AO VACUUM on the crash_before_cleanup_phase table was supposed to skip the AO VACUUM drop phase because session 3 above it had acquired an AccessShareLock. However, the VACUUM is executed in the background, and the next command ends session 3's transaction and releases the AccessShareLock. This creates a race condition where the AO VACUUM drop phase is not skipped when session 3's transaction ends too quickly. We fix the issue by ending session 3's transaction only after the VACUUM has hit the fault injector that tells us it is about to do its cleanup phase. Co-authored-by: NDavid Kimura <dkimura@pivotal.io>
-
Submitted by Lisa Owen
-
Submitted by Lisa Owen
* docs - enhance pxf filter pushdown info * edits requested by david * misc edits
-
Submitted by Mel Kiyama
* docs - gpload - new config. file parameter FAST_MATCH will be ported to 5X_STABLE * docs - review updates for gpload config. file parameter FAST_MATCH
-
- 24 July 2018, 4 commits
-
-
Submitted by Huiliang.liu
- The results of the fast_match SQL don't include the schema name, so we need to add the schema name to extSchemaTable for fast_match - Remove locationStr, which is unused.
-
Submitted by Lisa Owen
* docs - remove duplicate gphdfs/kerberos topic in best practices * remove unused file
-
Submitted by David Kimura
If autovacuum was triggered before ShmemVariableCache->latestCompletedXid was updated by manually consuming xids, then autovacuum may not vacuum template0 with a proper transaction id to compare against. We made the test more reliable by suspending a new fault injector (auto_vac_worker_before_do_autovacuum) right before the autovacuum worker sets recentXid and starts doing the autovacuum. This allows us to guarantee that autovacuum is comparing against a proper xid. We also removed the loop in the test because the vacuum_update_dat_frozen_xid fault injector ensures the pg_database table has been updated. Co-authored-by: NJimmy Yih <jyih@pivotal.io>
-
Submitted by Jamie McAtamney
The --file flag was broken because the command to read from segment postgresql.conf was being run locally instead of remotely. Co-authored-by: NJamie McAtamney <jmcatamney@pivotal.io> Co-authored-by: NNadeem Ghani <nghani@pivotal.io>
-
- 23 July 2018, 3 commits
-
-
Submitted by BaiShaoqi
Spotted by Heikki Linnakangas.
-
Submitted by Zhenghua Lyu
Previously, we could not update distribution columns in the legacy planner, because the OLD tuple and NEW tuple may belong to different segments. We enable this by borrowing ORCA's logic, namely splitting each update operation into a delete and an insert. The delete operation is hashed by the OLD tuple's attributes, and the insert operation is hashed by the NEW tuple's attributes. This change includes the following items: * We need to push missing OLD attributes into the sub-plan tree so that they can be passed to the top Motion. * In addition, if the result relation has oids, we also need to put the oid in the targetlist. * If the result relation is partitioned, we need special treatment because resultRelations contains partition tables instead of the root table, unlike a normal Insert. * Special treatment for update triggers, because triggers cannot be executed across segments. * Special treatment in nodeModifyTable, so that it can process Insert/Delete for update purposes. * Proper initialization of SplitUpdate. There are still TODOs: * We don't handle cost gracefully, because we add the SplitUpdate node after the plan is generated. A FIXME has been added for this. * For deletion, we could optimize by sending only the distribution columns instead of all columns. Author: Xiaoran Wang <xiwang@pivotal.io> Author: Max Yang <myang@pivotal.io> Author: Shujie Zhang <shzhang@pivotal.io> Author: Zhenghua Lyu <zlv@pivotal.io>
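The core split-into-delete-and-insert idea above can be sketched as follows. This is an illustration only: Python's built-in hash() stands in for GPDB's actual distribution hash (cdbhash), and the function is hypothetical, not the planner's real SplitUpdate node.

```python
def split_update(old_tuple, new_tuple, dist_cols, num_segments):
    """Illustrative sketch: an UPDATE that changes distribution columns
    becomes a DELETE routed by the OLD tuple's distribution key and an
    INSERT routed by the NEW tuple's key, since the two rows may live
    on different segments."""
    def segment_for(tup):
        # stand-in for GPDB's cdbhash over the distribution key
        return hash(tuple(tup[c] for c in dist_cols)) % num_segments
    return [
        ("DELETE", segment_for(old_tuple)),
        ("INSERT", segment_for(new_tuple)),
    ]

print(split_update({"id": 1, "v": "x"}, {"id": 2, "v": "x"}, ["id"], 4))
```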
-
Submitted by Huiliang.liu
- Add a fast_match option to the gpload config file. If both reuse_tables and fast_match are true, gpload will try to fast-match an external table (without checking columns). If reuse_tables is false and fast_match is true, it will print a warning message.
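The reuse lookup described above can be sketched like this. The function and field names are hypothetical, not gpload's actual internals; the point is that fast_match skips the column comparison that the normal reuse path performs.

```python
def find_reusable_external_table(existing, location, fast_match, columns=None):
    """Illustrative sketch: with fast_match, a candidate external table
    is matched on its location alone; the normal reuse path also
    requires the column definitions to match."""
    for name, meta in existing.items():
        if meta["location"] != location:
            continue  # location must always match
        if not fast_match and meta["columns"] != columns:
            continue  # strict reuse also compares columns
        return name
    return None

tables = {"ext_t1": {"location": "gpfdist://etl1:8081/data.txt",
                     "columns": [("a", "int"), ("b", "text")]}}
print(find_reusable_external_table(tables, "gpfdist://etl1:8081/data.txt", True))
```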
-