- 29 Jan, 2019 (12 commits)
-
Committed by Mel Kiyama
* docs - remove gptransfer from docs
  - removed gptransfer topics, references to gptransfer, and images
  - also updated text in gpcopy-migrate as a rough update for 6.0
* docs - remove gptransfer from docs - review updates
-
Committed by Chuck Litzell
* docs - REPEATABLE READ transaction mode is supported; SERIALIZABLE falls back to REPEATABLE READ
* Note that GPDB doesn't implement PostgreSQL SSI transactions
* Review comments
-
Committed by Mel Kiyama
* docs - updates for online expand
* docs - online expand - edits based on review comments; updated catalog table information and removed draft comments
-
Committed by Bradford D. Boyle
These were previously added to the task, but were missed in the pipeline.
Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
Co-authored-by: David Sharp <dsharp@pivotal.io>
-
Committed by Bradford D. Boyle
Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
Co-authored-by: Ben Christel <bchristel@pivotal.io>
-
Committed by David Sharp
And configure GPDB with --with-quicklz on RHEL. This commit removes quicklz_compressor from all platforms except RHEL/CentOS. Other platforms will be re-enabled in the future.
Co-authored-by: David Sharp <dsharp@pivotal.io>
Co-authored-by: Ben Christel <bchristel@pivotal.io>
-
Committed by David Sharp
Co-authored-by: David Sharp <dsharp@pivotal.io>
Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
Co-authored-by: Ben Christel <bchristel@pivotal.io>
-
Committed by Jacob Champion
A character transposition in the getopt_long() string meant that the argument requirement intended for -S was being applied to -R:

    pg_rewind: option requires an argument -- R

Fix that.
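This bug class is easy to reproduce. A minimal sketch in Python, whose getopt module uses the same colon-suffix convention as C's getopt_long() short-option strings (the option letters just mirror the pg_rewind ones; this is not the actual pg_rewind code):

```python
import getopt

# "R:S" (the transposed string) says -R takes an argument and -S does not;
# the intended string "RS:" says the opposite.
transposed, intended = "R:S", "RS:"

# With the transposed string, a bare -R is rejected...
try:
    getopt.getopt(["-R"], transposed)
except getopt.GetoptError as err:
    print(err)  # option -R requires argument

# ...while with the intended string it parses fine.
opts, _ = getopt.getopt(["-R"], intended)
print(opts)  # [('-R', '')]
```

A single misplaced colon silently changes which option demands an argument, which is why the symptom surfaced on -R rather than -S.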
-
Committed by David Yozie
* Add notes to qualify the lack of large object support
* Replace the large object non-support note with a more general description and a link to the PostgreSQL docs
-
Committed by David Yozie
* Update pg_class relkind entries
* Remove duplicate entry for composite type
* Add info for missing columns: reloftype, relallvisible, relpersistence, relhastriggers
-
Committed by Heikki Linnakangas
The point of this FIXME was that the code before the 9.2 merge was possibly broken, because it was missing this code to get the input slot. I think it was missing before the 9.2 merge because of a bungled merge of commit 7fc0f062 during the 9.0 merge, but the code in GPDB master is now identical to upstream, so there's nothing to do. Comparing the 8.2 and 5X_STABLE code, it looks correct in 5X_STABLE as well, so there's nothing to do there either.
-
Committed by Heikki Linnakangas
This is a backport of upstream commit 9556aa01 and Tom Lane's follow-up commit 6119060d. Cherry-picked now, to avoid the 256 MB limit on strings. We used to have an old workaround for that issue in GPDB, but lost it as part of the 9.1 merge.

Upstream commit:

    commit 9556aa01
    Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
    Date:   Fri Jan 25 16:25:05 2019 +0200

    Use single-byte Boyer-Moore-Horspool search even with multibyte encodings.

    The old implementation first converted the input strings to arrays of wchars, and performed the search on those. However, the conversion is expensive, and for a large input string, consumes a lot of memory. Allocating the large arrays also meant that these functions could not be used on strings larger than 1 GB / pg_encoding_max_length() (256 MB for UTF-8).

    Avoid the conversion, and instead use the single-byte algorithm even with multibyte encodings. That can get fooled, if there is a matching byte sequence in the middle of a multi-byte character, so to eliminate false positives like that, we verify any matches by walking the string character by character with pg_mblen(). Also, if the caller needs the position of the match as a character offset, we also need to walk the string to count the characters.

    Performance testing shows that walking the whole string with pg_mblen() is somewhat slower than converting the whole string to wchars. It's still often a win, though, because we don't need to do it if there is no match, and even when there is, we only need to walk up to the point where the match is, not the whole string. Even in the worst case, there would be room for optimization: much of the CPU time in the current loop with pg_mblen() is function call overhead, and could be improved by inlining pg_mblen() and/or the encoding-specific mblen() functions. But I didn't attempt to do that as part of this patch.
    Most of the callers of the text_position_setup/next functions were actually not interested in the position of the match, counted in characters. To cater for them, refactor the text_position_next() interface into two parts: searching for the next match (text_position_next()), and returning the current match's position as a pointer (text_position_get_match_ptr()) or as a character offset (text_position_get_match_pos()). Getting the pointer to the match is a more convenient API for many callers, and with UTF-8, it allows skipping the character-walking step altogether, because UTF-8 can't have false matches even when treated like raw byte strings.

    Reviewed-by: John Naylor
    Discussion: https://www.postgresql.org/message-id/3173d989-bc1c-fc8a-3b69-f24246f73876%40iki.fi
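The approach of searching raw bytes and then walking characters only up to the match can be sketched as follows. This is a conceptual Python model, not the C implementation: the byte-level `find()` stands in for the Boyer-Moore-Horspool search, and `utf8_mblen` plays the role of pg_mblen() for UTF-8 only.

```python
def utf8_mblen(b0: int) -> int:
    """Byte length of the UTF-8 character whose first byte is b0."""
    if b0 < 0x80:
        return 1
    if b0 < 0xE0:
        return 2
    if b0 < 0xF0:
        return 3
    return 4

def text_position(haystack: bytes, needle: bytes) -> int:
    """Find needle in haystack; return a 0-based *character* offset, or -1.

    The character walk below is the counting step the commit describes:
    it runs only when there is a match, and only up to the match, not
    over the whole string. (With UTF-8 a byte-level match can't start
    mid-character, so no false-positive verification is needed here.)
    """
    byte_pos = haystack.find(needle)
    if byte_pos == -1:
        return -1
    i = chars = 0
    while i < byte_pos:
        i += utf8_mblen(haystack[i])
        chars += 1
    return chars
```

For example, `text_position("héllo wörld".encode(), "wörld".encode())` returns 6, even though the match begins at byte offset 7.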
-
- 28 Jan, 2019 (3 commits)
- 27 Jan, 2019 (2 commits)
-
Committed by Daniel Gustafsson
The planner wasn't correctly anchoring the path tree for queries which included multiple recursive CTE self-referential terms. Fix by anchoring to the appropriate parent root when invoking the subquery planner. Adds a testcase illustrating the query; previously the test query would error with:

    ERROR: could not find CTE "x" (allpaths.c:<lineno>)

Co-authored-by: Georgios Kokolatos <gkokolatos@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
Commit cd733c64 removed the TINC tests from the CI pipeline, but these files were seemingly left behind and became dead code. Remove.
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
- 26 Jan, 2019 (10 commits)
-
Committed by Heikki Linnakangas
After commit 56bb376c, \di no longer prints the Storage column. I failed to change the 'bfv_partition' test's expected output accordingly.
-
Committed by Heikki Linnakangas
The 'translate_columns' array must be at least as large as the number of columns in the result set being passed to printQuery(). We had added one column, "Storage", in GPDB, so we must make the array larger, too. This is a bit fragile, and would go wrong if there were any translated columns after the GPDB-added column. But there aren't, and we don't really do translation in GPDB anyway, so this seems good enough. The Storage column isn't actually interesting for indexes, so omit it for \di.

Add a bunch of tests: for the \di+ that was hitting the assertion, as well as \d commands, to test the Storage column.

Fixes github issue https://github.com/greenplum-db/gpdb/issues/6792
Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
Reviewed-by: Jesse Zhang <jzhang@pivotal.io>
-
Committed by Ashwin Agrawal
The icw_gporca_centos6 job generates icw_gporca_centos6_dump. gpexpand takes icw_gporca_centos6_dump as input, so make it depend on just that particular job instead of all the ICW jobs. This makes the gpexpand job the same as the pg_upgrade job and, importantly, records the real dependency instead of a perceived one.
-
Committed by Ashwin Agrawal
This partially reverts commit b597bfa8, as primaries need to be started once using convertMasterDataDirToSegment.
-
Committed by Bhuvnesh Chaudhary
-
Committed by Alexandra Wang
For cost estimation of a MotionPath node, we calculate the rows as (subpath->rows * cdbpath_segments) for CdbPathLocus_IsReplicated() paths that do not have an IndexPath, BitmapHeapPath, UniquePath, or BitmapAppendOnlyPath (completely removed in db516347) in their subpath. Previously, for the above-mentioned paths we always calculated the rows as subpath->rows. The reason why those paths are special is unknown; the logic has always been there, used to be in cdbpath_rows(), and was refactored as part of commit b2411b59. Therefore, remove the checks altogether and calculate the rows for all CdbPathLocus_IsReplicated() paths the same way. We had already removed the IndexPath check as part of the 94_STABLE merge. With this update, we only see one plan change in ICG.
Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
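The before/after row estimation rule can be sketched in Python. These are hypothetical names invented for illustration, not the actual cdbpath.c code:

```python
SPECIAL = {"IndexPath", "BitmapHeapPath", "UniquePath", "BitmapAppendOnlyPath"}

def motion_rows_old(subpath_rows, num_segments, is_replicated, subpath_kind):
    """Pre-patch: replicated locus scaled rows except for 'special' subpaths."""
    if is_replicated and subpath_kind not in SPECIAL:
        return subpath_rows * num_segments
    return subpath_rows

def motion_rows_new(subpath_rows, num_segments, is_replicated):
    """Post-patch: every replicated-locus subpath is scaled the same way."""
    return subpath_rows * num_segments if is_replicated else subpath_rows
```

The patch removes the (unexplained) special-casing, so an IndexPath under a replicated motion is now estimated like any other subpath.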
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
All the tests have been ported out of this framework and nothing runs these tests in CI anymore.
-
Committed by Ashwin Agrawal
This also removes the last remaining walrep job using TINC from the pipeline file. Those tests are broken anyway and can't be run. The plan is to port some of the relevant ones to the regress or behave frameworks.
-
Committed by Ashwin Agrawal
gpexpand runs `_gp_expand.sync_new_mirrors()` at the end, after updating the catalog, which runs `gprecoverseg -aF`. But it was also calling `buildSegmentInfoForNewSegment()` as part of add_segments(), which creates primaries, and calling `ConfigureNewSegment()` for mirrors, which ran pg_basebackup internally. So the end result was that each mirror was created twice: once by pg_basebackup and then again by gprecoverseg -aF. Hence, modify the code to create just the primaries as part of `_gp_expand.add_segments()` and let `_gp_expand.sync_new_mirrors()` do the mirror creation. Spotted the redundancy while browsing the code.
-
- 25 Jan, 2019 (13 commits)
-
Committed by Daniel Gustafsson
As per the error message style guide, error hints should be proper sentences starting with a capital letter and ending with a period. This adds the ending period to the ereports where it was missing.
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Daniel Gustafsson
AO tables are long since Append Optimized and not Append Only, but the reloptions keyword (and much of the backend nomenclature) is still "appendonly". This adds support for "appendoptimized" as an alias for "appendonly", in order to improve the user experience. The created table, its error messages, and the default reloptions will have "appendonly" set as the reloption, as this is only a thin alias.

    db=# create table t (a integer, b integer) with (appendoptimized=true);
    CREATE TABLE
    db=# \d+ t
                 Append-Only Table "public.t"
     Column |  Type   | Modifiers | Storage | Stats target | Description
    --------+---------+-----------+---------+--------------+-------------
     a      | integer |           | plain   |              |
     b      | integer |           | plain   |              |
    Compression Type: None
    Compression Level: 0
    Block Size: 32768
    Checksum: t
    Distributed by: (a)
    Options: appendonly=true

This commit omits any changes to reflect this in the documentation; a proper discussion on how to do this needs to happen first.
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
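Conceptually, a thin alias like this resolves to the canonical keyword before the option is stored, so everything downstream still sees "appendonly". A hypothetical sketch of that idea (not the actual reloptions.c mechanism):

```python
# Alias table: user-facing spelling -> canonical reloption name.
RELOPTION_ALIASES = {"appendoptimized": "appendonly"}

def normalize_reloptions(options: dict) -> dict:
    """Map alias keywords to their canonical names before storing them."""
    return {RELOPTION_ALIASES.get(k.lower(), k.lower()): v
            for k, v in options.items()}
```

Resolving the alias at parse time is what makes `\d+` report `Options: appendonly=true` even when the table was created with `appendoptimized=true`.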
-
Committed by Daniel Gustafsson
Greenplum had support for parsing YYYYMMDDHH24MISS timestamps, which upstream did not. This is however problematic, since the format cannot be parsed unambiguously. The following example shows a situation where the parsing produces a result which is unlikely to be what the user was expecting:

    postgres=# set datestyle to 'mdy';
    SET
    postgres=# select '13081205132018'::timestamp;
          timestamp
    ---------------------
     1308-12-05 13:20:18
    (1 row)

The original intent was to aid ETL jobs from Teradata, which only supported this format. This is no longer true according to Teradata documentation. This retires the functionality (which was highlighted during the merge process), aligning the code with upstream, and adds a negative test for it; the corresponding documentation changes in the release notes are already done. The existing test for this in qp_misc_jiras was removed, along with a test for a format supported in upstream which was already covered by existing suites.
Reviewed-by: Rob Eckhardt <reckhardt@pivotal.io>
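The ambiguity is easy to reproduce with any fixed-field parser; Python's strptime splits the same 14-digit string in the same surprising way:

```python
from datetime import datetime

# A user may have intended some other digit grouping, but a fixed
# YYYYMMDDHH24MISS layout consumes the digits purely positionally:
ts = datetime.strptime("13081205132018", "%Y%m%d%H%M%S")
print(ts)  # 1308-12-05 13:20:18
```

There is no delimiter to disambiguate the fields, so the year 1308 is silently accepted, matching the psql output shown above.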
-
Committed by Daniel Gustafsson
This mimics the error messaging in ExecSquelchNode() for when the node type is unknown, to aid debugging. Also removes a header comment which was made obsolete in commit 6195b967.
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
Since GPDB plans CTEs quite differently from upstream, the initplan generation in SS_process_ctes() is not required and has been commented out. This extends the commenting-out to the function itself and not just the caller, to assist code reading and understanding (reading the function without inspecting the caller can easily lead to incorrect assumptions).
Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Georgios Kokolatos
Fix the `suggest parentheses around '&&' within '||'` warning produced by gcc and clang. Reproduced in gcc versions 4.8 - 7.3.
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Hubert Zhang
Background workers have no client at all, so there is no need to forward QE messages. On the other hand, MyProcPort is not initialized in background workers, while notice forwarding currently assumes MyProcPort is already initialized. This was first detected in the diskquota background worker. It was also reported in #6756, where it caused a GDD infinite loop. GDD currently sets MyProcPort manually; this commit also removes MyProcPort from GDD, since there is no client in the GDD process either. So GDD and background workers will both skip generating/forwarding QE notices.
Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Pengcheng Tang <petang@pivotal.io>
-
Committed by Shaoqi Bai
* Remove DB_IN_STANDBY_MODE and DB_IN_STANDBY_PROMOTED, and use DB_IN_ARCHIVE_RECOVERY instead
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Paul Guo <pguo@pivotal.io>
-
Committed by Wang Hao
There was a field queryCommandId in PGPROC in 5.X and earlier versions. After some code refactoring it became dead code and was deleted. However, it is required by some monitoring extensions, so this commit brings it back with an implementation similar to 5.X.
Closes https://github.com/greenplum-db/gpdb/issues/6569
Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
Reviewed-by: Paul Guo <pguo@pivotal.io>
-
Committed by Ning Fu
* Save git root, submodule and changelog information. The following information will be saved in ${GREENPLUM_INSTALL_DIR}/etc/git-info.json:
  * Root repo (uri, sha1)
  * Submodules (submodule source path, sha1, tag)
* Save git commits since the last release tag into ${GREENPLUM_INSTALL_DIR}/etc/git-current-changelog.txt
-
Committed by Shaoqi Bai
Disallow altering a partitioned table's leaf partition to a distribution different from that of the entire partitioned table. (#6780)
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Alexandra Wang
The GPDB_92_MERGE_FIXME about whether we need a deep copy or a memcpy of the subroot can be removed: all we care about from the subroot is `parse->rtable`, so creating a deep copy of it is unnecessary. This commit also removes an `Assert()` which is valid upstream but not in GPDB, since we create a new copy of the subplan if two SubPlans refer to the same initplan. Therefore, when we set references for subqueryscans in plans with copies of subplans referring to the same initplan, we cannot directly Assert that the RelOptInfo's subplan is the same as the subqueryscan's subplan. Added a test case for this, which will ensure we do not merge the Assert back from upstream in future merges.
Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Ekta Khanna
The check for `RTE_CTE` here is correct, since in GPDB, during planning, a CTE is treated much like a subquery. Hence, remove the FIXME.
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-