- 27 Jan 2019, 1 commit
-
Committed by Daniel Gustafsson
Commit cd733c64 removed the TINC tests from the CI pipeline, but these files were seemingly left behind and became dead code. Remove. Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
- 26 Jan 2019, 10 commits
-
Committed by Heikki Linnakangas
After commit 56bb376c, \di no longer prints the Storage column. I failed to change the 'bfv_partition' test's expected output accordingly.
-
Committed by Heikki Linnakangas
The 'translate_columns' array must be larger than the number of columns in the result set being passed to printQuery(). We had added one column, "Storage", in GPDB, so we must make the array larger, too. This is a bit fragile, and would go wrong if there were any translated columns after the GPDB-added column. But there aren't, and we don't really do translation in GPDB anyway, so this seems good enough. The Storage column isn't actually interesting for indexes, so omit it for \di. Add a bunch of tests: for the \di+ that was hitting the assertion, as well as \d commands, to test the Storage column. Fixes github issue https://github.com/greenplum-db/gpdb/issues/6792 Reviewed-by: Melanie Plageman <mplageman@pivotal.io> Reviewed-by: Jimmy Yih <jyih@pivotal.io> Reviewed-by: Jesse Zhang <jzhang@pivotal.io>
-
Committed by Ashwin Agrawal
The icw_gporca_centos6 job generates the icw_gporca_centos6_dump. gpexpand has icw_gporca_centos6_dump as input, hence make it depend on just that particular job instead of all the ICW jobs. This makes the gpexpand job the same as the pg_upgrade job. Also, importantly, this marks the real dependency instead of a perceived one.
-
Committed by Ashwin Agrawal
This partially reverts commit b597bfa8, as primaries need to be started once using convertMasterDataDirToSegment.
-
Committed by Bhuvnesh Chaudhary
-
Committed by Alexandra Wang
For cost estimation of a MotionPath node, we calculate the rows as (subpath->rows * cdbpath_segments) for CdbPathLocus_IsReplicated() paths which do not have an IndexPath, BitmapHeapPath, UniquePath, or BitmapAppendOnlyPath (which was completely removed in db516347) in their subpath. Previously, for the above-mentioned nodes we always calculated the rows as subpath->rows. The reason why the Paths mentioned above are special is unknown; the logic has always been there, it used to be in cdbpath_rows() and was refactored as part of commit b2411b59. Therefore we remove the checks altogether, and calculate the rows for all CdbPathLocus_IsReplicated() paths the same way. We have already removed IndexPath as part of the 94_STABLE merge. With this update, we only see one plan change in ICG. Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
All the tests have been ported out of this framework and nothing runs these tests in CI anymore.
-
Committed by Ashwin Agrawal
This also removes the last remaining walrep job using tinc from the pipeline file. Those tests are anyway broken and can't be run. The plan is to port some of the relevant ones to regress or behave.
-
Committed by Ashwin Agrawal
gpexpand runs `_gp_expand.sync_new_mirrors()` at the end, after updating the catalog, which runs `gprecoverseg -aF`. Meanwhile it was also calling `buildSegmentInfoForNewSegment()` as part of add_segments(), which creates primaries, and was also calling `ConfigureNewSegment()` for mirrors, which ran pg_basebackup internally. So, essentially, as an end result the mirror was created twice: first via pg_basebackup and then later via gprecoverseg -aF. Hence, modify the code to just create primaries first as part of `_gp_expand.add_segments()` and let `_gp_expand.sync_new_mirrors()` do the mirror creation. Spotted the redundancy while browsing the code.
-
- 25 Jan 2019, 13 commits
-
Committed by Daniel Gustafsson
As per the error message style guide, error hints should be proper sentences starting with a capital letter and ending with a period. This adds the ending period to the ereports where it was missing. Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Daniel Gustafsson
AO tables have long since been Append Optimized, not Append Only, but the reloptions keyword (and much of the backend nomenclature) is still "appendonly". This adds support for "appendoptimized" as an alias for "appendonly", in order to improve the user experience. The created table, its error messages and the default reloptions will have "appendonly" set as the reloption, as this is only a thin alias.

db=# create table t (a integer, b integer) with (appendoptimized=true);
CREATE TABLE
db=# \d+ t
            Append-Only Table "public.t"
 Column |  Type   | Modifiers | Storage | Stats target | Description
--------+---------+-----------+---------+--------------+-------------
 a      | integer |           | plain   |              |
 b      | integer |           | plain   |              |
Compression Type: None
Compression Level: 0
Block Size: 32768
Checksum: t
Distributed by: (a)
Options: appendonly=true

This commit omits any changes to reflect this in the documentation; a proper discussion on how to do this needs to happen first. Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Daniel Gustafsson
Greenplum had support for parsing YYYYMMDDHH24MISS timestamps, which upstream did not. This is however problematic, since the format cannot be parsed unambiguously. The following example shows a situation where the parsing produces a result which is unlikely to be what the user was expecting:

postgres=# set datestyle to 'mdy';
SET
postgres=# select '13081205132018'::timestamp;
      timestamp
---------------------
 1308-12-05 13:20:18
(1 row)

The original intent was to aid ETL jobs from Teradata, which only supported this format. This is no longer true according to Teradata documentation. This retires the functionality (which was highlighted during the merge process), aligning the code with upstream, and adds a negative test for it; the corresponding documentation changes in the release notes are already done. The existing test for this in qp_misc_jiras was removed, along with a test for a format supported in upstream which was already covered by existing suites. Reviewed-by: Rob Eckhardt <reckhardt@pivotal.io>
-
Committed by Daniel Gustafsson
This mimics the error messaging in ExecSquelchNode() for when the node type is unknown, to aid debugging. Also removes a header comment which was made obsolete in commit 6195b967. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
Since GPDB plans CTEs quite differently from upstream, the initplan generation in SS_process_ctes() is not required and has been commented out. This extends the commenting-out to the function itself and not just the caller, to assist code reading and understanding (reading the function without inspecting the caller can easily lead to incorrect assumptions). Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Georgios Kokolatos
Fixes the warning `suggest parentheses around ‘&&’ within ‘||’` produced by gcc and clang. Reproduced in gcc versions 4.8 - 7.3. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Hubert Zhang
Background workers have no client at all, so there is no need to forward QE messages. On the other hand, MyProcPort is not initialized in background workers, while the notice-forwarding code currently assumes MyProcPort is already initialized. This was first detected in the diskquota background worker. It was also reported in #6756, which causes a GDD infinite loop. GDD currently sets MyProcPort manually. This commit also removes MyProcPort in GDD, since there is no client in the GDD process either. So GDD and background workers will both skip generating/forwarding QE notices. Co-authored-by: Zhenghua Lyu <zlv@pivotal.io> Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Pengcheng Tang <petang@pivotal.io>
-
Committed by Shaoqi Bai
* Remove DB_IN_STANDBY_MODE and DB_IN_STANDBY_PROMOTED and use DB_IN_ARCHIVE_RECOVERY instead Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io> Reviewed-by: Paul Guo <pguo@pivotal.io>
-
Committed by Wang Hao
There was a field queryCommandId in PGPROC in 5.X and earlier versions. After some code refactoring it became dead code and was deleted. However, it is required by some monitoring extensions, so this commit brings it back with a similar implementation to 5.X. Closes https://github.com/greenplum-db/gpdb/issues/6569 Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io> Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io> Reviewed-by: Paul Guo <pguo@pivotal.io>
-
Committed by Ning Fu
* Save git root, submodule and changelog information. The following information will be saved in ${GREENPLUM_INSTALL_DIR}/etc/git-info.json:
  * Root repo (uri, sha1)
  * Submodules (submodule source path, sha1, tag)
Save git commits since the last release tag into ${GREENPLUM_INSTALL_DIR}/etc/git-current-changelog.txt
-
Committed by Shaoqi Bai
Disallow altering a partitioned table's leaf with a different distribution compared to the entire partitioned table. (#6780) Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Alexandra Wang
The GPDB_92_MERGE_FIXME asking whether we need a deep copy or whether a memcpy suffices in the case of subroot can be removed: all we care about from the subroot is `parse->rtable`, therefore creating a deep copy of it is unnecessary. This commit also removes the `Assert()` which is valid in upstream but not in GPDB, since we create a new copy of the subplan if two SubPlans refer to the same initplan. Therefore, when we try to set references for subqueryscans in plans with copies of subplans referring to the same initplan, we cannot directly Assert on the RelOptInfo's subplan being the same as the subqueryscan's subplan. Added a test case for this, which will ensure we do not merge the Assert back from upstream in future merges. Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Ekta Khanna
The check for `RTE_CTE` here is correct, since in GPDB, while planning, a CTE is treated similarly to a subquery. Hence, removing the FIXME. Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
- 24 Jan 2019, 16 commits
-
Committed by Daniel Gustafsson
This cleans up the error messages in the sparse vector code a little by ensuring they mostly conform to the style guide for error handling. Also fixes a nearby typo and removes commented-out elogs which are clearly dead code. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
The gp_sparse_vector module was covered by the relicensing done as part of the Greenplum open sourcing, but a few mentions of the previous licensing remained in the code. The legal situation of this code has been reviewed by Pivotal legal and is cleared, so remove the incorrect statements and replace them with the standard copyright file headers. This also cleans up a few comments while at it. Reviewed-by: Cyrus Wadia Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
The float_specials.h header was removed shortly after this contrib module was imported in 2010, and has been dead code since. Remove. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
This removes a few unused functions, and inlines the function body of another one which only had a single caller. Also properly mark a few functions as static. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
Remove a redundant test on array_agg which didn't have stable output, and remove an ORDER BY to let atmsort deal with differences instead. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
The histogram structure was allocated statically via malloc(), but it had no data retention between calls, as it was purely a micro-optimization to avoid the cost of repeated allocations. This led to the allocated memory leaking, as it's not cleaned up automatically. Fix by palloc'ing the memory instead and taking the cost of repeated allocation. Also ensure that allocated memory is properly cleaned up in failure cases. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
palloc() is guaranteed to only return on successful allocation, so there is no need to check its result. ereport(ERROR..) is guaranteed never to return, and to clean up on its way out, so pfree()ing after an ereport() is not just unreachable code; it would be a double-free if it were reached. Also add proper checks on the malloc() and strdup() calls, as those are subject to the usual memory pressure and must be checked by the programmer. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Adam Lee
This part of the code is not covered by the PR pipeline; tested manually.
-
Committed by Adam Lee
fmtQualifiedId() and fmtId() share the same buffer, so the result of one call cannot be used after the next call; it must be consumed (or copied) first.
-
Committed by Melanie
When you have a subquery under a SUBLINK that might get pulled up, you should not allow indexscans to be chosen for the relation which is the rangetable entry for the subquery. If that relation is distributed and the subquery is pulled up, you will need to redistribute or broadcast that relation and materialize it on the segments, and cdbparallelize will not add a motion and materialize an indexscan, so you cannot use an indexscan in these cases. You can't materialize an indexscan because it will materialize only one tuple at a time, and when you compare that to the param you get from the relation on the segments, you can get wrong results. Because we don't pick indexscans very often, we don't see this issue very often. You need a subquery referring to a distributed table in a subplan which, during planning, gets pulled up, and then, when adding paths, the indexscan is cheapest. Co-authored-by: Adam Berlin <aberlin@pivotal.io> Co-authored-by: Jinbao Chen <jinchen@pivotal.io> Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by David Kimura
Stale replication slots can exist on mirrors that were once acting as primaries. In this case restart_lsn is a non-zero value used in a past replication slot setup. The stale replication slot will continue to retain xlog on the mirror, which is problematic and unnecessary. This patch drops the internal replication slot on startup of the mirror. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io> Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by Jesse Zhang
We backported 128-bit integer support to speed up aggregates (commits 8122e143 and 959277a4) from upstream 9.6 into Greenplum (in commits 9b164486 and 325e6fcd). However, we forgot to also port a follow-up fix postgres/postgres@7518049980b, mostly because it's nuanced and hard to reproduce. There are two ways to tell the brokenness: 1. On a lucky day, tests would fail on my workstation, but not my laptop (or vice versa). 1. If you stare at the generated code for `int8_avg_combine` (and friends), you'll notice the compiler uses "aligned" instructions like `movaps` and `movdqa` (on AMD64). Today's my lucky day. Original commit message from postgres/postgres@7518049980b (by Tom Lane): > Our initial work with int128 neglected alignment considerations, an > oversight that came back to bite us in bug #14897 from Vincent Lachenal. > It is unsurprising that int128 might have a 16-byte alignment requirement; > what's slightly more surprising is that even notoriously lax Intel chips > sometimes enforce that. > Raising MAXALIGN seems out of the question: the costs in wasted disk and > memory space would be significant, and there would also be an on-disk > compatibility break. Nor does it seem very practical to try to allow some > data structures to have more-than-MAXALIGN alignment requirement, as we'd > have to push knowledge of that throughout various code that copies data > structures around. > The only way out of the box is to make type int128 conform to the system's > alignment assumptions. Fortunately, gcc supports that via its > __attribute__(aligned()) pragma; and since we don't currently support > int128 on non-gcc-workalike compilers, we shouldn't be losing any platform > support this way. 
> Although we could have just done pg_attribute_aligned(MAXIMUM_ALIGNOF) and > called it a day, I did a little bit of extra work to make the code more > portable than that: it will also support int128 on compilers without > __attribute__(aligned()), if the native alignment of their 128-bit-int > type is no more than that of int64. > Add a regression test case that exercises the one known instance of the > problem, in parallel aggregation over a bigint column. > This will need to be back-patched, along with the preparatory commit > 91aec93e. But let's see what the buildfarm makes of it first. > Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org (cherry picked from commit 75180499)
-
Committed by Jesse Zhang
This cherry-picks 91aec93e. We had to be extra careful to preserve still-in-use macros UnusedArg and STATIC_IF_INLINE and friends. > Generalize section 1 to handle stuff that is principally about the > compiler (not libraries), such as attributes, and collect stuff there > that had been dropped into various other parts of c.h. Also, push > all the gettext macros into section 8, so that section 0 is really > just inclusions rather than inclusions and random other stuff. > The primary goal here is to get pg_attribute_aligned() defined before > section 3, so that we can use it with int128. But this seems like good > cleanup anyway. > This patch just moves macro definitions around, and shouldn't result > in any changes in generated code. But I'll push it out separately > to see if the buildfarm agrees. > Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org (cherry picked from commit 91aec93e)
-
Committed by David Kimura
Currently GDD sets DistributedTransactionContext to DTX_CONTEXT_QD_DISTRIBUTED_CAPABLE and as a result allocates a distributed transaction id. It creates an entry in ProcGlobal->allTmGxact with state DTX_STATE_ACTIVE_NOT_DISTRIBUTED. The effect of this is that any query taking a snapshot will see this transaction as in progress. Since the GDD transaction is short-lived, it is not an issue in general, but in CI it causes flaky behavior in some of the vacuum tests. The flaky behavior shows up as unvacuumed tables, where the vacuum snapshot was taken while the GDD transaction was running, thereby forcing vacuum to lower its oldest XMIN. The current behavior of GDD consuming a distributed transaction id (every 2 minutes by default) is also wasteful. Currently GDD also sends a snapshot to the QEs, but this isn't required and is wasteful as well. With this change, GDD keeps DistributedTransactionContext as DTX_CONTEXT_LOCAL_ONLY and avoids dispatching snapshots to QEs. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Ashwin Agrawal
Currently, the dbid is used in the tablespace path; hence, the dbid is needed while creating a segment. To get the dbid, the segment needs to be added to the catalog first, but adding the segment to the catalog before creating it causes issues. Hence, modify gpexpand to not let the database generate the dbid, but instead pass in a dbid generated upfront while registering in the catalog. This way the dbid used while creating the segment will be the same as the dbid in the catalog. Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-